Blog · AI and Voice

How to avoid the AI tells: a writer's checklist for 2026

How to avoid the AI tells in your writing in 2026 is the remediation companion to the diagnostic. Nine canonical tells become nine active-avoidance practices, each with constructed before/after examples. Em-dash density, AI vocabulary cluster, symmetric two-clause hook, the not-just-X-but-Y frame, beige bullet middle, generic closing CTA, symmetric paragraph rhythm, voice-flat coherence, missing taboos. Plus the two-minute pre-publish scan.

8 min read

How to avoid the AI tells in your writing in 2026 is a different question from how to spot them. The diagnostic (read for AI, identify the nine tells, run the byline-removal test) lives at how to spot AI-generated content in 2026: the em-dash and 8 other tells. This piece is the remediation companion: the same canonical nine tells, reframed as active-avoidance practices the writer applies during drafting, with constructed before/after examples for each. Same nine tells. Different operational use. Run the diagnostic on writing you already shipped; run this checklist on writing you are drafting now.

Two notes before the checklist. First, all before/after examples below are constructed, not pulled from real creators. They are illustrative of the pattern and the fix, not quotes of any specific writer's work. Second, this checklist sticks to the canonical nine tells. The diagnostic piece named them; adding new tells in the remediation companion would create framework drift across the corpus. The full word-level vocabulary substitution table that pairs with tell two (the AI vocabulary cluster) is at the words AI overuses (and how to ban them from your writing forever).

How to use this checklist

Two passes. The first pass is the read-before-drafting pass: skim the nine sections once a quarter so the avoidance practices are loaded in working memory. The second pass is the pre-publish scan: run the abbreviated two-minute version on every draft just before clicking publish. The two-minute scan catches roughly 80 percent of the tells; the read-before-drafting pass catches the rest by shaping the draft as you write it. The full audit checklist with the diagnostic framing is in the companion piece; this piece is the active-while-drafting version of the same nine items.

Tell 1: Em-dash density

What it is: two or more em-dashes in a sub-100-word paragraph. The strongest single AI tell in 2026 because general LLMs over-produce the em-dash by default and writers who use AI for editing-only passes still leak em-dashes through.

Active avoidance: zero em-dashes in your writing. Period. The replacement options that read as voice-rich rather than as em-dash-substituted: a period (start a new sentence), a colon (introduce the elaboration), a comma (subordinate the clause), or a parenthetical (set off the aside). Each option matches a different structural intent the em-dash was being used for; pick the one that matches the move.

Constructed before-and-after (illustrative). Before: "The marketing team shipped 47 posts last quarter, most of which used the same hook template, which the audience now reads through." After: "The marketing team shipped 47 posts last quarter. Most used the same hook template. The audience reads through it now." Same content. Three sentences instead of one clause-stacked sentence. Zero em-dashes (the original also had none, intentionally, to model the point).
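The density check itself is mechanical and easy to script. A minimal sketch in Python (the two-dash and sub-100-word thresholds come from the tell's definition above; the blank-line paragraph split is a simplifying assumption):

```python
import re

EM_DASH = "\u2014"  # the em-dash character

def flag_em_dash_density(text: str) -> list[str]:
    """Flag paragraphs that trip the tell: two or more em-dashes
    in a paragraph under 100 words."""
    flagged = []
    for para in re.split(r"\n\s*\n", text):
        if len(para.split()) < 100 and para.count(EM_DASH) >= 2:
            flagged.append(para)
    return flagged
```

Any non-empty result is a rewrite signal before the draft goes further.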

Tell 2: AI vocabulary cluster

What it is: three or more uses of the AI-overused word cluster in a single post. The cluster: leverage as a verb, delve, unlock, navigate, harness, foster, elevate, embark, robust, seamless, comprehensive, holistic, plus frame openers like "in today's fast-paced world" and bridge connectors like moreover/furthermore/additionally/that-being-said. The full list with substitutions is at the words AI overuses.

Active avoidance: hard ban on the cluster in your writing. Substitute with the simpler equivalents. Leverage (as a verb) becomes use. Delve becomes look at, examine, dig into. Unlock becomes enable or make possible. Navigate becomes handle or work through. Harness becomes use. Foster becomes build or grow. Elevate becomes improve or strengthen. Embark becomes start. Robust becomes strong or reliable. Seamless becomes smooth or easy. Comprehensive becomes complete or thorough. Holistic becomes whole or full.

Constructed before-and-after (illustrative). Before: "We leveraged the new framework to navigate the comprehensive onboarding redesign and unlock a more seamless user experience." After: "We used the new framework to handle the full onboarding redesign and produce a smoother user experience." Same meaning. Different register. The cluster is gone.
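The cluster scan is also scriptable. A hedged sketch (the word list mirrors the cluster named above; the `\w*` suffix is an assumption so inflected forms like "leveraged" are caught; the frame openers and bridge connectors would need separate phrase-level checks):

```python
import re

# The banned word cluster from the tell. Frame openers ("in today's
# fast-paced world") and connectors (moreover/furthermore) need
# separate phrase checks and are not covered here.
BANNED = ["leverage", "delve", "unlock", "navigate", "harness", "foster",
          "elevate", "embark", "robust", "seamless", "comprehensive", "holistic"]

def cluster_hits(text: str) -> dict[str, int]:
    """Count banned-word occurrences; the \\w* suffix catches
    inflected forms such as "leveraged" and "navigating"."""
    counts = {}
    for word in BANNED:
        n = len(re.findall(rf"\b{word}\w*\b", text, flags=re.IGNORECASE))
        if n:
            counts[word] = n
    return counts
```

Three or more total hits in one post is the tell's threshold; the checklist's target is zero.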

Tell 3: Symmetric two-clause hook template

What it is: the "Most people think X. The reality is Y." hook pattern (and its variants: "It is not about X, it is about Y," "Everyone says X, but the truth is Y," "Forget X, focus on Y"). One of the strongest AI-template defaults because general LLMs converge on this opening pattern when asked for engaging content.

Active avoidance: refuse the symmetric two-clause opener entirely. Replace with a specific-observation opener ("the thing that surprised me when we ran this last quarter"), a named-context opener ("three deals I watched close last week"), a confession opener with a concrete confession ("I shipped the framework-first hook 40 times before I realized"), or a direct-claim opener (state the claim and stop).

Constructed before-and-after (illustrative). Before: "Most people think founder content is about credentials. The reality is that founder content is about voice." After: "I read 200 founder posts last week. The ones that landed all had one thing: a specific observation only the founder could make." The first opener is template. The second is voice (specific number, specific timeframe, specific claim).

Tell 4: The not-just-X-but-Y frame

What it is: the "It is not just about X but about Y" framing pattern. Fine once. Fingerprint when used three times in one post. The pattern combines two AI defaults: the contrast structure plus the upward-reframing move ("the real point is bigger than you think").

Active avoidance: use the not-just-X-but-Y frame at most once per post, and only when the move is genuinely doing reframing work (narrowing toward a sharper claim, not broadening into a vaguer one). When you find yourself reaching for the pattern a second time in the same post, the reach is the AI default; stop and rewrite.

Constructed before-and-after (illustrative). Before: "This is not just about marketing, it is about culture. It is not just about culture, it is about leadership. It is not just about leadership, it is about identity." After: "This is about culture. Specifically, about whether leadership in this company treats marketing as a downstream signal of culture or as the upstream input that creates it. The framing matters because it changes who owns the work." The pattern was doing zero work in the original; the rewrite picks one frame and develops it.
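The per-post frame count can be approximated with a regex. A sketch (the pattern and its 80-character window are an illustrative approximation of the phrasings named above, not an exhaustive detector):

```python
import re

# Approximates "not just X, it is Y" and "not just X but Y" variants.
FRAME = re.compile(r"\bnot\s+just\b.{1,80}?\b(?:but|it\s+is|it'?s)\b",
                   re.IGNORECASE | re.DOTALL)

def not_just_count(text: str) -> int:
    """Checklist threshold: zero or one is fine; two or more means rewrite."""
    return len(FRAME.findall(text))
```
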

Tell 5: Beige bullet middle

What it is: four or five evenly weighted bullets in the middle of a post where each bullet could appear in any post on any topic. The second-strongest AI tell in long-form output after em-dash density. The bullets read as interchangeable across writers ("consistency is the multiplier," "quality beats quantity," "the compound effect is real").

Active avoidance: every bullet must be unmistakably specific. If a bullet could appear in any other writer's post on the same topic, cut it or rewrite. The right test: would a stranger reading this bullet alone, with no context, be able to tell which writer's post it came from? If no, the bullet is beige. If yes, the bullet is voice-rich.

Constructed before-and-after (illustrative). Before bullets: "Consistency is the multiplier. Quality beats quantity. The compound effect is real. Show up every day. Trust the process." After bullets: "My third week posting daily, the engagement dropped 60 percent. Week four it recovered. Week five it dropped again. The pattern only stabilized at week 11. The compound effect is real, and it does not look like a smooth curve at the front of the timeline." The before is five interchangeable lines. The after is one specific timeline observation with a number and a non-smooth pattern.

Tell 6: Generic closing CTA register

What it is: closing lines like "what's your take?", "save this for later," "retweet if you found this useful," "follow for more like this," "thoughts?". The engagement-bait template that worked in 2020 and reads as template in 2026 regardless of how strong the body above it was.

Active avoidance: refuse the generic CTA close entirely. Three working alternatives: end on the last sentence of the argument (let the writing carry the close), end on a specific question that you would actually want the answer to ("how do you handle the third-week drop?"), or end on nothing (the post just ends on a strong line). The audience reads close-with-CTA as a post written for engagement rather than thought; the read recolors everything that came before.

Constructed before-and-after (illustrative). Before close: "What's your take? Drop a comment below and let me know your thoughts!" After close: "The third-week drop is the part most posting-cadence guides do not warn you about. If you have run a daily-post experiment that survived past it, I would like to know what the recovery looked like." The first close asks for nothing specific. The second asks a specific question to a specific reader audience.

Tell 7: Symmetric paragraph rhythm

What it is: every paragraph the same length, every sentence the same length, every paragraph the same structural shape. Default model output produces symmetric paragraph rhythm because the training optimizes for fluent-helpful-assistant prose, which is rhythmically uniform. Real writer prose is uneven by default.

Active avoidance: vary paragraph and sentence length deliberately. Short paragraph. Then a longer paragraph that meanders, builds, and lands on a specific phrase. Then a one-line fragment. The unevenness reads as human before the content registers because the audience pattern-matches symmetry as machine-shaped at the structural level.

Constructed before-and-after (illustrative). Before: three paragraphs of roughly equal length, each three sentences, each sentence roughly equal length. After: a one-sentence paragraph ("The team shipped on Friday."), then a four-sentence paragraph that builds and qualifies, then a two-sentence paragraph that lands. The visual structure of the post on screen tells the reader unevenness exists before they read a word; the prose then earns the unevenness.
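Unevenness can also be measured roughly before a human read. A sketch (the blank-line paragraph split and the sentence split on terminal punctuation are simplifying assumptions; a standard deviation near zero in either count suggests the symmetric-rhythm tell):

```python
import re
import statistics

def rhythm_report(text: str) -> dict[str, float]:
    """Rough symmetry signal from word-count spread across
    paragraphs and sentences."""
    paras = [p for p in re.split(r"\n\s*\n", text) if p.strip()]
    sents = [s for s in re.split(r"[.!?]+\s+", text) if s.strip()]
    return {
        "para_stdev": statistics.pstdev([len(p.split()) for p in paras]) if paras else 0.0,
        "sent_stdev": statistics.pstdev([len(s.split()) for s in sents]) if sents else 0.0,
    }
```

The number is a prompt for the eye test, not a replacement for it; the visual scan of the draft on screen stays the real check.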

Tell 8: Voice-flat coherence

What it is: writing that is fluent, on-topic, internally consistent, and forgettable. The post reads as if a competent generic professional wrote it; nothing identifies it as written by you specifically. The most insidious AI tell because the writer cannot self-diagnose it without an external read.

Active avoidance: insert a voice-signal sentence into every post. A specific named example from your work. A repeated phrase that is identifiably yours. A refusal that no general AI would produce. An aside that is on-topic but specific to your lens. The voice signal does not have to be in the hook; it has to appear somewhere in the post. If you cannot point to a sentence in the draft that could only have been written by you, the post is voice-flat.

Constructed before-and-after (illustrative). Before sentence in a draft: "Effective marketing requires a deep understanding of the customer's needs and behaviors." After: "Marketing that works in our category comes from the third call with the same prospect, not the first. The first call is performance; the third is where the actual objection surfaces." The first sentence could appear in any marketing post by any writer. The second is specific to a writer who has done sales discovery calls and noticed the pattern.

Tell 9: Missing taboos

What it is: writing with no refusals. No words you will not use. No frames you will not reach for. No formats you will not ship. AI-drafted content has no taboos by default because the underlying model has no taboos beyond safety filters. Real writer voice is partly defined by what the writer refuses to do, and the refusal is observable in the absence of the patterns the writer rejects.

Active avoidance: write down your taboo list and enforce it in drafts. Vocabulary bans (the AI-overused cluster plus any words that are not in your natural speech). Hook bans (the symmetric two-clause, the autobiographical-credentials, the framework-count-without-specifics). Format bans (the listicle in your category if listicles are not in your voice, the engagement-bait close, the thread-emoji-and-counter). The taboo list is the writer-side equivalent of the model-level refusals a voice-trained tool enforces.

Constructed before-and-after at the taboo-list level (illustrative). Before: no taboo list, post drifts into AI register on the third paragraph. After: written taboo list ("no em-dashes, no leverage as a verb, no symmetric two-clause hook, no framework-count opener without specific items, no generic CTA close"), post stays in voice across the full body. The taboo list is short, specific, and on the wall above the desk. The discipline is to enforce it at draft time, not at audit time.

The two-minute pre-publish scan

  • Em-dash count. Should be zero in your writing. Find-and-replace if any leaked in.
  • Vocabulary cluster scan. Zero instances of the leverage / delve / unlock / navigate / harness / foster / elevate / embark / robust / seamless / comprehensive / holistic cluster.
  • Hook check. Not symmetric two-clause. Not autobiographical-credentials. Not framework-count without specifics. Not thread-emoji-and-counter.
  • Not-just-X-but-Y count. Zero or one. Two or more, rewrite.
  • Bullet check. Every bullet specific to your work. If any bullet could appear in another writer's post, cut or rewrite.
  • Close check. Not generic CTA. Last sentence of the argument, a specific question, or nothing.
  • Rhythm scan. Visual unevenness on the page (varied paragraph lengths, varied sentence lengths). Symmetric rhythm means rewrite.
  • Voice signal check. Point to one sentence that could only have been written by you. If you cannot, add one.
  • Taboo enforcement. Run your written taboo list against the draft. Anything that violates the list comes out.
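The three mechanizable items on this list (em-dash count, vocabulary cluster, not-just-X-but-Y count) can be bundled into one pass/fail scan; the remaining items need a human read. A self-contained sketch (the word list and frame pattern are approximations of the tells named earlier in the post, not exhaustive detectors):

```python
import re

EM_DASH = "\u2014"
BANNED = ["leverage", "delve", "unlock", "navigate", "harness", "foster",
          "elevate", "embark", "robust", "seamless", "comprehensive", "holistic"]
FRAME = re.compile(r"\bnot\s+just\b.{1,80}?\b(?:but|it\s+is|it'?s)\b",
                   re.IGNORECASE | re.DOTALL)

def prepublish_scan(draft: str) -> dict[str, bool]:
    """True means the check passes. Hook, bullet, close, voice-signal,
    and taboo checks still need a human read."""
    lowered = draft.lower()
    return {
        "em_dash": EM_DASH not in draft,
        "vocabulary": not any(re.search(rf"\b{w}\w*\b", lowered) for w in BANNED),
        "not_just_frame": len(FRAME.findall(draft)) <= 1,
    }
```

Run it on the draft text just before publish; any False is a rewrite signal.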

The full diagnostic checklist with the read-existing-writing framing (rather than the draft-new-writing framing) is at how to spot AI-generated content in 2026: the em-dash and 8 other tells. The two pieces together: this one for the draft, that one for the audit. The audience-perception companion that addresses which fraction of your audience detects these tells and whether they care is at can your audience tell you're using AI? an honest 2026 analysis. The thread-format version of this scan, with the AI tells most relevant to multi-tweet threads specifically, is at how to write a viral Twitter thread in 2026 (without the same tired formulas).

The one-line answer

How do you avoid the AI tells in your writing in 2026? Refuse all nine canonical tells at draft time. Zero em-dashes. Zero AI-vocabulary-cluster words. No symmetric two-clause hook. No more than one not-just-X-but-Y frame per post. Every bullet specific to your work. No generic CTA close. Uneven paragraph and sentence rhythm. One voice-signal sentence per post. Written taboo list enforced. Run the two-minute pre-publish scan before clicking publish. The diagnostic for what AI-shaped writing looks like in the wild is the companion at how to spot AI-generated content in 2026; the perception read on whether audiences detect and care is at can your audience tell you're using AI. For the operationalized five-stage human-AI workflow this checklist plugs into at the Stage 3 human-edit step (with the two load-bearing constraints and three failure-mode workflows to recognize), see the hybrid human-AI writing workflow that actually works in 2026.

If you want a writing partner that enforces these nine refusals at the model level (drafts in your voice with the AI-overused vocabulary cluster on the taboo list by default, refuses the symmetric two-clause hook patterns, scores every draft against your voice baseline), Auden, the brain inside VoiceMoat, is built for this. Train Auden on your full profile of 100 to 200 posts, replies, threads, and images across the 9 dimensions of Voice DNA, and every draft comes back with a voice match score against your baseline. Drafts that reach for the tells in this checklist get refused at the model level. Auden suggests. You decide.

Want content that actually sounds like you?

VoiceMoat trains an AI on your full profile (posts, replies, threads, and images) and refuses to draft anything off-voice. Free for 7 days.

Related posts

Growth

The reply guy playbook: how to use AI for Twitter replies (without sounding like a bot) in 2026

Reply automation at scale is voice-corrosive at the structural level; the audience pattern-matches automated reply patterns within scrolling distance and the writer's reputational capital collapses faster than any other content failure mode. The conviction-led playbook for AI-assisted Twitter replies in 2026 that does not sound like a bot: the voice-corrosive-versus-voice-rich split in reply tooling, the inline Chrome extension workflow that keeps the writer in the loop, three illustrative reply examples clearly labeled constructed, and the operational discipline that compounds reputational capital instead of collapsing it.

Growth

How to repurpose tweets into LinkedIn posts (without sounding generic) in 2026

Cross-platform repurposing fails most often when the writer optimizes for LinkedIn's surface conventions and loses the voice that made the X content land. The tactical, example-rich playbook for repurposing tweets into LinkedIn posts in 2026: three structural moves (format conversion 280-char to 3000-char native, tone calibration without LinkedInfluencer cliches, audience-context adjustment from feed-scrolling to professional reading), illustrative before/after transformations clearly labeled constructed, and the voice-fidelity discipline that holds across both platforms.

Growth

The 10 best Chrome extensions for Twitter/X creators in 2026

Chrome extensions sit inside x.com itself, which removes the tab-switching friction that kills sustained content cadence. Ten Chrome extensions serious Twitter/X creators run in 2026: voice-trained reply drafting, AI growth platforms, scheduler-from-feed, two-platform parity for LinkedIn-and-X, viral-metrics overlay, multi-channel publisher, reply automation at the voice-corrosive edge, and the utility extensions that round out the stack. VoiceMoat's Chrome extension is in the list at position two with the placement-discipline reasoning on the page; pricing is verified where publicly surfaced as of May 2026.