The Justin Welsh 'playing the hits' repurposing system, read through a voice-first lens
Justin Welsh's repurposing system: identify top performers, save them to a swipe file, repurpose at 6 and 12 months. The model works. The 'AI variations' step is where most creators flatten their voice without noticing. Here's the voice-first version of the same system.
Justin Welsh's content repurposing system is one of the cleanest playbooks in the creator space. Identify top-performing posts from your last 6 to 12 months. Save them to a swipe file. Repurpose and re-schedule at 6 and 12 months. The justification is correct: roughly 90% of your audience didn't see the original post the first time, so the second time is the first time for most readers. Welsh's own line, 'After a few years, I'm mostly playing the hits,' is the right framing for a writer with a deep enough catalog.
The model works. The catch is in the standard adaptation. Most creators implement the system via 'AI-generate variations from the original,' the step the tools sell as the labor-saver. It is also the step where voice quietly dies. This piece reads each step of the system through a voice-first lens and proposes the version that survives 6 to 12 months of compounding.
Step 1, voice-first: identify by voice fit, not by impressions alone
Standard advice: sort by impressions, take the top ~20% of posts from the last year, save them. The impressions filter is reasonable but voice-blind. Some of the top-impression posts in any catalog are voice anomalies: a hook you reached for that you wouldn't normally use, a template variation that happened to spike, a piggyback on a trend that doesn't represent your usual register. Resurfacing those at 6 months teaches both the algorithm and your audience to treat the anomaly as your representative voice.
The voice-first filter: of the top-impression posts in the last year, keep the ones that also sound recognizably like the rest of your timeline. Drop the anomalies even if they performed. The right swipe file is roughly 60% of the high-impression list, not 100%. The other 40% are useful as learning artifacts (which hooks reached people, even if the voice was wrong), but not as resurface candidates.
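The two filters compose into one pass: rank by impressions, slice the top, then split on voice fit. A minimal sketch, assuming a per-post `voice_match` score on a 0-to-10 scale (the field names and scale are illustrative, not any tool's real schema):

```python
# Illustrative sketch: filter top-impression posts down to resurface candidates.
# "voice_match" is an assumed 0-10 score against your typical profile.

def resurface_candidates(posts, voice_threshold=7.0, top_fraction=0.2):
    """Keep the top-impression slice, then separate voice anomalies."""
    ranked = sorted(posts, key=lambda p: p["impressions"], reverse=True)
    top = ranked[: max(1, int(len(ranked) * top_fraction))]
    keep = [p for p in top if p["voice_match"] >= voice_threshold]
    learn = [p for p in top if p["voice_match"] < voice_threshold]
    return keep, learn  # keep: resurface; learn: study the hook, don't reship

posts = [
    {"id": 1, "impressions": 90_000, "voice_match": 8.5},
    {"id": 2, "impressions": 70_000, "voice_match": 4.0},  # trend spike, wrong register
    {"id": 3, "impressions": 1_200, "voice_match": 9.0},
]
keep, learn = resurface_candidates(posts, top_fraction=0.67)
```

The `learn` bucket is the 40%: high-reach posts kept as hook studies, never as resurface candidates.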
Step 2, voice-first: the swipe file is a voice-pattern library, not a hooks library
What you save into the swipe file shapes what comes out 6 months later. The standard pattern is to save the post itself plus the hook and structure. The voice-first pattern adds two more fields: what voice signal carried the post (specificity, contrarian register, dry observation, etc.), and the post's voice match score against your typical profile. The result is a swipe file that lets you resurface posts by voice intent, not just by topic. 'I want to ship one of my dry-observational pieces this week' becomes a query the file answers.
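The two extra fields are what turn the file from a hooks library into a voice-pattern library you can query. A sketch of one possible entry shape, with assumed field names:

```python
# Illustrative sketch: a swipe-file entry with the two extra voice fields.
# Field names and the 0-10 scale are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class SwipeEntry:
    post_text: str
    hook: str
    structure: str
    voice_signal: str   # e.g. "specificity", "contrarian", "dry-observation"
    voice_match: float  # 0-10 against your typical profile

def by_voice_intent(swipe_file, signal):
    """Answer queries like: 'ship one of my dry-observational pieces this week'."""
    return [e for e in swipe_file if e.voice_signal == signal]

swipe = [
    SwipeEntry("...", "question hook", "listicle", "dry-observation", 8.7),
    SwipeEntry("...", "stat hook", "story", "specificity", 9.1),
]
dry = by_voice_intent(swipe, "dry-observation")
```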
The general repurposing case is covered in how to repurpose content for Twitter without flattening your voice. The Welsh-specific case is narrower: you're repurposing your own posts, not someone else's content. The voice-flattening risk is different (and more subtle): you have the source voice; the question is whether the variation step preserves it.
Step 3, voice-first: variation by hand, not by AI-from-scratch
This is where most creators fail without noticing. The standard tool flow is to hit a 'generate variations' button and pick the one closest to your voice. The output sounds approximately right but carries a few specific voice-flat tells: filler connective tissue ('it's important to remember that'), category-default rhythm, the same 30 hooks the model has overfit to. A voice match of 8 out of 10 isn't recognizable as voice anymore; it's the helpful-assistant default with your topic plugged in.
Voice-first variation rules:
- Rewrite the post by hand from the same idea. Don't feed the original to an AI as a prompt. Your hands generate your voice; an AI prompted with your prior output generates a regression toward the mean.
- Change one element deliberately. A different opening hook, a different example, a different framing. Not three changes; one. The post should read as a sibling to the original, not a remix.
- Keep the voice signature in the changed element. If the original ran a dry-observational close, the variation runs a dry-observational close in a different register.
- Pass the radio test on the variation. Read the variation out loud. If it sounds like a stranger wrote it, it isn't a variation; it's a derivative.
If you're using a voice-trained tool (more on this below), the AI-from-scratch trap is partly avoided because the model is trained on your voice, not the general assistant default. But even with a voice-trained model, the by-hand-rewrite of the most-resurfaced posts produces noticeably better results than a one-click variation. The ratio worth keeping: hand-rewrite the top 20% of resurface candidates; voice-trained-AI-rewrite the next 50%; skip the bottom 30%.
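The 20/50/30 ratio can be sketched as a simple tiering pass. Ranking by `voice_match` is an assumption here; any ranking of resurface candidates would slot in:

```python
# Illustrative sketch of the 20/50/30 split: hand-rewrite the top slice,
# voice-trained-AI rewrite the middle, skip the rest. Ranking key is assumed.

def tier_candidates(candidates):
    ranked = sorted(candidates, key=lambda p: p["voice_match"], reverse=True)
    n = len(ranked)
    hand_cut = round(n * 0.2)
    ai_cut = hand_cut + round(n * 0.5)
    return {
        "hand_rewrite": ranked[:hand_cut],   # top 20%: by hand, every time
        "ai_rewrite": ranked[hand_cut:ai_cut],  # next 50%: voice-trained model
        "skip": ranked[ai_cut:],             # bottom 30%: not worth resurfacing
    }

tiers = tier_candidates([{"id": i, "voice_match": 10 - i} for i in range(10)])
```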
Step 4, voice-first: the scheduling cadence and the never-schedule list
Welsh's system schedules the variations 6 to 12 months ahead. This part is well-calibrated for voice-first creators with one caveat: don't fill the schedule. Scheduled posts should be about half of your output, not 80%. The remaining half is live posts (reactive observations, reply threads, time-bound takes). The voice-first take on scheduling tools covers the broader principle: heavy schedulers reduce the marginal cost of skipping live posting, and the live posting is where most of the voice work happens.
The never-schedule list still applies inside the Welsh system. Replies, customer service, crisis posts, reactive observations, time-bound calls. The repurposing engine is for evergreen voice samples, not for everything that ships from your account.
Where the 90% claim is right and where it's misleading
Welsh's '90% of your audience hasn't seen it' line is true on the median. The catch is that the 10% who did see it the first time are disproportionately your most-engaged followers, your repeat readers, your DM correspondents. They notice the resurface. The right test isn't 'will most readers have missed this'; it's 'is the post worth the most-engaged 10% seeing it again.' Voice-rich evergreen passes this test (your most-engaged readers re-read it with pleasure). Template-resurfaced content fails it (your most-engaged readers register the rebottling).
A useful heuristic: would your top-100 most-engaged followers reply to this post if you shipped it again today? If yes, resurface. If no, it's not actually a hit; it had a moment.
Where Auden fits in the Welsh system
Auden, the brain inside VoiceMoat, trains on a creator's full profile (100 to 200 posts, replies, threads, and images across 9 signals of voice) and produces drafts that match the writer's register, with a voice match score attached. The fit with Welsh's system is in two places: (1) Voice match scoring on the swipe-file candidates, so voice anomalies are filtered out structurally rather than by judgment alone. (2) Voice-trained variation generation for the middle 50% of resurface candidates where hand-rewriting is over budget. The top 20% still gets hand-rewritten; the bottom 30% gets skipped.
The system is correct in its bones. The 6-month resurface is the right cadence; the 90%-haven't-seen-it claim is mostly right; the swipe-file structure is the right operating model. The voice-first version protects the system from its own most-common failure mode: a swipe file that quietly converts into a hook-recycler and a feed that quietly converts into a content-account.