I publish ai-weekly every week or two. Each edition starts with about 300 liked tweets and filters down to the 8-12 stories worth reading. People keep asking how, so here’s the whole thing.
🎯 The filter that matters most (you)
The most important part of this pipeline has nothing to do with code. It’s who I follow on X.
There are fancier ways to filter for signal: keyword monitors, algorithmic feeds, RSS aggregators. The thing that actually worked for me was being ruthless about my follow list. If my home timeline is mostly noise, no amount of automation fixes that. If it’s high-signal, everything downstream gets easier.
I also maintain a public list called LLMSignal with about 30 of the highest-signal AI/LLM accounts I’ve found on X. You can follow it directly if you want a similar starting point.
📱 Like tweets throughout the week (you)
No special workflow. I use X normally and like or bookmark things I think are worth knowing about. That’s the entire input.
- Likes and bookmarks both get captured
- Bookmarks are for things I want to reference but might not “like” publicly
- No manual tagging or categorizing at this stage
📥 Fetch and dedupe (automated)
A script pulls my recent likes and bookmarks, deduplicates them, and expands all the shortened URLs so each tweet points to its actual source.
- Runs on a schedule via a Cloudflare Worker, stores results in cloud storage
- I originally tried X’s real-time webhook API, but it uses an older API version that cuts off almost half of longer tweets. Polling on a schedule turned out to be simpler and more reliable.
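The actual worker isn’t public Python, so this is just a minimal sketch of the dedupe-and-expand idea; the `"id"` field name and the shape of the tweet dicts are assumptions, and the URL expansion simply follows redirects:

```python
import urllib.request


def dedupe_tweets(tweets):
    """Keep the first occurrence of each tweet.

    `tweets` is the concatenation of likes and bookmarks, so the same
    tweet can appear twice; we assume each is a dict with an "id" key.
    """
    seen = set()
    unique = []
    for tweet in tweets:
        if tweet["id"] not in seen:
            seen.add(tweet["id"])
            unique.append(tweet)
    return unique


def expand_url(short_url, timeout=5):
    """Follow redirects so a t.co link resolves to its real destination.

    Best-effort: on any network failure, keep the short URL rather than
    dropping the link entirely.
    """
    try:
        req = urllib.request.Request(short_url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.url
    except Exception:
        return short_url
```

Deduping by tweet ID rather than by text matters here: the same story retweeted with different wording should survive, but a tweet that was both liked and bookmarked should not appear twice.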
📋 Build the research brief (automated)
Another script takes the raw tweets and organizes them into a structured brief. It groups stories into two tracks (“Dev / Vibe Coding” and “Consumer AI”) and writes a short summary of each.
- The output is a research document, not a blog post. It’s organized by topic with source links, designed to hand off to the next step.
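A sketch of that grouping step, with hypothetical field names (`track`, `summary`, `url`); the real script’s input format and output layout will differ, but the shape of the brief is the same: topic headings, then sourced one-liners.

```python
def build_brief(stories):
    """Group stories into the newsletter's two tracks and emit a
    plain-text research brief with source links.

    Each story is assumed to be a dict with "track", "summary", and
    "url" keys.
    """
    tracks = {"Dev / Vibe Coding": [], "Consumer AI": []}
    for story in stories:
        # setdefault tolerates a track name outside the two defaults
        tracks.setdefault(story["track"], []).append(story)

    lines = []
    for track, items in tracks.items():
        lines.append(f"## {track}")
        for item in items:
            lines.append(f"- {item['summary']} ({item['url']})")
        lines.append("")
    return "\n".join(lines)
```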
🔍 Deep research (Claude)
The brief goes into a Claude session with web access. Claude researches each story: finds the original announcements, checks claims against primary sources, pulls in benchmarks or details that tweets left out.
- This is where most of the “journalism” happens. A tweet might say “X just launched Y” and Claude finds the actual blog post, changelog, or paper.
- Output is structured findings per story, still split by track.
✍️ Write the first draft (Claude Code)
The research output goes into Claude Code, which produces a first draft of the full blog post in my voice.
- Sections get stable anchor IDs so links from the newsletter and Twitter thread won’t break if I rename a heading later.
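One way to get stable anchors (a sketch, not the post’s actual implementation): derive a slug from the title the first time a section is published, but let a hand-assigned `id` override it so renaming the heading later never changes the URL fragment.

```python
import re


def slugify(heading):
    """Lowercase the heading and keep only alphanumerics and hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", heading.lower()).strip("-")


def anchor_id(section):
    """Prefer an explicit, hand-assigned id; fall back to a slug of the
    title. The explicit id is what keeps old links working after a
    heading rename."""
    return section.get("id") or slugify(section["title"])
```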
✂️ Editorial review (you)
This is where I take over. The AI draft is usually 80-90% of the way there, but the last 10-20% is what makes it worth reading.
- I reorder stories by what I think matters most that week
- I rewrite intros, cut anything that feels like filler, and add my own takes
- I’ll sometimes send sections back to Claude with specific instructions (“this is burying the lede” or “this misses the real significance”)
- Nothing publishes until I’ve read through the whole thing and I’m happy with it
🚀 Publish and distribute (you)
I preview locally, push to GitHub, and CI/CD handles the rest. Then a few more things go out:
- Cover image generated with Gemini 3 Pro via OpenRouter. I give it a prompt describing the post’s concept, it produces two options in different illustration styles, and I pick the one I like. (The image at the top of this post was made that way.)
- Email newsletter via Buttondown. It’s a shorter version of the post, not a copy-paste. Each item links back to the full write-up.
- Twitter thread posted with a script that chains tweets as replies. The thread gets a humanizer pass before posting (scrubbing AI writing patterns). Blog link goes in the last tweet, not the first.
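The reply-chaining in the thread script reduces to one loop: each tweet replies to the ID returned by the previous post. A minimal sketch, with the actual X API call injected as a `post` function (so this shows the chaining logic, not any particular API client):

```python
def post_thread(tweets, post):
    """Post a list of tweet texts as a chained reply thread.

    `post` is a function(text, reply_to_id) -> new tweet id; the first
    tweet is posted with reply_to_id=None, and each subsequent tweet
    replies to the one before it. Returns the posted tweet ids in order.
    """
    ids = []
    reply_to = None
    for text in tweets:
        reply_to = post(text, reply_to)
        ids.append(reply_to)
    return ids
```

Putting the blog link only in the final tweet means the link rides on the whole thread’s engagement rather than suppressing the first tweet’s reach.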
This is the pipeline behind every ai-weekly post. The source code is public if you want to poke around.
