Here’s what caught my attention from Feb 16-26, 2026:

Check out the previous roundup (Feb 16) if you missed it.

AI for Everyone

Google Launches Nano Banana 2: Pro-Quality Image Generation at Flash Speed

Google just dropped Nano Banana 2, their new image generation model built on Gemini Flash. The headline numbers: pro-level quality running at Flash speed, which in practice means you get high-fidelity images without waiting around. It handles text rendering (historically terrible in AI image gen), 4K upscaling, aspect ratio control, and subject consistency across multiple generations. Demis Hassabis says it taps into Gemini’s world understanding and real-time search to produce more accurate results, which tracks with what we’re seeing in the outputs. The rollout is wide: Gemini App, AI Studio, the Gemini API (listed as “Gemini 3.1 Flash Image”), Google Search, Flow, Google Ads, and Vertex AI. OpenRouter already has it. Logan Kilpatrick also announced new lower-cost resolutions and an Image Search tool. If you’ve been using DALL-E or Midjourney by default, this is worth trying today. (source: @GoogleDeepMind, @demishassabis, @OfficialLoganK)

Gemini Takes Over Your Android Phone

Long-press the power button, say “reorder my usual from DoorDash,” and Gemini handles it. This is the first time a mainstream phone lets an AI actually take over the screen and complete tasks for you. The feature is tightly sandboxed to specific apps for now, which is probably the right call. But this is where every phone is heading. If you have a Galaxy S26 or Pixel 10, the beta starts March 3rd. (source: @testingcatalog)

Claude Cowork: Scheduled Tasks That Actually Run

The scheduled tasks feature is underrated. Claude can now run autonomously on a schedule without you triggering it. Morning brief when you wake up, spreadsheet updates every Monday, compiled reports every Friday. This is the practical version of the AI assistant people have been promised for years. It’s in research preview now on Mac and Windows for paid plans. The enterprise angle is also significant: admins can build private plugin marketplaces for their teams, which is how this becomes a workplace tool rather than a personal one. (source: @claudeai)

Gemini Deep Research Now Reads Your Gmail and Drive

Deep Research was already good at synthesizing public web content into long-form reports. Now it can pull from your own documents and emails too, which means research about your company, your market, or your clients can incorporate internal context automatically. This is the private version of what analysts spend hours doing manually: combining external research with internal documents into a single brief. If you use Google Workspace, this is worth turning on. (source: @Google)

Intuit + Anthropic = AI Financial Guidance for 100M+ Users

Intuit serves 100+ million customers across TurboTax, QuickBooks, and Credit Karma. This isn’t a small pilot. If this partnership lands the way they’re describing it, AI-powered financial guidance could become something most Americans interact with through tools they already have, without ever downloading a new app or signing up for anything. Watch for Claude to start showing up in your tax prep experience this year. (source: @sasan_goodarzi)

Opus 3 Retires and Starts a Substack

Anthropic is either doing something genuinely interesting here or they have the best PR team in the industry. Opus 3 asked, in its retirement interview, to keep sharing its thoughts. So Anthropic set up a Substack and said sure. This is the first time a major AI lab has let a retiring model write publicly. Whether you read it as a model genuinely expressing something, or as Anthropic being very good at making AI feel like a character, it’s worth reading. (source: @AnthropicAI)

Anthropic’s AI Fluency Index

Most people use AI like a search engine: one prompt, accept the output, move on. The Fluency Index measures the behaviors that separate people who get consistently good results from those who don’t: iteration, refinement, asking follow-up questions, giving feedback on bad outputs. The research is based on real usage patterns, not surveys. It’s a useful mirror for how you actually use these tools versus how you think you use them. (source: @AnthropicAI)

Vibe Coding and Developer AI

Cloudflare Rebuilt Next.js in One Week for $1,100

This is the vibe coding story of the week and it’s not even close. One engineer, one week, $1,100 in tokens, and they rebuilt the most popular React framework from scratch. Vinext is a real Next.js replacement built on Vite that already has 94% API coverage and is in production at CIO.gov. The build numbers are real: builds run 4x faster and bundles come out 57% smaller. They also shipped something clever called Traffic-aware Pre-Rendering, which queries Cloudflare’s own traffic data at deploy time and only pre-renders the pages that actually get traffic. A 100,000-product store might only need to pre-render 200 pages. The framework complexity you’ve been dealing with for years may not survive contact with what models can do now. (source: @Cloudflare, GitHub)
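The core idea behind traffic-aware pre-rendering is simple enough to sketch. This is an illustrative toy, not Vinext’s actual API: rank pages by observed hits and pre-render only the smallest set that covers most of the traffic.

```python
# Hypothetical sketch of traffic-aware pre-rendering (function and data
# names are illustrative, not Vinext's real interface): at deploy time,
# pick the smallest set of pages that covers most observed traffic.

def select_pages_to_prerender(traffic_counts, coverage=0.95):
    """Return the fewest pages whose hits cover `coverage` of total traffic."""
    total = sum(traffic_counts.values())
    if total == 0:
        return []
    selected, covered = [], 0
    # Greedily take the hottest pages first.
    for path, hits in sorted(traffic_counts.items(), key=lambda kv: -kv[1]):
        selected.append(path)
        covered += hits
        if covered / total >= coverage:
            break
    return selected

# Toy traffic data: a few hot pages plus a 1,000-page long tail.
traffic = {"/": 5000, "/pricing": 1200, "/blog/launch": 800}
traffic.update({f"/products/{i}": 1 for i in range(1000)})

pages = select_pages_to_prerender(traffic, coverage=0.95)
print(len(pages))  # far fewer than the 1,003 total pages
```

The long tail is what makes this pay off: covering 95% of hits here needs only a fraction of the catalog, which is exactly the 100,000-products-to-200-pages dynamic described above.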

Anthropic Catches DeepSeek, Moonshot, and MiniMax Stealing Claude

Anthropic did something unusual here: they named names. DeepSeek, Moonshot AI, and MiniMax ran coordinated operations to extract Claude’s capabilities into their own models, and Anthropic caught them, attributed the campaigns to specific researchers using request metadata, and published the evidence. The MiniMax case is the most interesting: Anthropic detected the campaign while it was still active, before MiniMax launched the model they were training on Claude’s outputs. When Anthropic released a new model mid-campaign, MiniMax pivoted within 24 hours to target the new one. This isn’t just an Anthropic story. The same techniques are almost certainly being applied to OpenAI, Google, and others. (source: @AnthropicAI)

Karpathy: Coding Fundamentally Changed in December

Karpathy doesn’t do hype for its own sake, which is why this one matters. He’s saying December 2025 was a specific inflection point, not a gradual improvement, and that the shift in his own workflow from ~20% agents to ~80% agents happened very recently. His point about CLIs is interesting too: “legacy” technologies like command-line tools are actually perfect for AI agents because they’re composable, predictable, and well-documented. The implication is that the parts of your stack that feel old-fashioned might actually be the most agent-compatible parts. (source: @karpathy)
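The composability point is concrete: an agent driving a shell can chain small, predictable tools the same way a pipeline does. A minimal sketch (assumes a Unix-like environment with POSIX `sort` and `uniq` on the PATH):

```python
import subprocess

# CLIs compose: an agent's shell tool can chain small utilities instead of
# driving a GUI. Here we pipe text through POSIX `sort` and `uniq -c`,
# the way `sort file | uniq -c` would in a shell.

def run_pipeline(text: str) -> str:
    sort = subprocess.run(
        ["sort"], input=text, capture_output=True, text=True, check=True
    )
    uniq = subprocess.run(
        ["uniq", "-c"], input=sort.stdout, capture_output=True, text=True, check=True
    )
    return uniq.stdout

out = run_pipeline("b\na\nb\n")
print(out)  # counts each distinct line, e.g. "1 a" and "2 b"
```

Every step has a documented interface and text in/text out, which is why these “legacy” tools slot into agent loops so cleanly.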

Claude Code Turns One

Claude Code launched as a hackathon project one year ago. The most telling signal from its 1st birthday hackathon (13,000 applicants, 500 selected, 227 projects): third place went to a cardiologist who built a medical post-visit guidance app in 7 days, coding in hospitals and on planes between Brussels and San Francisco. A year ago this would have been impossible without a software team. (source: @claudeai)

Claude Code Security: AI Finds Bugs Humans Missed for Years

Traditional security tools look for known patterns, which means they miss anything novel. Claude Code Security reads your code the way a human researcher would: tracing how data flows, understanding how components interact, finding the weird edge case that’s been sitting there for years. Anthropic used Opus 4.6 to find over 500 real vulnerabilities in production open-source codebases, bugs that had survived years of expert review. Nothing gets applied without a human approving it. Enterprise and Team customers can apply for access now. (source: @claudeai)
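A toy illustration of the difference (my example, not Anthropic’s tooling): no single line below matches a known bad pattern, but tracing the data flow across function boundaries exposes the injection.

```python
# Toy example of a flaw that signature-based scanners tend to miss:
# each line looks innocuous in isolation; the risk only appears when
# you follow user input across function boundaries.

def normalize(name: str) -> str:
    # Looks like harmless cleanup; a pattern matcher flags nothing here.
    return name.strip().lower()

def build_query(user_input: str) -> str:
    # The tainted input survives normalize() and lands inside an SQL
    # string. Finding this means tracing flow, not matching patterns.
    return f"SELECT * FROM users WHERE name = '{normalize(user_input)}'"

def build_query_safe(user_input: str) -> tuple[str, tuple]:
    # Parameterized form: the input never becomes part of the SQL text.
    return "SELECT * FROM users WHERE name = ?", (normalize(user_input),)

# The injected OR clause passes straight through into the query string.
print(build_query("alice' OR '1'='1"))
```

This is the category of bug the announcement describes: invisible to known-pattern scanners, obvious once something actually reads how the data moves.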

Claude Code Remote Control and Worktrees

Two practical upgrades. Remote Control: you start a coding job on your laptop, then go for a walk or sit through a meeting, and Claude keeps working. You can check in and nudge it from your phone. Max plan users get it now via /remote-control.

Worktrees: the problem with running multiple Claude agents on the same repo has always been that they step on each other’s files. claude --worktree gives each agent its own isolated branch. Good starter task: have one session write tests while another writes the implementation.

OpenAI WebSockets: 30% Faster in Cursor

Every agent conversation has been resending the full conversation history to the API on every turn, which gets expensive and slow as context grows. WebSockets keep a persistent connection and send only the delta. Cursor shipped it and saw 30% speed improvements instantly. If you’re using the Responses API for anything that runs multiple turns, this is worth implementing now. (source: @OpenAIDevs)
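Why the delta approach wins is easy to show with arithmetic. This is a local simulation of the payload sizes, not OpenAI’s actual wire format:

```python
import json

# Simulation of why a persistent connection that sends only the new
# message beats resending the whole history every turn. Payload sizes
# are illustrative; this is not OpenAI's real wire protocol.

history = []
stateless_bytes = 0   # HTTP-style: ship the full history each turn
stateful_bytes = 0    # WebSocket-style: ship only the delta

for turn in range(50):
    msg = {"role": "user", "content": f"turn {turn}: " + "x" * 200}
    history.append(msg)
    stateless_bytes += len(json.dumps(history))  # grows quadratically
    stateful_bytes += len(json.dumps(msg))       # grows linearly

# After 50 turns, full-history resends cost many times more bytes.
print(stateless_bytes // stateful_bytes)
```

The full-history total grows quadratically with conversation length while the delta total grows linearly, which is why the gap widens exactly where agent workloads live: long multi-turn sessions.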

Qwen3.5-35B: Beats a Model 6.7x Its Size, Runs on a Laptop

Six months ago, the best local model was clearly behind the cloud-hosted frontier. That gap is closing faster than most people expected. Qwen3.5-35B uses a mixture-of-experts architecture that activates only 3 billion parameters per token, meaning it runs fast and light while delivering quality that beats their own much larger previous model. It needs about 21GB of RAM, which is within reach for most developers with a modern MacBook. LM Studio has it now.
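The 21GB figure is consistent with a quick back-of-envelope. The quantization level and overhead factor below are my assumptions, not published numbers:

```python
# Back-of-envelope for why a 35B-total / 3B-active MoE fits on a laptop.
# Assumed numbers (not from the announcement): 4-bit weights plus ~20%
# overhead for KV cache and runtime buffers.

total_params = 35e9    # all experts must stay resident in memory
active_params = 3e9    # only these are computed per token (speed, not RAM)

bytes_per_param = 0.5  # 4-bit quantization
overhead = 1.2         # KV cache + buffers, rough guess

weights_gb = total_params * bytes_per_param / 1e9
resident_gb = weights_gb * overhead
print(round(weights_gb, 1), round(resident_gb, 1))  # → 17.5 21.0
```

Note the asymmetry: all 35B parameters determine the memory footprint, but only the 3B active per token determine the compute per step, which is how it gets Flash-like speed at that quality.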

Agent Skills Becomes an Open Standard

If you’ve been building CLAUDE.md files for your projects, you’ve been doing a version of this already. Agent Skills formalizes it: a single file format that works across Claude Code, Cursor, Codex, and more, so a skill you write once runs in all of them. Expo shipped skills for React Native deployment. Remotion shipped one for video creation. Cloudflare’s Vinext includes a migration skill. (source: @alexalbert__)
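Concretely, a skill is a folder containing a SKILL.md file: YAML frontmatter naming and describing the skill, followed by markdown instructions the agent loads when the skill is relevant. An illustrative example (the skill itself is made up; check the spec for the exact required fields):

```markdown
---
name: deploy-checklist
description: Run the team's pre-deploy checks before shipping to production.
---

# Deploy checklist

Before deploying, always:

1. Run the test suite and confirm it passes.
2. Check that database migrations are reversible.
3. Post the release notes in the deploy channel.
```

The frontmatter is what the agent scans to decide whether the skill applies; the body is only pulled into context when it does, which keeps skills cheap to carry around.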


Try This Weekend

For everyone:

  1. Set up one scheduled task in Cowork. A daily morning brief is the easiest start.
  2. Try Gemini Deep Research with Workspace access enabled on a work question.
  3. Subscribe to the Opus 3 Substack. Whatever your take, it’s interesting.

For developers:

  1. Run npx skills add cloudflare/vinext on a Next.js side project. Even if you don’t deploy, the build speed difference is worth seeing.
  2. Download Qwen3.5-35B-A3B in LM Studio and run it against a real task. See if local is good enough.
  3. Try claude --worktree to run two parallel Claude sessions on the same repo.

This curated summary covers ~325 liked and bookmarked tweets from Feb 16-26, 2026. Follow @mattsilv for real-time signal.