Anthropic Claws Back, Google Closes the Loop, and NVIDIA's $20B Insurance Policy
Hey friends 👋 Happy Sunday.
Here’s your weekly dose of AI and insight.
Every Wednesday, Signal Pro members get a step-by-step AI workflow they can apply immediately. No fluff, just practical guides to upskill you and your team. If you’re only reading the Sunday issue, you’re getting half the picture. Upgrade to paid today.
Today’s Signal is brought to you by Athyna.
LLM quality comes from people who can read a model’s output critically, write prompts that expose its weaknesses, and annotate responses with the nuance that generic pipelines miss.
Our trainers are bilingual, trained on real projects, and ready to integrate into your evaluation workflows.
Prompt design, evaluation and adversarial testing
Response scoring, ranking and preference data collection
Multilingual annotation across Spanish, Portuguese, and English
Part of the Athyna Intelligence LATAM talent network, working in U.S.-aligned time zones.
Sponsor The Signal to reach 55,000+ professionals.
AI Highlights
My top-3 picks of AI news this week.
Anthropic
1. Anthropic Claws Back
Anthropic shipped eight major features in five days, headlined by Dispatch, Claude Code Channels, and Projects in Cowork, effectively building its own version of the viral open-source agent OpenClaw before OpenAI could ship a rival.
Dispatch: A new Cowork feature that lets you text Claude from your phone while it works on your desktop computer. Assign tasks like pulling data from spreadsheets, drafting reports, or organising files, then walk away and come back to finished work. Available on Pro ($20/month) and Max ($100/month) plans.
Claude Code Channels: Developers can now message Claude Code directly through Telegram and Discord bots. Your coding session runs in the background on your machine, and you interact with it like texting a colleague, no need to be sitting at your computer.
Projects in Cowork: Tasks and context now live in one place, organised by area of work. Files and instructions stay on your computer, giving users both structure and privacy. Existing projects can be imported in one click.
Alex’s take: Eight features in five days for a product used by millions of people. That shipping pace alone is insane. Anthropic is now racing towards an AI agent you can text from anywhere that does real work on your actual computer. OpenClaw proved the demand earlier this year, when a thousand engineers queued outside Tencent HQ in Shenzhen just to get it installed. Anthropic watched this unfold and built a safer, more polished version inside its own product before OpenAI (which hired OpenClaw's creator) could respond. The real competition is no longer about benchmarks; it's about who becomes your most reliable remote worker first.
2. Google Closes the Loop
Google shipped two major product launches in a single week, upgrading AI Studio into a full-stack app builder and redesigning Stitch into an AI-native design canvas, covering everything from first sketch to deployed product inside Google’s ecosystem.
Vibe coding goes full-stack: AI Studio’s new Antigravity coding agent turns plain-language prompts into production-ready applications with databases, user login, and payment integrations built in. It detects what your app needs, sets up the backend infrastructure automatically, and deploys with one click, all inside the browser.
Vibe design arrives: Stitch has been rebuilt as an infinite canvas where you describe a business goal or feeling, and it generates high-fidelity app designs. You can now talk to the canvas using voice, generate five screens at once, and export directly to code or Figma format. It’s free with 350 generations per month.
Consolidation and casualties: Google is sunsetting Firebase Studio (the cloud development tool it launched barely a year ago) to funnel users into this new setup. Meanwhile, Figma shares dropped 8% in the two days following the Stitch announcement, signalling the market sees this as a direct competitive threat.
Alex’s take: This is Google doing what Google does best: bundling. Design, code, backend and hosting are now all connected and all free to start. It’s a smart full-stack play: users prototype for free in AI Studio, then scale on paid Google Cloud infrastructure. By making the full journey from idea to live product frictionless, Google puts serious pressure on standalone tools like Figma, Replit, and Bolt that only cover one piece of the puzzle.
NVIDIA
3. NVIDIA’s $20B Insurance Policy
NVIDIA held its annual GTC conference in San Jose this week, the AI industry’s biggest hardware event. Among a wave of announcements, the standout was the Groq 3 LPU, NVIDIA’s first chip built specifically for AI inference, born from a $20 billion licensing deal with chip startup Groq struck on Christmas Eve.
First non-GPU chip: The Groq 3 LPU uses on-chip memory that moves data seven times faster than NVIDIA's own GPUs, purpose-built for generating AI responses at ultra-low latency. It ships in Q3 2026 in dedicated rack systems holding 256 LPU chips, delivering up to 35x more throughput per megawatt than its predecessor.
The bigger picture: The Groq 3 LPU slots into NVIDIA’s new Vera Rubin platform, a seven-chip, five-rack AI supercomputer that succeeds Blackwell. Jensen Huang projected $1 trillion in orders through 2027 and declared that the “inflection point of inference” has arrived, as AI shifts from chatbots to autonomous agents generating tokens at an exponential rate.
Deal structure: Framed as a “non-exclusive licensing agreement,” the Groq transaction saw its founder Jonathan Ross and most of its engineering talent move to NVIDIA, while Groq technically remains independent. Analysts at Bernstein called it a structure designed to keep the fiction of competition alive and avoid antitrust scrutiny, a playbook already used by Microsoft, Meta, and Amazon.
Alex’s take: NVIDIA spent $20 billion to solve a problem most people don’t know it has. Its GPUs are the gold standard for training AI models, but inference (the bit where AI actually does useful work) is a whole different game. As AI agents multiply and every software company becomes a token factory, the ability to generate fast, cheap responses at scale becomes the bottleneck. NVIDIA bought itself the best inference technology on the market before anyone else could. Three months later, the first chip is already in production. At a third of its quarterly cash flow, that’s pocket change for a company staring down a trillion-dollar pipeline.
Content I Enjoyed
What 81,000 People Actually Want From AI
Anthropic, the company behind Claude, just published what it believes is the largest qualitative study ever conducted. Over one week in December, 80,508 Claude users across 159 countries and 70 languages sat down with an AI interviewer to share their hopes, fears, and real experiences with the technology.
The headline finding was that when asked to describe their ideal vision for AI, 19% said “professional excellence.” 14% wanted help managing life’s admin, 11% wanted time freedom, 10% wanted financial independence. But digging a little deeper, roughly a third of all responses describe a life where work takes up less of who they are. The underlying desire across all these groups lies in the idea of liberation.
81% of respondents said AI had already taken a step towards their stated vision. But I found the study’s most compelling insight to be what Anthropic calls “light and shade”: the idea that AI’s benefits and harms are entangled, not opposed. Someone who values AI for emotional support is three times more likely to worry about becoming dependent on it. It’s a real tension: two impulses pulling in opposite directions, coexisting within the same person.
On the concerns side, unreliability topped the list at 27%, followed by jobs and the economy (22%) and autonomy and agency (22%). Concern about jobs was the single strongest predictor of negative AI sentiment in the entire study. Regionally, optimism skewed heavily toward developing countries, with Sub-Saharan Africa, Central Asia, and South Asia being roughly twice as likely to express zero concerns compared to North America and Western Europe.
In those regions, AI is framed as a way to bypass traditional barriers, as in the stories of a former butcher in Chile with almost no computer experience who built a functioning business with AI, and a software engineer in Ukraine who learned C# through Claude and landed a job that provided military deferment.
The overwhelming takeaway from 81,000 responses is that people have a deep desire to reclaim time and use it for the things that make us human. Cooking with an ageing parent, picking up young children from daycare, even the simple pleasure of reading a book. In the age of AI, what people want most is to be more present.
Idea I Learned

$350 Billion In. Basically Zero Out.
Goldman Sachs Chief Economist Jan Hatzius told the Atlantic Council that AI investment spending made “basically zero” difference to US economic growth in 2025. Economists at Morgan Stanley and JPMorgan Chase reached similar conclusions.
February alone set the record for the biggest startup funding month ever. OpenAI raised $110 billion (the largest private funding round in history). Anthropic raised $30 billion. Bezos is in talks to raise $100 billion to buy and automate manufacturers with AI. Big Tech revealed combined capex plans exceeding $650 billion for 2026.
And yet, of the 2.2% US GDP growth in 2025, analyst Joseph Politano calculated that AI contributed just 0.2 percentage points. The reason is actually quite simple. Roughly 75% of the cost of an AI data centre is computing hardware and chips, most of it manufactured in Taiwan and South Korea. The US writes the cheques, Asia captures the GDP.
The Goldman analysis measures direct capital flow: chips, data centres, and the hardware that underpins AI. What it misses, and what is very difficult to quantify, is what happens when millions of knowledge workers get two or more hours back every week. These are the compounding productivity gains spreading through every department of every company that is slowly integrating these tools.
Goldman’s own 2023 research forecast that AI wouldn’t have a measurable impact on US GDP until 2027. By that timeline, we’re exactly where their models predicted. The money is flowing before the returns show up, just as it did with electricity, personal computers, and the internet.
The second- and third-order effects are the ones that actually transform economies, and they take years to surface. These tools are fundamentally changing how people work today, but the gains haven’t yet reached the GDP data.
Quote to Share
Terence Tao on the Dwarkesh Podcast on what advice he’d give someone considering a career in math in 2026:
When the world’s smartest mathematician tells you the gates are coming down, you pay attention.
The barriers that used to separate amateurs from experts, whether it be years of training, institutional access, or expensive equipment, are now collapsing across fields.
Take Paul Conyngham. He’s a data engineer from Sydney with zero background in biology. When his rescue dog Rosie was diagnosed with terminal cancer and chemo wasn’t working, he opened ChatGPT and started asking questions. He used AlphaFold to model the mutated proteins driving Rosie’s tumours, had her DNA sequenced for $3,000, and worked with researchers at UNSW to design a personalised mRNA vaccine. Within a month of the first injection, the main tumour shrank by 75%. Rosie went from barely being able to move to jumping fences and chasing rabbits.
Patrick Collison offered some important nuance on the story that I feel is worth noting. The cancer hasn’t been cured, and we can’t just synthesise magic mRNA treatments on demand. But he added that the system of regulators and manufacturers is “far too conservative” and that small-scale experimentation is much harder than it should be.
But it’s all coming. The tools to do extraordinary things are becoming available to anyone with curiosity and persistence. The bottleneck is shifting from knowledge and access to agency and willingness to take action. You can just do things.
Source: Terence Tao on the Dwarkesh Podcast (via Andrew Curran on X), Patrick Collison on X
Question to Ponder
“What’s the best way to actually start adopting AI if I’ve been stuck on the sidelines?”
Most people freeze in front of a blank AI chat window. They think they need a course first or a prompt engineering guide.
You don’t need any of that.
Open the voice memo app on your phone. Talk for 10 minutes about what you’re working on, what problems you’re trying to solve, what you want to accomplish. Don’t structure it. Just talk. When you think you’re done, keep going.
Then transcribe that audio using a free tool like ElevenLabs and paste it straight into Claude (using Opus 4.6 with extended thinking).
This works because when you speak, you think differently from when you type. You ramble. You circle back. You surface ideas you didn’t know you had. That messy, unfiltered stream of consciousness gives a far richer context than a carefully typed five-line prompt ever could.
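If you want to turn that loop into a repeatable script, the shape is simple: read the transcript, wrap it in a framing prompt so the model treats the rambling as context rather than instructions, then paste the result into Claude. A minimal Python sketch, where `transcribe_audio` is a hypothetical stand-in for whatever transcription tool you use (ElevenLabs, or your phone’s built-in dictation export), and the prompt wording is just an illustration:

```python
def transcribe_audio(path: str) -> str:
    """Hypothetical stand-in for a real transcription step.

    Swap this for your tool of choice; here we just read a
    transcript that was already saved to a text file.
    """
    with open(path, encoding="utf-8") as f:
        return f.read()


def build_prompt(transcript: str) -> str:
    """Frame the raw, unedited brain dump so the model knows
    what it is looking at and what to do with it."""
    return (
        "Below is an unedited, spoken-word brain dump about what I'm "
        "working on. Read all of it, then: (1) summarise the core "
        "problems, (2) list the decisions I seem to be circling, and "
        "(3) suggest three concrete next steps.\n\n"
        f"--- TRANSCRIPT ---\n{transcript}\n--- END TRANSCRIPT ---"
    )


if __name__ == "__main__":
    transcript = transcribe_audio("voice_memo.txt")
    # Paste this straight into Claude (or send it via the API).
    print(build_prompt(transcript))
```

The framing matters: without it, the model tends to answer the first question it hears in the ramble instead of synthesising the whole thing.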
Your brain can outpace your keyboard. So let it.
The people getting the most from AI right now are those who are most curious. They started before they felt ready, experimented without a playbook, and built the habit through persistence over perfection.
So stop waiting for the right course or the right tool. The interface is natural language, which we speak every single day.
Pick up your phone, hit record, and start talking.
Already a subscriber? Get your whole team on board. Signal Pro group subscriptions give everyone access to weekly AI workflows and tutorials, practical upskilling that pays for itself. It’s the kind of thing L&D budgets were made for. Share this with your manager today.
💡 If you enjoyed this issue, share it with a friend.
See you next week,
Alex Banks

P.S. Humanoid playing tennis.
Anthropic continues to ship at warp speed. It’s feeling more and more like a two-horse race between them and Google right now.