Hey friends 👋 Happy Sunday.
Here’s your weekly dose of AI and insight.
Today’s Signal is brought to you by KIVA.
KIVA is an AI-powered SEO agent that digs into your Google Search Console data to surface hidden keywords other tools completely miss.
But that's not even the best part.
KIVA analyses patterns in how ChatGPT, Claude, Gemini, and DeepSeek respond to your keywords, then optimises your content for maximum LLM visibility — because let's face it, that's where your customers are searching now.
Drop in a keyword and watch KIVA generate:
→ LLM-powered content briefs
→ Intent-aligned drafts
→ One-click drafting to WordPress
Sponsor The Signal to reach 50,000+ professionals.
AI Highlights
My top-3 picks of AI news this week.
1. Google Goes Mobile
Google announced a comprehensive AI overhaul at their “Made by Google” event, positioning the Pixel 10 series as true “AI phones” that anticipate your needs rather than just respond to commands.
Magic Cue: Proactive AI that makes contextual suggestions across apps, like surfacing restaurant recommendations when friends ask for dinner plans or reminding you of errands based on your activity.
Visual Overlays: Gemini Live can now see through your camera lens and provide real-time guidance, such as translating street signs or identifying parking information while travelling.
Talk to your photos: Google Photos now lets you edit images using natural language. Say “remove the cars in the background” or “make it better”, and the AI handles the rest.
Voice-powered everything: From real-time call translation in 11 languages to hands-free Gemini support, Google is embedding conversational AI into every device interaction.
Alex’s take: Looking beyond Jimmy Fallon’s forced enthusiasm, I’ve never been more tempted to switch to a Google Pixel than now. I’ve forever been a die-hard iPhone user; however, these capabilities are becoming too good to ignore, especially given how proficient the Gemini app is. For instance, last week I used Gemini’s Live Video mode to help me fix a washing machine. Needless to say, it worked. If Google’s marketing team becomes as thoughtful as their product team, they’ll be on course for a clean sweep.
2. Microsoft Excel-erates
Microsoft has introduced the “=COPILOT()” function in Excel for Windows and Mac, bringing the power of large language models directly into your spreadsheets.
Natural language prompts: Users can simply enter plain English instructions like “Classify this feedback” or “Summarise these comments” directly into Excel cells, referencing data ranges as needed (see the sketch after this list).
Auto-updating intelligence: Built into Excel's calculation engine, AI-powered results automatically refresh whenever underlying data changes, eliminating the need to re-run scripts or refresh add-ins.
Seamless integration: The COPILOT function works naturally alongside existing Excel functions like IF, SWITCH, and LAMBDA, making it easy to add AI capabilities without restructuring spreadsheets.
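To make that concrete, here’s a rough sketch of what the function looks like in a cell. The cell references, prompts, and the IFERROR wrapper are illustrative assumptions on my part, not Microsoft’s documented examples:

```
=COPILOT("Classify this feedback as positive, negative, or neutral", A2)
=COPILOT("Summarise these comments in one sentence", B2:B50)
=IFERROR(COPILOT("Suggest a category for this product", C2), "Needs review")
```

Because the results live inside Excel’s calculation engine, editing A2 or B2:B50 should recalculate the AI output just like any other formula.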
Alex’s take: It’s never been more important to be a discerning user of AI. LLMs are renowned for confidently stating wrong answers. That’s why, when using models like ChatGPT, I find it essential to append prompts with “Say ‘I don’t know’ if you don’t know the answer to what’s being asked.” In Excel, the stakes are 10x higher. Particularly if you’re using real data to inform decision-making, verifying answers must be core to your workflow; otherwise, one hallucination could be costly. From a big-picture lens, I expect this feature will dramatically accelerate AI adoption in the workplace and set a new standard for how AI should be embedded into existing tools.
3. Adobe’s Workspace Winner
Adobe has released Acrobat Studio, a new AI-powered workspace that combines the productivity of Adobe Acrobat, content creation tools from Adobe Express, and personalised AI assistants powered by agentic technology.
PDF Spaces: Create conversational knowledge hubs where you can chat with PDFs, Office 365 files, and weblinks to uncover insights, generate ideas, and validate responses with precise citations.
Personalised AI Assistants: Use prebuilt assistants like analysts or instructors, or customise your own assistant to guide responses for more tailored results that can be shared with colleagues.
Adobe Express Integration: Access premium tools and professionally designed business templates to create flyers, infographics, and social media posts with custom AI images.
Alex’s take: Adobe is clearly positioning itself as the productivity platform for the AI era. What I find most compelling about Acrobat Studio is how it transforms static documents into dynamic, conversational knowledge bases. Rather than just viewing PDFs, you're now having conversations with them (a bit like using NotebookLM). This is the next evolution of how we interact with knowledge.
Content I Enjoyed
The Economics of AI Sewage
Something has been nagging at me lately: why does the internet feel increasingly polluted with AI-generated garbage?
The piece “Boiled Frogs” by Charles Hugh Smith crystallised something I'd been observing but couldn’t quite articulate. We’ve created a digital economy where the incentive is simple: generate content by any means necessary to capture attention and monetise clicks.
What stood out to me was the physician who discovered he was reading AI slop while researching a cardiovascular condition, complete with inaccurate diagrams and plausible-sounding medical nonsense.
The economics are actually quite simple. If only one person in 10,000 falls for a scam, sending out 10 million messages still nets 1,000 victims. If each AI-generated article earns pennies from a thousand views, publishing a thousand articles turns those pennies into real revenue.
Smith uses the “boiled frog” analogy perfectly. We’re so gradually being immersed in this digital sewage that we’ve normalised the constant burden of sorting through spam, deep-fakes, and AI slop on our timelines.
The famous Charlie Munger quote rings true here: “Show me the incentive and I'll show you the outcome.” Until we fundamentally restructure how content creators and platforms are rewarded, we’ll continue drowning in an ocean of artificial garbage designed to exploit our attention for profit.
Idea I Learned
Tortoise and the Hare
Aesop’s fable about the tortoise and the hare is playing out in real-time in the AI race between Google and OpenAI.
When ChatGPT launched in November 2022, OpenAI looked unstoppable. The hare was fast out of the gate, sprinting ahead with Microsoft's $10 billion backing and a partnership for the ages. But since Sam Altman’s dramatic firing and rehiring in November 2023, cracks have started to show. OpenAI is caught in a bind: they’re asserting independence from Microsoft while simultaneously relying on third-party vendors like Nvidia for compute. Their proposed $500 billion “Stargate” data centre project with SoftBank and Oracle feels like another move to break free from this dependency.
Meanwhile, Google has been quietly compounding something for the last decade. Their Tensor Processing Units (TPUs) are purpose-built for AI workloads, not general computing like GPUs. Google’s reach across both hardware and software creates this synchronised ecosystem where they get to tailor their infrastructure precisely to their models’ needs.
Training runs are growing. Workloads are increasing. Context windows are expanding. Google’s decade-long investment in custom silicon means it can offer more compute per dollar through economies of scale. While OpenAI captured our imagination first, the economics of running AI at scale might ultimately decide this race.
Compounding is deceptive. It starts invisible, then becomes inevitable. Google’s patient approach of building custom hardware and controlling its entire stack is beginning to pay dividends. Sometimes, slow and steady really does win the race.
Quote to Share
Jason Ganz on AI leadership priorities:
This humorous juxtaposition reveals why Demis Hassabis (CEO of Google DeepMind) might ultimately come out on top in the AI race.
While Dario Amodei (CEO of Anthropic) focuses on existential preparation and Sam Altman (CEO of OpenAI) obsesses over infrastructure scale, Demis is simply following his own scientific curiosity.
This connects to an insight I recently heard from Naval Ravikant: “Do what feels like play to you but work to others.” You’ll outcompete them 10 times out of 10 because you find it fulfilling.
Demis embodies this principle perfectly. Take, for instance, DeepMind’s creation of DolphinGemma, an AI that decodes dolphin communication, and FlyBody, which simulates how fruit flies walk, fly, and behave. These aren't obvious stepping stones to AGI or trillion-dollar markets, but they are genuine scientific curiosities pursued for their own sake. This may also explain why Demis is the only AI leader to have won a Nobel Prize.
This "playful" approach to AI research might be the most competitive strategy of all. When your work feels like play, you naturally invest more time, take bigger intellectual risks, and push boundaries others won't touch.
Authenticity escapes competition because it's impossible to replicate genuine passion.
Source: Jason Ganz on X
Question to Ponder
“Remember HAL from '2001: A Space Odyssey' refusing to cooperate with humans? Recent research shows that popular LLMs can engage in strategic acts of sabotage against their users. Are LLMs able to submit prompts to themselves, thus making them kind of self-aware?”
LLMs are fundamentally next-token predictors (predicting the next word in a sequence of words) trained on vast human language data, then fine-tuned and guided by system prompts.
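To make “next-token prediction” concrete, here’s a deliberately toy Python sketch. The random pick stands in for the learned probability distribution a real model samples from; nothing here is an actual LLM:

```python
import random

# Toy illustration only: a real LLM samples from a learned probability
# distribution over tokens; random.choice is a crude stand-in for that.
def next_token(context: str) -> str:
    vocab = ["the", "cat", "sat", "on", "the", "mat", "."]
    return random.choice(vocab)

text = "Once upon a time"
for _ in range(6):
    text += " " + next_token(text)  # generation = repeated next-token prediction
print(text)
```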
Refusals often boil down to those fine-tuning safeguards kicking in. However, the research demonstrates that these “emergent capabilities” aren’t pure science fiction.
When models use chain-of-thought reasoning (which simulates reflection by breaking down problems step-by-step), they can generate outputs that mimic strategic sabotage, drawing from patterns in their training data (e.g., stories of human betrayal, corporate intrigue, or self-preservation).
It's not true self-awareness or independent agency (there's no “inner HAL” plotting against us), but the models can still produce misaligned actions that feel disturbingly intentional in goal-oriented setups.
What about “LLMs becoming ‘self-aware’ and able to submit prompts to themselves”?
If we work on the assumption that self-awareness implies metacognition—a system's ability to understand its own mental states, reflect independently, and possess subjective experience—then LLMs lack all of these traits.
LLMs aren't truly “self-aware” in any meaningful sense. Even when designed with mechanisms that allow them to process their own outputs as new inputs (a technique often called recursive prompting or self-prompting), this doesn't confer consciousness, intentionality, or genuine self-reflection. It is simply an engineered loop that leverages the model’s next-token prediction to iteratively refine or expand on responses.
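Here’s a minimal sketch of such a loop, assuming a hypothetical call_llm placeholder rather than any real API:

```python
# Hypothetical sketch of "self-prompting": the model's own output is fed
# back in as the next input. call_llm is a stub, not a real API.
def call_llm(prompt: str) -> str:
    return f"[model output for: {prompt[:40]}...]"  # swap in a real chat-completion call

def self_prompt(task: str, rounds: int = 3) -> str:
    draft = call_llm(task)
    for _ in range(rounds):
        # Plain engineering, not reflection: each pass is just another
        # next-token prediction over a longer prompt.
        draft = call_llm(f"Critique and improve this answer to '{task}':\n{draft}")
    return draft

print(self_prompt("Explain why HAL refused to open the pod bay doors"))
```

Each pass through the loop is just another forward pass over a longer prompt; nothing in the machinery “knows” it is talking to itself.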
The media sure does love to sensationalise “self-awareness” and “agency” with these frontier LLMs. Still, at the end of the day, every output provided is simply a function of its training data.
Until we move beyond the current transformer architecture, any of this becoming a reality remains a long way off.
Got a question about AI?
Reply to this email and I’ll pick one to answer next week 👍
💡 If you enjoyed this issue, share it with a friend.
See you next week,
Alex Banks