Hey friends 👋 Happy Sunday.
Here’s your weekly dose of AI and insight.
Today’s Signal is brought to you by INBOUND.
INBOUND 2025 is heading to San Francisco, Sept. 3–5, for a one-time-only West Coast edition. Join industry icons for bold insights, real strategy, and next-gen networking at the heart of the AI revolution. VIP tickets are sold out, and GA is going fast. Don't wait: secure your spot at INBOUND 2025!
Sponsor The Signal to reach 50,000+ professionals.
AI Highlights
My top-3 picks of AI news this week.
US Politics
1. Inside Trump’s AI Takeover
The Trump administration unveiled its comprehensive AI Action Plan this week, marking a dramatic pivot from Biden’s cautious approach to artificial intelligence governance.
Deregulation focus: The plan prioritises cutting “bureaucratic red tape” and environmental regulations to accelerate AI infrastructure development, including fast-tracking data centre construction on federal lands.
Anti-woke AI mandate: A new executive order requires federal agencies to only contract with AI developers who ensure their systems are “objective and free from top-down ideological bias,” targeting diversity and climate considerations in AI training.
China competition strategy: The administration aims to strengthen chip export controls while building “enduring global alliances” to prevent adversaries from benefiting from US AI innovation.
Alex’s take: This is the most significant shift in US AI policy since ChatGPT launched in November 2022, and perhaps the most important one for building truth-seeking models. The real test will be whether this deregulation approach speeds up real infrastructure development or just creates new bureaucratic battles around defining “neutral” AI.
OpenAI
2. OpenAI’s Oracle Alliance
OpenAI and Oracle have announced a massive expansion of their Stargate AI infrastructure partnership, one of the largest data centre deals in history.
Record-breaking agreement: Oracle will provide 4.5 gigawatts of additional data centre capacity to OpenAI for $30 billion per year, equivalent to the power of two Hoover Dams and enough electricity for 4 million homes.
Job creation engine: The expansion is expected to create over 100,000 jobs across construction and operations roles in the U.S., with the first Stargate facility in Abilene, Texas already operational and running early AI training workloads.
Infrastructure milestone: Combined with existing partnerships, this brings OpenAI's total Stargate capacity to over 5 gigawatts under development, capable of running over 2 million chips and advancing its goal of $500 billion in AI infrastructure investment.
Alex’s take: The scale of this deal is staggering. $30 billion annually is more than Oracle's entire cloud revenue from all customers combined last year. When companies are willing to commit these kinds of resources to data centres, it signals we're still in the very early innings of the AI infrastructure race.
3. Google's Veo 3 Gets Sketchy
The Google team have discovered a new feature in Veo 3 that transforms how users interact with AI video generation by allowing direct drawing on images instead of complex text prompts.
Doodle-to-video: Users can now sketch instructions directly onto a video’s first frame, with Veo 3 interpreting and executing these visual cues in the generated video.
Intuitive controls: Simple drawings like adding glasses, camera movement arrows, or scribbles to remove objects are instantly understood and implemented by the AI.
Natural interaction: This approach mimics how humans would naturally communicate with an artist, replacing the need for precise text prompting with visual communication.
Alex’s take: We’ve been so focused on perfecting prompt engineering that we forgot the most natural way to communicate creative ideas is often visual. And I love how this discovery emerged as an experiment. Here are a couple of my favourite examples I’ve seen so far (1) (2).
Content I Enjoyed
Demis Hassabis on Lex Fridman
I was captivated by Demis Hassabis’s two-and-a-half-hour conversation with Lex Fridman, where the DeepMind CEO shared some piercing insights about the current state and future of AI development.
A recent revelation concerns Google’s video generation model Veo 3, which appears to have intuitively learned physics just from watching YouTube videos. Hassabis drew on his background writing physics engines for games, explaining how “painstakingly hard” it is to program realistic physics behaviour; yet Veo 3 can model complex phenomena like liquids flowing through hydraulic presses with surprising accuracy. The most remarkable aspect is that this is achieved through passive observation alone.
As Hassabis puts it, this points to “some kind of lower dimensional manifold that can be learned,” and “that's maybe true of most of reality.” The implications of this extend beyond just video generation models. As we approach true world models, these systems will understand the mechanics and physics of reality itself. This capability, he argues, is “what you would need for a true AGI system.”
Perhaps most intriguingly, Hassabis puts the chance of achieving AGI by 2030 at 50%, with a characteristically high bar: matching all the cognitive functions of the human brain across tens of thousands of tasks.
I particularly liked his proposed test for this: give “a few hundred of the world’s top experts” months with the system and see if they can find obvious flaws.
It’s clear that over the next decade, we’re approaching a digital mind that can understand reality as intuitively as we do, but potentially far more comprehensively.
Idea I Learned
Sam Altman’s Third AI Risk
Sam Altman was recently asked what worries him most about AI. He outlined three categories of risk.
The first involves bad actors gaining access to superintelligence. Think adversaries using AI to design bioweapons, take down power grids, or break into financial systems before the rest of the world has powerful enough AI to defend against it.
The second category is the classic sci-fi scenario: loss of control incidents where AI rebels and refuses to be turned off, essentially the Terminator-style AI uprising we’ve seen in movies.
But it's Altman’s third category of AI risk that I’ve been thinking about a lot recently, because it’s so quietly rational.
Unlike a Terminator-style rebellion or bad actors weaponising the technology, this risk has no villains. Instead, it’s about a gradual handover of human decision-making to a system far more sophisticated than our own minds.
Altman used chess as an analogy. When Deep Blue beat Kasparov, the next evolution was AI + human. The AI would suggest 10 moves, and the human would pick the best one. This hybrid approach dominated for about three months. Then the AI got so good that human input only made things worse. The human couldn’t understand the deeper patterns the AI was seeing.
We’re already seeing this bleed through to general decision-making today. Altman points to young people who “can't make any decision in their life without telling ChatGPT everything.” Even if the AI gives better advice than any human therapist, something feels fundamentally wrong about outsourcing our decision-making entirely.
The example that drove this home: imagine the US President unable to make better decisions than simply following ChatGPT-7’s recommendations, yet unable to understand the reasoning behind them. Or a CEO saying, “ChatGPT-7, you’re in charge now.”
Whilst every individual decision might be perfectly rational and superior to human judgement, we’d have handed our critical thinking over to systems we don’t fully understand.
Voluntarily stepping aside because we’re “not as good at the job anymore” is an idea we must think deeply about. Our agency has always defined us, and it will shape the most pivotal decisions of the future. But we should recognise that maintaining it is now a choice, even when it’s no longer the optimal one.
Quote to Share
Bernie Sanders on Elon Musk’s vision for the future:
Two days after Tesla announced its 24/7 Tesla Diner & Supercharger in Hollywood, LA, Bernie Sanders shared his reaction, one of a mixed bag of responses (which included inflatable Elon Musk figures).
I think it raises an interesting point, not on the importance of human progress but on the importance of human interaction.
Humans will crave human interaction more than ever in the age of AI. For some, saying “hi” to a human at the checkout is the most interaction they’ll get in their entire day. Many opt for a staffed checkout over a self-checkout for this exact reason.
When we lose these touchpoints, what does the future of human interaction look like?
Source: Bernie Sanders on X.
Question to Ponder
“Will the increased use of AI in our lives lead to a change in how we think?”
One hundred per cent.
We now need to turn our attention toward directing knowledge instead of memorising it.
AI is capable of writing anything you’ve ever written as long as your prompt is good enough. That is, providing the necessary context, examples, and framework to produce a great output.
But the idea that AI will replace thinking (and subsequently make humans brain-dead beings) is a totally flawed argument.
It reminds me of the age-old adage, “garbage in, garbage out”. In the context of AI, the quality of your output is only as good as the quality of your prompt. And the quality of your prompt is only as good as the quality of your thinking when it comes to applying the necessary context and asking the right questions to get the answer you desire.
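To make that concrete, here’s a minimal sketch in Python. It calls no real model API and the wording is illustrative (drawn from this issue), but it shows the difference between a lazy prompt and one that supplies the context, examples, and framework mentioned above:

```python
# A vague prompt: no context, no examples, no framework.
weak_prompt = "Write a newsletter intro about AI."

# The same request with the ingredients that determine output quality:
# context, an example of the desired voice, and a framework to follow.
strong_prompt = """\
Role: You are the editor of a weekly AI newsletter read by professionals.

Context: This week's top story is a $30bn-per-year data centre deal
between OpenAI and Oracle.

Example of my voice: "When companies commit these kinds of resources
to data centres, we're still in the very early innings."

Framework: Open with the most surprising number, explain why it matters
in two sentences, then close with one forward-looking question.

Task: Write a 60-word intro in that voice, following that framework.
"""

# Same model, same topic; the thinking encoded in the prompt is the
# main variable you control.
for name, prompt in [("weak", weak_prompt), ("strong", strong_prompt)]:
    print(f"--- {name} prompt, {len(prompt.split())} words ---")
    print(prompt.strip())
```

Every element of the stronger prompt is a product of the prompter’s own thinking about what a good answer requires.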
Directing knowledge, critical thinking, and questioning assumptions are the core skills that have to be prioritised in the age of AI.
Got a question about AI?
Reply to this email and I’ll pick one to answer next week 👍
💡 If you enjoyed this issue, share it with a friend.
See you next week,
Alex Banks