1 September 2025
The AGI Mirage - Scaling LLMs Won’t Get Us There

The more I learn about AI, and about machine learning and deep learning more specifically, the more I am convinced that Artificial General Intelligence will not come about through LLMs, but rather through a new kind of development.
Scaling up data and compute can yield remarkable emergent behaviors like reasoning, planning, coding and creativity, but LLMs are still fundamentally statistical next-token predictors. They don’t ground their understanding in the world, and they don’t inherently have memory, agency, or goals.
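To make the "next-token predictor" point concrete, here is a minimal sketch using GPT-2 via Hugging Face's transformers library (my choice of model and library for illustration; the post doesn't name a specific one). Whatever the prompt, the model's entire output at each step is just a probability distribution over the next token:

```python
# Toy illustration of next-token prediction with GPT-2 (illustrative choice).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits shape: (batch, sequence_length, vocab_size)
    logits = model(**inputs).logits

# The model's output is simply a distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id):>10s}  p={prob.item():.3f}")
```

Everything an LLM does, from writing code to "planning", is generated by repeatedly sampling from this kind of distribution.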
No real intelligence in the world is trained on a corpus of text. We are born with nothing except instincts, and we learn by seeing, hearing and experiencing. We have agency and drive; we build intuition, make connections, and are moved by our emotions and by competition.
To me, LLMs are not the foundation of Artificial General Intelligence but rather a component of it.
I believe most companies are aware of this and are marketing the hell out of LLMs to see how far they can scale this technology, whilst attempting to raise capital, sell products, gain political leverage and, ultimately, make a profit.
What we’re seeing is a massive experiment, a research program on steroids, and these companies pull you into their loop by saying AGI is just around the corner. The more we use their tools, the more data they have; and the more data they have, the further they can try to scale, using even more compute.
What do you think? Are LLMs the foundation of AGI?