AGI is nowhere near, LLMs don't reason

By David Abram

Hey, remember when, a few years ago, devs were telling everyone that LLMs can’t think?

You probably don’t, because the AGI hypebeasts took over the megaphone.

Well, Apple recently released a research paper titled “The Illusion of Thinking,” which highlights something I’ve been saying about AI models for a while:

They don’t truly reason; they’re just excellent at recognizing patterns.

Apple gave a few advanced “reasoning” models, like DeepSeek-R1, Claude 3.7 Sonnet, and Gemini Thinking, logic puzzles of increasing complexity. The results weren’t surprising to me: the models handled simple problems fine, produced mixed results on slightly harder ones, and once the tasks became complex and nuanced, they totally sh*t the bed, even with additional compute or clear instructions.

As soon as the problems got too novel or unfamiliar, the models weren’t helpful at all. That’s exactly what you’d expect from something that’s great at pattern recognition but isn’t doing any actual reasoning.
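For a sense of what this kind of test looks like in practice, here’s a minimal sketch in Python of the complexity-scaling setup, using Tower of Hanoi (one of the puzzles in Apple’s paper). The recursive `solve` is just a stand-in I’m using so the script runs; in the actual experiment, the move list would come from the model:

```python
# Sketch of a complexity-scaling test: generate Tower of Hanoi instances
# of growing size, collect a move list, and verify it mechanically.
# `solve` is the classic recursive solver standing in for the model;
# in a real test you'd replace it with a call to your LLM client and
# parse the reply into (from_peg, to_peg) moves.

def solve(n: int, src: int = 0, aux: int = 1, dst: int = 2) -> list[tuple[int, int]]:
    """Reference recursive solution; swap in an LLM call to run the experiment."""
    if n == 0:
        return []
    return solve(n - 1, src, dst, aux) + [(src, dst)] + solve(n - 1, aux, src, dst)

def is_valid(n_disks: int, moves: list[tuple[int, int]]) -> bool:
    """Check that `moves` legally transfers all disks from peg 0 to peg 2."""
    pegs = [list(range(n_disks, 0, -1)), [], []]   # peg 0: disks n (bottom) .. 1 (top)
    for src, dst in moves:
        if not pegs[src]:
            return False                            # illegal: source peg is empty
        disk = pegs[src].pop()
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                            # illegal: big disk on small one
        pegs[dst].append(disk)
    return pegs[2] == list(range(n_disks, 0, -1))   # everything ended on peg 2

for n in range(3, 13):                              # complexity knob: 3..12 disks
    moves = solve(n)                                # <- model output goes here
    print(f"{n} disks, {len(moves)} moves:", "ok" if is_valid(n, moves) else "FAILED")
```

The nice part of this setup is that checking a solution is trivial even when producing one isn’t, so you can score the model objectively as the puzzle grows.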

These models aren’t faulty; our expectations are just off. Models can’t generalize ideas or categorize concepts the way humans do, nor can they invent completely new solutions through abstract thinking.

Instead, they remix and replay patterns they’ve already learned.

But that’s okay! LLMs and similar models are still incredibly valuable. They rock at tasks like summarizing content, completing code snippets, and other pattern-driven jobs. The key is just being clear on their limitations.

If you’re building something using AI models, lean on them for tasks where pattern matching is powerful. Don’t rely on them for genuinely new logical reasoning.

AI models are tools: they simulate thinking, but they don’t genuinely think.
