Welcome back to the third edition of the “Thinking Beyond AI” newsletter.

Today, I want to talk about something that’s fundamentally misunderstood, yet is shaping how we all think about artificial intelligence. It’s not the subtle mistake you might believe it to be, but one with huge consequences for where we put our money, how we innovate, and even what we imagine for our future.

I’m talking about the dangerous (and frankly, quite stupid) habit of treating “AI” and “Large Language Models” (LLMs) as if they’re the same thing.

When people mention “AI” these days, particularly in our noisy but uneducated media, they almost exclusively mean LLMs like ChatGPT, Claude, Gemini & Co.

And don’t get me wrong – they can do amazing things: write like humans, code like programmers, create stories or poems (like the oh-so-personal s**t flooding your inbox ad nauseam when it’s your birthday).

Yes – they’re powerful, impressive, and they’re definitely changing things.

But they are not, by themselves, the whole story of Artificial Intelligence.

And believing they are the only path to Artificial General Intelligence (AGI) is a serious wrong turn.

The Illusion of Intelligence: What LLMs Really Are (and Aren’t)

Let’s be clear: LLMs are incredible statistical machines. I vividly remember the excited grunts I made when I first tried a (what would now be considered pretty crude) version of ChatGPT.

In essence, they learn by sifting through massive amounts of text and code, figuring out how to predict the next word in a sentence based on patterns they’ve seen. Think of them as super-advanced autocomplete.

Or the most expensive knowledge regurgitators ever created.
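To make the autocomplete comparison concrete, here’s a deliberately tiny Python sketch of the core idea: count which word follows which in a corpus, then always predict the most frequent successor. (The ten-token “corpus” is a made-up toy, and real LLMs learn these statistics with neural networks over trillions of tokens – but the task, predicting the next token, is the same.)

    from collections import Counter, defaultdict

    # A toy "corpus". (Made up for illustration; real models train on
    # trillions of tokens.)
    corpus = "righty tighty lefty loosey . righty tighty lefty loosey .".split()

    # Count which word follows which.
    successors = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        successors[current_word][next_word] += 1

    def predict_next(word):
        # Return the statistically most frequent successor.
        # No understanding, no intent - just counting.
        return successors[word].most_common(1)[0][0]

    print(predict_next("lefty"))  # -> loosey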

There’s nothing inherently “smart” about that.

They’re brilliant at recognizing patterns, filling in gaps, and producing text that sounds coherent, grammatically correct, and often makes sense in context.

They can mimic human conversation so well that it’s easy to confuse their smooth talk for real understanding, their consistent output for consciousness, or their vast knowledge for genuine intelligence.

But LLMs don’t think or reason like humans do. In fact, they don’t “think” at all, whatever the current versions try to make you believe.

They do not have common sense, they don’t understand cause and effect, and they don’t have a real grasp of how the world works.

They don’t “know” anything in the way a person knows something.

Let me show you what I mean.

Complete this sentence:

“Righty, tighty – lefty, …..” ??

If you said “loosey”, congratulations: you just did for free what Microsoft, Google, and OpenAI are currently investing 500 billion dollars per year into (but no, you still can’t apply for one of those 100-million-a-year jobs at Meta).

LLMs statistically complete text. That’s it.

They don’t have intentions of their own, they don’t have emotions, and they don’t invent anything new.

When an LLM tells you a fact, it’s not because it has checked that fact against a true understanding of the world. It’s because that fact, or something very similar, showed up a lot in the data it was trained on.

When it writes a creative story (or one of your sh*tty birthday poems), it’s not because it has a spark of imagination, but because it’s cleverly putting together pieces and story structures it learned from millions of other stories (and lots of bad poetry it read on the internet. Poor LLMs.)

This difference is incredibly important.

Confusing LLMs with true AI is like thinking a really fancy calculator is the same as a brilliant mathematician.

Yes, the first can do calculations incredibly fast, most likely faster than the mathematician; however, only the second understands the deep rules and can dream up entirely new problems.

That’s not the same thing.

What is AI, then? And Why Should I Care?

So, if LLMs are just one part of the picture, what is “AI” really?

I have no idea.

No, really. I don’t know.

But nobody really does.

There are about as many definitions as there are opinions about it.

And it gets worse when we talk about things like “AGI” (Artificial General Intelligence), where definitions range from “something that will generate $100 billion in profit for us in an automated way” (which is roughly OpenAI’s definition) to more sophisticated and philosophical discussions.

Let’s just say, Artificial Intelligence, at its heart, is the grandiose ambition to create “intelligent” machines that can perform tasks that typically require human intelligence.

And that isn’t just about processing language, like LLMs do. It encompasses a vast and diverse field of computer science dedicated to building systems, software, and tools capable of learning, reasoning, problem-solving, perception, and even physical movement to interact with the real world.

“AI” is about creating systems that can think, innovate, adapt, and act intelligently in the world, drawing from approaches as varied as the logic-based systems of “Good Old-Fashioned AI” (GOFAI) and the neural networks that power today’s deep learning.

Think of AI as the entire universe of intelligent machines, with LLMs being just one bright, albeit powerful, star within it.

It includes everything from the chess-playing computers of decades past to the oh-my-god-that-totally-freaks-me-out robotics from Boston Dynamics (cool tricks that make us think Judgement Day is near), and from medical diagnostic systems to advanced image recognition software.

In short, AI is the pursuit of building truly intelligent agents, not just impressive mimics.

So why should you care?

Because confusing the part for the whole is like thinking the engine is the entire car.

An engine is a critical component, yes, but without the wheels, the steering, the brakes, and the chassis, it’s just a powerful piece of machinery with very limited value and application.

In fact, we don’t even know if LLMs are the “engine”. The real solution to true “intelligence” (however you want to define it) may come from a totally different technology we haven’t even invented yet.

You wouldn’t consider someone who can only repeat sentences from a textbook “smart”. There’s more to intelligence, as we perceive it, than that.

What’s clear is that LLMs are not made to do anything other than essentially repeat what they have read elsewhere. They’re not built to think, to be truly creative and innovative, or to have any intent of their own.

And these are all crucial parts of intelligence.

So by focusing solely on LLMs, we are concentrating a lot of risk in one place.

This isn’t just a semantic argument; it impacts where investments flow, what skills we prioritize when replacing people in our workspaces, and ultimately, the kind of future we collectively build.

The AI Winter is Coming (Again)

This current, almost singular, focus on LLMs feels eerily similar to the booms that preceded previous “AI winters.”

Historically, periods of intense excitement and massive investment in AI were often followed by crashes when the technology failed to live up to its inflated promises. Consider the expert systems craze of the 1980s, where grand visions of AI replacing human experts in every field led to disillusionment when the systems proved too brittle and limited.

Or the early neural network hype that fizzled out due to computational limitations and a lack of data.

The current obsession with LLMs, and the widespread belief that they are on a direct, inevitable path to Artificial General Intelligence (AGI) simply by getting bigger, is setting us up for a similar, perhaps even more severe, fall.

When the inherent limitations of LLMs become more widely understood – their tendency to hallucinate, their lack of true reasoning, their still shocking inability to grasp even the simplest causalities – and the promised AGI doesn’t materialize from simply scaling up these models, we could see a significant backlash of disillusionment and a sharp withdrawal of funding.

At some point, people will realize that just throwing more compute at a text regurgitator doesn’t make it smarter, no matter how many billions you throw at it.

(Another problem is that we’ve pretty much exhausted everything human-made that AI can be trained on, from the entirety of the internet to all bits of content ever created. So “synthetic data” is now the next big thing, but that comes with its own set of limitations – more about that in another article.)

At some point investors will ask: “What the hell are we actually doing here? Where is our AGI that you promised us for the end of this year?”

GPT-5 was released a few days before this article, and people were… mildly unimpressed, to say the least. Just making a model “bigger” doesn’t make it revolutionarily better, or smarter.

This would not only harm LLM research but could cast a long shadow over the entire field of AI, slowing down real, important progress across all its diverse branches and squandering the immense potential that true AI holds.

The Cracks in the Facade: When AI “Thinking” Falls Apart

Recent research continues to highlight the fundamental differences between impressive-looking but relatively stupid LLM pattern-matching and genuine human reasoning.

Apple caused quite the shitstorm within the AI industry with their paper “The Illusion of Thinking”, shining a more realistic light on the limitations of LLM “reasoning.”

As reported by Mashable and many other media outlets, Apple researchers found that when LLMs were presented with logic puzzles that deviated even slightly from the patterns they’d seen in their vast training data, they often “collapsed.”

They couldn’t reason their way through a novel problem; instead, they could only replicate patterns they already knew.

Heck, they couldn’t even solve the puzzles when given detailed INSTRUCTIONS on how to do it.
Because they’re not “thinking” or “reasoning” – they’re regurgitating.

And what they haven’t seen, they can’t remember, and therefore cannot solve.
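For context, the Tower of Hanoi puzzle was reportedly among those used in this line of research. Its complete solution procedure fits in a few lines and scales to any number of disks – a minimal Python sketch:

    def hanoi(n, source, target, spare, moves):
        # Standard recursive procedure; works for ANY number of disks.
        if n == 0:
            return
        hanoi(n - 1, source, spare, target, moves)  # clear the way
        moves.append((source, target))              # move the biggest disk
        hanoi(n - 1, spare, target, source, moves)  # stack the rest on top

    moves = []
    hanoi(3, "A", "C", "B", moves)
    print(len(moves), moves)  # 7 moves (2**3 - 1), optimal for 3 disks

The striking part of the reported results isn’t that models can’t produce code like this (they can); it’s that their step-by-step “reasoning” traces fell apart as the number of disks grew, even with the procedure handed to them.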

This is a critical finding that underscores the difference between true reasoning – the ability to adapt, infer, and solve problems never before encountered – and (admittedly very impressive and) sophisticated pattern-matching.

It’s a clear sign that LLMs are not “thinking” in the human sense, and that we need to be wary of entrusting them with tasks that demand genuine understanding, novel problem-solving, or common sense.

This isn’t about intelligence; it’s about a highly advanced form of mimicry that breaks down when faced with the truly unfamiliar.

The Unseen Cost of Outsourcing Creativity and the Human Spark

This brings us to the topic of creativity, a uniquely human trait often misunderstood in the AI discourse. While LLMs can generate impressive text, images, and even music, it’s a form of creativity that is fundamentally different from human creativity.

Human creativity is born from lived experience, from emotion, from a deep understanding of the world, and from the messy, unpredictable process of trial and error. It’s about connecting disparate ideas, challenging assumptions, expressing a unique point of view, and often, breaking rules to create something truly new and meaningful. It’s a reflection of our consciousness, our desires, and our unique individual journeys.

Think of a painter staring at a blank canvas, not knowing what will emerge, but driven by an inner impulse. Or a scientist making a leap of intuition that defies existing data.

LLM-generated content, on the other hand, is a sophisticated remix of what already exists. It’s a high-tech collage, a statistical interpolation of patterns, not a true act of original creation.

The danger here is that as we become more reliant on AI for creative output, we risk devaluing our own creative instincts. We might start to favor the polished, predictable output of an algorithm over the raw, authentic, and sometimes flawed, expression of a human being.

This could lead to a homogenization of culture, a world where everything looks and sounds the same, and where the unique spark of human creativity – the very essence of innovation and progress – is dimmed. We risk losing the unexpected, the truly groundbreaking, the art that challenges and moves us because it comes from a place of genuine human experience, not from a statistical probability. Our human capacity for true originality, for the unexpected leap, for the art that truly moves the soul, is something no algorithm can replicate.

The Real Road to AGI: Beyond Just Words

Many top AI researchers argue that LLMs, despite their brilliance, have fundamental limits when it comes to reaching AGI. They are pattern finders, not world builders. They work based on correlations, not causes. They can’t form abstract ideas, do real scientific research, or understand the physical world by interacting with it.

Reaching AGI will likely need breakthroughs in areas far beyond what current LLMs can do. This might involve symbolic reasoning: the ability to work with abstract ideas and rules, much like how humans solve logic puzzles or do math. It could also require causal understanding, knowing why things happen, not just that they happen together.

Learning by doing, gaining common sense and understanding basic physics by interacting with the real world, will also be crucial. Furthermore, continuous learning, the ability to keep learning new things without forgetting old ones, is a big challenge for today’s AI. And while still a mystery, many believe true AGI would need some form of personal experience and awareness.
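To give a feel for what symbolic reasoning means in practice, here’s a minimal forward-chaining sketch: explicit facts, explicit if-then rules, and conclusions derived by applying the rules until nothing new follows. (The facts and rules are made-up toy examples; real symbolic systems, from GOFAI expert systems onward, are far richer.)

    # Explicit facts and if-then rules; conclusions are derived, not
    # statistically recalled. (Toy facts/rules, purely illustrative.)
    facts = {("socrates", "is_human"), ("rock", "is_object")}
    rules = [
        ("is_human", "is_mortal"),   # if X is_human, then X is_mortal
        ("is_mortal", "is_finite"),  # if X is_mortal, then X is_finite
    ]

    # Forward chaining: apply rules until no new fact can be derived.
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for subject, predicate in list(facts):
                if predicate == premise and (subject, conclusion) not in facts:
                    facts.add((subject, conclusion))
                    changed = True

    print(sorted(facts))  # ("socrates", "is_mortal") and ("socrates", "is_finite") were inferred

The contrast with statistical completion: every derived fact here has an inspectable chain of rule applications behind it, and it is guaranteed correct given its premises.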

“The current obsession with LLMs as the only way to AGI is like thinking that building taller and taller ladders will eventually get us to the moon,” Dr. Rob Konrad points out. “Ladders are great for reaching high places, but space travel needs a completely different set of ideas and technologies. We need to open up our scientific imagination, not limit it.”

Thinking Beyond the Hype: What This Means for You

Understanding this difference isn’t just for academics; it has real-world implications for everyone navigating the AI landscape. It means investing smartly instead of getting swept away by the hype: put your money into AI solutions that genuinely solve your problems, knowing what they can and can’t do.

Don’t expect an LLM to do a job it wasn’t built for.

It also means you should develop human skills – skills that work well with all kinds of AI, not just LLMs. Critical thinking, creativity, ethical judgment, and complex problem-solving are unlikely to be fully replaced anytime soon, even if the money-hungry AI behemoths make it sound otherwise.

Encourage true innovation by looking beyond just generating text. Explore how other AI approaches – like robotics, computer vision, reinforcement learning, and symbolic AI – can create value and solve tough challenges.

When you deploy AI, do so ethically. Be very aware of the biases and limitations that come with LLMs. Make sure there’s strong human oversight and clear ethical rules, especially when using AI in sensitive areas.

Finally, stay humble and curious. Remember that our understanding of intelligence, both artificial and natural, is still growing. Be open to new ideas and don’t be afraid to question what everyone else assumes.

The current fascination with LLMs is understandable. They are powerful tools that have opened up new possibilities. But let’s not confuse one impressive tool for the entire toolbox, or the map for the actual journey. The real promise of AI lies in its vast, unexplored potential, far beyond what language models alone can do. By truly understanding what LLMs are, and what they are not, we can make sure our vision for the future of AI is clear, realistic, and genuinely intelligent.

From Human to Human,

Thank you for taking the time to read this post. Stay tuned for more updates!