Do large language models qualify as artificial intelligence? No.

Intelligence is multifaceted: it understands, reasons, learns, and innovates. An LLM, on the other hand, mimics the facade of comprehension without the substance. Saying an LLM understands text is like saying a calculator understands mathematics. Both tools process inputs through predefined operations, but neither ‘grasps’ the concepts involved.

Imagine a library as vast as the horizon, filled with every book ever written. An LLM is like a librarian who’s never read a single book but can find you quotes on any topic by following the Dewey Decimal System. This librarian is efficient but has no understanding of literature, history, or science beyond the labels on the book spines.

Understanding is an active, conscious process; the responses of an LLM are the product of passive statistical modeling. They are echoes of human thought, not the thought itself. An LLM can produce an essay on quantum mechanics, but it cannot comprehend the subject. It cannot engage with the content beyond what its algorithms predict to be the most likely next word or sentence, given its training data.
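To make “predicting the most likely next word” concrete, here is a deliberately tiny sketch in Python: a bigram model that greedily extends a prompt with whichever word most often followed the previous one in its training text. Real LLMs use learned neural representations over subword tokens rather than raw counts, but the generate-by-continuation loop is the same in spirit.

```python
from collections import Counter, defaultdict

# Toy "training data": a few short sentences run together.
corpus = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog ate the bone"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Greedily extend `start` with the most frequent next word."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # -> "the cat sat on the cat sat on the"
```

The output is fluent-looking, even grammatical, yet nothing in the loop resembles understanding: it is statistics all the way down.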

The bedrock of intelligence is the ability to learn from a few examples and apply that learning broadly. Human children do this remarkably well: they can pick up a concept from a handful of instances and then recognize it in a variety of contexts. LLMs, in contrast, must be fed enormous datasets to ‘learn,’ and even then they are only replaying patterns contained within that data. They don’t learn in the active sense; they don’t have the ‘aha’ moments that lead to understanding beyond their training. The toy model below makes the point bluntly.
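Here is the same kind of toy bigram model, rebuilt so this snippet stands alone, asked about a word outside its data. It has nothing to offer, because there is nothing to look up: no generalization, only retrieval of previously seen transitions.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

print(follows["cat"].most_common(1))  # [('sat', 1)], a pattern it has seen
print(follows["zebra"])               # Counter(), unseen word, no answer at all
```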

Reasoning is another pillar of true intelligence: the ability to connect disparate pieces of knowledge in a meaningful way. When we reason, we do more than follow a script; we create narratives and solutions that never existed before. LLMs are not capable of this kind of innovation. They generate responses based on what they have seen before, not on a reasoned process that derives new insights from old information.

These systems don’t possess the spark of curiosity that ignites the human quest for knowledge. They don’t wonder, they don’t hypothesize, they don’t contemplate. They are to genuine curiosity what a wax fruit is to fresh produce: convincing at a glance, but upon closer inspection, clearly inanimate.

So, while LLMs can produce work that feels human-created, their operational principles are closer to complex reflexes than to conscious thought. They are reflections of intelligence, not its embodiment.

This is not to say LLMs aren’t precursors to AI, proto-AI if you will, but let’s not mistake the smoke for the fire. LLMs are sophisticated mirrors reflecting the brilliance of the data they’ve been fed; the light of understanding and the warmth of consciousness originate in the human mind.

Except, perhaps, Q*. That might be something scary.