
Does ChatGPT Understand Human Language?



Lately, I’ve been reading AI Snake Oil by Arvind Narayanan and Sayash Kapoor, and it’s been quite an insightful ride. The book dives deep into both the promises and pitfalls of AI, and it got me reflecting—again—on a question I’ve explored before: Does ChatGPT really understand human language? And if it does, how?

I already touched on this from a technical perspective in my book ChatGPT for Teachers, but Narayanan and Kapoor’s take adds another layer to the discussion—one that’s both eye-opening and thought-provoking.

The “Bullshitter” Analogy

One of the most striking lines in the book is this:

“Philosopher Harry Frankfurt defined bullshit as speech that is intended to persuade without regard for the truth. In this sense, chatbots are bullshitters. They are trained to produce plausible text, not true statements. ChatGPT is shockingly good at sounding convincing on any conceivable topic.” (p. 139)

That comparison stopped me in my tracks. I mean, I’ve written before about how AI doesn’t think like we do, and how its understanding of language is purely statistical—but framing it this way makes it crystal clear.

ChatGPT doesn’t process meaning the way humans do. Instead, it treats language as a web of interconnected tokens, predicting each one statistically from everything that came before it. When you ask it a question, it doesn’t “think” about the answer; it calculates the most probable next word, drawing on patterns learned from trillions of words of training text.
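To make that concrete, here is a minimal, purely illustrative Python sketch of next-token prediction. The probability table and the example context are invented for illustration; in a real model those probabilities come out of billions of learned parameters and are computed over the entire preceding context, not looked up in a hand-written dictionary.

    import random

    # Toy stand-in for a language model: hypothetical probabilities such a model
    # might assign to the next token after the context "The cat sat on the".
    # In a real model these numbers are computed on the fly, not stored in a table.
    next_token_probs = {
        "mat": 0.62,
        "sofa": 0.21,
        "roof": 0.12,
        "moon": 0.05,
    }

    def pick_next_token(probs, greedy=True):
        # Greedy decoding takes the single most probable token;
        # sampling draws a token in proportion to its probability.
        if greedy:
            return max(probs, key=probs.get)
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    context = "The cat sat on the"
    print(context, pick_next_token(next_token_probs))                # always "mat"
    print(context, pick_next_token(next_token_probs, greedy=False))  # usually "mat", sometimes not

Everything a chatbot writes is, at bottom, this loop repeated: predict a distribution over possible next tokens, pick one, append it to the context, and predict again.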

To put this in perspective: according to Narayanan and Kapoor, just generating one word requires about a trillion arithmetic operations. A poem with a few hundred words? That’s a quadrillion calculations. Let that sink in for a moment. It’s mind-blowing—and it wouldn’t even be possible without the insane power of modern GPUs.
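The back-of-envelope arithmetic behind that claim is easy to check. This tiny sketch just multiplies the authors’ per-word estimate by a hypothetical poem length; both figures are rough orders of magnitude, not measurements:

    # Rough order-of-magnitude check of the authors' estimate.
    ops_per_word = 1e12      # ~1 trillion arithmetic operations per generated word
    words_in_poem = 500      # "a few hundred words" (hypothetical length)

    total_ops = ops_per_word * words_in_poem
    print(f"{total_ops:.0e} operations")  # 5e+14, i.e. on the order of a quadrillion (10^15)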

But What About “Understanding”?

So, does ChatGPT understand language? Not in the way we do. Human language is more than just grammar and syntax—it’s layered with meaning, intent, culture, and context. That’s why a sentence like Noam Chomsky’s famous “Colorless green ideas sleep furiously” is grammatically correct but semantically nonsensical. An AI might recognize it as a proper sentence, but does it grasp why it doesn’t make sense in human conversation?

Surprisingly, the answer seems to be yes—to some extent. As Narayanan and Kapoor put it:

“Understanding is not all or nothing. Chatbots may not understand a topic as deeply or in the same way as a person—especially an expert—might, but they might still understand it to some useful degree.” (p. 137)

In other words, ChatGPT isn’t clueless. It can recognize patterns, structure language coherently, and produce text that feels meaningful. Otherwise, its responses would just be random gibberish. And let’s be honest—GPT-4’s output is often eerily human-like.

The Hidden Depths of Neural Networks

One of the more fascinating points in AI Snake Oil is that ChatGPT wasn’t explicitly trained on grammar or syntax. And yet, through its deep neural networks, it has somehow learned to understand language structure and even nuances that its developers didn’t directly teach it.

The authors explain this phenomenon as follows:

“Chatbots ‘understand’ in the sense that they build internal representations of the world through their training process. Again, those representations might differ from ours, might be inaccurate, and might be impoverished because they don’t interact with the world in the way that we do. Nonetheless, these representations are useful, and they allow chatbots to gain capabilities that would be simply impossible if they were merely giant statistical tables of patterns observed in the data.” (p. 138)

This is where things get both fascinating and a little unsettling. ChatGPT’s ability to grasp language isn’t fully understood—not even by the engineers who built it. AI luminaries like Geoffrey Hinton have openly admitted that we don’t yet have a clear picture of how deep neural networks develop their internal representations.

The Real Question

So, does ChatGPT understand human language? There’s no simple yes-or-no answer. It’s a question of how.

If by “understand” we mean the ability to produce grammatically coherent, contextually relevant text, then yes, ChatGPT passes the test—at least at an average level. But if we mean the kind of deep, human-like comprehension that involves experience, emotions, and cultural awareness, then no—AI still falls short.

And here’s the bigger, more intriguing issue: even as AI gets more advanced, our understanding of how it really works lags behind. As Narayanan and Kapoor point out, far more research goes into building AI than into reverse-engineering its inner workings. That gap is only growing, and it raises some big questions about trust, transparency, and the future of AI in human communication.

What do you think? Does ChatGPT “understand” in a way that matters? Or is it just a really sophisticated guesser that mimics meaning without actually grasping it?

References

  • Narayanan, A., & Kapoor, S. (2024). AI snake oil: What artificial intelligence can do, what it can’t, and how to tell the difference (Kindle ed.). Princeton University Press.
  • 60 Minutes. (2023, March 25). “Godfather of AI” Geoffrey Hinton: The 60 Minutes Interview [Video]. YouTube. https://www.youtube.com/watch?v=qrvK_KuIeJk&ab_channel=60Minutes
  • Chomsky, N. (1957). Syntactic structures. Mouton.


