How "meaningful" is artificial intelligence?
From a language perspective, AI is actually quite "meaningless"
Here’s a conversation I would like to imagine: Noam Chomsky and me, reflecting – really, railing and ranting – on the world’s artificial intelligence hangups.
To hear Chomsky hold forth on what machines can do (limited) versus what humans do (unlimited) in terms of language and learning would be incredibly illuminating. It would help me separate wheat from chaff in all the hype and hucksterism that defines AI discourse these days.
I started thinking about such a chat when I unearthed a book from grad school. It’s called Syntactic Structures and was written before I was born. And it has everything to do with critical distinctions between language, linguistics and semantics – how we as humans learn to speak through language construction and infuse meaning into our linguistic interactions.
I have seven artificial intelligence tools on my mobile device. I speak to them, and they speak to me. They help me in cool ways. And they screw me in cool ways.
They’re somehow perfectly imperfect. And it’s understanding those imperfections through what language is and what it isn’t that at least to me is fundamental to defining my own relationships, professionally and personally, to AI.
That includes the risks of becoming too reliant on turning to Perplexity.ai at the drop of a hat.
So, back to Chomsky.
His is a powerful influence in the AI space, thanks to his sometimes complementary but more often contrarian views on AI as it is unleashed on generations of users who weren’t even born when he started thinking about this stuff back in the 1950s.
Yes, Gen Zers, AI was around when your grandparents were your age.
They’re using the same tools I am. But because they grew up in the 21st century, they’re burdened by a knowledge deficit when it comes to things like language and linguistics. Yes, that old rant that “they don’t teach grammar in schools these days.”
Now, I think a lot about how my various tools “converse” with me. And how I in turn “converse” with them.
I keep language semantics – the “meaning” that language generates – front and centre. AI tools, it turns out, are great “semantics fakers.”
It’s not a term Chomsky uses, but I think he might like it. It gets to the point that AI cannot generate “meaning” via language. It might seem like that superficially, but it’s not “human-made meaning.”
If you get what I mean.
Chomsky was the first to really think hard about a concept called “transformational-generative grammar.” Now, you have to think about grammar not the way your Grade Six teacher did, but the way a linguist does: it’s all the “rules” you learned, plus much, much more – language structures are governed by a complex range of rules most people can live their whole lives without ever thinking about.
Chomsky devised a set of formal rules for generating the grammatical sentences of a language – not just all of them, but only them. That’s an important distinction. And it’s about syntax, the rules and regs of how sentences are structured.
He’s credited with starting the modern "cognitive revolution" in linguistics, establishing syntax as a rule-governed generative system rather than a purely behavioral or statistical one. This is a critical point, for as Chomsky points out, you can have a syntactically correct sentence that is completely bereft of meaning.
His famous example, “Colorless green ideas sleep furiously,” makes the point.
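The idea is concrete enough to sketch in a few lines of Python. What follows is my own toy illustration, not Chomsky’s formalism: a tiny context-free grammar whose rules guarantee syntactic well-formedness while the vocabulary guarantees that most outputs are semantically empty. The recursive NP rule also shows how a finite rule set can generate an unbounded number of sentences.

```python
import random

# A toy context-free grammar, in the spirit of Chomsky's example.
# Every sentence it produces is syntactically well-formed English,
# but the word choices make most of them semantically empty.
# The recursive rule NP -> Adj NP means this finite grammar can
# generate infinitely many distinct sentences.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Adj", "NP"], ["N"]],          # zero or more adjectives, then a noun
    "VP":  [["V", "Adv"]],
    "Adj": [["colorless"], ["green"], ["furious"], ["invisible"]],
    "N":   [["ideas"], ["dreams"], ["theories"]],
    "V":   [["sleep"], ["argue"], ["dissolve"]],
    "Adv": [["furiously"], ["quietly"], ["sideways"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol into a list of words using the grammar."""
    if symbol not in GRAMMAR:               # terminal: an actual word
        return [symbol]
    words = []
    for sym in random.choice(GRAMMAR[symbol]):
        words.extend(generate(sym))
    return words

print(" ".join(generate()))  # prints a random well-formed, likely meaningless sentence
```

Run it a few times and you get sentences like Chomsky’s: grammatically flawless, meaning-free. The rules know syntax; nothing in them knows sense.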
I’m doing a disservice to Chomsky by trying to simplify some very complex and layered language realities. That’s why I would like to chat with him.
I want a deeper, more nuanced understanding of what I’m doing when I hit that application icon on my phone. I want him to help me understand: are these tools friend or foe?
Here’s the Chomsky paradox.
His syntactic thinking had massive influence on the early days of AI thinking. Many early AI efforts in language, and even neural network designs, were influenced by aspirations to computationally capture both the “surface” and “deep” structure of language as envisioned by Chomsky’s generative grammar framework.
Chomsky's idea that a finite rule set can generate an infinite set of sentences inspired early symbolic and rule-based natural language processing. The quest to "work out the deep rules that generate language," central to Chomsky’s work, set the intellectual groundwork for later attempts at computational language modeling.
You might think that for all that influence, Chomsky might be a big fan of contemporary Large Language Models (LLMs) – the technology underpinning the increasingly diverse range of tools now so ubiquitous in our daily lives.
Turns out that’s not the case. And as I have investigated further, it’s apparent he has some very real concerns, which are well worth contemplating.
They confirm for me a growing unease I’ve had with various tools via the tests I’ve put to them on a broad range of topics and contexts.
Check these out to see if they resonate with you. As a journalist, writer and researcher, they certainly do with me.
Imitation vs. Understanding: Chomsky argues that LLMs do not truly “know” language – they imitate language patterns statistically without understanding grammar or meaning. He contends that their “knowledge of the deep rules of language ... is a statistical mess, not a meaningful analysis.”
Lack of Distinction: Chomsky claims that LLMs cannot tell possible languages from impossible ones – they are “incapable of distinguishing the possible from the impossible.” This means they can generate both grammatical and ungrammatical sentences indiscriminately, unlike human language learners.
No Advancement in Linguistics: Chomsky insists that, despite their fluency, LLMs have not advanced scientific understanding of language structure or acquisition. He maintains that, since LLMs work by exposure and data-driven learning, they miss the underlying universal, innate principles he theorizes as crucial to language.
While LLMs do not explicitly implement Chomsky's formal rules or theoretical grammar, the challenges he outlines (such as distinguishing sense from nonsense and understanding underlying generative mechanisms) still shape debates in AI language research. Many see current LLMs as engineering achievements (they are) rooted in statistical prediction, separate from the kind of rule-based cognitive modeling Chomsky envisioned, and sometimes even as arguments against his more rigid theories of innate grammar.
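The “statistical prediction” side of that contrast can be caricatured in a few lines. Here is a toy bigram model of my own devising – a deliberate cartoon, since real LLMs are vastly more sophisticated – that predicts the next word purely from co-occurrence counts, with no notion of grammar anywhere in it:

```python
from collections import Counter, defaultdict

# A toy bigram "language model": it predicts the next word purely from
# how often word pairs co-occur in its tiny training text. There is no
# grammar, no syntax tree, no rule - only counting. This is a cartoon
# of the statistical prediction Chomsky contrasts with rule-governed
# generation.
corpus = (
    "green ideas sleep furiously . "
    "old ideas die quietly . "
    "green ideas die furiously ."
).split()

# Count, for each word, which words follow it and how often.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(predict("ideas"))  # -> "die" ("die" follows "ideas" twice, "sleep" once)
```

The model is fluent within its minuscule world, yet it has learned nothing about why “ideas sleep” is grammatical and “sleep ideas” is not – it only knows what it has counted. Scaled up by many orders of magnitude, that is roughly the shape of Chomsky’s complaint.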
Chomsky’s view of AI as “powerful mimicry” rather than true linguistic or cognitive understanding is worth keeping in mind as these tools burrow their way deeper into our personal and professional lives.
Over the next week, I will turn to Perplexity.ai, DeepSeek, Gemini, Claude, Copilot et al. for a variety of reasons. Each use, however, will only sharpen my urge to talk to Chomsky.
There’s nothing artificial about his intelligence.

