Meta's Yann LeCun (right) and Google DeepMind's Demis Hassabis are at the forefront of the AI revolution. But are they asking the most important questions? (Photo: AFP via Getty Images)
“Well, we're fooled by that fluency, right? We assume that if a system can handle language fluently, it has all the characteristics of human intelligence. But that impression is wrong. We're really fooled by it.”
Those words come from Yann LeCun, Meta's chief AI scientist, who recently argued, in a nearly three-hour interview with Lex Fridman, that large language models cannot provide a deep understanding of the world.
A few weeks later, he told the Financial Times that he was working on developing an entirely new generation of AI systems that would enable machines to achieve human-level intelligence within the next decade.
The big question for big tech companies
LeCun isn't the only one making predictions about AI. A few months ago, Ray Kurzweil reminded us that back in 1999 he predicted AI would achieve human-level intelligence by 2029. Now Demis Hassabis, CEO and co-founder of Google DeepMind, predicts that human-level AI may be just a few years away.
But is "when and how will AI reach human-level intelligence?" even a good question to ask, in an era when existing AI already tricks us into believing it has properties it does not have? If today's LLM-driven AI fools us with its fluency, shouldn't we expect the future AI that LeCun and others are working on to fool us even further, giving us an even more exaggerated impression of its capabilities?
“Too meaningless to deserve discussion”
When Fridman asked LeCun what he thought Alan Turing would say if he had the chance to hang out with today's LLM chatbots, LeCun was quick to reply: “Alan Turing would say that the Turing Test is a really terrible test.”
As the 2018 Turing Award winner, LeCun seems like the right person to ask what Turing would think about AI today. But Turing's own words suggest that LeCun and his AI colleagues at Meta, DeepMind, OpenAI, and other major tech companies around the world misunderstand the purpose of the Turing Test.
In his seminal 1950 paper “Computing Machinery and Intelligence”, Turing had no intention of creating a machine that could think like a human being. In fact, he made it clear that he considered the question of whether machines can think “too meaningless to deserve discussion”. Instead, he asked what it would take for a machine to play the so-called imitation game as convincingly as a human being.
How AI became a fool's game
As the name suggests, the Imitation Game is all about pretending to be something or someone you're not. To play this game convincingly, you need to convince (or trick) others into believing that you are the thing or person you're pretending to be.
For a machine, the imitation game is about tricking the other players into believing it is human. So when LeCun says he is fooled by LLM fluency, he is not proving that the Turing Test is a “really terrible test.” Rather, he is rediscovering a fundamental and long-forgotten truth about AI: AI has always been, and will always be, a game of fooling and of not being fooled.
The goal of the Turing Test was never to create a machine with human-level intelligence, but to create a machine that could convince humans they were interacting with something as intelligent as they are, or at least on the way to being so. This is exactly what ChatGPT and other large language models have done. As Fridman put it, LLMs have not only passed the Turing Test with flying colors, they have also made fools of us.
Turing would say the test is now on us
Rather than calling the Turing Test a really terrible test, I think Turing, confronted with something like ChatGPT, would rethink what we mean by AI.
He himself predicted that, by the end of the century, “the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” But he probably would have preferred to see his successors contradicted by their fellow humans rather than fooled by their own inventions.
Tech experts keep predicting when and how AI will reach human-level intelligence, yet they all seem to miss Turing's fundamental premise for building intelligent machines: that it is pointless to compare machines with humans.
And Turing would probably say that the difference between the “child machines” (his words) of the 1950s and the AI we know today is that the test is no longer on the machines but on us: whether we have the necessary capabilities to avoid being fooled.
How to win the imitation game
While political and corporate leaders fret over what will happen to humans as AI becomes faster and better at everything, Turing didn't see AI as a threat to human roles and responsibilities. Rather, he seemed to think that for every task AI solves, humans would have to solve two.
When the Turing Test is administered to humans rather than machines, the question becomes not what is needed for a machine to play the imitation game as convincingly as a human, but what is needed for humans not to be fooled by a machine.
In his original description of the imitation game, Turing introduced three players.
Player A is a man who pretends to be a woman, Player B is a woman who tells the truth and helps Player C discover it, and Player C's job is to question Players A and B to determine who is the man and who is the woman.
Turing only proposed replacing Player A with a machine – the only player who was already pretending to be someone he wasn't. He never even considered replacing Player B or Player C.
This makes the question of what it takes for humans to win the Imitation Game easy to answer: we need to 1) tell each other the truth and help each other discover the truth (like Player B), and 2) question everything we hear and see in order to determine what information is trustworthy and what is not (like Player C).
These fundamental human characteristics are rarely discussed at big tech companies, but they may be in the future.