
Google’s Powerful AI Spotlights a Human Cognitive Glitch: Mistaking Fluent Speech for Fluent Thought


When you read a sentence like this one, your past experience tells you that it was written by a thinking, feeling human. And in this case, a human is in fact typing these words: [Hello!] But today, artificial intelligence systems trained on vast quantities of human text are producing sentences that look strikingly humanlike. People are so accustomed to assuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be hard to accept. How are people likely to navigate this relatively uncharted territory?

Because of a persistent tendency to associate fluent expression with fluent thought, it is natural – but potentially misleading – to assume that if an AI model can express itself fluently, it also thinks and feels much as humans do. It is perhaps not surprising, then, that a former Google engineer recently claimed that LaMDA, Google’s AI system, has a sense of self because it can eloquently generate text about its own feelings. That episode and the ensuing media coverage prompted a number of rightly skeptical articles and posts questioning the claim that computational models of human language are sentient – that is, capable of thinking, feeling and experiencing.

We are not trying to settle the complicated question of what it would mean for an AI model to be sentient (see, for example, our colleague’s take). But as linguists, we can draw on findings from cognitive science and linguistics to explain why it is all too easy for people to fall into the trap of assuming that an entity that can use language fluently is sentient, conscious or intelligent.

Creating humanlike language with AI

It can be difficult to distinguish text generated by models like Google’s LaMDA from text written by humans. This impressive achievement is the result of a decadeslong effort to build models that generate grammatical, meaningful language.

Early versions, known as n-gram models and dating back to at least the 1950s, simply counted up occurrences of specific phrases and used those counts to guess which words were likely to occur in a given context. For instance, it’s easy to see that “peanut butter and jelly” is a more likely phrase than “peanut butter and pineapples”: “peanut butter and jelly” appears many times in English-language text, while “peanut butter and pineapples” may never appear at all.
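
To make that counting idea concrete, here is a toy sketch of a bigram (two-word) model in Python. The tiny corpus, the function name and the example words are illustrative inventions, not anything used by real systems.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for a large collection of English text.
corpus = (
    "peanut butter and jelly . "
    "peanut butter and jelly sandwiches . "
    "peanut butter and honey . "
).split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the word most often observed after `word` in the corpus."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

print(most_likely_next("and"))     # 'jelly' (seen twice, vs. 'honey' once)
print(most_likely_next("butter"))  # 'and'
```

A model like this can only echo phrases it has counted; it has no notion of meaning, only of which words tend to follow which.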

Today’s models – the sets of data and rules that approximate human language – differ from those early attempts in several important ways. First, they are trained on essentially the entire internet. Second, they can learn relationships not only between words that sit next to each other but also between words that are far apart. Third, they are tuned by a huge number of internal “knobs,” so many that even the engineers who design them struggle to understand why the models generate one sequence of words rather than another. The models’ basic task, however, remains the same as in the 1950s: determine which word is likely to come next. Today they are so good at this task that almost all the sentences they generate seem fluid and grammatical.
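
For a sense of what next-word prediction looks like in a modern neural model, here is a minimal sketch using the open-source Hugging Face transformers library with the small, publicly available GPT-2 model as a stand-in; LaMDA itself is not public, and the prompt is purely illustrative.

```python
# Minimal sketch: next-word prediction with a modern neural language model.
# GPT-2 is used here only as a freely available stand-in for systems like LaMDA.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Ask the model to continue the prompt by a few tokens; under the hood it
# repeatedly picks a likely next word, just as described above.
result = generator("I like my sandwiches with peanut butter and",
                   max_new_tokens=5)
print(result[0]["generated_text"])
```

The mechanics differ enormously from the bigram toy above, but the objective is the same: given what has come so far, output a plausible next word.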