Is there intelligence in artificial intelligence?


Jean-Louis Dessalles, Télécom Paris – Institut Mines-Télécom (IMT)

Nearly a decade ago, in 2012, the scientific world was enthralled by the achievements of deep learning. Three years later, this technique enabled the AlphaGo program to beat Go champions. And this frightened some people. Elon Musk, Stephen Hawking and Bill Gates worried about an imminent end to the human race, replaced by out-of-control artificial intelligence.

Wasn’t this a bit of an exaggeration? AI thinks so. In an article it wrote in 2020 for The Guardian, GPT-3, a gigantic neural network with 175 billion parameters, explains:

“I’m here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.”

At the same time, we know that the power of computers continues to increase. Training a network like GPT-3 was simply inconceivable just five years ago. It is impossible to know what its successors may be able to do five, ten or twenty years from now. If current neural networks can replace dermatologists, why would they not eventually replace all of us? Let’s turn the question around.

Are there any human mental abilities that remain strictly out of reach for artificial intelligence?

The first things that come to mind are skills involving our “intuition” or “creativity.” No such luck – AI is coming for us in these areas too. This is evidenced by the fact that works created by programs sell at high prices, at times reaching nearly half a million dollars. When it comes to music, everyone will obviously form their own opinion, but we can already recognize acceptable bluegrass or passable imitations of Rachmaninoff in the output of the MuseNet program, created, like GPT-3, by OpenAI.

Should we soon submit with resignation to the inevitable supremacy of artificial intelligence? Before calling for a revolt, let’s take a look at what we’re up against. Artificial intelligence relies on many techniques, but its recent success is due to one in particular: neural networks, especially deep learning ones. Yet a neural network is nothing more than a matching machine. The deep neural network that was much discussed in 2012 matched images – a horse, a boat, mushrooms – with corresponding words. Hardly a reason to hail it as a genius.

Except that this matching mechanism has the rather miraculous property of being “continuous.” Present the network with a horse it has never seen, and it recognizes it as a horse. Add noise to an image, and the network is not disturbed. Why? Because the continuity of the process ensures that if the input to the network changes slightly, its output will change slightly as well. If you force the network, which always hesitates, to opt for its best response, that response will probably not vary: a horse remains a horse, even if it differs from the examples learned, even if the image is noisy.
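To make this continuity concrete, here is a minimal Python sketch. The toy linear “network” and its random weights are invented for illustration, not a real trained model: the point is only that nudging the input barely moves the output, so the best answer almost never flips.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: one linear layer + softmax.
# Random weights, purely to illustrate continuity of the mapping.
W = rng.normal(size=(3, 10))  # 3 classes (say horse/boat/mushroom), 10 features

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(x):
    return softmax(W @ x)

x = rng.normal(size=10)                    # an input the "network" maps to a class
x_noisy = x + 0.01 * rng.normal(size=10)   # the same input, slightly perturbed

p, p_noisy = classify(x), classify(x_noisy)
print(np.argmax(p) == np.argmax(p_noisy))  # the best answer stays the same
print(float(np.abs(p - p_noisy).max()))    # tiny: the output moved only slightly
```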

Matching is not enough

But why is such matching behavior referred to as “intelligent”? The answer seems clear: it makes it possible to diagnose melanoma, grant bank loans, keep a vehicle on the road, detect disorders in physiological signals and so forth. Through their matching ability, these networks acquire forms of expertise that take humans years of study. And when one of these skills, for example writing a press article, seems to resist for a while, the machine must simply be fed more examples, as was the case with GPT-3, until it starts to produce convincing results.

Is this really what it means to be intelligent? No, this type of performance represents only a small aspect of intelligence, at best. What neural networks do resembles learning by heart. It isn’t quite that, of course, since networks continuously fill in the gaps between the examples they have been shown. Let’s call it almost-by-heart. Human experts, whether doctors, pilots or Go players, often act the same way when they decide instinctively, based on the large number of examples learned during their training. But humans have many other powers too.
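A minimal sketch of this almost-by-heart behavior, assuming the simplest possible matcher – a nearest-neighbour rule over a handful of invented memorized examples – rather than an actual neural network:

```python
import numpy as np

# "Almost-by-heart": answer any query with the memorized example it
# most resembles. Points and labels below are made up for illustration.
examples = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
labels = ["boat", "horse", "mushroom"]

def recall(x):
    distances = np.linalg.norm(examples - x, axis=1)  # distance to each memory
    return labels[int(np.argmin(distances))]

# A point never seen before still gets the label of its nearest memory.
print(recall(np.array([0.9, 0.8])))  # "horse"
```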

Learning to calculate or reason over time  

Neural networks cannot learn to calculate. Matching operations such as 32+73 to their results only goes so far. They can merely reproduce the strategy of the struggling student who tries to guess the result and sometimes happens upon the right answer. If calculating is too difficult, what about a basic IQ test: continue the sequence 1223334444. Matching based on continuity is of no help in seeing that the structure, each number n repeated n times, continues with five 5s. Still too difficult? Matching programs cannot even guess that an animal that is dead on Tuesday will not be alive on Wednesday. Why? What do they lack?
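For readers who want to see the rule spelled out, here is a short Python sketch (the helper name is ours): it generates the sequence from its structure and shows how the continuation follows.

```python
# The hidden rule of 1223334444: each number n appears n times.
def n_times_n(upto):
    return [n for n in range(1, upto + 1) for _ in range(n)]

print(n_times_n(4))  # [1, 2, 2, 3, 3, 3, 4, 4, 4, 4]
print(n_times_n(5))  # ...the same list followed by five 5s
```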

Modeling in cognitive science has shown the existence of several mechanisms, other than matching based on continuity, that are all components of human intelligence. Since their expertise is entirely precalculated, neural networks cannot reason over time to determine that a dead animal remains dead, or to grasp the meaning of the sentence “he still isn’t dead” and the oddity of this other sentence: “he is not still dead.” Nor is digesting large amounts of data in advance enough to allow them to recognize new structures that are very simple for us, such as the groups of identical numbers in the sequence 1223334444. Their almost-by-heart strategy is also blind to unprecedented anomalies.

Detecting anomalies is an interesting example, since we often judge others’ intelligence based precisely on this. A neural network will not “see” that a face is missing a nose. Based on continuity, it will continue to recognize the person, or may confuse him or her with someone else. But it has no way of realizing that the absence of a nose in the middle of a face represents an anomaly.

There are many other cognitive mechanisms that are inaccessible to neural networks. Research is being conducted on automating these mechanisms. Such work implements operations carried out at processing time, whereas neural networks simply apply associations learned in advance.

With a decade of perspective on deep learning, the informed public is starting to see neural networks more as “super-automation” than as genuine intelligence. For example, the media recently reported on the astonishing performances of the DALL-E program, which produces creative images from a verbal description – for example, the images that DALL-E imagined for the phrase “avocado-shaped chair,” shown on the OpenAI site. We now hear much more tempered assessments than the alarmist reactions that followed the release of AlphaGo: “It is quite impressive, but we must not forget that it is an artificial neural network, trained to perform a task; there is no creativity or form of intelligence.” (Fabienne Chauvière, France Inter, 31 January 2021)

No form of intelligence? Let’s not be too demanding, but at the same time, let’s remain clear-sighted about the huge gap that separates neural networks from what would be a true artificial intelligence.

Jean‑Louis Dessalles wrote “Des intelligences très artificielles” (Very Artificial Intelligences), published by Odile Jacob (2019).

Jean-Louis Dessalles, Associate professor at Télécom Paris – Institut Mines-Télécom (IMT)

This article has been republished from The Conversation under a Creative Commons license. Read the original article in French.
