
Brains, language, and why AI might still be science fiction

Google’s AlphaGo triumph seemed to herald an artificial intelligence breakthrough. This, we thought, was the point at which machines could begin to use intuition and think the way we do.

Since then, however, other news stories have shown that we still have a long way to go before creating machines that can think, talk and act like we do. Microsoft’s AI Twitter account, Tay, was a perfect example of this: it dutifully learned all of the horrible things that the internet threw at it, but had no way to judge which of them were appropriate to use in its own tweets.

Both AlphaGo and Tay distinctly lack humanity. For some, this is a cause for fear. What if the machines become too intelligent and wipe us all out? The less dramatic conclusion, however, is that we are still a long way off from the AI we see in films and books.

AlphaGo may have beaten Lee Sedol, but it didn’t ‘win’ the game… not in the way that a human would have, anyway. There was no sense of achievement; it didn’t even know it was playing a game. It performed the task it had been programmed to do, just as millions of other machines around the world already do. That’s not to say what it did wasn’t impressive, but I do think it should give us pause for thought.

Tay, on the other hand, lacked any sort of empathy, or ‘Theory of Mind’ (ToM). ToM is not a scientific theory about minds; rather, it refers to the way humans are able to assume that other humans have minds. This is what allows us to understand people’s intentions from the things they say, and to recognise their emotions from physical cues (such as avoiding eye contact when embarrassed). Had Tay had any sort of ToM, it would have been able to realise that people were simply trying to abuse it.

And, more recently, a study from the University of California, Berkeley, has indirectly thrown another spanner in the works for advanced AI. The study aimed to map the way our brains respond to words that are associated with each other. This has important implications for how we learn language, and for the associations we make subconsciously when we hear different words.

Prior to the study, the assumption had been that the human brain has a designated area, called the semantic system, that deals with word association. Had this been shown to be the case, it would have given AI research a green light to study that region of the brain and build AI that mimics the way humans learn language, perhaps ultimately even creating AI that could learn, think and communicate like us.

However, the results of the study showed that this semantic mapping is carried out across many different regions of the brain, and that there are significant individual differences. As my Cognitive Poetics professor, Peter Stockwell, explained to my class earlier today, this suggests that the way individuals learn language is not uniform, but shaped by personal experiences that vary from person to person and colour the associations we make with different words.

For AI, this means that machine learning has a long way to go. Unless we have some way to embody AI, and give it a way to feel and experience the things that we do when we grow up, it won’t be able to learn in the way that we do. When you add this to the lack of human experience, empathy and ToM that previous AI experiments have shown, it seems like we’re still a long way from science fiction becoming science fact.

That’s not to say that AI will never reach this point, nor that the field isn’t exciting. I, for one, will continue to follow developments with great interest, and it may yet surprise us.
