Machines may beat us in debate, but will they ever have the human touch?

Noa Ovadia, left, and Dan Zafrir, right, prepare for their debate against the IBM Project Debater. Photograph: Eric Risberg/AP

So a machine can now not only demolish you at chess or devastate you in Jeopardy, it can also outwit you on Question Time. Last week, in a public debate in San Francisco, IBM pitted its Project Debater software program against human opponents, including Noa Ovadia, Israel’s national debating champion in 2016.

Each participant had four minutes in which to make an opening statement, followed by a four-minute rebuttal and a two-minute conclusion. Drawing on a library of hundreds of millions of newspaper articles and academic papers, and some pre-installed arguments, the machine held its own. It even cracked jokes. “I can’t say it makes my blood boil, because I have no blood,” it quipped, “but it seems some people naturally suspect technology because it’s new.”

Project Debater is a remarkable achievement. In chess or Jeopardy, there are defined rules and a fixed end point, making it easier to program a machine. What matters in a debate is the ability to respond both to the subject and to one’s opponent in a manner not fixed by rules but shaped by an understanding of the subject and of the audience. It requires the ability to listen to one’s opponent, understand the psychology of the audience and weigh the impact of different arguments in specific contexts. It requires nous.

And, brilliant though the programming is, it is in these areas that Debater is weakest. It takes sentences from its library of documents and prebuilt arguments and strings them together. This can lead to the kinds of errors no human would make. At one point, Debater arbitrarily added the word “voiceover” to the end of a sentence, probably because it was borrowing from a video transcript.

Such wrinkles will no doubt be ironed out, yet they point also to a fundamental problem. As Kristian Hammond, professor of electrical engineering and computer science at Northwestern University, put it: “There’s never a stage at which the system knows what it’s talking about.”

A cynic might suggest that this makes a machine more human. IBM’s robot might sit very well alongside the human robots on Question Time. What Hammond is referring to, however, is the question of meaning, and meaning is central to what distinguishes the least intelligent of humans from the most intelligent of machines.

A computer manipulates symbols. Its program specifies a set of rules, or algorithms, to transform one string of symbols into another. But it does not specify what those symbols mean. Indeed, to a computer, meaning is irrelevant. Humans, in thinking and talking and reading and writing, also manipulate symbols. But for humans, meaning is everything. When we communicate, we communicate meaning. What matters is not just the outside of a string of symbols, but its inside too, not just the syntax but the semantics.
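
To make the point concrete, here is a deliberately trivial sketch in Python (the rules and symbols are invented for illustration): a program that transforms one string of symbols into another by lookup alone, with nothing anywhere that corresponds to meaning.

```python
# A toy rewrite system: rules that transform one string of symbols into
# another. The rules and symbols are made up for illustration; the
# program matches patterns, and nothing in it corresponds to meaning.

RULES = {
    "cat": "animal",
    "animal": "thing",
}

def rewrite(symbols):
    """Apply each rule once by lookup: pure symbol-shuffling."""
    return [RULES.get(s, s) for s in symbols]

print(rewrite(["the", "cat", "sat"]))  # ['the', 'animal', 'sat']
```

The program “works”, and it would work exactly as well if every symbol were replaced by a nonsense token.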

It is certainly possible to create algorithms that help define a word or concept. A “semantic network” relates one concept to others, so a machine comes to understand it as part of a web of interrelations. “Cat” is related to “animal”, to words that define its parts such as “four legs” and “two eyes” and to behaviour such as “miaow” and “drinks milk”. But relating one word to another doesn’t tell you what either means. Such algorithms are still engaged with the outside of symbols, not their insides.
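
A rough sketch of such a network, assuming a simple dictionary representation (the relation labels and entries here are made up for illustration, not drawn from any particular system), shows how little is inside:

```python
# A minimal sketch of a semantic network: concepts linked by labelled
# relations. The labels ("is-a", "has", "does") and entries are
# illustrative only.

network = {
    "cat": {
        "is-a": ["animal"],
        "has": ["four legs", "two eyes"],
        "does": ["miaow", "drinks milk"],
    },
    "animal": {
        "is-a": ["living thing"],
    },
}

def related(concept):
    """Return every link the network holds for a concept. The answer is
    always just more symbols: how words relate, never what they mean."""
    return network.get(concept, {})

print(related("cat"))
```

Ask it about “cat” and it returns only more symbols: links that say how words relate, never what they mean.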

Humans, too, understand meaning through relating one word or concept with others. But for humans, meaning comes not simply through such relations but is linked to our existence as social beings.

I only make sense of myself insofar as I live in, and relate to, a community of other thinking, feeling, talking beings. The translation of the mechanical brain processes that underlie thoughts into what we call meaning requires a social world and an agreed convention to make sense of that experience. Language for humans is not merely a set of symbols to manipulate, as it is for a machine. It is something that transforms our ability to participate in communities.

Meaning emerges through a process of social interaction, not of computation, interaction that shapes the content – inserts the insides, if you like – of the symbols in our heads. The rules that ascribe meaning lie not just inside our heads, but also outside, in society, in social memory, social conventions and social relations.

It is this that distinguishes humans from machines. And that’s why, however astonishing Project Debater may seem, the tradition that began with Socrates and Confucius will not end with IBM.

• Kenan Malik is an Observer columnist