Artificial intelligence (AI) has proven time and time again that it can trounce even the best human players once it has absorbed enough data on a specific topic. The best chess, Go and even StarCraft II players have fallen foul of DeepMind's algorithms in recent years, suggesting that strategy is AI's forte.
However, some games require more than strategy. They demand softer skills, such as the ability to be diplomatic or duplicitous — skills that many assume AI can't easily mimic. Even this idea might be human arrogance, though, because Meta has created a new AI bot, dubbed Cicero, that has broken into the top 10 per cent of players worldwide at the popular online game Diplomacy — without blowing its non-human facade. Meta has recently spilled the beans on how this all played out in a research paper.
This raises the question: does Cicero herald more than strategic prowess? Could this new AI inform real-life diplomacy, even in war? Or at least create smarter customer-service bots that do more than merely guide us towards a website's FAQ page? That'd be a good start.
How did a bot master the diplomatic arts?
Diplomacy, as the name suggests, isn't just about European conquest; it's about negotiating with other players to meet your own goals. To win, it's essential to enter into temporary alliances with other players, co-ordinating movements and attacks.
In other words, Meta had to teach Cicero not just the rules of the game, but the rules of human engagement: how to communicate clearly and charm humans into alliances. To do this, Cicero was trained on 12.9 million messages from more than 40,000 games of Diplomacy, so that it could learn how words relate to on-board actions.
“Cicero can deduce, for example, that later in the game it will need the support of one particular player and then craft a strategy to win that person’s favour—and even recognise the risks and opportunities that that player sees from their particular point of view,” Meta says.
With this training under its virtual belt, Cicero was entered into 40 games of Diplomacy hosted by webDiplomacy.net. Over 72 hours, Cicero achieved "more than double the average score" of players, and just one player voiced suspicion, after a match had ended, that a bot was among their number, despite Cicero sending 5,277 messages to humans. Sometimes, it was even able to explain strategies to its flesh-and-blood allies, as captured in the second example below.
Even though it's often desirable to be duplicitous in Diplomacy, Cicero generally achieved its goals while being honest and helpful in its dealings with other players. That partly reflects the way Cicero was modelled: its dialogue reasoning was based only on the upcoming turn, not on how relationships might shift over the long-term course of the game.
The study’s authors concede that the bot not being outed might partly be due to the nature of the games Cicero entered, where moves were limited to five minutes to keep things pacy. While it “occasionally sent messages that contained grounding errors, contradicted its plans, or were otherwise strategically subpar”, the authors believe they weren’t grounds for suspicion “due to the time pressure imposed by the game, as well as the fact that humans occasionally make similar mistakes”.
Could bots be running the world soon?
So what does this mean for humans, other than that we're likely to start losing to machines at a whole new strand of games in the near future? Well, Meta believes this research could seriously improve chatbots in the real world.
“For instance, today’s AI assistants excel at simple question-answering tasks, like telling you the weather, but what if they could maintain a long-term conversation with the goal of teaching you a new skill?” asks Meta in a blog post accompanying the research.
“Alternatively, imagine a video game in which the non-player characters (NPCs) could plan and converse like people do — understanding your motivations and adapting the conversation accordingly — to help you on your quest of storming the castle.”
So is this the end for human customer service on Facebook itself, or Amazon — and would we even be able to tell the difference when chatting to a next-gen banking bot?
That's the positive spin. The negative, of course, is that if this AI can trick gamers into thinking they're playing with a fellow human, it could potentially be used to manipulate people in other ways. Perhaps wary of such nefarious uses, Meta has open-sourced Cicero's code, but the company hopes that "researchers can continue to build off our work in a responsible manner".
In the same way that AI bots have adopted radical strategies for chess and Go, altering how humans play those games, might Cicero change the nature of diplomacy or war games in the real world? If the secret to Cicero's success was to use manners and positive politics, perhaps this is something humans could learn. Our smartest move might be to deploy the ultimate weapon: common courtesy.