Facebook, or as we’re supposed to call them now Meta, announced earlier today that their CICERO artificial intelligence has achieved “human-level performance” in the board game Diplomacy, which is notable for the fact that it’s a game built on human interaction, not moves and manoeuvres (like, say, chess).
Here’s a quite frankly distressing trailer:
CICERO: The first AI to play Diplomacy at a human level | Meta AI
If you’ve never played Diplomacy, and so are maybe wondering what the big deal is, it’s a board game first released in the 1950s that is played mostly by people just sitting around a table (or breaking off into rooms) and negotiating stuff. There are no dice or cards affecting play; everything is determined by humans communicating with other humans.
So for an AI’s creators to say that it is playing at a “human level” in a game like this is a pretty bold claim! One that Meta backs up by saying that CICERO is actually operating on two different levels: one crunching the progress and status of the game, the other trying to communicate with human players in a way we would understand and interact with.
Meta have roped in “Diplomacy World Champion” Andrew Goff to support their claims. He says, “A lot of human players will soften their approach or they’ll start getting motivated by revenge and CICERO never does that. It just plays the situation as it sees it. So it’s ruthless in executing to its strategy, but it’s not ruthless in a way that annoys or frustrates other players.”
That sounds optimal, though maybe a little too optimal, which reflects the fact that while CICERO is playing well enough to keep up with humans, it’s far from perfect. As Meta themselves say in a blog post, CICERO “sometimes generates inconsistent dialogue that can undermine its objectives”, and my own criticism would be that every example they provide of its communication (like the one below) makes it look like a psychopathic office worker terrified that if they don’t end every sentence with !!! you’ll think they’re a terrible person.
Of course the ultimate goal with this program isn’t to win board games. It’s simply using Diplomacy as a “sandbox” for “advancing human-AI interaction”:
While CICERO is only capable of playing Diplomacy, the technology behind this achievement is relevant to many real world applications. Controlling natural language generation via planning and RL, could, for example, ease communication barriers between humans and AI-powered agents. For instance, today’s AI assistants excel at simple question-answering tasks, like telling you the weather, but what if they could maintain a long-term conversation with the goal of teaching you a new skill? Alternatively, imagine a video game in which the non player characters (NPCs) could plan and converse like people do — understanding your motivations and adapting the conversation accordingly — to help you on your quest of storming the castle.
I may not be a billionaire Facebook executive, but instead of spending all this time and money making AI assistants better, something nobody outside of AI research and company expenditure seems to care about, could we not just…hire humans I can speak to instead?