When it comes to understanding intelligence, the greatest challenge is not a Rubik's Cube, or chess, or even Go. These games are difficult in the sense that there are often many options, but they are still transparent: nothing is hidden; every bit of information is in front of you. The main obstacle is converting this perfect information into a strategy. An optimal strategy exists, and if a computer can find it, it will achieve the best possible result in every game.
When Garry Kasparov lost to IBM's Deep Blue chess computer in 1997, he lamented this approach. "Instead of a computer that thought and played chess like a human," he later said, "they got one that played like a machine."
Kasparov hoped that games such as poker would be different. You cannot win by following a fixed set of rules because some cards are hidden, and your information is imperfect. The same is true of many other situations in life, from negotiations to auctions and trading.
It may be possible to win at games like Go by searching for an optimal set of rules — as Google DeepMind's AlphaGo recently did — but to triumph in imperfect information games, you must adapt your tactics based on the data you have and the opponents you are facing. Play the same hand the same way, or be too predictable or robotic, and you will likely lose.
That is why poker has long appealed to scientists interested in thinking machines, from Alan Turing to John von Neumann. It is also attracting modern researchers who are interested in artificial intelligence.
Poker is often seen as more an art than a science, a game reliant on human ingenuity and initiative, rather than just number crunching. But the world's best robot players are now tearing this notion apart, and in doing so transforming our view of how machines — and humans — form tactics and make decisions.
In 2015, researchers at the University of Alberta unveiled a poker bot, called Cepheus, that had solved the two-player "limit" version of Texas Hold 'Em poker, in which the maximum stake is capped. The solution provided an "equilibrium strategy" for the game: in each given situation, the strategy mixes between the possible options with certain probabilities, and in the long run will not lose money. The group didn't teach Cepheus its tactics; according to researcher Michael Johanson, many of the team wouldn't call themselves poker players. Instead, the bot was left to learn by playing billions of simulated games.
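The flavour of that self-play learning can be sketched with regret matching, the kind of update at the heart of the counterfactual-regret-minimization family of algorithms used for poker. This is a hedged toy illustration, not Cepheus's actual code, and it uses rock-paper-scissors rather than poker: two simulated players repeatedly shift probability toward moves they "regret" not having played, and their long-run average strategy drifts toward the equilibrium mix (here, each move one third of the time).

```python
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def utility(a, b):
    """Payoff to a player choosing action a against action b."""
    if a == b:
        return 0.0
    return 1.0 if (a - b) % 3 == 1 else -1.0

def expected_utility(action, opponent_mix):
    """Expected payoff of an action against an opponent's mixed strategy."""
    return sum(p * utility(action, b) for b, p in enumerate(opponent_mix))

def strategy_from_regrets(regrets):
    """Mix over actions in proportion to positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1.0 / ACTIONS] * ACTIONS
    return [p / total for p in positive]

def train(iterations):
    # Start the second player slightly biased toward rock so the
    # dynamics actually move; a perfectly symmetric start is static.
    regrets = [[0.0] * ACTIONS, [1.0, 0.0, 0.0]]
    strategy_sum = [0.0] * ACTIONS  # running total of player 0's mixes
    for _ in range(iterations):
        p0 = strategy_from_regrets(regrets[0])
        p1 = strategy_from_regrets(regrets[1])
        strategy_sum = [s + x for s, x in zip(strategy_sum, p0)]
        ev0 = sum(p0[a] * expected_utility(a, p1) for a in range(ACTIONS))
        ev1 = sum(p1[a] * expected_utility(a, p0) for a in range(ACTIONS))
        for a in range(ACTIONS):
            # Regret: how much better each action would have done
            # than the mix the player actually used.
            regrets[0][a] += expected_utility(a, p1) - ev0
            regrets[1][a] += expected_utility(a, p0) - ev1
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]
```

The players' per-round strategies may cycle, but the time-averaged strategy returned by `train` settles near one third for each move, which is exactly the point of an equilibrium strategy: averaged over many games, it cannot be exploited.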
Researchers are now turning their attention to other forms of poker. The "limit" version of the game that the Alberta team solved is, by definition, constrained, and its possibilities manageable. This makes things easier from a theoretical point of view, but limits its relevance to the real world. The obvious next target is "no limit" Texas Hold 'Em, which is the headline event in most human tournaments. There is no maximum stake; poker players can go "all in" at any time.
Some bots are already fluent at no-limit Hold 'Em, demonstrating tactical thinking far beyond human capability. As these bots mature, they are revealing new and innovative ways of juggling risks and making decisions with imperfect information. In the process, they are forcing us to question what aspects of our behaviour are inherently human, and what will be shared with thinking machines.
Despite poker's reputation as a game of psychology, it is not just humans who deceive opponents or adapt their behaviour: computers are also deciding this is sometimes the optimal strategy. The world's top poker bots have taught themselves to bluff, feign aggression and even manipulate their opponents. Just as Kasparov hoped, the computers are learning that sometimes it pays to play like a human.
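Why bluffing falls out of the mathematics can be seen in a classic game-theory toy model in the spirit of von Neumann's simplified poker (a textbook illustration, not the bots' actual training method). Suppose a bettor either holds the best hand or nothing, and bets an amount `bet` into a pot of size `pot`. At equilibrium the bettor must bluff just often enough that the caller gains nothing by calling or folding:

```python
def equilibrium_bluff_ratio(pot, bet):
    """Fraction of bets that should be bluffs so the caller is
    indifferent: alpha * (pot + bet) - (1 - alpha) * bet = 0."""
    return bet / (pot + 2 * bet)

def caller_ev(pot, bet, bluff_ratio):
    """Caller's expected profit from calling: win pot + bet when
    facing a bluff, lose the bet when facing the best hand."""
    return bluff_ratio * (pot + bet) - (1 - bluff_ratio) * bet

pot, bet = 100, 100                       # a pot-sized bet
alpha = equilibrium_bluff_ratio(pot, bet)  # 1/3 of bets are bluffs
```

For a pot-sized bet, one bet in three is a bluff at equilibrium; never bluffing, or bluffing too often, hands the opponent a profitable counter-strategy. A self-taught bot converging toward equilibrium rediscovers deception simply because honesty is exploitable.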