200,000 years ago, evolution pushed Homo Sapiens out of the womb and they dropped into the East African dirt.
An impression was made, like none other.
Despite being well down the food chain, Homo Sapiens managed to travel all over the globe, killing everything they found and developing a cognitive ability that was unmatched in the animal kingdom.
Today, we dominate the planet. But you still wouldn’t bet on the Homo Sapien if the opponent were a tiger, shark or gorilla. Even if that Sapien was Brock Lesnar.
We rule because of our intelligence. But what happens when a more intelligent species falls out of that womb?
If Homo Sapiens can inflict so much negligent chaos and carnage on our planet, on other sentient beings and on ourselves, what then of this ‘new God,’ as Sam Harris called AI in his TED Talk, Can We Build AI Without Losing Control Over It?
This question has made people very nervous about the speed of artificial intelligence (AI) development – a lot of it stemming from Nick Bostrom’s TED Talk, What Happens When Our Computers Get Smarter Than We Are?
After all, nobody said the next leaders of our planet had to be flesh and blood.
Why We Can’t Stop AI Development
According to Tim Urban’s excellent article The AI Revolution: The Road to Superintelligence, there are three stages of AI development, and we are currently in the first, called Artificial Narrow Intelligence (ANI).
ANI technology is all around us. Think Siri, Amazon Echo and GPS devices. These are AIs capable of mastering one particular activity.
The next stage is Artificial General Intelligence (AGI), the stage at which an AI becomes better than humans at a broad range of activities.
The final stage is Artificial Superintelligence (ASI), when an AI reaches a level of intelligence that we can’t comprehend.
It’s important to understand that we cannot stop this advancement in technology. For every person or country who decides to stop working on AI because of the inherent dangers of the Control Issue, there will be another country desperate to take advantage.
And humans believe there is inherent good in the growth of AI. Imagine when we have ASIs: disease could be eradicated, mortality might no longer be an issue, and spreading our genes throughout the galaxy would be simpler.
Take the Carnegie Mellon University tag team of Tuomas Sandholm and Noam Brown, the creators of Claudico, Libratus and Lengpudashi – three poker ANIs that have competed against humans in Heads-Up No-Limit Hold’em.
I doubt either Sandholm or Brown want to use their ANIs to grow into ASIs and take over the world. So let’s, like them, focus on the positives of poker AI for once.
AlphaGo’s Play Makes Us Feel Free
A little over a year ago Google’s AI team DeepMind created an ANI designed to best humans in the ancient game of Go. AlphaGo took on the South Korean legend, Lee Sedol, and AlphaGo won.
What was remarkable about AlphaGo’s victory is that it came a decade ahead of predictions from some of the smartest Homo Sapiens to emerge in 200,000 years – which just hammers home the speed of the exponential growth we are experiencing.
So that’s it, right? A game that has stood for over 3,000 years now dies because AI has proven it can beat humans.
Not quite. Recently, DeepMind announced plans to host the ‘Future of Go Summit’ in China, where AI experts and some of the best Go players in the world will meet to discuss the future of AI in light of the learning taking place in the field of gaming.
As part of this Summit, AlphaGo will take on the world #1, Ke Jie, in a best-of-three match. What is interesting about the Summit, and the news stories that have emerged since Sedol was beaten 12 months ago, can be summed up in this one quote from professional Go player Zhou Ruiyang:
“AlphaGo’s play makes us feel free, that no move is impossible. Now everyone is trying to play in a style that hasn’t been tried before.”
Will Libratus Set Us Free?
Plenty of poker writers, including me, have written a lot of doomsday stuff since Libratus’s victory over Jason Les, Dong Kim, Daniel McAuley and Jimmy Chou at the turn of the year.
Just last month another ANI called Lengpudashi comprehensively defeated a team of Chinese players in an exhibition match on the island of Hainan.
But instead of writing about the existential threats Carnegie Mellon ANIs pose to poker, what could we learn from them? If Ruiyang now believes that any move is possible after learning from AlphaGo, could Les feel the same way about Libratus?
I reached out to both Les and Dong Kim, two of Libratus’s opponents (and two players who also competed against the ANI Claudico in 2015), to see what humans can learn from their AI brethren.
“What made Libratus really good was that it had such a complicated mixed strategy,” Les told me during a Skype call.
“It played very specific combos of hands in different ways on the same types of boards. This allowed it to have hands in its range, always. This allowed it to not be negatively affected by card removal and to take advantage of card removal more than humans can.”
Was it obvious that the humans were competing against a Poker God or did it feel like playing Doug Polk or Ben Sulsky for 120,000 hands?
“It was so good there was no way it could have been a human,” Les said. “The mixed strategy and large bet sizes – it was a style, unlike any other human I had witnessed.
“It came up with its own approach which didn’t take into account any human bias or observations. It was a unique playing style.”
Libratus is Latin for Balance
“Libratus is Latin for a word that means ‘balance,’ I believe,” says Les, “and this is what we saw. In every situation it would have a bluff.
“Most humans are not like that. There are always some spots where a human thinks, ‘I am never bluffing here.’ Libratus knew it had to bluff, so it always had bluffs in every situation.
“Add to that it was not getting bluffed easily. A human might think, ‘He always has it here.’ Libratus doesn’t have that bias. It knows it has to call a certain percentage of the time so it’s calling.”
I asked both Kim and Les to think of ways that humans could learn from Libratus and make changes in their game so they could play more like their AI counterparts.
Both players expressed the enormity of the problem facing humans when trying to learn from the AI, but they did come up with a few useful tips.
5 Ways to Play Poker More Like a Super Computer

1. Don’t Anchor Bet Sizes to Pot Size
“The AI really used ‘No-Limit’ to the fullest extent,” Kim explained on Twitter. “Most players don’t go all-in too often unless the pot is big but it was Libratus’s signature move.”
Les expanded on this point:
“In both NLHE and PLH you barely notice the difference between the games post-flop. Most people don’t bet more than the pot. When they do, it’s rare.
“When playing NLHE people seemed to be confined to betting a small range of sizes – quarter pot, half pot, three-quarters pot. Libratus doesn’t seem to have this problem.”
When I was taught to play poker my coach always asked me why I bet a certain amount. As a recreational player, without the time to study the game, I always struggled to answer this question.
One thing that always acted as an anchor for me was the size of the pot and it was something I picked up watching countless hours of training videos from some of the world’s best.
There would be distinct betting patterns like Les describes and betting over the pot was rare. It was as if the pot acted as an anchor that prevented too much movement.
“People should be a lot more open to using more varied bet sizes above and beyond the pot,” said Les. “The pot is not a limit. It’s a reference point. I know people overbet but the frequency that humans do it compared to Libratus is a lot less.”
I asked Les what types of hands it was showing down after making a big overbet.
“It did it with a wide variety of hands,” he said. “You would see stuff that made no sense. It would c-bet two times the pot with second pair.

“In a vacuum that doesn’t make a lot of sense, but as part of its broader strategy it means that when it overbets the pot and the turn pairs the second card, it could now have trips some of the time.”
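The pressure a big bet applies can be put into rough numbers. Below is a minimal Python sketch – my own illustration, not anything from the Libratus team – of the standard ‘minimum defense frequency’ heuristic: facing a bet into a pot, the defender must continue with at least pot / (pot + bet) of their range, or a bluff with any two cards shows an automatic profit. Note how fast that number shrinks once bets go beyond the pot.

```python
def minimum_defense_frequency(pot: float, bet: float) -> float:
    """Fraction of the defender's range that must continue versus a bet
    of size `bet` into a pot of size `pot`, to deny an automatic-profit
    bluff: pot / (pot + bet)."""
    return pot / (pot + bet)

if __name__ == "__main__":
    pot = 100.0
    # Conventional sizes vs. the 2x-pot overbets Libratus favoured.
    for frac in (0.25, 0.5, 0.75, 1.0, 2.0):
        bet = frac * pot
        print(f"{frac:>4}x pot bet -> defend at least "
              f"{minimum_defense_frequency(pot, bet):.1%}")
```

Against a quarter-pot bet the defender has to continue 80% of the time, but against a 2x-pot overbet only a third of the range has to continue – which is exactly what makes those huge sizes so uncomfortable to construct ranges against.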
2. Understand the Power of Card Removal
Another area that both Kim and Les thought Libratus excelled at was its awareness of the effects of Card Removal as a part of its overall strategy.
The card-removal effect is the understanding that the cards you hold in your hand make it less likely that your opponent holds those same cards. This, in turn, affects your range calculations and betting strategies.
“The way it created its ranges to account for card removal made it very tough,” said Les. “It was very aware of how the cards it held would affect its opponent’s range.”
How can humans learn from this?
“Humans can get better by thinking about how the cards in their hand affect what their opponent has,” said Les. “They should think about that in both calling and betting. Am I blocking bluffs, am I blocking his folds?”
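The blocker effect Les describes can be made concrete in a few lines of code. This is a toy sketch of my own (nothing from the Libratus codebase): it counts how many combinations of a given hand remain possible once your own hole cards are removed from the deck.

```python
from itertools import combinations

RANKS = "23456789TJQKA"
SUITS = "cdhs"
DECK = [r + s for r in RANKS for s in SUITS]  # 52 cards, e.g. "Ah"

def combos_left(rank1: str, rank2: str, dead: set) -> int:
    """Count the two-card combos of (rank1, rank2) still possible
    after the `dead` cards (e.g. your own hole cards) are removed."""
    live = [c for c in DECK if c not in dead]
    return sum(
        1 for a, b in combinations(live, 2)
        if sorted((a[0], b[0])) == sorted((rank1, rank2))
    )

if __name__ == "__main__":
    # With a clean deck there are 6 combos of AA and 16 of AK.
    print(combos_left("A", "A", set()))          # 6
    print(combos_left("A", "K", set()))          # 16
    # Holding Ah Kd blocks half of AA and nearly half of AK:
    print(combos_left("A", "A", {"Ah", "Kd"}))   # 3
    print(combos_left("A", "K", {"Ah", "Kd"}))   # 9
```

Simply holding one ace and one king halves the number of ways your opponent can have aces and removes seven of the sixteen ace-king combos – the raw material behind “Am I blocking bluffs, am I blocking his folds?”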
3. Balance, Balance, Balance
The overwhelming difference between the AI and Humans was Libratus’s balanced strategy. It was incredibly difficult for the humans to define a range for the AI.
“Libratus distributes its hands over every type of action,” said Les. “That’s something humans can’t do as well.
“It will take the same hand and some percentage of the time it will bet, sometimes it will check and call, and sometimes check and fold. It’s not affected by card removal or by what an opponent could have. It’s averaged over all the possibilities, and that gives it a lot more balance.
“Humans on average are going to have an imbalance in every direction. In certain situations, they are blocking way too much and don’t have enough value.
“In other situations they are not blocking enough and have too much value. Other times you see people folding too much or not betting enough.”
I asked Les how a human would begin to learn if they have an imbalanced strategy.
“The only way you can do this is to sit down and study away from the tables. You can look at a situation and think, ‘What are all the hands I would bet here? How many are bluffs and how many are value?’
“People would work that out and think, ‘Shit, I am not bluffing that much here.’ That’s a time-consuming exercise but something you need to do over and over again to craft your strategy to be a better poker player.”
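The study exercise Les describes has a well-known benchmark behind it: on the river, a bettor who makes the caller indifferent bluffs with a bet / (pot + 2 × bet) share of their betting range (the caller’s pot odds). The sketch below – my own illustration, with made-up combo counts – compares an actual range against that target:

```python
def equilibrium_bluff_share(pot: float, bet: float) -> float:
    """River bluffing share that makes the caller indifferent:
    the caller's pot odds, bet / (pot + 2 * bet)."""
    return bet / (pot + 2 * bet)

def audit_range(value_combos: int, bluff_combos: int,
                pot: float, bet: float) -> str:
    """Compare an actual bluff share against the indifference target,
    with a small tolerance either side."""
    actual = bluff_combos / (value_combos + bluff_combos)
    target = equilibrium_bluff_share(pot, bet)
    if actual + 0.02 < target:
        verdict = "under-bluffing"
    elif actual - 0.02 > target:
        verdict = "over-bluffing"
    else:
        verdict = "roughly balanced"
    return f"actual {actual:.1%} vs target {target:.1%}: {verdict}"

if __name__ == "__main__":
    # Hypothetical spot: a pot-sized river bet made with 30 value
    # combos but only 5 bluff combos.
    print(audit_range(30, 5, pot=100, bet=100))
    # -> actual 14.3% vs target 33.3%: under-bluffing
```

That ‘Shit, I am not bluffing that much here’ moment is exactly this gap: for a pot-sized bet the indifference target is a third bluffs, and most human ranges fall well short of it.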
4. Don’t Be Afraid to Try New Things
One of the first things Kim pointed out was the way that Libratus learned to play poker. It didn’t compete with human players until the challenge. It played itself, and it played a lot – billions of hands.
If the online wizards changed the face of the game when online poker allowed them to cram more hands into a month than Doyle Brunson managed in a lifetime, then think of the capabilities of an ANI designed to do nothing but play poker.
“Libratus taught itself by playing against itself over billions of hands. Humans can’t do that,” said Les. “But you shouldn’t be afraid to try new things and objectively analyze how they went.
“You can try to do something, get unlucky, lose the hand and give up. You shouldn’t do that; you should be more objective.”
5. Recalibrate End Game on Turn and River
Another area where Kim thought humans could learn from Libratus was the time it took over decisions from the turn onwards. Speaking to Les about this, it appears that came down to design.

Libratus could obviously make more accurate decisions than humans, and in less time, but it slowed down on the turn because it was designed that way. As Les explains:
“When Libratus got to the turn it would recalculate its strategy using something called ‘The End Game Solver.’ I don’t have the technical knowledge to tell you how that works, but it recalculated its strategy on the turn to play the turn and river close to perfectly from that point onwards.
“Libratus would stop and think for 30-40 seconds about what it was going to do from the turn onwards. Humans should think through what they are doing.
“I don’t want poker turning into everyone tanking for 45 seconds on the turn; I can’t live through another month of that. But don’t get into auto-pilot mode. Think through what you are doing.
“Maybe the middle of the hand on the turn is a good spot to think about what has happened so far and what your plan is going forward.”
Will Humans and AI Ever Work Together?
So there are things that humans can learn from Libratus. But, going forward for both humans and AI, the problem is going to be computing power.
If folks like Sandholm and Brown want to turn ANIs into AGIs, they’ll have to wait until they have the computational power to make this happen.
According to the research I did, this could happen within the next decade. Humans have a similar bottleneck; it’s called the skull. So, moving forward, it seems humans are going to need AI to aid their intellectual evolution.
Taking the Libratus v Humans match-up as an example, Les and Kim have pointed out some great lessons that humans can learn. But without technology it’s going to be difficult to assimilate them.
“I imagine, down the line, we’ll see more training tools that will use better AI,” said Les. “This technology keeps advancing, and when we can afford to use it on reasonably priced computers, people can use this tech as a learning aid.
“People ask me what am I going to take from Libratus and put into my game. It’s the things we talked about, but its strategy is so intricate it takes a lot of time and thinking to come up with a cohesive strategy. This is where AI tech could help.”
Finally, I asked Les if he believed humans would ever beat AI. And what does the future hold for poker and AI as a partnership?
Are we going to see people screaming for the world’s most proficient heads-up players to take on Libratus? Or are we going to see AI and humans working more cohesively like in the Go community?
“Humans are probably never going to be able to beat it,” Les says. “I like the idea of Man v Machine competitions. If they are doing it with Go then why not poker?
“In chess, while AI dominates humans, the best chess team is an AI and human partnership. Maybe that could happen in poker with the two working together to form a more rounded strategy.”
Or perhaps Libratus will get so intelligent, and so bored of being asked to beat humans at poker, that somewhere down the road it decides the best way to achieve its goal is to kill all humans.
Sorry. I couldn’t help it. What do you think is the future of poker and AI?