
Monopoly and AI

After Chess and Go, AI learns to play Monopoly like a pro; here’s how

Artificial Intelligence has made rapid progress since the technology became the talk of the town. It is now deeply embedded in our devices and the services that run on them. From autocomplete in Gmail and spam detection in Outlook to computational photography, AI is changing our lives. One question that comes up regularly, however, is whether AI can beat humans, be it in terms of raw intelligence or at playing games.

Man versus Machine: AI and its wins so far

AI has reached a tipping point and has rapidly permeated our lives. It has not reached the singularity yet, the hypothetical point at which AI either becomes self-aware or improves beyond our control. It has, however, managed to beat humans at games, proving that with adequate and meaningful data, AI can learn and keep getting better.

The best-known example of man versus machine came in 1996, when IBM’s Deep Blue became the first computer program to defeat a reigning chess world champion, Garry Kasparov, in a game played under regular tournament conditions. Deep Blue won the first game of that match but lost three and drew two of the remaining five. In 2011, IBM’s question-answering system Watson beat Ken Jennings and Brad Rutter at Jeopardy!.

While IBM focused on special-purpose AI programs, DeepMind introduced a system in 2013 that learned to beat many Atari games with a single model. By not requiring a redesign for each particular game, DeepMind shone a light on the concept of general AI. The biggest test for AI came in 2016, when DeepMind’s AlphaGo challenged the South Korean professional Go player Lee Sedol.

Go is an East Asian board game in which strategy is hard to define and every position branches into vastly more positions to evaluate. Unlike Chess, where a computer can brute-force its way through large numbers of moves, Go is too vast for exhaustive search, so AlphaGo learned to assign a value to positions a few moves into the future and to make decisions based on those values.

Notably, AlphaGo beat Lee Sedol 4–1, a defeat that contributed to his later decision to retire. “Even if I become the number one, there is an entity that cannot be defeated,” Sedol said at the time. Since then, AlphaZero, built on AlphaGo, mastered Chess, Go and Shogi in 2017, and in 2019 AlphaStar came along to master the real-time strategy game StarCraft II. All of this shows that AI is ready for complex tasks and games with complex rules.

AI learns to play Monopoly like a pro

The latest frontier for AI in gaming is mastering Monopoly, the classic fast-dealing property-trading board game. Players roll two dice to move around the board and, along the way, buy and trade properties and develop them with houses and hotels.

Players collect a salary every time they pass “Go”, and the goal of the game is to drive your opponents into bankruptcy by collecting rent from them. The game is so iconic that Sean Connery’s 1971 Bond film ‘Diamonds Are Forever’ references it, with the characters joking that Willard Whyte is trading actual properties.

Unlike other games where AI has managed to learn and beat human champions, Monopoly poses a different challenge. A player only controls the throw of the dice and has no influence over where their piece lands. Another hurdle for AI in Monopoly is that, because of that luck, even a perfect player cannot expect to win every game.

So can AI be trained to master a game like Monopoly? Someone did try. In a video uploaded to YouTube by b2studios, an AI not only learned strategies for Monopoly but also discovered that sending everything to auction is a great strategy. After 11.2 million games of self-play, the AI was able to uncover the secrets of success in this classic board game.

To get there though, the creator first introduced a total of four bots to the game. These bots were programmed to buy everything but trade nothing, and to build houses and only mortgage properties when in need. After a million games, it became clear that the win rate varied depending on the number of players in the game.
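For illustration, a minimal Python sketch of what such a hard-coded baseline policy could look like is below; the class names and attributes are hypothetical, not taken from the b2studios project.

    from dataclasses import dataclass

    @dataclass
    class Property:
        name: str
        price: int
        mortgage_value: int
        houses: int = 0
        mortgaged: bool = False

    class BaselineBot:
        """Hard-coded policy: buy everything, never trade,
        mortgage only when short of cash."""

        def decide_purchase(self, prop: Property, cash: int) -> bool:
            # Buy every unowned property we can afford.
            return cash >= prop.price

        def decide_trade(self, offer) -> bool:
            # Never accept or propose trades.
            return False

        def decide_mortgage(self, properties, cash: int, amount_due: int):
            # Mortgage unimproved properties only until the debt is covered.
            to_mortgage, raised = [], 0
            for p in properties:
                if cash + raised >= amount_due:
                    break
                if not p.mortgaged and p.houses == 0:
                    to_mortgage.append(p)
                    raised += p.mortgage_value
            return to_mortgage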

Players going first were found to have an advantage over those going later. In a game of four players, the difference in win rate between the first and the last player was 5 per cent. That gap grows to 18 per cent with three players, and with two players it reaches 57 per cent. The experiment also suggested that most games between these simple bots end in a stalemate, but that going first is clearly the better position in Monopoly.

The games played by these bots also showed that players spend a lot of time in jail. Looking at the relative win rate, which considers only the properties a player bought, showed that dark blue is the best set, followed by the browns. The brown set is generally not favoured by human players, and that result meant changing the algorithm and creating an even better AI to play Monopoly.
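The video does not spell out exactly how the relative win rate is computed; one plausible reading is the win rate among games in which a player held a given colour set. A rough Python sketch of that reading, with made-up log data:

    from collections import defaultdict

    def relative_win_rate(game_logs):
        """Win rate per colour set, counted over games where a player held
        that set; a rough stand-in for the video's 'relative win rate'."""
        games_with_set = defaultdict(int)
        wins_with_set = defaultdict(int)
        for game in game_logs:                 # each game: list of (player, sets_owned, won)
            for player, sets_owned, won in game:
                for s in sets_owned:
                    games_with_set[s] += 1
                    if won:
                        wins_with_set[s] += 1
        return {s: wins_with_set[s] / games_with_set[s] for s in games_with_set}

    # Toy usage with two fabricated games, purely for illustration:
    logs = [
        [("p1", {"dark_blue"}, True), ("p2", {"brown"}, False)],
        [("p1", {"brown"}, True),     ("p2", {"orange"}, False)],
    ]
    print(relative_win_rate(logs))   # {'dark_blue': 1.0, 'brown': 0.5, 'orange': 0.0}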

Meet NEAT, short for NeuroEvolution of Augmenting Topologies, which acts as the trading AI in the game. The algorithm combines neural networks with basic evolutionary principles. For NEAT to function, it needs an input for every piece of information on the board and an output for every action a player can take.

Each player needs three inputs: position, money and cards. With four players in the game, that makes 12 inputs. Each of the 28 ownable properties on the board requires two more inputs, indicating its owner and whether it is mortgaged. Of those properties, 22 can be built on, and the AI needs an input for the number of houses on each of them.
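To make the bookkeeping concrete, here is a rough Python sketch of how such a board state could be flattened into an input vector. The counts follow the paragraph above; the ordering and normalisation are assumptions, not details from the video.

    NUM_PLAYERS = 4
    NUM_PROPERTIES = 28     # 22 streets + 4 stations + 2 utilities
    NUM_BUILDABLE = 22      # only the streets can take houses

    def encode_state(players, properties):
        inputs = []
        for p in players:                    # 3 inputs per player
            inputs += [p["position"] / 39,   # normalised board position
                       p["money"] / 1500,    # normalised cash
                       p["cards"]]           # e.g. Get Out of Jail Free cards held
        for prop in properties:              # 2 inputs per property
            inputs += [prop["owner"],        # -1 if unowned, else player index
                       1.0 if prop["mortgaged"] else 0.0]
        for prop in properties:              # 1 input per buildable property
            if prop["buildable"]:
                inputs.append(prop["houses"] / 5)   # 5 houses = hotel
        return inputs

    # 4*3 + 28*2 + 22 = 90 inputs before any extra "context" inputs are added.
    assert NUM_PLAYERS * 3 + NUM_PROPERTIES * 2 + NUM_BUILDABLE == 90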

NEAT, the trading AI, can also be given additional inputs for context. For output, there are a total of eight kinds of action: buying, mortgaging, un-mortgaging, trading, bidding, building houses, selling houses and jail decisions. Lastly, a fitness value for each network has to be defined before the evolutionary algorithm can run.
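The video uses its own implementation, but the same evolve-and-evaluate loop can be sketched with the off-the-shelf neat-python package. The config file name, the play_games placeholder and the generation count below are hypothetical, assuming a config that declares the 90-plus inputs and 8 outputs described above.

    import neat  # pip install neat-python

    def play_games(net, n_games=100):
        # Placeholder: a real implementation would run n_games of Monopoly,
        # feeding encode_state(...) into net.activate(...) and reading the
        # eight action outputs; here it just returns a dummy score.
        return 0.0

    def eval_genomes(genomes, config):
        for genome_id, genome in genomes:
            net = neat.nn.FeedForwardNetwork.create(genome, config)
            genome.fitness = play_games(net)   # e.g. fraction of games won

    config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                         neat.DefaultSpeciesSet, neat.DefaultStagnation,
                         "neat_config.txt")    # hypothetical config file
    population = neat.Population(config)
    best_genome = population.run(eval_genomes, 300)   # evolve for 300 generations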

The AI played a total of 11.2 million games, which would take a human approximately 1,600 years. All these games taught the AI that paying to get out of jail is a bad idea, and it came to favour Get Out of Jail Free cards and rolling for doubles instead. It also went through a phase of thinking that bidding $3,000 on every property was a good idea; unsurprisingly, it lost most of those games. The algorithm also learnt that building houses and remortgaging is a good idea.
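As a quick sanity check, the 1,600-year figure works out if an average game is assumed to last roughly 75 minutes; that per-game duration is an assumption, not a number from the video.

    games = 11_200_000
    minutes_per_game = 75   # assumed average game length
    years = games * minutes_per_game / 60 / 24 / 365.25
    print(round(years))     # ~1597, i.e. roughly 1,600 years of non-stop play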

Like humans, the AI became more sensible about its actions, growing more aggressive with each win and more passive with each loss. Because the learning setup is so open-ended, there were moments when the AI decided to simply mortgage everything it owned, forcing games to end in a draw. It then learnt from its mistakes, paid back its debt and started winning again.

This mixture of steps and actions meant the AI was becoming more proactive. It started playing the game with sophisticated strategies and began actively trading properties. If you are wondering how any of this differs from what human players already do, you are right: so far, all these adventures had only brought the AI up to human-level play.

Then, out of nowhere, the AI decided to send everything to auction and bid next to nothing. The trading AI showed a deep understanding of the game, built up by making both good and bad moves, evolving them and gradually getting better. In the end, the trading AI was found to favour the orange set, and it also favoured the stations.

The AI also liked the red and purple sets and, like human players, it enjoyed Mayfair. AI is not so different after all. The whole exercise not only shows that AI can become a pro at Monopoly, it also confirms choices human players have been making for years. Now, if you are curious, pick up Monopoly, be the first to roll the dice and see whether the AI’s strategies translate into actual winnings.
