So we have a working board and an algorithm that will allow us to play Tic-Tac-Toe. All that’s missing are the players themselves. Since we know we’re going to need a player to represent ourselves, the human player, let’s go ahead and implement that class first.
What do all players have in common?
Before we write the human player class, stop and think about what real-world Tic-Tac-Toe players have in common.
Let’s list a few of those commonalities:
- They can pick a valid spot to move to given the current state of the board
- They can move their marker (X or O) to that spot
- They can reason about their potential moves in order to come up with a winning strategy
- … and that’s about it.
Thinking about this from a code perspective, this sounds like a great opportunity for a base class that describes this common functionality and ensures that our implementations adhere to it. This is the Player class. Any class that extends Player should be able to move when given an instance of the current board.
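As a rough sketch (the names here are my own, not necessarily those in the actual source), the base class might look like this:

```ruby
# A minimal sketch of the Player base class described above.
# The names (Player, #move) are illustrative, not necessarily
# the exact ones used in the gem's source.
class Player
  attr_reader :marker

  def initialize(marker)
    @marker = marker # "X" or "O"
  end

  # Every concrete player must decide on a move given the current board.
  def move(board)
    raise NotImplementedError, "#{self.class} must implement #move"
  end
end
```

Raising NotImplementedError in the base class is what keeps subclasses honest: any player that forgets to define its own move blows up the first time it’s asked to play.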
Human players have the benefit of being able to actually see the game board in order to pick the best move. They can also easily determine which moves are valid and which are not, as long as they understand the rules. Human players of our game also have the option to quit at any time, something their computer counterparts are unable to do.
So, the move method for the Human player should do the following:
- Get the player’s chosen move from STDIN
- Check if the player has chosen to quit
- Else, validate that the input is a valid move
- If valid, place the Human player’s marker in that spot
- Otherwise, repeat
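The steps above can be sketched as a loop. Note that the board API (valid_move?, place) and the quit command (“q”) are assumptions for illustration, not necessarily the gem’s actual interface:

```ruby
# Minimal stand-in for the Player base class so this sketch runs on its own.
Player = Struct.new(:marker)

# A sketch of the Human player's move loop, following the steps above.
# The board API (valid_move?, place) and the "q" quit command are
# illustrative assumptions.
class Human < Player
  def move(board)
    loop do
      print "Pick a square (1-9, or q to quit): "
      input = $stdin.gets.chomp       # read the player's choice from STDIN
      exit if input.downcase == "q"   # the human may quit at any time
      spot = input.to_i - 1
      if board.valid_move?(spot)      # validate the input
        board.place(spot, marker)     # place the Human player's marker
        return spot
      end
      puts "Invalid move, try again." # otherwise, repeat
    end
  end
end
```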
I won’t go over this implementation as it’s relatively simple and follows the above algorithm pretty much exactly. If you are interested you can always check out the source.
Now to the interesting part: implementing the AI. Initially, I had decided to implement the AI in the same manner as the Human player above, that is, to write an algorithm that follows a step-by-step process to determine the next move.
I read the Wikipedia article on Tic-Tac-Toe, which spells out such an algorithm and states that a player can play a ‘perfect game’ (either win or draw) if it is followed. This sounded like exactly what I needed for my AI player.
The algorithm is as follows:
- Win: If the player has two in a row, play the third square to get three in a row.
- Block: If the opponent has two in a row, play the third square to block them.
- Fork: Create an opportunity where the player has two threats to win.
- Block opponent’s fork: Prevent the opponent from creating a fork of their own.
- Center: Play the center square.
- Opposite corner: If the opponent is in a corner, play the opposite corner.
- Empty corner: Play an empty corner square.
- Empty side: Play an empty middle square on any of the four sides.
This seemed almost trivial at first. All I would have to do would be to turn this algorithm into code. I didn’t even have to come up with the algorithm myself!
Well, as I soon realized, this wasn’t as easy as I had initially thought. It’s actually very difficult to ‘teach’ a computer how to play Tic-Tac-Toe by following the above set of rules without also requiring the computer to ‘see’ the board.
For example, how do you teach a computer which next move could be the winning one? The ‘board’ is simply an array, no one position has any more weight or meaning than another.
In order for the computer to know if the game has a winner, we can hardcode all of the winning spaces and loop through their combinations to check for our marker, which is exactly what I did.
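With the board stored as a flat nine-element array, the eight winning lines can be hardcoded as index triples and scanned. This is a sketch of that approach; the method names are my own, not necessarily the gem’s:

```ruby
# The eight winning lines of a 3x3 board, as indexes into a flat
# nine-element array. A sketch of the hardcoded-combinations approach.
WINNING_LINES = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8], # rows
  [0, 3, 6], [1, 4, 7], [2, 5, 8], # columns
  [0, 4, 8], [2, 4, 6]             # diagonals
].freeze

# True if the given marker occupies all three squares of any line.
def winner?(board, marker)
  WINNING_LINES.any? { |line| line.all? { |i| board[i] == marker } }
end

# Returns the index of a square that would complete a line for marker,
# or nil if no immediate win exists.
def winning_spot(board, marker)
  WINNING_LINES.each do |line|
    markers = line.map { |i| board[i] }
    if markers.count(marker) == 2 && markers.count(nil) == 1
      return line[markers.index(nil)]
    end
  end
  nil
end
```

Calling winning_spot with our own marker implements the ‘Win’ rule; calling it with the opponent’s marker implements ‘Block’.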
OK, so we can determine if our next move will result in a win, and on the flipside we can determine if we need to block our opponent if they are about to win. But how do we accomplish ‘forking’, ‘blocking an opponent’s fork’, or finding an ‘opposite corner’, ‘empty corner’, or ‘empty side’? All of these would require a similar implementation strategy as the ‘winning’ one above. As you can probably imagine, this solution is less than ideal, as it requires hardcoding all of the possible positions for each of these actions. It turned into a coding and debugging nightmare (see this commit for what I mean).
There had to be a better way.
After doing some research, I came across an algorithm named Min-Max (also written minimax), which turns out to be ideal for solving ‘zero-sum’ games such as Tic-Tac-Toe. A zero-sum game is a game in which each player’s gain (or loss) is exactly balanced by their opponent’s loss (or gain). This means that in Tic-Tac-Toe, one player’s gain (a winning combination of moves) means that their opponent must lose. It also has the side effect that if both players play perfectly, the game must always result in a tie. This is exactly how we want our AI player to perform: either win or tie, but never lose.
Min-max accomplishes this by giving each choice (or potential move) a weight. If the objective is for Player A to win and for Player B to lose, then min-max will always pick the highest-weighted choice on Player A’s turn and the lowest-weighted choice on Player B’s turn. This process is much better described here.
In order for min-max to weigh our potential moves on the Tic-Tac-Toe board, it must be able to determine which sequences of moves result in a win for us and a loss for our opponent. Or, if a win is not possible, we must at least prevent our opponent from winning. The way we accomplish this is to have the min-max algorithm ‘play out’ all potential moves given the current board and retroactively assign each move a point value based on the final outcome. Then we can make the best choice for our next move. Basically, in programming terminology, we use recursion.
Again, I won’t go into the gory details, as you can probably figure them out by looking at the code. Most of the interesting logic occurs in a single recursive method anyway.
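As a hedged sketch, a recursive min-max scorer for our flat nine-element board might look like this. The scoring values, the depth penalty (which makes the AI prefer faster wins), and all method names are my own assumptions, not necessarily what the gem uses:

```ruby
# The eight winning lines of the 3x3 board, as indexes into a flat array.
WINNING_LINES = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8],
  [0, 3, 6], [1, 4, 7], [2, 5, 8],
  [0, 4, 8], [2, 4, 6]
].freeze

def winner?(board, marker)
  WINNING_LINES.any? { |line| line.all? { |i| board[i] == marker } }
end

# Score a board from the AI's point of view: +10 for an AI win, -10 for
# an opponent win, 0 for a draw. Subtracting depth rewards quicker wins.
def minimax(board, current, ai, opponent, depth = 0)
  return 10 - depth if winner?(board, ai)
  return depth - 10 if winner?(board, opponent)
  open_spots = board.each_index.select { |i| board[i].nil? }
  return 0 if open_spots.empty? # no squares left: a draw

  scores = open_spots.map do |spot|
    board[spot] = current                 # play out the move...
    next_player = current == ai ? opponent : ai
    score = minimax(board, next_player, ai, opponent, depth + 1)
    board[spot] = nil                     # ...then undo it
    score
  end
  # The AI maximizes its score; the opponent is assumed to minimize it.
  current == ai ? scores.max : scores.min
end

# Pick the highest-scoring open spot for the AI's next move.
def best_move(board, ai, opponent)
  open_spots = board.each_index.select { |i| board[i].nil? }
  open_spots.max_by do |spot|
    board[spot] = ai
    score = minimax(board, opponent, ai, opponent, 1)
    board[spot] = nil
    score
  end
end
```

Playing each move in place and undoing it afterwards avoids copying the board array at every level of the recursion.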
After some trial and error, I finally had a working version of min-max and a truly unbeatable Tic-Tac-Toe game. If you are interested in giving the game a try, download the source and build the gem by following the README.
I realize that some may view this exercise and series of posts as silly and maybe even a little trivial, but hey, it was fun and I learned a new algorithm and some techniques that I might not have learned otherwise. While I didn’t get to experiment with the hottest new hipster language, or sort through terabytes of data using the latest in ‘big data’ technology, I did have a good time all the same. Sometimes that’s all that matters.
Let me know what you thought of this series in the comments.