Toe-Tac-Tic: the player that gets three in a row loses. Until recently, I was under the impression that if everyone plays the best possible move, the second player always wins. Either I misanalyzed the game, or I have a bug in my program.

Toe-Tac-Tic is an example of a game that is:

- Deterministic: there is no random element. (Backgammon and poker are not deterministic.)
- Two-player: exactly two players. (Three-handed euchre, poker, and solitaire are not two-player.)
- Turn-taking: each player takes a turn, then the other player takes a turn. Turns are discrete, not simultaneous. (Slap Jack, Monopoly, Pit, and Mille Bornes are not strictly turn-taking.)
- Perfect information: there are no secrets; everyone knows the exact state of the game at all times. (Stratego and any card game, especially Magic: The Gathering, are not perfect-information games.)
- Zero sum: the outcomes are that one player wins and the other loses, or they draw.

These are the "easiest" games to analyze. Toe-Tac-Tic has what is called a "small search space," meaning the number of possible games is relatively small. With 9 empty squares, there can be at most 9! different games. (The first player has nine choices, the second player has eight choices, the first player has seven choices, and so on.) In reality, the number of different Toe-Tac-Tic games is significantly less than 9!. (For all practical purposes, a player has only three distinct first moves: a corner, an edge, or the center. And not all games require the board to be filled.) With a relatively small number of games (much fewer than a million), an exhaustive search will solve the problem for you. We think of a search space as a tree.
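To put a rough number on that claim, here is a brute-force counter, a sketch of my own (the helper names `made_three` and `count_games` are mine, not from any particular program). It assumes the misère rule above: a game ends the moment someone completes three in a row, or when the board fills.

```python
import math

# The eight winning (here: LOSING) lines on a 3x3 board, as flat indices 0-8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def made_three(board, mark):
    """True if `mark` occupies all three cells of any line."""
    return any(all(board[i] == mark for i in line) for line in LINES)

_seen = {}  # memo: (position, player to move) -> number of games

def count_games(board, mark):
    """Count distinct move sequences from this position, `mark` to move."""
    key = (tuple(board), mark)
    if key in _seen:
        return _seen[key]
    total = 0
    other = "O" if mark == "X" else "X"
    for i, cell in enumerate(board):
        if cell != " ":
            continue
        board[i] = mark
        if made_three(board, mark) or " " not in board:
            total += 1                 # game over: mark just lost, or a draw
        else:
            total += count_games(board, other)
        board[i] = " "                 # undo the move before trying the next
    _seen[key] = total
    return total

print(math.factorial(9))               # 362880, the upper bound
print(count_games([" "] * 9, "X"))     # the actual number of games is smaller
```

Every game that ends early stands in for all the longer sequences it cuts off, so the real count must come in under 9!.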
To find the best possible move, we can search this tree.

- Depth-first search (DFS): start at the root of the tree and keep descending, visiting a child, a grandchild, a great-grandchild, in whatever order is appropriate, until you hit a leaf; then back up and try a different child, and so on, until you've tried them all.
- Breadth-first search (BFS): visit the root, then the nodes one away from the root, then the nodes two away from the root, and so on.

If I use depth-first search to solve two-person games like those described above, I don't need to write code for a tree, because I already have a tree of sorts: recursion maintains an internal stack, and that stack represents my current path through the tree. If I need to go back up because I've exhausted all of my children, I just return to the previous level of recursion.

The MiniMax algorithm does exactly this: it uses recursion to emulate the search tree. A rough sketch:

EvaluateBoard:
- If I'm already at a win, loss, or draw situation, return that status.
- Loop over all possible moves from this point:
  - Make the move.
  - Recursively call this same algorithm, but with the idea that it's the other player's turn.
  - Get the answer back from the recursion: win, loss, or draw, plus how many moves it took to reach that answer.
- Understanding that the other player is also trying to play the best possible game, evaluate the results you get from the recursion. What is the best result? If the recursion ever returns a loss (for the other player), discard all answers that return win or draw. If a loss is never returned, and any move returns a draw, discard all wins. Only go with a win (for the other player) if that is your only option.
- If there is more than one move that guarantees a loss for the other player, choose the move that forces the loss in the fewest steps.
Likewise, if a win is guaranteed for the other player, choose the move that makes that win take as long as possible.

My Python code contains no imports or use of any external library. I am using crude character graphics; you can, too, if you want. I am using a two-dimensional "array": actually a list of lists. In Python, these things we call "types" are not really part of the language. Because of this, a list can look like this: [5, "Hello", [1, 2, 3]]. The board is [["X","X","X"], ["O","O","O"], [" "," "," "]], and a square is indexed as Board[0][2].

This minimax algorithm always finds the best move. But AI isn't always interested in the best move. For example, if I compute that I cannot force a win but I can force a draw, it might be a good idea for me to make a move that is not a guaranteed win but a likely win, if I presume my opponent isn't all that hot. In chess, we call this a fool's mate.
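To make the list-of-lists representation concrete, here is a tiny sketch (the variable names are mine): the first index picks the row, the second picks the column, and, since Python lists carry no element-type restriction, a list can mix anything.

```python
# Python lists are not restricted to one element type:
mixed = [5, "Hello", [1, 2, 3]]

# A 3x3 board as a list of lists: index by row first, then column.
board = [["X", "X", "X"],
         ["O", "O", "O"],
         [" ", " ", " "]]

print(board[0][2])   # "X": row 0, column 2
print(board[2][1])   # " ": an empty square in the bottom row
```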