Quiz 1 will occur on Wednesday 28 January 2026. No code. I will give you a relatively easy strategy game, and you will describe how the minimax algorithm would compute the next move. (In English.)

Searching algorithms in general. The idea behind searching algorithms is that you have a start state, a desired final state, and any number of intermediate states. The number of intermediate states is usually finite, but sometimes it isn't, and even when the states are finite, the number of states might be quite large.

One possible search space might be towns, or junctions, or crossings, or stopping points, or whatever. I start in Marquette, and I want to get to Ann Arbor, driving only on legal roads. How should AI get me there? Should I go to Houghton first? No, but how does AI know that?

Another example: the sliding number puzzle. The goal state looks like this:

     1  2  3  4
     5  6  7  8
     9 10 11 12
    13 14 15

There are 10,461,394,944,000 states (possible configurations). Maybe your grid starts like this:

     6  1  8  3
    12  7  5 15
    14 13 10 11
     2  4  9

How do you slide the squares around until you get the final state?

Another example: Rubik's Cube. How to get from your starting position to the ending position is not obvious. 43,252,003,274,489,856,000 states.

How should you choose which state to visit next in order to solve the problem? Best-first-search strategy. You imagine that there is a cost to each transition, and so you select the choice with the lowest cost. In the case of the Marquette to Ann Arbor problem, you can select the transition that leads to the junction nearest to you. If you start in Marquette, depending on how you label your junctions, you might choose Negaunee as the place to try (even though that actually doesn't get you closer to Ann Arbor), or you might choose Harvey (which does). In the case of the sliding number puzzle or Rubik's Cube, the cost of each transition is essentially the same (one move), so it doesn't matter which one you try.
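The "pick the cheapest transition" idea above can be sketched with a priority queue. This is a minimal sketch, not the course's required implementation; the road graph and the mileages in it are made up for illustration:

```python
import heapq

def best_first_search(graph, start, goal):
    # Greedy best-first: always expand the cheapest transition seen so far.
    # graph maps a state to a list of (cost, neighbor) pairs.
    frontier = [(0, start, [start])]   # priority queue ordered by transition cost
    visited = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for step_cost, neighbor in graph.get(state, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (step_cost, neighbor, path + [neighbor]))
    return None

# Toy road map; the distances are invented, not real mileages.
roads = {
    "Marquette": [(12, "Negaunee"), (8, "Harvey")],
    "Negaunee":  [(12, "Marquette")],
    "Harvey":    [(8, "Marquette"), (280, "Ann Arbor")],
    "Ann Arbor": [],
}
print(best_first_search(roads, "Marquette", "Ann Arbor"))
# → ['Marquette', 'Harvey', 'Ann Arbor']
```

Note that this ranks choices by the single transition cost, exactly as described above (nearest junction first); it is not Dijkstra's algorithm, which ranks by total path cost.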
Additionally, getting to Ann Arbor (or solving a Rubik's Cube) may not be your only goal; you may want to find the shortest way to get there.

So best-first: try a transition; if you're at the goal, great; if not, try a different transition. Which transition should you try? One from where you are now, or another one from the starting position? If you want to go from where you are now, this is called DEPTH-FIRST-SEARCH. If you want to go from the starting position, this is called BREADTH-FIRST-SEARCH. For the purposes of solving the problem, BFS is probably the best choice, like, 99% of the time.

Getting out of the maze if you're a human being: FOLLOW THE LEFT WALL. Only works if the goal point is a literal exit to the outdoors. (And your starting point is also a point exterior to the maze.)

Classic maze-solving strategy (with bread crumbs): If I get to a junction that I've never been to before, take any path. If I get to a junction that I've been to before, backtrack, even if there are choices I could take. If I've backtracked this path, don't backtrack again; just take any new path. If there aren't any new paths, then take the old path available (there will only be one).

With Depth-First-Search, you go all the way to the end of a path in your search space, then back up one position and try an unused path in your search space. With Breadth-First-Search, you try every spot one move away from your start space, then every spot two moves away, and so forth. Depth-First-Search is based heavily on a stack. BFS is based heavily on a queue. Stacks and Queues are both special cases of what more generic data structure? Priority Queue: when you pop an item, you pop the item of highest priority. Priority Queues are themselves generalized by the search tree.

If you are a computer, BFS is probably more useful, because when you get to the destination, you now know the shortest path to the destination, which is probably useful.
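The stack-versus-queue point can be made concrete in a few lines: the two searches are the same algorithm except for which end of the frontier you pop from. A minimal sketch, with a made-up toy graph standing in for a maze:

```python
from collections import deque

def search(graph, start, goal, strategy="bfs"):
    # BFS pops the oldest path (queue behavior);
    # DFS pops the newest path (stack behavior).
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft() if strategy == "bfs" else frontier.pop()
        state = path[-1]
        if state == goal:
            return path
        for neighbor in graph.get(state, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

# Toy maze as a graph; the junction names are invented.
maze = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(search(maze, "A", "E", "bfs"))  # → ['A', 'B', 'D', 'E']
print(search(maze, "A", "E", "dfs"))  # → ['A', 'C', 'D', 'E']
```

Because BFS tries all one-move paths before any two-move path, the first path it returns is a shortest one; the DFS answer just happens to be whatever branch it plunged down first.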
Since you try shorter paths before longer paths, when you get to the destination, you know that's the shortest path, because you've already tried all the shorter paths, and they didn't work.

When you're doing a best-first search, either with BFS or DFS, what very critical thing do you have to be careful about? One of the problems in searching is visiting a state you had already been to. How do you prevent this from happening? If you visit a previously visited state, you can treat it as a leaf in your search tree and go no farther; cut off the search at that point. But... how do you know you've been to a previously visited state?

You could keep some sort of lookup table of previously visited states; not so great for a Rubik's Cube. If you're using BFS, you can just "suck it up", knowing you may very well have repeats, but just keep going, knowing that you'll eventually get there anyway... of course, BFS is a resource hog itself. (However, you may not need to keep a state around in memory after you've tried every transition from it.)

One thing you can do in DFS is backtrack-checking. When you try a new state, go back to the start state and see if your state is in the path between the start state and you, your present path. If it is, then cut off the search. There may have been another way to get there along a different path, and maybe you tried that one already, and maybe there's some redundant work here, but at least you're not infinite-looping.

Depth-limited search: you only go down so far in your search. Once your depth reaches a certain limit, you don't go down any further. In this case, you can't infinite-loop, because a loop will eventually exceed the depth limit. But why would you want to do this? If your depth-limited search can't find the goal, what good is it, even if there is a goal? This is an AI class, not an Algorithms class. An Algorithms class is really interested in the correct answer. An AI class is OK with an answer that's close enough.
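Both ideas above can go in one short sketch: a DFS that refuses to revisit any state already on its current path (the backtrack check) and also cuts off at a depth limit. The graph here is invented; note that it contains a genuine cycle between A and B:

```python
def depth_limited_search(graph, state, goal, limit, path=None):
    # DFS with two cutoffs: a depth limit, and a check against
    # the current path to avoid infinite-looping around a cycle.
    if path is None:
        path = [state]
    if state == goal:
        return path
    if limit == 0:
        return None           # hit the depth limit; give up on this branch
    for neighbor in graph.get(state, []):
        if neighbor in path:  # already on our present path: treat as a leaf
            continue
        result = depth_limited_search(graph, neighbor, goal, limit - 1,
                                      path + [neighbor])
        if result:
            return result
    return None

# A graph with a cycle (A <-> B); the path check keeps us out of the loop.
g = {"A": ["B"], "B": ["A", "C"], "C": ["D"]}
print(depth_limited_search(g, "A", "D", limit=3))  # → ['A', 'B', 'C', 'D']
print(depth_limited_search(g, "A", "D", limit=2))  # → None (limit too shallow)
```

The `limit=2` call shows the trade-off discussed above: the goal exists, but the search is too shallow to reach it, and it simply reports failure.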
Using a depth-limited search, what we can do is estimate what our first move ought to be (of all these moves, which gets us "closest" to our goal), then we can make the move, and then try the algorithm again.
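That estimate-move-repeat loop can be sketched on a toy puzzle. Everything here is an invented stand-in: the "puzzle" is sorting a tuple of letters by swapping adjacent pairs, and the "closeness" estimate is just a count of misplaced letters:

```python
def moves(state):
    # Each legal move swaps one adjacent pair (a stand-in for a real puzzle's moves).
    out = []
    for i in range(len(state) - 1):
        s = list(state)
        s[i], s[i + 1] = s[i + 1], s[i]
        out.append(tuple(s))
    return out

def estimate(state, goal, depth):
    # Depth-limited lookahead: best (lowest) heuristic value
    # reachable within `depth` further moves.
    h = sum(a != b for a, b in zip(state, goal))
    if depth == 0 or h == 0:
        return h
    return min(estimate(n, goal, depth - 1) for n in moves(state))

def play(start, goal, depth=2, max_steps=20):
    # Estimate which first move looks best, make it, and re-run
    # the estimate from the new state -- the loop described above.
    state = start
    for _ in range(max_steps):
        if state == goal:
            return state
        state = min(moves(state), key=lambda n: estimate(n, goal, depth - 1))
    return state

print(play(("c", "a", "b"), ("a", "b", "c")))  # → ('a', 'b', 'c')
```

The point is that the depth-limited search never has to find the goal itself; it only has to rank the immediate moves well enough that repeating the process gets there.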