GGP: Monte Carlo Search

In the previous post, we looked at a heuristic-based general game player that worked for single-player and two-player zero-sum games. This approach has a few problems, though: we need to come up with a good heuristic for the game at hand, and, more importantly, heuristics exploit local properties of states (properties that do not depend on the game tree as a whole); for many games, there may be no correlation between these local properties and global success. It would be better if we didn’t have to come up with a heuristic at all; if the heuristic were somehow built into the search algorithm itself, i.e., if the algorithm could order the moves on its own, with the ordering depending on the whole game tree rather than on local state properties.
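The core idea behind Monte Carlo search is exactly this: instead of a hand-crafted heuristic, evaluate each move by the outcomes of many random playouts. Here is a minimal sketch on a deliberately tiny game, a one-pile Nim variant; the game and all names are illustrative, not from the post:

```python
import random

# Toy game for illustration (not from the post): one-pile Nim.
# Players alternately remove 1 or 2 stones; whoever takes the last stone wins.

def legal_moves(stones):
    return [m for m in (1, 2) if m <= stones]

def random_playout(stones, player):
    """Finish the game with uniformly random moves; return the winner (0 or 1)."""
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return player          # this player just took the last stone
        player = 1 - player

def monte_carlo_move(stones, player, n_playouts=500):
    """Score each legal move by random playouts instead of a heuristic."""
    best_move, best_score = None, -1.0
    for move in legal_moves(stones):
        rest = stones - move
        if rest == 0:
            wins = n_playouts       # taking the last stone wins outright
        else:
            wins = sum(random_playout(rest, 1 - player) == player
                       for _ in range(n_playouts))
        score = wins / n_playouts
        if score > best_score:
            best_move, best_score = move, score
    return best_move
```

With no heuristic at all, the playout statistics alone rank the moves; this is the seed of the Monte Carlo tree search family.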

Continue reading


A simple heuristic-based General Game Player

In the previous post, we talked about general game playing. Now let’s build a simple general game player using the algorithms we’ve already seen (iterative deepening DFS and iterative deepening alpha-beta search).
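As a refresher, iterative deepening DFS runs a depth-limited DFS with a growing depth limit, so it finds a shallowest solution while keeping DFS-like memory use. A minimal sketch on a generic graph; the callback interface is illustrative, not from the post:

```python
def iterative_deepening_dfs(start, goal, neighbors, max_depth=20):
    """Repeat depth-limited DFS with increasing limits; return a path
    from start to goal, or None if none is found within max_depth."""
    def dls(node, depth, path):
        if node == goal:
            return path
        if depth == 0:
            return None
        for nxt in neighbors(node):
            if nxt not in path:  # avoid cycles along the current path
                found = dls(nxt, depth - 1, path + [nxt])
                if found:
                    return found
        return None

    for limit in range(max_depth + 1):
        result = dls(start, limit, [start])
        if result:
            return result
    return None
```

Because each iteration redoes the shallower work, the overhead is only a constant factor for trees whose size grows exponentially with depth.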

Continue reading

Introduction to General Game Playing

So far, we’ve talked about designing agents for specific single-player and two-player zero-sum games, used heuristics to guide the search, and seen how to model and tackle uncertainty in move generation. It’s finally time to discuss General Game Playing (GGP).

Continue reading

Search in the presence of uncertainty

In the last two posts, we talked about adversarial search and built a bot for checkers. It’s now time to relax yet another constraint on the type of search problems we can solve: the determinism of actions. That is, we’ll no longer require that taking an action in a given state leads to the same state each time; instead, we’ll allow some randomness/uncertainty in the states reachable from a given state by a given action.
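A standard way to search such problems is expectimax: where minimax would take a min over opponent moves, a chance node takes a probability-weighted average over an action’s possible outcomes. A minimal sketch, assuming each action leads to a distribution over next states; the callback-style interface is illustrative:

```python
def expectimax(state, depth, actions, outcomes, evaluate):
    """Value of a state where each action leads to a *distribution* over
    next states: pick the action with the best expected value."""
    acts = actions(state)
    if depth == 0 or not acts:
        return evaluate(state)
    best = float('-inf')
    for a in acts:
        # chance node: probability-weighted average over possible outcomes
        expected = sum(
            p * expectimax(s, depth - 1, actions, outcomes, evaluate)
            for p, s in outcomes(state, a)
        )
        best = max(best, expected)
    return best
```

Unlike minimax values, expectimax values are expectations, so the magnitudes of the evaluation function matter, not just its ordering of states.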

Continue reading

Creating a bot for Checkers

In the previous post, we learnt about adversarial search. Before we relax more assumptions on search problems and move on to general games, let’s first see how we can create a bot for a zero-sum game, in particular, checkers.

Checkers State

Checkers (or draughts) is a strategy board game for two players, played on an 8×8 chessboard. There is a challenge on HackerRank for creating a checkers bot; you can read the problem statement and the input/output format there. We’ll try to solve that challenge in this post.

Continue reading

Adversarial Search

So far we’ve talked about uninformed and informed search algorithms. That should be it, right? Both of them combined should cover the space of all search algorithms, no? Well, unfortunately not. We’ve made some strong assumptions in everything discussed so far. We assumed that search problems had the following properties:

  • Single agent: There is only one agent (player).
  • Deterministic actions: There is no randomness i.e., an action in a given state has the same effect every time.
  • Fully observed state: The agent has perfect information about the world.
  • Static environment: The state evolves only in response to the actions taken by the agent, not as a function of time.

Let’s relax one of these constraints and allow multiple agents instead of just one. A major class of multi-agent search problems is the space of fixed-sum games, which are necessarily adversarial: since the total utility is fixed, the players must compete with each other to maximize their own utilities. An important special case of adversarial games is two-player, zero-sum, deterministic, perfect-information games, e.g., chess, checkers, and tic-tac-toe.
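For such games, the classic search algorithm is minimax: the player to move maximizes the value, while the opponent minimizes it. A compact sketch, using nested lists as a stand-in game tree; the interface is illustrative only:

```python
def minimax(state, depth, is_max, moves, apply_move, evaluate):
    """Two-player zero-sum search: the maximizer picks the highest-valued
    child, the minimizer the lowest; leaves are scored by evaluate()."""
    ms = moves(state)
    if depth == 0 or not ms:
        return evaluate(state)
    children = (minimax(apply_move(state, m), depth - 1, not is_max,
                        moves, apply_move, evaluate) for m in ms)
    return max(children) if is_max else min(children)
```

On a toy tree `[[3, 5], [2, 9]]` with the maximizer to move and minimizing children, the root value is max(min(3, 5), min(2, 9)) = 3.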

Continue reading

Heuristics in AI Search

In the previous post, we discussed some informed search algorithms and the idea of using heuristics to guide the search process. It’s time to look under the hood to understand what these heuristics are, what properties they have, and how to design one for the problem at hand.
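For a concrete taste of the key property, admissibility (never overestimating the true cost), consider Manhattan distance for 4-connected grid movement. The sketch below checks it against exact BFS distances on a small empty grid; the grid setup and names are illustrative:

```python
from collections import deque

def manhattan(a, b):
    """Admissible heuristic for 4-connected grids: it never overestimates
    the true shortest-path distance."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def bfs_distance(width, height, start, goal):
    """Exact shortest-path length on an empty grid with unit-cost moves."""
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        (x, y), d = frontier.popleft()
        if (x, y) == goal:
            return d
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in seen:
                seen.add((nx, ny))
                frontier.append(((nx, ny), d + 1))
```

On an empty grid the two coincide; add obstacles and Manhattan distance only underestimates, which is exactly what admissibility demands.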

Continue reading