We learnt in the previous post that we can model a problem as a search problem if we are able to define 4 components: *state space, successor function, start state and goal test*. In code, it’d look like this:
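A minimal sketch of that interface (the class and method names here are illustrative assumptions, not from any particular library):

```python
# A generic search problem exposes the four components: the state space is
# implicit in what get_successors can reach from the start state.
class SearchProblem:
    def get_start_state(self):
        """Return the start state."""
        raise NotImplementedError

    def is_goal(self, state):
        """Goal test: return True if `state` is a goal state."""
        raise NotImplementedError

    def get_successors(self, state):
        """Successor function: return (next_state, action, cost) triples."""
        raise NotImplementedError
```

A concrete problem subclasses this and fills in the three methods.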

We’ll use the Pacman problem as a running example to illustrate modeling a problem as a search problem and implementing different search algorithms. In this problem, we are given a grid of cells; Pacman is located at one of the cells and another cell contains a food pellet. We need to find a path from Pacman’s location to the food pellet’s location. We can model this as a search problem as follows:
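One possible modeling (a sketch — the grid encoding with `'#'` for walls and `(row, col)` states is an assumption for illustration): a state is Pacman’s position, the start state is where Pacman begins, the goal test checks for the food cell, and the successors are the adjacent non-wall cells, each move costing 1.

```python
class PacmanFoodSearch:
    def __init__(self, grid, pacman, food):
        self.grid = grid          # list of strings; '#' marks a wall
        self.start = pacman       # (row, col) where Pacman starts
        self.food = food          # (row, col) of the food pellet

    def get_start_state(self):
        return self.start

    def is_goal(self, state):
        return state == self.food

    def get_successors(self, state):
        r, c = state
        for dr, dc, action in ((-1, 0, 'North'), (1, 0, 'South'),
                               (0, -1, 'West'), (0, 1, 'East')):
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(self.grid) and 0 <= nc < len(self.grid[0]) \
                    and self.grid[nr][nc] != '#':
                yield (nr, nc), action, 1   # every move costs 1
```

Note that the state space (all reachable positions) is never enumerated explicitly; it is implied by the successor function.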

We saw a pseudocode of the general tree search algorithm in the previous post. Now, let’s see how it can be implemented.
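A minimal sketch, assuming a problem object with the interface above and a `fringe` container exposing `push`, `pop` and `is_empty` (these method names are illustrative):

```python
def tree_search(problem, fringe):
    # Each fringe entry is (state, list of actions taken to reach it).
    fringe.push((problem.get_start_state(), []))
    while not fringe.is_empty():
        state, path = fringe.pop()
        if problem.is_goal(state):      # goal test on removal, not insertion
            return path
        for next_state, action, cost in problem.get_successors(state):
            fringe.push((next_state, path + [action]))
    return None                         # no solution exists
```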

We are yet to define how *strategy* is implemented. It represents the queuing mechanism which defines the order in which nodes are expanded. Different algorithms differ in how they implement this queuing mechanism. It can be a stack (for DFS), a queue (for BFS) or a priority queue (for UCS or A*).

We can get the corresponding graph search algorithm by just keeping a set of explored nodes and ensuring that we don’t enqueue them again.
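In code, the change is small (same assumed `problem` and `fringe` interfaces as before; states must be hashable for the set to work):

```python
def graph_search(problem, fringe):
    fringe.push((problem.get_start_state(), []))
    explored = set()                    # states already expanded
    while not fringe.is_empty():
        state, path = fringe.pop()
        if problem.is_goal(state):
            return path
        if state in explored:
            continue                    # never expand a state twice
        explored.add(state)
        for next_state, action, cost in problem.get_successors(state):
            if next_state not in explored:
                fringe.push((next_state, path + [action]))
    return None
```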

Although the two algorithms are very similar, they can have wildly different properties. Graph search requires space proportional to the number of states explored, while tree search uses less memory but can easily get stuck in loops.

These algorithms are called uninformed because they have no information about the problem domain beyond the start state, goal test and successor function. Given a state, they have no idea how good it is or whether they are getting closer to a solution. This has its pros and cons. It is very easy to apply the same uninformed search algorithm to different problems, since they require the least amount of information about the problem domain. The main drawback is inefficiency: with no domain knowledge, they search blindly and have large space and time complexities. With this short explanation out of the way, let’s see these algorithms in action.

**Depth First Search (DFS)**

DFS chooses the most recently explored node for expansion, i.e., it uses LIFO ordering for node expansion. In other words, it uses a stack as the queuing mechanism.
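Concretely, the graph search version of DFS can use a plain Python list as the stack (a sketch, using the problem interface assumed earlier):

```python
def depth_first_search(problem):
    stack = [(problem.get_start_state(), [])]   # (state, actions so far)
    explored = set()
    while stack:
        state, path = stack.pop()               # LIFO: most recent node first
        if problem.is_goal(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for next_state, action, cost in problem.get_successors(state):
            if next_state not in explored:
                stack.append((next_state, path + [action]))
    return None
```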

Properties of DFS:

- *Completeness*: DFS is complete only if we prevent cycles, i.e., the graph search version of DFS is complete but the tree search version is not.
- *Optimality*: DFS is not optimal. It does not always find the cheapest solution; it only returns the first solution it finds, no matter how expensive it is.
- *Time complexity*: O(b^{m}), where b is the branching factor and m is the maximum depth.
- *Space complexity*: O(bm) for the tree search version and O(b^{m}) for the graph search version.

**Breadth First Search (BFS)**

BFS uses FIFO ordering for node expansion i.e., it uses a FIFO queue as the queuing mechanism.
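Swapping the stack for a FIFO queue gives BFS; a sketch using `collections.deque` (same assumed problem interface as before):

```python
from collections import deque

def breadth_first_search(problem):
    queue = deque([(problem.get_start_state(), [])])
    explored = set()
    while queue:
        state, path = queue.popleft()           # FIFO: oldest node first
        if problem.is_goal(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for next_state, action, cost in problem.get_successors(state):
            if next_state not in explored:
                queue.append((next_state, path + [action]))
    return None
```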

There’s no point in using the tree search version of BFS: it has the same space complexity as the graph search version, so it provides no benefit, and it can be more expensive because the same nodes may be explored over and over again.

Properties of BFS:

- *Completeness*: Yes. BFS is guaranteed to find a solution if one exists.
- *Optimality*: Only if action costs are all 1. BFS finds the shallowest goal node, the one requiring the fewest actions or edges. This is optimal only if all actions have the same cost.
- *Time complexity*: O(b^{s}), where s is the depth of the shallowest solution.
- *Space complexity*: O(b^{s})

One important point to note here is that in all these algorithms, a solution is found when a node *removed* from the fringe satisfies the goal test, not when a node is put into the fringe. Applying the goal test on removal rather than on insertion is what guarantees optimality: a cheaper path to the goal may still be waiting in the fringe when a more expensive one is inserted.

**Iterative Deepening DFS**

Both BFS and DFS have their pros and cons. BFS is complete and finds the shallowest solution but has a large space complexity while DFS has a small space complexity for the tree search version. We can combine the benefits of both into iterative deepening DFS. It uses a depth limited DFS and calls it iteratively with successively increasing values of the maximum depth until it finds a solution.

Depth limited DFS is usually implemented recursively. It may look like this algorithm is very wasteful, since it keeps exploring the same nodes over and over again. However, in an exponentially growing state tree, most of the work happens at the lowest level, so it explores only a constant multiple of the nodes explored by BFS while having the same space complexity as DFS.
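A sketch of this idea (the `max_limit` cap is an assumption added so the loop terminates when no solution exists; the problem interface is the one assumed earlier):

```python
def depth_limited_search(problem, state, path, limit):
    """Recursive DFS that gives up below the depth limit."""
    if problem.is_goal(state):
        return path
    if limit == 0:
        return None                     # limit reached; back up
    for next_state, action, cost in problem.get_successors(state):
        result = depth_limited_search(problem, next_state,
                                      path + [action], limit - 1)
        if result is not None:
            return result
    return None

def iterative_deepening_search(problem, max_limit=50):
    # Re-run depth limited DFS with limits 0, 1, 2, ... until a solution appears.
    for limit in range(max_limit + 1):
        result = depth_limited_search(problem, problem.get_start_state(),
                                      [], limit)
        if result is not None:
            return result
    return None
```

Because the limit grows one level at a time, the first solution found is also the shallowest one, just as with BFS.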

Properties of Iterative Deepening DFS:

- *Completeness*: Yes
- *Optimality*: Only if action costs are all 1
- *Time complexity*: O(b^{s})
- *Space complexity*: O(bs)

**Uniform Cost Search (UCS or Dijkstra’s algorithm)**

UCS chooses the cheapest node for expansion i.e., it uses a priority queue for storing nodes, ordered by their path costs from the start state.
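A sketch using Python’s `heapq` as the priority queue (the tie-breaking counter is an implementation detail added so heap entries stay comparable; the problem interface is the one assumed earlier):

```python
import heapq
from itertools import count

def uniform_cost_search(problem):
    tie = count()                       # tie-breaker for equal-cost entries
    fringe = [(0, next(tie), problem.get_start_state(), [])]
    explored = set()
    while fringe:
        cost, _, state, path = heapq.heappop(fringe)   # cheapest node first
        if problem.is_goal(state):
            return cost, path
        if state in explored:
            continue
        explored.add(state)
        for next_state, action, step_cost in problem.get_successors(state):
            if next_state not in explored:
                heapq.heappush(fringe, (cost + step_cost, next(tie),
                                        next_state, path + [action]))
    return None
```

Ordering the fringe by path cost from the start state is exactly what makes UCS return the cheapest solution rather than the shallowest one.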

Properties of UCS:

- *Completeness*: Yes
- *Optimality*: Yes
- *Time complexity*: O(b^{C^{*}/ε}), where C^{*} is the solution cost and ε is the minimum action cost. C^{*}/ε is the effective depth in the worst case.
- *Space complexity*: O(b^{C^{*}/ε})

In the next post, we’ll look at some of the informed search algorithms.
