This is an extension of my article A* in General, where I explained the basics of A* and the skeletal algorithm common to all implementations.

## Introduction

Here we’ll apply the A* algorithm we have learnt so far to the N-Puzzle.

**What is N-Puzzle?**

N-Puzzle, or the sliding puzzle, is a popular puzzle that consists of N tiles, where N can be 8, 15, 24 and so on. The board is divided into √(N+1) rows and √(N+1) columns, e.g. a 15-Puzzle has 4 rows and 4 columns, an 8-Puzzle has 3 rows and 3 columns, and so on. The board has one empty space into which adjacent tiles can be moved, and the puzzle is solved when a particular goal pattern is formed; the following is one variant of the goal pattern.

The number of solvable states of an N-Puzzle is (N+1)!/2; thus for the 8-Puzzle we have 9!/2 = 181,440 solvable states.
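The arithmetic is easy to confirm with a couple of lines of Python:

```python
import math

# For an N-puzzle there are (N+1)! tile arrangements, and exactly
# half of them are reachable from the goal state.
def solvable_states(n):
    return math.factorial(n + 1) // 2

print(solvable_states(8))    # 8-puzzle: 9!/2 → 181440
print(solvable_states(15))   # 15-puzzle: 16!/2
```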

I’ll explain the procedure using the 8-Puzzle, i.e. a 3 × 3 board, with the final (goal) state being the one shown above. The explanation is general, so with some obvious tweaks any goal state can be solved by following this guide. Also, since we are looking for the shortest (optimal) solution to the puzzle, all heuristics explained here are admissible (see my main article on A-Star, which explains what admissible means).

## Prerequisites

Basic knowledge of the A* algorithm is required, which can be found in my earlier tutorial here, as I’m not going to re-explain the variables F score, G score, H score, etc.

## Working

Let’s work on this in the order of the previous article. First of all, we need to decide what information a node must keep.

- We need to save the puzzle state at that instance, thus an N × N grid.
- We need to store the heuristic value h.
- We need a pointer to the parent node.
- We may store the row and column of the blank space to speed up move generation; however, it’s not required.

**Structure of a Node**

```
Structure Node
{
    variable array grid[N][N]
    variable g
    variable h
    pointer parent
}
```
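In Python, the same node could be sketched as a small class (the names and the cached blank position are illustrative choices, not a fixed interface):

```python
class Node:
    """One search node: a puzzle state plus A* bookkeeping."""
    def __init__(self, grid, g=0, parent=None, blank=None):
        self.grid = grid          # tuple of tuples, N x N tile values (0 = blank)
        self.g = g                # moves made so far (cost from the start)
        self.h = 0                # heuristic estimate, filled in later
        self.parent = parent      # pointer back to the parent node
        # optionally cache the blank's (row, col) to speed up move generation
        self.blank = blank if blank is not None else next(
            (r, c) for r, row in enumerate(grid)
                   for c, v in enumerate(row) if v == 0)

    @property
    def f(self):
        return self.g + self.h
```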

**Making a graph**

Now as we discussed earlier, we need to realize the search space as a graph.

Each state in the graph is represented by its puzzle configuration; thus each node is a separate puzzle state, produced by sliding a tile into the blank space of the previous state. Let’s take the following case:

The above figure is considered a single node in the graph; let’s take it as the starting node. The following shows the expanded graph after the first epoch, when all available moves are applied.

Each node can have a maximum of 4 children, and the graph will expand further in a similar fashion, with each child pointing to its parent.

The g score, which is the cost of the path so far, increases by 1 at each depth, since each tile slid into the blank space represents one move, and in the end we need the optimal number of moves for the puzzle instance. This basically means we add 1 to the g score of the parent before storing it in each child node.
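The expansion step above can be sketched as follows; states are tuples of tuples with 0 as the blank, and every child gets the parent’s g plus one:

```python
# Sketch of expanding a node: each child is the parent's grid with one
# tile slid into the blank, and the child's g score is parent g + 1.
def expand(grid, blank, g):
    n = len(grid)
    r, c = blank
    children = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # up, down, left, right
        nr, nc = r + dr, c + dc
        if 0 <= nr < n and 0 <= nc < n:
            rows = [list(row) for row in grid]
            rows[r][c], rows[nr][nc] = rows[nr][nc], rows[r][c]
            children.append((tuple(map(tuple, rows)), (nr, nc), g + 1))
    return children

start = ((1, 2, 3), (4, 0, 5), (6, 7, 8))
kids = expand(start, (1, 1), g=0)
print(len(kids))   # blank in the centre: all 4 moves are legal, prints 4
```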

Let’s discuss the various heuristics we can use:

**Hamming Distance/Misplaced Tiles**

Just as the name suggests, this heuristic returns the number of tiles that are not in their final position. Let’s take this puzzle instance.

Here we see that none of the tiles except ‘7’ is in its final position, thus the heuristic value (h) will be 7 for this instance. Remember, we don’t consider the blank space as a tile while calculating this heuristic value.

This heuristic is, however, the slowest: a huge number of nodes will be explored to reach the goal state compared to the other heuristics.
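As a concrete sketch, the misplaced-tile count can be computed like this, assuming the goal layout is 1–8 in row-major order with the blank last (adjust GOAL for other variants):

```python
# Hamming heuristic: count tiles (not the blank) that are out of place.
GOAL = ((1, 2, 3), (4, 5, 6), (7, 8, 0))

def hamming(grid, goal=GOAL):
    return sum(1
               for row, goal_row in zip(grid, goal)
               for v, g in zip(row, goal_row)
               if v != 0 and v != g)

print(hamming(GOAL))                           # goal state: prints 0
print(hamming(((1, 2, 3), (4, 5, 6), (7, 0, 8))))   # only '8' misplaced: prints 1
```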

**Manhattan Distance/Taxicab Geometry**

The Manhattan distance of a tile is the number of slides it is away from its goal position. Thus, for a given state, the Manhattan distance heuristic is the sum of the Manhattan distances of all the tiles except the blank tile.

Let’s take the following example.

Here we see the tiles 5, 3, 4, 8 are misplaced, so we calculate the Manhattan distances of each of these tiles, which are 2, 3, 2 and 1 respectively; thus the Manhattan heuristic value for this state will be 8 (h = 8).

Manhattan distance is faster than the Hamming distance explained above, but it’s still slow for higher values of N.
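A minimal sketch, again assuming the row-major goal layout used above:

```python
# Manhattan heuristic: for each tile, add how many rows plus columns it
# is away from its goal cell; the blank is ignored.
GOAL = ((1, 2, 3), (4, 5, 6), (7, 8, 0))

def manhattan(grid, goal=GOAL):
    # goal position of every tile value
    where = {v: (r, c) for r, row in enumerate(goal)
                       for c, v in enumerate(row)}
    total = 0
    for r, row in enumerate(grid):
        for c, v in enumerate(row):
            if v != 0:
                gr, gc = where[v]
                total += abs(r - gr) + abs(c - gc)
    return total

# 7 and 8 are each one slide from home: prints 2
print(manhattan(((1, 2, 3), (4, 5, 6), (0, 7, 8))))
```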

**Linear Conflict + Manhattan Distance/Taxicab geometry**

Two tiles ‘a’ and ‘b’ are in a linear conflict if they are in the same row or column, their goal positions are also in that row or column, and the goal position of one of the tiles is blocked by the other tile in that line.

Let’s take the following example

In this instance, tile 4 and tile 1 are in a linear conflict, since tile 4 stands in the path of tile 1’s goal position in the same column (and vice versa); also, tile 8 and tile 7 are in a linear conflict, as 8 stands in the path of tile 7’s goal position in the same row. Hence there are 2 linear conflicts.

Since a heuristic value is a theoretical lower bound on the number of moves required to solve the problem, we can see that one linear conflict adds two moves to the final heuristic value (h): one tile has to move aside to make way for the tile whose goal lies behind it, and then move back, resulting in 2 extra moves. This retains the admissibility of the heuristic.

Linear conflict is always combined with the Manhattan distance to get the heuristic value of a state, and each linear conflict adds 2 moves to the Manhattan distance as explained above, so the ‘h’ value for the above state will be:

###### Manhattan distance + 2 × number of linear conflicts

Manhattan distance for the state is: 10

Final h: 10 + 2 × 2 = 14

Linear conflict combined with Manhattan distance is significantly faster than the heuristics explained above, and 4 × 4 puzzles can be solved with it in a decent amount of time. Just as with the heuristics above, we do not consider the blank tile when calculating linear conflicts.
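A simplified sketch of counting linear conflicts follows. It counts conflicting *pairs* per row and column, which matches the definition above for pairs; be aware that strict treatments resolve chains of three or more mutually conflicting tiles more carefully to preserve admissibility. The heuristic would then be `manhattan(grid) + 2 * linear_conflicts(grid)`:

```python
# Pairwise linear-conflict count: two tiles in the same row (or column)
# whose goals are both in that row (or column), in reversed order.
GOAL = ((1, 2, 3), (4, 5, 6), (7, 8, 0))

def linear_conflicts(grid, goal=GOAL):
    n = len(grid)
    where = {v: (r, c) for r, row in enumerate(goal)
                       for c, v in enumerate(row)}
    conflicts = 0
    for i in range(n):
        # tiles sitting in row i whose goal is also in row i (keep columns)
        row = [(c, where[grid[i][c]][1]) for c in range(n)
               if grid[i][c] != 0 and where[grid[i][c]][0] == i]
        # tiles sitting in column i whose goal is also in column i (keep rows)
        col = [(r, where[grid[r][i]][0]) for r in range(n)
               if grid[r][i] != 0 and where[grid[r][i]][1] == i]
        for line in (row, col):
            for a in range(len(line)):
                for b in range(a + 1, len(line)):
                    # a sits before b in the line but its goal lies after b's
                    if line[a][1] > line[b][1]:
                        conflicts += 1
    return conflicts

# tiles 1 and 2 are swapped within their goal row: prints 1
print(linear_conflicts(((2, 1, 3), (4, 5, 6), (7, 8, 0))))
```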

**Pattern Database**

Pattern databases are the fastest of these heuristics. In general, a pattern database is built by taking a selected group of tiles of the puzzle and storing, for every possible permutation of positions of the tiles in that group, the minimum cost required to bring the group to its goal configuration, i.e. it stores a collection of solutions to sub-problems that must be solved to solve the whole puzzle. It is then used as a lookup table to get the ‘h’ value of the current state.

There are various types of pattern databases:

**1. Fringe Database**

A fringe database takes a part of the N-Puzzle, i.e. some group of tiles, and maps each configuration of those tiles to the minimum cost of bringing them to their goal positions. For example, in this picture:

We have taken the left and top corner tiles (marked in yellow) and the remaining white tiles as our two fringe patterns. We enumerate all permutations of locations of these tiles and the minimum cost for each permutation to reach the goal state, as shown in the right picture; the left picture shows one permutation of the locations of these tiles, which represents one entry in our pattern database.

Now let’s say the yellow tiles take a minimum of 20 moves to reach the goal state, as shown in the right picture, and the white tiles take a minimum of 30 moves; then we take the maximum of the two, that is 30 moves, as the heuristic value.

**NOTE:** While selecting the tiles for each fringe pattern, make sure that no tile in one group is included in another group, i.e. the patterns are disjoint.
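The lookup itself is simple once the databases exist. In the sketch below the dicts are stand-ins for real pre-computed databases, and the group choices are hypothetical; a pattern is just the positions of one group’s tiles, and the heuristic is the maximum over the groups:

```python
# Fringe-database lookup sketch (databases and groups are stand-ins).
def pattern_key(grid, group):
    # positions of the group's tiles, in a fixed tile order
    pos = {v: (r, c) for r, row in enumerate(grid)
                     for c, v in enumerate(row)}
    return tuple(pos[t] for t in sorted(group))

def fringe_h(grid, groups_and_dbs):
    # groups_and_dbs: list of (group_tiles, {pattern: min_moves}) pairs
    return max(db[pattern_key(grid, group)] for group, db in groups_and_dbs)

state = ((1, 2, 3), (4, 5, 6), (7, 8, 0))
yellow = {1, 4, 7}                 # hypothetical group choice
white = {2, 3, 5, 6, 8}
dbs = [(yellow, {pattern_key(state, yellow): 20}),
       (white, {pattern_key(state, white): 30})]
print(fringe_h(state, dbs))        # max(20, 30): prints 30
```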

**2. Static Additive Pattern Database**

A static additive pattern database is faster than a fringe pattern database because it does not take the maximum cost among the groups but adds the costs of all the pattern groups. Since the cost to the goal state of each pattern is calculated without considering the other groups, i.e. each group only considers conflicts within itself when the databases are created, the sum remains admissible.

Static additive pattern databases can take various forms depending on the memory and processing power available, like the following partitionings shown for the 15-Puzzle.

The 7-8 partitioning will obviously take a lot of memory but will be the fastest; the 6-6-3 partitioning is the best compromise between speed and memory.

The resulting ‘h’ value will be the sum of all the minimum moves required for each group to reach the goal state from current positions.
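The only change from the fringe lookup is the reduction: with disjoint groups, the per-group costs are summed instead of taking the maximum (the databases below are again stand-ins):

```python
# Static additive lookup sketch: disjoint groups let us SUM the costs.
def additive_h(grid, databases):
    # databases: list of (group_tiles, {pattern: min_moves}) pairs
    pos = {v: (r, c) for r, row in enumerate(grid)
                     for c, v in enumerate(row)}
    total = 0
    for group, db in databases:
        key = tuple(pos[t] for t in sorted(group))
        total += db[key]
    return total

state = ((1, 2, 3), (4, 5, 6), (7, 8, 0))
dbs = [({1, 2, 3}, {((0, 0), (0, 1), (0, 2)): 4}),   # made-up costs
       ({4, 5, 6}, {((1, 0), (1, 1), (1, 2)): 6})]
print(additive_h(state, dbs))      # 4 + 6: prints 10
```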

**3. Dynamically partitioned Additive Pattern Database**

A dynamically partitioned additive pattern database also considers conflicts between the groups and is thus the fastest. I’m not very familiar with this type, as I haven’t coded it, so this is all the information I can provide about it.

**NOTE:** All the pattern databases mentioned above are created by running a breadth-first search (BFS) backwards from the final state of the selected group until all patterns have been generated. The BFS moves the blank tile; whenever the blank is swapped with a tile from the group, we add 1 to the cost and check whether the resulting pattern is already in the database; if not, we insert it with its cost, otherwise we don’t add it to the database or the BFS queue. Since all the tiles outside the group are treated as identical, we do not increase the cost when the blank moves into one of their positions. In this way we enumerate all permutations of the positions of the group’s tiles including the blank, but insert each pattern into the database without the blank: of all the states that share one permutation of the group but differ only in the blank’s position, the least cost is the one stored.
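The procedure can be sketched for one 8-Puzzle group as below. The group and goal are illustrative. One subtlety worth noting: because blank moves through anonymous tiles cost 0 while group-tile moves cost 1, a plain FIFO queue does not guarantee minimal costs, so this sketch uses a deque and pushes zero-cost moves to the front (a 0-1 BFS):

```python
from collections import deque

# 0-1 BFS sketch for building one group's database on the 8-puzzle.
# A state is (group tile positions, blank position).
GOAL = ((1, 2, 3), (4, 5, 6), (7, 8, 0))

def build_db(group, goal=GOAL, n=3):
    pos = {v: (r, c) for r, row in enumerate(goal)
                     for c, v in enumerate(row)}
    tiles = tuple(sorted(group))
    start = (tuple(pos[t] for t in tiles), pos[0])
    best = {start: 0}                    # min cost per full state
    db = {start[0]: 0}                   # min cost per pattern (no blank)
    dq = deque([start])
    while dq:
        state = dq.popleft()
        tile_pos, (br, bc) = state
        cost = best[state]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = br + dr, bc + dc
            if not (0 <= nr < n and 0 <= nc < n):
                continue
            if (nr, nc) in tile_pos:     # a group tile slides: cost 1
                new_tiles = tuple((br, bc) if p == (nr, nc) else p
                                  for p in tile_pos)
                nxt, step = (new_tiles, (nr, nc)), 1
            else:                        # anonymous tile slides: cost 0
                nxt, step = (tile_pos, (nr, nc)), 0
            if best.get(nxt, cost + step + 1) > cost + step:
                best[nxt] = cost + step
                (dq.appendleft if step == 0 else dq.append)(nxt)
                if db.get(nxt[0], cost + step + 1) > cost + step:
                    db[nxt[0]] = cost + step
    return db

db = build_db({1, 2, 3})                 # illustrative group choice
print(db[((0, 0), (0, 1), (0, 2))])      # goal pattern: prints 0
```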

You can read my follow-up tutorial about creating a pattern database using BFS. It explains in detail how to build a 6-6-3 pattern database and everything you need to know about creating pattern databases.

**Progression**

Now let’s use Manhattan heuristic on a random 8-Puzzle instance and work it towards the solution.

As we know, A* expands the node with the least f score in each cycle; above we can clearly see how the solution is found in three cycles of expansion, choosing the least f score at each step.
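Putting the pieces together, here is a minimal self-contained A* sketch for the 8-Puzzle with the Manhattan heuristic (a priority queue stands in for the open list; parent pointers are omitted for brevity, so it returns only the optimal move count):

```python
import heapq

GOAL = ((1, 2, 3), (4, 5, 6), (7, 8, 0))

def manhattan(grid):
    where = {v: (r, c) for r, row in enumerate(GOAL)
                       for c, v in enumerate(row)}
    return sum(abs(r - where[v][0]) + abs(c - where[v][1])
               for r, row in enumerate(grid)
               for c, v in enumerate(row) if v != 0)

def neighbours(grid):
    n = len(grid)
    (br, bc), = [(r, c) for r, row in enumerate(grid)
                        for c, v in enumerate(row) if v == 0]
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = br + dr, bc + dc
        if 0 <= nr < n and 0 <= nc < n:
            rows = [list(row) for row in grid]
            rows[br][bc], rows[nr][nc] = rows[nr][nc], rows[br][bc]
            yield tuple(map(tuple, rows))

def solve(start):
    open_heap = [(manhattan(start), 0, start)]   # (f, g, state)
    best_g = {start: 0}
    while open_heap:
        f, g, state = heapq.heappop(open_heap)
        if state == GOAL:
            return g                             # optimal number of moves
        if g > best_g.get(state, g):
            continue                             # stale heap entry
        for child in neighbours(state):
            if best_g.get(child, g + 2) > g + 1:
                best_g[child] = g + 1
                heapq.heappush(open_heap,
                               (g + 1 + manhattan(child), g + 1, child))
    return None                                  # unreachable/unsolvable

print(solve(((1, 2, 3), (4, 5, 6), (0, 7, 8))))  # two slides away: prints 2
```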

Make sure you go through my A* in General article, as it explains the basic algorithm behind this.

**NOTE:** A* is practical only up to the 8-Puzzle; it consumes far too much memory for higher N, since the open and closed lists it maintains grow exponentially. For higher N we use a memory-constrained variant such as the IDA* algorithm: the basic algorithm changes, but everything else remains as it is. Make sure you go through my post on the IDA* algorithm. IDA* can be used for higher values of N since its memory use is linear, and with proper heuristics it can solve the 15-Puzzle in a few seconds.

If you see anything wrong with the post, or have any queries regarding the algorithm, post them in the comments section. There may be mistakes here, as I am still a beginner on the path of learning and this is based on what I have learnt so far, with examples from external sources as well. So experts, please let me know through the comments.