- Part one introduces the notion of path symmetry: a property of uniform-cost grid maps which can significantly slow down search.
- Part two discusses Rectangular Symmetry Reduction (RSR) [2]: a simple yet effective preprocessing algorithm that eliminates many path symmetries by decomposing a grid map into a set of empty rectangles.

In this article I describe Jump Point Search (JPS) [3]: an online symmetry breaking algorithm which speeds up pathfinding on uniform-cost grid maps by “jumping over” many locations that would otherwise need to be explicitly considered. JPS is faster and more powerful than RSR: it can consistently speed up A* search by an order of magnitude and more. Unlike other similar algorithms, JPS requires no preprocessing and has no memory overheads. Further, it is easily combined with most existing speedup techniques, including abstraction and memory heuristics.

**Please note:** this work was developed in conjunction with Alban Grastien. It forms part of my doctoral research into pathfinding at NICTA and The Australian National University. Electronic copies of the paper in which this idea was first described are available from my homepage.

## The Algorithm

JPS [3] can be described in terms of two simple pruning rules which are applied recursively during search: one for straight steps, the other for diagonal steps. The key intuition in both cases is to prune the set of immediate neighbours around a node by trying to prove that an optimal path (symmetric or otherwise) exists from the parent of the current node to each neighbour, and that this path does not involve visiting the current node. Figure 1 outlines the basic idea.

**Figure 1: Neighbour Pruning**

Node **x** is currently being expanded. The arrow indicates the direction of travel from its parent, either straight or diagonal. In both cases we can immediately prune all grey neighbours as these can be reached optimally from the parent of **x** without ever going through node **x**.

We will refer to the set of nodes that remain after pruning as the *natural neighbours* of the current node. These are marked white in Figure 1. Ideally, we only ever want to consider the set of natural neighbours during expansion. However, in some cases, the presence of obstacles may mean that we need to also consider a small set of up to *k* additional nodes (0 ≤ *k* ≤ 2). We say that these nodes are *forced neighbours* of the current node. Figure 2 gives an overview of this idea.

**Figure 2: Forced Neighbours**

Node **x** is currently being expanded. The arrow indicates the direction of travel from its parent, either straight or diagonal. Notice that when **x** is adjacent to an obstacle the highlighted neighbours cannot be pruned; any alternative optimal path, from the parent of **x** to each of these nodes, is blocked.
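To make the two pruning rules concrete, here is a small sketch in Python. It is an illustration rather than code from the paper: it assumes an 8-connected grid and a hypothetical `walkable(x, y)` predicate that returns `False` for obstacles and for out-of-bounds coordinates.

```python
def pruned_neighbours(x, y, dx, dy, walkable):
    """Return the natural (plus any forced) neighbours of (x, y), given that
    the search arrived via direction (dx, dy). With no parent (the start
    node), every walkable neighbour is returned."""
    if dx == 0 and dy == 0:  # start node: no pruning possible
        return [(x + i, y + j) for i in (-1, 0, 1) for j in (-1, 0, 1)
                if (i, j) != (0, 0) and walkable(x + i, y + j)]

    neighbours = []
    if dx != 0 and dy != 0:  # diagonal step
        # natural neighbours: continue along each axis, or diagonally
        # (for simplicity this sketch allows diagonal moves to cut corners)
        if walkable(x, y + dy):
            neighbours.append((x, y + dy))
        if walkable(x + dx, y):
            neighbours.append((x + dx, y))
        if walkable(x + dx, y + dy):
            neighbours.append((x + dx, y + dy))
        # forced neighbours: an obstacle beside the parent blocks the
        # alternative (symmetric) path around the current node
        if not walkable(x - dx, y) and walkable(x - dx, y + dy):
            neighbours.append((x - dx, y + dy))
        if not walkable(x, y - dy) and walkable(x + dx, y - dy):
            neighbours.append((x + dx, y - dy))
    else:  # straight step
        if walkable(x + dx, y + dy):  # the single natural neighbour
            neighbours.append((x + dx, y + dy))
        if dx != 0:  # horizontal: obstacles above/below force diagonals
            if not walkable(x, y + 1) and walkable(x + dx, y + 1):
                neighbours.append((x + dx, y + 1))
            if not walkable(x, y - 1) and walkable(x + dx, y - 1):
                neighbours.append((x + dx, y - 1))
        else:  # vertical: obstacles left/right force diagonals
            if not walkable(x + 1, y) and walkable(x + 1, y + dy):
                neighbours.append((x + 1, y + dy))
            if not walkable(x - 1, y) and walkable(x - 1, y + dy):
                neighbours.append((x - 1, y + dy))
    return neighbours
```

A straight step keeps at most one natural neighbour and a diagonal step at most three, plus up to two forced neighbours; this is what makes the recursion so cheap.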

Rather than generating each natural neighbour immediately, JPS recursively steps in the direction of each one until it reaches an obstacle or identifies a *jump point* successor. Jump points are interesting because they have neighbours that cannot be reached by an alternative symmetric path: the optimal path must go through the current node.

The details of the recursive pruning algorithm are reasonably straightforward: to ensure optimality we need only assign an ordering to how we process natural neighbours (straight steps before diagonal). I will not attempt to outline it further here; the full details are in the paper and my aim is only to provide a flavour for the work. Figure 3 gives two examples of the pruning algorithm in action. In the first case we identify a jump point by recursing straight; in the second case we identify a jump point by recursing diagonally.

**(a)** Jumping Straight **(b)** Jumping Diagonally

**Figure 3: Jumping Examples.** Node **x** is currently being expanded and **p(x)** is its parent.

(a) We recursively apply the straight pruning rule and identify **y** as a jump point successor of **x**. This node is interesting because it has a neighbour **z** that cannot be reached optimally except by a path that visits **x** then **y**. The intermediate nodes are never explicitly generated or even evaluated.

(b) We recursively apply the diagonal pruning rule and identify **y** as a jump point successor of **x**. Notice that before each diagonal step we first recurse straight (dashed lines). Only if both straight recursions fail to identify a jump point do we step diagonally again. Node **w**, which is simply a forced neighbour of **x**, is generated as normal.
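The jumping examples above can be sketched as a recursive function. Again, this is a minimal illustration rather than an optimised implementation; as before, `walkable(x, y)` is assumed to return `False` for obstacles and out-of-bounds cells, and `goal` is an (x, y) pair.

```python
def jump(x, y, dx, dy, walkable, goal):
    """Step from (x, y) in direction (dx, dy); return the first jump point
    found, or None if the recursion hits an obstacle first."""
    nx, ny = x + dx, y + dy
    if not walkable(nx, ny):
        return None
    if (nx, ny) == goal:
        return (nx, ny)  # the goal is always a jump point
    if dx != 0 and dy != 0:  # diagonal step
        # a forced neighbour makes (nx, ny) a jump point
        if (not walkable(nx - dx, ny) and walkable(nx - dx, ny + dy)) or \
           (not walkable(nx, ny - dy) and walkable(nx + dx, ny - dy)):
            return (nx, ny)
        # recurse straight along both axes before stepping diagonally again
        if jump(nx, ny, dx, 0, walkable, goal) is not None or \
           jump(nx, ny, 0, dy, walkable, goal) is not None:
            return (nx, ny)
    else:  # straight step: look for forced neighbours beside the path
        if dx != 0:
            if (not walkable(nx, ny + 1) and walkable(nx + dx, ny + 1)) or \
               (not walkable(nx, ny - 1) and walkable(nx + dx, ny - 1)):
                return (nx, ny)
        else:
            if (not walkable(nx + 1, ny) and walkable(nx + 1, ny + dy)) or \
               (not walkable(nx - 1, ny) and walkable(nx - 1, ny + dy)):
                return (nx, ny)
    return jump(nx, ny, dx, dy, walkable, goal)
```

In practice the straight-line cases are usually rewritten as loops to avoid deep recursion on large open maps.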

## Properties and Performance

A* search combined with JPS has a number of attractive properties:

1. It is optimal.
2. It involves no pre-processing.
3. It requires no extra memory.
4. It can consistently speed up A* search by over 10 times, making it not only competitive with, but often better than, approximate techniques such as HPA* [1].

Properties 1-3 are interesting theoretical results, and rather surprising ones, but I will not address them further here. My main objective in this article is simply to provide a flavour for the work; for a full discussion, including proofs, please see the original paper [3]. Property 4 is perhaps of broadest practical interest, so I will give a short summary of our findings below.

We evaluated JPS on four map sets taken from Nathan Sturtevant’s freely available pathfinding library, Hierarchical Open Graph. Two of the benchmarks are realistic, originating from the popular BioWare video games *Baldur’s Gate II: Shadows of Amn* and *Dragon Age: Origins*. The other two, *Adaptive Depth* and *Rooms*, are synthetic, though the former could be described as semi-realistic. In each case we measured the relative speedup of A* + JPS vs. A* alone.

Briefly: JPS can speed up A* by a factor of 3-15 (Adaptive Depth), 2-30 (Baldur’s Gate), 3-26 (Dragon Age) and 3-16 (Rooms). In each case the lower figure represents average performance on short pathfinding problems and the higher figure on long ones (i.e. the longer the optimal path to be found, the more benefit is derived from applying JPS).

What makes these results even more compelling is that in 3 of the 4 benchmarks A* + JPS was able to consistently outperform the well known HPA* algorithm [1]. This is remarkable as A* + JPS is always performing optimal search while HPA* is only performing approximate search. On the remaining benchmark, Dragon Age, we found there was very little to differentiate the performance of the two algorithms.

**Caveat emptor:** It is important to highlight at this stage that A* + JPS only achieves these kinds of speedups because each benchmark problem set contains a large number of symmetric path segments (usually manifested in the form of large open areas on the map). In such cases JPS can exploit the symmetry and ignore large parts of the search space. This means A* both generates and expands a much smaller number of nodes and consequently reaches the goal much sooner. When there is very little symmetry to exploit, however, we expect our gains will be more modest.

## Screenshots

**Figure 4: Search Effort.** Comparative pathfinding examples from our experimental results. Images in the first column show the total nodes expanded by A* (marked red) in order to find the optimal path to the goal (marked blue). Images in the middle and last columns show the total nodes expanded by A* + RSR and A* + JPS respectively. Notice that A* + JPS ignores many symmetric path segments (more than A* + RSR, even) and typically reaches the goal with much less effort.

## Final Thoughts

Another exciting aspect of this work is the possibilities it opens for further research. For example: could we pre-process the map and go even faster? Or: are there analogous jumping rules that one could develop for weighted grids? What about other domains? Could we apply the lessons learned thus far to help solve other interesting search problems? The answers to the first two questions already appear to be positive; the third is something I want to explore in the near future. Regardless of how it all turns out, one thing is certain: it’s an exciting time to be working on pathfinding!

## References

**[1]** A. Botea, M. Müller, and J. Schaeffer. *Near Optimal Hierarchical Path-finding.* In Journal of Game Development (Issue 1, Volume 1), 2004.

**[2]** D. Harabor, A. Botea, and P. Kilby. *Path Symmetries in Uniform-cost Grid Maps.* In Symposium on Abstraction, Reformulation and Approximation (SARA), 2011.

**[3]** D. Harabor and A. Grastien. *Online Graph Pruning for Pathfinding on Grid Maps.* In National Conference on Artificial Intelligence (AAAI), 2011.



The speed-up comes from specializing the general graph search to a uniform grid, right?

Cayley graphs are one way to say “uniform grid”. To express a 2d rectangular grid, you could give a group presentation, which is something like saying “There are four directions: north, south, east, west. North followed by South comes back to the same place, East followed by West comes back to the same place, and North East West South comes back to the same place.” (Actually, you would say that all directions have inverses because it is a group, and so there would be two generators and one word, but group theory might be too rigid, you might want semigroups or monoids – I just want to point out that there are ways of describing uniform grids.)

A generalized jump point search algorithm might consume a presentation and output a specialized jump point search algorithm for maps on that grid.

A hexagonal grid can be presented with three generators, each equal to its own inverse, and going around in a hexagon (abcabc) leads back to the same place. It’s easy to construct hyperbolic grids, too. For example, the 2-3-7 triangle group (there’s an image here http://en.wikipedia.org/wiki/%282,3,7%29_triangle_group) can be described as having two generators x and y, such that xx comes back to the same place, yyy comes back to the same place and (xy)^7 comes back to the same place. Probably, there is a presentation of the movement of a chess knight, though I don’t know it (I’m pretty sure it’s the same group as the usual 2d rectangular grid, just a different presentation).

More exotically, the regularity in rolling-cube mazes (as described here: http://www.logicmazes.com/rc/cubes.html) could be captured with a group presentation. Possibly even pathfinding inside of transposition puzzles?

Comment by johnicholas — September 14, 2011 @ 03:02

Hi John,

Excellent points. The speedup does indeed come from specialising the graph to uniform grids. At the moment you’d have to cook up, by hand, variants for other types of uniform cost grids (hexagons etc). This is not difficult in most cases but the situation is not ideal. I thus quite like your suggestion for how a generalised solution should work!

My thoughts for solving this problem have been quite similar: what’s needed is a pre-processing algorithm that generates an appropriate automaton for a given type of map. Said automaton would take a node and generate a set of jump point successors. In the 90s Korf published a paper which tries to do something like this but, iirc, his method was limited to the search spaces of combinatorial puzzles like the Rubik’s Cube. Detecting symmetries in such cases is a bit simpler than in grid-based pathfinding because the goal is fixed and there are no obstacles.

Comment by dharabor — September 14, 2011 @ 12:11

Amazing work! I would love to implement this myself but I don’t see how it would work with influence-weighted grids, in which case it wouldn’t be as useful.

Comment by Henrik Johansson — September 27, 2011 @ 14:39

You might model influence-weighted grids by representing the map as a uniform three-dimensional grid, where everything except a two-dimensional surface is an obstacle. That would make it possible to apply Jump Point Search, but maps without open straightaways would reduce its advantage.

Possibly it would be useful if the influence spread out in a nice way – taxicab-metric cones, for example – so that the search can jump up and down slopes relatively easily. Possibly it would be useful if you’re willing to heavily discretize the levels of influence, so that the map looks nearly flat with a few contours on it. In that case the search could jump up to the contour, step across it, then jump onward.

Comment by Johnicholas — September 30, 2011 @ 03:54

Henrik,

I think Johnicholas is on the right track. Here is my simple take on the problem:

If your grid maps contain uniform-cost regions, simply treat any neighbour that is of a different terrain type to the current node as forced. This will allow you to quickly search across a uniform-cost region, stop to expand a node when crossing into a different region, and continue jumping on the other side. I don’t yet know how well this approach works because I haven’t tried it. I suspect, however, that it should be reasonably effective on a typical RTS map such as this one from Warcraft 3.
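In pseudo-Python, that tweak amounts to one extra clause in the test that decides when the jump recursion should stop. This is illustrative only: `terrain(x, y)` is an assumed function mapping each cell to a terrain-type id, and `has_forced_neighbour` stands in for the ordinary JPS forced-neighbour check.

```python
def should_stop_jumping(x, y, nx, ny, terrain, has_forced_neighbour):
    """Decide whether the jump recursion stepping from (x, y) to (nx, ny)
    should stop and report (nx, ny) as a jump point."""
    # stop when crossing into a region of different terrain type,
    # in addition to the usual forced-neighbour test
    return terrain(nx, ny) != terrain(x, y) or has_forced_neighbour(nx, ny)
```

With this rule the search jumps freely inside each uniform-cost region and only pays the cost of a node expansion at region boundaries.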

I hope to write a detailed article on this problem in the near future. With, I hope, a better generalisation and some results.

Stay tuned! :)

Comment by dharabor — October 7, 2011 @ 13:57

Just finished implementing JPS into my game’s A* algorithm. Have been experiencing solid 10x-20x speedups in all of my tests, which was great to learn. I imagine JPS will soon become the industry-standard optimization when it comes to uniform grid-based pathfinding.

Thanks!

Comment by Tyson Ibele — January 5, 2012 @ 06:38

By the way, I can confirm that your last suggestion regarding weighted grids works.

After grabbing all the neighbors of an expanded node, I do a quick check to see if any of them have a weight that is different from the node being expanded. If they do, they’re marked as forced… and the paths that are returned will follow the route of least resistance, as expected.

So…JPS isn’t limited to uniform-cost grids, which is great!

Comment by Tyson Ibele — January 5, 2012 @ 07:50

This looks absolutely great. Has anyone done an implementation one can look at? I’m trying to optimize an Actionscript 3 implementation of A* but I don’t understand the syntax in the paper.

Comment by Nisse Bergman — January 12, 2012 @ 09:23


Amazing work! I implemented A* + JPS as a method to identify a path from a start to a goal that tried to stay as far away from obstacles as it could and saw a huge improvement. Standard A* took about 44 hours to identify the minimum distance on a 1000 by 1000 grid and A* + JPS took merely 17 minutes. Absolutely wonderful!

Comment by HarryChriss — January 28, 2012 @ 01:18

Hello, can this algorithm be used for solving routing problems, e.g. finding a path between a start point and an end point?

Comment by Prouso01 — February 3, 2012 @ 21:13

Nice, I’ll try to implement this.

Is A* + JPS only efficient with 2D square grids?

What if one is working with a hexagonal grid or other map representations? Will it still be efficient?

Comment by akismet-50617b23d4317f7edf65af426d146b65 — May 16, 2012 @ 20:10

This is awesome! Is any source code available? I would greatly appreciate it :)

Comment by Vittorio Romeo — May 31, 2012 @ 01:07

Hi, Vittorio,

Maybe you might be interested in Jumper, a Lua implementation of JPS.

See here: https://github.com/Yonaba/Jumper

Comment by Roland_Yonaba — May 31, 2012 @ 02:25

Hi all,

A Lua benchmark for Jump Point Search is available here: https://github.com/Yonaba/Jumper-Benchmark

It uses 2012 Grid-Based Planning Path Competition (GPPC) maps.

Feel free to give it a spin!

Comment by Roland Yonaba (@RYonaba) — November 5, 2012 @ 05:48

I am interested in applying jump points to minimize the number of turns + path length in a straight-moves-only scenario. Other techniques I found to deal with this problem use, for instance, rectangular symmetry. The ‘only’ problem I haven’t dealt with is transforming diagonal jumps into an optimal (minimum number of turns + path length) sequence of straight moves.

What I’ve thought until now: for minimizing turns, we should use the Manhattan heuristic and add the number of turns needed to perform a jump to the g-factor. Also, maybe the Euclidean heuristic in the g-factor can be replaced with the Manhattan heuristic?

Besides that, the main problem is transforming a diagonal jump into a sequence of turn-wise optimal moves (that number then would add to the jump node g-score).

Eg, a simple case where we try to move from S to E:

(bear in mind that no other jump point was found in the diagonal search until we found E)

```
1 0 1 1 1 E
1 0 1 0 0 0
1 0 0 0 0 0
1 0 0 0 0 1
0 0 0 0 0 0
S 1 1 0 1 1
```

There are already several paths made available by the fact that it is a diagonal jump point (marked with 2’s):

```
1 0 1 1 1 E
1 0 1 2 2 2
1 0 2 2 2 0
1 2 2 2 0 1
2 2 2 0 0 0
S 1 1 1 1 1
```

Under these conditions, what is the best, turn-wise, straight-moves-only path (taking into account an initial orientation)? The distance covered will be the same under the Manhattan metric.

We can ignore holes in the map (filled with 3’s):

```
1 3 1 1 1 E
1 3 1 0 0 0
1 0 0 0 0 0
1 0 0 0 0 1
0 0 0 0 0 3
S 1 1 3 1 1
```

This case is simple, and a perfect solution with 3 turns + a possible start turn would be the following path (marked P):

```
1 3 1 1 1 E
1 3 1 0 0 P
1 P P P P P
1 P 0 0 0 1
P P 0 0 0 3
S 1 1 3 1 1
```

From there on, I haven’t thought much.

Comment by Coolis — November 11, 2012 @ 00:46

How can I use it on a non-uniform grid?

In my map, some tiles (river / hill / wall) have a higher cost than normal ground.

Thanks.

Comment by finscn — December 20, 2012 @ 17:46

Can JPS be used with RSR? Would that be faster?

I used JPS on 500 x 700 grids and it needs a few seconds on a mobile phone. I was hoping for 0.5 seconds to find a path XD.

Comment by Terence Mok — March 1, 2013 @ 15:26


As you point out, Jump Point Search may not be effective in a grid with lots of walls. But, the single point pruning step is very easy to implement and makes a lot more sense than blindly returning all 8 neighbours. I modified my A* neighbours function to do pruning with forced neighbours and with some informal tests found a 2x speed gain over A*.

I would be very interested to hear your comments.

Comment by Bruce — July 22, 2013 @ 22:38

I implemented JPS for my AS3 game but it looks slower than normal A* because it recurses too much in the jump() function. My map is about 100×100.

Comment by chuma — September 7, 2013 @ 14:59

This algorithm seems to only find paths that consist purely of horizontal, vertical and diagonal segments. While this may be technically enough for the “shortest path” in a pure grid of squares, it’s rather useless in a practical game where an object can move in any direction, and where the shortest distance is a straight line from point A to point B (rather than first moving e.g. horizontally and then diagonally).

It’s possible to devise an algorithm that finds these straight lines, forming geometrically the shortest path when measuring the total length of these lines.

Comment by Juha Nieminen — September 24, 2013 @ 17:38

Juha, take a look at Anya*. It is somewhat related to both JPS and Theta* but takes a different approach in finding the turning points. Like JPS, Anya* focuses on obstacles to define turning points to preserve optimality, and like Theta*, it pulls a string to determine the Euclidean shortest path across optimal path space (see also Swamps).

Comment by Starshine — December 10, 2013 @ 22:43


It seems the visualizations are not really “fairly” made. Even the jumping straight rule is doing *something* to consider the intermediate nodes and jump through them. It only finds the obstacle which creates the jump destination because it did some work to consider a node and its neighbors.

Wouldn’t it be more accurate to color all the nodes along the straight jump paths, even if in a different color? Since it certainly did traverse them?

Comment by jeskeca — September 28, 2014 @ 03:01

@jeskeca:

During search there are two types of operations that take time:

(i) evaluating nodes using the heuristic function.

(ii) maintaining the open list

In pathfinding the first of these operations is usually very cheap and not a problem (cf. AI Planning, where heuristics are very expensive). That means the search time is dominated by open list operations (push, pop and update). Each such operation typically takes O(log n) time. If you look at the literature, practitioners in this area speed up pathfinding by developing ways to expand fewer nodes. This is also the case for JPS.

In JPS we use recursive rules to scan the grid and prune nodes based on the result. Obviously scanning the grid is not free; it takes some time, but the important thing is that each scanning operation amounts to a few read instructions and a handful of branching instructions. This is much cheaper than generating and updating nodes on the open list. Anyway, I did not want to conflate scanning nodes from the grid with expanding nodes, so I did not attempt to visualise this part of the algorithm. I think the pictures included in the article visualise very well the kind of savings you can obtain with JPS.

You may be interested to know that there is a sequel to the original JPS paper where we look at the overhead from scanning the grid and introduce not only new ways to scan the grid faster but also new ways to eliminate scanning entirely. The result is that we can further improve the performance of JPS, often by quite a bit!

The paper in question can be found here: http://www.aaai.org/ocs/index.php/ICAPS/ICAPS14/paper/view/7914

There is also an associated video recording here: https://www.youtube.com/watch?v=NmM4pv8uQwI

I expect there will soon be a writeup on this blog with more details.

Comment by dharabor — September 29, 2014 @ 09:41


This may be just a misunderstanding, but the diagonal expansion seems off to me. In figure 1, aren’t 6->x->5 and 6->7->5 symmetric optimal paths?

Therefore shouldn’t 5 and 2 both be pruned?

My apologies if I am missing something obvious here.

Comment by Jeremy Diaz — March 12, 2015 @ 11:41


In Figure 1 (Simple Neighbour Pruning) for diagonal movement, why are cells 2 and 5 allowed? They are accessible from parent via 4 and 7 using the same cost. Am I missing something here?

Comment by Dmytro Korduban — January 26, 2016 @ 19:50


I think this could do well with pre-processing. I am envisioning an int[][] at each node position containing the node indices to be opened from each entry direction. You can find out ahead of time exactly which indices could be optimally reached via the parent, and prune them. Then it’s just a matter of “foreach(int index in neighbors[entryDirection]) OpenNodes.Enqueue(index);” Unless I am not understanding it. I will have to test it out when I have some spare time. Thanks for the blog!

Comment by Daniel — August 8, 2018 @ 12:07
