The state represented by each search node doesn't have to be limited to a position; on the contrary, it can include an arbitrarily complex set of values. For example, if turning 90 degrees takes the same time as walking from one square to the next, the state of your character can be [position, heading]. Each node now represents not only the position of the character but also its heading, and the new edges of the graph (implicit or explicit) reflect that.
Going back to the original 5x5 grid, the starting state of the search may now be [A, East]. The adjacent nodes are [B, East] and [A, South]: if you want to reach F, you need to correct your heading first, so the path would be [A, East], [A, South], [F, South].
First-person shooter? At least four dimensions: [X, Y, Z, Heading]. Maybe even
[X, Y, Z, Heading, Health, Ammo].
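Roughly, neighbor generation for the [position, heading] grid example might look like the sketch below. This is just an illustration, not the article's code: passable is a hypothetical walkability check, y is assumed to grow downward, and both a 90-degree turn and a forward step are assumed to cost 1.

    # Movement deltas for each heading on a grid where y grows downward.
    HEADINGS = {
        "North": (0, -1),
        "East":  (1, 0),
        "South": (0, 1),
        "West":  (-1, 0),
    }
    # Left and right 90-degree turns for each heading.
    TURNS = {
        "North": ("West", "East"),
        "East":  ("North", "South"),
        "South": ("East", "West"),
        "West":  ("South", "North"),
    }

    def neighbors(state, passable):
        """Yield ((position, heading), cost) pairs reachable in one action."""
        (x, y), heading = state
        # Turning in place: same cell, new heading, cost 1.
        for new_heading in TURNS[heading]:
            yield ((x, y), new_heading), 1
        # Stepping forward: new cell, same heading, cost 1.
        dx, dy = HEADINGS[heading]
        nx, ny = x + dx, y + dy
        if passable(nx, ny):
            yield ((nx, ny), heading), 1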
It's even more than that: you can get great results from A* on any graph search problem where you can find a good heuristic for the distance from a given node to the goal, which may not be related to position at all. For example, I once did this with nodes that represented sets of database attributes and arcs that represented table joins.
A* is not only a pathfinding algorithm but a general search algorithm. In my AI classes I used it for pathfinding, for solving Sudokus and other puzzles, and even to solve a job shop scheduling problem [1]. If you can translate your problem into a graph, you can use A* with it.
In these classes we were given a problem to solve, we had to solve it using different algorithms, and in the end we submitted a report in which we compared the results and efficiency of each. For the algorithms that used heuristics, we had to implement about three different heuristics, analyze each one, and combine them.
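As a concrete sketch of translating a puzzle into a graph (illustrative code, not from the article): for the 8-puzzle, a state can be a tuple of 9 tiles in row-major order with 0 as the blank, and all a generic A* needs from the problem is a neighbors function and an admissible heuristic such as Manhattan distance.

    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

    def neighbors(state):
        """Yield (next_state, cost) pairs: slide one tile into the blank, cost 1 each."""
        blank = state.index(0)
        row, col = divmod(blank, 3)
        for drow, dcol in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            r, c = row + drow, col + dcol
            if 0 <= r < 3 and 0 <= c < 3:
                swap = r * 3 + c
                tiles = list(state)
                tiles[blank], tiles[swap] = tiles[swap], tiles[blank]
                yield tuple(tiles), 1

    def heuristic(state):
        """Sum of Manhattan distances of each tile from its goal cell (admissible)."""
        total = 0
        for index, tile in enumerate(state):
            if tile == 0:
                continue
            goal_index = GOAL.index(tile)
            total += abs(index // 3 - goal_index // 3) + abs(index % 3 - goal_index % 3)
        return total

These two functions, plus a start state, are everything an A* implementation that accepts callbacks needs; nothing in them refers to positions on a map.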
Cool article! If you're interested in how to do pathfinding in 3d, there's a very interesting library called Recast that does this [0]. The author also has a blog where he wrote about his progress, but that seems to have gone quiet. [1]
Maybe I misunderstand how this works but I want to find a pathfinding algorithm that is super fast and only calculates the first few steps toward a target. The target is moving and the calculation needs to happen several times every second so calculating the whole path is not feasible. It doesn't really matter if it gets stuck or doesn't find the optimal path, just that it looks like it's trying to move closer to the target.
More context would be useful to know exactly what kind of algorithm you need.
In our game we use a bounding ellipse to limit the A* traversal, with the actor's current position and the target position as the focal points.
If a pathfind fails, we still move towards the closest square to the goal that we found.
When the two points are far away from each other, the ellipse is long and thin, which makes the paths very direct. As the current position and the target position get closer, the ellipse turns into more of a circle, meaning that the pathfinder will discover more indirect routes should they be needed to reach the final goal.
So entities tend to move in direct paths when they are far away (even if they don't actually know yet whether they will be able to complete the path) and then discover an actual route around blocking obstacles as they get closer.
Because long and thin ellipses tend to really constrain the number of nodes that will be considered, it works pretty well to speed up the A* searches.
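A sketch of that pruning test, assuming Euclidean distance on the grid; the additive slack (in tiles) and the names are assumptions rather than the actual implementation:

    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def inside_search_ellipse(node, start, goal, slack=6.0):
        """Admit a node into the A* search only if it lies inside an ellipse whose
        focal points are the start and the goal. A point is inside the ellipse when
        dist(point, start) + dist(point, goal) <= major axis; using a fixed additive
        slack for the major axis means the ellipse is long and thin when start and
        goal are far apart, and closer to a circle as they approach each other."""
        return dist(node, start) + dist(node, goal) <= dist(start, goal) + slack

Neighbors that fail this test are simply skipped during the search, which is what keeps the number of expanded nodes small when the two points are far apart.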
Do you have time to preprocess the map? Do you have some spare memory? If so, check out compressed path databases.
They're a type of oracle that does exactly what you want: given a start and a target position, the oracle returns the next step on the optimal path to the target. You can extract one step, a few steps, or the entire path. Extracting a single step is very fast, since it requires little more than a memory lookup. Needless to say, this approach has significant advantages if you need to constantly replan your path.
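A toy sketch of the underlying idea, without the compression that gives compressed path databases their name: precompute, for each target, the first move to take from every node, then answer queries with a lookup. The adjacency callback, unit edge costs, and symmetric (undirected) movement are assumptions for brevity.

    from collections import deque

    def build_first_move_table(adjacency, target):
        """Map every node that can reach the target to the first step of a shortest
        path toward it, via a BFS outward from the target (unit edge costs,
        undirected movement)."""
        # first_move[n] is the node to step to from n in order to approach the target.
        first_move = {target: target}
        queue = deque([target])
        while queue:
            current = queue.popleft()
            for nxt in adjacency(current):
                if nxt not in first_move:
                    first_move[nxt] = current  # from nxt, stepping to current gets closer
                    queue.append(nxt)
        return first_move

    def first_step(table, node):
        """Constant-time query: the next node on an optimal path, or None if unreachable."""
        return table.get(node)

Querying is just repeated lookups, which is why extracting a single step is so cheap; the real data structures compress these per-target tables heavily so they fit in memory.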
If "trying to move closer" is enough, maybe you don't even need a pathfinding algorithm - if the target is to the right move one step to the right, if the target is above move one step up, and so on?
That's basically what I have, with the addition of a small collision check. If there is a collision tile between the seeker and the target, it tries to move sideways, but this has the effect that it only takes three adjacent tiles to stop the seeker from reaching the target. I would like something smarter than that while still preserving the efficiency.
These are guesses more than ideas, but: have the distance estimation function take time as an argument? Or actually move along the current best n-step path, then start again?
> # If we just got to the goal node, build and return the path.
> if adjacent == goal_node:
>     return build_path(goal_node)
If you want to ensure the path is optimal, you'll have to wait until you 'expand' the goal node, i.e. pop it from the open set: the first path that reaches the goal isn't necessarily the cheapest, and other entries still in the open set may lead to it at a lower cost.
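A minimal sketch of an A* loop that only declares success when the goal comes off the open list; the names (open_list, best_g, and the callback signatures) are illustrative, not the article's code.

    import heapq
    import itertools

    def a_star(start, goal_node, neighbors, heuristic):
        """Generic A*: neighbors(state) yields (next_state, step_cost) pairs and
        heuristic(state) never overestimates the remaining cost. The goal test runs
        when a node is popped (expanded), which is what guarantees optimality."""
        counter = itertools.count()      # tie-breaker so states themselves are never compared
        open_list = [(heuristic(start), next(counter), 0, start)]
        came_from = {}
        best_g = {start: 0}
        while open_list:
            _, _, g, node = heapq.heappop(open_list)
            if node == goal_node:        # goal expanded: cost g is now guaranteed optimal
                path = [node]
                while node in came_from:
                    node = came_from[node]
                    path.append(node)
                return list(reversed(path))
            if g > best_g[node]:
                continue                 # stale open-list entry; a cheaper route was found later
            for adjacent, step_cost in neighbors(node):
                new_g = g + step_cost
                if new_g < best_g.get(adjacent, float("inf")):
                    best_g[adjacent] = new_g
                    came_from[adjacent] = node
                    heapq.heappush(open_list,
                                   (new_g + heuristic(adjacent), next(counter), new_g, adjacent))
        return None                      # open list exhausted: goal unreachable

Returning as soon as the goal appears among a node's neighbors is only safe in special cases such as breadth-first search on an unweighted graph; in general the first route that generates the goal can be beaten by a cheaper one still sitting in the open list.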
My take-away from the last part: