Sunday, August 18, 2013

avoiding collisions with pathfinding

We split bot navigation into two layers - pathfinding is run for each bot independently, which yields a path (which will persist for a long time), and we hand that path off to a more reactive system (eg. a steering system that uses velocity obstacles or something) which is responsible for dealing with surprises like other bots getting in the way. Even though steering works well at preventing collisions between bots, it has to use the path it's given, and it can't avoid a chokepoint that forces it through a crowd of other bots - those paths end up slowing everyone down and making everyone look dumb.

Here's an example in my toy pathfinder: white squares are traversable, dark grey are blocked, and the colored plus signs are the paths of 4 different bots (they're taking paths that heavily overlap each other). All the paths go through the chokepoint in the middle. The purple bot could have easily avoided the traffic jam by going south a bit, but instead he's taking a (slower!) path that forces him to deal with everyone else.


If we increase the path cost of chokepoints by predicting heavily contested positions, A* will route us around those traffic jams. It's easy to predict where other bots will be - just look at their paths. To keep things fast, you can implement this with a per-position count of the number of paths that use each position (treated like a refcount) - that way, the cost of finding a path is independent of the number of other paths.
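Concretely, the refcount version might look something like the sketch below - not my actual code; the names and the congestion constant are made up, and the 3-square falloff radius mentioned below is left out for brevity:

from collections import defaultdict

path_refcount = defaultdict(int)   # grid position -> number of live paths using it
CONGESTION_COST = 5.0              # tuning knob: how hard bots try to avoid traffic

def claim_path(path):
    # call this when a bot commits to a path
    for pos in path:
        path_refcount[pos] += 1

def release_path(path):
    # call this when a bot finishes (or abandons) its path
    for pos in path:
        path_refcount[pos] -= 1

def step_cost(from_pos, to_pos):
    # what A* charges for one edge: base movement cost plus a congestion penalty
    return 1.0 + CONGESTION_COST * path_refcount[to_pos]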

Here's my toy implementation: although the naive shortest path for all 4 bots would go through the same chokepoint, the bots now take paths that avoid each other (their 'avoidance radius' falls off at 3 squares, so they're giving each other a wide berth). If the cost were higher, they'd work harder to avoid each other - but some path intersection is unavoidable here.


This sort of system could be an improvement for style reasons, too - imagine a SWAT team spreading out to clear an apartment. Although maybe they're ultimately going to the same place, you want them to explore as much of it as they can (kind of like a cheapo version of influence maps).

One complication is that paths become order dependent. The first bot to path in an area ignores everyone else, the second bot only ignores the first, etc. Finding truly 'correct' paths in this case would need to be done simultaneously for everyone (but I don't see how that would even work! Any ideas?).

A less tricky implementation would be to only approximately track positions - by keeping a per-area refcount (if your game spaces are hierarchically partitioned), or only attaching the cost to doorways, or only tracking bend points on the path rather than every position, etc.
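The per-area flavor is just a change of key - here's a sketch building on the refcounts above, where area_of() is a stand-in for whatever your space partitioning provides:

area_refcount = defaultdict(int)   # area/room id -> number of paths crossing it

def claim_path_coarse(path):
    # bump each area once, no matter how many of the path's squares fall in it
    for area in set(area_of(pos) for pos in path):   # area_of() is hypothetical
        area_refcount[area] += 1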

A more tricky version would be time-indexed - meaning you care about the specific *time* that each bot will arrive at each position. This makes things more correct - a bot would be happy walking single file behind other bots. However, time-indexing has a few caveats:

  • makes the bookkeeping more complicated, and makes the runtime slower (slightly)
  • requires your pathfinding system to accurately predict the movement speed of a bot

I tried to implement time-indexing, but it introduced weird kinks in the returned paths, where a path will intentionally waste time (running in a useless circle) to avoid a collision in the future - which makes sense in terms of pathfinding, but looks like a bug. I tried to fix this by adding a low-cost 'wait in place' action that bots can take (in addition to moving in their 8 directions), but that turned out to be a giant can of worms, because it means A* needs to re-visit positions that are already in the closed set whenever it finds them again at a lower cost.
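For reference, here's roughly what a time-indexed state could look like - a sketch, where traversable() and reserved() are hypothetical hooks into the map and the other bots' schedules:

def neighbors(state):
    x, z, t = state
    # 8 movement directions plus (0, 0) - the low-cost 'wait in place' action
    for dx in (-1, 0, 1):
        for dz in (-1, 0, 1):
            nx, nz = x + dx, z + dz
            if traversable(nx, nz) and not reserved(nx, nz, t + 1):
                yield (nx, nz, t + 1)

One way out of the closed-set problem is to key the closed set on the full (x, z, t) state instead of just (x, z) - then waiting is an ordinary transition, at the price of a much bigger search space.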


Maybe a simpler alternative to time-indexing would be to tag each position with a velocity - that way you'd get more cooperative groups without all the complexity of real time-indexing. I.e. you're saying "you can use this door without trouble if you're going north".
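A sketch of that, building on the refcount idea from earlier (direction() is a hypothetical helper that buckets a move into one of the 8 compass headings):

direction_refcount = defaultdict(int)   # (position, heading) -> path count

def claim_path_directional(path):
    for a, b in zip(path, path[1:]):
        path_refcount[b] += 1
        direction_refcount[(b, direction(a, b))] += 1

def step_cost_directional(from_pos, to_pos):
    heading = direction(from_pos, to_pos)
    same_way = direction_refcount[(to_pos, heading)]
    # only paths crossing this square in a *different* direction cost us anything
    return 1.0 + CONGESTION_COST * (path_refcount[to_pos] - same_way)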

Here's a paper that does something kind of similar, which got me thinking about all this: I think their videos show really impressive group dynamics, and I think a big part of what makes it look good is the planned-local-avoidance thing (more than the footstep-planning animation synthesis, although that's pretty cool too!).

Sunday, April 7, 2013

exploring more dimensions with A*

When you're running A*, you're searching over the edges and vertices of a graph, where the vertices represent some kind of state, and the edges between vertices are the state changes. In pathfinding, the states are positions, and the edges are the smallest movements you can make between neighboring positions, like "move +X" or "move +Z".
State = {x, z}
State Transitions = { +x, -x, +z, -z }
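
For concreteness, a minimal A* over exactly this state space might look like the sketch below (my code, nothing fancy; traversable() is whatever your map provides):

import heapq

def astar(start, goal, traversable):
    open_heap = [(0, start)]              # (f = g + h, state)
    g = {start: 0}                        # cheapest known cost to each state
    came_from = {}
    while open_heap:
        _, state = heapq.heappop(open_heap)
        if state == goal:
            path = [state]                # walk the chain back to the start
            while state in came_from:
                state = came_from[state]
                path.append(state)
            return path[::-1]
        x, z = state
        for nbr in ((x + 1, z), (x - 1, z), (x, z + 1), (x, z - 1)):
            if not traversable(*nbr):
                continue
            ng = g[state] + 1
            if ng < g.get(nbr, float('inf')):
                g[nbr] = ng
                came_from[nbr] = state
                h = abs(goal[0] - nbr[0]) + abs(goal[1] - nbr[1])   # manhattan distance
                heapq.heappush(open_heap, (ng + h, nbr))
    return None                           # no path exists

Everything below about richer state spaces amounts to swapping out the neighbor generation and the heuristic.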


You end up with a path like this: grey tiles are blocked, white are unblocked, and red shows the resulting path found by a pretty vanilla A*, going from upper right to lower left. Blue shows the nodes we visited during the search.


This can be kind of crappy for games where you want to model more complex mechanics of motion: characters in games can't really just "move +Z". For example, imagine a car - it can take several seconds to accelerate up to full speed, or to make a 90 degree turn. If we build a shortest-length path for the car to follow, it'll have to drive really slowly to make all the tiny fiddly sharp turns (or in the worst case, be completely unable to make them at all). This might be the shortest-distance path, but it's not the shortest-duration path. And even in games with just human characters (which is what I work on), you can sometimes spot guys running at full speed trying to follow a shortest-distance path, which makes them look robotic. It can also cause bugs if we try to satisfy physical constraints on acceleration or turn speed and the path violates those constraints (eg. by making 90 degree turns around corners).

If we can more accurately model the state space, we'll get a more accurate shortest-duration path:
State = { x,  z, vel_x, vel_z }
State Transitions = { +vel_x, -vel_x, +vel_z, -vel_z }  
// the driver only controls acceleration, not position
Then, if we also give A* an appropriate interface to this more complex state (get_neighbors(state) needs to return the result of accelerating in each direction, and heuristic_cost(state) needs to take velocity and acceleration into account to derive a minimum arrival time), we get something like this instead:


For this image, our dude can move at a top speed of 10 tiles/sec, but can only accelerate at 1 tile/sec^2 (which is a roughly car-ish ratio of top speed to acceleration).
What's cool about this is:
- it's indeed a shorter-duration path even though it's longer
- the driver avoids navigating around all the little obstacles
- you can see the faster path "swings wide" around that corner to be able to go faster
- you can see the explored region (in blue) finds the few quick paths through the obstacles
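
Here's roughly what the get_neighbors / heuristic_cost interface from above amounts to, using the numbers from the image - a sketch under my own assumptions (one search step per second, a hypothetical traversable() check):

MAX_SPEED = 10.0   # tiles/sec, from the image above
ACCEL = 1.0        # tiles/sec^2
DT = 1.0           # seconds per search step (my assumption)

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def get_neighbors(state):
    x, z, vx, vz = state
    # the 'inputs' are accelerations; position only changes through velocity
    for ax, az in ((ACCEL, 0), (-ACCEL, 0), (0, ACCEL), (0, -ACCEL)):
        nvx = clamp(vx + ax * DT, -MAX_SPEED, MAX_SPEED)
        nvz = clamp(vz + az * DT, -MAX_SPEED, MAX_SPEED)
        nx, nz = x + nvx * DT, z + nvz * DT
        if traversable(nx, nz):           # traversable() is hypothetical
            yield (nx, nz, nvx, nvz)

def heuristic_cost(state, goal):
    # lower bound on arrival time: straight-line distance at top speed.
    # this is admissible but loose - a tighter bound would also fold in the
    # current velocity and the acceleration limit, as described above.
    x, z, _, _ = state
    dist = ((goal[0] - x) ** 2 + (goal[1] - z) ** 2) ** 0.5
    return dist / MAX_SPEED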

Neat! It makes me want to make an all-terrain driving obstacle course game. I think to be really accurate, you'd want to feed something close to the game's real rules/simulation into A* - so that it could explore the accurate consequences of its choices. In that case, the inputs for a car would probably be something like:
State Transitions = { rotate steering left, rotate steering right, more gas, less gas (or brake) }
(or whatever the inputs to a car are for your AIs).

The big downside, of course, is the cost of computing this stuff, since we're exploring in more dimensions. For the blue region shown on the map, in the worst case there can be up to 200 "visited" nodes overlapped on each tile position (because of how I quantized my velocity - I don't know how you'd do this with a more analog velocity space). Practically it wouldn't be so bad, but in my test code, paths were about 20x more expensive to find - and certain clever optimizations to A* (like jump point search) don't work anymore.

My pipe dream, though, is to write a bot for "quake done quick" that has the player's inputs as the state space:
State Transitions = { w, a, s, d, jump, aim delta, fire, switch weapons }
But that sounds impossibly slow, right? If that bot could discover rocket jumping, I'd be ecstatic.