Sunday, November 20, 2016

solving pushpull puzzles

There's a little puzzle flash game called pushpull. Try playing a game of it to see what it's about - it's pretty straightforward.

So, imagine the game as this 3x3 grid of cells:
a b c
d e f
g h i

Here is how the different cells relate to each other. The number of times you need to press piece 'a' depends on the number of times you press its neighbors 'b' and 'd', and it also depends on the initial state of 'a'. The relationship between them is that by the end of the game, each square must have been toggled a number of times that leaves it in the down state.

We can represent this by considering the variables a through i to be binary variables that count the number of times you click each grid cell (binary because any two clicks on the same cell cancel out, so clicks can be dropped in pairs), and also considering the initial states s[a] through s[i] to be binary variables. Then a, b, d, and s[a] together must have even parity, since after each of them is applied, the cell must be 'down', or 0.

We can represent these relationships with these equations, where the terms on each line are xor'ed together:
0 = a ⊕ b ⊕ d ⊕ s[a]
0 = b ⊕ a ⊕ e ⊕ c ⊕ s[b]
0 = c ⊕ b ⊕ f ⊕ s[c]
0 = d ⊕ a ⊕ e ⊕ g ⊕ s[d]
0 = e ⊕ d ⊕ b ⊕ f ⊕ h ⊕ s[e]
0 = f ⊕ c ⊕ e ⊕ i ⊕ s[f]
0 = g ⊕ d ⊕ h ⊕ s[g]
0 = h ⊕ g ⊕ e ⊕ i ⊕ s[h]
0 = i ⊕ h ⊕ f ⊕ s[i]

Those equations can be represented as a matrix A:
    1 1 0 1 0 0 0 0 0
    1 1 1 0 1 0 0 0 0
    0 1 1 0 0 1 0 0 0
    1 0 0 1 1 0 1 0 0
A = 0 1 0 1 1 1 0 1 0
    0 0 1 0 1 1 0 0 1
    0 0 0 1 0 0 1 1 0
    0 0 0 0 1 0 1 1 1
    0 0 0 0 0 1 0 1 1

Where if we take the vector s of the initial game state, and the vector x of how many times you click each cell (the variables a through i in the equations above), then the solution to the game satisfies Ax + s = 0 (mod 2). This is saying the same thing as the equations above - if we add together the initial state along with each cell's influencing cells, it has to equal the zero vector, because the game ends with a flat board.

Since I couldn't find any programs to do mod-2 matrix solving, it took some (python-assisted) work to invert the matrix A. Modulo 2, the inverse ends up being:
       1 0 1 0 0 1 1 1 0
       0 0 0 0 1 0 1 1 1
       1 0 1 1 0 0 0 1 1
       0 0 1 0 1 1 0 0 1
A^-1 = 0 1 0 1 1 1 0 1 0
       1 0 0 1 1 0 1 0 0
       1 1 0 0 0 1 1 0 1
       1 1 1 0 1 0 0 0 0
       0 1 1 1 0 0 1 0 1

Testing this matrix: if the input is [0 1 1 1 1 0 1 1 0] (a game I just got), multiplying by the inverse gives [3 3 3 2 4 3 2 3 4]. Since only the parity of the number of clicks matters, we can take that vector mod 2, which gives [1 1 1 0 0 1 0 1 0] - and clicking on those boxes yields a flat board.
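
For reference, here's the whole computation as a self-contained Python sketch (a reconstruction, not my original script): it inverts A over GF(2) with Gaussian elimination, then replays the example above.

def gf2_inverse(A):
    # Gaussian elimination on the augmented matrix [A | I], all mod 2.
    # (Assumes A is invertible mod 2, which this one is.)
    n = len(A)
    M = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        # Find a row with a 1 in this column and swap it up as the pivot.
        pivot = next(r for r in range(col, n) if M[r][col])
        M[col], M[pivot] = M[pivot], M[col]
        # XOR the pivot row into every other row with a 1 in this column.
        for r in range(n):
            if r != col and M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

A = [[1,1,0,1,0,0,0,0,0],
     [1,1,1,0,1,0,0,0,0],
     [0,1,1,0,0,1,0,0,0],
     [1,0,0,1,1,0,1,0,0],
     [0,1,0,1,1,1,0,1,0],
     [0,0,1,0,1,1,0,0,1],
     [0,0,0,1,0,0,1,1,0],
     [0,0,0,0,1,0,1,1,1],
     [0,0,0,0,0,1,0,1,1]]

s = [0,1,1,1,1,0,1,1,0]   # the game above
x = [sum(a * b for a, b in zip(row, s)) % 2 for row in gf2_inverse(A)]
print(x)                  # [1, 1, 1, 0, 0, 1, 0, 1, 0]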

Sunday, November 13, 2016

dumb little robots!


Some little arduino robots I've made:

Pattern-copying clapping sockmonkey:


Automatic plant waterer:


The blimpbot was too heavy to fly :(

Sunday, July 3, 2016

making puzzles by solving puzzles

Constraint Satisfaction Problems

Constraint satisfaction is a useful and very general approach to solving a wide class of puzzle-y problems. If you can phrase a kind of problem as "I need to choose some values (from finite sets) for a bunch of variables, but there are constraints that rule out some combinations of values", that kind of puzzle can probably be solved with constraint satisfaction.

Example puzzles would be things like sudoku (as a constraint problem: it's assigning a decimal digit value to each of 81 variables, subject to the row/column/3x3 uniqueness constraints), crosswords (assign a letter to all the boxes, such that all contiguous boxes spell out words in the dictionary), and the classic "dinner party seating problems" (eg. there are 8 seats at a round table and 8 guests. pick seats for everyone such that Alex can't sit next to Bob, Bob must sit next to either Cathy or Dave, Dave must sit next to Alex but not Edgar, etc). 

Possible Solutions

When there are a lot of variables to choose, it's intractable to exhaustively test every combination of values to find a solution, because there's an exponential number of combinations. Imagine the tree of choices we could potentially make: at depth V, once we've picked values for the first V variables (each from a domain D), there are |D|^V branches - for sudoku that's up to 9^81 leaves - which grows far too quickly for brute-force searching.

Constraints as Pruning

However, we don't need to search through most of the tree: once a constraint makes a tree node invalid, we don't need to explore any of its children. So applying constraints prunes off whole branches of the tree of value assignments*, eliminating an exponential number of nodes with each pruning - which makes it feasible to explore the tree.

* That is, if you don't have to make many choices for them to apply. If the constraint was "the MD5 hash of your values is 0", you can't get out of doing the exhaustive work.

Recursive Backtracking

The straightforward way to recursively guess-and-test your way to a solution is to pick a value at random, apply any constraints involving that value (which rules out values for other variables), and bail out if we ever violate a constraint. Eg. something like:

def solve(puzzle):
    if any variable has no potential values: return False
    if all variables have only one value: return puzzle
    pick an undecided var (ideally the one with the fewest remaining values)
    for each potential value of var (ideally trying least-constraining values first):
        set var to value (in a local copy of puzzle)
        apply all constraints
        found = solve(puzzle copy)
        if found: return found
    return False

To be clever, you can keep track of the set of potential values for each variable, so that when a constraint rules out a value, you can remove it from the set. Furthermore, constraints can propagate transitively from variable to variable - that is, once you've limited the possible values of one variable, that can in turn limit other variables.

When we make a bad random choice, applying the constraints makes one of the sets of potential values empty, which makes us fail and undo (backtrack) to the previous guess. There's a great writeup of this technique by Peter Norvig.
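
To make that concrete, here's a small runnable sketch of the same idea, under an assumed representation: a puzzle is a dict mapping each variable to the set of values it could still take, and each constraint is a function that prunes those sets in place (returning False if it empties one).

import copy

def solve(domains, constraints):
    # Apply each constraint once (a fancier solver would loop to a fixed
    # point, since pruning one variable can let other constraints prune more).
    for constrain in constraints:
        if constrain(domains) is False:
            return None
    if any(len(vals) == 0 for vals in domains.values()):
        return None  # some variable has no potential values
    undecided = [v for v in domains if len(domains[v]) > 1]
    if not undecided:
        return domains  # every variable has exactly one value: solved
    # Guess on the undecided variable with the fewest remaining values.
    var = min(undecided, key=lambda v: len(domains[v]))
    for value in sorted(domains[var]):
        trial = copy.deepcopy(domains)
        trial[var] = {value}
        found = solve(trial, constraints)
        if found:
            return found
    return None  # no value worked: backtrack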

Find All Solutions

Some types of puzzle are only valid if they have exactly one solution (Sudoku is an example of this). How could we distinguish between under-constrained sudoku puzzles and valid ones? Our solver stops once it hits the first valid solution. What if it didn't, and kept going? We could pretty easily modify our solver into a recursive generator function, ie. one that yields all solutions:
def solve_all(puzzle):
    if any variable has no potential values:
        return  # nothing to yield

    if all variables have only one value:
        yield puzzle  # the one solution
        return

    pick an undecided var
    for each potential value of var:
        set var to value (in a local copy of puzzle)
        apply all constraints
        for y in solve_all(puzzle_copy):  # assigns all the other vars
            yield y
    # if no value of var survives, this branch is dead and yields nothing
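
And here's the concrete version of solve_all, under the same assumed dict-of-sets representation as the solver sketch above:

import copy

def solve_all(domains, constraints):
    for constrain in constraints:
        if constrain(domains) is False:
            return  # nothing to yield
    if any(len(vals) == 0 for vals in domains.values()):
        return
    undecided = [v for v in domains if len(domains[v]) > 1]
    if not undecided:
        yield domains  # the one solution down this branch
        return
    var = min(undecided, key=lambda v: len(domains[v]))
    for value in sorted(domains[var]):
        trial = copy.deepcopy(domains)
        trial[var] = {value}
        for y in solve_all(trial, constraints):
            yield y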

So, eg. if we gave it this underconstrained puzzle to solve:
   |  6|
 59|   |  8
2  |  8|
-----------
 45|   |
  3|   |
  6|  3| 54
-----------
   |325|  6
   |   |
   |   |

It would find many solutions, starting with these two (note the last two rows are swapped).
178|236|495
359|147|268
264|598|137
-----------
745|962|381
813|754|629
926|813|754
-----------
481|325|976
592|671|843
637|489|512


178|236|495
359|147|268
264|598|137
-----------
745|962|381
813|754|629
926|813|754
-----------
481|325|976
637|489|512
592|671|843

To make the puzzle a valid one (with only a single solution), we'd have to add more digits to it. We can pick an arbitrary blank space and reveal a number there. In a sense, we can use the solver to turn an incomplete puzzle (one without enough information to be valid) into every valid puzzle it could become.

So if our fancy solver can search for all solutions, what if we gave it the empty 9x9 sudoku grid? Here are the first and the ten-thousandth of its solutions (of a huge number!), in sorted order (trying the numbers in order, from left to right). In a sense, our solver is now a generator! If we left it running, it would eventually print every valid solved sudoku board.
#1
123|456|789
457|891|236
689|237|145
-----------
214|365|897
375|982|461
896|174|352
-----------
531|628|974
748|519|623
962|743|518

#10,000
123|456|789
457|891|236
689|237|145
-----------
214|365|897
375|982|461
896|174|352
-----------
538|729|614
942|618|573
761|543|928

Too Much Information

Of course, although those puzzles are now valid, they're also already solved, which makes them bad puzzles. Ideally, our generator would reveal only the minimal number of clues needed for a valid puzzle. Since our solver can tell when a puzzle is uniquely solvable, though, we already know when we've added enough information!

A Backtracking Generator

To turn a simple solver into a simple generator, instead of guessing at variable assignments, we actually assign them, and keep doing that until the puzzle is (uniquely) solvable. Like the solver, it has to backtrack if it hits a dead end (ie. if we pick incompatible values). We can avoid giving away our completed solution by maintaining two copies of the puzzle: one that's minimal (ie. the puzzle we're going to return), and one that's fully constrained (ie. what can be deduced from the minimal one without any guessing). We're done when the constrained copy is solved.

def generate(minimal, constrained):
    if failed(constrained): return False
    if solved(constrained): return minimal

    while 1:
        pick a random undecided variable
        pick a random value from the variable's set

        assign the value in minimal

        assign the value in constrained
        run the constraints on constrained

        if the constraints fail:
            that random value isn't valid: undo the assignment,
            remove the value from the variable's set, and run the
            constraints again. if they fail again (meaning there are
            now no valid values for that variable): return False

        if they succeed: return generate(minimal, constrained)

The reason we have to remove invalid values (instead of just trying another number) is to stop us from getting stuck in an infinite loop - we need to be able to backtrack multiple times to back out of a dead end that started several guesses up, at a grandparent.

But this generates valid mostly-empty puzzles (and the solution at the same time). Sweet!

Eg. this guy:
 38| 5 |  4
   |8  |65
9 7|   |
-----------
 6 |48 |
 71|  6|3
   | 7 |
-----------
 4 |367|8 5
   |  1|79
 26|   |

Difficulty

We could generate harder puzzles by picking variables and values via heuristics rather than randomly (just like the heuristics that speed up solvers - I like the symmetry). We could also do a better job of noticing when we're done, by counting whether a puzzle has 0, 1, or multiple solutions, ie.

def count_solutions(p):
    count = 0
    for y in solve_all(p):
        count = count + 1
        if count == 2: break  # two is enough to know it's not unique
    return count

Then starting the generator with this instead:
def generate(minimal, constrained):
    sols = count_solutions(constrained)
    if sols == 0: return False
    if sols == 1: return minimal

to get the really minimal puzzles (they're probably still not difficult, though!).

Sunday, August 18, 2013

avoiding collisions with pathfinding

We split bot navigation into two layers. Pathfinding runs for each bot independently and yields a path (which persists for a long time); we then hand that path off to a more reactive system (eg. a steering system that uses velocity obstacles or something) which is responsible for dealing with surprises like other bots getting in the way. Even though steering works well at preventing collisions between bots, it has to use the path it's given, and it can't avoid chokepoints that force bots through each other - paths that end up slowing everyone down and making them look dumb.

Here's an example in my toy pathfinder: white squares are traversable, dark grey are blocked, and the colored plus signs are the paths of 4 different bots (they're taking paths that heavily overlap each other). All the paths go through the chokepoint in the middle. The purple bot could have easily avoided the traffic jam by going south a bit, but instead he's taking a (slower!) path that forces him to deal with everyone else.


If we increase the path cost of chokepoints by predicting heavily contested positions, A* will route us around those traffic jams. It's easy to predict where other bots will be - just look at their paths. To keep things fast, you can implement this with a per-position count of the number of paths that use each position (treated like a refcount) - that way, the cost of finding a path is independent of the number of other paths.
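
Here's roughly what that bookkeeping could look like (a sketch with made-up names and falloff numbers, not the exact code from my toy):

from collections import defaultdict

congestion = defaultdict(float)   # extra A* cost per grid position
CONGESTION_COST = 5.0             # tuning knob: higher means wider berths
RADIUS = 3                        # penalty falls off to zero at 3 squares

def claim_path(path, sign=+1):
    # Splat a falloff penalty around every square the path uses, so that
    # looking the cost up later is O(1) no matter how many paths exist.
    for (x, z) in path:
        for dx in range(-RADIUS + 1, RADIUS):
            for dz in range(-RADIUS + 1, RADIUS):
                d = max(abs(dx), abs(dz))
                congestion[(x + dx, z + dz)] += (
                    sign * CONGESTION_COST * (RADIUS - d) / RADIUS)

def release_path(path):
    claim_path(path, sign=-1)     # drop the "refcount" when a path dies

def step_cost(pos):
    # What A* pays to enter pos: base movement cost plus predicted traffic.
    return 1.0 + congestion[pos]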

Here's my toy implementation: although the naive shortest path for all 4 bots would go through the same chokepoint, the bots now take paths that avoid each other (their 'avoidance radius' falls off at 3 squares, so they're giving each other a wide berth). If the cost were higher, they'd work harder to avoid each other - but some path intersection is unavoidable here.


This sort of system could be an improvement for style reasons, too - imagine a swat team spreading out to clear an apartment: although maybe they're ultimately going to the same place, you want them to explore as much as they can (kind of like a cheapo version of influence maps).

One complication is that paths become order-dependent. The first bot to path in an area ignores everyone else, the second bot only ignores the first, etc. Finding truly 'correct' paths in this case would need to be done simultaneously for everyone (but I don't see how that would even work! Any ideas?).

A less tricky implementation would be to only approximately track positions - by keeping a per-area refcount (if your game spaces are hierarchically partitioned), or only attaching the cost to doorways, or only tracking bend points on the path rather than every position, etc.

A more tricky version would be time-indexed - meaning you care about the specific *time* that each bot will arrive in each position. This makes things more correct - a bot would be happy walking in single-file behind other bots. However, time-indexing has a few caveats:

  • makes the bookkeeping more complicated, and makes the runtime slower (slightly)
  • requires your pathfinding system to accurately predict the movement speed of a bot

I tried to implement time-indexing, but it introduced weird kinks in the returned paths, where a path would intentionally waste time (running in a useless circle) to avoid a collision in the future - which makes sense in terms of pathfinding, but looks like a bug. I tried to fix this by adding a low-cost 'wait in place' action that bots can do (in addition to moving in their 8 directions), but that ended up being a giant can of worms - because it means A* needs to re-visit positions that are already in the closed set when it finds them at a lower cost.


Maybe a simpler alternative to time-indexing would be to tag each position with a velocity - that way you'd get more cooperative groups without all the complexity of real time-indexing. Ie. you're saying "you can use this door without trouble if you're going north".

Here's a paper that does something kind of similar, which got me thinking about all this: I think their videos show really impressive group dynamics, and I think a big part of what makes it look good is the planned-local-avoidance thing (more than the footstep-planning animation synthesis, although that's pretty cool too!).

Sunday, April 7, 2013

exploring more dimensions with A*

When you're running A*, you're searching over the edges and vertices of a graph, where the graph's vertices represent some kind of state, and the edges between vertices are the state changes. In pathfinding, the states are positions, and edges are the smallest movements you can do to get between neighboring positions, like "move +X", or "move +Z".
State = {x, z}
State Transitions = { +x, -x, +z, -z }


You end up with a path like this: grey tiles are blocked, white are unblocked, and red shows the path found by a pretty vanilla A*, going from upper right to lower left. Blue shows the nodes we visited during the search.


This can be kind of crappy for games where you want to model more complex mechanics of motion: characters in games can't really just "move +Z". For an example, imagine a car - it can take several seconds to accelerate up to full speed, or to make a 90 degree turn. If we build a shortest-length path for the car to follow, it'll have to drive really slowly to make all the tiny fiddly sharp turns (or in the worst case, be completely unable to make them at all). That might be the shortest-distance path, but it's not the shortest-duration path. And even in games with just human characters (which I work on), you can sometimes spot guys running at full speed trying to follow a shortest-distance path, which makes them look robotic. It can also cause bugs if we try to satisfy physical constraints about limited acceleration or turn speeds, and the path violates those constraints (eg. by making 90 degree turns around corners).

If we can more accurately model the state space, we'll get a more accurate shortest-duration path:
State = { x,  z, vel_x, vel_z }
State Transitions = { +vel_x, -vel_x, +vel_z, -vel_z }  
// the driver only controls acceleration, not position
Then if we also have an appropriate interface for A* to know about this more complex state (so get_neighbors(state) needs to return the result of acceleration in each direction, and heuristic_cost(state) takes velocity and acceleration into account to derive a minimum arrival time), we get something like this instead:


For this image, our dude can move at a top speed of 10 tiles/sec, but can only accelerate at 1 tile/sec^2 (which is a roughly car-ish ratio of top speed to acceleration).
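
Concretely, those two hooks might look something like this (a sketch with hypothetical names - blocked() stands in for whatever grid query your map provides):

TOP_SPEED = 10   # tiles/sec
ACCEL = 1        # tiles/sec^2

def get_neighbors(state):
    # The driver only controls acceleration; position integrates velocity.
    x, z, vx, vz = state
    neighbors = []
    for dvx, dvz in ((ACCEL, 0), (-ACCEL, 0), (0, ACCEL), (0, -ACCEL)):
        nvx, nvz = vx + dvx, vz + dvz
        if max(abs(nvx), abs(nvz)) > TOP_SPEED:
            continue  # can't accelerate past top speed
        nx, nz = x + nvx, z + nvz
        if not blocked(nx, nz):
            neighbors.append((nx, nz, nvx, nvz))
    return neighbors

def heuristic_cost(state, goal):
    # A loose but admissible minimum arrival time: you can't beat covering
    # the remaining distance at top speed. A tighter bound would account
    # for the current velocity and the acceleration limit.
    x, z, vx, vz = state
    gx, gz = goal
    return max(abs(gx - x), abs(gz - z)) / TOP_SPEED
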
What's cool about this is:
- it's indeed a shorter-duration path even though it's longer
- the driver avoids navigating around all the little obstacles
- you can see the faster path "swings wide" around that corner to be able to go faster
- you can see the explored region (in blue) finds the few quick paths through the obstacles

Neat! It makes me want to make an all-terrain driving obstacle course game. I think to be really accurate, you'd want to feed something close to the game's real rules/simulation into A* - so that it could explore the accurate consequences of its choices. In that case, the inputs for a car would probably be something like:
State Transitions = { rotate steering left, rotate steering right, more gas, less gas (or brake) }
(or whatever the inputs to a car are for your AIs).

The big downside, of course, is the cost of computing this stuff, since we're exploring in more dimensions. For the blue region shown on the map, in the worst case there can be up to 200 "visited" nodes overlapped on each tile position (because of how I quantized my velocity - I don't know how you'd do this with a more analog velocity space). Practically it wouldn't be so bad, but in my test code, paths were about 20x more expensive to find - and certain clever optimizations to A* (like jump point search) don't work anymore.

My pipedream, though, is to write a bot for "quake done quick" that has the player's inputs as the state space:
State Transitions = { w, a, s, d, jump, aim delta, fire, switch weapons }
But that sounds impossibly slow, right? If that bot could discover rocket jumping, I'd be ecstatic.

Saturday, July 28, 2012

Spelunky

Spelunky is difficult, but not in the style of technical platformers (VVVVVV or N+ or whatever). It’s difficult in the same way roguelikes are – lots of things will kill you, so you have to be attentive, and learn how the game mechanics can combine in unexpected ways. The levels are procedural, so there are no hand-crafted puzzles to solve. I feel like my deaths are always my fault, which is good in a game with permadeath and a life expectancy of a few minutes.

I can only play for about an hour at a time, though – even though I’m still enjoying playing, I can’t stay focused and cautious for that long :( 

Get it at http://spelunkyworld.com/

Sunday, August 14, 2011

graph optimization

Is there a well-known method to simplify a nav graph? From what I've seen, most people have a nav mesh (made of triangles), not a graph (made of edges). But assume you have a graph - is there a standard method to remove the edges that wouldn't significantly shorten the paths found on the graph? I don't even know what to search for on google - "graph simplification" turns up a bunch of weird topology papers.

I wrote up a little solution for it that I think works pretty well. Here's my sweet-ass visualization: http://i.imgur.com/XMOaM.png
It's a graph made of random edges in 3d. Black lines were rejected from the original graph, white lines were kept in the simplified graph.

The method is to keep candidate edges from the original graph only when they're shorter than some fraction of the shortest existing path between those vertices. Like building a minimum spanning tree, visit the candidates in length order.

The tricky part is you need to know those shortest path lengths (for all pairs of verts in the graph as they're added). To do that, you can propagate shortest-path updates each time you add an edge (a newly shortened path AB adds new potential shortened paths from A to each of B's neighbors and from B to each of A's neighbors too).
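
Here's a rough sketch of that loop in Python (a reconstruction using a simple all-pairs update rather than the neighbor-to-neighbor propagation described above; 'stretch' is the guarantee factor, so stretch=2.0 gives the <2x bound mentioned below):

import math
from itertools import product

def simplify(vertices, edges, stretch=2.0):
    # edges is a list of (length, u, v) tuples. Keep an edge only when it
    # beats the best existing path between u and v by the stretch factor.
    dist = {(a, b): (0.0 if a == b else math.inf)
            for a, b in product(vertices, repeat=2)}
    kept = []
    for length, u, v in sorted(edges):   # shortest candidates first
        if length * stretch < dist[(u, v)]:
            kept.append((length, u, v))
            # Propagate: any pair (a, b) might now route through u-v.
            # (A shortest path uses the new edge at most once, so one
            # pass over all pairs is enough.)
            for a, b in product(vertices, repeat=2):
                through = min(dist[(a, u)] + length + dist[(v, b)],
                              dist[(a, v)] + length + dist[(u, b)])
                if through < dist[(a, b)]:
                    dist[(a, b)] = dist[(b, a)] = through
    return kept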

All paths in the shrunk graph are guaranteed to be within an arbitrary threshold of optimal (if the threshold is strict enough, you get a minimum spanning tree, which is neat!). In the picture, it's a <2x optimal path-length guarantee.

Performance is weird, something like O(#verts * #edges)? The shortest-path-update ordering (stack vs. queue for the to-visit list) has a huge impact on performance, and I'm not really sure why :o It handles graphs up to about 500-1000 edges instantly, but barfs on bigger inputs. The part that could most be improved is the shortest-path propagation - depending on the edge add order, a lot of the path shrinking is wasted effort.

I think a further improvement would be to start with the minimum spanning tree (which is decently fast to build), compute its all-pairs shortest paths, and then start adding the remaining candidate edges. That skips the majority of the shortest-path updating, since the graph starts out already connected.

Let me know if you want me to post the code somewhere, it's pretty simple.