Bugs, Mazes, and the Unreasonably Effective Brady's Algorithm
September 12 2025
Recently I was introduced to a game, similar to the Busy Beaver game, that really caught my interest. I'll be referring to it as the bug lab problem.
Bug lab was introduced to me by Discord user @savask through a Codeforces post. The essence of the problem boils down to the following: a bug moves through a maze one cell at a time, choosing each move by these rules.
First, it looks at its neighboring cells and finds the one it has visited the least; if there is no tie, it moves to that cell. If there is a tie, and one of the tied cells lies in the same direction it moved previously, it moves to that cell. If none of the tied cells are in the direction it moved previously, it breaks the tie by the following (decreasing) priority: Down, Right, Up, Left.
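The tie-breaking rule above is easy to get subtly wrong, so here is a minimal sketch of it in C. The direction encoding, the `visits` array convention, and the function name are my own assumptions for illustration, not from the original post:

```c
#include <limits.h>

/* Directions listed in decreasing tie-break priority: Down, Right, Up, Left.
   (Hypothetical encoding; the post does not fix one.) */
enum { DOWN, RIGHT, UP, LEFT, NDIRS };

/* Pick the bug's next direction.
   visits[d] -- times the bug has visited the neighbor in direction d,
                or INT_MAX if that neighbor is a wall / out of bounds.
   prev_dir  -- direction of the bug's previous move (-1 if none).
   Returns the chosen direction, or -1 if every neighbor is blocked. */
int choose_move(const int visits[NDIRS], int prev_dir) {
    int best = INT_MAX;
    for (int d = 0; d < NDIRS; d++)
        if (visits[d] < best) best = visits[d];
    if (best == INT_MAX) return -1;             /* boxed in */

    /* If the previous direction is among the least-visited, keep going
       that way. (This also covers a unique minimum in that direction.) */
    if (prev_dir >= 0 && visits[prev_dir] == best)
        return prev_dir;

    /* Otherwise fall back to priority order Down > Right > Up > Left,
       which is exactly the order of the enum. */
    for (int d = 0; d < NDIRS; d++)
        if (visits[d] == best) return d;
    return -1; /* unreachable */
}
```

Note that the "same direction as the previous move" check only matters when the minimum is tied; when the minimum is unique, the priority loop finds it anyway.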
If the maze is \(m \times n\), the bug starts in the top left corner, and has to reach the bottom right corner to escape, what is the maze configuration that takes the bug the longest to escape?
I like this problem because it reminds me a lot of the Busy Beaver problem. However, this is significantly easier while still being challenging.
The first way it's easier is that it's provably computable. If a path exists to the exit, the bug will always eventually find it. And if no path exists, that is provable by running a DFS starting from the top left cell.
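That solvability check can be sketched as a plain DFS over the grid. This is a minimal version under my own assumptions (the fixed `MAXN` bound and the `wall`/`seen` array layout are mine, not the author's):

```c
#include <stdbool.h>

#define MAXN 32  /* assumed maximum side length for this sketch */

/* Depth-first search from (r, c); returns true if the exit at
   (rows-1, cols-1) is reachable. wall[r][c] marks walls; seen[][]
   must be zeroed by the caller. */
static bool dfs(int r, int c, int rows, int cols,
                bool wall[MAXN][MAXN], bool seen[MAXN][MAXN]) {
    if (r < 0 || c < 0 || r >= rows || c >= cols) return false;
    if (wall[r][c] || seen[r][c]) return false;
    if (r == rows - 1 && c == cols - 1) return true;
    seen[r][c] = true;
    return dfs(r + 1, c, rows, cols, wall, seen)
        || dfs(r, c + 1, rows, cols, wall, seen)
        || dfs(r - 1, c, rows, cols, wall, seen)
        || dfs(r, c - 1, rows, cols, wall, seen);
}

/* A maze is solvable iff the exit is reachable from the top-left cell. */
bool solvable(int rows, int cols, bool wall[MAXN][MAXN]) {
    bool seen[MAXN][MAXN] = {{false}};
    return dfs(0, 0, rows, cols, wall, seen);
}
```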
So theoretically, solving the problem is a matter of running an algorithm across all \(2^{mn - 2}\) configurations. However, this would, of course, take forever as soon as \(mn\) gets large. We need a way to reduce the search space.
That's where what is quickly becoming my favorite algorithm comes in!
Brady's Algorithm
Brady's Algorithm was initially conceived by Allen Brady as a way to enumerate the search space of Turing Machines for the Busy Beaver problem. It seems to be a member of a broader approach to finding solutions to these kinds of problems, but I don't know if said approach has a name. As such, I will refer to it as Brady's Algorithm. I will explain how it works in the context of this game.
Consider the following maze:
Where "#" = wall, " " = empty cell, "-" = undefined cell, and "*" = start or end cell. There are a lot of undefined cells, which means it doesn't matter what they are: the bug will never interact with them. If we were brute-forcing, we'd separately check all \(2^{24}\) configurations of those undefined cells, even though every one of them produces this exact path. Is there a way to eliminate these duplicates?
That's exactly what Brady's Algorithm does. The essence of Brady's Algorithm is that we start with a completely undefined grid of cells, and each time the bug enters an undefined cell, we fork the maze into two copies: one where the cell is empty, and one where it is a wall. Then we run the simulation on each copy until it hits another undefined cell, and repeat. If an unsolvable maze is ever created, we eliminate it. The algorithm finishes once every configuration has either reached the end or been eliminated for being unsolvable.
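The core loop is short enough to sketch as a C skeleton. Everything here is hypothetical scaffolding (`struct maze`, `run_until_undefined`, `is_solvable`, `record_steps` are assumed helpers, not real code from the post); it only shows the fork-on-undefined-cell structure:

```c
/* Cell states in the partially defined maze. */
enum cell_state { UNDEF, EMPTY, WALL };

/* Recursive skeleton of Brady's Algorithm. run_until_undefined() simulates
   the bug until it either escapes or steps onto an UNDEF cell at (r, c). */
void brady(struct maze *m) {
    int r, c;
    if (run_until_undefined(m, &r, &c) == REACHED_EXIT) {
        record_steps(m);            /* candidate champion: record its length */
        return;
    }

    /* Branch 1: the undefined cell is empty. */
    m->cell[r][c] = EMPTY;
    brady(m);

    /* Branch 2: the undefined cell is a wall. Only recurse if the maze is
       still solvable (a DFS treating remaining UNDEF cells as empty). */
    m->cell[r][c] = WALL;
    if (is_solvable(m))
        brady(m);

    m->cell[r][c] = UNDEF;          /* undo the choice for the caller */
}
```

Because a cell is only ever defined at the moment the bug first touches it, each leaf of this recursion corresponds to a distinct bug path, never to a duplicate filling of cells the bug ignores.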
For such a simple algorithm, it is remarkably effective. In fact I believe this is probably the most efficient possible way to enumerate every unique maze solution.
I implemented this in C, and used it to solve the problem for \(7 \times 7\) grids. The naive brute-force solution would have required checking \(2^{47}\) possibilities, which is far too many for my computer to handle. Assuming a configuration takes 1 microsecond to check, I would have been waiting for roughly 4 and a half years! Instead, finding the champion only took an afternoon. That's with a suboptimal implementation, too; there's a lot of room for improvement in my code (like not running a DFS every time an undefined cell is filled in).
The Champions
The following are the champions for each \(n \times n\) grid:
| Grid Size | Grid | Number of steps |
|---|---|---|
| \(3 \times 3\) | (image) | 8 |
| \(4 \times 4\) | (image) | 20 |
| \(5 \times 5\) | (image) | 42 |
| \(6 \times 6\) | (image) | 96 |
| \(7 \times 7\) | (image) | 218 |
Note that the outer walls are excluded from these grids; they're there implicitly. This doesn't change the step counts. The ending cell is also left undefined because my code terminates the simulation upon reaching it, so it never gets defined.
\(8 \times 8\) will take much longer to solve. I've left my code running for a while to find some long-running configurations, but running it to completion will likely require further optimizations. As of writing, the best my code has found is \(346\), but Discord user @dyuan has found a configuration running for \(468\) steps! Given that the record seems to be growing exponentially, I expect \(8 \times 8\) to be in the ballpark of 400 to 500.
Future Work on the Problem
I want to try parallelizing the work so enumeration can be spread across several computers at once. I also need to optimize the code that checks whether a given maze is solvable; right now it runs a worst-case \(O(n)\) search, where \(n\) is the number of cells, every time a wall is placed.
If you have any ideas for heuristics, symmetries, etc. that eliminate paths which need to be checked as potential champions, that would also be very helpful! I'm always open to more ideas.
Discord user @dyuan is working on finding an upper bound for the maximum number of steps for a given grid, and others are trying to find good solutions to large grids (like the original \(29 \times 19\) grid on Codeforces) using heuristic models.
Brady's Algorithm is Really Powerful
The main thing I want to highlight with this article is the fact that Brady's Algorithm is really powerful, and likely generalizes far beyond the domain it was conceived in. The problem of enumerating solution spaces and finding solutions with specific properties is a common one in all sorts of areas.
It seems applicable to enumerating the solution space of an NP-complete problem. I could imagine a similar "evaluate as you go" approach being used to find solutions to a SAT instance.
In fact, as far as I can tell, this is roughly how SAT solvers work, just with a lot more heuristics for speedups. I admittedly don't have a ton of knowledge of SAT solving, but from what I've read this is roughly how the DPLL algorithm works.
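To make the analogy concrete, here is a stripped-down backtracking search in the DPLL spirit: assign variables one at a time, evaluate clauses under the partial assignment as you go, and prune any branch that has already falsified a clause. This is my own toy sketch (real DPLL also does unit propagation and pure-literal elimination, which are omitted here):

```c
#include <stdbool.h>

#define MAXVARS 16

/* A clause is a list of literals: +v means "variable v is true",
   -v means "variable v is false". Variables are numbered from 1. */
struct clause { int lits[8]; int n; };

/* assign[v]: 0 = unassigned, 1 = true, -1 = false. */
static bool dpll(struct clause *cls, int ncls,
                 int assign[MAXVARS + 1], int nvars) {
    /* Evaluate as we go: check every clause under the partial assignment. */
    bool all_sat = true;
    for (int i = 0; i < ncls; i++) {
        bool sat = false, undecided = false;
        for (int j = 0; j < cls[i].n; j++) {
            int lit = cls[i].lits[j], v = lit > 0 ? lit : -lit;
            if (assign[v] == 0) undecided = true;
            else if (assign[v] == (lit > 0 ? 1 : -1)) { sat = true; break; }
        }
        if (sat) continue;
        if (!undecided) return false;  /* clause falsified: prune branch */
        all_sat = false;
    }
    if (all_sat) return true;          /* every clause already satisfied */

    /* Branch on the first unassigned variable, like filling an undefined
       maze cell with both possibilities. */
    for (int v = 1; v <= nvars; v++) {
        if (assign[v] == 0) {
            assign[v] = 1;
            if (dpll(cls, ncls, assign, nvars)) return true;
            assign[v] = -1;
            if (dpll(cls, ncls, assign, nvars)) return true;
            assign[v] = 0;             /* undo for the caller */
            return false;
        }
    }
    return false; /* unreachable: full assignments are decided above */
}
```

The structural parallel to the maze search is the branch-both-ways-then-undo step: an undefined cell becoming empty or a wall plays the same role as an unassigned variable becoming true or false, and the unsolvability DFS plays the same role as the falsified-clause check.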
It's just an interesting connection I wanted to mention. I think this is a really cool, underappreciated method for tackling hard problems.