Reputation: 169
I'm looking for an algorithm that can search a 3D array and figure out the quickest way to pathfind from a single point to any edge of the array. More specifically, I am working on checking whether a hollow structure within a three-dimensional space is "airtight" and whether it would be possible for something inside this structure to escape through a hole.
Could anyone recommend good sources, documents, or articles that I could read about this stuff? I'm not looking for a specific copy/paste solution since I really want to understand how it works but my searches have been turning up pretty dry.
There's also a high chance that I'm approaching this from completely the wrong angle and actually need to change how I go about looking for information on this; if you have any recommendations, let me know!
Also, the maximum size of the 3D array will be [256, 256, 256], but the structures within will usually be much smaller. Furthermore, each element of the array is like an empty room with up to six walls, one on each side.
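For concreteness, a rough sketch of what one element looks like conceptually (illustrative only, not my actual type):

// Illustrative only: one element of the 3D array, an empty "room" that can
// have a wall on any of its six sides.
public struct Cell
{
    public bool WallWest, WallEast, WallDown, WallUp, WallSouth, WallNorth;
}

// Cell[,,] grid = new Cell[256, 256, 256]; // maximum size; structures are usually much smaller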
I've tried searching for some time, but most of my findings have been about "shortest path" or "path of least resistance", which is not what I need. I want the least computationally expensive way to check whether I can get from the inside of the structure to the outside, even if the route found is not the shortest path.
Update: Roundabout Solution
I did not find an actual answer to my question, but I did find a way to solve one of my problems, so I'm adding this as an edit rather than as an answer because I still want to know how to deal with the original question.
After some research, I discovered flood-fill algorithms, which gave me some ideas. The only problem was that most flood-fill implementations are recursive. While recursion was a fast and simple solution, it quickly caused a stack overflow and could not get anywhere near the depth I would potentially need (it managed a depth of about 18,000, but I needed > 256^3). I then discovered scan-line filling, which looked promising, but I had trouble understanding it and could only find 2D implementations for pixel-based graphics or 3D implementations for point-based models. After studying both, I managed to scrounge together my own 3D scan-line filler that can completely fill a 256^3 area in about 2 seconds.
A "block" is an element in the 3d array.
Whether or not it "isSealed" could better be about checking if that block has been iterated over yet.
The stack is used to queue up empty elements that need to be checked.
This is in c# btw.
private void MyScanLine3DFill(int x, int y, int z)
{
    Stack<Block> blocks = new Stack<Block>(); // stack of blocks still to process
    blocks.Push(grid[x, y, z]); // seed the stack with the specified starting position
    bool spanWest;
    bool spanEast;
    bool spanSouth;
    bool spanNorth;
    while (blocks.Count != 0) // loop as long as the stack is not empty
    {
        Block temp = blocks.Pop(); // get the block on top of the stack
        int y1 = temp.coords.Item2; // get the y coord of the block
        while (y1 >= 0 && grid[temp.coords.Item1, y1, temp.coords.Item3].isSealed == false) // go down the column until you hit a sealed block
        {
            y1--; // go down a block in the column
        }
        y1++; // step back up onto the lowest unsealed block
        spanWest = false; // reset the spans for this iteration of the loop
        spanEast = false;
        spanSouth = false;
        spanNorth = false;
        while (y1 < maxY && grid[temp.coords.Item1, y1, temp.coords.Item3].isSealed == false) // go up the column until you hit a sealed block
        {
            grid[temp.coords.Item1, y1, temp.coords.Item3].isSealed = true; // mark the current block as visited
            // check the west block
            if (!spanWest && temp.coords.Item1 > 0 && grid[temp.coords.Item1 - 1, y1, temp.coords.Item3].isSealed == false) // if there is an unsealed block to the west
            {
                blocks.Push(new Block(airBlock, (temp.coords.Item1 - 1, y1, temp.coords.Item3))); // add the unsealed block to the west to the stack
                spanWest = true;
            }
            else if (spanWest && temp.coords.Item1 > 0 && grid[temp.coords.Item1 - 1, y1, temp.coords.Item3].isSealed != false) // if there is a sealed block to the west
            {
                spanWest = false;
            }
            // check the east block
            if (!spanEast && temp.coords.Item1 < maxX - 1 && grid[temp.coords.Item1 + 1, y1, temp.coords.Item3].isSealed == false) // if there is an unsealed block to the east
            {
                blocks.Push(new Block(airBlock, (temp.coords.Item1 + 1, y1, temp.coords.Item3))); // add the unsealed block to the east to the stack
                spanEast = true;
            }
            else if (spanEast && temp.coords.Item1 < maxX - 1 && grid[temp.coords.Item1 + 1, y1, temp.coords.Item3].isSealed != false) // if there is a sealed block to the east
            {
                spanEast = false;
            }
            // check the south block
            if (!spanSouth && temp.coords.Item3 > 0 && grid[temp.coords.Item1, y1, temp.coords.Item3 - 1].isSealed == false) // if there is an unsealed block to the south
            {
                blocks.Push(new Block(airBlock, (temp.coords.Item1, y1, temp.coords.Item3 - 1))); // add the unsealed block to the south to the stack
                spanSouth = true;
            }
            else if (spanSouth && temp.coords.Item3 > 0 && grid[temp.coords.Item1, y1, temp.coords.Item3 - 1].isSealed != false) // if there is a sealed block to the south
            {
                spanSouth = false;
            }
            // check the north block
            if (!spanNorth && temp.coords.Item3 < maxZ - 1 && grid[temp.coords.Item1, y1, temp.coords.Item3 + 1].isSealed == false) // if there is an unsealed block to the north
            {
                blocks.Push(new Block(airBlock, (temp.coords.Item1, y1, temp.coords.Item3 + 1))); // add the unsealed block to the north to the stack
                spanNorth = true;
            }
            else if (spanNorth && temp.coords.Item3 < maxZ - 1 && grid[temp.coords.Item1, y1, temp.coords.Item3 + 1].isSealed != false) // if there is a sealed block to the north
            {
                spanNorth = false;
            }
            y1++; // go up a block
        }
    }
    // refresh mesh if applicable
}
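One hypothetical way to use this as the actual airtightness check (assuming a corner of the grid is always open air, and that interiorX/Y/Z is some cell known to be inside the cavity):

// Sketch only: flood from the corner, which is assumed to be outside every structure,
// then see whether the outside flood reached the known interior air cell.
MyScanLine3DFill(0, 0, 0);                                   // mark all air reachable from outside
bool leaks = grid[interiorX, interiorY, interiorZ].isSealed; // reached from outside => there is a hole
bool airtight = !leaks;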
Upvotes: 0
Views: 178
Reputation:
I don't think you can really beat the basic conceptual idea of a breadth-first or depth-first search in this case. The destination that terminates the search isn't known in advance, and you can't get too fancy with heuristics for an unknown destination; you pretty much have to search outward for the solution one step at a time.
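To make that concrete in the question's C# setting, here is a minimal sketch of the idea, assuming a plain bool[,,] occupancy grid where true means a cell is solid (CanEscape, solid and the other names are mine, purely for illustration):

using System.Collections.Generic;

// Sketch only: flood outward from a starting air cell and report whether the
// flood ever reaches a face of the array. Assumes the start cell is empty.
static bool CanEscape(bool[,,] solid, int sx, int sy, int sz)
{
    int maxX = solid.GetLength(0), maxY = solid.GetLength(1), maxZ = solid.GetLength(2);
    var visited = new bool[maxX, maxY, maxZ];
    var frontier = new Queue<(int x, int y, int z)>(); // a Stack<T> with Push/Pop gives DFS instead
    frontier.Enqueue((sx, sy, sz));
    visited[sx, sy, sz] = true;
    int[] dx = { 1, -1, 0, 0, 0, 0 };
    int[] dy = { 0, 0, 1, -1, 0, 0 };
    int[] dz = { 0, 0, 0, 0, 1, -1 };
    while (frontier.Count > 0)
    {
        var (x, y, z) = frontier.Dequeue();
        // Touching any face of the array means the structure is not airtight.
        if (x == 0 || y == 0 || z == 0 || x == maxX - 1 || y == maxY - 1 || z == maxZ - 1)
            return true;
        for (int i = 0; i < 6; i++)
        {
            int nx = x + dx[i], ny = y + dy[i], nz = z + dz[i];
            if (!visited[nx, ny, nz] && !solid[nx, ny, nz])
            {
                visited[nx, ny, nz] = true;
                frontier.Enqueue((nx, ny, nz));
            }
        }
    }
    return false; // the flood never reached an edge, so the region is sealed
}

Whether you use a Queue<T> (breadth-first) or a Stack<T> (depth-first) doesn't change the answer here, only the order in which cells get visited.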
A Potential Optimization
I was thinking about this problem a bit further the other day, and while this is still the same overall algorithmic idea, it should be considerably more efficient than testing one cell at a time for occupancy (getting closer to 256^2 tests than 256^3 tests).
The idea is to store and maintain three separate 3D occupancy bitsets: the first laid out so that consecutive bits run along X, the second transposed so that the next bit is the next element along Y, and the third along Z. Each needs only one bit per cell to indicate occupancy.
This way, instead of testing one cell at a time in your search, you can easily test 64 at a time or more on a 64-bit architecture (32 or more on 32-bit, and up to 512 at a time with SIMD). With a 256x256x256 grid, that should chew through the data extremely quickly. Using 256-bit registers or larger, you can test all the cells of a 1D slice of the grid along an axis, from a given point to the nearest occupied cell, with a single FFS (find first set bit) test, which is generally an efficient intrinsic on most hardware. Some handy code I use a lot:
#include <cstdint>

#if defined(_MSC_VER)
    #include <intrin.h> // _BitScanForward / _BitScanForward64
#endif

// Returns the index of the first set bit or 64 if there are
// no set bits.
int32_t ffs(uint64_t bits)
{
#if defined(_MSC_VER) && defined(_WIN64)
    // MSVC 64-bit.
    unsigned long index = 0;
    return _BitScanForward64(&index, bits) ? index : 64;
#elif defined(_MSC_VER) && defined(_WIN32)
    // MSVC 32-bit.
    unsigned long index = 0;
    if (_BitScanForward(&index, static_cast<unsigned long>(bits & 0xffffffff)))
        return index;
    else if (_BitScanForward(&index, static_cast<unsigned long>(bits >> 32)))
        return index + 32;
    return 64;
#else
    // GCC/Clang. __builtin_ffsll is 1-based and returns 0 when no bit is
    // set, so subtract 1 to match the 0-based convention above.
    return bits != 0 ? __builtin_ffsll(bits) - 1 : 64;
#endif
}

// Returns the index of the first set bit starting from the nth
// position or 64 if there are no set bits starting from
// that position.
int32_t ffs(uint64_t bits, int32_t n)
{
    if (n < 64)
    {
        // Clear bits below the nth position.
        bits &= ~((1ull << n) - 1);

        // Return next set bit.
        return ffs(bits);
    }
    return 64;
}
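To connect that back to the question's C#, here is a hypothetical sketch of the same trick using System.Numerics.BitOperations (needs .NET Core 3.0 or later); the packing of a 256-cell row along one axis into four ulong words is my assumption, not something prescribed above:

using System.Numerics; // BitOperations.TrailingZeroCount

// Hypothetical layout: a 256-cell row along one axis packed into 4 ulongs,
// with bit (x % 64) of word (x / 64) set when the cell at x is occupied.
// Returns the x of the first occupied cell at or after startX, or 256 if none,
// so one find-first-set call tests up to 64 cells at once.
static int FirstOccupied(ulong[] row, int startX)
{
    for (int w = startX / 64; w < row.Length; w++)
    {
        ulong bits = row[w];
        if (w == startX / 64)
            bits &= ~((1UL << (startX % 64)) - 1); // mask off cells below startX in the first word
        if (bits != 0)
            return w * 64 + BitOperations.TrailingZeroCount(bits);
    }
    return row.Length * 64; // no occupied cell from startX onward
}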
The rest is similar to your vanilla BFS/DFS; at least I haven't been able to think up a better solution.
Admittedly this is a very rough idea, and I'm one of those people who needs to dive in, code it, and profile it to catch the things I missed on the drawing board (which happens often) and to generate new algorithmic and optimization ideas. Still, I think it should be a good starting point.
If you're doing many of these tests per frame, you might just want to compute an entire distance map for the whole grid by pushing all the relevant nodes (i.e. every open edge cell) onto the processing list up front. That avoids a lot of overlapping/redundant work when finding the shortest distance to an edge from any given point on the grid. Precomputing a distance map for the entire grid in advance can be handy even when the elements in the grid move every frame, since it cuts down on the redundant work: it gives you the shortest path from any cell to a target in the minimal number of iterations (the number of iterations being equal to the shortest geodesic distance).
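A minimal sketch of that multi-source version, again in C# against an assumed bool[,,] occupancy grid (every name here is illustrative): one breadth-first pass seeded from all empty boundary cells labels every cell with its distance to the nearest open edge.

using System.Collections.Generic;

// Sketch only: distance (in steps) from every empty cell to the nearest open
// edge, via one multi-source BFS seeded from all empty boundary cells.
// int.MaxValue means solid or sealed off from the outside.
static int[,,] BuildEdgeDistanceMap(bool[,,] solid)
{
    int maxX = solid.GetLength(0), maxY = solid.GetLength(1), maxZ = solid.GetLength(2);
    var dist = new int[maxX, maxY, maxZ];
    var queue = new Queue<(int x, int y, int z)>();
    for (int x = 0; x < maxX; x++)
    for (int y = 0; y < maxY; y++)
    for (int z = 0; z < maxZ; z++)
    {
        dist[x, y, z] = int.MaxValue;
        bool onEdge = x == 0 || y == 0 || z == 0 ||
                      x == maxX - 1 || y == maxY - 1 || z == maxZ - 1;
        if (onEdge && !solid[x, y, z])
        {
            dist[x, y, z] = 0; // seed: empty boundary cell
            queue.Enqueue((x, y, z));
        }
    }
    int[] dx = { 1, -1, 0, 0, 0, 0 };
    int[] dy = { 0, 0, 1, -1, 0, 0 };
    int[] dz = { 0, 0, 0, 0, 1, -1 };
    while (queue.Count > 0)
    {
        var (x, y, z) = queue.Dequeue();
        for (int i = 0; i < 6; i++)
        {
            int nx = x + dx[i], ny = y + dy[i], nz = z + dz[i];
            if (nx < 0 || ny < 0 || nz < 0 || nx >= maxX || ny >= maxY || nz >= maxZ)
                continue;
            if (!solid[nx, ny, nz] && dist[nx, ny, nz] == int.MaxValue)
            {
                dist[nx, ny, nz] = dist[x, y, z] + 1;
                queue.Enqueue((nx, ny, nz));
            }
        }
    }
    return dist; // any cell with a finite distance can reach an edge
}

With that table built once, "can this cell reach the outside?" reduces to checking whether dist[x, y, z] is finite.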
A Little Bit Further On Computing Entire Distance Maps
In response to the comments: assuming you don't just do the pathfinding for a single cell/point of the grid per frame (even if it's only a few cells per frame), I think you'll quickly find that precomputing the distance map for the entire 256^3 grid outperforms the alternatives, even if you have to invalidate the map every single frame.
That's because the amount of memory you have to read and write to solve this problem, even in the best-case scenarios I can conceive, is quite large. In spite of my proposal to use bitsets, 256^3 is still a lot of cells, and on top of that you need to temporarily associate data with them in parallel, storing things like the shortest distance discovered so far in a given direction for the BFS or DFS. You'll still read a good chunk of that memory and write to a hefty chunk of parallel memory while looking for the nearest edge and keeping track of distances. So it's very redundant work for the hardware, even if it seems less redundant to humans, to throw away all the valuable data you computed and cached in L3, L2, possibly even L1, only to evict it and then read and write a good portion of it again by doing pathfinding a second time in the same frame.
So as a strategy, unless you're only doing one pathfinding operation per frame, I'd actually start by computing the entire distance map for the whole 256^3 grid to trivialize the individual queries, and make that precomputation the primary target for profiling and optimization. Only if you still need more performance would I break away from that and start computing things more on the fly for each individual pathfinding operation (ideally in a way you can easily throw away, e.g. on a version control branch, since you might well find it performs worse). I think that's the more productive path in most cases.
Upvotes: 1