Reputation: 571
Say there are n 3-dimensional objects (polyhedra). Is O(n^2) the fastest way to find all intersections among them?
Right now, I'm using a library that essentially forces T(n) to equal n^2:
    for each object:                      // there are n objects
        get list of intersecting objects  // this takes n steps
That is literally n^2 steps. The only faster way I can think of is still O(n^2), but with T(n) = n(n-1)/2 = (n^2 - n)/2, since each unordered pair is checked exactly once.
Here's pseudocode:
    for (int i = 0; i < len(objects) - 1; i++) {
        for (int j = i + 1; j < len(objects); j++) {
            if (objects[i] intersects objects[j]) {
                objects[i].addToIntersectList(objects[j]);
                objects[j].addToIntersectList(objects[i]);
            }
        }
    }
This way no pair of objects is checked twice. There are about 3000 objects in the list I am iterating through, so this algorithm takes 3000 * 2999 / 2 = 4,498,500 steps, whereas the original algorithm takes nearly double that: 9,000,000 steps.
Am I missing something? Is this the best we can get?
Upvotes: 4
Views: 3305
Reputation: 4080
You can put all the objects in a 3D tree structure. Then you only need to check pairs that are 'close' to each other in some sense.
This is a typical way to store spatial information. For example, neo4j spatial has a k-d tree under the hood for storing locations and answering 'near to' style queries.
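As one concrete illustration of the broad-phase idea, here is a minimal sketch using a uniform grid instead of a tree (the Box type, cellSize parameter, and candidatePairs function are illustrative assumptions, not any particular library's API):

    // Broad-phase sketch: hash each object into every grid cell its
    // bounding box touches, then report only pairs that share a cell
    // as candidates for the expensive polyhedron-vs-polyhedron test.
    // Box, cellSize, and candidatePairs are assumed names for this sketch.
    #include <cmath>
    #include <cstdint>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    struct Box { double minX, minY, minZ, maxX, maxY, maxZ; };

    std::vector<std::pair<int, int>> candidatePairs(const std::vector<Box>& boxes,
                                                    double cellSize) {
        std::unordered_map<int64_t, std::vector<int>> grid;
        auto key = [](int64_t x, int64_t y, int64_t z) {
            return x * 73856093 ^ y * 19349663 ^ z * 83492791;  // classic spatial hash
        };
        auto cell = [&](double v) { return (int64_t)std::floor(v / cellSize); };
        for (int i = 0; i < (int)boxes.size(); ++i) {
            const Box& b = boxes[i];
            for (int64_t x = cell(b.minX); x <= cell(b.maxX); ++x)
                for (int64_t y = cell(b.minY); y <= cell(b.maxY); ++y)
                    for (int64_t z = cell(b.minZ); z <= cell(b.maxZ); ++z)
                        grid[key(x, y, z)].push_back(i);
        }
        std::vector<std::pair<int, int>> pairs;
        for (const auto& entry : grid) {
            const std::vector<int>& ids = entry.second;
            for (size_t a = 0; a + 1 < ids.size(); ++a)
                for (size_t b = a + 1; b < ids.size(); ++b)
                    pairs.emplace_back(ids[a], ids[b]);  // may repeat a pair if
        }                                                // two boxes share cells
        return pairs;
    }

With roughly uniform object sizes and a cell size near the typical object diameter, each cell holds only a handful of objects, so the candidate set is far smaller than all n(n-1)/2 pairs.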
Upvotes: 0
Reputation: 56755
Believe it or not, I have effectively already answered this question on Stack Overflow, here: https://stackoverflow.com/a/19172550/109122. The question and answer there are about polygons (2D), but the approach should work equally well for polyhedra (3D).
The question there also makes reference to what is believed to be the fastest algorithm, the Shamos-Hoey sweep-line technique, which is O(n log n). You could research and implement that, but it is quite complicated.
The much simpler algorithm I demonstrated in my answer has performance that depends heavily on how "well-behaved" the shapes and their positioning are. The wilder the shapes and the denser their packing/overlapping, the worse it performs. However, with well-behaved shapes (i.e., convex, or mostly so), few intersections, and packing that is not too dense, I found it to perform better than O(n sqrt(n)). The code and discussion there are mostly about the line segments within two polygons, but this generalizes to the polygons themselves.
The big advantage of my approach for your case, aside from its relative simplicity, is that it works independently of whatever function you use to detect overlap between two polygons. It simply replaces your doubly nested loop with a more elaborate series of pre-tests.
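The linked answer has the exact pre-tests; as one concrete illustration of a sorting-based pre-test in the same spirit (not the linked answer's code), here is a minimal one-axis sort-and-sweep, where Box, sortAndSweep, and the onCandidate callback are assumed names:

    // Sort objects by the left end of their x-interval; for each object,
    // scan forward only while the next interval still starts inside this
    // one's extent. Pairs that cannot overlap on x are never handed to
    // the expensive intersection test. Box/sortAndSweep/onCandidate are
    // illustrative names for this sketch.
    #include <algorithm>
    #include <functional>
    #include <vector>

    struct Box { double minX, maxX; };  // x-extent only; a real version keeps y/z too

    void sortAndSweep(const std::vector<Box>& boxes,
                      const std::function<void(int, int)>& onCandidate) {
        std::vector<int> order(boxes.size());
        for (int i = 0; i < (int)order.size(); ++i) order[i] = i;
        std::sort(order.begin(), order.end(), [&](int a, int b) {
            return boxes[a].minX < boxes[b].minX;
        });
        for (size_t i = 0; i < order.size(); ++i) {
            for (size_t j = i + 1; j < order.size(); ++j) {
                if (boxes[order[j]].minX > boxes[order[i]].maxX)
                    break;  // everything later starts even further right
                onCandidate(order[i], order[j]);
            }
        }
    }

The sort is O(n log n), and the inner scan only visits pairs whose x-extents actually overlap, which is what makes the behavior so sensitive to how densely the shapes are packed.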
Upvotes: 3
Reputation: 4175
While there are a few ways to improve on the O(n²) loop structure itself, the most significant gains come from changing other things about the way you do your collision checking.
One of the main inefficiencies in your code is that it runs the full intersection test for every pair of polyhedra, which is frequently unnecessary. You don't need an expensive intersection test if two shapes aren't even close together, and if you have two clusters of shapes separated by a vast expanse of empty space, you don't need to check members of one cluster against members of the other at all. Some techniques for performing optimizations of this sort include (a sketch of the first one follows below):
- Bounding-volume pre-tests: wrap each polyhedron in an axis-aligned bounding box or bounding sphere and run the expensive test only when the cheap volumes overlap.
- Spatial partitioning: divide space with a uniform grid, octree, or similar structure, so that only objects in the same or neighboring regions are ever compared.
- Sweep and prune: sort objects by one coordinate of their bounding volumes and skip pairs whose extents cannot overlap on that axis.
You can use techniques like these to dramatically speed up a collision search.
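A minimal sketch of the bounding-box pre-test, assuming a Box type and a boxesOverlap helper (illustrative names, not from any particular library):

    // Two axis-aligned boxes overlap iff their extents overlap on all
    // three axes. Box and boxesOverlap are assumed names for this sketch.
    struct Box { double minX, minY, minZ, maxX, maxY, maxZ; };

    bool boxesOverlap(const Box& a, const Box& b) {
        return a.minX <= b.maxX && b.minX <= a.maxX &&
               a.minY <= b.maxY && b.minY <= a.maxY &&
               a.minZ <= b.maxZ && b.minZ <= a.maxZ;
    }

Inside the pairwise loop from the question, a single cheap check like `if (!boxesOverlap(bounds[i], bounds[j])) continue;` before the real intersection test lets most pairs be rejected in a few comparisons.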
Upvotes: 3