Steve Bennett

Reputation: 126907

Simple, somewhat efficient algorithm for finding intersections between one series of line segments and others?

I've had a look at the other algorithms for finding intersections between many-to-many line segments, but:

  1. the fast ones are a bit too complicated (binary search trees)
  2. my situation is slightly different (one to many, and the segments are arranged into sequences)

I have a number of "ways", each of which is a series of line segments: [(x0,y0)-(x1,y1)], [(x1,y1)-(x2,y2)], ... I'd like to find potential intersections between one of these ways and all the others.
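
For concreteness, a way is essentially an ordered list of points whose consecutive pairs form the segments. Roughly (sketched in TypeScript here, though my actual code is ActionScript):

    // A "way" is an ordered list of points; consecutive pairs form its segments.
    type Point = { x: number; y: number };
    type Segment = { a: Point; b: Point };

    function segmentsOfWay(points: Point[]): Segment[] {
        const segs: Segment[] = [];
        for (let i = 0; i + 1 < points.length; i++) {
            segs.push({ a: points[i], b: points[i + 1] });
        }
        return segs;
    }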

Is there an algorithm that would be better than blindly testing every segment in my test way against every segment of every other way? I think maintaining a binary search tree is too complicated for this application, though; if I can get away without maintaining any data structure at all, that would be good too.

For this application it's probably OK if a few false negatives are returned in unusual situations. (The goal is to save a human the effort of locating intersections manually, so a few misses are acceptable.) The language is ActionScript.

Upvotes: 0

Views: 1855

Answers (1)

Ze Blob

Reputation: 2957

There are two ways that I know of to find intersections between line segments.

The first method is to use the Bentley-Ottmann algorithm to find all the intersections, then filter out the segment pairs you don't want (the ones from the same set). Note that supporting all the edge cases of this algorithm is quite difficult, and I wouldn't recommend it. I also had difficulty finding existing implementations of Bentley-Ottmann; I know of only one that deals with the edge cases.
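
For example, assuming the algorithm reports each intersection along with the ids of the ways the two crossing segments came from (a hypothetical output shape, purely for illustration), the filtering step is trivial:

    // Hypothetical shape of one intersection reported by Bentley-Ottmann.
    type Crossing = { x: number; y: number; wayA: number; wayB: number };

    // Keep only crossings involving the way of interest and a *different* way.
    function crossingsWithWay(all: Crossing[], wayId: number): Crossing[] {
        return all.filter(c =>
            (c.wayA === wayId || c.wayB === wayId) && c.wayA !== c.wayB);
    }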

The other method is to use two R-Trees (is this your binary search tree?) to index each of your series of segments. You can then iterate through all the segments of one series and look up the set of segments of the other series that lie nearby. With this hopefully much-reduced set of segments, you can then brute-force your intersection search. Depending on your dataset, this can either closely match the performance of Bentley-Ottmann or behave exactly like the brute-force approach. Another downside is that if you have to move your segments around, you have to update your indexes. On the other hand, I find it easier to deal with the edge cases this way, and R-Tree implementations should be easier to find or to implement yourself.
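
A rough sketch of that idea in TypeScript, against a hypothetical R-Tree interface (the names are mine, standing in for any spatial index that offers bounding-box insert and search). It only narrows things down to candidate pairs; each candidate is then checked with the exact per-pair test from the brute-force sketch further down:

    // Hypothetical R-Tree interface: any spatial index with bounding-box
    // insert and search will do. Names are illustrative, not a real library.
    type BBox = { minX: number; minY: number; maxX: number; maxY: number };
    interface RTree<T> {
        insert(box: BBox, item: T): void;
        search(box: BBox): T[];   // items whose boxes overlap the query box
    }

    // Same Point/Segment shapes as in the question's sketch.
    type Point = { x: number; y: number };
    type Segment = { a: Point; b: Point };

    function bboxOf(s: Segment): BBox {
        return {
            minX: Math.min(s.a.x, s.b.x), minY: Math.min(s.a.y, s.b.y),
            maxX: Math.max(s.a.x, s.b.x), maxY: Math.max(s.a.y, s.b.y),
        };
    }

    // Index every segment of the *other* ways once.
    function buildIndex(otherWays: Segment[][], tree: RTree<Segment>): void {
        for (const way of otherWays)
            for (const s of way)
                tree.insert(bboxOf(s), s);
    }

    // For each segment of the test way, only segments whose bounding boxes
    // overlap it are worth testing exactly.
    function candidatePairs(testWay: Segment[], tree: RTree<Segment>): [Segment, Segment][] {
        const pairs: [Segment, Segment][] = [];
        for (const s of testWay)
            for (const t of tree.search(bboxOf(s)))
                pairs.push([s, t]);
        return pairs;
    }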

I still recommend you try the brute-force approach before attempting either of these. It shouldn't take long to write, and most of the code will still be useful if you implement the other two methods. You might also find that it's fast enough to avoid the inevitable headaches that come with the more complicated methods.
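
For reference, a minimal brute-force sketch (TypeScript again, re-declaring the same Point/Segment shapes). The per-pair test is the standard orientation-based segment intersection check; note that it counts touching end-points as intersections, which you may or may not want:

    // Same Point/Segment shapes as in the question's sketch.
    type Point = { x: number; y: number };
    type Segment = { a: Point; b: Point };

    // Sign of the cross product (q - p) x (r - p):
    // > 0 counter-clockwise turn p->q->r, < 0 clockwise, 0 collinear.
    function orient(p: Point, q: Point, r: Point): number {
        return (q.x - p.x) * (r.y - p.y) - (q.y - p.y) * (r.x - p.x);
    }

    // Assuming p, q, r are collinear: does q lie within the box spanned by p and r?
    function onSegment(p: Point, q: Point, r: Point): boolean {
        return Math.min(p.x, r.x) <= q.x && q.x <= Math.max(p.x, r.x) &&
               Math.min(p.y, r.y) <= q.y && q.y <= Math.max(p.y, r.y);
    }

    function segmentsIntersect(s: Segment, t: Segment): boolean {
        const d1 = orient(t.a, t.b, s.a);
        const d2 = orient(t.a, t.b, s.b);
        const d3 = orient(s.a, s.b, t.a);
        const d4 = orient(s.a, s.b, t.b);
        // Proper crossing: each segment's ends lie on opposite sides of the other.
        if (((d1 > 0 && d2 < 0) || (d1 < 0 && d2 > 0)) &&
            ((d3 > 0 && d4 < 0) || (d3 < 0 && d4 > 0))) return true;
        // Collinear / touching cases (an end-point lies exactly on the other segment).
        if (d1 === 0 && onSegment(t.a, s.a, t.b)) return true;
        if (d2 === 0 && onSegment(t.a, s.b, t.b)) return true;
        if (d3 === 0 && onSegment(s.a, t.a, s.b)) return true;
        if (d4 === 0 && onSegment(s.a, t.b, s.b)) return true;
        return false;
    }

    // Every segment of the test way against every segment of every other way.
    function bruteForce(testWay: Segment[], otherWays: Segment[][]): [Segment, Segment][] {
        const hits: [Segment, Segment][] = [];
        for (const s of testWay)
            for (const way of otherWays)
                for (const t of way)
                    if (segmentsIntersect(s, t)) hits.push([s, t]);
        return hits;
    }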


Small update to answer Steven's comment.

When I implemented my intersection-finding algorithm, I had to optimise a process that dealt with OSM data as well. Most of my performance tests were done using the city of London, which contains a lot of segments and intersections (I don't remember the actual numbers, sorry). My intersection-finding algorithm also had to handle every edge case (degenerate segments, overlapping segments, horizontal and vertical segments, and segments intersecting at their end-points) and had to cope with a constantly changing dataset.

The process that I was optimising could call the intersection finding algorithm hundreds of thousands of times while constantly modifying the dataset it was working on. It also did quite a few other things besides intersection searches. Using the naive approach, this took around 6 hours to execute.

I initially started working on a B-O implementation based on a paper (search for "A robust and efficient implementation of a sweep line algorithm for the straight line segment intersection problem"). Sadly it was quite tricky to get right (weird logic and numerical-stability problems) and I ended up having to scrap it. Looking back, I should have tried to open-source it, but it's a little too late now.

Anyway, I then tried the R-Tree approach, which worked wonderfully well and brought that colossal 6 hours down to a mere 30 minutes, most of which was spent elsewhere. This was despite having to constantly update the segments in the R-Tree.

So yes, it's worth it if your dataset is big enough or if you do enough searches. I still heavily recommend trying the brute-force approach first and, if it's not fast enough, upgrading to an R-Tree.

Upvotes: 5
