Reputation: 3467
My plan is to extract information from a floor plan drawn on paper. I have already managed to detect 70-80% of the drawn doors:
Now I want to create a data model from the walls. I already managed to extract them as you can see here:
From that I created the contours:
My idea now was to get the intersections of the lines from that image and create a data model from those. However, if I use the HoughLines algorithm I get something like this:
Does somebody have a different idea of how to get the intersections, or another approach to building a model? That would be very nice.
PS: I am using JavaCV, but an algorithm in OpenCV would also be fine, as I could translate it.
Upvotes: 12
Views: 6014
Reputation: 4825
It strikes me that what you really want is not necessarily walls, but rather rooms - which are incidentally bounded by walls.
Moreover, while your "wall" data looks rather noisy (there are lots of small sections that could be confused for tiny rooms), your "room" data isn't (there aren't many phantom walls in the middle of rooms).
Therefore, it may be beneficial to detect rooms (approximately axis-aligned rectangles that don't contain white pixels over a certain threshold), and extrapolate walls by looking at the boundary between nearby pixels.
I would implement this in three phases: first, try to detect a few principal axes from the output of HoughLines (I would first reach for a k-means clustering algorithm, then massage the output to get perpendicular axes). Use this data to better align the image.
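A rough, dependency-free sketch of that first phase (the names Segment and dominantAxisDeg are my own, not OpenCV's): fold the segment angles modulo 90 degrees so that perpendicular walls vote for the same bin, and take the length-weighted peak as the rotation correction. This is a simpler stand-in for full k-means, assuming the segments come in (x1, y1, x2, y2) form as HoughLinesP would produce:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Segment layout (x1, y1, x2, y2) mirrors cv::Vec4i.
struct Segment { double x1, y1, x2, y2; };

// Estimate the dominant wall axis, in degrees within [0, 90).
// Rotating the image by the negative of this angle axis-aligns the walls.
double dominantAxisDeg(const std::vector<Segment>& segs) {
    const double rad2deg = 180.0 / std::acos(-1.0);
    std::vector<double> votes(90, 0.0);   // one-degree histogram
    for (const Segment& s : segs) {
        double dx = s.x2 - s.x1, dy = s.y2 - s.y1;
        double ang = std::atan2(dy, dx) * rad2deg;            // (-180, 180]
        double folded = std::fmod(std::fmod(ang, 90.0) + 90.0, 90.0);
        double len = std::hypot(dx, dy);  // weight long walls more heavily
        votes[static_cast<int>(folded) % 90] += len;
    }
    int best = 0;
    for (int i = 1; i < 90; ++i)
        if (votes[i] > votes[best]) best = i;
    return best;
}
```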
Second, begin seeding small rectangles randomly about the image, in black areas. "Grow" these rectangles in all directions until each side hits a white pixel over a certain threshold, or they run into another rectangle. Continue seeding until a large percentage of the area of the image is covered.
Third, find areas (also rectangles, hopefully) not covered by rectangles, and collapse them into lines:
There are a few drawbacks to this approach:
I apologize for not including any code snippets - but I thought it more important to convey the idea, rather than the details (please comment if you'd like me to expand on any of it). Also note, that while I played around with opencv a few years ago, I'm by no means an expert - so it may already have some primitives to do some of this for you.
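For what it's worth, here is one way the rectangle-growing phase could be sketched in plain C++ (no OpenCV; growRect, rowClear, and colClear are made-up names), assuming the plan is a binary grid where nonzero pixels are wall. Each seed rectangle expands one row or column at a time in every direction until the next strip would touch a wall pixel or the image border:

```cpp
#include <cassert>
#include <vector>

struct Rect { int x0, y0, x1, y1; };  // inclusive bounds

using Grid = std::vector<std::vector<unsigned char>>;

// True if row y, columns x0..x1, is inside the grid and free of wall pixels.
static bool rowClear(const Grid& g, int y, int x0, int x1) {
    if (y < 0 || y >= (int)g.size()) return false;
    for (int x = x0; x <= x1; ++x) if (g[y][x] != 0) return false;
    return true;
}

// True if column x, rows y0..y1, is inside the grid and free of wall pixels.
static bool colClear(const Grid& g, int x, int y0, int y1) {
    if (x < 0 || x >= (int)g[0].size()) return false;
    for (int y = y0; y <= y1; ++y) if (g[y][x] != 0) return false;
    return true;
}

// Grow a 1x1 seed at (sx, sy) until every side is blocked.
Rect growRect(const Grid& g, int sx, int sy) {
    Rect r{sx, sy, sx, sy};
    bool grew = true;
    while (grew) {
        grew = false;
        if (colClear(g, r.x0 - 1, r.y0, r.y1)) { --r.x0; grew = true; }
        if (colClear(g, r.x1 + 1, r.y0, r.y1)) { ++r.x1; grew = true; }
        if (rowClear(g, r.y0 - 1, r.x0, r.x1)) { --r.y0; grew = true; }
        if (rowClear(g, r.y1 + 1, r.x0, r.x1)) { ++r.y1; grew = true; }
    }
    return r;
}
```

A real implementation would also need the "stop at another rectangle" rule and a per-side noise threshold, but the expansion loop stays the same shape.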
Upvotes: 1
Reputation: 590
I am just throwing out an idea here, but you could start by thresholding the original image (which might produce interesting results, since your drawings are on white paper). Then, by performing a region-growing segmentation on the binary image, you will probably end up with the rooms segmented from each other and from the background (a criterion to distinguish rooms from background might be area similarity). From that, you should be able to build different models as required by your problem: for instance, the relative position of rooms, their areas, or even their composition (i.e. the whole floor plan contains big rooms, which contain smaller ones, and so on).
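The threshold-then-segment pipeline could look roughly like this OpenCV-free sketch (labelRegions is a hypothetical name; with OpenCV the equivalent steps would be cv::threshold followed by cv::connectedComponents). Pixels below the threshold count as ink, and a 4-connected flood fill labels each free region as a room candidate:

```cpp
#include <cassert>
#include <queue>
#include <utility>
#include <vector>

using Image = std::vector<std::vector<int>>;

// Labels every connected free region (pixel value >= thresh) with a
// distinct positive id; ink pixels keep label 0. Returns the region count.
int labelRegions(const Image& gray, int thresh, Image& labels) {
    int h = gray.size(), w = gray[0].size(), next = 0;
    labels.assign(h, std::vector<int>(w, 0));
    for (int sy = 0; sy < h; ++sy)
        for (int sx = 0; sx < w; ++sx) {
            if (gray[sy][sx] < thresh || labels[sy][sx] != 0) continue;
            ++next;                           // start a new region
            std::queue<std::pair<int, int>> q;
            q.push({sx, sy});
            labels[sy][sx] = next;
            while (!q.empty()) {              // 4-connected flood fill
                auto [x, y] = q.front(); q.pop();
                const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
                for (int k = 0; k < 4; ++k) {
                    int nx = x + dx[k], ny = y + dy[k];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (gray[ny][nx] < thresh || labels[ny][nx] != 0) continue;
                    labels[ny][nx] = next;
                    q.push({nx, ny});
                }
            }
        }
    return next;
}
```

Filtering the resulting labels by area would then separate the background component from actual rooms, as suggested above.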
Upvotes: 0
Reputation: 1132
Try dilating the lines from either the Hough transform image or the original contour image by 1 pixel. You can do this by drawing the lines with a thickness of 2 or 3 (if you used the Hough transform to get the lines), or you can dilate them manually with the code below.
void dilate_one(cv::Mat& grid) {
    // Dilate a binary 8-bit image by one pixel (8-connected).
    // Equivalent to cv::dilate with a 3x3 rectangular kernel.
    cv::Size sz = grid.size();
    cv::Mat sc_copy = grid.clone();
    for (int i = 1; i < sz.height - 1; i++) {
        for (int j = 1; j < sz.width - 1; j++) {
            if (grid.at<uchar>(i, j) != 0) {
                // Set all 8 neighbours of a foreground pixel.
                sc_copy.at<uchar>(i + 1, j)     = 255;
                sc_copy.at<uchar>(i - 1, j)     = 255;
                sc_copy.at<uchar>(i, j + 1)     = 255;
                sc_copy.at<uchar>(i, j - 1)     = 255;
                sc_copy.at<uchar>(i - 1, j - 1) = 255;
                sc_copy.at<uchar>(i + 1, j + 1) = 255;
                sc_copy.at<uchar>(i - 1, j + 1) = 255;
                sc_copy.at<uchar>(i + 1, j - 1) = 255;
            }
        }
    }
    grid = sc_copy;
}
After the Hough transform you have a set of vectors representing your lines, each stored as a cv::Vec4i v holding the two endpoints of the line. The easiest solution would be to match the endpoints of the different lines and find those which are closest; a simple L1 or L2 norm will do for the distance.
p1 = cv::Point2i(v[0], v[1])
p2 = cv::Point2i(v[2], v[3])
Points which are very close together should be intersections. The only problem is T-intersections, where there may not be an endpoint, but that doesn't seem to be an issue in your image.
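A minimal sketch of that endpoint matching in plain C++ (no OpenCV; findCorners and the small structs are invented names): every pair of endpoints from two different segments that lie within a tolerance of each other, by L2 norm, is merged into a single corner candidate at their midpoint:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt  { double x, y; };
struct Seg { Pt a, b; };   // same endpoint layout as cv::Vec4i

// Collect corner candidates: midpoints of close endpoint pairs.
std::vector<Pt> findCorners(const std::vector<Seg>& segs, double tol) {
    std::vector<Pt> corners;
    for (size_t i = 0; i < segs.size(); ++i)
        for (size_t j = i + 1; j < segs.size(); ++j)
            for (Pt p : {segs[i].a, segs[i].b})
                for (Pt q : {segs[j].a, segs[j].b})
                    if (std::hypot(p.x - q.x, p.y - q.y) <= tol)
                        corners.push_back({(p.x + q.x) / 2,
                                           (p.y + q.y) / 2});
    return corners;
}
```

The quadratic pair scan is fine for the few hundred segments a floor plan produces; a grid or k-d tree would speed it up if needed.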
Upvotes: 1
Reputation: 2137
First, you can also use the Line Segment Detector (LSD) to detect lines: http://www.ipol.im/pub/art/2012/gjmr-lsd/
If I understand correctly, the problem is that you are getting several short lines for every "real" line. You can take all the endpoints of the short lines and fit a single line through them using fitLine(): http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=fitline#fitline
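For intuition, what fitLine() computes for the L2 distance type is essentially a total least-squares fit. This dependency-free sketch (fitLineL2 is a made-up name) returns the same kind of output as cv::fitLine: a unit direction (vx, vy) plus a point on the line, here the centroid:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct P { double x, y; };

// Total least-squares line fit: minimizes the sum of squared orthogonal
// distances. Direction comes from the principal eigenvector of the 2x2
// covariance matrix, obtained in closed form via atan2.
void fitLineL2(const std::vector<P>& pts,
               double& vx, double& vy, double& cx, double& cy) {
    cx = cy = 0;
    for (const P& p : pts) { cx += p.x; cy += p.y; }
    cx /= pts.size(); cy /= pts.size();
    double sxx = 0, sxy = 0, syy = 0;
    for (const P& p : pts) {
        double dx = p.x - cx, dy = p.y - cy;
        sxx += dx * dx; sxy += dx * dy; syy += dy * dy;
    }
    double theta = 0.5 * std::atan2(2 * sxy, sxx - syy);
    vx = std::cos(theta); vy = std::sin(theta);
}
```

Feeding in all endpoints of the short collinear segments then yields one clean line per wall.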
Upvotes: 6