Reputation: 592
I want to crop images automatically.
I am using ImageMagick for this.
The command I am using:
convert 3.jpg -fuzz 10% -trim trim.jpg
How do I fix this?
I think the problem is with the fuzz factor I am setting.
Upvotes: 6
Views: 12804
Reputation: 824
The problem with Kamyar Infinity's method is that when some areas of the background are close in color to the object, you can't get the right boundary of the object.
The threshold value, which is set to math.floor(numpy.average(imgray)), won't help you here.
e.g.:
Even though the threshold value is perfect (manually adjusted), you can't overlook that little point on the top right of the image. You need to filter out some areas, e.g.:
A method to achieve this is given in the official OpenCV documentation:
Creating Bounding boxes and circles for contours
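Not the exact tutorial code, but a minimal sketch of the same idea (the filename, the 240 threshold and the min_area cutoff are assumptions you would tune; the findContours return signature depends on your OpenCV version):
import cv2

im = cv2.imread('cloth.jpg')
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(imgray, 240, 255, cv2.THRESH_BINARY_INV)
# OpenCV 2.x/4.x signature; OpenCV 3.x returns (image, contours, hierarchy)
contours, _ = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

min_area = 100  # skip tiny specks like the point in the top right
for c in contours:
    if cv2.contourArea(c) >= min_area:
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(im, (x, y), (x + w, y + h), (0, 255, 0), 2)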
Another method given here could be useful (basically the same as Kamyar Infinity's, but with cv2.inRange added):
Dealing with contours and bounding rectangle in OpenCV 2.4 - python 2.7
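For reference, a hedged sketch of that inRange variant, assuming a near-white background (the colour bounds are assumptions to adjust for your images):
import cv2
import numpy

im = cv2.imread('cloth.jpg')
# Everything within this (assumed) near-white range is treated as background.
lower = numpy.array([200, 200, 200], dtype=numpy.uint8)
upper = numpy.array([255, 255, 255], dtype=numpy.uint8)
background = cv2.inRange(im, lower, upper)
foreground = cv2.bitwise_not(background)
contours, _ = cv2.findContours(foreground, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    # Crop to the largest remaining contour
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    crop = im[y:y + h, x:x + w]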
Upvotes: 2
Reputation: 2759
If you want to do this with OpenCV, a good starting point, after some simple processing to remove noise and small details, is to find the edges of the image and then find the bounding box and crop to that area. In the case of your second image you may need some post-processing, as the raw edges can contain noise and borders. You can do this on a pixel-by-pixel basis, or another (maybe overkill) method would be finding all the contours in the image and then taking the biggest bounding box. Using this you can get the following results:
And for the second one:
The part that needs work is finding a proper thresholding method that works for all the images. Here I used different thresholds to make a binary image, as the first one was mostly white and the second one was a bit darker. A first guess would be using the average intensity as a clue.
Hope this helps!
Edit
This is how I used some pre-processing and a dynamic threshold to get it to work for both of the images:
import math
import cv2
import numpy

im = cv2.imread('cloth.jpg')
imgray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
imgray = cv2.blur(imgray, (15, 15))  # smooth away small details before thresholding
ret, thresh = cv2.threshold(imgray, math.floor(numpy.average(imgray)), 255, cv2.THRESH_BINARY_INV)
dilated = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (10, 10)))
_, contours, _ = cv2.findContours(dilated, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 3.x signature
I also checked the contour area to remove very large contours:
new_contours = []
for c in contours:
    if cv2.contourArea(c) < 4000000:
        new_contours.append(c)
The number 4000000 is an estimate of the image size (width*height); contours that cover the whole image will have an area close to that, so this filters them out.
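If you'd rather not hard-code that number, you could derive the cutoff from the actual image dimensions, for example (the 0.9 margin is an assumption, not part of the original answer):
# Alternative cutoff computed from the image itself
max_area = 0.9 * im.shape[0] * im.shape[1]
new_contours = [c for c in contours if cv2.contourArea(c) < max_area]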
Then you can iterate over all the contours and find the overall bounding box:
best_box = [-1, -1, -1, -1]
for c in new_contours:
    x, y, w, h = cv2.boundingRect(c)
    if best_box[0] < 0:
        best_box = [x, y, x + w, y + h]
    else:
        if x < best_box[0]:
            best_box[0] = x
        if y < best_box[1]:
            best_box[1] = y
        if x + w > best_box[2]:
            best_box[2] = x + w
        if y + h > best_box[3]:
            best_box[3] = y + h
Then you have the bounding box of all contours inside the best_box
array.
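To get the actual crop from that box, plain NumPy slicing is enough (a small sketch, assuming im is the image loaded at the top):
# best_box is [x_min, y_min, x_max, y_max]: slice rows (y) first, then columns (x)
crop = im[best_box[1]:best_box[3], best_box[0]:best_box[2]]
cv2.imwrite('cropped.jpg', crop)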
Here is the result for the third image:
Upvotes: 15
Reputation: 24439
You can try isolating the saturation channel, and then the trim works as expected.
# Convert to HSV, isolate the saturation channel, and switch to a format
# that preserves the page (virtual canvas) offsets.
convert source.jpg -colorspace HSV -channel S -separate /tmp/saturation.png
# Trim as before
convert /tmp/saturation.png -trim /tmp/trim.png
# Capture results of -trim
GEO=$(identify -format '%wx%h%X%Y' /tmp/trim.png)
# => 1232x1991+384+336
# Apply results to original image
convert source.jpg -crop $GEO trim.jpg
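For what it's worth, the same steps should also work without the temporary file, as an (untested) sketch that reads the trim geometry straight from info: output:
# Compute the trim geometry from the saturation channel in one go
GEO=$(convert source.jpg -colorspace HSV -channel S -separate -trim -format '%wx%h%X%Y' info:)
# +repage drops the leftover canvas offset from the cropped output
convert source.jpg -crop $GEO +repage trim.jpg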
Upvotes: 3