Reputation: 4191
Consider the following stabilized video frame, where the stabilization is done by rotation and translation only (no scaling):
As seen in the image, the right-hand side of the frame is a mirrored copy of the neighbouring pixels, i.e. the black region left by the rotation is filled by symmetry. I added a red line to indicate the symmetry axis more clearly.
I'd like to find the rotation angle, which I will use later on. I could have done this via SURF or SIFT features; however, in the real-case scenario I won't have the original frame.
I could probably find the angle by brute force, but I wonder if there is a better and more elegant solution. Note that the intensity values in the symmetric part are not precisely the same as in the original part. I've checked some values; for example, the pixel at the upper right of the V character on the keyboard is [51 49 47] in the original part but [50 50 47] in the symmetric copy, which means corresponding pixels are not guaranteed to have the same RGB values.
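What I mean by brute force is roughly the following MATLAB sketch (the file name and the search ranges are just placeholders): rotate the frame by candidate angles and score candidate axis columns near the right border by how well mirrored pixel pairs agree.

I = imread('stabilized_frame.png');        % placeholder file name
G = double(rgb2gray(I));
w = size(G, 2);
best = struct('err', inf, 'angle', 0, 'col', 0);
for angleDeg = -10:0.25:10                 % candidate rotation angles
  R = imrotate(G, -angleDeg, 'bilinear', 'crop');   % undo the candidate rotation
  for c = round(0.7*w):w-20                % candidate axis columns near the right edge
    k = min(15, w - c);                    % number of mirrored pairs to compare
    D = abs(R(:, c-(1:k)) - R(:, c+(1:k))); % mismatch of mirrored pairs around column c
    err = mean(D(:));
    if err < best.err
      best = struct('err', err, 'angle', angleDeg, 'col', c);
    end
  end
end
fprintf('Best angle: %.2f deg (axis at column %d)\n', best.angle, best.col);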
I'll implement this in MATLAB or Python, and the video stabilization is done using ffmpeg.
EDIT: I only have the stabilized video; I don't have access to the original video or the files produced by ffmpeg.
Any help/suggestion is appreciated.
Upvotes: 4
Views: 451
Reputation: 11072
A pixel (probably) lies on the searched symmetry line if:
- the difference between the pixel values at equal distances to its left and right is low (=> dG, Figure 1 left)
- the difference between the pixel value and the values of its neighbours is high (=> dGs, Figure 1 middle)

So, the points of interest are characterised by high values for |dGs| - |dG| (=> dGs_dG, Figure 1 right).
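Written out per pixel (this matches the code further below, with r the row, c the column of the candidate pixel and n the neighbourhood size):

dG(r, c)  = sum over i = 0..n of |G(r, c-1-i) - G(r, c+1+i)|
dGs(r, c) = sum over i = 0..n of |G(r, c-1-i) - G(r, c)|

so dG is small when the pixel sits on a mirror axis, while dGs is large wherever there is local texture; their difference singles out textured pixels that are also symmetry centers.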
As can be seen in the right image of Figure 1, a lot of false positives remain. Therefore, the Hough transform (Figure 2, left) is used to detect all the points corresponding to the strongest line (Figure 2, right). The green line is indeed the searched symmetry line.
Tuning
- Changing n: higher values will discard more false positives, but also exclude the n border pixels. This can be avoided by using a lower n for the border pixels.
- Changing thresholds: a higher threshold on dGs_dG will discard more false positives. Discarding high values of dG may also be interesting, to discard edge locations in the original image.
- A priori knowledge of the symmetry line: using the definition of the Hough transform, you can discard all lines passing through the center part of the image (a sketch follows below).
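As a sketch of that last point (assuming the variables BW, H, theta and rho from the code below; the 100-pixel margin is an arbitrary choice), you can zero out the accumulator bins whose line passes near the image center before picking the peak:

% Suppress Hough bins whose line passes close to the image center.
x0 = size(BW, 2) / 2;                              % image center, x = column
y0 = size(BW, 1) / 2;                              % image center, y = row
[T, R] = meshgrid(theta, rho);                     % one (theta, rho) pair per accumulator bin
distToCenter = abs(x0*cosd(T) + y0*sind(T) - R);   % distance from the center to each candidate line
H(distToCenter < 100) = 0;                         % discard lines through the center part (margin is arbitrary)
P = houghpeaks(H, 1);                              % strongest remaining line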
The MATLAB code used to generate the images is:
I = imread('bnuqb.png');
G = int16(rgb2gray(I));
n = 3; % use the first, second and third left/right point
dG = int16(zeros(size(G) - [0 2*n+2]));   % sum of |left neighbour - right neighbour| differences
dGs = int16(zeros(size(G) - [0 2*n+2]));  % sum of |left neighbour - centre pixel| differences
for i=0:n
  dG = dG + abs(G(:, 1+n-i:end-2-n-i) - G(:, 3+n+i:end-n+i));
  dGs = dGs + abs(G(:, 1+n-i:end-2-n-i) - G(:, 2+n:end-n-1));
end
dGs_dG = dGs - dG;
dGs_dG(dGs_dG < 0) = 0;   % keep only pixels where dGs exceeds dG
figure
subplot(1,3,1);
imshow(dG, [])
subplot(1,3,2);
imshow(dGs, [])
subplot(1,3,3);
imshow(dGs_dG, [])
BW = dGs_dG > 0;          % candidate symmetry-line pixels
[H,theta,rho] = hough(BW);
P = houghpeaks(H,1);      % strongest line only
lines = houghlines(BW,theta,rho,P,'FillGap',50000,'MinLength',7);
figure
subplot(1,2,1);
imshow(H, [])
hold on
plot(P(:, 2),P(:, 1),'r.');
subplot(1,2,2);
imshow(I(:, n+2:end-n-1, :))
hold on
for k = 1:length(lines)
  xy = [lines(k).point1; lines(k).point2];
  plot(xy(:,1),xy(:,2),'g');
end
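To get the rotation angle asked for in the question, one option (not shown above; the sign depends on the image axis conventions) is to read it off the detected line: for an unrotated frame the mirrored border would be vertical, i.e. its Hough angle theta would be 0 degrees, so the deviation from 0 approximates the rotation applied during stabilization.

% Sketch: estimate the rotation angle from the strongest detected line.
if ~isempty(lines)
  rotationAngleDeg = lines(1).theta;   % deviation of the symmetry line from vertical, in degrees
  fprintf('Estimated rotation angle: %.2f degrees\n', rotationAngleDeg);
end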
Upvotes: 6