Determine the distance between an object and its reflection

I am trying to measure the distance between an object and its reflection. The upper "line" is the reflection; the lower one is the object itself. The object is a spiral, which further worsens the view of it. The light that is thrown on the object is only partly reflected, which makes it look as if the object changes its size. The light comes from a slow-motion camera setup (5000 images/second) and is thrown on the object to make it visible. The object is permanently moving (on all axes), and I am trying to analyse its movement from these images.

The images are very low resolution (15x20 pixels). I applied Google's RAISR AI to enlarge the images and increase their quality. In addition, I applied a blur filter to help OpenCV find the contours. In the end I apply contours to mark the relevant area.
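For reference, the blur + contour step described above can be sketched along these lines (a simplified stand-in that uses scipy's `gaussian_filter` and `label` instead of the actual OpenCV blur/`findContours` calls; the `frame` array is purely synthetic):

```python
import numpy as np
from scipy import ndimage

# Toy stand-in for one RAISR-upscaled grayscale frame (values in [0, 1]).
frame = np.zeros((60, 80))
frame[20:30, 30:50] = 1.0  # the bright object

smoothed = ndimage.gaussian_filter(frame, sigma=2)  # blur step
mask = smoothed > 0.5 * smoothed.max()              # crude segmentation
labels, n_regions = ndimage.label(mask)             # connected regions

print(n_regions)  # one bright region found in this toy frame
```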

Before improvement:

[image: raw frame before improvement]

After improvement + Contours:

[image: frame after RAISR + blur, with contours]

This specific picture is one of the good ones. Problem is that most of them look like this:

[images: four additional, less favorable example frames]

Does anyone have an idea how I could measure the distance between the object and its reflection?

My last approach yielded no satisfying result. There I placed a break line above the object, but the object (the light it reflects back to the camera) keeps changing its apparent size.

[images: three frames from the break-line approach]

How would I do something like this?

[image: sketch of the desired distance measurement]

I have such a nice boss; I don't want to tell him that I can't solve this problem. Help is much appreciated.

Upvotes: 2

Views: 465

Answers (2)

DrM

Reputation: 2525

Techniques based on 2-D correlations provide a rich set of capabilities for recognizing and locating objects and reflections.

Following is example code that illustrates how this works. We look for reflections by flipping the image, and we use roll() to illustrate how displacements work in the coordinate system. The 2-D correlation then gives you a measure of how well the two inputs line up as a function of displacing one with respect to the other. (Try experimenting with 1-D data if that helps you get a feel for how this works; nothing is different in 2-D except the number of dimensions.)
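As a minimal 1-D illustration of that mechanic (toy data, nothing from the image pipeline): a copy of a signal shifted by some amount produces a correlation peak exactly at that shift.

```python
import numpy as np
from scipy.signal import correlate

# Toy 1-D signal: a small bump, then a copy shifted by 7 samples.
x = np.zeros(64)
x[18:23] = [1.0, 2.0, 3.0, 2.0, 1.0]
y = np.roll(x, 7)  # same role as roll() in the 2-D code below

# Full cross-correlation; lags run from -(N-1) to +(N-1).
corr = correlate(y, x, mode='full')
lag = np.argmax(corr) - (len(x) - 1)
print(lag)  # the correlation peak sits exactly at the applied shift: 7
```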

Here we take a gross approach and use the entire image. Since we are working with Fourier transforms, this is okay. However, you can sometimes improve performance if you can identify and excise a piece of the image to work with as the reference.

There are also techniques involving projection onto an (ideally) orthonormal basis set, wavelets, etc. These methods work best when the basis set is a good match for the thing you want to find. Fourier-transform-based methods work well whenever you are well within the Nyquist limit and meet basic SNR considerations. But to be fair, the FT, too, is an expansion in a basis set.

Finally, it should be noted that no technique whatsoever can create new information. If it is not there in the input, no algorithm and no amount of code will find it.

Okay, here is the example code demonstrating correlations.

#!/usr/bin/python

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import correlate2d

plt.figure(figsize=[6, 8])

im = plt.imread("temp.png")

# For simplicity of exposition, we just sum the three color channels.
im1 = np.sum(im, axis=2)

# Subplot grid: ny rows by nx columns.
ny = 5
nx = 2

n1 = 1
ax = plt.subplot(ny, nx, n1)
ax.imshow(im1)
ax.set_title('raw')
ax.set_aspect('equal')

# Auto-correlation of the image with itself.
corr = correlate2d(im1, im1, boundary='symm', mode='same')

n1 += 1
ax = plt.subplot(ny, nx, n1)
ax.contourf(corr, 20)
ax.set_title('auto-correlation')
ax.set_aspect('equal')


for a in 0, 1:
    # Circularly shift the image by 4 pixels along axis a.
    imtest = np.roll(im1, 4, axis=a)
    corr = correlate2d(im1, imtest, boundary='symm', mode='same')

    n1 += 1
    ax = plt.subplot(ny, nx, n1)
    ax.imshow(imtest)
    ax.set_title('roll axis %d' % a)

    n1 += 1
    ax = plt.subplot(ny, nx, n1)
    ax.contourf(corr, 20)
    ax.set_title('correlation, roll axis %d' % a)
    ax.set_aspect('equal')

    # Mirror the image along axis a, as a reflection would.
    imtest = np.flip(im1, axis=a)
    corr = correlate2d(im1, imtest, boundary='symm', mode='same')

    n1 += 1
    ax = plt.subplot(ny, nx, n1)
    ax.imshow(imtest)
    ax.set_title('flip axis %d' % a)

    n1 += 1
    ax = plt.subplot(ny, nx, n1)
    ax.contourf(corr, 20)
    ax.set_title('correlation, flip axis %d' % a)
    ax.set_aspect('equal')

plt.tight_layout()
plt.show()

Here is the output using your raw image. Notice where the local maxima occur in the correlations, both for the auto-correlation and for the rolls and flips.

Output from the sample code

See the example at the bottom of the scipy.signal.correlate2d documentation.
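To connect this back to your measurement: in the auto-correlation of a frame containing both the object and its reflection, a secondary maximum appears at the object-reflection separation. Here is a toy sketch of that idea (synthetic Gaussian blobs stand in for your frames; the 12-row spacing and all amplitudes are made up):

```python
import numpy as np
from scipy.signal import correlate2d

# Synthetic frame: a bright blob (the "object") plus a fainter copy
# (the "reflection") 12 rows away.
yy, xx = np.mgrid[0:40, 0:30]

def blob(cy, cx, amp):
    return amp * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 4.0)

im = blob(26, 15, 1.0) + blob(14, 15, 0.6)

corr = correlate2d(im, im, mode='full')

# Zero lag is always the global maximum of an auto-correlation; blank a
# small window around it so the object/reflection peak stands out.
cy0, cx0 = im.shape[0] - 1, im.shape[1] - 1
corr[cy0 - 3:cy0 + 4, cx0 - 3:cx0 + 4] = 0

py, px = np.unravel_index(np.argmax(corr), corr.shape)
separation = abs(py - cy0)
print(separation)  # vertical object-reflection distance in pixels: 12
```

In real frames the reflection is a mirrored, attenuated copy rather than an identical one, so you may get a sharper secondary peak by correlating the frame with its vertical flip instead of with itself.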

Upvotes: 0

Easy_Israel

Reputation: 841

It seems that your main problem is the low resolution, and RAISR AI appears to be a single-frame super-resolution approach.

You have a slow-motion camera, so you may have more images than you need. Then you could use a multiple-frame approach as in OpenCV super resolution.

With a multi-frame approach you gain more real information; a single-frame approach just estimates the extra information.
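To illustrate the idea, here is a bare-bones multi-frame "shift-and-add" sketch (this is not the OpenCV superres module; registration here is a simple integer-pixel phase correlation, which assumes mostly translational motion between frames):

```python
import numpy as np

def phase_shift(ref, frame):
    """Integer (dy, dx) shift that aligns `frame` back onto `ref`,
    estimated by phase correlation (assumes nearly pure translation)."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    r = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    # Convert wrap-around peak positions to signed shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def shift_and_add(frames):
    """Average all frames after registering each one to the first."""
    ref = frames[0]
    acc = np.zeros_like(ref, dtype=float)
    for f in frames:
        dy, dx = phase_shift(ref, f)
        acc += np.roll(f, (dy, dx), axis=(0, 1))
    return acc / len(frames)

# Toy demo: jittered copies of one pattern average back to the original.
base = np.zeros((16, 16))
base[5:8, 6:9] = 1.0
frames = [np.roll(base, (k, -k), axis=(0, 1)) for k in range(4)]
restored = shift_and_add(frames)
print(np.allclose(restored, base))  # True
```

A real pipeline would register at sub-pixel accuracy onto an upsampled grid, which is where the resolution gain actually comes from; this sketch only shows the registration-and-averaging core.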

You tagged this question with : , and a problem could be that super resolution is not part of the OpenCV Python version. So maybe you need a workaround with ctypes or another wrapper solution.

Upvotes: 0
