lettumaker

Reputation: 11

Aligning array values

Let's say I have two arrays, both with values representing the brightness of the Sun. The first array has values measured in the morning and the second one has values measured in the evening. In the real case I have around 80 arrays. I'm going to plot the images using matplotlib. The plotted circle will be the same size in both cases. However, the position of the image shifts a bit because of the Earth's motion, and this should be avoided.

>>> array1
[0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0]
[0, 0, 1, 3, 1, 0]
[0, 0, 1, 1, 2, 0]
[0, 0, 1, 1, 1, 0]
[0, 0, 0, 0, 0, 0]

>>> array2
[0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0]
[0, 0, 1, 2, 1, 0]
[0, 0, 1, 1, 4, 0]
[0, 0, 1, 1, 1, 0]

In the example above, larger values mean brighter spots and zero values are plotted as black space. The arrays are always the same size. How do I align the significant (non-zero) values in array2 with the ones in array1? The outcome should look like this:

>>> array2(aligned)
[0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0]
[0, 0, 1, 2, 1, 0]
[0, 0, 1, 1, 4, 0]
[0, 0, 1, 1, 1, 0]
[0, 0, 0, 0, 0, 0]

This must be done in order to post-process the arrays in a meaningful way, e.g. calculating an average or a sum. Note: finding the center of mass and aligning accordingly doesn't work, because of possibly high values on the edges that change during the day.

Upvotes: 0

Views: 1155

Answers (1)

DrV

Reputation: 23540

One thing that may cause problems with this kind of data is that the images are not nicely aligned with the pixels. Let me illustrate my point with two arrays, each containing a square:

array1:
0 0 0 0 0
0 2 2 2 0
0 2 2 2 0
0 2 2 2 0
0 0 0 0 0

array2:
0 0 0 0 0
0 1 2 2 1
0 1 2 2 1
0 1 2 2 1
0 0 0 0 0

As you see, the limited resolution is a challenge, as the image has moved 0.5 pixels.

Of course, it is easy to calculate the COG (center of gravity) of both of these and see that it is (row, column) = (2, 2) for the first array and (2, 2.5) for the second. But if we move the second array by 0.5 pixels to the left, we get:

array2_shifted:
  0   0   0   0   0
0.5 1.5 2.0 1.5 0.5
0.5 1.5 2.0 1.5 0.5
0.5 1.5 2.0 1.5 0.5
  0   0   0   0   0

So things start to spread out.

Of course, it may be that your arrays are large enough so that you can work without worrying about subpixels, but if you only have a few or a few dozen pixels in each direction, this may become a nuisance.

One way out of this is to first increase the image size by suitable interpolation (such as done with an image processing program; the cv2 module is full of possibilities here). Then the images can be fitted together with single-pixel precision and downsampled back.
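As a library-agnostic sketch of the upsampling step (using `scipy.ndimage.zoom` instead of cv2; the 4x factor is an arbitrary choice):

```python
import numpy as np
from scipy import ndimage

image = np.zeros((6, 6))
image[2:5, 2:5] = 1.0  # a small bright square

# Upsample 4x with cubic spline interpolation: a 0.25-pixel shift in
# the original becomes a whole-pixel shift at the higher resolution.
upsampled = ndimage.zoom(image, 4, order=3)
print(upsampled.shape)  # (24, 24)
```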


In any case, you'll need a method to find out where the fit between the images is best. There are a lot of choices to make. One important thing to notice is that you may not want to align the images with the first image; you may want to align all images with a common reference. The reference could in this case be a perfect circle in the center of the image. Then you just need to move each image to match this reference.

Once you have chosen your reference, you need to choose a method which gives you some metric of the alignment of the images. There are several possibilities, but you may start with these:

  1. Calculate the center of gravity of the image.

  2. Calculate the correlation between an image and the reference. The highest point(s) of the resulting correlation array give you the best match.

  3. Do either of the above, but only after some preprocessing of the image (typically limiting the dynamic range at one or both ends).
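For option 2, the offset can be read off the peak of the cross-correlation. A minimal sketch with `scipy.signal.correlate2d` (the toy blob layout here is my own example, not from the question):

```python
import numpy as np
from scipy import signal

reference = np.zeros((6, 6))
reference[2:5, 2:5] = 1.0

image = np.zeros((6, 6))
image[3:6, 2:5] = 1.0  # same blob, shifted one row down

# Full 2D cross-correlation; the peak position encodes the offset.
corr = signal.correlate2d(image, reference, mode='full')
peak = np.unravel_index(np.argmax(corr), corr.shape)

# In 'full' mode, zero offset corresponds to index (N-1, M-1).
offset = (peak[0] - (reference.shape[0] - 1),
          peak[1] - (reference.shape[1] - 1))
print(offset)  # (1, 0): the image is one row below the reference
```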

I would start with something like this:

  • possibly upsample the image (if the resolution is low)
  • limit the high end of the dynamic range (e.g. clipped = np.clip(image, 0, max_intensity))
  • calculate the center of gravity (e.g. scipy.ndimage.center_of_mass(clipped))
  • translate the image by the offset of the center of gravity

Translation of a 2D array requires a bit of code but should not be excessively difficult. If you are sure you have black all around, you can use:

translated = np.roll(np.roll(original, deltar, axis=0), deltac, axis=1)

This rolls the leftmost pixels to the right (or vice versa). If that is a problem, you'll need to zero them out afterwards. (Or have a look at: python numpy roll with padding.)
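Putting the steps together, here is a minimal sketch applied to the arrays from the question; the helper name `align_to_reference` and the clipping level are my own choices, not from the original post:

```python
import numpy as np
from scipy import ndimage

def align_to_reference(image, ref_center, max_intensity=1):
    """Shift image so the COG of its clipped copy lands on ref_center.

    Assumes black (zero) borders, so the pixels rolled in from the
    opposite edge are zeros anyway. max_intensity is an assumed
    clipping level for suppressing bright spots.
    """
    clipped = np.clip(image, 0, max_intensity)
    cog = ndimage.center_of_mass(clipped)
    deltar = int(round(ref_center[0] - cog[0]))
    deltac = int(round(ref_center[1] - cog[1]))
    return np.roll(np.roll(image, deltar, axis=0), deltac, axis=1)

array1 = np.array([[0, 0, 0, 0, 0, 0],
                   [0, 0, 0, 0, 0, 0],
                   [0, 0, 1, 3, 1, 0],
                   [0, 0, 1, 1, 2, 0],
                   [0, 0, 1, 1, 1, 0],
                   [0, 0, 0, 0, 0, 0]])
array2 = np.array([[0, 0, 0, 0, 0, 0],
                   [0, 0, 0, 0, 0, 0],
                   [0, 0, 0, 0, 0, 0],
                   [0, 0, 1, 2, 1, 0],
                   [0, 0, 1, 1, 4, 0],
                   [0, 0, 1, 1, 1, 0]])

# Use the COG of (clipped) array1 as the common reference point.
reference = ndimage.center_of_mass(np.clip(array1, 0, 1))
aligned = align_to_reference(array2, reference)
print(aligned)
```

With these toy arrays the blob in array2 ends up in the same rows as in array1, which is the alignment asked for in the question.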

A word of warning about the alignment procedures: the simplest ones (COG, correlation) fail if you have an intensity gradient across the image. Because of this, you may want to detect edges first and then correlate. The intensity limiting also helps here, if your background is really black.

Upvotes: 3
