Reputation: 11
I am having a bit of trouble trying to transform 2D irregular points into a grid in Python.
As a bit of background, I have calculated the average x,y positions of eye-tracking data and am trying to devise some (higher-order/polynomial) function to transform these points onto a grid of 'real' positions in space. In the figure below, I am taking the average x,y positions of the eye-tracking data (shown in black) and trying to project them onto the 'known' x,y positions of the grid (shown in red).
Averaged Data (Black), Desired Grid Points (Red)
The raw data values are as follows:
AvgX = [5.9249217363378444, 29.400090125634197, 189.88137522082039, 10.635691487603076, -156.27020696966224, 125.03659193723372, -168.39555447902086, 186.62552891024129, 111.90418423429169, 100.57725103088637, -76.716438908489465, 6.5688214489253474, 146.18743315136786, -77.676030038490595, 175.21590859125735, -55.931989461463523, -175.71204466459488, 97.750258696640429, 4.4562688886630015, -71.385022755147517, 191.47832859030729, -83.713605575394325, 100.81203864776603]
AvgY = [168.67521806680125, 19.421198111140864, -221.60630388937381, 79.730784050599141, 195.43389670921019, 98.926386207770904, -85.356440304228784, -39.574253598391287, 175.70610514354374, -113.76915782872061, -187.40510724928777, -86.989048811265221, -118.46908736453032, 8.054366530368533, 51.680353870737072, -81.628307614654986, 18.393403891381649, -23.678128041659768, -193.94235177110983, 100.69985383522851, 145.38153797528696, 190.0494081938453, -202.22859560880681]
GridX = [0.0, 0.0, 185.635467529, 0.0, -185.635467529, 92.8177337646, -185.635467529, 185.635467529, 92.8177337646, 92.8177337646, -92.8177337646, 0.0, 185.635467529, -92.8177337646, 185.635467529, -92.8177337646, -185.635467529, 92.8177337646, 0.0, -92.8177337646, 185.635467529, -92.8177337646, 92.8177337646]
GridY = [188.696807861, 0.0, -188.696807861, 94.3484039307, 188.696807861, 94.3484039307, -94.3484039307, 0.0, 188.696807861, -94.3484039307, -188.696807861, -94.3484039307, -94.3484039307, 0.0, 94.3484039307, -94.3484039307, 0.0, 0.0, -188.696807861, 94.3484039307, 188.696807861, 188.696807861, -188.696807861]
From my understanding, I will need to apply some sort of polynomial function to map the averaged data to the known grid. However, I am not sure how to go about this.
Rounding works when the averaged points are near their target grid points (red). However, it fails when an averaged point lies closer to some other grid point than to its actual target: take, for example, the case where the entire averaged grid is shifted down.
In this example:
1. red points are the target locations
2. black points are the averaged locations
3. blue points are the result of rounding
4. orange arrows show how the averaged points are shifted by rounding
5. green arrows show how I am trying to shift the data
Eventually I want to feed in some random point (raw data) and apply a function that repositions it to its "actual" location given the calibration points. I am guessing that I will need to fit some spline or higher-order polynomial function across each row and column of points to produce a surface for interpolating the raw-data inputs that I give this function.
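One way to sketch the polynomial idea (an assumption on my part, not something I have validated on the full data set): fit a 2D polynomial mapping from the averaged points to the grid points by least squares, then evaluate that mapping on any new raw point. The helper names `fit_poly2d` and `apply_poly2d` below are hypothetical, and `order=2` is an arbitrary starting choice:

```python
import numpy as np

def _basis(x, y, order):
    # Columns of monomials x**i * y**j with i + j <= order
    cols = [x**i * y**j for i in range(order + 1)
                        for j in range(order + 1 - i)]
    return np.column_stack(cols)

def fit_poly2d(x, y, gx, gy, order=2):
    """Least-squares fit of a 2D polynomial mapping (x, y) -> (gx, gy)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = _basis(x, y, order)
    cx, *_ = np.linalg.lstsq(A, np.asarray(gx, float), rcond=None)
    cy, *_ = np.linalg.lstsq(A, np.asarray(gy, float), rcond=None)
    return cx, cy, order

def apply_poly2d(cx, cy, order, x, y):
    """Evaluate the fitted mapping on new points."""
    A = _basis(np.asarray(x, float), np.asarray(y, float), order)
    return A @ cx, A @ cy
```

With the question's lists you would call `cx, cy, order = fit_poly2d(AvgX, AvgY, GridX, GridY)` and then map any new raw sample with `apply_poly2d(cx, cy, order, raw_x, raw_y)`. Unlike plain rounding, this corrects systematic shifts and warps, since the fit absorbs them into the polynomial coefficients.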
Upvotes: 1
Views: 1199
Reputation: 20705
Since your grid is characterized by two numbers -- the step size in x and the step size in y -- we can apply a simple transformation that rounds the data:
import numpy as np
DELTA_X = 92.8177337646
DELTA_Y = 94.3484039307
def gridify(coords, spacing):
    coords = np.array(coords)
    return np.round(coords / spacing) * spacing
x = gridify(AvgX, DELTA_X)
y = gridify(AvgY, DELTA_Y)
This snaps each averaged point to the nearest grid intersection.
Upvotes: 1
Reputation: 1045
If your grid is regular, why not just round to the nearest 100 or whatever the spacing is?
Upvotes: 0