Reputation:
I'm working on a project involving finding paths using the A* algorithm (thanks, Patrick Lester, for a great tutorial). A series of PNG maps is supplied, points of interest are specified by coordinates received from a web service, and the problem is to show paths between those points. It was a bit of a disaster at first because the various levels were not located in the same coordinate system, so moving from one level to another involved an unintentional shift in x and y instead of only a change in z. This made the cost and heuristic of a level change completely wrong, and some very non-optimal paths were generated.
To fix this I could have broken the pathing calculation up into per-level solutions with a move to a new level as an intermediate, non-calculated step. Instead I chose to make all of the level maps part of one coordinate system so that if you look at a lift on one level, the lift is present on all the levels it reaches at the same x and y coordinate.
The only problem is the coordinates of points of interest used by the original maps. Those coordinates don't match anything meaningful on the new maps. While I am confident pathing is now working nicely, the system as a whole isn't, because the start and end points of the path are not being plotted correctly in the map system.
To get the new maps (all located in one coordinate space) from the old maps each was transformed in a simple and repeatable way. I figure if I get a coordinate and apply the same transform to it as was applied to the map it refers to, all will be well. The maps are rotated, resized and translated.
Given an image and the resulting transformed image is there a way to derive the transformation matrix? It's an iPhone project so ideally I'm looking for a CGAffineTransform. For each map I could manipulate the old map again to get the new map and record the transformation being done but I am curious about whether there is a way to work backwards here.
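One way to sketch this, assuming you can identify three non-collinear landmarks (a lift, a corner, etc.) on both an old map and its transformed counterpart: three point pairs fully determine a 2D affine transform. The function names below are hypothetical, but the six values returned are in the same `(a, b, c, d, tx, ty)` layout `CGAffineTransform` uses, so they could be plugged straight into `CGAffineTransformMake`:

```python
# Recover the six affine parameters (a, b, c, d, tx, ty) from three
# non-collinear point pairs, using only the standard library.
# Convention (matches CGAffineTransform): x' = a*x + c*y + tx,
#                                         y' = b*x + d*y + ty.

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(m, v):
    """Solve the 3x3 linear system m * x = v by Cramer's rule."""
    d = det3(m)
    result = []
    for i in range(3):
        mi = [row[:] for row in m]      # copy m, then replace column i with v
        for r in range(3):
            mi[r][i] = v[r]
        result.append(det3(mi) / d)
    return result

def affine_from_points(src, dst):
    """src, dst: three (x, y) pairs each. Returns (a, b, c, d, tx, ty)."""
    m = [[x, y, 1.0] for x, y in src]
    a, c, tx = solve3(m, [p[0] for p in dst])
    b, d, ty = solve3(m, [p[1] for p in dst])
    return a, b, c, d, tx, ty

# Example: a map that was scaled by 2 and shifted by (10, 5).
src = [(0, 0), (1, 0), (0, 1)]
dst = [(10, 5), (12, 5), (10, 7)]
print(affine_from_points(src, dst))  # (2.0, 0.0, 0.0, 2.0, 10.0, 5.0)
```

Once you have the six parameters for a map, applying the same transform to that map's points of interest should land them correctly in the new shared coordinate space.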
(readers - if you can help tag this question better please do, it's a little out of my area)
Upvotes: 0
Views: 483
Reputation: 61026
To find a transformation you need at least as many point coordinates (original and transformed) as there are parameters. A 2D affine transform has six parameters, and each point pair contributes two equations, so three non-collinear pairs are the minimum.
If you are working with images, and not with perfect geometric entities, a least-squares computation is much better. By using more points, you reduce the errors induced by space quantization (i.e. pixels).
If you google for "fit affine transformation least squares" you'll find code for several functions for this purpose, including this one in Python.
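As a rough sketch of what such a function does (the name `fit_affine` is made up here; this uses NumPy's `lstsq`): stack each source point as a row `[x, y, 1]`, then solve the over-determined system for the six parameters in one shot. Any number of pairs ≥ 3 works, and extra pairs average out pixel noise:

```python
# Least-squares affine fit over many point pairs; requires NumPy.
# Each pair contributes two equations in the six unknowns.
import numpy as np

def fit_affine(src, dst):
    """src, dst: sequences of (x, y) pairs, length n >= 3.
    Returns the 2x3 affine matrix M = [[a, c, tx], [b, d, ty]]
    minimizing the squared error of [x, y, 1] @ M.T vs. dst."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])   # (n, 3): rows of [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solve A @ M = dst
    return M.T

# Samples of "rotate 90 degrees CCW, then translate by (100, 0)":
src = [(0, 0), (50, 0), (0, 50), (50, 50)]
dst = [(100, 0), (100, 50), (50, 0), (50, 50)]
print(fit_affine(src, dst))  # [[0, -1, 100], [1, 0, 0]] (up to float error)
```

With exact input, the fit reproduces the transform exactly; with pixel-quantized coordinates, it returns the best compromise over all the pairs.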
HTH!
Upvotes: 0
Reputation: 10378
I only skim-read your question - sorry! But from the title, and a little bit of maths, it sounds like there should be a way. Using some matrix algebra:
xA = B
//Where A holds the original point coordinates (as columns), B holds the
//transformed coordinates, and x is the transform matrix. Now to find x:
x = B(A^-1)
i.e. multiplying both sides on the right by the inverse of A gives you x, the transform matrix (see here for more on matrix inversion).
How you would map this onto a CGAffineTransform, or build those matrices from your maps, I am not too sure! But the above maths shows that what you ask is definitely doable. Hope this helps!
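A concrete reading of the algebra above, assuming A and B hold point coordinates in homogeneous form (columns of `[x, y, 1]`) rather than raw image pixels, which is what makes the inverse well defined. The point values are made up for illustration; this uses NumPy:

```python
# x A = B  =>  x = B A^(-1), with A and B as 3xN matrices of
# homogeneous point columns; requires NumPy.
import numpy as np

# Three non-collinear points on the old map, as homogeneous columns:
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 1, 1]], dtype=float)
# The same three points on the new map (here: scaled by 2, shifted by (10, 5)):
B = np.array([[10, 12, 10],
              [ 5,  5,  7],
              [ 1,  1,  1]], dtype=float)

x = B @ np.linalg.inv(A)   # the 3x3 affine matrix mapping old points to new
print(x)                   # top-left 2x2 is the linear part, last column the shift
```

With exact points this recovers the transform exactly; with more than three (noisy) points you would switch to a least-squares solve instead of a plain inverse.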
Upvotes: 0