Reputation: 3797
When using a bilinear filter to magnify an image (by some non-integer factor), is that process lossless? That is, is there a way to reconstruct the original image, provided that the original resolution, the upscaled image and the exact algorithm used are known, and there is no loss of precision during upscaling (no rounding errors)?
My guess would be that it is, but that is based on some calculations on a napkin regarding the one-dimensional case only.
Upvotes: 1
Views: 524
Reputation: 272667
Taking the 1D case as a simplification: each output point can be expressed as a linear combination of two adjacent input points, i.e.:
y_n = k_n * x_m + (1-k_n) * x_{m+1}
where m is the index of the input sample just to the left of output sample n, and k_n is the corresponding interpolation weight.
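As a minimal numeric sketch of that relation (the values of m and k_n here are arbitrary, purely to illustrate the formula):

```python
x = [10.0, 20.0, 5.0]        # input samples x_0, x_1, x_2
m, k = 1, 0.75               # output point lies between x_1 and x_2
y_n = k * x[m] + (1 - k) * x[m + 1]
print(y_n)                   # 0.75 * 20.0 + 0.25 * 5.0 = 16.25
```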
You have a whole set of these equations, which can be expressed in vector notation as:
Y = K * X
where X is a length-M vector of input points, Y is a length-N vector of output points, and K is a sparse matrix (size NxM) containing the (known) values of k.
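Here is a sketch of how K could be built with NumPy, assuming the "align corners" convention (the first and last output samples coincide with the first and last input samples); real resamplers differ in how they map output positions to input coordinates, but the structure of K is the same:

```python
import numpy as np

def interpolation_matrix(M, N):
    """Return the N x M matrix K such that Y = K @ X performs 1D linear
    interpolation from M input samples to N output samples.

    Each row holds the two weights k_n and 1 - k_n for one output point.
    Assumes the "align corners" convention for sample positions.
    """
    K = np.zeros((N, M))
    for n in range(N):
        pos = n * (M - 1) / (N - 1)   # output position in input coordinates
        m = min(int(pos), M - 2)      # index of the left input sample
        frac = pos - m                # fractional offset towards x_{m+1}
        K[n, m] = 1.0 - frac          # weight k_n on x_m
        K[n, m + 1] = frac            # weight 1 - k_n on x_{m+1}
    return K

# Example: upscale 8 samples to 13 (a non-integer factor of 1.625)
K = interpolation_matrix(8, 13)
X = np.random.rand(8)
Y = K @ X                             # the upscaled signal
```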
For the interpolation to be reversible, K must have full column rank (since N > M it is not square, so "invertible" here means left-invertible). This means that there must be at least M linearly-independent rows, which is the case if and only if there is at least one output point in-between each pair of input points.
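Continuing the sketch above (it reuses K, X and Y from the previous snippet), the original samples can be recovered by solving the over-determined system Y = K X, e.g. with a least-squares solve, which returns the exact solution when K has full column rank:

```python
# Recover X from Y: the system is over-determined (N > M), but with full
# column rank it has a unique exact solution, which least squares finds.
X_rec, *_ = np.linalg.lstsq(K, Y, rcond=None)
print(np.allclose(X, X_rec))          # True, up to floating-point error
```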
Upvotes: 2