Reputation: 1381
I came across a problem but am not able to get a better approach than O(m x n).
A 2D array has each of its columns sorted. Is there a way to choose one element from each column, forming a row, such that the difference between the maximum and minimum element in that row is minimized? For example:
A =  1  2  5
     3  4 15
     7 11 17
The answer should be 2, as you can choose 3 from the first column, 4 from the second, and 5 from the third, giving the row 3 4 5 with max - min = 2.
Is there a way to get this in O(Row * Col)?
PS: It's not from any ongoing competition.
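For concreteness, here is a sketch of one standard way to attack this (a min-heap sweep over the sorted columns, always advancing the column that currently holds the minimum; the function name is mine, not from any particular source). It runs in O(Row * Col * log Col) time, so it still reads every element:

import heapq

def min_range_one_per_column(matrix):
    # matrix: list of rows; every column is sorted in ascending order.
    # Returns the smallest possible (max - min) over picks of one value per column.
    rows, cols = len(matrix), len(matrix[0])
    # Start with every column's smallest value (its first row).
    heap = [(matrix[0][c], 0, c) for c in range(cols)]
    heapq.heapify(heap)
    current_max = max(v for v, _, _ in heap)
    best = current_max - heap[0][0]
    while True:
        value, r, c = heapq.heappop(heap)   # column whose pick is the current minimum
        best = min(best, current_max - value)
        if r + 1 == rows:                   # this column's pick cannot be raised further
            return best
        nxt = matrix[r + 1][c]              # advance the pick within the same column
        current_max = max(current_max, nxt)
        heapq.heappush(heap, (nxt, r + 1, c))

print(min_range_one_per_column([[1, 2, 5], [3, 4, 15], [7, 11, 17]]))  # prints 2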
Upvotes: 3
Views: 1712
Reputation: 2467
(Too lazy to keep writing "the difference between the maximum and minimum entry in a row", I'll call that the range.)
There are a lot of special cases one could take advantage of. One of them disposes of the apparently trivial worst case in which every entry holds one and the same value, so that every row has a range of 0: once a range of 0 is found, searching for a lower one is pointless.
As a worst case I suggest a matrix filled row-first with successive values, "with extra space around the last one", the run starting one column further along in every row (wrapping around):
 1  2  3  5
11  7  8  9
15 17 13 14
Here the range is the same for every row, a lower range might still exist (even with duplicates forbidden), and I don't see how to avoid inspecting each and every entry. For one, my favourite "trick" doesn't work:
For a newly found minimum range candidate, look at the next values in the columns containing the minimum and maximum entries, respectively: as long as their difference doesn't drop below the candidate, that row's range can't either, and such rows can be skipped. (Another chance for identical values in a row to mess things up - please specify in the question whether they are permitted.)
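A sketch of that trick, under the reading that candidate rows are scanned top to bottom (names are mine). Note it only searches complete rows, which, as said above, need not contain the true optimum, and on the matrix above it still ends up visiting every row:

def best_row_range_with_skips(matrix):
    # Scan rows for the smallest per-row range; peek at just two columns of a
    # later row to decide whether it can be skipped outright.
    rows = len(matrix)
    best = None
    cmin = cmax = 0
    r = 0
    while r < rows:
        row = matrix[r]
        lo, hi = min(row), max(row)
        if best is None or hi - lo < best:
            best = hi - lo
            cmin, cmax = row.index(lo), row.index(hi)
            if best == 0:                 # the trivial special case mentioned above
                return best
        r += 1
        # While the peeked difference is still at least the candidate,
        # the full row range can't be smaller either.
        while r < rows and matrix[r][cmax] - matrix[r][cmin] >= best:
            r += 1
    return best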
Then, there are columns whose first value sorts after the last value of every other column: these always contribute the row maxima (assuming ascending order). Likewise, columns whose last value comes before the first value of every other column always contribute the row minima. (For unique values, weaker conditions suffice.) A sketch of that check follows.
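(Names are mine; columns assumed sorted ascending, ties allowed.)

def forced_extreme_columns(matrix):
    # Columns whose first value is >= every other column's last value must
    # supply the maximum of any chosen row; columns whose last value is <=
    # every other column's first value must supply the minimum.
    firsts, lasts = matrix[0], matrix[-1]
    cols = range(len(firsts))
    always_max = [c for c in cols
                  if all(firsts[c] >= lasts[o] for o in cols if o != c)]
    always_min = [c for c in cols
                  if all(lasts[c] <= firsts[o] for o in cols if o != c)]
    return always_max, always_min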
Upvotes: 1
Reputation: 17605
The runtime of O(m*n) is in fact linear, as there are m*n elements in the input. Within a sublinear runtime bound, it is impossible to read every element, which means that, without additional assumptions, the described algorithm is optimal with respect to runtime complexity.
Upvotes: 1