Reputation: 23
I wanted to know if there is a difference between the running times and invariants of iterative and recursive merge sort. How can merge sort (the iterative or the recursive version) be changed so that its best case is the same as that of insertion sort?
Upvotes: 2
Views: 1021
Reputation: 28921
To change merge sort to get a best case time of O(n), a scan needs to be done to see if the data is already sorted. There are variations of merge sort that scan data to find existing sorted runs, such as Timsort, but that is a hybrid of merge sort and insertion sort.
https://en.wikipedia.org/wiki/Timsort
If the sort does not need to be stable (preserve the order of equal elements), then any inversion (data[i+1] < data[i]) can be treated as a sorted run boundary. If the sort needs to be stable, then some method will be needed to keep track of run boundaries (such as an array of indexes or pointers).
A pure merge sort has time complexity O(n log(n)) and the number of moves is always the same, but if the data is already sorted, the number of compares is roughly halved.
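As a rough illustration of the pre-scan idea, here is a minimal sketch assuming plain int data; the names is_sorted and sort_with_linear_best_case are placeholders, and merge_sort() is an assumed existing routine, not part of any particular library:

#include <stddef.h>

/* Returns 1 if data[0..n-1] is already in non-decreasing order.
   The scan costs O(n) and stops at the first inversion (data[i+1] < data[i]). */
static int is_sorted(const int *data, size_t n)
{
    for (size_t i = 0; i + 1 < n; i++)
        if (data[i + 1] < data[i])
            return 0;
    return 1;
}

void merge_sort(int *data, size_t n);   /* assumed standard merge sort */

/* Wrapper: skip the O(n log n) sort when the data is already sorted,
   giving an O(n) best case at the cost of one linear scan. */
void sort_with_linear_best_case(int *data, size_t n)
{
    if (!is_sorted(data, n))
        merge_sort(data, n);
}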
Upvotes: 0
Reputation: 4798
If there were pseudocode, this would make much more sense. However, there is no difference in the running times of the iterative and recursive implementations; the only difference is that the recursive implementation is more readable and convenient than the iterative one. Both implementations have O(n log n) running time.
How can merge sort (the iterative or the recursive version) be changed so that its best case is the same as that of insertion sort?
Merge sort is insensitive to its input: whether the input is sorted or not, it runs in O(n log n). So there is no separate best case or worst case; it is always O(n log n).
On the other hand, insertion sort is sensitive to its input: in the best case it runs in O(n) when the input is already sorted, and in the worst case it runs in O(n^2) when the input is reverse sorted.
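To make that concrete, here is a minimal insertion sort sketch (illustrative only): on already-sorted input the inner loop never executes, so only n-1 comparisons are made (O(n)); on reverse-sorted input every element is shifted all the way back (O(n^2)).

void insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int j = i - 1;
        /* Shift elements greater than key one position to the right;
           on sorted input this loop body never runs. */
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}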
I wanted to know if there is a difference between the running times and invariants of iterative and recursive merge sort.
In computer science, a loop invariant is a property of a program loop that is true before each iteration. https://en.wikipedia.org/wiki/Loop_invariant
Invariants of the iterative implementation (see the sketch below):
curr_size <= n-1 (size of the subarrays being merged in the current pass)
left_start < n-1 (starting index of the left subarray for each merge)
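For context, here is a sketch of the kind of bottom-up (iterative) merge sort these conditions come from; the loop bounds match the invariants above, everything else (including the merge helper) is an illustrative minimal version:

#include <stdlib.h>
#include <string.h>

/* Merge the sorted ranges a[l..m] and a[m+1..r] into sorted order (stable). */
static void merge(int a[], int l, int m, int r)
{
    int len = r - l + 1;
    int *tmp = malloc(len * sizeof *tmp);
    int i = l, j = m + 1, k = 0;
    while (i <= m && j <= r)
        tmp[k++] = (a[j] < a[i]) ? a[j++] : a[i++];   /* take left on ties: stable */
    while (i <= m) tmp[k++] = a[i++];
    while (j <= r) tmp[k++] = a[j++];
    memcpy(a + l, tmp, len * sizeof *tmp);
    free(tmp);
}

/* Bottom-up (iterative) merge sort; the loop conditions are the invariants quoted above. */
void merge_sort_iterative(int a[], int n)
{
    /* curr_size doubles each pass: 1, 2, 4, ... while curr_size <= n-1 */
    for (int curr_size = 1; curr_size <= n - 1; curr_size *= 2) {
        /* left_start walks the array in steps of 2*curr_size while left_start < n-1 */
        for (int left_start = 0; left_start < n - 1; left_start += 2 * curr_size) {
            int mid = left_start + curr_size - 1;
            if (mid > n - 1) mid = n - 1;
            int right_end = left_start + 2 * curr_size - 1;
            if (right_end > n - 1) right_end = n - 1;
            merge(a, left_start, mid, right_end);
        }
    }
}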
Upvotes: 2
Reputation: 144969
Recursive and iterative merge sort variants, also referred to as top-down and bottom-up merge sort respectively, have the same time complexity, O(N log N), and the same stability. The running time may be affected by the quality of the implementation, especially cache friendliness, the efficiency of the working-space allocation method, and effective balancing of the fragment sizes in bottom-up merging, which is automatic in top-down merge sort. An interesting and sometimes useful property is that the time complexity does not depend on the actual data distribution: classic implementations have no best case or worst case, just the same O(N log N) complexity for every input.
You can modify the merge sort algorithm to exhibit a better time complexity on arrays that are mostly sorted in ascending or descending order at a small cost, while keeping the same average and worst case time complexity of O(N log N): adding an extra test at the beginning of the merge phase to check whether the two halves are already in order adds at most N comparisons, but reduces the complexity to linear O(N) for sorted arrays.
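Here is a sketch of that test in a top-down merge sort, assuming a standard merge(a, l, m, r) helper that merges the sorted ranges a[l..m] and a[m+1..r]; the function names are illustrative:

void merge(int a[], int l, int m, int r);   /* assumed standard merge helper */

void merge_sort_topdown(int a[], int l, int r)
{
    if (l >= r)
        return;
    int m = l + (r - l) / 2;
    merge_sort_topdown(a, l, m);
    merge_sort_topdown(a, m + 1, r);
    /* Extra test: if the halves are already in order, skip the merge.
       On a fully sorted array every merge reduces to this single comparison,
       so the whole sort performs O(N) comparisons. */
    if (a[m] <= a[m + 1])
        return;
    merge(a, l, m, r);
}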
Another common optimisation for both top-down and bottom-up merge sort is falling back to insertion sort for chunks smaller than a given threshold. This does not change the average time complexity, but it does improve the running time by a noticeable factor, especially on sorted or almost sorted arrays.
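A sketch of that threshold fallback follows; the cutoff value and helper names are illustrative, with insertion_sort_range(a, l, r) assumed to sort a[l..r] in place and merge(a, l, m, r) assumed to merge the sorted halves:

void insertion_sort_range(int a[], int l, int r);   /* assumed helper */
void merge(int a[], int l, int m, int r);           /* assumed helper */

enum { CUTOFF = 32 };   /* illustrative threshold, not from the answer */

void merge_sort_hybrid(int a[], int l, int r)
{
    /* Small chunks are sorted directly with insertion sort. */
    if (r - l + 1 <= CUTOFF) {
        insertion_sort_range(a, l, r);
        return;
    }
    int m = l + (r - l) / 2;
    merge_sort_hybrid(a, l, m);
    merge_sort_hybrid(a, m + 1, r);
    merge(a, l, m, r);
}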
Upvotes: 1