Reputation: 21
I'm a new student in CS, and right now I'm self-studying intro to programming. I was studying the selection sort algorithm, and I think the change below makes it more efficient. Is this true, or am I missing something? The change: instead of calling the swap function every time, even when no change was made to the array, we add a changeMade Boolean variable and use it to call the function only when a change was actually made. Please correct me if I'm wrong.
Declare Integer startScan, i, minValue
Declare Integer minIndex
//the Boolean variable
//that could make the algorithm
//more efficient
Declare Boolean changeMade
//Declare the array, declare its size,
//and initialize it
Constant Integer SIZE = 5
Declare Integer array[SIZE] = 1, 4, 8, 2, 5

For startScan = 0 To SIZE-2
   set changeMade = False
   set minValue = array[startScan]
   set minIndex = startScan
   For i = startScan+1 To SIZE-1
      If array[i] < minValue Then
         set minValue = array[i]
         set minIndex = i
         set changeMade = True   //the modification
      End If
   End For
   If changeMade = True Then
      call swap(array[minIndex], array[startScan])
   End If
End For
Module swap(Integer Ref a, Integer Ref b)
   Declare Integer temp
   set temp = a
   set a = b
   set b = temp
End Module
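For readers who want to run the idea, here is my own Python rendering of the pseudocode above (names follow the pseudocode; the swap is inlined since Python swaps with tuple assignment):

```python
def selection_sort_with_flag(array):
    """Selection sort that only swaps when a smaller element was found."""
    size = len(array)
    for start_scan in range(size - 1):
        change_made = False
        min_value = array[start_scan]
        min_index = start_scan
        for i in range(start_scan + 1, size):
            if array[i] < min_value:
                min_value = array[i]
                min_index = i
                change_made = True  # the proposed modification
        if change_made:
            # swap only when the minimum moved
            array[min_index], array[start_scan] = array[start_scan], array[min_index]
    return array
```

Calling `selection_sort_with_flag([1, 4, 8, 2, 5])` sorts the sample array from the pseudocode in place.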
Upvotes: 2
Views: 266
Reputation:
That sounds like a false good idea.
This heuristic only pays off for elements that are already in their correct position.
Assume 10% of them are, which can be considered optimistic. While sorting N elements, you will save 0.1 N swaps but add a large number of assignments to the flag (up to N²/2!) and N tests of the flag (conditional instructions are relatively slow).
Unless the swap is really costly, odds are high that the overhead of manipulating the flag will dominate.
It is certainly a better idea to drop the flag and test minIndex != startScan instead, but even so, it is not certain that avoiding the swap will outweigh the extra work.
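A sketch of that flag-free variant (my own illustration, not the asker's code): the minimum's final index tells you whether a swap is needed, so no bookkeeping happens inside the inner loop.

```python
def selection_sort(array):
    """Selection sort that skips the swap when the minimum is already in place."""
    size = len(array)
    for start_scan in range(size - 1):
        min_index = start_scan
        for i in range(start_scan + 1, size):
            if array[i] < array[min_index]:
                min_index = i
        if min_index != start_scan:  # one test per pass, none per comparison
            array[min_index], array[start_scan] = array[start_scan], array[min_index]
    return array
```

The test costs one comparison per outer pass (N-1 total) instead of up to N²/2 flag assignments.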
Upvotes: 0
Reputation: 137
Operations such as swap are essentially ignored when calculating complexity.
Strictly speaking, all operations count toward time complexity, but because the loops dominate, we ignore the other operations and keep only the dominant ones (for large inputs, the cost of everything else is much smaller than the cost of the dominant operations).
As an example with selection sort: when you take every statement's cost into account, you get a function f(n) = an² + bn + c (a, b, and c are constants that depend on the machine architecture). Here the dominant term is an², so we say the time complexity of selection sort is O(n²). We also drop the leading coefficient a, since it does not change the rate of growth.
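A quick way to see the dominant term concretely is to count the inner-loop comparisons: selection sort always performs n(n-1)/2 of them, whatever the input, so the constant-cost bookkeeping (swaps, flag updates) vanishes asymptotically. A minimal sketch (my own illustration):

```python
def count_comparisons(n):
    """Count the inner-loop comparisons selection sort makes on n elements."""
    count = 0
    for start_scan in range(n - 1):
        for i in range(start_scan + 1, n):
            count += 1  # one array[i] < minValue comparison
    return count

# n(n-1)/2 grows like n^2/2 -- the an^2 term dominates
for n in (5, 10, 100):
    assert count_comparisons(n) == n * (n - 1) // 2
```

Compare this quadratic count with the at most n-1 swaps: that gap is why swap-related tweaks barely move the total cost.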
Have you read about asymptotic analysis and the notations theta, omega, and big O? Have a look at them; they will help you answer your own question.
Upvotes: 2