Reputation: 575
Quicksort gives us a pretty nice O(n log n); however, I was wondering: is there a way to sort an array with unique values that is faster than Quicksort?
Upvotes: 0
Views: 1514
Reputation: 2129
Without additional assumptions, the lower bound for the worst-case time complexity of any sorting algorithm that uses comparisons is Θ(n log n).
Note that the permutation that sorts p is in fact the inverse of p. This means that if you are able to sort p({1,2,...,n}), then you are able to determine which permutation was applied to your data, out of all possible permutations.
The total number of permutations is n!, and every bit of information acquired (e.g. the outcome of one comparison) partitions your set into two sets of outcomes consistent with that bit. Therefore you can represent the search for the permutation as a binary tree where every node is a set of permutations, the children of a node are the partitions of the parent set, and the leaves are the possible outcomes of your algorithm.
If your algorithm determines which permutation was applied, the leaves are singletons, so you end up with n! leaves. The minimal height of a binary tree with n! leaves is log(n!), which is asymptotically n log(n). http://lcm.csa.iisc.ernet.in/dsa/node208.html is a good reference for all of this.
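A quick sketch of why log(n!) is asymptotically n log(n):

log(n!) = log(1) + log(2) + ... + log(n) <= n log(n)
log(n!) >= log(n/2) + ... + log(n) >= (n/2) log(n/2)

so log(n!) = Θ(n log n), and therefore any comparison-based sort needs on the order of n log(n) comparisons in the worst case.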
Upvotes: 0
Reputation: 453
Regarding sorting algorithms and techniques, @Bilal's answer is quite helpful!
A workaround for the problem could run in O(N*log(N)),
but further calculations will be cheaper because the duplicated values are removed.
So the idea is to read the values and insert them into a std::set,
which automatically removes duplicated values; if the duplicates are needed, you can store each value's count while reading the input.
Sample code would look something like this:
#include <bits/stdc++.h>
using namespace std;

const int MAX_VAL = 1000005; // assumed upper bound on the input values

int n, x;
set<int> st;        // keeps the distinct values in sorted order
int cnt[MAX_VAL];   // cnt[x] = how many times x appeared in the input

int main(){
    cin >> n;
    for (int i = 1; i <= n; i++){
        cin >> x;
        cnt[x]++;      // remember the multiplicity in case duplicates matter
        st.insert(x);  // duplicates are ignored by the set
    }
    // Rest of your code
}
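If you then need the distinct values in sorted order, iterating the std::set already gives them in ascending order. A minimal sketch continuing the code above (it assumes the same st and cnt as before):

// A std::set iterates its keys in ascending order,
// so this prints the distinct values already sorted,
// together with how many times each one appeared.
for (int v : st) {
    cout << v << " x" << cnt[v] << "\n";
}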
Upvotes: 0
Reputation: 633
Here are some of the fastest sorting algorithms and their runtimes:
Upvotes: 2