Reputation: 4324
I've written an algorithm that looks for duplicates in unsorted lists. It does n - m operations on iteration m, where m is the iteration number and n is the size of the input list. What is its big O?
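For reference, this is the general shape of the code (a rough Python sketch, not my exact implementation): iteration m compares one element against the elements after it, which is roughly n - m operations.

def has_duplicate(items):
    # Sketch: iteration m compares items[m] against the n - m - 1
    # elements that follow it, i.e. roughly n - m operations.
    n = len(items)
    for m in range(n):
        for j in range(m + 1, n):
            if items[m] == items[j]:
                return True
    return False

print(has_duplicate([3, 1, 4, 1, 5]))  # True
print(has_duplicate([3, 1, 4, 5]))     # False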
Upvotes: 1
Views: 197
Reputation: 298432
Iteration m does n - m operations, so the total work through iteration m is:

(n - 0) + (n - 1) + (n - 2) + ... + (n - m)
= n * (m + 1) - (0 + 1 + 2 + ... + m)  # Group the `n`s
= n * (m + 1) - 0.5 * m * (m + 1)      # sum(1...m) = m(m+1)/2

`m` should range from 0 to `n`, so:

= n * (n + 1) - 0.5 * n * (n + 1)      # Just substitute `n` for `m`
= 0.5 * n * (n + 1)                    # Combine the two terms
= 0.5 * n^2 + 0.5 * n                  # Expand
-> 0.5 * n^2                           # n^2 >> n for large enough n
=> n^2                                 # Drop the constant
I'm probably doing this a bit too verbosely.
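If you want to sanity-check the closed form numerically, here's a quick sketch (assuming iteration m costs exactly n - m operations and m runs from 0 to n):

# Compare the brute-force operation count with n(n + 1)/2
for n in (10, 100, 1000):
    total = sum(n - m for m in range(n + 1))  # iteration m costs n - m
    assert total == n * (n + 1) // 2
    print(n, total, total / n ** 2)  # the ratio tends to 0.5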
Upvotes: 4
Reputation: 1058
O(n^2). The work here is n + (n - 1) + (n - 2) + ... + 1 = n(n + 1)/2.
A more intuitive way to look at it is that you do at least n/2 work for at least n/2 iterations, so you do at least n^2/4 work.
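A quick numeric check of that bound (a sketch, assuming the total work is n + (n - 1) + ... + 1):

# The first n/2 iterations each do at least n/2 work, so total >= n^2 / 4
for n in (10, 100, 1000):
    total = sum(range(1, n + 1))   # n + (n - 1) + ... + 1
    assert total >= n * n / 4      # the intuitive lower bound holds
    print(n, total, n * n // 4)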
Upvotes: 7