Dawood Awan

Reputation: 7338

Why would an O(n^2) algorithm run quicker than a O(n) algorithm on the same input?

Two algorithms, say A and B, are written to solve the same problem. Algorithm A is O(n); algorithm B is O(n^2). You expect algorithm A to perform better.

However, when you run a specific example on the same machine, algorithm B runs quicker. Give reasons to explain how such a thing can happen.

Upvotes: 16

Views: 20065

Answers (5)

rliu

Reputation: 1148

While all of the answers so far seem correct... none of them feel really "right" in the context of a CS class. In a computational complexity course you want to be precise and use definitions. I'll outline a lot of the nuances of this question and of computational complexity in general. By the end, we'll conclude why Itay's solution at the top is probably what you should've written. My main issue with Itay's solution is that it lacks definitions which are key to writing a good proof for a CS class. Note that my definitions may differ slightly from your class' so feel free to substitute in whatever you want.

When we say "an algorithm is O(n)" we actually mean "this algorithm is in the set O(n)". And the set O(n) contains all algorithms whose worst-case asymptotic complexity f(n) has the property that f(n) <= c*n + c_0 for some constants c and c_0 where c, c_0 > 0.

Now we want to prove your claim. First of all, the way you stated the problem, it has a trivial solution. That's because our asymptotic bounds are "worst-case". For many "slow" algorithms there is some input on which they run remarkably quickly. For instance, insertion sort is linear if the input is already sorted! So take insertion sort (O(n^2)) and merge sort (O(nlog(n))) and notice that insertion sort will run faster if you pass in a sorted array! Boom, proof done.
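To make the sorted-input claim concrete, here is a small sketch (class and method names are just for illustration) that counts the comparisons insertion sort makes: on already-sorted input it does n - 1 comparisons (linear), while on reverse-sorted input it does n(n-1)/2 (quadratic).

```java
public class InsertionSortCount {
    // Sorts a[] in place and returns the number of element comparisons made.
    static long insertionSort(int[] a) {
        long comparisons = 0;
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0) {
                comparisons++;           // comparing a[j] against key
                if (a[j] <= key) break;  // key is already in place
                a[j + 1] = a[j];         // shift the larger element right
                j--;
            }
            a[j + 1] = key;
        }
        return comparisons;
    }

    public static void main(String[] args) {
        int n = 1000;
        int[] sorted = new int[n];
        int[] reversed = new int[n];
        for (int i = 0; i < n; i++) {
            sorted[i] = i;
            reversed[i] = n - i;
        }
        // Sorted input: one comparison per element => n - 1 total.
        System.out.println(insertionSort(sorted));   // 999
        // Reversed input: i comparisons at step i => n(n-1)/2 total.
        System.out.println(insertionSort(reversed)); // 499500
    }
}
```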

But I am assuming that your exam meant something more like "show why a linear algorithm might run faster than a quadratic algorithm in the worst-case." As Alex noted above, this is an open-ended question. The crux of the issue is that runtime analysis makes assumptions that certain operations are O(1) (e.g. for some problem you might assume that multiplication is O(1) even though it becomes quadratically slower for large numbers; one might argue that the numbers for a given problem are bounded to 100 bits, so it's still "constant time"). Since your class is probably focusing specifically on computational complexity, they probably want you to gloss over this issue. So we'll prove the claim assuming that our O(1) assumptions are right, and so there aren't details like "caching makes this algorithm way faster than the other one".

So now we have one algorithm which runs in f(n) which is O(n) and some other algorithm which runs in g(n) which is O(n^2). We want to use the definitions above to show that for some n we can have g(n) < f(n). The trick is that our assumptions have not fixed the c, c_0, c', c_0'. As Itay mentions, we can choose values for those constants such that g(n) < f(n) for many n. And the rest of the proof is what he wrote above (e.g. let c, c_0 be the constants for f(n) and say they are both 100, while c', c_0' are the constants for g(n) and they are both 1. Then g(n) < f(n) => n^2 + 1 < 100n + 100 => n^2 - 100n - 99 < 0, which holds for all n up to about 100).
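Taking those example constants at face value (treating f(n) = 100n + 100 and g(n) = n^2 + 1 as the actual runtimes, not just bounds), a few lines of code can locate the crossover point; the class name here is hypothetical:

```java
public class Crossover {
    static long f(long n) { return 100 * n + 100; } // cost of the O(n) algorithm
    static long g(long n) { return n * n + 1; }     // cost of the O(n^2) algorithm

    public static void main(String[] args) {
        // Find the largest n for which the quadratic algorithm is still cheaper.
        long last = -1;
        for (long n = 0; n <= 1000; n++) {
            if (g(n) < f(n)) last = n;
        }
        System.out.println(last); // 100
    }
}
```

So with these constants the "slower" O(n^2) algorithm actually wins on every input size up to n = 100.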

Upvotes: 3

Itay Karo

Reputation: 18296

Algorithm A, for example, can run in time 10000000*n, which is O(n).
If algorithm B runs in n*n, which is O(n^2), then A will be slower for every n < 10000000.

O(n) and O(n^2) are asymptotic runtimes that describe the behavior as n -> infinity.

EDIT - EXAMPLE

Suppose you have the two following functions:

boolean flag;

void algoA(int n) {
  for (int i = 0; i < n; i++)
    for (int j = 0; j < 1000000; j++)
      flag = !flag;
}

void algoB(int n) {
  for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
      flag = !flag;
}

algoA has n*1000000 flag flip operations so it is O(n) whereas algoB has n^2 flag flip operations so it is O(n^2).

Just solve the inequality 1000000n > n^2 and you'll get that it holds for 0 < n < 1000000. That is, on those inputs the O(n) method will be slower.
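To check the arithmetic without actually running a billion loop iterations, one can count the flag flips analytically (the class and method names here are just for illustration):

```java
public class FlipCount {
    // Number of flag flips performed by algoA / algoB for input size n.
    static long opsA(long n) { return n * 1_000_000L; } // O(n), large constant
    static long opsB(long n) { return n * n; }          // O(n^2), small constant

    public static void main(String[] args) {
        System.out.println(opsB(1_000) < opsA(1_000));         // true:  B does far less work
        System.out.println(opsB(999_999) < opsA(999_999));     // true:  B still wins just below the crossover
        System.out.println(opsB(1_000_001) < opsA(1_000_001)); // false: past n = 1000000, A wins
    }
}
```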

Upvotes: 38

rishikesh tadaka

Reputation: 483

It depends on the scenario. There are three types of analysis: best case, average case, and worst case. If you know sorting techniques, the same thing happens there. For more information see the following link:

http://en.wikipedia.org/wiki/Sorting_algorithm

Please correct me if I am wrong.

Upvotes: 1

Johannes Egger

Reputation: 4041

Big-O notation says nothing about the speed itself, only about how the running time will change when you change n.

If both algorithms take the same time for a single iteration, @Itay's example is also correct.

Upvotes: 4

Alex Florescu

Reputation: 5151

Knowing the algorithms would help give a more exact answer.

But for the general case, I could think of a few relevant factors:

  • Hardware related

    e.g. if the slower algorithm makes good use of caching & locality or similar low-level mechanisms (see Quicksort's performance compared to other theoretically faster sorting algorithms). Worth reading about timsort as well, as an example where an "efficient" algorithm is used to break the problem up into smaller input sets and a "simpler" and theoretically "higher complexity" algo is used on those sets, because it's faster.

  • Properties of the input set

    e.g. if the input size is small, the efficiency will not come through in a test; also, for example with sorting again, if the input is mostly pre-sorted vs completely random, you will see different results. Many different inputs should be used in a test of this type for an accurate result. Using just one example is simply not enough, as the input can be engineered to favor one algorithm instead of another.

  • Specific implementation of either algorithm

    e.g. there's a long way to go from the theoretical description of an algorithm to an implementation; poor use of data structures, recursion, memory management etc. can seriously affect performance.
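The last point can be illustrated with one of the classic implementation pitfalls (this example is mine, not from the answer above): building a string by repeated concatenation copies the whole string on every step, turning a conceptually linear task into quadratic work, while StringBuilder keeps it linear.

```java
public class ImplMatters {
    // Same task: build a string of n 'x' characters.

    // Naive concatenation copies the entire string each iteration -> O(n^2) work.
    static String concatNaive(int n) {
        String s = "";
        for (int i = 0; i < n; i++) s += "x";
        return s;
    }

    // StringBuilder appends in amortized O(1) -> O(n) work overall.
    static String concatBuilder(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) sb.append('x');
        return sb.toString();
    }

    public static void main(String[] args) {
        int n = 20_000;
        long t0 = System.nanoTime();
        String a = concatNaive(n);
        long t1 = System.nanoTime();
        String b = concatBuilder(n);
        long t2 = System.nanoTime();
        System.out.println(a.equals(b)); // true: both produce the same string
        System.out.printf("naive: %d ms, builder: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}
```

Both methods compute the same result; only the data-structure choice differs, yet the measured times typically diverge dramatically as n grows.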

Upvotes: 4
