guptasaanika

Reputation: 137

Why do we use big-O notation for best and average cases as well?

here

As we can see, the table lists best, worst and average case time complexities for different algorithms. For merge sort, the best case should be Ω(n log n), but instead it's given as O(n log n). Similarly, the average case should have been given as Θ(n log n), but big-O notation is used there too. In fact, big-O notation is used everywhere in this table, no matter whether it is the best case or the average case. Please explain why.

Upvotes: 2

Views: 4990

Answers (1)

templatetypedef

Reputation: 373022

In practice, there are two versions of asymptotic notation in use.

  • Formal, mathematically rigorous asymptotics. If you're working in a mathematical context (for example, you're trying to prove tight bounds on some expression, or you're trying to argue why a certain algorithm doesn't exist), then you absolutely need to choose from O, Ω, o, ω, Θ, etc. properly in the course of making an argument because they have specific, technical meanings. This is why, for example, if you pick up a CS theory paper you'll see a mix of different asymptotic notations tossed around. (The standard definitions are sketched just after this list.)

  • Informal, layperson usage. Most practicing software engineers are interested in big-O notation inasmuch as it relates to overall program efficiency. In this context, big-O notation is used in a way that's not technically mathematically correct but is still a good proxy for what's meant. For example, someone might decide to pick one data structure over another with the justification that "operations on the first data structure take time O(log n), while operations on the second take time O(n)" even though such a statement is analogous to saying something like "Amit is shorter than Pranav, because Amit is at most 2m tall and Pranav is at most 5m tall." Although this isn't mathematically correct, in the way that the term is commonly tossed around it's usually clear what's meant.
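For reference, the usual textbook definitions behind that first bullet can be written down compactly. The sketch below uses the standard quantifier form with constants c and n₀; nothing here is specific to the table from the question.

```latex
% Standard asymptotic bounds for nonnegative functions f and g:
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 \ge 0 \text{ such that } f(n) \le c\, g(n) \text{ for all } n \ge n_0   % upper bound
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ge 0 \text{ such that } f(n) \ge c\, g(n) \text{ for all } n \ge n_0   % lower bound
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))                                               % tight (two-sided) bound
```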

The challenge with these notations is that if you're expecting a super rigorous, precise, mathematically accurate description of an algorithm's runtime and you get a layperson use of big-O notation, you'll be confused because the literal meaning of what's said might be wrong. Similarly, if you're a software engineer who's used to the layperson version of big-O notation and someone starts tossing around Θ and Ω notation, it can be confusing because you might not be used to seeing it.

I think the "best" answer to your question is "the people making that table probably should be using more precise asymptotic notation; even though what they're doing isn't technically ideal, it's a fairly common way to present this information." Since I tend to spend a lot of time in Theoryland, I would personally prefer if they switched to different asymptotic notation here, but since I also interface with a bunch of software engineers I completely understand why they didn't.
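To tie this back to the merge sort row the question mentions, here is a sketch of how the notations relate for that algorithm, assuming the standard top-down merge sort, whose running time is n log n up to constant factors on every input:

```latex
% For standard merge sort on an n-element input, every case (best, average, worst)
% satisfies T(n) = \Theta(n \log n), which by definition means both bounds hold:
T(n) = O(n \log n)        % upper bound: at most c_1 \cdot n \log n steps for large n
T(n) = \Omega(n \log n)   % lower bound: at least c_2 \cdot n \log n steps for large n
% So writing O(n log n) in the best-case column is technically true, just less
% informative than \Theta(n \log n), which is what the question is pointing out.
```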

Upvotes: 3
