Sreenidhi

Reputation: 245

Running time of algorithms in seconds

I understand that the running time of algorithms is expressed in Big O or Big Omega notation and so on, but I still can't figure out how long, in seconds (or milliseconds), a piece of code takes to execute. For example, if n = 10^6 and the algorithm is O(n), how do I know how long it takes? I also understand that other statements, say inside a for loop, contribute to the running time, and that the time may differ between CPUs. But in coding contests we are usually given a specific time limit, say 2-5 seconds, and I can't decide whether my algorithm is efficient enough or what is making my code slow. Thanks!

Upvotes: 1

Views: 2499

Answers (3)

The advantage of the O-notation is that it's machine-independent, and it's the best indicator of whether one algorithm is more efficient than another. Note that computers get faster every year, so the concrete speed you measure in seconds today will change over time, but with O-notation it will always be clear that solving a problem in O(log n) is much faster than solving it in O(n).

But you still have many tools for measuring the (approximate) execution time of a program. For example, on Linux there is the time command. Nevertheless, the result will also depend on many other variables (including physical ones; your CPU temperature, for instance, may slow your program down). It's not possible to measure the exact running time of a given algorithm because of the environment's noise, and even if you could, the number would only apply to the machine it was measured on.
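As a minimal sketch (assuming a C++ solution, which is common in contests; the question doesn't name a language), you can either run the whole binary under the Linux time command or time a specific piece of code from inside the program with std::chrono:

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const int n = 1000000;                 // the n = 10^6 from the question
    std::vector<long long> a(n, 1);

    auto start = std::chrono::steady_clock::now();

    long long sum = 0;                     // a simple O(n) pass over the data
    for (int i = 0; i < n; ++i) sum += a[i];

    auto end = std::chrono::steady_clock::now();
    double ms = std::chrono::duration<double, std::milli>(end - start).count();

    std::printf("sum = %lld, elapsed = %.3f ms\n", sum, ms);
    return 0;
}
```

The exact number you get will vary with hardware, compiler flags and system load, which is precisely the point above: the measurement tells you about your machine, not about the algorithm in general.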

Upvotes: 0

Andrew Shirley

Reputation: 417

O(n) notation isn't really meant for measuring time; it's a statement about scalability. Something could take a trillion operations and still be O(1), because it always takes exactly that many operations, no matter the size of the input.

The only way to test how the time behaves at scale is to actually run the code and see what happens with inputs of various sizes.
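A rough sketch of that idea (again assuming C++, with a simple O(n) loop standing in for the algorithm under test) is to run the same code on doubling input sizes and watch how the measured time grows:

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

// Time one O(n) pass over a vector of size n and return milliseconds.
static double time_linear_pass(int n) {
    std::vector<int> a(n, 1);
    auto start = std::chrono::steady_clock::now();
    long long sum = 0;
    for (int i = 0; i < n; ++i) sum += a[i];
    auto end = std::chrono::steady_clock::now();
    volatile long long sink = sum;   // keep the result alive so the loop isn't optimized away
    (void)sink;
    return std::chrono::duration<double, std::milli>(end - start).count();
}

int main() {
    // Double n each time: for an O(n) algorithm the elapsed time should
    // roughly double as well; for O(n^2) it would roughly quadruple.
    for (int n = 250000; n <= 4000000; n *= 2)
        std::printf("n = %7d  ->  %.3f ms\n", n, time_linear_pass(n));
    return 0;
}
```

If the times roughly double from row to row, the measured behaviour is consistent with O(n); if they roughly quadruple, you're looking at something closer to O(n^2).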

Upvotes: 0

Sobrique

Reputation: 53488

That's not how big O notation works. What it describes is how an algorithm's cost scales as you add 'things'.

It's not an absolute time but a relative one. With an O(n) algorithm, 100 items will take 10x as long as 10 items. O(n^2) would mean you'd expect a 100x difference (10^2 = 100, 100^2 = 10,000).

And that's all. It's a way of expressing efficiency, not computing runtime. You would still need to know how long one "operation" took, and that's processor/architecture dependent.
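For a rough back-of-envelope illustration (the per-operation cost here is an assumption, not a measurement): if one loop iteration takes about 10 ns on some machine, then an O(n) pass over n = 10^6 items is on the order of 10^6 × 10 ns = 0.01 s, while an O(n^2) algorithm on the same input needs around 10^12 iterations, which is on the order of hours. The notation gives you the ratio; the 10 ns has to come from the machine.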

See also: What is a plain English explanation of "Big O" notation?

"Big-O notation is a relative representation of the complexity of an algorithm."

Upvotes: 1
