Linda Nu

Reputation: 57

Understanding the results from the time command

I am running a program under the /usr/bin/time command several times in succession, and I'm getting the following results:

Invocation 1

real    0m0.044s

user    0m0.000s

sys 0m0.041s

Invocation 2

real    0m0.037s

user    0m0.002s

sys 0m0.032s

Invocation 3

real    0m0.033s

user    0m0.001s

sys 0m0.029s

What could be the reason for these small variations?

And does this suggest that the buffering strategy of the standard I/O library is successful?

Upvotes: 0

Views: 178

Answers (1)

Sridhar Nagarajan

Reputation: 1105

  • real: The total wall-clock time elapsed between the start and the end of the program, including any time it spent waiting rather than running.
  • user: The CPU time the program spent executing in user mode. This includes all user-mode library code the program ran.
  • sys: The CPU time the program spent executing in the kernel, i.e. in system calls made on the program's behalf, not in user-mode library code.
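
As a rough illustration of the difference, here are two commands (a sketch assuming a Linux shell with dd and awk available) that spend their CPU time in opposite places:

    # Mostly sys time: dd with bs=1 issues one read() and one write()
    # system call per byte, so nearly all CPU time is spent in the kernel.
    time dd if=/dev/zero of=/dev/null bs=1 count=500000

    # Mostly user time: awk does pure arithmetic in user space and makes
    # almost no system calls.
    time awk 'BEGIN { for (i = 0; i < 5000000; i++) s += i }'

Timing the first should show sys dominating; the second, user.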

Execution time will vary from one run of a program to the next. The operating system is free to schedule a program's run time and wait time as it sees fit. Most of the time, especially on user-centric OSes (Windows, macOS, Linux), the scheduler tries to be fair, so that no program is starved completely. That still doesn't mean the scheduling will be identical each and every time a program is run.

For instance, maybe the first time you run it there is only one other process (highly unlikely) that needs to run at the same time, so your program gets more time slices, spaced closer together. Then you run your program again, but this time ten other programs are running as well. All of a sudden your code is likely to get only one-tenth of the time slices it had in the previous execution.

Unfortunately, most of the time when you measure execution time, a timestamp is taken just before the program starts running and again once it completes. The measurement therefore covers the entire span from when the program was first scheduled to when it finished, including all the time it spent waiting rather than running.

So, if the program is given 10 time slices at 1-second intervals, it will take about 10 seconds to execute, even if it only runs for a fraction of that time.

That said, if you run your program, say, 100 times and average the elapsed time over all the executions, you will start to converge on the program's typical running time. The more data points you take (say, 1000 runs instead of 100), the closer you get. Even then, the result can differ between OSes, since their scheduling algorithms are slightly different.
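
A minimal sketch of that averaging approach, assuming GNU /usr/bin/time on Linux and a placeholder ./myprog for the program being measured (and that ./myprog writes nothing to stderr):

    runs=100
    total=0
    for i in $(seq "$runs"); do
        # GNU time prints to stderr; -f "%e" outputs the elapsed (real)
        # time in seconds. The program's own stdout is discarded.
        t=$(/usr/bin/time -f "%e" ./myprog 2>&1 >/dev/null)
        total=$(echo "$total + $t" | bc)
    done
    echo "average real time over $runs runs:"
    echo "scale=4; $total / $runs" | bc

Averaging the real time smooths out run-to-run scheduling noise; user and sys times could be averaged the same way with the %U and %S format specifiers.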

Upvotes: 5
