Reputation: 2210
I have a C++ binary and I am trying to measure its worst-case performance. I executed it with /usr/bin/time -v <command>
The result was:
User time (seconds): 161.07
System time (seconds): 16.64
Percent of CPU this job got: 7%
Elapsed (wall clock) time (h:mm:ss or m:ss): 39:44.46
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 19889808
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 1272786
Voluntary context switches: 233597
Involuntary context switches: 138
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
How do I interpret this result, and what is causing this application to take this much time?
There is no waiting for user input; it basically deals with a large text file and a database.
I am looking at it from the Linux (OS) perspective. Is it the large number of context switches (round-robin scheduling in Linux) that caused this?
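For reference, the 7% CPU figure follows directly from the other fields above; this worked arithmetic reproduces it:

wall clock = 39*60 + 44.46    = 2384.46 s
CPU time   = 161.07 + 16.64   =  177.71 s
CPU share  = 177.71 / 2384.46 ≈  7%

So the process was actually executing for only about 178 of the 2384 elapsed seconds; the rest of the time it was blocked, which fits the high voluntary context switch count (a process switches voluntarily when it sleeps or waits, e.g. on I/O).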
Upvotes: 1
Views: 261
Reputation: 664
The best thing you can do is to run it with a profiler such as gprof, gperftools, callgrind (part of valgrind) or (the best in my opinion) Intel VTune. They can show you what is going on behind the code. And you had better have debug symbols available (which is not the same as compiling without optimization) to get a clear picture. Otherwise you can only make "best guesses" about what is going on under the hood...
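A minimal sketch of that workflow, assuming the source file is app.cpp and the binary is ./app (both names hypothetical):

# Build optimized *with* debug symbols; -g does not turn off optimization
g++ -O2 -g -o app app.cpp

# Option 1: gprof (requires instrumenting the build with -pg)
g++ -O2 -g -pg -o app app.cpp
./app                                    # writes gmon.out on exit
gprof ./app gmon.out > profile.txt

# Option 2: callgrind (no rebuild needed, but the run is much slower)
valgrind --tool=callgrind ./app
callgrind_annotate callgrind.out.<pid>   # <pid> is the process id of the run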
As I said, I'm biased towards VTune, as it is fast and displays a lot of useful information. Take a look at an example here:
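For instance, a basic hotspots collection with the VTune command-line tool might look like this (again assuming the binary is ./app; the result directory name is an assumption and varies per run):

vtune -collect hotspots -- ./app
vtune -report summary -result-dir r000hs   # or open the result in the GUI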
Upvotes: 0