Reputation: 4146
I am fooling around with a few languages and I want to compare the time it takes to perform some computation. I am having trouble with proper time measurement in Swift. I am trying the solution from this answer, but I get improper results: the execution takes much longer when I run swift sort.swift than after compilation, yet the measured results tell me the opposite:
$ swiftc sort.swift -o csort
gonczor ~ Projects Learning Swift timers
$ ./csort
Swift: 27858 ns
gonczor ~ Projects Learning Swift timers
$ swift sort.swift
Swift: 22467 ns
This is the code:
import Dispatch
import CoreFoundation
var data = [some random integers]
func sort(data: inout [Int]) {
    for i in 0..<data.count {
        for j in i..<data.count {
            if data[i] > data[j] {
                let tmp = data[i]
                data[i] = data[j]
                data[j] = tmp
            }
        }
    }
}
// let start = DispatchTime.now()
// sort(data: &data)
// let stop = DispatchTime.now()
// let nanoTime = stop.uptimeNanoseconds - start.uptimeNanoseconds
// let nanoTimeDouble = Double(nanoTime) / 1_000_000_000
let startTime = clock()
sort(data: &data)
let endTime = clock()
print("Swift:\t \(endTime - startTime) ns")
The same happens when I change the timer to a clock() call or use CFAbsoluteTimeGetCurrent(), and whether I compare a 1000 or a 5000 element array.
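To be concrete, the CFAbsoluteTimeGetCurrent() variant looks roughly like this (same data and sort as above; just a sketch):
// Same data and sort(data:) as above; CoreFoundation is already imported.
let cfStart = CFAbsoluteTimeGetCurrent()
sort(data: &data)
let cfEnd = CFAbsoluteTimeGetCurrent()
// CFAbsoluteTimeGetCurrent() returns seconds, so convert for a nanosecond-style printout.
print("Swift:\t \(UInt64((cfEnd - cfStart) * 1_000_000_000)) ns")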
EDIT:
To be clearer: I know that pasting one run does not produce statistically meaningful results, but the problem is that I see one approach taking significantly longer than the other, while the measurements tell me something different.
EDIT2: It seems I am still not expressing my problem clearly enough. I have created a bash script to show the problem.
I am using the time utility to check how much time it takes to execute each command. Once again: I am only fooling around, I do not need statistically meaningful results. I am just wondering why the Swift timing code tells me something different from what I am experiencing.
Script:
#!/bin/bash
echo "swift sort.swift"
time swift sort.swift
echo "./cswift"
time ./csort
Result:
$ ./test.sh
swift sort.swift
Swift: 22651 ns
real 0m0.954s
user 0m0.845s
sys 0m0.098s
./cswift
Swift: 25388 ns
real 0m0.046s
user 0m0.033s
sys 0m0.008s
As you can see, the results from time show that one command takes more or less 10 times longer to execute than the other, while the timing printed by the Swift code itself is more or less the same in both cases.
Upvotes: 0
Views: 1644
Reputation: 437552
A couple of observations:
In terms of the best way to measure speed, you can use Date or CFAbsoluteTimeGetCurrent, but you'll see that the documentation for those will warn you that:
Repeated calls to this function do not guarantee monotonically increasing results.
This is effectively warning you that, in the unlikely event that there is an adjustment to the system's clock in the intervening period, the calculated elapsed time may not be entirely accurate.
It is advised to use mach_absolute_time if you need a great deal of accuracy when measuring elapsed time. That involves some annoying CPU-specific adjustments (see Technical Q&A 1398), but CACurrentMediaTime offers a simple alternative: it uses mach_absolute_time under the covers (so it does not suffer this problem), but converts the result to seconds to make it really easy to use.
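For example, a minimal sketch of what that might look like with CACurrentMediaTime, reusing the sort(data:) routine from your question:
import Foundation
import QuartzCore   // CACurrentMediaTime is declared here

let start = CACurrentMediaTime()
sort(data: &data)
let elapsed = CACurrentMediaTime() - start   // seconds, backed by mach_absolute_time
print(String(format: "Swift:\t %.6f s", elapsed))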
The aforementioned notwithstanding, it seems that there is a more fundamental issue at play here: it looks like you're trying to reconcile the difference between two very different ways of running Swift code, namely:
swiftc hello.swift -o hello
time ./hello
and
time swift hello.swift
The former compiles hello.swift into a standalone executable. The latter loads the swift REPL, which then effectively interprets the Swift code.
This has nothing to do with the "proper" way to measure time. The time to execute the pre-compiled version should always be faster than invoking swift and passing it a source file. Not only is there more overhead in invoking the latter, but the execution of the pre-compiled version is likely to be faster once execution starts, as well.
If you're really benchmarking the performance of these routines, you should not rely on a single sort of 5000 items. I'd suggest sorting millions of items, repeating this multiple times, and averaging the statistics. A single iteration of the sort is insufficient to draw any meaningful conclusions.
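For example, something along these lines, reusing your sort(data:) function (the item count, iteration count, and the makeRandomData helper are just placeholders to adjust as you see fit):
import Foundation
import QuartzCore

// Hypothetical helper to generate fresh random input for each run.
func makeRandomData(count: Int) -> [Int] {
    return (0..<count).map { _ in Int(arc4random_uniform(1_000_000)) }
}

let iterations = 10
var total = 0.0
for _ in 0..<iterations {
    var data = makeRandomData(count: 10_000)   // increase for a more meaningful test
    let start = CACurrentMediaTime()
    sort(data: &data)
    total += CACurrentMediaTime() - start
}
print(String(format: "average: %.6f s over %d iterations", total / Double(iterations), iterations))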
Bottom line, you need to decide whether you want to benchmark just the execution of the code, or whether you also want to include the overhead of starting the REPL.
Upvotes: 1