So I run this command:
$ redis-cli --intrinsic-latency 100
... some lines ...
11386032 total runs (avg latency: 8.7827 microseconds / 87826.91 nanoseconds per run).
Worst run took 5064x longer than the average latency.
The problem with this report is that 87826.91 nanoseconds is not equal to 8.7827 microseconds; the correct value is 8782.69 nanoseconds.
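As a quick sanity check of the arithmetic, here is a standalone C sketch (not Redis code; the 8.7827 value is simply copied from the output above):

#include <stdio.h>

int main(void) {
    double avg_us = 8.7827;        /* value from the report above */
    double avg_ns = avg_us * 1e3;  /* 1 microsecond = 1000 nanoseconds */
    printf("%.4f us = %.2f ns\n", avg_us, avg_ns);  /* prints: 8.7827 us = 8782.70 ns */
    return 0;
}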
About versions:
$ redis-cli -v
redis-cli 3.0.5
$ redis-server -v
Redis server v=3.0.5 sha=00000000:0 malloc=jemalloc-3.6.0 bits=64 build=9e32aff68ca15a3f
Upvotes: 0
Views: 376
In redis-cli.c there is this code:
static void intrinsicLatencyMode(void) {
    .......
        double avg_us = (double)run_time/runs;
        double avg_ns = avg_us * 10e3;
        if (force_cancel_loop || end > test_end) {
            printf("\n%lld total runs "
                   "(avg latency: "
                   "%.4f microseconds / %.2f nanoseconds per run).\n",
                   runs, avg_us, avg_ns);
            printf("Worst run took %.0fx longer than the average latency.\n",
                   max_latency / avg_us);
            exit(0);
        }
The problem is in the line that converts microseconds to nanoseconds:
double avg_ns = avg_us * 10e3;
Instead of 10e3 the code should use 1e3: in C, the literal 10e3 means 10 × 10³ = 10000, while 1e3 is 1000:
>gdb -q
(gdb) print 10e3
$1 = 10000
(gdb) print 1e3
$2 = 1000
(gdb)
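A minimal standalone program (not the Redis source; avg_us is hard-coded here for illustration) shows the resulting factor-of-10 difference between the two literals:

#include <stdio.h>

int main(void) {
    double avg_us = 8.7827;           /* example average latency in microseconds */
    double wrong_ns = avg_us * 10e3;  /* buggy conversion: 10e3 == 10000.0 */
    double right_ns = avg_us * 1e3;   /* fixed conversion:  1e3 == 1000.0  */
    printf("buggy: %.2f ns\n", wrong_ns);  /* prints 87827.00, 10x too large */
    printf("fixed: %.2f ns\n", right_ns);  /* prints 8782.70 */
    return 0;
}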
Upvotes: 3