Reputation: 97
I am having a problem with Guava RateLimiter. I create the RateLimiter with RateLimiter.create(1.0) ("1 permit per second") and call rateLimiter.acquire() on every cycle, but when I run a test I get the following result:
Average: 1232.0 Diff since last: 2540
Average: 1180.0 Diff since last: 258
Average: 1159.0 Diff since last: 746
Average: 1151.0 Diff since last: 997
Average: 1144.0 Diff since last: 1004
Average is the number of milliseconds it sleeps on average, and diff is the number of milliseconds that have passed since the last print. On average it's okay: it does not let my code run more than once per second. But sometimes (as you can see) it runs more than once per second.
Do you have any idea why? Am I missing something?
The code that generates the above output:
private int numberOfRequests;
private Date start;
private long last;
private boolean first = true;

private void sleep() {
    numberOfRequests++;
    if (first) {
        first = false;
        start = new Date();
    }
    rateLimiter.acquire();
    long current = new Date().getTime();
    double num = (current - start.getTime()) / numberOfRequests;
    System.out.println("Average: " + num + " Diff since last: " + (current - last));
    last = current;
}
Upvotes: 0
Views: 905
Reputation: 48874
Your benchmark appears to be flawed; when I try to replicate it, I see very close to one acquisition per second. Here's my benchmark:
import com.google.common.math.StatsAccumulator;
import com.google.common.util.concurrent.RateLimiter;

public class RateLimiterDemo {
    public static void main(String[] args) {
        StatsAccumulator stats = new StatsAccumulator();
        RateLimiter rateLimiter = RateLimiter.create(1.0);
        rateLimiter.acquire(); // discard initial permit
        for (int i = 0; i < 10; i++) {
            long start = System.nanoTime();
            rateLimiter.acquire();
            stats.add((System.nanoTime() - start) / 1_000_000.0);
        }
        System.out.println(stats.snapshot());
    }
}
A sample run prints:
Stats{count=10, mean=998.9071456, populationStandardDeviation=3.25398397901304, min=989.303887, max=1000.971085}
The variance there is almost all attributable to the benchmark overhead (populating the StatsAccumulator and computing the time delta).
Even this benchmark has flaws (though I'll contend less so). Creating an accurate benchmark is very hard, and simple whipped-together benchmarks are often either inaccurate or, worse, don't reflect the actual performance of the code being tested in a production setting.
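If you want to cut the timing bookkeeping out of the numbers entirely, one option is to record the value acquire() itself returns, which is the time (in seconds) the limiter slept to enforce the rate. This is a minimal sketch, not part of the original benchmark; the class name is made up, and it assumes a Guava version where acquire() returns its sleep time (it does in the versions that ship StatsAccumulator):

import com.google.common.math.StatsAccumulator;
import com.google.common.util.concurrent.RateLimiter;

public class RateLimiterSleepDemo {
    public static void main(String[] args) {
        StatsAccumulator sleeps = new StatsAccumulator();
        RateLimiter rateLimiter = RateLimiter.create(1.0);
        rateLimiter.acquire(); // discard the initial permit, which is granted immediately
        for (int i = 0; i < 10; i++) {
            // acquire() returns the seconds this call slept to enforce the rate,
            // so we record the limiter's own accounting instead of timing around the call
            double sleptSeconds = rateLimiter.acquire();
            sleeps.add(sleptSeconds * 1000.0); // store in milliseconds for comparison
        }
        System.out.println(sleeps.snapshot());
    }
}

This still isn't a rigorous benchmark, but it removes the System.nanoTime() overhead from the values being averaged.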
Upvotes: 0