Reputation: 4774
I'm using a LoadingCache:
val cacheLoader =
  new CacheLoader[Key, Value]() {
    override def load(key: Key): Value = loadKeyFunc(key, None)

    override def reload(key: Key, prevValue: Value): ListenableFuture[Value] = {
      val task = ListenableFutureTask.create(new Callable[Value]() {
        def call(): Value = loadKeyFunc(key, Some(prevValue))
      })
      executor.execute(task)
      task
    }
  }
val cache: LoadingCache[FirstPageSearch, Array[String]] =
  CacheBuilder.newBuilder()
    .maximumSize(10)
    .refreshAfterWrite(5, TimeUnit.MINUTES)
    .build(cacheLoader)
loadKeyFunc is just an anonymous function of the form val loadKeyFunc: (Key, Option[Value]) => Value.
The cacheLoader uses an executor (Executors.newFixedThreadPool(6)) to make the refreshing asynchronous.
The system receives HTTP requests that always go through this cache (each request issues a get(key) to the cache) and always serves the cached result. When the entry is too old, it is recalculated in the background and the fresh value is served on the next request.
Everything works fine for several days, maybe weeks. But sometimes (usually during very low-usage hours) the cache stops refreshing, and new requests keep receiving the same old data. I have a log statement inside loadKeyFunc, and I know it isn't being called.
It seems that for some reason the LoadingCache is not seeing that the data is much older than 5 minutes.
After I restart the system (an HTTP server), everything goes back to normal.
Any ideas?
PS: The loadKeyFunc we use is just a simple log statement followed by a call to a stateless object that queries our search backend system and returns a String array (each array position is a search page).
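Roughly, it looks like this (SearchBackend, firstPages, and logger are placeholder names, not the real identifiers):

val loadKeyFunc: (Key, Option[Value]) => Value = (key, prevValue) => {
  logger.info("Executing FirstPageSearch to put in cache")
  // stateless object call; returns Array[String], one entry per search page
  SearchBackend.firstPages(key)
}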
PS2: It is a Scalatra-based application running an embedded Jetty HTTP server. The LoadingCache is created inside the ScalatraServlet object.
A small cleaned-up log (/first_page is the request that always uses the cache, and "Executing..." is the log statement inside the (re)load method of the CacheLoader):
[INFO] [qtp48202314-8659] 2012-11-03 04:55:58 - Request(/first_page)
[INFO] [pool-2-thread-10] 2012-11-03 04:55:58 - Executing FirstPageSearch to put in cache
[INFO] [qtp48202314-8659] 2012-11-03 05:19:17 - Request(/first_page)
[INFO] [qtp48202314-8659] 2012-11-03 05:20:32 - Request(/first_page)
[INFO] [qtp48202314-8661] 2012-11-03 05:25:22 - Request(/first_page)
[INFO] [qtp48202314-8659] 2012-11-03 05:26:09 - Request(/first_page)
[INFO] [qtp48202314-8659] 2012-11-03 05:26:18 - Request(/first_page)
[INFO] [qtp48202314-8661] 2012-11-03 05:38:37 - Request(/first_page)
[INFO] [qtp48202314-8659] 2012-11-03 06:54:36 - Request(/first_page)
[INFO] [qtp48202314-26] 2012-11-03 11:31:37 - Request(/first_page)
[INFO] [pool-2-thread-1] 2012-11-03 11:31:37 - Executing FirstPageSearch to put in cache
[INFO] [qtp48202314-25] 2012-11-03 11:41:53 - Request(/first_page)
[INFO] [qtp48202314-8674] 2012-11-03 14:48:58 - Request(/first_page)
[INFO] [qtp48202314-8674] 2012-11-03 14:54:45 - Request(/first_page)
[INFO] [qtp48202314-8674] 2012-11-03 15:31:32 - Request(/first_page)
[INFO] [qtp48202314-26] 2012-11-03 15:31:48 - Request(/first_page)
[INFO] [qtp48202314-8674] 2012-11-03 15:32:05 - Request(/first_page)
[INFO] [qtp48202314-8674] 2012-11-03 15:44:44 - Request(/first_page)
[INFO] [qtp48202314-8674] 2012-11-03 15:44:44 - Request(/first_page)
[INFO] [qtp48202314-26] 2012-11-03 15:47:39 - Request(/first_page)
[INFO] [qtp48202314-8674] 2012-11-03 15:51:20 - Request(/first_page)
[INFO] [qtp48202314-26] 2012-11-03 15:52:59 - Request(/first_page)
[INFO] [qtp48202314-8674] 2012-11-03 15:54:18 - Request(/first_page)
[INFO] [qtp48202314-26] 2012-11-03 15:55:37 - Request(/first_page)
Upvotes: 0
Views: 1474
Reputation: 591
You can avoid serving stale data by using expireAfterWrite, which makes a hard guarantee that stale entries are never served. I generally recommend combining expireAfterWrite with refreshAfterWrite, but with a larger delay before expiration, to allow time for the refresh to be performed, as in the sketch below.
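A minimal sketch of that combination, reusing the cacheLoader and imports from the question (the 30-minute expiration is just an illustrative value):

val cache: LoadingCache[FirstPageSearch, Array[String]] =
  CacheBuilder.newBuilder()
    .maximumSize(10)
    .refreshAfterWrite(5, TimeUnit.MINUTES)  // refresh in the background after 5 minutes
    .expireAfterWrite(30, TimeUnit.MINUTES)  // hard cap: older entries are dropped and reloaded on access
    .build(cacheLoader)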
Upvotes: 0
Reputation: 198361
From the CacheBuilder.refreshAfterWrite Javadoc:
Currently automatic refreshes are performed when the first stale request for an entry occurs. The request triggering refresh will make a blocking call to CacheLoader.reload(K, V) and immediately return the new value if the returned future is complete, and the old value otherwise.
So when a value is stale, the refresh isn't triggered until you actually query that key, at which point the old value is returned and the refresh runs asynchronously. As soon as the refresh completes, the new value starts being returned from the cache.
Are you sure that's not why you're seeing this behavior?
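To see that behavior in isolation, here is a small self-contained sketch (the types, timings, and slow-backend simulation are invented for illustration, not taken from your code):

import java.util.concurrent.{Callable, Executors, TimeUnit}
import java.util.concurrent.atomic.AtomicInteger
import com.google.common.cache.{CacheBuilder, CacheLoader, LoadingCache}
import com.google.common.util.concurrent.{ListenableFuture, ListenableFutureTask}

object RefreshDemo extends App {
  private val counter = new AtomicInteger(0)
  private val pool = Executors.newSingleThreadExecutor()

  private val loader = new CacheLoader[String, String]() {
    override def load(key: String): String = "v" + counter.incrementAndGet()
    override def reload(key: String, old: String): ListenableFuture[String] = {
      val task = ListenableFutureTask.create(new Callable[String] {
        // simulate a slow backend so the triggering get() sees an incomplete future
        def call(): String = { Thread.sleep(200); "v" + counter.incrementAndGet() }
      })
      pool.execute(task)
      task
    }
  }

  val cache: LoadingCache[String, String] =
    CacheBuilder.newBuilder()
      .refreshAfterWrite(1, TimeUnit.SECONDS)
      .build(loader)

  println(cache.get("k")) // "v1" -- initial (blocking) load
  Thread.sleep(1500)      // entry is now stale, but no refresh has happened yet
  println(cache.get("k")) // still "v1" -- this call schedules the async reload and returns the old value
  Thread.sleep(500)       // give the reload time to finish
  println(cache.get("k")) // "v2" -- the refreshed value
  pool.shutdown()
}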
The other possibility that comes to mind is that for some reason your loadKeyFunc isn't terminating. With a fixed-size thread pool, if you have six queries that have locked up for any reason, that could block new queries from ever entering the pool, which seems like it would cause exactly the problem you observe. Perhaps you should use Executors.newCachedThreadPool, which would avoid that problem -- though you'd still have a memory leak from the locked-up threads. =/
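For what it's worth, the swap itself is a one-liner (assuming the executor is only used for these cache refreshes):

import java.util.concurrent.Executors

// Unbounded cached pool instead of Executors.newFixedThreadPool(6): stuck reload
// tasks can no longer starve new ones, though hung threads would still accumulate.
val executor = Executors.newCachedThreadPool()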
Upvotes: 1