Reputation: 966
We have these objects:
// Listing 3.12
@Immutable
class OneValueCache {
    private final BigInteger lastNumber;
    private final BigInteger[] lastFactors;

    public OneValueCache(BigInteger i, BigInteger[] factors) {
        lastNumber = i;
        // Guard against the (null, null) initial cache in Listing 3.13,
        // which would otherwise throw a NullPointerException here.
        lastFactors = (factors == null) ? null
                : Arrays.copyOf(factors, factors.length);
    }

    public BigInteger[] getFactors(BigInteger i) {
        if (lastNumber == null || !lastNumber.equals(i)) {
            return null;
        } else {
            return Arrays.copyOf(lastFactors, lastFactors.length);
        }
    }
}
// Listing 3.13
@ThreadSafe
public class VolatileCachedFactorizer implements Servlet {
    private volatile OneValueCache cache = new OneValueCache(null, null);

    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = cache.getFactors(i);
        if (factors == null) {
            factors = factor(i);
            cache = new OneValueCache(i, factors);
        }
        encodeIntoResponse(resp, factors);
    }
}
The book says that VolatileCachedFactorizer is thread-safe. Why?
For example:
Can this flow be considered thread-safe? What am I not understanding?
Upvotes: 3
Views: 274
Reputation: 1
It's thread-safe. The ultimate goal is to ensure that the output of the service method is correct. OneValueCache always keeps the right relationship between lastNumber and lastFactors.
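A minimal sketch of that point (class and method names here are mine, not the book's): because both fields live in one immutable object, a reader can never observe a number paired with the wrong factors, and the returned copy always matches the number that was checked.

```java
import java.math.BigInteger;
import java.util.Arrays;

// The number/factors pair is one atomic unit: both fields are read from the
// same immutable object, so they can never be observed out of sync.
class PairCache {
    private final BigInteger lastNumber;
    private final BigInteger[] lastFactors;

    PairCache(BigInteger i, BigInteger[] factors) {
        lastNumber = i;
        lastFactors = Arrays.copyOf(factors, factors.length);
    }

    BigInteger[] getFactors(BigInteger i) {
        // Hit only when the cached number matches; return a defensive copy.
        return (lastNumber != null && lastNumber.equals(i))
                ? Arrays.copyOf(lastFactors, lastFactors.length)
                : null;
    }
}

public class PairCacheDemo {
    public static void main(String[] args) {
        BigInteger six = BigInteger.valueOf(6);
        PairCache cache = new PairCache(six,
                new BigInteger[]{BigInteger.TWO, BigInteger.valueOf(3)});
        System.out.println(Arrays.toString(cache.getFactors(six))); // [2, 3]
        System.out.println(cache.getFactors(BigInteger.TEN));       // null
    }
}
```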
Upvotes: -1
Reputation: 43436
Short answer:
Thread safety is not really an absolute. You have to determine the desired behavior, and then ask whether the implementation gives you that behavior in the presence of multithreading.
Longer answer:
So, what's the desired behavior here? Is it just that the right answer is always given, or is it also that each answer is computed exactly once if two threads ask for it in a row?
If it's the latter (that is, if you really want to save every bit of CPU), then you're right: this isn't thread-safe. Two requests could come in at the same time (or close enough to it) asking for the factors of the same number N, and if the timings worked out, both threads could end up factoring that number.
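You can force that interleaving deterministically with a latch (a sketch with names of my own, not the book's): both threads check the cache before either has published, so both compute. The result is duplicated work, never a wrong answer.

```java
import java.math.BigInteger;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates the benign race: the latch holds both threads past the cache
// check before either publishes, so both call factor() for the same number.
public class DuplicateWorkDemo {
    record Entry(BigInteger number, BigInteger[] factors) {}

    static volatile Entry cache = null;
    static final AtomicInteger computeCount = new AtomicInteger();

    static BigInteger[] factor(BigInteger n) {
        computeCount.incrementAndGet(); // count how often we really compute
        return new BigInteger[]{n};     // stand-in for real factoring
    }

    static BigInteger[] service(BigInteger n, CountDownLatch bothChecked)
            throws InterruptedException {
        Entry e = cache;
        BigInteger[] factors =
                (e != null && e.number().equals(n)) ? e.factors() : null;
        bothChecked.countDown();
        bothChecked.await();            // both threads are now past the check
        if (factors == null) {
            factors = factor(n);
            cache = new Entry(n, factors); // racy publish; last write wins
        }
        return factors;
    }

    public static void main(String[] args) throws Exception {
        CountDownLatch latch = new CountDownLatch(2);
        BigInteger n = BigInteger.valueOf(12);
        Runnable task = () -> {
            try { service(n, latch); } catch (InterruptedException ignored) {}
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("computations: " + computeCount.get()); // prints 2
    }
}
```

Both threads return correct factors; the only cost of the race is the second computation.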
But with a single-value cache, you already have the problem of recalculating things you already knew. For instance, what if three requests come in, for N, K, and N again? The request for K would invalidate the cache at N, and so you'd have to recalculate it.
So, this cache is really optimized for "streaks" of the same value, and as such the cost of twice-calculating the first couple (or even few!) answers in that streak might be an acceptable cost: in return, you get code that's free of any blocking and pretty simple to understand.
What's crucial is that it never gives you the wrong answer. That is, if you ask for N and K at the same time, the response for K should never give you the answer for N. This implementation gets you that guarantee, so I would call it thread safe.
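That guarantee is easy to check directly (the cache class is re-declared here under a name of my own so the snippet stands alone): a request for a different number misses rather than returning stale factors, and the defensive copy means callers cannot corrupt the cached array for later readers.

```java
import java.math.BigInteger;
import java.util.Arrays;

// Same shape as the book's OneValueCache: check the number, return a copy.
class ValueCache {
    private final BigInteger lastNumber;
    private final BigInteger[] lastFactors;

    ValueCache(BigInteger i, BigInteger[] factors) {
        lastNumber = i;
        lastFactors = Arrays.copyOf(factors, factors.length);
    }

    BigInteger[] getFactors(BigInteger i) {
        if (lastNumber == null || !lastNumber.equals(i)) {
            return null; // a request for K never receives N's factors
        }
        return Arrays.copyOf(lastFactors, lastFactors.length);
    }
}

public class NeverWrongDemo {
    public static void main(String[] args) {
        BigInteger n = BigInteger.valueOf(12);
        ValueCache cache = new ValueCache(n, new BigInteger[]{
                BigInteger.TWO, BigInteger.TWO, BigInteger.valueOf(3)});
        // Miss: asking for 15 can never return 12's factors.
        System.out.println(cache.getFactors(BigInteger.valueOf(15))); // null
        // Mutating the returned copy leaves the cached array intact.
        BigInteger[] copy = cache.getFactors(n);
        copy[0] = BigInteger.ZERO;
        System.out.println(cache.getFactors(n)[0]); // 2
    }
}
```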
Upvotes: 2