user79074

Reputation: 5270

Can a synchronized block with a future cause a deadlock

Say I do the following:

def foo: Future[Int] = ...

var cache: Option[Int] = None

def getValue: Future[Int] = synchronized {
    cache match {
        case Some(value) => Future(value)
        case None =>
            foo.map { value =>
                cache = Some(value)
                value
            }
    }
}

Is there a risk of deadlock with the above code? Or can I assume that the synchronized block applies even within the future map block?

Upvotes: 0

Views: 217

Answers (3)

Levi Ramsey

Reputation: 20551

Note that a Future itself functions as a cache (see GPI's answer). However, GPI's answer isn't quite equivalent to your code: your code only caches a successful value and will retry after a failure, while if the initial call to expensiveComputation in GPI's answer fails, getValue will always fail.

This, however, gives us retry until successful:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def foo: Future[Int] = ???
private def retryFoo(): Future[Int] = foo.recoverWith { case _ => retryFoo() }
lazy val getValue: Future[Int] = retryFoo()

In general, anything asynchronous involving a Future will not respect the synchronized block, unless you happen to Await on the asynchronous part within the synchronized block (which rather defeats the point). In your case, it's absolutely possible for the following sequence (among many others) to occur:

  • Initial state: cache = None
  • Thread A calls getValue, obtains lock
  • Thread A pattern matches to None, calls foo to get a Future[Int] (fA0), schedules a callback to run in some thread B on fA0's successful completion (fA1)
  • Thread A releases lock
  • Thread A returns fA1
  • Thread C calls getValue, obtains lock
  • Thread C pattern matches to None, calls foo to get a Future[Int] (fC0), schedules a callback to run in some thread D on fC0's successful completion (fC1)
  • fA0 completes successfully with value 42
  • Thread B runs the callback on fA0, sets cache = Some(42), and completes fA1 with value 42
  • Thread C releases lock
  • Thread C returns fC1
  • fC0 completes successfully with value 7
  • Thread D runs the callback on fC0, sets cache = Some(7), and completes fC1 with value 7

The code above can't deadlock, but there's no guarantee that foo will successfully complete exactly once (it could successfully complete arbitrarily many times), nor is there any guarantee as to which particular value of foo will be returned by a given call to getValue.
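Here's a runnable sketch of that double execution, mirroring the question's getValue. slowFoo, the call counter, the object name, and the sleep are hypothetical instrumentation, added only to force the interleaving described above:

import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object RaceDemo {
  private val calls = new AtomicInteger(0)
  private var cache: Option[Int] = None

  // hypothetical stand-in for foo: slow enough that the second caller
  // reaches the None branch before the first future completes
  def slowFoo: Future[Int] = Future { Thread.sleep(100); calls.incrementAndGet() }

  def getValue: Future[Int] = synchronized {
    cache match {
      case Some(value) => Future.successful(value)
      case None => slowFoo.map { value => cache = Some(value); value }
    }
  }

  def main(args: Array[String]): Unit = {
    val f1 = getValue // takes and releases the lock almost immediately
    val f2 = getValue // cache is still None, so slowFoo is called again
    Await.result(Future.sequence(Seq(f1, f2)), 5.seconds)
    println(s"slowFoo ran ${calls.get} times") // typically prints 2, not 1
  }
}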

EDIT to add: You could also replace

cache = Some(value)
value

with

// lock on the enclosing object, not on cache: the Option's monitor changes when cache is reassigned
this.synchronized { cache = cache.orElse(Some(value)) }
cache.get

This would prevent cache from being assigned more than once (i.e. it would always contain the value returned by the first map callback to execute on a future returned by foo). It probably still wouldn't deadlock (if I have to reason about a deadlock, my time is probably better spent reasoning about a better abstraction), but is this elaborate machinery really better than just using a retry-on-failure Future as a cache?
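For reference, a sketch of getValue with that change folded in (the ValueCache name, Future.successful for the cache hit, and the implicit global ExecutionContext are my additions):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object ValueCache {
  def foo: Future[Int] = ???

  private var cache: Option[Int] = None

  def getValue: Future[Int] = synchronized {
    cache match {
      case Some(value) => Future.successful(value) // cache hit: already-completed future
      case None =>
        foo.map { value =>
          // first callback to finish wins; later callbacks keep the existing value
          this.synchronized { cache = cache.orElse(Some(value)) }
          cache.get
        }
    }
  }
}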

Upvotes: 1

Michael Zajac

Reputation: 55569

No, but synchronized isn't actually doing much here. getValue returns almost immediately with a Future (which may or may not be completed yet), so the lock on getValue is extremely short-lived. It does not wait for foo.map to evaluate before releasing the lock, because the mapped function runs only after foo completes, which will almost certainly happen after getValue has returned.
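A small sketch that makes this timing observable (the printlns, the sleep, and the LockTiming name are hypothetical instrumentation, not part of the original code):

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object LockTiming {
  def foo: Future[Int] = Future { Thread.sleep(100); 42 }

  def getValue: Future[Int] = synchronized {
    println("lock taken")
    val result = foo.map { value =>
      println("map callback runs") // well after the lock was released
      value
    }
    println("releasing lock")
    result
  }

  def main(args: Array[String]): Unit = {
    Await.result(getValue, 5.seconds)
    // prints: lock taken, releasing lock, map callback runs
  }
}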

Upvotes: 0

GPI

Reputation: 9328

For a deadlock to occur, at least two different locks must be acquired, possibly in an inconsistent order.

From what you show here (though we do not see what the foo implementation is), this is not the case. Only one lock exists, and it is reentrant (if you enter the same synchronized block twice from the same thread, you won't lock yourself out).

Therefore, no deadlock is possible from the code you've shown.
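For contrast, a minimal sketch of the shape that does deadlock: two locks acquired in opposite orders (lockA, lockB, the sleeps, and the DeadlockDemo name are all hypothetical, purely for illustration):

object DeadlockDemo {
  private val lockA = new Object
  private val lockB = new Object

  def main(args: Array[String]): Unit = {
    val t1 = new Thread(() =>
      lockA.synchronized {
        Thread.sleep(50) // give t2 time to acquire lockB
        lockB.synchronized(println("t1 got both locks")) // blocks forever
      }
    )
    val t2 = new Thread(() =>
      lockB.synchronized {
        Thread.sleep(50) // give t1 time to acquire lockA
        lockA.synchronized(println("t2 got both locks")) // blocks forever
      }
    )
    t1.start(); t2.start()
    t1.join(); t2.join() // never returns: classic lock-ordering deadlock
  }
}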

Still, I question this design. Maybe it is a simplification of your actual code, but from what I understand, you have:

  1. A function that can generate an Int
  2. You want to call this function only once and cache its result

If that's the case, I'd simplify your implementation greatly:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def expensiveComputation(): Int = ???
val foo = Future { expensiveComputation() }
def getValue: Future[Int] = foo

You'd have a single call to expensiveComputation (per instance of your enclosing object), and its return value is effectively cached, because a Future is in and of itself a concurrency-safe construct.
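A quick sketch checking that property (the AtomicInteger counter and the OnceDemo name are hypothetical instrumentation):

import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object OnceDemo {
  private val calls = new AtomicInteger(0)

  def expensiveComputation(): Int = { calls.incrementAndGet(); 42 }
  val foo: Future[Int] = Future { expensiveComputation() } // scheduled once, eagerly
  def getValue: Future[Int] = foo

  def main(args: Array[String]): Unit = {
    // many concurrent readers all share the same Future
    val results = Future.sequence((1 to 100).map(_ => getValue))
    assert(Await.result(results, 5.seconds).forall(_ == 42))
    assert(calls.get == 1) // expensiveComputation ran exactly once
  }
}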

Upvotes: 1
