sectornitad

Reputation: 981

Play Futures and network latency - what blocks, at what level?

So we have a JVM on an OS, and in the JVM a Play (Scala) app is running. This app uses Futures to go out and make three API calls:

import play.api.mvc.{Action, Controller}
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import scala.concurrent.Future

object MyAwesomeController extends Controller {

  // T is a placeholder for whatever the services return
  val call1: Future[T] = scala.concurrent.future { /* call out across the wire to some awesome service */ }

  val call2: Future[T] = scala.concurrent.future { /* call out across the wire to another awesome service */ }

  val call3: Future[T] = scala.concurrent.future { /* call out across the wire to yet another different awesome service */ }

  def index() = Action.async { implicit request =>

    for {
      res1 <- call1
      res2 <- call2
      res3 <- call3
    } yield {
      Ok(views.html.index(res1, res2, res3))
    }

  }
}

Now, as I understand it, by declaring the futures as vals in the object they will be evaluated when the (singleton) MyAwesomeController object is instantiated - which may not be a great idea, as each Future[T] will then hold the same result for the lifetime of the Play app. However, it serves to frame my question. The Play app (which is probably a StaticApplication() invocation) is not blocking on its thread, but somewhere, somehow, something must have blocked to wait for the results of the three 'concurrent' calls out across the wire. So is this blocked at the JVM level, or is it a socket block at the OS level?
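To make that val-versus-def distinction concrete, here is a minimal sketch (callService is just a hypothetical stand-in for the call across the wire):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object CallTiming {
  def callService(): String = "response"  // hypothetical stand-in for the real call across the wire

  // Evaluated exactly once, when this singleton object is first initialized;
  // the Future (and eventually its result) is then fixed for the lifetime of the app.
  val once: Future[String] = Future { callService() }

  // Evaluated afresh on every invocation, i.e. a new call per request.
  def perRequest(): Future[String] = Future { callService() }
}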

Now, if I had the API calls inside the for-yield comprehension, thusly:

for {
  res1 <- scala.concurrent.future { /* call out across the wire */ }
  res2 <- scala.concurrent.future { /* call out across the wire */ }
  res3 <- scala.concurrent.future { /* call out across the wire */ }
} yield ...

then each call to this route/controller would fire off the API calls but would only yield when all three have completed (or timed out?). So again, somewhere on the system something is blocking. Surely then a whole bunch of clients requesting that route/controller will fire off all three API calls on every request. If things get busy and the route/controller is requested by thousands of browsers, then somewhere on the box a bunch of threads are indeed actually blocking (assuming no caching). So sure, Play itself is not blocking, but the system as a whole is tying up a lot of blocked resources...?
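For concreteness, the kind of future body I am imagining is a plain blocking HTTP call, something like this hypothetical helper, where (as I understand it) a thread from the execution context is parked for the duration of each call:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object BlockingStyle {
  // Hypothetical blocking variant: the pool thread running this future body sits
  // inside an OS-level read until the whole response has arrived.
  def blockingCall(url: String): Future[String] =
    Future { scala.io.Source.fromURL(url).mkString }
}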

I am trying to get a level-by-level understanding of the whole stack here. At some point there has to be blocking going on. Some thread (OS or JVM) has to be sitting there, twiddling its fingers, waiting for the results, even if the main Play thread is able to scoot on and serve other requests.

Am I the proverbial dog barking up the proverbial wrong tree here, or am I onto something?

Thanks for your assistance in advance! Future[Thanks]

Upvotes: 1

Views: 127

Answers (2)

poroszd

Reputation: 602

I am trying to get a level-by-level understanding of the whole stack here. At some point there has to be blocking going on.

No, this is not the case if you are using asynchronous I/O libraries (such as NIO or Netty). Understanding how they avoid blocking will lead you to low-level kernel facilities like select, poll or epoll, and to interrupts.

Wikipedia might be a good starting point: http://en.wikipedia.org/wiki/Asynchronous_I/O
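As a rough illustration (not Play's or Netty's actual internals), here is a sketch in Scala of plain Java NIO, where one thread multiplexes readiness events for a non-blocking socket via a Selector; the host and request are just placeholders:

import java.net.InetSocketAddress
import java.nio.ByteBuffer
import java.nio.channels.{SelectionKey, Selector, SocketChannel}

object NioSketch extends App {
  val selector = Selector.open()
  val channel  = SocketChannel.open()
  channel.configureBlocking(false)                        // never block on this socket
  channel.connect(new InetSocketAddress("example.org", 80))
  channel.register(selector, SelectionKey.OP_CONNECT)

  // One thread waits in select() for readiness events on any number of channels;
  // under the hood this is select/poll/epoll, so there is no thread parked per request.
  while (selector.select() > 0) {
    val keys = selector.selectedKeys().iterator()
    while (keys.hasNext) {
      val key = keys.next(); keys.remove()
      if (key.isConnectable && channel.finishConnect()) {
        channel.register(selector, SelectionKey.OP_READ)
        channel.write(ByteBuffer.wrap("GET / HTTP/1.0\r\nHost: example.org\r\n\r\n".getBytes("UTF-8")))
      } else if (key.isReadable) {
        val buf = ByteBuffer.allocate(4096)
        if (channel.read(buf) < 0) { channel.close(); selector.close(); sys.exit() }
        print(new String(buf.array(), 0, buf.position(), "UTF-8"))
      }
    }
  }
}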

Upvotes: 0

Rado Buransky

Reputation: 3294

Given the details you have provided (and the fact that you have no actual "problem"), I would recommend this article, which should explain everything: http://www.playframework.com/documentation/2.2.x/ThreadPools
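As a small sketch of the pattern that page describes (assuming Play 2.2 and a hypothetical contexts.blocking-io entry in application.conf), blocking calls can be pushed onto their own dispatcher so they do not starve Play's default pool:

// In application.conf (assumed):
//   contexts.blocking-io {
//     fork-join-executor { parallelism-factor = 20.0 }
//   }

import play.api.Play.current
import play.api.libs.concurrent.Akka
import scala.concurrent.ExecutionContext

object Contexts {
  // A dedicated execution context for blocking I/O, looked up from the Akka configuration.
  implicit val blockingIo: ExecutionContext =
    Akka.system.dispatchers.lookup("contexts.blocking-io")
}

// Usage: run the blocking body on the dedicated pool instead of the default one, e.g.
// Future { someBlockingApiCall() }(Contexts.blockingIo)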

Upvotes: 0
