Kr0e

Reputation: 2249

Does Vert.x have real concurrency for single verticles?

This question might look like a troll, but it is actually about how Vert.x manages concurrency, since a verticle itself runs in a dedicated thread.

Let's look at this simple Vert.x HTTP server written in Java:

import org.vertx.java.core.Handler;
import org.vertx.java.core.http.HttpServerRequest;
import org.vertx.java.platform.Verticle;

public class Server extends Verticle {
    public void start() {
        // start() is called on the verticle's dedicated event-loop thread
        vertx.createHttpServer().requestHandler(new Handler<HttpServerRequest>() {
            public void handle(HttpServerRequest req) {
                // callback invoked for every incoming HTTP request
                req.response().end("Hello");
            }
        }).listen(8080);
    }
}

As far as I understand the docs, this whole file represents a verticle. So the start method is called within the dedicated verticle thread. So far, so good. But where is the requestHandler invoked? If it is invoked on exactly this thread, I can't see how it is better than Node.js.

I'm pretty familiar with Netty, the network/concurrency library Vert.x is based on. Every incoming connection is mapped to a dedicated thread, which scales quite nicely. So... does this mean that incoming connections represent verticles as well? But how can the verticle instance "Server" then communicate with those clients? In fact, I would say that this concept is as limited as Node.js.

Please help me understand these concepts correctly!

Regards, Chris

Upvotes: 8

Views: 5077

Answers (3)

Michal Boska

Reputation: 1041

As you correctly answered yourself, Vert.x indeed uses asynchronous, non-blocking programming (like Node.js), so you can't perform blocking operations, because you would otherwise stop the whole (application) world from turning.

You can scale servers, as you correctly stated, by spawning more verticle instances (n = number of CPU cores), each listening on the same TCP/HTTP port.

Where it shines compared to Node.js is that the JVM itself is multi-threaded, which gives you several advantages (from the runtime point of view, leaving aside Java's type safety etc.):

  • Multithreaded (cross-verticle) communication, while still constrained to a thread-safe, Actor-like model, does not require IPC (Inter-Process Communication) to pass messages between verticles - everything happens inside the same process and the same memory region. This is faster than Node.js, which spawns every forked task in a new system process and uses IPC to communicate
  • Ability to do compute-heavy and/or blocking tasks within the same JVM process (see the sketch after this list): http://vertx.io/docs/vertx-core/java/#blocking_code or http://vertx.io/docs/vertx-core/java/#worker_verticles
  • Speed of HotSpot JVM compared to V8 :)
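To illustrate the blocking-code point, here is a minimal sketch using executeBlocking from the linked Vert.x 3 docs. The class name BlockingExample and the method blockingLookup() are purely illustrative stand-ins for whatever slow call you actually have:

import io.vertx.core.AbstractVerticle;

public class BlockingExample extends AbstractVerticle {
    @Override
    public void start() {
        // The first handler runs on a worker thread, so blocking is allowed there;
        // the second handler runs back on this verticle's event loop.
        vertx.<String>executeBlocking(future -> {
            String value = blockingLookup(); // hypothetical slow, blocking call
            future.complete(value);
        }, res -> {
            System.out.println("Result: " + res.result());
        });
    }

    private String blockingLookup() {
        try {
            Thread.sleep(1000); // simulate e.g. a blocking JDBC query
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "done";
    }
}

Worker verticles (the second link) achieve a similar effect at deployment level: the whole verticle is then executed on threads from the worker pool instead of an event loop.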

Upvotes: 1

raniejade

Reputation: 515

Every verticle is single-threaded: upon startup, the Vert.x subsystem assigns an event loop to that verticle, and all code in that verticle will be executed on that event loop. Next time you should ask questions at http://groups.google.com/forum/#!forum/vertx - the group is very lively and your question will most likely be answered quickly.
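To make the single-event-loop behaviour concrete, here is a small sketch using the same Vert.x 2 API as the question (the class name ThreadCheckServer and the port are arbitrary): it prints the current thread name both in start() and in the request handler.

import org.vertx.java.core.Handler;
import org.vertx.java.core.http.HttpServerRequest;
import org.vertx.java.platform.Verticle;

public class ThreadCheckServer extends Verticle {
    public void start() {
        // Printed once when the verticle is deployed
        System.out.println("start() on " + Thread.currentThread().getName());
        vertx.createHttpServer().requestHandler(new Handler<HttpServerRequest>() {
            public void handle(HttpServerRequest req) {
                // Printed for every request - same event-loop thread as start()
                System.out.println("handle() on " + Thread.currentThread().getName());
                req.response().end("Hello");
            }
        }).listen(8080);
    }
}

Both lines report the same event-loop thread, which is the guarantee that makes verticle-local state safe to use without locks.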

Upvotes: 1

Kr0e

Reputation: 2249

I've talked to someone who is quite involved in Vert.x, and he told me that I'm basically right about the "concurrency" issue.

BUT: he showed me a section in the docs, which I had totally missed, where "scaling servers" is explained in detail.

The basic concept is that when you write a verticle, you only get single-core performance. But it is possible to start the Vert.x platform with the -instances parameter, which defines how many instances of a given verticle are run. Vert.x does a bit of magic under the hood so that 10 instances of my server do not try to open 10 server sockets but actually share a single one instead. This way Vert.x is horizontally scalable even for single verticles.
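For reference, a minimal sketch of how this can look in practice with the Vert.x 2 API (the class name Deployer, the instance count, and the default-package verticle name "Server" are just examples; the same thing can be done from the command line):

import org.vertx.java.platform.Verticle;

public class Deployer extends Verticle {
    public void start() {
        // Programmatic equivalent of: vertx run Server.java -instances 10
        // All 10 instances end up sharing the same listening socket under the hood.
        container.deployVerticle("Server", 10);
    }
}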

This is really a great concept and especially a great framework!!

Upvotes: 7
