Reputation: 13
I don't quite get how Vert.x is applied to a web server.
The concept I know for a web server is the thread-based one.
There it is clearly defined which thread does the work for which socket. However, every socket needs its own thread, which becomes expensive once there are many sockets.
Then there is the event-based concept that Vert.x supplies. As far as I have understood it, it should work something like this:
As a webserver example:
class WebServer : AbstractVerticle() {
    lateinit var server: HttpServer

    override fun start() {
        server = vertx.createHttpServer(HttpServerOptions().setPort(1234).setHost("localhost"))
        val router = Router.router(vertx)
        router.route("/test").handler { routingContext ->
            val response = routingContext.response()
            response.end("Hello from my first HttpServer")
        }
        server.requestHandler(router).listen()
    }
}
This WebServer can be deployed multiple times in a Vertx instance, and as far as I can tell, each WebServer instance gets its own thread. But when I connect 100 clients and reply with a simple response, the clients seem to be handled synchronously: if I put a Thread.sleep(1000) into the server handler, one client gets its response every second, one after the other. I would instead expect all server handlers to start their 1-second sleep and then reply to all clients at almost the same time.
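The deployment itself is not shown above; a minimal sketch of what I mean by deploying the verticle multiple times could look like this (using the standard setInstances option; the count of 4 is just an example):
// Sketch: deploy several instances of the WebServer verticle;
// each instance is expected to get its own event-loop thread.
fun main() {
    val vertx = Vertx.vertx()
    vertx.deployVerticle(WebServer::class.java.name, DeploymentOptions().setInstances(4))
}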
This is the code to start 100 clients:
fun main() {
    Vertx.vertx().deployVerticle(object : AbstractVerticle() {
        override fun start() {
            // start 100 clients
            for (i in 0 until 100)
                MyWebClient(vertx)
        }
    })
}
class MyWebClient(val vertx: Vertx) {
    init {
        println("Client starting ...")
        val webClient = WebClient.create(vertx, WebClientOptions().setDefaultPort(1234).setDefaultHost("localhost"))
        webClient.get("/test").send { ar ->
            if (ar.succeeded()) {
                val response: HttpResponse<Buffer> = ar.result()
                println("Received response with status code ${response.statusCode()} + ${response.body()}")
            } else {
                println("Something went wrong " + ar.cause().message)
            }
        }
    }
}
Does anybody know an explanation for this?
Upvotes: 1
Views: 707
Reputation: 17701
There are some major issues there.
When you do this:
class WebServer : AbstractVerticle() {
    lateinit var server: HttpServer

    override fun start() {
        server = vertx.createHttpServer(HttpServerOptions().setPort(1234).setHost("localhost"))
        ...
    }
}
Then something like this:
vertx.deployVerticle(WebServer::class.java.name, DeploymentOptions().setInstances(4))
You'll get 4 verticle instances, but only a single one of them will actually listen on the port. So you're not getting any more concurrency.
Second, when you use Thread.sleep in your Vert.x code, you're blocking the event loop thread.
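A minimal sketch of a non-blocking alternative to Thread.sleep inside the request handler (using vertx.setTimer; the path is taken from the question, the 1000 ms delay and the message are just examples):
// Schedule the response with a timer instead of sleeping,
// so the event loop stays free to handle other requests.
router.route("/test").handler { routingContext ->
    vertx.setTimer(1000) {
        routingContext.response().end("Hello after 1 second")
    }
}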
Third, your test with the client is incorrect. Creating a WebClient is very expensive, so by creating one client after another you're actually issuing the requests very slowly. If you really want to test your web application, use something like https://github.com/wg/wrk
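If you want to stay in Vert.x for a quick check anyway, a minimal sketch would be to create a single WebClient and reuse it for all requests (port, path and request count follow the question; imports omitted as elsewhere in this post):
// One shared WebClient issuing 100 requests,
// instead of constructing a new client per request.
fun main() {
    val vertx = Vertx.vertx()
    val client = WebClient.create(vertx, WebClientOptions().setDefaultPort(1234).setDefaultHost("localhost"))
    repeat(100) {
        client.get("/test").send { ar ->
            if (ar.succeeded()) {
                println("Status: ${ar.result().statusCode()}")
            } else {
                println("Failed: ${ar.cause().message}")
            }
        }
    }
}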
Upvotes: 1
Reputation: 2017
The issue with your code is that by default Vert.x uses at most one thread per verticle (and if there are more verticles than available event-loop threads, a single thread has to handle multiple verticles).
Therefore, if you perform 100 requests against a single instance of a single verticle, all requests are processed by the same thread.
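You can see this by logging the handling thread; a minimal sketch (it only adds a println to the handler from the question):
// Print the event-loop thread that handles each request;
// with a single verticle instance it is always the same one.
router.route("/test").handler { routingContext ->
    println("Handled on ${Thread.currentThread().name}")
    routingContext.response().end("Hello from my first HttpServer")
}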
To solve your issue, you should deploy multiple instances of your verticle, i.e.
vertx.deployVerticle(MainVerticle::class.java.name, DeploymentOptions().setInstances(4))
When doing that, 4 responses will always arrive at nearly the same time, because 4 instances of the verticle are running and thus 4 threads are utilized.
In previous versions of Vert.x, you could also simply configure multi-threading for a verticle if you didn't want to set a specific number of instances.
vertx.deployVerticle(MainVerticle::class.java.name, DeploymentOptions().setWorker(true).setMultiThreaded(true))
However, this feature has been deprecated and replaced with custom worker pools.
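A minimal sketch of deploying a verticle on such a custom worker pool (the pool name and size are arbitrary examples; setWorkerPoolName and setWorkerPoolSize are the relevant DeploymentOptions):
// Run the verticle as a worker on a dedicated, named pool
// instead of the deprecated multi-threaded worker mode.
vertx.deployVerticle(
    MainVerticle::class.java.name,
    DeploymentOptions()
        .setWorker(true)
        .setWorkerPoolName("my-worker-pool")
        .setWorkerPoolSize(10)
)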
For more information concerning this topic, I encourage you to take a look at the Vert.x-core Kotlin documentation
Upvotes: 0