Alex Sergeenko

Reputation: 640

Will WebFlux have any bottlenecks in such architecture?

We're currently about to migrate from a monolithic design to a microservice architecture, trying to choose the best way to replace JAX-WS with RESTful services, and considering Spring WebFlux.

We currently have a JAX-WS endpoint deployed on Tomcat EE serving requests from third-party clients. The webservice endpoint makes a long-running blocking call to the database and then sends a SOAP response to the client with the data retrieved from the DB (Oracle).

The Oracle DB will soon be replaced with one of the NoSQL databases (most likely MongoDB). Since MongoDB supports asynchronous calls, we're considering substituting the current implementation with a microservice exposing a REST endpoint based on WebFlux.

We have about 2500 req/sec at peak, so the current endpoint often goes down with an OutOfMemoryError. That was the root cause that pushed us towards migration.

My thought is to create a non-blocking endpoint that calls MongoDB asynchronously and sends a REST response to the client. So I have a few questions about the basic features that WebFlux provides:

  1. As far as I understand, WebFlux has built-in backpressure control at the business level (not TCP flow control), and it generally works via Reactive Streams. Since our clients are not reactive, does that mean this kind of backpressure control is not applicable here?

  2. Suppose calls to the new database remain long-running in the new architecture. Since Netty uses an event loop to serve incoming requests, is the following scenario possible: the microservice accepts all incoming HTTP connections, makes async calls to the DB and subscribes the resulting Monos on a scheduler, but, since the request volume keeps growing explosively, the application keeps creating new workers in the scheduler pools until it crashes? Is this a realistic scenario?

  3. Suppose calls to the database remain synchronous. Is there a way to handle them with WebFlux in such a way that the microservice stays reachable under load?

  4. Which bottlenecks can be found in such a design? Does this solution look adequate?

  5. Does Netty (or Reactor Netty, or whatever) have a tool to limit the number of requests processed simultaneously? Say I want the endpoint to serve no more than 100 parallel requests and reject everything above that limit; is that possible?

  6. Suppose I create a huge number of threads serving async (or maybe sync) calls to the DB. Where is the breaking point at which the application crashes or stops responding to incoming HTTP requests? What will happen there: will we run out of memory, or something else?
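On questions 2 and 6: the usual safeguard is a bounded worker pool that rejects work up front instead of growing until an OutOfMemoryError (Reactor's `Schedulers.boundedElastic()` is capped in a similar spirit). A plain-JDK sketch of the idea, not WebFlux-specific (class and method names are mine, for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedWorkers {

    // Submits 'tasks' long-running jobs to a pool of 'threads' workers backed
    // by a queue of 'queueSize' slots; returns how many submissions were
    // rejected up front instead of being accepted and hoarding memory.
    static int submitAll(int tasks, int threads, int queueSize) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                threads, threads, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueSize),
                new ThreadPoolExecutor.AbortPolicy()); // fail fast, don't grow

        CountDownLatch release = new CountDownLatch(1);
        int rejected = 0;
        for (int i = 0; i < tasks; i++) {
            try {
                // Simulates a long-running DB call: blocks until released.
                pool.execute(() -> {
                    try { release.await(); } catch (InterruptedException ignored) {}
                });
            } catch (RejectedExecutionException e) {
                rejected++;
            }
        }
        release.countDown();
        pool.shutdown();
        try { pool.awaitTermination(5, TimeUnit.SECONDS); } catch (InterruptedException ignored) {}
        return rejected;
    }

    public static void main(String[] args) {
        // 4 running + 8 queued = 12 accepted; the other 38 are refused immediately.
        System.out.println("rejected=" + submitAll(50, 4, 8));
    }
}
```

The point is that with `AbortPolicy` and a bounded queue the breaking point becomes an explicit, handleable rejection rather than unbounded thread/memory growth.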
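On question 5: I'm not aware of a single built-in "max N in flight" switch at the WebFlux level; one common pattern is guarding the handler with a `Semaphore` and failing fast, e.g. mapping the rejection to HTTP 503. A minimal plain-JDK sketch (the class and method names are mine, for illustration):

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

public class RequestLimiter {

    private final Semaphore permits;

    public RequestLimiter(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    // Runs the handler if a permit is free; otherwise returns the fallback
    // immediately (which a web layer would translate into a 503 response).
    public <T> T handle(Supplier<T> handler, T rejectedFallback) {
        if (!permits.tryAcquire()) {
            return rejectedFallback;
        }
        try {
            return handler.get();
        } finally {
            permits.release();
        }
    }

    public static void main(String[] args) {
        RequestLimiter limiter = new RequestLimiter(100);
        System.out.println(limiter.handle(() -> "served", "503 Service Unavailable"));
    }
}
```

In a WebFlux setup the same guard could live in a `WebFilter`, so requests above the cap are skipped before any DB call is made.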

Upvotes: 1

Views: 433

Answers (1)

Alex Sergeenko

Reputation: 640

In the end, there were no major performance issues during our pilot project. Unfortunately, though, we didn't take into account some Linux-specific (and also OpenShift-specific) TCP tuning properties.

They can significantly affect overall performance; in our case we handled about 10 times more requests after tuning.

So pay attention to net.core.somaxconn and the other related parameters. I've summarized our experience in the article.
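For reference, these knobs live in sysctl; an illustrative fragment (the values here are examples only, not recommendations — tune them against your own load):

```
# /etc/sysctl.d/99-net-tuning.conf — illustrative values only
net.core.somaxconn = 4096            # cap on the TCP accept backlog
net.ipv4.tcp_max_syn_backlog = 4096  # half-open connections held before SYNs are dropped
```

Note that the accept backlog is also bounded by what the server socket itself requests (e.g. Netty's SO_BACKLOG option), so both sides may need raising.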

Upvotes: 1
