froi

Reputation: 7768

Is sharing database with multiple serverless functions good practice?

In a CRUD application, the Create, Read, Update, and Delete operations normally share the same database. If we migrate that idea to serverless, is it still ideal for all of those operations, each implemented as a separate function, to access the same database?

My hesitation comes from the general advice against sharing a database between different microservices, since that increases coupling and makes things more fragile.

Upvotes: 9

Views: 1749

Answers (1)

ketcham

Reputation: 932

The answer to this question depends on the circumstances of both the database and the operations.

These concerns can be boiled down to the following characteristics of the database:

  • Can it handle concurrency? This is the number one thing that can stop a network of serverless functions from operating on the same data store. For example, S3 does not handle concurrent writes to the same object well, so a workaround such as Kinesis Data Firehose or an SQS queue would need to be put in front of it for multiple Lambdas to write simultaneously. DynamoDB, on the other hand, handles concurrency with no problem.
  • Is it a transactional (OLTP) or analytical (OLAP) database? This determines how fast reads vs. writes take place, and if an operation is slow, every Lambda invocation performing it pays that cost. This means that if, for example, writes are slow, they should be done in large batches, not in small increments from many concurrent instances.
  • What are its limitations on operation frequency? There can be limitations on both sides of the equation. By default, an AWS account is limited to 1,000 concurrent Lambda executions per region. Databases also often limit how many connections or operations can take place at the same time.
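The batching advice in the second bullet can be sketched in plain Python. This is a minimal illustration, not AWS SDK code: the buffer is just a list, and the batch size of 25 is taken from DynamoDB's `BatchWriteItem` limit of 25 items per call.

```python
# Instead of issuing one write call per record, each function buffers
# records and flushes them in large batches. Here 60 pending records
# become 3 batch calls rather than 60 individual writes.

def into_batches(records, batch_size=25):
    """Group records into batches of at most batch_size items."""
    return [records[i:i + batch_size] for i in range(0, len(records), batch_size)]

pending = [{"id": n} for n in range(60)]
batches = into_batches(pending)

print(len(batches))      # 3 batch calls: 25 + 25 + 10 records
print(len(batches[-1]))  # the last, partial batch holds 10 records
```

In a real Lambda, `into_batches` would feed something like DynamoDB's batch-write API; the point is simply that the number of round trips to the database shrinks by the batch factor.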

In most cases, the first two bullets are the most important, since the limits in the third are rarely reached outside of a few extreme cases.

Upvotes: 5
