Reputation: 671
We are running our website on AWS serverless infrastructure through Laravel Vapor. It runs on a serverless Aurora database and uses Amazon SQS as the queue driver - basically the defaults for a Laravel Vapor deployment.
We are currently running into a performance issue, and while trying to resolve it we keep hitting other walls along the way. So here we go.
What we want to do
Our customers can have various types of subscriptions that need to be renewed. These subscriptions come in all kinds of flavours: they can have different billing intervals, different currencies and different prices.
On the customer dashboard we want to present a nice card informing the customer about their upcoming renewals, with a good estimate of what the cost will be, so they can make sure they have enough credits:
As our subscriptions come in different flavours, we have one table that holds the basic subscription details, such as net_renewal_date. It also references a subscriptionable_id and subscriptionable_type, which is a morph relationship to the different types of models a user can have a subscription for.
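To make that concrete, the setup looks roughly like this (a simplified sketch; only net_renewal_date and the subscriptionable morph columns are the real names, everything else is trimmed down):

use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

// Migration sketch
Schema::create('subscriptions', function (Blueprint $table) {
    $table->id();
    $table->foreignId('user_id');
    $table->date('net_renewal_date');
    $table->morphs('subscriptionable'); // subscriptionable_id + subscriptionable_type
    $table->timestamps();
});

// Model sketch
class Subscription extends Model
{
    public function subscriptionable()
    {
        // Resolves to whichever concrete model this subscription is for
        return $this->morphTo();
    }
}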
Our first attempt
In our first attempt, we basically added a REST endpoint which fetched the upcoming renewal forecast.
It basically took all Subscriptions that were up for renewal in the coming 30 days. For each of those items, we would calculate the current price in its currency and add tax calculations.
That was then returned as a collection that we further used to: 1/ calculate the total per currency, 2/ filter the collection for items within the next 14 days and calculate the same total per currency.
We would then basically just convert the different amounts to our base currency (EUR) and return the sum thereof.
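In simplified form, the endpoint did something like this (sketch only; the subscriptions() relation and the calculateRenewalAmount()/convertToEur() helpers are stand-ins for our real code, and net_renewal_date is assumed to be cast to a date):

public function upcomingRenewals(Request $request)
{
    // All subscriptions renewing in the next 30 days
    $subscriptions = $request->user()->subscriptions()
        ->whereBetween('net_renewal_date', [now(), now()->addDays(30)])
        ->get();

    // Current price + discount + tax per subscription, in its own currency
    $forecast = $subscriptions->map(fn (Subscription $subscription) => [
        'currency' => $subscription->currency,
        'renewal_date' => $subscription->net_renewal_date,
        'amount' => $this->calculateRenewalAmount($subscription),
    ]);

    // 1/ total per currency for the next 30 days
    $totalPerCurrency = $forecast->groupBy('currency')->map->sum('amount');

    // 2/ the same total per currency, limited to the next 14 days
    $totalPerCurrencyNext14Days = $forecast
        ->filter(fn (array $row) => $row['renewal_date']->lte(now()->addDays(14)))
        ->groupBy('currency')
        ->map->sum('amount');

    // Convert every currency total to EUR and sum
    return response()->json([
        'total_next_30_days_eur' => $this->convertToEur($totalPerCurrency),
        'total_next_14_days_eur' => $this->convertToEur($totalPerCurrencyNext14Days),
    ]);
}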
This worked great for the vast majority of our clients. Even customers with 100 subscriptions were no issue at all.
But then we migrated one of our larger customers to the new platform. He basically had 1500+ subscriptions renewing in the upcoming 30 days, so that didn't go well...
Our second attempt
Because going through the above code simply doesn't finish in an acceptable amount of time, we decided we had to move the simulation calculation into a separate job.
We added an attribute to the subscriptions table and called it simulated_renewal_amount.
This job would need to run every time:
- a price changes
- the customer's discount changes (based on their loyalty, we provide separate prices)
- the exchange rates change
So the idea was to listen for any of these changes, and then dispatch a job to recalculate the simulated amount to any of the involved subscriptions. This however means that a change in an exchange rate for instance can easily trigger 10,000 jobs to be processed.
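To illustrate the fan-out, here is a simplified version of one of the listeners and the job it dispatches (class names are placeholders, and RenewalCalculator stands in for our pricing/discount/tax/FX logic):

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

// Listener sketch: one exchange-rate change dispatches a job per affected subscription
class DispatchRenewalRecalculations
{
    public function handle(ExchangeRateUpdated $event): void
    {
        Subscription::query()
            ->where('currency', $event->currency)
            ->whereBetween('net_renewal_date', [now(), now()->addDays(30)])
            ->pluck('id')
            ->each(fn ($id) => RecalculateSimulatedRenewalAmount::dispatch($id));
    }
}

// Job sketch: recalculates and stores simulated_renewal_amount for a single subscription
class RecalculateSimulatedRenewalAmount implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public function __construct(public int $subscriptionId)
    {
    }

    public function handle(): void
    {
        $subscription = Subscription::findOrFail($this->subscriptionId);

        $subscription->update([
            // current price + loyalty discount + tax, converted with the latest rates
            'simulated_renewal_amount' => app(RenewalCalculator::class)->simulate($subscription),
        ]);
    }
}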
And this is where it becomes tricky. Even though a single job takes less than 1200 ms in most cases, dispatching a lot of jobs that all do the same calculations for a set of subscriptions causes jobs to run for 60+ seconds, at which point they get aborted.
What is the best practice to set up such a queued job? Should I just create one job instead and process them sequentially?
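For reference, the "single job" alternative we have in mind would look something like this (again a sketch, reusing the placeholder RenewalCalculator and the same queue imports/traits as the job sketch above; chunk size and timeout are guesses):

// One job per trigger (e.g. per currency), processing subscriptions sequentially in chunks
class RecalculateRenewalsForCurrency implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public int $timeout = 900; // would need a much higher timeout than the default

    public function __construct(public string $currency)
    {
    }

    public function handle(): void
    {
        Subscription::query()
            ->where('currency', $this->currency)
            ->whereBetween('net_renewal_date', [now(), now()->addDays(30)])
            ->chunkById(500, function ($subscriptions) {
                foreach ($subscriptions as $subscription) {
                    $subscription->update([
                        'simulated_renewal_amount' => app(RenewalCalculator::class)->simulate($subscription),
                    ]);
                }
            });
    }
}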
Any insights on how we can best set this up would be very welcome. We've played around with it a lot, and it always seems to end up with the same kind of issues.
FYI - we host the site on Laravel Vapor, so serverless on AWS infrastructure with an Aurora database.
Upvotes: 1
Views: 1750
Reputation: 1421
We have the same issue. Vapor supports multiple queues, but it does not allow you to set job concurrency on a per-queue basis, so it's not very configurable for drip-feeding lots of jobs. We have solved this by making a seeder job that pulls serialized jobs out of an "instant jobs" table. We also added a sleep loop to allow granular processing throughout the whole minute (a new seeder job is scheduled each minute).
public function handle()
{
    // Run for roughly 50 seconds; the scheduler starts a fresh seeder job every minute.
    $killAt = Carbon::now()->addSeconds(50);

    do {
        InstantJob::orderBy('id')->cursor()->each(function (InstantJob $job) {
            // Throttled jobs stay in the table and are retried on a later pass.
            if ($job->isThrottled()) {
                return true; // continue with the next job
            }

            // Push the stored job onto the actual queue.
            $job->dispatch();
        });

        // Wait a bit before scanning the table again.
        sleep(5);
    } while (Carbon::now()->lessThan($killAt));
}
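The InstantJob model itself is essentially a thin Eloquent wrapper around the serialized job - roughly something like this (simplified sketch; the real columns and logic may differ):

use Illuminate\Database\Eloquent\Model;

class InstantJob extends Model
{
    protected $guarded = [];

    public function dispatch(): void
    {
        // Rebuild the original job from its serialized payload and push it
        // onto the real (SQS) queue...
        dispatch(unserialize($this->payload));

        // ...then remove the row so the next seeder pass doesn't pick it up again.
        $this->delete();
    }
}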
The throttle, if you are interested, works off a throttle key (job group/name, etc.) and looks like:
public function isThrottled(): bool
{
    // Default to "throttled" so we fail closed if neither callback runs.
    $throttled = true;

    Redis::connection('cache')
        ->throttle($this->throttle_key)
        ->block(0)  // don't wait for the lock, just check
        ->allow(10) // jobs
        ->every(5)  // seconds
        ->then(function () use (&$throttled) {
            $throttled = false;
        }, function () use (&$throttled) {
            $throttled = true;
        });

    return $throttled;
}
This actually solves our problem of drip feeding jobs onto the queue without actually starting them.
One question for you... We are currently using a small RDS instance and we get a lot of issues with too many concurrent connections. Do you see this issue with serverless DBs? Do they scale fast enough to ensure no dropouts?
Upvotes: 0