Tal

Reputation: 7997

How to control concurrency per queue?

The Sidekiq documentation suggests I can only control Sidekiq's global concurrency, not concurrency per queue. I'm asking here in the hope that there is a way to set a per-queue concurrency limit. Some third-party services just won't accept high concurrency, and limiting all of Sidekiq just for them is painful.

I'm on sidekiq 3.3

Upvotes: 16

Views: 17418

Answers (4)

Brett Green

Reputation: 3755

I simply remove the concurrency setting from the yml file and set it when I launch each worker.

sidekiq.yml

:queues:
  - default
  - long_running

I am running my workers in a Docker environment and just run a separate worker for each queue:

Default worker starts with

sidekiq -q default -c 5

Long running worker starts with

sidekiq -q long_running -c 2

Alternatively, you could keep separate yml configs and use the -C option to pass a different config file to each worker.
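
As a sketch of that alternative (the file name here is hypothetical), a dedicated config for the long-running worker could look like:

config/sidekiq_long_running.yml

:concurrency: 2
:queues:
  - long_running

and that worker would then be started with:

sidekiq -C config/sidekiq_long_running.yml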

Upvotes: 4

ajit123jain

Reputation: 67

To handle per-queue concurrency you can use the sidekiq-limit-fetch gem, which caps the maximum number of threads a queue can use.
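
A hedged sketch of that approach (the queue names and limits here are placeholders; check the gem's README for your Sidekiq version): add gem 'sidekiq-limit-fetch' to your Gemfile, then declare per-queue limits in sidekiq.yml:

:limits:
  long_running: 2
  default: 5

Limits can also be set at runtime, e.g. Sidekiq::Queue['long_running'].limit = 2.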

Upvotes: 1

Jared

Reputation: 816

Using Heroku, I was able to control concurrency per queue by setting an environment variable in the Procfile and then using it in a sidekiq.rb initializer:

Sidekiq.configure_server do |config|
 config.options[:concurrency] = (ENV['SIDEKIQ_WORKERS_PROCFILE'] || ENV['SIDEKIQ_WORKERS'] || 1).to_i
 ...
end

SIDEKIQ_WORKERS_PROCFILE is set in the Procfile for one queue; the other queues use SIDEKIQ_WORKERS, which is set in the Heroku settings.

I'm not sure if this could be anyhow helpful in your scenario though.

UPDATE

To clarify: the idea relies on deploying to Heroku, with every queue processed in a separate dyno. Each process still uses Sidekiq's global concurrency setting; the dynos are just a workaround that does the job in my use case.

My Procfile looks like this:

web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
default: env HEROKU_PROCESS=default bundle exec sidekiq -c 5
important: env HEROKU_PROCESS=important bundle exec sidekiq -q important -c 5
instant: env HEROKU_PROCESS=instant bundle exec sidekiq -q instant -c 5
matrices: env HEROKU_PROCESS=matrices SIDEKIQ_WORKERS_PROCFILE=1 bundle exec sidekiq -q matrices -c 1

You can see that the matrices worker has the SIDEKIQ_WORKERS_PROCFILE variable set to 1, which lets that worker run its queue at a different concurrency. The variable is read by the sidekiq.rb initializer. Note that the -c 1 option is also passed; I don't know whether that matters, however.

The initializer is already up there.

With everything set up, the Sidekiq dashboard shows the matrices queue running 1 thread while the others use 3 (the SIDEKIQ_WORKERS environment variable is set to 3 in the Heroku settings):

(screenshot of the Sidekiq dashboard)

Upvotes: 5

David Hempy

Reputation: 6237

The Enterprise version of Sidekiq has enhanced concurrency control:

https://github.com/mperham/sidekiq/wiki/Ent-Rate-Limiting

In particular, you can limit the concurrency of arbitrary jobs by count or by interval, e.g. up to 20 jobs at a time, or up to 20 jobs per interval (60 seconds, 1 hour, etc.). The interval can be rolling or clock-aligned.

Used correctly, this could satisfy the per-queue control you're asking about. But it can be much more flexible, controlling concurrency by your own groupings. For example, you could specify that up to 30 jobs per second can hit PayPal; or that 30 jobs per second per state can hit PayPal, and 15 jobs per state can hit Stripe simultaneously. (Assuming 'state' is an attribute in your data, of course.)
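
A rough sketch of the concurrent limiter described on that wiki page (the limiter name, size, and job class here are placeholders):

PAYPAL_LIMITER = Sidekiq::Limiter.concurrent('paypal', 30)

class PaypalJob
  include Sidekiq::Worker

  def perform(payment_id)
    # At most 30 jobs execute this block at once;
    # jobs over the limit are rescheduled to run later.
    PAYPAL_LIMITER.within_limit do
      # call PayPal here
    end
  end
end

There is also a windowed limiter for the "per second / per hour" style limits; see the wiki for details.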

There's no limit to how fine-grained you want to define your groups.

Upvotes: 5
