Reputation: 7789
I am facing some odd behavior of the Symfony Messenger component. I set it up according to the documentation and I issue a messenger:stop-workers signal on each deploy, as instructed here. However, a bug occurred in our system which I traced back to the fact that an old version of the code was being used by the Messenger workers.
Upon some more investigation, this is what happens in our setup:
1. I start a worker with app/console messenger:consume --env=prod -vv async to see what happens.
2. I then run app/console messenger:stop-workers --env=prod.
3. The worker keeps running; the stop signal apparently never reaches it.
The Supervisor-managed workers are limited to 1 hour of process time, after which they are stopped and restarted. I can see in the supervisord.log that this works well: every hour there are log entries about the processes stopping and starting. But there is nothing whatsoever about them stopping in response to the messenger:stop-workers command.
I'm looking for ideas on why this would happen. I read the implementation of the workers and the shutdown signal is sent through a cache, but I did not find any problems in our configuration of it.
Upvotes: 15
Views: 8581
Reputation: 666
I added the following task to the Ansistrano after_cleanup_tasks.yaml file:
- name: Stop running Symfony Messenger consumers
  shell:
    chdir: "{{ ansistrano_release_path.stdout }}/../../current"
    cmd: php bin/console messenger:stop-workers
Upvotes: 1
Reputation: 6460
Another option is pkill, in case it is available and allowed to be used:
desc('Stop Messenger workers');
task('messenger:stop-workers', function (): void {
run('pkill --uid {{remote_user}} --echo --full messenger:consume');
});
after('deploy:symlink', 'messenger:stop-workers');
This reliably (and still cleanly) terminates all Messenger workers, independent from any release. The arguments in detail:
- --uid {{remote_user}} limits the matching to processes of the SSH user
- --echo outputs the name and former PID of the terminated processes
- --full also includes the command arguments for matching
- messenger:consume matches the .../console messenger:consume <flags> command line
This makes a few assumptions:
- remote_user is set via ->set('remote_user', '...') on the host() to deploy
- the console messenger:consume processes are started with the same user by Supervisord
Upvotes: 1
Reputation: 1060
Here is a workaround for this problem while we wait for better solutions.
You can use this workaround only if your Symfony Messenger version supports the --failure-limit option, committed here: https://github.com/symfony/symfony/pull/35453/commits/ea79206470ac3b71520a35129d36ca0d11ce4a09
Launch Messenger through Supervisor, always with a failure limit of one:
php bin/console messenger:consume async --failure-limit=1
In your code base, define a message RestartMessenger and the corresponding handler RestartMessengerHandler, which simply throws an exception such as MessengerNeedsToBeRestartedException.
Create a Symfony command called app:messenger-restart-request that dispatches RestartMessenger.
In your deploy script (bash, Ansible or other), add as the last step: php bin/console app:messenger-restart-request. This throws the exception, which causes the worker to restart because of --failure-limit=1. A sketch of these pieces follows.
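Here is a minimal sketch of those three pieces. The class names follow the steps above; the namespaces, the PHP attributes (Symfony 6.1+) and the routing of RestartMessenger to the async transport are assumptions, so adapt them to your project.
// src/Message/RestartMessenger.php
namespace App\Message;

// Marker message; route it to the async transport in messenger.yaml so it
// is consumed by the worker you want to restart.
final class RestartMessenger
{
}

// src/Exception/MessengerNeedsToBeRestartedException.php
namespace App\Exception;

final class MessengerNeedsToBeRestartedException extends \RuntimeException
{
}

// src/MessageHandler/RestartMessengerHandler.php
namespace App\MessageHandler;

use App\Exception\MessengerNeedsToBeRestartedException;
use App\Message\RestartMessenger;
use Symfony\Component\Messenger\Attribute\AsMessageHandler;

#[AsMessageHandler]
final class RestartMessengerHandler
{
    public function __invoke(RestartMessenger $message): void
    {
        // Fail on purpose: with --failure-limit=1 the worker exits and
        // Supervisor starts a fresh process running the new code.
        throw new MessengerNeedsToBeRestartedException('Restart requested by deploy');
    }
}

// src/Command/MessengerRestartRequestCommand.php
namespace App\Command;

use App\Message\RestartMessenger;
use Symfony\Component\Console\Attribute\AsCommand;
use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;
use Symfony\Component\Messenger\MessageBusInterface;

#[AsCommand(name: 'app:messenger-restart-request')]
final class MessengerRestartRequestCommand extends Command
{
    public function __construct(private readonly MessageBusInterface $bus)
    {
        parent::__construct();
    }

    protected function execute(InputInterface $input, OutputInterface $output): int
    {
        $this->bus->dispatch(new RestartMessenger());

        return Command::SUCCESS;
    }
}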
Upvotes: 2
Reputation: 146
I was running into a similar problem.
To force the stop of the rest of the consumers, as you have seen, the command uses a cache pool. In my case (and probably yours too) it is the filesystem pool, which is stored in /your_symfony_app/var/cache/{env}/pools
So if you are using Deployer or any other deployment system that replaces a symbolic link with every new deployment, you need to execute the messenger:stop-workers command inside the folder of your previous release.
Another option is to configure a cache pool shared by all the releases, like memcached or redis.
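As a sketch of that shared-pool option, assuming Redis is available and that your Symfony version delivers the stop signal through the cache.messenger.restart_workers_signal pool, you could override that pool in config/packages/framework.yaml:
framework:
    cache:
        pools:
            # Shared between all releases and servers, so messenger:stop-workers
            # run from any release reaches every worker.
            cache.messenger.restart_workers_signal:
                adapter: cache.adapter.redis
                provider: 'redis://localhost:6379'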
In my case, using Deployer (on software still in development), I was able to solve it by declaring a task like this and putting it inside the main deploy task:
task('messenger:stop', function () {
if (has('previous_release')) {
run('{{bin/php}} {{previous_release}}/bin/console messenger:stop-workers');
}
})->desc('Stop workers');
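One way to wire it in (the exact hook point is a judgment call; this assumes the standard Deployer recipe, where previous_release still points at the prior release after the symlink switch):
after('deploy:symlink', 'messenger:stop');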
Upvotes: 12