pulse00

Reputation: 1314

Handling doctrine 2 connections in long running background scripts

I'm running PHP command-line scripts as RabbitMQ consumers which need to connect to a MySQL database. These scripts run as Symfony2 commands using the Doctrine2 ORM, meaning opening and closing the database connection is handled behind the scenes. The connection is normally closed automatically when the CLI command exits - which, by definition, doesn't happen for a long time in a background consumer.

This becomes a problem when the consumer is idle (no incoming messages) for longer than the wait_timeout setting in the MySQL server configuration. If no message is consumed within that period, the database server closes the connection and the next message fails with a "MySQL server has gone away" exception.

I've thought about two solutions to the problem:

  1. Open the connection before each message and close it manually after handling the message.
  2. Implement a ping which runs a dummy SQL query like SELECT 1 FROM table every n minutes, triggered by a cronjob.

The problem with the first approach: if traffic on the queue is high, opening and closing connections may add significant overhead for the consumer. The second approach just sounds like an ugly hack to work around the issue, but at least it lets me keep a single connection open during high-load periods.
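For illustration, here's a rough sketch of a middle ground I'm considering - keeping one connection, but checking it before each message and reconnecting only when it was dropped. This assumes a Doctrine DBAL version that provides Connection::ping(); the helper name ensureConnection is made up:

    <?php
    // Sketch only: keep a single connection, but verify it before each
    // message and reconnect when the server has dropped it. Assumes a
    // Doctrine DBAL version that exposes Connection::ping(); the helper
    // name ensureConnection is invented for this example.
    use Doctrine\ORM\EntityManagerInterface;

    function ensureConnection(EntityManagerInterface $em)
    {
        $connection = $em->getConnection();

        // ping() returns false once wait_timeout has closed the connection.
        if ($connection->ping() === false) {
            $connection->close();
            $connection->connect();
        }
    }

    // At the top of the message callback:
    // ensureConnection($em);
    // ... handle the message ...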

Are there any better solutions for handling doctrine connections in background scripts?

Upvotes: 2

Views: 5965

Answers (3)

amcastror

Reputation: 568

My approach is a little different: my workers only process one message, then die. I have supervisor configured to spawn a new worker every time one exits. So, a worker will (a code sketch follows after this list):

  1. Ask for a new message.
  2. If there are no messages, sleep for 20 seconds before exiting; otherwise supervisor will think something is wrong and stop restarting the worker.
  3. If there is a message, process it.
  4. Optionally, if processing a message is very fast, sleep for the same reason as in step 2.
  5. After processing the message, just exit.

This has worked very well using AWS SQS.
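A rough sketch of that flow (the $queue object and its receiveMessage()/deleteMessage() methods are hypothetical stand-ins for an SQS-like client, as are the handler names):

    <?php
    // One-message-per-process worker: supervisor starts the script, it
    // handles at most one message, then exits so a fresh process (with a
    // fresh database connection) takes over. All names are hypothetical.

    function consumeOne($queue)
    {
        $message = $queue->receiveMessage();

        if ($message === null) {
            // No work: sleep before exiting, otherwise supervisor sees
            // the process dying too quickly and stops respawning it.
            sleep(20);
            return;
        }

        processMessage($message);        // hypothetical handler
        $queue->deleteMessage($message); // acknowledge after success
    }

    consumeOne($queue); // $queue would be built from your client library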

Comments are welcome.

Upvotes: 1

flxPeters

Reputation: 1542

Here is another solution: try to avoid long-running Symfony2 workers. They will always cause problems due to their long execution time; the kernel isn't made for that.

The solution here is to build a proxy in front of the real Symfony command, so every message triggers a fresh Symfony kernel. Sounds like a good solution to me.

http://blog.vandenbrand.org/2015/01/09/symfony2-and-rabbitmq-lessons-learned/
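For illustration, a hedged sketch of such a proxy using the Symfony Process component (the console command name my:message:handle is invented; see the linked post for the full approach):

    <?php
    // Proxy sketch: the long-running consumer callback stays trivial and
    // hands each message to a short-lived child process, which boots a
    // fresh Symfony kernel. The command name my:message:handle is
    // hypothetical.
    use Symfony\Component\Process\Process;

    $callback = function ($message) {
        $process = new Process(
            'php app/console my:message:handle ' . escapeshellarg($message->body)
        );
        $process->run();

        if (!$process->isSuccessful()) {
            // Requeue or log as appropriate; kept minimal here.
            echo $process->getErrorOutput();
        }
    };

    // $callback is then registered as the RabbitMQ consumer callback.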

Upvotes: 2

flxPeters

Reputation: 1542

This is a big problem when running PHP scripts for too long. For me, the best solution is to restart the script from time to time. You can see how to do this in this topic: How to restart PHP script every 1 hour?

You should also run multiple instances of your consumer. Add a counter to each one and terminate it after a certain number of runs. You then need a tool to keep a consistent number of worker processes running, something like this: http://kamisama.me/2012/10/12/background-jobs-with-php-and-resque-part-4-managing-worker/
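A minimal sketch of the counter idea (the limit and function names are made up for illustration):

    <?php
    // The worker exits after a fixed number of messages; a process
    // manager keeps the desired number of consumers alive by starting
    // replacements. Limit and function names are hypothetical.

    $maxMessages = 500; // tune to your workload
    $handled     = 0;

    while ($handled < $maxMessages) {
        $message = waitForNextMessage(); // hypothetical blocking fetch
        processMessage($message);        // hypothetical handler
        $handled++;
    }

    // Exiting here releases the (possibly stale) database connection;
    // the process manager spawns a fresh worker in its place.
    exit(0);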

Upvotes: 0
