Edward Barnard

Reputation: 356

RabbitMQ: unable to get heartbeat working with php-amqplib

I have observed RabbitMQ "stuck" with unacked messages. The queue shows a consumer which no longer exists, and I assume what's happening is that RabbitMQ is continuing to deliver messages to that consumer. They show as an ever-increasing count of unacked messages. I'm doing this in PHP with php-amqplib.

I can produce the problem by killing the consumer process (control-C on command line).

I tried specifying a heartbeat of 3 seconds, and tried keep-alive set to both true and false. With the heartbeat enabled, the consumer will eventually fail:

Exception fwrite(): send of 573 bytes failed with errno=32 Broken pipe
PhpAmqpLib\Wire\IO\StreamIO->error_handler(8, 'fwrite(): send ...',
php-amqplib/PhpAmqpLib/Wire/IO/StreamIO.php(281): fwrite(Resource id #176, '\x01\x00\x01\x00\x00\x00\x15\x00<\x00(\x00\x00\fb...', 8192)
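
For reference, this is roughly how I open the connection (host and credentials here are placeholders, not my real values); in php-amqplib the keepalive and heartbeat settings are the last two arguments to the AMQPStreamConnection constructor:

    use PhpAmqpLib\Connection\AMQPStreamConnection;

    // Placeholder values; the relevant parts are keepalive and heartbeat,
    // which are the last two constructor arguments in php-amqplib.
    $connection = new AMQPStreamConnection(
        'localhost', 5672, 'guest', 'guest', '/',
        false,        // insist
        'AMQPLAIN',   // login method
        null,         // login response
        'en_US',      // locale
        3.0,          // connection timeout
        8.0,          // read/write timeout (at least 2x the heartbeat)
        null,         // stream context
        true,         // keepalive (also tried false)
        3             // heartbeat, in seconds
    );
    $channel = $connection->channel();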

Issue #374 might relate: https://github.com/php-amqplib/php-amqplib/issues/374

The consumer is consuming from multiple queues, but I believe that shouldn't matter.

The problem I'm trying to solve is that RabbitMQ continues to think that a consumer exists when it doesn't, with the result that RabbitMQ delivers those messages nowhere, and they go unacknowledged. I'm looking for a way to get rid of that spurious connection so that those messages can be re-delivered to a live consumer. I think that's what heartbeat is for, but I haven't gotten it to work.
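
The consume loop itself is the usual pattern (simplified here; the queue name and callback are placeholders, and $channel comes from the connection above). As far as I understand, php-amqplib only sends and checks heartbeat frames while it is actually reading or writing on the socket, i.e. inside wait():

    // Register the consumer; manual acks (no_ack = false).
    $channel->basic_consume('my_queue', '', false, false, false, false, $callback);

    // Block waiting for deliveries; heartbeat traffic happens during wait().
    while (count($channel->callbacks)) {
        $channel->wait();
    }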

Upvotes: 3

Views: 3815

Answers (1)

Paulo Victor

Reputation: 4102

The first and most important thing to do in this case is to just print the message content and acknowledge it immediately, without running your real processing code. If you can consume the messages that way, the problem isn't in RabbitMQ but in your own processing: you are probably taking too long before acknowledging the message, so RabbitMQ closes the connection.

I'm not saying this is your case; I'm just trying to help you debug the problem.
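
For example, a stripped-down callback along these lines (just a sketch; adapt it to your setup) is enough to check whether plain consuming works:

    use PhpAmqpLib\Message\AMQPMessage;

    // Debug-only callback: print the body and ack immediately, no real processing.
    $callback = function (AMQPMessage $msg) {
        echo $msg->body, PHP_EOL;
        $msg->delivery_info['channel']->basic_ack($msg->delivery_info['delivery_tag']);
    };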

In my case I changed my approach to the problem: each message carried many product IDs, and acknowledging took a long time because processing them had to hit the database. I split my messages into smaller ones, and it worked well after that.

You can also change the approach, for example by creating additional queues to hold these smaller messages. I can't be sure, but 90% of the time this is the problem.
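
Just to illustrate the idea (the queue name and batch size here are made up), you can publish the IDs in small chunks instead of one big message, so each message can be processed and acked quickly:

    use PhpAmqpLib\Message\AMQPMessage;

    // Split a big list of product IDs into small messages so each one
    // can be processed and acknowledged quickly on the consumer side.
    foreach (array_chunk($productIds, 100) as $batch) {
        $msg = new AMQPMessage(
            json_encode($batch),
            ['delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT]
        );
        $channel->basic_publish($msg, '', 'product_ids');
    }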

You can read more in the RabbitMQ documentation on Detecting Dead TCP Connections with Heartbeats.

Upvotes: 1
