Symfony2 and RabbitMQ: Lessons learned

Last year we introduced RabbitMQ into our stack. We were in desperate need of a worker queue and, after fiddling around with Gearman, Beanstalkd and RabbitMQ, we made our choice: RabbitMQ it would be.

Now there’s quite a lot of information to be found on RabbitMQ and how to use it, but many things you have to find out for yourself. Questions like:

  • what happens to messages on the queue after a service restart?
  • what happens to messages on the queue after a reboot?
  • how do we notice that a worker crashed?
  • what happens to my message when the consumer dies while processing?
  • etc.

Using RabbitMQ and Symfony2 (or php in general) is quite easy. There is a bundle for Symfony2 called OldSoundRabbitMqBundle and a php library called php-amqplib, which both work very well. Both are from the same author; you should probably thank him for that 🙂

First try: pure php consumers

We’re running a fairly common setup. Because we’ve been warned that php consumers die every now and then, we’re using Supervisor to start new consumers when needed. There is a lot of information out there on this subject, so I won’t go into it here.

Despite the warnings we started with pure php consumers powered by the commands in OldSoundRabbitMqBundle. The first workers were started like this:
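
Something along these lines, using the bundle’s stock rabbitmq:consumer command (the console path is an example; adapt it to your setup):

    $ php app/console rabbitmq:consumer async_event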

This means we’re consuming from the async_event queue without any limit on the number of messages. Basically the worker will run forever, or better said: until php crashes. Or worse: your consumer ends up in a non-responsive state, where it doesn’t process any messages any more while Supervisor thinks all is fine because there is still a running process. This happened once to our mail queue. I can assure you it’s better to prevent this kind of thing.

Second try: pure php consumers with limited messages

So after the mail-gate I was searching for a quick way to make our setup more error-proof. OldSoundRabbitMqBundle supports limiting the number of messages to process, so I limited our workers in such a way that they got restarted a couple of times a day:
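
For example (the limit of 250 is only illustrative; pick a number that matches your traffic so the workers restart a couple of times a day):

    $ php app/console rabbitmq:consumer -m 250 async_event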

After that things ran more smoothly and it took a while before we encountered new problems. While sifting through the logs I noticed some consumers produced errors. A brief summary:

  • General error: 2006 MySQL server has gone away
  • Warning: Error while sending QUERY packet.

Because the consumer is one process that keeps running, the service container and everything else stays in memory. Once you’ve run some queries, the database connection stays open in the background. And if it’s quiet on our queue, it may take some time before we reach the message limit. If that idle time exceeds the wait_timeout of your MySQL server, you’ll run into the warnings and errors above about lost connections.

Of course we could close the connection after each message is processed, try/catch Doctrine DBAL connection exceptions, or increase the wait_timeout setting, but that’s just denying the real problem: running consumers with a booted Symfony2 kernel just doesn’t work so well.
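
For the record, a minimal sketch of the clean-up work-around we decided against, assuming an OldSoundRabbitMqBundle consumer with the Doctrine EntityManager injected:

    <?php

    use Doctrine\ORM\EntityManager;
    use OldSound\RabbitMqBundle\RabbitMq\ConsumerInterface;
    use PhpAmqpLib\Message\AMQPMessage;

    class AsyncEventConsumer implements ConsumerInterface
    {
        private $entityManager;

        public function __construct(EntityManager $entityManager)
        {
            $this->entityManager = $entityManager;
        }

        public function execute(AMQPMessage $msg)
        {
            // ... unserialize the message body and do the actual work ...

            // Close the possibly stale connection after every message;
            // Doctrine DBAL reconnects lazily on the next query.
            $this->entityManager->getConnection()->close();

            return true; // acknowledge the message
        }
    }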

A final resort could be to strip down the consumers and not use the Symfony2 kernel and container, but we don’t like that. Our messages are most of the time serialized events, which get dispatched again after the consumer picks them up. At the application level we don’t want to know whether we are in a RabbitMQ consumer or in a normal HTTP request.

Real solution: rabbitmq-cli-consumer

So it took a couple of months to learn the hard way that we needed a different solution for our consumers. I found this interesting blog post about the same problem; the author solved it with Java and Ruby consumers. We all learned Java in college, right? But I don’t like running the memory-eating JVM on our servers. The Ruby consumer unfortunately lacks good documentation for a Ruby newbie like me, so I got a bit lost there.

That was the point where Go came in. Go is a kind of improved C: no real OO, but a lot of cool stuff in it. I wrote an application that consumes messages from a RabbitMQ queue and pipes them into a command line application. I called it: rabbitmq-cli-consumer.

The main advantages of using rabbitmq-cli-consumer are:

  • no more stability issues to deal with
  • lightweight and fast
  • no need to restart your workers after a fresh deployment

We still use Supervisor to start and stop the consumers because it’s the right tool for the job. An example of how we start a consumer:
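
The program definition below is an example; the executable path, command name and config location will differ per setup:

    [program:async_event_consumer]
    command=/usr/local/bin/rabbitmq-cli-consumer -e "/var/www/app/console event:consume" -c /etc/rabbitmq-cli-consumer/async_event.conf
    autostart=true
    autorestart=true

The referenced configuration file tells the consumer where to connect and which queue to read, something like:

    [rabbitmq]
    host = localhost
    username = guest
    password = guest
    vhost = /
    port = 5672
    queue = async_event

    [logs]
    error = /var/log/rabbitmq-cli-consumer/error.log
    info = /var/log/rabbitmq-cli-consumer/info.log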

An example of a Symfony2 command we use:
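
A sketch (command and argument names are illustrative); by default rabbitmq-cli-consumer passes the message body base64-encoded as the last argument and treats exit code 0 as an acknowledgement:

    <?php

    namespace Acme\AppBundle\Command;

    use Symfony\Component\Console\Command\Command;
    use Symfony\Component\Console\Input\InputArgument;
    use Symfony\Component\Console\Input\InputInterface;
    use Symfony\Component\Console\Output\OutputInterface;

    class EventConsumeCommand extends Command
    {
        protected function configure()
        {
            $this
                ->setName('event:consume')
                ->addArgument('event', InputArgument::REQUIRED, 'Base64-encoded message body');
        }

        protected function execute(InputInterface $input, OutputInterface $output)
        {
            $message = base64_decode($input->getArgument('event'));

            // ... unserialize the wrapped event and dispatch it ...

            return 0; // exit 0 = ack, anything else = nack
        }
    }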

Final tip: use the management plugin

Before even starting with RabbitMQ, make sure you have the management plugin installed. It gives you a good overview of what’s happening. You can also purge queues, add users, add vhosts, etc.
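
Enabling it is a one-liner; on recent RabbitMQ versions the web UI then listens on port 15672 by default:

    $ rabbitmq-plugins enable rabbitmq_management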

13 thoughts on “Symfony2 and RabbitMQ: Lessons learned”

  1. In your second try you speak about timeouts with the sql server. I don’t know what you’re doing in your worker, but maybe you shouldn’t do any work that leaves something open for a long time, and instead extract the database work through an API?

    Other option: as I understood it, your worker can wait for a long time, so why not run a consumer that consumes one message and, when it finishes, spawns a new one? In that case the new process can wait for a long time without reaching a timeout with the sql server, as Doctrine is not bootstrapped. And if the load increases, run new consumers in parallel.

    1. Our current approach is to dispatch events which we wrap so that they are serialized and put on the queue (e.g. we wrap a UserCreated event in an AsyncEvent; this AsyncEvent is listened on by a producer, which puts it on the RabbitMQ queue; see the sketch below). The benefit of this approach is that on the development side we handle it like any other event (on our development boxes we don’t put these on the queue, to ease devving), but the hard part is that we don’t want to clutter our code with connection-closing calls or other clean-up stuff.
      Of course this could be delegated to an API, but in our app we don’t have any API functionality yet, and I don’t like hacking some dirty solution in just for this purpose. So it’s certainly a solution for the problem, but not something we chose.
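
      A minimal sketch of the wrapping idea mentioned above (class and listener names are illustrative, not our exact code):

        <?php

        use Symfony\Component\EventDispatcher\Event;

        // Wraps any event so it can be routed through RabbitMQ.
        class AsyncEvent extends Event
        {
            private $wrappedEvent;

            public function __construct(Event $wrappedEvent)
            {
                $this->wrappedEvent = $wrappedEvent;
            }

            public function getWrappedEvent()
            {
                return $this->wrappedEvent;
            }
        }

        // In production a listener serializes the wrapped event onto the
        // queue; in development the same event is dispatched synchronously.
        class AsyncEventListener
        {
            private $producer; // OldSound\RabbitMqBundle\RabbitMq\ProducerInterface

            public function __construct($producer)
            {
                $this->producer = $producer;
            }

            public function onAsyncEvent(AsyncEvent $event)
            {
                $this->producer->publish(serialize($event->getWrappedEvent()));
            }
        }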

      Your second solution could work depending on how your services are configured. If your consumer service depends on services which initialise the EntityManager, you’re lost (our case). One could argue this could be refactored too, but in my opinion it’s a bit cumbersome to do all this extra work just because the processes keep running.
      Restarting a consumer for every message is expensive too. I won’t say the current approach of starting a cli app is the most performant, but it’s a bit better. A good read on this topic I can recommend:

      Also keep in mind that database connections aren’t the only thing to worry about. When sending e-mail via consumers you have to take care of closing the SMTP connection; if you don’t, you’ll get these fwrite errors once in a while (when using SwiftMailer, which you probably do).
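
      For example, with SwiftMailer you can stop the transport after sending, so the SMTP connection is not kept open while the consumer waits for the next message (a sketch, assuming a Swift_Mailer instance in $mailer):

        $mailer->send($message);

        // Stop the transport; it reconnects on demand for the next send.
        $mailer->getTransport()->stop();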

      1. Hey,
        I see you use a smart event dispatching system for production/development by allowing an async/sync switch. Can you explain more about this? I foresee quite a lot of serialization issues, etc. with this approach, but at the same time I would love to adopt such a solution. Can you share your experience?

  2. Hi Richard,
    I’m trying to use your rabbitmq-cli-consumer with my symfony 2.3 application; I already use oldsound_rabbitmq/supervisord but, as you said, php really sucks at long-running tasks and I’d like to solve the high memory load of php processes using your consumer.

    Have you ever tried your consumer with rpc calls? My application uses rpc, and I’m having some trouble working with your consumer: my app doesn’t receive any message (I get an AMQP timeout error). Maybe because I have to set a queue name in the rabbitmq-cli-consumer conf file, while in the rpc configuration in symfony I don’t have a named queue?

    Do you have some hints for me?

    Thanks a lot.

    Best regards,

  3. Hi, this is the error I get:

    2015/03/02 14:45:53 Processing message…
    2015/03/02 14:45:54 Failed. Check error log for details.
    2015/03/02 14:45:54 Failed:
    Too many arguments.
    rabbitmq:rpc-server [-m|--messages[="..."]] [-d|--debug[="..."]] name
    2015/03/02 14:45:54 Error: exit status 1

  4. I’ve tried removing the queue name in the rabbitmq-cli-consumer conf file; now the process is running but no messages are received/consumed. Restoring the queue name, I get the previous error (Too many arguments).

    May I open an issue on GitHub?


  5. Hi
    thanks for the article; I have a question though: you didn’t answer or offer solutions for questions like:

    * what happens to messages on the queue after a service restart?
    * what happens to messages on the queue after a reboot?

    since I have these issues too.
    Also, my main problem is that I keep running out of file and socket descriptors!
    Here are my global counts based on the web management UI:
    Connections: 12557 – Channels: 12557 – Exchanges: 8 – Queues: 16455 – Consumers: 12481

    But file descriptors, for instance, keep increasing until they are exhausted. The limit is 200000, same as for socket descriptors. Any ideas?

  6. Funnily enough, as soon as I started messing around with RabbitMq and Symfony, I also hit the same roadblock. On the other hand, instead of creating a Go client which invokes Sf commands, I created a php message consumer which does the same. To ensure extra stability and more bandwidth in consuming messages, it even has a ‘watchdog’ process which can be used to keep N message consumers alive at any time and restart them automatically if they die.

    You can find it at

    It is not yet tagged stable, but it has been used in production for quite a while now; I am mostly working on new features and docs at the moment.

    As a side note: while I am happy with the stability given by this approach, I think that the overhead incurred by starting a new php command-line process for every single message is non-negligible: it just takes too long.
    A better solution is imho to have the message consumer invoke php via an http call, and have a webserver + php-fpm sitting next to it.
    This way we get the same nice isolation that php has always had, with db connections being opened/closed and memory getting released for each message, and all the benefits of the queueing system. Even if in the end the message consumer turns into little more than an amqp-to-http proxy /-)

    If you are interested in the concept, feel free to contribute!
