Developing Symfony applications with Docker series part I: Getting started

In this series I’m going to share everything I’ve learned while switching from a Vagrant-powered environment – running all required software in a single VirtualBox instance – to a Dockerized setup where every process runs in a separate container. But what exactly is Docker? From the Docker site:

Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.

Now that sounds great, doesn’t it? As a matter of fact it does, but you have to get a grip on the concept before it starts paying off. In this blog post series I’ll show you how to create a multi-container Symfony application and how to get the full potential out of it. For now I’ll focus on using Docker as a development environment. Perhaps a series about Docker in production will follow in the near future :).

I’m a Mac OS X user, so some problems I describe are related to the fact that I have to use a virtualisation layer to run Docker. If you happen to be on Linux, you can just skip those sections.

Installation

VirtualBox is required to run a Linux virtual machine, so make sure you have a recent version installed. Then follow the installation instructions on the Docker site. When you’re done you should have the docker, docker-compose and docker-machine binaries available.

Create a Linux virtual machine

With docker-machine it’s fairly easy to create and manage a virtual machine for running Docker in. Let’s create a new instance:
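I’ll name the machine docker-tutorial, matching the docker-machine commands used later on:

```bash
docker-machine create --driver virtualbox docker-tutorial
```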

If you run a lot of containers you may hit a no space left on device error. In that case, create a box with more memory by passing --virtualbox-memory "2048" to the create command.

All that’s left to do is set the correct environment variables, so the docker client knows how to connect to the daemon in our box:
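The docker-machine env command prints the required exports, so you can simply eval its output:

```bash
eval "$(docker-machine env docker-tutorial)"
```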

Now let’s try to see if it works by listing the containers:
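```bash
docker ps
```

Since we haven’t started anything yet, this should print an empty list (just the column headers) without any connection errors.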

Create Symfony project

Now we’re ready to create a new Symfony project:
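A sketch, assuming you use the Symfony installer (composer create-project symfony/framework-standard-edition works just as well); I’ll name the project symfony3 to match the hostname we’ll use later:

```bash
symfony new symfony3
cd symfony3
```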

Configuring docker-compose

Docker-compose reads its configuration from a docker-compose.yml file, so create an empty one in the root of your shiny new project.

You should end up with a directory structure like this:
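Roughly the Symfony Standard Edition layout, plus our empty compose file:

```
symfony3/
├── app/
├── bin/
├── src/
├── tests/
├── var/
├── vendor/
├── web/
├── composer.json
└── docker-compose.yml
```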

Install php and nginx

We’re almost there, so hang on. Obviously we’re going to need php and a webserver, so let’s install php-fpm and nginx. Never reinvent the wheel: when you need a service containerized, always search for it on Docker Hub first. As with bundles: there’s a container for that. We’ll use the official php and nginx images for now.

Heads up: when adding more images to your configuration, take note of which image they derive from. Most images extend from debian:jessie, which is probably what you want for your own images as well. Docker works with a layered file system, so if all your images derive from the same parent, the build process will be faster and the images will consume less space (also during transfer!).

Edit your docker-compose.yml like this:
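Something along these lines should do (this uses the classic compose file format, where each top-level key names a container):

```yaml
nginx:
    image: nginx
    ports:
        - "8080:80"

php:
    image: php:fpm
```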

The root element is the name of the container; you can pick whatever you like. I always try to keep these short so it’s easier (less typing) when running commands against a specific container.
The image field tells docker-compose which image we want to use for our container. The ports field allows us to expose ports on the container and forward port(s) from the container to the host, so we can actually connect to the container. The value "8080:80" means we’re exposing port 80 on the container and forwarding it to port 8080 on the host.

You should start the containers now:
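```bash
docker-compose up -d
```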

Docker will pull the images from the registry, then build and start them. The -d flag tells Docker to run the containers daemonized in the background; I’ll get back to that later. When it’s done, verify that they’re both up and running:
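```bash
docker-compose ps
```

Both containers should be listed with State Up.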

Connecting to the box

We’ve forwarded port 8080 to our host, but connecting to localhost:8080 doesn’t work (you did try the link, didn’t you? :)). Because Docker runs in a virtual machine, we need to figure out its IP so we can connect to it. Of course this isn’t very difficult:
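```bash
docker-machine ip docker-tutorial
```

On my machine that prints 192.168.99.100.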

Let’s try that IP on port 8080 and you’ll see it works: http://192.168.99.100:8080/. You’ll want to add an entry for that IP in your /etc/hosts file. Let’s pick symfony3.dev for now.
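The entry would look like this (using the IP from above):

```
192.168.99.100 symfony3.dev
```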

As you’ve probably discovered by now, we’re presented with the default nginx page and not our shiny new Symfony application. To fix this we have to link the php container to the nginx container so they can communicate with each other. The php-fpm container needs access to our project’s php files in order to parse and serve them. Also, the nginx container requires an nginx configuration. We have to alter the Docker image, and for that we need a Dockerfile.

Custom Dockerfile

The Dockerfile represents every step to be taken before the container is ready to use. Normally you would use a configuration management tool (Ansible, Puppet, Chef) to accomplish this, but in Docker you manage this via the Dockerfile.

It’s important to know that each line should contain one logical step. Each line creates a new layer, and the number of layers is limited. Keeping it to one logical step per line also improves the caching mechanism. For more information on this topic, refer to the best practices.

To configure nginx we’re going to use the nginx configuration supplied by the Symfony team. We have to copy it into the container. Create a new directory docker/nginx in the project root and add the following Dockerfile:
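A minimal sketch; the official nginx image picks up every *.conf file in /etc/nginx/conf.d, but double-check the paths for the image version you use:

```dockerfile
FROM nginx

# Drop the default server block so it can't shadow ours
RUN rm /etc/nginx/conf.d/default.conf

# Our Symfony server block, created in the next step
COPY symfony3.conf /etc/nginx/conf.d/symfony3.conf
```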

Create a symfony3.conf file in that docker/nginx directory as well and fill it with the following configuration:
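This is adapted from the nginx example in the Symfony documentation: the document root points at the /app/web path we’ll mount in a moment, and fastcgi traffic goes to the php container on port 9000 (the hostname php works because of the container link we’ll add next):

```nginx
server {
    server_name symfony3.dev;
    root /app/web;

    location / {
        try_files $uri /app.php$is_args$args;
    }

    # dev and config front controllers
    location ~ ^/(app_dev|config)\.php(/|$) {
        fastcgi_pass php:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
    }

    # prod front controller
    location ~ ^/app\.php(/|$) {
        fastcgi_pass php:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        internal;
    }

    error_log /var/log/nginx/symfony3_error.log;
    access_log /var/log/nginx/symfony3_access.log;
}
```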

In case you haven’t already, add symfony3.dev to your /etc/hosts file with the IP from docker-machine ip docker-tutorial.

Now let’s put it all together and update our docker-compose.yml accordingly:
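A sketch of the updated file:

```yaml
nginx:
    build: docker/nginx
    ports:
        - "8080:80"
    links:
        - php
    volumes:
        - .:/app

php:
    image: php:fpm
    volumes:
        - .:/app
```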

Take note of the changes we’ve applied: image under nginx is replaced with build: docker/nginx, which refers to the directory where the Dockerfile resides. The nginx container has a links key where we link it to the php container. Both containers have a volumes key where we mount the current directory into the container under the /app path. This way the containers have access to the project files.

Stop all containers and build them:
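```bash
docker-compose stop
docker-compose build
```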

Then start them again:
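```bash
docker-compose up -d
```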

When you visit http://symfony3.dev:8080/app_dev.php in your browser, you’ll see the message “You are not allowed to access this file. Check app_dev.php for more information.” Remove the access check from app_dev.php and try again.

Unfortunately another well-known error pops up: Failed to write cache file “/app/var/cache/dev/classes.php”.

Permissions

In my opinion the best solution to this problem is to run the console commands and the php-fpm process under the same user. Without any modifications, the console commands run as root while the php-fpm process runs as www-data. To accomplish this we also need a Dockerfile for the php container.

Again, stop all containers:
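```bash
docker-compose stop
```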

Create a new directory php-fpm under the docker directory. Add the following Dockerfile:
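A sketch based on the official php:fpm image; the config path below is where that image expects its php-fpm configuration, but double-check it for the image version you use:

```dockerfile
FROM php:fpm

# An unprivileged user to run the php-fpm pool (and the console commands) as.
# The name vagrant is arbitrary (see below).
RUN useradd --create-home vagrant

# Pool configuration that makes php-fpm workers run as that user
COPY php-fpm.conf /usr/local/etc/php-fpm.conf

WORKDIR /app
```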

Also, add the following php-fpm.conf file:
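A minimal pool configuration; the essential lines are the user and group ones:

```ini
[global]
; keep php-fpm in the foreground so the container keeps running
daemonize = no

[www]
user = vagrant
group = vagrant
listen = 9000

pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
```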

Because I suck at naming new users I just use vagrant as my development user. Think of it as a tribute to Vagrant :). The docker directory tree should now look like this:
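```
docker/
├── nginx/
│   ├── Dockerfile
│   └── symfony3.conf
└── php-fpm/
    ├── Dockerfile
    └── php-fpm.conf
```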

Now build and run the containers:
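One thing the compose file still needs: the php service should now use build: docker/php-fpm instead of image: php:fpm, so the custom Dockerfile is actually used. Then:

```bash
docker-compose build
docker-compose up -d
```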

If you visit http://symfony3.dev:8080/app_dev.php now, the Symfony welcome page smiles at you. With this “hello world” for Symfony working, we end this first post.

In the next post I’ll show you how to speed things up if you’re on a Mac (the default Symfony app takes ~2000 ms to load in the current setup). I’ll also show you the possibilities for storing your data when working with containers.

CQRS: How to handle file uploads?

If you’re like me and try to keep up with all the cool stuff happening in the PHP world, you’ve probably noticed the buzz around Domain-Driven Design and, more recently, Event Sourcing and CQRS. Last year Qandidate released Broadway: a project providing infrastructure and helpers for introducing CQRS and Event Sourcing into your PHP stack. It wasn’t until last month that I got the chance to get my hands on it. We adopted the framework in one of the latest projects at work, and it didn’t take long before we ran into all kinds of problems and questions 🙂 .

So for every question we had, I’ll try to write a blog post so others can learn from it. I’m also curious how you handle the problems I describe in these posts, so don’t hesitate to comment if you have a different opinion. Let’s dive into what should be the first in a series of posts about CQRS!

The problem

In the application we’re building, one requirement is that users can configure attachments to be sent to a user when some kind of action is performed. We’re using Symfony2 and Broadway, so our code will be very specific to these frameworks. Consider the following form:
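Something along these lines; the field names are illustrative (Symfony2-era form types use string type names):

```php
<?php
// AttachmentType.php -- a sketch of the upload form

namespace AppBundle\Form;

use Symfony\Component\Form\AbstractType;
use Symfony\Component\Form\FormBuilderInterface;

class AttachmentType extends AbstractType
{
    public function buildForm(FormBuilderInterface $builder, array $options)
    {
        $builder
            ->add('name', 'text')
            ->add('file', 'file');
    }

    public function getName()
    {
        return 'attachment';
    }
}
```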

In the controller we validate the form and construct our UploadAttachment command – which is just a DTO – by passing all the values from the form, then dispatch it on the command bus:
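A sketch of the controller action; the command_bus service id and the UploadAttachment constructor are assumptions:

```php
<?php
// AttachmentController.php (sketch)

namespace AppBundle\Controller;

use AppBundle\Command\UploadAttachment;
use AppBundle\Form\AttachmentType;
use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\Request;

class AttachmentController extends Controller
{
    public function uploadAction(Request $request)
    {
        $form = $this->createForm(new AttachmentType());
        $form->handleRequest($request);

        if ($form->isValid()) {
            $data = $form->getData();

            // $data['file'] is an UploadedFile instance -- keep an eye on it
            $command = new UploadAttachment($data['name'], $data['file']);
            $this->get('command_bus')->dispatch($command);
        }

        // ...render the form / redirect
    }
}
```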

And the command handler calls the appropriate method on our aggregate:
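Broadway routes commands to a handle<CommandName> method by convention; the aggregate id accessor and method names here are illustrative, and the repository interface name may differ per Broadway version:

```php
<?php
// The command handler: load the aggregate and delegate

use Broadway\CommandHandling\CommandHandler;
use Broadway\Repository\RepositoryInterface;

class AttachmentCommandHandler extends CommandHandler
{
    private $repository;

    public function __construct(RepositoryInterface $repository)
    {
        $this->repository = $repository;
    }

    protected function handleUploadAttachment(UploadAttachment $command)
    {
        $aggregate = $this->repository->load($command->getAggregateId());
        $aggregate->uploadAttachment($command->getName(), $command->getFile());
        $this->repository->save($aggregate);
    }
}
```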

Our aggregate creates a new event:
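Inside the aggregate root (a Broadway EventSourcedAggregateRoot) the method records an event via apply(); the event class is a sketch:

```php
<?php
// In the aggregate root -- note the UploadedFile ending up in the event payload

public function uploadAttachment($name, UploadedFile $file)
{
    $this->apply(new AttachmentUploaded($this->id, $name, $file));
}
```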

But as you’ve probably noticed, we now run into problems because we’re passing around an UploadedFile instance in an event. Imagine how this would get stored in the event store:
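Schematically (not actual Broadway serialization output), the stored payload would have to contain the file itself:

```json
{
    "class": "AttachmentUploaded",
    "payload": {
        "id": "f1f26e8b-...",
        "name": "invoice.pdf",
        "file": "<the entire binary contents of the upload>"
    }
}
```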

Storing the complete file in the event store is theoretically possible, but we prefer to store our files not in MySQL but in an S3 bucket in the cloud. If you do store files in the event store, it will grow quickly and you’ll have other challenges to wrap your head around. Keep in mind that events are often transferred over a queue like RabbitMQ.

After some digging around on the internet I found others with the same problem. I also asked for advice on Freenode in #qandidate. In general, everybody stores the file in the controller or command handler and passes the id on to the event.

The solution

We’ve chosen to store the file in our controller and pass on the UUID to the command. A code example is worth a thousand words:
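A sketch; Version4Generator is Broadway’s UUID generator, while the app.attachment_storage service is hypothetical and would do the actual S3 upload:

```php
<?php
// In the controller: store the file first, pass only the UUID around

use Broadway\UuidGenerator\Rfc4122\Version4Generator;

public function uploadAction(Request $request)
{
    $form = $this->createForm(new AttachmentType());
    $form->handleRequest($request);

    if ($form->isValid()) {
        $data = $form->getData();

        // Generate the id up front and store the file under it
        $fileId = (new Version4Generator())->generate();
        $this->get('app.attachment_storage')->store($fileId, $data['file']);

        // The command (and thus the event) only carries the UUID
        $command = new UploadAttachment($data['name'], $fileId);
        $this->get('command_bus')->dispatch($command);
    }

    // ...
}
```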

Drawbacks

There are a couple of drawbacks in this method:

  1. every new attachment results in a new file, which can eat up a lot of storage for files that are never used
  2. if something goes wrong in the command handler, the file has already been stored

Personally, I see it as a benefit that we have a history of every single attachment uploaded. We can easily go back in time and revert an erroneous upload, or debug what our users did wrong in case of a problem.

By only passing around the UUID, our events stay small, which makes them easy to publish on RabbitMQ.