This post is an overview of how we use Docker as our development environment in combination with Laravel at Weebly. My goal was to write this in a way that people with little to no Docker experience could easily follow along, while people with Docker experience could get some insight into how we use Docker with Laravel. If you have lots of Docker experience, much of this post may reiterate concepts you are already very familiar with.
We’ve been using the popular PHP framework Laravel for a recent project at work. Laravel’s “out of the box” approach to development uses VMware/Vagrant (Homestead) - which works perfectly fine, but we were curious about a containerized approach with Docker for a few reasons:
1 - There is lots of buzz around ‘containers/containerizing applications’. We were curious what all the fuss was about, so we figured we should see for ourselves.
2 - Our automation team has previously deployed apps using Docker, so we thought this might be an easy way to quickly spin up staging/integration environments for new services we build at Weebly. The old integration approach is very tailored to our monolith setup, and no standard was in place for new services going forward.
3 - The idea of every change to our dev environment (every change to a container) being tracked in git felt like a potentially much cleaner solution than what we were doing with Vagrant. Not that you can’t do something similar with Vagrant, but the Docker container approach lends itself to this naturally.
For more insights into the differences between Vagrant and Docker, see this Quora post.
There is a neat project online called LaraDock. This was a cool way to get the app running using Docker in a matter of minutes, and also a great reference to see what the configuration for all your different containers might look like. Unfortunately it didn’t really help us truly understand what was happening under the hood, so we ended up starting from scratch but heavily borrowing from the skeleton that LaraDock provides. Docker itself has a pretty great tutorial which was also an excellent resource (you can skip around to relevant sections without doing the whole tutorial).
Once we made the decision to start from scratch, the first step was figuring out which containers we would break our application down into. Initially this ended up being:
Why separate all of these things? Why not have everything in one place? The beauty of the containerized approach is the modularity you get. With our setup broken down into these separate containers, we could easily swap PHP-FPM for HHVM, or MySQL for Postgres, down the road without touching any of the other containers. In fact, we actually ended up making that PHP-FPM/HHVM swap with the release of PHP 7. Also, to mimic our production environment even more closely, we ended up with the following containers by the end as well:
A Closer Look At Containers
So we’ve decided on our various containers - now, how do we actually build them? One of the greatest things about Docker is the thousands and thousands of images available on the Docker Registry. For many containers, your exact solution might already exist somewhere in the Docker universe. In our case, we often wanted to pull down an existing image and then build on top of it, which Docker lets us do.
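The original Dockerfile isn’t reproduced here, but extending an existing image can be done in just a few lines. The sketch below is an assumption of what a RabbitMQ Dockerfile like ours might look like (the `rabbitmq:3.6` tag and the management plugin are our guesses):

```dockerfile
# Start from the official RabbitMQ image on the Docker Registry
FROM rabbitmq:3.6

# Our own addition: enable the web-based management/admin plugin
RUN rabbitmq-plugins enable --offline rabbitmq_management

# The management UI listens on 15672 by default
EXPOSE 15672
```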
In the above Dockerfile we are simply pulling this RabbitMQ Docker image from the Docker Registry and adding our own command to enable a RabbitMQ admin tool. For many containers you might not even need a Dockerfile, since your exact solution may already be available, but often there is plenty of tinkering needed on top of an existing image. Take a look at our PHP-FPM 7.0 Dockerfile for example:
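The full Dockerfile isn’t shown here; a rough sketch along those lines, assuming the official `php:7.0-fpm` base image and a typical set of Laravel extensions (the exact packages and paths are assumptions):

```dockerfile
FROM php:7.0-fpm

# System packages and PHP extensions a Laravel app commonly needs (assumed set)
RUN apt-get update && apt-get install -y \
        git unzip zlib1g-dev libicu-dev \
    && docker-php-ext-install pdo_mysql intl opcache \
    && rm -rf /var/lib/apt/lists/*

# Drop in our own PHP configuration on top of the base image
COPY ./php.ini /usr/local/etc/php/conf.d/custom.ini

WORKDIR /var/www
```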
There is a lot going on there, but if you’ve ever installed and gotten a PHP application running on a Linux machine, most of this is probably familiar. One thing to note is that you can specify a file to run when the container gets created (building from the Dockerfile just produces the image; nothing runs until a container is started) with the following command:
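The original command isn’t shown; in a Dockerfile this is typically done with `ENTRYPOINT` (or `CMD`), for example:

```dockerfile
# Copy the startup script into the image and run it when the container starts
COPY ./start.sh /start.sh
RUN chmod +x /start.sh
ENTRYPOINT ["/start.sh"]
```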
And your start.sh script might run migrations or anything else that needs to happen when the container is spun up.
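A start.sh along those lines might look like the following sketch (the migration step and the final hand-off to PHP-FPM are assumptions about what such a script does):

```sh
#!/bin/sh
set -e

# Run any pending database migrations when the container spins up
php artisan migrate --force

# Hand off to PHP-FPM as the container's main (foreground) process
exec php-fpm
```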
After getting our various containers building individually, it was time to have a structured way to spin up all relevant containers at once with the appropriate configuration options. One great way to do this is with a docker-compose file. Docker Compose is an easy way to manage your Docker services/networks/volumes all in one place without having to memorize long CLI commands. For example, when you spin up a container, you may want to copy some log files to a location on your local file system, map container ports to your local machine’s ports, and more. Instead of running a terminal command that might look like:
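The original command isn’t shown, but a raw `docker run` invocation along those lines might look like this (the container name, paths, and image name are illustrative):

```shell
docker run -d \
  --name php-fpm \
  -p 9000:9000 \
  -v "$(pwd)/storage/logs:/var/www/storage/logs" \
  -e APP_ENV=local \
  my-php-fpm-image
```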
We could have all this fun stuff defined in our docker-compose.yml. Below is an example of what our docker-compose.yml file might look like.
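The original file isn’t reproduced here; a pared-down sketch of what it might have looked like (service names, images, and ports are assumptions):

```yaml
version: '2'

services:
  nginx:
    build: ./nginx
    ports:
      - "80:80"
    depends_on:
      - php-fpm

  php-fpm:
    build: ./php-fpm

  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: secret
    ports:
      - "3306:3306"

  rabbitmq:
    build: ./rabbitmq
    ports:
      - "15672:15672"
```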
Now that we have all our Docker container configurations in one place, we can simply run docker-compose up and it will automatically spin up the containers defined in our docker-compose.yml with the appropriate configurations. This is great because now you have a file you can commit to source control, so all developers have the same Docker network configuration. Ultimately, our docker-compose.yml had a lot more going on than in the example above. Below is a slightly meatier version.
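The full file isn’t shown here; a sketch of the data-volume-container pattern it used (the `tianon/true` image and the paths are assumptions, borrowed from the pattern LaraDock uses):

```yaml
version: '2'

services:
  volumes_source:
    image: tianon/true          # tiny no-op image; exists only to hold volumes
    volumes:
      - ./:/var/www             # share the application source with other containers

  volumes_data:
    image: tianon/true
    volumes:
      - ./data/mysql:/var/lib/mysql   # persist MySQL data on the host

  php-fpm:
    build: ./php-fpm
    volumes_from:
      - volumes_source
    expose:
      - "9000"

  mysql:
    image: mysql:5.7
    volumes_from:
      - volumes_data
    environment:
      MYSQL_ROOT_PASSWORD: secret
```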
As you can see, there are many more configuration options you can specify on each of your containers. Also of notable interest are the volumes_source container and the volumes_data container. These are special containers known as data volume containers, and in our case we used them to:
Here is Docker’s longer/better explanation of data volume containers:
In addition to providing structure to your Docker network, docker-compose is also a great way to separate configurations for your dev environment and whatever other environments you may use in conjunction with Docker (staging/integration/production). You can have separate docker-compose files, i.e. docker-compose.dev.yml and docker-compose.integration.yml, that inherit from a base docker-compose.yml. This is especially useful if your integration environment’s ports differ from your local machine’s, or if you want to pass certain environment variables to only one setting.
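As a sketch of how that separation might look (file names and values here are illustrative, not our actual configuration), an override file can contain just the pieces that differ from the base:

```yaml
# docker-compose.integration.yml - only the overrides, layered on the base file
services:
  nginx:
    ports:
      - "8080:80"            # integration box exposes a different host port

  php-fpm:
    environment:
      APP_ENV: integration   # env var passed only in this setting
```

The two files are then combined at spin-up time with Compose’s multi-file flag: `docker-compose -f docker-compose.yml -f docker-compose.integration.yml up -d`.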
Docker has definitely had some pain points, and it’s still a growing community. There is lots of new terminology to learn when entering the Docker/container world. Overall, though, our development team that adopted this approach has been extremely happy with the decision. Some reasons (some reiterated from above):