Once Docker gained steam in the corporate world, I was asked more than once, "What would it take to Dockerize our application?" At the time, I didn't know much about Docker, but since then I have figured a few things out.
I want to share what I've found in taking a pre-existing application and "Dockerizing" it. This isn't about taking a finished product and handing you another finished product; rather, it's an exploratory post about the options available, in the hope that it helps somebody out there.
With that said, I’ll take a generic PHP application that just uploads an image and lists what’s stored in the database in order to figure out the media storage, database storage, docker-compose.yml file, and more. This will simulate what I would need to take a currently-running application on VMs and move it over to Docker.
We’ll tackle it in sections:
- NGINX, PHP with Codeigniter and Unirest
- Images/Media storage with SeaweedFS
- Database storage with StorageOS and MariaDB
What you’ll need to replicate this are three VMs that can access the internet.
Once you have downloaded the Vagrant file, just type the following to get it up and running. Note: if you are getting timeout messages, you can set config.vm.boot_timeout = 1000 in the Vagrant file.
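Assuming a standard Vagrant workflow, bringing the VMs up is a single command run from the directory containing the Vagrant file:

```shell
# Boots all three VMs defined in the Vagrant file.
# Run `vagrant status` afterwards to confirm they are all up.
vagrant up
```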
I want to note that this example doesn’t consider every single thing you’ll need. There are reverse proxies, db optimizations, php settings, seaweedfs settings to allow for certain sizes, backups, and more. We are just going to focus on “can this even be done.” Hopefully, I have shown you a few examples here to help implement whatever you are building.
NGINX and PHP
When I was looking to convert my PHP application (running CodeIgniter 3) to run on Docker, I looked for a PHP Docker container, as well as an NGINX container, and then tried to figure out how to wire them up. It was then that I thought, surely someone has already done this and created a Docker image, and, to my relief, someone had. richarvey/nginx-php-fpm had everything I needed to get my PHP application up and running. If you look at the wonderful docs on its Docker Hub page, it covers things like git and Let's Encrypt support, as well as Xdebug. You probably won't need everything that image comes with (I know I didn't), but you can remove what you don't need by building your own image. For the sake of this article, I'm just going to leave it alone.
Here is an example of the Dockerfile I created for my PHP app:
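The original Dockerfile was shown as an embedded snippet; here is a sketch of roughly what it contains. The DB_* variable names and the config path are illustrative, and the WEBROOT variable and /start.sh entrypoint come from the richarvey/nginx-php-fpm image docs, so verify them against the version you pull:

```dockerfile
FROM richarvey/nginx-php-fpm:latest

# Copy the CodeIgniter application into the image's webroot
COPY . /var/www/html

# Drop in a site config dedicated to this CodeIgniter setup
# (the destination path follows the image's docs -- verify for your version)
COPY conf/nginx-site.conf /etc/nginx/sites-available/default.conf

# index.php lives in public/, so point the webroot there
ENV WEBROOT /var/www/html/public

# Illustrative DB settings the app reads via getenv();
# real values come from docker-compose.yml at deploy time
ENV DB_HOST db
ENV DB_NAME example

CMD ["/start.sh"]
</imports>
```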
Here, I copy my CodeIgniter code, which uses the Unirest PHP package to upload images, into the /var/www/html directory, along with an NGINX config file, so that when I build the image it has an nginx.conf file dedicated to my CodeIgniter setup. I also set the webroot to /var/www/html/public, since I moved the index.php file in there. Note: if you do this, you'll need to update the index.php file to get the proper path to the application and system folders.
Here is what my folder structure and nginx.conf file looks like:
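The originals were shown as images; reconstructed, the layout and config look something along these lines. The tree is trimmed to the essentials, and the server block is a minimal CodeIgniter-style config rather than an exact copy:

```
example-webapp/
├── Dockerfile
├── conf/
│   └── nginx-site.conf
├── application/
├── system/
└── public/
    └── index.php
```

```nginx
server {
    listen 80;
    server_name local.dev;

    root /var/www/html/public;
    index index.php;

    location / {
        # Route anything that isn't a real file through CodeIgniter
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri =404;
        # php-fpm runs inside the same container in this image
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```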
I also updated my /etc/hosts file to point local.dev to 127.0.0.1, so when I run 'docker-compose up' (locally, not in the Docker swarm yet) I can use the proper BASE_URL I set in the environment variables. I also added an entry pointing web1.dev to 192.168.50.100 for the VirtualBox VMs I'll be using in this example. When you deploy this using the docker stack command, BASE_URL is set to http://web1.dev/ so that CodeIgniter won't use the internal Docker swarm IP address, which isn't accessible from the outside.
Switching the OS?
Now, I want to note that most of my applications in production run on Ubuntu, but in this case the app will be running on Alpine Linux. Why the switch? When it comes to deployment, Alpine is much smaller than Ubuntu; very little comes pre-installed, which makes for a small distro and, in turn, lighter images to deploy. That being said, the richarvey/nginx-php-fpm Docker image was perfect, since it is built on top of Alpine. I'll admit that finding this specific image was a godsend, because the options out there are pretty daunting; there are so many images to choose from. My advice is to find something close to what you need (if you are not going to build it yourself) and take out what you don't need. I looked at the number of 'Pulls' and the frequency of updates to make sure I wasn't getting a stale image.
For my example Codeigniter app to use the environment variables defined in the Dockerfile (or the docker-compose.yml file…down below), I needed to update my code to get these settings by calling the PHP method getenv(). In my application/config/database.php file of the example app, I used the getenv function to get what I set in the Dockerfile.
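The config ends up looking something like this excerpt. The DB_* variable names are illustrative, matching whatever you chose in the Dockerfile or compose file:

```php
<?php
// application/config/database.php (excerpt)
// The DB_* names are whatever you defined in the Dockerfile or
// docker-compose.yml -- these particular names are illustrative.
$db['default'] = array(
    'dsn'      => '',
    'hostname' => getenv('DB_HOST'),
    'username' => getenv('DB_USER'),
    'password' => getenv('DB_PASS'),
    'database' => getenv('DB_NAME'),
    'dbdriver' => 'mysqli',
    // ...remaining CodeIgniter defaults unchanged
);
```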
So, the web app part of things is complete. It would be similar if you were building a microservice in PHP using something like Slim or Lumen: you'd just need the proper nginx.conf file and the proper environment variables set. For this little example, it's all I needed. I was ready to build my web application image for use in Docker Swarm, so I built it using the command below and pushed it up to my repo.
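The build-and-push commands are along these lines, tagging the image with the Docker Hub repo name:

```shell
# Build from the directory containing the Dockerfile, then push.
# Requires `docker login` to have been run first.
docker build -t trackleaf/example-webapp .
docker push trackleaf/example-webapp
```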
I built and pushed the example-webapp image a few times, and Docker only re-uploaded the layers that had changed.
I have this build ready to go if you want to pull it down at https://hub.docker.com/r/trackleaf/example-webapp/
Images with SeaweedFS
In my example application, I've given it the ability to upload an image to an image service running on Docker; this mimics something I did in production. There, I had created a PHP service that accepted images and stored them away, but it had no replication, nor the other features SeaweedFS has out of the box. In the example app, going to /upload brings you to an upload form (it's pretty much the example from the CodeIgniter docs) that uploads the images to the service I have running.
What I liked about this service is that instead of coming up with a small PHP microservice that takes images, stores them, and serves them, I could use SeaweedFS, which is written in Go, does all of that for me, and can also replicate everything. There is an introduction and a list of features here. I found this project after deciding I didn't want to figure out replication for the small microservice I already had running.
Now, to get this Dockerized, I created my own image of it using this Dockerfile:
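The original Dockerfile was embedded as a snippet; a sketch of it looks like the following. The version number and release-asset URL are assumptions, so check the SeaweedFS releases page for the actual naming scheme before building:

```dockerfile
FROM alpine:3.6

# Version and download URL are assumptions -- verify against the
# SeaweedFS releases page before building.
ENV WEED_VERSION 0.76
RUN apk add --no-cache curl && \
    curl -sSL "https://github.com/chrislusf/seaweedfs/releases/download/${WEED_VERSION}/weed_linux_amd64.tar.gz" \
      | tar -xz -C /usr/bin && \
    apk del curl

# 9333 is the default master port, 8080 the default volume-server port
EXPOSE 9333 8080

ENTRYPOINT ["/usr/bin/weed"]
```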
When I run it, I add the command params to it along the lines of:
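For the master and volume servers respectively, the params look roughly like this (flag names follow the SeaweedFS README of the time; double-check them against the version you run):

```shell
# Master server: stores metadata in /data
weed master -mdir=/data

# Volume server: stores image data in /data and registers with the master
weed volume -dir=/data -max=100 -mserver=seaweedfs-master:9333 -port=8080
```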
On each node that runs SeaweedFS, this is how I prepared the directories:
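The directory layout is my own convention, not something SeaweedFS requires; any path you then bind-mount into the containers will do:

```shell
# Run on each VM that will host SeaweedFS data
# (prefix with sudo on the VMs if your user can't write to /data).
mkdir -p /data/seaweedfs/master /data/seaweedfs/volume
```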
MariaDB and StorageOS
Currently, Docker does not have anything that will scale across nodes connected by Docker Swarm when it comes to the database files. I looked at two different solutions, external storage and a Docker volume plugin called StorageOS. When I discuss external storage, I mean something like having your database hosted on an external service like Amazon, Google, or some VM you have a DB installed on. Even though this is in no way utilizing Docker, there is no shame in it. Honestly, if you are comfortable where you are with your DB setup, there is no shame in leaving it alone and just white-listing the nodes that the containers reside on to get DB access. For every person I talk to, blog I read, book I buy, there is always a different solution when it comes to deployments.
Now, for those looking to have Docker containers persist the data, without worrying about it going away once a container is blown away or about the database being pinned to a specific node in the swarm, we can use a Docker volume plugin called StorageOS. For those not using the Vagrant file (which takes care of the StorageOS setup below), you can use the commands listed here to get StorageOS up and running; they are essentially the same commands the Vagrant file runs.
SSH into each of the VMs. The following commands have been broken into sections for easier reference.
Set ENV Variables
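A sketch of this step, run on each VM, looks like the following. The variable values are examples for the first VM (web1), and the plugin-install flags follow the StorageOS docs of the era, so verify both the options and the credentials handling against the docs for the version you install:

```shell
# Example values for web1; adjust the IP per node.
export STORAGEOS_HOST=192.168.50.100
export STORAGEOS_USERNAME=storageos
export STORAGEOS_PASSWORD=storageos

# Install the StorageOS volume plugin (option names per the
# StorageOS docs of the time -- verify for your version).
docker plugin install --grant-all-permissions --alias storageos \
    storageos/plugin ADVERTISE_IP=${STORAGEOS_HOST}
```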
That should have StorageOS good to go on your VMs.
Now that we have our directories created for SeaweedFS and everything set up for StorageOS, let’s get Docker Swarm up and running. We’ll follow StorageOS’ setup; you can read the docs here for more information as to why.
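Initializing the swarm boils down to the standard commands, using web1's IP from the /etc/hosts entry earlier:

```shell
# On the node that will be the swarm manager (web1 in this example):
docker swarm init --advertise-addr 192.168.50.100

# `swarm init` prints a join command with a token; run it on each
# of the other two VMs, e.g.:
# docker swarm join --token <token> 192.168.50.100:2377
```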
Once we have the swarm up and running, you can use the following docker-compose.yml file to wire everything up.
docker-compose.yml file for the stack
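The stack file was embedded in the original post; reconstructed, it looks along these lines. Service names, the SeaweedFS image name, and the DB credentials are placeholders of my choosing, so adjust them to match your own images and environment variables:

```yaml
version: "3.1"

services:
  webapp:
    image: trackleaf/example-webapp
    ports:
      - "80:80"
    environment:
      BASE_URL: "http://web1.dev/"
      DB_HOST: mariadb
      # ...other DB_* variables to match what the app reads via getenv()
    deploy:
      replicas: 2

  mariadb:
    image: mariadb:10
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder secret
    volumes:
      - dbdata:/var/lib/mysql

  seaweedfs-master:
    image: trackleaf/seaweedfs       # hypothetical image name
    command: "master -mdir=/data"
    volumes:
      - /data/seaweedfs/master:/data

  seaweedfs-volume:
    image: trackleaf/seaweedfs
    command: "volume -dir=/data -mserver=seaweedfs-master:9333 -port=8080"
    volumes:
      - /data/seaweedfs/volume:/data
    deploy:
      mode: global

volumes:
  dbdata:
    driver: storageos
```

Deploy it with `docker stack deploy -c docker-compose.yml example` from the manager node.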
If you’ve updated your /etc/hosts file, you should be able to access the application at http://web1.dev/upload.
You should also be able to upload images and have everything working as it should.
So with that docker-compose.yml file deploying our stack, we have everything that the PHP web application needs. We have our database, image server, and environment variables to take this application and “Dockerize” it. This is by no means exhaustive when it comes to optimizations or anything like that; it’s more the proof-of-concept that you and I can use to build on. There is so much more to Docker, StorageOS and SeaweedFS that we can do. Below are some links to get you started on these technologies.
projekt202 is the leader in experience-driven software strategy, design and development. We have a unique and established methodology for understanding people in context — we reveal unmet needs — which drives everything we do. This leads to a crisp, clear understanding of the customer, which shapes the design and development of new solutions and experiences. We have the expertise, teams, skills and scale to deliver sophisticated software solutions that improve any and all touchpoints across the user journey.
projekt202 has spent over 15 years bringing to life and to market compelling experiences through our Experience Strategy & Insight, User Experience, Software Development, Marketing & Analytics, and Program Management practices. Our talented team has delivered emotionally-rich and intuitive solutions for global brands and clients such as Capital One, Dell, Mercedes-Benz Financial Services, Samsung Electronics, Neiman Marcus, and The Container Store, among many others.