Stop deploying Laravel manually, steal this Docker configuration instead

by @getlionel, March 2nd, 2018

Thanks for stopping by! You might also want to check this previous article about deploying Laravel on AWS (https://hackernoon.com/laravel-on-aws-a-reference-architecture-a680755130d0) or even download the book.

The things we do for our web applications…


Coding and testing aside, we provision servers for them, configure their database, search engine, cache engine, workers, crons and queues, configure their web server, get them SSL certificates, update DNS for them and finally build and deploy them. And we do this on a regular basis, so ideally we set all this up simply enough that we can re-deploy a few times a day without too much thinking!



As respectable developers, there are also things we won’t do for our web apps:
— We won’t click around the AWS console trying to remember how we did it last time
— We won’t SSH into a VM and run apt-get installs all over the place, trying to remember how we did it last time
— We won’t redeploy our application by anything other than a single command, which we can roll back just as easily

That would pave the way for human error, make deployments stressful and risky, and leave our application unstable and potentially insecure (ouch).

Instead, we will script and automate. We will commit our configuration as code. We will use a repeatable self-documented process that any developer (including ourselves later on) can take over, without risk. We will use that same repeatable process for our later projects, improving it over time, increasing its reliability, reducing labour and deployment errors. We will automate as many low-business-value activities as we can and make our companies better off.

Who is this guide for? This guide is designed for small tech companies that might have been doing too many of their deployments manually and have been burnt once or twice in the process. It is well suited to any team that wants to get started with AWS, first by using its basic services and then ramping up in complexity later on.

This is one of the procedures I use to deploy my clients’ Laravel applications on AWS. I hope it can help you deploy yours. If your use case is more complex, or if you would like me to mentor your developers in DevOps best practices, come and have a chat with me at https://getlionel.com

**What do we need to know about AWS?**

The good news is, for a simple Laravel application with only a few backing services (database, cache, queue, file storage and perhaps a search engine), you don’t have to know much about AWS:
— We won’t use specific networking (like private subnets that aren’t accessible from the internet), so we’ll use the default VPC and public subnets from our AWS account
— We will deploy all of our services on a single EC2 server and use S3 for file storage, so we don’t have to worry much about other AWS services
— We will deploy a stateless application and back up the database to S3, so we don’t have to worry about EBS (EC2’s persistent storage option)
— There will be a single firewall configuration to set up
— Actually, most of what we’ll do is portable to a cheaper hosting provider (except the S3 file storage, which can be used separately while running the application on a VM from a different hosting provider)


**Why Docker?**

Docker will be our Swiss Army knife for this article. It will help us provision our servers, configure our services as code and orchestrate them:

Server provisioning is a set of actions to prepare a server with appropriate systems, data and software, and make it ready for network operation

Configuration-as-code is a DevOps practice that promotes storing application configuration as code within the source code repository

Orchestration is the automated arrangement, coordination, and management of computer systems and services

Docker by itself wouldn’t be adequate for more complex cases (e.g. if we had 10+ microservices that needed to be updated and scaled separately, running on more than one server). However, our application here only has a handful of backing services and the various Docker tools are all we need:
— We will use Docker Machine to provision our EC2 server(s) into our AWS account straight from the command line
— We will use Docker images to define our services’ configuration as code
— We will use Docker Compose to orchestrate our services together
— We will eventually use Docker Swarm to manually scale our application


**Step 1. Provision our servers with Docker Machine**

Docker Machine is a tool that provisions our servers by installing an appropriate Linux distribution and the Docker daemon in one go. It connects to AWS by calling the Amazon API on our behalf and creates an EC2 instance in our AWS account.
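A sketch of the kind of command used here, assuming the machine is named laravel and your AWS credentials are exported as environment variables; the instance type, region and open ports match what is described just below:

```
# Export AWS credentials so Docker Machine can call the AWS API on our behalf
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...

# Create an EC2 instance, install the Docker daemon on it and open ports 80/443
docker-machine create \
  --driver amazonec2 \
  --amazonec2-region us-east-1 \
  --amazonec2-instance-type t2.large \
  --amazonec2-open-port 80 \
  --amazonec2-open-port 443 \
  laravel
```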

What happened here!?
— Firstly, Docker Machine created an EC2 instance in our AWS account, of the size we specified (t2.large) and in the AWS region we specified (us-east-1)
— It also created a security group (an instance-level AWS firewall) allowing any ingress traffic to ports 80 and 443
— It then installed the Docker daemon and configured it to be remotely accessible through port 2376, using a new TLS certificate it created just for this machine
— It also created a new SSH key and installed its public part on the server while saving the private key on your machine. Port 22 has been opened for SSH access. You can SSH into your machine at any time using docker-machine ssh name_of_your_machine

That’s already a fair amount of work automated!


You can now have a look at where and how Docker Machine saved all this configuration on your local machine, in ~/.docker/machine/machines. There is a new directory named after the Docker Machine you just created (laravel here), containing the SSH keys and TLS certificates mentioned above. You can then use the command docker-machine ls to see the list of all of your machines created through Docker Machine.
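For reference, a minimal sketch of the commands involved, assuming the machine is still named laravel:

```
# Point your local Docker client at the Docker daemon running on the new machine
eval $(docker-machine env laravel)

# List all machines managed by Docker Machine (the active one is marked with *)
docker-machine ls

# SSH into the instance if you ever need to
docker-machine ssh laravel
```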



A note about sharing access to your Docker Machines: now that you know how the magic works behind the scenes, you’ve probably figured out that you can just share this folder with a teammate to give her access to your Docker Machine. Even though that would work, it is definitely not a secure way to share secrets/SSH keys across your company, and Docker Machine has yet to provide an enterprise-level solution for this. Docker Machine is not a solution for provisioning large-scale projects, and more complex tools like Terraform provide a remote state backend to safely share configuration with your team. In the meantime, if you work with a small team and want to get started quickly, Docker Machine fits the bill.

Edit: I’ve written about sharing Docker Machines here




**Step 2. Configure our services with Docker**

The next step is to have each of our services built as separate Docker images, which will then be run as separate Docker containers. The great news here is that most of our services will require no specific configuration (database, cache engine, search engine and queue), and therefore we can just use their official Docker images out of the box! For the rest (our Nginx server, Laravel application, Laravel worker and cron), we will have to build our own Docker images from our source code. We do this by writing Dockerfiles that describe how the images are built. We will commit these Dockerfiles with our code, effectively achieving configuration-as-code (yaaayy!)

This is an overview of our Dockerfiles and config files:

root of your Laravel app
|-- deploy
|   |-- nginx
|   |   |-- ssl
|   |   |   +-- ssl.cert            # our SSL certificate
|   |   |   +-- ssl.key             # our SSL certificate key
|   |   +-- default.conf            # our Nginx config
|   |   +-- index.php
|   |   +-- nginx.conf
|   |   +-- robots.txt
|   |-- cron
|   |   +-- artisan-schedule-run    # our artisan scheduler
|   |-- php-fpm
|   |   +-- php-fpm.conf
|   |   +-- php.ini                 # PHP configuration
|   |   +-- www.conf
+-- Dockerfile                      # our Laravel Dockerfile
+-- Dockerfile-nginx                # our Nginx Dockerfile
+-- docker-compose.yml              # Docker Compose file
+-- docker-compose.env              # our environment variables

Let’s look at the Nginx configuration (default.conf):
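A minimal sketch of what such a default.conf can look like, assuming the PHP-FPM service is named app in docker-compose.yml and listens on port 9000:

```nginx
server {
    listen 80;
    server_name localhost;

    root /var/www/html/public;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # "app" resolves to the PHP-FPM container through Docker Compose service discovery
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```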

And the Docker file to build Nginx with our custom configuration:
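A sketch of Dockerfile-nginx, assuming the deploy/ layout shown above and that the build context is the project root; the base image tag is an assumption:

```dockerfile
FROM nginx:alpine

# Replace the default configuration with ours
COPY deploy/nginx/nginx.conf /etc/nginx/nginx.conf
COPY deploy/nginx/default.conf /etc/nginx/conf.d/default.conf

# Static files that Nginx can serve without hitting PHP-FPM
COPY deploy/nginx/robots.txt /var/www/html/public/robots.txt
COPY public /var/www/html/public
```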

For our Laravel app, worker and cron, we build one single image from one single Dockerfile, and we will override the Docker CMD for each container.
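A sketch of what this shared Dockerfile can look like; the PHP base image tag, the extension list and the build steps are assumptions, not the article’s exact file:

```dockerfile
FROM php:7.2-fpm

# System packages and PHP extensions Laravel commonly needs
RUN apt-get update && apt-get install -y git unzip libpq-dev \
    && docker-php-ext-install pdo_mysql pdo_pgsql opcache

# Composer, copied from the official Composer image
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer

WORKDIR /var/www/html

# Install dependencies first so this layer is cached between code changes
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --no-autoloader

# Copy the application code and finish the autoloader
COPY . .
RUN composer dump-autoload --optimize

# PHP configuration from our deploy/ directory
COPY deploy/php-fpm/php.ini /usr/local/etc/php/php.ini
COPY deploy/php-fpm/www.conf /usr/local/etc/php-fpm.d/www.conf

# Default command runs PHP-FPM; the worker and cron containers override CMD
CMD ["php-fpm"]
```

The worker container will override CMD with the queue worker and the cron container with the artisan-schedule-run script, as shown in the Compose sketch further down.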

**Step 3. Orchestrate our services with Docker Compose**

Docker Compose is a tool to orchestrate multi-container applications with Docker. In our case, we will have up to 7 containers running at a time:
— the Laravel application
— Nginx as a reverse proxy to PHP-FPM
— Redis as a cache engine and queue engine
— PostgreSQL or MySQL for our database
— a Laravel worker running in a separate container
— the Laravel cron in another container as well
— eventually ElasticSearch, if our application needs more than the database’s native search capabilities

Why do we need 3 Laravel containers then? They all run the same Laravel code, but we want to limit each container to one process: PHP-FPM, the Laravel Artisan worker and cron respectively. Since a Docker container runs a single main process, we would otherwise have to hack our way around it by running a process control system inside the container. Not very clean.
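A minimal sketch of what the docker-compose.yml can look like. Service names, image tags and the choice of MySQL are assumptions made for illustration:

```yaml
version: "3"

services:
  nginx:
    build:
      context: .
      dockerfile: Dockerfile-nginx
    ports:
      - "80:80"
      - "443:443"
    links:
      - app
    # logging:                      # uncomment to stream logs to CloudWatch (see Step 4)
    #   driver: awslogs
    #   options:
    #     awslogs-region: us-east-1
    #     awslogs-group: laravel
    #     awslogs-stream-prefix: nginx

  app:
    build: .
    env_file: docker-compose.env
    links:
      - mysql
      - redis

  worker:
    build: .
    command: php artisan queue:work --tries=3
    env_file: docker-compose.env
    links:
      - mysql
      - redis

  cron:
    build: .
    command: sh /var/www/html/deploy/cron/artisan-schedule-run
    env_file: docker-compose.env
    links:
      - mysql
      - redis

  mysql:
    image: mysql:5.7
    env_file: docker-compose.env
    volumes:
      - dbdata:/var/lib/mysql

  redis:
    image: redis:alpine

volumes:
  dbdata:
```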

Before we move forward, there are a few concepts about Docker Compose we need to understand at this point:
— network: Docker Compose creates a bridge network, which is a private network internal to the host, so that containers on this network can communicate. Behind the scenes, the Docker Engine creates the necessary Linux bridges, internal interfaces, iptables rules and host routes to make this connectivity possible.
— links: containers can be connected to each other via links. Links are Docker’s service discovery mechanism: this is how a service name can be resolved into a container IP. We use this in the Nginx config, namely to point to our PHP-FPM container.
— port mappings: a port is by default only exposed on the current container and accessible to the containers linked to it. To expose it to the internet, and provided our host itself is connected to the internet, Docker can map the container port to a host port.
— environment variables: Docker Compose enables you to define, in the .yml file, the environment variables you need inside your containers. This is great as you can deploy the same image in different environments (staging, production, etc.) without rebuilding it.
— volumes: by default, data written inside a Docker container is lost when that container is deleted. Data can be persisted onto the host by using a Docker volume (we will use this for the database).
— logs: Docker Compose supports several logging drivers, one of which is AWS CloudWatch. We will have each of our services stream logs directly to CloudWatch with just a few lines in docker-compose.yml.

Let’s build our application!
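Assuming your shell is still pointed at the remote Docker daemon (eval $(docker-machine env laravel)), building is a single command:

```
# Build every image defined in docker-compose.yml on the remote host
docker-compose build
```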

Let’s check that all our images have been built:
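```
# List the images just built; their names are prefixed with the Compose
# project name (by default, the current directory's name)
docker images
```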

Docker Compose default prefix for images is our current directory’s name

Now before launching our application, we can check that Docker Compose will execute our application using the appropriate environment variables:
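docker-compose config is one way to do that sanity check; it prints the Compose file as it will be resolved:

```
# Print the fully resolved Compose configuration, with variables substituted
docker-compose config
```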

Docker Compose compiled our docker-compose.yml file by injecting environment variables and secrets from docker-compose.env

Looking good, now let’s run our app:
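```
# Start every service in the background
docker-compose up -d
```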

We retrieve our server public IP with docker-machine ip laravel and … bingo!

At this stage, you can just run away with the Dockerfile and the docker-compose.yml file, copy them to the root of every new Laravel project you have and deploy a new project in a handful of minutes… How does that sound!?

The next steps are about centralising all our logs into CloudWatch, setting up free, automatically renewed SSL certificates from Let’s Encrypt in our Nginx image and periodically backing up our database to S3.

Hey! Would you rather have me do all of this for you, or train your team in Docker, AWS and DevOps best practices? Come talk to me at https://getlionel.com


**Step 4. (Optional) Stream logs into CloudWatch**

Docker Compose supports a CloudWatch logging driver, so that everything hitting the containers’ standard output can be streamed into a new CloudWatch log group.

All you need to do is create an instance profile to associate with your EC2 instance at creation time:
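The instance profile needs permission to write to CloudWatch Logs. A minimal sketch of such a policy (the profile can then be passed to docker-machine create with the --amazonec2-iam-instance-profile flag):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```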

And uncomment the logging directive in each of our services’ definitions in the docker-compose.yml file. For example:
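For one service, that directive looks something like this (the region and group name are assumptions):

```yaml
  nginx:
    # ...
    logging:
      driver: awslogs
      options:
        awslogs-region: us-east-1
        awslogs-group: laravel
        awslogs-stream-prefix: nginx
```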

The above will create a log group named “laravel” in your CloudWatch dashboard and start pushing log events there. Once all our services have been connected to CloudWatch, this is what you’ll see:

Logs for all of our services are centralised in CloudWatch

From there, you can use all of CloudWatch’s goodness, like setting alarms and notifications, without the pain of running your own ElasticSearch/Kibana stack.


**Step 5. (Optional) Set up SSL and redirect all HTTP traffic to HTTPS**

You should really set up HTTPS from the first release and save yourself a lot of trouble later. First, we will update our Nginx configuration to redirect all HTTP traffic to HTTPS:
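A sketch of the relevant part of default.conf, with example.com standing in for your domain and certificate paths matching the deploy/nginx/ssl layout above:

```nginx
server {
    listen 80;
    server_name example.com;
    # Redirect every plain-HTTP request to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/ssl.cert;
    ssl_certificate_key /etc/nginx/ssl/ssl.key;

    # ... the Laravel root, index and PHP-FPM locations from Step 2 go here
}
```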

Then we order SSL certificates from Let’s Encrypt. Let’s Encrypt certificates are free and valid for 3 months. We won’t cover here how to set up Nginx to automatically renew the certificates, but at least we have a solution to get free certificates in place in a couple of minutes:
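One way to order them is Certbot in standalone mode (the domain is hypothetical; port 80 must be free and reachable while the challenge runs):

```
sudo certbot certonly --standalone -d example.com -d www.example.com
```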

We then copy the files fullchain.pem (our certificate) and privkey.pem (the private key for the certificate) into our deploy/nginx/ssl directory and update our Nginx Dockerfile to import SSL certificates:
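The addition to Dockerfile-nginx could look like this, assuming the files were renamed ssl.cert and ssl.key as in the directory layout above:

```dockerfile
# Bundle the Let's Encrypt certificate and key into the Nginx image
COPY deploy/nginx/ssl/ssl.cert /etc/nginx/ssl/ssl.cert
COPY deploy/nginx/ssl/ssl.key  /etc/nginx/ssl/ssl.key
```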

Rebuild your Nginx image and re-start your application. Use cURL to check that the certificate is valid and that the redirection to HTTPS is working as expected:
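For example (example.com is a placeholder):

```
docker-compose build nginx && docker-compose up -d
curl -I http://example.com     # expect a 301 redirect to HTTPS
curl -I https://example.com    # expect a 200 response over a valid certificate
```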

Nginx successfully forcing redirection to HTTPS

And we can check with Chrome that the certificate is working:

Nginx serving our Laravel application through HTTPS


**Step 6. (Optional) Automatic database backups**

…where we will add a container to back up our database to S3 every day.

Coming soon



**Going further: (Optional) Scaling our app on multiple instances, hosting our database on RDS, etc.**

Wow, that is already a lot of work automated! Now, what if we want to run the database on AWS RDS, spawn multiple workers, auto-scale our application across multiple servers, etc.? Can we go further than just using Docker Machine and Docker Compose? Not really. Docker’s clustering solution is Docker Swarm, which is losing traction and being made redundant by much more popular solutions like Kubernetes.

Well, that’s for another time but, in the meantime, you can read my other articles on Laravel and AWS “How to deploy Laravel on AWS using CloudFormation” and “How to continuously deploy your Laravel application on AWS”.

Lionel is Chief Technology Officer of London-based startup Wi5 and author of the Future-Proof Engineering Culture course. You can reach out to him on https://getlionel.com