In a previous article, we saw how to deploy a simple React (or any UI) application using nginx and docker. In this article, we are going to take this one step further and deploy the dockerized application to AWS. To do that, we will buy and configure a custom domain name from GoDaddy and then deploy our application with an SSL certificate generated using Let’s Encrypt.
That was certainly a mouthful. Before moving ahead, we will break this down into small steps that can be easily implemented and understood. By no means am I an expert in the field of DNS and SSL/TLS encryption, so I will provide additional material as we cruise through these steps.
Although the steps may look daunting, do not worry: they are easy to understand and perform yourself. Let’s get started.
Since this is not the critical step of what we are trying to achieve, we can skim over it and create a lean application. We can use create-react-app to generate this sample application.
Once the application is created, we simply replace all the boilerplate code with our own placeholder as follows:
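A minimal sketch of what that placeholder `App.js` might look like (the heading text is just an example; use whatever content suits your site):

```jsx
// App.js — replaces the create-react-app boilerplate with a simple placeholder.
import React from "react";

function App() {
  return (
    <div>
      <h1>kashyap.app</h1>
      <p>Coming soon.</p>
    </div>
  );
}

export default App;
```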
That’s it. We are not very concerned about styling the application at this point so we can revisit that when we have more information.
Once we search and buy a domain of our choice, we need to edit the DNS configuration of this domain by going into the DNS settings (which can also be accessed from the Manage Domains option in the main menu). Among all the settings available, we are only concerned with Nameservers right now.
A Nameserver is the server that answers DNS queries for a domain. When we request kashyap.app in our browser, the browser first uses DNS to retrieve the current Nameservers associated with the domain name, which then provide the browser with the A record, i.e. the IP address of our website. The browser is then able to communicate with our server and load the information that we requested.
Coming back to the GoDaddy dashboard, we go to the list of all the Domain Names and then select the option to edit the DNS settings for the domain of our choice. After navigating to the edit screen, one of the options that we would see there is the Nameservers which are currently handling the network route to our domain. It looks similar to what we see below:
Next, we want to hand off the Nameserver resolution over to AWS so that we can manage everything in one location instead of having our servers on AWS and domain configuration on GoDaddy. This is also helpful if we wish to automate our entire workflow. But, there might be better alternatives available for the automation such as clustered deployments using Docker Swarm or Kubernetes.
To complete the DNS handover to AWS, we first need to set up AWS and in particular their DNS web service called Route 53.
As the name suggests, we will be creating a hosted zone in Route 53 which is a DNS web service provided by AWS. To create a Hosted Zone, we need to know the domain name (which we already purchased in Step 2). To add a hosted zone, follow the below steps.
Navigate to AWS > Route 53 > Hosted Zones > Create Hosted Zone
This would open the Create Hosted Zone side panel on which we need to enter the Domain Name. Leave the rest as default values and click create.
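For those who prefer the command line, the same Hosted Zone can also be created with the AWS CLI (assuming it is installed and configured; the zone ID placeholder below comes from the create call’s output):

```shell
# Create a Hosted Zone for our domain via the AWS CLI.
# --caller-reference must be unique per request; a timestamp works well.
aws route53 create-hosted-zone \
  --name kashyap.app \
  --caller-reference "$(date +%s)"

# List the NS records assigned to the new zone.
aws route53 list-resource-record-sets \
  --hosted-zone-id <ZONE_ID> \
  --query "ResourceRecordSets[?Type=='NS']"
```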
On selecting the newly created Hosted Zone, we can see the various records created under it, such as the Start of Authority (SOA) and Nameserver (NS) records. For now, we are only focused on the NS records.
Once we have the Hosted Zone created, click on the name of the Hosted Zone to see the Nameserver (NS) values which are assigned to the domain.
We only care about the NS values at this time, which are typically 2–4 in number as shown below. Copy them and head on over to GoDaddy.
On GoDaddy, navigate to the DNS settings page as we did earlier, this time under the Nameservers section, click on Edit and select Custom from the dropdown to enter the NS values of our new AWS Hosted Zone.
Once we save the changes, it takes some time for them to propagate over the internet, but once they do, GoDaddy will notify us that it no longer maintains our domain’s DNS records:
That is all. We have successfully purchased a domain on GoDaddy and transferred it over to AWS Route 53. Next, we create an SSL certificate for our domain name.
Without going into a lot of detail about certificates and Certificate Authorities, here is a small summary which beautifully explains what Let’s Encrypt is all about:
To enable HTTPS on your website, you need to get a certificate (a type of file) from a Certificate Authority (CA). Let’s Encrypt is a CA. In order to get a certificate for your website’s domain from Let’s Encrypt, you have to demonstrate control over the domain.
Let’s Encrypt provides a client called certbot which can be used to automate generation and renewal of SSL certificates. To assert control over the domain, certbot performs a challenge in which it communicates with our DNS and expects a certain outcome. The outcome differs based on the type of challenge that is being performed. In our example, we will go with the dns-01 challenge which expects the presence of a certain TXT record on the DNS.
To generate this TXT record, which we need to provide as proof, we can either install the certbot client locally and run the commands on our machine, or run the docker image for certbot, which makes things simpler, cross-platform compatible, and allows us to potentially automate the process. While running the certbot image, it requests some information about the domain for which we are trying to generate the SSL certificate. This information can be entered at the prompt or automated via flags and options passed into the container. Below is a script which consolidates all the options and sets some sensible default values. Be sure to learn about Let’s Encrypt’s rate limits in case you plan to use it in bulk.
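A sketch of such a script, assuming docker is installed; the domain and email values are placeholders to replace with your own:

```shell
#!/usr/bin/env bash
# Generate (or renew) an SSL certificate using the certbot docker image
# and a manual DNS-01 challenge.
DOMAIN="kashyap.app"        # placeholder: your domain
EMAIL="you@example.com"     # placeholder: your email for expiry notices

# Mount ./letsencrypt so the generated certificates land in our project root.
docker run -it --rm \
  -v "$(pwd)/letsencrypt:/etc/letsencrypt" \
  certbot/certbot certonly \
  --manual \
  --preferred-challenges dns \
  --keep-until-expiring \
  --agree-tos \
  --email "$EMAIL" \
  -d "$DOMAIN"
```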
Running the bash script from the previous step, within the root folder of our project, would first download the certbot docker image and then provide us with the necessary information to be added to the DNS record as a part of the DNS challenge.
From above, we are asked to enter the provided value as an _acme-challenge TXT record on Route 53 Hosted Zone for our domain.
To enter the TXT record, we have to go back to the Hosted Zone for our application on Route 53 and create a new record set with the value as shown below.
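As an alternative to the console, the TXT record can also be upserted with the AWS CLI; the zone ID and record value below are placeholders (the value comes from certbot’s prompt):

```shell
# UPSERT the _acme-challenge TXT record required by the DNS-01 challenge.
aws route53 change-resource-record-sets \
  --hosted-zone-id <ZONE_ID> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "_acme-challenge.kashyap.app",
        "Type": "TXT",
        "TTL": 60,
        "ResourceRecords": [{"Value": "\"<VALUE_FROM_CERTBOT>\""}]
      }
    }]
  }'
```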
Once saved, wait a few seconds (sometimes a few minutes) for the change to propagate and then press Enter on the terminal. The certbot client will then perform the DNS challenge and verify our ownership of the domain.
Because of the way we have mounted volumes to the certbot container, the certificates that are generated are now available in the letsencrypt folder in our project’s root folder.
It is up to us to decide whether we want to run the SSL generation script with every deployment or only occasionally, by hand, to refresh our certificates. The --keep-until-expiring flag ensures that new certificates are only generated when the old ones are close to expiring.
Once the certificates are generated, we need to provide the certificates to our web server. Since we only have a static website to deploy, we will be using an NGINX server to serve our files. And, because we want to use our newly generated SSL certificate, we will have to create the nginx config to include the certificate.
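A sketch of such an nginx config, assuming the default certbot directory layout under /etc/letsencrypt/live/ and the standard nginx html root (adjust the domain and paths to match your setup):

```nginx
# Redirect all HTTP traffic to HTTPS.
server {
    listen 80;
    server_name kashyap.app;
    return 301 https://$host$request_uri;
}

# Serve the static build over HTTPS with the Let's Encrypt certificate.
server {
    listen 443 ssl;
    server_name kashyap.app;

    ssl_certificate     /etc/letsencrypt/live/kashyap.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/kashyap.app/privkey.pem;

    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri /index.html;
    }
}
```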
Above, we have a very simple nginx config file which serves the application on two ports, 80 and 443, for HTTP and HTTPS respectively. If anyone tries to access the application over HTTP, they are redirected to HTTPS by the 301 redirect in the first server block.
Now that our nginx config is ready, we can create our docker image which packages our shippable application. Notice that we are serving the ssl_certificate and ssl_certificate_key from the /etc/letsencrypt/ folder. We have to ensure that we place our certificates in this folder when we generate the docker image for our application.
The Dockerfile is now responsible for pulling the nginx image, copying the build/ folder for our source code, copying the certificates to nginx and finally copying the custom nginx configuration that we created above.
We can generate the build folder for a React application using the npm run build command. With that out of the way, our Dockerfile would appear as follows:
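A sketch of that Dockerfile, assuming the certificates sit in the project’s letsencrypt folder and the custom config is named nginx.config as in this article:

```dockerfile
# Start from the official nginx image.
FROM nginx:alpine

# Copy the production build of our application.
COPY build/ /usr/share/nginx/html

# Copy the Let's Encrypt certificates to the path referenced by the nginx config.
COPY letsencrypt/ /etc/letsencrypt/

# Replace the default nginx configuration with our custom one.
COPY nginx.config /etc/nginx/conf.d/default.conf
```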
The most important thing to note is that we are copying the certificates to the same location from which the nginx.config file reads them.
After the Dockerfile is created, we can use it to build an image and run it as a container anywhere we wish. Of course, in this case we cannot run it locally, since our nginx configuration references the certificates generated for our domain in the previous steps. We will instead upload the image and run it on an AWS EC2 instance.
Below is a minimal bash script which creates a default (t2.micro) EC2 instance and opens ports 80 and 443 on it. We also need to provide our AWS credentials as a key and secret. In this case, I created a new user with programmatic access and downloaded the key and secret from the AWS IAM service. To grant access to the EC2 service, I assigned the AmazonEC2FullAccess permission to that user.
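A sketch of such a script, assuming the AWS CLI is configured with the credentials above; the AMI ID and key pair name are placeholders for your region and account:

```shell
#!/usr/bin/env bash
# Create a t2.micro EC2 instance and open ports 80 and 443.
AMI_ID="ami-xxxxxxxx"     # placeholder: an Amazon Linux 2 AMI for your region
KEY_NAME="my-key-pair"    # placeholder: your EC2 key pair

# Security group allowing HTTP and HTTPS traffic from anywhere.
SG_ID=$(aws ec2 create-security-group \
  --group-name web-sg \
  --description "Allow HTTP/HTTPS" \
  --query 'GroupId' --output text)

aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port 443 --cidr 0.0.0.0/0

# Launch the instance and wait for it to come up.
INSTANCE_ID=$(aws ec2 run-instances \
  --image-id "$AMI_ID" \
  --instance-type t2.micro \
  --key-name "$KEY_NAME" \
  --security-group-ids "$SG_ID" \
  --query 'Instances[0].InstanceId' --output text)

aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"

# Log the public IP, to be added as the A record on our Hosted Zone.
aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[0].Instances[0].PublicIpAddress' --output text
```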
Running the above script takes a few minutes since EC2 startup takes some time. Also, because we build the image and deploy it to a new EC2 instance every time, the image layers are not cached and are re-downloaded on each execution.
Do not take this as your go-to production strategy. It is only a temporary shortcut to make things easier for the time being.
In the bash script above, we have also logged the public IP address of the EC2 instance that we just created. All that is left to do is to copy the IP address of the instance and provide it as the A record on our Hosted Zone.
After a few seconds, our application, or https://kashyap.app in this case, should be up and running.
Creating an SSL certificate should be a free and easy process, and thanks to Let’s Encrypt, it is. I hope this article provided the clarity necessary to achieve that goal. Please leave a comment to provide your feedback.
If you enjoyed this blog be sure to give it a few claps or follow me on LinkedIn.