Going live with Google Cloud Compute Engine

Radu B. Gaspar

In the previous article we saw how easy it is to go live with Google’s App Engine. We also learned how costly that can get in the long run. A simpler and cheaper solution for start-ups and small or medium businesses is Google’s Compute Engine.

Disclaimer: this topic is aimed at developers who are familiar with or don’t shy away from a terminal.

The most frequent question I get when discussing this topic is: “What’s the difference between App Engine and Compute Engine?”. The oversimplified answer is that App Engine does magic out of the box whereas Compute Engine lets you implement your own magic.

A more in-depth answer is that:

  • App Engine is a Platform as a Service (PaaS) in which you simply deploy your code and it handles other, more complex systems, like scaling, automatically.
  • Compute Engine is an Infrastructure as a Service (IaaS) in which all configuration is provided by you, the developer. The operating system, server setup, scaling, SSL… all of that will be handled manually.

For all that manual labor, you do get some major benefits, two of which are flexibility and cost.

Please read the previous article in order to create your Google Cloud project and install Google Cloud SDK. From this point forward let’s assume again that we’re the owners of example.com.

Go to your Google Cloud / Compute Engine / VM instances section and select Create instance. From here you should:

  • Give your instance a name (warning: it can’t be changed easily, so give it a good name)
  • Select a zone — this decides where your data is stored and what computing power you’ll have available (details here). Please note that this step affects the overall cost… some zones are more expensive than others.
  • Pick a CPU — this will also affect cost. In our case we’ll pick the smallest possible (micro 1 shared vCPU with 0.6GB RAM, a.k.a. f1-micro)
  • Pick the OS — pick one that you’re familiar / comfortable with (I’ll pick CentOS 7)
  • Also make sure you check Allow HTTP traffic and Allow HTTPS traffic. We’ll be adding our own SSL certificate later.
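If you prefer the terminal, the same instance can be created with the gcloud CLI from the Google Cloud SDK. This is a sketch; the instance name and zone below are examples, so adjust them to match the choices you made above:

```shell
# create an f1-micro CentOS 7 instance; the http-server/https-server
# tags correspond to checking HTTP and HTTPS traffic in the console
# ("example-com" and "europe-west1-b" are example values)
gcloud compute instances create example-com \
  --zone=europe-west1-b \
  --machine-type=f1-micro \
  --image-family=centos-7 \
  --image-project=centos-cloud \
  --tags=http-server,https-server
```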

You should be able to see your instance in the VM instances view, like so:

You’ll see that your instance has an internal IP and an external IP. The external IP is what we care about, since we’ll be adding it to our registrar’s DNS Manager so we can link the domain name to this server.

Before we do that, however, we need to make sure we reserve a static IP. The one we’re currently seeing is what’s known as an ephemeral IP, which can change (for example, when the instance is stopped and restarted).

To do that, go to Google Cloud / VPC Network / External IP addresses and click on Reserve static address:

  • Give it a name so it’s easy to remember what it belongs to; I usually use the domain name
  • Pick an IP version (if you’re not sure which to pick, go with IPv4 — type Regional) and
  • make sure you attach it to your VM instance. You get one static IP for “free” as part of your account. Other static IP addresses are billed hourly if they’re not attached to a machine.
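The same reservation can be made from the terminal. This is a sketch; the address name and region are examples, and the region must match your instance’s zone:

```shell
# reserve a new regional IPv4 address ("example-com-ip" is an example name)
gcloud compute addresses create example-com-ip --region=europe-west1

# or promote the instance's current ephemeral IP to a static one instead:
# gcloud compute addresses create example-com-ip \
#   --addresses=CURRENT_EXTERNAL_IP --region=europe-west1
```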

Go back to Google Cloud / Compute Engine / VM instances and you’ll see that you probably have a different external IP address. This one is static and we can add it to our registrar. We’ll do that after our server is actually displaying something.

We need to install a few things on the server, so click on the SSH button in the table with our instance name:
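If you’d rather not use the browser-based terminal, the gcloud CLI can open the same SSH session; the instance name and zone are the ones you chose earlier (mine are examples):

```shell
# opens an SSH session to the VM, generating SSH keys on first use
gcloud compute ssh example-com --zone=europe-west1-b
```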

Based on the OS you’ve chosen, these next steps will differ, but if you’ve chosen CentOS, the package manager will be yum. We need to run the following commands:

sudo yum install epel-release # the CentOS EPEL repository
sudo yum install nginx # our web server
sudo systemctl start nginx # to run nginx
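One small addition worth making here: also enable nginx at boot, so a VM restart doesn’t leave the site down. These are standard systemd commands:

```shell
sudo systemctl enable nginx   # start nginx automatically at boot
systemctl status nginx        # verify it reports "active (running)"
```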

At this point, if we type our static IP address in a browser, we’ll see the default Nginx landing page. So far so good; now we need to teach it to serve our own site. We’ll configure it to serve a static HTML page:

  • first let’s create a folder which will hold our site
sudo mkdir -p /var/www/example.com
  • create an index.html file with some random content
echo "Hello World" > /var/www/example.com/index.html
  • we also need to change ownership of this folder from the root user to our Google Cloud user. Luckily, that user is stored in a system variable as $USER
sudo chown -R $USER:$USER /var/www/example.com
  • we should also change permissions on these folders so that they can be read by the system
sudo chmod -R 755 /var/www/example.com #755 for folders
sudo chmod -R 644 /var/www/example.com/* #644 for files
  • create an nginx config file which will serve our site on port 80. Here we have the option of creating the sites-available and sites-enabled folders, or taking a shortcut and creating our file in the conf.d folder. We’ll go with option two. Traditionally, you’d name your config file after your domain, followed by the .conf extension, like so:
sudo vim /etc/nginx/conf.d/example.com.conf

Don’t worry if Vim is tricky to use; we only need to paste this into it:

server {
    listen 80;
    server_name localhost; #example.com www.example.com;

    location / {
        root /var/www/example.com;
        index index.html index.htm;
        try_files $uri $uri/ =404;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

Then press ESC, then type :wq to save and exit the file (remember this). This is the most basic config file… you give it the root of your site and the server name, and you’re done. The server_name will be localhost until we link the domain with the server’s IP address; after that we can write the actual domain name here.

Next we need to disable the default page that nginx is running, so edit the nginx.conf file like this:

sudo vim /etc/nginx/nginx.conf

Press the Insert key so you can edit the file, find the http {} section, and comment out the entire default server {} block by adding a # in front of each line, then save and exit the file.
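Before restarting, it’s worth validating what we just changed; nginx ships with a built-in syntax check that reports the first broken directive, if any:

```shell
# test the configuration files without restarting the server
sudo nginx -t
```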

Almost there, we just need to restart nginx.

sudo systemctl restart nginx

Open a browser tab, navigate to the server’s static IP address aaaaaaaand:

Why though? We did everything right… right? Well, no… welcome to the wonderful world of SELinux. If you’re on a Linux distribution which doesn’t ship with SELinux, like Ubuntu… you’re probably up and running already. The rest of us on CentOS or other Red Hat distributions are stuck here.

SELinux was originally authored by the NSA (yes, that NSA) and Red Hat, and provides a security mechanism for access control. We have the configs in place, but SELinux won’t let nginx access them. To make it work, simply run the following command:

sudo restorecon -v /var/www/example.com/index.html

Remember it, as you’ll need to run it every time you add new folders or files here. As for WHY it works, I’ll leave you with this article, which explains the basics of SELinux.
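If you’re curious what restorecon actually changed, you can inspect SELinux labels with ls -Z; nginx can only read files carrying an httpd-readable type such as httpd_sys_content_t:

```shell
# show the SELinux security context of our site files
ls -Z /var/www/example.com/
```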

Refresh the page again and eureka! We’re serving a static HTML page. Next, let’s link our domain name with this machine.

In the previous article I mentioned I’m using GoDaddy as my domain registrar. Regardless of what you’re using, the next step should be similar. Log into your domain registrar and go to your domain DNS Manager. Assuming you’re using IPv4 for your static IP, add two new records with:

  • Type: A, Host: @, Points To: the external (static) IP of your VM instance
  • Type: A, Host: www, Points To: the external (static) IP of your VM instance
Registrar DNS Records

In the screenshot above, the Name is the Host and the Value is the static IP. If you’ve chosen IPv6, you do the same thing, but instead of A records, you’ll be creating AAAA records in the same fashion.

Edit your server config file and set the correct server_name (which is currently localhost):

# edit config file
sudo vim /etc/nginx/conf.d/example.com.conf
# update server_name from this
server_name localhost; #example.com www.example.com;
# to your domain name
server_name example.com www.example.com;
# make sure to restart nginx again
sudo systemctl restart nginx

The DNS changes can take a while to propagate, but once they do, you’ll be able to access your server using your own domain name instead of the static IP.
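You can check propagation from your own machine without a browser; dig (or nslookup) should eventually return your static IP for both records. Here example.com stands in for your real domain:

```shell
# print only the resolved address for each record
dig +short example.com A
dig +short www.example.com A
```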

All that’s left is to create some SSL certificates so everything runs on HTTPS, and we’re done. The simplest free option is to use the certbot utility, which generates certificates issued by Let’s Encrypt. Certbot also offers plugins tailored to your web server (in our case nginx), and you install it like so:

# this assumes you've already installed the
# epel-release repository in the previous steps
sudo yum install certbot-nginx

If you checked HTTP and HTTPS traffic when you created the VM instance, you can skip this step. If not, we need to allow access through the firewall on ports 80 and 443, so run these commands:

# for systems with firewalld firewall
sudo firewall-cmd --add-service=http
sudo firewall-cmd --add-service=https
sudo firewall-cmd --runtime-to-permanent
# for systems with iptables firewall
sudo iptables -I INPUT -p tcp -m tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT -p tcp -m tcp --dport 443 -j ACCEPT

Once we’ve done these steps, we can ask certbot to issue an SSL certificate for our domain and sub-domain like so:

sudo certbot --nginx -d example.com -d www.example.com

If you’re running it for the first time, certbot will ask you a few questions, one of which relates to HTTP traffic; specifically, whether to allow plain HTTP traffic or only HTTPS. I’ll pick: “Secure - Make all requests redirect to secure HTTPS access”.

Certbot will automatically update your nginx config file with the newly generated SSL certificates. You can test the strength of your SSL certificates here. These are free but need renewal once every 3 months.

There are two ways to approach this; you can:

  • manually run certbot renew once every 3 months (#boring, #tedious, #iForgot) or
  • create a system cron job to run this command for you

If you create an SSL auto-renewal cron job, it’s recommended that you run this command at least once or twice per day. On CentOS 7, cronie is running by default.

# verify cronie is installed
sudo rpm -q cronie
# install cronie if necessary
sudo yum install cronie
# check if crond service is running
sudo systemctl status crond.service
# display help info about cron jobs
sudo cat /etc/crontab
# open cron jobs file in default editor
sudo crontab -e
# inside the editor, add this line to renew SSL every day at 2:00 AM
0 2 * * * /usr/bin/certbot renew --quiet
# restart crond service after making changes to cron job
sudo systemctl restart crond.service
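Before trusting the cron job, you can simulate a renewal; certbot supports a dry run against Let’s Encrypt’s staging servers that doesn’t touch your real certificates:

```shell
# rehearse the renewal end-to-end without issuing anything
sudo certbot renew --dry-run
```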

If you want to use nginx as a reverse proxy (in case you’re running a NodeJS instance on another port), you only need to edit your example.com.conf config file to something like this:

server {
    ...

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass "http://127.0.0.1:3000";
    }

    ...
    # SSL configs added by certbot
}

Take note of the proxy_pass directive and use the port your NodeJS server is running on. As a side note, you should probably run your NodeJS app through PM2 or a similar process manager.
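As a sketch of that last suggestion, PM2 keeps the Node process alive behind nginx and restarts it if it crashes; app.js here is a placeholder for your actual entry point:

```shell
sudo npm install -g pm2   # install PM2 globally
pm2 start app.js          # run the app ("app.js" is a placeholder)
pm2 startup               # prints a command that enables boot persistence
pm2 save                  # save the current process list for resurrection
```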

You now have a very basic site running on Google Cloud Compute Engine with SSL auto-renewal.

Congratulations and happy coding!
