How To Use The Flexibility Of Nginx To Make Your Apps More Powerful

by Michael, November 12th, 2020

Open-source application diversity is both the greatest boon of the Free and Open-Source Software (FOSS) movement and its greatest hindrance to adoption. You don't always control the application you're consuming, and it often comes with opinions and limitations imposed by the software author, whether intentionally or otherwise.

Reverse proxies are one means of taking back control of the implementation details of these products. By filtering traffic through a Layer 7 (application-level) processor, you can manipulate, encrypt and decrypt, redirect, and otherwise control how data destined for your services flows and behaves.

What Is Nginx and Why Do You Need It?

One great example of a tool for controlling this data is Nginx. Nginx is an open-source web server that is a world leader in load balancing and traffic proxying. It comes with a plethora of plugins and capabilities that can customize an application's behavior in a lightweight and easy-to-understand package. According to Netcraft and W3Techs, Nginx serves approximately 31-36% of active websites, putting it neck and neck with Apache as the world's preferred web server. Not only is it well respected, trusted, performant enough for a large portion of production systems, and compatible with just about any architecture; it also has a loyal following of engineers and developers supporting the project. These are key factors when considering the longevity of your application, how portable it can be, and where it can be hosted.

Heroku and Nginx

Let's look at a situation when you might need Nginx. In our example, you've created an app and deployed it on a Platform as a Service (PaaS)—in our case, Heroku. With PaaS, your life is easier, as decisions about the infrastructure, monitoring, and supportability have already been made for you, guaranteeing a clean environment for you to run your applications with ease. However, to gain these benefits of PaaS, your application must conform to the vendor's constraints. 

When you write the custom code yourself, this is not a problem; simply add the hooks required by the infrastructure and you’re off to the races. However, when you need to use a third-party service or product that doesn't fit the mold of this infrastructure, such as our example BookStack below, the only way to design this integration may be with a middle-tier traffic manipulator like Nginx.

So let's look at three ways you can use Nginx to customize the behaviors of your application in Heroku.

  1. Dynamically assigning server ports at container runtime
  2. Adding basic authentication to your application
  3. Mirroring traffic to test application changes without impacting your production service

Middle-Tier Dynamic Port Binding

First, let's look at dynamic port binding. To serve traffic on Heroku's web dynos, your process must bind to the port given in an environment variable called “PORT”. This variable is assigned dynamically each time a dyno starts and is not known before the application boots. That is a clear blocker for any service that has no way of binding to such a dynamic port.

Heroku does offer buildpacks that can automate the deployment and configuration of such a middle-tier, but the solution for dynamic variables may not always be this easy. There are times when we might need to solve this or similar problems without the vendor's help. So let’s look at how we might manually build a solution that can transform a statically configured application into a dynamically configured one, using BookStack.

BookStack is a self-proclaimed opinionated wiki system built in Laravel with a MySQL backend. BookStack has taken several design considerations out of the application deployer's hands, both to simplify its overall support architecture and to prevent the rabbit hole of wiki pages that are never found when they're most needed.

To prep, we’ll need a few snippets gathered from BookStack & Nginx’s official documentation to put together a Dockerfile and some basic scaffolding files. You can see the whole project here: https://github.com/Tokugero/bookstack-demo.
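
If you want to follow along locally, cloning the project and building the image might look something like this (the bookstack-demo image tag is simply an assumption chosen to match the docker run command used later):

git clone https://github.com/Tokugero/bookstack-demo.git
cd bookstack-demo
docker build -t bookstack-demo .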

Let's look at the Dockerfile:

FROM debian:stable-slim
# Default values; run.sh substitutes the real Heroku-assigned values at startup
ENV PORT="80"
ENV APP_URL="http://localhost/"
# Fetch the BookStack release and the Composer installer
ADD https://github.com/BookStackApp/BookStack/archive/release.zip /bookstack/
ADD https://getcomposer.org/installer /root/composer-setup.php
RUN apt-get update && \
        apt-get install -y \
        unzip \
        php-cli \
        php-mbstring \
        php7.3-curl \
        php7.3-dom \
        php7.3-gd \
        php7.3-mysql \
        php7.3-tidy \
        php7.3-xml \
        php-fpm \
        nginx && \
        apt-get clean && \
        rm -rf /var/lib/apt/lists/*
RUN unzip /bookstack/release.zip -d / && \
        rm /bookstack/release.zip && \
        php /root/composer-setup.php --install-dir=/usr/local/bin --filename=composer && \
        mkdir -p /var/lib/nginx && \
        touch /run/nginx.pid && \
        touch /var/log/php7.3-fpm.log
# Drop in our templated config, startup script, and basic-auth credentials
COPY config/bookstack.env /BookStack-release/.env
COPY config/nginx.conf /etc/nginx/nginx.conf
COPY scripts/run.sh /BookStack-release/run.sh
COPY config/nginx.htpasswd /BookStack-release/.htpasswd
RUN cd /BookStack-release && \
        composer install --no-dev && \
        chown -R www-data:www-data /BookStack-release/ && \
        chown -R www-data:www-data /etc/nginx/ && \
        chown -R www-data:www-data /var/lib/nginx/ && \
        chown -R www-data:www-data /etc/php/7.3/fpm/ && \
        chown www-data:www-data /run/nginx.pid && \
        chown www-data:www-data /var/log/php7.3-fpm.log && \
        chmod 600 .htpasswd
USER www-data
WORKDIR /BookStack-release/
ENTRYPOINT ["./run.sh"]

scripts/run.sh

#!/bin/bash
# Substitute the Heroku-assigned $PORT into the Nginx config
sed -i -e 's/$PORT/'"$PORT"'/g' /etc/nginx/nginx.conf
# Substitute the public app URL into BookStack's .env
sed -i -e 's,APPURL,'${APP_URL}',g' /BookStack-release/.env
# Point PHP-FPM at a local TCP socket and a writable pid file
sed -i -e 's,listen = /run/php/php7.3-fpm.sock,listen = 127.0.0.1:9000,g' /etc/php/7.3/fpm/pool.d/www.conf
sed -i -e 's,pid = /run/php/php7.3-fpm.pid,pid = php7.3-fpm.pid,g' /etc/php/7.3/fpm/php-fpm.conf
# Generate the application key and run database migrations
cd /BookStack-release/ && \
        echo yes | php artisan key:generate && \
        echo yes | php artisan migrate
# Start PHP-FPM in the background and Nginx in the foreground
php-fpm7.3 & \
        nginx -g 'daemon off;'

config/nginx.conf 

worker_processes  4;
error_log  /dev/stderr;
user www-data;
include /etc/nginx/modules/*.conf;
events {
  worker_connections  4096;  # Default: 1024
}
http {
  include    /etc/nginx/fastcgi.conf;
  include    /etc/nginx/mime.types;
  index    index.html index.htm index.php;
  default_type application/octet-stream;
  access_log   /dev/stdout;
  sendfile     on;
  tcp_nopush   on;
  server {
    #This is updated via sed in ./scripts/run.sh at runtime
    listen       $PORT;
    server_name  _;
    root         /BookStack-release/public;
    client_max_body_size 0;
    location / {
        index index.php;
        try_files $uri $uri/ /index.php?$query_string;
    }
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        if (!-f $document_root$fastcgi_script_name) {
            return 404;
        }
        # Mitigate https://httpoxy.org/ vulnerabilities
        fastcgi_param HTTP_PROXY "";
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        # include the fastcgi_param setting
        include fastcgi_params;
        # SCRIPT_FILENAME tells PHP-FPM which script to execute.
        # Keep the following line unless SCRIPT_FILENAME is already set
        # in /etc/nginx/fastcgi_params or a parent context:
        fastcgi_param  SCRIPT_FILENAME   $document_root$fastcgi_script_name;
    }
  }
}

config/bookstack.env

# Placeholder values; run.sh and `php artisan key:generate` replace these at startup
APP_KEY=replaceme
APP_URL=APPURL

Now this seems like a lot, but let’s break down some of the more important bits that make up the core functionality.

The Dockerfile has many lines that are primarily for installing the application itself. These were taken from the official BookStack documentation and manually install the service along with a few extra packages; the goal is to make the environment fit the application. To make the service more dynamic, three specific lines are included:

...
ENV PORT="80"
ENV APP_URL="http://localhost/"
...
ENTRYPOINT ["./run.sh"]

These set up default environment variables and call a shell script that replaces values in the Nginx configuration file with environment values at run time. With this we can customize the application however we like. Instead of relying on hard-coded values, we can now instantiate the service locally:

docker run -it -d -e APP_URL=http://localhost:9876 -e PORT=8080 -p 9876:8080 bookstack-demo

Notice how we can now declare the port at runtime without any special configuration of the application itself, which is the core requirement for exposing a service on Heroku's web dynos. To pass these values in, we simply use sed to replace the hardcoded placeholders with the environment variable values at run time.

sed -i -e 's/$PORT/'"$PORT"'/g' /etc/nginx/nginx.conf;
sed -i -e 's,APPURL,'${APP_URL}',g' /BookStack-release/.env;

In the run.sh script that serves as the container's primary command, we use sed (the stream editor) to replace the predefined placeholders in Nginx's configuration file as well as in the application's dedicated environment file. Because this happens before Nginx starts, the application is guaranteed to bind to the port Heroku assigns once the container comes to life.
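
Before deploying, a quick sanity check against the container we started earlier confirms the substitution worked (port 9876 is the host port from the docker run example above; the exact status code will depend on BookStack's state):

# Nginx should now answer on the port that was injected at runtime
curl -I http://localhost:9876/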

And one final deployment command:

heroku container:push web -a bookstack-demo && heroku container:release web -a bookstack-demo
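
Once the release finishes, the standard Heroku CLI can confirm that Nginx bound to the dyno's assigned port:

# Tail the dyno logs to watch the container start up
heroku logs --tail -a bookstack-demo
# Open the deployed app in a browser
heroku open -a bookstack-demo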

Simple Basic Authentication

Some functions are just not available in our new infrastructure environment without special plugins. But maybe we want a simple password to prevent arbitrary requests from reaching our portal while we're in the initial stages of the project buildout. Nginx includes a large suite of capabilities out of the box, one of which is basic authentication on a per-server or per-location basis. This lets us password-protect our FOSS application without any work on Heroku's or BookStack's end.

With all the previous legwork out of the way, we can simply add a few lines to our project to force a basic authentication on our application. By using the auth_basic module along with Apache’s htpasswd tool, we can add:

config/nginx.conf 

   ...
    location / {
        auth_basic "Under Construction";
        auth_basic_user_file /BookStack-release/.htpasswd;
        index index.php;
        try_files $uri $uri/ /index.php?$query_string;
    }
    ...

Remember to generate the password file so the Dockerfile's COPY step can pick it up (the -c flag creates the file the first time):

bookstack-demo$ htpasswd -c config/nginx.htpasswd myuser
New password: mypass
Re-type new password: mypass
Adding password for user myuser

Now when we try to access the app, the browser prompts for a username and password before any BookStack content is served.
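
We can verify the same behavior from the command line against the local container (port 9876 and the myuser/mypass credentials come from the examples above):

# Without credentials, Nginx should return 401 Unauthorized
curl -I http://localhost:9876/
# With the htpasswd credentials, the request passes through to BookStack
curl -I -u myuser:mypass http://localhost:9876/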

Advanced Traffic Shadowing

Our last example is a special Nginx module that mirrors traffic to any location of your choice (without impacting the original request’s destination). This is an excellent tool for testing code refactors, layout changes, and other alterations with real production traffic. 

config/nginx.conf 

  server {   
    mirror /mirror;
    mirror_request_body on;
    ...
    location = /mirror {
        resolver 1.1.1.1 valid=30s;
        internal;
        proxy_pass https://bookstack-mirror-demo.herokuapp.com$request_uri;
    }
    ...

Heroku Native Tooling

We mentioned Heroku's Nginx buildpack earlier. It can automate some of this port management, letting us hit the ground running without hand-rolling an Nginx setup ourselves. It achieves the same result as our earlier example, but the native tooling requires less back-and-forth between documentation resources.

To generate our mirror site, we’re going to utilize Heroku’s native buildpack to add Nginx functionality to an arbitrary project without all the custom dockerization in the examples above.

Simply spawn a new repository:

mkdir bookstack-mirror-demo; cd bookstack-mirror-demo; git init; 
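
If the Heroku app does not exist yet, create it first so the git remote step below has something to point at (the app name matches the mirror target used earlier):

heroku create bookstack-mirror-demo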

Add the template files:

config/nginx.conf.erb

daemon off;
# Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;

events {
  use epoll;
  accept_mutex on;
  worker_connections <%= ENV['NGINX_WORKER_CONNECTIONS'] || 1024 %>;
}

http {
  gzip on;
  gzip_comp_level 2;
  gzip_min_length 512;
  server_tokens off;
  log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
  access_log <%= ENV['NGINX_ACCESS_LOG_PATH'] || 'logs/nginx/access.log' %> l2met;
  error_log <%= ENV['NGINX_ERROR_LOG_PATH'] || 'logs/nginx/error.log' %>;
  include mime.types;
  default_type application/octet-stream;
  sendfile on;
  # Must read the body in 5 seconds.
  client_body_timeout <%= ENV['NGINX_CLIENT_BODY_TIMEOUT'] || 5 %>;
  server {
    listen <%= ENV["PORT"] %>;
    server_name _;
    keepalive_timeout 5;
    client_max_body_size <%= ENV['NGINX_CLIENT_MAX_BODY_SIZE'] || 1 %>M;
    root /app/public; # path to your app
  }
}

Procfile

web: bin/start-nginx-solo

Add your files to your Heroku Git:

heroku git:remote -a bookstack-mirror-demo
git add *; git commit -am "Initial commit"
heroku buildpacks:add https://github.com/heroku/heroku-buildpack-nginx -a bookstack-mirror-demo
git push heroku master

You can now start mirroring your traffic and cloning your service!
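
Nginx discards the mirror subrequest's response, so production users never see any effect from the shadow traffic. A rough way to confirm it is working is to hit the primary app and watch a matching request arrive in the mirror's logs (the hostnames and app names below are the examples used throughout this article):

# Send a normal request to the primary application...
curl -s -o /dev/null https://bookstack-demo.herokuapp.com/
# ...then look for the mirrored request in the shadow app's logs
heroku logs --tail -a bookstack-mirror-demo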

Conclusion

Free Open-Source Software is an excellent building block for any custom application suite. Regardless of its native restrictions on how it can be used, it can be customized to work within the confines of your environment. Hopefully you now have some ideas on how to abstract your application from any framework in which it resides, and how you might use the immense flexibility of a middle-tier like Nginx to empower your applications.