How to Automate the Creation of a Development Server

by Alexander Kucheryuk, September 17th, 2020

In part one, I showed how to create a remote development server using DigitalOcean and rsync. In part two, I will show how to automate the entire process using a Bash script.

TL;DR: For those who don't have time, this repository contains a concise summary with the bare minimum required to get started. The aim of this article is to provide a more in-depth tutorial.

To get started, you will need to install and configure these dependencies:

  • DigitalOcean account
  • Bash, at least version 4
  • doctl
  • rsync (used by do.sh sync)
  • fswatch (used by do.sh watch)
  • scp (used by do.sh copy and do.sh scp)
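On macOS with Homebrew, for instance (an assumption; use your platform's package manager), installing and configuring the tooling might look like this:

# Install the CLI dependencies (Homebrew on macOS is an assumption;
# rsync and scp typically ship with the OS already)
brew install doctl fswatch

# Link doctl to your DigitalOcean account using an API token
doctl auth init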

Once that is done, clone the repository. You can either set it up as a sub-module or as a standalone repo. To make do.sh accessible from anywhere, copy it or symlink it into your PATH, e.g. /usr/local/bin.
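For instance (the clone URL is a placeholder; substitute the actual repository), a standalone setup could look like this:

# Clone the repository (URL is a placeholder)
git clone https://github.com/alexkuc/dev-server.git

# Symlink do.sh into PATH so it can be invoked from anywhere
sudo ln -s "$PWD/dev-server/do.sh" /usr/local/bin/do.sh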

The droplet can be configured using either Bash or yaml configs. Examples of both are available in the repository. Because the default config path is relative, you need to call the script from one level up from do.sh. Alternatively, export the environment variable CLOUD_CONFIG with a different path; it defaults to ./dev-server/cloud-config.yml.

This script allows a certain degree of flexibility via environment variables. For instance, the config used can be specified via CLOUD_CONFIG. If you require a greater degree of customizability, you can either submit a PR or fork the repository.
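As a minimal sketch (the alternate path is hypothetical), selecting a different config looks like this:

# Point the script at a config outside the default location
export CLOUD_CONFIG=~/configs/dev-cloud-config.yml
do.sh up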

Cool Features

  • one-button solution; to get started, run do.sh start
  • supports command chaining, e.g. do.sh up prep sync
  • overridable settings via environment variables
  • separate command cmd to rewrite paths in output
  • ssh socket to avoid constant reconnection (see the sketch below)
  • singleton pattern: avoids creating more than one droplet at a time
  • a number of useful built-in commands to type less
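The ssh socket feature is presumably built on OpenSSH connection multiplexing. Here is a minimal sketch of the flags involved (the script's actual invocation may differ; the IP is a placeholder):

# Create the socket directory once
mkdir -p ~/.ssh/sockets

# Reuse one connection via a control socket instead of reconnecting;
# %r@%h expands to user@host, matching the SSH_SOCKET path described later
ssh -o ControlMaster=auto \
    -o ControlPath="$HOME/.ssh/sockets/%r@%h" \
    -o ControlPersist=600 \
    developer@203.0.113.10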

Usage

Using do.sh is very simple. To get started, type do.sh help, which will show you a list of available commands. Some commands support chaining, e.g. do.sh up prep sync, which runs in sequential order. Generally, you can chain commands which have a fixed number of arguments, such as up or down. Commands like ssh, cmd and copy can have any number of arguments, so these do not support chaining. A good workaround is to add these commands at the very end, e.g. do.sh up copy file1 file2 file3.

Below is a list of available commands:

up         create dev server *
down       destroy dev server *
reset      re-create dev server *
sync       rsync from local to remote *
watch      watch local for changes and sync
deps       install Node deps on remote *
prep[are]  shortcut for sync -> deps -> watch
ssh        start interactive ssh session
ssh <cmd>  execute command on droplet
cmd <cmd>  ssh <cmd> and replace cwd with local
scp <path> copy from remote to local (cwd)
copy <path> copy from local to remote (~/.repo/)
cp <path>  alias to copy command
dist       shortcut to copying dist/ from remote *
host       show public ip of remote *
config     create config from env var CLOUD_CONFIG *
help       show available commands

* these commands support chaining, e.g. do.sh up prep sync

Here is an example of my workflow. I start with up, followed by prep. As this script supports chaining, here is what I do: do.sh up prep. If I need to run a command after copying files, I execute do.sh sync cmd <cmd>. Path re-writing is useful when I want to copy and paste a path from an error stack straight away (cmd). For instance, I use iTerm, which supports semantic history; with path re-writing, I can open files directly from the console on my local system.
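Put together, a session might look like this (npm test is an illustrative command; substitute your own):

# Create the droplet, then run the prep shortcut (sync -> deps -> watch)
do.sh up prep

# Sync changes and run a command remotely, with remote paths in the
# output rewritten to local ones
do.sh sync cmd npm test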

Environment Variables

This script supports settings via environment variables. Here is a list of variables:

  • NAME: name of the droplet, defaults to dev-server
  • IMAGE: OS (image) to be used, defaults to ubuntu-20-04-x64
  • SPECS: droplet specs, defaults to s-2vcpu-2gb; find out more about available specs by running doctl compute size list
  • REGION: droplet datacenter, defaults to lon1
  • CLOUD_CONFIG: location of the cloud config, defaults to ./dev-server/cloud-config.yml
  • SSH_KEY: local path to the private ssh key, defaults to ~/.ssh/developer
  • SSH_USER: ssh user, defaults to developer
  • SSH_HOST: ssh host, defaults to none; the value is determined at runtime when the up command is run and saved to SSH_HOST_FILE
  • SSH_SOCKET: local path of the ssh socket, defaults to none; once SSH_HOST is available, the value becomes ${HOME}/.ssh/sockets/$SSH_USER@$SSH_HOST
  • SSH_CWD: value of pwd on the remote host, configured at runtime
  • LOCAL_CWD: value of pwd on the local host
  • SSH_HOST_FILE: local path where the SSH_HOST value is saved, defaults to /tmp/dev_ssh_host
  • SSH_CWD_FILE: local path where pwd of the remote host is saved, defaults to /tmp/dev_ssh_cwd
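As with any shell command, these can be overridden per invocation or exported for the session (the values below are illustrative):

# Spin up a bigger droplet in a different region under a custom name
NAME=my-dev SPECS=s-4vcpu-8gb REGION=nyc1 do.sh up

# Or export once for the whole shell session
export SSH_KEY=~/.ssh/my-dev-key
do.sh ssh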


Conclusion

In this article, I have shown a Bash script which automates the creation of a remote development server. Part one went into the technical details of setting up the droplet, while this part (part two) automates the entire process.

Also published on: https://alexkuc.github.io/articles/create-remote-dev-server-part-2/