In part one, I have shown how to create a remote development server using DigitalOcean and rsync. In part two, I will show how to automate the entire process using a Bash script.

TL;DR: For those who don't have time, this repository contains a concise summary. It contains the bare minimum required to get started. The aim of this article is to provide a more in-depth tutorial.

To get started, you will need to install and configure these dependencies:

- DigitalOcean account
- Bash, at least version 4
- doctl
- rsync (do.sh sync)
- fswatch (do.sh watch)
- scp (do.sh copy, do.sh scp)

Once that is done, clone the repository. You can either set it up as a sub-module or as a standalone repo. To make do.sh accessible from anywhere, copy it or symlink it into your PATH, e.g. /usr/local/bin.

The droplet can be configured using either Bash or YAML configs. Examples of both are available in the repository. You need to call the script from one level up from do.sh. Alternatively, export the CLOUD_CONFIG environment variable with a different path; it defaults to ./dev-server/cloud-config.yml.

This script allows a certain degree of flexibility via environment variables. For instance, the config used can be specified via CLOUD_CONFIG. If you require a greater degree of customizability, you can either submit a PR or fork the repository.

Cool Features

- one-button solution: to get started, run do.sh start
- supports command chaining, e.g. do.sh up prep sync
- overwritable settings (environment variables)
- separate command (cmd) to rewrite paths in output
- ssh socket to avoid constant reconnection
- singleton pattern: avoids creating more than one droplet at a time
- a number of useful built-in commands to type less
- for more details, please see the repository

Usage

Using do.sh is very simple. To get started, type do.sh help, which will show you a list of available commands. Some commands support chaining, e.g. do.sh up prep sync, which will run in sequential order. Generally, you can chain commands which have a fixed number of arguments, such as up or down. Commands like ssh, cmd and copy can take any number of arguments, so these do not support chaining. A good workaround is to add these commands at the very end, e.g. do.sh up copy file1 file2 file3.

Below is a list of available commands:

- up: create dev server *
- down: destroy dev server *
- reset: re-create dev server *
- sync: rsync from local to remote *
- watch: watch for local changes and sync
- deps: install Node deps on remote *
- prep[are]: shortcut for sync -> deps -> watch
- ssh: start interactive ssh session
- ssh <cmd>: execute command on droplet
- cmd <cmd>: ssh <cmd> and replace cwd with local
- scp <path>: copy from remote to local (cwd)
- copy <path>: copy from local to remote (~/.repo/)
- cp <path>: alias to copy command
- dist: shortcut to copying dist/ from remote *
- host: show public ip of remote *
- config: create config from CLOUD_CONFIG env var *
- help: show available commands

* these commands support chaining, e.g. do.sh up prep sync

Here is an example of my workflow. I start with up, followed by prep. As this script supports chaining, here is what I do: do.sh up prep. If I need to run a command after copying files, I execute do.sh sync cmd <cmd>. Path re-write (cmd) is useful if I want to be able to copy and paste a path from the error stack straight away. For instance, I use iTerm, which supports semantic history, and with path re-write I can open files directly from the console on my local system. A sketch of such a session is shown below.
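To make the workflow concrete, here is a minimal sketch of a session. Only the do.sh commands and the CLOUD_CONFIG variable come from the script itself; the project command ("npm test") and the custom config path are placeholders of my own.

```bash
# Create the droplet and prepare it in one go: up creates the droplet,
# prep is the shortcut for sync -> deps -> watch.
do.sh up prep

# Push local changes and run a command on the droplet; cmd re-writes
# remote paths in the output so they point at local files.
# ("npm test" is only a placeholder for your own project command.)
do.sh sync cmd "npm test"

# Copy the built dist/ folder back and destroy the droplet when done.
do.sh dist
do.sh down

# Settings can also be overridden per invocation via environment variables,
# e.g. to use a cloud config from a non-default location (path is illustrative):
CLOUD_CONFIG=./my-configs/cloud-config.yml do.sh up
```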
Environment Variables

This script supports settings via environment variables. Here is a list of variables:

- NAME: name of the droplet, defaults to dev-server
- IMAGE: os (image) to be used, defaults to ubuntu-20-04-x64
- SPECS: droplet specs, defaults to s-2vcpu-2gb; find out more specs by running doctl compute size list
- REGION: droplet datacenter, defaults to lon1
- CLOUD_CONFIG: location of cloud config, defaults to ./dev-server/cloud-config.yml
- SSH_KEY: local path to private ssh key, defaults to ~/.ssh/developer
- SSH_USER: ssh user, defaults to developer
- SSH_HOST: ssh host, defaults to none; the value is determined at runtime when the up command is run and saved to SSH_OUTPUT
- SSH_SOCKET: local path for ssh socket, defaults to none; once SSH_HOST is available, the value becomes ${HOME}/.ssh/sockets/$SSH_USER@$SSH_HOST
- SSH_CWD: value of pwd on remote host, configured at runtime
- LOCAL_CWD: value of pwd on local host
- SSH_HOST_FILE: local path where the SSH_HOST value is saved, defaults to /tmp/dev_ssh_host
- SSH_CWD_FILE: local path where the pwd of the remote host is saved, defaults to /tmp/dev_ssh_cwd

Conclusion

In this article I have shown a Bash script which automates the creation of a remote development server. Part one went into the technical details of setting up the droplet, while this part (part two) automates the entire process.

Also published on: https://alexkuc.github.io/articles/create-remote-dev-server-part-2/