Whether you need to build Lambda code or hydrate a test database, most projects you work on will involve some DevOps scripting. If your application uses Node, there's no reason not to use it for your scripts too; this makes it easier for the other Node developers on the project to maintain the scripts that support it. With just a few packages, you can put together a great script that other developers will appreciate.
The first step of any scripting setup is getting your code to run. I could write plain JavaScript and have Node execute it directly, but I like the safety I get with TypeScript. Also, since all of my application code is written in TypeScript, I often need to import code I wrote for the application, such as a controller, into a script.
As I described in my previous article Running TypeScript without Compiling, ts-node was my go-to solution for this for a long time. I've since started using esbuild to transpile all of my TypeScript and have discovered TypeScript Execute, which is similar to ts-node except that it uses esbuild under the hood.
Now, I globally install tsx:

```shell
npm i -g tsx
```

then I can just add a shebang to the top of the script:

```shell
#!/usr/bin/env tsx
```

and finally, mark the file as executable:

```shell
chmod +x ./script.ts
```
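Put together, a minimal `script.ts` (the file name and its contents are just a sketch for sanity-checking the setup, not part of any real project) looks like this:

```typescript
#!/usr/bin/env tsx
// Minimal sanity check for the tsx setup: echo any arguments passed in.
const args: string[] = process.argv.slice(2);
console.log(`Running with args: ${JSON.stringify(args)}`);
```

Running `./script.ts foo bar` should print the two arguments back, confirming the shebang and tsx are wired up correctly.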
Once the script is running, I'll probably need some sort of parameters. If I'm in a CI/CD environment, most of the parameters will probably come from environment variables so that I can update them without updating the pipeline definition. If I'm running the script locally, however, I'll need to be able to supply parameters manually or override them.
For this, I use the commander package because it's easy to use and, if I need to expand later, it supports adding multiple commands.
To get started, import the `program` object and define some options. The nice thing is that I can specify an environment variable as the default value for an option, so I know whatever value is passed through that option is the right one.

```typescript
import { program } from 'commander';

program
  .option('--someFlag')
  .option('--foo <foo>',
    'A foo value',
    process.env.FOO_VALUE);
```
Now that the options are defined, it's time to define what happens with those options. This is done by calling `action` on the builder. Usually, I just add an action to the end of my option definitions, so the above sample would become:
```typescript
import { program } from 'commander';
import { inspect } from 'util';

interface CommandOptions {
  someFlag: boolean;
  foo: string;
}

program
  .option('--someFlag')
  .option('--foo <foo>',
    'A foo value',
    process.env.FOO_VALUE)
  .action(async (options: CommandOptions) => {
    console.info(inspect(options));
    await someCommand(options);
  });
```
Now that everything is defined, you have to kick off the actual program. The default function to use is `parse()`. If you're cool and return a `Promise` from the `action` function, however, you'll need to call `parseAsync()`. This function will grab the values in `process.argv`, parse them, call the appropriate `action` function that you defined, and wait for its result.
```typescript
program.parseAsync()
  .then(() => console.info('🎉 Done!'));
```
There are two types of scripts that I hate: ones littered with `console.info()` calls, and ones that don't show any progress at all. If the script will finish in a short amount of time (read: under a couple of seconds), then I think it's okay to just output a result at the end. For most scripts, however, there are multiple steps and it can take a little while.
To reassure me that everything is progressing, I define the steps of the script with listr. I know it hasn't been updated in 4 years as of this writing, but it just works. It displays a list of steps with a spinner and check marks when running locally, and it prints out logs when running in a CI/CD environment.
To use listr, you provide it a list of tasks. A task can return a `Promise` or a new `Listr` instance with sub-tasks. To define the list, create a new instance of `Listr` and pass in an array of task objects, each with at least a `title` and a `task` property.

The `task` function accepts a `context` parameter that is passed through every step. The initial values for the context are passed in when we execute the task list too, so for now we can assume it'll just be the `CommandOptions` from earlier.
```typescript
import Listr from 'listr';

const tasks = new Listr<CommandOptions>([
  {
    title: 'First Step',
    async task(ctx) {
      // TODO: Actually do something 🙄
    }
  }
]);
```
What if you want to skip a task in some cases? You can provide a `skip` function in the task definition. The `skip` function gets the `context` parameter and returns a `boolean` (or a `Promise<boolean>`) which tells listr if the task should get skipped or not. Makes sense.

So if you wanted to skip that first step if the `someFlag` option is true, you would just return it from the `skip` function:
```typescript
const tasks = new Listr<CommandOptions>([
  {
    title: 'First Step',
    skip: (ctx) => ctx.someFlag, // Stop 🛑
    async task(ctx) {
      // TODO: Actually do something 🙄 or not? 🤷
    }
  }
]);
```
One of my favorite features of Listr is that I can map an array of items to create a list of sub-tasks and then run them in parallel. To run Listr tasks in parallel, pass in an options object after the list of tasks and set the `concurrent` property to `true`.
```typescript
const tasks = new Listr([
  {
    title: 'Get Items',
    async task(ctx) {
      ctx.items = await getItems();
    }
  },
  {
    title: 'Process Items',
    task: (ctx) => new Listr(
      ctx.items.map(processItem),
      { concurrent: true }
    )
  }
]);
```
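For context, `processItem` above would map each item to a task object. A hypothetical sketch, where the `Item` shape and the `handleItem` helper are assumptions for illustration rather than part of listr:

```typescript
// Hypothetical shape of a work item; adjust to match your data.
interface Item {
  id: string;
}

// Stand-in for the real per-item work (an API call, a file write, etc.).
async function handleItem(item: Item): Promise<void> {
  console.info(`handled ${item.id}`);
}

// Maps one item to a listr task object with `title` and `task` properties.
function processItem(item: Item) {
  return {
    title: `Process ${item.id}`,
    task: () => handleItem(item),
  };
}
```

Because each generated task just returns a `Promise`, the `{ concurrent: true }` option lets listr start them all at once and tick each one off as it settles.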
Once the task list is defined, you need to tell listr to run it and pass in the initial context value. This couldn't be easier: just call the `run` method on your listr instance and pass in the context. It returns a `Promise` with the final context, so you'll want to `await` the result.
```typescript
// Imagine all the options calls from before...
  .action(async (options: CommandOptions) => {
    const result = await tasks.run(options);
    displayResult(result);
  });
```
Now you have a few essentials that you can use. Next time you find yourself slogging through a manual process to build some code for the 15th time, try throwing a script together.
Cover photo by Jefferson Santos on Unsplash