Your CLI Tool Should be a Server (Maybe)

by John Murray, January 19th, 2018

If you have written a CLI, maybe it should have also been a server.

If you’re an author of CLIs and tooling, I’m sure the idea of integration has crossed your mind before. However, if your idea of integration includes the use of grep, then maybe it’s time to take a step back and consider other options.

Lately, I’ve been rewriting an internal company tool we have for managing database changes across various environments. The tool itself is rather straightforward to operate and is used in a multitude of environments. Developers use it when setting up and tearing down development environments or locally testing alters. Our CI environments use it to verify DB changes against the prod data set, and it is also used by consumers of the DB to set up prod+1 environments. Lastly, we have internal systems that create “mock” production environments for developer/automated testing; these systems use the tool (or by-products of the tool) to replicate production state.

The major point I’m trying to make is that the tool is part of many workflows and must be able to integrate. This is true for many CLIs. However, the problem is that integration is usually relegated to string munging on the command line or in a scripting language. This style of integration is very fragile. What if the text output of the CLI tool changes? What are the expected contracts? Well… there typically are none, and that’s a problem when the tool is part of release pipelines and other important processes.
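
To make that fragility concrete, here’s a small sketch. The CLI output format and the JSON response shape are both hypothetical, but they illustrate the difference between scraping human-oriented text and consuming a contract:

```python
import json

# Hypothetical text output from a CLI like the schema tool described here.
# The column layout is an implicit contract nobody has written down.
cli_output = """\
001-create-users.sql    applied
002-add-index.sql       pending
"""

# Text-munging integration: silently breaks if columns are reordered,
# renamed, or a header line is added in a future release.
pending = [line.split()[0]
           for line in cli_output.splitlines()
           if line.split()[1] == "pending"]

# Contract-based integration: the same data served as a structured response.
# A renamed field now fails loudly (KeyError) instead of corrupting results.
api_response = ('[{"alter": "001-create-users.sql", "state": "applied"},'
                ' {"alter": "002-add-index.sql", "state": "pending"}]')
pending_api = [a["alter"] for a in json.loads(api_response)
               if a["state"] == "pending"]
```

Both snippets extract the same answer today; only the second one tells you when the producer changes its mind about the format.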

Geez, I think you’ve made your point by now. So what do you suggest, smarty pants?

The Server CLI

Instead of writing a CLI, imagine that you instead wrote an (HTTP) service. One that defines service contracts and exposes those contracts to clients. This service is ideal for integration as the client and server can now share assumptions. Even better, the client can likely generate their client-side code from the published contracts, making integration (relatively) easy and painless.

This is cool and all, but my CLI works in the local directory. How am I supposed to make that a server?

It’s common for CLIs to work in the local directory, or in some directory specified by the user. Servers, on the other hand, usually run in the cloud (or at least on a machine that isn’t your own; call that whatever you want). However, the Server CLI is a server that runs on your local machine and thus can be configured to run within the context of a specific directory.

So when I run ls you’re suggesting it start a server? Dude… overkill.

Yeah, that’s totally overkill and not necessary. For starters, ls is not needed within other tooling; most languages can already list the contents of a directory. More importantly, ls is not the kind of tool that would benefit from being a server, and defining which tools would is the tricky part. A tool that makes a good candidate for “server-ification” is one that, for a particular session, operates primarily in a single directory or a fixed set of directories. Some examples:

  • git
  • sbt/gradle/other build tools
  • gem/npm/other dependency-management tools
  • language-analysis tools

In general, these tools are usually used to run multiple commands, possibly manipulating state, within some context. I hope this better defines what types of tools would benefit from having a server-component and what sort of value you’d receive as someone integrating with those tools.

What about libraries? Take libgit2 for example.

That is a really good point, and libraries are super awesome! However, libraries have some disadvantages compared to a server for the purpose of integration.

First, they have to be written in a common language that can be used by everyone; usually this is C. And even though C is the lingua franca of programming languages, it is far from free and usually requires someone to write a binding library. In my opinion, this is a pretty high bar for the average person trying to integrate with some tool. Also, when it comes to writing bindings, there will always be purists who then rewrite the library as a pure implementation in the target language. Rewriting a library N times (once for each language people typically use) is a huge waste of resources and brings its own basket of problems.

Second, libraries have a tendency to expose lower-level semantics. Take our libgit2 example as evidence. This library exposes much of the internals and underlying structures of git and is a superset of what can be accomplished from the command line. This can be fantastic if you are looking to extend the functionality of git, but it’s not the most intuitive if you’re just trying to integrate with existing behaviors.

So… Libraries. They’re awesome in certain contexts, but they’re not the right hammer for all the screws.

Structuring Your Tool

Fine, you’ve done it. I’m kinda-sorta convinced, but how do I even go about this?

If you’re going to build a server for your CLI, and clients integrate with your server, then that’s where the logic should live. Your CLI then just becomes another client. Visually, we can think of it like this:

(created with www.draw.io)

Let’s break down this picture a bit:

The CLI and the server occupy the same process space. This means that anytime you call a command on the CLI, a server is started, used, and shut down. You don’t need to start some secondary server before being able to do your work, which is very important since you don’t want to break the user experience of the command line. Also, since our server is likely working in the context of a fixed set of directories, an always-running server may not make sense unless you’re willing to create one server for each directory you could want to use the tool in.

The library is NOT accessible by the CLI. As you can tell from the picture, and the previous paragraph, this is not strictly enforced. It is, however, a requirement you should follow to ensure that clients integrating with the server do not lack features relative to the CLI. I’m sorry, but you’ll just have to show some self-control here. ;-)

Users may interact with the CLI, the server, or both. This one is kind of obvious as it’s the whole point, but it’s important to define what this looks like. In general, a user using your CLI should look no different than usual:

# Normal looking command-line interactions
$ schema list -v
$ schema up -n2
$ schema rebuild -f

Interacting with the server, on the other hand, will require that the user be able to start your server with some user-supplied options, such as the directory to operate in and the port to bind to:

$ schema --directory=~/schemas/metrics-app --port=8787 >log &
# Starts server and pushes to the background

$ curl localhost:8787/list
$ cat <<EOF | curl localhost:8787/up -XPOST -d@-
{
    "force": true,
    "alters_to_run": 2
}
EOF
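
From an integrating program’s point of view, those curl calls become ordinary HTTP requests. Here’s a hedged sketch of a client helper; the endpoint and field names simply mirror the hypothetical examples above:

```python
import json
import urllib.request

def up_request(host, port, force=True, alters_to_run=2):
    """Build the POST /up request that the curl example sends by hand."""
    payload = json.dumps({"force": force, "alters_to_run": alters_to_run}).encode()
    return urllib.request.Request(
        f"http://{host}:{port}/up",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def run_up(host, port, **kwargs):
    """Send the request and decode the server's JSON reply."""
    with urllib.request.urlopen(up_request(host, port, **kwargs)) as resp:
        return json.loads(resp.read())
```

With published contracts, a helper like this could even be generated rather than written by hand, which is the real payoff of the server approach.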

If your CLI offers some sort of interactive console/session, then you might also consider exposing the same server-configuration options to the user. This can be a useful workflow when creating and testing integrations, as in the example below, which shows two terminal sessions:

# TERM-1
#   List current schema state, apply 2 alters
$ schema console --port=8787
> list
> up -n2

# TERM-2
#   Verify the alters were applied from the
#   console session. Roll back 1 alter.
$ curl localhost:8787/list
$ cat <<EOF | curl localhost:8787/down -XPOST -d@-
{
  "force": false,
  "alters_to_run": 1
}
EOF

# TERM-1
#   Verify rollback was done via console.
> list

This sort of workflow lends itself well to quick iterations and rapid development.

Final Thoughts

When developing CLI tools, it’s important to keep both the use cases and the audience in mind. If you know your tool will be, or already happens to be, integrated into various workflows, then it may be time to reconsider your integration story. Is a CLI alone enough? Do you need to expose functionality as a library for extensibility? Would having a server component ease integration pains? And most importantly, if you expose a CLI, a library, and a server to your users, can you call it the Burger King pattern? :-D

As a closing thought, I think it’s important to point to at least a couple of examples where building a server similar in nature to the one described here has been very successful; they may provide inspiration for your future projects.

  • LangServ — a server protocol that lets editors integrate with program-analysis servers. These language servers operate within the context of a fixed set of directories and represent tooling that would otherwise have shipped as CLIs or libraries.
  • Docker — the Docker CLI talks to a local service running the Docker daemon. While not exactly what I’ve described in this article (only one instance of the Docker service needs to be running at a time), it still gives developers the flexibility of talking to the process directly through the API (see the Go client) or through the CLI.

Have some other, awesome examples? Please share in your responses!