This blog post is part of a series where I share our migration from monolithic applications (each with their own source repository) deployed on AWS to a distributed services architecture (with all source code hosted in a monorepo) deployed on Google Cloud Platform.
I see consistency as an integral part of successfully delivering software. Too often have I joined or worked with software teams that had little to no consistency.
It applies to all aspects of development: code style, comments, tools, onboarding, and the creation of new services. It also extends to product management, defining and tracking tasks, and company processes in general.
Let’s take the “creation of new packages and services” case of my current project, where we migrate from monolithic applications to smaller, independent, distributed services. Here’s how we created the first three services:
Can you spot the consistency? Correct: copy & paste looks pretty consistent. But what happens when the people who spent a week building service 1 move on? Who knows what needs to be tweaked to create service 8? Imagine the nightmare when a fundamental bug occurs and impacts all services… 👻.
Now the question is, how do we make this more consistent? I asked a few friends and many replied: Document the process.
As someone who’s spent more than half of his life in the software industry, I realized the simplest way to stay sane in this fast-moving environment is to write scripts that do the work for me.
Documentation is great, as long as it is accurate. There are situations where documentation is necessary, but for the use case we discuss in this blog post (creating new packages and services), documentation is the wrong approach.
Every new package or service has a certain shape that’s fairly similar across all the others, just as every house has some sort of foundation, some walls and windows, and a roof.
Imagine the following procedure to create 3 services:
Now let’s not only imagine the above procedure, let’s see how we achieved exactly that in our project.
To be transparent, we obviously had something like service 0, where we hand-crafted everything, tested the service, deployed it, tweaked it, etc. However, we knew we wanted to automate this process, so we paid close attention to that from the very beginning.
We currently have two generators: one for packages and one for services.
All template files live in a `_templates` folder. The directory structure is:
```
generators
├── _templates
│   ├── packages
│   │   ├── README.md
│   │   ├── iso
│   │   ├── svr
│   │   └── web
│   └── services
│       ├── README.md
│       ├── svr
│       └── web
└── index.js
```
The `README.md` template files exist once for packages and once for services. This ensures each package (and each service) follows the same structure. A `README.md` file contains the information anyone needs to contribute to a package or service.
Further down, the generators are defined. The generator entrypoint and the package generator look something like this:
The service generators are a bit more complex since they also take care of some additional service setup, such as creating a RuntimeConfig resource in GCP, creating a channel in Slack, adding a new component in Jira, etc.
The generator can be nicely bundled into an NPM script in the repository’s root `package.json` like so:

```json
"generate": "plop --plopfile ./scripts/generators/index.js"
```
All it takes now to generate a new package is `yarn generate`. An interactive CLI then guides the developers through a few questions. A nice-to-have feature is that you can pass the generator name as an argument, e.g. `yarn generate service` brings you right to the service-related questions.