We, realMethods, are a small System Integrator. As a small SI with limited resources, we needed an advantage to compete for large engagements. We needed a lever we could pull that the bigger SIs could not. Before plowing headfirst into developing something, it was important to reflect on what we actually needed, what we had tried and experienced, what worked and did not work, and what mattered today that would still be relevant tomorrow.
We came up with 9 statements.
1. Simple To Use, Then Get Out Of The Way
If a tool can serve a single purpose with minimal input and very useful output, it has me as a loyal user. Good tool developers recognize they don’t own the entire tool belt; they find the place where their tool fits a well-defined task. And once used, they know how to get out of the way so work can continue. They add value to the work continuum without needing to be the star of the show.
It must fit a current need and do so without disrupting a Developer’s normal course of work.
2. Overcome “Generator Bias”
Some of the team initially had no interest in heading down the “Generator” path. Generators have a fundamentally shaky reputation: most come with a lot of hype and promise, followed by a letdown.
Although automation is a pillar of DevOps, automation of the required code still has its skeptics. More interesting is how “everything as code” is widely accepted yet using code to create code is not an integral DevOps phase.
We decided to use logic over prior bias. If we could generate entire projects, we could do more with fewer resources for lower cost in less time, leading to us winning more client engagements.
3. We need contextual projects generated, not just applications or code
A bit of common understanding
Code Generator: creates the beginning of an application, with no promise of an entire one.
Application Generator: creates code with the expectation of minimal functionality. All the pieces can likely be built, but the result still lacks the configuration required of an operational environment.
Project Generator: creates an application along with all the files to build, test, archive, package, containerize, and finally deploy it.
Model-Driven Project Generator: creates everything by applying an entity domain model wherever appropriate among the generated assets. It is the closest a first commit of a project can come to its final commit.
It must generate all business contextual code and files such that an operational CI/CD pipeline can flow the resulting application from build through deployment.
4. Have To Not Be Able To Live Without It
We need to build something that, once used, we would never want to go back to life without. This is true for us and our clients, not only because it offers a clear competitive advantage, but because it provides early project lift we have become professionally and psychologically dependent on. We no longer have to worry about writing the low-level plumbing.
We don’t want to. Just as we have become dependent on the leverage our tool chain offers, having to manually reproduce the tens to hundreds of thousands of lines of code it generates would be demoralizing.
It should provide so much value that, once used, a user will never want to start a DevOps project without it.
5. Can Never Predict What A Client Needs
We start new projects with a client, and until told, we have no idea what their technical requirements will be.
It has to be fully customizable to accommodate a project’s requirements.
6. Support Declarative DevOps
There is “as-code” for containers, orchestration, infrastructure, configuration, and pipelines, so why not a project?
What we built had to follow suit. If a project’s technical requirements can be described, then those requirements can be declared and codified in a YAML file. The tool had to hide the messy details of the "how" in order to create the "what". Just like other "as-code" implementations, it should be easy to use.
Simply stated, Project-as-Code allows us to make the following declarative statement to turn requirements into a running CI pipeline with a functional application flowing through it:
"I have a business model I would like to apply to an Angular7 tech stack. I want to store data using MongoDB. I would like the resulting project files to be committed to my GitHub repo, the source files to be built and tested using CircleCI, and the resulting application pushed as a Docker image to my container repository, then finally deployed to a designated Kubernetes cluster on GCP."
The resulting YAML looks something like this:
techstack:
  identifier: Angular7Stack
model:
  identifier: Some business model file location
options:
  application:
    name: MyAngular7App
    version: 0.0.1
  cicd:
    platform: circleci
  git:
    username: gitUId
    password: gitPwd
    repository: gitRepo
    tag: anyTag
    host: anyGitHost
  docker:
    userName: dockerUId
    password: dockerPwd
    orgName: dockerOrg
    repo: dockerRepo
    tag: dockerTag
  kubernetes:
    host: https://xxx.xxx.xxx.xxx
    hostTarget: google
    username: k8UId
    password: k8Pwd
  artifact-repo:
    type: jFrog
    userName: jFrogUId
    password: jFrogPwd
    repoUrl: http://xxx.xxx.xxx.xxx:8081/repository/npm-public
  mongodb:
    serverAddress: localhost:27017
    databaseName: angularDemoDB
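Once parsed (for example with PyYAML's yaml.safe_load), a declaration like the one above is just nested mappings, and a generator can fail fast before doing any work. Here is a minimal sketch in Python; the required-section rule and all names are hypothetical, not the tool's actual validation logic:

```python
# Sketch: fail-fast validation of a parsed Project-as-Code declaration.
# Section names mirror the YAML above; the rules themselves are hypothetical.

REQUIRED_SECTIONS = ("techstack", "model", "options")

def validate(decl: dict) -> dict:
    """Raise early if a required top-level section is absent."""
    missing = [key for key in REQUIRED_SECTIONS if key not in decl]
    if missing:
        raise ValueError(f"declaration is missing sections: {missing}")
    return decl

# The YAML above, as it looks after parsing into Python mappings:
decl = validate({
    "techstack": {"identifier": "Angular7Stack"},
    "model": {"identifier": "path/to/business-model"},
    "options": {
        "application": {"name": "MyAngular7App", "version": "0.0.1"},
        "cicd": {"platform": "circleci"},
    },
})
print(decl["techstack"]["identifier"])  # Angular7Stack
```

Rejecting a malformed declaration up front keeps the messy "how" hidden while still honoring the author's declared "what".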
The tool must hide the heavy lifting of project generation by accepting a single configuration file containing the project declarations, then executing the intent of the file’s author.
7. Being Small and Lazy Is A Good Thing
A small SI needs to be super efficient, doing as little as possible while still fulfilling all required tasks. Each individual task is evaluated as a generation candidate; if it isn’t one, we look to discover whether it (or a variant) has already been built. If it has, we either reuse or extend it. That allows a small team to create output at the scale and pace of much larger teams.
Generate all you can, especially all the things nobody wants to write but are needed by the project.
8. Make Accommodations For Smarter People
By design we chose to capture expertise through the templatizing of a tech stack (we call it a tech stack package). This package contains everything needed to represent the intent of its author, along with other information to interact with the tool during generation time.
The tool makes no assumptions about the purpose of a tech stack package, so long as the package is well formed according to documented standards. This allows expertise to be consumed indiscriminately.
Although the focus of the tool is on model-driven DevOps projects, a fortunate consequence of our design is that it can generate any type of application, project, or other package that relies on file-based outcomes.
There are too many permutations of languages and technologies to think we could capture them all at once or capture any perfectly.
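To make the tech stack package idea concrete, here is a minimal sketch of template-driven output using only Python's standard library. The template text and the entity names are hypothetical; a real tech stack package would use a richer template engine:

```python
# Minimal sketch of template-driven generation. A tech stack package
# author captures expertise as templates; the tool fills them in at
# generation time. Names here are hypothetical.
from string import Template

# One template from a hypothetical tech stack package:
service_tmpl = Template(
    "export class ${entity}Service {\n"
    "  // CRUD operations for ${entity}\n"
    "}\n"
)

# Generation-time context supplied by the tool, one output per entity:
for entity in ["Customer", "Order"]:
    print(service_tmpl.substitute(entity=entity))
```

Because the tool only resolves placeholders, the package author's intent is preserved whatever language or framework the template targets.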
9. A Client Entity Model and Our Meta Model Make The Difference
Two projects might use the same programming language with similar technical requirements but differ in their entity models. We support standard model definition types to allow the inclusion of business context into the generation. Without this, we would only be able to generate a glorified “Hello World” app.
Importantly, we developed a meta-model with support for factory plug-ins for each model definition type. A plug-in uses the tool’s meta-model API to help translate the input model into the meta-model structure. In turn, a tech stack package consumes the meta-model through the same API. A template within a tech stack package makes output decisions based on, among other things, the content of the meta-model.
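The plug-in flow can be sketched as follows. All class and field names here are hypothetical illustrations of the pattern, not the tool's actual API; only the translate-to-meta-model flow mirrors the text:

```python
# Sketch: one factory plug-in per model definition type translates an
# input model into a common meta-model that templates then consume.
import json
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MetaEntity:
    """One entity in the tool-neutral meta-model."""
    name: str
    fields: Dict[str, str] = field(default_factory=dict)  # field -> type

@dataclass
class MetaModel:
    entities: List[MetaEntity] = field(default_factory=list)

class ModelPlugin(ABC):
    """One plug-in per supported model definition type."""
    @abstractmethod
    def translate(self, source: str) -> MetaModel: ...

class JsonEntityPlugin(ModelPlugin):
    """Hypothetical plug-in for a simple JSON entity format."""
    def translate(self, source: str) -> MetaModel:
        meta = MetaModel()
        for name, fields in json.loads(source).items():
            meta.entities.append(MetaEntity(name, dict(fields)))
        return meta

# A tech stack template later consumes the meta-model via the same API:
meta = JsonEntityPlugin().translate('{"Customer": {"name": "string"}}')
print([e.name for e in meta.entities])  # ['Customer']
```

Keeping the meta-model as the single contract is what lets new model definition types be added without touching any tech stack package.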
The tool must support an extensible way of including business context into project generation.
Conclusion
We built a tool that we and our clients use to generate DevOps projects with minimal upfront effort. The tens to hundreds of thousands of lines of code we need suddenly appear. With Project-as-Code, there is now something to automate the automation behind DevOps. It serves as a catalyst for DevOps, lowering the hurdles to get from idea to container faster.
Project-as-Code is a real thing because it makes sense. It fits nicely between discovering the intent of a project and realizing its first commit. It will take time before project generation becomes a standard phase in the DevOps life-cycle.
We believe it eventually will, and for us and some very forward-thinking companies, it already has.