This is Part 1 of 6 in the Let’s Explore ARK Core series which documents the development of the next major release of ARK Core, alongside some tips & tricks on how to get started with contributing and building your next idea today.
In Part 1 of this series, we will focus on the infrastructure improvements that have been implemented in ARK Core 3.0. Those improvements include how the application is bootstrapped, how components are wired up, and how it has become easier to extend, complement and test the system with your own functionality without modifying our default implementations directly and ending up with conflicts that are tiresome to resolve.
ARK Core v3.0 Github Repository
Before we begin, let’s set the premise under which Core 3.0 was started by listing the issues that Core 2.0 had, how those arose and then we’ll look into how Core 3.0 aims to resolve those.
One of those issues was that Core 2.0 relied heavily on process.env variables to expose information such as the configured network and token.
The application instance is the central entry point to ARK Core. It is responsible for loading and verifying configurations, deciding what packages should be registered, bootstrapping the packages and serving as the connection to share state between all packages that developers add to their installation.
Core 2.0 provided an application instance that was difficult to work with because it consisted only of hardcoded entities: the configuration manager was not accessible, the package loader was not accessible, there was no easy way to resolve paths or environment-specific values, and more. All of those factors combined made testing and developing packages unreasonably difficult, as the developer experience (DX) ended up being too tedious.
Tackling all of those issues in their current state would have been difficult, so a complete rework was the cleanest solution. Core 3.0 ships a new application object that has been rewritten from scratch with simplicity, extensibility, and testability in mind.
// Core 2.0
import { app } from "@arkecosystem/core-container";
process.env.CORE_NETWORK // get the name of the network
process.env.CORE_TOKEN // get the name of the token
app.resolve("..."); // resolve a generic value
app.resolvePlugin("..."); // resolve a plugin
app.resolveOptions("..."); // resolve the options of a plugin
// Core 3.0
import { app } from "@arkecosystem/core-kernel";
app.network() // get the name of the network
app.token() // get the name of the token
app.get("..."); // resolve a generic value
As you can see in the above example the usage is less verbose, and you probably also noticed that the resolvePlugin and resolveOptions methods are gone. This change was made to loosen the coupling and give developers more freedom in how they develop their packages, store their configuration and access all of it.
Let's have a look at the code below, which is taken from the @arkecosystem/core-api package.
import { Providers } from "@arkecosystem/core-kernel";
import { Server } from "./server";
export class ServiceProvider extends Providers.ServiceProvider {
public async register(): Promise<void> {
this.app
.bind("api.options")
.toConstantValue(this.config().all());
this.app
.bind<Server>("api")
.to(Server)
.inSingletonScope();
}
public async boot(): Promise<void> {
await this.app.get<Server>("api").start();
}
public async dispose(): Promise<void> {
await this.app.get<Server>("api").stop();
}
}
As you can see in the above code, all control over how things are bound and resolved from the container is in the hands of the package developer rather than Core deciding how your data should be stored. Packages are no longer treated as special entities but rather as providers that can offer any number of services to Core 3.0, as opposed to Core 2.0, which assumed that a single service is provided.
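Any other package can then pull those bindings straight from the application instance. Here is a minimal sketch, reusing the "api" and "api.options" identifiers from the example above (the variable names are just for illustration):
import { app } from "@arkecosystem/core-kernel";
// Read the options that core-api bound as a constant value under "api.options"...
const apiOptions: object = app.get<object>("api.options");
// ...or grab the singleton server instance it bound under "api".
const apiServer = app.get("api");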
Don’t worry about the service providers now, we’ll look at those in part 2 of the series and explore how we can take advantage of them to build flexible packages that enhance the functionality of Core 3.0.
We hope that this newly provided simplicity, extensibility, and testability will encourage more developers to get involved with ARK Core and package development for the ecosystem to greatly enhance what ARK Core is capable of doing.
The container provides the bread and butter needed to build a solid foundation for the infrastructure required to reach the goals ARK Core 3.0 set out to achieve. It allows us to bind values, functions, and classes into a single entity that takes care of storing and managing all interactions with them.
Core 2.0 has been using Awilix as its container, wrapped in a thin layer, since its initial implementation. At the time this worked fine as the requirements were rather low: the codebase was written in JavaScript, which meant there was no concept of interfaces available. That made following the Design by Contract principle rather difficult, as it usually goes hand in hand with the Dependency Inversion principle, which dictates that you should rely on abstractions rather than concrete implementations.
Let's have a look at the Dependency Inversion principle to set the stage for what is coming up next. Take the code below; you might think that the implementation is fine since a car is just a car, so what does it matter how it is implemented?
class Car {
start(): void {}
}
const car: Car = new Car();
Now you have an implementation of a car which you can start, which seems reasonable. Well, the issue you will encounter is that these days there are different types of cars, some run on electricity and some on diesel. With the above implementation, it will become messy to implement engine-specific logic as you will have to make use of if statements to decide what should be done to start the car.
A better approach is to provide an implementation contract that is abstract and makes no assumptions about the implementation, as those are details that shouldn't concern your application when it consumes the car entity. The car should just start, electro or diesel.
import "reflect-metadata";
import { Container, injectable } from "inversify";

interface Car {
    start(): void;
}

@injectable()
class ElectroCar implements Car {
    start(): void {}
}

@injectable()
class DieselCar implements Car {
    start(): void {}
}

// Bind both concrete cars to the abstract Car contract, distinguished by name.
const container: Container = new Container();
container.bind<Car>("Car").to(ElectroCar).whenTargetNamed("electro");
container.bind<Car>("Car").to(DieselCar).whenTargetNamed("diesel");
If we take the above implementation and combine it with the Dependency Inversion principle you will notice that we are no longer coupled to a concrete car implementation but rather to the Car implementation contract, which is then resolved to either the electro or the diesel implementation. The benefit of this is that we don't have to reference specific classes and also don't have to worry about how something is implemented as long as it satisfies the contract we specified.
Core 3.0 replaced Awilix with InversifyJS, a powerful and lightweight inversion of control container for JavaScript & Node.js apps powered by TypeScript.
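To complete the picture, here is how one of those named bindings could be resolved. This is a small sketch using Inversify's getNamed with the "Car" identifier and the names from the example above:
const electroCar: Car = container.getNamed<Car>("Car", "electro");
electroCar.start(); // we only ever depend on the Car contract, never on ElectroCar directly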
Now you might wonder why we decided to replace the container if Awilix was doing the job, fair point. The main reason is that Inversify is developed in and for TypeScript, which means that true Dependency Injection is possible where you bind implementation contracts (interfaces) to concrete implementations. Awilix tries to cater to a JavaScript audience while offering TypeScript support through type definitions, which means you get the benefit of type hinting but not the ability to use interfaces in the way that is possible with Inversify.
Using the new container to its full capabilities is a breeze.
Now that sounds great on paper but you are probably asking yourself what the heck you really gain from this. Let's illustrate the benefits with a few examples from Core 3.0 itself.
// Binding a hapi server instance as a singleton.
// This will be resolved once and then cached to always return the same instance.
import { Server } from "@hapi/hapi";
this.app.bind<Server>("api").to(Server).inSingletonScope();
// Binding a static/constant value
this.app.bind<object>("api.options").toConstantValue(this.config().all());
// Binding a dynamically resolved value
this.app
.bind(Identifiers.CacheService)
.toDynamicValue((context: interfaces.Context) =>
context.container.get<CacheManager>(Identifiers.CacheManager).driver(),
);
// Accessing the container directly
import { app } from "@arkecosystem/core-kernel";
app.ioc // This is the internal instance of the Inversify container
As you can see the capabilities and syntax of the new container are expressive and simple while not giving up any functionality. We think that this simplicity will provide a better developer experience overall and give package developers more freedom and control.
This is only a fraction of what Inversify is capable of, so make sure to take a look at the official Inversify repository and its Wiki for a more in-depth guide into how the container works and what it is capable of.
Core 2.0 is severely lacking in the extensibility department due to the architectural issues outlined earlier in this article. Core 3.0 tries to remedy those issues as much as possible by implementing proven concepts and principles.
The pattern you will see most commonly across Core 3.0 is the Builder Pattern based on drivers in combination with a manager. We’ll have a look at the new log implementation to get an idea of how it works and what benefits it brings with it.
The LogManager is the entity that takes care of managing all interactions with logger instances. It is bound to the container to be accessible by packages and contains only log-service-specific logic. It extends an abstract Manager that receives a type hint of the implementation contract of the logger to ensure type conformity during development.
class LogManager extends Manager<Logger> {
protected async createConsoleDriver(): Promise<Logger> {
return this.app.resolve(ConsoleLogger).make();
}
protected getDefaultDriver(): string {
return "console";
}
}
The ServiceProvider takes care of several things, let us break them down to understand them.
1. The LogManager is bound to the container as a singleton. This means it will only ever be instantiated once to ensure the same instance is shared across packages.
2. The LogManager is booted, which takes care of instantiating the default logger, in our case the console logger.
3. The driver method that is responsible for resolving the configured logger is bound to the container. It is bound as a dynamic value to ensure that every time the method is called we resolve the configured logger, as that logger could be changed at any time by a package.
class ServiceProvider extends BaseServiceProvider {
public async register(): Promise<void> {
this.app
.bind<LogManager>(Identifiers.LogManager)
.to(LogManager)
.inSingletonScope();
await this.app.get<LogManager>(Identifiers.LogManager).boot();
this.app
.bind(Identifiers.LogService)
.toDynamicValue((context: interfaces.Context) =>
context.container.get<LogManager>(Identifiers.LogManager).driver(),
);
}
}
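With this in place, any package can resolve the currently configured logger through the LogService binding instead of referencing a concrete logger class. A minimal sketch, using the Identifiers and Logger contract from the snippets above and assuming the contract exposes the usual level methods such as info:
import { app } from "@arkecosystem/core-kernel";
// Resolves whatever driver the LogManager currently reports via driver().
const logger = app.get<Logger>(Identifiers.LogService);
logger.info("Hello from my package");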
Custom Logger Implementation
Now that you've seen how the LogManager is created and registered within the application, we'll take a look at how to register your custom implementation through a package. Once again we'll break it down to understand what is happening step-by-step.
1. The LogManager is retrieved from the container to make use of it the same way that Core does internally.
2. The extend method is called on the LogManager with a name and a callback that is responsible for the creation of the logger instance.
3. The setDefaultDriver method is called on the LogManager to let the application know that the pino logger should be returned when the LogManager.driver() method is called. If we skipped this step we would have to manually call LogManager.driver("pino") to get an instance of the Pino logger.
class ServiceProvider extends Providers.ServiceProvider {
public async register(): Promise<void> {
const logManager: LogManager = this.app.get<LogManager>(Identifiers.LogManager);
await logManager.extend("pino", async () => new PinoLogger().make());
logManager.setDefaultDriver("pino");
}
}
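For completeness, here is a rough idea of what such a PinoLogger driver could look like. This is a hypothetical sketch only: the actual Logger contract in core-kernel dictates the exact methods a driver must implement, so treat the method names below as placeholders.
import pino from "pino";

class PinoLogger {
    private logger!: pino.Logger;

    // make() mirrors the factory call used in the service provider above.
    public async make(): Promise<PinoLogger> {
        this.logger = pino({ level: "debug" });
        return this;
    }

    // Placeholder level methods; the real contract defines the full set.
    public info(message: string): void {
        this.logger.info(message);
    }

    public error(message: string): void {
        this.logger.error(message);
    }
}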
As you can see, it has become a lot easier to modify and extend Core in a more controlled and logical manner. Things are clearly named and structured, and by applying the same patterns consistently across Core we provide a developer experience that is more predictable and enjoyable.
Extensibility is worth nothing without configurability. If you can't configure packages to your liking, or worse, don't receive any feedback when something is configured wrong and the package still runs, then all the previous work was wasted.
Core 3.0 internally uses HapiJS Joi, which has recently received a major rework and performance improvements in its 17th major version. Joi's focus from the beginning has been on providing a joyful developer experience, which perfectly aligns with our goals, and it is already used across our codebase for various integrations that rely on the hapi server.
class ServiceProvider extends BaseServiceProvider {
public configDefaults(): object {
return { username: "johndoe" };
}
public configSchema(): object {
return Joi.object().keys({
username: Joi.string().alphanum().min(3).max(30).required(),
});
}
}
Let us break down what is happening here and how it is handled internally to give feedback: configDefaults returns the default configuration of the package, and configSchema returns a Joi schema that the final configuration is validated against.
Now if we registered our package with Core it would start as usual, as the default configuration in the above example is valid, but there are two possible outcomes on failure.
The benefits of this new validation should be clear by now: better user feedback that ensures configuration can't result in unwanted or faulty behaviour, and all data is automatically cast to its respective type (e.g. "1" becomes 1).
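As a quick illustration of that casting and failure behaviour, here is a standalone Joi sketch; the port key is added purely for demonstration and is not part of the schema above.
import Joi from "@hapi/joi"; // published as "joi" in more recent releases

const schema = Joi.object().keys({
    username: Joi.string().alphanum().min(3).max(30).required(),
    port: Joi.number(), // hypothetical extra key to demonstrate casting
});

// "1" is cast to the number 1 and validation passes.
console.log(schema.validate({ username: "johndoe", port: "1" }).value);

// A username that violates the schema produces a descriptive error instead.
console.log(schema.validate({ username: "x!" }).error?.message);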
Think back a bit: in the beginning we set the premise that Core had a tight coupling of all internals, which results in a brittle architecture that is difficult to test. This generally results in developers writing fewer tests, writing brittle tests just to get it over with or, worst of all, not writing any tests at all, which reduces confidence in the implementations that are added or modified.
Core 3.0 aims to make testing simpler and more enjoyable. A major step towards this has been the decoupling of the application object and the container. The container is now passed to the application object when it is instantiated; gone are the days of excessive mocking to create complete fake containers with certain values.
Now creating a custom application instance is just a matter of passing in a real container instance that contains your desired bindings, rather than having to spend hours creating the perfect mocks, which only end up giving you false confidence as any change to Core would go unnoticed and leave you wondering why feature X is no longer working even though your test suite is green and letting you know there are no issues.
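In a test, that could look roughly like the following. This is a minimal sketch under the assumption that the Application class is exported from @arkecosystem/core-kernel and accepts a container on construction, as described above; the binding identifier is just an example.
import { Container } from "inversify";
import { Application } from "@arkecosystem/core-kernel";

// A fresh, real container per test instead of a hand-rolled mock.
const container = new Container();
const app = new Application(container);

// Bind only what the code under test actually needs.
app.bind("api.options").toConstantValue({ enabled: true });

// ...now exercise the code under test against this app instance.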
This concludes Part 1 of the Let’s Explore ARK Core series. In the next part, we will delve into how the application is bootstrapped, configured and started in Core 3.0.
That's great news! If you want to help out, our GitHub repositories are wide open, but that is not all: we also have special Monthly Development GitHub Bounties ongoing where you can earn money for every valid and merged Pull Request.
To learn more about the program please read our Bounty Program Guidelines blog post.
Previously published at https://blog.ark.io/lets-explore-ark-core-v3-part-1-infrastructure-5c8ba13c9c42