Complicated patterns aren’t always that complicated. Usually it’s the simple ones that bite you.

by Patrick Lee Scott, December 14th, 2018

Too Long; Didn't Read

Complex code is never a good sign. To solve problems with software, we model a useful subset of the world; the world is a big place, and we cannot model all of it, so we must narrow the context of what we are modeling to only the involved entities. Highly cohesive concepts may need to be updated within a single transaction or kept immediately consistent, while unrelated concepts should stay loosely coupled. Either way, each concept should be explicitly defined and easy to understand.


Staring at the maze of interconnected passageways of the microservice system, I immediately recognized the problems.

I was sitting with a new client doing a review of their system. This was the first time they were showing me the code which was described as “very interesting” and “definitely one of the most complex I’ve worked on!” with excitement.

I shuddered a bit.

I thought about my ironically misquoted t-shirt with a picture of Albert Einstein.

Any intelligent fool can make things bigger, more complex... It takes a touch of genius — and a lot of courage — to move in the opposite direction. — E. F. Schumacher

Complexity is never a good sign.

First things first, I wanted to understand the business, and more specifically where that was represented in the code.

This is where most people ask: “what are the nouns?”

My first question: “what are the events?”

Events occur in our world every day in fascinating quantity. The man crossed the street as the car blew through a light that had turned red moments before.

So many that the improbable becomes probable, because billions of factors are constantly changing and affecting other actors and entities. When people exclaim “what a coincidence!” I think about how coincidences are statistically likely given the sheer magnitude of events.

We consume the world in response to events. We read news about what happened in response to another actor’s actions. We run away in response to a threat. We feed ourselves in response to feeling hungry. Etc.

In addition to consuming the events that occur around us, we also affect, and effect, the world and others. You have your specialty, the things that you make happen; for everything else, you make requests to others to obtain the result you would like.

Through our own actions or the effects of our commands, we are in a sense, changing the state of the world.

If you know everything that has happened in the world, you can reconstruct the current state of the world.
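That idea can be sketched in a few lines of plain JavaScript: the current state is just a left fold (a reduce) over everything that has happened. The event names and shapes here are made up purely for illustration.

```javascript
// A hypothetical stream of everything that happened to a bank account.
const events = [
  { type: 'accountOpened', balance: 0 },
  { type: 'deposited', amount: 100 },
  { type: 'withdrawn', amount: 30 },
]

// Apply one event to the current state, returning the next state.
const apply = (state, event) => {
  switch (event.type) {
    case 'accountOpened': return { balance: event.balance }
    case 'deposited': return { balance: state.balance + event.amount }
    case 'withdrawn': return { balance: state.balance - event.amount }
    default: return state
  }
}

// Replaying the full history reconstructs the current state.
const currentState = events.reduce(apply, undefined)
// currentState is { balance: 70 }
```

Nothing in the log is ever changed or deleted; the state is always derivable from it.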

In building software to automate problems of the world, we need to model a useful subset of the world. We create a set of abstractions that are useful for solving the tasks at hand. You always hear about how the nouns are important, but I want to stress that, alongside the nouns, the commands and the events happening within our context are very important and explicit parts of that story.

When modeling our small section of our world, we should always strive to make concepts explicitly defined and easy to understand.

Pro Tip: This is also a helpful tip in refactoring. If you find a concept is talked about and implied, but is not explicitly defined in the code, it is a sign of something wrong. A “code smell”. Make the implicit, explicit.

The nouns are connected by the events and their place in time, the only immutable construct of our reality.

The world is a big place. We cannot model the whole world.

As with any complex problem domain in Computer Science, to make it more solvable we can introduce constraints. In modeling terms, we must narrow the context or focus of what we are modeling to only the involved entities. Really, even narrower still, limiting the properties of those entities to only the ones that are actually useful in solving a specific problem.

We will sometimes know easily and obviously from conversation what some of the important entities of a domain are, and we can start breaking those into smaller subsystems sometimes referred to as “bounded contexts”.

For example, if you were building an e-commerce platform, you’d probably want a Product Model that describes the properties of a product as they pertain to being displayed in various mediums, such as the internet or maybe a print catalog; an Inventory Model for tracking availability; an Order Ledger for tracking and managing customer orders; an e-commerce storefront; and maybe some marketing funnels.

Each is its own context.

Within those contexts, there are some more subtleties from a modeling perspective due to concepts known as “coupling” and “cohesion”.

Cohesion is a measure of how closely related concepts are intertwined. High cohesion means the entities belong tightly together. This means they may need to be referentially complete at all times: updated within a single transaction, or kept immediately consistent.

When things are very unrelated, they should be loosely coupled.

Back to the e-commerce store example – in the inventory system, it’s necessary to know how many of which items are available. The details of each product however are not necessary to satisfy the requirements of the inventory context/system.

What events are associated with an inventory? Here are a few: `inventory.product.listed`, `inventory.product.quantity.decreased`, `inventory.product.outOfStock`.

The first thing you’ll notice is the clarity of the language. It’s the same language that you use in discussions about the business.

Pro Tip: Be careful not to accidentally force this language on others. As engineers we often have to name things, and the wrong name can have profound effects. More on this coming up.

Generally, with each event comes a payload: meta information about the event, such as the id of the product in some of the above examples. It answers some questions, for instance which product, and also raises some: how does a product become listed? How does its inventory decrease?
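To make that concrete, here is what such an event and its payload might look like as a plain object. The field names are hypothetical, not from a real schema.

```javascript
// A hypothetical event with its payload of meta information.
const event = {
  name: 'inventory.product.quantity.decreased',
  payload: {
    productId: 'sku-123',                // answers: which product?
    quantity: 2,                         // answers: by how much?
    occurredAt: '2018-12-14T10:00:00Z',  // answers: when?
  },
}
```

The payload answers “which product?”; the remaining questions are answered by tracing the commands that produced the event.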

We can trace these immutable facts of events that have occurred within the domain back to the actors that executed the commands, and back again to where and in what circumstances that command is issued.

We can also think, and search, in the opposite direction: when a product is out of stock, what systems are affected by that? The product description? Nope. That means there is low cohesion in that instance, and it calls for loose coupling.

We can work outward from what we know to understand how entities are affected by events, and how those events came to be.

An event communicates a lot about a domain in a very small space. It helps to define the scope of a domain, the size of a context, and levels of cohesion and coupling.

Following the trail of events and commands will lead us across boundaries to define mental models and see where patterns are followed and rules are broken. We can identify pieces that are improperly coupled or improperly cohesive and begin to evaluate the complexity.

Back to the code review…

What I was hoping to see in this review was explicitly defined commands and events.

Personally, I like a file for every event handler or command handler. It strongly communicates what each service does without needing to look at the code.

When you can navigate the codebase and find an entry point to a feature by searching for files named after what happened or what should happen, you can navigate quite effectively, as the concepts are clearly organized, defined, and communicated.

Unfortunately it’s not what I found. To be fair, it usually isn’t.

Let’s just say it was more like a blend of CRUD like services shouting across the room at each other, kinda like the movie depictions of the New York Stock Exchange in the 90s.

The complexity had gotten out of hand.

“Have you heard of the patterns called CQRS and Event Sourcing?” I asked.

“Yes” the client replied. “We talked about that early on but decided to keep it simple and just use REST instead.”

… “How’d that work out?”

HINT: Not well.

In the effort to avoid complexity, much more complexity existed than if the problem were approached with patterns and discipline.

It’s a trap that I see really great engineers fall into all of the time, and I think it’s partially a failing of language, and partially a fallacy of complexity.

First —the failing of language — the project was described as an MVP.

This was 8 months into intensive development with a large team.

It was by no means an MVP.

An MVP is the smallest possible thing you can do to prove a hypothesis. To developers it communicates “quick and dirty”.

I made this mistake pretty epically several years back when I started my first company. I spent several months building an epic “MVP” with custom CSS animations and realtime updates that could be pushed out to any number of our tablets, which were connected in hotel rooms to offer guests cool experiences around the city. Turns out that while people may have wanted it, that didn’t really matter, because hotels didn’t.

Now, after a Lean Startup Machine, a Lean Startup Conference, and an awesome accelerator led by Lean Analytics author Ben Yoskovitz in collaboration with Anheuser-Busch InBev (where I was the CTO of a machine learning startup that raised a follow-on round and was merged into ABI’s e-commerce division), I can confidently tell you that is not what an MVP is.

In one of the Lean Startup Machine experiments, for example, I built a single-page app with a form that collected email addresses to see if people would be interested in sharing a cab (before Uber Pool), because the Upper East Side was damn far away! It took me about 30 minutes to make. That’s an MVP. A complicated one, even!

It was enough to convince a stranger to ride in a cab with me. It took a specific hypothesis and proved or disproved it.

We were told by the judges that “CabPool” was not a great idea, because we only made $7 and most people were just creeped out about sharing a ride with a stranger. In retrospect, even if that were true (which it obviously isn’t, because of micro-niches), the small percentage of people who aren’t creeped out may be a big enough market anyway! It all depends on your goals.

Anyway, even an email could be an MVP. It just needs to prove a single hypothesis.

Which brings me to my point: Language is very important.

Without the proper language, you don’t have the proper model.

Which leads to the second major issue I found in the code — language wasn’t that important.


Models 👏 need 👏 to 👏 be 👏 clearly 👏 defined 👏.

That means not mixed into a REST API.

But what do you do when you are building an MVP? Might as well throw it into a REST API!

The purpose of an API is to be an interface, not a model. Also, at most, there should be one API service per context.

If every one of your services is a REST service you might as well just open up your computer and pour some spaghetti inside, because in 8 months it’s gonna seem so difficult that you might as well have a broken PC.

People in the front-end world know the front end is complex. Think of any front end dev you know and ask “is the front-end complex” and they’ll go on a rant about flux and unidirectional data-flow and functional components.

When it comes to the backend, Flux-style thinking and unidirectional workflows go out the window.

Pro Tip: Event sourcing is basically Redux.
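To see why, compare the shapes: a Redux-style reducer replaying a list of actions is doing exactly what rehydrating an entity from its events does. This is a dependency-free sketch with made-up action names, not real Redux.

```javascript
// Reducer: (state, action) -> next state, just like applying an event.
const reducer = (state = { products: [] }, action) => {
  switch (action.type) {
    case 'PRODUCT_CATALOGED':
      return { ...state, products: [...state.products, action.product] }
    default:
      return state
  }
}

// The action log plays the role of the event stream.
const actions = [
  { type: 'PRODUCT_CATALOGED', product: { id: 1 } },
  { type: 'PRODUCT_CATALOGED', product: { id: 2 } },
]

// Replaying the log rebuilds the state, exactly like rehydration.
const state = actions.reduce(reducer, undefined)
// state.products holds both products
```

Actions are events, the reducer is your apply logic, and replaying the log is rehydration.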

Front-end people: You know this stuff. You know what life was like before Flux. Do you want that for your backend? Do you?

And back-end people: Even your front-end devs know this stuff! C’mon! 😜

Now what kind of article would this be if all I did was rant and not give an example of how simple this can be!

Let’s start to model an Inventory, because it’s a pretty simple system at its core, but complex enough that it could be part of a real project.

First — a POJO — which stands for “Plain Old Java(script) Object”:

```javascript
export default class Inventory {
  constructor() {
    this.products = []
  }

  init(id) {
    this.id = id
  }

  catalogProduct(product) {
    this.products.push(product)
  }
}
```

With me so far, right?

Now let’s add some magic sauce! 🧙‍♂️

```javascript
import { SourcedEntity } from 'sourced'

export default class Inventory extends SourcedEntity {
  constructor(snapshot, events) {
    super()
    this.products = []

    this.rehydrate(snapshot, events)
  }

  init(id) {
    this.id = id
    this.digest('init', id)
  }

  catalogProduct(product) {
    this.products.push(product)
    this.digest('catalogProduct', product) // same as command name
  }
}
```

And you’re event sourcing!

Let’s break it down.

First, we extend our class from SourcedEntity, and therefore need to call super(). It’s important to be able to rehydrate our instance with previous events. This means to recreate the latest state by replaying the events. A snapshot is an optimization, as it’s used as a starting point so you don’t always need to go all the way back to zero.

Last, after we execute our command, we call a function named digest. A command comes into the model and is digested.

This is an area where I’ve seen people get tripped up mentally. It almost appears as if we are “command sourcing”, but I want you to think of it this way: the command has been digested. Every command gets digested.

What this means under the hood is that a property of your Entity called newEvents has a new object containing the name of the command that you just digested and the data associated with it.
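A stripped-down sketch of that mechanism, with none of the real library’s versioning or persistence concerns, might look like this:

```javascript
// Simplified stand-in for SourcedEntity: digest just records the command.
class Entity {
  constructor() {
    this.newEvents = []
  }

  digest(method, data) {
    this.newEvents.push({ method, data })
  }
}

class Inventory extends Entity {
  constructor() {
    super()
    this.products = []
  }

  catalogProduct(product) {
    this.products.push(product)
    this.digest('catalogProduct', product)
  }
}

const inventory = new Inventory()
inventory.catalogProduct({ id: 1 })
// inventory.newEvents is now [{ method: 'catalogProduct', data: { id: 1 } }]
```

The state changes immediately, and newEvents accumulates what happened, ready to be persisted.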

Before we move on to persistence, there is one more important piece. After we digest a command, we may choose to emit an event about what has occurred so other parts of our service can decide to do something with that information.

```javascript
import { SourcedEntity } from 'sourced'

export default class Inventory extends SourcedEntity {
  constructor(snapshot, events) {
    super()
    this.products = []

    this.rehydrate(snapshot, events)
  }

  init(id) {
    this.id = id
    this.digest('init', id)
    this.emit('inventory.initialized')
  }

  catalogProduct(product) {
    this.products.push(product)
    this.digest('catalogProduct', product) // same as command name
    this.emit('inventory.product.cataloged')
  }
}
```

However, this gets a little bit tricky, so let’s slow down for a second.

What happens if the event is emitted before the events are fully committed and saved to a database?

Our system would start off like a Rube Goldberg machine:

Dog Rube Goldberg Machine (Giphy)

As you can imagine, we don’t want that to happen before we’re ready.

It’s a simple fix: we use a special version of emit called enqueue instead.

Let’s take a look at the updated functions.

```javascript
init(id) {
  this.id = id
  this.digest('init', id)
  this.enqueue('inventory.initialized')
}

catalogProduct(product) {
  this.products.push(product)
  this.digest('catalogProduct', product) // same as command name
  this.enqueue('inventory.product.cataloged')
}
```

Now the event will not be emitted until it is successfully committed to our repository!

“What repository?” you may be wondering.

Glad you asked.

As you may have noticed, our model is nice and simple and everything… but… doesn’t it need persistence?

Yes. Yes it does.

That is what we will use the “Repository Pattern” for. The Repository Pattern will allow us to keep our models nice and simple POJOs by moving the issue of persistence outside of the model.

Sourced is built with this in mind, and provides a MongoDB implementation for doing so.

Before we create the repository, though, let’s test our model using Jest, because testing is important!


```javascript
import Inventory from 'models/Inventory.mjs'

describe('Inventory', () => {
  it('has a quick test for this article', () => {
    let inventory = new Inventory()
    let id = 'test-store-1'
    inventory.init(id)
    expect(inventory.id).toEqual(id)

    let product = { id: 1, quantity: 100 }
    inventory.catalogProduct(product)
    expect(inventory.products).toContain(product)
  })
})
```

That should be pretty close to 100% coverage for our model. Tell me again how getting 100% coverage is hard, and I’ll tell you to write simpler code!

Alright, let’s create that repository in a new file:


```javascript
import SourcedRepoMongo from 'sourced-repo-mongo'
import Inventory from '../models/Inventory.mjs'

const { Repository } = SourcedRepoMongo

export const inventoryRepository = new Repository(Inventory)
```

Before we move on, there are some additional complexities that can be addressed by using this pattern.

For example, imagine our service receives events in bulk, and we want to process them as quickly as possible, but all events for a certain instance need to be processed in order; otherwise the results would be incorrect.

Also, I want to use async / await instead of the repository’s callback-style implementation, so let’s knock that out with Bluebird’s promisifyAll while we are at it.

```javascript
import SourcedRepoMongo from 'sourced-repo-mongo'
import Queued from 'sourced-queued-repo'
import Bluebird from 'bluebird'
import Inventory from '../models/Inventory.mjs'

const { Repository } = SourcedRepoMongo

const repository = new Repository(Inventory)
const queuedRepository = Queued(repository)

export const inventoryRepository = Bluebird.promisifyAll(queuedRepository)
```

Now events will still process as quickly as possible; however, the repository will place a lock on the resource with the given id when it is retrieved from the database, and remove it when the change is complete and committed, ensuring that events for each instance are processed in order.

Let’s take a look at using our inventory. There are many ways to get the event to the process, and this article is already pretty long, so let’s just focus on what to do once you receive the command, not how you will receive it. We will implement a function, listen, which will be called when a command is received.


```javascript
import Inventory from '../models/Inventory'
import { inventoryRepository } from '../repos/inventoryRepository.mjs'

// This would be a good place for validation
export const listen = async function (data) {
  const { id } = data
  let inventory = new Inventory()
  inventory.init(id)

  try {
    await inventoryRepository.commitAsync(inventory)
  } catch (error) {
    throw new Error(`Error while committing repository - ${error.message}`)
  }
}
```

When the commit is successful, all enqueued events will now fire.

This time, let’s retrieve and modify an existing inventory.


```javascript
import Inventory from '../models/Inventory'
import { inventoryRepository } from '../repos/inventoryRepository.mjs'

export const listen = async function ({ inventoryId, product }) {
  let inventory

  try {
    inventory = await inventoryRepository.getAsync(inventoryId)
  } catch (error) {
    throw new Error('Error while getting data from repository')
  }

  if (!inventory) {
    throw new Error('Inventory does not exist - cannot add product. Initialize inventory first, or ensure you are using the correct id')
  }

  inventory.catalogProduct(product)

  try {
    await inventoryRepository.commitAsync(inventory)
  } catch (error) {
    throw new Error(`Error while committing repository - ${error.message}`)
  }
}
```

And done!


“Complex patterns” sometimes have a more notorious reputation than they deserve. In this instance, avoiding them led to much more complicated solutions as the “MVP” grew beyond its original scope.

I hope you’ve learned how you can use Plain Old JavaScript Objects with a couple of design patterns, Event Sourcing and the Repository Pattern, to greatly simplify the most important part of your codebase, your business logic, and why patterns that are sometimes called “complicated” may not be that complicated after all.

The above code is a solid base for a microservice; you’ll just need to figure out how to publish messages between services, which can be done in all sorts of ways, and probably add some validation. I personally recommend servicebus for the communication bit.

For a more complete sourced example, check out this test suite from sourced-repo-mongo.

For an example of using sourced with servicebus, check out my example repository here:

If you’re curious to learn more, I have an upcoming microservice course called Microservice Driven. Sign up to be notified when it’s available here!

When people want to learn microservices I always recommend learning some DevOps and containerization as well as a prerequisite to ease the development and production process. Check out my DevOps Journey here! I personally think the future includes containers and serverless — and serverless workloads running in Kubernetes in most enterprise scenarios.

As always, if you’ve found this helpful the best way to help me is to give me 50 claps, follow me here on Medium, and share with others!