Gossip Protocol


Growth of a Concept

by Max van der Werff

When was the last time you were in a situation where someone was too attached to their idea?

When,

“Here’s my idea, please give me feedback.”

actually means,

“Here’s my idea, please tell me it’s wonderful and don’t you dare change it.”

Coming into this team, I’ll admit I was a touch jaded. Two years into the industry and jaded? Yep. I’d seen a world where ideas were often served lukewarm, half-baked, with the originator’s smudged fingerprints all over them.

I was a little less starry-eyed, and I know I’m not alone in that. I hold the opinion that ideas are living entities, something I’d seen refuted in practice again and again. Coming into an environment where no one is precious, where ideas are picked apart for the better and improve because of it, is incredible.

Concepts are ideas. Ideas need space; they need to be offered time and energy. They can be the needy child, they can be a fully functioning adult right out of the gate, or they can be the unruly teenager.

Concepts need a healthy environment to grow: they should be added to, prodded and poked, have things stripped away, and be improved over time. It’s hard at times to let that beautiful gem of an idea out of your sight and let others influence it.

Ok, introduction over — buckle in and get ready for the tech chat.

The Concept

The example I’m going to cover is a perfect illustration of an idea that morphed and changed beyond its original concept and came out better because of it.

The example is currently a relatively core part of our infrastructure. It’s something we knew we needed, something we expected to handle a few situations and nothing more than that.

We currently run a service-based architecture: many distinct services going about their business, plus other services which may require a combination of those services’ data. Rather than allowing the latter the power to hammer our core services, we wanted a caching/aggregator service to protect our soft underbelly, so to speak.

“Requirements”

This service would come to be known as Galactus (yes, the devourer of worlds). It had very broad-stroke requirements; it needed to:

  • hold a cache of our internal services
  • take a heavy beating
  • respond at the drop of a hat
  • handle sub document updates

And the final “requirement” was: we need to know when it changes. This is important as we want to update our services that care about this data with as little latency as possible. Watch this space.

There were a few other nice-to-haves, but they were just that, nice to have: being a managed service (we host on AWS and make use of a lot of their managed services, meaning less worry on our side), simple redundancy, replication or something similar, and perhaps versioning.

The choice

We did a little research, measuring the options against each other, and chose to trial CouchDB (a pretty flexible document store) as it did most of what we needed practically out of the box: especially providing a continuous stream of update information if you care to listen for it. It was a light touch, ran on Erlang, had some lovely clustering tech (making use of this blog’s namesake), supported functions written in JavaScript, and leaned on in-memory caching for some lightning-fast responses.
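As a sketch of that stream: CouchDB’s `_changes` endpoint, with `feed=continuous`, emits one JSON object per line for every document change, plus blank keep-alive lines. The database name `galactus` and the local URL below are illustrative assumptions, not our real deployment.

```python
import json
import urllib.request


def parse_change(line: bytes):
    """Parse one line of a continuous _changes feed.

    Returns (doc_id, seq) for a change line, or None for the
    blank keep-alive lines CouchDB emits between changes.
    """
    line = line.strip()
    if not line:
        return None
    change = json.loads(line)
    return change["id"], change["seq"]


def follow_changes(base_url="http://localhost:5984/galactus"):
    """Yield (doc_id, seq) tuples from the continuous changes feed."""
    url = base_url + "/_changes?feed=continuous&since=now"
    # Long-lived streaming response; each iteration is one feed line.
    with urllib.request.urlopen(url) as resp:
        for raw_line in resp:
            parsed = parse_change(raw_line)
            if parsed is not None:
                yield parsed
```

`since=now` skips history and only reports changes from the moment you connect, which is what a notifier wants.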

The architecture

[Diagram: Galactus architecture]

The architecture surrounding Galactus places multiple internal services on the left-hand side of the diagram. These are the sources of truth; when one has been updated, it publishes to a notification service (AWS SNS for us; replace with your pub/sub of choice).

That notification triggers a process which picks up the changed data and writes it into the appropriate sub-document in Galactus. The write then shows up in the continuous stream of changes mentioned above. Something listening to this stream would notify all of the services on the right-hand side of Galactus, those hungry for updates and ready to push them out to external parties.
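A rough sketch of the “write into the appropriate sub-document” step: plain-HTTP CouchDB has no partial-update verb, so a sub-document update is read-modify-write: fetch the document, replace one service’s section, and PUT it back with the current `_rev`. The schema below (service names as top-level keys) is hypothetical, purely for illustration.

```python
import copy


def apply_subdoc_update(doc: dict, service: str, payload: dict) -> dict:
    """Return a copy of `doc` with one service's sub-document replaced.

    CouchDB's MVCC means the caller must PUT the result back including
    the document's current `_rev`; a 409 response means another writer
    got there first, so re-fetch and retry.
    """
    updated = copy.deepcopy(doc)  # never mutate the fetched doc in place
    updated[service] = payload
    return updated
```

Keeping each source service’s data under its own key is what makes these per-service writes cheap: one notification only ever touches one section of the aggregate document.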

Change

You’ll notice the intentionally vague nature of some of the architecture above. Times change, ideas change and grow, and what you thought was important or minor might turn out to be backwards.

The “something listening to the stream of changes” changed shape multiple times before it was ever attempted;

  • a sidecar docker container?
  • a service dedicated to listening?
  • does it just pipe the stream into SNS? Or does it ask back to the database for more details?

Growing pains

The first iteration was a daemon which lived alongside the CouchDB instance. This proved inconsistent, hard to monitor and easily forgotten about (it’s a little invisible). It went the way of the dodo ☠️.

Next came a Lambda which would poll every minute for changes and run through the document to figure out what had changed. This was slow, restricted by a cron schedule, and could struggle under load.

Eventually this “listener” was removed entirely. We already had something piping data into Galactus; as long as we had the 👍 back from Galactus, we knew everything was in place and we could notify those hungry, hungry services ourselves.
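The final shape can be sketched as a tiny pipeline: write to Galactus, and only on a successful acknowledgement publish the notification yourself. The callables are injected here so the control flow is visible; in our stack the notifier would wrap an SNS publish, but the names and payload shape are illustrative.

```python
def ingest_and_notify(write_to_galactus, notify, doc_id: str, payload: dict) -> bool:
    """Write an update, and notify downstream services only on success.

    `write_to_galactus` should return an HTTP-style status code;
    anything in the 2xx range counts as the 👍 back from Galactus.
    """
    status = write_to_galactus(doc_id, payload)
    if 200 <= status < 300:
        notify({"id": doc_id, "event": "updated"})
        return True
    return False
```

Because the writer already knows exactly which document it touched, there is nothing left for a separate changes-feed listener to figure out: the notification rides on the write’s success.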

Maturing

Over time we broke Galactus: we kicked him a little too much, a little too fast, here and there. We found that the pain points were more likely to lie in availability and scaling, something CouchDB excels at. We added extra nodes to the cluster, replicating across them, and relied more and more on sub-document updates.
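For the replication piece, CouchDB accepts a POST to its `_replicate` endpoint (or a document in the `_replicator` database) describing the source, target, and whether replication should be continuous. A minimal sketch of building that request body, with placeholder node URLs:

```python
import json


def replication_body(source: str, target: str, continuous: bool = True) -> str:
    """Build the JSON body for CouchDB's _replicate endpoint."""
    return json.dumps({
        "source": source,
        "target": target,
        "continuous": continuous,   # keep syncing rather than a one-shot copy
        "create_target": True,      # create the target database if it's missing
    })


body = replication_body(
    "http://node1:5984/galactus",
    "http://node2:5984/galactus",
)
```

With `continuous` set, the nodes keep pulling each other’s changes, which is what let us add capacity without a separate sync job.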

Looking back, you can see our requirements changed: some gained a different emphasis, and others we thought absolutely core basically disappeared entirely.

Hindsight

Would we do it again? Would we do it differently? Would we choose CouchDB again? Should we drop things now and build something else?

  • We didn’t realise how core to the interaction sub-document updates would be, or how rare they are in other products.
  • We had a high weighting on listening for changes, which ended up falling off our requirements entirely.
  • Although building a little web service around a database isn’t exactly a big ask, CouchDB having an HTTP endpoint straight out of the box means it can run solo, with no wrapping or hand-holding.
  • Versioning was something we considered a nice-to-have, but those versions have helped with tracking, comparisons and many other things along the way. It’s saved my bacon multiple times. Having it straight out of the box is a delight.

Personally, I think we landed on our feet. We might have fluffed the landing (maybe knocked a point or two off our final mark), but we recovered, and we have a service which has taken its lumps, gained its street smarts, and is quite happily powering along in our architecture.

Galactus as a concept, an idea, went through its “teenage” stage and has emerged as an upstanding member of society.

Letting these ideas out of your grasp so they can improve is core to our experience.

About the Author

Engineer @ TravelNest. Solver of problems 🔎, proud nerd 😎 , investigator of rabbit holes 🐇 and author of rambling lists ✏️.
