Agile Architecture — the rise of messy, inconsistent and emergent architecture

by Anthony Murphy, June 10th, 2019

Architecture can make or break teams and the products they build.

(I preface this whole article by saying that I am by no means an expert on this topic. It is purely based on my own experience working with some amazing individuals, and on numerous failures across a number of different companies; all opinions are my own. Thank you, Adrian Cockcroft, for such a great keynote at the recent AWS Summit Developer Night in Sydney. This space needs more thought leadership like this!)

Architecture is a topic that, especially in the agile space, doesn’t get the attention I think it really should.

There is an increasing demand to be more adaptable and responsive than everyone else, and consequently the need to reinvent how we think about and approach IT architecture is becoming ever more pressing.

This is why I got so excited about a slide from Adrian Cockcroft’s keynote at the AWS Summit in Sydney the other week. Like a child being told that eating lollies for breakfast is ok, I was in awe! Adrian brilliantly presented the idea that the best architectures today are “minimalist, messy and inconsistent”: something I’ve advocated for and thought a lot about over the past couple of years.

Houston, we have a problem…

The ship is sinking (the agile ship, that is) as many companies continue to approach architecture the way they did 10+ years ago, with enterprise architecture aiming to maximise reuse and consistency and ultimately reduce operating costs. This has eroded their ability to be nimble and responsive. Over the decades many of these companies have built monolithic “enterprise solutions” which are widespread and shared by many teams across the organisation. Great from a $$ point of view, but for speed-to-market and agility it does nothing but leave teams with their hands tied, bound by a proliferation of inter-dependencies.

As companies “go agile” to deal with this agility problem, they restructure themselves, often on the advice of agile coaches, into end-to-end feature teams. After running this new structure for a couple of months the issues start: managers and executives wonder where they went wrong and why these end-to-end feature teams aren’t working for them. Although there are numerous reasons why these issues arise, I’ve often found that the teams are constrained by the organisation’s technical stack; much like a kid trying to jam a square peg into a round hole, the organisation is trying to make feature teams work with traditional architecture.

Trying to get end-to-end feature teams to work with a traditional architecture

These executives and managers start to reminisce about the good old project-driven days and component teams. Perhaps things would be better if they went back to those days? In many cases, I wonder if they are correct!

Many places do exactly that: they revert to component and project-style teams, or somewhere in between, and apply band-aid solutions such as increased dependency management, big room planning events, scrum-of-scrums style meetings and giant dependency boards. But much like taking Panadol when you have a cold, such cheap pills are but a temporary remedy.

I’ve often wondered how we even get ourselves into these situations in the first place. Is it tunnel vision, blindly plowing ahead with a team structure and the “must implement feature teams 101” handbook? We wouldn’t ask a team to release more often if they didn’t have the appropriate automated testing and deployment pipeline to support it, so why do we do this with structure?

There is a strong need for thought leadership like Adrian’s in this space.

Often when I suggest radical ideas like deliberately having messy, duplicated and inconsistent architecture to enable autonomous teams, I’m usually met with crickets: blank faces staring back. You can see them trying to process what I just said, and how it fundamentally goes against everything they’ve been taught to date about architecture and management 101. They just can’t wrap their heads around it. “Duplicate? Inconsistent? Are you insane?!”

Management trying to understand why anyone would want duplication and inconsistency

So why would anyone ever want to have inconsistent, duplicated and messy architecture?

The problem with monoliths

Take for example a typical enterprise architecture. There are very few components and it looks simple: each component is carefully picked to serve many applications, and each is diverse enough to allow for wide reuse and coverage.

However, there is a hidden cost to all this, one which was not so visible in the old days of project and component teams. Yes, the more we spread a component across multiple teams the lower the inventory cost (here I’m treating lines of code and the other assets required for our apps to run as inventory); however, what we don’t see is that this inadvertently increases the cost of change.

This is because now that the component is shared across multiple teams, there is a snowball effect every time one team wants to change it. I’ve seen this all too often: one team makes a small change and boom! It breaks half a dozen other teams’ applications. Soon after, we are forced to conduct large coordinated testing cycles to ensure that none of the dependent teams are impacted.

Changing one component which spans across multiple teams has a snowball effect.

I’m not advocating against having a single source of truth or a single point of change; I agree that both do indeed lower the cost of change, but they are not the full picture. Yes, they lower the cost of change at the code level, but from a wider organisational point of view they often lead to increased coordination and alignment costs which are exponentially greater than the code change itself. Often this cost is not so visible, and definitely not as tangible as the physical code change; perhaps that’s why it so often goes unnoticed. Maybe I should reframe cost of change as the increase in “time to value”, as Adrian put it at the AWS Summit. After all, that’s the goal: not how quickly we can make a change, but how fast that change can be delivered to a customer and provide them value.

Think about every time you’ve had to align teams, build consensus or hold a meeting to agree on a consistent approach. This is all an increase in the cost of change, and it ultimately delays value realisation; I doubt any of your customers care how many meetings it took to build alignment or consensus on a particular approach.

Consistent architecture or fast autonomous teams — you can’t have both.

In fact, many of the leading tech companies we know today identified this exact problem with monoliths. The likes of Netflix, Amazon and eBay, to name a few, all started with monolithic architectures and over time evolved away from them.

A new approach

Einstein famously said, “No problem can be solved from the same level of consciousness that created it.” I fundamentally think this is why these situations keep happening: we are applying the same old thinking to a whole new kind of problem.

If we want to be fast and adaptable we need to break up these monolithic “enterprise” architectures. We need to understand that today’s problems demand a new approach and a new mindset: one which is ok with duplication, ok with mess and inconsistency, and ok with the idea that architecture can be short-lived and is expected to evolve.

There is a balance to be had. Yes, consistency and alignment are great, but they need to be weighed against time to value. This means that at times we may deliberately duplicate code in order to allow for autonomy, and therefore speed to market.

Here are a few practical things I’ve found worked well in the past whilst trying to apply this type of architecture — call them guiding principles if you will.

It’s all about the customer — be ‘time to value’ centric

If we were to start with a problem, a customer problem, and then structure ourselves around that, I doubt we would end up with the hierarchy and monolithic enterprise architecture that many companies have today.

Products should solve real problems, teams should be structured around products and architecture should enable such teams, not the other way around — Random Theory

I had a colleague say to me recently that “we should be structuring our architecture around our teams, not the other way around” and I couldn’t have put it better myself.

Melvin Conway is often famously quoted in this regard, in what has now become known as “Conway’s Law”. It all started back in 1967 when Conway wrote a paper stating that, essentially, any organisation trying to produce something will inevitably end up producing a copy of its own internal structure. Sound familiar? How often are product experiences divided in a similar way to their team structure? Front-end vs back-end vs data store vs customer support team, all separated, often leading to a disconnected customer experience.

“organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.” — Melvin Conway

Over 50 years later, organisations are still struggling to apply this “law”. I often wonder if the cause is ulterior motives, such as reducing operating costs, because applying this law in my experience usually doesn’t result in a lower operating cost; rather, it generally increases it, to the benefit of time to value.

So if we apply what my colleague so elegantly put, and what Mr Conway wrote about all those years ago, it would suggest we start by asking what “system” (or rather, what product) we are producing. Once that is established, we should look to build our teams and our architecture around it.

Broken up and possibly duplicated architecture to allow for products teams to be independent from one another.

In many cases you’ll find that you end up with the same component being used by multiple products. In these cases traditional thinking would suggest separating it out, centralising it, and maximising reuse and consistency. But what I’ve witnessed time and time again is that doing so only creates a dependency for the team. One dependency, sure, fine; two, ok, not so bad; but apply this thinking to the whole tech stack and you end up with teams with their hands tied, weighed down and brought to a halt by a proliferation of interdependencies.

Independent, micro and at times duplicated

One thing which I’ve found useful to help apply this mindset to your architecture is the use of microservices.

Microservices are a modern software architecture pattern that breaks an application down into smaller, decoupled parts which are loosely bound together to create a whole application or product.

Like many things, the devil is in the details. With microservices I’ve found that the real challenge is how one divides an application into small logical pieces which can act as lego blocks. Slice it the wrong way and you simply end up with 100s, possibly 1000s, of microservices all dependent on one another, rather than sliced in a way that lets whole features and products be released independently.

Going back to Conway’s Law and structuring your architecture around your teams (who are ideally structured around your products), as a guiding principle I tend to first look at slicing services by product and their features. This hopefully leads you down the path of independent products. In doing so I’ve often found this leads to duplicated services and, over time, yes, a level of mess and inconsistency, but that’s often the price you pay for independence and faster value realisation. I guess it comes down to what you value: cost, or time to value?
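
To make this concrete, here is a minimal sketch of what slicing by product, and tolerating a little duplication, can look like. The two products, the tax rule and every name below are hypothetical and purely for illustration:

```typescript
// Two product teams each own a small, product-shaped module rather than
// depending on one shared "pricing" component. The duplicated tax logic is
// deliberate: either team can change their rules without coordinating a
// release with the other.

// ---- Checkout product (owned by the checkout team) ----
interface CartItem {
  sku: string;
  unitPrice: number; // dollars
  quantity: number;
}

export function checkoutTotal(items: CartItem[]): number {
  const subtotal = items.reduce((sum, i) => sum + i.unitPrice * i.quantity, 0);
  const tax = subtotal * 0.1; // the checkout team's current tax rule
  return Math.round((subtotal + tax) * 100) / 100;
}

// ---- Subscriptions product (owned by the subscriptions team) ----
interface Plan {
  name: string;
  monthlyPrice: number; // dollars
}

export function subscriptionTotal(plan: Plan, months: number): number {
  // Duplicated on purpose: this team can move to per-region tax rates
  // tomorrow without breaking checkout.
  const subtotal = plan.monthlyPrice * months;
  const tax = subtotal * 0.1;
  return Math.round((subtotal + tax) * 100) / 100;
}
```

If the two rules genuinely must stay identical, a shared library (discussed below) is usually a better escape hatch than a shared runtime component.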

It’s not all chaos: I’ve found that if you play it smart you can often reduce the level of duplication. And yes, at times (often at scale) you may decide to have a cross-cutting platform and subsequently a platform team, but this is generally the exception, not the rule.

👩‍🔬 A quick test for this is to ask yourself: how many teams in your organisation are platform/component teams versus product/feature teams? The latter should greatly outnumber the former. 👩‍🔬

Rather than having one component which serves both products’ needs, an option might be to use a shared library: a common source which each team then builds upon for their specific needs.

A great example of this pattern is the atomic design language system, where we break an experience down into its “atoms”, or smallest possible building blocks. These “atoms” on their own are not useful; in fact they’re not even things you would deploy. Rather, they are building blocks, much like open source libraries. There is still a level of orchestration which needs to happen on top of the “atom”, but that’s where the specialisation comes in, along with the decoupling from other services. Too often we take a monolithic approach even to our services: we find things which are similar and bucket them together into a single service, rather than decoupling them and accepting a level of duplication.
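
As a rough sketch of that idea (the atom, the two products and every function name here are made up for illustration), a shared “atom” is just a tiny library function, and each product team writes its own small orchestration on top:

```typescript
// ---- Shared library of "atoms" (e.g. an internal package) ----
// An atom is tiny and generic; it's a building block like an open source
// library, not something you deploy on its own.
export function formatMoney(amountInCents: number, currency: string): string {
  return new Intl.NumberFormat("en-AU", {
    style: "currency",
    currency,
  }).format(amountInCents / 100);
}

// ---- Product A: its own orchestration on top of the atom ----
export function invoiceLine(description: string, cents: number): string {
  return `${description}: ${formatMoney(cents, "AUD")}`;
}

// ---- Product B: a different orchestration on the same atom ----
// B's specialisation can evolve without coordinating with Product A, because
// the shared piece is a library dependency, not a shared running service.
export function priceBadge(cents: number): string {
  return `Only ${formatMoney(cents, "USD")}!`;
}
```

Because the shared piece is a library dependency rather than a shared running service, upgrading it is each team’s own choice rather than a coordinated event.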

Versioning is your friend

On top of using microservices patterns, I’ve also had teams complement them with versioning; the two work together really well to enable autonomy and faster time to value.

Versioning is a great way to avoid fully duplicating things: rather than copying, we build on top and create a new version of the microservice, allowing other teams to remain on a previous version and avoiding that snowball effect of changes. Team A make their changes and move to v1.1, while Team B remain on v1.0.

Again, this comes down to how you implement it. I’ve seen places say that they’re using versioning, but rather than allowing teams to remain on a previous version they still force everyone to upgrade by only ever running one version in production: the latest one. This puts teams back into the monolithic situation where the change snowball effect remains. In these instances the only advantage I can see from versioning is the ability to easily roll back to a previous version, but it does nothing for autonomy or a shorter time to value.

Again, versioning calls for a bit of mess: we need to accept a higher operating cost and have multiple versions existing at the same time. For example, in order for Team A to be on v1.1 while Team B remains on v1.0, both versions (v1.1 and v1.0) need to be running in production simultaneously. If you only ever keep one version live, you might as well ditch versioning altogether.
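
To illustrate, here is a minimal, assumed example (not from any real system) of one small service keeping v1.0 and v1.1 of the same endpoint live at the same time, so Team B can stay on the old contract while Team A moves ahead:

```typescript
import { createServer } from "node:http";

// Both versions of the (hypothetical) pricing endpoint stay live at once.
const handlers = new Map<string, () => object>([
  // v1.0: the contract Team B still depends on, left untouched.
  ["/v1.0/price", () => ({ amount: 42, currency: "AUD" })],
  // v1.1: Team A's new contract, added alongside rather than replacing v1.0.
  ["/v1.1/price", () => ({ amount: 42, currency: "AUD", includesTax: true })],
]);

createServer((req, res) => {
  const handler = handlers.get(req.url ?? "");
  if (!handler) {
    res.writeHead(404).end();
    return;
  }
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify(handler()));
}).listen(3000, () => {
  console.log("pricing service up: v1.0 and v1.1 are both in production");
});
```

Retiring v1.0 then becomes a deliberate decision made together with Team B, rather than a side effect of Team A shipping.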

👩‍🔬 A simple check for this is to see how many other products/applications get broken when you release a new version. Ideally there should be no more than one — the product which needs the new version. The rest should be on a previous version and remain unaffected. 👩‍🔬

Emergent and evolving

Much like the products we produce these days, architecture should be iterative and incremental; it should be emergent. We don’t have, and shouldn’t look for, all the answers up front; rather, we learn and adapt as we go. Such “atomic” architectures and the use of versioning enable this modularity, whereas in the past it was often extremely difficult to have an emergent architecture with large monoliths.

This is also particularly important today, as new technologies are rapidly becoming available, which means staying up to date and being able to experiment with the latest and greatest is no longer a side-hustle; it’s a full-time gig. And just like the products themselves, architecture needs to be constantly maintained, updated and allowed to evolve over time. The security climate also makes this need ever more pressing, as hackers often exploit vulnerabilities in older technologies.

Credit: Gerd Altmann on [Pixabay](https://pixabay.com/illustrations/evolution-development-forward-3543775/)

A new breed of architecture

There is no perfect solution, rather a trade-off over what you value more. Cost, reuse and consistency? Or are you happy for things to be a bit messy and inconsistent in order to enable autonomy, adaptability and time to value?

I’ve often asked clients: do you want to be the cheapest company or the best? They’re competing goals; you cannot have both.

Cheap is easy: just continue doing what you are doing. But if you want faster value realisation, to be responsive and adaptable, then you need to think and behave in a different way. This often means throwing out the architecture 101 book and looking towards more modern patterns and thought leadership like Adrian Cockcroft’s brilliant keynote at the AWS Summit, where he argued that the best IT architecture today will be “messy, inconsistent and minimalist”.

From an agile point of view we often push the benefits of small autonomous teams, yet we do very little to set them up for success. Often we leave the architecture untouched, leaving these teams much like a sports car stuck in a traffic jam of interdependencies caused by cross-cutting monolithic components. Rather than keeping such monolithic components, we should look to break them down into small, atomic parts, decoupled from each other: a balance between duplication and consistency.

Ultimately the future calls for optimising time to value, something which conflicts with more traditional approaches that have driven towards consistency and reuse; rather, the future will ask architecture to enable autonomy by being decoupled, messy, duplicated and emergent.

— Random Hypothesis
