Speed Up Your Update Delivery: This Method Really Works

Written by bruiz (Benoit Ruiz, Software Engineer at Datadog) | Published 2021/03/29
Tech Story Tags: software-development | software-engineering | development-methodology | problem-solving | hackernoon-top-story | team-productivity | product-management | optimization

TLDR: Benoit Ruiz is a senior software engineer. He explains the tradeoff between “quick and dirty” and “long and perfect” solutions. The “best” solution is not always the fastest one to implement, as discussed later in this article. In theory, the lower the time to market (TTM), the better, because a solution to the problem is found quickly. In practice, sacrifices are made to buy time, and the increasing cost of these sacrifices — technical debt — can slow a team down.

Hello and welcome!
I’m a software developer. For the past 7 years, I have been working in a growing industry that evolves really fast and needs to adapt quickly to its environment. For this reason, we had to take shortcuts and ship changes rapidly, more often than not at the expense of the software’s long-term quality.
I’ve grown more and more mature throughout my years in this professional world. When I started working, the company was a startup, and all I wanted to do was write some nice code. Now — a few years later in this company, which has grown really well — I still want to write some nice code. The key difference is that I now have a better understanding of why I’m doing it, and how my work affects the present and future of the company.
To understand that, we need to take a step back and look at the bigger picture: the goal of a company is to create some form of wealth. It does so by bringing a product (or a set of products) that is useful in some way to a part of the population, in exchange for something (generally money).
Companies in the tech industry face problems every day: features or improvements requested by users and clients, bugs that need to be fixed, and so on. The most important role of a developer is to solve these problems.
Developers don’t write code just for the sake of writing code. Sure, it’s fun, but the main objective — the “why” — is to find solutions that bring business value. The better the solution to a given problem, the more value the developer brings to the company. The “best” solution is not always the fastest one to implement, though; we’ll come back to this later in the article.
Developers should aim to bring as much value as they can, by solving problems and helping others solve problems. A good developer is one who brings value; a better developer is one who also helps others grow so that they bring more value as well.
How can one solve the problems of an organisation? Well, there are pretty much infinite ways of doing that — the “how” — ranging from “quick and dirty” to “long and perfect” solutions. Both of these extremes are bad, and I’ll try to explain why in the following sections. Additionally, I’ll introduce another way that I think is a good tradeoff.

The “quick and dirty” way

Solving problems this way brings value to the market more quickly. I’m not saying it’s the “best” value we can get, but it’s something.
“Compromise” by monkeyuser.com.
If you ask business stakeholders, they will tell you that the best developer is the one who solves their problems quickly. If you ask technical stakeholders, though, they will probably tell you the opposite: “quick” is often paired with “dirty”, and this dirt is cleaned up only by tech people — if they are granted the time to clean up that mess, which is not that common.
Therefore, in theory, the lower the time to market (TTM), the better, because a solution to the problem is found in a small amount of time. In practice, sacrifices are made to buy some time, and the increasing cost of these sacrifices can jeopardise the entire velocity of the team as time passes. These “sacrifices” actually have a name that you might have already heard of: technical debt.
There are hundreds of definitions regarding technical debt. For me, it’s the fact that we borrow time from the future to solve problems faster now. By saving time now, we lose time later. The more technical debt is piled up, the longer it takes to solve problems.
Introducing “quick and dirty” solutions has consequences that are best avoided: it makes the code more complex, so understanding it, changing it, and debugging it get harder and harder. Misunderstanding what a code base does is the perfect environment for introducing new bugs, which adds frustration and costs productivity.
At some point, it takes so much time to solve a new problem that it’s better to either rewrite the software entirely, or engage in a very long refactoring phase of the existing project. Either way, this undoubtedly slows down the company.
Obviously, if a critical bug is found and needs a quick fix, then “quick and dirty” is the way to go. Nevertheless, it’s very important to come back to that “hotfix” (after its release to production) to make it a proper “fix”, refactoring what is needed in order to pay off the technical debt introduced in the first place. More on that later.

The “long and perfect” way

The more time we take to design a solution, the better it should be, right? Well, I don’t really agree with this one. Sure, it’s really tempting to spend weeks thinking about the perfect architecture for a project or a feature, but in the end, perfection can never be achieved, and even if it is, it never stays “perfect” for long. In the meantime, the business loses patience and trust.
Furthermore, what is considered the “perfect” implementation by someone could be considered an “awful” one by someone else. Writing code, i.e. finding a solution to a problem, is a subjective process. Even a “perfect” piece of code could be considered “awful” by its authors themselves, given enough time.
I remember writing a module in TypeScript using classes and a one-level hierarchy between these classes. Back then, I considered that implementation to be pretty great (I’m not saying perfect, but I thought it was really nice). Six months and some intensive learning about functional programming later, going back to that module didn’t feel so good. I knew I was the one who wrote it, but I wasn’t satisfied anymore. I wanted to refactor it with what I considered to be “perfect” at that time, but what value would I have brought? There was no feature planned for this part of the code base, and it worked fine without any bugs. It just wasn’t beautiful anymore.
What is “beautiful code” you might ask? There are as many definitions as there are developers. I can give you my current definition, knowing that it will probably change in a few years anyway:
  • First and foremost, it solves the problems
  • It’s DRY (Don’t Repeat Yourself)
  • It’s SOLID
  • It’s consistent (it follows some conventions and guidelines)
  • It fits well with the existing code (no part that feels weird)
  • It’s easily readable and understandable by peers
  • It’s self-documented (meaningful naming, static types, purity)
  • It’s easily testable with unit tests (purity)
  • It mutates state only when necessary (immutability all the way, except for caches, centralised state management…), as sketched below
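To make a couple of these principles concrete, here is a minimal TypeScript sketch (the order/discount domain and all the names are hypothetical): the first function mutates its input and hides its intent, while the second is pure, immutable, and self-documented through its types.

```typescript
// Hypothetical domain: applying a discount to an order.

type Order = Readonly<{ id: string; totalInCents: number }>;

// Vague naming and mutation: hard to read, hard to unit-test.
function upd(o: { totalInCents: number }, d: number): void {
  o.totalInCents = o.totalInCents - o.totalInCents * d; // mutates its input
}

// Pure, immutable, self-documented: same inputs always give the same
// output, the input is never mutated, and the types tell the story.
function applyDiscount(order: Order, discountRate: number): Order {
  return {
    ...order,
    totalInCents: Math.round(order.totalInCents * (1 - discountRate)),
  };
}
```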
I consider all these principles to be great, leading to a clean, beautiful code base. However, these principles are not universal. As I said, if you ask other developers what “beautiful code” means to them, they will answer differently. Furthermore, it takes some time and practice to master all these principles, and it’s quite hard to apply them in an existing code base that is not built on top of these foundations. Finally, some software has a strong focus on performance, which is not always compatible with applying best practices and writing beautiful code.
Anyway, there’s no point in spending weeks writing the perfect solution to a problem. As soon as a new requirement lands in that part of the code — which happens often, since a product changes over time — the “perfect” solution falls apart. As soon as developers learn new things — which happens quite frequently — the “perfect” solution falls apart.
So what should we do then?
Our role as developers is to find the best middle ground between these two ends of the spectrum, in order to bring value as fast as possible without increasing the technical debt. I like to call this pragmatism, or “being efficient at solving problems”.

The “pragmatic” way

What I’m introducing here is not a universal way of solving problems. It is what I consider ideal based on my professional experience over the past years. That doesn’t mean I have always used this method. I wish I had, but sometimes there are just too many problems to solve in a short period of time, and sacrifices are made because the cost (technical debt) is considered lower than the revenue (value added to the market). I’m pretty sure we’ve all been in this situation!
Whenever we need to solve a new problem, we should go through the following steps:
  • Imagine what could be a great solution to the problem, but don’t implement it (yet).
  • Find and implement a solution that requires as few changes as possible, while following the guidelines of the project, e.g. don’t mutate object properties, use module X to make HTTP requests, etc.
  • Write high-level tests to make sure this solution answers the initial problem. You can also create some dashboards to monitor production and observe the impact of your solution.
  • Ask for a review from your peers, then ship it to production as soon as you can. This will immediately bring value to the market.
  • Right after, refactor the part of the code that includes the solution from step 2 in order to implement the initial great solution you thought about. High-level tests and monitoring should ensure that your refactoring doesn’t break anything.
Imagine the great solution
First of all, what is a great solution? It’s a solution where the business problem is well understood and modelled with the correct abstractions, by using some properties of beautiful code.
Writing beautiful code leads to writing perfect solutions. But here, we’re not looking to write the perfect solution, for all the reasons I mentioned earlier. We want to get closer to perfection without achieving it, because achieving it would require too many resources (time and energy).
According to the Pareto principle, we can say that 80% of the effort is spent on the last 20% of the solution’s implementation. For example, if a solution takes 20 days to implement, Pareto tells us that 80% of the solution is done in the first 4 days, and the remaining 20% takes the other 16 days. This may not apply every time, of course, but you get the rough idea.
As a rule of thumb, I’d say a great solution would account for 80% of the perfect solution. This means we can’t have all the sweetness available in the “beautiful code” description. We have to pick some elements by finding the best tradeoff.
How can we do that? In my opinion, a good practice is to assign a score to each principle of “beautiful code” for the parts of the code that need to change in order to solve the problem. Then, decisions can be made based on these scores. For example: is the “easily testable” score close to 0? Then your great solution should focus its effort on making the new code more testable. Another iteration could then be planned to increase the score of another principle, and so on.
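As a rough illustration of this scoring idea (the principle names, module, and 0–5 scale below are hypothetical, not a standard rubric), the bookkeeping can be as simple as one scored record per code area:

```typescript
// Hypothetical rubric: score each "beautiful code" principle from 0 (poor)
// to 5 (great) for the code area that needs to change.
type Principle =
  | "solvesProblem"
  | "dry"
  | "consistent"
  | "readable"
  | "testable"
  | "immutable";

const paymentModuleScores: Record<Principle, number> = {
  solvesProblem: 5,
  dry: 3,
  consistent: 4,
  readable: 2,
  testable: 1, // weakest principle: the great solution should focus here first
  immutable: 3,
};

// Pick the weakest principle as the focus of the next iteration.
const [weakestPrinciple] = Object.entries(paymentModuleScores).sort(
  ([, a], [, b]) => a - b
)[0];

console.log(`Next iteration should improve: ${weakestPrinciple}`);
```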
While thinking about the great solution, we should look for the answers to the following questions:
  • How much time will it take to implement this solution? It’s very important to be reasonable. In the majority of cases, several weeks is not reasonable: by working on this solution, you won’t be able to bring immediate value for several weeks, and “freezing” the value you could bring for that long is hardly acceptable for business stakeholders. However, keep in mind that freezing your value now — by reducing and/or preventing technical debt — will allow future solutions to be implemented quicker, i.e. a faster time to market. As always in our job, it’s a matter of compromise: how much (reasonable) time should we spend now to keep our velocity in the long run?
  • Can the solution be split into smaller parts, in order to implement it iteratively? If the solution is too big and requires too much work, it’s best to break it down into smaller parts, instead of completely ditching it because it doesn’t fit the “reasonable” time frame. Furthermore, smaller parts should make peer reviews easier.
  • Does the team agree with this solution? (do I want to use a shiny new library I learned recently? Does my team know this library? Would we need some kind of training if we decide to use that library, or that coding paradigm? Do my teammates know something I don’t that could break the project with this implementation?)
  • How much does the existing code base need to be adapted for this solution? (is it a standalone, isolated module/service? Is a “core” part of the code base affected by it? Do we need a whole new state management system to make it work?)
  • Can we easily write tests to avoid further regressions and make sure this solution is the answer to the problem we’re having?
  • Does it need some form of documentation, or is it self-documented (e.g. with features from statically typed languages)?
  • How often are changes made in this part of the code? If it’s rarely changed, then maybe it’s not worth spending too much time on it. If it changes a lot, then adapting the code could be worthwhile, to make it easier to understand and faster to change in the future.
This is not an exhaustive list, but it provides some direction.
Notice that at this point, we are not implementing the great solution yet. We’re only thinking about it, which shouldn’t take too much time. It helps us understand the existing code, and how it should be adapted so the new code fits well and doesn’t feel like an “alien” in the middle of the rest.
Implement an acceptable solution
“Tech Debt” by monkeyuser.com.
Once a great solution has been found, it is time to find a “less great” solution, while still avoiding the “quick and dirty” danger zone. Here, we are going to make as few changes as possible, while keeping some consistency with the existing code and making sure that the new code is tested and works well. This part really depends on the project and its guidelines. Generally, this is where we add a new “if” branch, a new parameter to an existing function, a new method to an existing class, etc.
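As a hypothetical TypeScript sketch (the report/format domain is invented for the example), the “acceptable” solution to supporting a new XML export format could be nothing more than one extra branch in the existing function:

```typescript
// Existing function, minimally extended: the "acceptable" solution is
// the new `else if` branch, and nothing more.
function exportReport(data: string[], format: "csv" | "json" | "xml"): string {
  if (format === "csv") {
    return data.join(",");
  } else if (format === "json") {
    return JSON.stringify(data);
  } else if (format === "xml") {
    // New requirement handled with as few changes as possible.
    return `<report>${data.map((d) => `<row>${d}</row>`).join("")}</report>`;
  }
  throw new Error(`Unsupported format: ${format}`);
}
```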
If the code was initially made to evolve in that direction, e.g. extending it by adding support for a new data type, then chances are we won’t need that great solution after all, because the new code already fits well into the existing code. However, from my experience, this is not often the case, because we are more inclined to solve problems the “quick and dirty” way, which doesn’t make the code easily extendable.
About the tests, I would advise writing high-level tests such as integration, functional, or end-to-end tests. They are slower than unit tests, but the great solution we found earlier will likely break and/or remove APIs/interfaces/modules from the existing code, which would break unit tests as well. Here, it’s best not to invest in unit tests for the “acceptable” solution, because we know they will disappear anyway with the upcoming refactoring towards a better solution.
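For instance (assuming a Jest-style test runner, where describe/it/expect are globals, and reusing the hypothetical exportReport function from above), high-level tests only exercise the public API, so they survive the refactoring that comes next:

```typescript
// High-level tests: they pin down observable behaviour without depending
// on the internal structure that the upcoming refactoring will change.
describe("exportReport", () => {
  it("exports rows as XML", () => {
    expect(exportReport(["a", "b"], "xml")).toBe(
      "<report><row>a</row><row>b</row></report>"
    );
  });

  it("exports rows as CSV", () => {
    expect(exportReport(["a", "b"], "csv")).toBe("a,b");
  });
});
```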
Once we are done implementing this solution, and the tests are written and passing, our peers should review the changes we made to the code. Once accepted, the changes should be shipped to production. We can confirm that everything works fine if we previously set up some monitoring dashboards, which I highly recommend.
Implement the great solution
Usually at this point, we have found a solution to the problem, and another problem waves at us, waiting for us to pick it up and find a fitting solution for it.
However, in this method, there’s a step before going after the next problem: implementing the great solution we thought about at the beginning. The main objective of this step is to pay off the technical debt introduced by the solution we just implemented. It can also be used to remove older debt introduced by other solutions in this area of the code base. It really depends on how much reasonable time we want to spend on it.
This step should take more time than the previous one, because we need to adapt the code we just changed in order to fit the new, greater solution. The high-level tests we wrote should help us avoid unwanted regressions. We should also make sure to write unit tests for the new solution, since they are fast and very flexible regarding the combinations of inputs and data involved.
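Continuing the hypothetical example, the “great” solution could replace the growing if/else chain with one small serialiser per format, dispatched through a map: adding a format becomes a one-line change, each serialiser is a pure function that is easy to unit-test, and the high-level tests from the previous step should pass unchanged.

```typescript
// The "great" solution: one serialiser per format, dispatched through a map.
// Adding a format is a one-line change, and each serialiser is a small pure
// function that is trivial to unit-test in isolation.
type Format = "csv" | "json" | "xml";

const serialisers: Record<Format, (data: string[]) => string> = {
  csv: (data) => data.join(","),
  json: (data) => JSON.stringify(data),
  xml: (data) =>
    `<report>${data.map((d) => `<row>${d}</row>`).join("")}</report>`,
};

function exportReport(data: string[], format: Format): string {
  return serialisers[format](data);
}
```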
If we got favourable answers to all the questions mentioned earlier, then we shouldn’t have any issue with the stakeholders regarding this step.
Keep in mind that this great solution could become a poor one in the future, depending on the requirements (i.e. new problems) that could emerge, as well as the learnings from the author and/or team along the way. When this happens, remember to stick with this method, and you should be able to bring value fast, while keeping the technical debt at a low level.

Summary

This chart is not based on real data; it’s just a way to represent the problem-solving methods we’ve been talking about in this article.
With the “quick and dirty” method (the blue, ever-growing line), we bring value fast at the beginning, but the more time passes, the longer it takes to solve problems. It’s pretty easy to convince people to use it, because the business will be happy for a while — until the technical debt, which is hard to measure, becomes overwhelming.
With the “long and perfect” method (the red, steady line), we bring value at a regular pace, but not really fast. It’s hard to convince people to use it, because the business wants the solution to its problem as soon as possible (and who can blame it?), and it doesn’t care how “beautiful” the solution is. Furthermore, the definition of “beautiful” can change over time.
With the “pragmatic” method I introduced (the orange, oscillating line), we bring value rather fast, and we make sure we keep this pace in the future. It should be easy to convince people regarding the first part. The second part (i.e. the refactoring) is a bit more challenging, because it’s difficult to measure the consequences of technical debt until it’s too late. It is our duty as software developers to explain why this debt shouldn’t be ignored, and to act on it.
Thank you for reading! I hope I’ve convinced you to try this “pragmatic” way of solving the problems of your organisation. Sometimes the lack of resources in a team and the volume of problems to solve are simply too great to apply this method. In such cases, you should always remember that you are borrowing time from the future to solve problems faster now. Reducing technical debt by refactoring early means investing time early to avoid spending too much time later.
Special thanks to Tristan Sallé and Jérémy Grosjean for their reviews and suggestions.
