
Controlled Experiments: The Safe Way to Use New Technology in IT

by Vladislav Bakin, June 28th, 2023

Too Long; Didn't Read

The IT industry evolves fast: almost every day, new libraries, frameworks, and even languages appear on the market. This creates an obvious question: when should we refactor our application to use new technology, and when not? The "controlled experiment" strategy significantly mitigates the risks of adopting new technologies and unlocks the potential to use them often. The method has three steps:

  1. Define clear minimum requirements for the technology itself
  2. Implement using a controlled experiment
  3. Use a trial period before scaling the result

The IT industry evolves fast: almost every day, new libraries, frameworks, and even languages appear on the market. Each of them tries to solve real problems the industry faces or to simplify software engineers’ daily routines. However, there’s another side of the coin. Most of them will never become mainstream: they will stay small and niche, and some will disappear altogether. But a few will change industry standards and become the new mainstream.


This state of affairs creates an obvious problem: when should we refactor our application to use a new technology, and when not? How can we predict whether a popular library will lose support in a year, turning application maintenance into hell? And on the other side, how do we improve engineering performance without getting stuck with a completely outdated application a few years down the road?


These questions are somewhat rhetorical, since we’ll never have a crystal ball that gives the right answer, but this article covers a method that helps mitigate the risks.


In this article, "new technology" doesn’t necessarily mean technology that is new to the market. It could be well-known, even mainstream, technology with which you, your team, or your company have no experience. Most of the issues are the same, and we can mitigate them in the same way.


Let’s list the potential risks that come with adopting a new technology in an application:


  1. The technology may still be unstable; during implementation, we may hit a blocking issue that ruins the whole process and forces us to revert.
  2. We (and possibly the industry) may not have enough knowledge and experience. As a result, we’ll make mistakes and suboptimal decisions in the application design, which leads to a poor result (i.e., implementing the technology for the implementation’s own sake).
  3. The technology may become outdated or even die. The integration then loses its benefits and simply turns the application into legacy.
  4. The result may have no effect at all. The technology is implemented and works correctly, but it doesn’t provide any benefits, so we’ve spent effort for nothing.


The "controllable experiment" strategy helps significantly mitigate these risks and unlocks the potential to use new technologies often. Despite not preventing failures, it reduces their count and, most importantly, decreases the scale and size of arising problems.


Define clear minimum requirements for the technology itself

No criterion guarantees that a popular technology will remain stable and well-maintained. Still, popularity is a good baseline requirement.


It’s not rocket science that a library used by more than 10,000 people and maintained by ten engineers or a well-known vendor has a better chance of being supported for a long time than a library written by a single engineer and used by 1,000 people. Even more important is the Lindy effect: the future life expectancy of a technology grows in proportion to its current age. So, a great option is to check how long the technology has already been on the market, how well it’s maintained, and how quickly it’s updated.


Define precise, unambiguous, measurable minimum requirements for the technologies you can use in your application.
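
To make such requirements concrete, it can help to write them down as measurable checks. Below is a minimal sketch in TypeScript; the TechCandidate shape, the field names, and all the thresholds are illustrative assumptions, not prescribed values — pick numbers that match your own risk tolerance.

```typescript
// Hypothetical sketch: minimum requirements expressed as measurable checks.
// All field names and thresholds are illustrative, not prescriptive.
interface TechCandidate {
  name: string;
  weeklyDownloads: number;        // e.g., from the package registry
  activeMaintainers: number;
  yearsOnMarket: number;          // Lindy effect: age predicts longevity
  monthsSinceLastRelease: number; // is it still actively maintained?
}

function meetsMinimumRequirements(tech: TechCandidate): boolean {
  return (
    tech.weeklyDownloads >= 10_000 &&   // enough adoption to matter
    tech.activeMaintainers >= 3 &&      // not a single-person project
    tech.yearsOnMarket >= 2 &&
    tech.monthsSinceLastRelease <= 6
  );
}

// Usage: reject candidates early, before any implementation work starts.
const candidate: TechCandidate = {
  name: "some-library",
  weeklyDownloads: 250_000,
  activeMaintainers: 12,
  yearsOnMarket: 4,
  monthsSinceLastRelease: 1,
};
console.log(meetsMinimumRequirements(candidate)); // true
```

The point of writing the checks down is that they stay unambiguous: anyone on the team can verify a candidate against them, and the thresholds can be revisited explicitly rather than renegotiated for every new library.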


Implement using a controlled experiment

So, the technology meets the first criterion: it looks stable and popular, and it promises to solve many issues and improve your application or your engineering process. You’ve read plenty of articles, and rolling it out across the entire application looks like a great idea. Unfortunately, there are still lots of risks, and a controlled experiment is the technique to mitigate them.


First, define target goals, metrics, and criteria. They may be vague at first, but it’s important to define them before you start the implementation. Adapting them later is fine, but defining them up front is essential.


Secondly, choose a target part of the application or a service that isn’t too critical for the business or the platform as a whole. Every innovation leads to mistakes, and isolating the experimental part is a great way to control the size of the risk. In other words, you set up a sandbox for the experiment, where it is reasonably safe to fail.
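
In practice, the sandbox often takes the shape of a feature flag or a routing rule that confines the new code path to the chosen, non-critical part of the application. The sketch below assumes a hypothetical environment flag and two interchangeable helper functions; keeping the old path alive also makes the rollback discussed below trivial.

```typescript
// Hypothetical isolation via a feature flag: the experiment only affects one
// non-critical part of the app, and the old implementation stays in place.
import { fetchSettingsViaRest, fetchSettingsViaGraphQL } from "./settings-api"; // assumed helpers

const isExperimentEnabled = process.env.GRAPHQL_SETTINGS_EXPERIMENT === "true";

export async function loadSettings(userId: string) {
  if (isExperimentEnabled) {
    return fetchSettingsViaGraphQL(userId); // new, experimental code path
  }
  return fetchSettingsViaRest(userId);      // existing, proven code path
}
```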


Next, set a deadline, and keep it short, e.g., a fortnight or a month. You may extend it later, but as mentioned before, checkpoints are very helpful for containing unpredictable risks.


And finally, be ready to roll back the changes. It’s always a tough decision to throw recently implemented work into the trash bin, but wasting even more time is less pleasant.


Start the experiment with a clear goal, a defined scope, and strict time boundaries.
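
One lightweight way to enforce all of the above is to write the experiment definition down as data before writing any code. The sketch below is hypothetical: the fields (goal, metrics, scope, deadline, rollback plan) are just one possible shape, and the values are invented for illustration.

```typescript
// Hypothetical experiment definition, written down before implementation starts.
interface ExperimentDefinition {
  goal: string;
  successMetrics: string[]; // how the result will be judged
  scope: string;            // the isolated, non-critical part under test
  deadline: Date;           // short, with an explicit checkpoint
  rollbackPlan: string;     // what "revert" means for this experiment
}

const graphqlTrial: ExperimentDefinition = {
  goal: "Check whether GraphQL speeds up feature delivery on the settings page",
  successMetrics: [
    "No regression in page load time",
    "Team agrees the data-fetching code is simpler after one week",
  ],
  scope: "Settings page only; the rest of the app keeps the existing REST client",
  deadline: new Date("2023-07-05"),
  rollbackPlan: "Delete the GraphQL branch and keep the REST implementation",
};
```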


Set a trial period before scaling the result

It’s tempting to scale the improvement to the entire application or company as soon as the experiment succeeds.


Yet it’s not the smartest step. A better option is to wait for a while, collect more information about the new technology, and get used to it. In the glow of success, it’s easy to miss an important detail, and a lack of experience with the new technology can turn into a bigger failure later.


After a successful integration, the best option is to keep moving forward with a few additional controlled experiments.


Example

To illustrate what this usually looks like, I'll share a story from my own experience.


Today, GraphQL is a common, mainstream technology. It wasn’t always like that. Several years ago, when GraphQL wasn’t yet widely used, my team and I realized that it might significantly increase our engineering speed and quality.


GraphQL was introduced and backed by Facebook and already had several widely used client libraries. Together with the team, we agreed that it looked interesting and promising, and we defined the following controlled experiment:

If we can rewrite a specific part of the application to use GraphQL within a week, and after that we still agree it’s an improvement, we’ll start using it for new application parts and services.


Unluckily, the experiment failed after the first week. Relay (the library we chose) had limitations and conventions that would have required significant refactoring of both the backend and frontend parts of our application. However, during the process we gained experience and found another library, Apollo, that didn’t introduce those limitations but promised the same improvements we expected. So we decided to run a new experiment: starting over with the same experiment definition but with the alternative library. This time it succeeded, and after a week, no one doubted that we should adopt the technology.
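
For context, the end state looked roughly like this: a single declarative query instead of several hand-written REST calls. The snippet below is a minimal sketch using Apollo Client; the endpoint and schema fields are invented for illustration and don’t reflect our actual application.

```typescript
// Minimal Apollo Client sketch; the endpoint and query fields are hypothetical.
import { ApolloClient, InMemoryCache, gql } from "@apollo/client";

const client = new ApolloClient({
  uri: "https://example.com/graphql", // placeholder endpoint
  cache: new InMemoryCache(),
});

// One declarative query replaces several chained REST requests.
const GET_DASHBOARD = gql`
  query GetDashboard($userId: ID!) {
    user(id: $userId) {
      name
      projects {
        id
        title
      }
    }
  }
`;

client
  .query({ query: GET_DASHBOARD, variables: { userId: "42" } })
  .then((result) => console.log(result.data));
```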

Summary

Using new technologies is tempting but challenging at the same time. It always brings additional expenses and failures. But we can mitigate the risks and keep expenses under control by following the "controlled experiment" strategy. It also increases the overall number of experiments and keeps the application modern and performant.


Of course, some experiments will fail, but controlled experiments keep the cost of each failure within a manageable limit. At the same time, the team gains experience with modern technologies.


Even better, a few successful experiments can (and will) cover the expenses of all the unsuccessful attempts. After all, a failure’s cost is capped by the experiment’s definition, while a success’s benefits aren’t capped at all (you wouldn’t roll back an experiment just because it overshot its minimum success metrics, right?).