
Moving Fast and Breaking Things vs Moving Slow so That it Doesn't Break - Technological Evolution

Erlend Dahlen (@erlenddahlen), Cybernetics & Robotics MSc student. Weekly newsletter: erlenddahlen.substack.com

Digital technology is characterized by “move fast and break things”. Heavy asset industries, in contrast, are characterized by “this cannot break, so we have to move slow”. When digitalization meets industry, these mottos collide. The optimal result in this collision is “move fast without breaking things”, or at least try to avoid “move slow and break things”.

Experimenting in the physical world is slow, expensive and limited by the physical building blocks available. The architect, however, is a fast, inexpensive and creative mind. Digital technologies are now creating a new environment in the digital domain for the mind to experiment.


The physical world can be interpreted in infinitely many ways. When the mind interprets and processes the world, it creates new information. Tools, such as sensors, are another way to create new information. When interpreting the physical world, the bottleneck for creating new information is the human mind and the tools it uses.

The term “capturing data” needs to be understood in this context: someone has to interpret the world and create the information. The data does not exist in the physical world by itself.

How to capture data is best explained through an example. An engineer wants to understand how water flows through the different pipes of a complex process. To measure how much water is flowing through each pipe, she installs various flow sensors, connects them, and stores their readings on a server.

Being able to do this requires deep understanding from the engineer, built up by processing huge amounts of other data throughout her life. After collecting data for some time, she accesses the database and starts analyzing the process. In doing all of this, the engineer has gained deep insight into the specific problem, into how the data is captured, and into the data itself.
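The capture step can be sketched as follows; a minimal, assumed data model in which each reading is stored with a sensor identifier and a timestamp (the `flow-01` id and the in-memory "database" are illustrative, not from the article):

```python
import time
from dataclasses import dataclass, field

@dataclass
class Reading:
    sensor_id: str    # hypothetical identifier chosen by the engineer
    timestamp: float  # seconds since the epoch
    value: float      # a raw number, meaningless without context

@dataclass
class Database:
    """Stand-in for the server the engineer stores readings on."""
    readings: list = field(default_factory=list)

    def store(self, reading: Reading) -> None:
        self.readings.append(reading)

# Simulate capturing one reading from a flow sensor.
db = Database()
db.store(Reading(sensor_id="flow-01", timestamp=time.time(), value=1.0))
```

Note that the stored value is just a bare number: everything that makes it meaningful still lives in the engineer's head, which is exactly the problem the next section addresses.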

Her frame of reference relates the data to the physical world: which sensor captures which data, and how the data is organized. Without this extra knowledge from the engineer, the data is useless.

Preserving the understanding of the data is done through contextualization. This process is best explained by inspecting a single data point, for instance the number “1”. The number is not connected to anything, there is no context, and the data point, in and of itself, is useless. To make the number understandable, it needs to be connected to some other information: metadata.

The data point is collected from one of the sensors measuring flow rate, so the number “1” should be connected to “liters per second”. It is also known that the liquid is water, so an additional connection can be made. By connecting the data point with metadata, the context has been transformed from “1” to “1 liters per second of water”.

Making new connections and transforming the context are two attributes of data used in contextualization.
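The transformation described above can be sketched in a few lines; the metadata keys and the sensor id are assumptions for illustration, not a real schema:

```python
# A raw data point and the metadata that gives it meaning.
raw_value = 1

metadata = {
    "unit": "liters per second",  # from the sensor's spec sheet
    "medium": "water",            # known from the process
    "sensor_id": "flow-01",       # hypothetical identifier
}

def contextualize(value, meta):
    """Attach metadata to a bare number, turning data into information."""
    return f"{value} {meta['unit']} of {meta['medium']}"

print(contextualize(raw_value, metadata))  # 1 liters per second of water
```

The point is not the string itself but the connections: each key in `metadata` is one of the links the engineer would otherwise carry around in her head.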


The engineer can make these connections because she has a good understanding of the data and the process. Another person, with no prior understanding of the process or data, would provide zero insight. It is therefore essential not only to focus on capturing the data, but also on capturing the understanding of the people.

When broadening the scope to an entire plant with thousands of sensors, this becomes increasingly important. To understand this better, assume a plant with no connections to the data. If all the data is understood well enough by the staff at the plant, the mapping could be solved manually. A simple solution, but it consumes a lot of time from highly skilled workers. Instead of making all the connections manually upfront, another approach is to build a model that proposes new connections to the engineers.

For instance, when an engineer performs maintenance on a sensor, the model could ask “Is this a valve?”. This last approach seems to be the most effective. In the database there are numerous time series, but some of them originate from similar sensors or measure similar processes. Using this insight, these time series can be identified and clustered together.

An engineer can then inspect some of the time series in a cluster and make the right connection for all the time series within that cluster.
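One simple way to form such clusters, sketched here under the assumption that similar sensors produce correlated time series, is to group series whose Pearson correlation with a cluster representative exceeds a threshold (the series names and threshold are illustrative):

```python
import math

def correlation(a, b):
    """Pearson correlation between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def cluster(series, threshold=0.9):
    """Greedily group series that correlate strongly with the first
    member of an existing cluster; otherwise start a new cluster."""
    clusters = []
    for name, values in series.items():
        for c in clusters:
            if correlation(series[c[0]], values) >= threshold:
                c.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Two flow-like signals and one unrelated temperature-like signal.
series = {
    "flow-01": [1.0, 2.0, 3.0, 4.0],
    "flow-02": [2.1, 4.0, 6.2, 8.1],   # roughly proportional to flow-01
    "temp-01": [20.0, 19.0, 21.0, 20.0],
}
print(cluster(series))  # [['flow-01', 'flow-02'], ['temp-01']]
```

The engineer then labels one series per cluster, and the connection propagates to the rest, which is exactly the time saving the article describes.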

After capturing and contextualizing the data, the next step is to visualize the information and extract insights. Visualization is a useful tool to better understand the problem, as it is hard to know which part of the process is causing the problem and how it propagates.

Although visualizing the problem will not solve it, it provides strong clues as to what to inspect further and how to proceed. An easy way to start is to display real-time or historical values from a few sensors. This can improve understanding of the problem and reveal, for instance, that signals are periodic or spike at random times.

Comparing the behaviour of several sensors might also reveal that some diverge from the rest. Easily implemented, low-cost visualizations can provide surprisingly deep insights.
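The divergence check behind such a comparison can be sketched with a simple rule, assuming (for illustration) that the sensors measure comparable quantities: flag any sensor whose latest reading lies more than a chosen number of standard deviations from the mean of the group:

```python
def divergent_sensors(latest, k=1.5):
    """Flag sensors whose latest reading lies more than k standard
    deviations from the mean across all sensors."""
    values = list(latest.values())
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [s for s, v in latest.items() if abs(v - mean) > k * std]

# Three similar flow readings and one outlier (hypothetical values).
latest = {"flow-01": 1.0, "flow-02": 1.1, "flow-03": 0.9, "flow-04": 5.0}
print(divergent_sensors(latest))  # ['flow-04']
```

A dashboard that merely colors `flow-04` red is cheap to build, yet it immediately tells the engineer where to look next.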


Implementing more comprehensive visualizations is often complex. Instead of visualizing sensors as separate entities, they can be aggregated into one visualization for the entire plant. The dynamics of the system and the relationships between sensors should be incorporated to capture the state of the plant and how it responds to changes.

This is conceptualized as a digital twin, a digital representation of a physical system. Everything that happens within the plant in the real world is measured and visualized in the digital model. To understand what is happening, people are not required to be at the plant, but can instead monitor the model remotely.

The digital twin can be implemented as a 3D-model, making the interface intuitive for the end user.

Implementing a digital twin enables new applications. First, simulations can be used to prepare for different scenarios. For example, how will extreme temperatures affect the risk in operations at the plant? Based on this simulation the plant can implement preventative measures and extraordinary procedures, such as reduced production or increased cooling.
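A scenario simulation like the one above can be sketched as a toy model; every coefficient here is an illustrative assumption, standing in for the physics a real digital twin would encode:

```python
def required_cooling(ambient_c, rate):
    """Toy model: cooling demand grows with production rate (0..1)
    and with ambient temperature above 25 C. Coefficients are
    illustrative assumptions, not real plant data."""
    return 10.0 + 20.0 * rate + 0.8 * max(ambient_c - 25.0, 0.0)

def safe_production_rate(ambient_c, cooling_capacity=40.0):
    """Highest production rate (in steps of 0.05) whose cooling
    demand stays within the plant's cooling capacity."""
    for step in range(20, -1, -1):
        rate = step / 20
        if required_cooling(ambient_c, rate) <= cooling_capacity:
            return rate
    return 0.0

print(safe_production_rate(25.0))  # 1.0 (normal conditions)
print(safe_production_rate(40.0))  # 0.9 (reduced production in extreme heat)
```

Running such what-if scenarios against the twin, rather than the plant, is what makes preventative measures like reduced production or increased cooling cheap to evaluate.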

Second, broader communication is enabled: the specific processes and challenges can be described and explained to new people.

Engaging more people, from different disciplines and with different mindsets, is crucial to solve complex problems and foster innovation long term.

Digitalization can be understood as sequential steps: capture, contextualize, visualize. But there are different approaches for deciding when to take the next step. The first is the Waterfall model, which requires that one step be completely finished before proceeding to the next. This would mean first capturing all the data before contextualizing everything, and so on for each step.

As previously discussed, this does not work, because there are no upper boundaries: there is always more data to capture, more context to connect, or new ways to visualize the data. This is solved by imposing limitations: only collecting relevant data, contextualizing what is necessary, and visualizing what seems reasonable. This is hard and can be subjective, requiring good judgement from the people responsible for the digitalization.

Another problem is the assumption that the layers are independent of each other. In reality, what you want to visualize might determine how you should contextualize the data, and the contextualization might in turn require capturing the data in a specific way.

Another approach is the Agile methodology, which views the steps as ever-evolving and interdependent, focusing on continuous iterations over small use cases. A use case is a specific problem whose solution involves all the steps. In this approach, several teams work in parallel on their respective use cases. Among the advantages of this approach are agility, quick results and fast feedback. At the same time, there is a huge pitfall.

A desired outcome is to build a database of the entire plant with all data captured and contextualized in a consistent way. If the teams do not coordinate and communicate with each other, the result will be a fragmented database with varying contextualization.

The use cases might be solved, but the long-term value potential of a digital plant will not be reached.


The waterfall and agile approaches differ in how they reach scale. The waterfall approach requires a lot of time and work to solve and implement each layer, without solving any real problems along the way. However, once the infrastructure is complete, it can be used to solve problems fast, at scale.

The agile approach provides another perspective on how to scale. Each use case can be seen as a thin slice containing all the steps, and solving new use cases broadens your slice of the infrastructure.

The big difference being that the agile approach receives feedback and solves problems from the beginning.

Building a digital environment is a long journey, but will ultimately be the foundation for most experimentation in the future. Although experimentation will happen in the digital domain, the actual changes will manifest in the physical world.

It is difficult to predict how the digital environment will evolve; it might therefore be easier to understand the possible changes in the physical world.

Plants will increasingly be changed by digital experimentation. The growing trend in automation and digital twins will culminate in unmanned, autonomous plants. Removing people from plants, combined with digital experimentation, should result in a much leaner and more efficient design. Future plants need to be mobile, both in terms of physical movement of the machine parts and in facilitating more communication between the parts along the way.

Machines that are upgradable or transportable should increasingly be rented instead of permanently owned. The result is substantially smarter and leaner plants. This should in turn change inventory, logistics and locations.

Subscribing to a service instead of buying the hardware yourself might sound familiar. The concept is well known in the software industry as SaaS, or Software as a Service.

The idea is to sell the software as an online service that is centrally hosted by the company, so that the consumer does not have to set up their own hardware to host the service.

The example described above could in turn be termed MaaS, or Machine as a Service.

Here the machine is used on demand and continuously upgraded, maintained and transported by the company, so that the consumer does not have to commit to purchasing their own machines. This service is crucial for transforming digital experimentation into physical changes. The first benefit is always having the newest and best machinery, as opposed to owning permanent, depreciating equipment.

Another benefit is to have flexible production. The production can be adjusted in response to demand by renting more or different machines.

The digital environment operates much more like the mind: it is fast and inexpensive. And perhaps most importantly, it provides almost infinitely small and limitless building blocks. The mind is no longer restricted by physical objects, only by its own imagination.

As the pace of change in the physical world accelerates, it will always lag the pace of the digital environment. Moving the experimentation from the physical world to the digital domain will not only enable the motto “move fast without breaking things”, it will also empower a new wave of creativity.

