
A Programmer's Guide to Crack 'Twice the Work, Half the Time' Code

by German Tebiev, January 12th, 2023

Too Long; Didn't Read

Can programmers do twice the work in half the time? This story is a deep dive into the current methodologies and their flaws, along with a new approach proposed by the writer. Enjoy reading!

Theoretical management is full of granddads. I've read works by Deming, Goldratt, Ohno, and Drucker. I've heard of Ackoff and Juran. I enjoyed the humane and careful attitude in every book I've read. Never was it: "Exploit people to their exhaustion". Peter Drucker, in his "Management Rev Ed", even guided us, managers of the 21st century, in what we should do:


The most important, and indeed the truly unique, contribution of management in the twentieth century was the fiftyfold increase in the productivity of the manual worker in manufacturing. The most important contribution management needs to make in the twenty-first century is similarly to increase the productivity of knowledge work and the knowledge worker.


Twenty years into the 21st century, how close are we to fulfilling this ambitious goal in the software development world? I see that we are far from it. Let's explore where we are, what problems we have, and what we can do about them.

Exploring the Widespread Approach to Software Development Efficiency

"The Art of Doing Twice the Work in Half the Time" is the subtitle of the "Scrum" book by Jeff Sutherland, a co-creator of the framework. I like this subtitle, as it strongly reflects how much more efficient we can become in our programming efforts. It's not all cloudless, though. The first problem we encounter is that it is hard to define what efficiency means in our job. The next problem is understanding whether we become more efficient over time.


But why do we want efficiency in the end? Well, it's about doing the same thing with less effort, and in many spheres of our life, it is great to have the same or better results faster or at a lower price.


Let's return to our question. Scrum practitioners propose story points as a measurement of a task. There is a definition from the scrum.org website:

A Story Point is a relative unit of measure, decided upon and used by individual Scrum teams, to provide relative estimates of effort for completing requirements.


A story point is a magnitude of complexity relative to a reference task. You and I have a feeling of how much effort it took us to implement that not-very-hard task. If I tell you that the new job will cost three story points, I am saying it will be three times harder than the reference one. It's my guess, and who knows how hard it will actually be.


Let's focus again on efficiency. If we deliver tasks estimated in 25 story points in this sprint, are we better than in the previous sprint when we had only 15? Or maybe we were more careful and gave higher estimates for the same effort? But how can we even compare? We don't do repetitive work in software engineering on a factory-sized scale. We design and implement information factories that provide services. Is there a place for efficiency talks in our industry?

There is a place for these talks. At least, we can intentionally slow things down: e.g., we can procrastinate or jump from one task to another. If we can make things less efficient, there is hope for the opposite. However, I don't see story points as a helpful measurement instrument here. They can effortlessly be abused, intentionally or unintentionally. We need to search for something better.

Defining the Object of Measurement

Before defining the way to increase the speed of our programming effort, we need to determine what this effort we want to measure actually is. I don't see anything we can use industry-wide, but the positive side is that we don't need such breadth for efficiency improvement. What we need is something that describes a meaningful work step in developing your programming product. We can use epics, tasks, features, or anything else representing a positive adjustment of the system under development.

In my current place, we use three different levels:

User-valuable feature:
 ⎿ Its slice for the engineering team's convenience:
   ⎿ The specific task inside a slice (e.g., back-end task).

We need the adjustment to have the following characteristics:

  1. Has the beginning time and the completion time;
  2. Occurs regularly.
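The two requirements above can be captured in a tiny data model. This is a minimal sketch of my own (the `Action` class and its fields are an illustration, not part of any specific tool):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Action:
    """One completed action of a kind, e.g. a user-valuable feature or a task."""
    kind: str
    started: datetime
    completed: datetime

    @property
    def duration(self) -> timedelta:
        # The first characteristic: beginning and completion times give a duration
        return self.completed - self.started

# The second characteristic, regularity, means we accumulate many such records over time
story = Action("user-story", datetime(2023, 1, 2), datetime(2023, 1, 11))
print(story.duration.days)  # 9
```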

So what we define here is "this big piece of work". Let's name it the action in the remainder of this article. Not all actions are the same: I've demonstrated three kinds in the example above.


We need action to occur regularly.


This requirement comes from the fact that you can't be ultimately efficient from the start but can become more efficient over time. It's not the drawback of the described approach. Scrum works this way, Toyota Production System works this way, and science works this way. We require repeatability to discover the current state and to improve it consistently.


You can do anything new with ultimately optimized efficiency only by chance. And the more complex the action, the less likely that is. Preparation in advance can help. However, the ability to prepare in advance means the action or its subactions occurred in the past, and there is knowledge about them. There is nothing to prepare for something completely new. On the other hand, we hardly ever meet complete novelty in our lives. A fraction of previous experience is always relevant to never-experienced situations.


To sum up, we have an action of a kind as an essence to measure.

How To Measure an Action of a Kind?

At first sight, the previous section doesn't add anything. We do some actions of a kind. How is that better than tasks measured with story points, t-shirt sizes, or animals? A name is not the only gain. An action of a kind has two timestamps, and we can measure its duration by subtracting the beginning from the completion. Duration is an excellent gain here, as it is our key to the language of everyday reality.

  • How much time did it take to complete an epic?
  • It took us 39 days from start to completion.

Wonderful.

Is There Anything Else That We Gain?

The second requirement to our action, the regular occurrence, gives us so much that it is hard to believe. First of all, we gain a flow of actions of a kind. Here is a definition of flow from the "Actionable Agile Metrics for Predictability" book by Daniel Vacanti:

Simply stated, flow is the movement and delivery of customer value through a process.

Our requirement for two timestamps, the beginning and the completion of an action, gives us a good set of new metrics. Here they are from the very same book:

Work In Progress (the number of items that we are working on at any given time), Cycle Time (how long it takes each of those items to get through our process), and Throughput (how many of those items complete per unit of time).
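Given begin and end timestamps for each action, these three metrics fall out almost for free. Here is a sketch with made-up dates (the helper names are mine, not Vacanti's):

```python
from datetime import date

# (started, completed) pairs for finished items; the dates are invented for illustration
items = [
    (date(2023, 1, 2), date(2023, 1, 6)),
    (date(2023, 1, 3), date(2023, 1, 10)),
    (date(2023, 1, 5), date(2023, 1, 9)),
]

def wip_on(day, items):
    """Work In Progress: items already started but not yet completed on a given day."""
    return sum(start <= day < end for start, end in items)

def cycle_times(items):
    """Cycle Time per item, in days."""
    return [(end - start).days for start, end in items]

def throughput(items, period_start, period_end):
    """Throughput: completed items per day over a period."""
    done = sum(period_start <= end <= period_end for _, end in items)
    return done / ((period_end - period_start).days or 1)

print(wip_on(date(2023, 1, 5), items))  # 3
print(cycle_times(items))               # [4, 7, 4]
print(round(throughput(items, date(2023, 1, 1), date(2023, 1, 11)), 2))  # 0.3
```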

I can intrigue you if you think it's all we have here. We are at the very beginning. Another treasure the flow gives us is the trace that it leaves as time passes. This trace allows us to understand the system better. We can capture it using several diagrams. One of them is the Cycle Time Scatterplot.

Cycle Time Scatterplot for Demo Data in ActionableAgile Instrument

Its delight comes from the fact that it captures "how we do things here". It doesn't require anything from your process, no particular methodology. Do you want to capture teeth brushing flow using the cycle time scatterplot? Just do it. Do you want the same for the houses built in your area? Absolutely fine. Do you want to track the development lifecycle, including the A/B experiments done after developing new features? Please start and do.


In the picture, you can also see the percentiles lines signed with 50%, 70%, 85%, and 95% on the right. What do they mean? On the left side, there are days. You can read the 85% and 16 days in the following way:

For 85% of items that entered our system in the period under review, it took 16 days or less to leave it.
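That percentile reading can be reproduced in a few lines. A sketch with invented cycle times, chosen so the 85th percentile lands on 16 days, as in the diagram:

```python
import math

# Invented cycle times (days) for 20 items completed in the review period
cycle_times = [1, 2, 2, 3, 3, 4, 5, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 22, 28, 45]

def percentile(values, pct):
    """Smallest value such that at least pct% of the items are at or below it."""
    ordered = sorted(values)
    k = math.ceil(pct / 100 * len(ordered))
    return ordered[k - 1]

# "For 85% of items ... it took 16 days or less to leave the system"
print(percentile(cycle_times, 85))  # 16
print(percentile(cycle_times, 50))  # 7
```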

It is the second time I have used the word "system" in this section. What does it mean? Let's define it the following way for this story:

System is something doing actions of a kind.


One action of a kind in the house-building system example from above is building a house. Making one kilometre of road doesn't count as an action here: it's a different kind of action and another system. There is no concrete dividing line, but we want our houses to be alike, and the same goes for teeth brushing and software development with A/B tests. For something different enough, we can come up with another system.

One More Important Gain

It's time to discuss one more effect that would help us ensure the accuracy of our enhancement efforts. Imagine you have a team and a need to create new software. You take user stories as a measurement of progress, as an action. After completing your first story, it's time to do the retrospective to see where we could be better next time.


Is there something in this logic that leads us to a trap? Let's have a closer look.


During the implementation of the first user story, the main obstacle was agreeing upon the necessary libraries and installing the required software. It took some time, and it was a real pain. During the retrospective, the team discussed how painful it was and how we could do better next time. The quite obvious miss is that next time you will hardly need to agree upon the libraries and install the software again. Libraries usually remain for a long time. Installing software will be part of onboarding new members, but it won't bother the already established team on their second user story. It's different enough and can now become part of the onboarding system.


Let's have a look at the following piece of programming wisdom by Donald Knuth (or Tony Hoare):

Premature optimization is the root of all evil.

I guess you've met this one, which tells you not to think of the performance in the early stages of software development. You might have seen this wisdom in the following form:

Make it work, Make it right, Make it fast.

The example about the installation of libraries shows that the adage is relevant not only to the code but also to the coding team! What a mystery we meet here! It's not a mystery but an attribute of a system. There are at least two reasons we should avoid jumping right into enhancements after the first try.

Statistical Reason Not To Jump Right Away Into the Enhancement

Every complete action of a kind has its duration. The duration consists of two parts: one caused by common reasons and another caused by special reasons.

Let's refer to the teeth-brushing example once again. Commonly, coating the toothbrush with toothpaste takes a few seconds. In special cases, you need to take the toothpaste out of the closet, open it, and use it. Here the whole action takes several minutes. If, for whatever reason, we need to think of the efficiency of the toothbrush coating subaction in teeth-brushing, doing so after the initial one would be misleading. The initial action contains an extra part and differs from the typical action we want to speed up.

The nature of being special leads to the inconsistent appearance of the special duration causes. What always shows up is the core of our action, the fruitful target of our enhancements.

Theory of Constraints Reason Not To Jump Right Away Into the Enhancement

What does the theory of constraints tell us? It tells us that a whole producing something will only be as productive as its least productive part. Imagine we have a company building tiny one-floor houses. Our yearly capacities are the following:

  • 6 basements,
  • 24 sets of walls,
  • 52 roofs.

How many houses can we build per year? You might answer six, but I suggest saying no more than six. Our building process is a sequential one: basement → walls → roof. Finishing the last, sixth house can fall beyond the end of the year.

Schedule of Our Imaginary Building System

If we increase the number of walls we raise or roofs we build, will this change the capacity of our whole company? Will we produce more than the mentioned "no more than six"? No, the basement still limits us.

The capacity numbers from above come from the experience of this hypothetical company. We don't have this experience after implementing the first user story. We have yet to determine the constraint of our user-story-building system, as we don't know how long each subaction lasts. Suppose we have quality assurance as part of our user story development process. Testing a user story lasts four hours. Developing a user story lasts five working days in general. Let's say there are 250 working days in a year. Do you expect to have 50 user stories complete at the end of it, or 500 (two tested per eight-hour day)? As with the houses and basements, at most 50 per year. We need to gather statistics to understand the shape of our action and its least productive part.
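Both the house example and the user-story example reduce to the same computation: the throughput of a sequential process is the minimum over its stages. A sketch using the numbers above (the capacity figures are the article's illustrative ones, not real data):

```python
# Yearly capacities of the sequential house-building stages
capacity = {"basements": 6, "walls": 24, "roofs": 52}

# The whole is only as productive as its least productive part
bottleneck = min(capacity, key=capacity.get)
print(bottleneck, capacity[bottleneck])  # basements 6

# The user-story example: yearly development vs. testing capacity
working_days = 250
dev_capacity = working_days // 5        # 5 working days per story
test_capacity = working_days * 8 // 4   # 4 hours per story, 8-hour days
print(min(dev_capacity, test_capacity)) # 50
```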

How Many Complete Actions Are Enough To Begin the Enhancements?

To be entirely sure, I suggest having an infinite number of complete actions at this point. Only after having this exact number can you be 100% sure what to enhance first.🥁

For those outside the world of pure mathematics, let's consider the following thoughts. Here is a reference from the "Actionable Agile Metrics for Predictability" book referencing the "How to Measure Anything" book:

For example, Douglas Hubbard (whose book "How to Measure Anything" is listed in the Bibliography) advises his clients on his "Rule of Five": Rule of Five – There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population.

Five actions seem enough to start in-depth thinking about system improvement.

Please don't see this as a taboo to change something for the first five actions. Consider other aspects as well: health safety, team cohesion and more.
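The 93.75% in the Rule of Five is not magic: a random draw lands on either side of the median with probability 1/2, so all five samples miss it on the same side with probability 2 × (1/2)⁵ = 6.25%. A quick sketch that checks the formula and then verifies it empirically on a made-up, skewed population:

```python
import random

random.seed(42)

# Analytically: 1 - 2 * (1/2)^5 = 0.9375
print(1 - 2 * 0.5 ** 5)  # 0.9375

# Empirical check against an invented, skewed "cycle time" population
population = [random.lognormvariate(2, 0.8) for _ in range(10_001)]
median = sorted(population)[len(population) // 2]

trials = 10_000
hits = 0
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) <= median <= max(sample):
        hits += 1
print(hits / trials)  # close to 0.9375
```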

Where To Start the Enhancement?

If we take an elementary action, e.g., turning on the TV by pressing a button on it, we can think of it as a whole. To reduce the number of movements, and the overall time, buy yourself a remote control and keep it in one place near your sofa. In this example, the first action could take around 20 seconds and the second... one second. My congratulations! You've cut 95% of the previously required time and still receive the same value. What a fantastic waste elimination!


Not all actions are so straightforward. The already mentioned user story development is complex. It's challenging to handle the improvement there in one jump. We need to break things down into subactions, like in the house-building example. We can start with the following lifecycle:

  1. Analysis,
  2. Development,
  3. Testing,
  4. Done.


Where to start?


In Lean manufacturing, the process of creation, or action as named in this article, consists of subactions of three kinds:

  • Value-added activities;
  • Necessary non-value-added activities;
  • Waste.


Formulating the user story, designing a solution, and coding it are all value-added activities. Using git branching during development might be considered a necessary non-value-added activity: it doesn't add anything to the change itself but organizes the whole process. Waste prevents value accumulation for a while without a good reason. Waiting for a non-working database is waste.


In Lean manufacturing, the wastes (or muda) were defined by the Toyota Production System creator, Taiichi Ohno. At least, they were defined for Toyota, the car manufacturer. Other industries have their own variants. Here is ours, created by Mary and Tom Poppendieck in the "Lean Software Development" book:


  1. Partially done work,
  2. Extra processes,
  3. Extra features,
  4. Task switching,
  5. Waiting,
  6. Motion,
  7. Defects.

Or these? From the "Implementing Lean Software Development" book by the same authors:

  1. Partially done work,
  2. Extra features,
  3. Relearning,
  4. Unnecessary handoffs,
  5. Task switching,
  6. Delays,
  7. Defects.


At least software engineers can move now!😅


How could these pillars have changed so much in just a few years in our industry? I see the answer in the impossibility of having a list of wastes sufficient for all times to come. Even Toyota, at some point, came up with an eighth waste.

It's great that the list for our industry has changed so radically. This change opens our minds to reconsidering our thoughts about what is waste continuously. Here is one more view on what can be a wasteful part of software development:


One of the biggest misunderstandings in the world of software is the value of code. But code is a liability, as we'll say repeatedly in this book. The more code we write, the more complexity and risk we generate for ourselves.


It's a quote from "The Value Flywheel Effect" by David Anderson with Mark McCann & Michael O'Reilly. Whew, what a ride!


So, how do we start? By looking at the least productive subaction. What do we seek? Subactions that do not add value.

Reconsidering the User Stories Development Workflow

Let me remind you of the workflow that we had:

  1. Analysis,
  2. Development,
  3. Testing,
  4. Done.

Usually, these are the parts done by different people, and there is always some time to wait for the handoff. Let's document it:

  1. Analysis Active,
  2. Analysis Done,
  3. Development Active,
  4. Development Done,
  5. Testing,
  6. Done.

I didn't contrive these steps. I took them from the ActionableAgile Analytics product demo. Can we trust them? I will say yes, as I've seen different examples of real data, and this one looks close. Let's investigate the statistics of these stages. The table below demonstrates the averages.

The Average Values of the ActionableAgile Demo Data

The system cycle time is 9.37 days. This means that a task arrives at the Analysis Active stage, moves through all the following ones, and leaves Testing for Done, and, on average, this path takes 9.37 days. Stages with "Active" in their name seem to bring usefulness to the result, as does Testing. Stages ending with "Done" are queues: waiting, and nothing useful. If we mark them accordingly using the Flow Efficiency diagram, we'll see that, on average, only 40% of the time spent on a single user story is valuable.

Flow Efficiency Diagram of the ActionableAgile Demo Data


In this diagram, we also include the tracked blocked time and strange tasks, which spent all of their time in the "Done"-stages. If we exclude them, the flow efficiency for this demo example would be around 50%.
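Flow efficiency itself is a one-line ratio: value-adding time over total time. A sketch with invented per-stage durations that loosely echo the demo averages (roughly 29% of the time stuck in Analysis Done and about 40% flow efficiency overall):

```python
# Invented average time per stage, in days, loosely echoing the demo data
stages = {
    "Analysis Active":    1.2,
    "Analysis Done":      2.7,  # queue: pure waiting
    "Development Active": 1.5,
    "Development Done":   2.8,  # queue
    "Testing":            1.2,
}
active = {"Analysis Active", "Development Active", "Testing"}

total = sum(stages.values())
value_adding = sum(days for stage, days in stages.items() if stage in active)

print(f"cycle time: {total:.2f} days")                 # 9.40 days
print(f"flow efficiency: {value_adding / total:.0%}")  # 41%
```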

Addressing the Discovered Waste of Time

As we have too little information about the demonstration data set, there won't be specific recommendations like: "Give Team E better computers". However, there will be enough thoughts to inspire your own in your case. The least productive part of our process is the Analysis Done stage. We can't even properly call it a stage, as it describes waiting. However, it takes almost 29% of every task's time. What could be the reason here?


The active development phase doesn't seem slow, requiring less time to complete than the active analysis part. Looking at the average WIP, we'll spot the problem: the analysis department handles more user stories simultaneously than development can absorb.


Balancing this, e.g., by bringing more developers to the team, could be a possible solution. However, don't be too hasty here. The reason may be quite different. The Analysis Done stage can contain undercover work. It could be that developers are not satisfied with the quality of the requirements but can't resolve this problem systematically, so they spend time during this stage enhancing them: discovering the boundary conditions, proposing the UI handling of asynchronous requests, and more.


Before proposing a solution, apply root-cause analysis: use Five Whys, a fishbone diagram, or something else.

Verifying the Success of the Applied Change

Let's say that we addressed the problem from the previous section. How can we be sure that the proposed change worked? We need to accumulate data once again. Do you remember the Rule of Five from the above? We can use it here as well. Our system is now adjusted. Let's measure it again.

In my work, I use two tools to measure the results of the experiment:

  1. Cumulative Flow Diagram,
  2. The trend line on the Cycle Time Scatterplot diagram.

Cumulative Flow Diagram of the ActionableAgile Demo Data

Do you see the cyan area of the Analysis Done stage? Expect it at least to become thinner as time passes and, at best, to disappear.

The 85% Trend of the ActionableAgile Demo Data

Look at the green dashed line appearing on this already familiar diagram. It shows the 85% cycle time trend for the last N days. I use 30 instead of N, as it is stable enough to demonstrate the changes. If the discovered solution handles the deep enough root cause, expect this line to glide ≈30% down to 11 days.

If there are no significant data changes, it's time to look for other solutions.
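The trend line is just the chosen percentile recomputed over a sliding window. A sketch on invented data where cycle times drop from 16 to 11 days mid-period, the very change we hope to observe:

```python
import math
from datetime import date, timedelta

def p85(values):
    """85th percentile: 85% of items are at or below this value."""
    ordered = sorted(values)
    return ordered[math.ceil(0.85 * len(ordered)) - 1]

# Invented completions: (completion_date, cycle_time_days); an improvement
# applied mid-period cuts cycle times from 16 to 11 days
completions = [(date(2023, 1, 1) + timedelta(days=i), 16 if i < 30 else 11)
               for i in range(60)]

def trend_point(completions, end, window_days=30):
    """p85 of cycle times for items completed in the window_days before `end`."""
    window = [ct for d, ct in completions
              if end - timedelta(days=window_days) < d <= end]
    return p85(window)

last = max(d for d, _ in completions)
print(trend_point(completions, last - timedelta(days=30)))  # 16, before the change
print(trend_point(completions, last))                       # 11, after it settles
```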

The Next Steps

The next obvious point for improvement is the Development Done stage. Let's imagine that we've coped with it too. We've cut 50% of the time initially required to complete a user story. However, the story's title promises quadrupling productivity: twice the work in half the time. In this case, we can start thinking of the Analysis Active stage. Then try to parallelize the Testing stage with it and with Development Active.

Nevertheless, I am not sure that it is necessary here. Imagine using software which gets new features every two days. The market may not be ready for it. The market becomes a constraint for our system. This discovery doesn't mean we are always that limited in our improvements. Usually, we have several development systems creating value, and our features take longer than nine days. In my experience, a bird's-eye view of items lasting six months revealed only 30% of value-adding time. The more detailed picture of the tasks inside that 30% again demonstrated exactly 30% of value-added time. It turned out that of the whole 180 days, there were only 16 days of value-added activity. A potential elevenfold improvement is visible.
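The arithmetic behind that elevenfold potential, for the curious (the 30% ratios are the ones from my experience mentioned above):

```python
total_days = 180
outer_value_ratio = 0.30  # value-adding share at the bird's-eye level
inner_value_ratio = 0.30  # the same share one level deeper

value_days = total_days * outer_value_ratio * inner_value_ratio
print(round(value_days))               # 16 days of value-added activity
print(round(total_days / value_days))  # roughly 11x potential improvement
```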

The Approach in Summary

  1. Discover your system,
  2. Discover its constraint,
  3. Eliminate waste from it till it's no longer a constraint,
  4. Repeat.

Valid Questions

I feel that the described approach has a high potential for making the contribution that the management of the twentieth century passed on to us. When I tell someone about the method, I usually face several questions:


  1. Won't this destroy the joy of programming?

  2. Software development is so unpredictable and so creative. How can we talk about working on its efficiency?!


The proposal won't destroy the joy of programming. You might have noticed that we worked on eliminating the holes in the process in which nothing happened. Picking up a just-created task is as joyful as picking it up two days later. Another joy comes from looking at your code as the system under change, and at your team the same way. A new land of programming tools and approaches opens up to you.


Why would you previously need online mob programming tools? To become faster? But what was faster back then? Why would you need an explicit software design stage? Aha! Our development team is burdened with bugs, which is why user stories wait for up to 30% of their lifecycle for the developers to fix bugs. Why don't we prevent them instead?


What kills the joy of programming is the regular need to cut off your flesh to complete the tasks. Being empowered with better methods, tools, and knowledge doesn't kill the joy.

Yes, software development is indeed unpredictable and not routine. However, is it infinitely unpredictable? Would a year be enough to complete any of the tasks looking at you from your backlog? What about ten years? Is the nature of their variation so elusive that we can't do anything about it? The existence of the Cycle Time Scatterplot diagram shows us that there is a limit to software development variation. You can point out that some tasks require a quick fix of a textual constant, while others require several days of investigation. I would agree, but I would also ask you: "Is the existence of these tasks the result of inevitable software nature, or is it the result of the software architecture you use? Isn't a big ball of mud in your processes the reason for such a mess?"


Isn't the need for a smoother development flow finally a bulletproof reason to address your technical, architectural, and processual debts?


Yes, even with the most change-friendly architecture, tough questions will appear, requiring more time than we would like them to need. And we already have a place on the Cycle Time Scatterplot diagram, above the 95th percentile line, to handle them. But they are exceptions, and we don't want everything to become an exception.


We, managers, shouldn't do the firefighting but should work on our systems design and their variation.

Avoiding the Obvious and Wrong Steps

I am not the first person to seek efficiency in our industry. Some of the seekers think they are already familiar with the topic. Their approach is to install tracking software on the computers of their employees and punish those doing anything treated as not work. This method shows complete unawareness of the sources of software development efficiency. The idea of great success resulting from great strain is not only poor. It's deadly. It limits our look at our systems to only one dimension: working hard enough or not hard enough. Your personal experience provides enough examples of burnt-out people. History shows us that countries fell into this trap with many accompanying casualties. No, working hard is not the way to continuous improvement. But what is?

Discovering Place To Start

More than a century ago, Frederick Taylor started his work on what we now know as scientific management. He looked at his colleagues and searched for the more efficient methods for them to do their job:

Taylor determined to discover, by scientific methods, how long it should take men to perform each given piece of work; and it was in the fall of 1882 that he started to put the first features of scientific management into operation.

I don't know the structure of the business Taylor was working in back then. It could be that his production step was one among others, which could also mean that Taylor was caught in the local suboptimization trap: the one about increasing the number of roofs our imaginary house-building company can handle. Even if this was the case, it doesn't diminish the influence of Taylor's discovery. Now we know the danger of this trap and can act smarter.


Do you remember my example with items lasting six months, or 180 days? If I successfully eliminate the waste of time in the inner items, where engineers work, I will save 38 days and have 142 days for a large item left. If I do the same at the outer level, where the whole team works, I will save 126 days and have 54 days for the same large item. Torturing developers with overtime and beds in the office makes no sense if you want to win big. Look at the value-delivering process from a bird's-eye view, and only go deeper once you are out of room for improvement at this level.