Technical Huddle: An Easy Way To Turn Challenges Into Success

by Ovidiu Silaghi, October 2nd, 2020

The Challenge

A few years back I had the opportunity to join two strong, small teams working on a couple of important products that already had a big audience and a few years of development behind them. I spent my first month talking with teammates, product managers, technical leaders, and the products' major stakeholders. We discussed challenges, ideas, and plans for the future. This allowed me to learn a lot about the products and the people who worked on or were impacted by them.

During those discussions, I observed some struggles. We had an impressive collection of tools for monitoring the products, but the knowledge of how to use them was concentrated in one or two individuals per team, and they were usually used reactively rather than proactively. For example, some tools were only used when production went down and rarely or never otherwise. Another challenge was that the products carried quite significant technical debt: there were technical improvements that engineers wanted to prioritize, while product managers wanted to continue adding new features (a common tension when building products). This led to frustration for everybody. The team felt that technical improvements were not understood or considered important, while product managers felt that the team was not realizing the importance of making progress on critical features and commitments made to users and stakeholders.

The challenges we were facing could be summarized as follows:

  • Prioritizing technical improvements is hard when they are not articulated in a way the business can understand and prioritize against revenue-impacting feature requests or improvements.
  • Product managers and teams were not speaking the same “language”. They needed a way to connect the product, business, and technical improvements. 
  • Monitoring of products was usually done ad-hoc and only a few team members had enough knowledge of the entire suite of tools available for monitoring. The use of these tools usually happened in a reactive way, when the team was facing a problem. 

The Idea

These challenges were a great opportunity to create something that would improve collaboration and bring higher quality and efficiency to how we built products going forward.

I proposed that by bringing in data and visualization, the teams and product managers would find it easier to discuss and make decisions.

To convince the teams, I felt I should build a proof of concept that we could use as a starting point for discussions. I analyzed all the tools we had available and their functionality, then looked for open-source tools we could use given our tech stack.

I was lucky that we were using NewRelic, which has a powerful feature called Insights that, much to my surprise, we weren't using at the time. With Insights I was able to easily create dashboards with clear graphs and, most importantly, valuable information.
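
The dashboards themselves were built in the Insights UI, but to give a flavor of how the underlying data can be pulled, here is a minimal sketch using NewRelic's (legacy) Insights query API; the account ID and query key below are placeholders, not our real values.

```python
# Minimal sketch: run an NRQL query against the (legacy) NewRelic Insights query API.
# ACCOUNT_ID and QUERY_KEY are placeholders; the actual dashboards were built in the UI.
import requests

ACCOUNT_ID = "1234567"        # hypothetical account ID
QUERY_KEY = "NRIQ-xxxxxxxx"   # hypothetical Insights query key

def run_nrql(nrql: str) -> dict:
    """Execute an NRQL query and return the JSON response."""
    url = f"https://insights-api.newrelic.com/v1/accounts/{ACCOUNT_ID}/query"
    resp = requests.get(url, params={"nrql": nrql}, headers={"X-Query-Key": QUERY_KEY})
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # e.g. a week of traffic, hour by hour
    print(run_nrql("SELECT count(*) FROM Transaction TIMESERIES 1 hour SINCE 1 week ago"))
```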

I presented those to the teams and we started brainstorming on how we could use them, what data was useful for us, and what other data we could get from other tools. Many great ideas came out of those discussions, and we also had volunteers who took some of those ideas and implemented them.

A few examples: we improved the dashboards on NewRelic Insights, a colleague built a tool that integrated open-source static code analyzers and generated an Atlassian Confluence page with that data, and we integrated Google Analytics data and Amazon AWS cost data. These allowed us to correlate web traffic data with system load/health data and infrastructure costs.
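
On the AWS side, the cost data can be pulled programmatically so it can be lined up against traffic and load data. A rough sketch using boto3's Cost Explorer client, as an illustration of the approach rather than the actual tool we built:

```python
# Sketch: pull daily AWS cost so it can be correlated with traffic data.
# Assumes boto3 is installed and configured with credentials allowed to call Cost Explorer.
import boto3

def daily_costs(start: str, end: str) -> dict:
    """Return {date: unblended cost in USD} for the given period (YYYY-MM-DD strings)."""
    ce = boto3.client("ce")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="DAILY",
        Metrics=["UnblendedCost"],
    )
    return {
        day["TimePeriod"]["Start"]: float(day["Total"]["UnblendedCost"]["Amount"])
        for day in resp["ResultsByTime"]
    }

if __name__ == "__main__":
    print(daily_costs("2020-09-01", "2020-09-08"))
```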

The Execution

Our initial dashboards looked at the following data:

Transactions (Web Transactions)

  • Ideas on how to use it: identify the busiest days and hours and define scaling rules for the machines to cover them. Look at the slowest and average transactions to see whether we have a performance problem that needs a more detailed look. The most viewed and slowest pages show us which pages might cause performance problems.
  • Possible benefits: helps improve the scaling of the machines and, in some situations, reduce costs. Makes it easier to see which requests are causing performance problems and which pages might make users unhappy.
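
To make the ideas above concrete, NRQL along these lines can drive the transaction widgets. These queries are illustrative; exact attribute names depend on the agent and the app.

```python
# Illustrative NRQL for the Transactions dashboard; run via the Insights UI or the
# query helper sketched earlier. Attribute names depend on the NewRelic agent in use.
TRANSACTION_QUERIES = {
    "busiest_hours":
        "SELECT count(*) FROM Transaction TIMESERIES 1 hour SINCE 1 week ago",
    "slowest_transactions":
        "SELECT average(duration), percentile(duration, 95) FROM Transaction "
        "FACET name SINCE 1 week ago LIMIT 10",
    "most_viewed_pages":
        "SELECT count(*) FROM PageView FACET pageUrl SINCE 1 week ago LIMIT 10",
}
```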

Database Queries 

  • Ideas on how to use it: identify the busiest days and hours and define rules for scaling the database machines to cover them. Look at the slowest database queries and the transactions with the highest number of calls to identify performance problems.
  • Possible benefits: helps improve scaling and, in some situations, reduce costs. Makes it easier to identify the database queries and transactions that cause performance problems.
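
Similarly, illustrative NRQL for the database widgets might look like this. The attributes databaseDuration and databaseCallCount are the usual APM transaction attributes, but they may differ by agent version, so treat these as a sketch.

```python
# Illustrative NRQL for the database dashboard; attribute names are assumptions
# and may need adjusting for the specific APM agent.
DATABASE_QUERIES = {
    "busiest_hours":
        "SELECT sum(databaseCallCount) FROM Transaction TIMESERIES 1 hour SINCE 1 week ago",
    "slowest_database_time":
        "SELECT average(databaseDuration) FROM Transaction FACET name SINCE 1 week ago LIMIT 10",
    "most_database_calls":
        "SELECT sum(databaseCallCount) FROM Transaction FACET name SINCE 1 week ago LIMIT 10",
}
```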

Code Quality Attributes

  • Ideas on how to use it: modules that frequently generate bugs and build failures are likely fragile and need our attention. Tests that frequently fail are likely fragile and should be refactored, removed, or have the code they cover updated. An upward trend in the static code analyzer score is a good indicator of improving code quality.
  • Possible benefits: improved overall quality of the code and project. Cleaner code. Modules that are more stable and less error-prone. A more reliable suite of automated tests.
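
The score depends on whichever analyzers are in the build. As a rough, hypothetical illustration of the signal, one could count findings per release and watch the trend; flake8 is used here only as a stand-in, since the article doesn't name the team's actual analyzers.

```python
# Sketch of one "code quality" signal: count static-analyzer findings per release
# and track the trend. flake8 is an example analyzer, assumed to be installed.
import subprocess

def flake8_finding_count(path: str = ".") -> int:
    """Run flake8 on the given path and return the number of reported findings (lower is better)."""
    result = subprocess.run(["flake8", path], capture_output=True, text=True, check=False)
    return len([line for line in result.stdout.splitlines() if line.strip()])

if __name__ == "__main__":
    print(f"flake8 findings: {flake8_finding_count('.')}")
```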

Responses & Errors

  • Ideas on how to use it: response status codes show us whether we mainly provide successful responses to our users or have a large number of failed responses. The most frequent errors help us identify the ones on which to focus our investigations.
  • Possible benefits: easier to spot unhappy users and to decide whether failures need investigating. Easier to prioritize bugs based on their frequency.
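
Illustrative NRQL for this widget set; note that the status-code attribute name varies by agent version (httpResponseCode on older agents, http.statusCode on newer ones), so take these as a sketch.

```python
# Illustrative NRQL for the responses & errors dashboard; attribute names are assumptions.
ERROR_QUERIES = {
    "response_status_codes":
        "SELECT count(*) FROM Transaction FACET httpResponseCode SINCE 1 week ago",
    "most_frequent_errors":
        "SELECT count(*) FROM TransactionError FACET `error.message` SINCE 1 week ago LIMIT 10",
}
```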

Browser, Operating System, Device Usage

  • Ideas on how to use it: this helps identify which browsers, operating systems, and devices our app should support.
  • Possible benefits: can reduce development and testing time and can help us be proactive about which new browsers, operating systems, and devices we should support.
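
Illustrative NRQL for this section, based on Browser (PageView) events; the facet attribute names below are the usual ones but may differ per setup.

```python
# Illustrative NRQL for browser / OS / device usage; facet names are assumptions.
USAGE_QUERIES = {
    "browsers": "SELECT count(*) FROM PageView FACET userAgentName SINCE 1 month ago",
    "operating_systems": "SELECT count(*) FROM PageView FACET userAgentOS SINCE 1 month ago",
    "devices": "SELECT count(*) FROM PageView FACET deviceType SINCE 1 month ago",
}
```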

With the dashboards available, we put in place a ceremony that looked like this:

Every Monday we met and looked at the dashboards. The output of that session was a list of actions and Atlassian Jira tickets. We then presented those tickets to the product managers, showing them the data and how it was impacting, or could impact, the users and customers, in the hope that this would make prioritization easier and more efficient.
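
As a minimal sketch of turning a huddle action into a ticket, Jira's REST API can be called like this; the URL, project key, and credentials are placeholders, not our real setup.

```python
# Sketch: raise a Jira ticket from a huddle action via Jira's REST API (v2).
# JIRA_URL, PROJECT key, and credentials are placeholders.
import requests

JIRA_URL = "https://example.atlassian.net"        # hypothetical
AUTH = ("huddle-bot@example.com", "api-token")    # hypothetical credentials

def create_ticket(summary: str, description: str, project_key: str = "PROD") -> str:
    """Create a Task in the given project and return its issue key."""
    payload = {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Task"},
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "PROD-123"

if __name__ == "__main__":
    print(create_ticket("Slow search endpoint", "p95 duration doubled since the last release"))
```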

After implementing some of the tickets, and after each new release, we measured the impact against the previous release's data. We weren't strict about the exact numbers; we preferred to look at trends.
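
A small sketch of that release-over-release comparison, with made-up metric names and numbers purely for illustration:

```python
# Sketch of the release-over-release comparison: the direction of the trend matters
# more than the exact numbers. Metric names and values are invented for illustration.
def percent_change(previous: float, current: float) -> float:
    return (current - previous) / previous * 100 if previous else 0.0

previous_release = {"avg_response_ms": 420, "error_rate_pct": 1.8, "static_findings": 950}
current_release  = {"avg_response_ms": 395, "error_rate_pct": 1.1, "static_findings": 910}

for metric, prev in previous_release.items():
    change = percent_change(prev, current_release[metric])
    trend = "improving" if change < 0 else "regressing"  # for these metrics, lower is better
    print(f"{metric}: {change:+.1f}% ({trend})")
```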

We called this ceremony the Technical Huddle.

After a few sessions, some of my colleagues came up with great ideas on how to improve the dashboards and make the sessions more efficient. A colleague developed a Continuous Integration plan which collected and saved all the data we were using in the dashboards and, based on that, generated an Atlassian Confluence page that displayed the data in a more structured way and allowed us to understand it quickly. He added colors in different shades to show progress or regression based on the percentage difference in the data from one release to the next.
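
I don't have that colleague's code, but the idea can be sketched roughly as follows: map the percentage change between releases to a color shade and publish the result as a Confluence page through its REST API. The URL, space key, and credentials are placeholders, and the HTML is deliberately minimal.

```python
# Rough sketch of the generated Confluence page: shade each metric by how much it
# changed between releases, then publish via Confluence's REST API. URL, space key,
# and credentials are placeholders; the original tool's internals are not shown in the article.
import requests

def shade(change_pct: float) -> str:
    """Map a percentage change (negative = improvement here) to a background color."""
    if change_pct <= -10: return "#2e7d32"  # strong improvement
    if change_pct < 0:    return "#a5d6a7"  # slight improvement
    if change_pct == 0:   return "#eeeeee"  # no change
    if change_pct < 10:   return "#ef9a9a"  # slight regression
    return "#c62828"                        # strong regression

def publish(metrics: dict, title: str) -> None:
    """Render a shaded table and create it as a Confluence page."""
    rows = "".join(
        f'<tr><td>{name}</td>'
        f'<td style="background-color:{shade(change)}">{change:+.1f}%</td></tr>'
        for name, change in metrics.items()
    )
    payload = {
        "type": "page",
        "title": title,
        "space": {"key": "TEAM"},  # hypothetical space key
        "body": {"storage": {"value": f"<table><tbody>{rows}</tbody></table>",
                             "representation": "storage"}},
    }
    resp = requests.post(
        "https://example.atlassian.net/wiki/rest/api/content",  # hypothetical URL
        json=payload,
        auth=("huddle-bot@example.com", "api-token"),            # hypothetical credentials
    )
    resp.raise_for_status()

if __name__ == "__main__":
    publish({"avg_response_ms": -6.0, "error_rate_pct": -38.9, "static_findings": -4.2},
            "Technical Huddle - release 42 vs 41")
```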

Another colleague found a new open-source tool we could use for static code analysis. We initially weren't focused on frontend code, so another colleague found a tool that gave us data on that area. At first these meetings lasted almost an hour, but we soon managed to reduce them to 15-30 minutes. We were surprised to see that even our product managers started to join our technical huddles once they began to see user benefits they could “sell” to their stakeholders - page load time improvements, reduced error rates… things that users value and that are therefore commercially valuable. It was amazing to see the team's involvement and the team members' passion for improving the huddle and making it more efficient.

The Benefits

Having regular Technical Huddles where we looked at and analyzed data showing how we were doing in terms of quality and efficiency on the products we were building gave us a lot of benefits, some of which we initially didn't even anticipate:

  • You could say we built a “translator” between technical improvements, product, and business, which allowed teams and product managers to communicate more easily and better prioritize the work to be done. It was rewarding to see the teams' happiness when we finally prioritized more of the technical improvements they had been asking for, and how enthusiastic the product managers were that they could show stakeholders why they had prioritized a technical improvement and how it improved the product and business. When product managers can articulate why technical improvements matter and engineers can articulate why features matter, you have a strong indicator of a high-performing team.
  • We were proactively monitoring the products and looking at them from an “-ility” (performance, scalability, reliability, maintainability, etc.) point of view.
  • We were able to connect these quality attributes to user and business value, which allowed product managers to prioritize work better. This helped us prevent future problems and spot trends in the product, and we were able to build products with better stability and performance.
  • We reduced costs by improving performance and stability and by improving infrastructure management (e.g. better auto-scaling).
  • We increased the overall visibility and knowledge of the product. Team members started looking at the product as a whole, not only at the parts they were working on.
  • We found these sessions were a great way to spread knowledge inside the team. When we noticed questions about a tool or about data in the dashboards, we either organized a presentation on that part or had somebody explain how to use it.
  • Users of the products were happier because we reduced bugs and errors and improved performance. 
  • An interesting side benefit of these sessions was that they were a great way to connect team members and build the team.

Having regular sessions with your team where you look at dashboards with data on the quality, efficiency, and impact of the product you are building is a powerful ceremony I would recommend everybody adopt. I've seen the benefits it can bring to teams, product managers, stakeholders, and users. To me, this is the missing ceremony from Agile frameworks.

Previously published at https://www.linkedin.com/pulse/technical-huddle-missing-ceremony-from-agile-ovidiu-silaghi/