Last month, Stack Overflow released its annual Developer Survey, which polled over 64,000 developers across the globe about their favorite technologies, coding habits and work preferences. One of the key highlights was a strong preference for distributed work. A majority of developers (64%) reported working remotely at least one day a month, and 11% reported working remotely full-time. Even more telling is that developers ranked “remote options” as a top priority — the ultimate office perk — second only to number of vacation days when assessing new job opportunities.
If you believe that developers are writing the script for the future, these results would indicate that the next act is going to be remote-first. Factor in the severe shortage of U.S. technical talent and recent restrictive work policies on top of that, and it would appear that the distributed “future of work” might arrive much sooner than anticipated.
Distributed, however, is very hard to do well — it requires a lot of thoughtful strategy, much more than just setting up remote employees on Slack. And if tech companies want to effectively leverage this model, they’ll have to commit 100%. It might result in some growing pains at first, but it will ultimately result in processes and systems that will improve communication and efficiency across the entire organization.
During the last six months as VP of Technology at Andela, I’ve been running a team of nearly 40 people — spread out across New York, Lagos, and Nairobi — who are building the systems and tools we need to help distributed teams excel and scale. For Andela, that means growing from 500 people to over 100k in less than 10 years.
GitHub — and consequently the pull request — was a huge inflection point for distributed engineering teams: it decoupled the ‘work’ required from the ‘integration’ of it. Now, the challenge at hand is building off that to come up with the “pull request” that works for the shared context of a company. Can any one of our 500+ employees, whether an engineer or an operations associate, get up in the morning and figure out exactly what they need to get done, do the work, submit the request, benefit from continuous integration, and know that they’re improving the system daily — even if they’re in a time zone eight hours away from the rest of the team?
We’re getting there, but it’s not easy. Over the next three blog posts, I’ll dig into how to build dynamic systems and controls for distributed teams, how to develop people in a distributed environment, and how to scale distributed teams in a high-growth startup. Stay tuned!
Great processes lead to great distributed work. But guess what? Very few teams, especially in the startup world, have great processes. Unless you’re old and stodgy and have kept the same processes forever — which almost never happens in a startup — things are probably changing too quickly.
There isn’t a playbook that works for all distributed companies, whether we’re talking about partially distributed (i.e. two of your engineers decided to move to Portland) or fully distributed (i.e. Automattic, the $1B company with 400 global employees working from wherever they please).
There’s also a difference between the company that’s been building distributed processes from day one (where I’m coming from with Andela), and the company that’s trying to go distributed at day 500. The latter, as you might imagine, presents a greater challenge, as managers are tasked with altering existing processes to accommodate their new remote team members. In both cases, however, it’s essential to build out processes one by one that help everyone exchange feedback, escalate challenges, and communicate clearly without relying on face time.
Ultimately, a distributed work environment forces managers to relinquish control. When you’re working asynchronously, you can’t have nearly as many interactions back and forth to figure out a problem. You can’t have team members blocked. As a result, distributed forces you to push out autonomy and decision making to your team.
How do you do this? You create systems of measurement where everyone has buy-in and understands explicitly the KPIs that they’ll be measured on. The measurements should be as simple as possible, so you’ll need to break down problems into smaller pieces to make solving them easier. At the end of the week, your goal should be to have one number for each engineer and one number for each of your teams, so you can objectively know how well a team is doing and how much of your attention they need.
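To make “one number per engineer, one number per team” concrete, here’s a minimal sketch of rolling simple weekly signals up into a single score. Everything here is a hypothetical illustration — the inputs (issues closed, review turnaround) and the weighting are stand-ins, not a prescribed KPI.

```python
# Hypothetical sketch: one weekly number per engineer, rolled up into
# one number per team. The signals and weights are illustrative only.
from statistics import mean

def engineer_score(closed_issues: int, review_turnaround_hours: float) -> float:
    """Combine two simple signals into a single weekly number (0-100)."""
    throughput = min(closed_issues, 10) * 10            # cap credit at 10 issues/week
    responsiveness = max(0.0, 100 - review_turnaround_hours * 2)
    return round(0.5 * throughput + 0.5 * responsiveness, 1)

def team_score(engineer_scores: list) -> float:
    """One number per team: the mean of its engineers' weekly scores."""
    return round(mean(engineer_scores), 1)

# Example week for a two-person team (names and numbers invented):
weekly = {
    "ada": engineer_score(closed_issues=7, review_turnaround_hours=6.0),
    "femi": engineer_score(closed_issues=4, review_turnaround_hours=12.0),
}
team = team_score(list(weekly.values()))
```

The point isn’t this particular formula — it’s that the roll-up is explicit, so everyone can see exactly how their number is computed and buy into it.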
Once you’ve chosen the measurement and decided on a benchmark, you throw the team out the door and see how it goes.
This only works if you recognize that these systems of measurement are totally fallible and that they’ll change as you grow. These aren’t do-or-die measurements, they’re ways to gather information about what’s wrong — and inform future systems of measurement. You’ll probably feel like you’re giving up a lot of control, but if you’re still getting the results you want, then who cares?
The major challenges around remote work come down to signal versus noise. When you’re relying solely on Slack, email and Google Hangouts, all of that information is signal, and it carries the same sense of urgency or severity. This presents a challenge for managers, because when a colleague says, “you know what, this was frustrating,” you need to know whether that frustration is at the bottom of their mind or the top — in which case, it’s something you need to deal with immediately.
In a distributed work environment, in other words, you don’t have the noise that helps you calibrate how serious the signal is. Getting a beer with a team member is wonderful noise because you get to know that person well enough to understand what’s important to them and therefore calibrate your communication.
Teams that are new to distributed tend to react to everything, which leads to a common problem: signal overload. These teams suffer from an overload of information across their Slack channels, Google Docs, GitHub, Trello, etc. Discovery of information is chaotic and time consuming, and friction rises in communication when information doesn’t have a place to settle.
As a distributed manager, you have to make sure that information is flowing openly and that support requests coming in have a place to live. When you get feedback, it’s essential to have the processes in place to distill that information into something valuable and not forget about it. For instance, once a GitHub issue is created, it’s really easy to collaborate and have a focused conversation around that issue — because the issue serves as a shared context, an ‘artifact’ if you will. Trello is a great tool to capture disparate information, localize a conversation around it and then assign a process for resolving it. Consistent retrospectives, explicitly documented in Google Docs, are wonderful for making sure all voices are heard and working through systemic issues as a team.
In a world of signal overload, you need to make sure you’re not missing the right signals in your communications.
Slack is incredibly overwhelming, and it’s all signal. I’m sold on Slack, and our entire team uses it, but it’s a tool — and like any other tool, you can easily abuse it.
In its worst form, chat becomes an “always-on meeting” where nothing is defined and action items aren’t carried forward. At its best, it’s an asynchronous tool to keep everyone apprised and stop the back-door, one-to-one conversations. ChatOps, as a development technique, is fantastic — for instance, using bots to deploy so everyone knows what’s happening.
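As a sketch of that ChatOps pattern, here’s a minimal command handler that turns a chat message posted in a public channel into a deploy, announcing each step so the whole team sees it. The command format and the `deploy`/`announce` hooks are hypothetical stand-ins — a real setup would wire them to your chat client and deployment tooling (a Hubot-style bot, for example), not this toy parser.

```python
# Minimal ChatOps sketch: parse a "deploy" command from a public channel
# so every deploy is visible to the whole team. The command grammar and
# the deploy/announce callables are hypothetical, not a real chat API.
import re

DEPLOY_RE = re.compile(r"^deploy\s+(?P<app>[\w-]+)\s+to\s+(?P<env>staging|production)$")

def handle_message(text, deploy, announce):
    """Return True if the message was a deploy command we acted on.

    deploy(app, env)  -- hook that triggers the actual deployment
    announce(message) -- hook that posts back to the public channel
    """
    match = DEPLOY_RE.match(text.strip())
    if not match:
        return False                                  # not a command; ignore
    app, env = match["app"], match["env"]
    announce(f"Deploying {app} to {env}...")          # visible to everyone
    deploy(app, env)
    announce(f"{app} is live on {env}.")
    return True
```

Because the announcements go to the channel rather than a DM, the deploy itself becomes shared context — no one has to ask “did that ship?”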
As with most technologies, you need the tool to be able to create the behavior you want, but you need a culture change and belief change in order to use the tool effectively. Slack is a tool to hash things out, but the result has to live in a designated place where it can be acted upon and solved.
Using Slack effectively means getting everything into public channels — which helps reduce the one-to-one direct messages that serve only to obfuscate problems. I believe the health of an engineering team can be measured by how many issues are raised and resolved in a given time period, and how fast that iteration is. The goal, then, is to get those issues public immediately, and broken into small enough pieces that they’re resolved as quickly as possible.
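That health measure — issues raised and resolved in a given period, and how quickly they turn around — can be sketched as a small weekly report. The issue records here are hypothetical dicts, not any particular tracker’s API; with GitHub or Trello you’d fetch the same timestamps from their APIs.

```python
# Illustrative sketch of a team-health report: issues opened and resolved
# in a window, plus median time-to-resolution. Issue records are
# hypothetical dicts with "opened" and optional "closed" datetimes.
from datetime import datetime, timedelta
from statistics import median

def team_health(issues, since):
    """Summarize issue flow since a cutoff datetime."""
    opened = [i for i in issues if i["opened"] >= since]
    resolved = [i for i in issues if i.get("closed") and i["closed"] >= since]
    cycle_hours = [(i["closed"] - i["opened"]).total_seconds() / 3600
                   for i in resolved]
    return {
        "opened": len(opened),
        "resolved": len(resolved),
        "median_hours_to_resolve": round(median(cycle_hours), 1) if cycle_hours else None,
    }

# Invented sample data: three issues, one still open, one closed long ago.
now = datetime(2017, 5, 1)
issues = [
    {"opened": now - timedelta(days=3), "closed": now - timedelta(days=1)},
    {"opened": now - timedelta(days=2)},                                  # still open
    {"opened": now - timedelta(days=30), "closed": now - timedelta(days=29)},
]
report = team_health(issues, since=now - timedelta(days=7))
```

Tracking this number week over week — rather than obsessing over any single value — is what tells you whether the iteration loop is speeding up or stalling.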
The more things are discussed in public channels, the more resolutions happen, and the greater the circle of people who can participate in and witness those resolutions — which acts as a feedback loop. Who doesn’t want to resolve issues collaboratively? If things stay private, they tend to stay unresolved for longer.
It’s not as simple as it sounds. If you’re operating in an organization with over 500 people, like Andela, channels are good to set topics, but they get tough to manage. We’re still in the early days of understanding how to cultivate these communities and how to set KPIs to get the team focused on improving our chat behaviors.
To tie it back: Great distributed teams are the result of good processes. Good processes are focused on results, finding the right home for information to live, and making sure your team surfaces problems publicly so that they can be dealt with. In the next post, I’ll speak to the particularly difficult challenge of developing people in a distributed environment.