
How to Set Up Your Organisation's Data Team for Success

by Barr Moses, May 21st, 2022

Too Long; Didn't Read

About mortality, Shakespeare’s Hamlet once said: “To be or not to be, that is the question.”  About her data team, a wise Head of Data at a startup once said: “To centralize or decentralize, that is the question.” And it’s an important one.  Here’s how some of the best data leaders apply an agile methodology to build data organizations that scale with the growth of their companies.



As startups increasingly invest in data to drive decision-making and power their digital products, data leaders are tasked with scaling their teams – and fast. From knowing which roles to hire for (and when) to setting SLAs for data, today’s data leaders are responsible for – quite literally – keeping their companies insight-informed at each stage of the journey.


Regardless of where you are in this marathon, one of the biggest challenges is determining the proper reporting structure for your data team.


As data needs increase, so too do the bottlenecks imposed by centralized data teams – and the duplication and complexity introduced by decentralized ones. And just when you think you’ve figured out the *perfect* paradigm (i.e., a central data engineering team, distributed data analysts, and a few analytics engineers to bridge the gaps! OR a handful of data analysts reporting to the COO with data engineers working under the CTO!), your entire strategy gets turned on its head when priorities shift.


So, what is a data leader to do?

To better understand how some of the best teams are tackling this problem, I sat down with Greg Waldman, VP of Data Infrastructure at Toast, a newly public provider of point-of-sale software for restaurants, to discuss the evolution of his company’s data team and his experience navigating the never-ending tug-of-war between centralized and decentralized structures.


Over the last five years, Greg has led the Toast data team as it grew from one analyst (Greg himself) to a 20+ person organization and evolved from a centralized model to a hybrid-decentralized one—and back again.


Read on to learn how Greg’s team lets business needs drive the data team structure, how he knew when it was necessary to make those changes, and the crucial role he wishes he had hired much sooner.


In the beginning: when a small team struggles to meet data demands

When Greg joined Toast in 2016, the company already had 200 employees—but no dedicated analytics staff. Despite this shortage of specialized talent, the company had always prioritized using data to make decisions.


“Our founding team was just really sharp,” Greg said. “They ran the company on Excel documents, but eventually when they got to 200 people, they knew that approach wasn’t going to scale. When I came on, the ask was basically, ‘​​We have too many meetings where one person thinks the number’s five and the other person thinks the number’s four, and then they just bicker about that the whole time. So, make that stop happening.’”


Immediately, Greg dug in and began building out tools, processes, and a basic data program. Over the first year, the Toast data team tripled—it now had three people. The company continued to use data to drive its culture and decision-making.


“Everybody says they have a data-driven culture, but I’ve worked at enough places to know the difference, and I can see it at Toast,” said Greg. “Our people throughout the company—especially our leadership—really look for data before they make big decisions.”


But while the small data team tripled, Toast itself doubled. By 2017, the company had 400 employees. The centralized data team couldn’t keep up with the demands of the entire fast-growing, data-obsessed organization.


“We had lines out the door,” said Greg. “There was just an appetite for more data than we were able to provide. I think that was a bit of an inflection point for us. Because if you don’t figure out a way to serve that need, then the business might start operating in a different way—and be less data-driven if you can’t get them the necessary data.”


Supporting hypergrowth as a decentralized data operation

The shift to a decentralized structure began to take shape organically as departments began finding ways to meet their own data needs.


“Eventually small pockets of analytics opened up in other parts of the company, like sales and customer success,” Greg said. “Mostly because our small team just couldn’t meet the needs of the growing business. And so they started their own teams, and that kind of worked!”


In 2018, this decentralized team of 10 data professionals worked within business units, meeting data needs and supporting Toast’s head-spinning trajectory as the company nearly doubled again, growing to 750 employees. Greg and his team also rebuilt their data tech stack, migrating from an end-to-end data platform to a modern distributed stack including S3, Airflow, Snowflake, Stitch, and Looker.
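
For readers who haven’t worked with a stack like this, here is a minimal, hypothetical sketch of the pattern those tools imply: raw files land in S3 (via a tool like Stitch), a scheduled Airflow job loads them into Snowflake, and Looker queries the warehouse. The DAG name, schedule, and helper functions below are illustrative assumptions, not Toast’s actual pipelines.

```python
# A minimal, hypothetical Airflow DAG illustrating the S3 -> Snowflake -> Looker pattern.
# The function bodies are placeholders; a real pipeline would run a COPY INTO statement
# via the Snowflake provider rather than pass.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_from_s3(**context):
    """Placeholder: locate the day's raw files that an ingestion tool landed in S3."""
    pass


def load_to_snowflake(**context):
    """Placeholder: load the staged files into a Snowflake table for Looker to query."""
    pass


with DAG(
    dag_id="orders_daily",              # hypothetical pipeline name
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_from_s3", python_callable=extract_from_s3)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)

    extract >> load                     # load only runs after extraction succeeds
```

The point of the sketch is the structure, not the specifics: each step is a small, observable task, which is part of what makes a distributed stack like this easier to scale and debug than a monolithic end-to-end platform.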


Dedicated analysts working in their business units still maintained a close connection with Greg’s core analytics team, giving Toast a hybrid between a fully centralized and fully decentralized data team structure. But as the organization continued to scale—reaching a headcount of 1,250 employees in 2019, with 15 data analysts, data scientists, and data engineers—problems began to arise from this hybrid model.


Data consistency was one concern.


“There were varying degrees of rigor across the organization when it came to what constitutes good data. When you’re small, you’re scrappy, you’re growing, and any data is better than no data. But eventually, we reached a scale where we knew that inaccurate data could be harmful.”


And even with technically accurate data, Greg knew that strong communication between analysts, technical leaders, and downstream stakeholders was critical when it came to establishing a standard of data observability and trust across the entire company.


“As the business gets bigger and more complicated, you need analysts to start seeing the whole business,” said Greg. “Even in a decentralized model, you need to ensure analysts work in close collaboration with other analysts and technical leaders when it comes to setting the standards around performance and operability.”


Regrouping, re-centralizing, and refocusing on data trust

When evaluating how to structure its data team, Toast weighed three options—centralized, decentralized, and hybrid—each of which the team tried on for size over time. In the end, Greg found the hybrid model to be most effective for the size and scope of his analytics-heavy team. Image courtesy of Greg Waldman and Toast.


Toast brought analysts who had been working under their respective Customer Success and Go-To-Market teams back under the analytics umbrella.


“We ended up centralizing, and a discussed but underrated benefit has been just how much folks on the team have learned from one another,” said Greg. The team is now part of the Finance & Strategy department. But he knows the centralized structure may not be the long-term solution for Toast.


“The way I think about data teams, in a nutshell, is that you want everyone to add as much business value as possible,” said Greg. “We’ve been very open to change and trying different things and understanding that what works with 200 people versus 500 people versus a thousand versus two thousand is not the same answer, and that’s okay. It can be somewhat obvious when you’ve reached those inflection points and need to try something new.”


At the end of the day, it’s all about meeting the needs of the business – no matter what it means for your team’s reporting structure – while ensuring that technical leads are enablers and not bottlenecks for analysts.


4 things to keep in mind when scaling your startup’s data team

Ultimately, Greg’s team settled on a centralized data team structure with a few distributed elements, affording them greater ownership and governance over their data products and the ability to build a scalable, modular data stack. Image courtesy of Greg Waldman and Toast.


Greg has some hard-won advice for data leaders facing similar challenges at hypergrowth companies—but every tactic goes back to his principle of focusing on what approach best meets the business needs of your company, which will likely change over time.


In short, he suggests, leaders should stay nimble and teams should be willing to adapt to the needs of the business. Here’s how.


Hire data generalists, not specialists—with one exception

According to Greg, the first specialist you should hire is a data engineer.


“Early on, we basically just hired data athletes who could do a little bit of everything,” said Greg.


“We had those athletes playing analyst/data engineer. I had a senior manager role open and a data engineer apply, but she didn’t have any interest in managing. When I talked to her, it became obvious how badly we needed a dedicated data engineering skillset for the team. And in retrospect, I should have been looking for someone like that a year earlier given our growth trajectory.”


All too often, data teams are hamstrung by a lack of the technical support needed to build and maintain ETL pipelines, and to ensure that the data infrastructure underlying them can scale with the analytics needs of the company.


“So while I still believe in hiring data athletes who can do a bit of everything, data engineers are the one exception. After you hire a few analysts, your first data engineer should follow close behind.”


Prioritize building a diverse data team from day one

This goes without saying, but when it comes to setting up your team for long-term success, you need to invest (early) in candidates with diverse experiences and backgrounds. Homogeneity is a nonstarter for innovation and prevents data analysts and engineers from understanding the perspectives and needs of all data consumers.


When you’re moving quickly at scale, however, it can be hard to remember this – unless you put in place a set of clear hiring and growth KPIs that reflect this goal.


“Think about diversity early on,” said Greg. “Because especially in these small data teams, if you’re not careful, you’ll just end up with a bunch of like-minded people from similar backgrounds. And you don’t want a bunch of the same people—you need different perspectives.”


It’s one thing to say, “we need to build a diverse team,” but something else entirely to do it. So, how should data leaders get started?


Here are a few tips:

  • Partner with executives and your People team to write job descriptions that are inclusive of different experiences and backgrounds (e.g., avoiding excessively masculine language in favor of gender-neutral phrasing)

  • Put together diverse hiring panels (even if they’re not pulled from the data team) to embody the team you’re striving to build

  • Cast a wide net to recruit candidates who may not have traditional data titles or roles; it’s a constantly evolving space!

  • Implement a gender or race-blind application process that screens based on qualifications and experiences


“It’s much harder to build a diverse team later in the startup journey because people from different backgrounds want to join a team that has people from different backgrounds. And if you don’t think about that right out of the gate, it can be much more challenging.”

Overcommunication is key to change management

This point is even more relevant in our remote-first world, in which many teams work from home, and overcommunicating over email, Slack, and carrier pigeon (just kidding!) is a necessary part of any job.


According to Tomasz Tunguz, Managing Director at Redpoint Ventures, companies should repeat themselves (i.e., their core value propositions) with customers consistently, even if it seems unnecessary. The same goes for data leaders when it comes to communicating their work and any team changes with data stakeholders.


For instance, if your decentralized customer success analyst is migrating to report up to the Head of Analytics after 3 months of working under the Head of Customer Success, not only should you communicate that this change is happening, but also reiterate that this adjustment doesn’t change the nature of your team’s output. Stakeholders can still expect accurate, timely analysis that maps to core business objectives, even if the team is no longer distributed.


While structural changes inevitably impact the nature of the relationship between stakeholder (the functional team) and service provider (the data team), codifying, communicating, and repeating that the shift will not affect your team’s KPIs will preserve goodwill and help cross-functional groups adapt to change.


“If you have analysts reporting into business leaders, make sure that they’re empowered to push back based on the data they are seeing,” said Greg. “Otherwise it can be a tricky dynamic where they are encouraged to show data that backs anecdotal hypotheses. When you bring those teams back under an analytics umbrella, your analysts are going to learn from one another, but influencing other departments can be challenging.”


Most recently, Toast has been running a largely centralized analytics model, which has performed well and met the needs of the business for the last year and a half.


Don’t overvalue a “single source of truth”

The concept of a “single source of truth,” or golden data, is a powerful one – for good reason. Striving for metrics alignment and consistent, clean data can help companies trust that their data is pointing them in the right direction.


Still, as a data leader at a hypergrowth startup, you’ll be pulled in to work on lots of experiments and projects at any given time. As long as you have directional observability into data trust (i.e., Is this table up to date? Who owns this data set? Why did 50 rows turn into 500?), the need for a “single source of truth” isn’t as pressing.
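
To make “directional observability” concrete, here is a minimal, hypothetical sketch of the kinds of checks that answer those questions: a freshness check (“is this table up to date?”) and a volume check (“why did 50 rows turn into 500?”). The thresholds and function names are assumptions for illustration, not part of any particular tool or of Toast’s setup.

```python
# Hypothetical, directional data-trust checks: good enough to catch the
# "50 rows became 500" class of problem without chasing perfection.
from datetime import datetime, timedelta


def is_fresh(last_loaded_at: datetime, max_age_hours: int = 24) -> bool:
    """Freshness: was the table updated within the window stakeholders expect?"""
    return datetime.utcnow() - last_loaded_at <= timedelta(hours=max_age_hours)


def volume_looks_right(today_rows: int, yesterday_rows: int, tolerance: float = 0.5) -> bool:
    """Volume: flag row counts that swing far outside yesterday's baseline."""
    if yesterday_rows == 0:
        return today_rows == 0
    change = abs(today_rows - yesterday_rows) / yesterday_rows
    return change <= tolerance


# A 10x jump from 50 to 500 rows fails the directional check...
print(volume_looks_right(500, 50))                              # False
# ...while a table loaded three hours ago still counts as fresh.
print(is_fresh(datetime.utcnow() - timedelta(hours=3)))         # True
```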


“I always tell people not to overvalue the whole single source of truth concept,” said Greg. “As a perfectionist, it took a long time for me to learn this. There are times when you need to be 100% correct, and then there are a lot of times when you don’t. Often, directional accuracy is fine, and you’ll just waste resources trying to be perfect. The 80/20 rule is key.”


Data is always messy and rarely perfect. You’ll get more done if you prioritize having an end-to-end view of data health and accuracy over more granular control.


Greg’s final piece of advice for data leaders?


“Hire good people with strong communication skills, and everything else becomes a lot easier. Good people will lead you to other great people. You can hire the smartest people in the world, but if they can’t communicate their analysis to less technical folks, they simply won’t be successful.”



Also published here.