Consumers Don’t Care About Your Technology

Written by rborn92 | Published 2017/12/04
Tech Story Tags: buffer | dropbox | airbnb | mvp | tech-stack


In the spirit of validating an idea and getting off the ground quickly, companies often build an MVP, or minimum viable product, at the beginning. But how long can a company survive on an MVP? Do consumers really care what’s happening behind the curtain or how potentially fragile a system is?

In short, the answer is no. If a product serves a consumer’s needs, history has shown that the underlying technology plays a very small role, whether due to a lack of concern or a simple absence of knowledge.

Let’s take a look at three successful companies that ran an MVP at scale — Buffer, Dropbox, and Airbnb.

Buffer — The social media scheduling app that serves over 4M people.

In a blog post published in 2014, Buffer admitted to still running its entire scheduling architecture on a structure designed by founder Joel Gascoigne in his bedroom back in 2010: a single cron job. You know, the Unix-based, OS-level scheduler designed to run shell scripts for things like updating software or syncing the system clock with NTP servers.

By design, cron is single-threaded. The daemon sleeps until a scheduled job is due, forks a process to run it, then goes back to sleep until the next job comes due.

This can lead to a couple of issues, as Buffer eventually found out.

  1. There is no managed thread or process pool, so if you fork too many processes, or processes take too long to complete, you will over-utilize the machine, causing a crash or at least stalling vital processes. Buffer scaled vertically when this happened, upgrading to larger virtual instances, which only delayed the issue.
  2. If a scheduling pass takes longer than the schedule interval (in this case, 1 minute), schedules start to fall behind, since there is no scheduling concurrency.

The cron job is responsible for querying the database every single minute to find posts that are due to go out that minute. Luckily, with managed database services from AWS and the like, you can now quickly scale a table to handle such read-intensive operations, but it’s still destined to reach a failure point. If scanning the table takes longer than the schedule interval (one minute), posts will be delayed, and that doesn’t account for the processing time needed to prepare each post or the network latency of publishing to the social media platforms.
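To make the mechanics concrete, here is a minimal sketch of what such a per-minute pass could look like. This is not Buffer’s actual code; the posts table, its columns, and the publish_post helper are assumptions for illustration.

```python
# Hypothetical sketch of the per-minute scheduling pass that cron kicks off.
# Crontab entry (top of every minute):
#   * * * * * /usr/bin/python3 /opt/scheduler/run_pass.py
import sqlite3
from datetime import datetime, timedelta

def publish_post(post_id, body):
    """Placeholder for the call that pushes a post to a social network API."""
    print("publishing", post_id)

def run_scheduling_pass(conn):
    now = datetime.utcnow()
    window_end = now + timedelta(minutes=1)
    # Scan for every post due within the next minute.
    rows = conn.execute(
        "SELECT id, body FROM posts "
        "WHERE due_at >= ? AND due_at < ? AND published = 0",
        (now.isoformat(), window_end.isoformat()),
    ).fetchall()
    for post_id, body in rows:
        publish_post(post_id, body)
        conn.execute("UPDATE posts SET published = 1 WHERE id = ?", (post_id,))
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect("posts.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS posts "
        "(id INTEGER PRIMARY KEY, body TEXT, due_at TEXT, published INTEGER DEFAULT 0)"
    )
    run_scheduling_pass(conn)
```

If the scan plus the publish calls take longer than the one-minute interval, the next pass starts late and posts begin to slip, which is exactly the failure mode described above.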

CPU usage of Buffer’s single server running both the web-app and SQL database

At scale, Buffer began seeing 4-minute delays on posts. To adjust for the ever-growing database table, they changed their scheduling interval to 15 minutes, meaning the cron job now looks for posts going out in the next 15 minutes and runs 4 times an hour instead of 60.
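Under the same assumed schema as the sketch above, that change amounts to slowing the cron schedule and widening the query window, roughly like this:

```python
# Hypothetical adjustment after the move to a 15-minute interval.
# Crontab entry now fires four times an hour instead of sixty:
#   */15 * * * * /usr/bin/python3 /opt/scheduler/run_pass.py
from datetime import datetime, timedelta

def due_window(interval_minutes=15):
    """Window the pass should cover: everything due within the next interval."""
    now = datetime.utcnow()
    return now.isoformat(), (now + timedelta(minutes=interval_minutes)).isoformat()

# The SELECT from the earlier sketch now uses this wider window, and each
# matching post is handed off for delivery at its exact scheduled time
# rather than being published immediately.
```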

They also moved processing to a pool of utility servers that receive post intents via a message queue, Amazon’s SQS, although all of the intent scheduling is still handled by a single cron job.
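A hand-off like that could look something like the following boto3 sketch; the queue URL, message shape, and delay strategy are guesses rather than Buffer’s implementation. It’s worth noting that SQS caps per-message delays at 900 seconds (15 minutes), which happens to line up with a 15-minute lookahead window.

```python
# Hypothetical hand-off of a due post to a utility-server pool via SQS.
# Queue URL and message shape are illustrative, not Buffer's actual values.
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/post-intents"

def enqueue_post_intent(post_id, body, seconds_until_due):
    """Publish a post intent; workers in the pool pick it up and deliver it."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"post_id": post_id, "body": body}),
        # SQS allows at most 900 seconds (15 minutes) of per-message delay.
        DelaySeconds=min(max(seconds_until_due, 0), 900),
    )
```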

Despite still being largely dependent on the MVP system, Buffer is comfortably bringing in almost $16M ARR.

https://buffer.baremetrics.com/stats/arr

Dropbox — The file backup app with more than 500M users

In a talk given at Stanford in 2012, Kevin Modzelewski looked back on the first 5 years of Dropbox and the technological constraints the company ran into while trying to run an MVP at scale.

Possibly due to the advancement of technology and ease of access nowadays, most people today would likely build a file storage app on a distributed object store such as Amazon’s S3. You can store large amounts of data cheaply and still have relatively quick access, with metadata kept in a database for lookup.
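As a rough illustration of that split (and not Dropbox’s actual design), storing the bytes in S3 and the lookup metadata in a relational table might look like this; the bucket, table, and column names are made up.

```python
# Hypothetical sketch of the "object store + metadata DB" approach:
# file bytes go to S3, lookup metadata goes to a relational table.
import hashlib
import sqlite3
import boto3

s3 = boto3.client("s3")
BUCKET = "example-file-backup"  # made-up bucket name
db = sqlite3.connect("metadata.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS files "
    "(user_id TEXT, path TEXT, s3_key TEXT, size INTEGER)"
)

def store_file(user_id, path, data: bytes):
    # Content-addressed key: identical contents are stored only once.
    key = hashlib.sha256(data).hexdigest()
    s3.put_object(Bucket=BUCKET, Key=key, Body=data)
    db.execute(
        "INSERT INTO files (user_id, path, s3_key, size) VALUES (?, ?, ?, ?)",
        (user_id, path, key, len(data)),
    )
    db.commit()
```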

However, when Drew Houston, founder of Dropbox, first conceived the idea and applied to YC, Amazon’s S3 had only just been released and the idea of storage-as-a-service was fairly new. Presumably as a result, Dropbox was designed to run on a single machine, responsible for serving the static website, storing metadata in MySQL, and persisting files to its local disk.

https://www.youtube.com/watch?v=PE4gwstWhmc

Later that year, after running out of disk space and pegging the CPU, Drew moved file storage to S3 and migrated the metadata off the single machine onto a dedicated database instance.

https://www.youtube.com/watch?v=PE4gwstWhmc

Airbnb — The apartment sharing app that has 44.8M people sleeping in other people’s homes

Today, Airbnb has an elegantly simple UX and a feature-packed dashboard. But that wasn’t always the case. The apartment-sharing app you’ve come to love once looked like this:

AirBed & Breakfast, now Airbnb

The very first iteration was simply a static website that allowed design conference attendees to email Brian Chesky and Joe Gebbia asking to sleep on an air mattress in their San Francisco apartment.

Following a “successful” launch with 3 happy guests, the roommates enlisted Nathan Blecharczyk to help them build a more robust website, shown above.

The site was still very simple, and it was years before they saw any major growth and began adding more features, such as the ability to “star” properties.

What’s the Right Approach?

Should you push an MVP to its technological limits before redesigning the system? Should you architect a stack that can scale from day one? How big a role does survivorship bias play?

Share your thoughts below.


Published by HackerNoon on 2017/12/04