Every day, we talk to companies that are in the early phases of building out their data infrastructure. A lot of the time, these conversations revolve around which technology to pick for which job. For example, we often get the question “what’s better — Spark or Amazon Redshift?”, or “which one should we be using?”. Spark and Redshift are two very different technologies. It’s not an either/or; it’s more a question of “when do I use what?”. In this post, I’ll lay out some of the differences and when to use which technology.
At the time of this post, if you look under the hood of the most advanced tech start-ups in Silicon Valley, you will likely find both Spark and Redshift. Spark is getting a bit more attention these days because it’s the shiny new toy. But the two cover different use cases (“dishwasher vs. fridge”, as Ricardo Vladimiro puts it).
Here’s a decision-making framework to guide your thinking:
Apache Spark is a data processing engine. With Spark, you write applications that process and analyze large amounts of data, in batch or in real time. There is a general execution engine (Spark Core), and all other functionality is built on top of it.
Source: Databricks
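To make that a bit more tangible, here’s a minimal sketch of what a Spark application can look like, using PySpark’s DataFrame API. The S3 path and column names are made up for illustration.

```python
# Minimal PySpark sketch: read raw data and aggregate it with the DataFrame API.
# The S3 path and the columns ("user_id", "amount") are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spark-intro").getOrCreate()

# Spark distributes the data across the cluster and plans the work on Spark Core.
events = spark.read.json("s3://my-bucket/raw/events/")

totals = (
    events
    .groupBy("user_id")
    .agg(F.sum("amount").alias("total_amount"))
)

totals.show()
spark.stop()
```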
People are excited about Spark for three reasons:
Spark is fast because it distributes data across a cluster and processes that data in parallel. It tries to process data in memory, rather than shuffling things in and out of disk (as, for example, MapReduce does).
Spark is easy because it offers a high level of abstraction, letting you write applications with fewer lines of code. Plus, Scala and R are attractive languages for data manipulation.
Spark is extensible via its pre-built libraries, e.g. for machine learning, streaming apps, or data ingestion. These libraries are either part of Spark or third-party projects.
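As an example of those pre-built libraries, here’s a small sketch that uses Spark’s built-in Structured Streaming module, following the word-count pattern from the Spark documentation. The socket source and host/port are just placeholders.

```python
# Sketch of Spark's built-in Structured Streaming library: the same high-level
# DataFrame API, applied to an unbounded stream. Host and port are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Read a stream of lines from a socket (a toy source used in the Spark docs).
lines = (
    spark.readStream
    .format("socket")
    .option("host", "localhost")
    .option("port", 9999)
    .load()
)

# Split the lines into words and keep a running count, updated as data arrives.
words = lines.select(F.explode(F.split(lines.value, " ")).alias("word"))
counts = words.groupBy("word").count()

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```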
In short, the promise of Spark is to speed up development, make applications more portable and extensible, and make the actual application run faster.
A few more noteworthy points on Spark:
Source: Spark Streaming — Spark 2.1.1 Documentation.
You need to know how to write code to use Spark (the “write applications” part). So the people who use Spark are typically developers.
Amazon Redshift is an analytical database. With Redshift, you run big, complex analytical queries against large datasets, using SQL. Redshift is a managed service provided by Amazon. Raw data flows into Redshift (a process called “ETL”), where it’s processed and transformed at a regular cadence (“transformations” or “aggregations”) or on an ad-hoc basis (“ad-hoc queries”). Loading and transforming data is also referred to as building “data pipelines”.
Source: Amazon Web Services
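To make the “loading and transforming” part concrete, here’s a simple sketch of one pipeline step, using Python and psycopg2 to send a COPY and a transformation query to Redshift. The cluster endpoint, tables, S3 bucket, and IAM role are all placeholders.

```python
# Sketch of a simple Redshift pipeline step: load raw data from S3 with COPY,
# then transform it into an aggregate table. Host, tables, bucket, and IAM role
# are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",
)

with conn, conn.cursor() as cur:
    # Ingest: COPY pulls files from S3 into a staging table, in parallel.
    cur.execute("""
        COPY staging.events
        FROM 's3://my-bucket/raw/events/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        FORMAT AS JSON 'auto';
    """)

    # Transform: aggregate the staged rows into a reporting table.
    cur.execute("""
        INSERT INTO reporting.daily_totals (event_date, user_id, total_amount)
        SELECT trunc(event_ts), user_id, sum(amount)
        FROM staging.events
        GROUP BY 1, 2;
    """)

conn.close()
```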
People are excited about Redshift for three reasons:
Redshift is fast because its massively parallel processing (MPP) architecture distributes and parallelizes queries. Redshift allows for high query concurrency and processes queries in memory.
Redshift is easy because it can ingest structured, semi-structured and unstructured datasets (via S3 or DynamoDB) up to a petabyte and beyond, and you can then slice ’n dice that data any way you can imagine with SQL.
Redshift is cheap because you can store data for an annual fee of $935/TB (with 3-year reserved instance pricing). That price point is unheard of in the world of data warehousing.
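To illustrate the “slice ’n dice with SQL” part, here’s a sketch of the kind of ad-hoc analytical query Redshift handles well, again with made-up table and column names. Any SQL client works; I’m using psycopg2 here to keep the examples consistent.

```python
# The kind of "big, complex" analytical query Redshift's MPP architecture handles
# well: a join between a large fact table and a dimension table, aggregated by
# segment and month. Table and column names are made up for illustration.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="analyst",
    password="...",
)

AD_HOC_QUERY = """
    SELECT c.segment,
           date_trunc('month', o.order_ts) AS order_month,
           count(*)                        AS orders,
           sum(o.amount)                   AS revenue
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    WHERE o.order_ts >= dateadd(year, -1, current_date)
    GROUP BY 1, 2
    ORDER BY 1, 2;
"""

with conn, conn.cursor() as cur:
    cur.execute(AD_HOC_QUERY)
    for segment, month, orders, revenue in cur.fetchall():
        print(segment, month, orders, revenue)

conn.close()
```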
In short, the promise of Redshift is to make data warehousing cheaper, faster, and easier. You can analyze much bigger and more complex datasets than ever before, and there’s a rich ecosystem of tools that work with Redshift.
A few more noteworthy points about Redshift:
Source: Automating Analytic Workflows on AWS
You need to know how to write SQL queries to use Redshift (the “run big, complex queries” part). So the people who use Redshift are typically analysts or data scientists.
In summary, one way to think about Spark and Redshift is to distinguish them by what they are, what you do with them, how you interact with them, and who the typical user is.
Source: image created for this blog post by intermix
I’ve hinted at how you see both Spark and Redshift deployed. That gets us to data architecture.
In very simple terms, you can build an application with Spark, and then use Redshift both as a source and a destination for data.
Why would you do that? A key reason is the difference between Spark and Redshift in the way they process data, and how long it takes them to produce a result.
A highly simplified example: fraud detection. You could build an app with Spark that detects fraud in real time from, say, a stream of bitcoin transactions. Given its near-real-time character, Redshift would not be a great fit in this case.
But let’s say you want more signals for your fraud detection, for better predictions. You could load data from Spark into Redshift and join it there with historical data on fraud patterns. You can’t do that in real time; the result would come too late for you to block the transaction. So you use Spark to, say, block a transaction in real time, and then wait for the result from Redshift to decide whether to keep blocking it, send it to a human for verification, or approve it.
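To sketch what that hand-off from Spark to Redshift could look like, the snippet below writes a Spark DataFrame into a Redshift staging table over JDBC. It assumes the Amazon Redshift JDBC driver is on Spark’s classpath; the cluster URL, table, and credentials are placeholders, and in the real fraud app the DataFrame would come from the streaming pipeline rather than from S3.

```python
# Sketch of Spark feeding Redshift: score transactions in Spark, then write the
# scored records to Redshift for the slower, historical analysis. Assumes the
# Amazon Redshift JDBC driver jar is on Spark's classpath (the driver class name
# depends on the driver version); all names and credentials are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fraud-to-redshift").getOrCreate()

# In the real app this DataFrame would come from the real-time scoring job;
# here it is simply read back from S3 for illustration.
scored = spark.read.parquet("s3://my-bucket/scored-transactions/")

(
    scored.write
    .format("jdbc")
    .option("url", "jdbc:redshift://my-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/analytics")
    .option("dbtable", "staging.scored_transactions")
    .option("user", "etl_user")
    .option("password", "...")
    .option("driver", "com.amazon.redshift.jdbc42.Driver")
    .mode("append")
    .save()
)
```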
In December 2017, the Amazon Big Data Blog had another example of using both Spark and Redshift: “Powering Amazon Redshift Analytics with Apache Spark and Amazon Machine Learning”. The post covers how to build a predictive app that tells you how likely a flight is to be delayed. The prediction is based on factors such as the time of day or the airline carrier, using multiple data sources and processing them across Spark and Redshift.
You can see how the separation between “apps” and “data warehousing” we drew at the start of this post is, in reality, shifting or even merging. That takes us to the final part of this post: data engineering.
The border between developers and business intelligence analysts / data scientists is fading. That has given rise to a new occupation: data engineering. I’ll use a definition of data engineering by Maxime Beauchemin:
“In relation to previously existing roles, the data engineering field [is] a superset of business intelligence and data warehousing that brings more elements from software engineering, [and it] integrates the operation of ‘big data’ distributed systems”.
Spark is such a “big data” distributed system, and Redshift is the data warehousing part. Data engineering is the discipline that brings the two together. That’s because you see “code” making its way into data warehousing. Code allows you to author, schedule, and monitor data pipelines that feed into Redshift, including the transformations on the data once it sits inside your cluster. And you’ll very likely have to ingest data from Spark. The trend toward “code” in warehousing implies that knowing SQL is no longer sufficient. You need to know how to write code. Hence the “data engineer”.
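To give a flavor of what that code looks like, here’s a minimal sketch of a pipeline definition in Apache Airflow, the open-source workflow scheduler Maxime Beauchemin started. The task names and the two Python callables are hypothetical placeholders for the load and transform steps described above.

```python
# Minimal Apache Airflow sketch: code that authors, schedules, and monitors a
# pipeline which loads Spark output into Redshift and then transforms it.
# The DAG id, task names, and callables are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_spark_output_into_redshift():
    """Run a COPY from S3 into a Redshift staging table (see the psycopg2 sketch above)."""


def transform_inside_redshift():
    """Run the SQL transformations on the data once it sits inside the cluster."""


with DAG(
    dag_id="redshift_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load = PythonOperator(task_id="load", python_callable=load_spark_output_into_redshift)
    transform = PythonOperator(task_id="transform", python_callable=transform_inside_redshift)

    # The scheduler runs and monitors these steps in order.
    load >> transform
```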
This post is already way too long, but I hope it provides a useful summary of how to think about your data stack. For your big data architecture, you will likely end up using both Spark and Redshift, each one fulfilling the specific use case it’s best suited for.