Apache Spark is generally known as a fast, general-purpose, open-source engine for big data processing, with built-in modules for streaming, SQL, machine learning and graph processing. It can speed up analytic applications by up to 100 times compared to other technologies on the market today. You can interface with Spark from Python through "PySpark", the Spark Python API that exposes the Spark programming model to Python.

Even though working with Spark will remind you in many ways of working with Pandas DataFrames, you'll also see that it can be tough getting familiar with all the functions that you can use to query, transform and inspect your data. What's more, if you've never worked with any other programming language or if you're new to the field, it might be hard to distinguish between functions. Let's face it: map() and flatMap() are different enough, but it might still come as a challenge to decide which one you really need when you're faced with them in your analysis. Or what about other functions, like reduce() and reduceByKey()?

Even though the documentation is very elaborate, it never hurts to have a cheat sheet by your side, especially when you're just getting into it.

Download the cheat sheet here.

This PySpark cheat sheet covers the basics, from initializing Spark and loading your data, to retrieving RDD information, sorting, filtering and sampling your data. But that's not all: you'll also see that topics such as repartitioning, iterating, merging, saving your data and stopping the SparkContext are included in the cheat sheet.

Note that the examples in the document use small data sets to illustrate the effect of specific functions on your data. In real-life data analysis, you'll be using Spark to analyze big data.

Are you hungry for more? Don't miss our other Python cheat sheets for data science, which cover topics such as Python basics, Numpy, Pandas, Pandas Data Wrangling and much more!

Originally published at www.datacamp.com.
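As a quick illustration of the map()/flatMap()/reduceByKey() distinction mentioned above, here is a minimal plain-Python sketch that mimics the semantics of each operation on a local list instead of a distributed RDD. The `rdd.`-style comments show what the rough PySpark equivalents would look like; the sample sentences are made up for the example.

```python
from collections import Counter
from functools import reduce

# A tiny in-memory "dataset" standing in for an RDD of text lines.
data = ["hello world", "hello spark"]

# map(): exactly one output element per input element.
# Rough PySpark equivalent: rdd.map(lambda line: line.split())
mapped = [line.split() for line in data]
print(mapped)       # [['hello', 'world'], ['hello', 'spark']]

# flatMap(): the per-element outputs are flattened into one collection.
# Rough PySpark equivalent: rdd.flatMap(lambda line: line.split())
words = [word for line in data for word in line.split()]
print(words)        # ['hello', 'world', 'hello', 'spark']

# reduceByKey(): combine values per key -- here, a word count.
# Rough PySpark equivalent:
#   rdd.flatMap(str.split).map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)
counts = dict(Counter(words))
print(counts)       # {'hello': 2, 'world': 1, 'spark': 1}

# reduce(): collapse the whole dataset to a single value.
# Rough PySpark equivalent:
#   rdd.map(lambda line: len(line.split())).reduce(lambda a, b: a + b)
total_words = reduce(lambda a, b: a + b, (len(line.split()) for line in data))
print(total_words)  # 4
```

The rule of thumb this illustrates: map() preserves the one-to-one shape of the data, flatMap() merges all per-element outputs into a single flat collection, and reduceByKey() is reduce() applied independently within each key's group.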