
Get A Quick Start With PySpark And Spark-Submit

by Amir Pupko, August 27th, 2018

We just released a new open source boilerplate template to help you (or any Spark user) run spark-submit commands smoothly, handling things like dependencies, project source code and more.

TLDR: Here is an open source template to help you get started: https://github.com/Soluto/spark-submit-with-pyspark-template

At Soluto, creating ETL (Extract, Transform, Load) jobs is part of our data scientists' day-to-day work. Our main tool for this is Spark, specifically PySpark with spark-submit.

Spark is used for distributed computing on large-scale datasets. spark-submit helps you launch your application on your cluster.
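
For example, launching a PySpark job on a YARN cluster might look something like this (the script name and argument are placeholders, not the template's actual layout):

    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      main.py --date 2018-08-27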

Here are some examples of jobs we run daily at Soluto:

  • Creating offline content recommendations for users
  • Aggregating single events into more logical tables. As part of our service we offer tech support via chat messaging; instead of keeping multiple message events for a single support session, we create a SessionsTable with one session entity that holds all the aggregated information of a single chat session

Some of the basic needs when using Spark for ETL jobs (a minimal PySpark sketch covering these follows the list):

  • Passing arguments
  • Creating a Spark context and SQL context
  • Loading your project source code (src directory)
  • Loading pip modules (with simple requirements file)
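
Here is a minimal sketch of what that looks like in plain PySpark. The file name main.py and the --date argument are illustrative assumptions, not the template's actual API:

    # main.py -- minimal ETL entry point (illustrative, not the template's code)
    import argparse

    from pyspark.sql import SparkSession


    def main():
        # Passing arguments
        parser = argparse.ArgumentParser()
        parser.add_argument("--date", required=True)  # e.g. the partition date to process
        args = parser.parse_args()

        # Creating the Spark context and SQL context
        # (SparkSession wraps both; spark.sparkContext is the SparkContext)
        spark = SparkSession.builder.appName("my-etl-job").getOrCreate()
        sc = spark.sparkContext

        # Your ETL logic goes here, for example:
        df = spark.sql("SELECT '{}' AS run_date".format(args.date))
        df.show()

        spark.stop()


    if __name__ == "__main__":
        main()

Project source code and pip dependencies are typically shipped to the executors with spark-submit's --py-files flag (for example as zip archives), which is part of what the template automates for you.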

We created a simple template to help you get started running ETL jobs with PySpark (both via spark-submit and the interactive shell): it creates the Spark context and SQL context, handles simple command line arguments, and loads all your dependencies (your project source code and third party requirements).
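
If you want to cover the dependency part yourself, a common pattern is to install the requirements into a folder, zip it, and pass the archive to spark-submit. This works for pure-Python packages and is a general approach, not necessarily how the template does it:

    pip install -r requirements.txt -t deps
    cd deps && zip -r ../deps.zip . && cd ..
    spark-submit --py-files deps.zip main.py --date 2018-08-27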

So if you’re starting a new Spark project, “Fork” it on GitHub and enjoy Sparking it up!

Please feel free to share any thoughts, open issues and contribute code!