Jonathan Dinu – Building Spark Applications LiveLessons

Building Spark Applications LiveLessons provides data scientists and developers with a practical introduction to the Apache Spark framework using Python, R, and SQL. It also covers best practices for developing scalable Spark applications for predictive analytics in the context of a data scientist's standard workflow.

In this video training, Jonathan starts with a brief history of Spark itself and shows you how to get started programming in a Spark environment on a laptop. Taking an application- and code-first approach, he then covers the Python, R, and SQL APIs to show how Spark makes large-scale data analysis accessible through languages familiar to data scientists and analysts alike. With the basics covered, the videos move into a real-world case study showing you how to explore data, process text, and build models with Spark. Throughout the process, Jonathan exposes the internals of the Spark framework itself to show you how to write better application code, optimize performance, and set up a cluster to fully leverage the distributed nature of Spark. After watching these videos, data scientists and developers will feel confident building an end-to-end application with Spark to perform machine learning and data analysis at scale!

Skill Level
– Beginning/Intermediate

What You Will Learn
– How to install and set up a Spark environment locally and on a cluster
– The differences between and the strengths of the Python, R, and SQL programming interfaces
– How to build a machine learning model for text
– Common data science use cases that Spark is especially well-suited to solve
– How to tune a Spark application for performance
– The internals of the Spark framework and its execution model
– How to use Spark in a data science application workflow
– The basics of the larger Spark ecosystem

Who Should Take This Course
– Practicing data scientists who already use Python or R and want to learn how to scale up their analyses with Spark.
– Data engineers who already use Java or Scala for Spark but want to learn the Python, R, and SQL APIs and understand how Spark can be used to solve data science problems.

Course Requirements
– Basic understanding of programming.
– Familiarity with the data science process and machine learning is a plus.

Table of Contents

Lesson 1, Introduction to the Spark Environment, introduces Spark and provides context for the history and motivation behind the framework. This lesson covers how to install and set up Spark locally, how to work with the Spark REPL and Jupyter notebooks, and the basics of programming with Spark.
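
To make the "first program" idea concrete, here is a minimal sketch of the kind of code Lesson 1 builds toward: a word count run through a local SparkContext. The sample data and variable names are illustrative, not taken from the course.

```python
from pyspark import SparkConf, SparkContext

# Run Spark locally, using all available cores
conf = SparkConf().setMaster("local[*]").setAppName("FirstSparkApp")
sc = SparkContext(conf=conf)

# A two-line stand-in for a real text file (sc.textFile would read one)
lines = sc.parallelize(["spark makes big data simple",
                        "spark runs on a laptop or a cluster"])

counts = (lines.flatMap(lambda line: line.split())  # split each line into words
               .map(lambda word: (word, 1))         # pair each word with a 1
               .reduceByKey(lambda a, b: a + b))    # sum the 1s per word

print(counts.collect())
sc.stop()
```
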
Lesson 2, Spark Programming APIs, covers each of the various Spark programming interfaces. This lesson highlights the differences between and the tradeoffs of the Python (PySpark), R (SparkR), and SQL (Spark SQL and DataFrames) APIs as well as typical workflows for which each is best suited.
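
As a rough illustration of the tradeoffs Lesson 2 covers, the sketch below runs the same query through the DataFrame API and through Spark SQL. It assumes the modern SparkSession entry point; the course, recorded against an earlier Spark release, may use SQLContext instead.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("APIComparison").getOrCreate()

df = spark.createDataFrame([("alice", 34), ("bob", 29), ("carol", 41)],
                           ["name", "age"])

# DataFrame API: method chaining, familiar to pandas/dplyr users
df.filter(df.age > 30).select("name").show()

# Spark SQL: register the same data as a view and query it declaratively
df.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people WHERE age > 30").show()

spark.stop()
```
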
Lesson 3, Your First Spark Application, walks you through a case study using DonorsChoose.org data, showing how Spark fits into the typical data science workflow. This lesson covers how to perform exploratory data analysis at scale, apply natural language processing techniques, and write an implementation of the k-means algorithm for unsupervised learning on text data.
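
The following is a simplified sketch of what an RDD-based k-means loop can look like; the course's actual implementation, which operates on text features, will differ. Random 2-D points stand in for document vectors, and the fixed iteration count is an illustrative simplification.

```python
import numpy as np
from pyspark import SparkContext

sc = SparkContext("local[*]", "KMeansSketch")

# Random 2-D points stand in for the text feature vectors used in the lesson
points = sc.parallelize([np.random.rand(2) for _ in range(1000)]).cache()
centroids = points.takeSample(False, 3)  # k = 3 initial centers

def closest(point, centers):
    """Index of the nearest centroid by Euclidean distance."""
    return min(range(len(centers)),
               key=lambda i: np.linalg.norm(point - centers[i]))

for _ in range(10):  # fixed iteration count; real code would test convergence
    # Assign each point to its nearest centroid, then average each cluster
    sums = (points.map(lambda p: (closest(p, centroids), (p, 1)))
                  .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1])))
    centroids = [total / n for _, (total, n) in
                 sorted(sums.collect(), key=lambda kv: kv[0])]

print(centroids)
sc.stop()
```
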
Lesson 4, Spark Internals, peels back the layers of the framework and walks you through how Spark executes code in a distributed fashion. This lesson starts with a primer on distributed systems theory before diving into the Spark execution context, the details of RDDs, and how to run Spark in cluster mode on Amazon EC2. The lesson finishes with best practices for monitoring and tuning the performance of a Spark application.
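
A few of the tuning levers Lesson 4 discusses can be sketched in a handful of lines: controlling partition counts, caching RDDs that are reused, and inspecting an RDD's lineage. The numbers below are arbitrary examples, not recommendations from the course.

```python
from pyspark import SparkContext, StorageLevel

sc = SparkContext("local[4]", "TuningSketch")

rdd = sc.parallelize(range(1000000), numSlices=8)  # set the partition count explicitly
print(rdd.getNumPartitions())                      # -> 8

# Persist an RDD that feeds more than one action so it is computed only once
squares = rdd.map(lambda x: x * x).persist(StorageLevel.MEMORY_ONLY)
print(squares.count())  # first action materializes and caches the partitions
print(squares.sum())    # second action reads from the cache instead of recomputing

# The lineage Spark keeps so it can recompute lost partitions
print(squares.toDebugString().decode())

sc.stop()
```
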
Lesson 5, Advanced Applications, takes you through a KDD Cup competition, showing you how to leverage Spark's higher-level machine learning libraries (MLlib and spark.ml). The lesson covers the basics of machine learning theory, shows you how to evaluate the performance of models through cross-validation, and demonstrates how to build a machine learning pipeline with Spark. The lesson finishes by showing you how to serialize and deploy models for use in a production setting.
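
In the spirit of Lesson 5, the sketch below assembles a spark.ml Pipeline for text classification, tunes it with cross-validation, and saves the fitted model. The toy data, parameter grid, and output path are all illustrative assumptions, not the course's competition setup.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.master("local[*]").appName("PipelineSketch").getOrCreate()

# A tiny, made-up labeled dataset; the lesson uses real competition data
train = spark.createDataFrame([
    ("spark is fast and scalable", 1.0),
    ("distributed computing with spark", 1.0),
    ("pipelines make modeling repeatable", 1.0),
    ("my query never finished", 0.0),
    ("the job crashed again", 0.0),
    ("i dislike slow queries", 0.0),
], ["text", "label"])

# Chain feature extraction and the classifier into one Pipeline estimator
tokenizer = Tokenizer(inputCol="text", outputCol="words")
tf = HashingTF(inputCol="words", outputCol="features")
lr = LogisticRegression(maxIter=10)
pipeline = Pipeline(stages=[tokenizer, tf, lr])

# Grid-search the regularization strength with k-fold cross-validation
grid = ParamGridBuilder().addGrid(lr.regParam, [0.01, 0.1]).build()
cv = CrossValidator(estimator=pipeline, estimatorParamMaps=grid,
                    evaluator=BinaryClassificationEvaluator(), numFolds=2)
model = cv.fit(train)

# Serialize the best fitted pipeline so it can be loaded in production
model.bestModel.write().overwrite().save("/tmp/spark-text-pipeline")
spark.stop()
```
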
