Apache Spark is a distributed computing platform built on extensibility: Spark's APIs make it easy to combine input from many data sources and process it using diverse programming languages and algorithms to build a data application. R is one of the most powerful languages for data science and statistics, so it makes a lot of sense to connect R to Spark. Fortunately, R's rich language features enable simple APIs for calling Spark from R that look similar to running R on local data sources. With a bit of background about both systems, you will be able to invoke massive computations in Spark or run your R code in parallel from the comfort of your favorite R programming environment.
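To give a flavor of what that looks like, here is a minimal sketch using the sparklyr package; it assumes sparklyr and a local Spark installation are available (see spark.rstudio.com for setup) and is meant as an illustration rather than an example from the book itself:

    library(sparklyr)
    library(dplyr)

    sc <- spark_connect(master = "local")  # connect to a local Spark instance
    cars <- copy_to(sc, mtcars)            # copy an R data frame into Spark

    # Familiar dplyr syntax, but the computation runs in Spark:
    cars %>%
      group_by(cyl) %>%
      summarise(avg_mpg = mean(mpg, na.rm = TRUE))

    spark_disconnect(sc)                   # close the connection when done

Notice that the pipeline reads exactly like dplyr code on a local data frame; sparklyr translates it into a Spark job behind the scenes.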
I hope that you enjoy this book and use it to scale up your R workloads and connect them to the capabilities of the broader Spark ecosystem. And because all of the infrastructure here is open source, don't hesitate to give the developers feedback about making these tools better.
Preface
In a world where information is growing exponentially, leading tools like Apache Spark provide support to solve many of the relevant problems we face today. From companies looking for ways to improve based on data-driven decisions, to research organizations solving problems in health care, finance, education, and energy, Spark enables analyzing much more information faster and more reliably than ever before.
Various books have been written for learning Apache Spark; for instance, Spark: The Definitive Guide is a comprehensive resource, and Learning Spark is an introductory book meant to help users get up and running (both are from O'Reilly). However, as of this writing, there is neither a book for learning Apache Spark using the R computing language nor a book specifically designed for the R user or the aspiring R user.
There are some resources online to learn Apache Spark with R, most notably the spark.rstudio.com site and the Spark documentation site at spark.apache.org. Both sites are great online resources; however, the content is not intended to be read from start to finish and assumes you, the reader, have some knowledge of Apache Spark, R, and cluster computing.
The goal of this book is to help anyone get started with Apache Spark using R. Additionally, because the R programming language was created to simplify data analysis, it is also our belief that this book provides the easiest path for you to learn the tools used to solve data analysis problems with Spark. The first chapters provide an introduction to help anyone get up to speed with these concepts and present the tools required to work on these problems on your own computer. We then quickly ramp up to relevant data science topics, cluster computing, and advanced topics that should interest even the most experienced users.
Therefore, this book is intended to be a useful resource for a wide range of users, from beginners curious to learn Apache Spark, to experienced readers seeking to understand why and how to use Apache Spark from R.
This book has the following general outline:
Introduction
In the first two chapters, you learn about Apache Spark, R, and the tools to perform data analysis with Spark and R.
Analysis
In this part, you learn how to analyze, explore, transform, and visualize data in Apache Spark with R.
Modeling
In this part, you learn how to create statistical models with the purpose of extracting information, predicting outcomes, and automating this process in production-ready workflows.
Scaling
Up to this point, the book has focused on performing operations on your personal computer and with limited data formats. This part introduces the distributed computing techniques required to perform analysis and modeling across many machines and data formats, in order to tackle the large-scale data and computation problems for which Apache Spark was designed.
Extensions
This part describes optional components and extended functionality applicable to specific, relevant use cases. You learn about alternative modeling frameworks, graph processing, preprocessing data for deep learning, geospatial analysis, and genomics at scale.
Advanced
The book closes with a set of advanced chapters; these will be of greatest interest to advanced users. However, by the time you reach this section, the content won't seem as intimidating; instead, these chapters will be just as relevant, useful, and interesting as the previous ones.
The first group of chapters provides a gentle introduction to performing data science and machine learning at scale. If you are planning to read this book while also following along with code examples, these are great chapters in which to execute the code line by line. Because these chapters teach all of the concepts using your personal computer, you won't be taking advantage of the multiple computers that Spark was designed to use. But worry not: the next set of chapters will teach this in detail!