Tag Archives: catalyst

In the Code: Spark SQL Query Planning and Execution

If you’ve dug around in the Spark SQL code, whether in catalyst or the core code, you’ve probably seen tons of references to logical and physical plans. These are the core units of query planning, a common concept in database programming. In this post we’re going to go over what these are at a high level and then how they’re represented and used in Spark. The query planning code is especially important because it determines how your Spark SQL query gets turned into actual RDD primitive calls, which has large implications for the performance of your applications.
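
The easiest way to see these plans for yourself is DataFrame.explain. Here’s a minimal sketch (assuming a local Spark 1.x setup with SQLContext; the people table and its columns are made up for illustration) that prints the parsed, analyzed, and optimized logical plans along with the physical plan for a simple query:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object ExplainPlans {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("explain-plans").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // A tiny DataFrame registered as a temp table, just so we have something to query.
    val people = sc.parallelize(Seq(("alice", 30), ("bob", 25))).toDF("name", "age")
    people.registerTempTable("people")

    // explain(extended = true) prints the parsed, analyzed, and optimized logical plans,
    // plus the physical plan that will actually be run as RDD operations.
    sqlContext.sql("SELECT name FROM people WHERE age > 27").explain(true)

    sc.stop()
  }
}

Running this shows how the filter on age is carried through the logical plans and ends up as a physical filter over a scan of the in-memory rows.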

This document is based on the Spark master branch as of early February. It assumes that you’ve got a cursory understanding of the Spark RDD API and some of the lower-level concepts such as partitioning and shuffling (what they are and when they take place).

For higher-level coverage of what we’re going over, I would recommend the Spark SQL whitepaper that was published last year. It covers pretty similar content, but here we’ll dive a little deeper, with examples from the code and more explanation of how execution actually takes place rather than focusing on features.

Continue reading

Writing a Spark Data Source

Note: This is being written as of early December 2015 and assumes the Spark 1.5.2 API. The Data Sources API has been out for a few versions now, but it’s still stabilizing, so some of this may become out of date.

I’m writing this guide as part of my own exploration of how to write a data source using the Spark SQL Data Sources API. We’ll start by exploring the interfaces and features, then dive into two examples: Parquet and Spark JDBC. Finally, we’ll cover an end-to-end example of writing your own simple data source.
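
As a taste of where we’ll end up, here’s a rough sketch of about the smallest possible read-only source against the Spark 1.5 sources API. DefaultSource is the class name the API requires; the CountingRelation name and its "rows" option are made up for this example:

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, RelationProvider, TableScan}
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

// Entry point Spark looks up from the name passed to sqlContext.read.format(...).
// The class must be called DefaultSource.
class DefaultSource extends RelationProvider {
  override def createRelation(
      sqlContext: SQLContext,
      parameters: Map[String, String]): BaseRelation = {
    val rows = parameters.getOrElse("rows", "10").toInt
    new CountingRelation(rows)(sqlContext)
  }
}

// A relation that just produces the integers 0 until `rows` as single-column rows.
class CountingRelation(rows: Int)(@transient val sqlContext: SQLContext)
  extends BaseRelation with TableScan {

  override def schema: StructType =
    StructType(StructField("n", IntegerType, nullable = false) :: Nil)

  // Full table scan: no column pruning or filter pushdown in this sketch.
  override def buildScan(): RDD[Row] =
    sqlContext.sparkContext.parallelize(0 until rows).map(Row(_))
}

A source like this would be loaded with something along the lines of sqlContext.read.format("com.example.counting").option("rows", "5").load(), where the format string is the (hypothetical) package containing DefaultSource.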

Continue reading