[SysDeg] Worksharing Framework and its design – Part 2: Communication method

After getting a basic understanding of Spark and SparkSQL, I came back to my system. The high-level design remains the same as I described two months ago: it is still a client-server model, but the server has changed from a Spark server to a SparkSQL server. I spent roughly two weeks on some coding …
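The excerpt does not show the framework's actual interfaces, but as a rough sketch of the client-server shape described here, the server side might wrap a single SQLContext and execute SQL strings sent by clients. All class and method names below are hypothetical, not the framework's real API.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{DataFrame, SQLContext}

// Hypothetical sketch only: a server that owns one SparkContext/SQLContext and
// runs SQL queries received from clients. The real framework's interfaces are
// not shown in the excerpt above; these names are illustrative.
class SparkSQLServer {
  private val sc = new SparkContext(
    new SparkConf().setAppName("WorksharingServer").setMaster("local[*]"))
  private val sqlContext = new SQLContext(sc)

  // A client request is assumed to carry a SQL string; the server answers with a DataFrame.
  def handleQuery(query: String): DataFrame = sqlContext.sql(query)

  def shutdown(): Unit = sc.stop()
}
```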

[Arch] SparkSQL Internals – Part 2: SparkSQL Data Flow

The SparkSQL paper provides a very nice figure of the SparkSQL data flow. I have worked with Apache Pig for more than a year, so I thought it would be better to put them all together: I created a new figure that covers the data flow of Hive, Pig, and SparkSQL. I know a little …

[Arch] SparkSQL Internals – Part 1: SQLContext

I assume that you've already read these documents about SparkSQL. Things you should keep in mind: the DataFrame API, where relational processing meets procedural processing; and Catalyst, an extensible query optimizer that works on trees and rules, provides lazy optimization, and is easy to extend with new rules. In this post, I will introduce to you …
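To make those two points concrete, here is a minimal Spark 1.x sketch of the SQLContext entry point, mixing the relational DataFrame API with ordinary procedural Scala. The sample data, table name, and application name are made up for illustration.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object SQLContextExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("SQLContextExample").setMaster("local[*]"))
    // SQLContext is the SparkSQL entry point in Spark 1.x: it owns the catalog
    // and drives Catalyst's analysis and optimization of DataFrame queries.
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // Procedural side: build an RDD with plain Scala, then lift it to a DataFrame.
    val people = sc.parallelize(Seq(("alice", 34), ("bob", 45), ("carol", 29)))
      .toDF("name", "age")

    // Relational side: the same data queried declaratively, either through
    // DataFrame operators or a SQL string; both go through Catalyst.
    people.filter($"age" >= 30).show()
    people.registerTempTable("people")
    sqlContext.sql("SELECT name FROM people WHERE age >= 30").show()

    sc.stop()
  }
}
```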