[Arch] SparkSQL Internals – Part 2: SparkSQL Data Flow

The SparkSQL paper provides a very nice figure of the SparkSQL data flow. Since I have worked with Apache Pig for more than a year, I thought it would be better to put them all together, so I created a new figure that covers the data flow of Hive, Pig, and SparkSQL. I know a little … Continue reading [Arch] SparkSQL Internals – Part 2: SparkSQL Data Flow


[Sysdeg] Moving to SparkSQL, why not?

Maybe you still remember the draft design of the system I proposed here. The reason I delayed posting part 2, which mostly focuses on technical details, is that Spark is new to me, so I need time to dig deeper into it. However, I don't think the design will change much. Come back to … Continue reading [Sysdeg] Moving to SparkSQL, why not?

My internship and some documents on Apache Spark

I heard about what I would be doing in my internship six months ago. Well, to be precise, it was right after I finished my summer internship. The task is designing and building a worksharing framework (scan, computation) for Pig queries on Hadoop MapReduce, mostly focused on the GROUPING SETS operation. Six months later, my internship still … Continue reading My internship and some documents on Apache Spark
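To make the GROUPING SETS idea concrete, here is a toy, self-contained Java sketch (the class and field names are purely illustrative, not code from my framework): several group-by aggregations are requested over the same input, and all of them can be answered from a single scan of the data, which is exactly the kind of sharing such a framework aims for.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: GROUPING SETS asks for totals per (store), per (product),
// and per (store, product) over the same input. One pass over the data can
// feed all three aggregations instead of re-reading the input once per grouping.
public class GroupingSetsSketch {
    record Sale(String store, String product, long amount) {}

    public static void main(String[] args) {
        List<Sale> sales = List.of(
                new Sale("s1", "p1", 10),
                new Sale("s1", "p2", 5),
                new Sale("s2", "p1", 7));

        Map<String, Long> byStore = new HashMap<>();
        Map<String, Long> byProduct = new HashMap<>();
        Map<String, Long> byStoreAndProduct = new HashMap<>();

        // A single scan of the input updates every grouping set.
        for (Sale s : sales) {
            byStore.merge(s.store(), s.amount(), Long::sum);
            byProduct.merge(s.product(), s.amount(), Long::sum);
            byStoreAndProduct.merge(s.store() + "/" + s.product(), s.amount(), Long::sum);
        }

        System.out.println("by store:          " + byStore);
        System.out.println("by product:        " + byProduct);
        System.out.println("by store, product: " + byStoreAndProduct);
    }
}
```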

Some experiences on building my own Pig

Pig is a high-level platform for creating MapReduce programs that run on Hadoop. The language for this platform is called Pig Latin. Pig Latin abstracts the programming away from the Java MapReduce idiom into a higher-level notation, similar to what SQL provides for RDBMSs. Pig Latin can be extended using UDFs … Continue reading Some experiences on building my own Pig
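As a minimal sketch of what such a UDF looks like (the UPPER class is the classic textbook example, not code from my own build), a Pig eval function is a Java class extending EvalFunc and implementing exec:

```java
import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

// A simple Pig eval UDF: it receives one input tuple per record
// and returns the first field upper-cased.
public class UPPER extends EvalFunc<String> {
    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
            return null;
        }
        return ((String) input.get(0)).toUpperCase();
    }
}
```

In a script the compiled jar would be registered with something like REGISTER myudfs.jar; and the function called as myudfs.UPPER(name) inside a FOREACH … GENERATE statement (the jar and package names here are just placeholders).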