Apache Spark Introduction
Spark is a fast and general cluster computing system for Big Data. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools, including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.

Spark Core is the foundation of the overall project. It provides distributed task dispatching, scheduling, and basic I/O functionalities, exposed through an application programming interface centered on the RDD (Resilient Distributed Dataset) abstraction.
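To make the RDD abstraction concrete, here is a minimal word-count sketch in Scala. The input strings and the `local[*]` master are only for illustration; a real job would be packaged and submitted to a cluster with spark-submit.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Local mode for illustration; on a cluster the master URL comes from spark-submit.
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // RDD transformations only build a computation graph; nothing runs until an action.
    val lines = sc.parallelize(Seq("spark is fast", "spark is general"))
    val counts = lines
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.collect().foreach(println) // action: materializes the result on the driver
    sc.stop()
  }
}
```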
Spark provides interactive shells in the Python and Scala programming languages.
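As a sketch of what an interactive session looks like, the following lines can be typed at the Scala shell prompt (started with `./bin/spark-shell`), where a SparkContext is already bound to the variable `sc`:

```scala
// Inside spark-shell: `sc` is pre-defined, no setup code needed.
val nums = sc.parallelize(1 to 100)   // distribute a local collection
val evens = nums.filter(_ % 2 == 0)   // lazy transformation
println(evens.count())                // action: triggers the computation, prints 50
```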
Spark uses the Hadoop core library to talk to HDFS and other Hadoop-supported storage systems. Because the protocols have changed in different versions of Hadoop, you must build Spark against the same version that your cluster runs.
Spark SQL
Spark SQL introduces a data abstraction called DataFrames and provides SQL language support, with command-line interfaces and an ODBC/JDBC server.
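A minimal sketch of the programmatic side of Spark SQL, assuming a local SparkSession; the table name `people` and its rows are made up for illustration:

```scala
import org.apache.spark.sql.SparkSession

object SqlExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SqlExample")
      .master("local[*]") // local mode for illustration
      .getOrCreate()
    import spark.implicits._

    // Build a small DataFrame and expose it to SQL as a temporary view.
    val people = Seq(("Alice", 34), ("Bob", 29)).toDF("name", "age")
    people.createOrReplaceTempView("people")

    // Query it with plain SQL; the result is again a DataFrame.
    spark.sql("SELECT name FROM people WHERE age > 30").show()

    spark.stop()
  }
}
```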
Spark Streaming
Spark Streaming leverages Spark Core’s fast scheduling capability to perform streaming analytics. It ingests data in mini-batches and performs RDD transformations on those mini-batches of data.
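The sketch below shows this mini-batch pattern with the DStream API. It assumes text lines arrive on a local socket (e.g. one opened with `nc -lk 9999`) and counts words in 5-second batches; the host and port are placeholders.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    // At least two local threads: one for the receiver, one for processing.
    val conf = new SparkConf().setAppName("StreamingWordCount").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5)) // 5-second mini-batches

    // Assumed source: text lines arriving on a local socket.
    val lines = ssc.socketTextStream("localhost", 9999)
    val counts = lines
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _) // ordinary RDD-style transformations, applied per batch

    counts.print()        // emit the counts of each mini-batch
    ssc.start()
    ssc.awaitTermination()
  }
}
```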
MLlib Machine Learning Library
Spark MLlib is a distributed machine learning framework on top of Spark Core. Due in large part to the distributed memory-based Spark architecture, it runs iterative machine learning workloads significantly faster than disk-based implementations.
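As an illustration, here is a small k-means clustering job using MLlib's DataFrame-based API; the four 2-dimensional points are synthetic, and a real job would load a distributed dataset instead.

```scala
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

object KMeansExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("KMeansExample")
      .master("local[*]")
      .getOrCreate()

    // Tiny synthetic dataset: two obvious clusters near (0,0) and (9,9).
    val points = Seq(
      Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.1),
      Vectors.dense(9.0, 9.0), Vectors.dense(9.1, 9.1)
    )
    val data = spark.createDataFrame(points.map(Tuple1.apply)).toDF("features")

    // Fit a 2-cluster model; iterations run over in-memory distributed data.
    val model = new KMeans().setK(2).setSeed(1L).fit(data)
    model.clusterCenters.foreach(println)

    spark.stop()
  }
}
```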
GraphX
GraphX is a distributed graph processing framework on top of Apache Spark. GraphX provides two separate APIs for implementing massively parallel algorithms (such as PageRank): a Pregel abstraction and a more general MapReduce-style API.
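A minimal GraphX sketch, assuming a three-vertex toy graph, that runs the built-in PageRank implementation:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx.{Edge, Graph}

object PageRankExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("PageRankExample").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // A tiny directed cycle: vertices carry a name, edges an (unused) weight.
    val vertices = sc.parallelize(Seq((1L, "a"), (2L, "b"), (3L, "c")))
    val edges    = sc.parallelize(Seq(Edge(1L, 2L, 1), Edge(2L, 3L, 1), Edge(3L, 1L, 1)))
    val graph    = Graph(vertices, edges)

    // Run PageRank until ranks change by less than the given tolerance.
    val ranks = graph.pageRank(0.0001).vertices
    ranks.collect().foreach { case (id, rank) => println(s"$id: $rank") }

    sc.stop()
  }
}
```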
References
Wiki – https://en.wikipedia.org/wiki/Apache_Spark
Official Site – http://spark.apache.org
Spark Tutorial – TutorialKart