Big Data Emerging Technologies Training
Big Data Rankings & Products
The first module “Big Data Rankings & Products” focuses on the rankings and market shares of big data hardware,
software, and professional services. This information provides insight into how future industries,
products, services, schools, and government organizations will be influenced by big data technology.
To give a deeper view into the world’s top big data product lines and service types,
the lecture provides an overview of the major big data companies, which include IBM, SAP,
Oracle, HPE, Splunk, Dell, Teradata, Microsoft, Cisco, and AWS. To convey the power of big data technology,
the differences between big data analysis and traditional data analysis are explained.
This is followed by a lecture on the 4 V challenges of big data technology,
which deal with issues in the volume, variety, velocity, and veracity of massive data.
Based on this introductory information, the lecture introduces how Wal-Mart, Amazon, and Citibank
use big data technology to gain global investment insights, locate new stores and factories,
and run real-time recommendation systems.
Big Data & Hadoop
The second module “Big Data & Hadoop” focuses on the characteristics and operations of Hadoop,
the original big data system, which is based on the MapReduce and distributed file system technologies originally published by Google.
The lectures explain the functionality of MapReduce,
HDFS (Hadoop Distributed FileSystem), and the processing of data blocks.
These functions are executed on a cluster of nodes that are assigned the role of NameNode or DataNode,
where the data processing is conducted by the JobTracker and TaskTrackers,
which are explained in the lectures. In addition,
the characteristics of metadata types and the differences
in the data analysis processes of Hadoop and SQL (Structured Query Language) are explained.
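To make the MapReduce model concrete, the sketch below shows a minimal word-count mapper and reducer in Python, following Hadoop Streaming conventions; the script and argument names are illustrative and not part of the course material.

```python
#!/usr/bin/env python3
# Minimal word-count sketch in the MapReduce style (Hadoop Streaming conventions).
import sys


def mapper():
    """Map phase: emit (word, 1) for every word in each input line of a data block."""
    for line in sys.stdin:
        for word in line.strip().split():
            # Key and value are tab-separated, as Hadoop Streaming expects.
            print(f"{word}\t1")


def reducer():
    """Reduce phase: input arrives grouped (sorted) by key; sum the counts per word."""
    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print(f"{current_word}\t{current_count}")
            current_word, current_count = word, int(count)
    if current_word is not None:
        print(f"{current_word}\t{current_count}")


if __name__ == "__main__":
    # Select the role via a command-line argument: "wordcount.py map" or "wordcount.py reduce".
    mapper() if sys.argv[1] == "map" else reducer()
```

Locally, the same flow can be imitated with `cat input.txt | python wordcount.py map | sort | python wordcount.py reduce`, where the `sort` step plays the role of the shuffle between the map and reduce phases.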
Then the Hadoop Release Series is introduced, which includes descriptions of Hadoop YARN (Yet Another Resource Negotiator),
HDFS Federation, and HDFS HA (High Availability) big data technology.
Spark
The third module “Spark” focuses on the operations and characteristics of Spark,
which is currently the most popular big data technology in the world.
The lecture first covers the differences in data analysis characteristics of Spark and Hadoop,
then goes into the features of Spark big data processing based on RDDs (Resilient Distributed Datasets),
Spark Core, Spark SQL, Spark Streaming, MLlib (Machine Learning Library), and GraphX core units.
Details of the Spark DAG (Directed Acyclic Graph) stages and pipelined processes
that are formed from Spark transformations and actions are explained. In particular,
the definition and advantages of lazy transformations and DAG operations are described, along with
the characteristics of Spark variables and serialization.
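As a rough illustration of these concepts, the PySpark sketch below (with made-up sample data) shows transformations that only record lineage in the DAG lazily, and an action that triggers the staged, pipelined execution.

```python
from pyspark.sql import SparkSession

# Build a local Spark session; the master URL would normally point at a cluster
# manager such as Standalone, Mesos, or YARN.
spark = SparkSession.builder.master("local[*]").appName("rdd-lazy-demo").getOrCreate()
sc = spark.sparkContext

lines = sc.parallelize([
    "spark builds a DAG of stages",
    "transformations are lazy",
    "actions trigger execution",
])

# Transformations: nothing is computed yet, Spark only records the lineage (DAG).
words = lines.flatMap(lambda line: line.split())
long_words = words.filter(lambda w: len(w) > 4)
pairs = long_words.map(lambda w: (w, 1))
counts = pairs.reduceByKey(lambda a, b: a + b)

# Action: the DAG is now split into stages, pipelined, and executed on the cluster.
print(counts.collect())

spark.stop()
```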
In addition, Spark cluster operations based on the Mesos, Standalone, and YARN cluster managers are introduced.
Spark ML & Streaming
The fourth module “Spark ML & Streaming” focuses on how Spark ML (Machine Learning)
works and how Spark streaming operations are conducted.
The Spark ML tools include ML algorithms along with featurization, pipeline,
persistence, and utility support, which operate on RDDs (Resilient Distributed Datasets) and DataFrames to extract information from massive datasets.
The lectures explain the characteristics of the DataFrame-based API,
which is the primary ML API in the spark.ml package.
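A minimal sketch of the DataFrame-based spark.ml API is shown below, assuming a toy text-classification dataset; it chains featurization stages and a logistic regression estimator into a single Pipeline.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.master("local[*]").appName("spark-ml-pipeline").getOrCreate()

# Tiny illustrative training DataFrame (label 1.0 = spam, 0.0 = not spam).
training = spark.createDataFrame(
    [("win money now", 1.0),
     ("meeting at noon", 0.0),
     ("cheap money offer", 1.0),
     ("project status update", 0.0)],
    ["text", "label"],
)

# Featurization stages and an estimator, chained into one Pipeline.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashing_tf = HashingTF(inputCol="words", outputCol="features", numFeatures=1000)
lr = LogisticRegression(maxIter=10)
pipeline = Pipeline(stages=[tokenizer, hashing_tf, lr])

# Fitting returns a PipelineModel that can be persisted and reused for prediction.
model = pipeline.fit(training)
model.transform(training).select("text", "prediction").show()

spark.stop()
```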
Spark ML basic statistics algorithms based on correlation and hypothesis testing (p-values)
are first introduced, followed by the Spark ML classification and regression algorithms based
on linear models, naive Bayes, and decision tree techniques. Then the characteristics of Spark Streaming,
streaming input and output, as well as streaming receiver types (which include basic, custom,
and advanced) are explained, followed by how the Spark Streaming process
and DStream (Discretized Stream) enable big data streaming operations for real-time and near-real-time applications.
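The sketch below illustrates a DStream word count in PySpark, assuming a plain TCP socket as the basic receiver source; the host and port are illustrative.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

# One-second micro-batches: Spark Streaming turns the live stream into a DStream,
# i.e. a sequence of small RDDs processed in near real time.
sc = SparkContext("local[2]", "dstream-demo")
ssc = StreamingContext(sc, 1)

# Basic receiver: read lines from a TCP socket (e.g. started with `nc -lk 9999`).
lines = ssc.socketTextStream("localhost", 9999)

# The same transformation style as batch RDDs, applied per micro-batch.
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda w: (w, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()             # start receiving and processing
ssc.awaitTermination()  # run until stopped externally
```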
Storm
The fifth module “Storm” focuses on the characteristics and operations of Storm big data systems.
The lecture first covers the differences in data analysis characteristics of Storm,
Spark, and Hadoop technology. Then the features of Storm big data processing based on the Nimbus,
spouts, and bolts are described, followed by details on Storm streams, supervisors, and ZooKeeper.
Further details on Storm’s reliable and unreliable spouts and bolts are provided, followed
by the advantages of the Storm DAG (Directed Acyclic Graph) and data stream queue management.
In addition, the advantages of using Storm for fast real-time applications, which include real-time analytics,
online ML (Machine Learning), continuous computation,
DRPC (Distributed Remote Procedure Call), and ETL (Extract, Transform, Load), are introduced.
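The plain-Python sketch below does not use the Storm API (real topologies are typically written against Storm’s Java API and submitted to Nimbus); it only simulates the spout-to-bolt tuple flow to make the component roles concrete.

```python
from collections import Counter

# Conceptual simulation only: in a real Storm topology these would be spout/bolt
# components submitted to Nimbus and executed by supervisor worker processes.

def sentence_spout():
    """Spout role: an unbounded source that keeps emitting tuples into the stream."""
    sentences = ["storm processes streams", "spouts emit tuples", "bolts transform tuples"]
    for sentence in sentences:  # a real spout would loop indefinitely over a live source
        yield {"sentence": sentence}


def split_bolt(stream):
    """Bolt role: consume tuples from an upstream component and emit new tuples."""
    for tup in stream:
        for word in tup["sentence"].split():
            yield {"word": word}


def count_bolt(stream):
    """Terminal bolt: maintain running word counts (continuous computation)."""
    counts = Counter()
    for tup in stream:
        counts[tup["word"]] += 1
    return counts


# Wiring the components mirrors a topology DAG: spout -> split bolt -> count bolt.
print(count_bolt(split_bolt(sentence_spout())))
```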
IBM SPSS Statistics Project
The sixth and last module “IBM SPSS Statistics Project” focuses on providing hands-on experience
with one of the most famous and widely used big data statistical analysis systems in the world. First,
the lecture covers how to set up and use IBM SPSS Statistics, and then
describes how IBM SPSS Statistics can be used to gain corporate data analysis experience.
Then two projects are conducted using the IBM SPSS Statistics big data system, in which data processing and statistical analysis are performed.
The projects are designed so that students can discover new ways to use
and analyze datasets, draw charts of the relationships between them,
and compare the statistical results obtained with IBM SPSS Statistics.