Cloudera Developer Training for Spark and Hadoop (CDTSH)

Price: $2,995.00

New Age Technologies has been delivering Authorized Training since 1996. We offer Cloudera’s full suite of authorized courses including courses pertaining to Apache Spark, Hadoop, Apache HBase, MapReduce, Data Science, Cloudera Developer and more. If you have any questions or can’t seem to find the Cloudera class that you are interested in, contact one of our Cloudera Training Specialists. Invest in your future today with Cloudera training from New Age Technologies.

Cloudera Training Specialists | ☏ 502.909.0819

Current Promotion

  • Enter code "CLOUDERA10" at checkout to receive 10% off, or request the gift-card equivalent

Cloudera Developer Training for Spark and Hadoop Overview:

The Cloudera Developer Training for Spark and Hadoop hands-on course delivers the key concepts and expertise participants need to ingest and process data on a Hadoop cluster using the most up-to-date tools and techniques. Employing Hadoop ecosystem projects such as Spark, Hive, Flume, Sqoop, and Impala, this training course is the best preparation for the real-world challenges faced by Hadoop developers.

Who Should Attend:

  • Developers and engineers who have programming experience

Cloudera Developer Training for Spark and Hadoop Prerequisites:

Before attending this course, you should have the following background:

  • Programming experience in Scala or Python (required)
  • Basic familiarity with the Linux command line (assumed)
  • Basic knowledge of SQL (helpful)
  • Prior knowledge of Hadoop (not required)

Cloudera Developer Training for Spark and Hadoop Objectives:

After successfully completing this course, you will have learned:

  • How data is distributed, stored, and processed in a Hadoop cluster
  • How to use Sqoop and Flume to ingest data
  • How to process distributed data with Apache Spark
  • How to model structured data as tables in Impala and Hive
  • How to choose the best data storage format for different data usage patterns
  • Best practices for data storage

Cloudera Developer Training for Spark and Hadoop Certification:

  • Cloudera Certified Professional: Data Engineer (CCP Data Engineer)

Cloudera Developer Training for Spark and Hadoop Outline:

Module 1: Introduction to Hadoop and the Hadoop Ecosystem
  • Problems with Traditional Large-scale Systems
  • Hadoop!
  • The Hadoop Ecosystem
Module 2: Hadoop Architecture and HDFS
  • Distributed Processing on a Cluster
  • Storage: HDFS Architecture
  • Storage: Using HDFS
  • Resource Management: YARN Architecture
  • Resource Management: Working with YARN
Module 3: Importing Relational Data with Apache Sqoop
  • Sqoop Overview
  • Basic Imports and Exports
  • Limiting Results
  • Improving Sqoop’s Performance
  • Sqoop 2
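The basic import covered in this module can be sketched as a single Sqoop command. This is a hypothetical example: the JDBC connection string, credentials file, table name, and target directory are all placeholders.

```shell
# Hypothetical example: import one MySQL table into HDFS using four
# parallel map tasks. All names and paths below are placeholders.
sqoop import \
  --connect jdbc:mysql://dbhost/retail \
  --username dbuser \
  --password-file /user/dbuser/.password \
  --table orders \
  --num-mappers 4 \
  --target-dir /data/retail/orders
```

Raising `--num-mappers` increases import parallelism, one of the performance levers the module discusses.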
Module 4: Introduction to Impala and Hive
  • Introduction to Impala and Hive
  • Why Use Impala and Hive?
  • Comparing Hive to Traditional Databases
  • Hive Use Cases
Module 5: Modeling and Managing Data with Impala and Hive
  • Data Storage Overview
  • Creating Databases and Tables
  • Loading Data into Tables
  • HCatalog
  • Impala Metadata Caching
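The table-creation and data-loading steps above can be previewed in HiveQL. This is a hedged sketch with placeholder column names and HDFS paths, not the course's lab exercise.

```sql
-- Hypothetical example: define an external Hive table over existing HDFS
-- data, then load an additional file into it. Columns and paths are
-- placeholders.
CREATE EXTERNAL TABLE orders (
  order_id INT,
  cust_id  INT,
  total    DECIMAL(10,2)
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/retail/orders';

LOAD DATA INPATH '/incoming/orders_2016.csv' INTO TABLE orders;
```

Because the table is `EXTERNAL`, dropping it removes only the metadata, not the underlying HDFS files.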
Module 6: Data Formats
  • Selecting a File Format
  • Hadoop Tool Support for File Formats
  • Avro Schemas
  • Using Avro with Hive and Sqoop
  • Avro Schema Evolution
  • Compression
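An Avro schema of the kind this module covers is plain JSON. The record and field names below are hypothetical; the optional `coupon` field, a union with `null` and a default, illustrates the schema-evolution topic: it can be added later without breaking readers of older data.

```json
{
  "type": "record",
  "name": "Order",
  "namespace": "com.example.retail",
  "fields": [
    {"name": "order_id", "type": "int"},
    {"name": "cust_id",  "type": "int"},
    {"name": "total",    "type": "double"},
    {"name": "coupon",   "type": ["null", "string"], "default": null}
  ]
}
```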
Module 7: Data Partitioning
  • Partitioning Overview
  • Partitioning in Impala and Hive
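Partitioning in Hive and Impala can be sketched as follows; the table and partition column are hypothetical. Each partition value becomes its own HDFS directory, so queries that filter on the partition column scan only the matching directories.

```sql
-- Hypothetical example: partition an orders table by year.
CREATE TABLE orders_by_year (
  order_id INT,
  total    DECIMAL(10,2)
)
PARTITIONED BY (order_year INT);

-- Data for each year lands in its own directory, e.g.
-- .../orders_by_year/order_year=2016/
-- so this query reads only the 2016 partition:
SELECT SUM(total) FROM orders_by_year WHERE order_year = 2016;
```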
Module 8: Capturing Data with Apache Flume
  • What is Apache Flume?
  • Basic Flume Architecture
  • Flume Sources
  • Flume Sinks
  • Flume Channels
  • Flume Configuration
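The source/channel/sink pieces listed above are wired together in a properties file. This is a minimal hypothetical configuration (agent name, log path, and HDFS path are placeholders): an exec source tails a log file, a memory channel buffers events, and an HDFS sink writes them out.

```properties
# Hypothetical single-agent Flume configuration.
agent1.sources  = src1
agent1.channels = ch1
agent1.sinks    = sink1

# Source: tail an application log
agent1.sources.src1.type = exec
agent1.sources.src1.command = tail -F /var/log/app/access.log
agent1.sources.src1.channels = ch1

# Channel: buffer events in memory
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 10000

# Sink: write events to HDFS
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = /flume/access_logs
agent1.sinks.sink1.channel = ch1
```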
Module 9: Spark Basics
  • What is Apache Spark?
  • Using the Spark Shell
  • RDDs (Resilient Distributed Datasets)
  • Functional Programming in Spark
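The functional style that Spark's RDD API builds on can be previewed in plain Python, with no cluster required. In the Spark shell, `lines` would be an RDD (for example from `sc.textFile(...)`) and `filter`/`map` would be RDD transformations; here it is an ordinary list, so the sketch runs anywhere.

```python
# Plain-Python sketch of the functional style used by RDD transformations.
lines = ["error: disk full", "info: started", "error: timeout"]

# filter -> keep only error lines; map -> extract the message text
errors = list(map(lambda l: l.split(": ", 1)[1],
                  filter(lambda l: l.startswith("error"), lines)))

print(errors)  # ['disk full', 'timeout']
```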
Module 10: Working with RDDs in Spark
  • A Closer Look at RDDs
  • Key-Value Pair RDDs
  • MapReduce
  • Other Pair RDD Operations
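The MapReduce word-count pattern from this module can be sketched in plain Python. In Spark this would be roughly `rdd.flatMap(str.split).map(lambda w: (w, 1)).reduceByKey(add)`; the dict below stands in for the shuffle-and-reduce step so the example runs without Spark.

```python
# Plain-Python sketch of MapReduce word count over key-value pairs.
lines = ["the cat sat", "the cat ran"]

# Map phase: emit a (word, 1) pair for every word
pairs = [(word, 1) for line in lines for word in line.split()]

# Reduce phase: sum the counts for each key
counts = {}
for word, n in pairs:
    counts[word] = counts.get(word, 0) + n

print(counts)  # {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}
```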
Module 11: Writing and Deploying Spark Applications
  • Spark Applications vs. Spark Shell
  • Creating the SparkContext
  • Building a Spark Application (Scala and Java)
  • Running a Spark Application
  • The Spark Application Web UI
  • Configuring Spark Properties
  • Logging
Module 12: Parallel Programming with Spark
  • Review: Spark on a Cluster
  • RDD Partitions
  • Partitioning of File-based RDDs
  • HDFS and Data Locality
  • Executing Parallel Operations
  • Stages and Tasks
Module 13: Spark Caching and Persistence
  • RDD Lineage
  • Caching Overview
  • Distributed Persistence
Module 14: Common Patterns in Spark Data Processing
  • Common Spark Use Cases
  • Iterative Algorithms in Spark
  • Graph Processing and Analysis
  • Machine Learning
  • Example: k-means
Preview: Spark SQL
  • Spark SQL and the SQL Context
  • Creating DataFrames
  • Transforming and Querying DataFrames
  • Saving DataFrames
  • Comparing Spark SQL with Impala
