Big Data Hadoop


  • Batch Timings :
  • Starting Date :

Course Overview

Hadoop is a Big Data framework that helps store, process, and analyse unstructured data on clusters of commodity hardware. It is an open-source software framework written in Java that supports distributed applications. It was introduced by Doug Cutting and Michael J. Cafarella in mid-2006, and Yahoo became its first major commercial user in 2008.
Hadoop has two major generations, Hadoop 1.0 and Hadoop 2.0, the latter based on the YARN (Yet Another Resource Negotiator) architecture. Hadoop is named after Doug Cutting's son's toy elephant. Hadoop took the Big Data ecosystem by storm around 2012, and enterprises looking to leverage Big Data environments now require Big Data architects who can design, build, and deploy large-scale Hadoop applications.

Big Data refers to collections of data too huge to measure, manage, and process with traditional tools. We live in the data age, and this flood of Big Data comes from many different sources, such as the New York Stock Exchange, Facebook, Twitter, aircraft sensors, Walmart, etc.

COURSE FEATURES

  • Resume & Interviews Preparation Support
  • Hands-on Experience on Projects
  • 100% Placement Assistance
  • Multiple Flexible Batches
  • Missed Sessions Covered
  • Practice Course Material

At the end of the Big Data Hadoop Developer Training Course, participants will be able to:

  • Completely understand the Apache Hadoop framework.
  • Learn to work with HDFS.
  • Discover how MapReduce works with data and processes it.
  • Design and develop Big Data applications using the Hadoop ecosystem.
  • Learn how YARN manages resources across clusters.
  • Write and execute programs on YARN.
  • Implement MapReduce integration, HBase, advanced indexing and advanced usage.
  • Become an expert in working with data and managing data resources.
  • Become a functional programmer, implementing various applications to ensure effective data processing and optimization techniques are in place.
  • Gain expert knowledge to apply interactive algorithms and work with various data formats.

Course Duration

  • Weekends: 8 weekends
  • Weekdays: 2 months

Prerequisites:

  • Basics of Core Java and OOP concepts.
  • Basic Knowledge of SQL and Linux.

Who Should Attend?

  • Analytics Professionals
  • IT Professionals
  • Software Testing Professionals
  • Mainframe Professionals
  • Software Developers & Architects
  • Graduates who are willing to build a career in Big Data
  • Anyone interested in Big data Analytics

Course Content

1.1 Big Data Introduction

  • What is data? Types of data; What is Big Data?
  • Evolution of Big Data; Need for Big Data Analytics
  • Sources of data; How to define Big Data using the three V's (Volume, Velocity, Variety)

1.2 Apache Hadoop and the Hadoop Ecosystem

  • History of Hadoop
  • Problems with traditional large-scale systems
  • Requirements for a new approach
  • Why is Hadoop in demand in the market nowadays?
  • Why do we need Hadoop?
  • What is Hadoop?
  • Apache Hadoop Overview
  • Data Ingestion and Storage
  • Data Processing
  • Data Analysis and Exploration
  • Other Ecosystem Tools
  • Activity: Querying Hadoop Data

1.3 Basic Java Overview for Hadoop

  • Object oriented concepts
  • Variables and Data types
  • Static data type
  • Primitive data types
  • Objects & Classes
  • Java Operators
  • Methods and their types
  • Constructors
  • Conditional statements
  • Looping in Java
  • Access Modifiers
  • Inheritance
  • Polymorphism
  • Method overloading & overriding
  • Interfaces (a short Java example follows this list)
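
As a reference for this module, here is a minimal, self-contained Java sketch (class names are purely illustrative) that touches classes, constructors, inheritance, method overriding, and looping:

    // Minimal illustration of the Java constructs listed above (illustrative names).
    class Animal {
        protected String name;                  // instance variable

        Animal(String name) {                   // constructor
            this.name = name;
        }

        void speak() {                          // method to be overridden
            System.out.println(name + " makes a sound");
        }
    }

    class Dog extends Animal {                  // inheritance
        Dog(String name) { super(name); }

        @Override
        void speak() {                          // method overriding (polymorphism)
            System.out.println(name + " barks");
        }
    }

    public class JavaBasicsDemo {
        public static void main(String[] args) {
            Animal[] animals = { new Animal("Generic"), new Dog("Rex") };
            for (Animal a : animals) {          // looping
                a.speak();                      // dynamic dispatch picks the overridden method
            }
        }
    }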

1.4 Building Blocks

  • Quick tour of Java (as Hadoop is written in Java, this will help us understand it better)
  • Quick tour of Linux commands (basic commands to navigate the Linux OS)
  • Quick Tour of RDBMS Concepts (to use HIVE and Impala)
  • Quick hands on experience of SQL.
  • Introduction to Cloudera VM and usage instructions

1.5 Getting started with Cloudera QuickStart VM

  • Getting started with a Big Data Hadoop cluster using Cloudera CDH
  • Creating Virtual environment demo
  • QuickStartVM CDH Navigation
  • Introduction to Cloudera Manager

1.6 Apache Hadoop File Storage

  • Apache Hadoop Cluster Components
  • HDFS Architecture
  • Using HDFS
  • Activity: Accessing HDFS with the Command Line and Hue
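
Besides the command line and Hue, HDFS can also be accessed programmatically. Below is a minimal sketch using the standard org.apache.hadoop.fs.FileSystem API; the file and directory names are hypothetical, and it assumes the Hadoop configuration on the classpath points at your cluster (for example, the Cloudera QuickStart VM):

    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsDemo {
        public static void main(String[] args) throws Exception {
            // Picks up fs.defaultFS from core-site.xml on the classpath.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Write a small file to a hypothetical directory.
            Path file = new Path("/user/training/demo/hello.txt");
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.write("Hello HDFS".getBytes(StandardCharsets.UTF_8));
            }

            // List the directory contents, similar to 'hdfs dfs -ls'.
            for (FileStatus status : fs.listStatus(file.getParent())) {
                System.out.println(status.getPath() + " " + status.getLen() + " bytes");
            }
            fs.close();
        }
    }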

1.7 Distributed Processing on an Apache Hadoop Cluster

  • YARN Architecture
  • Working With YARN
  • Activity: Running and Monitoring a YARN Job

1.8 MapReduce

  • Introduction to MapReduce
  • Concepts of MapReduce
  • MapReduce Architecture
  • Advanced Concepts of MapReduce
  • Understanding how the distributed processing solves the big data challenge and how MapReduce helps to solve that problem
  • Understanding the concept of Mappers and Reducers
  • Phases of a MapReduce program
  • Anatomy of a Map Reduce Job Run
  • Data-types in Hadoop MapReduce
  • Role of InputSplit and RecordReader
  • Input format and Output format in Hadoop
  • Concepts of Combiner and Partitioner
  • Running and Monitoring MapReduce jobs
  • Writing your own MapReduce job using MapReduce API
  • Difference between Hadoop 1 & Hadoop 2
  • The Hadoop Java API for MapReduce
  • Mapper Class
  • Reducer Class
  • Driver Class
  • Basic Configuration of MapReduce
  • Writing and Executing the Basic MapReduce Program using Java
  • Submission & Initialization of MapReduce Job.
  • Explain the Driver, Mapper and Reducer code
  • Word count problem and solution (a minimal code sketch follows this list)
  • Configuring development environment – Eclipse
  • Testing, debugging project through eclipse and then finally packaging, deploying the code on Hadoop Cluster
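
For reference, here is a minimal word-count sketch using the Hadoop MapReduce Java API (the org.apache.hadoop.mapreduce classes); input and output paths are passed on the command line:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Mapper: emits (word, 1) for every word in the input split.
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reducer: sums the counts for each word; also reusable as a Combiner.
        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        // Driver: configures and submits the job.
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Package the classes into a jar and submit it to the cluster with the hadoop jar command, passing the input and output HDFS paths as arguments.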

2.1 Data Analysis using Pig

  • Introduction to Apache Pig
  • Why Pig when MapReduce is already there?
  • Pig Architecture and components
  • Pig Installation
  • Accessing Pig Grunt Shell
  • Pig Data types
  • Pig Commands – Load, Store, Describe, Dump
  • Pig Relational Operators
  • Pig User Defined Functions
  • Tight coupling between Pig and MapReduce
  • Pig Latin scripting
  • Pig running modes
  • Pig in local mode
  • Pig in MapReduce mode (a short embedded-Pig example follows this list)
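
Pig Latin scripts are normally run from the Grunt shell or as script files, but they can also be embedded in Java. The sketch below is a minimal example using org.apache.pig.PigServer in local mode; the input file words.txt and the output directory are hypothetical:

    import org.apache.pig.ExecType;
    import org.apache.pig.PigServer;

    public class PigWordCount {
        public static void main(String[] args) throws Exception {
            // Local mode runs against the local file system; use ExecType.MAPREDUCE for a cluster.
            PigServer pig = new PigServer(ExecType.LOCAL);

            // Hypothetical input: one word per line.
            pig.registerQuery("words = LOAD 'words.txt' AS (word:chararray);");
            pig.registerQuery("grouped = GROUP words BY word;");
            pig.registerQuery("counts = FOREACH grouped GENERATE group AS word, COUNT(words) AS n;");

            // Store the result into a hypothetical output directory.
            pig.store("counts", "word_counts_out");
            pig.shutdown();
        }
    }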

2.2  Data Analysis using Hive

  • Overview of Hive
  • Background of Hive
  • Hive vs PIG
  • Hive Architecture
  • Components of Hive
  • Installation & configuration
  • Working with Tables.
  • Primitive data types and complex data type
  • Hive Bucketed Tables and Sampling.
  • Dynamic Partition
  • Differences between ORDER BY, DISTRIBUTE BY and SORT BY.
  • Bucketing and Sorted Bucketing with Dynamic partition.
  • RC File.
  • INDEXES and VIEWS.
  • MAPSIDE JOINS.
  • Compression on hive tables and Migrating Hive tables.
  • Dynamic substitution in Hive and different ways of running Hive
  • How to enable Update in HIVE.
  • Log Analysis on Hive.
  • Access HBASE tables using Hive.
  • Hive Services, Hive Shell, Hive Server and Hive Web Interface (HWI)
  • Meta store
  • Creating tables, loading datasets and performing analysis on those datasets.
  • How to Capture Inserts, Updates and Deletes in Hive
  • Hue Interface for Hive
  • How to analyse data using Hive script
  • Differences between Hive and Impala
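
Hive Server (HiveServer2) exposes Hive over JDBC, so queries can also be issued from Java. Below is a minimal sketch, assuming HiveServer2 is running on localhost:10000 and that a table named employees already exists (the host, credentials and table name are assumptions for illustration):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveJdbcDemo {
        public static void main(String[] args) throws Exception {
            // Load the HiveServer2 JDBC driver (the driver jar must be on the classpath).
            Class.forName("org.apache.hive.jdbc.HiveDriver");

            // HiveServer2 JDBC URL; host, port, database and credentials are assumptions.
            String url = "jdbc:hive2://localhost:10000/default";

            try (Connection conn = DriverManager.getConnection(url, "hive", "");
                 Statement stmt = conn.createStatement()) {

                // Hypothetical query against an assumed 'employees' table.
                ResultSet rs = stmt.executeQuery(
                        "SELECT department, COUNT(*) FROM employees GROUP BY department");
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
                }
            }
        }
    }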

2.3 Data Integration using Sqoop

  • Introduction to Apache Sqoop
  • Sqoop Architecture and installation
  • Import Data using Sqoop in HDFS
  • Import all tables in Sqoop
  • Export data from HDFS
  • Setting up an RDBMS server and creating & loading datasets into MySQL.
  • Writing the Sqoop Import Commands to transfer data from RDBMS to HDFS/Hive/Hbase
  • Writing the Sqoop Export commands to transfer data from HDFS/Hive to RDBMS

2.4 Real-time Data Streaming using Flume

  • Installation
  • Introduction to Flume
  • Flume Agents: Sources, Channels and Sinks
  • Flume Commands
  • Flume Use Cases
  • How to load data into Hadoop coming from a web server or other storage
  • How to load streaming Twitter data into HDFS using Flume

2.5 Hadoop NoSQL Database HBase

  • Introduction to HBase
  • HBase Architecture
  • HBase Installation and configurations
  • HBase concepts
  • HBase Data Model and Comparison between RDBMS and NOSQL.
  • Master & Region Servers.
  • HBase Operations (DDL and DML) through the Shell and Programming (a Java client example follows this list)
  • Catalog Tables
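
In addition to the HBase shell, tables can be manipulated through the HBase Java client API. The following is a minimal sketch, assuming a table named user with a column family info already exists (both names, and the row data, are illustrative assumptions):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseDemo {
        public static void main(String[] args) throws Exception {
            // Reads hbase-site.xml from the classpath to locate ZooKeeper / the cluster.
            Configuration conf = HBaseConfiguration.create();

            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("user"))) {

                // Put: write one cell into row 'row1', column family 'info', qualifier 'name'.
                Put put = new Put(Bytes.toBytes("row1"));
                put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
                table.put(put);

                // Get: read the same cell back.
                Result result = table.get(new Get(Bytes.toBytes("row1")));
                byte[] name = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
                System.out.println("name = " + Bytes.toString(name));
            }
        }
    }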

2.8 ZooKeeper

  • Introduction to ZooKeeper
  • How ZooKeeper helps in the Hadoop Ecosystem
  • How to load data from Relational storage in Hadoop
  • Data Model of ZooKeeper
  • Operations of ZooKeeper
  • ZooKeeper Implementation
  • Sessions, States and Consistency.

2.9 Hadoop on Google Cloud

  • Introduction to Google Cloud infrastructure
  • Creating VM instances on Google Cloud
  • Deploying data onto Google Cloud

2.10 Hadoop on AWS

FAQ

What are the software requirements for Hadoop?

  • Java 1.6.x or higher.
  • Linux and Windows are the supported operating systems, but BSD, Mac OS X, and OpenSolaris are known to work.

When are the classes held?

Classes are held on weekdays and weekends. You can check the available schedules and choose the batch timings that are convenient for you.

What hardware configuration is recommended for Hadoop?

The short answer is dual-processor/dual-core machines with 4-8 GB of RAM, depending on workflow needs.
