Hadoop Developer Training Course

Course Overview

Hadoop is a Big Data framework that helps store, process, and analyze huge volumes of unstructured data using commodity hardware. Hadoop is an open-source software framework written in Java that supports distributed applications. It was introduced by Doug Cutting and Michael J. Cafarella in mid-2006, and Yahoo! became the first commercial user of Hadoop in 2008.
Hadoop has two major generations, Hadoop 1.0 and Hadoop 2.0; the latter is based on the YARN (Yet Another Resource Negotiator) architecture. Hadoop is named after Doug Cutting's son's toy elephant. Hadoop took the Big Data ecosystem by storm in 2012, and enterprises looking to leverage the Big Data environment now require Big Data Architects who can design and build large-scale Hadoop applications and deployments.

Big Data is a collection of huge or massive amounts of data. We live in the data age, and it is not easy to measure the total volume of data or to manage and process it. This flood of Big Data comes from many different sources, such as the New York Stock Exchange, Facebook, Twitter, aircraft, Walmart, and more.

COURSE FEATURES

  • Resume & Interview Preparation Support
  • Hands-on Experience on One Live Project
  • 100% Placement Assistance
  • Multiple Flexible Batches

At the end of the Hadoop Developer Training Course, participants will be able to:

  • Completely understand the Apache Hadoop framework.
  • Learn to work with HDFS.
  • Discover how MapReduce works with data and processes it.
  • Design and develop Big Data applications using the Hadoop ecosystem.
  • Learn how YARN helps manage resources in clusters.
  • Write as well as execute programs in YARN.
  • Implement MapReduce integration, HBase, advanced indexing and advanced usage.
  • Become an expert in working with data and managing data resources.
  • Become a functional programmer implementing applications that ensure effective data processing and that optimization techniques are in place.
  • Gain expert knowledge to apply iterative algorithms and work with various data formats.

Course Duration

  • Weekdays: 2 Months (40-50 hours)
  • Weekends: 3 Months (50-60 hours)
  • Weekdays: Tuesday-Friday Classes
  • Weekends: Saturday and Sunday Classes

Prerequisites

  • BSc, BCS, BCA, BE, B.Tech, MSc, MCS, MCA, M.Tech
  • Knowledge of Core Java

Who Should Attend?

  • Analytics Professionals
  • IT Professionals
  • Software Testing Professionals
  • Mainframe Professionals
  • Software Developers & Architects
  • Graduates who are willing to build a career in Big Data
  • Anyone interested in Big Data Analytics

What you gain

  • Better job opportunities
  • Opens doors to multiple industries using Hadoop.
  • Take advantage of job opportunities in the data management industry.
  • Skills and expertise for advanced career growth
  • Better career and salary
  • Hiring by big companies.
  • Get great salary hikes with our advanced training and learning.
  • Big Data Hadoop is a dynamic field and shapes you into a better professional for the future.

Course Content

1.1 Big Data Introduction

  • What is Big Data
  • Evolution of Big Data
  • Why Big Data?
  • Role Played by Big Data
  • Data management – Industry Challenges
  • Types of data
  • Sources of Big Data
  • Big Data examples
  • What is streaming data?
  • Batch vs Streaming data processing

1.2 Hadoop Introduction

  • History of Hadoop
  • Problems with traditional large-scale systems
  • Requirements for a new approach
  • Why is Hadoop in demand in the market today?
  • Why do we need Hadoop?
  • What is Hadoop?
  • How Hadoop solves the Big Data problem
  • Hadoop Architecture.
  • Hadoop ecosystem Components
  • HDFS Overview
  • Hadoop 1.x vs Hadoop 2.x
  • Hadoop 1.X Architecture
  • Hadoop 1.X Core Components
  • Hadoop 1.X Job Process
  • Overview of Hadoop Daemons
  • Hadoop Daemons in Hadoop Release-1.x
  • Hadoop Daemons in Hadoop Release-2.x
  • Hadoop Release-3.x
  • Comparing Hadoop & SQL.

1.3 Basic Java Overview for Hadoop

  • Object-oriented concepts
  • Variables and data types
  • Static data types
  • Primitive data types
  • Objects & classes
  • Wrapper classes
  • Java operators
  • Methods and their types
  • Constructors
  • Conditional statements
  • Looping in Java
  • Access modifiers
  • Inheritance
  • Polymorphism
  • Method overloading & overriding (see the short sketch after this list)
  • Interfaces
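
As a quick refresher on the topics above, here is a minimal, self-contained Java sketch (the class names are illustrative) showing fields, constructors, inheritance, polymorphism and method overriding:

```java
// A base class, a subclass using inheritance, and an overridden method.
public class Main {
    static class Animal {
        String name;                                 // field (instance variable)
        Animal(String name) { this.name = name; }    // constructor
        String speak() { return name + " makes a sound"; }
    }

    static class Dog extends Animal {                // inheritance
        Dog(String name) { super(name); }
        @Override
        String speak() { return name + " barks"; }   // method overriding
    }

    public static void main(String[] args) {
        Animal a = new Dog("Rex");                   // polymorphism: Animal reference, Dog object
        System.out.println(a.speak());               // prints "Rex barks"
    }
}
```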

1.4 Building Blocks

  • Quick tour of Java (as Hadoop is written in Java, this helps us understand it better)
  • Quick tour of Linux commands (basic commands to traverse the Linux OS)
  • Quick tour of RDBMS concepts (to use Hive and Impala)
  • Quick hands-on experience with SQL
  • Introduction to Cloudera VM and usage instructions

1.5 Setting up the Development Environment (Cloudera QuickStart VM)

  • Overview of Big Data tools
  • Different vendors providing Hadoop and where each fits in the industry
  • Setting up the development environment & performing Hadoop installation on the user's laptop
  • Hadoop daemons
  • Starting and stopping daemons using the command line and Cloudera Manager

1.6 Hadoop Cluster

  • Client Nodes
  • Slaves
  • Setting up a Hadoop cluster
  • Preparing nodes for Hadoop and VM settings
  • Install Java and configure passwordless SSH across nodes
  • Basic Linux commands
  • Hadoop 1.x single node deployment
  • Hadoop Daemons – NameNode, JobTracker, DataNode, TaskTracker, Secondary NameNode
  • Hadoop Configuration files and running
  • Important web URLs and Logs for Hadoop
  • Run HDFS and Linux commands
  • Hadoop 1.x multi-node deployment
  • Run sample jobs in Hadoop single and multi-node clusters

1.7 HDFS (Hadoop Distributed File System)

  • HDFS Design Goals
  • Understanding the HDFS architecture.
  • Understand Blocks and how to configure block size
  • Block replication and replication factor
  • Understand Hadoop Rack Awareness and configure racks in Hadoop
  • File read and write anatomy in HDFS
  • Enable HDFS Trash
  • Configure HDFS name and space quotas
  • Configure and use WebHDFS (REST API for HDFS)
  • Health monitoring using the FSCK command
  • Understand NameNode Safemode, file system image and edits
  • Configure Secondary NameNode and use the checkpointing process to provide NameNode failover
  • HDFS DFSAdmin and file system shell commands (see the Java API sketch after this list)
  • Hadoop NameNode/DataNode directory structure
  • HDFS permissions model
  • HDFS Offline image viewer
  • Metadata, FS image, Edit log, Secondary Name Node and Safe Mode.
  • How to add New Data Node dynamically.
  • How to decommission a Data Node dynamically (Without stopping cluster).
  • Data Processing and Replication Pipeline
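
To make the file-system topics above concrete, here is a minimal sketch of the HDFS Java FileSystem API; the NameNode address and paths are assumptions you would adjust for your own cluster:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:9000"); // assumption: NameNode address

        FileSystem fs = FileSystem.get(conf);

        // Create a directory and write a small file into it
        Path dir = new Path("/user/demo");
        fs.mkdirs(dir);
        try (FSDataOutputStream out = fs.create(new Path(dir, "hello.txt"))) {
            out.writeUTF("Hello, HDFS!");
        }

        // List the directory contents
        for (FileStatus status : fs.listStatus(dir)) {
            System.out.println(status.getPath() + " size=" + status.getLen());
        }
        fs.close();
    }
}
```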

1.8 MapReduce

  • Introduction to MapReduce
  • Concepts of MapReduce
  • Map Reduce architecture
  • Advance Concept of Map Reduce
  • Understanding how distributed processing solves the Big Data challenge and how MapReduce helps solve that problem
  • Understanding the concept of Mappers and Reducers
  • Phases of a MapReduce program
  • Anatomy of a Map Reduce Job Run
  • Data-types in Hadoop MapReduce
  • Role of InputSplit and RecordReader
  • Input format and Output format in Hadoop
  • Concepts of Combiner and Partitioner
  • Running and Monitoring MapReduce jobs
  • Writing your own MapReduce job using MapReduce API
  • Difference between Hadoop 1 & Hadoop 2
  • The Hadoop Java API for MapReduce
  • Mapper Class
  • Reducer Class
  • Driver Class

1.9 MapReduce Configuration

  • Basic Configuration of MapReduce
  • Writing and Executing the Basic MapReduce Program using Java
  • Submission & Initialization of MapReduce Job.
  • Explain the Driver, Mapper and Reducer code
  • Word count problem and solution (a full sketch follows this list)
  • Configuring development environment – Eclipse
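
Putting the Mapper, Reducer and Driver classes together, below is a minimal version of the classic word-count solution using the Hadoop MapReduce Java API; input and output paths are passed on the command line:

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Mapper: emits (word, 1) for every token in the input line
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reducer: sums the counts for each word
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) sum += val.get();
            result.set(sum);
            context.write(key, result);
        }
    }

    // Driver: configures and submits the job
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // combiner reuses the reducer
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

The job is packaged into a jar and run with `hadoop jar`, with the input directory and a non-existent output directory as arguments.
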
2.1 Pig

  • What is Apache Pig
  • Why Apache Pig
  • Pig features
  • Where should Pig be used
  • Where not to use Pig
  • Why PIG if MapReduce is there?
  • Pig Architecture and components
  • Pig Installation
  • Accessing Pig Grunt Shell
  • Pig Data types
  • Pig Commands – Load, Store, Describe, Dump
  • Pig Relational Operators
  • Pig User Defined Functions
  • Configure Pig to use HCatalog
  • Tight coupling between Pig and MapReduce
  • Pig Latin scripting
  • Pig running modes
  • MapReduce vs. Pig
  • Pig in local mode
  • Pig in MapReduce mode
  • Execution mechanism and data processing
  • Writing UDFs (a sample Java UDF follows this list)
  • Macros in Pig
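
Pig user-defined functions are commonly written in Java by extending EvalFunc. Below is a minimal sketch of such a UDF, assuming the Pig libraries are on the classpath; the class name ToUpper is illustrative:

```java
import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

// A simple EvalFunc that upper-cases its first argument.
public class ToUpper extends EvalFunc<String> {
    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
            return null;                 // Pig convention: return null for empty input
        }
        return ((String) input.get(0)).toUpperCase();
    }
}
```

After packaging the class into a jar, the UDF is registered in a Pig Latin script with REGISTER and then invoked like a built-in function.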

2.2 Hive

  • Overview of Hive
  • Background of Hive
  • Hive vs PIG
  • Hive Architecture
  • Components of Hive
  • Installation & configuration
  • Working with Tables.
  • Primitive data types and complex data type
  • Hive Bucketed Tables and Sampling.
  • Dynamic Partition
  • Differences between ORDER BY, DISTRIBUTE BY and SORT BY.
  • Bucketing and Sorted Bucketing with Dynamic partition.
  • RC File.
  • INDEXES and VIEWS.
  • Map-side joins.
  • Compression on hive tables and Migrating Hive tables.
  • Dynamic variable substitution in Hive and different ways of running Hive
  • How to enable UPDATE in Hive.
  • Log Analysis on Hive.
  • Access HBASE tables using Hive.
  • Hive Services, Hive Shell, Hive Server and Hive Web Interface (HWI) (see the JDBC sketch after this list)
  • Meta store
  • Creating tables, loading datasets & performing analysis on those datasets.
  • How to Capture Inserts, Updates and Deletes in Hive
  • Hue Interface for Hive
  • How to analyse data using Hive script
  • Differentiation between Hive and Impala
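
One of the Hive services listed above, HiveServer2, exposes a JDBC endpoint that Java programs can query. A minimal sketch, assuming HiveServer2 at localhost:10000, a 'default' database, and a hypothetical employees table:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcExample {
    public static void main(String[] args) throws Exception {
        // HiveServer2 JDBC driver (from the hive-jdbc dependency)
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hive2://localhost:10000/default", "", "");
             Statement stmt = conn.createStatement();
             // 'employees' is a hypothetical table used for illustration
             ResultSet rs = stmt.executeQuery(
                 "SELECT dept, COUNT(*) FROM employees GROUP BY dept")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
        }
    }
}
```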

2.3 Sqoop

  • Introduction to Apache Sqoop
  • Sqoop Architecture and installation
  • Import Data using Sqoop in HDFS
  • Import all tables in Sqoop
  • Export data from HDFS
  • Setting up an RDBMS server and creating & loading datasets into MySQL
  • Writing Sqoop import commands to transfer data from RDBMS to HDFS/Hive/HBase
  • Writing Sqoop export commands to transfer data from HDFS/Hive to RDBMS

2.4 Flume

  • Installation
  • Introduction to Flume
  • Flume Agents: Sources, Channels and Sinks
  • Flume Commands
  • Flume Use Cases
  • How to load data into Hadoop coming from a web server or other storage
  • How to load streaming Twitter data into HDFS using Flume

2.5 NoSQL Database – HBase

  • Introduction to NoSQL databases and HBase
  • HBase vs. RDBMS, HBase Components, HBase Architecture
  • HBase Cluster Deployment

2.6 HBase

  • Introduction to HBase
  • HBase Architecture
  • HBase Installation and configurations
  • HBase concepts
  • HBase data model and comparison between RDBMS and NoSQL.
  • Master & Region Servers.
  • HBase operations (DDL and DML) through shell and programming, and HBase architecture.
  • Catalog tables.
  • Block cache and sharding.
  • Splits
  • Data modeling (sequential, salted, promoted and random keys).
  • Java APIs and REST interface (see the client sketch after this list).
  • Client-side buffering, and processing 1 million records using client-side buffering.
  • HBase counters.
  • Enabling replication and HBase raw scans.
  • HBase filters.
  • Bulk loading and coprocessors (endpoints and observers with programs).
  • Real-world use case consisting of HDFS, MapReduce and HBase.
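
A minimal sketch of the HBase Java client API mentioned above, assuming a running cluster and a pre-created users table with an info column family (all names are illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("users"))) {

            // Put: write one cell (row key, column family, qualifier, value)
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
            table.put(put);

            // Get: read the cell back
            Result result = table.get(new Get(Bytes.toBytes("row1")));
            byte[] value = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
            System.out.println("name = " + Bytes.toString(value));
        }
    }
}
```
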
2.7 Oozie, Hue and YARN (Hadoop Processing Framework)

  • Oozie Fundamentals
  • Oozie: Components
  • Oozie workflow creations
  • Scheduling with Oozie
  • Concepts of Coordinators and Bundles
  • Hands-on Training on Oozie Workflow
  • Oozie Coordinator
  • Oozie Commands (a minimal Java client sketch follows this list)
  • Oozie Web Console
  • Oozie for MapReduce
  • Hive in Oozie
  • An Overview of Hue
  • Hue in Real-time Scenarios
  • Introduction to YARN
  • Significance of YARN
  • YARN Daemons – Resource Manager, NodeManager etc.
  • Job assignment & Execution flow
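
Besides the command line, Oozie ships a Java client for submitting and monitoring workflows. A minimal sketch, assuming an Oozie server at localhost:11000 and a workflow.xml already deployed to the HDFS path below (the addresses and paths are assumptions):

```java
import java.util.Properties;
import org.apache.oozie.client.OozieClient;

public class OozieSubmit {
    public static void main(String[] args) throws Exception {
        OozieClient oozie = new OozieClient("http://localhost:11000/oozie");

        // Job configuration: where the workflow lives and cluster endpoints
        Properties conf = oozie.createConfiguration();
        conf.setProperty(OozieClient.APP_PATH, "hdfs://localhost:9000/user/demo/workflow");
        conf.setProperty("nameNode", "hdfs://localhost:9000");
        conf.setProperty("jobTracker", "localhost:8032");

        String jobId = oozie.run(conf);   // submit and start the workflow
        System.out.println("Workflow job submitted: " + jobId);
        System.out.println("Status: " + oozie.getJobInfo(jobId).getStatus());
    }
}
```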

2.8 Spark and Scala

  • Introduction to Spark
  • Understanding the Spark architecture and why it is better than Map Reduce
  • Hands on examples with various transformations on RDD
  • Perform Spark actions on RDD
  • Spark SQL concepts: DataFrames & Datasets (see the Java sketch after this list)
  • Hands-on examples with Spark SQL to create and work with DataFrames and Datasets
  • Create Spark DataFrames from an existing RDD
  • Create Spark DataFrames from external files
  • Create Spark DataFrames from hive tables
  • Perform operations on a DataFrame
  • Using Hive tables in Spark
  • Key-Value Pair RDDs
  • Introduction to Scala
  • Spark & Scala interdependence
  • Objects & Classes in Scala
  • What are Operators in Scala
  • How to use Logical Operator
  • Control Structures in Scala
  • Functions and Procedures
  • Scala Traits
  • Fields and Collections in Scala
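
Spark's DataFrame and Dataset APIs are available from Java as well as Scala. Below is a minimal Java sketch that builds a DataFrame from in-memory rows and queries it with Spark SQL; the app name and data are illustrative, and `local[*]` runs Spark in-process:

```java
import java.util.Arrays;
import java.util.List;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class SparkSqlExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("SparkSqlExample")
                .master("local[*]")          // run locally using all cores
                .getOrCreate();

        // Define a schema and a few in-memory rows
        StructType schema = new StructType()
                .add("name", DataTypes.StringType)
                .add("age", DataTypes.IntegerType);
        List<Row> rows = Arrays.asList(
                RowFactory.create("Alice", 30),
                RowFactory.create("Bob", 25));

        // Create a DataFrame, register it as a view, and query it with SQL
        Dataset<Row> df = spark.createDataFrame(rows, schema);
        df.createOrReplaceTempView("people");
        spark.sql("SELECT name FROM people WHERE age > 26").show();

        spark.stop();
    }
}
```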

2.9 Zookeeper

  • Introduction to Zookeeper
  • How Zookeeper helps in Hadoop Ecosystem
  • How to load data from Relational storage in Hadoop
  • Data Model of ZooKeeper
  • Operations of ZooKeeper
  • ZooKeeper implementation (see the Java sketch after this list)
  • Sessions, states and consistency.
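
A minimal sketch of the ZooKeeper Java API covered above, assuming a server at localhost:2181; it connects, creates a znode, reads it back and deletes it:

```java
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZkExample {
    public static void main(String[] args) throws Exception {
        // Wait until the session is actually established before using the client
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();

        // Create a persistent znode with some data
        String path = zk.create("/demo", "hello".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

        byte[] data = zk.getData(path, false, null);  // no watch, no Stat
        System.out.println(path + " -> " + new String(data));

        zk.delete(path, -1);   // version -1 matches any version
        zk.close();
    }
}
```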

2.10 MongoDB

  • Introduction to MongoDB
  • MongoDB v/s RDBMS
  • Why & Where to use MongoDB
  • JSON and BSON
  • MongoDB Tools
  • Collection and Database
  • CRUD Operations in MongoDB (see the sketch after this list)
  • MongoDB Cluster Operations
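
A minimal sketch of the CRUD operations above using the synchronous MongoDB Java driver (mongodb-driver-sync), assuming a local mongod on the default port; the database and collection names are illustrative:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;
import org.bson.Document;

public class MongoCrudExample {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoDatabase db = client.getDatabase("training");       // hypothetical database
            MongoCollection<Document> users = db.getCollection("users");

            // Create
            users.insertOne(new Document("name", "Alice").append("age", 30));
            // Read
            Document found = users.find(Filters.eq("name", "Alice")).first();
            System.out.println(found.toJson());
            // Update
            users.updateOne(Filters.eq("name", "Alice"), Updates.set("age", 31));
            // Delete
            users.deleteOne(Filters.eq("name", "Alice"));
        }
    }
}
```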

FAQ

There is plenty of raw data out there, but organizations need professionals who can convert this raw data into meaningful insights that help in making business decisions. This course covers the core components of the Hadoop ecosystem. With hands-on practice exercises you will learn about HDFS, MapReduce, Hive, Pig, HBase, Spark and other valuable tools that make Big Data analytics easier.

Classes are held on weekdays and weekends. You can check available schedules and choose the batch timings which are convenient for you.

Towards the end of the course, all participants will be required to work on a project to get hands-on familiarity with the concepts learnt. You will build a Big Data application with Hadoop with full support from your mentors. This project, which can also be a live industry project, will be reviewed by our instructors and industry experts. On successful completion, you will be awarded a certificate.
