Hadoop is one of the most widely used Big Data frameworks, supporting the processing of large data sets in a distributed computing environment. As the world becomes more data-driven, the tool is increasingly essential to big business. In this introduction, you'll cover the individual components of Hadoop in detail and get a higher-level picture of how they interact with one another. It's an excellent first step towards mastering Big Data processes.
- Access 30 lectures & 5 hours of content 24/7
- Install Hadoop in Standalone, Pseudo-Distributed, & Fully Distributed mode
- Set up a Hadoop cluster using Linux VMs
- Build a cloud Hadoop cluster on AWS w/ Cloudera Manager
- Understand HDFS, MapReduce, & YARN & their interactions
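To give a flavor of the MapReduce model the course covers, here is a minimal, hypothetical word-count sketch in plain Python. It does not use Hadoop itself; it just mimics the two phases Hadoop runs at scale: a map step that emits (word, 1) pairs, and a reduce step that sums counts per word. The function names and sample input are illustrative only.

```python
def map_phase(lines):
    # Mapper: emit a (word, 1) pair for every word in the input,
    # the way a Hadoop mapper emits key-value pairs.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    # Reducer: sum the counts for each word. In real Hadoop, a
    # shuffle/sort stage groups pairs by key between these phases.
    counts = {}
    for word, count in pairs:
        counts[word] = counts.get(word, 0) + count
    return counts

# Toy input standing in for files stored in HDFS.
lines = ["big data big ideas", "data driven decisions"]
print(reduce_phase(map_phase(lines)))
```

In a real cluster, HDFS stores the input blocks, YARN schedules the map and reduce tasks across nodes, and the same logic runs in parallel over terabytes of data.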
Loonycorn comprises four individuals, Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi, and Navdeep Singh, who have honed their tech expertise at Google and Flipkart. The team believes it has distilled complicated tech concepts into fun, practical, engaging courses, and is excited to share its content with eager students.