
Apache Kafka Tutorial For Beginners


Publisher : DataShark Academy

Course Language : English



In this beginner course you will learn a few core concepts of Apache Kafka that will give you a solid start. After completing this course, you can take the full course from DataShark Academy to become an Apache Kafka guru; the link to the full course is in the last lecture. Once you complete the full course, you will be able to build Kafka applications for real-world use cases.

In the more detailed course, Apache Kafka Guru, you will get the following:

Learn Apache Kafka with plenty of hands-on practice!

In just a few minutes you will be on the road to becoming an Apache Kafka hero. In this course you will learn how thousands of companies (e.g. Netflix) have built their business solutions around Apache Kafka.

This comprehensive Kafka tutorial starts by building a foundation in all the major concepts of Apache Kafka. This part of the course is vital for understanding how Kafka really works, and it will help you with the hands-on exercises later in the course.

In the later part of this Kafka tutorial, we will work through plenty of hands-on exercises covering basic to advanced topics. You will learn how to set up Apache Kafka on your personal computer (Mac/Linux or Windows PC), create Kafka topics (where the actual data is stored inside Kafka), and perform various operations on your topics. Later you will set up Kafka producers and Kafka consumers to send and receive data.

In the more advanced topics you will learn how to create your own Kafka producers and Kafka consumers.


Apache Kafka is one of the most promising data processing systems available today. It is an open-source distributed streaming platform that can handle hundreds of billions of events a day.

It gives tough competition to other streaming frameworks such as Apache Spark and Apache Flink.

Apache Kafka is easy to set up and learn. It enables you to connect multiple systems to share data in hundreds of different combinations. For instance, using Apache Kafka you can connect a legacy Relational Database Management System (RDBMS) with Apache Hadoop's Distributed File System (HDFS), or with Elasticsearch in the ELK stack. You can also capture data from one system and send it to AWS S3 or AWS Lambda functions. The possibilities are endless.

That's why every data engineer and data architect must learn Apache Kafka today.


In the complete course, you will also learn how to create your own custom Kafka producers and Kafka consumers using Java code, in an easy-to-understand manner. These exercises will greatly boost your confidence to build custom producers and consumers for real-world problems.

Here are some of the topics that we will cover in the full course:


In this section of the course, we will start by building knowledge of the various key concepts of Apache Kafka. This is an important section, so make sure you watch all the videos before moving to the next section of the course. Here are some of the chapters covered in this section:

  1. What is Apache Kafka?

  2. How Apache Kafka Provides a High Level of Scalability

  3. Kafka’s In-built Fault Tolerance

  4. Apache Kafka is damn Popular
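As a taste of the scalability and fault-tolerance chapters above, here is a minimal sketch in plain Python of the two ideas. This is an illustration only, not the real Kafka API; all broker, topic, and function names are made up for the example.

```python
# Scalability: a topic's partitions are spread across brokers, so adding
# brokers adds capacity. Fault tolerance: each partition is replicated on
# another broker, so a surviving replica takes over when a broker dies.

brokers = {"broker-1": [], "broker-2": [], "broker-3": []}

# Spread six partitions of an "orders" topic round-robin across brokers.
partitions = [f"orders-p{i}" for i in range(6)]
for i, p in enumerate(partitions):
    broker = list(brokers)[i % len(brokers)]
    brokers[broker].append(p)

# Each partition keeps a second copy (replica) on the next broker over.
replicas = {p: [list(brokers)[i % 3], list(brokers)[(i + 1) % 3]]
            for i, p in enumerate(partitions)}

def leader_for(partition, alive):
    # The first replica hosted on a broker that is still alive leads.
    return next(b for b in replicas[partition] if b in alive)

alive = {"broker-1", "broker-2", "broker-3"}
assert leader_for("orders-p0", alive) == "broker-1"
alive.discard("broker-1")                            # broker-1 crashes
assert leader_for("orders-p0", alive) == "broker-2"  # replica promoted
```

In real Kafka the controller performs this leader election using replica metadata, but the principle is the same: no single broker failure loses a partition.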


In this part of the course, we will discuss why we really need Apache Kafka and what kind of real-world problems it solves. Here are some of the chapters covered in this section:

  1. Why Do We Need Apache Kafka?

  2. Decoupling of Systems
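The "decoupling of systems" idea above can be sketched in a few lines of plain Python. This is an illustration of the concept, not the Kafka client API; the function and consumer names are invented for the example.

```python
from collections import defaultdict

# A shared, append-only log decouples producers from consumers:
# producers append without knowing who reads, and each consumer keeps
# its own read position, so systems can be added or removed freely.

log = []  # the shared "topic"

def produce(event):
    log.append(event)  # the producer does not know or care who consumes

positions = defaultdict(int)  # each consumer tracks its own offset

def consume(consumer_name):
    pos = positions[consumer_name]
    new_events = log[pos:]
    positions[consumer_name] = len(log)
    return new_events

produce({"user": "alice", "action": "login"})
produce({"user": "bob", "action": "purchase"})

# Two independent downstream systems read the same events at their own pace.
assert len(consume("analytics")) == 2
produce({"user": "alice", "action": "logout"})
assert len(consume("analytics")) == 1   # only the new event
assert len(consume("billing")) == 3     # a late joiner still sees everything
```

Contrast this with point-to-point integration, where every new consumer would require a change to every producer.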


It’s time to dig deeper into how Apache Kafka really works, and we will do this by understanding Kafka’s architecture.

  1. Apache Kafka Architecture

  2. How Netflix is using Kafka


In this section of the course, we will look at the various components that make Apache Kafka what it is today. This is an important section of the course: it will help you build a core understanding of each individual component of a Kafka application.

  1. What is Kafka Broker

  2. Kafka Topic Explained

  3. Learn about Kafka Topic Partitions

  4. What are Kafka Offsets

  5. How Replication Works in Kafka

  6. Who is the leader

  7. Let’s talk about Zookeeper

  8. Kafka Producers

  9. Kafka Consumers

  10. Putting everything together
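To put the topic, partition, and offset chapters above together, here is a tiny in-memory model in plain Python. It is a conceptual sketch, not the real broker: the class name is made up, and Python's built-in `hash` stands in for the murmur2 hash Kafka actually applies to record keys.

```python
# A topic is a set of partitions; each partition is an append-only list,
# and an offset is simply a record's index within its partition. Records
# with the same key hash to the same partition, preserving per-key order.

class MiniTopic:
    def __init__(self, name, num_partitions):
        self.name = name
        self.partitions = [[] for _ in range(num_partitions)]

    def append(self, key, value):
        # Real Kafka uses murmur2 on the key bytes; hash() stands in here.
        p = hash(key) % len(self.partitions)
        self.partitions[p].append((key, value))
        offset = len(self.partitions[p]) - 1
        return p, offset

topic = MiniTopic("orders", num_partitions=3)
p1, o1 = topic.append("customer-42", "order placed")
p2, o2 = topic.append("customer-42", "order shipped")

assert p1 == p2            # same key -> same partition -> ordered per key
assert (o1, o2) == (0, 1)  # offsets grow monotonically within a partition
```

Brokers host these partitions, producers choose the partition via the key hash, and consumers walk the offsets in order — which is exactly the picture the chapters in this section build up.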


In this section, we will talk about what to consider while designing Apache Kafka applications.

  1. Design Considerations


Let’s put that knowledge to use and get our hands dirty by working on some real exercises.

  1. Setup Apache Kafka on Local Computer (Mac & Linux users)

  2. Setup Apache Kafka on Local Computer (Windows PC users)

  3. How to connect to HDP Sandbox terminal (Windows PC users)

  4. Let’s fire up a local cluster (Windows PC users)

  5. It’s time to create some Kafka Topics

  6. What Kafka topics you got?

  7. Peek inside a Kafka topic

  8. How to delete a Kafka topic

  9. Running a Kafka Producer

  10. Running a Kafka Consumer

  11. What to do when a Kafka Producer or Kafka Consumer goes down in production
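Exercise 11 above hinges on one mechanism worth previewing: committed offsets. The sketch below, in plain Python with invented names (not the Kafka client API), shows why a restarted consumer resumes where it left off instead of reprocessing the whole topic.

```python
# A consumer commits the offset it has processed up to. Kafka stores
# commits durably (in the __consumer_offsets topic), so a restarted
# consumer resumes from the last committed offset.

partition = [f"event-{i}" for i in range(10)]
committed_offset = 0  # stands in for the durable commit store

def run_consumer(crash_after=None):
    global committed_offset
    processed = []
    for offset in range(committed_offset, len(partition)):
        if crash_after is not None and len(processed) == crash_after:
            return processed  # simulate the consumer process dying
        processed.append(partition[offset])
        committed_offset = offset + 1  # commit after each record
    return processed

first_run = run_consumer(crash_after=4)   # dies after 4 records
second_run = run_consumer()               # restarted consumer
assert first_run == ["event-0", "event-1", "event-2", "event-3"]
assert second_run[0] == "event-4"           # resumes at the commit point
assert first_run + second_run == partition  # nothing lost, nothing repeated
```

Real deployments also tune how often commits happen: committing after every record is safe but slow, while batched commits trade a little reprocessing on restart for throughput.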


In this part of the course, you will work on more advanced subjects.

  1. How to create your own Kafka Producer (Java Code)

  2. How to create your own Kafka Consumer (Java Code)

  3. Integrating Kafka with Spark Streaming (Scala Code)

  4. Running your Spark Data aggregator
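The essence of the "Spark Data aggregator" exercise above can be previewed in a few lines. The course itself does this in Scala with Spark Streaming; the plain-Python sketch below only illustrates the core pattern of folding each micro-batch into a running per-key state.

```python
from collections import Counter

# A streaming aggregator keeps running state and updates it with every
# incoming micro-batch — similar in spirit to Spark's updateStateByKey
# or mapGroupsWithState, but without the cluster machinery.

state = Counter()

def process_batch(batch):
    for key in batch:
        state[key] += 1
    return dict(state)

# Micro-batches arriving from a Kafka topic, one per interval.
process_batch(["clicks", "clicks", "views"])
totals = process_batch(["views", "clicks"])

assert totals == {"clicks": 3, "views": 2}
```

In the Spark version, Kafka partitions map to input splits and the state lives in checkpointed executors, but the update-state-per-batch shape is the same.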


  1. Trick #1 – Generate data for your Kafka Console Producer

  2. Trick #2 – Simulate a Real Time Streaming Data Source
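To give a flavor of the two tricks above, here is one way (an assumption on our part, not the course's exact script) to generate an endless stream of synthetic JSON events in Python. Piped into a console producer, it stands in for a real-time data source; every field name here is made up for the example.

```python
import itertools
import json
import random

# An infinite generator of synthetic JSON events. A fixed seed makes the
# stream reproducible, which is handy when replaying test scenarios.

def event_stream(seed=7):
    rng = random.Random(seed)
    for i in itertools.count():
        yield json.dumps({
            "id": i,
            "user": rng.choice(["alice", "bob", "carol"]),
            "action": rng.choice(["click", "view", "purchase"]),
        })

stream = event_stream()
first = json.loads(next(stream))
assert first["id"] == 0
assert first["user"] in {"alice", "bob", "carol"}
```

Printing these lines to stdout and piping them into Kafka's console producer turns the script into a simulated real-time source.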


This course is taught by professionals with extensive experience handling big data applications for top Fortune 100 companies.

Your instructors have built data pipelines that extract, transform, and process hundreds of terabytes of data a day for their clients, providing data analytics for user services.

At DataShark Academy, we provide accelerated learning programs taught by professionals with years of expertise in big data technologies, gained working with dozens of clients.

You will learn plenty using our unique approach that focuses on maximum results in the shortest possible time.

Who this course is for:
  • A developer who wants to build applications that move data from one end to another.
  • A big data architect who wants to design large-scale data processing applications