
Kafka - Interview preparation guide

What is Apache Kafka?

Apache Kafka is a distributed event streaming platform designed for high-throughput, fault-tolerant, and real-time data streaming. It is used for building real-time data pipelines and streaming applications.

What are the core components of Kafka?

  • Producer: Publishes messages to Kafka topics (a minimal producer sketch using the Java client follows this list).
  • Consumer: Reads messages from topics.
  • Broker: A Kafka server that stores messages and serves producer and consumer requests.
  • Topic: A logical channel to which messages are published and from which consumers read.
  • Partition: A topic is split into partitions for parallelism and scalability.
  • Zookeeper: Manages cluster metadata and coordination (being replaced by KRaft mode in newer Kafka versions).
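
To make these roles concrete, here is a minimal producer sketch using the Java kafka-clients library. The broker address and the "orders" topic are illustrative assumptions, not values from this guide.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class HelloProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // The producer connects to a broker and publishes to a topic ("orders" is an example name)
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-1", "created"));
        } // close() flushes any records still buffered in the client
    }
}
```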

What is a Kafka topic?

A Kafka topic is a logical channel to which producers publish data and consumers read data. Topics can have multiple partitions to scale horizontally.
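
Topics are normally created with an explicit partition count and replication factor. Below is a sketch using the Java Admin API; the topic name, the three partitions, and the replication factor of 2 are example values only.

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker

        try (AdminClient admin = AdminClient.create(props)) {
            // Example topic: 3 partitions, replication factor 2 (needs at least 2 brokers)
            NewTopic orders = new NewTopic("orders", 3, (short) 2);
            admin.createTopics(List.of(orders)).all().get(); // block until the topic exists
        }
    }
}
```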

What is a Kafka partition, and why is it important?

A Kafka partition is an ordered, append-only subset of a topic's messages. Partitions let consumers process a topic in parallel, which improves throughput, and each partition is replicated across brokers for fault tolerance. Kafka guarantees ordering only within a partition, not across the whole topic; records with the same key always land in the same partition, as the sketch below shows.
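
A small sketch of how keys map to partitions, assuming the default partitioner and a hypothetical "clicks" topic: records that share a key are hashed to the same partition, so a consumer reads them back in the order they were sent.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedSends {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The default partitioner hashes the key, so every record keyed "user-42"
            // lands in the same partition of the example "clicks" topic and keeps its order.
            producer.send(new ProducerRecord<>("clicks", "user-42", "page-1"));
            producer.send(new ProducerRecord<>("clicks", "user-42", "page-2"));
            producer.send(new ProducerRecord<>("clicks", "user-42", "page-3"));
        }
    }
}
```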

How does Kafka ensure message durability?

Kafka writes data to disk and replicates it across multiple brokers using a replication factor. This ensures data is not lost even if a broker fails.
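
On the producer side, durability is usually reinforced with acks=all, which waits for all in-sync replicas to acknowledge a write, plus idempotence so retries do not create duplicates. A minimal sketch, assuming an example "payments" topic that already has a replication factor greater than one:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DurableProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all");                // wait for all in-sync replicas to acknowledge
        props.put("enable.idempotence", "true"); // retried sends will not produce duplicates

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "payments" is an example topic assumed to be replicated across brokers
            producer.send(new ProducerRecord<>("payments", "p-1", "captured"));
        }
    }
}
```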

What is the difference between Kafka Consumer Group and Consumer?

  • Consumer: A client that reads messages from Kafka topics.
  • Consumer Group: A set of consumers sharing a group.id. Each partition is assigned to exactly one consumer in the group, so every message is processed by only one group member, enabling parallel consumption (see the sketch below).
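
A minimal consumer sketch: every instance started with the same group.id (here the example name "order-processors") joins one group and is assigned its own share of the topic's partitions.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("group.id", "order-processors");        // example consumer group name
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders")); // example topic
            while (true) {
                // Each poll returns records only from the partitions assigned to this member
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```

Starting a second copy of this program with the same group.id triggers a rebalance that splits the topic's partitions between the two instances.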

What is an offset in Kafka?

An offset is a unique identifier for each record within a Kafka partition. It allows consumers to track their position and resume reading from where they left off.
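
The sketch below turns off auto-commit and commits offsets explicitly after processing each batch, which is one common way to control exactly where a restarted consumer resumes. The group and topic names are placeholders.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("group.id", "audit");                   // example group name
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("enable.auto.commit", "false");         // we commit offsets ourselves

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders")); // example topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // record.offset() is this record's position within its partition
                    System.out.println(record.offset() + " -> " + record.value());
                }
                consumer.commitSync(); // persist our position so a restart resumes here
            }
        }
    }
}
```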

What is Kafka's retention policy?

Kafka’s retention policy determines how long messages are stored. It can be configured by:

  • Time-based: e.g., retain messages for 7 days (retention.ms).
  • Size-based: e.g., retain up to 100 GB per partition (retention.bytes). Whichever limit is reached first causes the oldest log segments to be deleted; the sketch after this list sets both on a topic.
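
Retention is typically set per topic with the retention.ms and retention.bytes configs. Below is a sketch using the Admin API's incrementalAlterConfigs (available in Kafka clients 2.3+); the "orders" topic and the exact limits are just the examples above.

```java
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class SetRetention {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "orders"); // example topic
            Collection<AlterConfigOp> ops = List.of(
                    // Time-based limit: 7 days, expressed in milliseconds
                    new AlterConfigOp(new ConfigEntry("retention.ms", "604800000"),
                            AlterConfigOp.OpType.SET),
                    // Size-based limit: 100 GB per partition, expressed in bytes
                    new AlterConfigOp(new ConfigEntry("retention.bytes", "107374182400"),
                            AlterConfigOp.OpType.SET));
            admin.incrementalAlterConfigs(Map.of(topic, ops)).all().get();
        }
    }
}
```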

How does Kafka handle failure?

  • Replication: Each partition's messages are replicated across multiple brokers, controlled by the replication factor.
  • Leader Election: If the broker hosting a partition's leader fails, Kafka promotes one of the in-sync replicas (ISR) to be the new leader.
  • Consumer Offset Management: Committed offsets let consumers resume processing from where they left off after a failure. The sketch after this list inspects a topic's leaders and in-sync replicas.
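
One way to see replication and leader election in practice is to describe a topic and print each partition's leader and in-sync replicas. A sketch assuming a reasonably recent kafka-clients version (allTopicNames() needs 3.1+) and the example "orders" topic:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;

public class InspectReplicas {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(List.of("orders")) // example topic
                    .allTopicNames().get().get("orders");
            // If the broker hosting a partition's leader dies, one of the ISR members takes over
            desc.partitions().forEach(p ->
                    System.out.printf("partition=%d leader=%s isr=%s%n",
                            p.partition(), p.leader(), p.isr()));
        }
    }
}
```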

What are the main APIs in Kafka?

  1. Producer API: For publishing records.
  2. Consumer API: For subscribing to topics.
  3. Streams API: For processing and transforming data in real time (a minimal topology sketch follows this list).
  4. Admin API: For managing Kafka topics and brokers.
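
The Producer, Consumer, and Admin APIs appear in the earlier sketches, so here is a minimal Streams API topology that uppercases values from one topic into another. It assumes the separate kafka-streams dependency; the application ID and topic names are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class UppercaseStream {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-demo");    // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed local broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Read from "orders", transform each value, and write to "orders-upper" (example topics)
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("orders")
               .mapValues(value -> value.toUpperCase())
               .to("orders-upper");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```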