Apache Kafka Framework Architecture

Let’s dive into the Kafka framework and its architecture.

The Kafka architecture exposes four core APIs:

  1. Producer API
  2. Consumer API
  3. Streams API
  4. Connector API

(Figure: Kafka architecture diagram)

Producer API

  • The Producer API lets client applications connect to the Kafka brokers running in the cluster and publish a stream of records to one or more Kafka topics.
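As a rough illustration of the idea (a toy in-memory model, not the real client library, which talks to brokers over the network), a producer appends records to a topic's log and gets back the record's offset:

```python
# Toy in-memory model of the Producer API (illustrative only).
class ToyBroker:
    def __init__(self):
        self.topics = {}  # topic name -> list of records (the "log")

    def publish(self, topic, record):
        # Append the record to the topic's log and return its offset.
        log = self.topics.setdefault(topic, [])
        log.append(record)
        return len(log) - 1

broker = ToyBroker()
print(broker.publish("orders", "order-1"))  # 0
print(broker.publish("orders", "order-2"))  # 1
```

The topic name "orders" and the `ToyBroker` class are purely illustrative; a real producer would use a Kafka client library and a running cluster.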

Consumer API

  • The Consumer API lets clients connect to the Kafka brokers running in the cluster and consume streams of records from one or more Kafka topics.
  • Consumers subscribe to topics and read the messages published to them.
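Continuing the same toy model (again, not the real client API), a consumer reads from a topic's log starting at its current offset and advances that offset as it goes:

```python
# Toy in-memory model of the Consumer API (illustrative only).
class ToyConsumer:
    def __init__(self, log):
        self.log = log    # the topic's record list
        self.offset = 0   # position of the next record to read

    def poll(self, max_records=10):
        # Return up to max_records unread records, advancing the offset.
        records = self.log[self.offset:self.offset + max_records]
        self.offset += len(records)
        return records

log = ["m1", "m2", "m3"]
consumer = ToyConsumer(log)
print(consumer.poll())  # ['m1', 'm2', 'm3']
print(consumer.poll())  # [] -- everything has already been consumed
```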

Streams API

  • The Streams API permits clients to act as stream processors, consuming input streams from one or more topics and producing output streams to one or more other topics.
  • The Streams API also permits transforming the input streams into the output streams along the way.
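The consume-transform-produce loop at the heart of a stream processor can be sketched like this (a toy model over in-memory topics; the real Streams API is a Java library with far richer semantics):

```python
# Toy model of a stream processor: consume from an input topic,
# apply a transformation, and produce to an output topic.
def process_stream(topics, in_topic, out_topic, transform):
    out = topics.setdefault(out_topic, [])
    for record in topics.get(in_topic, []):
        out.append(transform(record))

# The topic names and records below are purely illustrative.
topics = {"raw-text": ["hello", "kafka streams"]}
process_stream(topics, "raw-text", "upper-text", str.upper)
print(topics["upper-text"])  # ['HELLO', 'KAFKA STREAMS']
```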

Connector API

  • The Connector API lets users create reusable source and sink connector components for various data sources.
  • The Connector API allows building and running reusable producers or consumers that connect Kafka topics to existing applications and data systems. For example, a connector to an RDBMS (Relational Database Management System) might capture every change to a table.
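The RDBMS example above can be sketched as a toy source connector (illustrative only; real connectors are Java components that run inside the Kafka Connect framework): it periodically polls an external "table" and publishes only the rows it has not seen yet, remembering where it left off.

```python
# Toy model of a source connector: poll an external "table" and publish
# new rows to a Kafka topic, tracking progress like a connector offset.
def poll_source(table, topic_log, last_id):
    for row in table:
        if row["id"] > last_id:        # only rows added since the last poll
            topic_log.append(row)
            last_id = row["id"]
    return last_id

# Hypothetical table contents for illustration.
table = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
topic = []
last = poll_source(table, topic, last_id=0)   # picks up rows 1 and 2
table.append({"id": 3, "name": "carol"})      # a change in the source table
last = poll_source(table, topic, last)        # picks up only row 3
print([r["id"] for r in topic])  # [1, 2, 3]
```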

Workflow of the Publish-Subscribe Messaging System

The step-wise workflow is explained below:

  1. First, producers send messages to a topic at regular intervals.
  2. The Kafka broker stores all messages in the partitions configured for that topic and ensures the messages are shared evenly between partitions. For example, if a producer sends three messages to a topic with three partitions, Kafka stores the first message in the first partition, the second message in the second partition, and so on.
  3. Next, a consumer subscribes to a specific topic.
  4. Once the consumer has subscribed, Kafka provides the topic's current offset to the consumer and also saves that offset in the ZooKeeper ensemble.
  5. The consumer polls Kafka at a regular interval (e.g., every 100 ms) for new messages.
  6. Once Kafka receives messages from producers, it forwards them to the consumers.
  7. The consumer then processes all the received messages.
  8. Once the messages are processed, the consumer sends an acknowledgement back to the Kafka broker.
  9. Once Kafka receives the acknowledgement, it advances the offset to the new value and updates it in ZooKeeper. Because offsets are maintained in ZooKeeper, the consumer can resume reading the next message correctly even after a broker failure.
  10. This flow repeats until the consumer stops sending requests.
  11. The consumer can also rewind or skip to any desired offset of a topic at any time and read all subsequent messages from there.
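The steps above can be sketched end to end as a toy model (illustrative only; a plain dict stands in for ZooKeeper, and all names are made up for the example): round-robin partitioning on publish, offset commit on acknowledgement, and an explicit rewind.

```python
# Toy end-to-end model of the publish-subscribe workflow above.
class ToyTopic:
    def __init__(self, num_partitions):
        self.partitions = [[] for _ in range(num_partitions)]
        self.next_p = 0

    def publish(self, record):
        # Step 2: spread messages evenly across partitions (round-robin).
        self.partitions[self.next_p].append(record)
        self.next_p = (self.next_p + 1) % len(self.partitions)

zookeeper = {}  # (consumer group, partition) -> committed offset

def consume(topic, group, partition):
    # Steps 4-9: read from the committed offset, then commit the new one
    # (standing in for the consumer's acknowledgement).
    offset = zookeeper.get((group, partition), 0)
    records = topic.partitions[partition][offset:]
    zookeeper[(group, partition)] = offset + len(records)
    return records

t = ToyTopic(num_partitions=3)
for m in ["m1", "m2", "m3", "m4"]:          # step 1: producer sends messages
    t.publish(m)
print(t.partitions)                          # [['m1', 'm4'], ['m2'], ['m3']]
print(consume(t, "g1", 0))                   # ['m1', 'm4']
print(consume(t, "g1", 0))                   # [] -- offset already committed
zookeeper[("g1", 0)] = 0                     # step 11: rewind to offset 0
print(consume(t, "g1", 0))                   # ['m1', 'm4'] again
```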

“That’s all about the architecture and workflow of Apache Kafka.”