Implement stream-aware, reactive integration pipelines with Alpakka Kafka

March 1, 2019

Project Alpakka was born out of the need for streaming data in a world of microservices and cloud deployments, where new components must interact with legacy systems.

Alpakka Kafka is an open source initiative to implement stream-aware, reactive integration pipelines for Java and Scala.

According to the GitHub repo, it is built on top of Akka Streams and has been designed from the ground up to understand streaming natively and provide a DSL for reactive and stream-oriented programming, with built-in support for backpressure.
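As a rough sketch of what that DSL looks like in practice, the snippet below consumes records from a topic as an Akka Streams `Source` and transforms them with an ordinary stream operator. Broker address, topic, and group names are placeholders, and the code assumes the Alpakka Kafka 1.0 dependency on the classpath and a running Kafka broker:

```scala
import akka.actor.ActorSystem
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.Consumer
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Sink
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.common.serialization.StringDeserializer

object PlainConsumerExample extends App {
  implicit val system: ActorSystem = ActorSystem("example")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  val consumerSettings =
    ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("localhost:9092") // placeholder broker address
      .withGroupId("example-group")           // placeholder group id
      .withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")

  // Kafka records arrive as an Akka Streams Source; backpressure
  // propagates from downstream operators back to the Kafka subscription.
  Consumer
    .plainSource(consumerSettings, Subscriptions.topics("example-topic"))
    .map(record => record.value.toUpperCase) // any Akka Streams operator fits here
    .runWith(Sink.foreach(println))
}
```

Because the source is a regular Akka Streams stage, slow downstream processing automatically slows down polling rather than piling up records in memory.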

Today, we take a closer look at what Alpakka Kafka has to offer and go over the important highlights of its 1.0 release.

The features

Here are the main features Alpakka Kafka has to offer:

  • Provides Apache Kafka connectivity for Akka Streams
  • Supports consuming messages from Kafka into Akka Streams with at-most-once, at-least-once and transactional semantics, and supports producing messages to Kafka
  • Once consumed messages are in the Akka Stream, the whole flexibility of all Akka Stream operators becomes available
  • Achieves back-pressure for consuming by automatically pausing and resuming its Kafka subscriptions
  • Alpakka Kafka 1.0 uses the Apache Kafka Java client 2.1.0 internally
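The producing side looks symmetric: a stream of `ProducerRecord`s can be run into a Kafka sink. The following minimal sketch (again with placeholder broker and topic names, and assuming a running broker) writes ten messages to a topic:

```scala
import akka.actor.ActorSystem
import akka.kafka.ProducerSettings
import akka.kafka.scaladsl.Producer
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Source
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.common.serialization.StringSerializer

object ProducerExample extends App {
  implicit val system: ActorSystem = ActorSystem("example")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  val producerSettings =
    ProducerSettings(system, new StringSerializer, new StringSerializer)
      .withBootstrapServers("localhost:9092") // placeholder broker address

  // Emit ten records into the topic; the sink's materialized Future
  // completes once the stream has finished sending.
  Source(1 to 10)
    .map(n => new ProducerRecord[String, String]("example-topic", s"message-$n"))
    .runWith(Producer.plainSink(producerSettings))
}
```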

The 1.0 release also features some important changes since 0.22:

  • Upgrade to Kafka client 2.1.0 #660 – This upgrade makes it possible to use zstandard compression (with Kafka 2.1 brokers) and switches to the Kafka client 2.x poll API #614.
  • No more WakeupException – The Kafka client API 2.x allows specifying a timeout when polling the Kafka broker, so the cranky tool of Kafka’s WakeupExceptions is no longer needed to keep the polling thread from blocking indefinitely. The settings that configure wake-ups are no longer used.
  • Alpakka Kafka consumers don’t fail for non-responding Kafka brokers anymore (as they used to after a number of WakeupExceptions).
  • New Committer.sink and Committer.flow for standardized committing #622 and #644
  • Commit with metadata #563 and #579
  • Java APIs for all settings classes #616
  • Alpakka Kafka testkit
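The new `Committer.sink` standardizes at-least-once committing: offsets are committed only after the message has been processed. A hedged sketch, where `process` stands in for hypothetical business logic and broker/topic names are placeholders:

```scala
import akka.actor.ActorSystem
import akka.kafka.{CommitterSettings, ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.{Committer, Consumer}
import akka.stream.ActorMaterializer
import org.apache.kafka.common.serialization.StringDeserializer

import scala.concurrent.Future

object AtLeastOnceExample extends App {
  implicit val system: ActorSystem = ActorSystem("example")
  implicit val materializer: ActorMaterializer = ActorMaterializer()
  import system.dispatcher

  val consumerSettings =
    ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("localhost:9092") // placeholder broker address
      .withGroupId("example-group")           // placeholder group id

  // Hypothetical business logic; returns a Future so the stream can
  // wait for completion before the offset is committed.
  def process(value: String): Future[Unit] = Future(println(value))

  Consumer
    .committableSource(consumerSettings, Subscriptions.topics("example-topic"))
    .mapAsync(parallelism = 1) { msg =>
      process(msg.record.value).map(_ => msg.committableOffset)
    }
    .runWith(Committer.sink(CommitterSettings(system)))
}
```

If the application crashes between processing and committing, the record is redelivered on restart, which is exactly the at-least-once guarantee the feature list above refers to.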

Head over to the official documentation or the GitHub repo to find all the relevant information on Alpakka Kafka and get started in no time!


Source: JAXenter