December 06, 2018

Kafka Streams – Is it the right Stream Processing engine for you?

In an earlier blog post, Democratizing Analytics within Kafka With 3 Powerful New Access Patterns in HDP and HDF, we discussed different access patterns that provide application developers and BI analysts with powerful new tools for implementing diverse use cases where Kafka is a key component of their application architectures. In this blog, we discuss in detail the streaming access pattern and the addition of Kafka Streams support in HDF 3.3 and the upcoming HDP 3.1 release.

Before the addition of Kafka Streams support, HDP and HDF supported two stream processing engines: Spark Structured Streaming and Streaming Analytics Manager (SAM) with Storm. So naturally, this raises the following question:

Why add a third stream processing engine to the platform?

With Spark Structured Streaming and SAM with Storm already available, customers could pick the stream processing engine that best fit their non-functional requirements and use cases. However, neither of these engines addressed the following types of requirements that we saw from our customers:

  • A lightweight library for building event-driven microservices with Kafka as the messaging/event backbone.
  • An application runtime that does not require a dedicated cluster.
  • Simple APIs for application developers who want to build streaming applications programmatically for less complex use cases.
  • Exactly-once semantics for data pipelines that consist only of Kafka.
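The exactly-once requirement in the last bullet is, in Kafka Streams, a single configuration switch rather than application-level plumbing. The sketch below shows the relevant property; the application id and broker address are illustrative assumptions, not values from this post.

```java
import java.util.Properties;

public class ExactlyOnceConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed example values -- substitute your own app id and brokers.
        props.put("application.id", "orders-enrichment-app");
        props.put("bootstrap.servers", "broker1:9092");
        // Kafka Streams turns on exactly-once processing for Kafka-to-Kafka
        // pipelines with this one property; no extra transaction code is needed.
        props.put("processing.guarantee", "exactly_once");
        System.out.println(props.getProperty("processing.guarantee"));
    }
}
```

Because the guarantee is implemented with Kafka transactions, it only holds when both the input and the output of the pipeline are Kafka topics, which is exactly the scenario the bullet describes.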

Kafka Streams addresses each of these requirements. With its addition, customers now have more options to pick the right stream processing engine for their requirements and use cases. The table below provides some general guidelines and comparisons.

The table above packs in a lot of information. So, when is Kafka Streams an ideal choice for your stream processing needs? Consider it when:

  • Your stream processing application consists of Kafka to Kafka pipelines.
  • You don’t need/want another cluster for stream processing.
  • You want to perform common stream processing operations such as filtering, joins, aggregations, and enrichment on the stream for simpler stream processing apps.
  • Your target users are developers with Java development backgrounds.
  • Your use cases involve building lightweight microservices, simple ETL, or streaming analytics apps.
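The Kafka-to-Kafka, filter-and-enrich shape described in the bullets above maps directly onto the Kafka Streams DSL. Here is a minimal sketch: it reads from one topic, filters and enriches records in flight, and writes back to Kafka. The topic names and the uppercase "enrichment" are illustrative assumptions, and printing the topology is just a cheap way to inspect the pipeline without a running broker.

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;

public class FilterEnrichSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Assumed topic names for illustration only.
        KStream<String, String> orders = builder.stream("orders");

        orders.filter((key, value) -> value != null && !value.isEmpty()) // drop empty records
              .mapValues(String::toUpperCase)   // stand-in for a real enrichment step
              .to("orders-enriched");

        Topology topology = builder.build();
        // Describe the wiring (sources, processors, sinks) without executing it.
        System.out.println(topology.describe());
    }
}
```

Note that this is an ordinary Java application: packaged as a jar, it runs anywhere, with parallelism coming from Kafka partitions rather than from a processing cluster.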

All three supported streaming engines use a centralized set of platform services that provide security (authentication/authorization), auditing, governance, schema management, and monitoring capabilities.

What’s Next

In a follow-up post, we will demonstrate Kafka Streams integrated with Schema Registry, Atlas, and Ranger by building a set of microservices apps around a fictitious use case.
