Best way to persist join table data in Cassandra

I have a scenario where a join is created on top of 10 tables. This works great when the join is done in the database. Now these tables are streaming data through Kafka topics (a 1:1 table-to-topic mapping), and I need to create/update the join(s) as new messages arrive on the topics. So far, I've decided to store this data in a NoSQL DB like Cassandra and update the joined records as events keep coming. Here are my questions:
Is there a way to do this within Kafka itself?
If not in Kafka, what is the best way to do that?
Does the solution of persisting in Cassandra offer a better alternative?
Please note: I've read that Cassandra isn't the right solution for joins. If not Cassandra, what is recommended? Please don't shoot the question down as subjective; even if nothing else, I at least expect to gain some insights from the answers.

Is there a way to do this within Kafka itself?
Yes, using Kafka Streams or KSQL.
Kafka Streams details and example
KSQL details and example
As Justin Cameron noted, joins are limited to 2-way joins, so you would need to "daisy chain" your transformations. Each would write back to a staging Kafka topic, and the final joined result would also be a Kafka topic. From here, you can stream it to Cassandra using Kafka Connect (part of Apache Kafka).
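For illustration, here is a minimal Kafka Streams sketch of the daisy-chaining idea, assuming three hypothetical CDC topics (orders, customers, payments) that share the same key and carry plain string values; a real pipeline would use Avro/JSON serdes and re-key where the join keys differ. With the Streams DSL, the intermediate joins live in local state stores, so only the final result needs to be written out for a Cassandra sink connector to pick up.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;

public class DaisyChainJoin {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Read each CDC topic as a changelog (KTable), keyed by the join key.
        KTable<String, String> orders    = builder.table("orders");
        KTable<String, String> customers = builder.table("customers");
        KTable<String, String> payments  = builder.table("payments");

        // 2-way join #1: orders JOIN customers
        KTable<String, String> ordersCustomers =
                orders.join(customers, (o, c) -> o + "|" + c);

        // 2-way join #2: (orders JOIN customers) JOIN payments
        KTable<String, String> joined =
                ordersCustomers.join(payments, (oc, p) -> oc + "|" + p);

        // Emit the latest joined row per key to a topic that a
        // Cassandra sink connector can consume.
        joined.toStream().to("joined-output",
                Produced.with(Serdes.String(), Serdes.String()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "join-pipeline");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        new KafkaStreams(builder.build(), props).start();
    }
}
```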
Disclaimer: I work for Confluent, the company behind the open-source KSQL project.

Related

How to build Aggregations on Apache Solr with Spark

I have a requirement to build aggregations on the data that we receive into our Apache Kafka...
I am a little bit lost as to which technological path to follow...
It seems the standard way people see is a constellation of Apache Kafka <-> Apache Spark <-> Solr
I can't find concrete examples of how this actually works, but I am also asking myself whether a solution like
Apache Kafka <-> Kafka Connect Solr <-> Solr
would not do the trick, because Solr also supports aggregations...
Solr Aggregation
but I saw some code snippets that aggregate the data in Spark and write it to Solr under a special index.....
Also, aggregation with Kafka <-> Kafka Connect Solr <-> Solr will probably only work for a single Kafka topic, so if I have to combine and aggregate the data from 2 or more different topics, then Kafka, Spark, Solr is the way to go.... (or is this viable at all?)
So as you may read, I am a little bit confused, so I would like to ask here: how are you approaching this problem in your real-life solutions....
Thanks for any answers...
Spark can of course join multiple topics, and so can Flink or Kafka Streams/ksqlDB. Spark and Flink just happen to also be able to write their output to external systems such as Solr, rather than exclusively back into a new Kafka topic. The "downside" is that you need to maintain a scheduler exclusively for them, as compared to running a cluster of standalone Kafka Connect or Kafka Streams JAR applications. If you're using Kubernetes, then that could be used for all of the above (maybe not Flink... haven't tried).
Kafka Connect can consume multiple topics and, depending on the connector configuration, might write to one or many Solr collections.
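As a rough illustration of the Spark option, here is a sketch of a Spark Structured Streaming job (Java API) that joins two hypothetical topics, clicks and profiles, on a shared key; the console sink only stands in for a real Solr writer, which would need its own connector or library.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class TwoTopicJoin {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder().appName("two-topic-join").getOrCreate();

        Dataset<Row> clicks = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("subscribe", "clicks")
                .load()
                .selectExpr("CAST(key AS STRING) AS userId", "CAST(value AS STRING) AS click");

        Dataset<Row> profiles = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("subscribe", "profiles")
                .load()
                .selectExpr("CAST(key AS STRING) AS userId", "CAST(value AS STRING) AS profile");

        // Inner stream-stream join on the shared key; in practice you would add
        // watermarks to bound state (and they are required for outer joins).
        Dataset<Row> joined = clicks.join(profiles, "userId");

        // The console sink keeps the sketch self-contained; writing to Solr
        // would replace this with a suitable sink library.
        StreamingQuery query = joined.writeStream()
                .format("console")
                .outputMode("append")
                .start();
        query.awaitTermination();
    }
}
```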

Kafka Spark Streaming ingestion for multiple topics

We are currently ingesting Kafka messages into HDFS using Spark Streaming. So far we spawn a whole Spark job for each topic.
Since messages are produced pretty rarely for some topics (average of 1 per day), we're thinking about organising the ingestion in pools.
The idea is to avoid creating a whole container (and related resources) for these "infrequent" topics. In fact, Spark Streaming accepts a list of topics as input, so we're thinking about using this feature in order to have a single job consuming all of them.
Do you think this is a good strategy? We also thought about batch ingestion, but we like to keep the real-time behaviour, so we excluded that option. Do you have any tips or suggestions?
Does Spark Streaming handle multiple topics as a source well in case of failures, in terms of offset consistency etc.?
Thanks!
I think Spark should be able to handle multiple topics fine, as it has had support for this for a long time. And yes, Kafka Connect is not a Confluent-only API: Confluent does provide connectors for their platform, but you can use Kafka Connect without it. You can see that Apache Kafka also has documentation for the Connect API.
It is a little more difficult with the plain Apache version of Kafka, but you can use it.
https://kafka.apache.org/documentation/#connectapi
Also, if you're opting for multiple Kafka topics in a single Spark Streaming job, you may need to think about avoiding small files, since your message frequency seems very low.
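A rough sketch of the single-job approach, assuming the spark-streaming-kafka-0-10 integration and hypothetical topic names and output path; the long batch interval and coalesce() are one way to keep rarely-updated topics from producing many tiny HDFS files.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MultiTopicIngestion {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("multi-topic-ingestion");
        // A long batch interval reduces the number of output files for sparse topics.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.minutes(10));

        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        kafkaParams.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        kafkaParams.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        kafkaParams.put(ConsumerConfig.GROUP_ID_CONFIG, "hdfs-ingestion");

        // One job, one consumer group, many topics.
        List<String> topics = Arrays.asList("topic-a", "topic-b", "topic-c");

        JavaInputDStream<ConsumerRecord<String, String>> stream =
                KafkaUtils.createDirectStream(
                        jssc,
                        LocationStrategies.PreferConsistent(),
                        ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams));

        // Coalesce each batch before writing so infrequent topics do not
        // turn into a large number of small files.
        stream.foreachRDD((rdd, time) -> {
            if (!rdd.isEmpty()) {
                rdd.map(ConsumerRecord::value)
                   .coalesce(1)
                   .saveAsTextFile("/data/ingest/" + time.milliseconds());
            }
        });

        jssc.start();
        jssc.awaitTermination();
    }
}
```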

Which framework should be used to aggregate and join the data of Kafka topics and store it into MySQL

I have data in two Kafka topics, sourced from MySQL using the debezium-connector-mysql plugin.
Now I want to aggregate this data at a daily level and store it into another MySQL table.
Please suggest an approach.
Thanks.
You've not really laid out your requirements, other than commenting that you don't want to use Confluent Platform (but not said why).
In general, with data in Kafka (regardless of where it comes from) you have different options for processing it:
Bespoke consumer (probably a bad idea, given the availability of stream processing frameworks)
KSQL (use SQL to do your joins etc) - part of Confluent Platform
Kafka Streams - a Java library for doing stream processing. Part of Apache Kafka.
Flink, Spark Streaming, Samza, Heron, etc etc etc
It's up to you which you use, and it's going to come down to factors such as
Existing technology in use (no point deploying a Spark cluster if you don't need to; conversely, if you already use Spark and have lots of developers trained on it then it could make sense to use it)
Language familiarity of developers - does it have to be a Java API, or is SQL more accessible
Capabilities of the framework/tool - do you need tight security integration, exactly-once processing, CEP, etc etc. Some of these will rule in or out the tool that you use.
Once you've joined and aggregated your data, a good pattern to follow is to write it back to Kafka (thus more loosely decoupling your design, and enabling separation of responsibilities of the components) and from there write it to MySQL using Kafka Connect and the JDBC Sink. Kafka Connect is part of Apache Kafka.
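To make that pattern concrete, here is a minimal Kafka Streams sketch (assuming a recent Kafka version) that counts events per key per day from a hypothetical orders topic and writes the result to an orders-daily topic, which a JDBC Sink connector could then write to MySQL; a real job would likely use aggregate() over the actual Debezium payload rather than a simple count.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.TimeWindows;

import java.time.Duration;
import java.util.Properties;

public class DailyAggregation {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Count events per key per day; replace count() with aggregate()
        // to compute sums/averages over the message payload instead.
        builder.<String, String>stream("orders")
               .groupByKey()
               .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofDays(1)))
               .count()
               .toStream()
               .map((windowedKey, count) -> KeyValue.pair(
                       windowedKey.key() + "@" + windowedKey.window().startTime(),
                       count.toString()))
               .to("orders-daily");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "daily-aggregation");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        new KafkaStreams(builder.build(), props).start();
    }
}
```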
One final consideration: if you're taking data from MySQL, to process it and then write it back into MySQL… do you even need Kafka? Is there an appropriate reason to be using it and not just doing this processing in MySQL itself?
Disclaimer: I work for Confluent.

Log processing using ELK stack

My requirement is:
I have log files that I need to process, and I would also like to enrich the log information with some data that I have in a Postgres DB.
Step 1: I plan to feed data from the above two sources (log files and database) into Kafka topics, using Logstash.
Step 2: I plan to use Kafka Streams to join data across the different Kafka topics and push it to Elasticsearch via API calls.
My doubt is about step 2:
Is Kafka Streams the way to go? Or can I use Apache Spark, which I believe can be used for the same?
Any help on this is appreciated.
Step 1: I plan to feed data from the above two sources (log files and database) into Kafka topics, using Logstash.
If you're already using Apache Kafka, then note that you can use Kafka Connect for integrating systems, including databases, into Kafka. For information on integrating databases, see this article.
Step 2: I plan to use Kafka Streams to join data across the different Kafka topics and push it to Elasticsearch via API calls.
My doubt is about step 2: is Kafka Streams the way to go, or can I use Apache Spark, which I believe can be used for the same? Any help on this is appreciated.
Yes, Kafka Streams is a good fit for this. It can enrich events as they flow through a topic, using data from other topics. These topics can be sourced from any system, including log files, databases, etc. Here is example code of such a join, and the documentation for it.
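Since the linked example isn't reproduced here, the following is a minimal sketch of such an enrichment join, assuming hypothetical topics app-logs (log events keyed by, say, a user id, fed by Logstash) and pg-users (the Postgres reference data) with string values; the reference data is read as a GlobalKTable so every instance holds the full lookup table.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class LogEnrichment {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Raw log lines from Logstash, keyed by a user id.
        KStream<String, String> logs = builder.stream("app-logs");

        // Reference data from Postgres, published to its own topic.
        GlobalKTable<String, String> users = builder.globalTable("pg-users");

        // Join each log event with the matching reference record and write
        // the enriched event to a topic that an Elasticsearch sink can read.
        logs.join(users,
                  (logKey, logValue) -> logKey,              // lookup key into the table
                  (logValue, userValue) -> logValue + " | " + userValue)
            .to("enriched-logs");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "log-enrichment");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        new KafkaStreams(builder.build(), props).start();
    }
}
```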
BTW you might also want to check out KSQL. KSQL is built on Kafka Streams, so you get the same scalability and elasticity, but with a SQL abstraction that you can run directly (no coding needed). For an example of using KSQL to enrich streams of data, see this talk or this article.
(Disclosure: I work for Confluent, who lead the open-source KSQL project)

Streaming data from Kafka into Cassandra in real time

What's the best way to write data from Kafka into Cassandra? I would expect it to be a solved problem, but there doesn't seem to be a standard adapter.
A lot of people seem to be using Storm to read from Kafka and then write to Cassandra, but Storm seems like overkill for simple ETL operations.
We are heavily using Kafka and Cassandra through Storm
We rely on Storm because:
there are usually a lot of distributed (inter-node) processing steps before the result of the original message hits Cassandra (Storm bolt topologies)
we don't need to maintain the Kafka consumer state (offsets) ourselves - the Storm-Kafka connector does it for us once all products of the original message are acked within Storm
message processing is distributed across nodes natively by Storm
Otherwise, if it is a very simple case, you can effectively read messages from Kafka and write the results to Cassandra without the help of Storm.
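For that simple case, a plain consumer loop is enough. The sketch below assumes a hypothetical events topic and a demo.events table, and uses the DataStax Java driver 4.x; offsets are committed only after the writes succeed, giving at-least-once delivery.

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.PreparedStatement;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class KafkaToCassandra {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "cassandra-writer");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // CqlSession.builder().build() connects to localhost by default;
        // configure contact points and the local datacenter as needed.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             CqlSession session = CqlSession.builder().build()) {

            consumer.subscribe(Collections.singletonList("events"));
            PreparedStatement insert = session.prepare(
                    "INSERT INTO demo.events (id, payload) VALUES (?, ?)");

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    session.execute(insert.bind(record.key(), record.value()));
                }
                // At-least-once: commit offsets only after the writes succeed.
                consumer.commitSync();
            }
        }
    }
}
```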
Recent releases of Kafka come with the connector concept (Kafka Connect) to support sources and sinks as first-class concepts in the design. With this, you do not need any streaming framework for moving data in/out of Kafka. Here is a Cassandra connector for Kafka that you can use: https://github.com/tuplejump/kafka-connect-cassandra
