I have an Apache access log file and I want to store the access counts (total/daily/hourly) of each page in a Cassandra table.
I am trying to do this by using Kafka Connect to stream the log file into a Kafka topic. Can I use Kafka Connect again to increment the metrics counters in Cassandra? Otherwise, which other tool should be used here, e.g. Kafka Streams, Spark, Flink, etc.?
You're talking about doing stream processing, which Kafka can do - either with Kafka's Streams API, or KSQL. KSQL runs on top of Kafka Streams, and gives you a very simple way to build the kind of aggregations that you're talking about.
Here's an example of aggregating a stream of data in KSQL:
SELECT PAGE_ID, COUNT(*) FROM PAGE_CLICKS WINDOW TUMBLING (SIZE 1 HOUR) GROUP BY PAGE_ID;
See more at: https://www.confluent.io/blog/using-ksql-to-analyse-query-and-transform-data-in-kafka
You can take the output of KSQL which is actually just a Kafka topic, and stream that through Kafka Connect e.g. to Elasticsearch, Cassandra, and so on.
You mention other stream processing tools; they're valid too. The choice depends in part on existing skills and language preferences (e.g. Kafka Streams is a Java library, KSQL is … KSQL, Spark Streaming has Python as well as Java, etc.), but also on deployment preferences: Kafka Streams is just a Java library to deploy within your existing application, KSQL is deployable in a cluster, and so on.
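If you go the Kafka Streams (Java) route instead of KSQL, a minimal sketch of the same hourly aggregation might look like this (it assumes the usual org.apache.kafka.streams imports, the topic names are made up, and it assumes the page ID is already the record key):

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "page-click-counts");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

    StreamsBuilder builder = new StreamsBuilder();
    builder.stream("PAGE_CLICKS", Consumed.with(Serdes.String(), Serdes.String()))
           .groupByKey()                                        // key = page id
           .windowedBy(TimeWindows.of(Duration.ofHours(1)))     // hourly tumbling window
           .count()
           .toStream()
           .map((windowedKey, count) -> KeyValue.pair(windowedKey.key(), count))
           .to("PAGE_COUNTS_HOURLY", Produced.with(Serdes.String(), Serdes.Long()));

    new KafkaStreams(builder.build(), props).start();

The PAGE_COUNTS_HOURLY topic can then be picked up by a Kafka Connect sink, as described above.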
This can be easily done with Flink, either as a batch or streaming job, and either with or without Kafka (Flink can read from files and write to Cassandra). This sort of time-windowed aggregation is easily done with Flink's SQL API; see the examples here.
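For illustration, a rough Flink DataStream sketch of the same hourly page count, reading the log file directly and writing to Cassandra via the flink-connector-cassandra sink (the file path, keyspace/table names, and the naive log parsing are assumptions; exact APIs vary a bit by Flink version):

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    DataStream<Tuple2<String, Long>> hourlyCounts = env
        .readTextFile("/var/log/apache2/access.log")        // or a Kafka source instead
        .map(line -> Tuple2.of(line.split(" ")[6], 1L))      // naive parse: 7th field of a common-log line is the request path
        .returns(Types.TUPLE(Types.STRING, Types.LONG))
        .keyBy(t -> t.f0)
        .window(TumblingProcessingTimeWindows.of(Time.hours(1)))
        .sum(1);

    // the tuple fields are bound to the ? placeholders
    CassandraSink.addSink(hourlyCounts)
        .setQuery("INSERT INTO metrics.page_counts_hourly (page, cnt) VALUES (?, ?);")
        .setHost("127.0.0.1")
        .build();

    env.execute("hourly-page-counts");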
With Spark Streaming, I can read Kafka messages and write data to different kinds of tables, for example HBase, Hive and Kudu. But this can also be done by using Kafka connectors for these tables. My question is, in which situations I should prefer connectors over the Spark Streaming solution.
Also, how fault-tolerant is the Kafka connector solution? We know that with Spark Streaming, we can use checkpoints and executors running on multiple nodes for fault-tolerant execution, but how is fault tolerance (if possible) achieved with Kafka connectors? By running the connector on multiple nodes?
So, generally, there should be no big difference in functionality when it comes to simply reading records from Kafka and sending them into other services.
Kafka Connect is probably easier when it comes to standard tasks since it offers various connectors out of the box, so it will most likely reduce the need to write any code. So, if you just want to copy a bunch of records from Kafka to HDFS or Hive, then it will probably be easier and faster to do with Kafka Connect.
With this in mind, Spark Streaming takes over when you need to do things that are not standard, i.e. if you want to perform some aggregations or calculations over records and write them to Hive, then you should probably go for Spark Streaming from the beginning.
Generally, I have found doing non-standard things with Kafka Connect, like splitting one message into multiple ones (assuming it was, for example, a JSON array), to be quite troublesome and often to require much more work than it would in Spark.
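For example, a minimal Spark Structured Streaming sketch (Java) of an aggregation over records read from Kafka; the topic name is an assumption, and the console sink is just a stand-in for whichever Hive/HBase/Kudu write you actually need (it assumes org.apache.spark.sql imports, a static import of org.apache.spark.sql.functions, and the spark-sql-kafka-0-10 package):

    SparkSession spark = SparkSession.builder().appName("kafka-aggregation").getOrCreate();

    Dataset<Row> counts = spark.readStream()
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", "events")                               // assumed topic name
        .load()
        .selectExpr("CAST(value AS STRING) AS record", "timestamp")  // the Kafka source exposes a timestamp column
        .groupBy(window(col("timestamp"), "1 hour"), col("record"))  // hourly counts per record value
        .count();

    counts.writeStream()
        .outputMode("update")   // streaming aggregations need update/complete output mode
        .format("console")      // stand-in; use the appropriate sink or foreachBatch for Hive/HBase
        .start()
        .awaitTermination();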
As for Kafka Connect fault tolerance, as described in the docs, this is achieved by running multiple distributed workers with the same group.id; the workers redistribute tasks and connectors if one of them fails.
in which situations I should prefer connectors over the Spark streaming solution.
"It Depends" :-)
Kafka Connect is part of Apache Kafka, and so has tighter integration with Apache Kafka in terms of security, delivery semantics, etc.
If you don't want to write any code, Kafka Connect is easier because it's just JSON to configure and run
If you're not using Spark already, Kafka Connect is arguably more straightforward to deploy (run the JVM, pass in the configuration)
As a framework, Kafka Connect is more transferable since the concepts are the same; you just plug in the appropriate connector for the technology that you want to integrate with each time
Kafka Connect handles all the tricky stuff for you, like schemas, offsets, restarts, scale-out, etc.
Kafka Connect supports Single Message Transforms for making changes to data as it passes through the pipeline (masking fields, dropping fields, changing data types, etc.). For more advanced processing you would use something like Kafka Streams or ksqlDB.
If you are using Spark, and it's working just fine, then it's not necessarily prudent to rip it up to use Kafka Connect instead :)
Also, how fault-tolerant is the Kafka connector solution? … how is fault tolerance (if possible) achieved with Kafka connectors?
Kafka Connect can be run in distributed mode, in which you have one or more worker processes across nodes. If a worker fails, Kafka Connect rebalances the tasks across the remaining ones. If you add a worker in, Kafka Connect will rebalance to ensure workload distribution. This was drastically improved in Apache Kafka 2.3 (KIP-415)
Kafka Connect uses the Kafka consumer API and tracks offsets of records delivered to a target system in Kafka itself. If the task or worker fails you can be sure that it will restart from the correct point. Many connectors support exactly-once delivery too (e.g. HDFS, Elasticsearch, etc)
If you want to learn more about Kafka Connect see the docs here and my talk here. See a list of connectors here, and tutorial videos here.
Disclaimer: I work for Confluent and am a big fan of Kafka Connect :-)
I'm looking for the best way to read messages (a lot of messages, around 100B each day) from Kafka. After reading the messages, I need to manipulate the data and write it into HDFS.
If I need to do this with the best performance, what is the best way for me to read messages from Kafka and write files into HDFS?
Which programming language is best for that?
Do I need to consider using solutions like Spark for that?
You should use Spark Streaming for this (see here); it provides a simple correspondence between Kafka partitions and Spark partitions.
Or you can use Kafka Streams (see more). Kafka Streams is a client library for building applications and microservices where the input and output data are stored in Kafka clusters.
You can use Spark, Flink, NiFi, Streamsets... but Confluent provides Kafka Connect HDFS exactly for this purpose.
The Kafka Connect API is somewhat limited in transformations, so what most people do is write a Kafka Streams job to filter/enhance the data into a secondary topic, which is then written to HDFS
Note: These options will write many files to HDFS (generally, one per Kafka topic partition)
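A rough sketch of that intermediate Kafka Streams step mentioned above (topic names and the trivial "enhancement" are placeholders; it assumes the usual org.apache.kafka.streams imports):

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "hdfs-preprocessor");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

    StreamsBuilder builder = new StreamsBuilder();
    builder.stream("raw-events", Consumed.with(Serdes.String(), Serdes.String()))
           .filter((key, value) -> value != null && !value.isEmpty())  // drop bad records
           .mapValues(String::trim)                                    // stand-in for real filtering/enrichment
           .to("clean-events", Produced.with(Serdes.String(), Serdes.String()));

    new KafkaStreams(builder.build(), props).start();

The HDFS sink connector would then read from the clean-events topic instead of the raw one.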
Which programming language is best for that?
Each of the above uses Java. But you don't need to write any code yourself if you use NiFi, Streamsets, or Kafka Connect
My requirement is:
I have log files that I need to process, and I would also like to enrich the log information with some data which I have in a Postgres DB.
Step 1: I plan to feed data from the above two sources (log file and database) into Kafka topics, using Logstash.
Step 2: I plan to use Kafka Streams to join data from the different Kafka topics and push it to Elasticsearch via API calls.
My doubt is about Step 2:
Is Kafka Streams the way to go? Or can I use Apache Spark, which I believe can be used for the same?
Any help on this is appreciated.
Step 1: I plan to feed data from the above two sources (log file and database) into Kafka topics, using Logstash.
If you're already using Apache Kafka, then note that you can use Kafka Connect for integrating systems, including databases, into Kafka. For information on integrating databases, see this article.
Step 2: I plan to use Kafka Streams to join data from the different Kafka topics and push it to Elasticsearch via API calls.
My doubt is about Step 2: Is Kafka Streams the way to go? Or can I use Apache Spark, which I believe can be used for the same? Any help on this is appreciated.
Yes, Kafka Streams is a good fit for this. It can enrich events as they flow through a topic, using data from other topics. These topics can be sourced from any system, including log files, databases, etc. Here is example code of such join, and the documentation for it.
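As a hedged illustration, the core of such an enrichment join in Kafka Streams might look like this (topic names are assumptions, and it assumes both topics are keyed by the same ID, e.g. a user or host ID, with the usual org.apache.kafka.streams imports):

    StreamsBuilder builder = new StreamsBuilder();

    KStream<String, String> logs = builder.stream("log-events", Consumed.with(Serdes.String(), Serdes.String()));
    KTable<String, String> refData = builder.table("postgres-reference-data", Consumed.with(Serdes.String(), Serdes.String()));

    // for each log event, look up the matching reference record by key and combine them
    logs.join(refData, (logLine, reference) -> logLine + " | " + reference)
        .to("enriched-logs", Produced.with(Serdes.String(), Serdes.String()));

The enriched-logs topic could then be streamed into Elasticsearch, e.g. with a Kafka Connect sink, instead of making the API calls yourself.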
BTW you might want to also check out KSQL. KSQL is built on Kafka Streams so you get the same scalability and elasticity functionality, but with a SQL abstraction that you can run directly (no coding needed). For an example of using KSQL to enrich streams of data see this talk or this article
(Disclosure: I work for Confluent, who lead the open-source KSQL project)
I'm doing a POC for running a machine learning algorithm on a stream of data.
My initial idea was to take the data and use
Spark Streaming --> Aggregate data from several tables --> run MLlib on the stream of data --> Produce output.
But I came across KStreams. Now I'm confused!
Questions:
1. What is the difference between Spark Streaming and Kafka Streaming?
2. How can I marry KStreams + Spark Streaming + Machine Learning?
3. My idea is to train on the test data continuously rather than have batch training.
First of all, the term "Confluent's Kafka Streaming" is technically not correct.
it's called Kafka's Streams API (aka Kafka Streams)
it's part of Apache Kafka and thus "owned" by the Apache Software Foundation (and not by Confluent)
there is Confluent Open Source and Confluent Enterprise -- two offerings from Confluent that both leverage Apache Kafka (and thus Kafka Streams)
However, Confluent contributes a lot of code to Apache Kafka, including Kafka Streams.
About the differences (I only highlight some of the main differences and refer to the Internet and the documentation for further details: http://docs.confluent.io/current/streams/index.html and http://spark.apache.org/streaming/):
Spark Streaming:
micro-batching (no real record-by-record stream processing)
no sub-second latency
limited window operations
no event-time processing
processing framework (difficult to operate and to deploy)
part of Apache Spark -- a data processing framework
exactly-once processing
Kafka Streams:
record-by-record stream processing
ms latency
rich window operations
stream/table duality
event time, ingestion time, and processing time semantics
Java library (easy to run and deploy -- it's just a Java application as any other)
part of Apache Kafka -- a Stream Processing Platform (ie, it offers storage and processing at once)
at-least-once processing (exactly-once processing is WIP; cf KIP-98 and KIP-129)
elastic, ie, dynamically scalable
Thus, there is no reason to "marry" both -- it's a question of which one you want to use.
My personal take is that Spark is not a good solution for stream processing. Whether you want to use a library like Kafka Streams or a framework like Apache Flink, Apache Storm, or Apache Apex (which are all good options for stream processing) depends on your use case (and maybe personal taste) and cannot be answered on SO.
A main differentiator of Kafka Streams is that it is a library and does not require a processing cluster. And because it is part of Apache Kafka, if you already have Apache Kafka in place, this might simplify your overall deployment, as you do not need to run an extra processing cluster.
I have recently presented at a conference about this topic.
Apache Kafka Streams or Spark Streaming are typically used to apply a machine learning model in real time to new events via stream processing (processing data while it is in motion). Matthias's answer already discusses their differences.
On the other hand, you would first use things like Apache Spark MLlib (or H2O.ai or XYZ) to build the analytic models using historical data sets.
Kafka Streams can be used for online training of models, too, though I think online training has various caveats.
All of this is discussed in more details in my slide deck "Apache Kafka Streams and Machine Learning / Deep Learning for Real Time Stream Processing".
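As a sketch of the "apply a model in real time" part with Kafka Streams: the Predictor type and its load/predict methods below are hypothetical stand-ins for whatever model API you export from MLlib, H2O, etc. (topic names are also assumptions):

    // hypothetical pre-trained model, loaded once at startup
    Predictor model = Predictor.load("/models/my-model");

    StreamsBuilder builder = new StreamsBuilder();
    builder.stream("events", Consumed.with(Serdes.String(), Serdes.String()))
           .mapValues(event -> String.valueOf(model.predict(event)))   // score each record as it arrives
           .to("scored-events", Produced.with(Serdes.String(), Serdes.String()));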
Apache Kafka Streams is a library, not a framework; it provides an embeddable stream processing engine and is easy to use in Java applications for stream processing.
I found some use cases about when to use Kafka Streams, and also a good comparison with Apache Flink from a Kafka author.
Spark Streaming and KStreams in one picture, from a stream processing point of view.
I have highlighted the significant advantages of Spark Streaming and KStreams here to keep the answer short.
Spark Streaming Advantages over KStreams:
Easy to integrate Spark ML models and graph computing in the same application without writing data outside of the application, which means you process it much faster than by writing it back to Kafka and processing it again.
Join non-streaming sources like file systems and other non-Kafka sources with stream sources in the same application.
Messages with a schema can be easily processed with familiar SQL (Structured Streaming).
Possible to do graph analysis over streaming data with the built-in GraphX library.
Spark apps can be deployed over an existing YARN or Mesos cluster, if you have one.
KStreams Advantages:
A compact library for ETL processing and ML model serving/training on messages, with rich features. So far, both source and target must be Kafka topics only.
Easy to achieve exactly-once semantics (see the config snippet after this list).
No separate processing cluster required.
Easy to deploy on Docker since it's a plain Java application.
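For the exactly-once point above, a sketch of the relevant Kafka Streams configuration (application ID and broker address are placeholders):

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    // a single setting switches the application to exactly-once processing
    // (newer Kafka versions also offer the more efficient "exactly_once_v2")
    props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);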
What's the best way to write data from Kafka into Cassandra? I would expect it to be a solved problem, but there doesn't seem to be a standard adapter.
A lot of people seem to be using Storm to read from Kafka and then write to Cassandra, but Storm seems like overkill for simple ETL operations.
We are heavily using Kafka and Cassandra through Storm
We rely on Storm because:
there are usually a lot of distributed processing (inter-node) steps before the result of the original message hits Cassandra (Storm bolt topologies)
We don't need to maintain the Kafka consumer state (offsets) ourselves - the Storm-Kafka connector does it for us once all products of the original message are acked within Storm
Message processing is distributed across nodes with Storm natively
Otherwise, if it is a very simple case, you can probably read messages from Kafka and write the results to Cassandra directly, without the help of Storm.
A recent release of Kafka came with the connector concept (Kafka Connect) to support sources and sinks as first-class concepts in the design. With this, you do not need any streaming framework for moving data into/out of Kafka. Here is a Cassandra connector for Kafka that you can use: https://github.com/tuplejump/kafka-connect-cassandra