Spark Streaming vs Structured Streaming

For the last few months I've been using Structured Streaming quite a lot to implement streaming jobs (after using Kafka a lot). After reading the book Stream Processing with Apache Spark, I had this question: are there any use cases where I would use Spark Streaming instead of Structured Streaming? Should I invest some time getting into it, or, since I'm already using Spark Structured Streaming, should I stick with it because there is no benefit in the previous API?
Would appreciate any opinion/insight.

Hi, sharing my personal experience.
Structured Streaming is the future for Spark-based streaming implementations. It provides a higher level of abstraction and other great features. However, there are a few restrictions.
I have had to switch to Spark Streaming on a few occasions due to the flexibility it offers. One recent example: we had to perform joins with static reference data, but outer joins are not supported in Structured Streaming. This can be accomplished with Spark Streaming.
With the newer Spark version 2.4, Structured Streaming is much improved, with support for the foreachBatch sink, which gives flexibility similar to that offered by Spark Streaming (see the sketch below).
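For illustration, here is a minimal foreachBatch sketch (Spark 2.4+). The reference-data path, join keys, and output paths are assumptions for the example, not from the original posts:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder.appName("foreachBatchDemo").getOrCreate()

// Static reference data (hypothetical path and schema)
val reference = spark.read.parquet("/data/reference")

// Toy streaming source; a Kafka source would work the same way
val stream = spark.readStream.format("rate").load()

stream.writeStream
  .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
    // Inside foreachBatch each micro-batch is a plain DataFrame, so
    // batch-only operations such as arbitrary outer joins are available.
    batchDF
      .join(reference, batchDF("value") === reference("id"), "left_outer")
      .write.mode("append").parquet(s"/data/out/batch-$batchId") // assumed sink
  }
  .option("checkpointLocation", "/tmp/foreach-batch-cp") // assumed checkpoint dir
  .start()
```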
My personal thought is that knowledge of Spark Streaming is helpful, and you might have to use it depending on your use case.

Related

Is the natural replacement for Spark (Direct) Streaming either Spark Structured Streaming or Kafka Streams?

Over the past few years we have developed quite a few Spark Streaming (Direct API) applications that read from or write to Kafka, IBM MQ, Hive, HBase, HDFS, and others on our Cloudera platform. Now that the Direct API of Spark Streaming (we currently have version 2.3.2) is deprecated, and we recently added the Confluent Platform (which comes with Kafka 2.2.0) to our project, we plan to migrate these applications.
What is the natural replacement of our Spark Streaming applications? Should we migrate to Spark Structured Streaming or rather to Kafka Streams?
I personally do not have experience with either framework, but in my view Spark Structured Streaming seems to be the natural choice. Our code base is mainly written in Scala, which can also be used for the Structured API. Kafka Streams has a few limitations with Scala. Although we might lose some flexibility by leaving the low-level RDD API and moving to the higher level of DataFrames, we could build on our knowledge of Spark.
On the other side, there is Kafka Streams, which is probably the best choice when it comes to processing data between Kafka topics, which is our main use case. And looking at all the Kafka connectors that come with Confluent, the other use cases can be served as well.
You currently have a Spark scheduler in place, so Structured Streaming is the natural migration path; note that it is a different API from the old DStream-based Streaming API, so code has to be ported, but it runs on the same cluster infrastructure.
If you're using Mesos or Kubernetes, then packaging Kafka Streams apps in Docker and running them is easier to scale, monitor, and configure than Spark, IMO, since each app acts like any other Docker container in those systems, so you can build one pattern around everything.
Kafka Streams... is probably the best choice when it comes to processing data between Kafka topics
True.
Kafka Streams has a few limitations with Scala.
I think you might want to keep reading that section:
"The Kafka Streams DSL for Scala library is a wrapper over the existing Java APIs for Kafka Streams DSL that addresses the concerns raised."
Of course, you could always use Kotlin to interop better with the Java API. An example of the Scala DSL follows.
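For reference, a minimal word-count sketch with the Kafka Streams DSL for Scala (kafka-streams-scala); the application id, broker address, and topic names are placeholders:

```scala
import java.util.Properties
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.Serdes._ // implicit serdes: no Java-style boilerplate

object WordCountApp extends App {
  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "scala-wordcount")   // placeholder app id
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder broker

  val builder = new StreamsBuilder()
  builder
    .stream[String, String]("input-topic")          // hypothetical topic names
    .flatMapValues(_.toLowerCase.split("\\s+"))
    .groupBy((_, word) => word)
    .count()
    .toStream
    .map((word, count) => (word, count.toString))
    .to("output-topic")

  new KafkaStreams(builder.build(), props).start()
}
```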

Are Spark Streaming, Structured Streaming and Kafka Streaming the same thing?

I have come across three popular streaming technologies: Spark Streaming, Structured Streaming, and Kafka Streaming.
I have gone through various sites but haven't found an answer: are these three the same thing or different?
If they're not the same, what is the basic difference?
I am not looking for an in-depth answer, just an answer to the above question (yes or no) and a little intro to each of them so that I can explore more. :)
Thanks in advance
Subrat
I guess you are referring to Kafka Streams when you say "Kafka Streaming".
Kafka Streams is a JVM library and part of Apache Kafka. It provides an abstraction layer for processing data in Kafka topics. Applications using the Kafka Streams library can run anywhere (not just on the Kafka cluster; in fact, running them there is not recommended). They consume, process, and produce data to/from the Kafka cluster.
Spark Streaming is a part of the Apache Spark distributed data processing library that provides stream (as opposed to batch) processing. Spark initially provided batch computation only, so the Spark Streaming layer was added specifically for stream processing. Spark Streaming can be fed with Kafka data, but it can be connected to other sources as well.
Structured Streaming, within the realm of Apache Spark, is a different approach that came to overcome certain limitations of the stream-processing approach Spark Streaming was using. It was added to Spark from a certain version onwards (2.0, IIRC). A minimal comparison of the two APIs follows.
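To make the distinction concrete, here is the classic word-count sketch in both Spark APIs (the socket host and port are placeholders):

```scala
// Spark Streaming (DStream API): discretizes the stream into micro-batch RDDs
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val ssc = new StreamingContext(new SparkConf().setAppName("dstream-wc"), Seconds(5))
ssc.socketTextStream("localhost", 9999)
  .flatMap(_.split(" "))
  .map((_, 1))
  .reduceByKey(_ + _)
  .print()
ssc.start()
ssc.awaitTermination()
```

```scala
// Structured Streaming: the same logic expressed as a query on an unbounded DataFrame
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("structured-wc").getOrCreate()
import spark.implicits._

val words = spark.readStream
  .format("socket").option("host", "localhost").option("port", "9999").load()
  .as[String]
  .flatMap(_.split(" "))

words.groupBy("value").count()
  .writeStream.outputMode("complete").format("console")
  .start()
  .awaitTermination()
```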

Apache Spark Structured Streaming vs Apache Flink: what is the difference?

We have discussed the questions below:
What is the difference between Apache Spark and Apache Flink? [closed]
What does “streaming” mean in Apache Spark and Apache Flink?
What is the difference between mini-batch vs real time streaming in practice (not theory)?
But Spark Structured Streaming was declared production-ready in Spark 2.2; it brings a lot of changes for streaming, and it is outstanding.
Can we say Spark Structured Streaming is stream processing, or is it still batch processing?
Now, what is the big difference between Apache Flink and Apache Spark Structured Streaming?
Currently:
Spark Structured Streaming still uses micro-batches in the background. However, it supports event-time processing and quite low latency (though not as low as Flink's), and it supports SQL and type-safe queries on streams in one API; there is no distinction, as every Dataset can be queried both with SQL and with type-safe operators. It has end-to-end exactly-once semantics (at least they say it does ;) ). The throughput is better than Flink's (there have been benchmarks with different results, but look at the Databricks post about them).
In the near future:
Spark Continuous Processing Mode is in progress, and it will give Spark ~1 ms latency, comparable to Flink's. However, as I said, it's still in progress. The API is ready for non-batch jobs, so it's easier to use than the previous Spark Streaming.
The main difference:
Spark relies on micro-batching for now, while Flink has pre-scheduled operators. That means Flink's latency is lower, but the Spark community is working on Continuous Processing Mode, which will work similarly (as far as I understand) to receivers.
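For reference, continuous mode (experimental since Spark 2.3) is enabled with a trigger on an otherwise normal Structured Streaming query; the broker, topics, and paths below are placeholders:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

val spark = SparkSession.builder.appName("continuous-mode").getOrCreate()

spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker
  .option("subscribe", "input-topic")                  // hypothetical topic
  .load()
  .selectExpr("key", "value")                          // only map-like operations are supported
  .writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("topic", "output-topic")
  .option("checkpointLocation", "/tmp/continuous-cp")  // assumed path
  .trigger(Trigger.Continuous("1 second"))             // checkpoint interval, not a batch interval
  .start()
```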

Akka stream vs Hive Stream

I am working on a requirement where we need to read messages from Kafka and save (sink) them to Hive. I can think of multiple implementations using different technologies:
Akka Streams - where the source is a Kafka source and the sink writes to Hive
Hive Streaming - using the Hive streaming API
Spark streaming
nifi - https://nifi.apache.org/
What would be the best way to stream a large volume of Kafka messages into Hive?
Thanks
Arun
"Best" is of course a very vague concept, but I personally like NiFi as a data-movement solution.
If you are looking for fast development and clear monitoring, the intuitive GUI should prove very valuable.
If you find that you cannot get enough performance or good enough latency, you might be able to improve things with Spark Streaming, but often that should not be needed.
Full disclosure: I have not worked with Akka Streams, and I work for Cloudera, a driving force behind NiFi, Spark, and Hive.

KStreams + Spark Streaming + Machine Learning

I'm doing a POC for running a machine-learning algorithm on a stream of data.
My initial idea was to take data and use:
Spark Streaming --> Aggregate Data from several tables --> run MLLib on Stream of Data --> Produce Output.
But I came across KStreams. Now I'm confused!!!
Questions:
1. What is the difference between Spark Streaming and Kafka Streams?
2. How can I marry KStreams + Spark Streaming + Machine Learning?
3. My idea is to train on the data continuously rather than doing batch training.
First of all, the term "Confluent's Kafka Streaming" is technically not correct.
It's called Kafka's Streams API (aka Kafka Streams).
It's part of Apache Kafka and thus "owned" by the Apache Software Foundation (not by Confluent).
There are Confluent Open Source and Confluent Enterprise, two offerings from Confluent that both leverage Apache Kafka (and thus Kafka Streams).
However, Confluent contributes a lot of code to Apache Kafka, including Kafka Streams.
About the differences (I only highlight some main differences and refer to the Internet and documentation for further details: http://docs.confluent.io/current/streams/index.html and http://spark.apache.org/streaming/):
Spark Streaming:
micro-batching (no real record-by-record stream processing)
no sub-second latency
limited window operations
no event-time processing
processing framework (difficult to operate and to deploy)
part of Apache Spark -- a data processing framework
exactly-once processing
Kafka Streams:
record-by-record stream processing
ms latency
rich window operations
stream/table duality
event time, ingestion time, and processing time semantics (see the windowed-count sketch after this list)
Java library (easy to run and deploy -- it's just a Java application as any other)
part of Apache Kafka -- a stream-processing platform (i.e., it offers storage and processing at once)
at-least-once processing (exactly-once processing is WIP; cf. KIP-98 and KIP-129)
elastic, i.e., dynamically scalable
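As a small illustration of the windowing and event-time bullets above, here is a hedged sketch of a 5-minute windowed count in the Kafka Streams Scala DSL (topics and config values are placeholders):

```scala
import java.time.Duration
import java.util.Properties
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
import org.apache.kafka.streams.kstream.TimeWindows
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.Serdes._

val props = new Properties()
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "windowed-counts")   // placeholder app id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder broker

val builder = new StreamsBuilder()
builder
  .stream[String, String]("clicks")                   // hypothetical input topic
  .groupByKey
  .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))  // 5-minute event-time windows
  .count()                                            // counts per key per window
  .toStream
  .map((windowed, count) => (s"${windowed.key}@${windowed.window.startTime}", count.toString))
  .to("click-counts")                                 // hypothetical output topic

new KafkaStreams(builder.build(), props).start()
```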
Thus, there is no reason to "marry" the two; it's a question of which one you want to use.
My personal take is that Spark is not a good solution for stream processing. Whether you want to use a library like Kafka Streams or a framework like Apache Flink, Apache Storm, or Apache Apex (which are all good options for stream processing) depends on your use case (and maybe personal taste) and cannot be answered on SO.
A main differentiator of Kafka Streams is that it is a library and does not require a processing cluster. And because it is part of Apache Kafka, if you have Apache Kafka already in place, this might simplify your overall deployment, as you do not need to run an extra processing cluster.
I have recently presented at a conference about this topic.
Apache Kafka Streams and Spark Streaming are typically used to apply a machine-learning model in real time to new events via stream processing (processing data while it is in motion). Matthias's answer already discusses their differences.
On the other side, you first use something like Apache Spark MLlib (or H2O.ai, or XYZ) to build the analytic models from historical data sets.
Kafka Streams can be used for online training of models, too, though I think online training has various caveats.
All of this is discussed in more detail in my slide deck "Apache Kafka Streams and Machine Learning / Deep Learning for Real Time Stream Processing". A sketch of model application on a stream follows.
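As an illustration of applying a model on a stream, here is a hedged Structured Streaming sketch that loads a pipeline trained offline with Spark MLlib; the model path, broker, and topic are hypothetical, and it assumes the pipeline contains only transformers (no re-fitting on the stream):

```scala
import org.apache.spark.ml.PipelineModel
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("model-serving").getOrCreate()

// Pipeline trained offline on historical data (hypothetical path)
val model = PipelineModel.load("/models/my-pipeline")

val events = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker
  .option("subscribe", "events")                       // hypothetical topic
  .load()
  .selectExpr("CAST(value AS STRING) AS text")         // assumes text features

// transform() works on a streaming DataFrame when no fitting is required
model.transform(events)
  .writeStream
  .format("console")
  .option("checkpointLocation", "/tmp/ml-serving-cp")  // assumed path
  .start()
```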
Apache Kafka Streams is a library that provides an embeddable stream-processing engine; it is easy to use in Java applications for stream processing, and it is not a framework.
I found some use cases describing when to use Kafka Streams, as well as a good comparison with Apache Flink from a Kafka author.
Here are Spark Streaming and KStreams in one picture, from a stream-processing point of view.
Below I highlight the significant advantages of Spark Streaming and KStreams, to keep the answer short.
Spark Streaming Advantages over KStreams:
It is easy to integrate Spark ML models and graph computing in the same application without writing data outside the application, which means you can process data much more quickly than by writing it back to Kafka and processing it again.
You can join non-streaming sources such as file systems and other non-Kafka sources with stream sources in the same application.
Messages with a schema can easily be processed with everyone's favorite SQL (Structured Streaming).
It is possible to do graph analysis over streaming data with the built-in GraphX library.
Spark apps can be deployed on an existing YARN or Mesos cluster (if you have one).
KStreams Advantages:
A compact library for ETL processing and ML model serving/training on messages, with rich features. So far, both source and target must be Kafka topics.
Easy to achieve exactly-once semantics.
No separate processing cluster required.
Easy to deploy on Docker, since it's a plain Java application.
