Publish Apache Spark result to another Application/Kafka - apache-spark

I am currently designing a fast data aggregation module which receives events and publishes them to a Kafka cluster. From there we have an integration of Kafka and Spark Streaming: Spark Streaming reads the stream from Kafka and executes some computation. When the computation is done, we need to send the result to another application. This application could be a web service or a Kafka cluster.
I am wondering how we can do this. From what I've read, Spark Streaming pushes data downstream to sinks like databases and file systems.
How would you go about designing such an application? Should I replace Spark Streaming with Storm to be able to publish the results to another application?

Please refer to dstream.foreachRDD, which is a powerful primitive that allows data to be sent out to external systems.
Design Patterns for using foreachRDD
Below is my Kafka integration code for your reference (not optimized, just a POC; the KafkaProducer object could be reused across foreachRDD calls):
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}

DStream.foreachRDD { rdd =>
  rdd.foreachPartition { partitionOfRecords =>
    // Build the producer inside foreachPartition so it is created on the executor,
    // not on the driver (KafkaProducer is not serializable).
    // `props` is the caller's configuration map holding the broker list.
    val kafkaProps = new Properties()
    kafkaProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, props("bootstrap.servers"))
    kafkaProps.put(ProducerConfig.CLIENT_ID_CONFIG, "KafkaIntegration Producer")
    kafkaProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
    kafkaProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
    val producer = new KafkaProducer[String, String](kafkaProps)

    partitionOfRecords.foreach { record =>
      val message = new ProducerRecord[String, String]("hdfs_log_test", record.asInstanceOf[String])
      producer.send(message)
    }
    producer.close()
  }
}
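As noted, the producer above is created once per partition per batch, which is expensive. A common optimization, in line with the foreachRDD design-patterns guide, is a lazily created producer that lives for the life of the executor JVM. A minimal sketch, where the broker address and topic are assumptions:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// One producer per executor JVM, created lazily on first use and reused across batches
object ProducerHolder {
  lazy val producer: KafkaProducer[String, String] = {
    val p = new Properties()
    p.put("bootstrap.servers", "broker1:9092") // assumption: your broker list
    p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    val kp = new KafkaProducer[String, String](p)
    sys.addShutdownHook(kp.close()) // flush and close when the executor shuts down
    kp
  }
}

// Inside foreachPartition, reuse the shared producer instead of creating a new one:
// partitionOfRecords.foreach(r =>
//   ProducerHolder.producer.send(new ProducerRecord[String, String]("hdfs_log_test", r.asInstanceOf[String])))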

I am wondering how we can do this? From what I've read, Spark Streaming pushes data downstream to sinks like databases and file systems.
Spark is not limited to HDFS or databases; you're free to initialize a connection to any external resource that is available. It can be back to Kafka, RabbitMQ, or a web service.
If you're doing simple transformations like map, filter, reduceByKey, etc., then DStream.foreachRDD will do fine. If you're doing stateful computations like DStream.mapWithState, then once you're done processing the state you can simply send the data to any external service.
For example, we're using Kafka as the input stream of data and RabbitMQ as the output after doing some stateful computations.
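A minimal sketch of that pattern, assuming a DStream of (String, Int) pairs and a hypothetical sendToExternal function standing in for the RabbitMQ/web-service call:

import org.apache.spark.streaming.{State, StateSpec}
import org.apache.spark.streaming.dstream.DStream

// Keep a running sum per key with mapWithState, then push each updated value
// to an external system from foreachRDD/foreachPartition.
// Note: mapWithState requires ssc.checkpoint(...) to be configured.
def publishStatefulResults(pairs: DStream[(String, Int)],
                           sendToExternal: ((String, Int)) => Unit): Unit = {
  val mappingFunc = (key: String, value: Option[Int], state: State[Int]) => {
    val sum = value.getOrElse(0) + state.getOption.getOrElse(0)
    state.update(sum)
    (key, sum)
  }

  pairs.mapWithState(StateSpec.function(mappingFunc))
    .foreachRDD { rdd =>
      rdd.foreachPartition { records =>
        // open one connection per partition here, send, then close it
        records.foreach(sendToExternal)
      }
    }
}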

Related

Spark NiFi site to site connection

I am new to NiFi. I am trying to send data from NiFi to Spark, i.e. to establish a stream from a NiFi output port to Spark, according to this tutorial.
NiFi is running on Kubernetes, and I am using the Spark operator on the same cluster to submit my applications.
It seems that Spark is able to reach the NiFi web API and it starts a streaming receiver. However, no data is coming into the Spark app through the output port and I get empty RDDs. I have not seen any warnings or errors in the Spark logs.
Any idea or information that could help me solve this issue is appreciated.
My code:
import org.apache.nifi.remote.client.SiteToSiteClient
import org.apache.nifi.spark.NiFiReceiver
import org.apache.spark.storage.StorageLevel

val conf = new SiteToSiteClient.Builder()
  .keystoreFilename("..")
  .keystorePass("...")
  .keystoreType(...)
  .truststoreFilename("..")
  .truststorePass("..")
  .truststoreType(...)
  .url("https://...../nifi")
  .portName("spark")
  .buildConfig()

val lines = ssc.receiverStream(new NiFiReceiver(conf, StorageLevel.MEMORY_ONLY))
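One simple sanity check, not from the original post, is to decode and print the received packets before doing anything else, to confirm whether data is reaching the receiver at all; NiFiDataPacket comes from the same nifi-spark-receiver artifact:

import java.nio.charset.StandardCharsets
import org.apache.nifi.spark.NiFiDataPacket

// Decode each NiFi data packet to text and print a few records per micro-batch
val text = lines.map((packet: NiFiDataPacket) => new String(packet.getContent, StandardCharsets.UTF_8))
text.print()

ssc.start()
ssc.awaitTermination()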

How to use spark structured streaming to simultaneously write to parquet and call REST API

How do I use Spark Structured Streaming to simultaneously write to Parquet and call a REST API? Below is what I need to integrate with:
Through Spark SQL Structured Streaming, I am able to consume from Kafka topics.
The messages are in Avro format, and I am able to write them to the Parquet file system.
I am also able to read the Parquet files back and fire any SQL query as needed.
Below is the integration step where I am stuck; can anyone please help?
I now have to integrate a REST call: I should be able to write to the Parquet file system and call the REST API at the same time.
To call the REST API I also need to convert the Dataset to an Avro object first and then prepare the request object for the REST API.
The streaming implementation above is done in Java. A Java-based API or approach would be preferable and a great help.
FYI, I am using the latest version of Spark Streaming:
spark-streaming-kafka-0-10_2.12 -> 2.4.0
spark-streaming_2.12 -> 3.0.1
{
    // dataSet -> Dataset holding the raw Kafka messages
    Dataset<Row> output = dataSet
        .select(package$.MODULE$.from_avro(col("value"), avroSchema).as("EventMessage"))
        .select("EventMessage.*");

    output
        .writeStream()
        .outputMode(OutputMode.Append().toString())
        .foreachBatch((VoidFunction2<Dataset<Row>, Long>) (df, batchId) -> {
            // 1. Persist the micro-batch to the Parquet file system
            df.write().mode("append").partitionBy("action").parquet(STREAM_PARQUET_OUTPUT_PATH);
            // 2. REST API call block:
            //    df -> Avro object -> API request object -> REST call
        })
        .start()
        .awaitTermination();
}
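For illustration only, here is a rough sketch of that foreachBatch body, written in Scala for brevity; the endpoint URL, output path, and the toJSON serialization are placeholders, and the Avro conversion mentioned in the question would slot in where the payload is built:

import java.net.{HttpURLConnection, URL}
import java.nio.charset.StandardCharsets
import org.apache.spark.sql.DataFrame

// Called once per micro-batch: write the batch to Parquet, then POST each record
def handleBatch(df: DataFrame, batchId: Long): Unit = {
  df.persist() // the batch is used twice below; avoid recomputing it

  df.write.mode("append").partitionBy("action").parquet("/data/stream_output") // assumed path

  df.toJSON.rdd.foreachPartition { rows =>
    rows.foreach { payload =>
      val conn = new URL("https://example.com/api/events").openConnection().asInstanceOf[HttpURLConnection]
      conn.setRequestMethod("POST")
      conn.setDoOutput(true)
      conn.setRequestProperty("Content-Type", "application/json")
      conn.getOutputStream.write(payload.getBytes(StandardCharsets.UTF_8))
      conn.getResponseCode // force the request; add retries/connection pooling in real code
      conn.disconnect()
    }
  }

  df.unpersist()
}

// Wiring it up would look roughly like:
// output.writeStream.outputMode("append").foreachBatch(handleBatch _).start().awaitTermination()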

What does "avoid multiple Kudu clients per cluster" mean?

I am looking at Kudu's documentation.
Below is a partial description of kudu-spark:
https://kudu.apache.org/docs/developing.html#_avoid_multiple_kudu_clients_per_cluster
Avoid multiple Kudu clients per cluster.
One common Kudu-Spark coding error is instantiating extra KuduClient objects. In kudu-spark, a KuduClient is owned by the KuduContext. Spark application code should not create another KuduClient connecting to the same cluster. Instead, application code should use the KuduContext to access a KuduClient using KuduContext#syncClient.
To diagnose multiple KuduClient instances in a Spark job, look for signs in the logs of the master being overloaded by many GetTableLocations or GetTabletLocations requests coming from different clients, usually around the same time. This symptom is especially likely in Spark Streaming code, where creating a KuduClient per task will result in periodic waves of master requests from new clients.
Does this mean that I can only run one kudu-spark task at a time?
If I have a Spark Streaming program that is always writing data to Kudu,
how can I connect to Kudu from other Spark programs?
In a non-Spark program you use a KuduClient to access Kudu. In a Spark application you use a KuduContext, which already owns such a client for that Kudu cluster.
A simple Java program requires a KuduClient, using the Java API and a Maven build:
KuduClient kuduClient = new KuduClient.KuduClientBuilder("kudu-master-hostname").build();
See http://harshj.com/writing-a-simple-kudu-java-api-program/
A Spark/Scala program, of which many can run at the same time against the same cluster, uses the Spark Kudu integration. The snippet below is borrowed from the official guide, as it has been quite some time since I looked at this:
import org.apache.kudu.client._
import org.apache.kudu.spark.kudu._
import collection.JavaConverters._

// Read a table from Kudu
val df = spark.read
  .options(Map("kudu.master" -> "kudu.master:7051", "kudu.table" -> "kudu_table"))
  .format("kudu").load

// Query using the Spark API...
df.select("id").filter("id >= 5").show()

// ...or register a temporary table and use SQL
df.registerTempTable("kudu_table")
val filteredDF = spark.sql("select id from kudu_table where id >= 5").show()

// Use KuduContext to create, delete, or write to Kudu tables
val kuduContext = new KuduContext("kudu.master:7051", spark.sparkContext)

// Create a new Kudu table from a dataframe schema
// NB: No rows from the dataframe are inserted into the table
kuduContext.createTable("test_table", df.schema, Seq("key"),
  new CreateTableOptions()
    .setNumReplicas(1)
    .addHashPartitions(List("key").asJava, 3))

// Insert data
kuduContext.insertRows(df, "test_table")
See https://kudu.apache.org/docs/developing.html
A clearer statement of "avoid multiple Kudu clients per cluster" would be "avoid multiple Kudu clients per Spark application".
Instead, application code should use the KuduContext to access a KuduClient using KuduContext#syncClient.
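In code, that boils down to reusing the context's client instead of building a new one; a minimal sketch, with the master address and table name taken from the snippet above:

import org.apache.kudu.client.KuduClient
import org.apache.kudu.spark.kudu.KuduContext

val kuduContext = new KuduContext("kudu.master:7051", spark.sparkContext)
val client: KuduClient = kuduContext.syncClient // reuse the context's client
val table = client.openTable("test_table")      // plain Kudu API calls when needed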

Apache Kafka + Spark Integration (REST API is needed?)

I have some fundamental questions; I hope someone can clear them up.
I want to use Apache Kafka and Apache Spark for my application. I have gone through numerous tutorials and got the basic idea of what they are and how they work.
Use case:
Data will be generated by mobile devices (multiple devices, let's say 1000) at an interval of 40 seconds, and I need to process that data and add values to the database, which in turn will be reflected back in a dashboard.
What I wanted to do is use Apache Streams and make a POST request from Android itself; that data will then be processed by the Spark application, and that's it.
Issues:
Apache Spark
I am following this tutorial to get it up and running (I am using Java, not Scala).
Link: https://www.santoshsrinivas.com/installing-apache-spark-on-ubuntu-16-04/
After everything is done, I execute spark-shell and it starts. I have also installed ZooKeeper and Kafka on my server, and I have started Kafka in the background, so that's not an issue.
When I open http://161.xxx.xxx.xxx:4040/jobs/ I get this page.
In all the tutorials I have gone through, there is a page like this: https://i.stack.imgur.com/gF1fN.png, but I don't get it. Is it that Spark is not properly installed?
Now when I want to deploy a standalone jar to Spark (using this link: http://data-scientist-in-training.blogspot.in/2015/03/apache-spark-cluster-deployment-part-1.html ), I am able to run it.
That is, with the command spark-submit --class SimpleApp.SimpleApp --master spark://http://161.xxx.xxx.xxx:7077 --name "try" /opt/spark/bin/try-0.0.1-SNAPSHOT.jar, I get the output.
Do I need to submit the application every time I want to use it?
This is my program:
package SimpleApp;

/* SimpleApp.java */
import org.apache.spark.api.java.*;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.Function;

public class SimpleApp {
    public static void main(String[] args) {
        String logFile = "/opt/spark/README.md"; // Should be some file on your system
        SparkConf conf = new SparkConf().setAppName("Simple Application").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        //System.setProperty("hadoop.home.dir", "C:/winutil");
        sc.setLogLevel("ERROR"); // Don't want the INFO stuff

        JavaRDD<String> logData = sc.textFile(logFile).cache();

        long numAs = logData.filter(new Function<String, Boolean>() {
            public Boolean call(String s) { return s.contains("a"); }
        }).count();

        long numBs = logData.filter(new Function<String, Boolean>() {
            public Boolean call(String s) { return s.contains("b"); }
        }).count();

        System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);
        System.out.println("word count : " + logData.first());

        sc.stop();
    }
}
Now how do I integrate Kafka into it?
How do I configure the app in such a way that it gets executed every time Kafka receives a message?
Moreover, do I need to build a REST API through which I send the data to Kafka, i.e. would the REST API be used as the producer? Something like the Spark Java framework? http://sparkjava.com/
If yes, won't the bottleneck then be at the REST API level, i.e. how many requests it can handle? Everywhere I read that Kafka has very high throughput.
Is the final structure going to be SPARK JAVA -> KAFKA -> APACHE SPARK?
Lastly, how do I set up the development environment on my local machine? I have Kafka and Apache Spark installed, and I am using Eclipse.
Thanks
Well,
you are facing some problems understanding how Spark works with Kafka.
First, let's clear up a few things:
Kafka is a stream-processing platform built for low latency and high throughput. It lets you store and read lots of data really fast.
Spark has two types of processing, Spark batch and Spark Streaming. What you are studying is batch; for your problem I suggest you look at Spark Streaming.
What is streaming?
Streaming is a way to transport and transform your data in real time or near real time. You do not create a process that you call every 10 minutes or every 10 seconds; you start the job once, and it keeps consuming the source and posting to the sink.
Kafka is a passive platform, so Kafka can be either the source or the sink of a stream process.
In your case, what I suggest is:
Create a streaming producer for your Kafka: you will read the log of your mobile application on your web server, so you need to plug something into your web server to start shipping the data. What I suggest is Fluentd. It is a really strong streaming application; it is written in Ruby but really easy to use. If you want something more robust and more focused on big data, I suggest Apache NiFi. It is harder to work with and not easy to learn, but you can create data-flow pipelines to move your information into your cluster. And something REALLY SIMPLE that will solve your problem is Apache Flume.
Start your Kafka; you can use Docker to run it. Kafka will hold your data for a period and allow you to fetch it really fast and at high volume when you need it. Please read the docs to understand how it works.
Spark Streaming: it does not make sense to use Kafka if you don't have a stream process. Your REST-based solution for producing data into Kafka is slow, and if it is batch it doesn't make sense. So if you are writing as a stream, you should analyse as a stream too. I suggest you read about Spark Streaming here, and about integrating Spark with Kafka here.
So, as you asked:
Do I need a REST API?
The answer is No.
The architecture will be like this:
Web Server -> Fluentd -> Apache Kafka -> Spark Streaming -> Output
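For the "Apache Kafka -> Spark Streaming" leg of that pipeline, a minimal Scala sketch using the spark-streaming-kafka-0-10 integration might look like this (topic name, broker address, and batch interval are assumptions; replace the println with your database write):

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object KafkaToSpark {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaToSpark").setMaster("local[*]")
    val ssc = new StreamingContext(conf, Seconds(40)) // assumed batch interval

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092", // assumed broker list
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "device-events-consumer",
      "auto.offset.reset" -> "latest"
    )

    // Subscribe to the topic the web server / Fluentd pipeline publishes to
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Seq("device-events"), kafkaParams))

    // Process each micro-batch; replace println with the write to your database/dashboard
    stream.map(_.value).foreachRDD { rdd =>
      rdd.foreach(record => println(record))
    }

    ssc.start()
    ssc.awaitTermination()
  }
}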
I hope that helps.

How to process DynamoDB Stream in a Spark streaming application

I would like to consume a DynamoDB Stream from a Spark Streaming application.
Spark Streaming uses the KCL to read from Kinesis. There is a library that makes the KCL able to read from a DynamoDB Stream: dynamodb-streams-kinesis-adapter.
But is it possible to plug this library into Spark? Has anyone done this?
I'm using Spark 2.1.0.
My backup plan is to have another app read from the DynamoDB stream into a Kinesis stream.
Thanks
The way to do this is to modify KinesisInputDStream to use the worker provided by dynamodb-streams-kinesis-adapter.
The official guidelines suggest something like this:
final Worker worker = StreamsWorkerFactory
    .createDynamoDbStreamsWorker(
        recordProcessorFactory,
        workerConfig,
        adapterClient,
        amazonDynamoDB,
        amazonCloudWatchClient);
From Spark's perspective, it is implemented under the kinesis-asl module in KinesisInputDStream.scala.
I have tried this for Spark 2.4.0. Here is my repo; it needs a little refining but gets the work done:
https://github.com/ravi72munde/spark-dynamo-stream-asl
After modifying the KinesisInputDStream, we can use it as shown below.
val stream = KinesisInputDStream.builder
  .streamingContext(ssc)
  .streamName("sample-tablename-2")
  .regionName("us-east-1")
  .initialPosition(new Latest())
  .checkpointAppName("sample-app")
  .checkpointInterval(Milliseconds(100))
  .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
  .build()
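Assuming the modified builder still yields each Kinesis record as an Array[Byte], as the stock kinesis-asl one does, a typical consumption sketch (the decoding and the println are placeholders) looks like:

import java.nio.charset.StandardCharsets

// Decode each record and process it per micro-batch
stream.map(bytes => new String(bytes, StandardCharsets.UTF_8))
  .foreachRDD { rdd =>
    rdd.foreach(record => println(record)) // replace with your processing
  }

ssc.start()
ssc.awaitTermination()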

Resources