I'm testing an implementation at work that will see 300 million messages/day coming through, with plans to scale up enormously. There's one step that seems janky at the moment and I'd appreciate some advice.
I did take a stab at this with https://scalapb.github.io/sparksql.html but couldn't seem to get it to work, even following their Maven instructions.
Currently I have a protobuf and a case class for the same model:
message MyThing { // proto
  required string id = 1;
}

case class MyThing(id: String)
Then I have a Spark readStream:
val df =
  spark.readStream
    .format("kafka")
    // etc
    .load()
The Kafka payload is in the "value" column, which is an Array[Byte] of the protobuf that was transmitted. I want to turn that binary column into Rows with a specific StructType.
What I have right now uses a weird syntax involving the case class:
val encoder = Encoders.product[MyThing]

df
  .select("value")
  .map { row =>
    // from memory so might be slightly off
    val proto = MyThingProto.parseFrom(row.getAs[Array[Byte]](0))
    MyThing.fromProto(proto)
  }(encoder)
  .toDF()
  // business logic
  .writeStream
  ... // output
Can I make this more efficient/faster? The overhead involved in creating the case class seems excessive. I'd prefer to be able to do something like this:
.map { row =>
  // from memory so might be slightly off
  val proto = MyThingProto.parseFrom(row.getAs[Array[Byte]](0))
  buildRow(proto)
}(encoder??) // what kind of encoder is used here?

def buildRow(proto: MyThingProto): Row =
  Row(proto.getId)
Would this be better? Or perhaps a UDF that uses the Kafka deserializer interface?
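For context, what I imagine is needed is an explicit encoder built from the target StructType. A rough, untested sketch of what I have in mind (I'm not sure RowEncoder from the catalyst package is the intended public API here, and the schema only has the id field):

import org.apache.spark.sql.Row
import org.apache.spark.sql.catalyst.encoders.RowEncoder
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Target schema: just the id field from the protobuf
val schema = StructType(Seq(StructField("id", StringType, nullable = false)))

// Encoder[Row] for that schema, passed explicitly to map
val rowEncoder = RowEncoder(schema)

val parsed = df
  .select("value")
  .map { row =>
    val proto = MyThingProto.parseFrom(row.getAs[Array[Byte]](0))
    Row(proto.getId)
  }(rowEncoder)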
Thanks in advance.
Related
We are trying to use a dynamic filter in a structured streaming application.
Let's say we have the following pseudo-implementation of a Spark structured streaming application:
spark.readStream()
  .format("kafka")
  .option(...)
  ...
  .load()
  .filter(getFilter()) // <-- dynamic stuff - uses filter(conditionExpr: String)
  .writeStream()
  .format("kafka")
  .option(.....)
  .start();
and getFilter returns a String:
String getFilter() {
    // dynamic stuff to create the expression
    return expression; // e.g. "column = true"
}
Is it possible in the current version of Spark to have a dynamic filter condition? I mean, the getFilter() method should dynamically return a filter condition (let's say it's refreshed every 10 minutes). We tried looking into broadcast variables, but we're not sure whether structured streaming supports such a thing.
It looks like it's not possible to update a job's configuration once it's submitted. We deploy on YARN.
Every suggestion/option is highly appreciated.
EDIT:
assume getFilter() returns:
(columnA = 1 AND columnB = true) OR customHiveUDF(columnC, 'input') != 'required' OR columnD > 8
After 10 minutes there might be a small change (say, the first expression before the first OR is dropped), and potentially a new expression (columnA = 2) is added, e.g.:
customHiveUDF(columnC, 'input') != 'required' OR columnD > 10 OR columnA = 2
The goal is to have multiple filters in one Spark application without submitting multiple jobs.
A broadcast variable should be OK here. You can write a typed filter like:
query.filter(x => x > bv.value).writeStream(...)
where bv is a Broadcast variable. You can update it as described here: How can I update a broadcast variable in spark streaming?
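For illustration, the whole thing might be wired up roughly like this (fetchThreshold, the Int payload, and the refresh trigger are placeholders; the unpersist-and-rebroadcast step follows the linked answer):

import org.apache.spark.broadcast.Broadcast

// Driver side: broadcast the current filter value and rebuild it periodically,
// e.g. from a scheduled thread that runs every 10 minutes.
var bv: Broadcast[Int] = spark.sparkContext.broadcast(fetchThreshold())

def refreshBroadcast(): Unit = {
  bv.unpersist(blocking = false)                       // drop cached copies on executors
  bv = spark.sparkContext.broadcast(fetchThreshold())  // rebroadcast the new value
}

// Typed (lambda) filter: bv.value is read on the executors for each micro-batch.
val filtered = events.filter((x: Int) => x > bv.value)

filtered.writeStream.format("console").start()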
Another solution is to provide e.g. an RPC or RESTful endpoint and query that endpoint every 10 minutes. For example (Java, because it's simpler here):
class EndpointProxy {
    static Configuration lastValue;
    static long lastUpdated;

    public static Configuration getConfiguration() {
        // refresh only when the cached value is older than refreshRate
        if (lastUpdated + refreshRate < System.currentTimeMillis()) {
            lastUpdated = System.currentTimeMillis();
            lastValue = askMyAPI();
        }
        return lastValue;
    }
}
query.filter(x => x > EndpointProxy.getConfiguration().getX()).writeStream()
Edit: a hacky workaround for the user's problem:
You can create a 1-row view:
// confsDF should be in some driver-side singleton
var confsDF = Seq(some content).toDF("someColumn")
and then use:
query.crossJoin(confsDF.as("conf")) // cross join as we have only 1 value
.filter("hiveUDF(conf.someColumn)")
.writeStream()...
// in a background thread, periodically rebuild the driver-side singleton
new Thread(new Runnable {
  override def run(): Unit = {
    confsDF = Seq(some new data).toDF("someColumn")
  }
}).start()
This hack relies on Spark default execution model - microbatches. In each trigger the query is being rebuilt, so new data should be taken into consideration.
You can also, in the thread, do:
Seq(some new data).toDF("someColumn").createOrReplaceTempView("conf")
and then in query:
.crossJoin(spark.table("conf"))
Both should work. Keep in mind that this won't work with Continuous Processing mode.
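To make the temp-view variant concrete, here is a rough end-to-end sketch (streamDF, fetchCurrentConf and the 10-minute interval are placeholders):

import java.util.concurrent.{Executors, TimeUnit}

// Background refresher: rebuilds the 1-row "conf" view periodically on the driver.
val scheduler = Executors.newSingleThreadScheduledExecutor()
scheduler.scheduleAtFixedRate(new Runnable {
  override def run(): Unit = {
    import spark.implicits._
    // fetchCurrentConf() stands in for whatever actually supplies the value
    Seq(fetchCurrentConf()).toDF("someColumn").createOrReplaceTempView("conf")
  }
}, 0, 10, TimeUnit.MINUTES)

// Streaming query: cross join with the 1-row view, then filter with the (Hive) UDF.
val query = streamDF
  .crossJoin(spark.table("conf").as("conf"))
  .filter("hiveUDF(conf.someColumn)")
  .writeStream
  .format("console")
  .start()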
Here is a simple example in which I am dynamically filtering records coming from a socket. Instead of Date, you can use any REST API that can update your filter dynamically, or a lightweight ZooKeeper instance.
Note: If you are planning to use a REST API, ZooKeeper, or any other option, use mapPartitions instead of filter, because then you only have to call the API / open the connection once per partition (see the sketch after the example below).
val lines = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()

// Keep only the lines that match the current filter value (here: the current minute)
val words = lines.as[String].filter(_ == new java.util.Date().getMinutes.toString)

// Generate a running count per value
val wordCounts = words.groupBy("value").count()

val query = wordCounts.writeStream
  .outputMode("complete")
  .format("console")
  .start()

query.awaitTermination()
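As for the mapPartitions note above, a rough sketch of that variant (fetchFilterValue is a placeholder for the REST call or ZooKeeper read):

import spark.implicits._

val filtered = lines.as[String].mapPartitions { iter =>
  // one lookup per partition instead of one per record
  val allowedValue = fetchFilterValue() // placeholder: REST call / ZooKeeper read
  iter.filter(_ == allowedValue)
}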
I have a Dataset<Row> which is the result of a Kafka readStream, as shown in the Java code snippet below.
m_oKafkaEvents = getSparkSession().readStream().format("kafka")
.option("kafka.bootstrap.servers", strKafkaAddress)
.option("subscribe", getInsightEvent().getTopic())
.option("maxOffsetsPerTrigger", "100000")
.option("startingOffsets", "latest")
.option("failOnDataLoss", false)
.load()
.select(functions.from_json(functions.col("value").cast("string"), oSchema).as("events"))
.select("events.*");
m_oKafkaEvents
{
{"EventTime":"1527005246864000000","InstanceID":"231","Model":"Opportunity_1","Milestone":"OrderProcessed"},
{"EventTime":"1527005246864000002","InstanceID":"232","Model":"Opportunity_2","Milestone":"OrderProcessed"},
{"EventTime":"1527005246864000001","InstanceID":"233","Model":"Opportunity_1","Milestone":"OrderProcessed"},
{"EventTime":"1527005246864000002","InstanceID":"234","Model":"Opportunity_2","Milestone":"OrderProcessed"}
}
I need to split this dataset based on the column "Model", which would result in two Datasets as below:
m_oKafkaEvents_for_Opportunity_1_topic
{
{"EventTime":"1527005246864000000","InstanceID":"231","Model":"Opportunity_1","Milestone":"OrderProcessed"},
{"EventTime":"1527005246864000001","InstanceID":"233","Model":"Opportunity_1","Milestone":"OrderProcessed"}
}
m_oKafkaEvents_for_Opportunity_2_topic
{
{"EventTime":"1527005246864000002","InstanceID":"232","Model":"Opportunity_2","Milestone":"OrderProcessed"},
{"EventTime":"1527005246864000002","InstanceID":"234","Model":"Opportunity_2","Milestone":"OrderProcessed"}
}
These Datasets would be published to a Kafka sink. The topic name would be the model value, i.e. Opportunity_1 and Opportunity_2.
Hence I need a handle on the "Model" column value and the respective list of events.
Since I am new to Spark, I am looking for help on how this can be achieved via Java code.
Appreciate any help.
The simplest solution would look like:
allEvents.selectExpr("topic", "CONCAT('m_oKafkaEvents_for_', Model, '_topic')")
.write()
.format("kafka")
.option("kafka.bootstrap.servers", "host1:port1,host2:port2")
.save();
You can see an example here: https://spark.apache.org/docs/2.2.0/structured-streaming-kafka-integration.html#writing-the-output-of-batch-queries-to-kafka. But after looking at Spark's code, it seems that we can have only one topic per write, i.e. it'll choose as topic the first encountered row:
def write(
    sparkSession: SparkSession,
    queryExecution: QueryExecution,
    kafkaParameters: ju.Map[String, Object],
    topic: Option[String] = None): Unit = {
  val schema = queryExecution.analyzed.output
  validateQuery(schema, kafkaParameters, topic)
  queryExecution.toRdd.foreachPartition { iter =>
    val writeTask = new KafkaWriteTask(kafkaParameters, schema, topic)
    Utils.tryWithSafeFinally(block = writeTask.execute(iter))(
      finallyBlock = writeTask.close())
  }
}
You can try this approach, though, and report back here whether it works as described above. If it doesn't work, you have alternative solutions, such as:
Cache the main DataFrame and create 2 other DataFrames, filtered by the Model attribute
Use foreachPartition and a Kafka writer to send the messages without splitting the main dataset
The first solution is pretty easy to implement, and you use all of Spark's facilities to do it. On the other hand, at least theoretically, splitting the dataset should be slightly slower than the second proposal. But try to measure before choosing one option or the other; maybe the difference will be really small, and it's always better to use a clear, community-approved approach.
Below you can find some code showing both situations:
SparkSession spark = SparkSession
    .builder()
    .appName("JavaStructuredNetworkWordCount")
    .getOrCreate();

Dataset<Row> allEvents = spark.readStream().format("kafka")
    .option("kafka.bootstrap.servers", "")
    .option("subscribe", "event")
    .option("maxOffsetsPerTrigger", "100000")
    .option("startingOffsets", "latest")
    .option("failOnDataLoss", false)
    .load()
    // pass your events schema here instead of null
    .select(functions.from_json(functions.col("value").cast("string"), null).as("events"))
    .select("events.*");
// First solution
Dataset<Row> opportunity1Events = allEvents.filter("Model = 'Opportunity_1'");
opportunity1Events.write().format("kafka").option("kafka.bootstrap.servers", "")
    .option("topic", "m_oKafkaEvents_for_Opportunity_1_topic").save();

Dataset<Row> opportunity2Events = allEvents.filter("Model = 'Opportunity_2'");
opportunity2Events.write().format("kafka").option("kafka.bootstrap.servers", "")
    .option("topic", "m_oKafkaEvents_for_Opportunity_2_topic").save();
// Note: Kafka writer was added in 2.2.0 https://github.com/apache/spark/commit/b0a5cd89097c563e9949d8cfcf84d18b03b8d24c
// Another approach: iterate over the messages accumulated within each partition
allEvents.foreachPartition(new ForeachPartitionFunction<Row>() {

    private KafkaProducer<String, Row> localProducer = new KafkaProducer<>(new HashMap<>());

    private final Map<String, String> modelsToTopics = new HashMap<>();
    {
        modelsToTopics.put("Opportunity_1", "m_oKafkaEvents_for_Opportunity_1_topic");
        modelsToTopics.put("Opportunity_2", "m_oKafkaEvents_for_Opportunity_2_topic");
    }

    @Override
    public void call(Iterator<Row> rows) throws Exception {
        // Each row is routed to the topic mapped from its Model value:
        // Opportunity_1 rows go to the Opportunity_1 topic, Opportunity_2 rows to the other.
        while (rows.hasNext()) {
            Row currentRow = rows.next();
            // you can reformat your row here or directly in Spark's map transformation
            localProducer.send(new ProducerRecord<>(modelsToTopics.get(currentRow.getAs("Model")),
                "some_message_key", currentRow));
        }
        // KafkaProducer accumulates messages in an in-memory buffer and sends them when a threshold is reached.
        // Flush them synchronously here to be sure that every buffered message was correctly delivered.
        // You can also play with features added in Kafka 0.11: the idempotent producer and the transactional producer.
        localProducer.flush();
    }
});
I have written a Structured Streaming aggregation that takes events from a Kafka Source, performs a simple count and writes them back to a Cassandra Database. The code looks like this:
val data = stream
.groupBy(functions.to_date($"timestamp").as("date"), $"type".as("type"))
.agg(functions.count("*").as("value"))
val query: StreamingQuery = data
.writeStream
.queryName("group-by-type")
.format("org.apache.spark.sql.streaming.cassandra.CassandraSinkProvider")
.outputMode(OutputMode.Complete())
.option("checkpointLocation", config.getString("checkpointLocation") + "/" + "group-by-type")
.option("keyspace", "analytics")
.option("table", "aggregations")
.option("partitionKeyColumns", "project,type")
.option("clusteringKeyColumns", "date")
.start()
The problem is that the count is only over each single batch, so I see the counts dropping in Cassandra. The counts should never drop over the course of a day; how can I achieve that?
Edit:
I have tried using window aggregations too, same thing
So the error in this case wasn't actually in my query or in Spark.
To figure out where the problem was, I used the console sink, and it did not show the problem.
The problem was in my Cassandra sink which looked like this:
class CassandraSink(sqlContext: SQLContext, keyspace: String, table: String) extends Sink {
  override def addBatch(batchId: Long, data: DataFrame): Unit = {
    data.write.mode(SaveMode.Append).cassandraFormat(table, keyspace).save()
  }
}
It uses the Datastax Spark Cassandra connector to write data frames.
The problem is that the variable data contains a streaming Dataset. In the ConsoleSink provided by Spark, the Dataset gets copied into a static Dataset before writing. So I changed mine accordingly, and now it works. The finished version looks like this:
class CassandraSink(sqlContext: SQLContext, keyspace: String, table: String) extends Sink {
  override def addBatch(batchId: Long, data: DataFrame): Unit = {
    val ds = data.sparkSession.createDataFrame(
      data.sparkSession.sparkContext.parallelize(data.collect()),
      data.schema
    )
    ds.write.mode(SaveMode.Append).cassandraFormat(table, keyspace).save()
  }
}
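For completeness, the provider class referenced by the writeStream format above would look roughly like this; a sketch against Spark 2.x's StreamSinkProvider API, with the parameter names matching the keyspace/table options used in the query:

import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.execution.streaming.Sink
import org.apache.spark.sql.sources.StreamSinkProvider
import org.apache.spark.sql.streaming.OutputMode

// Resolved by .format("org.apache.spark.sql.streaming.cassandra.CassandraSinkProvider")
class CassandraSinkProvider extends StreamSinkProvider {
  override def createSink(
      sqlContext: SQLContext,
      parameters: Map[String, String],
      partitionColumns: Seq[String],
      outputMode: OutputMode): Sink =
    new CassandraSink(sqlContext, parameters("keyspace"), parameters("table"))
}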
I am trying to use Structured Streaming in Spark 2.1.1 to read from Kafka and decode Avro-encoded messages. I have a UDF defined as per this question.
val sr = new CachedSchemaRegistryClient(conf.kafkaSchemaRegistryUrl, 100)
val deser = new KafkaAvroDeserializer(sr)
val decodeMessage = udf { bytes: Array[Byte] => deser.deserialize("topic.name", bytes).asInstanceOf[DeviceRead] }

val topic = conf.inputTopic
val df = session
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", conf.kafkaServers)
  .option("subscribe", topic)
  .load()

df.printSchema()

val result = df.selectExpr("CAST(key AS STRING)", """decodeMessage($"value") as "value_des"""")

val query = result.writeStream
  .format("console")
  .outputMode(OutputMode.Append())
  .start()
However I get the following failure.
Exception in thread "main" java.lang.UnsupportedOperationException: Schema for type DeviceRelayStateEnum is not supported
It fails on this line
val decodeMessage = udf { bytes:Array[Byte] => deser.deserialize("topic.name", bytes).asInstanceOf[DeviceRead] }
An alternative approach was to define encoders for the custom classes I have:
implicit val enumEncoder = Encoders.javaSerialization[DeviceRelayStateEnum]
implicit val messageEncoder = Encoders.product[DeviceRead]
but that fails with the following error when the messageEncoder is getting registered.
Exception in thread "main" java.lang.UnsupportedOperationException: No Encoder found for DeviceRelayStateEnum
- option value class: "DeviceRelayStateEnum"
- field (class: "scala.Option", name: "deviceRelayState")
- root class: "DeviceRead"
at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$serializerFor(ScalaReflection.scala:602)
at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$serializerFor(ScalaReflection.scala:476)
at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$9.apply(ScalaReflection.scala:596)
at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$9.apply(ScalaReflection.scala:587)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
When I attempt to do this using a map after the load() I get the following compilation error.
val result = df.map((bytes: Row) => deser.deserialize("topic", bytes.getAs[Array[Byte]]("value")).asInstanceOf[DeviceRead])
Error:(76, 26) not enough arguments for method map: (implicit evidence$6: org.apache.spark.sql.Encoder[DeviceRead])org.apache.spark.sql.Dataset[DeviceRead].
Unspecified value parameter evidence$6.
val result = df.map((bytes: Row) => deser.deserialize("topic", bytes.getAs[Array[Byte]]("value")).asInstanceOf[DeviceRead])
Error:(76, 26) Unable to find encoder for type stored in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases.
val result = df.map((bytes: Row) => deser.deserialize("topic", bytes.getAs[Array[Byte]]("value")).asInstanceOf[DeviceRead])
Does that essentially mean that I cannot use Structured Streaming with Java enums, and that it can only be used with either primitives or case classes?
I read a few related questions (1, 2, 3) around this, and it seems the possibility of specifying a custom Encoder for a class (i.e. a UDT) was removed in 2.1 and the new functionality has not been added.
Any help will be appreciated.
I think you may be asking for too much in the current version of Structured Streaming (and Spark SQL) in general.
I have not yet been able to figure out how to deal with the issue of missing encoders in a more principled way, but you would get the same issue if you tried to create a Dataset of enums. That might simply not be supported yet.
Structured Streaming is just a streaming library on top of Spark SQL and uses it for serialization-deserialization (SerDe).
To make a long story short and to get you going (until you figure out a better way), I'd recommend avoiding enums in the business objects you use to represent the schema of your datasets.
So, I'd recommend doing something along these lines:
val decodeMessage = udf { bytes: Array[Byte] =>
  val dr = deser.deserialize("topic.name", bytes).asInstanceOf[DeviceRead]
  // do additional transformation here so you use a custom streaming-specific class
  // Here I'm using a simple tuple to hold what might be relevant
  // You could create a case class instead to have proper names
  (dr.id, dr.value)
}
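Applying the UDF could then look roughly like this (the column aliases and the console sink are just for illustration; df and decodeMessage are the ones from the snippets above):

import org.apache.spark.sql.functions.col
import org.apache.spark.sql.streaming.OutputMode

// Apply the UDF to the Kafka value bytes; the result is a struct of the tuple fields
val result = df.select(
  col("key").cast("string").as("key"),
  decodeMessage(col("value")).as("value_des"))

val query = result.writeStream
  .format("console")
  .outputMode(OutputMode.Append())
  .start()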
I'm trying to write a DataFrame from Spark to Kafka and I couldn't find any solution out there. Can you please show me how to do that?
Here is my current code:
activityStream.foreachRDD { rdd =>
  val activityDF = rdd
    .toDF()
    .selectExpr(
      "timestamp_hour", "referrer", "action",
      "prevPage", "page", "visitor", "product", "inputProps.topic as topic")

  val producerRecord = new ProducerRecord(topicc, activityDF)
  kafkaProducer.send(producerRecord) // <--- this shows an error
}
type mismatch; found : org.apache.kafka.clients.producer.ProducerRecord[Nothing,org.apache.spark.sql.DataFrame] (which expands to) org.apache.kafka.clients.producer.ProducerRecord[Nothing,org.apache.spark.sql.Dataset[org.apache.spark.sql.Row]] required: org.apache.kafka.clients.producer.ProducerRecord[Nothing,String] Error occurred in an application involving default arguments.
Do a collect on activityDF to get the records (not a Dataset[Row]) and save them to Kafka.
Note that you'll end up with a collection of records after collect, so you'll have to iterate over it, e.g.
val activities = activityDF.collect()

// the following is pure Scala and has nothing to do with Spark
activities.foreach { a: Row =>
  val pr: ProducerRecord[String, String] = ??? // map a to pr
  kafkaProducer.send(pr)
}
Use pattern matching on Row to destructure it to fields/columns, e.g.
activities.foreach { case Row(timestamp_hour, referrer, action, prevPage, page, visitor, product, topic) =>
  // ...build a ProducerRecord (pr) from the destructured fields
  kafkaProducer.send(pr)
}
PROTIP: I'd strongly suggest using a case class and transforming the DataFrame (= Dataset[Row]) into a Dataset[YourCaseClass].
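A rough sketch of that case-class variant, assuming an Activity case class matching the selected columns (all treated as strings) and a KafkaProducer[String, String]:

import org.apache.kafka.clients.producer.ProducerRecord

// hypothetical case class matching the columns selected above
case class Activity(
  timestamp_hour: String, referrer: String, action: String, prevPage: String,
  page: String, visitor: String, product: String, topic: String)

import spark.implicits._

activityDF.as[Activity].collect().foreach { a =>
  // build the record however your producer expects it; toString is just a placeholder value
  val pr = new ProducerRecord[String, String](a.topic, a.visitor, a.toString)
  kafkaProducer.send(pr)
}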
See Spark SQL's Row and Kafka's ProducerRecord docs.
As Joe Nate pointed out in the comments:
If you do "collect" before writing to any endpoint, it's going to make all the data aggregate at the driver and then make the driver write it out. 1) Can crash the driver if too much data (2) no parallelism in write.
That's 100% correct. I wish I had said it :)
You may want to use the approach as described in Writing Stream Output to Kafka instead.
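For reference, since Spark 2.2 the kafka data source can do the write itself and keep the parallelism; a minimal sketch, assuming the rows are serialized to JSON and the topic name is hard-coded:

activityDF
  .selectExpr("to_json(struct(*)) AS value")   // the Kafka sink expects a "value" column
  .write
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:9092")
  .option("topic", "activities")               // hypothetical topic name
  .save()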