Create a DataFrame in Spark Streaming - apache-spark

I've connected a Kafka stream to Spark, and I've trained an Apache Spark MLlib model to make predictions based on streamed text. My problem is that to get a prediction I need to pass a DataFrame.
//kafka stream
val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](topics, kafkaParams)
)
//load mllib model
val model = PipelineModel.load(modelPath)
stream.foreachRDD { rdd =>
  rdd.foreach { record =>
    //to get a prediction I need to pass a DF
    val toPredict = spark.createDataFrame(Seq(
      (1L, record.value())
    )).toDF("id", "review")
    val prediction = model.transform(toPredict)
  }
}
My problem is that Spark Streaming doesn't let me create a DataFrame here. Is there any way to do that? Can I use a case class or a struct?

It's possible to create a DataFrame or Dataset from an RDD as you would in core Spark. To do that, we need to apply a schema. Within the foreachRDD we can then transform the resulting RDD into a DataFrame that can be further used with an ML pipeline.
// we use a schema in the form of a case class
case class MyStructure(field: Type, ...)
// and we implement our custom transformation from string to our structure
object MyStructure {
  def parse(str: String): Option[MyStructure] = ...
}

val stream = KafkaUtils.createDirectStream...
// give the stream a schema using the case class
val strucStream = stream.flatMap(cr => MyStructure.parse(cr.value))
strucStream.foreachRDD { rdd =>
  import sparkSession.implicits._
  val df = rdd.toDF()
  val prediction = model.transform(df)
  // do something with the prediction
}
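For the specific case in the question, a concrete version might look like the sketch below (it assumes the pipeline expects the "id" and "review" columns from the question, and uses a placeholder id):
// hypothetical concrete version of the above for the question's "id"/"review" schema
case class Review(id: Long, review: String)

val reviewStream = stream.map(cr => Review(0L, cr.value())) // 0L is a placeholder id
reviewStream.foreachRDD { rdd =>
  import sparkSession.implicits._
  val df = rdd.toDF()                    // columns: id, review
  val prediction = model.transform(df)   // run the loaded PipelineModel on the whole micro-batch
  prediction.show()
}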

Related

Can not read AVRO data from kafka stream in spark scala app [duplicate]

I'm using a Kafka source in Spark Structured Streaming to receive Confluent-encoded Avro records. I intend to use the Confluent Schema Registry, but the integration with Spark Structured Streaming seems to be impossible.
I have seen this question, but was unable to get it working with the Confluent Schema Registry: Reading Avro messages from Kafka with Spark 2.0.2 (structured streaming)
It took me a couple of months of reading source code and testing things out. In a nutshell, Spark can only handle String and Binary serialization. You must manually deserialize the data. In Spark, create the Confluent RestService object to get the schema. Convert the schema string in the response object into an Avro schema using the Avro parser. Next, read the Kafka topic as normal. Then map over the binary-typed "value" column with the Confluent KafkaAvroDeserializer. I strongly suggest getting into the source code for these classes because there is a lot going on here, so for brevity I'll leave out many details.
//Used Confluent version 3.2.2 to write this.
import io.confluent.kafka.schemaregistry.client.rest.RestService
import io.confluent.kafka.serializers.KafkaAvroDeserializer
import org.apache.avro.Schema
import scala.collection.JavaConverters._ // for props.asJava below
import spark.implicits._                 // encoder for the case class used in the map below
case class DeserializedFromKafkaRecord(key: String, value: String)
val schemaRegistryURL = "http://127.0.0.1:8081"
val topicName = "Schema-Registry-Example-topic1"
val subjectValueName = topicName + "-value"
//create RestService object
val restService = new RestService(schemaRegistryURL)
//.getLatestVersion returns io.confluent.kafka.schemaregistry.client.rest.entities.Schema object.
val valueRestResponseSchema = restService.getLatestVersion(subjectValueName)
//Use Avro parsing classes to get Avro Schema
val parser = new Schema.Parser
val topicValueAvroSchema: Schema = parser.parse(valueRestResponseSchema.getSchema)
//key schema is typically just string but you can do the same process for the key as the value
val keySchemaString = "\"string\""
val keySchema = parser.parse(keySchemaString)
//Create a map with the Schema registry url.
//This is the only Required configuration for Confluent's KafkaAvroDeserializer.
val props = Map("schema.registry.url" -> schemaRegistryURL)
//Declare SerDe vars before using Spark structured streaming map. Avoids non serializable class exception.
var keyDeserializer: KafkaAvroDeserializer = null
var valueDeserializer: KafkaAvroDeserializer = null
//Create structured streaming DF to read from the topic.
val rawTopicMessageDF = spark.readStream
.format("kafka")
.option("kafka.bootstrap.servers", "127.0.0.1:9092")
.option("subscribe", topicName)
.option("startingOffsets", "earliest")
.option("maxOffsetsPerTrigger", 20) //remove for prod
.load()
//instantiate the SerDe classes if not already, then deserialize!
val deserializedTopicMessageDS = rawTopicMessageDF.map{
row =>
if (keyDeserializer == null) {
keyDeserializer = new KafkaAvroDeserializer
keyDeserializer.configure(props.asJava, true) //isKey = true
}
if (valueDeserializer == null) {
valueDeserializer = new KafkaAvroDeserializer
valueDeserializer.configure(props.asJava, false) //isKey = false
}
//Pass the Avro schema.
val deserializedKeyString = keyDeserializer.deserialize(topicName, row.getAs[Array[Byte]]("key"), keySchema).toString //topic name is actually unused in the source code, just required by the signature. Weird right?
val deserializedValueString = valueDeserializer.deserialize(topicName, row.getAs[Array[Byte]]("value"), topicValueAvroSchema).toString
DeserializedFromKafkaRecord(deserializedKeyString, deserializedValueString)
}
val deserializedDSOutputStream = deserializedTopicMessageDS.writeStream
.outputMode("append")
.format("console")
.option("truncate", false)
.start()
Disclaimer
This code was only tested on a local master, and has been reported to run into serializer issues in a clustered environment. There's an alternative solution (steps 7-9, with Scala code in step 10) that extracts the schema ids out to columns, looks up each unique id, and then uses schema broadcast variables, which will work better at scale.
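As a rough, hypothetical sketch of that id-extraction idea (building on rawTopicMessageDF above; the column and variable names are assumptions):
// Confluent wire format: 1 magic byte, then a 4-byte big-endian schema id, then the Avro payload.
import java.nio.ByteBuffer
import org.apache.spark.sql.functions.{col, udf}

val schemaIdOf = udf((payload: Array[Byte]) => ByteBuffer.wrap(payload, 1, 4).getInt)
val withSchemaId = rawTopicMessageDF.withColumn("valueSchemaId", schemaIdOf(col("value")))
// Per micro-batch you can then collect the distinct ids, fetch each schema once from the
// registry, broadcast the resulting Map[Int, String], and deserialize inside a map using
// the broadcast schemas instead of doing a registry lookup per record.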
There is also an external library, AbsaOSS/ABRiS, that addresses using the Registry with Spark.
Since the other answer that was mostly useful was removed, I wanted to re-add it with some refactoring and comments.
Here are the dependencies needed. Code tested with Confluent 5.x and Spark 2.4
<dependency>
<groupId>io.confluent</groupId>
<artifactId>kafka-avro-serializer</artifactId>
<version>${confluent.version}</version>
<exclusions>
<!-- Conflicts with Spark's version -->
<exclusion>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql-kafka-0-10_${scala.version}</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-avro_${scala.version}</artifactId>
<version>${spark.version}</version>
</dependency>
And here is the Scala implementation (only tested locally on master=local[*])
First section, define the imports, some fields, and a few helper methods to get schemas
import io.confluent.kafka.schemaregistry.client.{CachedSchemaRegistryClient, SchemaRegistryClient}
import io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer
import org.apache.avro.Schema
import org.apache.avro.generic.GenericRecord
import org.apache.commons.cli.CommandLine
import org.apache.spark.sql._
import org.apache.spark.sql.avro.SchemaConverters
import org.apache.spark.sql.streaming.OutputMode
object App {
private var schemaRegistryClient: SchemaRegistryClient = _
private var kafkaAvroDeserializer: AvroDeserializer = _
def lookupTopicSchema(topic: String, isKey: Boolean = false) = {
schemaRegistryClient.getLatestSchemaMetadata(topic + (if (isKey) "-key" else "-value")).getSchema
}
def avroSchemaToSparkSchema(avroSchema: String) = {
SchemaConverters.toSqlType(new Schema.Parser().parse(avroSchema))
}
// ... continues below
Then define a simple main method that parses the CMD args to get Kafka details
def main(args: Array[String]): Unit = {
val cmd: CommandLine = parseArg(args)
val master = cmd.getOptionValue("master", "local[*]")
val spark = SparkSession.builder()
.appName(App.getClass.getName)
.master(master)
.getOrCreate()
val bootstrapServers = cmd.getOptionValue("bootstrap-server")
val topic = cmd.getOptionValue("topic")
val schemaRegistryUrl = cmd.getOptionValue("schema-registry")
consumeAvro(spark, bootstrapServers, topic, schemaRegistryUrl)
spark.stop()
}
// ... still continues
Then, the important method that consumes the Kafka topic and deserializes it
private def consumeAvro(spark: SparkSession, bootstrapServers: String, topic: String, schemaRegistryUrl: String): Unit = {
import spark.implicits._
// Setup the Avro deserialization UDF
schemaRegistryClient = new CachedSchemaRegistryClient(schemaRegistryUrl, 128)
kafkaAvroDeserializer = new AvroDeserializer(schemaRegistryClient)
spark.udf.register("deserialize", (bytes: Array[Byte]) =>
kafkaAvroDeserializer.deserialize(bytes)
)
// Load the raw Kafka topic (byte stream)
val rawDf = spark.readStream
.format("kafka")
.option("kafka.bootstrap.servers", bootstrapServers)
.option("subscribe", topic)
.option("startingOffsets", "earliest")
.load()
// Deserialize byte stream into strings (Avro fields become JSON)
import org.apache.spark.sql.functions._
val jsonDf = rawDf.select(
// 'key.cast(DataTypes.StringType), // string keys are simplest to use
callUDF("deserialize", 'key).as("key"), // but sometimes they are avro
callUDF("deserialize", 'value).as("value")
// excluding topic, partition, offset, timestamp, etc
)
// Get the Avro schema for the topic from the Schema Registry and convert it into a Spark schema type
val dfValueSchema = {
val rawSchema = lookupTopicSchema(topic)
avroSchemaToSparkSchema(rawSchema)
}
// Apply structured schema to JSON stream
val parsedDf = jsonDf.select(
'key, // keys are usually plain strings
// values are JSONified Avro records
from_json('value, dfValueSchema.dataType).alias("value")
).select(
'key,
$"value.*" // flatten out the value
)
// parsedDf.printSchema()
// Sample schema output
// root
// |-- key: string (nullable = true)
// |-- header: struct (nullable = true) // Not a Kafka record "header". This is part of our value schema
// | |-- time: long (nullable = true)
// | ...
// TODO: Do something interesting with this stream
parsedDf.writeStream
.format("console")
.outputMode(OutputMode.Append())
.option("truncate", false)
.start()
.awaitTermination()
}
// still continues
The command line parser allows for passing in bootstrap servers, schema registry, topic name, and Spark master.
private def parseArg(args: Array[String]): CommandLine = {
import org.apache.commons.cli._
val options = new Options
val masterOption = new Option("m", "master", true, "Spark master")
masterOption.setRequired(false)
options.addOption(masterOption)
val bootstrapOption = new Option("b", "bootstrap-server", true, "Bootstrap servers")
bootstrapOption.setRequired(true)
options.addOption(bootstrapOption)
val topicOption = new Option("t", "topic", true, "Kafka topic")
topicOption.setRequired(true)
options.addOption(topicOption)
val schemaRegOption = new Option("s", "schema-registry", true, "Schema Registry URL")
schemaRegOption.setRequired(true)
options.addOption(schemaRegOption)
val parser = new BasicParser
parser.parse(options, args)
}
// still continues
For the UDF above to work, there needs to be a deserializer that takes the DataFrame of bytes and turns it into one containing deserialized Avro
// Simple wrapper around Confluent deserializer
class AvroDeserializer extends AbstractKafkaAvroDeserializer {
def this(client: SchemaRegistryClient) {
this()
// TODO: configure the deserializer for authentication
this.schemaRegistry = client
}
override def deserialize(bytes: Array[Byte]): String = {
val value = super.deserialize(bytes)
value match {
case str: String =>
str
case _ =>
val genericRecord = value.asInstanceOf[GenericRecord]
genericRecord.toString
}
}
}
} // end 'object App'
Put each of these blocks together, and it works in IntelliJ after adding -b localhost:9092 -s http://localhost:8081 -t myTopic to Run Configurations > Program Arguments
This is an example of my code integrating Spark Structured Streaming with Kafka and the Schema Registry (code in Scala)
import org.apache.spark.sql.SparkSession
import io.confluent.kafka.schemaregistry.client.rest.RestService // <artifactId>kafka-schema-registry</artifactId>
import org.apache.spark.sql.avro.from_avro // <artifactId>spark-avro_${scala.compat.version}</artifactId>
import org.apache.spark.sql.functions.col
object KafkaConsumerAvro {
def main(args: Array[String]): Unit = {
val KAFKA_BOOTSTRAP_SERVERS = "localhost:9092"
val SCHEMA_REGISTRY_URL = "http://localhost:8081"
val TOPIC = "transactions"
val spark: SparkSession = SparkSession.builder().appName("KafkaConsumerAvro").getOrCreate()
spark.sparkContext.setLogLevel("ERROR")
val df = spark.readStream
.format("kafka")
.option("kafka.bootstrap.servers", KAFKA_BOOTSTRAP_SERVERS)
.option("subscribe", TOPIC)
.option("startingOffsets", "earliest") // from starting
.load()
// Prints Kafka schema with columns (topic, offset, partition e.t.c)
df.printSchema()
// Create REST service to access schema registry and retrieve topic schema (latest)
val restService = new RestService(SCHEMA_REGISTRY_URL)
val valueRestResponseSchema = restService.getLatestVersion(TOPIC + "-value")
val jsonSchema = valueRestResponseSchema.getSchema
val transactionDF = df.select(
col("key").cast("string"), // cast to string from binary value
from_avro(col("value"), jsonSchema).as("transaction"), // convert from avro value
col("topic"),
col("offset"),
col("timestamp"),
col("timestampType"))
transactionDF.printSchema()
// Stream data to console for testing
transactionDF.writeStream
.format("console")
.outputMode("append")
.start()
.awaitTermination()
}
}
When reading from the Kafka topic, we get this kind of schema:
key: binary | value: binary | topic: string | partition: integer | offset: long | timestamp: timestamp | timestampType: integer
As we can see, key and value are binary, so we need to cast the key to string; in this case the value is Avro-formatted, so we can parse it by calling the from_avro function.
In addition to the Spark and Kafka dependencies, we need these dependencies:
<!-- READ AND WRITE AVRO DATA -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-avro_${scala.compat.version}</artifactId>
<version>${spark.version}</version>
</dependency>
<!-- INTEGRATION WITH SCHEMA REGISTRY -->
<dependency>
<groupId>io.confluent</groupId>
<artifactId>kafka-schema-registry</artifactId>
<version>${confluent.version}</version>
</dependency>
This library will do the job for you. It connects to the Confluent Schema Registry through Spark Structured Streaming.
For Confluent, it copes with the schema id that is sent along with the payload.
In the README you will find a code snippet showing how to do it.
DISCLOSURE: I work for ABSA and I developed this library.
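A minimal sketch of that README usage, mirroring the ABRiS configuration shown further down in this thread (the topic, registry URL, and kafkaDf are placeholders):
import org.apache.spark.sql.functions.col
import za.co.absa.abris.config.AbrisConfig
import za.co.absa.abris.avro.functions.from_avro

// hypothetical topic/registry values
val abrisConfig = AbrisConfig
  .fromConfluentAvro
  .downloadReaderSchemaByLatestVersion
  .andTopicNameStrategy("my-topic")
  .usingSchemaRegistry("http://localhost:8081")

// kafkaDf is assumed to be a streaming DataFrame read with format("kafka")
val deserialized = kafkaDf.select(from_avro(col("value"), abrisConfig).as("data"))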
Another very simple alternative for pyspark (without full Schema Registry support such as schema registration, compatibility checks, etc.) could be:
import requests
from pyspark.sql.types import *
from pyspark.sql.functions import *
from pyspark.sql.avro.functions import *

# variables
topic = "my-topic"
schemaregistry = "http://localhost:8081"
kafka_brokers = "kafka1:9092,kafka2:9092"

# retrieve the latest schema
response = requests.get('{}/subjects/{}-value/versions/latest/schema'.format(schemaregistry, topic))
# error check
response.raise_for_status()
# extract the schema from the response
schema = response.text

# run the query (parentheses allow the inline comments mid-chain)
query = (spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", kafka_brokers)
    .option("subscribe", topic)
    .load()
    # The magic goes here:
    # Skip the first 5 bytes (reserved by the schema registry encoding protocol)
    .selectExpr("substring(value, 6) as avro_value")
    .select(from_avro(col("avro_value"), schema).alias("data"))
    .select(col("data.my_field"))
    .writeStream
    .format("console")
    .outputMode("append")  # "complete" is only valid with streaming aggregations
    .start())
Databricks now provides this functionality, but you have to pay for it :-(
dataDF
  .select(
    to_avro($"key", lit("t-key"), schemaRegistryAddr).as("key"),
    to_avro($"value", lit("t-value"), schemaRegistryAddr).as("value"))
  .writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers", servers)
  .option("topic", "t")
  .start()
See https://docs.databricks.com/spark/latest/structured-streaming/avro-dataframe.html for more info
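For the read side, per the linked Databricks docs, the registry-aware from_avro variant looks roughly like this (treat the exact signature and the subject names as assumptions; it only runs on Databricks Runtime):
// hypothetical read-side counterpart of the snippet above
val parsedDF = kafkaDF.select(
  from_avro($"key", "t-key", schemaRegistryAddr).as("key"),
  from_avro($"value", "t-value", schemaRegistryAddr).as("value"))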
A good free alternative is ABRiS (see https://github.com/AbsaOSS/ABRiS). The only downside we can see is that you need to provide a file with your Avro schema at runtime so the framework can enforce that schema on your DataFrame before it publishes it to the Kafka topic.
Based on @cricket_007's answers I created the following solution, which can run in our cluster environment, including the following new features:
You need to add broadcast variables to transfer some values into the map operations in a cluster environment. Neither Schema.Parser nor KafkaAvroDeserializer can be serialized in Spark, which is why you need to initialize them inside the map operations.
My structured streaming job uses the foreachBatch output sink.
I applied org.apache.spark.sql.avro.SchemaConverters to convert the Avro schema format into a Spark StructType, so that you can use it in the from_json column function to parse the Kafka topic fields (key and value) of the DataFrame.
First, you need to load some packages:
SCALA_VERSION="2.11"
SPARK_VERSION="2.4.4"
CONFLUENT_VERSION="5.2.2"
jars=(
  "org.apache.spark:spark-sql-kafka-0-10_${SCALA_VERSION}:${SPARK_VERSION}"  ## format("kafka")
  "org.apache.spark:spark-avro_${SCALA_VERSION}:${SPARK_VERSION}"            ## SchemaConverters
  "io.confluent:kafka-schema-registry:${CONFLUENT_VERSION}"                  ## import io.confluent.kafka.schemaregistry.client.rest.RestService
  "io.confluent:kafka-avro-serializer:${CONFLUENT_VERSION}"                  ## import io.confluent.kafka.serializers.KafkaAvroDeserializer
)
./bin/spark-shell --packages "$(IFS=,; echo "${jars[*]}")"   # join the array with commas
Here is the complete code I tested in spark-shell:
import org.apache.avro.Schema
import io.confluent.kafka.serializers.KafkaAvroDeserializer
import io.confluent.kafka.schemaregistry.client.rest.RestService
import org.apache.spark.sql.streaming.Trigger
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
import org.apache.spark.sql.avro.SchemaConverters
import scala.collection.JavaConverters._
import java.time.LocalDateTime
spark.sparkContext.setLogLevel("Error")
val brokerServers = "xxx.yyy.zzz:9092"
val topicName = "mytopic"
val schemaRegistryURL = "http://xxx.yyy.zzz:8081"
val restService = new RestService(schemaRegistryURL)
val exParser = new Schema.Parser
//-- For both key and value
val schemaNames = Seq("key", "value")
val schemaStrings = schemaNames.map(i => (i -> restService.getLatestVersion(s"$topicName-$i").getSchema)).toMap
val tempStructMap = schemaStrings.transform((k,v) => SchemaConverters.toSqlType(exParser.parse(v)).dataType)
val schemaStruct = new StructType().add("key", tempStructMap("key")).add("value", tempStructMap("value"))
//-- For key only
// val schemaStrings = restService.getLatestVersion(s"$topicName-key").getSchema
// val schemaStruct = SchemaConverters.toSqlType(exParser.parse(schemaStrings)).dataType
//-- For value only
// val schemaStrings = restService.getLatestVersion(s"$topicName-value").getSchema
// val schemaStruct = SchemaConverters.toSqlType(exParser.parse(schemaStrings)).dataType
val query = spark
.readStream
.format("kafka")
.option("kafka.bootstrap.servers", brokerServers)
.option("subscribe", topicName)
.load()
.writeStream
.outputMode("append")
//.option("checkpointLocation", s"cos://$bucket.service/checkpoints/$tableName")
.foreachBatch((batchDF: DataFrame, batchId: Long) => {
val bcTopicName = sc.broadcast(topicName)
val bcSchemaRegistryURL = sc.broadcast(schemaRegistryURL)
val bcSchemaStrings = sc.broadcast(schemaStrings)
val rstDF = batchDF.map {
row =>
val props = Map("schema.registry.url" -> bcSchemaRegistryURL.value)
//-- For both key and value
val isKeys = Map("key" -> true, "value" -> false)
val deserializers = isKeys.transform{ (k,v) =>
val des = new KafkaAvroDeserializer
des.configure(props.asJava, v)
des
}
//-- For key only
// val deserializer = new KafkaAvroDeserializer
// deserializer.configure(props.asJava, true)
//-- For value only
// val deserializer = new KafkaAvroDeserializer
// deserializer.configure(props.asJava, false)
val inParser = new Schema.Parser
//-- For both key and value
val values = bcSchemaStrings.value.transform( (k,v) =>
deserializers(k).deserialize(bcTopicName.value, row.getAs[Array[Byte]](k), inParser.parse(v)).toString)
s"""{"key": ${values("key")}, "value": ${values("value")} }"""
//-- For key only
// deserializer.deserialize(bcTopicName.value, row.getAs[Array[Byte]]("key"), inParser.parse(bcSchemaStrings.value)).toString
//-- For value only
// deserializer.deserialize(bcTopicName.value, row.getAs[Array[Byte]]("value"), inParser.parse(bcSchemaStrings.value)).toString
}
.select(from_json(col("value"), schemaStruct).as("root"))
.select("root.*")
println(s"${LocalDateTime.now} --- Batch $batchId: ${rstDF.count} rows")
rstDF.printSchema
rstDF.show(false)
})
.trigger(Trigger.ProcessingTime("60 seconds"))
.start()
query.awaitTermination()
Summarizing some of the answers above and adding some of my own experience, these are the options at the time of writing:
3rd-party ABRiS library. This is what we used initially, but it doesn't seem to support a permissive mode where you can drop malformed messages. It will crash the stream when it encounters a malformed message. If you can guarantee message validity that is okay, but it was an issue for us, as it kept trying to parse the malformed message after a stream restart.
Custom UDF which parses the Avro data, as outlined in OneCricketeer's answer. Gives the most flexibility but also requires the most custom code.
Using Databricks' from_avro variant, which allows you to simply pass the URL, and it will find the right schema and parse it for you. Works really well, but is only available in their environment, and thus hard to test in a codebase.
Using Spark's built-in from_avro function. This function allows you to pass a JSON schema and parse from there. The only fix you have to apply is that in Confluent's wire format there is one magic byte and 4 schema-id bytes before the actual Avro binary data starts, as also pointed out in dudssource's answer. You can parse it like this in Scala:
val restService = new RestService(espConfig.schemaRegistryUrl)
val valueRestResponseSchema = restService.getVersion(espConfig.fullTopicName + "-value", schemaVersion)
val jsonSchema = valueRestResponseSchema.getSchema

// assumes spark.implicits._ (for the 'col syntax) and scala.collection.JavaConverters._ (for .asJava) are in scope
streamDf
  .withColumn("binary_data", substring('value, 6, Int.MaxValue))
  .withColumn("parsed_data", from_avro('binary_data, jsonSchema, Map("mode" -> "PERMISSIVE").asJava))
For anyone who wants to do this in pyspark: the library that felipe referenced worked nicely on the JVM for me before, so I wrote a small wrapper function that integrates it in Python. This looks very hacky, because a lot of types that are implicit in the Scala language have to be specified explicitly in py4j. It has been working nicely so far, though, even in Spark 2.4.1.
from pyspark.sql import DataFrame  # needed for the re-wrap at the end of the function

def expand_avro(spark_context, sql_context, data_frame, schema_registry_url, topic):
j = spark_context._gateway.jvm
dataframe_deserializer = j.za.co.absa.abris.avro.AvroSerDe.DataframeDeserializer(data_frame._jdf)
naming_strategy = getattr(
getattr(j.za.co.absa.abris.avro.read.confluent.SchemaManager,
"SchemaStorageNamingStrategies$"), "MODULE$").TOPIC_NAME()
conf = getattr(getattr(j.scala.collection.immutable.Map, "EmptyMap$"), "MODULE$")
conf = getattr(conf, "$plus")(j.scala.Tuple2("schema.registry.url", schema_registry_url))
conf = getattr(conf, "$plus")(j.scala.Tuple2("schema.registry.topic", topic))
conf = getattr(conf, "$plus")(j.scala.Tuple2("value.schema.id", "latest"))
conf = getattr(conf, "$plus")(j.scala.Tuple2("value.schema.naming.strategy", naming_strategy))
schema_path = j.scala.Option.apply(None)
conf = j.scala.Option.apply(conf)
policy = getattr(j.za.co.absa.abris.avro.schemas.policy.SchemaRetentionPolicies, "RETAIN_SELECTED_COLUMN_ONLY$")()
data_frame = dataframe_deserializer.fromConfluentAvro("value", schema_path, conf, policy)
data_frame = DataFrame(data_frame, sql_context)
return data_frame
For that to work, you have to add the library to the spark packages, e.g.
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages ' \
'org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.1,' \
'org.apache.spark:spark-avro_2.11:2.4.1,' \
'za.co.absa:abris_2.11:2.2.2 ' \
'--repositories https://packages.confluent.io/maven/ ' \
'pyspark-shell'
@RvdV Great summary. I was trying the ABRiS library and consuming CDC records generated by Debezium.
val abrisConfig: FromAvroConfig = (AbrisConfig
.fromConfluentAvro
.downloadReaderSchemaByLatestVersion
.andTopicNameStrategy(topicName)
.usingSchemaRegistry(schemaRegistryURL))
val df=(spark
.readStream
.format("kafka")
.option("kafka.bootstrap.servers", brokerServers)
.option("subscribe", topicName)
.load())
val deserializedAvro = (df
.select(from_avro(col("value"), abrisConfig)
.as("data"))
.select(col("data.after.*")))
deserializedAvro.printSchema()
val query = (deserializedAvro
.writeStream
.format("console")
.outputMode("append")
.option("checkpointLocation", s"s3://$bucketName/checkpoints/$tableName")
.trigger(Trigger.ProcessingTime("60 seconds"))
.start())
I added a column while the streaming job was running. I was expecting it to print the new column that I added. It did not. Does it not dynamically refresh the schema from the version information in the payload?

How to configure the Schema Registry and Avro serializer of Confluent with Spark Structured Streaming? [duplicate]

Serialization of transform function in checkpointing

I'm trying to understand Spark Streaming's RDD transformations and checkpointing in the context of serialization. Consider the following example Spark Streaming app:
private val helperObject = HelperObject()

private def createStreamingContext(): StreamingContext = {
  val conf = new SparkConf()
    .setAppName(Constants.SparkAppName)
    .setIfMissing("spark.master", Constants.SparkMasterDefault)

  implicit val streamingContext = new StreamingContext(
    new SparkContext(conf),
    Seconds(Constants.SparkStreamingBatchSizeDefault))

  val myStream = StreamUtils.createStream()
  myStream.transform(transformTest(_)).print()

  streamingContext
}

def transformTest(rdd: RDD[String]): RDD[String] = {
  rdd.map(str => helperObject.doSomething(str))
}

val ssc = StreamingContext.getOrCreate(Settings.progressDir,
  createStreamingContext)

ssc.start()

while (true) {
  helperObject.setData(...)
}
From what I've read in other SO posts, transformTest will be invoked on the driver program once for every batch after streaming starts. Assuming createStreamingContext is invoked (no checkpoint is available), I would expect that the instance of helperObject defined up top would be serialized out to workers once per batch, hence picking up the changes applied to it via helperObject.setData(...). Is this the case?
Now, if createStreamingContext is not invoked (a checkpoint is available), then I would expect that the instance of helperObject cannot possibly be picked up for each batch, since it can't have been captured if createStreamingContext is not executed. Spark Streaming must have serialized helperObject as part of the checkpoint, correct?
Is it possible to update helperObject throughout execution from the driver program when using checkpointing? If so, what's the best approach?
Will helperObject be serialized to each executor?
Ans: Yes.
val helperObject = Instantiate_SomeHow()
rdd.map { _.SomeFunctionUsing(helperObject) }
Spark Streaming must have serialized helperObject as part of the checkpoint, correct?
Ans: Yes.
If you wish to refresh your helperObject behaviour for each RDD operation, you can still do that by making your helperObject more intelligent: don't send the helperObject directly, but send a function with the signature () => helperObject_Class.
Since it is a function, it is serializable. This is a very common design pattern used for sending objects that are not serializable, e.g. a database connection object, or for your use case here.
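A minimal sketch of that pattern (hypothetical names; HelperObject here stands in for any non-serializable helper):
// Ship a recipe for building the helper instead of the instance itself.
class HelperObject {                       // not Serializable
  def doSomething(s: String): String = s.toUpperCase
}

val makeHelper: () => HelperObject = () => new HelperObject()  // the function IS serializable

rdd.mapPartitions { iter =>
  val helper = makeHelper()                // built on the executor, once per partition
  iter.map(helper.doSomething)
}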
A fuller example is given below, from Kafka exactly-once semantics using a database:
package example
import kafka.serializer.StringDecoder
import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import scalikejdbc._
import com.typesafe.config.ConfigFactory
import org.apache.spark.{SparkContext, SparkConf, TaskContext}
import org.apache.spark.SparkContext._
import org.apache.spark.streaming._
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka.{KafkaUtils, HasOffsetRanges, OffsetRange}
/** exactly-once semantics from kafka, by storing offsets in the same transaction as the results
Offsets and results will be stored per-batch, on the driver
*/
object TransactionalPerBatch {
def main(args: Array[String]): Unit = {
val conf = ConfigFactory.load
val kafkaParams = Map(
"metadata.broker.list" -> conf.getString("kafka.brokers")
)
val jdbcDriver = conf.getString("jdbc.driver")
val jdbcUrl = conf.getString("jdbc.url")
val jdbcUser = conf.getString("jdbc.user")
val jdbcPassword = conf.getString("jdbc.password")
val ssc = setupSsc(kafkaParams, jdbcDriver, jdbcUrl, jdbcUser, jdbcPassword)()
ssc.start()
ssc.awaitTermination()
}
def setupSsc(
kafkaParams: Map[String, String],
jdbcDriver: String,
jdbcUrl: String,
jdbcUser: String,
jdbcPassword: String
)(): StreamingContext = {
val ssc = new StreamingContext(new SparkConf, Seconds(60))
SetupJdbc(jdbcDriver, jdbcUrl, jdbcUser, jdbcPassword)
// begin from the the offsets committed to the database
val fromOffsets = DB.readOnly { implicit session =>
sql"select topic, part, off from txn_offsets".
map { resultSet =>
TopicAndPartition(resultSet.string(1), resultSet.int(2)) -> resultSet.long(3)
}.list.apply().toMap
}
val stream: InputDStream[(String,Long)] = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder, (String, Long)](
ssc, kafkaParams, fromOffsets,
// we're just going to count messages per topic, don't care about the contents, so convert each message to (topic, 1)
(mmd: MessageAndMetadata[String, String]) => (mmd.topic, 1L))
stream.foreachRDD { rdd =>
// Note this block is running on the driver
// Cast the rdd to an interface that lets us get an array of OffsetRange
val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
// simplest possible "metric", namely a count of messages per topic
// Notice the aggregation is done using spark methods, and results collected back to driver
val results = rdd.reduceByKey {
// This is the only block of code running on the executors.
// reduceByKey did a shuffle, but that's fine, we're not relying on anything special about partitioning here
_+_
}.collect
// Back to running on the driver
// localTx is transactional, if metric update or offset update fails, neither will be committed
DB.localTx { implicit session =>
// store metric results
results.foreach { pair =>
val (topic, metric) = pair
val metricRows = sql"""
update txn_data set metric = metric + ${metric}
where topic = ${topic}
""".update.apply()
if (metricRows != 1) {
throw new Exception(s"""
Got $metricRows rows affected instead of 1 when attempting to update metrics for $topic
""")
}
}
// store offsets
offsetRanges.foreach { osr =>
val offsetRows = sql"""
update txn_offsets set off = ${osr.untilOffset}
where topic = ${osr.topic} and part = ${osr.partition} and off = ${osr.fromOffset}
""".update.apply()
if (offsetRows != 1) {
throw new Exception(s"""
Got $offsetRows rows affected instead of 1 when attempting to update offsets for
${osr.topic} ${osr.partition} ${osr.fromOffset} -> ${osr.untilOffset}
Was a partition repeated after a worker failure?
""")
}
}
}
}
ssc
}
}

Creating Two DStream from Kafka topic in Spark Streaming not working

In my Spark Streaming application I am creating two DStreams from two Kafka topics. I am doing so because I need to process the two DStreams differently. Below is the code example:
object KafkaConsumerTest3 {
  var sc: SparkContext = null

  def main(args: Array[String]) {
    Logger.getLogger("org").setLevel(Level.OFF)
    Logger.getLogger("akka").setLevel(Level.OFF)

    val Array(zkQuorum, group, topics1, topics2, numThreads) =
      Array("localhost:2181", "group3", "test_topic4", "test_topic5", "5")

    val sparkConf = new SparkConf().setAppName("SparkConsumer").setMaster("local[2]")
    sc = new SparkContext(sparkConf)
    val ssc = new StreamingContext(sc, Seconds(2))

    val topicMap1 = topics1.split(",").map((_, numThreads.toInt)).toMap
    val topicMap2 = topics2.split(",").map((_, numThreads.toInt)).toMap

    val lines2 = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap2).map(_._2)
    val lines1 = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap1).map(_._2)

    lines2.foreachRDD { rdd => rdd.foreach { println } }
    lines1.foreachRDD { rdd => rdd.foreach { println } }

    ssc.start()
    ssc.awaitTermination()
  }
}
Both topics may or may not have data. In my case the first topic is not currently getting data but the second topic is. Still, my Spark application is not printing any data, and there is no exception either.
Is there anything I am missing? Or how do I resolve this issue?
Found the issue with the above code. The problem is that we used local[2] as the master while registering two receivers. Increasing the number of threads solves the problem.
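A minimal sketch of the fix (the thread count is an assumption; it just needs to exceed the number of receivers so at least one core is left for processing):
// Two receiver-based streams each pin one core, so local[2] leaves nothing for processing.
val sparkConf = new SparkConf().setAppName("SparkConsumer").setMaster("local[4]")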

How to work with real time streaming data/logs using spark streaming?

I am a newbie to Spark and Scala.
I want to implement a REAL TIME Spark consumer which reads network logs on a per-minute basis [fetching around 1 GB of JSON log lines/minute] from a Kafka publisher and finally stores the aggregated values in Elasticsearch.
Aggregation is based on a few values [like bytes_in, bytes_out, etc.] using a composite key [like client MAC, client IP, server MAC, server IP, etc.].
Spark Consumer which I have written is:
object LogsAnalyzerScalaCS{
def main(args : Array[String]) {
val sparkConf = new SparkConf().setAppName("LOGS-AGGREGATION")
sparkConf.set("es.nodes", "my ip address")
sparkConf.set("es.port", "9200")
sparkConf.set("es.index.auto.create", "true")
sparkConf.set("es.nodes.discovery", "false")
val elasticResource = "conrec_1min/1minute"
val ssc = new StreamingContext(sparkConf, Seconds(30))
val zkQuorum = "my zk quorum IPs:2181"
val consumerGroupId = "LogsConsumer"
val topics = "Logs"
val topicMap = topics.split(",").map((_,3)).toMap
val json = KafkaUtils.createStream(ssc, zkQuorum, consumerGroupId, topicMap)
val logJSON = json.map(_._2)
try{
logJSON.foreachRDD( rdd =>{
if(!rdd.isEmpty()){
val sqlContext = SQLContextSingleton.getInstance(rdd.sparkContext)
import sqlContext.implicits._
val df = sqlContext.read.json(rdd)
val groupedData =
((df.groupBy("id","start_time_formated","l2_c","l3_c",
"l4_c","l2_s","l3_s","l4_s")).agg(count("f_id") as "total_f", sum("p_out") as "total_p_out",sum("p_in") as "total_p_in",sum("b_out") as "total_b_out",sum("b_in") as "total_b_in", sum("duration") as "total_duration"))
val dataForES = groupedData.withColumnRenamed("start_time_formated", "start_time")
dataForES.saveToEs(elasticResource)
dataForES.show();
}
})
}
catch{
case e: Exception => print("Exception has occurred : "+e.getMessage)
}
ssc.start()
ssc.awaitTermination()
}
object SQLContextSingleton {
@transient private var instance: org.apache.spark.sql.SQLContext = _
def getInstance(sparkContext: SparkContext): org.apache.spark.sql.SQLContext = {
if (instance == null) {
instance = new org.apache.spark.sql.SQLContext(sparkContext)
}
instance
}
}
}
First of all, I would like to know whether my approach is correct at all [considering I need 1-minute log aggregation].
There seems to be an issue using this code:
This consumer will pull data from the Kafka broker every 30 seconds and save the final aggregation to Elasticsearch for that 30-second window, hence increasing the number of rows in Elasticsearch for a unique key [at least 2 entries per minute]. The UI tool [let's say Kibana] then needs to do further aggregation. If I increase the polling time from 30 sec to 60 sec, it takes a lot of time to aggregate and hence no longer remains real time.
I want to implement it in such a way that only one row per key gets saved in Elasticsearch. Hence I want to perform aggregation until I stop getting new key values in the dataset pulled from the Kafka broker [on a per-minute basis]. After doing some googling I have found that this could be achieved using groupByKey() and updateStateByKey(), but I am not able to make out how I could use these in my case [should I convert the JSON log lines into a string of log lines with flat values and then use these functions there]? If I use these functions, when should I save the final aggregated values into Elasticsearch?
Is there any other way of achieving it?
Your quick help will be appreciated.
Regards,
Bhupesh
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.sql.SQLContext
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
object Main {
def main(args: Array[String]): Unit = {
val conf = new SparkConf().setAppName("KafkaWordCount").setMaster("local[*]")
val ssc = new StreamingContext(conf, Seconds(15))
val kafkaParams = Map[String, Object](
"bootstrap.servers" -> "localhost:9092",
"key.deserializer" -> classOf[StringDeserializer],
"value.deserializer" -> classOf[StringDeserializer],
"group.id" -> "group1",
"auto.offset.reset" -> "earliest",
"enable.auto.commit" -> (false: java.lang.Boolean)
)//,localhost:9094,localhost:9095"
val topics = Array("test")
val stream = KafkaUtils.createDirectStream[String, String](
ssc,
PreferConsistent,
Subscribe[String, String](topics, kafkaParams)
)
val out = stream.map(record =>
record.value
)
val words = out.flatMap(_.split(" "))
val count = words.map(word => (word, 1))
val wdc = count.reduceByKey(_+_)
val sqlContext = SQLContext.getOrCreate(SparkContext.getOrCreate())
wdc.foreachRDD{rdd=>
val es = sqlContext.createDataFrame(rdd).toDF("word","count")
import org.elasticsearch.spark.sql._
es.saveToEs("wordcount/testing")
es.show()
}
ssc.start()
ssc.awaitTermination()
}
}
To see full example and sbt
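Regarding the updateStateByKey() part of the question, here is a minimal sketch of that pattern (the key and value types are hypothetical and not tied to the log schema above):
// Keep one running total per composite key across micro-batches.
// updateStateByKey requires checkpointing to be enabled.
ssc.checkpoint("/tmp/spark-checkpoints")   // hypothetical path

val updateTotals: (Seq[Long], Option[Long]) => Option[Long] =
  (newValues, state) => Some(newValues.sum + state.getOrElse(0L))

// keyedBytes: DStream[(String, Long)], e.g. (compositeKey, bytes_in)
val totalsPerKey = keyedBytes.updateStateByKey(updateTotals)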
apache-spark, scala, hadoop, kafka, apache-spark-sql, spark-streaming, apache-spark-2.0, elastic
