Stream data from one kafka topic to another using pyspark - apache-spark

I have been trying to stream some sample data using pyspark from one Kafka topic to another (I eventually want to apply some transformations, but I could not get even the basic data movement to work). Below is my Spark code.
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode
from pyspark.sql.functions import split
from pyspark.sql.types import StructType, StringType, IntegerType
from pyspark.sql.functions import from_json, col
import time
confluentApiKey = 'someapikeyvalue'
confluentSecret = 'someapikey'
spark = SparkSession.builder \
    .appName("repartition-job") \
    .config('spark.jars.packages', 'org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1') \
    .getOrCreate()
df = spark \
    .readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "pkc-cloud:9092") \
    .option("subscribe", "test1") \
    .option("topic", "test1") \
    .option("sasl.mechanisms", "PLAIN") \
    .option("security.protocol", "SASL_SSL") \
    .option("sasl.username", confluentApiKey) \
    .option("kafka.sasl.jaas.config", "kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule required username='{}' password='{}';".format(confluentApiKey, confluentSecret)) \
    .option("kafka.ssl.endpoint.identification.algorithm", "https") \
    .option("sasl.password", confluentSecret) \
    .option("startingOffsets", "earliest") \
    .option("basic.auth.credentials.source", "USER_INFO") \
    .option("failOnDataLoss", "true") \
    .load()
df.printSchema()
query = df \
    .selectExpr("CAST(key AS STRING) AS key", "to_json(struct(*)) AS value") \
    .writeStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "pkc-cloud:9092") \
    .option("topic", "test2") \
    .option("sasl.mechanisms", "PLAIN") \
    .option("security.protocol", "SASL_SSL") \
    .option("sasl.username", confluentApiKey) \
    .option("kafka.sasl.jaas.config", "kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule required username='{}' password='{}';".format(confluentApiKey, confluentSecret)) \
    .option("kafka.ssl.endpoint.identification.algorithm", "https") \
    .option("sasl.password", confluentSecret) \
    .option("startingOffsets", "latest") \
    .option("basic.auth.credentials.source", "USER_INFO") \
    .option("checkpointLocation", "/tmp/checkpoint") \
    .start()
The schema prints fine:
|-- key: binary (nullable = true)
|-- value: binary (nullable = true)
|-- topic: string (nullable = true)
|-- partition: integer (nullable = true)
|-- offset: long (nullable = true)
|-- timestamp: timestamp (nullable = true)
|-- timestampType: integer (nullable = true)
But when attempting to write to the other Kafka topic using writeStream, I see the logs below: no data is written, and Spark shuts down.
22/02/04 18:29:26 INFO CheckpointFileManager: Writing atomically to file:/tmp/checkpoint/metadata using temp file file:/tmp/checkpoint/.metadata.e6c58f93-5c1c-4f26-97cf-a8d3ed389a57.tmp
22/02/04 18:29:26 INFO CheckpointFileManager: Renamed temp file file:/tmp/checkpoint/.metadata.e6c58f93-5c1c-4f26-97cf-a8d3ed389a57.tmp to file:/tmp/checkpoint/metadata
22/02/04 18:29:27 INFO MicroBatchExecution: Starting [id = 71f0aeb8-46fc-49b5-8baf-3b83cb4df71f, runId = 7a274767-8830-448c-b9bc-d03217cd4465]. Use file:/tmp/checkpoint to store the query checkpoint.
22/02/04 18:29:27 INFO MicroBatchExecution: Reading table [org.apache.spark.sql.kafka010.KafkaSourceProvider$KafkaTable#3be72f6d] from DataSourceV2 named 'kafka' [org.apache.spark.sql.kafka010.KafkaSourceProvider#276c9fdc]
22/02/04 18:29:27 INFO SparkUI: Stopped Spark web UI at http://spark-sample-9d328d7ec5fee0bc-driver-svc.default.svc:4045
22/02/04 18:29:27 INFO KubernetesClusterSchedulerBackend: Shutting down all executors
22/02/04 18:29:27 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asking each executor to shut down
22/02/04 18:29:27 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)
22/02/04 18:29:27 INFO MicroBatchExecution: Starting new streaming query.
22/02/04 18:29:27 INFO MicroBatchExecution: Stream started from {}
22/02/04 18:29:27 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
22/02/04 18:29:27 INFO MemoryStore: MemoryStore cleared
22/02/04 18:29:27 INFO BlockManager: BlockManager stopped
22/02/04 18:29:27 INFO BlockManagerMaster: BlockManagerMaster stopped
22/02/04 18:29:27 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
22/02/04 18:29:28 INFO SparkContext: Successfully stopped SparkContext
22/02/04 18:29:28 INFO ConsumerConfig: ConsumerConfig values:
....
....
....
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
22/02/04 18:29:28 INFO ShutdownHookManager: Shutdown hook called
22/02/04 18:29:28 INFO ShutdownHookManager: Deleting directory /tmp/spark-7acecef5-0f0b-4b9a-af81-c8aa12f7fcad
22/02/04 18:29:28 INFO AppInfoParser: Kafka version: 2.4.1
22/02/04 18:29:28 INFO AppInfoParser: Kafka commitId: c57222ae8cd7866b
22/02/04 18:29:28 INFO AppInfoParser: Kafka startTimeMs: 1643999368467
22/02/04 18:29:28 INFO ShutdownHookManager: Deleting directory /var/data/spark-641f2e65-8f10-46b9-9821-d3b1f3536c0e/spark-1a103622-3329-4444-8e69-40f5a341c372/pyspark-59e822b4-a4a4-403b-9937-170d99c67584
22/02/04 18:29:28 INFO KafkaConsumer: [Consumer clientId=consumer-spark-kafka-source-75381ad2-1ce9-4e2b-a0b7-18d6ecb5ea8b--2090736517-driver-0-1, groupId=spark-kafka-source-75381ad2-1ce9-4e2b-a0b7-18d6ecb5ea8b--2090736517-driver-0] Subscribed to topic(s): test1
22/02/04 18:29:28 INFO ShutdownHookManager: Deleting directory /var/data/spark-641f2e65-8f10-46b9-9821-d3b1f3536c0e/spark-1a103622-3329-4444-8e69-40f5a341c372
22/02/04 18:29:28 INFO MetricsSystemImpl: Stopping s3a-file-system metrics system...
22/02/04 18:29:28 INFO MetricsSystemImpl: s3a-file-system metrics system stopped.
22/02/04 18:29:28 INFO MetricsSystemImpl: s3a-file-system metrics system shutdown complete.
Also, I sometimes see the logs below, where the Kafka connection fails to be established.
22/02/06 04:50:55 WARN NetworkClient: [Consumer clientId=consumer-spark-kafka-source-fc21e146-82f0-4fc7-a2da-34f3e8f70026-289148490-driver-0-1, groupId=spark-kafka-source-fc21e146-82f0-4fc7-a2da-34f3e8f70026-289148490-driver-0] Bootstrap broker pkc-.confluent.cloud:9092 (id: -1 rack: null) disconnected
22/02/06 04:50:56 WARN NetworkClient: [Consumer clientId=consumer-spark-kafka-source-fc21e146-82f0-4fc7-a2da-34f3e8f70026-289148490-driver-0-1, groupId=spark-kafka-source-fc21e146-82f0-4fc7-a2da-34f3e8f70026-289148490-driver-0] Bootstrap broker pkc-.confluent.cloud:9092 (id: -1 rack: null) disconnected
22/02/06 04:50:57 WARN NetworkClient: [Consumer clientId=consumer-spark-kafka-source-fc21e146-82f0-4fc7-a2da-34f3e8f70026-289148490-driver-0-1, groupId=spark-kafka-source-fc21e146-82f0-4fc7-a2da-34f3e8f70026-289148490-driver-0] Bootstrap broker pkc-.confluent.cloud:9092 (id: -1 rack: null) disconnected
22/02/06 04:50:58 WARN NetworkClient: [Consumer clientId=consumer-spark-kafka-source-fc21e146-82f0-4fc7-a2da-34f3e8f70026-289148490-driver-0-1, groupId=spark-kafka-source-fc21e146-82f0-4fc7-a2da-34f3e8f70026-289148490-driver-0] Bootstrap broker pkc-.confluent.cloud:9092 (id: -1 rack: null) disconnected
What am I doing wrong?
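No answer is recorded in this thread, but one likely culprit (my reading, not stated in the question): Spark's Kafka source only forwards client properties whose names carry the `kafka.` prefix, so options like `sasl.mechanisms`, `security.protocol`, `sasl.username`, and `sasl.password` above are silently ignored, leaving the client unauthenticated and the broker disconnecting it. A minimal sketch of a corrected option set, with placeholder credentials:

```python
# Hedged sketch (not from the thread): every Kafka client property must be
# prefixed with "kafka." for Spark to pass it through to the consumer;
# Spark-level options (subscribe, startingOffsets, failOnDataLoss) are not.
confluent_api_key = "someapikeyvalue"   # placeholder, as in the question
confluent_secret = "someapikey"

corrected_reader_options = {
    "kafka.bootstrap.servers": "pkc-cloud:9092",
    "subscribe": "test1",               # the reader takes "subscribe"; "topic" is a writer option
    "kafka.security.protocol": "SASL_SSL",
    "kafka.sasl.mechanism": "PLAIN",    # Java client property is singular, with the kafka. prefix
    "kafka.sasl.jaas.config": (
        # the "kafkashaded." class prefix is Databricks-specific; drop it elsewhere
        "org.apache.kafka.common.security.plain.PlainLoginModule required "
        "username='{}' password='{}';".format(confluent_api_key, confluent_secret)
    ),
    "kafka.ssl.endpoint.identification.algorithm": "https",
    "startingOffsets": "earliest",
    "failOnDataLoss": "true",
}

# Sanity check: all Kafka client properties carry the prefix.
spark_level = {"subscribe", "startingOffsets", "failOnDataLoss"}
assert all(k.startswith("kafka.") for k in corrected_reader_options if k not in spark_level)
```

These would be applied with `spark.readStream.format("kafka").options(**corrected_reader_options).load()`, and the same `kafka.`-prefixed names apply on the writeStream side. Note that `basic.auth.credentials.source` is a Schema Registry client property, not a Kafka consumer one, so it has no effect here either way.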

Related

cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.sql.execution.datasources.v2.DataSourceRDD

I am trying to submit a pyspark job to a k8s Spark cluster using Airflow. In that Spark job I am using the writeStream foreachBatch function to write streaming data, and irrespective of the sink type I face this issue only when trying to write data:
Inside the Spark cluster:
spark 3.3.0
pyspark 3.3
scala 2.12.15
OpenJDK 64-Bit Server VM, 11.0.15
Inside Airflow:
spark 3.1.2
pyspark 3.1.2
scala 2.12.10
OpenJDK 64-Bit Server VM, 1.8.0
Dependencies: org.scala-lang:scala-library:2.12.8, org.apache.spark:spark-sql-kafka-0-10_2.12:3.3.0, org.apache.spark:spark-sql_2.12:3.3.0, org.apache.spark:spark-core_2.12:3.3.0, org.postgresql:postgresql:42.3.3
The DAG I am using to submit it is:
import airflow
from datetime import timedelta
from airflow import DAG
from time import sleep
from datetime import datetime
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator
dag = DAG(
    dag_id='testpostgres.py',
    schedule_interval=None,
    start_date=datetime(2022, 1, 1),
    catchup=False,
)
spark_job = SparkSubmitOperator(
    application='/usr/local/airflow/data/testpostgres.py',
    conn_id='spark_kcluster',
    task_id='spark_job_test',
    dag=dag,
    packages="org.scala-lang:scala-library:2.12.8,org.apache.spark:spark-sql-kafka-0-10_2.12:3.3.0,org.apache.spark:spark-sql_2.12:3.3.0,org.apache.spark:spark-core_2.12:3.3.0,org.postgresql:postgresql:42.3.3",
    conf={
        'deploy-mode': 'cluster',
        'executor_cores': 1,
        'EXECUTORS_MEM': '2G',
        'name': 'spark-py',
        'spark.kubernetes.namespace': 'sandbox',
        'spark.kubernetes.file.upload.path': '/usr/local/airflow/data',
        'spark.kubernetes.container.image': '**********',
        'spark.kubernetes.container.image.pullPolicy': 'IfNotPresent',
        'spark.kubernetes.authenticate.driver.serviceAccountName': 'spark',
        'spark.kubernetes.driver.volumes.persistentVolumeClaim.rwopvc.options.claimName': 'data-pvc',
        'spark.kubernetes.driver.volumes.persistentVolumeClaim.rwopvc.mount.path': '/usr/local/airflow/data',
        'spark.driver.extraJavaOptions': '-Divy.cache.dir=/tmp -Divy.home=/tmp'
    }
)
This is the job I am trying to submit:
from pyspark.sql.functions import *
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.functions import dayofweek
from pyspark.sql.functions import date_format
from pyspark.sql.functions import hour
from functools import reduce
from pyspark.sql.types import DoubleType, StringType, ArrayType
import pandas as pd
import json
spark = SparkSession.builder.appName('spark').getOrCreate()
kafka_topic_name = '****'
kafka_bootstrap_servers = '*********' + ':' + '*****'
streaming_dataframe = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", kafka_bootstrap_servers)
    .option("subscribe", kafka_topic_name)
    .option("startingOffsets", "earliest")
    .load()
)
streaming_dataframe = streaming_dataframe.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
dataframe_schema = '******'
streaming_dataframe = (
    streaming_dataframe
    .select(from_csv(col("value"), dataframe_schema).alias("pipeline"))
    .select("pipeline.*")
)
tumblingWindows = (
    streaming_dataframe
    .withWatermark("timeStamp", "48 hour")
    .groupBy(window("timeStamp", "24 hour", "1 hour"), "phoneNumber")
    .agg(F.first(F.col("duration")).alias("firstDuration"))
)
tumblingWindows = tumblingWindows.withColumn("start_window", F.col('window')['start'])
tumblingWindows = tumblingWindows.withColumn("end_window", F.col('window')['end'])
tumblingWindows = tumblingWindows.drop('window')
def postgres_write(tumblingWindows, epoch_id):
    tumblingWindows.write.jdbc(url=db_target_url, table=table_postgres, mode='append', properties=db_target_properties)

db_target_url = 'jdbc:postgresql://' + '*******' + ':' + '****' + '/' + 'test'
table_postgres = '******'
db_target_properties = {
    'user': 'postgres',
    'password': 'postgres',
    'driver': 'org.postgresql.Driver'
}
query = tumblingWindows.writeStream.foreachBatch(postgres_write).start().awaitTermination()
Error logs:
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2672)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2608)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2607)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2607)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1182)
at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1182)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1182)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2860)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2802)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2791)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:952)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2228)
at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:377)
... 42 more
Caused by: java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.sql.execution.datasources.v2.DataSourceRDDPartition.inputPartitions of type scala.collection.Seq in instance of org.apache.spark.sql.execution.datasources.v2.DataSourceRDDPartition
at java.base/java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(Unknown Source)
at java.base/java.io.ObjectStreamClass$FieldReflector.checkObjectFieldValueTypes(Unknown Source)
at java.base/java.io.ObjectStreamClass.checkObjFieldValueTypes(Unknown Source)
at java.base/java.io.ObjectInputStream.defaultCheckFieldValues(Unknown Source)
at java.base/java.io.ObjectInputStream.readSerialData(Unknown Source)
at java.base/java.io.ObjectInputStream.readOrdinaryObject(Unknown Source)
at java.base/java.io.ObjectInputStream.readObject0(Unknown Source)
at java.base/java.io.ObjectInputStream.defaultReadFields(Unknown Source)
at java.base/java.io.ObjectInputStream.readSerialData(Unknown Source)
at java.base/java.io.ObjectInputStream.readOrdinaryObject(Unknown Source)
at java.base/java.io.ObjectInputStream.readObject0(Unknown Source)
at java.base/java.io.ObjectInputStream.readObject(Unknown Source)
at java.base/java.io.ObjectInputStream.readObject(Unknown Source)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:87)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:129)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:507)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Traceback (most recent call last):
File "/usr/local/airflow/data/spark-upload-d03175bc-8c50-4baf-8383-a203182f16c0/debug.py", line 20, in <module>
streaming_dataframe.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")\
File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/streaming.py", line 107, in awaitTermination
File "/opt/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py", line 1321, in __call__
File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 196, in deco
pyspark.sql.utils.StreamingQueryException: Query [id = d0e140c1-830d-49c8-88b7-90b82d301408, runId = c0f38f58-6571-4fda-b3e0-98e4ffaf8c7a] terminated with exception: Writing job aborted
22/08/24 10:12:53 INFO SparkUI: Stopped Spark web UI at ************************
22/08/24 10:12:53 INFO KubernetesClusterSchedulerBackend: Shutting down all executors
22/08/24 10:12:53 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Asking each executor to shut down
22/08/24 10:12:53 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed.
22/08/24 10:12:53 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
22/08/24 10:12:53 INFO MemoryStore: MemoryStore cleared
22/08/24 10:12:53 INFO BlockManager: BlockManager stopped
22/08/24 10:12:53 INFO BlockManagerMaster: BlockManagerMaster stopped
22/08/24 10:12:53 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
22/08/24 10:12:54 INFO SparkContext: Successfully stopped SparkContext
22/08/24 10:12:54 INFO ShutdownHookManager: Shutdown hook called
22/08/24 10:12:54 INFO ShutdownHookManager: Deleting directory /var/data/spark-32ef85e0-e85c-4ac6-a46d-d3379ca58468/spark-adecf44a-dc60-4a85-bbe3-bc125f5cc39f/pyspark-f3ffaa5e-a490-464a-98d2-fbce223628eb
22/08/24 10:12:54 INFO ShutdownHookManager: Deleting directory /var/data/spark-32ef85e0-e85c-4ac6-a46d-d3379ca58468/spark-adecf44a-dc60-4a85-bbe3-bc125f5cc39f
22/08/24 10:12:54 INFO ShutdownHookManager: Deleting directory /tmp/spark-5acdd5e6-7f6e-45ec-adae-e98862e1537c
I faced this issue recently. I think it occurs when shuffling data coming from Kafka.
I fixed it by loading all dependencies (jars) of org.apache.spark:spark-sql-kafka-0-10_2.12:3.3.0 into the project. You can find them here.
For now, I don't know which subset of them is sufficient.
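A related point worth checking (my assumption, not part of the answer above): this ClassCastException on DataSourceRDDPartition.inputPartitions is a common symptom of the driver and executors running different Spark builds, and here Airflow bundles Spark 3.1.2 while the cluster runs 3.3.0. A sketch of a --packages coordinate list pinned to the cluster's build:

```python
# Hedged sketch: build one --packages list whose versions all match the
# cluster's Spark build (3.3.0, Scala 2.12). Shipping jars compiled against
# a different Spark (e.g. the client's 3.1.2) is the suspected cause of the
# serialization mismatch.
CLUSTER_SPARK = "3.3.0"
SCALA_BINARY = "2.12"

packages = ",".join([
    f"org.apache.spark:spark-sql-kafka-0-10_{SCALA_BINARY}:{CLUSTER_SPARK}",
    "org.postgresql:postgresql:42.3.3",
])
# spark-sql and spark-core are already provided by the cluster runtime and
# should not be re-shipped via --packages at a different version.
```

This string would replace the `packages=` argument of the SparkSubmitOperator; `--packages` resolves transitive dependencies itself, so listing the kafka connector and the JDBC driver should be enough, provided the versions line up.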

spark streaming kafka : Unknown error fetching data for topic-partition

I'm trying to read a Kafka topic from a Spark cluster using the Structured Streaming API with Spark's Kafka integration.
val sparkSession = SparkSession.builder()
  .master("local[*]")
  .appName("some-app")
  .getOrCreate()
Kafka stream creation
import sparkSession.implicits._
val dataFrame = sparkSession
  .readStream
  .format("kafka")
  .option("subscribepattern", "preprod-*")
  .option("kafka.bootstrap.servers", "<brokerUrl>:9094")
  .option("kafka.ssl.protocol", "TLS")
  .option("kafka.security.protocol", "SSL")
  .option("kafka.ssl.key.password", secretPassword)
  .option("kafka.ssl.keystore.location", "/tmp/xyz.jks")
  .option("kafka.ssl.keystore.password", secretPassword)
  .option("kafka.ssl.truststore.location", "/abc.jks")
  .option("kafka.ssl.truststore.password", secretPassword)
  .load()
  .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .as[(String, String)]
  .writeStream
  .format("console")
  .start()
  .awaitTermination()
I run it using the command:
/usr/local/spark/bin/spark-submit \
  --packages "org.apache.spark:spark-streaming-kafka-0-10_2.11:2.3.1,org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.1" \
  myjar.jar
I get the error below:
2018-09-28 07:29:23 INFO AbstractCoordinator:505 - Discovered coordinator brokerUrl.com:32400 (id: 2147483647 rack: null) for group spark-kafka-source-c72dcb79-f3bc-4dfd-86a5-9d14be48fa04-1188588017-executor.
2018-09-28 07:29:23 INFO AbstractCoordinator:505 - Discovered coordinator brokerUrl.com:32400 (id: 2147483647 rack: null) for group spark-kafka-source-c72dcb79-f3bc-4dfd-86a5-9d14be48fa04-1188588017-executor.
2018-09-28 07:29:23 INFO AbstractCoordinator:505 - Discovered coordinator brokerUrl.com:32400 (id: 2147483647 rack: null) for group spark-kafka-source-c72dcb79-f3bc-4dfd-86a5-9d14be48fa04-1188588017-executor.
2018-09-28 07:29:23 INFO AbstractCoordinator:505 - Discovered coordinator brokerUrl.com:32400 (id: 2147483647 rack: null) for group spark-kafka-source-c72dcb79-f3bc-4dfd-86a5-9d14be48fa04-1188588017-executor.
2018-09-28 07:29:47 WARN Fetcher:594 - Unknown error fetching data for topic-partition preprod-sanity-test-5
2018-09-28 07:30:25 WARN Fetcher:594 - Unknown error fetching data for topic-partition preprod-sanity-test-7
2018-09-28 07:30:27 WARN Fetcher:594 - Unknown error fetching data for topic-partition preprod-sanity-test-7
2018-09-28 07:30:27 WARN Fetcher:594 - Unknown error fetching data for topic-partition preprod-sanity-test-5
2018-09-28 07:30:50 WARN Fetcher:594 - Unknown error fetching data for topic-partition preprod-sanity-test-8
2018-09-28 07:30:50 WARN Fetcher:594 - Unknown error fetching data for topic-partition preprod-sanity-test-4
2018-09-28 07:30:50 WARN Fetcher:594 - Unknown error fetching data for topic-partition preprod-sanity-test-7
2018-09-28 07:30:50 WARN Fetcher:594 - Unknown error fetching data for topic-partition preprod-sanity-test-8
2018-09-28 07:30:50 WARN Fetcher:594 - Unknown error fetching data for topic-partition preprod-sanity-test-4
2018-09-28 07:30:50 WARN Fetcher:594 - Unknown error fetching data for topic-partition preprod-sanity-test-5
.....
....
and so on
What's your Kafka broker version? And how did you generate these messages?
If these messages have headers (https://issues.apache.org/jira/browse/KAFKA-4208), you will need a Kafka 0.11+ client to consume them, as older Kafka clients cannot read such messages. If so, you can use the following command:
/usr/local/spark/bin/spark-submit \
  --packages "org.apache.kafka:kafka-clients:0.11.0.3,org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.1" \
  myjar.jar

Spark fails to load data frame from elasticsearch due to field format exception

My dataframe load fails with a NumberFormatException on one of the nested JSON fields when reading from Elasticsearch.
I am not providing any schema, as it should be inferred automatically from Elasticsearch.
package org.arc
import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.log4j._
import scala.io.Source
import java.nio.charset.CodingErrorAction
import scala.io.Codec
import org.apache.spark.storage.StorageLevel
import org.apache.spark._
import org.apache.spark.sql.SparkSession
import org.apache.spark.util.Utils
import org.apache.spark.sql.Dataset
import org.apache.spark.sql.expressions
import org.apache.spark.sql.functions.{ concat, lit }
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.udf
import org.apache.spark.sql.types._
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.types.{ StructType, StructField, StringType };
import org.apache.spark.serializer.KryoSerializer
object SparkOnES {
  def main(args: Array[String]) {
    val spark = SparkSession
      .builder()
      .appName("SparkESTest")
      .config("spark.master", "local[*]")
      .config("spark.sql.warehouse.dir", "C://SparkScala//SparkLocal//spark-warehouse")
      .enableHiveSupport()
      .getOrCreate()

    //1. Read sample JSON
    import spark.implicits._
    //val myjson = spark.read.json("C:\\Users\\jasjyotsinghj599\\Desktop\\SampleTest.json")
    //myjson.show(false)

    //2. Read data from ES
    val esdf = spark.read.format("org.elasticsearch.spark.sql")
      .option("es.nodes", "XXXXXX")
      .option("es.port", "80")
      .option("es.query", "?q=*")
      .option("es.nodes.wan.only", "true")
      .option("pushdown", "true")
      .option("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .load("batch_index/ticket")

    esdf.createOrReplaceTempView("esdf")
    spark.sql("Select * from esdf limit 1").show(false)
    val esdf_fltr_lt = esdf.take(1)
  }
}
The error stack says that it cannot parse the input field. Looking at the exception, the issue seems to be caused by a mismatch between the expected data type (int, float, double) and the received one (string):
Caused by: java.lang.NumberFormatException: For input string: "161.60"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:589)
at java.lang.Long.parseLong(Long.java:631)
at scala.collection.immutable.StringLike$class.toLong(StringLike.scala:277)
at scala.collection.immutable.StringOps.toLong(StringOps.scala:29)
at org.elasticsearch.spark.serialization.ScalaValueReader.parseLong(ScalaValueReader.scala:142)
at org.elasticsearch.spark.serialization.ScalaValueReader$$anonfun$longValue$1.apply(ScalaValueReader.scala:141)
at org.elasticsearch.spark.serialization.ScalaValueReader$$anonfun$longValue$1.apply(ScalaValueReader.scala:141)
at org.elasticsearch.spark.serialization.ScalaValueReader.checkNull(ScalaValueReader.scala:120)
at org.elasticsearch.spark.serialization.ScalaValueReader.longValue(ScalaValueReader.scala:141)
at org.elasticsearch.spark.serialization.ScalaValueReader.readValue(ScalaValueReader.scala:89)
at org.elasticsearch.spark.sql.ScalaRowValueReader.readValue(ScalaEsRowValueReader.scala:46)
at org.elasticsearch.hadoop.serialization.ScrollReader.parseValue(ScrollReader.java:770)
at org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:720)
... 25 more
18/04/25 23:33:53 WARN TaskSetManager: Lost task 3.0 in stage 1.0 (TID 4, localhost): org.elasticsearch.hadoop.rest.EsHadoopParsingException: Cannot parse value [161.60] for field [tvl_tkt_tot_chrg_amt]
at org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:723)
at org.elasticsearch.hadoop.serialization.ScrollReader.map(ScrollReader.java:867)
at org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:710)
at org.elasticsearch.hadoop.serialization.ScrollReader.readHitAsMap(ScrollReader.java:476)
at org.elasticsearch.hadoop.serialization.ScrollReader.readHit(ScrollReader.java:401)
at org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:296)
at org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:269)
at org.elasticsearch.hadoop.rest.RestRepository.scroll(RestRepository.java:393)
at org.elasticsearch.hadoop.rest.ScrollQuery.hasNext(ScrollQuery.java:92)
at org.elasticsearch.spark.rdd.AbstractEsRDDIterator.hasNext(AbstractEsRDDIterator.scala:61)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:784)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:784)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NumberFormatException: For input string: "161.60"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:589)
at java.lang.Long.parseLong(Long.java:631)
at scala.collection.immutable.StringLike$class.toLong(StringLike.scala:277)
at scala.collection.immutable.StringOps.toLong(StringOps.scala:29)
at org.elasticsearch.spark.serialization.ScalaValueReader.parseLong(ScalaValueReader.scala:142)
at org.elasticsearch.spark.serialization.ScalaValueReader$$anonfun$longValue$1.apply(ScalaValueReader.scala:141)
at org.elasticsearch.spark.serialization.ScalaValueReader$$anonfun$longValue$1.apply(ScalaValueReader.scala:141)
at org.elasticsearch.spark.serialization.ScalaValueReader.checkNull(ScalaValueReader.scala:120)
at org.elasticsearch.spark.serialization.ScalaValueReader.longValue(ScalaValueReader.scala:141)
at org.elasticsearch.spark.serialization.ScalaValueReader.readValue(ScalaValueReader.scala:89)
at org.elasticsearch.spark.sql.ScalaRowValueReader.readValue(ScalaEsRowValueReader.scala:46)
at org.elasticsearch.hadoop.serialization.ScrollReader.parseValue(ScrollReader.java:770)
at org.elasticsearch.hadoop.serialization.ScrollReader.read(ScrollReader.java:720)
... 25 more
18/04/25 23:33:53 INFO SparkContext: Invoking stop() from shutdown hook
18/04/25 23:33:53 INFO SparkUI: Stopped Spark web UI at http://10.1.2.244:4040
18/04/25 23:33:53 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/04/25 23:33:53 INFO MemoryStore: MemoryStore cleared
18/04/25 23:33:53 INFO BlockManager: BlockManager stopped
18/04/25 23:33:53 INFO BlockManagerMaster: BlockManagerMaster stopped
18/04/25 23:33:53 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/04/25 23:33:53 INFO SparkContext: Successfully stopped SparkContext
18/04/25 23:33:53 INFO ShutdownHookManager: Shutdown hook called
18/04/25 23:33:53 INFO ShutdownHookManager: Deleting directory
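No answer is recorded for this question. One workaround sketch (an assumption on my part): the index mapping evidently declares tvl_tkt_tot_chrg_amt as a long while documents hold "161.60", so the connector's parseLong fails. elasticsearch-hadoop's documented es.read.field.exclude setting can drop the field at read time until the index is reindexed with a double/float mapping:

```python
# Hedged workaround sketch: exclude the mis-mapped numeric field at read time
# via the documented elasticsearch-hadoop setting es.read.field.exclude.
# Hosts and index names mirror the question's placeholders.
es_read_options = {
    "es.nodes": "XXXXXX",      # placeholder host, as in the question
    "es.port": "80",
    "es.query": "?q=*",
    "es.nodes.wan.only": "true",
    "es.read.field.exclude": "tvl_tkt_tot_chrg_amt",
}
```

In the Scala code above, each entry would be passed as an `.option(key, value)` pair on `spark.read.format("org.elasticsearch.spark.sql")`. The proper fix, though, is to correct the index mapping so the field is a floating-point type.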

Exception while integrating spark with Kafka

I am doing Spark-Kafka streaming word count and built a jar using sbt.
When I do spark-submit, the following exception is thrown:
Exception in thread "streaming-start" java.lang.NoSuchMethodError: org.apache.hadoop.fs.FileStatus.isDirectory()Z
at org.apache.spark.streaming.util.FileBasedWriteAheadLog.initializeOrRecover(FileBasedWriteAheadLog.scala:245)
at org.apache.spark.streaming.util.FileBasedWriteAheadLog.<init>(FileBasedWriteAheadLog.scala:80)
at org.apache.spark.streaming.util.WriteAheadLogUtils$$anonfun$2.apply(WriteAheadLogUtils.scala:142)
at org.apache.spark.streaming.util.WriteAheadLogUtils$$anonfun$2.apply(WriteAheadLogUtils.scala:142)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.streaming.util.WriteAheadLogUtils$.createLog(WriteAheadLogUtils.scala:141)
at org.apache.spark.streaming.util.WriteAheadLogUtils$.createLogForDriver(WriteAheadLogUtils.scala:99)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker$$anonfun$createWriteAheadLog$1.apply(ReceivedBlockTracker.scala:256)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker$$anonfun$createWriteAheadLog$1.apply(ReceivedBlockTracker.scala:254)
at scala.Option.map(Option.scala:146)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker.createWriteAheadLog(ReceivedBlockTracker.scala:254)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker.<init>(ReceivedBlockTracker.scala:77)
at org.apache.spark.streaming.scheduler.ReceiverTracker.<init>(ReceiverTracker.scala:109)
at org.apache.spark.streaming.scheduler.JobScheduler.start(JobScheduler.scala:87)
at org.apache.spark.streaming.StreamingContext$$anonfun$liftedTree1$1$1.apply$mcV$sp(StreamingContext.scala:583)
at org.apache.spark.streaming.StreamingContext$$anonfun$liftedTree1$1$1.apply(StreamingContext.scala:578)
at org.apache.spark.streaming.StreamingContext$$anonfun$liftedTree1$1$1.apply(StreamingContext.scala:578)
at org.apache.spark.util.ThreadUtils$$anon$2.run(ThreadUtils.scala:126)
18/03/27 12:43:55 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#12010fd1{/streaming,null,AVAILABLE,#Spark}
18/03/27 12:43:55 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#552ed807{/streaming/json,null,AVAILABLE,#Spark}
18/03/27 12:43:55 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#7318daf8{/streaming/batch,null,AVAILABLE,#Spark}
18/03/27 12:43:55 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#3f1ddac2{/streaming/batch/json,null,AVAILABLE,#Spark}
18/03/27 12:43:55 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler#37864b77{/static/streaming,null,AVAILABLE,#Spark}
18/03/27 12:43:55 INFO streaming.StreamingContext: StreamingContext started
My spark-submit command:
spark-submit --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.2.0 \
  --class "KafkaWordCount" --master local[4] \
  scala_project_2.11-1.0.jar localhost:2181 test-consumer-group word-count 1
scala_version: 2.11.8
spark_version: 2.2.0
sbt_version: 1.0.3
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Minutes, Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object KafkaWordCount {
  def main(args: Array[String]) {
    val (zkQuorum, group, topics, numThreads) = ("localhost:2181", "test-consumer-group", "word-count", 1)
    val sparkConf = new SparkConf()
      .setMaster("local[*]")
      .setAppName("KafkaWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(2))
    ssc.checkpoint("checkpoint")
    val topicMap = topics.split(",").map((_, numThreads)).toMap
    val lines = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)
    val words = lines.flatMap(_.split(" "))
    words.foreachRDD(rdd => println("#####################rdd###################### " + rdd.first))
    val wordCounts = words.map(x => (x, 1L))
      .reduceByKeyAndWindow(_ + _, _ - _, Minutes(10), Seconds(2), 2)
    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}

Spark Streaming hangs after creating Kafka consumer

I am trying to get a very simple Kafka + Spark Streaming integration working.
On the kafka side, I cloned this repository (https://github.com/confluentinc/cp-docker-images) and did a docker-compose up to get an instance of zookeeper and kafka running. I created a topic called "foo" and added messages. In this case, kafka is running on port 29092.
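For reference, the topic creation and test messages were done roughly like this (the `kafka` service name and the internal listener ports are assumptions based on the defaults in that compose file; adjust to your setup):

```shell
# create the "foo" topic inside the kafka container
# (service name "kafka" and zookeeper address are assumptions)
docker-compose exec kafka kafka-topics --create \
  --zookeeper zookeeper:2181 \
  --replication-factor 1 --partitions 1 --topic foo

# produce a few test messages to the topic
docker-compose exec kafka bash -c \
  'printf "hello\nworld\n" | kafka-console-producer --broker-list localhost:29092 --topic foo'
```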
On the spark side, my build.sbt file looks like this:
name := "KafkaSpark"
version := "0.1"
scalaVersion := "2.11.12"

val sparkVersion = "2.2.0"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql" % sparkVersion,
  "org.apache.spark" %% "spark-streaming" % sparkVersion,
  "org.apache.spark" %% "spark-streaming-kafka-0-10" % sparkVersion
)
I was able to get the following code snippet running, consuming data from a terminal socket:
import org.apache.spark._
import org.apache.spark.streaming._

object SparkTest {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")
    val ssc = new StreamingContext(conf, Seconds(3))
    val lines = ssc.socketTextStream("localhost", 9999)
    val words = lines.flatMap(_.split(" "))
    val pairs = words.map(word => (word, 1))
    val wordCounts = pairs.reduceByKey(_ + _)
    // Print the first ten elements of each RDD generated in this DStream to the console
    wordCounts.print()
    ssc.start()            // Start the computation
    ssc.awaitTermination() // Wait for the computation to terminate
  }
}
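For this socket test, input was fed in from a second terminal with netcat; anything typed there shows up in the word counts printed every 3 seconds:

```shell
# listen on port 9999 and forward typed lines to the socketTextStream source
nc -lk 9999
```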
So Spark Streaming itself is working.
Now, I created the following to consume from kafka:
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.count
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.sql.types.{StringType, StructType, TimestampType}

object KafkaTest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .master("local")
      .appName("Spark Word Count")
      .getOrCreate()
    val ssc = new StreamingContext(spark.sparkContext, Seconds(3))
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:29092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "stream_group_id",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )
    val topics = Array("foo")
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams)
    )
    stream.foreachRDD { (rdd, time) =>
      val data = rdd.map(record => record.value)
      data.foreach(println)
      println(time)
    }
    ssc.start() // Start the computation
    ssc.awaitTermination()
  }
}
When it runs, I get the following in the console (I'm running this in IntelliJ). The process just hangs at the last line, after "subscribing" to the topic. I've tried subscribing to a topic that does not exist and I get the same result; it doesn't seem to throw an error even though the topic is missing. If I point it at a non-existent broker, I do get an error (Exception in thread "main" org.apache.kafka.common.KafkaException: Failed to construct kafka consumer), so it must be finding the broker when I use the proper port.
Any suggestions on how to correct this issue?
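One check worth running independently of Spark: since `auto.offset.reset` is set to `latest`, the stream will print nothing unless messages are produced *after* it starts, which can look like a hang. Consuming the topic with the standalone console consumer (ports as in the compose setup above; assumes the Confluent CLI tools are on the PATH) confirms whether the broker side is fine:

```shell
# read the topic directly; --from-beginning also shows messages produced earlier
kafka-console-consumer --bootstrap-server localhost:29092 --topic foo --from-beginning
```

If this prints the test messages, the broker and topic are healthy and the problem is on the Spark side (e.g. offset reset behavior, or no new messages arriving while the stream runs).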
Here's the log file:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/11/23 05:29:42 INFO SparkContext: Running Spark version 2.2.0
17/11/23 05:29:42 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/23 05:29:48 INFO SparkContext: Submitted application: Spark Word Count
17/11/23 05:29:48 INFO SecurityManager: Changing view acls to: jonathandick
17/11/23 05:29:48 INFO SecurityManager: Changing modify acls to: jonathandick
17/11/23 05:29:48 INFO SecurityManager: Changing view acls groups to:
17/11/23 05:29:48 INFO SecurityManager: Changing modify acls groups to:
17/11/23 05:29:48 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(jonathandick); groups with view permissions: Set(); users with modify permissions: Set(jonathandick); groups with modify permissions: Set()
17/11/23 05:29:48 INFO Utils: Successfully started service 'sparkDriver' on port 59606.
17/11/23 05:29:48 DEBUG SparkEnv: Using serializer: class org.apache.spark.serializer.JavaSerializer
17/11/23 05:29:48 INFO SparkEnv: Registering MapOutputTracker
17/11/23 05:29:48 INFO SparkEnv: Registering BlockManagerMaster
17/11/23 05:29:48 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/11/23 05:29:48 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/11/23 05:29:48 INFO DiskBlockManager: Created local directory at /private/var/folders/w2/njgz3jnd097cdybxcvp9c2hw0000gn/T/blockmgr-3a3feb00-0fdb-4bc5-867d-808ac65d7c8f
17/11/23 05:29:48 INFO MemoryStore: MemoryStore started with capacity 2004.6 MB
17/11/23 05:29:48 INFO SparkEnv: Registering OutputCommitCoordinator
17/11/23 05:29:49 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
17/11/23 05:29:49 WARN Utils: Service 'SparkUI' could not bind on port 4041. Attempting port 4042.
17/11/23 05:29:49 WARN Utils: Service 'SparkUI' could not bind on port 4042. Attempting port 4043.
17/11/23 05:29:49 INFO Utils: Successfully started service 'SparkUI' on port 4043.
17/11/23 05:29:49 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.67:4043
17/11/23 05:29:49 INFO Executor: Starting executor ID driver on host localhost
17/11/23 05:29:49 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 59613.
17/11/23 05:29:49 INFO NettyBlockTransferService: Server created on 192.168.1.67:59613
17/11/23 05:29:49 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/11/23 05:29:49 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.67, 59613, None)
17/11/23 05:29:49 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.67:59613 with 2004.6 MB RAM, BlockManagerId(driver, 192.168.1.67, 59613, None)
17/11/23 05:29:49 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.67, 59613, None)
17/11/23 05:29:49 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.1.67, 59613, None)
17/11/23 05:29:49 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/Users/jonathandick/IdeaProjects/KafkaSpark/spark-warehouse/').
17/11/23 05:29:49 INFO SharedState: Warehouse path is 'file:/Users/jonathandick/IdeaProjects/KafkaSpark/spark-warehouse/'.
17/11/23 05:29:50 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
17/11/23 05:29:50 WARN StreamingContext: spark.master should be set as local[n], n > 1 in local mode if you have receivers to get data, otherwise Spark jobs will not get resources to process the received data.
17/11/23 05:29:50 WARN KafkaUtils: overriding enable.auto.commit to false for executor
17/11/23 05:29:50 WARN KafkaUtils: overriding auto.offset.reset to none for executor
17/11/23 05:29:50 WARN KafkaUtils: overriding executor group.id to spark-executor-stream_group_id
17/11/23 05:29:50 WARN KafkaUtils: overriding receive.buffer.bytes to 65536 see KAFKA-3135
17/11/23 05:29:50 INFO DirectKafkaInputDStream: Slide time = 3000 ms
17/11/23 05:29:50 INFO DirectKafkaInputDStream: Storage level = Serialized 1x Replicated
17/11/23 05:29:50 INFO DirectKafkaInputDStream: Checkpoint interval = null
17/11/23 05:29:50 INFO DirectKafkaInputDStream: Remember interval = 3000 ms
17/11/23 05:29:50 INFO DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka010.DirectKafkaInputDStream#1a38eb73
17/11/23 05:29:50 INFO ForEachDStream: Slide time = 3000 ms
17/11/23 05:29:50 INFO ForEachDStream: Storage level = Serialized 1x Replicated
17/11/23 05:29:50 INFO ForEachDStream: Checkpoint interval = null
17/11/23 05:29:50 INFO ForEachDStream: Remember interval = 3000 ms
17/11/23 05:29:50 INFO ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream#1e801ce2
17/11/23 05:29:50 INFO ConsumerConfig: ConsumerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [localhost:29092]
ssl.keystore.type = JKS
enable.auto.commit = false
sasl.mechanism = GSSAPI
interceptor.classes = null
exclude.internal.topics = true
ssl.truststore.password = null
client.id =
ssl.endpoint.identification.algorithm = null
max.poll.records = 2147483647
check.crcs = true
request.timeout.ms = 40000
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
receive.buffer.bytes = 65536
ssl.truststore.type = JKS
ssl.truststore.location = null
ssl.keystore.password = null
fetch.min.bytes = 1
send.buffer.bytes = 131072
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
group.id = stream_group_id
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
session.timeout.ms = 30000
metrics.num.samples = 2
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
auto.offset.reset = latest
17/11/23 05:29:50 DEBUG KafkaConsumer: Starting the Kafka consumer
17/11/23 05:29:50 INFO ConsumerConfig: ConsumerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [localhost:29092]
ssl.keystore.type = JKS
enable.auto.commit = false
sasl.mechanism = GSSAPI
interceptor.classes = null
exclude.internal.topics = true
ssl.truststore.password = null
client.id = consumer-1
ssl.endpoint.identification.algorithm = null
max.poll.records = 2147483647
check.crcs = true
request.timeout.ms = 40000
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
receive.buffer.bytes = 65536
ssl.truststore.type = JKS
ssl.truststore.location = null
ssl.keystore.password = null
fetch.min.bytes = 1
send.buffer.bytes = 131072
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
group.id = stream_group_id
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
session.timeout.ms = 30000
metrics.num.samples = 2
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
auto.offset.reset = latest
17/11/23 05:29:50 INFO AppInfoParser: Kafka version : 0.10.0.1
17/11/23 05:29:50 INFO AppInfoParser: Kafka commitId : a7a17cdec9eaa6c5
17/11/23 05:29:50 DEBUG KafkaConsumer: Kafka consumer created
17/11/23 05:29:50 DEBUG KafkaConsumer: Subscribed to topic(s): foo
