Apache Beam - Error with example using Spark Runner pointing to a local spark master URL - apache-spark

I need to support a use case where we are able to run Beam pipelines against an external Spark master URL.
I took a basic example of a Beam pipeline, as below:
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


class ConvertToByteArray(beam.DoFn):
    def __init__(self):
        pass

    def setup(self):
        pass

    def process(self, row):
        try:
            yield bytearray(row + '\n', 'utf-8')
        except Exception as e:
            raise e


def run():
    options = PipelineOptions([
        "--runner=SparkRunner",
        # "--spark_master_url=spark://0.0.0.0:7077",
        # "--spark_version=3",
    ])
    with beam.Pipeline(options=options) as p:
        lines = (p
                 | 'Create words' >> beam.Create(['this is working'])
                 | 'Split words' >> beam.FlatMap(lambda words: words.split(' '))
                 | 'Build byte array' >> beam.ParDo(ConvertToByteArray())
                 | 'Group' >> beam.GroupBy()  # Do future batching here
                 | 'print output' >> beam.Map(print)
                 )


if __name__ == "__main__":
    run()
I tried to run this pipeline in two ways:
1. Using Apache Beam's internal Spark runner
2. Running Spark locally and passing the Spark master URL
Approach 1 works fine and I'm able to see the output (screenshot below).
Screenshot of Output
Approach 2 gives a class incompatible error on the Spark server.
Running Spark as a Docker container and natively on my machine both gave the same error.
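For reference, approach 2 is the snippet above with the two commented Spark options enabled; the master URL below is just what my local Spark instance exposes (a sketch, not a separate program):
options = PipelineOptions([
    "--runner=SparkRunner",
    "--spark_master_url=spark://0.0.0.0:7077",  # external Spark master (Docker or native local instance)
    "--spark_version=3",
])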
Exception trace
Spark Executor Command: "/opt/bitnami/java/bin/java" "-cp" "/opt/bitnami/spark/conf/:/opt/bitnami/spark/jars/*" "-Xmx1024M" "-Dspark.driver.port=62420" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler#192.168.8.120:62420" "--executor-id" "0" "--hostname" "172.17.0.2" "--cores" "4" "--app-id" "app-20220425143553-0000" "--worker-url" "spark://Worker#172.17.0.2:44575"
========================================
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
22/04/25 14:35:55 INFO CoarseGrainedExecutorBackend: Started daemon with process name: 165#ace075bc56c0
22/04/25 14:35:55 INFO SignalUtils: Registering signal handler for TERM
22/04/25 14:35:55 INFO SignalUtils: Registering signal handler for HUP
22/04/25 14:35:55 INFO SignalUtils: Registering signal handler for INT
22/04/25 14:35:56 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
22/04/25 14:35:56 INFO SecurityManager: Changing view acls to: spark,nikamath
22/04/25 14:35:56 INFO SecurityManager: Changing modify acls to: spark,nikamath
22/04/25 14:35:56 INFO SecurityManager: Changing view acls groups to:
22/04/25 14:35:56 INFO SecurityManager: Changing modify acls groups to:
22/04/25 14:35:56 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark, nikamath); groups with view permissions: Set(); users with modify permissions: Set(spark, nikamath); groups with modify permissions: Set()
22/04/25 14:35:57 INFO TransportClientFactory: Successfully created connection to /192.168.8.120:62420 after 194 ms (0 ms spent in bootstraps)
22/04/25 14:35:57 INFO SecurityManager: Changing view acls to: spark,nikamath
22/04/25 14:35:57 INFO SecurityManager: Changing modify acls to: spark,nikamath
22/04/25 14:35:57 INFO SecurityManager: Changing view acls groups to:
22/04/25 14:35:57 INFO SecurityManager: Changing modify acls groups to:
22/04/25 14:35:57 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark, nikamath); groups with view permissions: Set(); users with modify permissions: Set(spark, nikamath); groups with modify permissions: Set()
22/04/25 14:35:57 INFO TransportClientFactory: Successfully created connection to /192.168.8.120:62420 after 19 ms (0 ms spent in bootstraps)
22/04/25 14:35:58 INFO DiskBlockManager: Created local directory at /tmp/spark-41a0e50e-81aa-48b7-ae55-888ae3a0a4ca/executor-a5ff3b34-0166-405f-9c00-d5a5ee6f8688/blockmgr-d7bd5b95-ddb3-4642-9863-261a6e109fc4
22/04/25 14:35:58 INFO MemoryStore: MemoryStore started with capacity 366.3 MiB
22/04/25 14:35:58 INFO CoarseGrainedExecutorBackend: Connecting to driver: spark://CoarseGrainedScheduler#192.168.8.120:62420
22/04/25 14:35:58 INFO WorkerWatcher: Connecting to worker spark://Worker#172.17.0.2:44575
22/04/25 14:35:58 INFO TransportClientFactory: Successfully created connection to /172.17.0.2:44575 after 7 ms (0 ms spent in bootstraps)
22/04/25 14:35:58 INFO WorkerWatcher: Successfully connected to spark://Worker#172.17.0.2:44575
22/04/25 14:35:58 INFO ResourceUtils: ==============================================================
22/04/25 14:35:58 INFO ResourceUtils: No custom resources configured for spark.executor.
22/04/25 14:35:58 INFO ResourceUtils: ==============================================================
22/04/25 14:35:58 INFO CoarseGrainedExecutorBackend: Successfully registered with driver
22/04/25 14:35:58 INFO Executor: Starting executor ID 0 on host 172.17.0.2
22/04/25 14:35:59 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 39027.
22/04/25 14:35:59 INFO NettyBlockTransferService: Server created on 172.17.0.2:39027
22/04/25 14:35:59 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
22/04/25 14:35:59 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(0, 172.17.0.2, 39027, None)
22/04/25 14:35:59 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(0, 172.17.0.2, 39027, None)
22/04/25 14:35:59 INFO BlockManager: Initialized BlockManager: BlockManagerId(0, 172.17.0.2, 39027, None)
22/04/25 14:35:59 INFO Executor: Fetching spark://192.168.8.120:62420/jars/beam-runners-spark-3-job-server-2.33.0.jar with timestamp 1650897351765
22/04/25 14:35:59 INFO TransportClientFactory: Successfully created connection to /192.168.8.120:62420 after 5 ms (0 ms spent in bootstraps)
22/04/25 14:35:59 INFO Utils: Fetching spark://192.168.8.120:62420/jars/beam-runners-spark-3-job-server-2.33.0.jar to /tmp/spark-41a0e50e-81aa-48b7-ae55-888ae3a0a4ca/executor-a5ff3b34-0166-405f-9c00-d5a5ee6f8688/spark-cd481993-e8df-46fd-b00c-9a31e17d245d/fetchFileTemp1955801007454794690.tmp
22/04/25 14:36:02 INFO Utils: Copying /tmp/spark-41a0e50e-81aa-48b7-ae55-888ae3a0a4ca/executor-a5ff3b34-0166-405f-9c00-d5a5ee6f8688/spark-cd481993-e8df-46fd-b00c-9a31e17d245d/-16622853561650897351765_cache to /opt/bitnami/spark/work/app-20220425143553-0000/0/./beam-runners-spark-3-job-server-2.33.0.jar
22/04/25 14:36:04 INFO Executor: Adding file:/opt/bitnami/spark/work/app-20220425143553-0000/0/./beam-runners-spark-3-job-server-2.33.0.jar to class loader
22/04/25 14:36:04 INFO CoarseGrainedExecutorBackend: Got assigned task 0
22/04/25 14:36:04 INFO CoarseGrainedExecutorBackend: Got assigned task 1
22/04/25 14:36:04 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
22/04/25 14:36:04 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
22/04/25 14:36:04 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.io.InvalidClassException: org.apache.spark.util.AccumulatorV2; local class incompatible: stream classdesc serialVersionUID = 8273715124741334009, local class serialVersionUID = 574976528730727648
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:699)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:2005)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1852)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:2005)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1852)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2186)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2431)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2355)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2213)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:109)
at org.apache.spark.scheduler.Task.metrics$lzycompute(Task.scala:72)
at org.apache.spark.scheduler.Task.metrics(Task.scala:71)
at org.apache.spark.scheduler.Task.run(Task.scala:100)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
22/04/25 14:36:04 ERROR Executor: Exception in task 1.0 in stage 0.0 (TID 1)
java.io.InvalidClassException: org.apache.spark.util.AccumulatorV2; local class incompatible: stream classdesc serialVersionUID = 8273715124741334009, local class serialVersionUID = 574976528730727648
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:699)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:2005)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1852)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:2005)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1852)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2186)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2431)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2355)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2213)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:109)
at org.apache.spark.scheduler.Task.metrics$lzycompute(Task.scala:72)
at org.apache.spark.scheduler.Task.metrics(Task.scala:71)
at org.apache.spark.scheduler.Task.run(Task.scala:100)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1462)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
22/04/25 14:36:04 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker for task 0.0 in stage 0.0 (TID 0),5,main]
java.lang.Error: java.io.InvalidClassException: org.apache.spark.util.AccumulatorV2; local class incompatible: stream classdesc serialVersionUID = 8273715124741334009, local class serialVersionUID = 574976528730727648
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.InvalidClassException: org.apache.spark.util.AccumulatorV2; local class incompatible: stream classdesc serialVersionUID = 8273715124741334009, local class serialVersionUID = 574976528730727648
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:699)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:2005)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1852)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:2005)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1852)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2186)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2431)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2355)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2213)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:109)
at org.apache.spark.scheduler.Task.metrics$lzycompute(Task.scala:72)
at org.apache.spark.scheduler.Task.metrics(Task.scala:71)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$collectAccumulatorsAndResetStatusOnFailure$1(Executor.scala:424)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$collectAccumulatorsAndResetStatusOnFailure$1$adapted(Executor.scala:423)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.executor.Executor$TaskRunner.collectAccumulatorsAndResetStatusOnFailure(Executor.scala:423)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:704)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
... 2 more
22/04/25 14:36:04 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker for task 1.0 in stage 0.0 (TID 1),5,main]
java.lang.Error: java.io.InvalidClassException: org.apache.spark.util.AccumulatorV2; local class incompatible: stream classdesc serialVersionUID = 8273715124741334009, local class serialVersionUID = 574976528730727648
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1155)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.InvalidClassException: org.apache.spark.util.AccumulatorV2; local class incompatible: stream classdesc serialVersionUID = 8273715124741334009, local class serialVersionUID = 574976528730727648
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:699)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:2005)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1852)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:2005)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1852)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2186)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2431)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2355)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2213)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1669)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:109)
at org.apache.spark.scheduler.Task.metrics$lzycompute(Task.scala:72)
at org.apache.spark.scheduler.Task.metrics(Task.scala:71)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$collectAccumulatorsAndResetStatusOnFailure$1(Executor.scala:424)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$collectAccumulatorsAndResetStatusOnFailure$1$adapted(Executor.scala:423)
at scala.Option.foreach(Option.scala:407)
at org.apache.spark.executor.Executor$TaskRunner.collectAccumulatorsAndResetStatusOnFailure(Executor.scala:423)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:704)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
... 2 more
22/04/25 14:36:05 INFO MemoryStore: MemoryStore cleared
22/04/25 14:36:05 INFO BlockManager: BlockManager stopped
22/04/25 14:36:05 INFO ShutdownHookManager: Shutdown hook called
22/04/25 14:36:05 INFO ShutdownHookManager: Deleting directory /tmp/spark-41a0e50e-81aa-48b7-ae55-888ae3a0a4ca/executor-a5ff3b34-0166-405f-9c00-d5a5ee6f8688/spark-cd481993-e8df-46fd-b00c-9a31e17d245d
I confirmed that a worker was running and that Spark was set up properly.
I submitted a sample Spark job to this master and got it to work, implying that there isn't anything wrong with the Spark master or worker.
Versions used:
Spark 3.2.1
Apache Beam 2.37.0
Python 3.7

Related

Spark DataFrame Filter function throwing Task not Serializable exception

I am trying to filter DataFrame/Dataset records using the filter function with a Scala anonymous function, but it throws a Task not serializable exception. Can someone please look at the code and explain what is wrong with it?
val spark = SparkSession.builder()
  .appName("test data frame")
  .master("local[*]")
  .getOrCreate()

val user_seq = Seq(
  Row(1, "John", "London"),
  Row(1, "Martin", "New York"),
  Row(1, "Abhishek", "New York")
)

val user_schema = StructType(
  Array(
    StructField("user_id", IntegerType, true),
    StructField("user_name", StringType, true),
    StructField("user_city", StringType, true)
  ))

var user_df = spark.createDataFrame(spark.sparkContext.parallelize(user_seq), user_schema)

var user_rdd = user_df.filter((item) => {
  return item.getString(2) == "New York"
})

user_rdd.count();
I can see the below exception on the console. When I filter the data with a column name instead, it works fine.
objc[48765]: Class JavaLaunchHelper is implemented in both /Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home/bin/java (0x1059db4c0) and /Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home/jre/lib/libinstrument.dylib (0x105a5f4e0). One of the two will be used. Which one is undefined.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/07/18 20:10:09 INFO SparkContext: Running Spark version 2.4.6
20/07/18 20:10:09 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/07/18 20:10:09 INFO SparkContext: Submitted application: test data frame
20/07/18 20:10:09 INFO SecurityManager: Changing view acls groups to:
20/07/18 20:10:09 INFO SecurityManager: Changing modify acls groups to:
20/07/18 20:10:12 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
20/07/18 20:10:12 INFO ContextCleaner: Cleaned accumulator 0
20/07/18 20:10:13 INFO CodeGenerator: Code generated in 170.789451 ms
20/07/18 20:10:13 INFO CodeGenerator: Code generated in 17.729004 ms
Exception in thread "main" org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:416)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:406)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:163)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2326)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1.apply(RDD.scala:872)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1.apply(RDD.scala:871)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
at org.apache.spark.rdd.RDD.mapPartitionsWithIndex(RDD.scala:871)
at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:630)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.prepareShuffleDependency(ShuffleExchangeExec.scala:92)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$doExecute$1.apply(ShuffleExchangeExec.scala:128)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$doExecute$1.apply(ShuffleExchangeExec.scala:119)
at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.doExecute(ShuffleExchangeExec.scala:119)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.InputAdapter.inputRDDs(WholeStageCodegenExec.scala:391)
at org.apache.spark.sql.execution.aggregate.HashAggregateExec.inputRDDs(HashAggregateExec.scala:151)
at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:627)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:296)
at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2836)
at org.apache.spark.sql.Dataset$$anonfun$count$1.apply(Dataset.scala:2835)
at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3370)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3369)
at org.apache.spark.sql.Dataset.count(Dataset.scala:2835)
at DataFrameTest$.main(DataFrameTest.scala:65)
at DataFrameTest.main(DataFrameTest.scala)
Caused by: java.io.NotSerializableException: java.lang.Object
Serialization stack:
- object not serializable (class: java.lang.Object, value: java.lang.Object#cec590c)
- field (class: DataFrameTest$$anonfun$1, name: nonLocalReturnKey1$1, type: class java.lang.Object)
- object (class DataFrameTest$$anonfun$1, <function1>)
- element of array (index: 1)
- array (class [Ljava.lang.Object;, size 5)
- field (class: org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13, name: references$1, type: class [Ljava.lang.Object;)
- object (class org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13, <function2>)
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:413)
... 48 more
20/07/18 20:10:13 INFO SparkContext: Invoking stop() from shutdown hook
20/07/18 20:10:13 INFO SparkUI: Stopped Spark web UI at http://192.168.31.239:4040
20/07/18 20:10:13 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/07/18 20:10:13 INFO MemoryStore: MemoryStore cleared
20/07/18 20:10:13 INFO BlockManager: BlockManager stopped
20/07/18 20:10:13 INFO BlockManagerMaster: BlockManagerMaster stopped
20/07/18 20:10:13 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/07/18 20:10:13 INFO SparkContext: Successfully stopped SparkContext
20/07/18 20:10:13 INFO ShutdownHookManager: Shutdown hook called
20/07/18 20:10:13 INFO ShutdownHookManager: Deleting directory /private/var/folders/33/3n6vtfs54mdb7x6882fyqy4mccfmvg/T/spark-3e071448-7ad7-47b8-bf70-68ab74721aa2
Process finished with exit code 1
Remove the return keyword in the line below. Change this code:
var user_rdd = user_df.filter((item) => {
  return item.getString(2) == "New York"
})
to the line below:
var user_rdd = user_df.filter(_.getString(2) == "New York")
or
user_df.filter($"user_city" === "New York").count
You can also refactor your code as below.
val df = Seq((1,"John","London"), (1,"Martin","New York"), (1,"Abhishek","New York"))
  .toDF("user_id", "user_name", "user_city")

df.filter($"user_city" === "New York").count

When I write a DataFrame to a Parquet file, no errors are shown and no file is created

Hi everybody, I have a problem while saving a DataFrame. I found a similar unanswered question: Saving Spark dataFrames as parquet files - no errors, but data is not being saved. My problem is that when I tested the following code:
scala> import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.ml.linalg.Vectors
scala> val dataset = spark.createDataFrame(
| Seq((0, 18, 1.0, Vectors.dense(0.0, 10.0, 0.5), 1.0))
| ).toDF("id", "hour", "mobile", "userFeatures", "clicked")
dataset: org.apache.spark.sql.DataFrame = [id: int, hour: int ... 3 more fields]
scala> dataset.show
+---+----+------+--------------+-------+
| id|hour|mobile| userFeatures|clicked|
+---+----+------+--------------+-------+
| 0| 18| 1.0|[0.0,10.0,0.5]| 1.0|
+---+----+------+--------------+-------+
scala> dataset.write.parquet("/home/vitrion/out")
No errors have been shown and it seems that the DF has been saved as a Parquet file. Surprisingly, no file has been created in the output directory.
This is my cluster configuration
The logfile says:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
18/03/01 12:56:53 INFO CoarseGrainedExecutorBackend: Started daemon with process name: 51016#t630-0
18/03/01 12:56:53 INFO SignalUtils: Registered signal handler for TERM
18/03/01 12:56:53 INFO SignalUtils: Registered signal handler for HUP
18/03/01 12:56:53 INFO SignalUtils: Registered signal handler for INT
18/03/01 12:56:53 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/03/01 12:56:54 WARN Utils: Your hostname, t630-0 resolves to a loopback address: 127.0.1.1; using 192.168.239.218 instead (on interface eno1)
18/03/01 12:56:54 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
18/03/01 12:56:54 INFO SecurityManager: Changing view acls to: vitrion
18/03/01 12:56:54 INFO SecurityManager: Changing modify acls to: vitrion
18/03/01 12:56:54 INFO SecurityManager: Changing view acls groups to:
18/03/01 12:56:54 INFO SecurityManager: Changing modify acls groups to:
18/03/01 12:56:54 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(vitrion); groups with view permissions: Set(); users with modify permissions: Set(vitrion); groups with modify permissions: Set()
18/03/01 12:56:54 INFO TransportClientFactory: Successfully created connection to /192.168.239.54:42629 after 80 ms (0 ms spent in bootstraps)
18/03/01 12:56:54 INFO SecurityManager: Changing view acls to: vitrion
18/03/01 12:56:54 INFO SecurityManager: Changing modify acls to: vitrion
18/03/01 12:56:54 INFO SecurityManager: Changing view acls groups to:
18/03/01 12:56:54 INFO SecurityManager: Changing modify acls groups to:
18/03/01 12:56:54 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(vitrion); groups with view permissions: Set(); users with modify permissions: Set(vitrion); groups with modify permissions: Set()
18/03/01 12:56:54 INFO TransportClientFactory: Successfully created connection to /192.168.239.54:42629 after 2 ms (0 ms spent in bootstraps)
18/03/01 12:56:54 INFO DiskBlockManager: Created local directory at /tmp/spark-d749d72b-6db2-4f02-8dae-481c0ea1f68f/executor-f379929a-3a6a-4366-8983-b38e19fb9cfc/blockmgr-c6d89ef4-b22a-4344-8816-23306722d40c
18/03/01 12:56:54 INFO MemoryStore: MemoryStore started with capacity 8.4 GB
18/03/01 12:56:54 INFO CoarseGrainedExecutorBackend: Connecting to driver: spark://CoarseGrainedScheduler#192.168.239.54:42629
18/03/01 12:56:54 INFO WorkerWatcher: Connecting to worker spark://Worker#192.168.239.218:45532
18/03/01 12:56:54 INFO TransportClientFactory: Successfully created connection to /192.168.239.218:45532 after 1 ms (0 ms spent in bootstraps)
18/03/01 12:56:54 INFO WorkerWatcher: Successfully connected to spark://Worker#192.168.239.218:45532
18/03/01 12:56:54 INFO CoarseGrainedExecutorBackend: Successfully registered with driver
18/03/01 12:56:54 INFO Executor: Starting executor ID 2 on host 192.168.239.218
18/03/01 12:56:54 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 37178.
18/03/01 12:56:54 INFO NettyBlockTransferService: Server created on 192.168.239.218:37178
18/03/01 12:56:54 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
18/03/01 12:56:54 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(2, 192.168.239.218, 37178, None)
18/03/01 12:56:54 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(2, 192.168.239.218, 37178, None)
18/03/01 12:56:54 INFO BlockManager: Initialized BlockManager: BlockManagerId(2, 192.168.239.218, 37178, None)
18/03/01 12:56:54 INFO Executor: Using REPL class URI: spark://192.168.239.54:42629/classes
18/03/01 12:57:54 INFO CoarseGrainedExecutorBackend: Got assigned task 0
18/03/01 12:57:54 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
18/03/01 12:57:54 INFO TorrentBroadcast: Started reading broadcast variable 0
18/03/01 12:57:55 INFO TransportClientFactory: Successfully created connection to /192.168.239.54:35081 after 1 ms (0 ms spent in bootstraps)
18/03/01 12:57:55 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 28.1 KB, free 8.4 GB)
18/03/01 12:57:55 INFO TorrentBroadcast: Reading broadcast variable 0 took 103 ms
18/03/01 12:57:55 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 76.6 KB, free 8.4 GB)
18/03/01 12:57:55 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
18/03/01 12:57:55 INFO SQLHadoopMapReduceCommitProtocol: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
18/03/01 12:57:55 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
18/03/01 12:57:55 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
18/03/01 12:57:55 INFO CodecConfig: Compression: SNAPPY
18/03/01 12:57:55 INFO CodecConfig: Compression: SNAPPY
18/03/01 12:57:55 INFO ParquetOutputFormat: Parquet block size to 134217728
18/03/01 12:57:55 INFO ParquetOutputFormat: Parquet page size to 1048576
18/03/01 12:57:55 INFO ParquetOutputFormat: Parquet dictionary page size to 1048576
18/03/01 12:57:55 INFO ParquetOutputFormat: Dictionary is on
18/03/01 12:57:55 INFO ParquetOutputFormat: Validation is off
18/03/01 12:57:55 INFO ParquetOutputFormat: Writer version is: PARQUET_1_0
18/03/01 12:57:55 INFO ParquetOutputFormat: Maximum row group padding size is 0 bytes
18/03/01 12:57:55 INFO ParquetOutputFormat: Page size checking is: estimated
18/03/01 12:57:55 INFO ParquetOutputFormat: Min row count for page size check is: 100
18/03/01 12:57:55 INFO ParquetOutputFormat: Max row count for page size check is: 10000
18/03/01 12:57:55 INFO ParquetWriteSupport: Initialized Parquet WriteSupport with Catalyst schema:
{
"type" : "struct",
"fields" : [ {
"name" : "id",
"type" : "integer",
"nullable" : false,
"metadata" : { }
}, {
"name" : "hour",
"type" : "integer",
"nullable" : false,
"metadata" : { }
}, {
"name" : "mobile",
"type" : "double",
"nullable" : false,
"metadata" : { }
}, {
"name" : "userFeatures",
"type" : {
"type" : "udt",
"class" : "org.apache.spark.ml.linalg.VectorUDT",
"pyClass" : "pyspark.ml.linalg.VectorUDT",
"sqlType" : {
"type" : "struct",
"fields" : [ {
"name" : "type",
"type" : "byte",
"nullable" : false,
"metadata" : { }
}, {
"name" : "size",
"type" : "integer",
"nullable" : true,
"metadata" : { }
}, {
"name" : "indices",
"type" : {
"type" : "array",
"elementType" : "integer",
"containsNull" : false
},
"nullable" : true,
"metadata" : { }
}, {
"name" : "values",
"type" : {
"type" : "array",
"elementType" : "double",
"containsNull" : false
},
"nullable" : true,
"metadata" : { }
} ]
}
},
"nullable" : true,
"metadata" : { }
}, {
"name" : "clicked",
"type" : "double",
"nullable" : false,
"metadata" : { }
} ]
}
and corresponding Parquet message type:
message spark_schema {
required int32 id;
required int32 hour;
required double mobile;
optional group userFeatures {
required int32 type (INT_8);
optional int32 size;
optional group indices (LIST) {
repeated group list {
required int32 element;
}
}
optional group values (LIST) {
repeated group list {
required double element;
}
}
}
required double clicked;
}
18/03/01 12:57:55 INFO CodecPool: Got brand-new compressor [.snappy]
18/03/01 12:57:55 INFO InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 84
18/03/01 12:57:55 INFO FileOutputCommitter: Saved output of task 'attempt_20180301125755_0000_m_000000_0' to file:/home/vitrion/out/_temporary/0/task_20180301125755_0000_m_000000
18/03/01 12:57:55 INFO SparkHadoopMapRedUtil: attempt_20180301125755_0000_m_000000_0: Committed
18/03/01 12:57:55 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1967 bytes result sent to driver
Can you please help me to solve this?
Thank you
Have you tried writing without the Vector? I have seen in the past that complex data structures can cause writing issues.
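A minimal sketch of that idea, assuming the same dataset as above and an output path of my own choosing: replace the Vector column with a plain array before writing, so only primitive and array types reach the Parquet writer.
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.{col, udf}

// Convert the ml Vector to a plain Array[Double] before writing (sketch only, untested on this cluster)
val vecToArray = udf((v: Vector) => v.toArray)

val flattened = dataset.withColumn("userFeatures", vecToArray(col("userFeatures")))
flattened.write.parquet("/home/vitrion/out_no_vector")  // hypothetical output path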

Sparkstreaming hangs after creating kafka consumer

I am trying to get a very simple Kafka + Spark Streaming integration working.
On the Kafka side, I cloned this repository (https://github.com/confluentinc/cp-docker-images) and did a docker-compose up to get an instance of ZooKeeper and Kafka running. I created a topic called "foo" and added messages. In this case, Kafka is running on port 29092.
On the spark side, my build.sbt file looks like this:
name := "KafkaSpark"
version := "0.1"
scalaVersion := "2.11.12"
val sparkVersion = "2.2.0"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql" % sparkVersion,
  "org.apache.spark" %% "spark-streaming" % sparkVersion,
  "org.apache.spark" %% "spark-streaming-kafka-0-10" % sparkVersion
)
I was able to get the following code snippet running, consuming data from the terminal:
import org.apache.spark._
import org.apache.spark.streaming._

object SparkTest {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")
    val ssc = new StreamingContext(conf, Seconds(3))
    val lines = ssc.socketTextStream("localhost", 9999)
    val words = lines.flatMap(_.split(" "))
    val pairs = words.map(word => (word, 1))
    val wordCounts = pairs.reduceByKey(_ + _)

    // Print the first ten elements of each RDD generated in this DStream to the console
    wordCounts.print()

    ssc.start()            // Start the computation
    ssc.awaitTermination() // Wait for the computation to terminate
  }
}
So Spark Streaming is working.
Now, I created the following to consume from Kafka:
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.count
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.sql.types.{StringType, StructType, TimestampType}

object KafkaTest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .master("local")
      .appName("Spark Word Count")
      .getOrCreate()

    val ssc = new StreamingContext(spark.sparkContext, Seconds(3))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:29092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "stream_group_id",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val topics = Array("foo")
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams)
    )

    stream.foreachRDD { (rdd, time) =>
      val data = rdd.map(record => record.value)
      data.foreach(println)
      println(time)
    }

    ssc.start() // Start the computation
    ssc.awaitTermination()
  }
}
When it runs, I get the following in the console (I'm running this in IntelliJ). The process just hangs at the last line after "subscribing" to the topic. I've tried creating a topic that does not exist and I get the same result, i.e. it doesn't seem to throw an error despite the topic not existing. If I point at a non-existent broker, I do get an error (Exception in thread "main" org.apache.kafka.common.KafkaException: Failed to construct kafka consumer), so it must be finding the broker when I use the proper port.
Any suggestions on how to correct this issue?
Here's the log file:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/11/23 05:29:42 INFO SparkContext: Running Spark version 2.2.0
17/11/23 05:29:42 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/23 05:29:48 INFO SparkContext: Submitted application: Spark Word Count
17/11/23 05:29:48 INFO SecurityManager: Changing view acls to: jonathandick
17/11/23 05:29:48 INFO SecurityManager: Changing modify acls to: jonathandick
17/11/23 05:29:48 INFO SecurityManager: Changing view acls groups to:
17/11/23 05:29:48 INFO SecurityManager: Changing modify acls groups to:
17/11/23 05:29:48 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(jonathandick); groups with view permissions: Set(); users with modify permissions: Set(jonathandick); groups with modify permissions: Set()
17/11/23 05:29:48 INFO Utils: Successfully started service 'sparkDriver' on port 59606.
17/11/23 05:29:48 DEBUG SparkEnv: Using serializer: class org.apache.spark.serializer.JavaSerializer
17/11/23 05:29:48 INFO SparkEnv: Registering MapOutputTracker
17/11/23 05:29:48 INFO SparkEnv: Registering BlockManagerMaster
17/11/23 05:29:48 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/11/23 05:29:48 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/11/23 05:29:48 INFO DiskBlockManager: Created local directory at /private/var/folders/w2/njgz3jnd097cdybxcvp9c2hw0000gn/T/blockmgr-3a3feb00-0fdb-4bc5-867d-808ac65d7c8f
17/11/23 05:29:48 INFO MemoryStore: MemoryStore started with capacity 2004.6 MB
17/11/23 05:29:48 INFO SparkEnv: Registering OutputCommitCoordinator
17/11/23 05:29:49 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
17/11/23 05:29:49 WARN Utils: Service 'SparkUI' could not bind on port 4041. Attempting port 4042.
17/11/23 05:29:49 WARN Utils: Service 'SparkUI' could not bind on port 4042. Attempting port 4043.
17/11/23 05:29:49 INFO Utils: Successfully started service 'SparkUI' on port 4043.
17/11/23 05:29:49 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.1.67:4043
17/11/23 05:29:49 INFO Executor: Starting executor ID driver on host localhost
17/11/23 05:29:49 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 59613.
17/11/23 05:29:49 INFO NettyBlockTransferService: Server created on 192.168.1.67:59613
17/11/23 05:29:49 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/11/23 05:29:49 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.1.67, 59613, None)
17/11/23 05:29:49 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.1.67:59613 with 2004.6 MB RAM, BlockManagerId(driver, 192.168.1.67, 59613, None)
17/11/23 05:29:49 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.67, 59613, None)
17/11/23 05:29:49 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.1.67, 59613, None)
17/11/23 05:29:49 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/Users/jonathandick/IdeaProjects/KafkaSpark/spark-warehouse/').
17/11/23 05:29:49 INFO SharedState: Warehouse path is 'file:/Users/jonathandick/IdeaProjects/KafkaSpark/spark-warehouse/'.
17/11/23 05:29:50 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
17/11/23 05:29:50 WARN StreamingContext: spark.master should be set as local[n], n > 1 in local mode if you have receivers to get data, otherwise Spark jobs will not get resources to process the received data.
17/11/23 05:29:50 WARN KafkaUtils: overriding enable.auto.commit to false for executor
17/11/23 05:29:50 WARN KafkaUtils: overriding auto.offset.reset to none for executor
17/11/23 05:29:50 WARN KafkaUtils: overriding executor group.id to spark-executor-stream_group_id
17/11/23 05:29:50 WARN KafkaUtils: overriding receive.buffer.bytes to 65536 see KAFKA-3135
17/11/23 05:29:50 INFO DirectKafkaInputDStream: Slide time = 3000 ms
17/11/23 05:29:50 INFO DirectKafkaInputDStream: Storage level = Serialized 1x Replicated
17/11/23 05:29:50 INFO DirectKafkaInputDStream: Checkpoint interval = null
17/11/23 05:29:50 INFO DirectKafkaInputDStream: Remember interval = 3000 ms
17/11/23 05:29:50 INFO DirectKafkaInputDStream: Initialized and validated org.apache.spark.streaming.kafka010.DirectKafkaInputDStream#1a38eb73
17/11/23 05:29:50 INFO ForEachDStream: Slide time = 3000 ms
17/11/23 05:29:50 INFO ForEachDStream: Storage level = Serialized 1x Replicated
17/11/23 05:29:50 INFO ForEachDStream: Checkpoint interval = null
17/11/23 05:29:50 INFO ForEachDStream: Remember interval = 3000 ms
17/11/23 05:29:50 INFO ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream#1e801ce2
17/11/23 05:29:50 INFO ConsumerConfig: ConsumerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [localhost:29092]
ssl.keystore.type = JKS
enable.auto.commit = false
sasl.mechanism = GSSAPI
interceptor.classes = null
exclude.internal.topics = true
ssl.truststore.password = null
client.id =
ssl.endpoint.identification.algorithm = null
max.poll.records = 2147483647
check.crcs = true
request.timeout.ms = 40000
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
receive.buffer.bytes = 65536
ssl.truststore.type = JKS
ssl.truststore.location = null
ssl.keystore.password = null
fetch.min.bytes = 1
send.buffer.bytes = 131072
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
group.id = stream_group_id
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
session.timeout.ms = 30000
metrics.num.samples = 2
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
auto.offset.reset = latest
17/11/23 05:29:50 DEBUG KafkaConsumer: Starting the Kafka consumer
17/11/23 05:29:50 INFO ConsumerConfig: ConsumerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = [localhost:29092]
ssl.keystore.type = JKS
enable.auto.commit = false
sasl.mechanism = GSSAPI
interceptor.classes = null
exclude.internal.topics = true
ssl.truststore.password = null
client.id = consumer-1
ssl.endpoint.identification.algorithm = null
max.poll.records = 2147483647
check.crcs = true
request.timeout.ms = 40000
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
receive.buffer.bytes = 65536
ssl.truststore.type = JKS
ssl.truststore.location = null
ssl.keystore.password = null
fetch.min.bytes = 1
send.buffer.bytes = 131072
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
group.id = stream_group_id
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
ssl.key.password = null
fetch.max.wait.ms = 500
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
session.timeout.ms = 30000
metrics.num.samples = 2
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = PLAINTEXT
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
auto.offset.reset = latest
17/11/23 05:29:50 INFO AppInfoParser: Kafka version : 0.10.0.1
17/11/23 05:29:50 INFO AppInfoParser: Kafka commitId : a7a17cdec9eaa6c5
17/11/23 05:29:50 DEBUG KafkaConsumer: Kafka consumer created
17/11/23 05:29:50 DEBUG KafkaConsumer: Subscribed to topic(s): foo

Spring boot - Spark Application: Unable to process a file on its nodes

I have the below setup:
Spark master and slaves configured and running on my local machine.
17/11/01 18:03:52 INFO Utils: Successfully started service 'sparkMaster' on port 7077.
17/11/01 18:03:52 INFO Master: Starting Spark master at spark://127.0.0.1:7077
17/11/01 18:03:52 INFO Master: Running Spark version 2.2.0
17/11/01 18:03:52 INFO Utils: Successfully started service 'MasterUI' on port 8080.
I have a spring boot application whose properties file contents look like the below:
spark.home=/usr/local/Cellar/apache-spark/2.2.0/bin/
master.uri=spark://127.0.0.1:7077
@Autowired
SparkConf sparkConf;

public void processFile(String inputFile, String outputFile) {
    JavaSparkContext javaSparkContext;
    SparkContext sc = new SparkContext(sparkConf);

    SerializationWrapper sw = new SerializationWrapper() {
        private static final long serialVersionUID = 1L;

        @Override
        public JavaSparkContext createJavaSparkContext() {
            // TODO Auto-generated method stub
            return JavaSparkContext.fromSparkContext(sc);
        }
    };
    javaSparkContext = sw.createJavaSparkContext();

    JavaRDD<String> lines = javaSparkContext.textFile(inputFile);
    Broadcast<JavaRDD<String>> outputLines;
    outputLines = javaSparkContext.broadcast(lines.map(new Function<String, String>() {
        private static final long serialVersionUID = 1L;

        @Override
        public String call(String arg0) throws Exception {
            // TODO Auto-generated method stub
            return arg0;
        }
    }));
    outputLines.getValue().saveAsTextFile(outputFile);
    // javaSparkContext.close();
}
When I run the code, I'm getting the below error:
17/11/01 18:16:36 INFO TorrentBroadcast: Started reading broadcast variable 2
17/11/01 18:16:36 INFO TransportClientFactory: Successfully created connection to /192.168.0.135:51903 after 1 ms (0 ms spent in bootstraps)
17/11/01 18:16:36 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 24.4 KB, free 366.3 MB)
17/11/01 18:16:36 INFO TorrentBroadcast: Reading broadcast variable 2 took 82 ms
17/11/01 18:16:36 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 67.2 KB, free 366.2 MB)
17/11/01 18:16:36 ERROR Executor: Exception in task 1.0 in stage 0.0 (TID 1)
java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2133)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1305)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2251)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
17/11/01 18:16:36 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
The Spring Boot + Spark app should process files based on a REST API call, from which I get the input and output file locations that are shared across the Spark nodes.
Any suggestions to fix the above errors?
I think you should not broadcast a JavaRDD, since an RDD is already distributed among your cluster nodes.

Persist RDD as Avro File

I have written this sample program to persist an RDD into an Avro file.
I am using CDH 5.4 with Spark 1.3.
I wrote this avsc file and then generated code for the class User:
{"namespace": "com.abhi",
"type": "record",
"name": "User",
"fields": [
{"name": "firstname", "type": "string"},
{"name": "lastname", "type": "string"} ]
}
Then I generated the code for User:
java -jar ~/Downloads/avro-tools-1.7.7.jar compile schema User.avsc .
Then I wrote my example:
package com.abhi

import org.apache.hadoop.mapreduce.Job
import org.apache.spark.SparkConf
import org.apache.avro.generic.GenericRecord
import org.apache.avro.mapred.AvroKey
import org.apache.avro.mapreduce.{AvroKeyOutputFormat, AvroJob, AvroKeyInputFormat}
import org.apache.hadoop.io.NullWritable
import org.apache.spark.SparkContext

object MySpark {
  def main(args: Array[String]): Unit = {
    val sf = new SparkConf()
      .setMaster("local[2]")
      .setAppName("MySpark")
    val sc = new SparkContext(sf)

    val user1 = new User();
    user1.setFirstname("Test1");
    user1.setLastname("Test2");

    val user2 = new User("Test3", "Test4");

    // Construct via builder
    val user3 = User.newBuilder()
      .setFirstname("Test5")
      .setLastname("Test6")
      .build()

    val list = Array(user1, user2, user3)
    val userRdd = sc.parallelize(list)

    val job: Job = Job.getInstance()
    AvroJob.setOutputKeySchema(job, user1.getSchema)
    val output = "/user/cloudera/users.avro"

    userRdd.map(row => (new AvroKey(row), NullWritable.get()))
      .saveAsNewAPIHadoopFile(
        output,
        classOf[AvroKey[User]],
        classOf[NullWritable],
        classOf[AvroKeyOutputFormat[User]],
        job.getConfiguration)
  }
}
I have two concerns with this code:
Some of the imports are from the old MapReduce API and I wonder why they are required for Spark code:
import org.apache.hadoop.mapreduce.Job
import org.apache.avro.mapred.AvroKey
import org.apache.avro.mapreduce.{AvroKeyOutputFormat, AvroJob, AvroKeyInputFormat}
The code throws an exception when I submit it to the Hadoop cluster.
It does create an empty directory called /user/cloudera/users.avro in HDFS.
15/11/01 08:20:42 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
15/11/01 08:20:42 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
15/11/01 08:20:42 INFO spark.SparkContext: Starting job: saveAsNewAPIHadoopFile at MySpark.scala:52
15/11/01 08:20:42 INFO scheduler.DAGScheduler: Got job 1 (saveAsNewAPIHadoopFile at MySpark.scala:52) with 2 output partitions (allowLocal=false)
15/11/01 08:20:42 INFO scheduler.DAGScheduler: Final stage: Stage 1(saveAsNewAPIHadoopFile at MySpark.scala:52)
15/11/01 08:20:42 INFO scheduler.DAGScheduler: Parents of final stage: List()
15/11/01 08:20:42 INFO scheduler.DAGScheduler: Missing parents: List()
15/11/01 08:20:42 INFO scheduler.DAGScheduler: Submitting Stage 1 (MapPartitionsRDD[2] at map at MySpark.scala:51), which has no missing parents
15/11/01 08:20:42 INFO storage.MemoryStore: ensureFreeSpace(66904) called with curMem=301745, maxMem=280248975
15/11/01 08:20:42 INFO storage.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 65.3 KB, free 266.9 MB)
15/11/01 08:20:42 INFO storage.MemoryStore: ensureFreeSpace(23066) called with curMem=368649, maxMem=280248975
15/11/01 08:20:42 INFO storage.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 22.5 KB, free 266.9 MB)
15/11/01 08:20:42 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:34630 (size: 22.5 KB, free: 267.2 MB)
15/11/01 08:20:42 INFO storage.BlockManagerMaster: Updated info of block broadcast_2_piece0
15/11/01 08:20:42 INFO spark.SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:839
15/11/01 08:20:42 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 1 (MapPartitionsRDD[2] at map at MySpark.scala:51)
15/11/01 08:20:42 INFO scheduler.TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
15/11/01 08:20:42 ERROR scheduler.TaskSetManager: Failed to serialize task 1, not attempting to retry it.
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.serializer.SerializationDebugger$ObjectStreamClassMethods$.getObjFieldValues$extension(SerializationDebugger.scala:240)
at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visitSerializable(SerializationDebugger.scala:150)
at org.apache.spark.serializer.SerializationDebugger$SerializationDebugger.visit(SerializationDebugger.scala:99)
at org.apache.spark.serializer.SerializationDebugger$.find(SerializationDebugger.scala:58)
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:39)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:80)
at org.apache.spark.scheduler.Task$.serializeWithDependencies(Task.scala:149)
at org.apache.spark.scheduler.TaskSetManager.resourceOffer(TaskSetManager.scala:464)
at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$org$apache$spark$scheduler$TaskSchedulerImpl$$resourceOfferSingleTaskSet$1.apply$mcVI$sp(TaskSchedulerImpl.scala:232)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.scheduler.TaskSchedulerImpl.org$apache$spark$scheduler$TaskSchedulerImpl$$resourceOfferSingleTaskSet(TaskSchedulerImpl.scala:227)
at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$resourceOffers$3$$anonfun$apply$6.apply(TaskSchedulerImpl.scala:296)
at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$resourceOffers$3$$anonfun$apply$6.apply(TaskSchedulerImpl.scala:294)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
The problem is that Spark can't serialise your User class. Try enabling Kryo serialization and registering your class there.
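A minimal sketch of that suggestion, assuming the same MySpark example above; here the class is registered directly on the SparkConf (a custom Kryo registrator would also work), untested against CDH 5.4 / Spark 1.3.
// Switch to Kryo and register the Avro-generated User class on the SparkConf used above
val sf = new SparkConf()
  .setMaster("local[2]")
  .setAppName("MySpark")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[User]))

val sc = new SparkContext(sf)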
