Trying to follow the Spark 2.3 documentation on how to deploy jobs on a Kubernetes 1.9.3 cluster: http://spark.apache.org/docs/latest/running-on-kubernetes.html
The Kubernetes 1.9.3 cluster is operating properly on offline bare-metal servers and was installed with kubeadm. The following command was used to submit the job (the SparkPi example):
/opt/spark/bin/spark-submit --master k8s://https://k8s-master:6443 --deploy-mode cluster --name spark-pi --class org.apache.spark.examples.SparkPi --conf spark.executor.instances=2 --conf spark.kubernetes.container.image=spark:v2.3.0 local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
Here is the stacktrace that we all love:
++ id -u
+ myuid=0
++ id -g
+ mygid=0
++ getent passwd 0
+ uidentry=root:x:0:0:root:/root:/bin/ash
+ '[' -z root:x:0:0:root:/root:/bin/ash ']'
+ SPARK_K8S_CMD=driver
+ '[' -z driver ']'
+ shift 1
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ sed 's/[^=]*=\(.*\)/\1/g'
+ readarray -t SPARK_JAVA_OPTS
+ '[' -n /opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar:/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar ']'
+ SPARK_CLASSPATH=':/opt/spark/jars/*:/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar:/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar'
+ '[' -n '' ']'
+ case "$SPARK_K8S_CMD" in
+ CMD=(${JAVA_HOME}/bin/java "${SPARK_JAVA_OPTS[@]}" -cp "$SPARK_CLASSPATH" -Xms$SPARK_DRIVER_MEMORY -Xmx$SPARK_DRIVER_MEMORY -Dspark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS $SPARK_DRIVER_CLASS $SPARK_DRIVER_ARGS)
+ exec /sbin/tini -s -- /usr/lib/jvm/java-1.8-openjdk/bin/java -Dspark.kubernetes.driver.pod.name=spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver -Dspark.driver.port=7078 -Dspark.submit.deployMode=cluster -Dspark.master=k8s://https://k8s-master:6443 -Dspark.kubernetes.executor.podNamePrefix=spark-pi-b6f8a60df70a3b9d869c4e305518f43a -Dspark.driver.blockManager.port=7079 -Dspark.app.id=spark-7077ad8f86114551b0ae04ae63a74d5a -Dspark.driver.host=spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver-svc.default.svc -Dspark.app.name=spark-pi -Dspark.kubernetes.container.image=spark:v2.3.0 -Dspark.jars=/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar,/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar -Dspark.executor.instances=2 -cp ':/opt/spark/jars/*:/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar:/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar' -Xms1g -Xmx1g -Dspark.driver.bindAddress=10.244.1.17 org.apache.spark.examples.SparkPi
2018-03-07 12:39:35 INFO SparkContext:54 - Running Spark version 2.3.0
2018-03-07 12:39:36 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-03-07 12:39:36 INFO SparkContext:54 - Submitted application: Spark Pi
2018-03-07 12:39:36 INFO SecurityManager:54 - Changing view acls to: root
2018-03-07 12:39:36 INFO SecurityManager:54 - Changing modify acls to: root
2018-03-07 12:39:36 INFO SecurityManager:54 - Changing view acls groups to:
2018-03-07 12:39:36 INFO SecurityManager:54 - Changing modify acls groups to:
2018-03-07 12:39:36 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
2018-03-07 12:39:36 INFO Utils:54 - Successfully started service 'sparkDriver' on port 7078.
2018-03-07 12:39:36 INFO SparkEnv:54 - Registering MapOutputTracker
2018-03-07 12:39:36 INFO SparkEnv:54 - Registering BlockManagerMaster
2018-03-07 12:39:36 INFO BlockManagerMasterEndpoint:54 - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
2018-03-07 12:39:36 INFO BlockManagerMasterEndpoint:54 - BlockManagerMasterEndpoint up
2018-03-07 12:39:36 INFO DiskBlockManager:54 - Created local directory at /tmp/blockmgr-7f5370ad-b495-4943-ad75-285b7ead3e5b
2018-03-07 12:39:36 INFO MemoryStore:54 - MemoryStore started with capacity 408.9 MB
2018-03-07 12:39:36 INFO SparkEnv:54 - Registering OutputCommitCoordinator
2018-03-07 12:39:36 INFO log:192 - Logging initialized @1936ms
2018-03-07 12:39:36 INFO Server:346 - jetty-9.3.z-SNAPSHOT
2018-03-07 12:39:36 INFO Server:414 - Started @2019ms
2018-03-07 12:39:36 INFO AbstractConnector:278 - Started ServerConnector@4215838f{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2018-03-07 12:39:36 INFO Utils:54 - Successfully started service 'SparkUI' on port 4040.
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5b6813df{/jobs,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@495083a0{/jobs/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5fd62371{/jobs/job,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2b62442c{/jobs/job/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@66629f63{/stages,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@841e575{/stages/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@27a5328c{/stages/stage,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@6b5966e1{/stages/stage/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@65e61854{/stages/pool,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1568159{/stages/pool/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4fcee388{/storage,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@6f80fafe{/storage/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@3af17be2{/storage/rdd,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@f9879ac{/storage/rdd/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@37f21974{/environment,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5f4d427e{/environment/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@6e521c1e{/executors,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@224b4d61{/executors/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5d5d9e5{/executors/threadDump,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@303e3593{/executors/threadDump/json,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4ef27d66{/static,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@62dae245{/,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4b6579e8{/api,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@3954d008{/jobs/job/kill,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2f94c4db{/stages/stage/kill,null,AVAILABLE,@Spark}
2018-03-07 12:39:36 INFO SparkUI:54 - Bound SparkUI to 0.0.0.0, and started at http://spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver-svc.default.svc:4040
2018-03-07 12:39:36 INFO SparkContext:54 - Added JAR /opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar at spark://spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver-svc.default.svc:7078/jars/spark-examples_2.11-2.3.0.jar with timestamp 1520426376949
2018-03-07 12:39:37 WARN KubernetesClusterManager:66 - The executor's init-container config map is not specified. Executors will therefore not attempt to fetch remote or submitted dependencies.
2018-03-07 12:39:37 WARN KubernetesClusterManager:66 - The executor's init-container config map key is not specified. Executors will therefore not attempt to fetch remote or submitted dependencies.
2018-03-07 12:39:42 ERROR SparkContext:91 - Error initializing SparkContext.
org.apache.spark.SparkException: External scheduler cannot be instantiated
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2747)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:492)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2486)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [Pod] with name: [spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver] in namespace: [default] failed.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:62)
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:71)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:228)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:184)
at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.<init>(KubernetesClusterSchedulerBackend.scala:70)
at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterManager.createSchedulerBackend(KubernetesClusterManager.scala:120)
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2741)
... 8 more
Caused by: java.net.UnknownHostException: kubernetes.default.svc: Try again
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at okhttp3.Dns$1.lookup(Dns.java:39)
at okhttp3.internal.connection.RouteSelector.resetNextInetSocketAddress(RouteSelector.java:171)
at okhttp3.internal.connection.RouteSelector.nextProxy(RouteSelector.java:137)
at okhttp3.internal.connection.RouteSelector.next(RouteSelector.java:82)
at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:171)
at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:121)
at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:100)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:120)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at io.fabric8.kubernetes.client.utils.HttpClientUtils$2.intercept(HttpClientUtils.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:185)
at okhttp3.RealCall.execute(RealCall.java:69)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:377)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:343)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:312)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:295)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:783)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:217)
... 12 more
2018-03-07 12:39:42 INFO AbstractConnector:318 - Stopped Spark@4215838f{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2018-03-07 12:39:42 INFO SparkUI:54 - Stopped Spark web UI at http://spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver-svc.default.svc:4040
2018-03-07 12:39:42 INFO MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped!
2018-03-07 12:39:42 INFO MemoryStore:54 - MemoryStore cleared
2018-03-07 12:39:42 INFO BlockManager:54 - BlockManager stopped
2018-03-07 12:39:42 INFO BlockManagerMaster:54 - BlockManagerMaster stopped
2018-03-07 12:39:42 WARN MetricsSystem:66 - Stopping a MetricsSystem that is not running
2018-03-07 12:39:42 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped!
2018-03-07 12:39:42 INFO SparkContext:54 - Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: External scheduler cannot be instantiated
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2747)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:492)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2486)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:930)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:921)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [Pod] with name: [spark-pi-b6f8a60df70a3b9d869c4e305518f43a-driver] in namespace: [default] failed.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:62)
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:71)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:228)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:184)
at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.<init>(KubernetesClusterSchedulerBackend.scala:70)
at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterManager.createSchedulerBackend(KubernetesClusterManager.scala:120)
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2741)
... 8 more
Caused by: java.net.UnknownHostException: kubernetes.default.svc: Try again
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at okhttp3.Dns$1.lookup(Dns.java:39)
at okhttp3.internal.connection.RouteSelector.resetNextInetSocketAddress(RouteSelector.java:171)
at okhttp3.internal.connection.RouteSelector.nextProxy(RouteSelector.java:137)
at okhttp3.internal.connection.RouteSelector.next(RouteSelector.java:82)
at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:171)
at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:121)
at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:100)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:120)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at io.fabric8.kubernetes.client.utils.HttpClientUtils$2.intercept(HttpClientUtils.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:92)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:67)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:185)
at okhttp3.RealCall.execute(RealCall.java:69)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:377)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:343)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:312)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:295)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:783)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:217)
... 12 more
2018-03-07 12:39:42 INFO ShutdownHookManager:54 - Shutdown hook called
2018-03-07 12:39:42 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-64fe7ad8-669f-4591-a3f6-67440d450a44
So apparently the Kubernetes scheduler backend cannot reach the API server, because the driver pod is unable to resolve kubernetes.default.svc. Hmm... why?
I also configured RBAC with a spark service account as mentioned in the documentation, but the same problem occurs (I also tried a different namespace, same problem).
Here are the logs from kube-dns:
I0306 16:04:04.170889 1 dns.go:555] Could not find endpoints for service "spark-pi-b9e8b4c66fe83c4d94a8d46abc2ee8f5-driver-svc" in namespace "default". DNS records will be created once endpoints show up.
I0306 16:04:29.751201 1 dns.go:555] Could not find endpoints for service "spark-pi-0665ad323820371cb215063987a31e05-driver-svc" in namespace "default". DNS records will be created once endpoints show up.
I0306 16:06:26.414146 1 dns.go:555] Could not find endpoints for service "spark-pi-2bf24282e8033fa9a59098616323e267-driver-svc" in namespace "default". DNS records will be created once endpoints show up.
I0307 08:16:17.404971 1 dns.go:555] Could not find endpoints for service "spark-pi-3887031e031732108711154b2ec57d28-driver-svc" in namespace "default". DNS records will be created once endpoints show up.
I0307 08:17:11.682218 1 dns.go:555] Could not find endpoints for service "spark-pi-3d84127226393fc99e2fe035db56bfb5-driver-svc" in namespace "default". DNS records will be created once endpoints show up.
I really can't figure out why those errors come up.
Try switching the pod network add-on to something other than Calico, and check whether kube-dns is working properly.
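For example, a quick check (assuming kube-dns is deployed in kube-system with the usual k8s-app=kube-dns label):
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl logs -n kube-system -l k8s-app=kube-dns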
To create a custom service account, a user can use the kubectl create serviceaccount command. For example, the following command creates a service account named spark:
$ kubectl create serviceaccount spark
To grant a service account a Role or ClusterRole, a RoleBinding or ClusterRoleBinding is needed. To create a RoleBinding or ClusterRoleBinding, a user can use the kubectl create rolebinding (or clusterrolebinding for ClusterRoleBinding) command. For example, the following command creates a ClusterRoleBinding in the default namespace that grants the edit ClusterRole to the spark service account created above:
$ kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=default:spark --namespace=default
Depending on the version and setup of Kubernetes deployed, this default service account may or may not have the role that allows driver pods to create pods and services under the default Kubernetes RBAC policies. Sometimes users may need to specify a custom service account that has the right role granted. Spark on Kubernetes supports specifying a custom service account to be used by the driver pod through the configuration property spark.kubernetes.authenticate.driver.serviceAccountName=<service account name>. For example, to make the driver pod use the spark service account, a user simply adds the following option to the spark-submit command:
spark-submit --master k8s://https://192.168.1.5:6443 --deploy-mode cluster --name spark-pi --class org.apache.spark.examples.SparkPi --conf spark.executor.instances=5 --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark --conf spark.kubernetes.container.image=leeivan/spark:latest local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
I faced the same issue. If you're using minikube, try deleting it with minikube delete and starting fresh with minikube start.
Then create the serviceaccount and clusterrolebinding.
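For minikube, a minimal sequence along those lines (a sketch, reusing the spark service account and the edit binding from the docs excerpt above):
minikube delete
minikube start
kubectl create serviceaccount spark
kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=default:spark --namespace=default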
To add to openbrace's answer, and based on Ivan Lee's answer too: if you are using minikube, running the following command was enough for me:
kubectl create clusterrolebinding default --clusterrole=edit --serviceaccount=default:default --namespace=default
That way, I didn't have to change spark.kubernetes.authenticate.driver.serviceAccountName when using spark-submit.
Test whether a pod running your Spark image can resolve DNS.
Create a test-dns.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: testdns
  namespace: default
spec:
  containers:
  - name: testdns
    image: <your-spark-image>
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
Apply it:
kubectl apply -f test-dns.yaml
Run nslookup for kubernetes.default.svc:
kubectl exec -ti testdns -- nslookup kubernetes.default.svc
Run it multiple times to check whether your DNS resolves consistently.
Related
I first submit a Spark job like this from a PySpark file:
os.system(f'spark-submit --master local --jars ./examples/lib/app.jar app.py')
Then in the submitted app.py file, I create a new SparkSession like this:
spark = SparkSession.builder.appName(appName) \
.config('spark.jars') \
.getOrCreate()
Error message:
23/01/17 11:02:52 INFO SparkContext: Running Spark version 3.3.0
23/01/17 11:02:52 INFO ResourceUtils: ==============================================================
23/01/17 11:02:52 INFO ResourceUtils: No custom resources configured for spark.driver.
23/01/17 11:02:52 INFO ResourceUtils: ==============================================================
23/01/17 11:02:52 INFO SparkContext: Submitted application: symbolic_test
23/01/17 11:02:52 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
23/01/17 11:02:52 INFO ResourceProfile: Limiting resource is cpu
23/01/17 11:02:53 INFO ResourceProfileManager: Added ResourceProfile id: 0
23/01/17 11:02:53 INFO SecurityManager: Changing view acls to: annie
23/01/17 11:02:53 INFO SecurityManager: Changing modify acls to: annie
23/01/17 11:02:53 INFO SecurityManager: Changing view acls groups to:
23/01/17 11:02:53 INFO SecurityManager: Changing modify acls groups to:
23/01/17 11:02:53 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(annie); groups with view permissions: Set(); users with modify permissions: Set(annie); groups with modify permissions: Set()
23/01/17 11:02:53 INFO Utils: Successfully started service 'sparkDriver' on port 42141.
23/01/17 11:02:53 INFO SparkEnv: Registering MapOutputTracker
23/01/17 11:02:53 INFO SparkEnv: Registering BlockManagerMaster
23/01/17 11:02:53 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
23/01/17 11:02:53 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
23/01/17 11:02:53 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
23/01/17 11:02:53 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-e4cc3b01-a6d5-4454-ad2d-4d0f42066479
23/01/17 11:02:53 INFO MemoryStore: MemoryStore started with capacity 434.4 MiB
23/01/17 11:02:53 INFO SparkEnv: Registering OutputCommitCoordinator
23/01/17 11:02:53 INFO Utils: Successfully started service 'SparkUI' on port 4040.
23/01/17 11:02:53 ERROR SparkContext: Failed to add None to Spark environment
java.io.FileNotFoundException: Jar /home/annie/exampleApp/example/None not found
at org.apache.spark.SparkContext.addLocalJarFile$1(SparkContext.scala:1949)
at org.apache.spark.SparkContext.addJar(SparkContext.scala:2004)
at org.apache.spark.SparkContext.$anonfun$new$12(SparkContext.scala:507)
at org.apache.spark.SparkContext.$anonfun$new$12$adapted(SparkContext.scala:507)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:507)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:238)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.base/java.lang.Thread.run(Thread.java:829)
When creating the Spark session through PySpark, I get the above error messages, which only arise when I add .config('spark.jars').
I've set my $SPARK_HOME variable correctly...
Any help will be appreciated!
If your code sample is accurate, you are not assigning any value to the spark.jars key while creating the Spark session. Assigning the jar path as the value may solve the error:
SparkSession.builder.appName(appName) \
    .config('spark.jars', jar_path) \
    .getOrCreate()
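A minimal self-contained sketch of that fix, assuming the jar path from the question (./examples/lib/app.jar); spark.jars takes a comma-separated list of jar paths as its value, and calling .config('spark.jars') with no value is what produced the "Jar .../None not found" error above:
from pyspark.sql import SparkSession

# Pass the jar path(s) as the value; multiple jars are comma-separated.
spark = SparkSession.builder \
    .appName('symbolic_test') \
    .config('spark.jars', './examples/lib/app.jar') \
    .getOrCreate()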
Hi guys!
I'm having a problem when trying to execute a Spark job on Kubernetes (using the k8s spark-operator). I'm trying to access a local MinIO bucket using an s3a endpoint. My code works locally, but when I kubectl apply -f the .yaml to start the cluster, I get the following error:
++ id -u
+ myuid=1001
++ id -g
+ mygid=0
+ set +e
++ getent passwd 1001
+ uidentry=
+ set -e
+ '[' -z '' ']'
+ '[' -w /etc/passwd ']'
+ echo '1001:x:1001:0:anonymous uid:/opt/spark:/bin/false'
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ sort -t_ -k4 -n
+ sed 's/[^=]*=\(.*\)/\1/g'
+ readarray -t SPARK_EXECUTOR_JAVA_OPTS
+ '[' -n '' ']'
+ '[' -z ']'
+ '[' -z ']'
+ '[' -n '' ']'
+ '[' -z ']'
+ '[' -z x ']'
+ SPARK_CLASSPATH='/opt/spark/conf::/opt/spark/jars/*'
+ case "$1" in
+ shift 1
+ CMD=("$SPARK_HOME/bin/spark-submit" --conf "spark.driver.bindAddress=$SPARK_DRIVER_BIND_ADDRESS" --deploy-mode client "$#")
+ exec /usr/bin/tini -s -- /opt/spark/bin/spark-submit --conf spark.driver.bindAddress=172.17.0.6 --deploy-mode client --properties-file /opt/spark/conf/spark.properties --class org.apache.spark.deploy.PythonRunner local:///app/main.py
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/opt/spark/jars/spark-unsafe_2.12-3.1.1.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
21/09/23 15:13:50 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
21/09/23 15:13:55 INFO SparkContext: Running Spark version 3.1.1
21/09/23 15:13:55 INFO ResourceUtils: ==============================================================
21/09/23 15:13:55 INFO ResourceUtils: No custom resources configured for spark.driver.
21/09/23 15:13:55 INFO ResourceUtils: ==============================================================
21/09/23 15:13:55 INFO SparkContext: Submitted application: job-a
21/09/23 15:13:55 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
21/09/23 15:13:55 INFO ResourceProfile: Limiting resource is cpus at 1 tasks per executor
21/09/23 15:13:55 INFO ResourceProfileManager: Added ResourceProfile id: 0
21/09/23 15:13:55 INFO SecurityManager: Changing view acls to: 1001,root
21/09/23 15:13:55 INFO SecurityManager: Changing modify acls to: 1001,root
21/09/23 15:13:55 INFO SecurityManager: Changing view acls groups to:
21/09/23 15:13:55 INFO SecurityManager: Changing modify acls groups to:
21/09/23 15:13:55 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(1001, root); groups with view permissions: Set(); users with modify permissions: Set(1001, root); groups with modify permissions: Set()
21/09/23 15:13:55 INFO Utils: Successfully started service 'sparkDriver' on port 7078.
21/09/23 15:13:55 INFO SparkEnv: Registering MapOutputTracker
21/09/23 15:13:55 INFO SparkEnv: Registering BlockManagerMaster
21/09/23 15:13:55 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
21/09/23 15:13:55 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
21/09/23 15:13:55 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
21/09/23 15:13:55 INFO DiskBlockManager: Created local directory at /var/data/spark-db401653-53fc-4038-8a27-913af26bb90b/blockmgr-e5c8b6a8-76d5-4960-94c0-a30aa8a63997
21/09/23 15:13:55 INFO MemoryStore: MemoryStore started with capacity 516.0 MiB
21/09/23 15:13:55 INFO SparkEnv: Registering OutputCommitCoordinator
21/09/23 15:13:55 INFO Utils: Successfully started service 'SparkUI' on port 4040.
21/09/23 15:13:55 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://job-a-4901ff7c1337fcec-driver-svc.default.svc:4040
21/09/23 15:13:55 INFO SparkContext: Added JAR file:/tmp/spark-2396d3e2-06fb-40d1-a42f-a438d79ba31f/hadoop-aws-3.2.0.jar at spark://job-a-4901ff7c1337fcec-driver-svc.default.svc:7078/jars/hadoop-aws-3.2.0.jar with timestamp 1632410035212
21/09/23 15:13:55 INFO SparkContext: Added JAR file:/tmp/spark-2396d3e2-06fb-40d1-a42f-a438d79ba31f/aws-java-sdk-bundle-1.11.375.jar at spark://job-a-4901ff7c1337fcec-driver-svc.default.svc:7078/jars/aws-java-sdk-bundle-1.11.375.jar with timestamp 1632410035212
21/09/23 15:13:55 INFO SparkKubernetesClientFactory: Auto-configuring K8S client using current context from users K8S config file
21/09/23 15:13:56 INFO ExecutorPodsAllocator: Going to request 1 executors from Kubernetes for ResourceProfile Id: 0, target: 1 running: 0.
21/09/23 15:13:56 INFO BasicExecutorFeatureStep: Decommissioning not enabled, skipping shutdown script
21/09/23 15:13:56 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 7079.
21/09/23 15:13:56 INFO NettyBlockTransferService: Server created on job-a-4901ff7c1337fcec-driver-svc.default.svc:7079
21/09/23 15:13:56 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
21/09/23 15:13:56 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, job-a-4901ff7c1337fcec-driver-svc.default.svc, 7079, None)
21/09/23 15:13:56 INFO BlockManagerMasterEndpoint: Registering block manager job-a-4901ff7c1337fcec-driver-svc.default.svc:7079 with 516.0 MiB RAM, BlockManagerId(driver, job-a-4901ff7c1337fcec-driver-svc.default.svc, 7079, None)
21/09/23 15:13:56 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, job-a-4901ff7c1337fcec-driver-svc.default.svc, 7079, None)
21/09/23 15:13:56 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, job-a-4901ff7c1337fcec-driver-svc.default.svc, 7079, None)
21/09/23 15:14:01 INFO KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (172.17.0.7:33978) with ID 1, ResourceProfileId 0
21/09/23 15:14:02 INFO KubernetesClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
21/09/23 15:14:02 INFO BlockManagerMasterEndpoint: Registering block manager 172.17.0.7:38165 with 413.9 MiB RAM, BlockManagerId(1, 172.17.0.7, 38165, None)
21/09/23 15:14:02 ERROR TaskSchedulerImpl: Lost an executor 1 (already removed): Unable to create executor due to ./hadoop-aws-3.2.0.jar
Below are the Dockerfile I used to build the image and the .yaml file with the Kubernetes configuration for Spark.
YAML file:
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: job-a
namespace: default
spec:
type: Python
mode: cluster
image: "arthexbr77/sparkjob:latest"
imagePullPolicy: Always
mainApplicationFile: local:///app/main.py
sparkVersion: "3.1.1"
restartPolicy:
type: Never
hadoopConf:
"fs.s3a.endpoint": "http://127.0.0.1:9000"
driver:
cores: 1
coreLimit: "2000m"
memory: "1200m"
labels:
version: 3.1.1
serviceAccount: spark
executor:
cores: 1
instances: 1
memory: "1024m"
labels:
version: 3.1.1
deps:
jars:
- https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-aws/3.2.0/hadoop-aws-3.2.0.jar
- https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-bundle/1.11.375/aws-java-sdk-bundle-1.11.375.jar
Dockerfile:
FROM spark-base/spark-py:1.0.0
USER root:root
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY main.py .
USER 1001
It seems that the error is related to the two jars needed to connect to an s3a endpoint (hadoop-aws and the aws-java-sdk-bundle), but both are listed in the YAML file, so I'm really lost as to how to fix the problem.
Thanks for helping!
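One workaround sometimes used in this situation (a sketch, not a confirmed fix: it assumes the base image keeps Spark's jars in /opt/spark/jars and has curl available) is to bake the two S3A jars into the image instead of fetching them through deps.jars, so executors find them on the classpath at startup:
FROM spark-base/spark-py:1.0.0
USER root:root
WORKDIR /app
# Hypothetical addition: download the S3A connector jars into Spark's jar directory.
RUN curl -fsSL -o /opt/spark/jars/hadoop-aws-3.2.0.jar https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-aws/3.2.0/hadoop-aws-3.2.0.jar && \
    curl -fsSL -o /opt/spark/jars/aws-java-sdk-bundle-1.11.375.jar https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-bundle/1.11.375/aws-java-sdk-bundle-1.11.375.jar
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY main.py .
USER 1001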
I'm trying to run Spark on Kubernetes (using minikube with the VirtualBox or docker driver; I tested both) and now I have an error that I don't know how to solve.
The error is a "SparkException: External scheduler cannot be instantiated". I'm new to the Kubernetes world, so I really don't know if this is a newbie error, but I tried to resolve it by myself and failed.
Please help me.
The command and the error follow in the next lines.
I use this spark-submit command:
spark-submit --master k8s://https://192.168.99.102:8443 \
--deploy-mode cluster \
--name spark-pi \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=2 \
--executor-memory 1024m \
--conf spark.kubernetes.container.image=spark:latest \
local:///opt/spark/examples/jars/spark-examples_2.12-3.0.0.jar
And I got this error in the pod:
20/06/23 15:24:56 INFO SparkContext: Submitted application: Spark Pi
20/06/23 15:24:56 INFO SecurityManager: Changing view acls to: 185,luan
20/06/23 15:24:56 INFO SecurityManager: Changing modify acls to: 185,luan
20/06/23 15:24:56 INFO SecurityManager: Changing view acls groups to:
20/06/23 15:24:56 INFO SecurityManager: Changing modify acls groups to:
20/06/23 15:24:56 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(185, luan); groups with view permissions: Set(); users with modify permissions: Set(185, luan); groups with modify permissions: Set()
20/06/23 15:24:57 INFO Utils: Successfully started service 'sparkDriver' on port 7078.
20/06/23 15:24:57 INFO SparkEnv: Registering MapOutputTracker
20/06/23 15:24:57 INFO SparkEnv: Registering BlockManagerMaster
20/06/23 15:24:57 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/06/23 15:24:57 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/06/23 15:24:57 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
20/06/23 15:24:57 INFO DiskBlockManager: Created local directory at /var/data/spark-4f7b787b-ec75-4ae5-b703-f9f90ef130cb/blockmgr-1ef6d02a-48f6-4bd7-9d7d-fe2518850f5e
20/06/23 15:24:57 INFO MemoryStore: MemoryStore started with capacity 413.9 MiB
20/06/23 15:24:57 INFO SparkEnv: Registering OutputCommitCoordinator
20/06/23 15:24:57 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/06/23 15:24:57 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://spark-pi-a8278472e1c83236-driver-svc.default.svc:4040
20/06/23 15:24:57 INFO SparkContext: Added JAR local:///opt/spark/examples/jars/spark-examples_2.12-3.0.0.jar at file:/opt/spark/examples/jars/spark-examples_2.12-3.0.0.jar with timestamp 1592925897650
20/06/23 15:24:57 WARN SparkContext: The jar local:///opt/spark/examples/jars/spark-examples_2.12-3.0.0.jar has been added already. Overwriting of added jars is not supported in the current version.
20/06/23 15:24:57 INFO SparkKubernetesClientFactory: Auto-configuring K8S client using current context from users K8S config file
20/06/23 15:24:58 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: External scheduler cannot be instantiated
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2934)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:528)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2555)
at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$1(SparkSession.scala:930)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:30)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/pods/spark-pi-a8278472e1c83236-driver. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "spark-pi-a8278472e1c83236-driver" is forbidden: User "system:serviceaccount:default:default" cannot get resource "pods" in API group "" in the namespace "default".
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:568)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:505)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:471)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:430)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:395)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:376)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:845)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:214)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:168)
at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.$anonfun$driverPod$1(ExecutorPodsAllocator.scala:59)
at scala.Option.map(Option.scala:230)
at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.<init>(ExecutorPodsAllocator.scala:58)
at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterManager.createSchedulerBackend(KubernetesClusterManager.scala:113)
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2928)
... 19 more
20/06/23 15:24:58 INFO SparkUI: Stopped Spark web UI at http://spark-pi-a8278472e1c83236-driver-svc.default.svc:4040
20/06/23 15:24:58 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/06/23 15:24:58 INFO MemoryStore: MemoryStore cleared
20/06/23 15:24:58 INFO BlockManager: BlockManager stopped
20/06/23 15:24:58 INFO BlockManagerMaster: BlockManagerMaster stopped
20/06/23 15:24:58 WARN MetricsSystem: Stopping a MetricsSystem that is not running
20/06/23 15:24:58 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/06/23 15:24:58 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: External scheduler cannot be instantiated
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2934)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:528)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2555)
at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$1(SparkSession.scala:930)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:30)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/default/pods/spark-pi-a8278472e1c83236-driver. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "spark-pi-a8278472e1c83236-driver" is forbidden: User "system:serviceaccount:default:default" cannot get resource "pods" in API group "" in the namespace "default".
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:568)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:505)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:471)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:430)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:395)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:376)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:845)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:214)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:168)
at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.$anonfun$driverPod$1(ExecutorPodsAllocator.scala:59)
at scala.Option.map(Option.scala:230)
at org.apache.spark.scheduler.cluster.k8s.ExecutorPodsAllocator.<init>(ExecutorPodsAllocator.scala:58)
at org.apache.spark.scheduler.cluster.k8s.KubernetesClusterManager.createSchedulerBackend(KubernetesClusterManager.scala:113)
at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:2928)
... 19 more
20/06/23 15:24:58 INFO ShutdownHookManager: Shutdown hook called
20/06/23 15:24:58 INFO ShutdownHookManager: Deleting directory /var/data/spark-4f7b787b-ec75-4ae5-b703-f9f90ef130cb/spark-616edc5e-b42d-4c77-9f11-8465b4d69642
20/06/23 15:24:58 INFO ShutdownHookManager: Deleting directory /tmp/spark-71e3bd59-3b7d-4d72-a442-b0ad0c7092fb
Thank You!
PS: I'm using Spark 3.0 (the new version) and minikube 1.11.0.
Based on the log file:
Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "spark-pi-a8278472e1c83236-driver" is forbidden: User "system:serviceaccount:default:default" cannot get resource "pods" in API group "" in the namespace "default".
It looks like the default:default service account doesn't have edit permissions. You can run this to create the ClusterRoleBinding to add the permissions.
$ kubectl create clusterrolebinding default \
--clusterrole=edit --serviceaccount=default:default --namespace=default
You can take a look at this cheat sheet.
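To verify the binding took effect, you can ask the API server directly whether the service account may now get pods (kubectl auth can-i supports impersonating a service account):
kubectl auth can-i get pods --namespace=default --as=system:serviceaccount:default:default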
I am using the HDP 2.3 sandbox for consuming Kafka messages by running a spark-submit job.
I am putting some messages into Kafka as below:
kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic webevent
OR
kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic test --new-producer < myfile.txt
Now I need to consume the above messages from a Spark job, as shown below:
./bin/spark-submit --master spark://192.168.255.150:7077 --executor-memory 512m --class org.apache.spark.examples.streaming.JavaDirectKafkaWordCount lib/spark-examples-1.4.1-hadoop2.4.0.jar 192.168.255.150:2181 webevent 10
where 2181 is the ZooKeeper port.
I am getting the error shown below (please guide me on how to consume these messages from Kafka):
16/05/02 15:21:30 INFO SparkContext: Running Spark version 1.3.1
16/05/02 15:21:30 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/05/02 15:21:31 INFO SecurityManager: Changing view acls to: root
16/05/02 15:21:31 INFO SecurityManager: Changing modify acls to: root
16/05/02 15:21:31 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/05/02 15:21:31 INFO Slf4jLogger: Slf4jLogger started
16/05/02 15:21:31 INFO Remoting: Starting remoting
16/05/02 15:21:32 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@sandbox.hortonworks.com:53950]
16/05/02 15:21:32 INFO Utils: Successfully started service 'sparkDriver' on port 53950.
16/05/02 15:21:32 INFO SparkEnv: Registering MapOutputTracker
16/05/02 15:21:32 INFO SparkEnv: Registering BlockManagerMaster
16/05/02 15:21:32 INFO DiskBlockManager: Created local directory at /tmp/spark-c70b08b9-41a3-42c8-9d83-bc4258e299c6/blockmgr-c2d86de6-34a7-497c-8018-d3437a100e87
16/05/02 15:21:32 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
16/05/02 15:21:32 INFO HttpFileServer: HTTP File server directory is /tmp/spark-a8f7ade9-292c-42c4-9e54-43b3b3495b0c/httpd-65d36d04-1e2a-4e69-8d20-295465100070
16/05/02 15:21:32 INFO HttpServer: Starting HTTP Server
16/05/02 15:21:32 INFO Server: jetty-8.y.z-SNAPSHOT
16/05/02 15:21:32 INFO AbstractConnector: Started SocketConnector@0.0.0.0:37014
16/05/02 15:21:32 INFO Utils: Successfully started service 'HTTP file server' on port 37014.
16/05/02 15:21:32 INFO SparkEnv: Registering OutputCommitCoordinator
16/05/02 15:21:32 INFO Server: jetty-8.y.z-SNAPSHOT
16/05/02 15:21:32 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/05/02 15:21:32 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/05/02 15:21:32 INFO SparkUI: Started SparkUI at http://sandbox.hortonworks.com:4040
16/05/02 15:21:33 INFO SparkContext: Added JAR file:/usr/hdp/2.3.0.0-2130/spark/lib/spark-examples-1.4.1-hadoop2.4.0.jar at http://192.168.255.150:37014/jars/spark-examples-1.4.1-hadoop2.4.0.jar with timestamp 1462202493866
16/05/02 15:21:34 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@192.168.255.150:7077/user/Master...
16/05/02 15:21:34 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20160502152134-0000
16/05/02 15:21:34 INFO AppClient$ClientActor: Executor added: app-20160502152134-0000/0 on worker-20160502150437-sandbox.hortonworks.com-36920 (sandbox.hortonworks.com:36920) with 1 cores
16/05/02 15:21:34 INFO SparkDeploySchedulerBackend: Granted executor ID app-20160502152134-0000/0 on hostPort sandbox.hortonworks.com:36920 with 1 cores, 512.0 MB RAM
16/05/02 15:21:34 INFO AppClient$ClientActor: Executor updated: app-20160502152134-0000/0 is now RUNNING
16/05/02 15:21:34 INFO AppClient$ClientActor: Executor updated: app-20160502152134-0000/0 is now LOADING
16/05/02 15:21:34 INFO NettyBlockTransferService: Server created on 43440
16/05/02 15:21:34 INFO BlockManagerMaster: Trying to register BlockManager
16/05/02 15:21:34 INFO BlockManagerMasterActor: Registering block manager sandbox.hortonworks.com:43440 with 265.4 MB RAM, BlockManagerId(<driver>, sandbox.hortonworks.com, 43440)
16/05/02 15:21:34 INFO BlockManagerMaster: Registered BlockManager
16/05/02 15:21:35 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/05/02 15:21:35 INFO VerifiableProperties: Verifying properties
16/05/02 15:21:35 INFO VerifiableProperties: Property group.id is overridden to
16/05/02 15:21:35 INFO VerifiableProperties: Property zookeeper.connect is overridden to
16/05/02 15:21:35 INFO SimpleConsumer: Reconnect due to socket error: java.io.EOFException: Received -1 when reading from channel, socket has likely been closed.
Error: application failed with exception
org.apache.spark.SparkException: java.io.EOFException: Received -1 when reading from channel, socket has likely been closed.
at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:416)
at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:416)
at scala.util.Either.fold(Either.scala:97)
at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:415)
at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:532)
at org.apache.spark.streaming.kafka.KafkaUtils.createDirectStream(KafkaUtils.scala)
at org.apache.spark.examples.streaming.JavaDirectKafkaWordCount.main(JavaDirectKafkaWordCount.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:577)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:174)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:197)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
OR
when I use this:
./bin/spark-submit --master spark://192.168.255.150:7077 --executor-memory 512m --class org.apache.spark.examples.streaming.JavaDirectKafkaWordCount lib/spark-examples-1.4.1-hadoop2.4.0.jar 192.168.255.150:6667 webevent 10
where 6667 is Kafka's broker (message-producing) port, I am getting this error:
16/05/02 15:27:26 INFO SimpleConsumer: Reconnect due to socket error: java.nio.channels.ClosedChannelException
Error: application failed with exception
org.apache.spark.SparkException: java.nio.channels.ClosedChannelException
at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:416)
at org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:416)
I don't know if this can help:
./bin/spark-submit --class consumer.kafka.client.Consumer --master spark://192.168.255.150:7077 --executor-memory 1G lib/kafka-spark-consumer-1.0.6.jar 10
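For reference, JavaDirectKafkaWordCount uses Spark's direct (receiver-less) Kafka API, which takes a list of Kafka brokers rather than a ZooKeeper address. A sketch of the invocation under that assumption, using the advertised broker address that the producer commands above already use:
./bin/spark-submit --master spark://192.168.255.150:7077 --executor-memory 512m --class org.apache.spark.examples.streaming.JavaDirectKafkaWordCount lib/spark-examples-1.4.1-hadoop2.4.0.jar sandbox.hortonworks.com:6667 webevent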
Goal
Run our Scala Spark app jar in yarn-cluster mode. It works in standalone cluster mode and with yarn-client, but for some reason it does not run to completion in yarn-cluster mode.
Details
The last portion of the code it seems to execute is the assignment of the initial value to the DataFrame when reading the input file. It looks like it does not do anything after that. None of the logs look abnormal, and there are no warnings or errors either. It suddenly gets unregistered with status SUCCEEDED and everything gets killed. In any other deployment mode (e.g. yarn-client, standalone cluster mode) everything runs smoothly to completion.
15/07/22 15:57:00 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
I have also run this job on Spark 1.3.x and 1.4.x on a vanilla Spark/YARN cluster and on a CDH 5.4.3 cluster, all with the same results. What could possibly be the issue?
The job was run with the command below, and the input file is accessible through HDFS.
bin/spark-submit --master yarn-cluster --class AssocApp ../associationRulesScala/target/scala-2.10/AssociationRule_2.10.4-1.0.0.SNAPSHOT.jar hdfs://sparkMaster-hk:9000/user/root/BreastCancer.csv
Code snippets
This is the code in the area where the DataFrame is loaded. It spits out the log message "Uploading Dataframe..." but nothing else after that. Refer to the driver's logs below.
//...
logger.info("Uploading Dataframe from %s".format(filename))
val frame = sparkParams.sqlContext.csvFile(filename)
MDC.put("jobID",jobID.takeRight(3))
logger.info("Extracting Unique Vals from each of %d columns...".format(frame.columns.length))
private val uniqueVals = frame.columns.zipWithIndex.map(colname => (colname._2, colname._1, frame.select(colname._1).distinct.cache)).
//...
Driver logs
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/tmp/hadoop-root/nm-local-dir/usercache/root/filecache/60/spark-assembly-1.4.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/07/22 15:56:52 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
15/07/22 15:56:54 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1434116948302_0097_000001
15/07/22 15:56:55 INFO spark.SecurityManager: Changing view acls to: root
15/07/22 15:56:55 INFO spark.SecurityManager: Changing modify acls to: root
15/07/22 15:56:55 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/07/22 15:56:55 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread
15/07/22 15:56:55 INFO yarn.ApplicationMaster: Waiting for spark context initialization
15/07/22 15:56:55 INFO yarn.ApplicationMaster: Waiting for spark context initialization ...
15/07/22 15:56:56 INFO AssocApp$: Starting new Association Rules calculation. From File: hdfs://sparkMaster-hk:9000/user/root/BreastCancer.csv
15/07/22 15:56:56 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
15/07/22 15:56:57 INFO associationRules.primaryPackageSpark: Uploading Dataframe from hdfs://sparkMaster-hk:9000/user/root/BreastCancer.csv
15/07/22 15:56:57 INFO spark.SparkContext: Running Spark version 1.4.0
15/07/22 15:56:57 INFO spark.SecurityManager: Changing view acls to: root
15/07/22 15:56:57 INFO spark.SecurityManager: Changing modify acls to: root
15/07/22 15:56:57 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/07/22 15:56:57 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/07/22 15:56:57 INFO Remoting: Starting remoting
15/07/22 15:56:57 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@119.81.232.13:41459]
15/07/22 15:56:57 INFO util.Utils: Successfully started service 'sparkDriver' on port 41459.
15/07/22 15:56:57 INFO spark.SparkEnv: Registering MapOutputTracker
15/07/22 15:56:57 INFO spark.SparkEnv: Registering BlockManagerMaster
15/07/22 15:56:57 INFO storage.DiskBlockManager: Created local directory at /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1434116948302_0097/blockmgr-f0e66040-1fdb-4a05-87e1-160194829f84
15/07/22 15:56:57 INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
15/07/22 15:56:58 INFO spark.HttpFileServer: HTTP File server directory is /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1434116948302_0097/httpd-79b304a1-3cf4-4951-9e22-bbdfac435824
15/07/22 15:56:58 INFO spark.HttpServer: Starting HTTP Server
15/07/22 15:56:58 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/07/22 15:56:58 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:36021
15/07/22 15:56:58 INFO util.Utils: Successfully started service 'HTTP file server' on port 36021.
15/07/22 15:56:58 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/07/22 15:56:58 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15/07/22 15:56:58 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/07/22 15:56:58 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:53274
15/07/22 15:56:58 INFO util.Utils: Successfully started service 'SparkUI' on port 53274.
15/07/22 15:56:58 INFO ui.SparkUI: Started SparkUI at http://119.XX.XXX.XX:53274
15/07/22 15:56:58 INFO cluster.YarnClusterScheduler: Created YarnClusterScheduler
15/07/22 15:56:59 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 34498.
15/07/22 15:56:59 INFO netty.NettyBlockTransferService: Server created on 34498
15/07/22 15:56:59 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/07/22 15:56:59 INFO storage.BlockManagerMasterEndpoint: Registering block manager 119.81.232.13:34498 with 267.3 MB RAM, BlockManagerId(driver, 119.81.232.13, 34498)
15/07/22 15:56:59 INFO storage.BlockManagerMaster: Registered BlockManager
15/07/22 15:56:59 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as AkkaRpcEndpointRef(Actor[akka://sparkDriver/user/YarnAM#-819146876])
15/07/22 15:56:59 INFO client.RMProxy: Connecting to ResourceManager at sparkMaster-hk/119.81.232.24:8030
15/07/22 15:56:59 INFO yarn.YarnRMClient: Registering the ApplicationMaster
15/07/22 15:57:00 INFO yarn.YarnAllocator: Will request 2 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
15/07/22 15:57:00 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/07/22 15:57:00 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
15/07/22 15:57:00 INFO yarn.ApplicationMaster: Started progress reporter thread - sleep time : 5000
15/07/22 15:57:00 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
15/07/22 15:57:00 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
15/07/22 15:57:00 INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1434116948302_0097
15/07/22 15:57:00 INFO storage.DiskBlockManager: Shutdown hook called
15/07/22 15:57:00 INFO util.Utils: Shutdown hook called
15/07/22 15:57:00 INFO util.Utils: Deleting directory /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1434116948302_0097/httpd-79b304a1-3cf4-4951-9e22-bbdfac435824
15/07/22 15:57:00 INFO util.Utils: Deleting directory /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1434116948302_0097/userFiles-e01b4dd2-681c-4108-aec6-879774652c7a