Is it possible to configure the Beam portable runner with Spark configurations? - apache-spark

TL;DR
Is it possible to configure the Beam portable runner with Spark configurations? More precisely, is it possible to configure spark.driver.host in the Portable Runner?
Motivation
We currently have Airflow deployed in a Kubernetes cluster, and since we aim to use TensorFlow Extended we need Apache Beam. For our use case Spark is the appropriate runner, and because Airflow and TensorFlow are written in Python we need to use Apache Beam's Portable Runner (https://beam.apache.org/documentation/runners/spark/#portability).
The problem
The Portable Runner creates the Spark context inside its own container and leaves no room for configuring the driver DNS, so the executors inside the worker pods cannot communicate with the driver (the job server).
Setup
Following the Beam documentation, the job server was deployed in the same pod as Airflow so that the two containers can communicate over the local network.
Job server config:
- name: beam-spark-job-server
  image: apache/beam_spark_job_server:2.27.0
  args: ["--spark-master-url=spark://spark-master:7077"]
Job server/airflow service:
apiVersion: v1
kind: Service
metadata:
  name: airflow-scheduler
  labels:
    app: airflow-k8s
spec:
  type: ClusterIP
  selector:
    app: airflow-scheduler
  ports:
    - port: 8793
      protocol: TCP
      targetPort: 8793
      name: scheduler
    - port: 8099
      protocol: TCP
      targetPort: 8099
      name: job-server
    - port: 7077
      protocol: TCP
      targetPort: 7077
      name: spark-master
    - port: 8098
      protocol: TCP
      targetPort: 8098
      name: artifact
    - port: 8097
      protocol: TCP
      targetPort: 8097
      name: java-expansion
Ports 8097, 8098 and 8099 are related to the job server, 8793 to Airflow, and 7077 to the Spark master.
Development/Errors
When testing a simple Beam example from the airflow container (python -m apache_beam.examples.wordcount --output ./data_test/ --runner=PortableRunner --job_endpoint=localhost:8099 --environment_type=LOOPBACK), I get the following output on the Airflow pod:
Defaulting container name to airflow-scheduler.
Use 'kubectl describe pod/airflow-scheduler-local-f685b5bc7-9d7r6 -n airflow-main-local' to see all of the containers in this pod.
airflow@airflow-scheduler-local-f685b5bc7-9d7r6:/opt/airflow$ python -m apache_beam.examples.wordcount --output ./data_test/ --runner=PortableRunner --job_endpoint=localhost:8099 --environment_type=LOOPBACK
INFO:apache_beam.internal.gcp.auth:Setting socket default timeout to 60 seconds.
INFO:apache_beam.internal.gcp.auth:socket default timeout is 60.0 seconds.
INFO:oauth2client.client:Timeout attempting to reach GCE metadata service.
WARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
Connecting anonymously.
INFO:apache_beam.runners.worker.worker_pool_main:Listening for workers at localhost:35837
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter.
INFO:root:Default Python SDK image for environment is apache/beam_python3.7_sdk:2.27.0
INFO:apache_beam.runners.portability.portable_runner:Environment "LOOPBACK" has started a component necessary for the execution. Be sure to run the pipeline using
with Pipeline() as p:
p.apply(..)
This ensures that the pipeline finishes before this program exits.
INFO:apache_beam.runners.portability.portable_runner:Job state changed to STOPPED
INFO:apache_beam.runners.portability.portable_runner:Job state changed to STARTING
INFO:apache_beam.runners.portability.portable_runner:Job state changed to RUNNING
And the worker log:
21/02/19 19:50:00 INFO Worker: Asked to launch executor app-20210219194804-0000/47 for BeamApp-root-0219194747-7d7938cf_51452c51-dffe-4c61-bcb7-60c7779e3256
21/02/19 19:50:00 INFO SecurityManager: Changing view acls to: root
21/02/19 19:50:00 INFO SecurityManager: Changing modify acls to: root
21/02/19 19:50:00 INFO SecurityManager: Changing view acls groups to:
21/02/19 19:50:00 INFO SecurityManager: Changing modify acls groups to:
21/02/19 19:50:00 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
21/02/19 19:50:00 INFO ExecutorRunner: Launch command: "/usr/local/openjdk-8/bin/java" "-cp" "/opt/spark/conf/:/opt/spark/jars/*" "-Xmx1024M" "-Dspark.driver.port=44447" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@airflow-scheduler-local-f685b5bc7-9d7r6:44447" "--executor-id" "47" "--hostname" "172.18.0.3" "--cores" "1" "--app-id" "app-20210219194804-0000" "--worker-url" "spark://Worker@172.18.0.3:35837"
21/02/19 19:50:02 INFO Worker: Executor app-20210219194804-0000/47 finished with state EXITED message Command exited with code 1 exitStatus 1
21/02/19 19:50:02 INFO ExternalShuffleBlockResolver: Clean up non-shuffle files associated with the finished executor 47
21/02/19 19:50:02 INFO ExternalShuffleBlockResolver: Executor is not registered (appId=app-20210219194804-0000, execId=47)
21/02/19 19:50:02 INFO Worker: Asked to launch executor app-20210219194804-0000/48 for BeamApp-root-0219194747-7d7938cf_51452c51-dffe-4c61-bcb7-60c7779e3256
21/02/19 19:50:02 INFO SecurityManager: Changing view acls to: root
21/02/19 19:50:02 INFO SecurityManager: Changing modify acls to: root
21/02/19 19:50:02 INFO SecurityManager: Changing view acls groups to:
21/02/19 19:50:02 INFO SecurityManager: Changing modify acls groups to:
21/02/19 19:50:02 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
21/02/19 19:50:02 INFO ExecutorRunner: Launch command: "/usr/local/openjdk-8/bin/java" "-cp" "/opt/spark/conf/:/opt/spark/jars/*" "-Xmx1024M" "-Dspark.driver.port=44447" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@airflow-scheduler-local-f685b5bc7-9d7r6:44447" "--executor-id" "48" "--hostname" "172.18.0.3" "--cores" "1" "--app-id" "app-20210219194804-0000" "--worker-url" "spark://Worker@172.18.0.3:35837"
21/02/19 19:50:04 INFO Worker: Executor app-20210219194804-0000/48 finished with state EXITED message Command exited with code 1 exitStatus 1
21/02/19 19:50:04 INFO ExternalShuffleBlockResolver: Clean up non-shuffle files associated with the finished executor 48
21/02/19 19:50:04 INFO ExternalShuffleBlockResolver: Executor is not registered (appId=app-20210219194804-0000, execId=48)
21/02/19 19:50:04 INFO Worker: Asked to launch executor app-20210219194804-0000/49 for BeamApp-root-0219194747-7d7938cf_51452c51-dffe-4c61-bcb7-60c7779e3256
21/02/19 19:50:04 INFO SecurityManager: Changing view acls to: root
21/02/19 19:50:04 INFO SecurityManager: Changing modify acls to: root
21/02/19 19:50:04 INFO SecurityManager: Changing view acls groups to:
21/02/19 19:50:04 INFO SecurityManager: Changing modify acls groups to:
21/02/19 19:50:04 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
21/02/19 19:50:04 INFO ExecutorRunner: Launch command: "/usr/local/openjdk-8/bin/java" "-cp" "/opt/spark/conf/:/opt/spark/jars/*" "-Xmx1024M" "-Dspark.driver.port=44447" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@airflow-scheduler-local-f685b5bc7-9d7r6:44447" "--executor-id" "49" "--hostname" "172.18.0.3" "--cores" "1" "--app-id" "app-20210219194804-0000" "--worker-url" "spark://Worker@172.18.0.3:35837"
.
.
.
As we can see, the executors keep exiting constantly, and as far as I know this is caused by the missing communication between the executor and the driver (the job server in this case). Also, the "--driver-url" is resolved to the driver pod's name plus the random port set by "-Dspark.driver.port".
Since we can't define the name of the service, the worker tries to use the driver's original hostname together with a randomly generated port. Because this configuration comes from the driver, changing the default conf files on the worker/master has no effect.
Following this answer as an example, I tried setting the environment variable SPARK_PUBLIC_DNS on the job server, but it didn't result in any changes in the worker logs.
Observation
Running a Spark job directly in Kubernetes with
kubectl run spark-base --rm -it --labels="app=spark-client" --image bde2020/spark-base:2.4.5-hadoop2.7 -- bash ./spark/bin/pyspark --master spark://spark-master:7077 --conf spark.driver.host=spark-client
with the following service:
apiVersion: v1
kind: Service
metadata:
  name: spark-client
spec:
  selector:
    app: spark-client
  clusterIP: None
I get a fully working pyspark shell. If I omit the --conf parameter, I get the same behavior as in the first setup (executors exiting indefinitely):
21/02/19 20:21:02 INFO Worker: Executor app-20210219202050-0002/4 finished with state EXITED message Command exited with code 1 exitStatus 1
21/02/19 20:21:02 INFO ExternalShuffleBlockResolver: Clean up non-shuffle files associated with the finished executor 4
21/02/19 20:21:02 INFO ExternalShuffleBlockResolver: Executor is not registered (appId=app-20210219202050-0002, execId=4)
21/02/19 20:21:02 INFO Worker: Asked to launch executor app-20210219202050-0002/5 for Spark shell
21/02/19 20:21:02 INFO SecurityManager: Changing view acls to: root
21/02/19 20:21:02 INFO SecurityManager: Changing modify acls to: root
21/02/19 20:21:02 INFO SecurityManager: Changing view acls groups to:
21/02/19 20:21:02 INFO SecurityManager: Changing modify acls groups to:
21/02/19 20:21:02 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
21/02/19 20:21:02 INFO ExecutorRunner: Launch command: "/usr/local/openjdk-8/bin/java" "-cp" "/opt/spark/conf/:/opt/spark/jars/*" "-Xmx1024M" "-Dspark.driver.port=46161" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@spark-base:46161" "--executor-id" "5" "--hostname" "172.18.0.20" "--cores" "1" "--app-id" "app-20210219202050-0002" "--worker-url" "spark://Worker@172.18.0.20:45151"

I have three solutions for you to choose from, depending on your deployment requirements. In order of difficulty:
1. Use the Spark "uber jar" job server. This starts an embedded job server inside the Spark master, instead of using a standalone job server in a container. This would simplify your deployment a lot, since you would not need to start the beam_spark_job_server container at all.
python -m apache_beam.examples.wordcount \
--output ./data_test/ \
--runner=SparkRunner \
--spark_submit_uber_jar \
--spark_master_url=spark://spark-master:7077 \
--environment_type=LOOPBACK
2. You can pass the properties through a Spark configuration file. Create the Spark configuration file, and add spark.driver.host and whatever other properties you need. In the docker run command for the job server, mount that configuration file to the container, and set the SPARK_CONF_DIR environment variable to point to that directory.
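For illustration, a minimal sketch of this approach (the paths and the hostname airflow-scheduler are assumptions based on the setup in the question, not verified values):

# Hypothetical Spark config file telling executors which host the driver advertises
mkdir -p ./spark-conf
cat > ./spark-conf/spark-defaults.conf <<'EOF'
spark.driver.host  airflow-scheduler
EOF

# Mount it into the job server container and point SPARK_CONF_DIR at it
docker run \
  -v "$(pwd)/spark-conf:/opt/spark-conf" \
  -e SPARK_CONF_DIR=/opt/spark-conf \
  apache/beam_spark_job_server:2.27.0 \
  --spark-master-url=spark://spark-master:7077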
3. If neither of those works for you, you can alternatively build your own customized version of the job server container. Pull the Beam source from GitHub. Check out the release branch you want to use (e.g. git checkout origin/release-2.28.0). Modify the entrypoint spark-job-server.sh and set -Dspark.driver.host=x there. Then build the container using ./gradlew :runners:spark:job-server:container:docker -Pdocker-repository-root="your-repo" -Pdocker-tag="your-tag".
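For example, the steps for the custom container might look like this (the repository root and tag are placeholders, and the exact location of spark-job-server.sh in the source tree may differ between releases):

git clone https://github.com/apache/beam.git && cd beam
git checkout origin/release-2.28.0
# Edit the spark-job-server.sh entrypoint and add e.g. -Dspark.driver.host=<your-host>
# to the java invocation it launches, then build and push the image:
./gradlew :runners:spark:job-server:container:docker \
  -Pdocker-repository-root="your-repo" -Pdocker-tag="your-tag"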

Let me revise the answer. The job server needs to be able to communicate with the workers, and vice versa; the executors keep exiting because this communication is missing. You need to configure the cluster so that they can reach each other, and a Kubernetes headless service can solve this.
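As an illustration only, a minimal sketch of such a headless service (the name beam-driver and the selector are assumptions based on the setup in the question; the driver also has to advertise that name, e.g. via spark.driver.host, for the executors to reach it):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: beam-driver        # hypothetical name the executors would resolve
spec:
  clusterIP: None          # headless: DNS resolves directly to the pod IP
  selector:
    app: airflow-scheduler # label of the pod running the job server/driver
EOF

This mirrors the headless spark-client service that made the plain pyspark shell work in the question's Observation section.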
For a working example, see https://github.com/cometta/python-apache-beam-spark. If it is useful for you, please 'Star' the repository.

Related

Apache Spark on IPv6

I am trying to install Spark on IPv6. The spark-master comes up on the IPv6 DNS hostname, but the spark worker node doesn't start; whether I pass an IP or a DNS name, the error is the same. I need to use LOCAL_WORKER_IP=127.0.0.1 to make the spark worker start.
Spark worker log:
fd74:ca9b:3a09:868c:172:18:0:462a spark-master.t253-u000265.svc.cluster.local
21/08/09 16:41:40 INFO Worker: Started daemon with process name: 10@v4-virtio-spark-worker-zjgxs
21/08/09 16:41:40 INFO SignalUtils: Registered signal handler for TERM
21/08/09 16:41:40 INFO SignalUtils: Registered signal handler for HUP
21/08/09 16:41:40 INFO SignalUtils: Registered signal handler for INT
21/08/09 16:41:41 WARN NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
21/08/09 16:41:41 INFO SecurityManager: Changing view acls to: root
21/08/09 16:41:41 INFO SecurityManager: Changing modify acls to: root
21/08/09 16:41:41 INFO SecurityManager: Changing view acls groups to:
21/08/09 16:41:41 INFO SecurityManager: Changing modify acls groups to:
21/08/09 16:41:41 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
21/08/09 16:41:42 INFO Utils: Successfully started service 'sparkWorker' on port 37816
21/08/09 16:41:42 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[main,5,main]
java.lang.AssertionError: assertion failed: Expected hostname (not IP) but got fd74:ca9b:3a09:868c:172:18:0:488e
at scala.Predef$.assert(Predef.scala:170)
at org.apache.spark.util.Utils$.checkHost(Utils.scala:1014)
at org.apache.spark.deploy.worker.Worker.<init>(Worker.scala:60)
at org.apache.spark.deploy.worker.Worker$.startRpcEnvAndEndpoint(Worker.scala:811)
at org.apache.spark.deploy.worker.Worker$.main(Worker.scala:779)
at org.apache.spark.deploy.worker.Worker.main(Worker.scala)
21/08/09 16:41:42 INFO ShutdownHookManager: Shutdown hook called
Does anyone know if the Spark worker can be configured on IPv6 using any configuration?

Spark cluster: initial job has not accepted any resources and executors keep exiting

I have a Spark cluster using cloud resources on two instances, one as master and one as worker. The total resources are 4 cores and 10 GB of RAM.
I can start the shell, and the worker registers successfully, but it fails when I run even simple code.
The error from the shell is:
Spark version:2.3.0
System: CentOS v7
The firewalls are stopped.
Here is the config:
export JAVA_HOME=/usr/java/jdk1.8.0_144
export SPARK_MASTER_IP=IP
export PYSPARK_PYTHON=/opt/anaconda3/bin/python
export SPARK_WORKER_MEMORY=2g
export SPARK_WORK_INSTANCES=1
export SPARK_WORkER_CORES=4
export SPARK_EXECUTOR_MEMORY=1g
I set up another Spark cluster with a similar config using three physical machines, and it worked well. At the start I got the same error, but I solved it by stopping the firewalls. Now I want to set up the cluster on the cloud, and unfortunately I get the same error but couldn't resolve it with the same solution. I am curious whether it is a port problem, because I only opened ports 80, 4040, 6066, 7077, 8080, 8081 and 8787.
Here are the logs:
Master log:
2018-04-12 13:09:14 INFO Master:54 - Registering app Spark shell
2018-04-12 13:09:14 INFO Master:54 - Registered app Spark shell with ID app-20180412130914-0000
2018-04-12 13:09:14 INFO Master:54 - Launching executor app-20180412130914-0000/0 on worker worker-20180411144020-192.**.**.**-44986
2018-04-12 13:11:15 INFO Master:54 - Removing executor app-20180412130914-0000/0 because it is EXITED
2018-04-12 13:11:15 INFO Master:54 - Launching executor app-20180412130914-0000/1 on worker worker-20180411144020-192.**.**.**-44986
2018-04-12 13:13:16 INFO Master:54 - Removing executor app-20180412130914-0000/1 because it is EXITED
2018-04-12 13:13:16 INFO Master:54 - Launching executor app-20180412130914-0000/2 on worker worker-20180411144020-192.**.**.**-44986
2018-04-12 13:15:17 INFO Master:54 - Removing executor app-20180412130914-0000/2 because it is EXITED
2018-04-12 13:15:17 INFO Master:54 - Launching executor app-20180412130914-0000/3 on worker worker-20180411144020-192.**.**.**-44986
2018-04-12 13:16:15 INFO Master:54 - Removing app app-20180412130914-0000
2018-04-12 13:16:15 INFO Master:54 - 192.**.**.**:39766 got disassociated, removing it.
2018-04-12 13:16:15 INFO Master:54 - IP:39928 got disassociated, removing it.
2018-04-12 13:16:15 WARN Master:66 - Got status update for unknown executor app-20180412130914-0000/3
Worker log:
2018-04-12 13:09:12 INFO Worker:54 - Asked to launch executor app-20180412130914-0000/0 for Spark shell
2018-04-12 13:09:12 INFO SecurityManager:54 - Changing view acls to: root
2018-04-12 13:09:12 INFO SecurityManager:54 - Changing modify acls to: root
2018-04-12 13:09:12 INFO SecurityManager:54 - Changing view acls groups to:
2018-04-12 13:09:12 INFO SecurityManager:54 - Changing modify acls groups to:
2018-04-12 13:09:12 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
2018-04-12 13:09:12 INFO ExecutorRunner:54 - Launch command: "/usr/java/jdk1.8.0_144/bin/java" "-cp" "/opt/spark-2.3.0-bin-hadoop2.7/conf/:/opt/spark-2.3.0-bin-hadoop2.7/jars/*" "-Xmx1024M" "-Dspark.driver.port=39928" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@IP:39928" "--executor-id" "0" "--hostname" "192.**.**.**" "--cores" "4" "--app-id" "app-20180412130914-0000" "--worker-url" "spark://Worker@192.**.**.**:44986"
2018-04-12 13:11:13 INFO Worker:54 - Executor app-20180412130914-0000/0 finished with state EXITED message Command exited with code 1 exitStatus 1
2018-04-12 13:11:13 INFO Worker:54 - Asked to launch executor app-20180412130914-0000/1 for Spark shell
2018-04-12 13:11:13 INFO SecurityManager:54 - Changing view acls to: root
2018-04-12 13:11:13 INFO SecurityManager:54 - Changing modify acls to: root
2018-04-12 13:11:13 INFO SecurityManager:54 - Changing view acls groups to:
2018-04-12 13:11:13 INFO SecurityManager:54 - Changing modify acls groups to:
2018-04-12 13:11:13 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
2018-04-12 13:11:13 INFO ExecutorRunner:54 - Launch command: "/usr/java/jdk1.8.0_144/bin/java" "-cp" "/opt/spark-2.3.0-bin-hadoop2.7/conf/:/opt/spark-2.3.0-bin-hadoop2.7/jars/*" "-Xmx1024M" "-Dspark.driver.port=39928" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@spark-master.novalocal:39928" "--executor-id" "1" "--hostname" "192.**.**.**" "--cores" "4" "--app-id" "app-20180412130914-0000" "--worker-url" "spark://Worker@192.**.**.**:44986"
2018-04-12 13:13:15 INFO Worker:54 - Executor app-20180412130914-0000/1 finished with state EXITED message Command exited with code 1 exitStatus 1
2018-04-12 13:13:15 INFO Worker:54 - Asked to launch executor app-20180412130914-0000/2 for Spark shell
2018-04-12 13:13:15 INFO SecurityManager:54 - Changing view acls to: root
2018-04-12 13:13:15 INFO SecurityManager:54 - Changing modify acls to: root
2018-04-12 13:13:15 INFO SecurityManager:54 - Changing view acls groups to:
2018-04-12 13:13:15 INFO SecurityManager:54 - Changing modify acls groups to:
2018-04-12 13:13:15 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
2018-04-12 13:13:15 INFO ExecutorRunner:54 - Launch command: "/usr/java/jdk1.8.0_144/bin/java" "-cp" "/opt/spark-2.3.0-bin-hadoop2.7/conf/:/opt/spark-2.3.0-bin-hadoop2.7/jars/*" "-Xmx1024M" "-Dspark.driver.port=39928" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@spark-master.novalocal:39928" "--executor-id" "2" "--hostname" "192.**.**.**" "--cores" "4" "--app-id" "app-20180412130914-0000" "--worker-url" "spark://Worker@192.**.**.**:44986"
2018-04-12 13:15:16 INFO Worker:54 - Executor app-20180412130914-0000/2 finished with state EXITED message Command exited with code 1 exitStatus 1

EMR 5.0 + Spark getting stuck in an endless loop

I am trying to deploy Spark 2.0 Streaming on Amazon EMR 5.0.
It seems that the application gets stuck in an endless loop, logging "INFO Client: Application report for application_14111979683_1111 (state: ACCEPTED)" over and over, and then exits.
Here is how I am trying to submit it through the command line:
aws emr add-steps --cluster-id --steps Type=Spark,Name="Spark Program",ActionOnFailure=CONTINUE,Args=[--deploy-mode,cluster,--class,,s3://.jar]
Any idea?
Thanks,
Eran
16/08/30 15:43:27 INFO SecurityManager: Changing view acls to: hadoop
16/08/30 15:43:27 INFO SecurityManager: Changing modify acls to: hadoop
16/08/30 15:43:27 INFO SecurityManager: Changing view acls groups to:
16/08/30 15:43:27 INFO SecurityManager: Changing modify acls groups to:
16/08/30 15:43:27 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
16/08/30 15:43:27 INFO Client: Submitting application application_14111979683_1111 to ResourceManager
16/08/30 15:43:27 INFO YarnClientImpl: Submitted application application_14111979683_1111
16/08/30 15:43:28 INFO Client: Application report for application_14111979683_1111 (state: ACCEPTED)
16/08/30 15:43:28 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1472571807467
final status: UNDEFINED
tracking URL: http://xxxxxx:20888/proxy/application_14111979683_1111/
user: hadoop
16/08/30 15:43:29 INFO Client: Application report for application_14111979683_1111 (state: ACCEPTED)
and this the exception thrown:
16/08/31 08:14:48 INFO Client:
client token: N/A
diagnostics: Application application_1472630652740_0001 failed 2 times due to AM Container for appattempt_1472630652740_0001_000002 exited with exitCode: 13
For more detailed output, check application tracking page:http://ip-10-0-0-8.eu-west-1.compute.internal:8088/cluster/app/application_1472630652740_0001Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1472630652740_0001_02_000001
Exit code: 13
Stack trace: ExitCodeException exitCode=13:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
EMR is actually a wrapper around YARN, so we need to add "--master yarn" as an argument to the deployment command line.
Example:
aws emr add-steps --cluster-id j-XXXXXXXXX --steps Type=Spark,Name="Spark Program",ActionOnFailure=CONTINUE,Args=[--deploy-mode,cluster,--master,yarn,--class,com.xxx.MyMainClass,s3://]
Another thing that is needed is removing sparkConf.setMaster("local[*]") from the initialization of the Spark conf.

Can't connect slaves to master in Spark

I am using 4 instances on Compute Engine, each running Spark set up with Cloudera Manager. I have no problems starting the master and connecting to it in my local browser, where it shows up as spark://instance-1:7077. When I run start-slave on the remaining instances I get no errors, until I look at the log:
16/05/02 13:10:18 INFO worker.Worker: Started daemon with process name: 12612@instance-2.c.cluster1-1294.internal
16/05/02 13:10:18 INFO worker.Worker: Registered signal handlers for [TERM, HUP, INT]
16/05/02 13:10:18 INFO spark.SecurityManager: Changing view acls to: root
16/05/02 13:10:18 INFO spark.SecurityManager: Changing modify acls to: root
16/05/02 13:10:18 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with mod$
16/05/02 13:10:19 INFO util.Utils: Successfully started service 'sparkWorker' on port 60270.
16/05/02 13:10:19 INFO worker.Worker: Starting Spark worker 10.142.0.3:60270 with 2 cores, 6.3 GB RAM
16/05/02 13:10:19 INFO worker.Worker: Running Spark version 1.6.0
16/05/02 13:10:19 INFO worker.Worker: Spark home: /opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/spark
16/05/02 13:10:19 ERROR worker.Worker: Failed to create work directory /opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/spark/work
If I use mkdir to create 'work', it throws an error saying the directory already exists:
mkdir: cannot create directory ‘work’: File exists
The file does exist, and when I use ls to find it, it is highlighted in red with a black background. Any help would be appreciated.
Maybe this is a permission issue. Try this:
$ sudo chown -R your_userName:your_groupName /opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/spark
Now change the mode of the above path:
$ sudo chmod 777 /opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/spark
Also, all the slaves must have SSH access to each other and be able to talk to one another.
And copy all of the Spark configuration files to the slave nodes as well.
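A hypothetical example of that last step (the conf path is taken from the Spark home shown in the log above; the slave hostnames are placeholders):

# Push the Spark configuration from the master to each slave
for host in instance-2 instance-3 instance-4; do
  scp -r /opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/spark/conf \
      "$host":/opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/lib/spark/
done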

Set the IP of the Mesos master on a slave

I have three Mesos slave nodes and one master on 10.14.56.157, 10.14.56.159, 10.14.56.160 and 10.14.56.156 respectively. The names of the machines are worker1, worker2, worker3 and master.
I managed to set up the Mesos cluster correctly (I believe). The web UI on 10.0.0.4:5050 shows all three slaves. Then I run a Spark shell on the cluster. Everything initially works fine: the shell starts, the UI shows a new framework started, etc. Then I try to run a simple test:
val numbers = sc.parallelize(1 to 1000000, 1000)
which works fine and then
numbers.count
Of course this is when Spark actually does some work. So it starts the tasks and sends them to the slaves (I can see this in the logs), but then none of the tasks completes (status: LOST). Spark retries each task up to 4 times and eventually gives up. I looked into the logs on the slave machines (via the sandbox link in the UI) and got the following output:
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0227 13:47:59.842319 17015 fetcher.cpp:76] Fetching URI '/home/user01/spark-1.2.1-bin-hadoop1.tgz'
I0227 13:47:59.842658 17015 fetcher.cpp:179] Copying resource from '/home/user01/spark-1.2.1-bin-hadoop1.tgz' to '/tmp/mesos/slaves/20150226-160235-2620919306-5050-14323-1/frameworks/20150227-132220-2620919306-5050-30420-0001/executors/20150226-160235-2620919306-5050-14323-1/runs/1978f267-cb47-4a6c-bd1f-69e99c00ae13'
I0227 13:48:09.896682 17015 fetcher.cpp:64] Extracted resource '/tmp/mesos/slaves/20150226-160235-2620919306-5050-14323-1/frameworks/20150227-132220-2620919306-5050-30420-0001/executors/20150226-160235-2620919306-5050-14323-1/runs/1978f267-cb47-4a6c-bd1f-69e99c00ae13/spark-1.2.1-bin-hadoop1.tgz' into '/tmp/mesos/slaves/20150226-160235-2620919306-5050-14323-1/frameworks/20150227-132220-2620919306-5050-30420-0001/executors/20150226-160235-2620919306-5050-14323-1/runs/1978f267-cb47-4a6c-bd1f-69e99c00ae13'
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/02/27 13:48:11 INFO MesosExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
I0227 13:48:11.493357 17124 exec.cpp:132] Version: 0.20.1
I0227 13:48:11.496057 17142 exec.cpp:206] Executor registered on slave 20150226-160235-2620919306-5050-14323-1
15/02/27 13:48:11 INFO MesosExecutorBackend: Registered with Mesos as executor ID 20150226-160235-2620919306-5050-14323-1 with 1 cpus
15/02/27 13:48:11 INFO Executor: Starting executor ID 20150226-160235-2620919306-5050-14323-1 on host 10.14.56.160
15/02/27 13:48:11 INFO SecurityManager: Changing view acls to: user01
15/02/27 13:48:11 INFO SecurityManager: Changing modify acls to: user01
15/02/27 13:48:11 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(user01); users with modify permissions: Set(user01)
15/02/27 13:48:12 INFO Slf4jLogger: Slf4jLogger started
15/02/27 13:48:12 INFO Remoting: Starting remoting
15/02/27 13:48:12 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkExecutor@10.14.56.160:42869]
15/02/27 13:48:12 INFO Utils: Successfully started service 'sparkExecutor' on port 42869.
15/02/27 13:48:12 INFO AkkaUtils: Connecting to MapOutputTracker: akka.tcp://sparkDriver@master:48886/user/MapOutputTracker
15/02/27 13:48:12 WARN Remoting: Tried to associate with unreachable remote address [akka.tcp://sparkDriver@master:48886]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: master: Name or service not known
akka.actor.ActorNotFound: Actor not found for: ActorSelection[Anchor(akka.tcp://sparkDriver@master:48886/), Path(/user/MapOutputTracker)]
at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:65)
at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:63)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:74)
at akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:110)
at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:73)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:267)
at akka.actor.EmptyLocalActorRef.specialHandle(ActorRef.scala:508)
at akka.actor.DeadLetterActorRef.specialHandle(ActorRef.scala:541)
at akka.actor.DeadLetterActorRef.$bang(ActorRef.scala:531)
at akka.remote.RemoteActorRefProvider$RemoteDeadLetterActorRef.$bang(RemoteActorRefProvider.scala:87)
at akka.remote.EndpointWriter.postStop(Endpoint.scala:561)
at akka.actor.Actor$class.aroundPostStop(Actor.scala:475)
at akka.remote.EndpointActor.aroundPostStop(Endpoint.scala:415)
at akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:210)
at akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:172)
at akka.actor.ActorCell.terminate(ActorCell.scala:369)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:462)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Exception in thread "Thread-1" I0227 13:48:12.364940 17142 exec.cpp:413] Deactivating the executor libprocess
The line where the error occurs says:
Tried to associate with unreachable remote address [akka.tcp://sparkDriver@master:48886]
It seems to me that the slave cannot resolve the name master to the master's IP. Is that correct? If so, how do I change it to the actual IP? If not, how do I fix it? Thanks!
What happens if you type ping master on one of the slave machines? If that fails, that's your problem, and you could fix it by adding a line to each slave's /etc/hosts file pointing master to the correct IP.
You could also try setting spark.driver.host to its IP when launching the spark driver, to change what "host" it advertises itself as.
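A minimal sketch of both suggestions (10.14.56.156 is the master's IP from the question; adjust to your environment):

# On each slave, check whether the driver's hostname resolves at all
ping -c 1 master

# If it does not, map the name to the master's real IP on every slave
echo "10.14.56.156  master" | sudo tee -a /etc/hosts

# Or make the driver advertise an address the slaves can reach
spark-shell --master mesos://10.14.56.156:5050 --conf spark.driver.host=10.14.56.156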
