EMR 5.0 + Spark getting stuck in an endless loop - apache-spark

I am trying to deploy a Spark 2.0 Streaming application over Amazon EMR 5.0.
The application seems to get stuck in an endless loop, logging
"INFO Client: Application report for application_14111979683_1111 (state: ACCEPTED)"
over and over, and then exits.
Here is how I am trying to submit it through the command line:
aws emr add-steps --cluster-id --steps Type=Spark,Name="Spark Program",ActionOnFailure=CONTINUE,Args=[--deploy-mode,cluster,--class,,s3://.jar]
Any idea?
Thanks,
Eran
16/08/30 15:43:27 INFO SecurityManager: Changing view acls to: hadoop
16/08/30 15:43:27 INFO SecurityManager: Changing modify acls to: hadoop
16/08/30 15:43:27 INFO SecurityManager: Changing view acls groups to:
16/08/30 15:43:27 INFO SecurityManager: Changing modify acls groups to:
16/08/30 15:43:27 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
16/08/30 15:43:27 INFO Client: Submitting application application_14111979683_1111 to ResourceManager
16/08/30 15:43:27 INFO YarnClientImpl: Submitted application application_14111979683_1111
16/08/30 15:43:28 INFO Client: Application report for application_14111979683_1111 (state: ACCEPTED)
16/08/30 15:43:28 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1472571807467
final status: UNDEFINED
tracking URL: http://xxxxxx:20888/proxy/application_14111979683_1111/
user: hadoop
16/08/30 15:43:29 INFO Client: Application report for application_14111979683_1111 (state: ACCEPTED)
And this is the exception thrown:
16/08/31 08:14:48 INFO Client:
client token: N/A
diagnostics: Application application_1472630652740_0001 failed 2 times due to AM Container for appattempt_1472630652740_0001_000002 exited with exitCode: 13
For more detailed output, check application tracking page:http://ip-10-0-0-8.eu-west-1.compute.internal:8088/cluster/app/application_1472630652740_0001Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1472630652740_0001_02_000001
Exit code: 13
Stack trace: ExitCodeException exitCode=13:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

EMR is effectively a wrapper around YARN,
so we need to add "--master yarn" as an argument to the deployment command line.
Example:
aws emr add-steps --cluster-id j-XXXXXXXXX --steps Type=Spark,Name="Spark Program",ActionOnFailure=CONTINUE,Args=[--deploy-mode,cluster,--master,yarn,--class,com.xxx.MyMainClass,s3://]
The other thing that is needed is removing 'sparkConf.setMaster("local[*]")' from the initialization of the Spark conf,
so the master chosen by spark-submit is not overridden.
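For illustration, here is a minimal sketch of what the driver code might look like once setMaster is removed (the class name, batch interval and the socket source are placeholders, not from the original question); the master then comes entirely from the spark-submit arguments in the EMR step:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object MyMainClass {
  def main(args: Array[String]): Unit = {
    // No setMaster() here: --master yarn / --deploy-mode cluster supplied by
    // spark-submit (the EMR step) decides where the application runs.
    val conf = new SparkConf().setAppName("Spark Program")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Placeholder source just to make the sketch a complete streaming job.
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.print()

    ssc.start()
    ssc.awaitTermination()
  }
}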

Related

PySpark application submission stuck in an endless ACCEPTED state

I was trying to submit a PySpark application using the command
spark-submit --master yarn --deploy-mode cluster app.py
and I keep getting "INFO Client: Application report for application_1663517069168_0003 (state: ACCEPTED)" endlessly.
I created an AWS EMR cluster with only 1 master and 1 core node, and I am submitting the application from the master node.
22/09/18 18:25:16 INFO RMProxy: Connecting to ResourceManager at ip-172-31-90-73.ec2.internal/172.31.90.73:8032
22/09/18 18:25:16 INFO Client: Requesting a new application from cluster with 1 NodeManagers
22/09/18 18:25:16 INFO Configuration: resource-types.xml not found
22/09/18 18:25:16 INFO ResourceUtils: Unable to find 'resource-types.xml'.
22/09/18 18:25:16 INFO ResourceUtils: Adding resource type - name = memory-mb, units = Mi, type = COUNTABLE
22/09/18 18:25:16 INFO ResourceUtils: Adding resource type - name = vcores, units = , type = COUNTABLE
22/09/18 18:25:16 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (6144 MB per container)
22/09/18 18:25:16 INFO Client: Will allocate AM container, with 2432 MB memory including 384 MB overhead
22/09/18 18:25:16 INFO Client: Setting up container launch context for our AM
22/09/18 18:25:16 INFO Client: Setting up the launch environment for our AM container
22/09/18 18:25:16 INFO Client: Preparing resources for our AM container
22/09/18 18:25:16 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
22/09/18 18:25:19 INFO Client: Uploading resource file:/mnt/tmp/spark-fb80e661-49d6-4738-8f29-351b4efdf337/__spark_libs__2372700901935780238.zip -> hdfs://ip-172-31-90-73.ec2.internal:8020/user/hadoop/.sparkStaging/application_1663517069168_0003/__spark_libs__2372700901935780238.zip
22/09/18 18:25:21 INFO Client: Uploading resource file:/home/hadoop/app.py -> hdfs://ip-172-31-90-73.ec2.internal:8020/user/hadoop/.sparkStaging/application_1663517069168_0003/app.py
22/09/18 18:25:21 INFO Client: Uploading resource file:/usr/lib/spark/python/lib/pyspark.zip -> hdfs://ip-172-31-90-73.ec2.internal:8020/user/hadoop/.sparkStaging/application_1663517069168_0003/pyspark.zip
22/09/18 18:25:21 INFO Client: Uploading resource file:/usr/lib/spark/python/lib/py4j-0.10.7-src.zip -> hdfs://ip-172-31-90-73.ec2.internal:8020/user/hadoop/.sparkStaging/application_1663517069168_0003/py4j-0.10.7-src.zip
22/09/18 18:25:21 INFO Client: Uploading resource file:/mnt/tmp/spark-fb80e661-49d6-4738-8f29-351b4efdf337/__spark_conf__1449366969974574614.zip -> hdfs://ip-172-31-90-73.ec2.internal:8020/user/hadoop/.sparkStaging/application_1663517069168_0003/__spark_conf__.zip
22/09/18 18:25:22 INFO SecurityManager: Changing view acls to: hadoop
22/09/18 18:25:22 INFO SecurityManager: Changing modify acls to: hadoop
22/09/18 18:25:22 INFO SecurityManager: Changing view acls groups to:
22/09/18 18:25:22 INFO SecurityManager: Changing modify acls groups to:
22/09/18 18:25:22 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
22/09/18 18:25:24 INFO Client: Submitting application application_1663517069168_0003 to ResourceManager
22/09/18 18:25:24 INFO YarnClientImpl: Submitted application application_1663517069168_0003
22/09/18 18:25:25 INFO Client: Application report for application_1663517069168_0003 (state: ACCEPTED)
22/09/18 18:25:25 INFO Client:
client token: N/A
diagnostics: [Sun Sep 18 18:25:24 +0000 2022] Application is added to the scheduler and is not yet activated. Queue's AM resource limit exceeded. Details : AM Partition = CORE; AM Resource Request = <memory:2432, max memory:6144, vCores:1, max vCores:4>; Queue Resource Limit for AM = <memory:3072, vCores:1>; User AM Resource Limit of the queue = <memory:3072, vCores:1>; Queue AM Resource Usage = <memory:2432, vCores:1>;
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1663525524352
final status: UNDEFINED
tracking URL: http://ip-172-31-90-73.ec2.internal:20888/proxy/application_1663517069168_0003/
user: hadoop
22/09/18 18:25:26 INFO Client: Application report for application_1663517069168_0003 (state: ACCEPTED)
22/09/18 18:25:27 INFO Client: Application report for application_1663517069168_0003 (state: ACCEPTED)
22/09/18 18:25:28 INFO Client: Application report .......
The problem is pretty clear just from reading your log:
Application is added to the scheduler and is not yet activated. Queue's AM resource limit exceeded. Details : AM Partition = CORE; AM Resource Request = <memory:2432, max memory:6144, vCores:1, max vCores:4>; Queue Resource Limit for AM = <memory:3072, vCores:1>; User AM Resource Limit of the queue = <memory:3072, vCores:1>; Queue AM Resource Usage = <memory:2432, vCores:1>;
Your ApplicationMaster request (2432 MB) plus the AM resources already in use on the queue (2432 MB, most likely an earlier submission that is still sitting there) exceed the queue's AM resource limit (3072 MB), so the new application is never activated. In short, you're requesting more resources than your cluster has available; you can change that by giving your EMR cluster more capacity (more or larger core nodes) or by freeing up the AM resources that are already in use.
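As a sketch of the second option (freeing the queue), you could check from the master node whether an earlier application is still holding an ApplicationMaster and kill it; the application id below is a placeholder:

# list applications still holding resources on the cluster
yarn application -list -appStates ACCEPTED,RUNNING

# kill a stuck submission so its AM memory is returned to the queue
yarn application -kill application_1663517069168_0002

Alternatively (these are options to evaluate for your setup, not the only fix), the queue's AM share can be raised through the EMR "capacity-scheduler" configuration classification (property yarn.scheduler.capacity.maximum-am-resource-percent), or the AM can be made smaller by lowering spark.driver.memory for cluster-mode submissions.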

Is it possible to configure the Beam portable runner with Spark configurations?

TL;DR
Is it possible to configure the Beam portable runner with Spark configurations? More precisely, is it possible to set spark.driver.host in the Portable Runner?
Motivation
Currently, we have Airflow deployed in a Kubernetes cluster, and since we aim to use TensorFlow Extended we need to use Apache Beam. For our use case Spark would be the appropriate runner, and as Airflow and TensorFlow are coded in Python we need to use Apache Beam's Portable Runner (https://beam.apache.org/documentation/runners/spark/#portability).
The problem
The Portable Runner creates the Spark context inside its own container and leaves no room for configuring the driver's DNS, so the executors inside the worker pods cannot communicate back to the driver (the job server).
Setup
Following the Beam documentation, the job server was deployed in the same pod as Airflow so the two containers can use the local network.
Job server config:
- name: beam-spark-job-server
  image: apache/beam_spark_job_server:2.27.0
  args: ["--spark-master-url=spark://spark-master:7077"]
Job server/airflow service:
apiVersion: v1
kind: Service
metadata:
  name: airflow-scheduler
  labels:
    app: airflow-k8s
spec:
  type: ClusterIP
  selector:
    app: airflow-scheduler
  ports:
    - port: 8793
      protocol: TCP
      targetPort: 8793
      name: scheduler
    - port: 8099
      protocol: TCP
      targetPort: 8099
      name: job-server
    - port: 7077
      protocol: TCP
      targetPort: 7077
      name: spark-master
    - port: 8098
      protocol: TCP
      targetPort: 8098
      name: artifact
    - port: 8097
      protocol: TCP
      targetPort: 8097
      name: java-expansion
Ports 8097, 8098 and 8099 belong to the job server, 8793 to Airflow, and 7077 to the Spark master.
Development/Errors
When testing a simple Beam example, python -m apache_beam.examples.wordcount --output ./data_test/ --runner=PortableRunner --job_endpoint=localhost:8099 --environment_type=LOOPBACK, from the Airflow container, I get the following response in the Airflow pod:
Defaulting container name to airflow-scheduler.
Use 'kubectl describe pod/airflow-scheduler-local-f685b5bc7-9d7r6 -n airflow-main-local' to see all of the containers in this pod.
airflow#airflow-scheduler-local-f685b5bc7-9d7r6:/opt/airflow$ python -m apache_beam.examples.wordcount --output ./data_test/ --runner=PortableRunner --job_endpoint=localhost:8099 --environment_type=LOOPBACK
INFO:apache_beam.internal.gcp.auth:Setting socket default timeout to 60 seconds.
INFO:apache_beam.internal.gcp.auth:socket default timeout is 60.0 seconds.
INFO:oauth2client.client:Timeout attempting to reach GCE metadata service.
WARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
Connecting anonymously.
INFO:apache_beam.runners.worker.worker_pool_main:Listening for workers at localhost:35837
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter.
INFO:root:Default Python SDK image for environment is apache/beam_python3.7_sdk:2.27.0
INFO:apache_beam.runners.portability.portable_runner:Environment "LOOPBACK" has started a component necessary for the execution. Be sure to run the pipeline using
with Pipeline() as p:
p.apply(..)
This ensures that the pipeline finishes before this program exits.
INFO:apache_beam.runners.portability.portable_runner:Job state changed to STOPPED
INFO:apache_beam.runners.portability.portable_runner:Job state changed to STARTING
INFO:apache_beam.runners.portability.portable_runner:Job state changed to RUNNING
And the worker log:
21/02/19 19:50:00 INFO Worker: Asked to launch executor app-20210219194804-0000/47 for BeamApp-root-0219194747-7d7938cf_51452c51-dffe-4c61-bcb7-60c7779e3256
21/02/19 19:50:00 INFO SecurityManager: Changing view acls to: root
21/02/19 19:50:00 INFO SecurityManager: Changing modify acls to: root
21/02/19 19:50:00 INFO SecurityManager: Changing view acls groups to:
21/02/19 19:50:00 INFO SecurityManager: Changing modify acls groups to:
21/02/19 19:50:00 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
21/02/19 19:50:00 INFO ExecutorRunner: Launch command: "/usr/local/openjdk-8/bin/java" "-cp" "/opt/spark/conf/:/opt/spark/jars/*" "-Xmx1024M" "-Dspark.driver.port=44447" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler#airflow-scheduler-local-f685b5bc7-9d7r6:44447" "--executor-id" "47" "--hostname" "172.18.0.3" "--cores" "1" "--app-id" "app-20210219194804-0000" "--worker-url" "spark://Worker#172.18.0.3:35837"
21/02/19 19:50:02 INFO Worker: Executor app-20210219194804-0000/47 finished with state EXITED message Command exited with code 1 exitStatus 1
21/02/19 19:50:02 INFO ExternalShuffleBlockResolver: Clean up non-shuffle files associated with the finished executor 47
21/02/19 19:50:02 INFO ExternalShuffleBlockResolver: Executor is not registered (appId=app-20210219194804-0000, execId=47)
21/02/19 19:50:02 INFO Worker: Asked to launch executor app-20210219194804-0000/48 for BeamApp-root-0219194747-7d7938cf_51452c51-dffe-4c61-bcb7-60c7779e3256
21/02/19 19:50:02 INFO SecurityManager: Changing view acls to: root
21/02/19 19:50:02 INFO SecurityManager: Changing modify acls to: root
21/02/19 19:50:02 INFO SecurityManager: Changing view acls groups to:
21/02/19 19:50:02 INFO SecurityManager: Changing modify acls groups to:
21/02/19 19:50:02 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
21/02/19 19:50:02 INFO ExecutorRunner: Launch command: "/usr/local/openjdk-8/bin/java" "-cp" "/opt/spark/conf/:/opt/spark/jars/*" "-Xmx1024M" "-Dspark.driver.port=44447" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler#airflow-scheduler-local-f685b5bc7-9d7r6:44447" "--executor-id" "48" "--hostname" "172.18.0.3" "--cores" "1" "--app-id" "app-20210219194804-0000" "--worker-url" "spark://Worker#172.18.0.3:35837"
21/02/19 19:50:04 INFO Worker: Executor app-20210219194804-0000/48 finished with state EXITED message Command exited with code 1 exitStatus 1
21/02/19 19:50:04 INFO ExternalShuffleBlockResolver: Clean up non-shuffle files associated with the finished executor 48
21/02/19 19:50:04 INFO ExternalShuffleBlockResolver: Executor is not registered (appId=app-20210219194804-0000, execId=48)
21/02/19 19:50:04 INFO Worker: Asked to launch executor app-20210219194804-0000/49 for BeamApp-root-0219194747-7d7938cf_51452c51-dffe-4c61-bcb7-60c7779e3256
21/02/19 19:50:04 INFO SecurityManager: Changing view acls to: root
21/02/19 19:50:04 INFO SecurityManager: Changing modify acls to: root
21/02/19 19:50:04 INFO SecurityManager: Changing view acls groups to:
21/02/19 19:50:04 INFO SecurityManager: Changing modify acls groups to:
21/02/19 19:50:04 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
21/02/19 19:50:04 INFO ExecutorRunner: Launch command: "/usr/local/openjdk-8/bin/java" "-cp" "/opt/spark/conf/:/opt/spark/jars/*" "-Xmx1024M" "-Dspark.driver.port=44447" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler#airflow-scheduler-local-f685b5bc7-9d7r6:44447" "--executor-id" "49" "--hostname" "172.18.0.3" "--cores" "1" "--app-id" "app-20210219194804-0000" "--worker-url" "spark://Worker#172.18.0.3:35837"
.
.
.
As we can see, the executors keep exiting, and as far as I know this is caused by the missing communication between the executor and the driver (the job server in this case). Also, the "--driver-url" is translated to the driver pod name, using the random port from "-Dspark.driver.port".
Since we can't define the name of the service, the worker tries to use the original hostname from the driver together with a randomly generated port. And because this configuration comes from the driver, changing the default conf files on the worker/master has no effect.
Using this answer as an example, I tried setting the env variable SPARK_PUBLIC_DNS on the job server, but it didn't result in any change in the worker logs.
Note
Running a Spark job directly in Kubernetes with
kubectl run spark-base --rm -it --labels="app=spark-client" --image bde2020/spark-base:2.4.5-hadoop2.7 -- bash ./spark/bin/pyspark --master spark://spark-master:7077 --conf spark.driver.host=spark-client
having the service:
apiVersion: v1
kind: Service
metadata:
  name: spark-client
spec:
  selector:
    app: spark-client
  clusterIP: None
I get a fully working pyspark shell. If I omit the --conf parameter, I get the same behavior as in the first setup (executors exiting indefinitely):
21/02/19 20:21:02 INFO Worker: Executor app-20210219202050-0002/4 finished with state EXITED message Command exited with code 1 exitStatus 1
21/02/19 20:21:02 INFO ExternalShuffleBlockResolver: Clean up non-shuffle files associated with the finished executor 4
21/02/19 20:21:02 INFO ExternalShuffleBlockResolver: Executor is not registered (appId=app-20210219202050-0002, execId=4)
21/02/19 20:21:02 INFO Worker: Asked to launch executor app-20210219202050-0002/5 for Spark shell
21/02/19 20:21:02 INFO SecurityManager: Changing view acls to: root
21/02/19 20:21:02 INFO SecurityManager: Changing modify acls to: root
21/02/19 20:21:02 INFO SecurityManager: Changing view acls groups to:
21/02/19 20:21:02 INFO SecurityManager: Changing modify acls groups to:
21/02/19 20:21:02 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
21/02/19 20:21:02 INFO ExecutorRunner: Launch command: "/usr/local/openjdk-8/bin/java" "-cp" "/opt/spark/conf/:/opt/spark/jars/*" "-Xmx1024M" "-Dspark.driver.port=46161" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler#spark-base:46161" "--executor-id" "5" "--hostname" "172.18.0.20" "--cores" "1" "--app-id" "app-20210219202050-0002" "--worker-url" "spark://Worker#172.18.0.20:45151"
You have three solutions to choose from, depending on your deployment requirements. In order of difficulty:
Use the Spark "uber jar" job server. This starts an embedded job server inside the Spark master, instead of using a standalone job server in a container. This would simplify your deployment a lot, since you would not need to start the beam_spark_job_server container at all.
python -m apache_beam.examples.wordcount \
--output ./data_test/ \
--runner=SparkRunner \
--spark_submit_uber_jar \
--spark_master_url=spark://spark-master:7077 \
--environment_type=LOOPBACK
You can pass the properties through a Spark configuration file. Create the Spark configuration file, and add spark.driver.host and whatever other properties you need. In the docker run command for the job server, mount that configuration file to the container, and set the SPARK_CONF_DIR environment variable to point to that directory.
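A minimal sketch of that approach, assuming the job server is run via docker run (the paths, the driver host name and the spark.driver.bindAddress setting are assumptions for your environment, not part of the original answer):

mkdir -p ./spark-conf
cat > ./spark-conf/spark-defaults.conf <<'EOF'
spark.driver.host        beam-spark-job-server
spark.driver.bindAddress 0.0.0.0
EOF

docker run \
  -v "$(pwd)/spark-conf:/spark-conf" \
  -e SPARK_CONF_DIR=/spark-conf \
  apache/beam_spark_job_server:2.27.0 \
  --spark-master-url=spark://spark-master:7077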
If neither of those works for you, you can alternatively build your own customized version of the job server container. Pull the Beam source from GitHub, check out the release branch you want to use (e.g. git checkout origin/release-2.28.0), modify the entrypoint spark-job-server.sh to set -Dspark.driver.host=x there, and then build the container using ./gradlew :runners:spark:job-server:container:docker -Pdocker-repository-root="your-repo" -Pdocker-tag="your-tag".
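The same steps collected as a shell sketch (the clone URL is the standard Beam repository; the exact location of the entrypoint script and the host value are things to verify in the checked-out source):

git clone https://github.com/apache/beam.git
cd beam
git checkout origin/release-2.28.0
# locate and edit the spark-job-server.sh entrypoint of the Spark job-server
# module, adding -Dspark.driver.host=<your-service-name> to the JVM options
./gradlew :runners:spark:job-server:container:docker \
  -Pdocker-repository-root="your-repo" -Pdocker-tag="your-tag"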
Let me revise the answer. The job server needs to be able to communicate with the workers, and vice versa; the executors keep exiting because this communication is missing. You need to configure the deployment so that they can reach each other, and a Kubernetes headless Service can solve this (see the sketch below).
A working example for reference is at https://github.com/cometta/python-apache-beam-spark. If it is useful for you, please 'Star' the repository.
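For illustration, a minimal headless Service along the lines of the spark-client example in the question, pointing at the pod that runs the job server (the Service name and the selector label are assumptions for your setup):

apiVersion: v1
kind: Service
metadata:
  name: beam-spark-job-server
spec:
  clusterIP: None            # headless: the DNS name resolves directly to the pod IP
  selector:
    app: airflow-scheduler   # must match the labels of the pod running the job server container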

Spark on YARN error: Yarn application has already ended! It might have been killed or unable to launch application master

While starting spark-shell --master yarn --deploy-mode client I am getting the error:
Yarn application has already ended! It might have been killed or unable to launch application master.
Here is the complete log from YARN:
19/08/28 00:54:55 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
Container: container_1566921956926_0010_01_000001 on rhel7-cloudera-dev_33917
===============================================================================
LogType:stderr
Log Upload Time:28-Aug-2019 00:46:31
LogLength:523
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/yarn/local/usercache/rhel/filecache/26/__spark_libs__5634501618166443611.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/etc/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
LogType:stdout
Log Upload Time:28-Aug-2019 00:46:31
LogLength:5597
Log Contents:
2019-08-28 00:46:19 INFO SignalUtils:54 - Registered signal handler for TERM
2019-08-28 00:46:19 INFO SignalUtils:54 - Registered signal handler for HUP
2019-08-28 00:46:19 INFO SignalUtils:54 - Registered signal handler for INT
2019-08-28 00:46:19 INFO SecurityManager:54 - Changing view acls to: yarn,rhel
2019-08-28 00:46:19 INFO SecurityManager:54 - Changing modify acls to: yarn,rhel
2019-08-28 00:46:19 INFO SecurityManager:54 - Changing view acls groups to:
2019-08-28 00:46:19 INFO SecurityManager:54 - Changing modify acls groups to:
2019-08-28 00:46:19 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, rhel); groups with view permissions: Set(); users with modify permissions: Set(yarn, rhel); groups with modify permissions: Set()
2019-08-28 00:46:20 INFO ApplicationMaster:54 - Preparing Local resources
2019-08-28 00:46:21 INFO ApplicationMaster:54 - ApplicationAttemptId: appattempt_1566921956926_0010_000001
2019-08-28 00:46:21 INFO ApplicationMaster:54 - Waiting for Spark driver to be reachable.
2019-08-28 00:46:21 INFO ApplicationMaster:54 - Driver now available: rhel7-cloudera-dev:34872
2019-08-28 00:46:21 INFO TransportClientFactory:267 - Successfully created connection to rhel7-cloudera-dev/192.168.56.112:34872 after 107 ms (0 ms spent in bootstraps)
2019-08-28 00:46:22 INFO ApplicationMaster:54 -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}{{PWD}}/spark_conf{{PWD}}/spark_libs/$HADOOP_CONF_DIR$HADOOP_COMMON_HOME/share/hadoop/common/$HADOOP_COMMON_HOME/share/hadoop/common/lib/$HADOOP_HDFS_HOME/share/hadoop/hdfs/$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/$HADOOP_YARN_HOME/share/hadoop/yarn/$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
$HADOOP_COMMON_HOME/$HADOOP_COMMON_HOME/lib/$HADOOP_HDFS_HOME/$HADOOP_HDFS_HOME/lib/$HADOOP_MAPRED_HOME/$HADOOP_MAPRED_HOME/lib/$HADOOP_YARN_HOME/$HADOOP_YARN_HOME/lib/$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib//etc/hadoop-2.6.0/etc/hadoop:/etc/hadoop-2.6.0/share/hadoop/common/lib/:/etc/hadoop-2.6.0/share/hadoop/common/:/etc/hadoop-2.6.0/share/hadoop/hdfs:/etc/hadoop-2.6.0/share/hadoop/hdfs/lib/:/etc/hadoop-2.6.0/share/hadoop/hdfs/:/etc/hadoop-2.6.0/share/hadoop/yarn/lib/:/etc/hadoop-2.6.0/share/hadoop/yarn/:/etc/hadoop-2.6.0/share/hadoop/mapreduce/lib/:/etc/hadoop-2.6.0/share/hadoop/mapreduce/:/etc/hadoop-2.6.0/contrib/capacity-scheduler/.jar{{PWD}}/spark_conf/hadoop_conf
SPARK_DIST_CLASSPATH -> /etc/hadoop-2.6.0/etc/hadoop:/etc/hadoop-2.6.0/share/hadoop/common/lib/:/etc/hadoop-2.6.0/share/hadoop/common/:/etc/hadoop-2.6.0/share/hadoop/hdfs:/etc/hadoop-2.6.0/share/hadoop/hdfs/lib/:/etc/hadoop-2.6.0/share/hadoop/hdfs/:/etc/hadoop-2.6.0/share/hadoop/yarn/lib/:/etc/hadoop-2.6.0/share/hadoop/yarn/:/etc/hadoop-2.6.0/share/hadoop/mapreduce/lib/:/etc/hadoop-2.6.0/share/hadoop/mapreduce/:/etc/hadoop-2.6.0/contrib/capacity-scheduler/.jar
SPARK_YARN_STAGING_DIR -> *********(redacted)
SPARK_USER -> *********(redacted)
SPARK_CONF_DIR -> /etc/spark/conf
SPARK_HOME -> /etc/spark
command:
{{JAVA_HOME}}/bin/java \
-server \
-Xmx1024m \
-Djava.io.tmpdir={{PWD}}/tmp \
'-Dspark.driver.port=34872' \
-Dspark.yarn.app.container.log.dir= \
-XX:OnOutOfMemoryError='kill %p' \
org.apache.spark.executor.CoarseGrainedExecutorBackend \
--driver-url \
spark://CoarseGrainedScheduler#rhel7-cloudera-dev:34872 \
--executor-id \
\
--hostname \
\
--cores \
1 \
--app-id \
application_1566921956926_0010 \
--user-class-path \
file:$PWD/app.jar \
1>/stdout \
2>/stderr
resources:
__spark_libs__ -> resource { scheme: "hdfs" host: "rhel7-cloudera-dev" port: 9000 file: "/user/rhel/.sparkStaging/application_1566921956926_0010/__spark_libs__5634501618166443611.zip" } size: 232107209 timestamp: 1566933362350 type: ARCHIVE visibility: PRIVATE
__spark_conf__ -> resource { scheme: "hdfs" host: "rhel7-cloudera-dev" port: 9000 file: "/user/rhel/.sparkStaging/application_1566921956926_0010/__spark_conf__.zip" } size: 208377 timestamp: 1566933365411 type: ARCHIVE visibility: PRIVATE
===============================================================================
2019-08-28 00:46:22 INFO RMProxy:98 - Connecting to ResourceManager at /0.0.0.0:8030
2019-08-28 00:46:22 INFO YarnRMClient:54 - Registering the ApplicationMaster
2019-08-28 00:46:22 INFO YarnAllocator:54 - Will request 2 executor container(s), each with 1 core(s) and 1408 MB memory (including 384 MB of overhead)
2019-08-28 00:46:22 INFO YarnAllocator:54 - Submitted 2 unlocalized container requests.
2019-08-28 00:46:22 INFO ApplicationMaster:54 - Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
2019-08-28 00:46:22 ERROR ApplicationMaster:43 - RECEIVED SIGNAL TERM
2019-08-28 00:46:23 INFO ApplicationMaster:54 - Final app status: UNDEFINED, exitCode: 16, (reason: Shutdown hook called before final status was reported.)
2019-08-28 00:46:23 INFO ShutdownHookManager:54 - Shutdown hook called
Container: container_1566921956926_0010_02_000001 on rhel7-cloudera-dev_33917
===============================================================================
LogType:stderr
Log Upload Time:28-Aug-2019 00:46:31
LogLength:3576
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/yarn/local/usercache/rhel/filecache/26/__spark_libs__5634501618166443611.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/etc/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.io.IOException: Failed on local exception: java.io.IOException; Host Details : local host is: "rhel7-cloudera-dev/192.168.56.112"; destination host is: "rhel7-cloudera-dev":9000;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
at org.apache.hadoop.ipc.Client.call(Client.java:1474)
at org.apache.hadoop.ipc.Client.call(Client.java:1401)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1977)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$7$$anonfun$apply$3.apply(ApplicationMaster.scala:235)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$7$$anonfun$apply$3.apply(ApplicationMaster.scala:232)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$7.apply(ApplicationMaster.scala:232)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$7.apply(ApplicationMaster.scala:197)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$5.run(ApplicationMaster.scala:800)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
at org.apache.spark.deploy.yarn.ApplicationMaster.doAsUser(ApplicationMaster.scala:799)
at org.apache.spark.deploy.yarn.ApplicationMaster.(ApplicationMaster.scala:197)
at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:823)
at org.apache.spark.deploy.yarn.ExecutorLauncher$.main(ApplicationMaster.scala:854)
at org.apache.spark.deploy.yarn.ExecutorLauncher.main(ApplicationMaster.scala)
Caused by: java.io.IOException
at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:935)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:967)
Caused by: java.lang.InterruptedException
... 2 more
LogType:stdout
Log Upload Time:28-Aug-2019 00:46:31
LogLength:975
Log Contents:
2019-08-28 00:46:26 INFO SignalUtils:54 - Registered signal handler for TERM
2019-08-28 00:46:26 INFO SignalUtils:54 - Registered signal handler for HUP
2019-08-28 00:46:26 INFO SignalUtils:54 - Registered signal handler for INT
2019-08-28 00:46:27 INFO SecurityManager:54 - Changing view acls to: yarn,rhel
2019-08-28 00:46:27 INFO SecurityManager:54 - Changing modify acls to: yarn,rhel
2019-08-28 00:46:27 INFO SecurityManager:54 - Changing view acls groups to:
2019-08-28 00:46:27 INFO SecurityManager:54 - Changing modify acls groups to:
2019-08-28 00:46:27 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, rhel); groups with view permissions: Set(); users with modify permissions: Set(yarn, rhel); groups with modify permissions: Set()
2019-08-28 00:46:28 INFO ApplicationMaster:54 - Preparing Local resources
2019-08-28 00:46:28 ERROR ApplicationMaster:43 - RECEIVED SIGNAL TERM
Any suggestions on how to resolve this issue?

spark-submit: unable to get driver status

I'm running a job on a test Spark standalone cluster in cluster mode, but I'm finding myself unable to monitor the status of the driver.
Here is a minimal example using spark-2.4.3 (master and one worker running on the same node, started by running sbin/start-all.sh on a freshly unarchived installation with the default conf and no conf/slaves set), executing spark-submit from the node itself:
$ spark-submit --master spark://ip-172-31-15-245:7077 --deploy-mode cluster \
--class org.apache.spark.examples.SparkPi \
/home/ubuntu/spark/examples/jars/spark-examples_2.11-2.4.3.jar 100
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.NativeCodeLoader).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
19/06/27 09:08:28 INFO SecurityManager: Changing view acls to: ubuntu
19/06/27 09:08:28 INFO SecurityManager: Changing modify acls to: ubuntu
19/06/27 09:08:28 INFO SecurityManager: Changing view acls groups to:
19/06/27 09:08:28 INFO SecurityManager: Changing modify acls groups to:
19/06/27 09:08:28 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ubuntu); groups with view permissions: Set(); users with modify permissions: Set(ubuntu); groups with modify permissions: Set()
19/06/27 09:08:28 INFO Utils: Successfully started service 'driverClient' on port 36067.
19/06/27 09:08:28 INFO TransportClientFactory: Successfully created connection to ip-172-31-15-245/172.31.15.245:7077 after 29 ms (0 ms spent in bootstraps)
19/06/27 09:08:28 INFO ClientEndpoint: Driver successfully submitted as driver-20190627090828-0008
19/06/27 09:08:28 INFO ClientEndpoint: ... waiting before polling master for driver state
19/06/27 09:08:33 INFO ClientEndpoint: ... polling master for driver state
19/06/27 09:08:33 INFO ClientEndpoint: State of driver-20190627090828-0008 is RUNNING
19/06/27 09:08:33 INFO ClientEndpoint: Driver running on 172.31.15.245:41057 (worker-20190627083412-172.31.15.245-41057)
19/06/27 09:08:33 INFO ShutdownHookManager: Shutdown hook called
19/06/27 09:08:33 INFO ShutdownHookManager: Deleting directory /tmp/spark-34082661-f0de-4c56-92b7-648ea24fa59c
> spark-submit --master spark://ip-172-31-15-245:7077 --status driver-20190627090828-0008
19/06/27 09:09:27 WARN RestSubmissionClient: Unable to connect to server spark://ip-172-31-15-245:7077.
Exception in thread "main" org.apache.spark.deploy.rest.SubmitRestConnectionException: Unable to connect to server
at org.apache.spark.deploy.rest.RestSubmissionClient$$anonfun$requestSubmissionStatus$3.apply(RestSubmissionClient.scala:165)
at org.apache.spark.deploy.rest.RestSubmissionClient$$anonfun$requestSubmissionStatus$3.apply(RestSubmissionClient.scala:148)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at org.apache.spark.deploy.rest.RestSubmissionClient.requestSubmissionStatus(RestSubmissionClient.scala:148)
at org.apache.spark.deploy.SparkSubmit.requestStatus(SparkSubmit.scala:111)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:88)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.deploy.rest.SubmitRestConnectionException: No response from server
at org.apache.spark.deploy.rest.RestSubmissionClient.readResponse(RestSubmissionClient.scala:285)
at org.apache.spark.deploy.rest.RestSubmissionClient.org$apache$spark$deploy$rest$RestSubmissionClient$$get(RestSubmissionClient.scala:195)
at org.apache.spark.deploy.rest.RestSubmissionClient$$anonfun$requestSubmissionStatus$3.apply(RestSubmissionClient.scala:152)
... 11 more
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:223)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:190)
at org.apache.spark.deploy.rest.RestSubmissionClient.readResponse(RestSubmissionClient.scala:278)
... 13 more
Spark is in good health (I'm able to run other jobs after the one above), and driver-20190627090828-0008 appears as "FINISHED" in the web UI.
Is there something I am missing?
UPDATE:
In the master log, all I get is:
19/07/01 09:40:24 INFO master.Master: 172.31.15.245:42308 got disassociated, removing it.

Spark fails to register multiple workers to master

I have been working on creating a Spark cluster with 1 master and 4 workers on Linux.
It works fine with one worker. When I try to add more than one worker, only the first worker gets registered with the master, while the rest fail with the error below:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
18/08/06 14:17:39 INFO Worker: Started daemon with process name: 24104#barracuda5
18/08/06 14:17:39 INFO SignalUtils: Registered signal handler for TERM
18/08/06 14:17:39 INFO SignalUtils: Registered signal handler for HUP
18/08/06 14:17:39 INFO SignalUtils: Registered signal handler for INT
18/08/06 14:17:39 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/08/06 14:17:39 INFO SecurityManager: Changing view acls to: barracuda5
18/08/06 14:17:39 INFO SecurityManager: Changing modify acls to: barracuda5
18/08/06 14:17:39 INFO SecurityManager: Changing view acls groups to:
18/08/06 14:17:39 INFO SecurityManager: Changing modify acls groups to:
18/08/06 14:17:39 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(barracuda5); groups with view permissions: Set(); users with modify permissions: Set(barracuda5); groups with modify permissions: Set()
18/08/06 14:17:40 INFO Utils: Successfully started service 'sparkWorker' on port 46635.
18/08/06 14:17:40 INFO Worker: Starting Spark worker 10.0.6.6:46635 with 4 cores, 14.7 GB RAM
18/08/06 14:17:40 INFO Worker: Running Spark version 2.1.0
18/08/06 14:17:40 INFO Worker: Spark home: /usr/lib/spark/spark-2.1.0-bin-hadoop2.7
18/08/06 14:17:40 INFO Utils: Successfully started service 'WorkerUI' on port 8081.
18/08/06 14:17:40 INFO WorkerWebUI: Bound WorkerWebUI to 0.0.0.0, and started at http://10.0.6.6:8081
18/08/06 14:17:40 INFO Worker: Connecting to master Cudatest.533gwuzexxzehbkoeqpn4rgs4d.ux.internal.cloudapp.net:7077...
18/08/06 14:17:40 WARN Worker: Failed to connect to master Cudatest.533gwuzexxzehbkoeqpn4rgs4d.ux.internal.cloudapp.net:7077
org.apache.spark.SparkException: Exception thrown in awaitResult
at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:100)
at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:108)
at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters$1$$anon$1.run(Worker.scala:218)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Failed to connect to Cudatest.533gwuzexxzehbkoeqpn4rgs4d.ux.internal.cloudapp.net:7077
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:228)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:179)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:197)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:191)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:187)
... 4 more
Caused by: java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:101)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
at io.netty.channel.socket.nio.NioSocketChannel.doConnect(NioSocketChannel.java:242)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.connect(AbstractNioChannel.java:205)
at io.netty.channel.DefaultChannelPipeline$HeadContext.connect(DefaultChannelPipeline.java:1226)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:550)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:535)
at io.netty.channel.ChannelOutboundHandlerAdapter.connect(ChannelOutboundHandlerAdapter.java:47)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:550)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:535)
at io.netty.channel.ChannelDuplexHandler.connect(ChannelDuplexHandler.java:50)
at io.netty.channel.AbstractChannelHandlerContext.invokeConnect(AbstractChannelHandlerContext.java:550)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:535)
at io.netty.channel.AbstractChannelHandlerContext.connect(AbstractChannelHandlerContext.java:517)
at io.netty.channel.DefaultChannelPipeline.connect(DefaultChannelPipeline.java:970)
at io.netty.channel.AbstractChannel.connect(AbstractChannel.java:215)
at io.netty.bootstrap.Bootstrap$2.run(Bootstrap.java:166)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:408)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:455)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
... 1 more
Let me know if I have missed something here, or if anyone knows what the solution to this might be.
Thanks
