I am building a cube in Kylin; the source tables are Hive tables stored as Hudi parquet. The build fails with a NullPointerException on the second step, while it works fine on plain tables that are not stored with Hudi (a quick sanity check with plain Spark SQL is sketched after the log).
The error stack is given below.
2022-06-21 08:20:59,365 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading ProjectInstance from hdfs://127.0.0.1:9000/kylin/kylin_metadata/learn_kylin/job_tmp/68af66a7-75dc-463f-abc5-943ffe3204c0-01/meta/project
2022-06-21 08:20:59,368 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 ProjectInstance(s) out of 1 resource with 0 errors
2022-06-21 08:20:59,368 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.cachesync.Broadcaster
2022-06-21 08:20:59,368 DEBUG [pool-1-thread-1] cachesync.Broadcaster : 1 nodes in the cluster: [localhost:7070]
2022-06-21 08:20:59,368 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.model.DataModelManager
2022-06-21 08:20:59,368 INFO [pool-1-thread-1] common.KylinConfig : Creating new manager instance of class org.apache.kylin.metadata.TableMetadataManager
2022-06-21 08:20:59,368 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading TableDesc from hdfs://127.0.0.1:9000/kylin/kylin_metadata/learn_kylin/job_tmp/68af66a7-75dc-463f-abc5-943ffe3204c0-01/meta/table
2022-06-21 08:20:59,375 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 2 TableDesc(s) out of 2 resource with 0 errors
2022-06-21 08:20:59,375 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading TableExtDesc from hdfs://127.0.0.1:9000/kylin/kylin_metadata/learn_kylin/job_tmp/68af66a7-75dc-463f-abc5-943ffe3204c0-01/meta/table_exd
2022-06-21 08:20:59,379 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 2 TableExtDesc(s) out of 2 resource with 0 errors
2022-06-21 08:20:59,379 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading ExternalFilterDesc from hdfs://127.0.0.1:9000/kylin/kylin_metadata/learn_kylin/job_tmp/68af66a7-75dc-463f-abc5-943ffe3204c0-01/meta/ext_filter
2022-06-21 08:20:59,379 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 0 ExternalFilterDesc(s) out of 0 resource with 0 errors
2022-06-21 08:20:59,379 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Reloading DataModelDesc from hdfs://127.0.0.1:9000/kylin/kylin_metadata/learn_kylin/job_tmp/68af66a7-75dc-463f-abc5-943ffe3204c0-01/meta/model_desc
2022-06-21 08:20:59,382 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 DataModelDesc(s) out of 1 resource with 0 errors
2022-06-21 08:20:59,383 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 CubeDesc(s) out of 1 resource with 0 errors
2022-06-21 08:20:59,383 DEBUG [pool-1-thread-1] cachesync.CachedCrudAssist : Loaded 1 CubeInstance(s) out of 1 resource with 0 errors
2022-06-21 08:20:59,392 INFO [pool-1-thread-1] job.CubeBuildJob : There are 0 cuboids to be built in segment 20210212024500_20220601000000.
2022-06-21 08:20:59,392 INFO [pool-1-thread-1] job.CubeBuildJob : Updating segment info
2022-06-21 08:20:59,398 INFO [pool-1-thread-1] job.CubeBuildJob : Building job takes 49 ms
2022-06-21 08:20:59,398 ERROR [pool-1-thread-1] application.SparkApplication : The spark job execute failed!
java.lang.NullPointerException
at org.apache.kylin.engine.spark.job.CubeBuildJob.doExecute(CubeBuildJob.java:204)
at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:308)
at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:94)
at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2022-06-21 08:20:59,399 ERROR [pool-1-thread-1] application.JobMonitor : Job failed the 2 times.
java.lang.RuntimeException: Error execute org.apache.kylin.engine.spark.job.CubeBuildJob
at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:97)
at org.apache.spark.application.JobWorker$$anon$2.run(JobWorker.scala:55)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
at org.apache.kylin.engine.spark.job.CubeBuildJob.doExecute(CubeBuildJob.java:204)
at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:308)
at org.apache.kylin.engine.spark.application.SparkApplication.execute(SparkApplication.java:94)
... 4 more
2022-06-21 08:20:59,399 INFO [pool-1-thread-1] autoheal.ExceptionTerminator : Reset spark.executor.memory=6144MB when retry.
2022-06-21 08:20:59,399 INFO [pool-1-thread-1] autoheal.ExceptionTerminator : Reset spark.executor.memoryOverhead=1228MB when retry.
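Since the build succeeds on plain Hive tables and only fails on the Hudi-backed ones, a useful first check is whether plain Spark SQL can read the Hudi table at all with a Hudi bundle on the classpath. A minimal sketch, assuming Spark 3.2.x and Hudi 0.11.0 (adjust the bundle coordinates; learn_kylin.my_hudi_table is a placeholder name, not taken from the log):

# Sanity check outside Kylin: can Spark SQL itself read the Hudi table?
# Assumptions: Spark 3.2.x, Hudi 0.11.0 bundle; placeholder database/table name.
spark-sql \
  --packages org.apache.hudi:hudi-spark3.2-bundle_2.12:0.11.0 \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  -e "SELECT COUNT(*) FROM learn_kylin.my_hudi_table"

If that count works but the Kylin build log still reports 0 cuboids to be built, the problem is more likely in how Kylin's Spark build job resolves the Hudi-backed Hive table (missing Hudi jars on the build job's classpath, for example) than in the data itself.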
I am trying to use PySpark from JupyterHub (running on Kubernetes) for interactive programming against a remote Spark cluster, which is also on Kubernetes, so I use sparkmagic and Livy (also on Kubernetes).
When I try to get a SparkContext and SparkSession in the notebook, the session stays stuck in the 'starting' status until the Livy session dies.
My spark-driver pod is running, and I can see this log:
53469 [pool-8-thread-1] INFO org.apache.livy.rsc.driver.SparkEntries - Spark context finished initialization in 34532ms
53625 [pool-8-thread-1] INFO org.apache.livy.rsc.driver.SparkEntries - Created Spark session.
128775 [dispatcher-CoarseGrainedScheduler] INFO org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint - Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.83.128.194:35040) with ID 1, ResourceProfileId 0
128927 [dispatcher-BlockManagerMaster] INFO org.apache.spark.storage.BlockManagerMasterEndpoint - Registering block manager 10.83.128.194:42385 with 4.6 GiB RAM, BlockManagerId(1, 10.83.128.194, 42385, None)
131902 [dispatcher-CoarseGrainedScheduler] INFO org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint - Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.83.128.130:58232) with ID 2, ResourceProfileId 0
132041 [dispatcher-BlockManagerMaster] INFO org.apache.spark.storage.BlockManagerMasterEndpoint - Registering block manager 10.83.128.130:37991 with 4.6 GiB RAM, BlockManagerId(2, 10.83.128.130, 37991, None)
My spark-executor-pod is also running.
This is my livy-server's log:
2022-05-19 08:36:54,959 DEBUG LivySession Session 0 in state starting. Sleeping 2 seconds.
2022-05-19 08:36:56,969 DEBUG LivySession Session 0 in state starting. Sleeping 2 seconds.
2022-05-19 08:36:58,979 DEBUG LivySession Session 0 in state starting. Sleeping 2 seconds.
2022-05-19 08:37:01,002 DEBUG LivySession Session 0 in state starting. Sleeping 2 seconds.
2022-05-19 08:37:03,015 ERROR LivySession Session 0 did not reach idle status in time. Current status is starting.
2022-05-19 08:37:03,016 INFO EventsHandler InstanceId: 0139a7a9-a0b5-439e-84f5-a9ca6c896360,EventName: notebookSessionCreationEnd,Timestamp: 2022-05-19 08:37:03.016038,SessionGuid: 14da96d9-8b24-4beb-a5ad-a32009c9f772,LivyKind: pyspark,SessionId: 0,Status: starting,Success: False,ExceptionType: LivyClientTimeoutException,ExceptionMessage: Session 0 did not start up in 600 seconds.
2022-05-19 08:37:03,016 INFO EventsHandler InstanceId: 0139a7a9-a0b5-439e-84f5-a9ca6c896360,EventName: notebookSessionDeletionStart,Timestamp: 2022-05-19 08:37:03.016288,SessionGuid: 14da96d9-8b24-4beb-a5ad-a32009c9f772,LivyKind: pyspark,SessionId: 0,Status: starting
2022-05-19 08:37:03,016 DEBUG LivySession Deleting session '0'
2022-05-19 08:37:03,037 INFO EventsHandler InstanceId: 0139a7a9-a0b5-439e-84f5-a9ca6c896360,EventName: notebookSessionDeletionEnd,Timestamp: 2022-05-19 08:37:03.036919,SessionGuid: 14da96d9-8b24-4beb-a5ad-a32009c9f772,LivyKind: pyspark,SessionId: 0,Status: dead,Success: True,ExceptionType: ,ExceptionMessage:
2022-05-19 08:37:03,037 ERROR SparkMagics Error creating session: Session 0 did not start up in 600 seconds.
Please tell me how I can solve this problem, thanks!
My Spark version: 3.2.1
Livy version: 0.8.0
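The driver log above shows the Spark session does come up, while sparkmagic gives up after its 600-second startup timeout, so it helps to ask the Livy server directly what it thinks the session state is. A hedged sketch using Livy's REST API, where livy.example.svc:8998 is a placeholder for your Livy service address:

# Query Livy's REST API for the session state and its accumulated log
# (GET /sessions/{id} and GET /sessions/{id}/log).
curl -s http://livy.example.svc:8998/sessions/0
curl -s "http://livy.example.svc:8998/sessions/0/log?from=0&size=200"

If Livy reports the session as idle while sparkmagic still shows 'starting', the problem is likely in the path or timeouts between sparkmagic and Livy rather than in Spark itself; sparkmagic's startup timeout can be raised with livy_session_startup_timeout_seconds in ~/.sparkmagic/config.json.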
I have submitted this job in YARN mode, but I get the following error.
I added the Spark jars from a local install; the Spark jars can also live in a world-readable location on HDFS so that YARN can cache them on the nodes. I also set YARN_CONF_DIR and HADOOP_CONF_DIR in .bashrc (a sketch of staging the Spark jars on HDFS follows the configuration listing below).
ERROR:
Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher
9068548 [bioingine-management-service-akka.actor.default-dispatcher-14] INFO org.apache.spark.deploy.yarn.Client - Application report for application_1531990849146_0010 (state: FAILED)
9068548 [bioingine-management-service-akka.actor.default-dispatcher-14] INFO org.apache.spark.deploy.yarn.Client -
client token: N/A
diagnostics: Application application_1531990849146_0010 failed 2 times due to AM Container for appattempt_1531990849146_0010_000002 exited with exitCode: 1
Failing this attempt.Diagnostics: [2018-07-19 11:56:58.484]Exception from container-launch.
Container id: container_1531990849146_0010_02_000001
Exit code: 1
[2018-07-19 11:56:58.484]
[2018-07-19 11:56:58.486]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher
[2018-07-19 11:56:58.486]
[2018-07-19 11:56:58.486]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher
[2018-07-19 11:56:58.486]
For more detailed output, check the application tracking page: http://localhost:8088/cluster/app/application_1531990849146_0010 Then click on links to logs of each attempt.
. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1532001413903
final status: FAILED
tracking URL: http://localhost:8088/cluster/app/application_1531990849146_0010
user: root
9068611 [bioingine-management-service-akka.actor.default-dispatcher-14] INFO org.apache.spark.deploy.yarn.Client - Deleted staging directory file:/root/.sparkStaging/application_1531990849146_0010
9068612 [bioingine-management-service-akka.actor.default-dispatcher-14] ERROR org.apache.spark.SparkContext - Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
at com.bioingine.smash.management.services.SmashExtractorService.getSparkSession(SmashExtractorService.scala:79)
at com.bioingine.smash.management.services.SmashExtractorService.getFileHeaders(SmashExtractorService.scala:83)
at com.bioingine.smash.management.services.SmashService$$anonfun$getcolumnHeaders$1.apply(SmashService.scala:90)
at com.bioingine.smash.management.services.SmashService$$anonfun$getcolumnHeaders$1.apply(SmashService.scala:90)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
9068618 [bioingine-management-service-akka.actor.default-dispatcher-14] INFO o.s.jetty.server.AbstractConnector - Stopped Spark#2152c728{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
9068619 [bioingine-management-service-akka.actor.default-dispatcher-14] INFO org.apache.spark.ui.SparkUI - Stopped Spark web UI at http://localhost:4040
9068620 [dispatcher-event-loop-16] WARN o.a.s.s.c.YarnSchedulerBackend$YarnSchedulerEndpoint - Attempted to request executors before the AM has registered!
9068621 [bioingine-management-service-akka.actor.default-dispatcher-14] INFO o.a.s.s.c.YarnClientSchedulerBackend - Shutting down all executors
9068621 [dispatcher-event-loop-17] INFO o.a.s.s.c.YarnSchedulerBackend$YarnDriverEndpoint - Asking each executor to shut down
9068622 [bioingine-management-service-akka.actor.default-dispatcher-14] INFO o.a.s.s.c.SchedulerExtensionServices - Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
9068622 [bioingine-management-service-akka.actor.default-dispatcher-14] INFO o.a.s.s.c.YarnClientSchedulerBackend - Stopped
9068623 [dispatcher-event-loop-20] INFO o.a.s.MapOutputTrackerMasterEndpoint - MapOutputTrackerMasterEndpoint stopped!
9068624 [bioingine-management-service-akka.actor.default-dispatcher-14] ERROR org.apache.spark.util.Utils - Uncaught exception in thread bioingine-management-service-akka.actor.default-dispatcher-14
java.lang.NullPointerException: null
at org.apache.spark.network.shuffle.ExternalShuffleClient.close(ExternalShuffleClient.java:141)
at org.apache.spark.storage.BlockManager.stop(BlockManager.scala:1485)
at org.apache.spark.SparkEnv.stop(SparkEnv.scala:90)
at org.apache.spark.SparkContext$$anonfun$stop$11.apply$mcV$sp(SparkContext.scala:1937)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1317)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1936)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:587)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
at com.bioingine.smash.management.services.SmashExtractorService.getSparkSession(SmashExtractorService.scala:79)
at com.bioingine.smash.management.services.SmashExtractorService.getFileHeaders(SmashExtractorService.scala:83)
at com.bioingine.smash.management.services.SmashService$$anonfun$getcolumnHeaders$1.apply(SmashService.scala:90)
at com.bioingine.smash.management.services.SmashService$$anonfun$getcolumnHeaders$1.apply(SmashService.scala:90)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
9068624 [bioingine-management-service-akka.actor.default-dispatcher-14] INFO org.apache.spark.SparkContext - Successfully stopped SparkContext
9068627 [bioingine-management-service-akka.actor.default-dispatcher-15] ERROR akka.actor.ActorSystemImpl - Error during processing of request: 'Yarn application has already ended! It might have been killed or unable to launch application master.'. Completing with 500 Internal Server Error response. To change default exception handling behavior, provide a custom ExceptionHandler.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
at com.bioingine.smash.management.services.SmashExtractorService.getSparkSession(SmashExtractorService.scala:79)
at com.bioingine.smash.management.services.SmashExtractorService.getFileHeaders(SmashExtractorService.scala:83)
at com.bioingine.smash.management.services.SmashService$$anonfun$getcolumnHeaders$1.apply(SmashService.scala:90)
at com.bioingine.smash.management.services.SmashService$$anonfun$getcolumnHeaders$1.apply(SmashService.scala:90)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
**SparkSession configuration**
new SparkConf().setMaster("yarn").setAppName("Test").set("spark.executor.memory", "3g")
.set("spark.ui.enabled","true")
.set("spark.driver.memory","9g")
.set("spark.default.parallelism","10")
.set("spark.executor.cores","3")
.set("spark.cores.max","9")
.set("spark.memory.offHeap.enabled","true")
.set("spark.memory.offHeap.size","6g")
.set("spark.yarn.am.memory","2g")
.set("spark.yarn.am.cores","2")
.set("spark.yarn.am.cores","2")
.set("spark.yarn.archive","hdfs://localhost:9000/user/spark/share/lib/spark2-hdp-yarn-archive.tar.gz")
.set("spark.yarn.jars","hdfs://localhost:9000/user/spark/share/lib/spark-yarn_2.11.2.2.0.jar")
**We added the following configuration**
1. These entries are in $SPARK_HOME/conf/spark-defaults.conf
spark.driver.extraJavaOptions -Dhdp.version=2.9.0
spark.yarn.am.extraJavaOptions -Dhdp.version=2.9.0
log4j.rootCategory=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
2. yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle,spark_shuffle</value>
<description>shuffle service that needs to be set for Map Reduce to run</description>
</property>
<property>
<name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
<value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
<property>
<name>yarn.nodemanager.pmem-check-enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.application.classpath</name>
<value>/usr/share/hadoop/etc/hadoop,/usr/share/hadoop/,/usr/share/hadoop/lib/,/usr/share/hadoop/share/hadoop/common/,/usr/share/hadoop/share/hadoop/common/lib, /usr/share/hadoop/share/hadoop/hdfs/,/usr/share/hadoop/share/hadoop/hdfs/lib/,/usr/share/hadoop/share/hadoop/mapreduce/,/usr/share/hadoop/share/hadoop/mapreduce/lib/,/usr/share/hadoop/share/hadoop/tools/lib/,/usr/share/hadoop/share/hadoop/yarn/,/usr/share/hadoop/share/hadoop/yarn/lib/*,/usr/share/spark/jars/spark-yarn_2.11-2.2.0.jar</value>
</property>
</configuration>
3. spark-env.sh
export HADOOP_CONF_DIR=/home/hadoop/hadoop/etc/hadoop
export SPARK_HOME=/home/hadoop/spark
SPARK_DIST_CLASSPATH="/usr/share/spark/jars/*"
4. .bashrc
export JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64/"
export SBT_OPTS="-Xms16G -Xmx16G"
export HADOOP_INSTALL=/usr/share/hadoop
export HADOOP_CONF_DIR=/usr/share/hadoop/etc/hadoop/
export YARN_CONF_DIR=/usr/share/hadoop/etc/hadoop/
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export SPARK_CLASSPATH="/usr/share/spark/jars/*"
export SPARK_HOME="/usr/share/spark/"
export PATH=$PATH:$SPARK_HOME
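The "Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher" error usually means the jars referenced by spark.yarn.archive / spark.yarn.jars do not actually make the Spark YARN classes available inside the container. One hedged way to rule that out is to stage the whole $SPARK_HOME/jars directory on HDFS and point spark.yarn.archive at it; a minimal sketch with illustrative paths, reusing the hdfs://localhost:9000/user/spark/share/lib location from the configuration above:

# Package every jar shipped with Spark (not just spark-yarn_2.11-2.2.0.jar)
# and publish the archive in a world-readable HDFS location.
jar cv0f spark-libs.jar -C "$SPARK_HOME/jars/" .
hdfs dfs -mkdir -p hdfs://localhost:9000/user/spark/share/lib
hdfs dfs -put -f spark-libs.jar hdfs://localhost:9000/user/spark/share/lib/
# Then, in spark-defaults.conf (or the SparkConf), replace the single-jar
# spark.yarn.jars entry with:
#   spark.yarn.archive  hdfs://localhost:9000/user/spark/share/lib/spark-libs.jar

Note also that the spark.yarn.jars value above ends in spark-yarn_2.11.2.2.0.jar while the yarn.application.classpath entry is spark-yarn_2.11-2.2.0.jar; if that dot is a typo in the real configuration, it alone is enough to produce this error.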
I have a PySpark job which works successfully with a small cluster, but it starts to get a lot of the following errors in the first few minutes after it starts up. Any idea how I can solve this? This is with PySpark 2.2.0 on Mesos.
17/09/29 18:54:26 INFO Executor: Running task 5717.0 in stage 0.0 (TID 5717)
17/09/29 18:54:26 INFO CoarseGrainedExecutorBackend: Got assigned task 5813
17/09/29 18:54:26 INFO Executor: Running task 5813.0 in stage 0.0 (TID 5813)
17/09/29 18:54:26 INFO CoarseGrainedExecutorBackend: Got assigned task 5909
17/09/29 18:54:26 INFO Executor: Running task 5909.0 in stage 0.0 (TID 5909)
17/09/29 18:54:56 ERROR TransportClientFactory: Exception while bootstrapping client after 30001 ms
java.lang.RuntimeException: java.util.concurrent.TimeoutException: Timeout waiting for task.
at org.spark_project.guava.base.Throwables.propagate(Throwables.java:160)
at org.apache.spark.network.client.TransportClient.sendRpcSync(TransportClient.java:275)
at org.apache.spark.network.sasl.SaslClientBootstrap.doBootstrap(SaslClientBootstrap.java:70)
at org.apache.spark.network.crypto.AuthClientBootstrap.doSaslAuth(AuthClientBootstrap.java:117)
at org.apache.spark.network.crypto.AuthClientBootstrap.doBootstrap(AuthClientBootstrap.java:76)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:244)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:182)
at org.apache.spark.rpc.netty.NettyRpcEnv.downloadClient(NettyRpcEnv.scala:366)
at org.apache.spark.rpc.netty.NettyRpcEnv.openChannel(NettyRpcEnv.scala:332)
at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:654)
at org.apache.spark.util.Utils$.fetchFile(Utils.scala:467)
at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$3.apply(Executor.scala:684)
at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$3.apply(Executor.scala:681)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:681)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:308)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.TimeoutException: Timeout waiting for task.
at org.spark_project.guava.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
at org.spark_project.guava.util.concurrent.AbstractFuture.get(AbstractFuture.java:96)
at org.apache.spark.network.client.TransportClient.sendRpcSync(TransportClient.java:271)
... 23 more
17/09/29 18:54:56 ERROR TransportResponseHandler: Still have 1 requests outstanding when connection from /10.0.1.25:37314 is closed
17/09/29 18:54:56 INFO Executor: Fetching spark://10.0.1.25:37314/files/djinn.spark.zip with timestamp 1506711239350
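The "Exception while bootstrapping client after 30001 ms" entries come from the SASL handshake each executor performs while fetching the job's files from the driver, and 30 s looks like the default budget for that handshake, so a driver swamped by thousands of tasks fetching files at startup can easily push executors past it. A hedged sketch of giving the network layer more slack at submit time (the values are illustrative and your_job.py is a placeholder):

# Assumptions: illustrative timeout values; your_job.py is a placeholder.
# spark.network.timeout is the documented umbrella default for most network
# timeouts; spark.shuffle.sasl.timeout (default 30s, if exposed by your
# build) is the knob that matches the 30001 ms figure most directly.
spark-submit \
  --conf spark.network.timeout=300s \
  --conf spark.shuffle.sasl.timeout=120s \
  --conf spark.executor.heartbeatInterval=60s \
  your_job.py

Reducing how many executors start (and fetch files) at once, or distributing the dependencies via HDFS instead of the driver, attacks the same bottleneck from the other side.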
I'm using Ubuntu 16.04 LTS.
I've installed DataStax with this procedure.
The very first day it worked perfectly, but now it's not working. Here's the screenshot.
My debug.log:
ERROR [main] 2017-01-30 13:54:08,951 DseDaemon.java:490 - Unable to start DSE server.
java.lang.RuntimeException: Unable to create thrift socket to localhost/127.0.0.1:9042
at org.apache.cassandra.thrift.CustomTThreadPoolServer$Factory.buildTServer(CustomTThreadPoolServer.java:269) ~[cassandra-all-3.0.11.1485.jar:5.0.5]
at org.apache.cassandra.thrift.TServerCustomFactory.buildTServer(TServerCustomFactory.java:46) ~[cassandra-all-3.0.11.1485.jar:5.0.5]
at org.apache.cassandra.thrift.ThriftServer$ThriftServerThread.<init>(ThriftServer.java:131) ~[cassandra-all-3.0.11.1485.jar:5.0.5]
at org.apache.cassandra.thrift.ThriftServer.start(ThriftServer.java:58) ~[cassandra-all-3.0.11.1485.jar:5.0.5]
at com.datastax.bdp.server.DseThriftServer.start(DseThriftServer.java:46) ~[dse-core-5.0.5.jar:5.0.5]
at org.apache.cassandra.service.CassandraDaemon.start(CassandraDaemon.java:486) [cassandra-all-3.0.11.1485.jar:3.0.11.1485]
at com.datastax.bdp.server.DseDaemon.start(DseDaemon.java:485) ~[dse-core-5.0.5.jar:5.0.5]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:583) [cassandra-all-3.0.11.1485.jar:3.0.11.1485]
at com.datastax.bdp.DseModule.main(DseModule.java:91) [dse-core-5.0.5.jar:5.0.5]
Caused by: org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address localhost/127.0.0.1:9042.
at org.apache.cassandra.thrift.TCustomServerSocket.<init>(TCustomServerSocket.java:75) ~[cassandra-all-3.0.11.1485.jar:5.0.5]
at org.apache.cassandra.thrift.CustomTThreadPoolServer$Factory.buildTServer(CustomTThreadPoolServer.java:264) ~[cassandra-all-3.0.11.1485.jar:5.0.5]
... 8 common frames omitted
ERROR [main] 2017-01-30 13:54:08,952 CassandraDaemon.java:709 - Exception encountered during startup
java.lang.RuntimeException: java.lang.RuntimeException: Unable to create thrift socket to localhost/127.0.0.1:9042
at com.datastax.bdp.server.DseDaemon.start(DseDaemon.java:493) ~[dse-core-5.0.5.jar:5.0.5]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:583) ~[cassandra-all-3.0.11.1485.jar:3.0.11.1485]
at com.datastax.bdp.DseModule.main(DseModule.java:91) [dse-core-5.0.5.jar:5.0.5]
Caused by: java.lang.RuntimeException: Unable to create thrift socket to localhost/127.0.0.1:9042
at org.apache.cassandra.thrift.CustomTThreadPoolServer$Factory.buildTServer(CustomTThreadPoolServer.java:269) ~[cassandra-all-3.0.11.1485.jar:5.0.5]
at org.apache.cassandra.thrift.TServerCustomFactory.buildTServer(TServerCustomFactory.java:46) ~[cassandra-all-3.0.11.1485.jar:5.0.5]
at org.apache.cassandra.thrift.ThriftServer$ThriftServerThread.<init>(ThriftServer.java:131) ~[cassandra-all-3.0.11.1485.jar:5.0.5]
at org.apache.cassandra.thrift.ThriftServer.start(ThriftServer.java:58) ~[cassandra-all-3.0.11.1485.jar:5.0.5]
at com.datastax.bdp.server.DseThriftServer.start(DseThriftServer.java:46) ~[dse-core-5.0.5.jar:5.0.5]
at org.apache.cassandra.service.CassandraDaemon.start(CassandraDaemon.java:486) ~[cassandra-all-3.0.11.1485.jar:3.0.11.1485]
at com.datastax.bdp.server.DseDaemon.start(DseDaemon.java:485) ~[dse-core-5.0.5.jar:5.0.5]
... 2 common frames omitted
Caused by: org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address localhost/127.0.0.1:9042.
at org.apache.cassandra.thrift.TCustomServerSocket.<init>(TCustomServerSocket.java:75) ~[cassandra-all-3.0.11.1485.jar:5.0.5]
at org.apache.cassandra.thrift.CustomTThreadPoolServer$Factory.buildTServer(CustomTThreadPoolServer.java:264) ~[cassandra-all-3.0.11.1485.jar:5.0.5]
... 8 common frames omitted
INFO [Daemon shutdown] 2017-01-30 13:54:08,953 DseDaemon.java:577 - DSE shutting down...
DEBUG [StorageServiceShutdownHook] 2017-01-30 13:54:08,954 StorageService.java:1210 - DRAINING: starting drain process
INFO [StorageServiceShutdownHook] 2017-01-30 13:54:08,955 HintsService.java:212 - Paused hints dispatch
INFO [Daemon shutdown] 2017-01-30 13:54:08,957 PluginManager.java:347 - Deactivating plugin: com.datastax.bdp.plugin.health.NodeHealthPlugin
INFO [Daemon shutdown] 2017-01-30 13:54:08,958 PluginManager.java:347 - Deactivating plugin: com.datastax.bdp.plugin.DseClientToolPlugin
INFO [Daemon shutdown] 2017-01-30 13:54:08,959 PluginManager.java:347 - Deactivating plugin: com.datastax.bdp.plugin.InternalQueryRouterPlugin
INFO [Daemon shutdown] 2017-01-30 13:54:08,959 PluginManager.java:347 - Deactivating plugin: com.datastax.bdp.leasemanager.LeasePlugin
INFO [Daemon shutdown] 2017-01-30 13:54:08,962 PluginManager.java:347 - Deactivating plugin: com.datastax.bdp.cassandra.cql3.CqlSlowLogPlugin
INFO [Daemon shutdown] 2017-01-30 13:54:08,966 PluginManager.java:347 - Deactivating plugin: com.datastax.bdp.cassandra.metrics.PerformanceObjectsPlugin
INFO [Daemon shutdown] 2017-01-30 13:54:08,966 PluginManager.java:347 - Deactivating plugin: com.datastax.bdp.plugin.ThreadPoolPlugin
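"Could not create ServerSocket on address localhost/127.0.0.1:9042" just means something already owns that port when DSE tries to bind it; note also that 9042 is normally the CQL native transport port while the Thrift rpc_port defaults to 9160, so the Thrift server trying to bind 9042 hints that the ports in cassandra.yaml may collide. A hedged check with stock Ubuntu tooling (the cassandra.yaml path is the DSE package default and may differ on your install):

# Who holds port 9042 right now? A previous DSE/Cassandra process that never
# exited is the usual suspect.
sudo lsof -i :9042
ps aux | grep -i '[c]assandra'
# Compare the configured ports; the path below is the DSE package default.
grep -E 'native_transport_port|rpc_port' /etc/dse/cassandra/cassandra.yaml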
I have an application with one microservice and one gateway. I deploy my WAR application(s) on my external server, so how can I deploy jhipster-registry on my external server as a WAR file?
Thanks.
@Gaël Marziou
The log file of my Registry:
:: Running Spring Boot 1.3.6.
:: http://jhipster.github.io
2016-10-09 03:40:55.863 INFO 5278 --- [ost-startStop-1] i.g.jhipster.registry.ApplicationWebXml : The following profiles are active: dev
2016-10-09 03:40:57.076 WARN 5278 --- [ost-startStop-1] o.s.c.a.ConfigurationClassPostProcessor : Cannot enhance #Configuration bean definitio$
2016-10-09 03:40:57.646 DEBUG 5278 --- [ost-startStop-1] i.g.j.r.config.MetricsConfiguration : Registering JVM gauges
2016-10-09 03:40:57.672 DEBUG 5278 --- [ost-startStop-1] i.g.j.r.config.MetricsConfiguration : Initializing Metrics JMX reporting
2016-10-09 03:40:59.427 INFO 5278 --- [ost-startStop-1] i.g.j.registry.config.WebConfigurer : Web application configuration, using profile$
2016-10-09 03:40:59.427 DEBUG 5278 --- [ost-startStop-1] i.g.j.registry.config.WebConfigurer : Initializing Metrics registries
2016-10-09 03:40:59.430 DEBUG 5278 --- [ost-startStop-1] i.g.j.registry.config.WebConfigurer : Registering Metrics Filter
2016-10-09 03:40:59.431 DEBUG 5278 --- [ost-startStop-1] i.g.j.registry.config.WebConfigurer : Registering Metrics Servlet
2016-10-09 03:40:59.431 INFO 5278 --- [ost-startStop-1] i.g.j.registry.config.WebConfigurer : Web application fully configured
2016-10-09 03:40:59.442 INFO 5278 --- [ost-startStop-1] i.g.jhipster.registry.JHipsterRegistry : Running with Spring profile(s) : [dev]
2016-10-09 03:41:00.536 INFO 5278 --- [ost-startStop-1] com.netflix.discovery.DiscoveryClient : Client configured to neither register nor qu$
2016-10-09 03:41:00.545 INFO 5278 --- [ost-startStop-1] com.netflix.discovery.DiscoveryClient : Discovery Client initialized at timestamp 14$
2016-10-09 03:41:00.623 INFO 5278 --- [ost-startStop-1] c.n.eureka.DefaultEurekaServerContext : Initializing ...
2016-10-09 03:41:00.625 INFO 5278 --- [ost-startStop-1] c.n.eureka.cluster.PeerEurekaNodes : Adding new peer nodes [http ://admin:admin#lo$
2016-10-09 03:41:00.747 INFO 5278 --- [ost-startStop-1] c.n.d.provider.DiscoveryJerseyProvider : Using JSON encoding codec LegacyJacksonJson
2016-10-09 03:41:00.748 INFO 5278 --- [ost-startStop-1] c.n.d.provider.DiscoveryJerseyProvider : Using JSON decoding codec LegacyJacksonJson
2016-10-09 03:41:00.748 INFO 5278 --- [ost-startStop-1] c.n.d.provider.DiscoveryJerseyProvider : Using XML encoding codec XStreamXml
2016-10-09 03:41:00.748 INFO 5278 --- [ost-startStop-1] c.n.d.provider.DiscoveryJerseyProvider : Using XML decoding codec XStreamXml
2016-10-09 03:41:00.989 INFO 5278 --- [ost-startStop-1] c.n.eureka.cluster.PeerEurekaNodes : Replica node URL: http ://admin:admin#localh$
2016-10-09 03:41:00.997 INFO 5278 --- [ost-startStop-1] c.n.e.registry.AbstractInstanceRegistry : Finished initializing remote region registri$
2016-10-09 03:41:00.998 INFO 5278 --- [ost-startStop-1] c.n.eureka.DefaultEurekaServerContext : Initialized
2016-10-09 03:41:02.969 WARN 5278 --- [ost-startStop-1] c.n.c.sources.URLConfigurationSource : No URLs will be polled as dynamic configurat$
2016-10-09 03:41:02.969 INFO 5278 --- [ost-startStop-1] c.n.c.sources.URLConfigurationSource : To enable URLs as dynamic configuration sour$
2016-10-09 03:41:02.977 WARN 5278 --- [ost-startStop-1] c.n.c.sources.URLConfigurationSource : No URLs will be polled as dynamic configurat$
2016-10-09 03:41:02.977 INFO 5278 --- [ost-startStop-1] c.n.c.sources.URLConfigurationSource : To enable URLs as dynamic configuration sour$
2016-10-09 03:41:05.582 INFO 5278 --- [ Thread-13] c.n.e.r.PeerAwareInstanceRegistryImpl : Got 1 instances from neighboring DS node
2016-10-09 03:41:05.583 INFO 5278 --- [ Thread-13] c.n.e.r.PeerAwareInstanceRegistryImpl : Renew threshold is: 1
2016-10-09 03:41:05.583 INFO 5278 --- [ Thread-13] c.n.e.r.PeerAwareInstanceRegistryImpl : Changing status to UP
2016-10-09 03:41:05.660 INFO 5278 --- [ost-startStop-1] i.g.jhipster.registry.ApplicationWebXml : Started ApplicationWebXml in 11.934 seconds $
Oct 09, 2016 3:41:05 AM org.apache.catalina.core.ContainerBase startInternal
SEVERE: A child container failed during start
java.util.concurrent.ExecutionException: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHos$
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:1123)
at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:799)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1559)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1549)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
... 6 more
Caused by: java.lang.NoSuchMethodError: javax.servlet.ServletContext.getVirtualServerName()Ljava/lang/String;
at org.apache.tomcat.websocket.server.WsServerContainer.<init>(WsServerContainer.java:150)
at org.apache.tomcat.websocket.server.WsSci.init(WsSci.java:131)
at org.apache.tomcat.websocket.server.WsSci.onStartup(WsSci.java:47)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5493)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
... 6 more
Oct 09, 2016 3:41:05 AM org.apache.catalina.core.ContainerBase startInternal
SEVERE: A child container failed during start
java.util.concurrent.ExecutionException: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHos$
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:1123)
at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:300)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.StandardService.startInternal(StandardService.java:443)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:731)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.startup.Catalina.start(Catalina.java:689)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:321)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:455)
Caused by: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1559)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1549)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.catalina.LifecycleException: A child container failed during start
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:1131)
at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:799)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
... 6 more
Oct 09, 2016 3:41:05 AM org.apache.catalina.startup.Catalina start
SEVERE: The required Server component failed to start so Tomcat is unable to start.
org.apache.catalina.LifecycleException: Failed to start component [StandardServer[8004]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
at org.apache.catalina.startup.Catalina.start(Catalina.java:689)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:321)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:455)
Caused by: org.apache.catalina.LifecycleException: Failed to start component [StandardService[Catalina]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:731)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
... 7 more
Caused by: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
at org.apache.catalina.core.StandardService.startInternal(StandardService.java:443)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
... 9 more
Caused by: org.apache.catalina.LifecycleException: A child container failed during start
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:1131)
at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:300)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
... 11 more
Oct 09, 2016 3:41:05 AM org.apache.coyote.AbstractProtocol pause
INFO: Pausing ProtocolHandler ["http-bio-8762"]
Oct 09, 2016 3:41:05 AM org.apache.catalina.core.StandardService stopInternal
INFO: Stopping service Catalina
Oct 09, 2016 3:41:05 AM org.apache.coyote.AbstractProtocol destroy
INFO: Destroying ProtocolHandler ["http-bio-8762"]
2016-10-09 03:41:05.706 INFO 5278 --- [ main] com.netflix.discovery.DiscoveryClient : Completed shut down of DiscoveryClient
2016-10-09 03:41:05.725 INFO 5278 --- [ main] c.n.eureka.DefaultEurekaServerContext : Shutting down ...
2016-10-09 03:41:05.735 INFO 5278 --- [ main] c.n.eureka.DefaultEurekaServerContext : Shut down
Oct 09, 2016 3:41:05 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [] appears to have started a thread named [ReplicaAwareInstanceRegistry - RenewalThresholdUpdater] but has failed to$
Oct 09, 2016 3:41:05 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [] appears to have started a thread named [Eureka-JerseyClient-Conn-Cleaner2] but has failed to stop it. This is ver$
Oct 09, 2016 3:41:05 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [] appears to have started a thread named [StatsMonitor-0] but has failed to stop it. This is very likely to create $
Oct 09, 2016 3:41:05 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [] appears to have started a thread named [Eureka-CacheFillTimer] but has failed to stop it. This is very likely to $
Oct 09, 2016 3:41:05 AM org.apache.catalina.loader.WebappClassLoader loadClass
INFO: Illegal access: this web application instance has been stopped already. Could not load org.eclipse.jgit.util.FileUtils. The eventual fol$
java.lang.IllegalStateException
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1610)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1569)
at org.springframework.cloud.config.server.support.AbstractScmAccessor$1.run(AbstractScmAccessor.java:91)
Exception in thread "Thread-5" java.lang.NoClassDefFoundError: org/eclipse/jgit/util/FileUtils
at org.springframework.cloud.config.server.support.AbstractScmAccessor$1.run(AbstractScmAccessor.java:91)
Caused by: java.lang.ClassNotFoundException: org.eclipse.jgit.util.FileUtils
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1718)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1569)
... 1 more
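The root failure in this log is NoSuchMethodError: javax.servlet.ServletContext.getVirtualServerName(); that method only exists from Servlet 3.1 onward, and the "http-bio-8762" connector name suggests the WAR is being deployed to a Tomcat 7 (Servlet 3.0) instance. A quick check, assuming CATALINA_HOME points at the Tomcat that serves the WAR:

# Print the Tomcat version behind the deployment; getVirtualServerName()
# requires Servlet 3.1, i.e. Tomcat 8 or newer.
"$CATALINA_HOME/bin/version.sh"

If it reports Tomcat 7.x, deploying the jhipster-registry WAR to Tomcat 8+ (or running the registry as an executable JAR/WAR on its embedded container) should get past this error.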