I've recently installed Android Studio (AS) 3.1, and whenever I create a new project, AS gets stuck at "Building xxxx Gradle project info" for hours.
I've already followed the answers here, here, here, and here, and none of them solved the problem.
OS: Windows 10
Gradle version: 4.4
Gradle directory:
C:\Users\Ahmed\.gradle\wrapper\dists\gradle-4.4-all\9br9xq1tocpiv8o6njlyu5op1
Gradle directory components:
- gradle-4.4/
- gradle-4.4-all.zip
- gradle-4.4-all.zip.lck
- gradle-4.4-all.zip.ok
Here's a screenshot: [screenshot of the stuck "Building Gradle project info" dialog omitted]
Update:
After waiting for 4 hours, the program finally started, showing an error message: [error dialog screenshot omitted]
Here's the last part of the idea.log file:
2018-04-10 19:11:54,478 [e-1024-b02] INFO - j.ide.ui.OptionsTopHitProvider - 10386 ms spent to cache options in application
2018-04-10 19:11:54,723 [e-1024-b02] INFO - rd.FirstRunWizardFrameProvider - Overriding welcome frame to be resizable
2018-04-10 19:12:26,145 [d thread 2] INFO - .openapi.application.Preloader - Finished preloading com.intellij.ide.ui.search.SearchableOptionPreloader#201f6ae4
2018-04-10 19:12:35,171 [d thread 2] INFO - .openapi.application.Preloader - Finished preloading com.intellij.codeInsight.completion.CompletionPreloader#67e41d09
2018-04-10 19:21:08,465 [e-1024-b02] INFO - idea.project.IndexingSuspender - Subscribing project 'Project 'F:\Programming\Mobile\Opensource Android Apps\LeafPic-dev' LeafPic-dev' to indexing suspender events (com.android.tools.idea.project.IndexingSuspender#777708be)
2018-04-10 19:21:08,666 [e-1024-b02] INFO - ellij.project.impl.ProjectImpl - 147 project components initialized in 20284 ms
2018-04-10 19:21:08,668 [e-1024-b02] INFO - le.impl.ModuleManagerComponent - 0 module(s) loaded in 0 ms
2018-04-10 19:21:15,186 [e-1024-b02] INFO - e.project.sync.GradleSyncState - Started sync with Gradle for project 'LeafPic-dev'.
2018-04-10 19:21:15,490 [e-1024-b02] INFO - idea.project.IndexingSuspender - Consuming IndexingSuspender activation event: SYNC_STARTED
2018-04-10 19:21:21,837 [d thread 2] INFO - s.plugins.gradle.GradleManager - Instructing gradle to use java from C:/Program Files/Android/Android Studio/jre
2018-04-10 19:21:22,251 [d thread 2] INFO - s.plugins.gradle.GradleManager - Instructing gradle to use java from C:/Program Files/Android/Android Studio/jre
2018-04-10 19:21:25,535 [e-1024-b02] INFO - rojectCodeStyleSettingsManager - Initialized from default code style settings.
2018-04-10 19:24:23,889 [d thread 2] INFO - xecution.GradleExecutionHelper - Passing command-line args to Gradle Tooling API: -Didea.version=3.1 -Djava.awt.headless=true -Pandroid.injected.build.model.only=true -Pandroid.injected.build.model.only.advanced=true -Pandroid.injected.invoked.from.ide=true -Pandroid.injected.build.model.only.versioned=3 -Pandroid.injected.studio.version=3.1.1.0 -Pandroid.builder.sdkDownload=false --init-script C:\Users\Ahmed\AppData\Local\Temp\ijinit25.gradle --offline
2018-04-10 23:11:08,898 [d thread 2] INFO - .project.GradleProjectResolver - Gradle project resolve error
org.gradle.tooling.GradleConnectionException: Could not run build action using Gradle distribution 'https://services.gradle.org/distributions/gradle-4.4-all.zip'.
at org.gradle.tooling.internal.consumer.ExceptionTransformer.transform(ExceptionTransformer.java:55)
at org.gradle.tooling.internal.consumer.ExceptionTransformer.transform(ExceptionTransformer.java:29)
at org.gradle.tooling.internal.consumer.ResultHandlerAdapter.onFailure(ResultHandlerAdapter.java:41)
at org.gradle.tooling.internal.consumer.async.DefaultAsyncConsumerActionExecutor$1$1.run(DefaultAsyncConsumerActionExecutor.java:57)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:46)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
at java.lang.Thread.run(Thread.java:745)
at org.gradle.tooling.internal.consumer.BlockingResultHandler.getResult(BlockingResultHandler.java:46)
at org.gradle.tooling.internal.consumer.DefaultBuildActionExecuter.run(DefaultBuildActionExecuter.java:60)
at org.jetbrains.plugins.gradle.service.project.GradleProjectResolver.doResolveProjectInfo(GradleProjectResolver.java:283)
at org.jetbrains.plugins.gradle.service.project.GradleProjectResolver.access$200(GradleProjectResolver.java:79)
at org.jetbrains.plugins.gradle.service.project.GradleProjectResolver$ProjectConnectionDataNodeFunction.fun(GradleProjectResolver.java:939)
at org.jetbrains.plugins.gradle.service.project.GradleProjectResolver$ProjectConnectionDataNodeFunction.fun(GradleProjectResolver.java:923)
at org.jetbrains.plugins.gradle.service.execution.GradleExecutionHelper.execute(GradleExecutionHelper.java:210)
at org.jetbrains.plugins.gradle.service.project.GradleProjectResolver.resolveProjectInfo(GradleProjectResolver.java:140)
at org.jetbrains.plugins.gradle.service.project.GradleProjectResolver.resolveProjectInfo(GradleProjectResolver.java:79)
at com.intellij.openapi.externalSystem.service.remote.RemoteExternalSystemProjectResolverImpl.lambda$resolveProjectInfo$0(RemoteExternalSystemProjectResolverImpl.java:37)
at com.intellij.openapi.externalSystem.service.remote.AbstractRemoteExternalSystemService.execute(AbstractRemoteExternalSystemService.java:59)
at com.intellij.openapi.externalSystem.service.remote.RemoteExternalSystemProjectResolverImpl.resolveProjectInfo(RemoteExternalSystemProjectResolverImpl.java:37)
at com.intellij.openapi.externalSystem.service.remote.wrapper.ExternalSystemProjectResolverWrapper.resolveProjectInfo(ExternalSystemProjectResolverWrapper.java:45)
at com.intellij.openapi.externalSystem.service.internal.ExternalSystemResolveProjectTask.doExecute(ExternalSystemResolveProjectTask.java:87)
at com.intellij.openapi.externalSystem.service.internal.AbstractExternalSystemTask.execute(AbstractExternalSystemTask.java:163)
at com.intellij.openapi.externalSystem.service.internal.AbstractExternalSystemTask.execute(AbstractExternalSystemTask.java:149)
at com.intellij.openapi.externalSystem.util.ExternalSystemUtil$3.execute(ExternalSystemUtil.java:557)
at com.intellij.openapi.externalSystem.util.ExternalSystemUtil$4.run(ExternalSystemUtil.java:619)
at com.intellij.openapi.progress.impl.CoreProgressManager$TaskRunnable.run(CoreProgressManager.java:713)
at com.intellij.openapi.progress.impl.CoreProgressManager$5.run(CoreProgressManager.java:397)
at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$runProcess$1(CoreProgressManager.java:157)
at com.intellij.openapi.progress.impl.CoreProgressManager.registerIndicatorAndRun(CoreProgressManager.java:543)
at com.intellij.openapi.progress.impl.CoreProgressManager.executeProcessUnderProgress(CoreProgressManager.java:488)
at com.intellij.openapi.progress.impl.ProgressManagerImpl.executeProcessUnderProgress(ProgressManagerImpl.java:94)
at com.intellij.openapi.progress.impl.CoreProgressManager.runProcess(CoreProgressManager.java:144)
at com.intellij.openapi.application.impl.ApplicationImpl.lambda$null$10(ApplicationImpl.java:575)
at com.intellij.openapi.application.impl.ApplicationImpl$1.run(ApplicationImpl.java:315)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.gradle.launcher.daemon.client.NoUsableDaemonFoundException: Unable to find a usable idle daemon. I have connected to 100 different daemons but I could not use any of them to run the build. BuildActionParameters were DefaultBuildActionParameters{, currentDir=F:\Programming\Mobile\Opensource Android Apps\LeafPic-dev, systemProperties size=94, envVariables size=40, logLevel=LIFECYCLE, useDaemon=true, continuous=false, interactive=false, injectedPluginClasspath=[]}.
at org.gradle.launcher.daemon.client.DaemonClient.execute(DaemonClient.java:151)
at org.gradle.launcher.daemon.client.DaemonClient.execute(DaemonClient.java:92)
at org.gradle.tooling.internal.provider.DaemonBuildActionExecuter.execute(DaemonBuildActionExecuter.java:60)
at org.gradle.tooling.internal.provider.DaemonBuildActionExecuter.execute(DaemonBuildActionExecuter.java:41)
at org.gradle.tooling.internal.provider.LoggingBridgingBuildActionExecuter.execute(LoggingBridgingBuildActionExecuter.java:60)
at org.gradle.tooling.internal.provider.LoggingBridgingBuildActionExecuter.execute(LoggingBridgingBuildActionExecuter.java:34)
at org.gradle.tooling.internal.provider.ProviderConnection.run(ProviderConnection.java:156)
at org.gradle.tooling.internal.provider.ProviderConnection.runClientAction(ProviderConnection.java:140)
at org.gradle.tooling.internal.provider.ProviderConnection.run(ProviderConnection.java:126)
at org.gradle.tooling.internal.provider.DefaultConnection.run(DefaultConnection.java:224)
at org.gradle.tooling.internal.consumer.connection.CancellableConsumerConnection$CancellableActionRunner.run(CancellableConsumerConnection.java:99)
at org.gradle.tooling.internal.consumer.connection.AbstractConsumerConnection.run(AbstractConsumerConnection.java:62)
at org.gradle.tooling.internal.consumer.connection.ParameterValidatingConsumerConnection.run(ParameterValidatingConsumerConnection.java:53)
at org.gradle.tooling.internal.consumer.DefaultBuildActionExecuter$1.run(DefaultBuildActionExecuter.java:71)
at org.gradle.tooling.internal.consumer.connection.LazyConsumerActionExecutor.run(LazyConsumerActionExecutor.java:84)
at org.gradle.tooling.internal.consumer.connection.CancellableConsumerActionExecutor.run(CancellableConsumerActionExecutor.java:45)
at org.gradle.tooling.internal.consumer.connection.ProgressLoggingConsumerActionExecutor.run(ProgressLoggingConsumerActionExecutor.java:58)
at org.gradle.tooling.internal.consumer.connection.RethrowingErrorsConsumerActionExecutor.run(RethrowingErrorsConsumerActionExecutor.java:38)
at org.gradle.tooling.internal.consumer.async.DefaultAsyncConsumerActionExecutor$1$1.run(DefaultAsyncConsumerActionExecutor.java:55)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:46)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
... 1 more
Caused by: org.gradle.launcher.daemon.client.DaemonInitialConnectException: The first result from the daemon was empty. Most likely the process died immediately after connection.
at org.gradle.launcher.daemon.client.DaemonClient.executeBuild(DaemonClient.java:170)
at org.gradle.launcher.daemon.client.DaemonClient.execute(DaemonClient.java:141)
... 24 more
2018-04-10 23:11:11,606 [d thread 2] WARN - nal.AbstractExternalSystemTask - The first result from the daemon was empty. Most likely the process died immediately after connection.
com.intellij.openapi.externalSystem.model.ExternalSystemException: The first result from the daemon was empty. Most likely the process died immediately after connection.
at com.android.tools.idea.gradle.project.sync.idea.ProjectImportErrorHandler.getUserFriendlyError(ProjectImportErrorHandler.java:72)
at com.android.tools.idea.gradle.project.sync.idea.AndroidGradleProjectResolver.getUserFriendlyError(AndroidGradleProjectResolver.java:436)
at org.jetbrains.plugins.gradle.service.project.AbstractProjectResolverExtension.getUserFriendlyError(AbstractProjectResolverExtension.java:158)
at org.jetbrains.plugins.gradle.service.project.GradleProjectResolver$ProjectConnectionDataNodeFunction.fun(GradleProjectResolver.java:943)
at org.jetbrains.plugins.gradle.service.project.GradleProjectResolver$ProjectConnectionDataNodeFunction.fun(GradleProjectResolver.java:923)
at org.jetbrains.plugins.gradle.service.execution.GradleExecutionHelper.execute(GradleExecutionHelper.java:210)
at org.jetbrains.plugins.gradle.service.project.GradleProjectResolver.resolveProjectInfo(GradleProjectResolver.java:140)
at org.jetbrains.plugins.gradle.service.project.GradleProjectResolver.resolveProjectInfo(GradleProjectResolver.java:79)
at com.intellij.openapi.externalSystem.service.remote.RemoteExternalSystemProjectResolverImpl.lambda$resolveProjectInfo$0(RemoteExternalSystemProjectResolverImpl.java:37)
at com.intellij.openapi.externalSystem.service.remote.AbstractRemoteExternalSystemService.execute(AbstractRemoteExternalSystemService.java:59)
at com.intellij.openapi.externalSystem.service.remote.RemoteExternalSystemProjectResolverImpl.resolveProjectInfo(RemoteExternalSystemProjectResolverImpl.java:37)
at com.intellij.openapi.externalSystem.service.remote.wrapper.ExternalSystemProjectResolverWrapper.resolveProjectInfo(ExternalSystemProjectResolverWrapper.java:45)
at com.intellij.openapi.externalSystem.service.internal.ExternalSystemResolveProjectTask.doExecute(ExternalSystemResolveProjectTask.java:87)
at com.intellij.openapi.externalSystem.service.internal.AbstractExternalSystemTask.execute(AbstractExternalSystemTask.java:163)
at com.intellij.openapi.externalSystem.service.internal.AbstractExternalSystemTask.execute(AbstractExternalSystemTask.java:149)
at com.intellij.openapi.externalSystem.util.ExternalSystemUtil$3.execute(ExternalSystemUtil.java:557)
at com.intellij.openapi.externalSystem.util.ExternalSystemUtil$4.run(ExternalSystemUtil.java:619)
at com.intellij.openapi.progress.impl.CoreProgressManager$TaskRunnable.run(CoreProgressManager.java:713)
at com.intellij.openapi.progress.impl.CoreProgressManager$5.run(CoreProgressManager.java:397)
at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$runProcess$1(CoreProgressManager.java:157)
at com.intellij.openapi.progress.impl.CoreProgressManager.registerIndicatorAndRun(CoreProgressManager.java:543)
at com.intellij.openapi.progress.impl.CoreProgressManager.executeProcessUnderProgress(CoreProgressManager.java:488)
at com.intellij.openapi.progress.impl.ProgressManagerImpl.executeProcessUnderProgress(ProgressManagerImpl.java:94)
at com.intellij.openapi.progress.impl.CoreProgressManager.runProcess(CoreProgressManager.java:144)
at com.intellij.openapi.application.impl.ApplicationImpl.lambda$null$10(ApplicationImpl.java:575)
at com.intellij.openapi.application.impl.ApplicationImpl$1.run(ApplicationImpl.java:315)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.gradle.launcher.daemon.client.DaemonInitialConnectException: The first result from the daemon was empty. Most likely the process died immediately after connection.
at org.gradle.launcher.daemon.client.DaemonClient.executeBuild(DaemonClient.java:170)
at org.gradle.launcher.daemon.client.DaemonClient.execute(DaemonClient.java:141)
at org.gradle.launcher.daemon.client.DaemonClient.execute(DaemonClient.java:92)
at org.gradle.tooling.internal.provider.DaemonBuildActionExecuter.execute(DaemonBuildActionExecuter.java:60)
at org.gradle.tooling.internal.provider.DaemonBuildActionExecuter.execute(DaemonBuildActionExecuter.java:41)
at org.gradle.tooling.internal.provider.LoggingBridgingBuildActionExecuter.execute(LoggingBridgingBuildActionExecuter.java:60)
at org.gradle.tooling.internal.provider.LoggingBridgingBuildActionExecuter.execute(LoggingBridgingBuildActionExecuter.java:34)
at org.gradle.tooling.internal.provider.ProviderConnection.run(ProviderConnection.java:156)
at org.gradle.tooling.internal.provider.ProviderConnection.runClientAction(ProviderConnection.java:140)
at org.gradle.tooling.internal.provider.ProviderConnection.run(ProviderConnection.java:126)
at org.gradle.tooling.internal.provider.DefaultConnection.run(DefaultConnection.java:224)
at org.gradle.tooling.internal.consumer.connection.CancellableConsumerConnection$CancellableActionRunner.run(CancellableConsumerConnection.java:99)
at org.gradle.tooling.internal.consumer.connection.AbstractConsumerConnection.run(AbstractConsumerConnection.java:62)
at org.gradle.tooling.internal.consumer.connection.ParameterValidatingConsumerConnection.run(ParameterValidatingConsumerConnection.java:53)
at org.gradle.tooling.internal.consumer.DefaultBuildActionExecuter$1.run(DefaultBuildActionExecuter.java:71)
at org.gradle.tooling.internal.consumer.connection.LazyConsumerActionExecutor.run(LazyConsumerActionExecutor.java:84)
at org.gradle.tooling.internal.consumer.connection.CancellableConsumerActionExecutor.run(CancellableConsumerActionExecutor.java:45)
at org.gradle.tooling.internal.consumer.connection.ProgressLoggingConsumerActionExecutor.run(ProgressLoggingConsumerActionExecutor.java:58)
at org.gradle.tooling.internal.consumer.connection.RethrowingErrorsConsumerActionExecutor.run(RethrowingErrorsConsumerActionExecutor.java:38)
at org.gradle.tooling.internal.consumer.async.DefaultAsyncConsumerActionExecutor$1$1.run(DefaultAsyncConsumerActionExecutor.java:55)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:46)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
... 1 more
2018-04-10 23:11:14,353 [d thread 2] WARN - ect.sync.idea.ProjectSetUpTask - The first result from the daemon was empty. Most likely the process died immediately after connection.
2018-04-10 23:11:14,530 [d thread 2] INFO - e.project.sync.GradleSyncState - Gradle sync failed: The first result from the daemon was empty. Most likely the process died immediately after connection.
Consult IDE log for more details (Help | Show Log) (3h 49m 59s 342ms)
2018-04-10 23:11:24,533 [d thread 2] INFO - j.ide.script.IdeStartupScripts - 0 startup script(s) found
2018-04-10 23:11:40,980 [ thread 67] INFO - tartup.impl.StartupManagerImpl - ExternalSystemStartupActivity run in 292ms under project opening modal progress
2018-04-10 23:11:41,137 [ thread 67] INFO - tartup.impl.StartupManagerImpl - ConfigProjectComponent run in 104ms under project opening modal progress
2018-04-10 23:11:41,471 [ thread 67] INFO - tartup.impl.StartupManagerImpl - OCInitialTablesBuildingActivity run in 259ms under project opening modal progress
2018-04-10 23:11:41,915 [ thread 67] INFO - tartup.impl.StartupManagerImpl - InitToolWindowsActivity run in 390ms under project opening modal progress
2018-04-10 23:11:41,915 [ thread 67] INFO - .diagnostic.PerformanceWatcher - Post-startup activities under progress took 2185ms; general responsiveness: ok; EDT responsiveness: ok
2018-04-10 23:11:44,556 [e-1024-b02] INFO - tartup.impl.StartupManagerImpl - F:/Programming/Mobile/Opensource Android Apps/LeafPic-dev/.idea case-sensitivity: expected=false actual=false
2018-04-10 23:11:45,139 [ thread 70] INFO - pl.projectlevelman.NewMappings - VCS Root: [] - [<Project>]
2018-04-10 23:11:50,476 [ thread 69] INFO - .diagnostic.PerformanceWatcher - Pushing properties took 6027ms; general responsiveness: ok; EDT responsiveness: 1/2 sluggish, 1/2 very slow
2018-04-10 23:11:53,440 [e-1024-b02] INFO - tor.impl.FileEditorManagerImpl - Project opening took 13874787 ms
2018-04-10 23:12:10,202 [ thread 69] INFO - .diagnostic.PerformanceWatcher - Indexable file iteration took 19598ms; general responsiveness: 1/18 sluggish, 7/18 very slow; EDT responsiveness: 0/16 sluggish, 10/16 very slow
2018-04-10 23:12:10,208 [ thread 69] INFO - indexing.UnindexedFilesUpdater - Unindexed files update started: 286 files to update
2018-04-10 23:13:05,540 [ thread 69] INFO - .diagnostic.PerformanceWatcher - Unindexed files update took 55332ms; general responsiveness: 1/54 sluggish, 1/54 very slow; EDT responsiveness: 0/54 sluggish, 8/54 very slow
2018-04-10 23:13:05,680 [ thread 69] INFO - #com.jetbrains.cidr.lang - Clearing symbols finished in 0 s.
2018-04-10 23:13:05,870 [ thread 69] INFO - #com.jetbrains.cidr.lang - Building symbols in FAST mode, 0 source files from total 0 project files
2018-04-10 23:13:06,317 [ thread 69] INFO - #com.jetbrains.cidr.lang - Loading Module Maps finished in 0 s.
2018-04-10 23:13:06,337 [ thread 69] INFO - #com.jetbrains.cidr.lang - Saving Module Maps finished in 0 s.
2018-04-10 23:13:06,337 [ thread 69] INFO - #com.jetbrains.cidr.lang - Saving Module Maps finished in 0 s.
2018-04-10 23:13:06,338 [ thread 69] INFO - #com.jetbrains.cidr.lang - Loaded 0 tables for 0 files (0 project files)
2018-04-10 23:13:06,346 [ thread 69] INFO - #com.jetbrains.cidr.lang - Building symbols for 0 source files
2018-04-10 23:13:06,482 [ thread 69] INFO - #com.jetbrains.cidr.lang - Building symbols for 0 unused headers
2018-04-10 23:13:06,485 [ thread 69] INFO - #com.jetbrains.cidr.lang - Building symbols finished in 0 s.
2018-04-10 23:13:06,490 [ thread 69] INFO - #com.jetbrains.cidr.lang - Saving modified symbols for 0 files (0 tables of total 0)
2018-04-10 23:13:06,564 [ thread 69] INFO - #com.jetbrains.cidr.lang - Saving symbols finished in 0 s.
2018-04-10 23:13:08,141 [e-1024-b02] INFO - tartup.impl.StartupManagerImpl - Some post-startup activities freeze UI for noticeable time. Please consider making them DumbAware to do them in background under modal progress, or just making them faster to speed up project opening.
2018-04-10 23:13:08,142 [e-1024-b02] INFO - tartup.impl.StartupManagerImpl - ProjectInspectionProfileStartUpActivity run in 1516ms on UI thread
2018-04-10 23:13:12,220 [e-1024-b02] INFO - j.ide.ui.OptionsTopHitProvider - 3038 ms spent to cache options in project
2018-04-10 23:13:15,808 [e-1024-b02] INFO - idea.project.IndexingSuspender - Starting batch update for project: Project 'F:\Programming\Mobile\Opensource Android Apps\LeafPic-dev' LeafPic-dev
2018-04-10 23:13:22,604 [d thread 2] INFO - g.FileBasedIndexProjectHandler - Reindexing refreshed files: 1 to update, calculated in 96ms
2018-04-10 23:13:22,694 [d thread 2] INFO - .diagnostic.PerformanceWatcher - Reindexing refreshed files took 89ms; general responsiveness: ok; EDT responsiveness: 1/1 sluggish
2018-04-10 23:13:25,617 [d thread 2] INFO - g.FileBasedIndexProjectHandler - Reindexing refreshed files: 0 to update, calculated in 12ms
2018-04-10 23:13:29,982 [d thread 2] INFO - CompilerWorkspaceConfiguration - Available processors: 4
2018-04-10 23:13:30,753 [d thread 2] INFO - g.FileBasedIndexProjectHandler - Reindexing refreshed files: 212 to update, calculated in 321ms
2018-04-10 23:13:39,277 [d thread 2] INFO - .diagnostic.PerformanceWatcher - Reindexing refreshed files took 8524ms; general responsiveness: 2/8 sluggish, 1/8 very slow; EDT responsiveness: 0/8 sluggish, 3/8 very slow
2018-04-10 23:24:17,350 [e-1024-b02] INFO - ide.actions.ShowFilePathAction -
Exit code 1
I've figured out the cause of the problem.
It was COMODO Firewall blocking some of the files involved.
I marked them as trusted and the problem was solved.
If anyone faces this problem, consider your antivirus or firewall software as a possible cause; it may be blocking files that Gradle needs.
Thanks to everyone who tried to help!
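If you hit something similar, the Gradle daemon can also be checked from a plain terminal to take the IDE out of the picture. A minimal sketch, assuming the project contains the standard Gradle wrapper scripts (the path is the one from the log above):

:: run from the project directory in a regular command prompt
cd "F:\Programming\Mobile\Opensource Android Apps\LeafPic-dev"
:: show the state of Gradle daemons for this wrapper version
gradlew.bat --status
:: stop any stuck daemons so the next sync starts fresh
gradlew.bat --stop

If daemons die immediately here as well, matching the "process died immediately after connection" error above, the cause is outside Android Studio, for example a firewall.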
Related
I'm trying to use PySpark on JupyterHub (which runs on Kubernetes) for interactive programming against a remote Spark cluster, also on Kubernetes. For that I use sparkmagic and Livy (which runs on Kubernetes too).
When I try to get the SparkContext and SparkSession in a notebook, the session stays stuck in 'starting' status until the Livy session dies.
My spark-driver-pod is running, and I can see this log:
53469 [pool-8-thread-1] INFO org.apache.livy.rsc.driver.SparkEntries - Spark context finished initialization in 34532ms
53625 [pool-8-thread-1] INFO org.apache.livy.rsc.driver.SparkEntries - Created Spark session.
128775 [dispatcher-CoarseGrainedScheduler] INFO org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint - Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.83.128.194:35040) with ID 1, ResourceProfileId 0
128927 [dispatcher-BlockManagerMaster] INFO org.apache.spark.storage.BlockManagerMasterEndpoint - Registering block manager 10.83.128.194:42385 with 4.6 GiB RAM, BlockManagerId(1, 10.83.128.194, 42385, None)
131902 [dispatcher-CoarseGrainedScheduler] INFO org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint - Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.83.128.130:58232) with ID 2, ResourceProfileId 0
132041 [dispatcher-BlockManagerMaster] INFO org.apache.spark.storage.BlockManagerMasterEndpoint - Registering block manager 10.83.128.130:37991 with 4.6 GiB RAM, BlockManagerId(2, 10.83.128.130, 37991, None)
My spark-executor-pod is also running.
This is my livy-server's log:
2022-05-19 08:36:54,959 DEBUG LivySession Session 0 in state starting. Sleeping 2 seconds.
2022-05-19 08:36:56,969 DEBUG LivySession Session 0 in state starting. Sleeping 2 seconds.
2022-05-19 08:36:58,979 DEBUG LivySession Session 0 in state starting. Sleeping 2 seconds.
2022-05-19 08:37:01,002 DEBUG LivySession Session 0 in state starting. Sleeping 2 seconds.
2022-05-19 08:37:03,015 ERROR LivySession Session 0 did not reach idle status in time. Current status is starting.
2022-05-19 08:37:03,016 INFO EventsHandler InstanceId: 0139a7a9-a0b5-439e-84f5-a9ca6c896360,EventName: notebookSessionCreationEnd,Timestamp: 2022-05-19 08:37:03.016038,SessionGuid: 14da96d9-8b24-4beb-a5ad-a32009c9f772,LivyKind: pyspark,SessionId: 0,Status: starting,Success: False,ExceptionType: LivyClientTimeoutException,ExceptionMessage: Session 0 did not start up in 600 seconds.
2022-05-19 08:37:03,016 INFO EventsHandler InstanceId: 0139a7a9-a0b5-439e-84f5-a9ca6c896360,EventName: notebookSessionDeletionStart,Timestamp: 2022-05-19 08:37:03.016288,SessionGuid: 14da96d9-8b24-4beb-a5ad-a32009c9f772,LivyKind: pyspark,SessionId: 0,Status: starting
2022-05-19 08:37:03,016 DEBUG LivySession Deleting session '0'
2022-05-19 08:37:03,037 INFO EventsHandler InstanceId: 0139a7a9-a0b5-439e-84f5-a9ca6c896360,EventName: notebookSessionDeletionEnd,Timestamp: 2022-05-19 08:37:03.036919,SessionGuid: 14da96d9-8b24-4beb-a5ad-a32009c9f772,LivyKind: pyspark,SessionId: 0,Status: dead,Success: True,ExceptionType: ,ExceptionMessage:
2022-05-19 08:37:03,037 ERROR SparkMagics Error creating session: Session 0 did not start up in 600 seconds.
Please tell me how I can solve this problem. Thanks!
My Spark version: 3.2.1
Livy version: 0.8.0
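Two things may be worth trying; this is a hedged sketch, not a confirmed fix. First, the 600-second limit in the error is sparkmagic's own setting, livy_session_startup_timeout_seconds in ~/.sparkmagic/config.json, so raising it gives slow-starting sessions more time. Second, you can take sparkmagic out of the loop and watch the session through Livy's REST API directly (the host below is a placeholder for your Livy service; 8998 is Livy's default port):

# create a pyspark session directly against Livy
curl -s -X POST -H "Content-Type: application/json" -d '{"kind": "pyspark"}' http://<livy-host>:8998/sessions
# poll the state of session 0 (the id from your logs) until it leaves "starting"
curl -s http://<livy-host>:8998/sessions/0/state

If the session reaches idle here but not in the notebook, the problem is on the sparkmagic side; if it never leaves starting, the problem is between Livy and the Spark driver pod.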
When I use Spark's standalone mode to process a large number of datasets, the log says:
ERROR TaskSchedulerImpl:70 - Lost executor 1 on : Executor heartbeat timed out after 381181 ms
I searched the internet, and people say I should set parameters with spark-submit:
[hadoop#Master spark2.4.0]$ bin/spark-submit --master spark://master:7077 --conf spark.worker.timeout 10000000 --py-files id.py id.py --name id
Error message in log:
Error: Invalid argument to --conf: spark.worker.timeout
Questions:
How do I set the timeout parameter?
Thanks to meniluca's answer: I had left the '=' symbol out of the instructions.
After adjusting the timeout, the log displays:
2019-12-05 19:42:27 WARN Utils:87 - Suppressing exception in finally: broken pipe (Write failed)
java.net.SocketException: broken pipe (Write failed)
2019-12-05 21:13:09 INFO SparkContext:54 - Invoking stop() from shutdown hook
Exception in thread "serve-DataFrame" java.net.SocketException: Connection reset
Suppressed: java.net.SocketException: broken pipe (Write failed)
Then I changed the SSH settings, adding ServerAliveInterval 60 to ~/.ssh/config:
ServerAliveInterval 60
The error still existed. I then tried increasing the driver memory; the error persisted, and the log shows that the connection is dropped:
[hadoop#Master spark2.4.0]$ bin/spark-submit --master spark://master:7077 --conf spark.worker.timeout=10000000 --driver-memory 1g --py-files id.py id.py --name id
2019-12-06 10:38:49 INFO ContextCleaner:54 - Cleaned accumulator 374
Exception in thread "serve-DataFrame" java.net.SocketException: broken pipe (Write failed)
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$write$1(PythonRDD.scala:212)
at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:224)
at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:224)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:148)
at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:224)
at org.apache.spark.api.python.PythonRDD$$anonfun$serveIterator$1.apply(PythonRDD.scala:413)
at org.apache.spark.api.python.PythonRDD$$anonfun$serveIterator$1.apply(PythonRDD.scala:412)
at org.apache.spark.api.python.PythonRDD$$anonfun$6$$anonfun$apply$1.apply$mcV$sp(PythonRDD.scala:435)
at org.apache.spark.api.python.PythonRDD$$anonfun$6$$anonfun$apply$1.apply(PythonRDD.scala:435)
at org.apache.spark.api.python.PythonRDD$$anonfun$6$$anonfun$apply$1.apply(PythonRDD.scala:435)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.api.python.PythonRDD$$anonfun$6.apply(PythonRDD.scala:436)
at org.apache.spark.api.python.PythonRDD$$anonfun$6.apply(PythonRDD.scala:432)
at org.apache.spark.api.python.PythonServer$$anon$1.run(PythonRDD.scala:862)
2019-12-06 11:06:12 WARN HeartbeatReceiver:66 - Removing executor 1 with no recent heartbeats: 149103 ms exceeds timeout 120000 ms
2019-12-06 11:06:12 ERROR TaskSchedulerImpl:70 - Lost executor 1 on 219.226.109.129: Executor heartbeat timed out after 149103 ms
2019-12-06 11:06:13 INFO SparkContext:54 - Invoking stop() from shutdown hook
2019-12-06 11:06:13 INFO DAGScheduler:54 - Executor lost: 1 (epoch 6)
2019-12-06 11:06:13 WARN HeartbeatReceiver:66 - Removing executor 0 with no recent heartbeats: 155761 ms exceeds timeout 120000 ms
2019-12-06 11:06:13 ERROR TaskSchedulerImpl:70 - Lost executor 0 on 219.226.109.131: Executor heartbeat timed out after 155761 ms
2019-12-06 11:06:13 INFO StandaloneSchedulerBackend:54 - Requesting to kill executor(s) 1
2019-12-06 11:06:13 INFO BlockManagerMasterEndpoint:54 - Trying to remove executor 1 from BlockManagerMaster.
2019-12-06 11:06:13 INFO BlockManagerMasterEndpoint:54 - Removing block manager BlockManagerId(1, 219.226.109.129, 42501, None)
2019-12-06 11:06:13 INFO BlockManagerMaster:54 - Removed 1 successfully in removeExecutor
2019-12-06 11:06:13 INFO DAGScheduler:54 - Shuffle files lost for executor: 1 (epoch 6)
2019-12-06 11:06:13 INFO StandaloneSchedulerBackend:54 - Actual list of executor(s) to be killed is 1
2019-12-06 11:06:13 INFO DAGScheduler:54 - Host added was in lost list earlier: 219.226.109.129
2019-12-06 11:06:13 INFO DAGScheduler:54 - Executor lost: 0 (epoch 7)
2019-12-06 11:06:13 INFO AbstractConnector:318 - Stopped Spark#490228e{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2019-12-06 11:06:13 INFO BlockManagerMasterEndpoint:54 - Trying to remove executor 0 from BlockManagerMaster.
2019-12-06 11:06:13 INFO BlockManagerMasterEndpoint:54 - Removing block manager BlockManagerId(0, 219.226.109.131, 42164, None)
2019-12-06 11:06:13 INFO BlockManagerMaster:54 - Removed 0 successfully in removeExecutor
2019-12-06 11:06:13 INFO DAGScheduler:54 - Shuffle files lost for executor: 0 (epoch 7)
2019-12-06 11:06:13 INFO DAGScheduler:54 - Host added was in lost list earlier: 219.226.109.131
2019-12-06 11:06:13 INFO SparkUI:54 - Stopped Spark web UI at http://Master:4040
2019-12-06 11:06:13 INFO BlockManagerMasterEndpoint:54 - Registering block manager 219.226.109.129:42501 with 413.9 MB RAM, BlockManagerId(1, 219.226.109.129, 42501, None)
2019-12-06 11:06:13 INFO BlockManagerMasterEndpoint:54 - Registering block manager 219.226.109.131:42164 with 413.9 MB RAM, BlockManagerId(0, 219.226.109.131, 42164, None)
2019-12-06 11:06:14 INFO StandaloneSchedulerBackend:54 - Shutting down all executors
2019-12-06 11:06:14 INFO CoarseGrainedSchedulerBackend$DriverEndpoint:54 - Asking each executor to shut down
2019-12-06 11:06:14 INFO BlockManagerInfo:54 - Added broadcast_15_piece0 in memory on 219.226.109.129:42501 (size: 21.1 KB, free: 413.9 MB)
2019-12-06 11:06:15 INFO MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped!
2019-12-06 11:06:15 INFO BlockManagerInfo:54 - Added broadcast_15_piece0 in memory on 219.226.109.131:42164 (size: 21.1 KB, free: 413.9 MB)
2019-12-06 11:06:16 INFO MemoryStore:54 - MemoryStore cleared
2019-12-06 11:06:16 INFO BlockManager:54 - BlockManager stopped
2019-12-06 11:06:16 INFO BlockManagerMaster:54 - BlockManagerMaster stopped
2019-12-06 11:06:17 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped!
2019-12-06 11:06:17 ERROR TransportResponseHandler:144 - Still have 1 requests outstanding when connection from Master/219.226.109.130:7077 is closed
2019-12-06 11:06:17 INFO SparkContext:54 - Successfully stopped SparkContext
2019-12-06 11:06:17 INFO ShutdownHookManager:54 - Shutdown hook called
2019-12-06 11:06:17 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-e2a29bac-7277-4476-ad23-315a27e9ccf0
2019-12-06 11:06:17 INFO ShutdownHookManager:54 - Deleting directory /tmp/localPyFiles-dd95954c-2e77-41ca-969d-a201269f5b5b
2019-12-06 11:06:17 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-bcd56b4a-fb32-4b58-a1d5-71abc5218d32
2019-12-06 11:06:17 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-e2a29bac-7277-4476-ad23-315a27e9ccf0/pyspark-d04b799f-a116-44d5-b6a5-811cc8c03743
Questions:
Is SSH related to the broken pipe?
Is increasing the driver memory helpful for this problem?
The configuration examples I see on the Internet are for highly provisioned machines. Since I built the cluster in virtual machines on my own computer, where the master has two cores and the slave has one core, how should I adjust the configuration?
Please try with
--conf spark.worker.timeout=10000000
You are missing the equals character between the configuration name and the value.
java.net.SocketException: broken pipe (Write failed) occurs when something is wrong with the port being accessed.
I suggest you change the master's port, which is at 8080. The port can be changed either in the configuration file or via command-line options when starting the master:
sbin/start-master.sh
The same can be tried with the worker node as well if the above does not fix the issue.
To see which ports are being used, you can run:
sudo netstat -ltup
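Separately, a hedged sketch rather than a definitive fix: the "Executor heartbeat timed out" messages in the log are governed by spark.executor.heartbeatInterval and the generic spark.network.timeout, and the network timeout must stay well above the heartbeat interval. Those two settings may matter more here than spark.worker.timeout:

# raise the heartbeat-related timeouts; network timeout must exceed the heartbeat interval
bin/spark-submit --master spark://master:7077 --conf spark.network.timeout=600s --conf spark.executor.heartbeatInterval=60s --driver-memory 1g --py-files id.py id.py --name id

On a small virtual-machine cluster, long garbage-collection pauses can also make executors miss heartbeats, which is one reason these timeouts tend to appear with large datasets.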
I have a PySpark job which works successfully with a small cluster, but starts to get a lot of the following errors in the first few minutes after it starts up. Any idea how I can solve this? This is with PySpark 2.2.0 and Mesos.
17/09/29 18:54:26 INFO Executor: Running task 5717.0 in stage 0.0 (TID 5717)
17/09/29 18:54:26 INFO CoarseGrainedExecutorBackend: Got assigned task 5813
17/09/29 18:54:26 INFO Executor: Running task 5813.0 in stage 0.0 (TID 5813)
17/09/29 18:54:26 INFO CoarseGrainedExecutorBackend: Got assigned task 5909
17/09/29 18:54:26 INFO Executor: Running task 5909.0 in stage 0.0 (TID 5909)
17/09/29 18:54:56 ERROR TransportClientFactory: Exception while bootstrapping client after 30001 ms
java.lang.RuntimeException: java.util.concurrent.TimeoutException: Timeout waiting for task.
at org.spark_project.guava.base.Throwables.propagate(Throwables.java:160)
at org.apache.spark.network.client.TransportClient.sendRpcSync(TransportClient.java:275)
at org.apache.spark.network.sasl.SaslClientBootstrap.doBootstrap(SaslClientBootstrap.java:70)
at org.apache.spark.network.crypto.AuthClientBootstrap.doSaslAuth(AuthClientBootstrap.java:117)
at org.apache.spark.network.crypto.AuthClientBootstrap.doBootstrap(AuthClientBootstrap.java:76)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:244)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:182)
at org.apache.spark.rpc.netty.NettyRpcEnv.downloadClient(NettyRpcEnv.scala:366)
at org.apache.spark.rpc.netty.NettyRpcEnv.openChannel(NettyRpcEnv.scala:332)
at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:654)
at org.apache.spark.util.Utils$.fetchFile(Utils.scala:467)
at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$3.apply(Executor.scala:684)
at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$3.apply(Executor.scala:681)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:681)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:308)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.TimeoutException: Timeout waiting for task.
at org.spark_project.guava.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
at org.spark_project.guava.util.concurrent.AbstractFuture.get(AbstractFuture.java:96)
at org.apache.spark.network.client.TransportClient.sendRpcSync(TransportClient.java:271)
... 23 more
17/09/29 18:54:56 ERROR TransportResponseHandler: Still have 1 requests outstanding when connection from /10.0.1.25:37314 is closed
17/09/29 18:54:56 INFO Executor: Fetching spark://10.0.1.25:37314/files/djinn.spark.zip with timestamp 1506711239350
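One hedged observation, an assumption rather than a confirmed diagnosis: the trace shows executors timing out in the SASL authentication bootstrap while trying to fetch job files from the driver at startup, which can happen when many executors contact the driver at once on a large cluster. Raising Spark's generic RPC timeouts is one thing to try; the master URL and script name below are placeholders:

# a sketch, not a targeted fix; adjust master URL and job script to your setup
spark-submit --master mesos://<master>:5050 --conf spark.network.timeout=300s --conf spark.rpc.askTimeout=300s your_job.py

I am not certain which key controls the 30-second SASL bootstrap timeout itself, so treat these settings as a starting point rather than a definitive fix.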
We are experiencing an issue whereby, upon a restart of IIS, our Umbraco site reverts to old content/branding. The log reads that umbraco.content is loading XML from a file.
Only 1 of the 7 sites in IIS is affected by this behaviour. We are running the full version of MS SQL Server (not SQL CE).
Where is Umbraco getting this content from, and how can we prevent it happening when IIS restarts (a nightly task)? Is there an Umbraco setting that is only present on this specific database/config?
The logs read:
2016-05-11 12:50:37,715 [35] INFO Umbraco.Core.CoreBootManager - [T29/D5] Umbraco application starting
2016-05-11 12:50:37,762 [35] INFO Umbraco.Core.PluginManager - [T29/D5] Determining hash of code files on disk
2016-05-11 12:50:37,793 [35] INFO Umbraco.Core.PluginManager - [T29/D5] Hash determined (took 18ms)
2016-05-11 12:50:37,793 [35] INFO Umbraco.Core.PluginManager - [T29/D5] Starting resolution types of umbraco.interfaces.IApplicationStartupHandler
2016-05-11 12:50:37,809 [35] INFO Umbraco.Core.PluginManager - [T29/D5] Completed resolution of types of umbraco.interfaces.IApplicationStartupHandler, found 38 (took 10ms)
2016-05-11 12:50:37,949 [35] INFO Umbraco.Core.PluginManager - [T29/D5] Starting resolution types of Umbraco.Core.PropertyEditors.IPropertyEditorValueConverter
2016-05-11 12:50:37,949 [35] INFO Umbraco.Core.PluginManager - [T29/D5] Completed resolution of types of Umbraco.Core.PropertyEditors.IPropertyEditorValueConverter, found 0 (took 0ms)
2016-05-11 12:50:37,949 [35] INFO Umbraco.Core.PluginManager - [T29/D5] Starting resolution types of Umbraco.Core.PropertyEditors.IPropertyValueConverter
2016-05-11 12:50:37,949 [35] INFO Umbraco.Core.PluginManager - [T29/D5] Completed resolution of types of Umbraco.Core.PropertyEditors.IPropertyValueConverter, found 16 (took 1ms)
2016-05-11 12:50:37,965 [35] INFO Umbraco.Core.PluginManager - [T29/D5] Starting resolution types of Umbraco.Web.Mvc.SurfaceController
2016-05-11 12:50:37,965 [35] INFO Umbraco.Core.PluginManager - [T29/D5] Completed resolution of types of Umbraco.Web.Mvc.SurfaceController, found 14 (took 1ms)
2016-05-11 12:50:37,965 [35] INFO Umbraco.Core.PluginManager - [T29/D5] Starting resolution types of Umbraco.Web.WebApi.UmbracoApiController
2016-05-11 12:50:37,980 [35] INFO Umbraco.Core.PluginManager - [T29/D5] Completed resolution of types of Umbraco.Web.WebApi.UmbracoApiController, found 61 (took 10ms)
2016-05-11 12:50:38,043 [35] INFO Umbraco.Core.PluginManager - [T29/D5] Starting resolution types of Umbraco.Core.Media.IThumbnailProvider
2016-05-11 12:50:38,043 [35] INFO Umbraco.Core.PluginManager - [T29/D5] Completed resolution of types of Umbraco.Core.Media.IThumbnailProvider, found 3 (took 0ms)
2016-05-11 12:50:38,043 [35] INFO Umbraco.Core.PluginManager - [T29/D5] Starting resolution types of Umbraco.Core.Media.IImageUrlProvider
2016-05-11 12:50:38,059 [35] INFO Umbraco.Core.PluginManager - [T29/D5] Completed resolution of types of Umbraco.Core.Media.IImageUrlProvider, found 1 (took 6ms)
2016-05-11 12:50:39,809 [35] INFO Umbraco.Web.Search.ExamineEvents - [T29/D5] Initializing Examine and binding to business logic events
2016-05-11 12:50:39,996 [35] INFO Umbraco.Web.Search.ExamineEvents - [T29/D5] Adding examine event handlers for index providers: 3
2016-05-11 12:50:39,996 [35] INFO Umbraco.Core.CoreBootManager - [T29/D5] Umbraco application startup complete (took 2282ms)
2016-05-11 12:50:41,293 [35] INFO Umbraco.Web.UmbracoModule - [T34/D5] Setting OriginalRequestUrl: xxx.xxx.com/umbraco
2016-05-11 12:50:41,465 [35] INFO umbraco.content - [T34/D5] Load Xml from file...
2016-05-11 12:50:41,465 [35] INFO umbraco.content - [T34/D5] Loaded Xml from file.
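For context, a hedged pointer rather than a confirmed answer: the "Load Xml from file" lines refer to Umbraco's on-disk content cache, App_Data\umbraco.config, which Umbraco loads at startup before consulting the database. If that file has gone stale on the affected site, deleting it should force a rebuild from the database on the next start. The site path below is a placeholder; back the file up first:

:: stop the site in IIS, then remove the stale XML cache
del "C:\inetpub\wwwroot\yoursite\App_Data\umbraco.config"

Republishing the entire site from the Umbraco back office (right-click the Content root) regenerates the same cache, and umbracoSettings.config has an XmlCacheEnabled flag under <content> if you need to rule the file cache out entirely.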
Here are my versions:
Hive: 1.2
Hadoop: CDH5.3
Spark: 1.4.1
I succeeded with Hive on Spark using the Hive client, but after I started HiveServer2 and tried a SQL query using Beeline, it failed.
The error is:
2015-11-29 21:49:42,786 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:42 INFO spark.SparkContext: Added JAR file:/root/cdh/apache-hive-1.2.1-bin/lib/hive-exec-1.2.1.jar at http://10.96.30.51:10318/jars/hive-exec-1.2.1.jar with timestamp 1448804982784
2015-11-29 21:49:43,336 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:43 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm297
2015-11-29 21:49:43,356 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:43 INFO retry.RetryInvocationHandler: Exception while invoking getClusterMetrics of class ApplicationClientProtocolPBClientImpl over rm297 after 1 fail over attempts. Trying to fail over immediately.
2015-11-29 21:49:43,357 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:43 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm280
2015-11-29 21:49:43,359 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:43 INFO retry.RetryInvocationHandler: Exception while invoking getClusterMetrics of class ApplicationClientProtocolPBClientImpl over rm280 after 2 fail over attempts. Trying to fail over after sleeping for 477ms.
2015-11-29 21:49:43,359 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - java.net.ConnectException: Call From hd-master-001/10.96.30.51 to hd-master-001:8032 failed on connection exception: java.net.ConnectException: 拒绝连接 (Connection refused); For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2015-11-29 21:49:43,359 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
My YARN status is that hd-master-002 is the active ResourceManager and hd-master-001 is the standby. Port 8032 on hd-master-001 is not open, so of course a connection error occurs when trying to connect to hd-master-001's port 8032.
But why does it try to connect to the standby ResourceManager at all?
If I use the Hive client command shell with Spark on YARN, everything is OK.
PS: I didn't rebuild the Spark assembly jar without Hive; I only removed 'org.apache.hive' and 'org.apache.hadoop.hive' from the built assembly jar. But I don't think that is the problem, because I succeeded with the Hive client on Spark on YARN.
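One hedged note on the failover lines: with ResourceManager HA, ConfiguredRMFailoverProxyProvider walks the configured RM list in order and fails over until it reaches the active one, so an INFO-level refused connection to the standby is expected and usually harmless on its own. You can confirm which RM id is currently active (the ids below are taken from the log):

yarn rmadmin -getServiceState rm297
yarn rmadmin -getServiceState rm280

If the client also fails against the active id, check that the yarn.resourcemanager.address.<rm-id> entries in yarn-site.xml point at the right hosts and ports.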