I followed the steps mentioned here https://gist.github.com/codspire/7b0955b9e67fe73f6118dad9539cbaa2
When I enter "localhost:8080" in a browser, nothing happens.
Hadoop version -- 3.1.3
Spark version -- 3.0.0-preview pre-built for hadoop2.7
Zeppelin version -- 0.9.0-preview1
C:\Zeppelin>bin\zeppelin.cmd
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
WARN [2020-04-07 16:29:59,113] ({main} ZeppelinConfiguration.java[create]:159) - Failed to load configuration, proceeding with a default
INFO [2020-04-07 16:29:59,177] ({main} ZeppelinConfiguration.java[create]:171) - Server Host: 127.0.0.1
INFO [2020-04-07 16:29:59,177] ({main} ZeppelinConfiguration.java[create]:173) - Server Port: 8080
INFO [2020-04-07 16:29:59,178] ({main} ZeppelinConfiguration.java[create]:177) - Context Path: /
INFO [2020-04-07 16:29:59,178] ({main} ZeppelinConfiguration.java[create]:178) - Zeppelin Version: 0.9.0-preview1
INFO [2020-04-07 16:29:59,205] ({main} Log.java[initialized]:193) - Logging initialized @810ms to org.eclipse.jetty.util.log.Slf4jLog
WARN [2020-04-07 16:29:59,516] ({main} ZeppelinConfiguration.java[getConfigFSDir]:631) - zeppelin.config.fs.dir is not specified, fall back to local conf directory zeppelin.conf.dir
INFO [2020-04-07 16:29:59,554] ({main} Credentials.java[loadFromFile]:121) - C:\Zeppelin\conf\credentials.json
INFO [2020-04-07 16:29:59,594] ({ImmediateThread-1586257199511} PluginManager.java[loadNotebookRepo]:60) - Loading NotebookRepo Plugin: org.apache.zeppelin.notebook.repo.GitNotebookRepo
INFO [2020-04-07 16:29:59,658] ({main} ZeppelinServer.java[setupWebAppContext]:488) - warPath is: C:\Zeppelin\zeppelin-web-angular-0.9.0-preview1.war
INFO [2020-04-07 16:29:59,659] ({main} ZeppelinServer.java[setupWebAppContext]:501) - ZeppelinServer Webapp path: C:\Zeppelin\webapps
INFO [2020-04-07 16:29:59,729] ({ImmediateThread-1586257199511} VFSNotebookRepo.java[setNotebookDirectory]:70) - Using notebookDir: C:\Zeppelin\notebook
INFO [2020-04-07 16:29:59,746] ({main} ZeppelinServer.java[setupWebAppContext]:488) - warPath is: zeppelin-web-angular/dist
INFO [2020-04-07 16:29:59,747] ({main} ZeppelinServer.java[setupWebAppContext]:501) - ZeppelinServer Webapp path: C:\Zeppelin\webapps\next
INFO [2020-04-07 16:29:59,811] ({main} NotebookServer.java[<init>]:153) - NotebookServer instantiated: org.apache.zeppelin.socket.NotebookServer@683dbc2c
INFO [2020-04-07 16:29:59,812] ({main} NotebookServer.java[setServiceLocator]:158) - Injected ServiceLocator: ServiceLocatorImpl(shared-locator,0,246550802)
INFO [2020-04-07 16:29:59,814] ({main} NotebookServer.java[setAuthorizationServiceProvider]:178) - Injected NotebookAuthorizationServiceProvider
INFO [2020-04-07 16:29:59,814] ({main} NotebookServer.java[setNotebookService]:171) - Injected NotebookServiceProvider
INFO [2020-04-07 16:29:59,814] ({main} NotebookServer.java[setConnectionManagerProvider]:184) - Injected ConnectionManagerProvider
INFO [2020-04-07 16:29:59,815] ({main} NotebookServer.java[setNotebook]:164) - Injected NotebookProvider
INFO [2020-04-07 16:29:59,816] ({main} ZeppelinServer.java[setupClusterManagerServer]:439) - Cluster mode is disabled
INFO [2020-04-07 16:29:59,827] ({main} ZeppelinServer.java[main]:251) - Starting zeppelin server
INFO [2020-04-07 16:29:59,829] ({main} Server.java[doStart]:370) - jetty-9.4.18.v20190429; built: 2019-04-29T20:42:08.989Z; git: e1bc35120a6617ee3df052294e433f3a25ce7097; jvm 1.8.0_241-b07
INFO [2020-04-07 16:29:59,858] ({ImmediateThread-1586257199511} GitNotebookRepo.java[init]:77) - Opening a git repo at '/C:/Zeppelin/notebook'
INFO [2020-04-07 16:29:59,967] ({main} StandardDescriptorProcessor.java[visitServlet]:283) - NO JSP Support for /, did not find org.eclipse.jetty.jsp.JettyJspServlet
INFO [2020-04-07 16:29:59,982] ({main} DefaultSessionIdManager.java[doStart]:365) - DefaultSessionIdManager workerName=node0
INFO [2020-04-07 16:29:59,982] ({main} DefaultSessionIdManager.java[doStart]:370) - No SessionScavenger set, using defaults
INFO [2020-04-07 16:29:59,985] ({main} HouseKeeper.java[startScavenging]:149) - node0 Scavenging every 600000ms
INFO [2020-04-07 16:30:02,046] ({main} ContextHandler.java[doStart]:855) - Started o.e.j.w.WebAppContext@6150c3ec{zeppelin-web-angular,/,jar:file:///C:/Zeppelin/zeppelin-web-angular-0.9.0-preview1.war!/,AVAILABLE}{C:\Zeppelin\zeppelin-web-angular-0.9.0-preview1.war}
WARN [2020-04-07 16:30:02,051] ({main} WebInfConfiguration.java[unpack]:675) - Web application not found C:\Zeppelin\zeppelin-web-angular\dist
WARN [2020-04-07 16:30:02,052] ({main} WebAppContext.java[doStart]:554) - Failed startup of context o.e.j.w.WebAppContext@229f66ed{/next,null,UNAVAILABLE}{C:\Zeppelin\zeppelin-web-angular\dist}
java.io.FileNotFoundException: C:\Zeppelin\zeppelin-web-angular\dist
at org.eclipse.jetty.webapp.WebInfConfiguration.unpack(WebInfConfiguration.java:676)
at org.eclipse.jetty.webapp.WebInfConfiguration.preConfigure(WebInfConfiguration.java:152)
at org.eclipse.jetty.webapp.WebAppContext.preConfigure(WebAppContext.java:506)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:544)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:167)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:119)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:167)
at org.eclipse.jetty.server.Server.start(Server.java:418)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.server.Server.doStart(Server.java:382)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.zeppelin.server.ZeppelinServer.main(ZeppelinServer.java:253)
INFO [2020-04-07 16:30:02,906] ({main} AbstractConnector.java[doStart]:292) - Started ServerConnector@4493d195{HTTP/1.1,[http/1.1]}{127.0.0.1:8080}
INFO [2020-04-07 16:30:02,906] ({main} Server.java[doStart]:410) - Started @4519ms
INFO [2020-04-07 16:30:07,924] ({main} ZeppelinServer.java[main]:265) - Done, zeppelin server started
To install Zeppelin 0.9.0 on Windows 10:
Unzip the war files (with WinRAR) into bin/zeppelin-web-angular/dist and bin/zeppelin-web/dist.
Then modify the config file shiro.ini to change the user and password (the defaults are admin and password1).
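For reference, the credentials live in the [users] section of conf/shiro.ini; a minimal sketch of the change (the new password is a placeholder):
[users]
# replace the default "admin = password1, admin" entry with your own credentials
admin = yourNewPassword, admin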
This workaround might help: "zeppelin-web-angular-0.9.0-preview1.war" comes along with the package; create the required folders, then unzip the war's contents into them. After restarting the server, the web pages should be available.
The following links will also be useful for setting up Zeppelin on Windows 10:
https://gist.github.com/codspire/7b0955b9e67fe73f6118dad9539cbaa2
https://hernandezpaul.wordpress.com/2016/11/14/apache-zeppelin-installation-on-windows-10/
I also tried Zeppelin 0.9.0 on Windows 10; it works fine.
Go to bin -> mkdir -p zeppelin-web-angular/dist and mkdir -p zeppelin-web/dist
Copy zeppelin-web-angular-0.9.0-preview2.war and zeppelin-web-0.9.0-preview2.war into the respective dist folders
Unzip them
Remove the war files from the dist folders
Start zeppelin.cmd
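A minimal Windows cmd sketch of those steps (assuming the distribution is unpacked at C:\Zeppelin, you work from its bin directory, and the built-in tar can unpack the war, which is just a zip archive; adjust file names for your version):
cd C:\Zeppelin\bin
mkdir zeppelin-web-angular\dist zeppelin-web\dist
copy ..\zeppelin-web-angular-0.9.0-preview2.war zeppelin-web-angular\dist\
copy ..\zeppelin-web-0.9.0-preview2.war zeppelin-web\dist\
tar -xf zeppelin-web-angular\dist\zeppelin-web-angular-0.9.0-preview2.war -C zeppelin-web-angular\dist
tar -xf zeppelin-web\dist\zeppelin-web-0.9.0-preview2.war -C zeppelin-web\dist
del zeppelin-web-angular\dist\*.war zeppelin-web\dist\*.war
zeppelin.cmd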
I got the same issue. I posted the same query on the official Zeppelin website, and someone replied that Zeppelin is not supported on Windows 10.
https://issues.apache.org/jira/browse/ZEPPELIN-4749?filter=-2
I tried the 0.9, 0.8, and 0.7 versions of Zeppelin; none of them works on Windows 10, and all show the same blank page at localhost:8080. But I recently downloaded version 0.6.2 and found that it runs fine. So I think we have to work with a lower version of Zeppelin; if you are using Ubuntu, all the versions will run fine.
I tried this version http://www.apache.org/dyn/closer.cgi/zeppelin/zeppelin-0.8.1/zeppelin-0.8.1-bin-netinst.tgz and it was successful.
Observed the same issue with v0.10.0.
I followed the steps below and it worked (the cmd sketch shown earlier applies here too, with the 0.10.1 paths):
Create dist directories at the home path (D:\Zeppline\zeppelin-0.10.1-bin-all\zeppelin-0.10.1-bin-all) as below ->
mkdir -p zeppelin-web-angular/dist and mkdir -p zeppelin-web/dist
Copy zeppelin-web-angular-0.10.1.war and zeppelin-web-0.10.1.war into the respective dist folders
Unzip them
Remove the war files from the dist folders
Start zeppelin.cmd
Try setting the ZEPPELIN_ANGULAR_WAR environment variable to the location of the war file, e.g.
c:\zeppelin-home\zeppelin-web-angular-0.10.1.war
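On Windows cmd that would be, for example (a sketch; the path is the example above, adjust to your install):
set ZEPPELIN_ANGULAR_WAR=c:\zeppelin-home\zeppelin-web-angular-0.10.1.war
bin\zeppelin.cmd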
The current answer from the developers is that Zeppelin 0.9 will not work on Windows 10.
On Windows 10, you can run Zeppelin 0.9 in a Docker container as a workaround.
Refer to the instructions on this page:
http://zeppelin.apache.org/download.html
If you have Docker installed, use this command to launch Apache Zeppelin in a container.
docker run -p 8080:8080 --rm --name zeppelin apache/zeppelin:0.9.0
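If you want notebooks to survive container restarts, you can also mount a host directory and point Zeppelin at it via the documented ZEPPELIN_NOTEBOOK_DIR variable (a sketch for Windows cmd; the host path is an example):
docker run -p 8080:8080 --rm -v %cd%\notebook:/notebook -e ZEPPELIN_NOTEBOOK_DIR=/notebook --name zeppelin apache/zeppelin:0.9.0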
Related
Newbie here, please be gentle.
The computer in question uses Windows 10, and Apache Zeppelin (zeppelin-0.9.0-bin-all.tgz) refuses to start.
I've tried removing .template from the config files and still nothing.
It doesn't pass the line: INFO [2021-01-29 17:46:29,310] ({main} LuceneSearch.java[<init>]:93) - Use C:\Zeppelin\tmp\zeppelin-index for storing lucene search index.
Not entirely sure what to do at this point; I tried the solutions here: Apache Zeppelin not loading in a browser in windows 10 and here: https://gist.github.com/codspire/7b0955b9e67fe73f6118dad9539cbaa2, and I just cannot figure out what is wrong.
Of note, I have Spark (spark-3.0.1-bin-hadoop3.2.tgz) installed separately and it seems to work even though it throws this warning: 21/01/29 18:16:39 WARN ProcfsMetricsGetter: Exception when trying to compute pagesize, as a result reporting of ProcessTree metrics is stopped
ANY suggestions would be greatly appreciated because I'm not sure what else to try.
Microsoft Windows [Version 10.0.19042.746]
(c) 2020 Microsoft Corporation. All rights reserved.
C:\WINDOWS\system32>zeppelin.cmd
INFO [2021-01-29 17:46:18,140] ({main} ZeppelinConfiguration.java[create]:172) - Load configuration from file:/C:/Zeppelin/conf/zeppelin-site.xml
INFO [2021-01-29 17:46:18,229] ({main} ZeppelinConfiguration.java[create]:180) - Server Host: 127.0.0.1
INFO [2021-01-29 17:46:18,229] ({main} ZeppelinConfiguration.java[create]:182) - Server Port: 8080
INFO [2021-01-29 17:46:18,229] ({main} ZeppelinConfiguration.java[create]:186) - Context Path: /
INFO [2021-01-29 17:46:18,229] ({main} ZeppelinConfiguration.java[create]:187) - Zeppelin Version: 0.9.0
INFO [2021-01-29 17:46:18,258] ({main} Log.java[initialized]:169) - Logging initialized @811ms to org.eclipse.jetty.util.log.Slf4jLog
WARN [2021-01-29 17:46:18,682] ({main} ZeppelinConfiguration.java[getConfigFSDir]:694) - zeppelin.config.fs.dir is not specified, fall back to local conf directory zeppelin.conf.dir
WARN [2021-01-29 17:46:18,682] ({main} ZeppelinConfiguration.java[getConfigFSDir]:694) - zeppelin.config.fs.dir is not specified, fall back to local conf directory zeppelin.conf.dir
WARN [2021-01-29 17:46:18,683] ({main} ZeppelinConfiguration.java[getConfigFSDir]:694) - zeppelin.config.fs.dir is not specified, fall back to local conf directory zeppelin.conf.dir
WARN [2021-01-29 17:46:18,722] ({main} LocalConfigStorage.java[loadCredentials]:88) - Credential file C:\Zeppelin\conf\credentials.json is not existed
INFO [2021-01-29 17:46:18,782] ({ImmediateThread-1611960378676} PluginManager.java[loadNotebookRepo]:78) - Loading NotebookRepo Plugin: org.apache.zeppelin.notebook.repo.GitNotebookRepo
INFO [2021-01-29 17:46:18,948] ({ImmediateThread-1611960378676} VFSNotebookRepo.java[setNotebookDirectory]:69) - Using notebookDir: C:\Zeppelin\notebook
INFO [2021-01-29 17:46:18,957] ({main} ZeppelinServer.java[setupWebAppContext]:575) - warPath is: C:\Zeppelin\zeppelin-web-angular-0.9.0.war
INFO [2021-01-29 17:46:18,975] ({main} ZeppelinServer.java[setupWebAppContext]:588) - ZeppelinServer Webapp path: C:\Zeppelin\webapps
INFO [2021-01-29 17:46:19,016] ({main} ZeppelinServer.java[setupWebAppContext]:575) - warPath is: zeppelin-web-angular/dist
INFO [2021-01-29 17:46:19,016] ({main} ZeppelinServer.java[setupWebAppContext]:588) - ZeppelinServer Webapp path: C:\Zeppelin\webapps\next
INFO [2021-01-29 17:46:19,051] ({ImmediateThread-1611960378676} GitNotebookRepo.java[init]:77) - Opening a git repo at '/C:/Zeppelin/notebook'
INFO [2021-01-29 17:46:19,111] ({main} NotebookServer.java[<init>]:157) - NotebookServer instantiated: org.apache.zeppelin.socket.NotebookServer@1b2abca6
INFO [2021-01-29 17:46:19,112] ({main} NotebookServer.java[setNotebook]:168) - Injected NotebookProvider
INFO [2021-01-29 17:46:19,114] ({main} NotebookServer.java[setNotebookService]:175) - Injected NotebookServiceProvider
INFO [2021-01-29 17:46:19,115] ({main} NotebookServer.java[setAuthorizationServiceProvider]:182) - Injected NotebookAuthorizationServiceProvider
INFO [2021-01-29 17:46:19,115] ({main} NotebookServer.java[setConnectionManagerProvider]:188) - Injected ConnectionManagerProvider
INFO [2021-01-29 17:46:19,116] ({main} NotebookServer.java[setServiceLocator]:162) - Injected ServiceLocator: ServiceLocatorImpl(shared-locator,0,891095110)
INFO [2021-01-29 17:46:19,118] ({main} ZeppelinServer.java[setupClusterManagerServer]:465) - Cluster mode is disabled
INFO [2021-01-29 17:46:19,118] ({main} ZeppelinServer.java[main]:249) - Starting zeppelin server
INFO [2021-01-29 17:46:19,121] ({main} Server.java[doStart]:360) - jetty-9.4.31.v20200723; built: 2020-07-23T17:57:36.812Z; git: 450ba27947e13e66baa8cd1ce7e85a4461cacc1d; jvm 1.8.0_281-b09
INFO [2021-01-29 17:46:19,288] ({main} StandardDescriptorProcessor.java[visitServlet]:276) - NO JSP Support for /, did not find org.eclipse.jetty.jsp.JettyJspServlet
INFO [2021-01-29 17:46:19,309] ({main} DefaultSessionIdManager.java[doStart]:334) - DefaultSessionIdManager workerName=node0
INFO [2021-01-29 17:46:19,310] ({main} DefaultSessionIdManager.java[doStart]:339) - No SessionScavenger set, using defaults
INFO [2021-01-29 17:46:19,313] ({main} HouseKeeper.java[startScavenging]:140) - node0 Scavenging every 660000ms
INFO [2021-01-29 17:46:19,327] ({main} ContextHandler.java[log]:2303) - Initializing Shiro environment
INFO [2021-01-29 17:46:19,327] ({main} EnvironmentLoader.java[initEnvironment]:133) - Starting Shiro environment initialization.
INFO [2021-01-29 17:46:21,130] ({main} EnvironmentLoader.java[initEnvironment]:147) - Shiro environment initialized in 1802 ms.
INFO [2021-01-29 17:46:23,071] ({main} ContextHandler.java[doStart]:860) - Started o.e.j.w.WebAppContext@4a668b6e{zeppelin-web-angular,/,jar:file:///C:/Zeppelin/zeppelin-web-angular-0.9.0.war!/,AVAILABLE}{C:\Zeppelin\zeppelin-web-angular-0.9.0.war}
WARN [2021-01-29 17:46:23,083] ({main} WebInfConfiguration.java[unpack]:662) - Web application not found C:\WINDOWS\system32\zeppelin-web-angular\dist
WARN [2021-01-29 17:46:23,085] ({main} WebAppContext.java[doStart]:533) - Failed startup of context o.e.j.w.WebAppContext@4e268090{/next,null,UNAVAILABLE}{C:\WINDOWS\system32\zeppelin-web-angular\dist}
java.io.FileNotFoundException: C:\WINDOWS\system32\zeppelin-web-angular\dist
at org.eclipse.jetty.webapp.WebInfConfiguration.unpack(WebInfConfiguration.java:663)
at org.eclipse.jetty.webapp.WebInfConfiguration.preConfigure(WebInfConfiguration.java:141)
at org.eclipse.jetty.webapp.WebAppContext.preConfigure(WebAppContext.java:488)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:523)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
at io.micrometer.core.instrument.binder.jetty.TimedHandler.doStart(TimedHandler.java:162)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
at org.eclipse.jetty.server.Server.start(Server.java:408)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
at org.eclipse.jetty.server.Server.doStart(Server.java:372)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
at org.apache.zeppelin.server.ZeppelinServer.main(ZeppelinServer.java:251)
INFO [2021-01-29 17:46:23,119] ({main} AbstractConnector.java[doStart]:331) - Started ServerConnector@7ce6a65d{HTTP/1.1, (http/1.1)}{127.0.0.1:8080}
INFO [2021-01-29 17:46:23,120] ({main} Server.java[doStart]:400) - Started @5677ms
INFO [2021-01-29 17:46:28,129] ({main} ZeppelinServer.java[main]:263) - Done, zeppelin server started
WARN [2021-01-29 17:46:28,151] ({main} VFSNotebookRepo.java[listFolder]:107) - Skip hidden folder: /Zeppelin/notebook/.git
WARN [2021-01-29 17:46:28,154] ({main} LocalConfigStorage.java[loadNotebookAuthorization]:77) - NotebookAuthorization file C:\Zeppelin\conf\notebook-authorization.json is not existed
INFO [2021-01-29 17:46:28,369] ({Thread-12} RemoteInterpreterEventServer.java[run]:105) - InterpreterEventServer is starting at 192.168.0.3:55104
INFO [2021-01-29 17:46:28,870] ({main} RemoteInterpreterEventServer.java[start]:133) - RemoteInterpreterEventServer is started
INFO [2021-01-29 17:46:28,880] ({main} InterpreterSettingManager.java[<init>]:197) - Using RecoveryStorage: org.apache.zeppelin.interpreter.recovery.NullRecoveryStorage
INFO [2021-01-29 17:46:28,929] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: alluxio
INFO [2021-01-29 17:46:28,932] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: angular
INFO [2021-01-29 17:46:28,936] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: beam
INFO [2021-01-29 17:46:28,941] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: bigquery
INFO [2021-01-29 17:46:28,947] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: cassandra
INFO [2021-01-29 17:46:28,950] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: elasticsearch
INFO [2021-01-29 17:46:28,954] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: file
INFO [2021-01-29 17:46:28,959] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: flink
INFO [2021-01-29 17:46:28,962] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: geode
INFO [2021-01-29 17:46:28,964] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: groovy
INFO [2021-01-29 17:46:28,966] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: hazelcastjet
INFO [2021-01-29 17:46:28,969] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: hbase
INFO [2021-01-29 17:46:28,971] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: ignite
INFO [2021-01-29 17:46:28,975] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: influxdb
INFO [2021-01-29 17:46:28,977] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: java
INFO [2021-01-29 17:46:28,980] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: jdbc
INFO [2021-01-29 17:46:28,985] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: jupyter
INFO [2021-01-29 17:46:28,988] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: kotlin
INFO [2021-01-29 17:46:28,993] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: ksql
INFO [2021-01-29 17:46:28,996] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: kylin
INFO [2021-01-29 17:46:28,999] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: lens
INFO [2021-01-29 17:46:29,004] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: livy
INFO [2021-01-29 17:46:29,010] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: md
INFO [2021-01-29 17:46:29,015] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: mongodb
INFO [2021-01-29 17:46:29,019] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: neo4j
INFO [2021-01-29 17:46:29,020] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: pig
INFO [2021-01-29 17:46:29,024] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: python
INFO [2021-01-29 17:46:29,027] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: r
INFO [2021-01-29 17:46:29,030] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: sap
INFO [2021-01-29 17:46:29,033] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: scalding
WARN [2021-01-29 17:46:29,069] ({main} InterpreterSettingManager.java[loadInterpreterSettingFromDefaultDir]:437) - No interpreter-setting.json found in C:\Zeppelin\interpreter\scio
INFO [2021-01-29 17:46:29,071] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: sh
INFO [2021-01-29 17:46:29,078] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: spark
INFO [2021-01-29 17:46:29,081] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: sparql
INFO [2021-01-29 17:46:29,084] ({main} InterpreterSettingManager.java[registerInterpreterSetting]:540) - Register InterpreterSettingTemplate: submarine
INFO [2021-01-29 17:46:29,086] ({main} LocalConfigStorage.java[loadInterpreterSettings]:63) - Load Interpreter Setting from file: C:\Zeppelin\conf\interpreter.json
INFO [2021-01-29 17:46:29,175] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting beam from interpreter.json
INFO [2021-01-29 17:46:29,177] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting ignite from interpreter.json
INFO [2021-01-29 17:46:29,177] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting geode from interpreter.json
INFO [2021-01-29 17:46:29,178] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting jdbc from interpreter.json
INFO [2021-01-29 17:46:29,179] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting influxdb from interpreter.json
INFO [2021-01-29 17:46:29,180] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting lens from interpreter.json
INFO [2021-01-29 17:46:29,180] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting pig from interpreter.json
INFO [2021-01-29 17:46:29,181] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting file from interpreter.json
INFO [2021-01-29 17:46:29,182] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting java from interpreter.json
INFO [2021-01-29 17:46:29,183] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting jupyter from interpreter.json
INFO [2021-01-29 17:46:29,183] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting elasticsearch from interpreter.json
INFO [2021-01-29 17:46:29,184] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting submarine from interpreter.json
INFO [2021-01-29 17:46:29,185] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting sh from interpreter.json
INFO [2021-01-29 17:46:29,186] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting spark from interpreter.json
INFO [2021-01-29 17:46:29,187] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting md from interpreter.json
INFO [2021-01-29 17:46:29,187] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting hazelcastjet from interpreter.json
INFO [2021-01-29 17:46:29,188] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting alluxio from interpreter.json
INFO [2021-01-29 17:46:29,189] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting bigquery from interpreter.json
INFO [2021-01-29 17:46:29,190] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting mongodb from interpreter.json
INFO [2021-01-29 17:46:29,192] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting hbase from interpreter.json
INFO [2021-01-29 17:46:29,192] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting python from interpreter.json
INFO [2021-01-29 17:46:29,193] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting sap from interpreter.json
INFO [2021-01-29 17:46:29,194] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting kotlin from interpreter.json
INFO [2021-01-29 17:46:29,194] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting scalding from interpreter.json
INFO [2021-01-29 17:46:29,195] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting flink from interpreter.json
INFO [2021-01-29 17:46:29,196] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting angular from interpreter.json
INFO [2021-01-29 17:46:29,196] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting ksql from interpreter.json
INFO [2021-01-29 17:46:29,197] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting livy from interpreter.json
INFO [2021-01-29 17:46:29,197] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting r from interpreter.json
INFO [2021-01-29 17:46:29,198] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting groovy from interpreter.json
INFO [2021-01-29 17:46:29,199] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting neo4j from interpreter.json
INFO [2021-01-29 17:46:29,203] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting cassandra from interpreter.json
INFO [2021-01-29 17:46:29,204] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting sparql from interpreter.json
INFO [2021-01-29 17:46:29,205] ({main} InterpreterSettingManager.java[loadFromFile]:294) - Create interpreter setting kylin from interpreter.json
INFO [2021-01-29 17:46:29,211] ({main} LocalConfigStorage.java[save]:53) - Save Interpreter Setting to C:\Zeppelin\conf\interpreter.json
INFO [2021-01-29 17:46:29,310] ({main} LuceneSearch.java[<init>]:93) - Use C:\Zeppelin\tmp\zeppelin-index for storing lucene search index
I suggest you stop running Zeppelin under Windows. While old versions (like 0.8.0) mentioned Windows in the README, the new version of Zeppelin, 0.9.0, doesn't even mention Windows in the list of supported platforms.
As far as I understand, the Zeppelin team is not large enough to provide Windows support, so you will encounter a lot of bugs, and the only one who can fix them is you.
In ancient times, messengers who delivered bad news were killed. I hope something has changed since then!
You can try to run Zeppelin under Docker, under WSL2, or inside a full virtual machine with Linux.
The easiest way to run Zeppelin on Windows is to install Docker for Windows and then run something like docker run -p 8080:8080 --rm --name zeppelin apache/zeppelin:0.9.0
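Under WSL2 the regular Linux tarball runs as-is; a minimal sketch (the archive URL and version are assumptions, adjust to the release you want):
wget https://archive.apache.org/dist/zeppelin/zeppelin-0.9.0/zeppelin-0.9.0-bin-all.tgz
tar xzf zeppelin-0.9.0-bin-all.tgz
cd zeppelin-0.9.0-bin-all
bin/zeppelin-daemon.sh start
# then open http://localhost:8080 in your Windows browser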
I am trying to deploy a Flink job in a Kubernetes cluster (Azure AKS). The job cluster gets aborted just after starting, but the task manager is running fine.
The docker image is created successfully without any exception. I am able to run the docker image as well as able to SSH to docker image.
I have followed the steps mentioned in the link below:
https://github.com/apache/flink/tree/release-1.9/flink-container/kubernetes
While creating the image I provided the job JAR, and it was copied to "/opt/artifacts" inside the image. But I still don't understand why I get the exception below in the job cluster pod log.
Caused by: org.apache.flink.util.FlinkException: Failed to find job JAR on class path. Please provide the job class name explicitly.
I am new to Kubernetes; could you please give me some clue to debug this issue?
Please find the complete logs below:
A. flink-job-cluster Pod Log
develk@ACIDLAELKV01:~/cntx_eng$ kubectl logs flink-job-cluster-kszwf
Starting the job-cluster
Starting standalonejob as a console application on host flink-job-cluster-kszwf.
2019-12-12 10:37:17,170 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --------------------------------------------------------------------------------
2019-12-12 10:37:17,172 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Starting StandaloneJobClusterEntryPoint (Version: 1.8.0, Rev:4caec0d, Date:03.04.2019 @ 13:25:54 PDT)
2019-12-12 10:37:17,172 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - OS current user: flink
2019-12-12 10:37:17,173 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Current Hadoop/Kerberos user: <no hadoop dependency found>
2019-12-12 10:37:17,173 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JVM: OpenJDK 64-Bit Server VM - IcedTea - 1.8/25.212-b04
2019-12-12 10:37:17,173 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Maximum heap size: 989 MiBytes
2019-12-12 10:37:17,173 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JAVA_HOME: /usr/lib/jvm/java-1.8-openjdk/jre
2019-12-12 10:37:17,174 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - No Hadoop Dependency available
2019-12-12 10:37:17,174 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - JVM Options:
2019-12-12 10:37:17,174 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Xms1024m
2019-12-12 10:37:17,174 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Xmx1024m
2019-12-12 10:37:17,174 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Dlog4j.configuration=file:/opt/flink-1.8.0/conf/log4j-console.properties
2019-12-12 10:37:17,175 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Dlogback.configurationFile=file:/opt/flink-1.8.0/conf/logback-console.xml
2019-12-12 10:37:17,175 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Program Arguments:
2019-12-12 10:37:17,175 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --configDir
2019-12-12 10:37:17,175 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - /opt/flink-1.8.0/conf
2019-12-12 10:37:17,175 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Djobmanager.rpc.address=flink-job-cluster
2019-12-12 10:37:17,175 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Dparallelism.default=1
2019-12-12 10:37:17,176 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Dblob.server.port=6124
2019-12-12 10:37:17,176 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Dqueryable-state.server.ports=6125
2019-12-12 10:37:17,176 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Classpath: /opt/flink-1.8.0/lib/log4j-1.2.17.jar:/opt/flink-1.8.0/lib/slf4j-log4j12-1.7.15.jar:/opt/flink-1.8.0/lib/flink-dist_2.11-1.8.0.jar:::
2019-12-12 10:37:17,176 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --------------------------------------------------------------------------------
2019-12-12 10:37:17,178 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Registered UNIX signal handlers for [TERM, HUP, INT]
2019-12-12 10:37:17,306 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.address, localhost
2019-12-12 10:37:17,306 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.rpc.port, 6123
2019-12-12 10:37:17,307 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.heap.size, 1024m
2019-12-12 10:37:17,307 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.heap.size, 1024m
2019-12-12 10:37:17,307 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.numberOfTaskSlots, 1
2019-12-12 10:37:17,307 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 1
2019-12-12 10:37:17,336 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Starting StandaloneJobClusterEntryPoint.
2019-12-12 10:37:17,336 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Install default filesystem.
2019-12-12 10:37:17,343 INFO org.apache.flink.core.fs.FileSystem - Hadoop is not in the classpath/dependencies. The extended set of supported File Systems via Hadoop is not available.
2019-12-12 10:37:17,352 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Install security context.
2019-12-12 10:37:17,362 INFO org.apache.flink.runtime.security.modules.HadoopModuleFactory - Cannot create Hadoop Security Module because Hadoop cannot be found in the Classpath.
2019-12-12 10:37:17,381 INFO org.apache.flink.runtime.security.SecurityUtils - Cannot install HadoopSecurityContext because Hadoop cannot be found in the Classpath.
2019-12-12 10:37:17,382 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Initializing cluster services.
2019-12-12 10:37:17,638 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils - Trying to start actor system at flink-job-cluster:6123
2019-12-12 10:37:18,163 INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
2019-12-12 10:37:18,237 INFO akka.remote.Remoting - Starting remoting
2019-12-12 10:37:18,366 INFO akka.remote.Remoting - Remoting started; listening on addresses :[akka.tcp://flink@flink-job-cluster:6123]
2019-12-12 10:37:18,375 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils - Actor system started at akka.tcp://flink@flink-job-cluster:6123
2019-12-12 10:37:18,398 INFO org.apache.flink.configuration.Configuration - Config uses fallback configuration key 'jobmanager.rpc.address' instead of key 'rest.address'
2019-12-12 10:37:18,407 INFO org.apache.flink.runtime.blob.BlobServer - Created BLOB server storage directory /tmp/blobStore-63338044-67c1-4872-a3d9-c94563b3a7c3
2019-12-12 10:37:18,412 INFO org.apache.flink.runtime.blob.BlobServer - Started BLOB server at 0.0.0.0:6124 - max concurrent requests: 50 - max backlog: 1000
2019-12-12 10:37:18,428 INFO org.apache.flink.runtime.metrics.MetricRegistryImpl - No metrics reporter configured, no metrics will be exposed/reported.
2019-12-12 10:37:18,430 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Trying to start actor system at flink-job-cluster:0
2019-12-12 10:37:18,464 INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
2019-12-12 10:37:18,472 INFO akka.remote.Remoting - Starting remoting
2019-12-12 10:37:18,480 INFO akka.remote.Remoting - Remoting started; listening on addresses :[akka.tcp://flink-metrics@flink-job-cluster:33529]
2019-12-12 10:37:18,482 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Actor system started at akka.tcp://flink-metrics@flink-job-cluster:33529
2019-12-12 10:37:18,490 INFO org.apache.flink.runtime.blob.TransientBlobCache - Created BLOB cache storage directory /tmp/blobStore-ba64dcdb-5095-41fc-9c98-0f1528d95c40
2019-12-12 10:37:18,514 INFO org.apache.flink.configuration.Configuration - Config uses fallback configuration key 'jobmanager.rpc.address' instead of key 'rest.address'
2019-12-12 10:37:18,515 WARN org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint - Upload directory /tmp/flink-web-f6be0c2d-5099-4bd6-bc72-a0ae1fc6448e/flink-web-upload does not exist, or has been deleted externally. Previously uploaded files are no longer available.
2019-12-12 10:37:18,516 INFO org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint - Created directory /tmp/flink-web-f6be0c2d-5099-4bd6-bc72-a0ae1fc6448e/flink-web-upload for file uploads.
2019-12-12 10:37:18,603 INFO org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint - Starting rest endpoint.
2019-12-12 10:37:18,872 WARN org.apache.flink.runtime.webmonitor.WebMonitorUtils - Log file environment variable 'log.file' is not set.
2019-12-12 10:37:18,872 WARN org.apache.flink.runtime.webmonitor.WebMonitorUtils - JobManager log files are unavailable in the web dashboard. Log file location not found in environment variable 'log.file' or configuration key 'Key: 'web.log.path' , default: null (fallback keys: [{key=jobmanager.web.log.path, isDeprecated=true}])'.
2019-12-12 10:37:19,115 INFO org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint - Rest endpoint listening at flink-job-cluster:8081
2019-12-12 10:37:19,116 INFO org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint - http://flink-job-cluster:8081 was granted leadership with leaderSessionID=00000000-0000-0000-0000-000000000000
2019-12-12 10:37:19,116 INFO org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint - Web frontend listening at http://flink-job-cluster:8081.
2019-12-12 10:37:19,239 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for org.apache.flink.runtime.resourcemanager.StandaloneResourceManager at akka://flink/user/resourcemanager .
2019-12-12 10:37:19,262 INFO org.apache.flink.container.entrypoint.ClassPathJobGraphRetriever - Scanning class path for job JAR
2019-12-12 10:37:19,270 INFO org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint - Shutting down rest endpoint.
2019-12-12 10:37:19,295 INFO org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint - Removing cache directory /tmp/flink-web-f6be0c2d-5099-4bd6-bc72-a0ae1fc6448e/flink-web-ui
2019-12-12 10:37:19,299 INFO org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint - http://flink-job-cluster:8081 lost leadership
2019-12-12 10:37:19,299 INFO org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint - Shut down complete.
2019-12-12 10:37:19,302 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Shutting StandaloneJobClusterEntryPoint down with application status FAILED. Diagnostics org.apache.flink.util.FlinkException: Could not create the DispatcherResourceManagerComponent.
at org.apache.flink.runtime.entrypoint.component.AbstractDispatcherResourceManagerComponentFactory.create(AbstractDispatcherResourceManagerComponentFactory.java:257)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:224)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:172)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:171)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:535)
at org.apache.flink.container.entrypoint.StandaloneJobClusterEntryPoint.main(StandaloneJobClusterEntryPoint.java:105)
Caused by: org.apache.flink.util.FlinkException: Failed to find job JAR on class path. Please provide the job class name explicitly.
at org.apache.flink.container.entrypoint.ClassPathJobGraphRetriever.getJobClassNameOrScanClassPath(ClassPathJobGraphRetriever.java:131)
at org.apache.flink.container.entrypoint.ClassPathJobGraphRetriever.createPackagedProgram(ClassPathJobGraphRetriever.java:114)
at org.apache.flink.container.entrypoint.ClassPathJobGraphRetriever.retrieveJobGraph(ClassPathJobGraphRetriever.java:96)
at org.apache.flink.runtime.dispatcher.JobDispatcherFactory.createDispatcher(JobDispatcherFactory.java:62)
at org.apache.flink.runtime.dispatcher.JobDispatcherFactory.createDispatcher(JobDispatcherFactory.java:41)
at org.apache.flink.runtime.entrypoint.component.AbstractDispatcherResourceManagerComponentFactory.create(AbstractDispatcherResourceManagerComponentFactory.java:184)
... 6 more
Caused by: java.util.NoSuchElementException: No JAR with manifest attribute for entry class
at org.apache.flink.container.entrypoint.JarManifestParser.findOnlyEntryClass(JarManifestParser.java:80)
at org.apache.flink.container.entrypoint.ClassPathJobGraphRetriever.scanClassPathForJobJar(ClassPathJobGraphRetriever.java:137)
at org.apache.flink.container.entrypoint.ClassPathJobGraphRetriever.getJobClassNameOrScanClassPath(ClassPathJobGraphRetriever.java:129)
... 11 more
.
2019-12-12 10:37:19,305 INFO org.apache.flink.runtime.blob.BlobServer - Stopped BLOB server at 0.0.0.0:6124
2019-12-12 10:37:19,305 INFO org.apache.flink.runtime.blob.TransientBlobCache - Shutting down BLOB cache
2019-12-12 10:37:19,315 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Stopping Akka RPC service.
2019-12-12 10:37:19,320 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
2019-12-12 10:37:19,321 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
2019-12-12 10:37:19,323 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
2019-12-12 10:37:19,325 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
2019-12-12 10:37:19,354 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
2019-12-12 10:37:19,356 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
2019-12-12 10:37:19,378 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Stopped Akka RPC service.
2019-12-12 10:37:19,382 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Could not start cluster entrypoint StandaloneJobClusterEntryPoint.
org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneJobClusterEntryPoint.
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:190)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:535)
at org.apache.flink.container.entrypoint.StandaloneJobClusterEntryPoint.main(StandaloneJobClusterEntryPoint.java:105)
Caused by: org.apache.flink.util.FlinkException: Could not create the DispatcherResourceManagerComponent.
at org.apache.flink.runtime.entrypoint.component.AbstractDispatcherResourceManagerComponentFactory.create(AbstractDispatcherResourceManagerComponentFactory.java:257)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:224)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:172)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:171)
... 2 more
Caused by: org.apache.flink.util.FlinkException: Failed to find job JAR on class path. Please provide the job class name explicitly.
at org.apache.flink.container.entrypoint.ClassPathJobGraphRetriever.getJobClassNameOrScanClassPath(ClassPathJobGraphRetriever.java:131)
at org.apache.flink.container.entrypoint.ClassPathJobGraphRetriever.createPackagedProgram(ClassPathJobGraphRetriever.java:114)
at org.apache.flink.container.entrypoint.ClassPathJobGraphRetriever.retrieveJobGraph(ClassPathJobGraphRetriever.java:96)
at org.apache.flink.runtime.dispatcher.JobDispatcherFactory.createDispatcher(JobDispatcherFactory.java:62)
at org.apache.flink.runtime.dispatcher.JobDispatcherFactory.createDispatcher(JobDispatcherFactory.java:41)
at org.apache.flink.runtime.entrypoint.component.AbstractDispatcherResourceManagerComponentFactory.create(AbstractDispatcherResourceManagerComponentFactory.java:184)
... 6 more
Caused by: java.util.NoSuchElementException: No JAR with manifest attribute for entry class
at org.apache.flink.container.entrypoint.JarManifestParser.findOnlyEntryClass(JarManifestParser.java:80)
at org.apache.flink.container.entrypoint.ClassPathJobGraphRetriever.scanClassPathForJobJar(ClassPathJobGraphRetriever.java:137)
at org.apache.flink.container.entrypoint.ClassPathJobGraphRetriever.getJobClassNameOrScanClassPath(ClassPathJobGraphRetriever.java:129)
... 11 more
develk@ACIDLAELKV01:~/cntx_eng$
Now I have added the job class name in the args section of the "job-cluster-job.yaml.template" file, like below:
args: ["job-cluster",
"--job-classname", "com.flink.wordCountSimple",
"-Djobmanager.rpc.address=flink-job-cluster",
But after that I am getting the exception below:
Caused by: org.apache.flink.util.FlinkException: Could not load the provided entrypoint class.
Please see the detailed log below.
2019-12-13 19:08:34,323 INFO org.apache.flink.runtime.jobmaster.MiniDispatcherRestEndpoint - Shut down complete.
2019-12-13 19:08:34,329 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Shutting StandaloneJobClusterEntryPoint down with application status FAILED. Diagnostics org.apache.flink.util.FlinkException: Could not create the DispatcherResourceManagerComponent.
at org.apache.flink.runtime.entrypoint.component.AbstractDispatcherResourceManagerComponentFactory.create(AbstractDispatcherResourceManagerComponentFactory.java:257)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:224)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:172)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:171)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:535)
at org.apache.flink.container.entrypoint.StandaloneJobClusterEntryPoint.main(StandaloneJobClusterEntryPoint.java:105)
Caused by: org.apache.flink.util.FlinkException: Could not load the provided entrypoint class.
at org.apache.flink.container.entrypoint.ClassPathJobGraphRetriever.createPackagedProgram(ClassPathJobGraphRetriever.java:119)
at org.apache.flink.container.entrypoint.ClassPathJobGraphRetriever.retrieveJobGraph(ClassPathJobGraphRetriever.java:96)
at org.apache.flink.runtime.dispatcher.JobDispatcherFactory.createDispatcher(JobDispatcherFactory.java:62)
at org.apache.flink.runtime.dispatcher.JobDispatcherFactory.createDispatcher(JobDispatcherFactory.java:41)
at org.apache.flink.runtime.entrypoint.component.AbstractDispatcherResourceManagerComponentFactory.create(AbstractDispatcherResourceManagerComponentFactory.java:184)
... 6 more
Caused by: java.lang.ClassNotFoundException: com.flink.wordCountSimple
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.flink.container.entrypoint.ClassPathJobGraphRetriever.createPackagedProgram(ClassPathJobGraphRetriever.java:116)
... 10 more
.
2019-12-13 19:08:34,337 INFO org.apache.flink.runtime.blob.BlobServer - Stopped BLOB server at 0.0.0.0:6124
2019-12-13 19:08:34,338 INFO org.apache.flink.runtime.blob.TransientBlobCache - Shutting down BLOB cache
2019-12-13 19:08:34,364 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Stopping Akka RPC service.
2019-12-13 19:08:34,368 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
2019-12-13 19:08:34,372 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
2019-12-13 19:08:34,392 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
2019-12-13 19:08:34,392 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
2019-12-13 19:08:34,406 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
2019-12-13 19:08:34,410 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
2019-12-13 19:08:34,434 INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService - Stopped Akka RPC service.
2019-12-13 19:08:34,443 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Could not start cluster entrypoint StandaloneJobClusterEntryPoint.
org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to initialize the cluster entrypoint StandaloneJobClusterEntryPoint.
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:190)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:535)
at org.apache.flink.container.entrypoint.StandaloneJobClusterEntryPoint.main(StandaloneJobClusterEntryPoint.java:105)
Caused by: org.apache.flink.util.FlinkException: Could not create the DispatcherResourceManagerComponent.
at org.apache.flink.runtime.entrypoint.component.AbstractDispatcherResourceManagerComponentFactory.create(AbstractDispatcherResourceManagerComponentFactory.java:257)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:224)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:172)
at org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:171)
... 2 more
Caused by: org.apache.flink.util.FlinkException: Could not load the provided entrypoint class.
at org.apache.flink.container.entrypoint.ClassPathJobGraphRetriever.createPackagedProgram(ClassPathJobGraphRetriever.java:119)
at org.apache.flink.container.entrypoint.ClassPathJobGraphRetriever.retrieveJobGraph(ClassPathJobGraphRetriever.java:96)
at org.apache.flink.runtime.dispatcher.JobDispatcherFactory.createDispatcher(JobDispatcherFactory.java:62)
at org.apache.flink.runtime.dispatcher.JobDispatcherFactory.createDispatcher(JobDispatcherFactory.java:41)
at org.apache.flink.runtime.entrypoint.component.AbstractDispatcherResourceManagerComponentFactory.create(AbstractDispatcherResourceManagerComponentFactory.java:184)
... 6 more
Caused by: java.lang.ClassNotFoundException: com.flink.wordCountSimple
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.flink.container.entrypoint.ClassPathJobGraphRetriever.createPackagedProgram(ClassPathJobGraphRetriever.java:116)
... 10 more
There's a complete, working example of creating and running a Flink job cluster on Kubernetes in https://github.com/alpinegizmo/flink-containers-example. Maybe that will help. See also https://www.youtube.com/watch?v=ceZtUDgh2TE.
version: "2.1"
services:
jobmanager:
build:
context: ./
args:
JAR_FILE: flink-event-tracker-bundled-1.6.0.jar
image: test/flink-event-tracker
expose:
- "6123"
ports:
- "8081:8081"
- "6123:6123"
command: job-cluster --job-classname com.company.test.flink.pipelines.KafkaPipelineConsumer -Djobmanager.rpc.address=jobmanager --runner=FlinkRunner --streaming=true --checkpointingInterval=30000
environment:
- JOB_MANAGER_RPC_ADDRESS=jobmanager
- JOB_MANAGER=jobmanager
volumes:
- data-volume:/docker/volumes
taskmanager:
image: test/flink-event-tracker
expose:
- "6121"
- "6122"
depends_on:
- jobmanager
command: task-manager -Djobmanager.rpc.address=jobmanager
links:
- "jobmanager:jobmanager"
environment:
- JOB_MANAGER_RPC_ADDRESS=jobmanager
- JOB_MANAGER=jobmanager
volumes:
- data-volume:/docker/volumes
volumes:
data-volume:
driver: local
driver_opts:
o: bind
type: none
device: /Users/home/Development/docker/volumes/flink
Dockerfile
FROM flink:1.9
ARG JAR_FILE=""
ENV APP_OPTS ""
ENV JAVA_OPTS ""
ENV JOB_MANAGER=""
# Build arg allows passing the version at runtime
ARG VERSION=unset-version
COPY flink-conf.yml $FLINK_HOME/conf/flink-conf.yaml
COPY target/$JAR_FILE $FLINK_HOME/lib/event-tracker.jar
COPY docker-cluster-entrypoint.sh /docker-cluster-entrypoint.sh
RUN apt-get update && apt-get install procps -y && apt-get install curl -y
RUN echo "root:root" | chpasswd
RUN chmod 777 /docker-cluster-entrypoint.sh
RUN chmod 777 $FLINK_HOME/lib/event-tracker.jar
ENTRYPOINT [ "bash","/docker-cluster-entrypoint.sh" ]
docker-cluster-entrypoint.sh
#!/bin/sh
# Default FLINK_HOME to the Flink install root (the flink:1.9 base image already sets it)
FLINK_HOME=${FLINK_HOME:-"/opt/flink"}
JOB_CLUSTER="job-cluster"
TASK_MANAGER="task-manager"

CMD="$1"
shift

if [ "${CMD}" = "--help" -o "${CMD}" = "-h" ]; then
  echo "Usage: $(basename "$0") (${JOB_CLUSTER}|${TASK_MANAGER})"
  exit 0
elif [ "${CMD}" = "${JOB_CLUSTER}" -o "${CMD}" = "${TASK_MANAGER}" ]; then
  echo "Starting the ${CMD}"
  if [ "${CMD}" = "${TASK_MANAGER}" ]; then
    exec "$FLINK_HOME"/bin/taskmanager.sh start-foreground "$@"
  else
    exec "$FLINK_HOME"/bin/standalone-job.sh start-foreground "$@"
  fi
fi
How to run:
mvn clean install
docker-compose -f docker-compose.local.yml build
docker-compose -f docker-compose.local.yml up --scale taskmanager=2 > exceptionlog.log
This is the entire configuration that runs your Docker setup. If you want to run it in Kubernetes, just convert the docker-compose file to its corresponding Kubernetes files; the rest can stay the same. Maybe do a Helm chart; that way Kubernetes maintenance is better.
Note: we are using Apache Beam to code the job.
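If you do convert the compose file for Kubernetes, kompose (my suggestion; the answer above doesn't name a tool) can generate a starting set of manifests:
kompose convert -f docker-compose.local.yml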
This question might seem like a repeat; in fact, I've seen a couple of questions related to this, but not exactly with the same error, so I'm asking to see if anyone has a clue.
I've set up a Spark Thrift Server running with default settings. The Spark version is 2.1 and it runs on YARN (Hadoop 2.7.3).
The fact is that I'm not able to set up either the Simba Hive ODBC driver or the Microsoft one so that the Test in the ODBC setup succeeds.
This is the config I'm using for the Microsoft Hive ODBC driver:
When I hit the Test button, the error message shown is the following:
Meanwhile, the following is seen in the Spark Thrift Server logs:
17/09/15 17:31:36 INFO ThriftCLIService: Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V1
17/09/15 17:31:36 INFO SessionState: Created local directory: /tmp/00abf145-2928-4995-81f2-fea578280c42_resources
17/09/15 17:31:36 INFO SessionState: Created HDFS directory: /tmp/hive/test/00abf145-2928-4995-81f2-fea578280c42
17/09/15 17:31:36 INFO SessionState: Created local directory: /tmp/vagrant/00abf145-2928-4995-81f2-fea578280c42
17/09/15 17:31:36 INFO SessionState: Created HDFS directory: /tmp/hive/test/00abf145-2928-4995-81f2-fea578280c42/_tmp_space.db
17/09/15 17:31:36 INFO HiveSessionImpl: Operation log session directory is created: /tmp/vagrant/operation_logs/00abf145-2928-4995-81f2-fea578280c42
17/09/15 17:31:36 INFO SparkExecuteStatementOperation: Running query 'set -v' with 82d7f9a6-f2a6-4ebd-93bb-5c8da1611f84
17/09/15 17:31:36 INFO SparkSqlParser: Parsing command: set -v
17/09/15 17:31:36 INFO SparkExecuteStatementOperation: Result Schema: StructType(StructField(key,StringType,false), StructField(value,StringType,false), StructField(meaning,StringType,false))
If I connect using the JDBC driver by means of Beeline (which works ok), these are the logs:
17/09/15 17:04:24 INFO ThriftCLIService: Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V8
17/09/15 17:04:24 INFO SessionState: Created HDFS directory: /tmp/hive/test
17/09/15 17:04:24 INFO SessionState: Created local directory: /tmp/c0681d6f-cc0f-40ae-970d-e3ea366aa414_resources
17/09/15 17:04:24 INFO SessionState: Created HDFS directory: /tmp/hive/test/c0681d6f-cc0f-40ae-970d-e3ea366aa414
17/09/15 17:04:24 INFO SessionState: Created local directory: /tmp/vagrant/c0681d6f-cc0f-40ae-970d-e3ea366aa414
17/09/15 17:04:24 INFO SessionState: Created HDFS directory: /tmp/hive/test/c0681d6f-cc0f-40ae-970d-e3ea366aa414/_tmp_space.db
17/09/15 17:04:24 INFO HiveSessionImpl: Operation log session directory is created: /tmp/vagrant/operation_logs/c0681d6f-cc0f-40ae-970d-e3ea366aa414
17/09/15 17:04:24 INFO SparkSqlParser: Parsing command: use default
17/09/15 17:04:25 INFO HiveMetaStore: 1: get_database: default
17/09/15 17:04:25 INFO audit: ugi=vagrant ip=unknown-ip-addr cmd=get_database: default
17/09/15 17:04:25 INFO HiveMetaStore: 1: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
17/09/15 17:04:25 INFO ObjectStore: ObjectStore, initialize called
17/09/15 17:04:25 INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
17/09/15 17:04:25 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
17/09/15 17:04:25 INFO ObjectStore: Initialized ObjectStore
Well, I managed to connect successfully by installing the Microsoft Spark ODBC driver instead of the Hive one.
It looked like the problem was the driver refusing to connect to the Spark Thrift Server once it discovered, from some server property, that it was not a genuine Hive2 server. I doubt there are actual differences at the wire level between Hive2 and the Spark Thrift Server, since the latter is a port of the former with no changes at the protocol (Thrift) level; in any case, the solution is to switch to this driver and configure it the same way as the Hive2 one:
Microsoft® Spark ODBC Driver
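For reference, a quick way to confirm the Thrift Server itself is reachable before blaming the ODBC layer (assuming the default Thrift port 10000 and the vagrant user seen in the logs above) is Beeline over JDBC:

beeline -u "jdbc:hive2://localhost:10000/default" -n vagrant -e "show databases;"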
I am trying to use Zeppelin with the following code:
val dataText = sc.parallelize(IOUtils.toString(new URL("http://XXX.XX.XXX.121:8090/my_data.txt"), Charset.forName("utf8")).split("\n"))
case class Data(id: String, time: Long, value1: Double, value2: Int, mode: Int)
val dat = dataText.map(s => s.split("\t")).filter(s => s(0) != "Header:").map(
  s => Data(s(0),
    s(1).toLong,
    s(2).toDouble,
    s(3).toInt,
    s(4).toInt
  )
).toDF()
dat.registerTempTable("mydatatable")
This keeps throwing the following error:
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2367)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:535)
at java.lang.StringBuilder.append(StringBuilder.java:204)
at org.apache.commons.io.output.StringBuilderWriter.write(StringBuilderWriter.java:138)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:2002)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1980)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1957)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1907)
at org.apache.commons.io.IOUtils.toString(IOUtils.java:778)
at org.apache.commons.io.IOUtils.toString(IOUtils.java:896)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:38)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:43)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:45)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:47)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:49)
at $iwC$$iwC$$iwC.<init>(<console>:51)
at $iwC$$iwC.<init>(<console>:53)
at $iwC.<init>(<console>:55)
at <init>(<console>:57)
at .<init>(<console>:61)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
I have already set the following in zeppelin-env.sh:
export ZEPPELIN_JAVA_OPTS="-Dhdp.version=2.3.0.0-2557 -Dspark.executor.memory=4g"
Any idea what I may be missing? The file I am parsing, my_data.txt, is about 200 MB.
BTW, I am using the Hortonworks Sandbox, if that matters.
EDIT 1
Here is my zeppelin-env.sh
export HADOOP_CONF_DIR=/etc/hadoop/conf
export ZEPPELIN_PORT=9995
export ZEPPELIN_JAVA_OPTS="-Dhdp.version=2.3.0.0-2557 -Dspark.executor.memory=4g"
export SPARK_SUBMIT_OPTIONS="--driver-java-options -Xmx4g"
export ZEPPELIN_INT_MEM="-Xmx4g"
export SPARK_HOME=/usr/hdp/2.3.0.0-2557/spark
Regards
Kiran
Can you try increasing the driver memory via SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh?
export SPARK_SUBMIT_OPTIONS="--driver-java-options -Xmx20g"
This thread may help
http://apache-zeppelin-users-incubating-mailing-list.75479.x6.nabble.com/Can-not-configure-driver-memory-size-td1513.html
Increasing the memory for the following zeppelin-env.sh variable did the trick for me. The default is 1 GB heap / 0.5 GB PermGen; I increased it to roughly 10 GB / 5 GB:
export ZEPPELIN_MEM="-Xmx10024m -XX:MaxPermSize=5120m"
I was getting the error below while trying to bring up the Zeppelin notebook:
INFO [2021-05-04 15:16:22,015] ({main} Folder.java[addNote]:185) - Add note 2G7CAFXX7 to folder /
INFO [2021-05-04 15:16:22,016] ({main} Notebook.java[<init>]:127) - Notebook indexing started...
WARN [2021-05-04 15:16:32,045] ({main} ContextHandler.java[log]:2355) - unavailable
MultiException stack 1 of 1
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.store.RAMFile.newBuffer(RAMFile.java:80)
at org.apache.lucene.store.RAMFile.addBuffer(RAMFile.java:53)
To resolve this issue, I tuned the ZEPPELIN_MEM parameter in the zeppelin-env.sh file like this:
export ZEPPELIN_MEM="-Xmx5024m -XX:MaxPermSize=5120m"
Then restart Zeppelin:
sudo systemctl stop zeppelin; sudo systemctl start zeppelin
Result:
INFO [2021-05-04 18:51:02,939] ({main} Folder.java[addNote]:185) - Add note 2G7CAFXX7 to folder /
INFO [2021-05-04 18:51:02,940] ({main} Notebook.java[<init>]:127) - Notebook indexing started...
INFO [2021-05-04 18:51:05,793] ({main} LuceneSearch.java[addIndexDocs]:305) - Indexing 905 notebooks took 2853ms
INFO [2021-05-04 18:51:05,793] ({main} Notebook.java[<init>]:129) - Notebook indexing finished: 905 indexed in -2s
INFO [2021-05-04 18:51:05,795] ({main} Helium.java[loadConf]:103) - Add helium local registry /usr/lib/zeppelin/helium
INFO [2021-05-04 18:51:05,797] ({main} Helium.java[loadConf]:100) - Add helium
INFO [2021-05-04 18:51:06,631] ({main} Server.java[doStart]:407) - Started @131632ms
INFO [2021-05-04 18:51:06,631] ({main} ZeppelinServer.java[main]:249) - Done, zeppelin server started
The only thing that worked for me (using Spark 2) was to add to conf/zeppelin-env.sh:
export SPARK_SUBMIT_OPTIONS="... --driver-memory 4g ..."
Then restart the Zeppelin interpreter (in Zeppelin for Spark 2, click the settings button at the top right, click the Interpreter link, scroll down to the Spark section, and click its Restart button).
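As context for why all of these answers revolve around driver memory: IOUtils.toString materializes the entire 200 MB file as a single string on the driver before parallelize even runs, and the string builder's resizing roughly doubles that footprint (visible in the stack trace above). A sketch that sidesteps the problem, assuming the file is first downloaded to a local path readable by Spark (the /tmp path here is hypothetical):

// Download first, e.g.: wget -O /tmp/my_data.txt http://XXX.XX.XXX.121:8090/my_data.txt
val dataText = sc.textFile("file:///tmp/my_data.txt") // lazy, line-by-line read, nothing buffered on the driver
case class Data(id: String, time: Long, value1: Double, value2: Int, mode: Int)
val dat = dataText.map(_.split("\t"))
  .filter(s => s(0) != "Header:")
  .map(s => Data(s(0), s(1).toLong, s(2).toDouble, s(3).toInt, s(4).toInt))
  .toDF()
dat.registerTempTable("mydatatable")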
I'm trying to simulate a multi-node Mesos cluster using Docker and Zookeeper and trying to run a simple (py)Spark job on top of it. These Docker containers and the pyspark script are all run on the same machine. However, when I execute my Spark script, it hangs at:
No credentials provided. Attempting to register without authentication
The Mesos slave constantly outputs:
I0929 14:59:32.925915 62 slave.cpp:1959] Asked to shut down framework 20150929-143802-1224741292-5050-33-0060 by master@172.17.0.73:5050
W0929 14:59:32.926035 62 slave.cpp:1974] Cannot shut down unknown framework 20150929-143802-1224741292-5050-33-0060
And the Mesos master constantly outputs:
I0929 14:38:15.169683 39 master.cpp:2094] Received SUBSCRIBE call for framework 'test' at scheduler-2f4e1e52-a04a-401f-b9aa-1253554fe73b@127.0.1.1:46693
I0929 14:38:15.169845 39 master.cpp:2164] Subscribing framework test with checkpointing disabled and capabilities [ ]
E0929 14:38:15.170361 42 socket.hpp:174] Shutdown failed on fd=15: Transport endpoint is not connected [107]
I0929 14:38:15.170409 36 hierarchical.hpp:391] Added framework 20150929-143802-1224741292-5050-33-0000
I0929 14:38:15.170534 39 master.cpp:1051] Framework 20150929-143802-1224741292-5050-33-0000 (test) at scheduler-2f4e1e52-a04a-401f-b9aa-1253554fe73b@127.0.1.1:46693 disconnected
I0929 14:38:15.170549 39 master.cpp:2370] Disconnecting framework 20150929-143802-1224741292-5050-33-0000 (test) at scheduler-2f4e1e52-a04a-401f-b9aa-1253554fe73b@127.0.1.1:46693
I0929 14:38:15.170555 39 master.cpp:2394] Deactivating framework 20150929-143802-1224741292-5050-33-0000 (test) at scheduler-2f4e1e52-a04a-401f-b9aa-1253554fe73b@127.0.1.1:46693
E0929 14:38:15.170560 42 socket.hpp:174] Shutdown failed on fd=16: Transport endpoint is not connected [107]
I0929 14:38:15.170593 39 master.cpp:1075] Giving framework 20150929-143802-1224741292-5050-33-0000 (test) at scheduler-2f4e1e52-a04a-401f-b9aa-1253554fe73b@127.0.1.1:46693 0ns to failover
W0929 14:38:15.170835 41 master.cpp:4482] Master returning resources offered to framework 20150929-143802-1224741292-5050-33-0000 because the framework has terminated or is inactive
I0929 14:38:15.170855 36 hierarchical.hpp:474] Deactivated framework 20150929-143802-1224741292-5050-33-0000
I0929 14:38:15.170990 37 hierarchical.hpp:814] Recovered cpus(*):8; mem(*):31092; disk(*):443036; ports(*):[31000-32000] (total: cpus(*):8; mem(*):31092; disk(*):443036; ports(*):[31000-32000], allocated: ) on slave 20150929-051336-1224741292-5050-19-S0 from framework 20150929-143802-1224741292-5050-33-0000
I0929 14:38:15.171820 41 master.cpp:4469] Framework failover timeout, removing framework 20150929-143802-1224741292-5050-33-0000 (test) at scheduler-2f4e1e52-a04a-401f-b9aa-1253554fe73b@127.0.1.1:46693
I0929 14:38:15.171835 41 master.cpp:5112] Removing framework 20150929-143802-1224741292-5050-33-0000 (test) at scheduler-2f4e1e52-a04a-401f-b9aa-1253554fe73b@127.0.1.1:46693
I0929 14:38:15.172130 41 hierarchical.hpp:428] Removed framework 20150929-143802-1224741292-5050-33-0000
The Mesos master Docker image is built with the following Dockerfile:
FROM ubuntu:14.04
ENV MESOS_V 0.24.0
# update
RUN apt-get update
RUN apt-get upgrade -y
# dependencies
RUN apt-get install -y wget openjdk-7-jdk build-essential python-dev python-boto libcurl4-nss-dev libsasl2-dev maven libapr1-dev libsvn-dev
# mesos
RUN wget http://www.apache.org/dist/mesos/${MESOS_V}/mesos-${MESOS_V}.tar.gz
RUN tar -zxf mesos-*.tar.gz
RUN rm mesos-*.tar.gz
RUN mv mesos-* mesos
WORKDIR mesos
RUN mkdir build
RUN ./configure
RUN make
RUN make install
RUN ldconfig
EXPOSE 5050
ENTRYPOINT ["/bin/bash"]
And I manually execute the mesos-master command:
LIBPROCESS_IP=${MASTER_IP} mesos-master --registry=in_memory --ip=${MASTER_IP} --zk=zk://172.17.0.75:2181/mesos --advertise_ip=${MASTER_IP}
The Mesos slave Docker image is built using the same Dockerfile except port 5051 is exposed instead. Then I run the following command in its container:
LIBPROCESS_IP=172.17.0.72 mesos-slave --master=zk://172.17.0.75:2181/mesos
The pyspark script is:
import os
import pyspark
src = 'file:///{}/README.md'.format(os.environ['SPARK_HOME'])
leader_ip = '172.17.0.75'
conf = pyspark.SparkConf()
conf.setMaster('mesos://zk://{}:2181/mesos'.format(leader_ip))
conf.set('spark.executor.uri', 'http://d3kbcqa49mib13.cloudfront.net/spark-1.5.0-bin-hadoop2.6.tgz')
conf.setAppName('my_test_app')
sc = pyspark.SparkContext(conf=conf)
lines = sc.textFile(src)
words = lines.flatMap(lambda x: x.split(' '))
word_count = (words.map(lambda x: (x, 1)).reduceByKey(lambda x, y: x+y))
print(word_count.collect())
Here is the complete output of the pyspark script:
15/09/29 11:07:59 INFO SparkContext: Running Spark version 1.5.0
15/09/29 11:07:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/09/29 11:07:59 WARN Utils: Your hostname, hubble resolves to a loopback address: 127.0.1.1; using 192.168.1.2 instead (on interface em1)
15/09/29 11:07:59 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/09/29 11:07:59 INFO SecurityManager: Changing view acls to: ftseng
15/09/29 11:07:59 INFO SecurityManager: Changing modify acls to: ftseng
15/09/29 11:07:59 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ftseng); users with modify permissions: Set(ftseng)
15/09/29 11:08:00 INFO Slf4jLogger: Slf4jLogger started
15/09/29 11:08:00 INFO Remoting: Starting remoting
15/09/29 11:08:00 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.1.2:38860]
15/09/29 11:08:00 INFO Utils: Successfully started service 'sparkDriver' on port 38860.
15/09/29 11:08:00 INFO SparkEnv: Registering MapOutputTracker
15/09/29 11:08:00 INFO SparkEnv: Registering BlockManagerMaster
15/09/29 11:08:00 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-28695bd2-fc83-45f4-b0a0-eefcfb80a3b5
15/09/29 11:08:00 INFO MemoryStore: MemoryStore started with capacity 530.3 MB
15/09/29 11:08:00 INFO HttpFileServer: HTTP File server directory is /tmp/spark-89444c7a-725a-4454-87db-8873f4134580/httpd-341c3da9-16d5-43a4-93ee-0e8b47389fdb
15/09/29 11:08:00 INFO HttpServer: Starting HTTP Server
15/09/29 11:08:00 INFO Utils: Successfully started service 'HTTP file server' on port 51405.
15/09/29 11:08:00 INFO SparkEnv: Registering OutputCommitCoordinator
15/09/29 11:08:00 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/09/29 11:08:00 INFO SparkUI: Started SparkUI at http://192.168.1.2:4040
15/09/29 11:08:00 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
2015-09-29 11:08:00,651:32221(0x7fc09e17c700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
2015-09-29 11:08:00,651:32221(0x7fc09e17c700):ZOO_INFO@log_env@716: Client environment:host.name=hubble
2015-09-29 11:08:00,651:32221(0x7fc09e17c700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
2015-09-29 11:08:00,651:32221(0x7fc09e17c700):ZOO_INFO@log_env@724: Client environment:os.arch=3.19.0-25-generic
2015-09-29 11:08:00,651:32221(0x7fc09e17c700):ZOO_INFO@log_env@725: Client environment:os.version=#26-Ubuntu SMP Fri Jul 24 21:17:31 UTC 2015
2015-09-29 11:08:00,651:32221(0x7fc09e17c700):ZOO_INFO@log_env@733: Client environment:user.name=ftseng
2015-09-29 11:08:00,651:32221(0x7fc09e17c700):ZOO_INFO@log_env@741: Client environment:user.home=/home/ftseng
2015-09-29 11:08:00,651:32221(0x7fc09e17c700):ZOO_INFO@log_env@753: Client environment:user.dir=/home/ftseng
2015-09-29 11:08:00,651:32221(0x7fc09e17c700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=172.17.0.75:2181 sessionTimeout=10000 watcher=0x7fc0962b7176 sessionId=0 sessionPasswd=<null> context=0x7fc078001860 flags=0
I0929 11:08:00.651923 32328 sched.cpp:164] Version: 0.24.0
2015-09-29 11:08:00,652:32221(0x7fc06bfff700):ZOO_INFO@check_events@1703: initiated connection to server [172.17.0.75:2181]
2015-09-29 11:08:00,657:32221(0x7fc06bfff700):ZOO_INFO@check_events@1750: session establishment complete on server [172.17.0.75:2181], sessionId=0x150177fcfc40014, negotiated timeout=10000
I0929 11:08:00.658051 32322 group.cpp:331] Group process (group(1)@127.0.1.1:48692) connected to ZooKeeper
I0929 11:08:00.658083 32322 group.cpp:805] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0929 11:08:00.658100 32322 group.cpp:403] Trying to create path '/mesos' in ZooKeeper
I0929 11:08:00.659600 32326 detector.cpp:156] Detected a new leader: (id='2')
I0929 11:08:00.659904 32325 group.cpp:674] Trying to get '/mesos/json.info_0000000002' in ZooKeeper
I0929 11:08:00.661052 32326 detector.cpp:481] A new leading master (UPID=master@172.17.0.73:5050) is detected
I0929 11:08:00.661201 32320 sched.cpp:262] New master detected at master@172.17.0.73:5050
I0929 11:08:00.661798 32320 sched.cpp:272] No credentials provided. Attempting to register without authentication
After a lot more experimentation, it looks like it was an issue with the IP address of the host machine: it was using its local network address (192.168.xx.xx) when it should have been using its Docker bridge IP (172.17.xx.xx).
I managed to get things running with:
LIBPROCESS_IP=172.17.xx.xx python test_spark.py
I'm now hitting a different error, but it seems unrelated, so I think this command solves my problem.
I'm not familiar enough with Mesos/Spark yet to understand why this fixes things, so if someone wants to add an explanation, that would be very helpful.
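A minimal sketch of the workaround, with the caveat that the explanation is my best guess rather than confirmed: LIBPROCESS_IP tells libprocess (the communication layer used by Mesos and the Spark scheduler) which address to bind and advertise, so it must be one the master container can route back to. The xx placeholders are kept from the original; checking the docker0 bridge is one way to find the right value:

# Find the Docker bridge address the containers can reach
ip addr show docker0
# Advertise that address instead of the host's LAN IP, then run the job
export LIBPROCESS_IP=172.17.xx.xx
python test_spark.py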