JBoss can't start web app and no errors in log - JSF

I'm trying to get a web app up and running on JBoss 5 in Eclipse. I'm not getting any errors in the log, but when I hit http://localhost:8080/WebDataViewer, I just get the default HTTP 404 error page.
When I hit http://localhost:8080/ I get the default JBoss page with some links on it.
I've not worked with JBoss before, so can anyone help me figure out how to access my app, or how to troubleshoot this?
10:07:39,706 INFO [ServerImpl] Starting JBoss (Microcontainer)...
10:07:39,707 INFO [ServerImpl] Release ID: JBoss [The Oracle] 5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221634)
10:07:39,707 INFO [ServerImpl] Bootstrap URL: null
10:07:39,707 INFO [ServerImpl] Home Dir: C:\jboss-5.1.0.GA
10:07:39,708 INFO [ServerImpl] Home URL: file:/C:/jboss-5.1.0.GA/
10:07:39,708 INFO [ServerImpl] Library URL: file:/C:/jboss-5.1.0.GA/lib/
10:07:39,708 INFO [ServerImpl] Patch URL: null
10:07:39,708 INFO [ServerImpl] Common Base URL: file:/C:/jboss-5.1.0.GA/common/
10:07:39,708 INFO [ServerImpl] Common Library URL: file:/C:/jboss-5.1.0.GA/common/lib/
10:07:39,709 INFO [ServerImpl] Server Name: default
10:07:39,709 INFO [ServerImpl] Server Base Dir: C:\jboss-5.1.0.GA\server
10:07:39,709 INFO [ServerImpl] Server Base URL: file:/C:/jboss-5.1.0.GA/server/
10:07:39,709 INFO [ServerImpl] Server Config URL: file:/C:/jboss-5.1.0.GA/server/default/conf/
10:07:39,709 INFO [ServerImpl] Server Home Dir: C:\jboss-5.1.0.GA\server\default
10:07:39,709 INFO [ServerImpl] Server Home URL: file:/C:/jboss-5.1.0.GA/server/default/
10:07:39,709 INFO [ServerImpl] Server Data Dir: C:\jboss-5.1.0.GA\server\default\data
10:07:39,709 INFO [ServerImpl] Server Library URL: file:/C:/jboss-5.1.0.GA/server/default/lib/
10:07:39,710 INFO [ServerImpl] Server Log Dir: C:\jboss-5.1.0.GA\server\default\log
10:07:39,710 INFO [ServerImpl] Server Native Dir: C:\jboss-5.1.0.GA\server\default\tmp\native
10:07:39,710 INFO [ServerImpl] Server Temp Dir: C:\jboss-5.1.0.GA\server\default\tmp
10:07:39,710 INFO [ServerImpl] Server Temp Deploy Dir: C:\jboss-5.1.0.GA\server\default\tmp\deploy
10:07:40,279 INFO [ServerImpl] Starting Microcontainer, bootstrapURL=file:/C:/jboss-5.1.0.GA/server/default/conf/bootstrap.xml
10:07:40,898 INFO [VFSCacheFactory] Initializing VFSCache [org.jboss.virtual.plugins.cache.CombinedVFSCache]
10:07:40,900 INFO [VFSCacheFactory] Using VFSCache [CombinedVFSCache[real-cache: null]]
10:07:41,132 INFO [CopyMechanism] VFS temp dir: C:\jboss-5.1.0.GA\server\default\tmp
10:07:41,133 INFO [ZipEntryContext] VFS force nested jars copy-mode is enabled.
10:07:42,399 INFO [ServerInfo] Java version: 1.6.0_35,Sun Microsystems Inc.
10:07:42,399 INFO [ServerInfo] Java Runtime: Java(TM) SE Runtime Environment (build 1.6.0_35-b10)
10:07:42,399 INFO [ServerInfo] Java VM: Java HotSpot(TM) 64-Bit Server VM 20.10-b01,Sun Microsystems Inc.
10:07:42,399 INFO [ServerInfo] OS-System: Windows 7 6.1,amd64
10:07:42,399 INFO [ServerInfo] VM arguments: -Dprogram.name=run.bat -Xms128m -Xmx512m -XX:MaxPermSize=256m -Dfile.encoding=Cp1252
10:07:42,425 INFO [JMXKernel] Legacy JMX core initialized
10:07:44,183 INFO [ProfileServiceBootstrap] Loading profile: ProfileKey@2ee634bf[domain=default, server=default, name=default]
10:07:45,435 INFO [WebService] Using RMI server codebase: http://127.0.0.1:8083/
10:07:50,839 INFO [NativeServerConfig] JBoss Web Services - Stack Native Core
10:07:50,839 INFO [NativeServerConfig] 3.1.2.GA
10:07:51,473 INFO [AttributeCallbackItem] Owner callback not implemented.
10:07:52,334 INFO [LogNotificationListener] Adding notification listener for logging mbean "jboss.system:service=Logging,type=Log4jService" to server org.jboss.mx.server.MBeanServerImpl@5de82b72[ defaultDomain='jboss' ]
10:08:06,536 INFO [Ejb3DependenciesDeployer] Encountered deployment AbstractVFSDeploymentContext@1493227242{vfsfile:/C:/jboss-5.1.0.GA/server/default/deploy/profileservice-secured.jar/}
10:08:06,536 INFO [Ejb3DependenciesDeployer] Encountered deployment AbstractVFSDeploymentContext@1493227242{vfsfile:/C:/jboss-5.1.0.GA/server/default/deploy/profileservice-secured.jar/}
10:08:06,537 INFO [Ejb3DependenciesDeployer] Encountered deployment AbstractVFSDeploymentContext@1493227242{vfsfile:/C:/jboss-5.1.0.GA/server/default/deploy/profileservice-secured.jar/}
10:08:06,537 INFO [Ejb3DependenciesDeployer] Encountered deployment AbstractVFSDeploymentContext@1493227242{vfsfile:/C:/jboss-5.1.0.GA/server/default/deploy/profileservice-secured.jar/}
10:08:08,928 INFO [JMXConnectorServerService] JMX Connector server: service:jmx:rmi://127.0.0.1/jndi/rmi://127.0.0.1:1090/jmxconnector
10:08:09,029 INFO [MailService] Mail Service bound to java:/Mail
10:08:11,006 WARN [JBossASSecurityMetadataStore] WARNING! POTENTIAL SECURITY RISK. It has been detected that the MessageSucker component which sucks messages from one node to another has not had its password changed from the installation default. Please see the JBoss Messaging user guide for instructions on how to do this.
10:08:11,020 WARN [AnnotationCreator] No ClassLoader provided, using TCCL: org.jboss.managed.api.annotation.ManagementComponent
10:08:11,073 WARN [AnnotationCreator] No ClassLoader provided, using TCCL: org.jboss.managed.api.annotation.ManagementComponent
10:08:11,111 INFO [TransactionManagerService] JBossTS Transaction Service (JTA version - tag:JBOSSTS_4_6_1_GA) - JBoss Inc.
10:08:11,111 INFO [TransactionManagerService] Setting up property manager MBean and JMX layer
10:08:11,312 INFO [TransactionManagerService] Initializing recovery manager
10:08:11,436 INFO [TransactionManagerService] Recovery manager configured
10:08:11,436 INFO [TransactionManagerService] Binding TransactionManager JNDI Reference
10:08:11,457 INFO [TransactionManagerService] Starting transaction recovery manager
10:08:12,016 INFO [AprLifecycleListener] The Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: C:\Program Files\Java\jdk1.6.0_35\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:/Program Files/Java/jdk1.6.0_35/bin/../jre/bin/server;C:/Program Files/Java/jdk1.6.0_35/bin/../jre/bin;C:/Program Files/Java/jdk1.6.0_35/bin/../jre/lib/amd64;c:\program files\apache-maven-3.0.x\bin;C:\Ruby192\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\sybase15_0_7\OCS-15_0\bin;C:\sybase15_0_7\OCS-15_0\dll;C:\sybase15_0_7\OCS-15_0\lib3p;C:\sybase15_0_7\DataAccess\ADONET\dll;C:\sybase15_0_7\DataAccess\ODBC\dll;C:\sybase15_0_7\DataAccess\OLEDB\dll;C:\WINDOWS\system32\WindowsPowerShell\v1.0;C:\Program Files (x86)\QuickTime\QTSystem\;C:\Program Files\MySQL\MySQL Server 5.5\bin;c:\Program Files\Stellent\IBPM;C:\PROGRA~2\IBM\SQLLIB\BIN;C:\PROGRA~2\IBM\SQLLIB\FUNCTION;C:\PROGRA~2\IBM\SQLLIB\bin;%JAVA_HOME%\bin;C:\Program Files\apache-maven-3.0.4\bin;C:\Oracle_Instant_Client(32bit)\instantclient_11_2;C:\Program Files (x86)\Git\cmd;C:\Program Files\TortoiseSVN\bin;C:\Program Files\eclipse;;.
10:08:12,087 INFO [Http11Protocol] Initializing Coyote HTTP/1.1 on http-127.0.0.1-8080
10:08:12,089 INFO [AjpProtocol] Initializing Coyote AJP/1.3 on ajp-127.0.0.1-8009
10:08:12,122 INFO [StandardService] Starting service jboss.web
10:08:12,130 INFO [StandardEngine] Starting Servlet Engine: JBoss Web/2.1.3.GA
10:08:12,235 INFO [Catalina] Server startup in 145 ms
10:08:12,258 INFO [TomcatDeployment] deploy, ctxPath=/web-console
10:08:12,993 INFO [TomcatDeployment] deploy, ctxPath=/jbossws
10:08:13,068 INFO [TomcatDeployment] deploy, ctxPath=/invoker
10:08:13,242 INFO [RARDeployment] Required license terms exist, view vfszip:/C:/jboss-5.1.0.GA/server/default/deploy/jboss-local-jdbc.rar/META-INF/ra.xml
10:08:13,285 INFO [RARDeployment] Required license terms exist, view vfszip:/C:/jboss-5.1.0.GA/server/default/deploy/jboss-xa-jdbc.rar/META-INF/ra.xml
10:08:13,310 INFO [RARDeployment] Required license terms exist, view vfszip:/C:/jboss-5.1.0.GA/server/default/deploy/jms-ra.rar/META-INF/ra.xml
10:08:13,324 INFO [RARDeployment] Required license terms exist, view vfszip:/C:/jboss-5.1.0.GA/server/default/deploy/mail-ra.rar/META-INF/ra.xml
10:08:13,353 INFO [RARDeployment] Required license terms exist, view vfszip:/C:/jboss-5.1.0.GA/server/default/deploy/quartz-ra.rar/META-INF/ra.xml
10:08:13,422 INFO [SimpleThreadPool] Job execution threads will use class loader of thread: main
10:08:13,442 INFO [QuartzScheduler] Quartz Scheduler v.1.5.2 created.
10:08:13,445 INFO [RAMJobStore] RAMJobStore initialized.
10:08:13,446 INFO [StdSchedulerFactory] Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
10:08:13,446 INFO [StdSchedulerFactory] Quartz scheduler version: 1.5.2
10:08:13,446 INFO [QuartzScheduler] Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
10:08:13,769 INFO [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=DataSourceBinding,name=DefaultDS' to JNDI name 'java:DefaultDS'
10:08:14,145 INFO [ServerPeer] JBoss Messaging 1.4.3.GA server [0] started
10:08:14,220 INFO [QueueService] Queue[/queue/ExpiryQueue] started, fullSize=200000, pageSize=2000, downCacheSize=2000
10:08:14,269 INFO [ConnectionFactory] Connector bisocket://127.0.0.1:4457 has leasing enabled, lease period 10000 milliseconds
10:08:14,269 INFO [ConnectionFactory] org.jboss.jms.server.connectionfactory.ConnectionFactory@4bb8e77c started
10:08:14,271 INFO [QueueService] Queue[/queue/DLQ] started, fullSize=200000, pageSize=2000, downCacheSize=2000
10:08:14,272 INFO [ConnectionFactory] Connector bisocket://127.0.0.1:4457 has leasing enabled, lease period 10000 milliseconds
10:08:14,272 INFO [ConnectionFactory] org.jboss.jms.server.connectionfactory.ConnectionFactory@28062593 started
10:08:14,272 INFO [ConnectionFactoryJNDIMapper] supportsFailover attribute is true on connection factory: jboss.messaging.connectionfactory:service=ClusteredConnectionFactory but post office is non clustered. So connection factory will *not* support failover
10:08:14,282 INFO [ConnectionFactoryJNDIMapper] supportsLoadBalancing attribute is true on connection factory: jboss.messaging.connectionfactory:service=ClusteredConnectionFactory but post office is non clustered. So connection factory will *not* support load balancing
10:08:14,283 INFO [ConnectionFactory] Connector bisocket://127.0.0.1:4457 has leasing enabled, lease period 10000 milliseconds
10:08:14,283 INFO [ConnectionFactory] org.jboss.jms.server.connectionfactory.ConnectionFactory@30726eca started
10:08:14,419 INFO [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=ConnectionFactoryBinding,name=JmsXA' to JNDI name 'java:JmsXA'
10:08:14,705 INFO [JBossASKernel] Created KernelDeployment for: profileservice-secured.jar
10:08:14,712 INFO [JBossASKernel] installing bean: jboss.j2ee:jar=profileservice-secured.jar,name=SecureProfileService,service=EJB3
10:08:14,712 INFO [JBossASKernel] with dependencies:
10:08:14,712 INFO [JBossASKernel] and demands:
10:08:14,712 INFO [JBossASKernel] jndi:SecureManagementView/remote-org.jboss.deployers.spi.management.ManagementView
10:08:14,712 INFO [JBossASKernel] jboss.ejb:service=EJBTimerService
10:08:14,713 INFO [JBossASKernel] and supplies:
10:08:14,713 INFO [JBossASKernel] Class:org.jboss.profileservice.spi.ProfileService
10:08:14,713 INFO [JBossASKernel] jndi:SecureProfileService/remote
10:08:14,713 INFO [JBossASKernel] jndi:SecureProfileService/remote-org.jboss.profileservice.spi.ProfileService
10:08:14,713 INFO [JBossASKernel] Added bean(jboss.j2ee:jar=profileservice-secured.jar,name=SecureProfileService,service=EJB3) to KernelDeployment of: profileservice-secured.jar
10:08:14,714 INFO [JBossASKernel] installing bean: jboss.j2ee:jar=profileservice-secured.jar,name=SecureDeploymentManager,service=EJB3
10:08:14,714 INFO [JBossASKernel] with dependencies:
10:08:14,714 INFO [JBossASKernel] and demands:
10:08:14,714 INFO [JBossASKernel] jboss.ejb:service=EJBTimerService
10:08:14,714 INFO [JBossASKernel] and supplies:
10:08:14,714 INFO [JBossASKernel] jndi:SecureDeploymentManager/remote-org.jboss.deployers.spi.management.deploy.DeploymentManager
10:08:14,714 INFO [JBossASKernel] Class:org.jboss.deployers.spi.management.deploy.DeploymentManager
10:08:14,714 INFO [JBossASKernel] jndi:SecureDeploymentManager/remote
10:08:14,714 INFO [JBossASKernel] Added bean(jboss.j2ee:jar=profileservice-secured.jar,name=SecureDeploymentManager,service=EJB3) to KernelDeployment of: profileservice-secured.jar
10:08:14,715 INFO [JBossASKernel] installing bean: jboss.j2ee:jar=profileservice-secured.jar,name=SecureManagementView,service=EJB3
10:08:14,715 INFO [JBossASKernel] with dependencies:
10:08:14,715 INFO [JBossASKernel] and demands:
10:08:14,715 INFO [JBossASKernel] jboss.ejb:service=EJBTimerService
10:08:14,715 INFO [JBossASKernel] and supplies:
10:08:14,715 INFO [JBossASKernel] jndi:SecureManagementView/remote-org.jboss.deployers.spi.management.ManagementView
10:08:14,715 INFO [JBossASKernel] Class:org.jboss.deployers.spi.management.ManagementView
10:08:14,715 INFO [JBossASKernel] jndi:SecureManagementView/remote
10:08:14,715 INFO [JBossASKernel] Added bean(jboss.j2ee:jar=profileservice-secured.jar,name=SecureManagementView,service=EJB3) to KernelDeployment of: profileservice-secured.jar
10:08:14,722 INFO [EJB3EndpointDeployer] Deploy AbstractBeanMetaData@6cdc5f76{name=jboss.j2ee:jar=profileservice-secured.jar,name=SecureProfileService,service=EJB3_endpoint bean=org.jboss.ejb3.endpoint.deployers.impl.EndpointImpl properties=[container] constructor=null autowireCandidate=true}
10:08:14,723 INFO [EJB3EndpointDeployer] Deploy AbstractBeanMetaData@240c5895{name=jboss.j2ee:jar=profileservice-secured.jar,name=SecureDeploymentManager,service=EJB3_endpoint bean=org.jboss.ejb3.endpoint.deployers.impl.EndpointImpl properties=[container] constructor=null autowireCandidate=true}
10:08:14,723 INFO [EJB3EndpointDeployer] Deploy AbstractBeanMetaData@1cfd0695{name=jboss.j2ee:jar=profileservice-secured.jar,name=SecureManagementView,service=EJB3_endpoint bean=org.jboss.ejb3.endpoint.deployers.impl.EndpointImpl properties=[container] constructor=null autowireCandidate=true}
10:08:14,877 INFO [SessionSpecContainer] Starting jboss.j2ee:jar=profileservice-secured.jar,name=SecureDeploymentManager,service=EJB3
10:08:14,887 INFO [EJBContainer] STARTED EJB: org.jboss.profileservice.ejb.SecureDeploymentManager ejbName: SecureDeploymentManager
10:08:15,010 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:
SecureDeploymentManager/remote - EJB3.x Default Remote Business Interface
SecureDeploymentManager/remote-org.jboss.deployers.spi.management.deploy.DeploymentManager - EJB3.x Remote Business Interface
10:08:15,055 INFO [SessionSpecContainer] Starting jboss.j2ee:jar=profileservice-secured.jar,name=SecureManagementView,service=EJB3
10:08:15,056 INFO [EJBContainer] STARTED EJB: org.jboss.profileservice.ejb.SecureManagementView ejbName: SecureManagementView
10:08:15,064 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:
SecureManagementView/remote - EJB3.x Default Remote Business Interface
SecureManagementView/remote-org.jboss.deployers.spi.management.ManagementView - EJB3.x Remote Business Interface
10:08:15,107 INFO [SessionSpecContainer] Starting jboss.j2ee:jar=profileservice-secured.jar,name=SecureProfileService,service=EJB3
10:08:15,108 INFO [EJBContainer] STARTED EJB: org.jboss.profileservice.ejb.SecureProfileServiceBean ejbName: SecureProfileService
10:08:15,150 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:
SecureProfileService/remote - EJB3.x Default Remote Business Interface
SecureProfileService/remote-org.jboss.profileservice.spi.ProfileService - EJB3.x Remote Business Interface
10:08:15,778 INFO [TomcatDeployment] deploy, ctxPath=/admin-console
10:08:15,930 INFO [config] Initializing Mojarra (1.2_12-b01-FCS) for context '/admin-console'
10:08:19,678 INFO [TomcatDeployment] deploy, ctxPath=/
10:08:23,577 INFO [TomcatDeployment] deploy, ctxPath=/WebDataViewer
10:08:23,825 INFO [config] Initializing Mojarra (1.2_12-b01-FCS) for context '/WebDataViewer'
10:08:24,812 INFO [TomcatDeployment] deploy, ctxPath=/jmx-console
10:08:25,069 INFO [Http11Protocol] Starting Coyote HTTP/1.1 on http-127.0.0.1-8080
10:08:25,093 INFO [AjpProtocol] Starting Coyote AJP/1.3 on ajp-127.0.0.1-8009
10:08:25,099 INFO [ServerImpl] JBoss (Microcontainer) [5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221634)] Started in 45s:387ms
Snippet from web.xml:
<servlet>
    <servlet-name>Faces Servlet</servlet-name>
    <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>Faces Servlet</servlet-name>
    <url-pattern>*.xhtml</url-pattern>
</servlet-mapping>
<welcome-file-list>
    <welcome-file>index.html</welcome-file>
</welcome-file-list>

The address http://localhost:8080/WebDataViewer just points to the context root of the application, so it will only work if you have a default page defined (such as index.jsp). If you have no default resource for the context root WebDataViewer, you'll have to specify a resource name (a resource can be a servlet, a JSP, a JSF page, ...), for example: http://localhost:8080/WebDataViewer/myservlet.
Usually you can look at the resources defined for your application in its WEB-INF/web.xml, for example:
<servlet-mapping>
    <servlet-name>MyServlet</servlet-name>
    <url-pattern>/myservlet</url-pattern>
</servlet-mapping>
In your case, according to your web.xml, it seems you're using JSF, so you should have some xhtml files in the root of your war (or at least in some subdirectory). For example, if your war is laid out like this:
WebDataViewer.war
|
|--page.xhtml
|
|--WEB-INF
| |
| |- web.xml
| |- ...
|- ...
A valid address would be: http://localhost:8080/WebDataViewer/page.xhtml.
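Since the Faces Servlet is mapped to *.xhtml, one option (a hedged suggestion, not part of the original answer) is to declare a physically existing xhtml page as the welcome file, so that the bare context URL http://localhost:8080/WebDataViewer resolves without naming a resource:
<!-- web.xml sketch: page.xhtml is the example file from the tree above -->
<welcome-file-list>
    <welcome-file>page.xhtml</welcome-file>
</welcome-file-list>
This should work because the welcome-file forward still passes through the *.xhtml servlet mapping, so the Faces Servlet handles the page.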

Related

Spark SBT compilation issue

In my compilation, even though I am placing the Twitter jar files in the src/main/resources folder, SBT is not picking them up: it compiles and packages without errors, but at run time it gives me the error "class not found twitterUtils".
My question is: why is SBT not including the jar files from the resources folder in the compilation?
People are telling me to do all these complex steps of getting the Git utility and then doing an sbt assembly, which I did, but since I am behind a proxy, Git is not working even though the http_proxy is all set up.
I have also tried putting these Twitter jar files in the CLASSPATH, with no luck.
I am stuck with this issue, so any help is highly appreciated.
Please see the details below.
[root@hadoop1 TwitterPopularTags]# pwd
/root/TwitterPopularTags
[root@hadoop1 TwitterPopularTags]# sbt compile
[info] Set current project to TwitterPopularTags (in build file:/root/TwitterPopularTags/)
[info] Updating {file:/root/TwitterPopularTags/}twitterpopulartags...
[info] Resolving jline#jline;2.12.1 ...
[info] Done updating.
[info] Compiling 2 Scala sources to /root/TwitterPopularTags/target/scala-2.11/classes...
[success] Total time: 14 s, completed Sep 16, 2016 9:55:20 AM
[root@hadoop1 TwitterPopularTags]# sbt package
[info] Set current project to TwitterPopularTags (in build file:/root/TwitterPopularTags/)
[info] Packaging /root/TwitterPopularTags/target/scala-2.11/twitterpopulartags_2.11-1.0.jar ...
[info] Done packaging.
[success] Total time: 1 s, completed Sep 16, 2016 9:56:20 AM
[root@hadoop1 TwitterPopularTags]# spark-submit /root/TwitterPopularTags/target/scala-2.11/twitterpopulartags_2.11-1.0.jar
16/09/16 09:57:06 INFO SparkContext: Running Spark version 1.6.2
16/09/16 09:57:06 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/09/16 09:57:06 INFO SecurityManager: Changing view acls to: root
16/09/16 09:57:06 INFO SecurityManager: Changing modify acls to: root
16/09/16 09:57:06 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/09/16 09:57:07 INFO Utils: Successfully started service 'sparkDriver' on port 53967.
16/09/16 09:57:07 INFO Slf4jLogger: Slf4jLogger started
16/09/16 09:57:07 INFO Remoting: Starting remoting
16/09/16 09:57:07 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.100.44.17:57877]
16/09/16 09:57:07 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 57877.
16/09/16 09:57:07 INFO SparkEnv: Registering MapOutputTracker
16/09/16 09:57:07 INFO SparkEnv: Registering BlockManagerMaster
16/09/16 09:57:07 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-47a89077-0926-447c-ada7-fdb4a9aa1b83
16/09/16 09:57:07 INFO MemoryStore: MemoryStore started with capacity 511.5 MB
16/09/16 09:57:07 INFO SparkEnv: Registering OutputCommitCoordinator
16/09/16 09:57:08 INFO Server: jetty-8.y.z-SNAPSHOT
16/09/16 09:57:08 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/09/16 09:57:08 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/09/16 09:57:08 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.100.44.17:4040
16/09/16 09:57:08 INFO HttpFileServer: HTTP File server directory is /tmp/spark-d56628b6-fdbf-4d89-bbd2-a96603000607/httpd-ee499eb3-00ae-4276-b163-423e3b81f0b4
16/09/16 09:57:08 INFO HttpServer: Starting HTTP Server
16/09/16 09:57:08 INFO Server: jetty-8.y.z-SNAPSHOT
16/09/16 09:57:08 INFO AbstractConnector: Started SocketConnector@0.0.0.0:56067
16/09/16 09:57:08 INFO Utils: Successfully started service 'HTTP file server' on port 56067.
16/09/16 09:57:08 INFO SparkContext: Added JAR file:/root/TwitterPopularTags/target/scala-2.11/twitterpopulartags_2.11-1.0.jar at http://10.100.44.17:56067/jars/twitterpopulartags_2.11-1.0.jar with timestamp 1474034228091
16/09/16 09:57:08 INFO Executor: Starting executor ID driver on host localhost
16/09/16 09:57:08 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 49715.
16/09/16 09:57:08 INFO NettyBlockTransferService: Server created on 49715
16/09/16 09:57:08 INFO BlockManagerMaster: Trying to register BlockManager
16/09/16 09:57:08 INFO BlockManagerMasterEndpoint: Registering block manager localhost:49715 with 511.5 MB RAM, BlockManagerId(driver, localhost, 49715)
16/09/16 09:57:08 INFO BlockManagerMaster: Registered BlockManager
16/09/16 09:57:08 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
16/09/16 09:57:08 INFO EventLoggingListener: Logging events to hdfs:///spark-history/local-1474034228122
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/streaming/twitter/TwitterUtils$
at dot.state.fl.us.PrintTweets$.main(PrintTweets.scala:29)
at dot.state.fl.us.PrintTweets.main(PrintTweets.scala)
my question is why SBT is not including the jar files from resource folder in the compilation ?
Because that's not what the resources folder is for. If you want to manage dependencies manually, put them into the lib folder instead. But in that case you also need to do the same with all the dependencies of those dependencies, their dependencies, and so on. Using managed dependencies, as described in the linked documentation, is generally a much better idea.
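For Spark 1.6.2 on Scala 2.11 (the versions visible in your output), a managed-dependency sketch of build.sbt might look like this; the artifact names are standard Spark modules, but double-check the versions against your cluster:
// build.sbt (sketch) - spark-core/streaming are "provided" by the cluster;
// spark-streaming-twitter must ship with (or alongside) your jar
name := "TwitterPopularTags"
version := "1.0"
scalaVersion := "2.11.8"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"              % "1.6.2" % "provided",
  "org.apache.spark" %% "spark-streaming"         % "1.6.2" % "provided",
  "org.apache.spark" %% "spark-streaming-twitter" % "1.6.2"
)
Since sbt package still produces a jar without its dependencies, you would also hand the Twitter module to spark-submit at run time, for example:
spark-submit --packages org.apache.spark:spark-streaming-twitter_2.11:1.6.2 \
  /root/TwitterPopularTags/target/scala-2.11/twitterpopulartags_2.11-1.0.jar
Note that --packages fetches from Maven Central, so it needs your proxy settings too; sbt assembly remains the offline alternative.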

Spark Cluster, failed to connect to master. (WARN Worker: Failed to connect to master)

I have a Spark cluster with 2 nodes, a master (172.17.0.229) and a slave (172.17.0.228). I have edited spark-env.sh to add SPARK_MASTER_IP=127.17.0.229, and added 172.17.0.228 to the slaves file.
I am starting my master node using start-master.sh and the slave node using start-slaves.sh.
I can see the web UI with a master node and no workers, but the worker node's log shows:
Spark Command: /usr/lib/jvm/java-7-oracle/jre/bin/java -cp /usr/local/src/spark-1.5.2-bin-hadoop2.6/sbin/../conf/:/usr/local/src/spark-1.5.2-bin-hadoop$
========================================
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/12/18 14:17:25 INFO Worker: Registered signal handlers for [TERM, HUP, INT]
15/12/18 14:17:26 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/12/18 14:17:26 INFO SecurityManager: Changing view acls to: ujjwal
15/12/18 14:17:26 INFO SecurityManager: Changing modify acls to: ujjwal
15/12/18 14:17:26 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ujjwal); users wit$
15/12/18 14:17:27 INFO Slf4jLogger: Slf4jLogger started
15/12/18 14:17:27 INFO Remoting: Starting remoting
15/12/18 14:17:27 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkWorker@172.17.0.228:47599]
15/12/18 14:17:27 INFO Utils: Successfully started service 'sparkWorker' on port 47599.
15/12/18 14:17:27 INFO Worker: Starting Spark worker 172.17.0.228:47599 with 2 cores, 2.7 GB RAM
15/12/18 14:17:27 INFO Worker: Running Spark version 1.5.2
15/12/18 14:17:27 INFO Worker: Spark home: /usr/local/src/spark-1.5.2-bin-hadoop2.6
15/12/18 14:17:27 INFO Utils: Successfully started service 'WorkerUI' on port 8081.
15/12/18 14:17:27 INFO WorkerWebUI: Started WorkerWebUI at http://172.17.0.228:8081
15/12/18 14:17:27 INFO Worker: Connecting to master 127.17.0.229:7077...
15/12/18 14:17:27 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkMaster@127.17.0.229:7077] has failed, address is now$
15/12/18 14:17:27 WARN Worker: Failed to connect to master 127.17.0.229:7077
akka.actor.ActorNotFound: Actor not found for: ActorSelection[Anchor(akka.tcp://sparkMaster@127.17.0.229:7077/), Path(/user/Master)]
at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:65)
at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:63)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:73)
at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:74)
at akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:120)
at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:73)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:266)
at akka.actor.EmptyLocalActorRef.specialHandle(ActorRef.scala:533)
at akka.actor.DeadLetterActorRef.specialHandle(ActorRef.scala:569)
at akka.actor.DeadLetterActorRef.$bang(ActorRef.scala:559)
at akka.remote.RemoteActorRefProvider$RemoteDeadLetterActorRef.$bang(RemoteActorRefProvider.scala:87)
at akka.remote.EndpointWriter.postStop(Endpoint.scala:557)
at akka.actor.Actor$class.aroundPostStop(Actor.scala:477)
at akka.remote.EndpointActor.aroundPostStop(Endpoint.scala:411)
at akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:210)
at akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:172)
at akka.actor.ActorCell.terminate(ActorCell.scala:369)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:462)
Thanks for your suggestion.
Generally, checking the IP that your worker is trying to connect to against the reported spark://...:7077 address on the web UI at 172.17.0.229 port 8080 will help identify whether the address is correct.
In this particular case, it looks like you have a typo; change
SPARK_MASTER_IP=127.17.0.229
to read:
SPARK_MASTER_IP=172.17.0.229
(you seem to have 127/172 inverted).
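A couple of quick checks from the worker machine can confirm which address the master is actually advertising (a sketch; nc may not be installed everywhere):
# The master web UI prints the exact spark://host:port URL to use
curl -s http://172.17.0.229:8080 | grep -o 'spark://[^"< ]*'
# Raw reachability test of the master port from the worker
nc -zv 172.17.0.229 7077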
My issue was a version mismatch between the Spark Java library I was using (2.0.0) and the version of the Spark cluster (2.2.1).

Spark hangs on authentication with a Docker Mesos cluster

I'm trying to simulate a multi-node Mesos cluster using Docker and Zookeeper and trying to run a simple (py)Spark job on top of it. These Docker containers and the pyspark script are all run on the same machine. However, when I execute my Spark script, it hangs at:
No credentials provided. Attempting to register without authentication
The Mesos slave constantly outputs:
I0929 14:59:32.925915 62 slave.cpp:1959] Asked to shut down framework 20150929-143802-1224741292-5050-33-0060 by master@172.17.0.73:5050
W0929 14:59:32.926035 62 slave.cpp:1974] Cannot shut down unknown framework 20150929-143802-1224741292-5050-33-0060
And the Mesos master constantly outputs:
I0929 14:38:15.169683 39 master.cpp:2094] Received SUBSCRIBE call for framework 'test' at scheduler-2f4e1e52-a04a-401f-b9aa-1253554fe73b@127.0.1.1:46693
I0929 14:38:15.169845 39 master.cpp:2164] Subscribing framework test with checkpointing disabled and capabilities [ ]
E0929 14:38:15.170361 42 socket.hpp:174] Shutdown failed on fd=15: Transport endpoint is not connected [107]
I0929 14:38:15.170409 36 hierarchical.hpp:391] Added framework 20150929-143802-1224741292-5050-33-0000
I0929 14:38:15.170534 39 master.cpp:1051] Framework 20150929-143802-1224741292-5050-33-0000 (test) at scheduler-2f4e1e52-a04a-401f-b9aa-1253554fe73b@127.0.1.1:46693 disconnected
I0929 14:38:15.170549 39 master.cpp:2370] Disconnecting framework 20150929-143802-1224741292-5050-33-0000 (test) at scheduler-2f4e1e52-a04a-401f-b9aa-1253554fe73b@127.0.1.1:46693
I0929 14:38:15.170555 39 master.cpp:2394] Deactivating framework 20150929-143802-1224741292-5050-33-0000 (test) at scheduler-2f4e1e52-a04a-401f-b9aa-1253554fe73b@127.0.1.1:46693
E0929 14:38:15.170560 42 socket.hpp:174] Shutdown failed on fd=16: Transport endpoint is not connected [107]
I0929 14:38:15.170593 39 master.cpp:1075] Giving framework 20150929-143802-1224741292-5050-33-0000 (test) at scheduler-2f4e1e52-a04a-401f-b9aa-1253554fe73b@127.0.1.1:46693 0ns to failover
W0929 14:38:15.170835 41 master.cpp:4482] Master returning resources offered to framework 20150929-143802-1224741292-5050-33-0000 because the framework has terminated or is inactive
I0929 14:38:15.170855 36 hierarchical.hpp:474] Deactivated framework 20150929-143802-1224741292-5050-33-0000
I0929 14:38:15.170990 37 hierarchical.hpp:814] Recovered cpus(*):8; mem(*):31092; disk(*):443036; ports(*):[31000-32000] (total: cpus(*):8; mem(*):31092; disk(*):443036; ports(*):[31000-32000], allocated: ) on slave 20150929-051336-1224741292-5050-19-S0 from framework 20150929-143802-1224741292-5050-33-0000
I0929 14:38:15.171820 41 master.cpp:4469] Framework failover timeout, removing framework 20150929-143802-1224741292-5050-33-0000 (test) at scheduler-2f4e1e52-a04a-401f-b9aa-1253554fe73b@127.0.1.1:46693
I0929 14:38:15.171835 41 master.cpp:5112] Removing framework 20150929-143802-1224741292-5050-33-0000 (test) at scheduler-2f4e1e52-a04a-401f-b9aa-1253554fe73b@127.0.1.1:46693
I0929 14:38:15.172130 41 hierarchical.hpp:428] Removed framework 20150929-143802-1224741292-5050-33-0000
The Mesos master Docker image is built with the following Dockerfile:
FROM ubuntu:14.04
ENV MESOS_V 0.24.0
# update
RUN apt-get update
RUN apt-get upgrade -y
# dependencies
RUN apt-get install -y wget openjdk-7-jdk build-essential python-dev python-boto libcurl4-nss-dev libsasl2-dev maven libapr1-dev libsvn-dev
# mesos
RUN wget http://www.apache.org/dist/mesos/${MESOS_V}/mesos-${MESOS_V}.tar.gz
RUN tar -zxf mesos-*.tar.gz
RUN rm mesos-*.tar.gz
RUN mv mesos-* mesos
WORKDIR mesos
RUN mkdir build
RUN ./configure
RUN make
RUN make install
RUN ldconfig
EXPOSE 5050
ENTRYPOINT ["/bin/bash"]
And I manually execute the mesos-master command:
LIBPROCESS_IP=${MASTER_IP} mesos-master --registry=in_memory --ip=${MASTER_IP} --zk=zk://172.17.0.75:2181/mesos --advertise_ip=${MASTER_IP}
The Mesos slave Docker image is built using the same Dockerfile except port 5051 is exposed instead. Then I run the following command in its container:
LIBPROCESS_IP=172.17.0.72 mesos-slave --master=zk://172.17.0.75:2181/mesos
The pyspark script is:
import os
import pyspark

src = 'file:///{}/README.md'.format(os.environ['SPARK_HOME'])
leader_ip = '172.17.0.75'

conf = pyspark.SparkConf()
# Discover the leading Mesos master through ZooKeeper
conf.setMaster('mesos://zk://{}:2181/mesos'.format(leader_ip))
# URI from which the executors fetch their Spark distribution
conf.set('spark.executor.uri', 'http://d3kbcqa49mib13.cloudfront.net/spark-1.5.0-bin-hadoop2.6.tgz')
conf.setAppName('my_test_app')
sc = pyspark.SparkContext(conf=conf)

# Simple word count over Spark's README
lines = sc.textFile(src)
words = lines.flatMap(lambda x: x.split(' '))
word_count = (words.map(lambda x: (x, 1)).reduceByKey(lambda x, y: x+y))
print(word_count.collect())
Here is the complete output of the pyspark script:
15/09/29 11:07:59 INFO SparkContext: Running Spark version 1.5.0
15/09/29 11:07:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/09/29 11:07:59 WARN Utils: Your hostname, hubble resolves to a loopback address: 127.0.1.1; using 192.168.1.2 instead (on interface em1)
15/09/29 11:07:59 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/09/29 11:07:59 INFO SecurityManager: Changing view acls to: ftseng
15/09/29 11:07:59 INFO SecurityManager: Changing modify acls to: ftseng
15/09/29 11:07:59 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ftseng); users with modify permissions: Set(ftseng)
15/09/29 11:08:00 INFO Slf4jLogger: Slf4jLogger started
15/09/29 11:08:00 INFO Remoting: Starting remoting
15/09/29 11:08:00 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.1.2:38860]
15/09/29 11:08:00 INFO Utils: Successfully started service 'sparkDriver' on port 38860.
15/09/29 11:08:00 INFO SparkEnv: Registering MapOutputTracker
15/09/29 11:08:00 INFO SparkEnv: Registering BlockManagerMaster
15/09/29 11:08:00 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-28695bd2-fc83-45f4-b0a0-eefcfb80a3b5
15/09/29 11:08:00 INFO MemoryStore: MemoryStore started with capacity 530.3 MB
15/09/29 11:08:00 INFO HttpFileServer: HTTP File server directory is /tmp/spark-89444c7a-725a-4454-87db-8873f4134580/httpd-341c3da9-16d5-43a4-93ee-0e8b47389fdb
15/09/29 11:08:00 INFO HttpServer: Starting HTTP Server
15/09/29 11:08:00 INFO Utils: Successfully started service 'HTTP file server' on port 51405.
15/09/29 11:08:00 INFO SparkEnv: Registering OutputCommitCoordinator
15/09/29 11:08:00 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/09/29 11:08:00 INFO SparkUI: Started SparkUI at http://192.168.1.2:4040
15/09/29 11:08:00 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
2015-09-29 11:08:00,651:32221(0x7fc09e17c700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
2015-09-29 11:08:00,651:32221(0x7fc09e17c700):ZOO_INFO@log_env@716: Client environment:host.name=hubble
2015-09-29 11:08:00,651:32221(0x7fc09e17c700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
2015-09-29 11:08:00,651:32221(0x7fc09e17c700):ZOO_INFO@log_env@724: Client environment:os.arch=3.19.0-25-generic
2015-09-29 11:08:00,651:32221(0x7fc09e17c700):ZOO_INFO@log_env@725: Client environment:os.version=#26-Ubuntu SMP Fri Jul 24 21:17:31 UTC 2015
2015-09-29 11:08:00,651:32221(0x7fc09e17c700):ZOO_INFO@log_env@733: Client environment:user.name=ftseng
2015-09-29 11:08:00,651:32221(0x7fc09e17c700):ZOO_INFO@log_env@741: Client environment:user.home=/home/ftseng
2015-09-29 11:08:00,651:32221(0x7fc09e17c700):ZOO_INFO@log_env@753: Client environment:user.dir=/home/ftseng
2015-09-29 11:08:00,651:32221(0x7fc09e17c700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=172.17.0.75:2181 sessionTimeout=10000 watcher=0x7fc0962b7176 sessionId=0 sessionPasswd=<null> context=0x7fc078001860 flags=0
I0929 11:08:00.651923 32328 sched.cpp:164] Version: 0.24.0
2015-09-29 11:08:00,652:32221(0x7fc06bfff700):ZOO_INFO@check_events@1703: initiated connection to server [172.17.0.75:2181]
2015-09-29 11:08:00,657:32221(0x7fc06bfff700):ZOO_INFO@check_events@1750: session establishment complete on server [172.17.0.75:2181], sessionId=0x150177fcfc40014, negotiated timeout=10000
I0929 11:08:00.658051 32322 group.cpp:331] Group process (group(1)@127.0.1.1:48692) connected to ZooKeeper
I0929 11:08:00.658083 32322 group.cpp:805] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0929 11:08:00.658100 32322 group.cpp:403] Trying to create path '/mesos' in ZooKeeper
I0929 11:08:00.659600 32326 detector.cpp:156] Detected a new leader: (id='2')
I0929 11:08:00.659904 32325 group.cpp:674] Trying to get '/mesos/json.info_0000000002' in ZooKeeper
I0929 11:08:00.661052 32326 detector.cpp:481] A new leading master (UPID=master@172.17.0.73:5050) is detected
I0929 11:08:00.661201 32320 sched.cpp:262] New master detected at master@172.17.0.73:5050
I0929 11:08:00.661798 32320 sched.cpp:272] No credentials provided. Attempting to register without authentication
After a lot more experimentation, it looks like it was an issue with the IP address of the host machine: the scheduler was using its local network address (192.168.xx.xx) when it should have been using its Docker host IP (172.17.xx.xx).
I managed to get things running with:
LIBPROCESS_IP=172.17.xx.xx python test_spark.py
I'm now hitting a different error, but it seems unrelated, so I think this command solves my problem.
I'm not familiar enough with Mesos/Spark yet to understand why this fixes things, so if someone wants to add an explanation, that would be very helpful.
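For reference, the same fix in a persistent form (a sketch; 172.17.xx.xx is the placeholder from above and should be the host's address on the Docker bridge):
# Tell libprocess (the Mesos communication layer) which address to advertise
export LIBPROCESS_IP=172.17.xx.xx
python test_spark.py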

JBoss 7.1.1 + JSF doesn't render the page

I'm developing, with IntelliJ IDEA 14, a JSF application which has to run under JBoss 7.1.1.
JBoss deploys and starts correctly (running in the IDE):
=========================================================================
[2015-03-11 09:44:31,277] Artifact bas_jsf:war exploded: Server is not connected. Deploy is not available.
Detected server admin port: 9999
JBoss Bootstrap Environment
JBOSS_HOME: /Users/Danil/IdeaProjects/jboss-as-7.1.1.Final
JAVA: /Library/Java/JavaVirtualMachines/jdk1.7.0_75.jdk/Contents/Home/bin/java
JAVA_OPTS: -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Djboss.server.default.config=standalone.xml
Detected server http port: 8080
=========================================================================
21:44:30,521 INFO [org.jboss.modules] JBoss Modules version 1.1.1.GA
21:44:31,582 INFO [org.jboss.msc] JBoss MSC version 1.0.2.GA
21:44:31,671 INFO [org.jboss.as] JBAS015899: JBoss AS 7.1.1.Final "Brontes" starting
21:44:33,068 INFO [org.xnio] XNIO Version 3.0.3.GA
21:44:33,070 INFO [org.jboss.as.server] JBAS015888: Creating http management service using socket-binding (management-http)
21:44:33,082 INFO [org.xnio.nio] XNIO NIO Implementation Version 3.0.3.GA
21:44:33,095 INFO [org.jboss.remoting] JBoss Remoting version 3.2.3.GA
21:44:33,125 INFO [org.jboss.as.logging] JBAS011502: Removing bootstrap log handlers
21:44:33,129 INFO [org.jboss.as.configadmin] (ServerService Thread Pool -- 26) JBAS016200: Activating ConfigAdmin Subsystem
21:44:33,160 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 31) JBAS010280: Activating Infinispan subsystem.
21:44:33,169 INFO [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 27) JBAS010403: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3)
21:44:33,199 INFO [org.jboss.as.connector] (MSC service thread 1-6) JBAS010408: Starting JCA Subsystem (JBoss IronJacamar 1.0.9.Final)
21:44:33,214 INFO [org.jboss.as.naming] (ServerService Thread Pool -- 38) JBAS011800: Activating Naming Subsystem
21:44:33,215 INFO [org.jboss.as.osgi] (ServerService Thread Pool -- 39) JBAS011940: Activating OSGi Subsystem
21:44:33,245 INFO [org.jboss.as.naming] (MSC service thread 1-4) JBAS011802: Starting Naming Service
21:44:33,259 INFO [org.jboss.as.security] (ServerService Thread Pool -- 44) JBAS013101: Activating Security Subsystem
21:44:33,266 INFO [org.jboss.as.security] (MSC service thread 1-3) JBAS013100: Current PicketBox version=4.0.7.Final
21:44:33,276 INFO [org.jboss.as.mail.extension] (MSC service thread 1-7) JBAS015400: Bound mail session [java:jboss/mail/Default]
21:44:33,305 INFO [org.jboss.as.webservices] (ServerService Thread Pool -- 48) JBAS015537: Activating WebServices Extension
21:44:33,481 INFO [org.jboss.ws.common.management.AbstractServerConfig] (MSC service thread 1-4) JBoss Web Services - Stack CXF Server 4.0.2.GA
21:44:33,676 INFO [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-7) Starting Coyote HTTP/1.1 on http--127.0.0.1-8080
log4j:WARN No appenders could be found for logger (org.jboss.logging).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
21:44:34,213 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-6) JBAS010400: Bound data source [java:jboss/datasources/ExampleDS]
21:44:34,457 INFO [org.jboss.as.server.deployment.scanner] (MSC service thread 1-5) JBAS015012: Started FileSystemDeploymentService for directory /Users/Danil/IdeaProjects/jboss-as-7.1.1.Final/standalone/deployments
21:44:34,471 INFO [org.jboss.as.remoting] (MSC service thread 1-1) JBAS017100: Listening on /127.0.0.1:4447
21:44:34,472 INFO [org.jboss.as.remoting] (MSC service thread 1-3) JBAS017100: Listening on /127.0.0.1:9999
21:44:34,623 INFO [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on http://127.0.0.1:9990
21:44:34,624 INFO [org.jboss.as] (Controller Boot Thread) JBAS015874: JBoss AS 7.1.1.Final "Brontes" started in 4696ms - Started 133 of 208 services (74 services are passive or on-demand)
Connected to server
[2015-03-11 09:44:35,152] Artifact bas_jsf:war exploded: Artifact is being deployed, please wait...
21:44:35,295 INFO [org.jboss.as.server.deployment] (MSC service thread 1-5) JBAS015876: Starting deployment of "bas_jsf_war_exploded.war"
21:44:35,595 INFO [javax.enterprise.resource.webcontainer.jsf.config] (MSC service thread 1-7) Initializing Mojarra 2.1.7-jbossorg-1 (20120227-1401) for context '/bas_jsf_war_exploded'
21:44:36,416 INFO [org.hibernate.validator.util.Version] (MSC service thread 1-7) Hibernate Validator 4.2.0.Final
21:44:36,612 INFO [org.jboss.web] (MSC service thread 1-7) JBAS018210: Registering web context: /bas_jsf_war_exploded
21:44:36,642 INFO [org.jboss.as.server] (management-handler-thread - 2) JBAS018559: Deployed "bas_jsf_war_exploded.war"
[2015-03-11 09:44:36,677] Artifact bas_jsf:war exploded: Artifact is deployed successfully
[2015-03-11 09:44:36,678] Artifact bas_jsf:war exploded: Deploy took 1,526 milliseconds
But the page that opens in the browser is blank, because it isn't rendered:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
xmlns:h="http://xmlns.jcp.org/jsf/html"
xmlns:ui="http://xmlns.jcp.org/jsf/facelets"
xmlns:f="http://xmlns.jcp.org/jsf/core">
<f:view>
<h:outputLabel value="Hello, world"></h:outputLabel>
</f:view>
</html>
In the project settings, the Mojarra 2.2.1 library is scoped as compile.
In the JBoss configuration, the JSF library jsf-api-1.2_15-jbossorg-2.jar is included.
web.xml:
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
version="3.1">
<servlet>
<servlet-name>Faces Servlet</servlet-name>
<servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
<load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>Faces Servlet</servlet-name>
<url-pattern>*.xhtml</url-pattern>
</servlet-mapping>
</web-app>

SPARK + Standalone Cluster: Cannot start worker from another machine

I've been setting up a Spark standalone cluster following this link. I have 2 machines; the first one (ubuntu0) serves as both the master and a worker, and the second one (ubuntu1) is just a worker. Password-less SSH has already been properly configured on both machines and was tested by SSHing manually in both directions.
Now when I tried ./start-all.sh, both the master and the worker on the master machine (ubuntu0) were started properly. This is signified by (1) the web UI being accessible (localhost:8081 in my case) and (2) the worker being registered/displayed on the web UI.
However, the other worker, on the second machine (ubuntu1), was not started. The error displayed was:
ubuntu1: ssh: connect to host ubuntu1 port 22: Connection timed out
Now this is quite weird already, given that I've properly configured password-less SSH on both sides. Given this, I accessed the second machine and tried to start the worker manually using these commands:
./spark-class org.apache.spark.deploy.worker.Worker spark://ubuntu0:7707
./spark-class org.apache.spark.deploy.worker.Worker spark://<ip>:7707
However, below is the result:
14/05/23 13:49:08 INFO Utils: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/05/23 13:49:08 WARN Utils: Your hostname, ubuntu1 resolves to a loopback address: 127.0.1.1; using 192.168.122.1 instead (on interface virbr0)
14/05/23 13:49:08 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
14/05/23 13:49:09 INFO Slf4jLogger: Slf4jLogger started
14/05/23 13:49:09 INFO Remoting: Starting remoting
14/05/23 13:49:09 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkWorker@ubuntu1.local:42739]
14/05/23 13:49:09 INFO Worker: Starting Spark worker ubuntu1.local:42739 with 8 cores, 4.8 GB RAM
14/05/23 13:49:09 INFO Worker: Spark home: /home/ubuntu1/jaysonp/spark/spark-0.9.1
14/05/23 13:49:09 INFO WorkerWebUI: Started Worker web UI at http://ubuntu1.local:8081
14/05/23 13:49:09 INFO Worker: Connecting to master spark://ubuntu0:7707...
14/05/23 13:49:29 INFO Worker: Connecting to master spark://ubuntu0:7707...
14/05/23 13:49:49 INFO Worker: Connecting to master spark://ubuntu0:7707...
14/05/23 13:50:09 ERROR Worker: All masters are unresponsive! Giving up.
Below are the contents of my master and slave/worker spark-env.sh:
SPARK_MASTER_IP=192.168.3.222
STANDALONE_SPARK_MASTER_HOST=`hostname -f`
How should I resolve this? Thanks in advance!
For those who are still encountering errors when it comes to starting workers on different machines, I just want to share that using IP addresses in conf/slaves worked for me.
Hope this helps!
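For illustration, conf/slaves with raw IPs would look something like this (the second address is a placeholder, since the question only gives the master's 192.168.3.222):
# conf/slaves - one worker host per line; second address is a placeholder
192.168.3.222
192.168.3.xxx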
I had similar issues today running Spark 1.5.1 on RHEL 6.7.
I have 2 machines; their hostnames are
- master.domain.com
- slave.domain.com
I installed a standalone version of Spark (pre-built against Hadoop 2.6) and installed Oracle jdk-8u66.
Spark download:
wget http://d3kbcqa49mib13.cloudfront.net/spark-1.5.1-bin-hadoop2.6.tgz
Java download
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u66-b17/jdk-8u66-linux-x64.tar.gz"
After Spark and Java were unpacked in my home directory, I did the following:
on 'master.domain.com' I ran:
./sbin/start-master.sh
The web UI became available at http://master.domain.com:8080 (no slave running).
on 'slave.domain.com' I tried:
./sbin/start-slave.sh spark://master.domain.com:7077, which FAILED as follows:
Spark Command: /root/java/bin/java -cp /root/spark-1.5.1-bin-hadoop2.6/sbin/../conf/:/root/spark-1.5.1-bin-hadoop2.6/lib/spark-assembly-1.5.1-hadoop2.6.0.jar:/root/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/root/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/root/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar -Xms1g -Xmx1g org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://master.domain.com:7077
========================================
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/11/06 11:03:51 INFO Worker: Registered signal handlers for [TERM, HUP, INT]
15/11/06 11:03:51 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/11/06 11:03:51 INFO SecurityManager: Changing view acls to: root
15/11/06 11:03:51 INFO SecurityManager: Changing modify acls to: root
15/11/06 11:03:51 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/11/06 11:03:52 INFO Slf4jLogger: Slf4jLogger started
15/11/06 11:03:52 INFO Remoting: Starting remoting
15/11/06 11:03:52 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkWorker@10.80.70.38:50573]
15/11/06 11:03:52 INFO Utils: Successfully started service 'sparkWorker' on port 50573.
15/11/06 11:03:52 INFO Worker: Starting Spark worker 10.80.70.38:50573 with 8 cores, 6.7 GB RAM
15/11/06 11:03:52 INFO Worker: Running Spark version 1.5.1
15/11/06 11:03:52 INFO Worker: Spark home: /root/spark-1.5.1-bin-hadoop2.6
15/11/06 11:03:53 INFO Utils: Successfully started service 'WorkerUI' on port 8081.
15/11/06 11:03:53 INFO WorkerWebUI: Started WorkerWebUI at http://10.80.70.38:8081
15/11/06 11:03:53 INFO Worker: Connecting to master master.domain.com:7077...
15/11/06 11:04:05 INFO Worker: Retrying connection to master (attempt # 1)
15/11/06 11:04:05 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[sparkWorker-akka.actor.default-dispatcher-4,5,main]
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@48711bf5 rejected from java.util.concurrent.ThreadPoolExecutor@14db705b[Running, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 1]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters$1.apply(Worker.scala:211)
at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters$1.apply(Worker.scala:210)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
at org.apache.spark.deploy.worker.Worker.org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters(Worker.scala:210)
at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$reregisterWithMaster$1.apply$mcV$sp(Worker.scala:288)
at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1119)
at org.apache.spark.deploy.worker.Worker.org$apache$spark$deploy$worker$Worker$$reregisterWithMaster(Worker.scala:234)
at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:521)
at org.apache.spark.rpc.akka.AkkaRpcEnv.org$apache$spark$rpc$akka$AkkaRpcEnv$$processMessage(AkkaRpcEnv.scala:177)
at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1$$anonfun$receiveWithLogging$1$$anonfun$applyOrElse$4.apply$mcV$sp(AkkaRpcEnv.scala:126)
at org.apache.spark.rpc.akka.AkkaRpcEnv.org$apache$spark$rpc$akka$AkkaRpcEnv$$safelyCall(AkkaRpcEnv.scala:197)
at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1$$anonfun$receiveWithLogging$1.applyOrElse(AkkaRpcEnv.scala:125)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:59)
at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1.aroundReceive(AkkaRpcEnv.scala:92)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
at akka.dispatch.Mailbox.run(Mailbox.scala:220)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
15/11/06 11:04:05 INFO ShutdownHookManager: Shutdown hook called
start-slave spark://<master-IP>:7077 also FAILED as above.
start-slave spark://master:7077 WORKED, and the worker shows up in the master web UI:
Spark Command: /root/java/bin/java -cp /root/spark-1.5.1-bin-hadoop2.6/sbin/../conf/:/root/spark-1.5.1-bin-hadoop2.6/lib/spark-assembly-1.5.1-hadoop2.6.0.jar:/root/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/root/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/root/spark-1.5.1-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar -Xms1g -Xmx1g org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://master:7077
========================================
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/11/06 11:08:15 INFO Worker: Registered signal handlers for [TERM, HUP, INT]
15/11/06 11:08:15 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/11/06 11:08:15 INFO SecurityManager: Changing view acls to: root
15/11/06 11:08:15 INFO SecurityManager: Changing modify acls to: root
15/11/06 11:08:15 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
15/11/06 11:08:16 INFO Slf4jLogger: Slf4jLogger started
15/11/06 11:08:16 INFO Remoting: Starting remoting
15/11/06 11:08:17 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkWorker@10.80.70.38:40780]
15/11/06 11:08:17 INFO Utils: Successfully started service 'sparkWorker' on port 40780.
15/11/06 11:08:17 INFO Worker: Starting Spark worker 10.80.70.38:40780 with 8 cores, 6.7 GB RAM
15/11/06 11:08:17 INFO Worker: Running Spark version 1.5.1
15/11/06 11:08:17 INFO Worker: Spark home: /root/spark-1.5.1-bin-hadoop2.6
15/11/06 11:08:17 INFO Utils: Successfully started service 'WorkerUI' on port 8081.
15/11/06 11:08:17 INFO WorkerWebUI: Started WorkerWebUI at http://10.80.70.38:8081
15/11/06 11:08:17 INFO Worker: Connecting to master master:7077...
15/11/06 11:08:17 INFO Worker: Successfully registered with master spark://master:7077
Note: I haven't added any extra config in conf/spark-env.sh.
Note 2: when looking at the master web UI, the Spark master URL shown at the top is actually the one that worked for me, so when in doubt, just use that one.
I hope this helps ;)
Using hostnames in conf/slaves worked well for me.
Here are some steps I would take:
Check the SSH public keys
scp /etc/spark/conf.dist/spark-env.sh to your workers
My settings in spark-env.sh:
export STANDALONE_SPARK_MASTER_HOST=`hostname`
export SPARK_MASTER_IP=$STANDALONE_SPARK_MASTER_HOST
I guess you missed something in your configuration; that's what I gather from your log.
Check your /etc/hosts: make sure ubuntu1 is in your master's hosts list and that its IP matches the slave's IP address (see the sketch below).
Add export SPARK_LOCAL_IP='ubuntu1' in the spark-env.sh file of your slave.
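A sketch of that /etc/hosts check on both machines (ubuntu1's address is a placeholder, as the question only gives the master's 192.168.3.222):
# /etc/hosts - same entries on master and slave
192.168.3.222   ubuntu0
192.168.3.xxx   ubuntu1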
