I'm trying to deploy an application developed with JHipster to AWS, but I get an error when I do. It looks like I'm unable to connect to the database.
I created a simple JSP page that connects directly to the database via JDBC, and it works fine, so it seems I'm doing something wrong when configuring the application.
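For reference, a minimal sketch of that kind of direct-JDBC smoke test (the URL, user, and password here are placeholders, not my real values):

import java.sql.Connection;
import java.sql.DriverManager;

public class RdsSmokeTest {
    public static void main(String[] args) throws Exception {
        // Connects straight to RDS with the MySQL driver, bypassing Spring entirely.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://rds_url:3306/db", "user", "password")) {
            System.out.println("Connected: " + conn.isValid(5));
        }
    }
}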
This is my current application.yml:
spring:
    profiles:
        active: prod
    datasource:
        dataSourceClassName: com.mysql.jdbc.jdbc2.optional.MysqlDataSource
        url: jdbc:mysql://rds_url:3306/db?user=user&password=password
        databaseName: db
        serverName: rds_url
        username: user
        password: password
        cachePrepStmts: true
        prepStmtCacheSize: 250
        prepStmtCacheSqlLimit: 2048
        useServerPrepStmts: true
And this is what I see in the logs:
Caused by: com.zaxxer.hikari.pool.PoolInitializationException: Exception during pool initialization
at com.zaxxer.hikari.pool.BaseHikariPool.initializeConnections(BaseHikariPool.java:542)
at com.zaxxer.hikari.pool.BaseHikariPool.<init>(BaseHikariPool.java:171)
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:60)
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:48)
at com.zaxxer.hikari.HikariDataSource.<init>(HikariDataSource.java:80)
at com.test.config.DatabaseConfiguration.dataSource(DatabaseConfiguration.java:83)
at com.test.config.DatabaseConfiguration$$EnhancerBySpringCGLIB$$b2394f0e.CGLIB$dataSource$1(<generated>)
at com.test.config.DatabaseConfiguration$$EnhancerBySpringCGLIB$$b2394f0e$$FastClassBySpringCGLIB$$e28d8821.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228)
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:309)
at com.test.config.DatabaseConfiguration$$EnhancerBySpringCGLIB$$b2394f0e.dataSource(<generated>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:162)
... 137 more
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:389)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1038)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:338)
at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2237)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2270)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2069)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:794)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:44)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:389)
at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:399)
at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:325)
at com.mysql.jdbc.jdbc2.optional.MysqlDataSource.getConnection(MysqlDataSource.java:422)
at com.mysql.jdbc.jdbc2.optional.MysqlDataSource.getConnection(MysqlDataSource.java:134)
at com.mysql.jdbc.jdbc2.optional.MysqlDataSource.getConnection(MysqlDataSource.java:105)
at com.zaxxer.hikari.pool.BaseHikariPool.addConnection(BaseHikariPool.java:438)
at com.zaxxer.hikari.pool.BaseHikariPool.initializeConnections(BaseHikariPool.java:540)
... 152 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:213)
at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:297)
... 169 more
Here's a link to more detailed logs.
Any help will be deeply appreciated.
Thanks,
Jodevan.
Alright, I made it work.
To test whether there was some problem in the yml configuration file, I decided to hard-code the database properties directly in the DatabaseConfiguration.java file, and it simply worked!
When the home page was rendered, to my surprise, it was in development mode!
I have no idea how this happened, since the package was supposed to be generated with the production profile activated. This is how the package was being generated, as described here:
mvn -Pprod package
But I forgot that I also need to set the active profile (spring.profiles.active) as an environment property in Tomcat.
By the way, since this is a Tomcat environment property, you need to restart your EBS environment for it to take effect.
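A minimal sketch of one way to set it, assuming a standard Tomcat layout (on Elastic Beanstalk the same JVM option can also be set through the console's software configuration):

# $CATALINA_HOME/bin/setenv.sh -- sourced by catalina.sh on startup
export CATALINA_OPTS="$CATALINA_OPTS -Dspring.profiles.active=prod"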
Last but not least, make sure your RDS instance has a security group that allows connections from the EC2 instance where your EBS environment runs.
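For example, a hedged sketch using the AWS CLI (both group IDs are placeholders for your actual security groups):

# Allow the EBS EC2 security group to reach MySQL on the RDS security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-rds-placeholder \
    --protocol tcp --port 3306 \
    --source-group sg-ec2-placeholder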
On a fresh install of Redhawk version 3, everything appears to be working, and interacting with the domain works fine through Python. I appreciate that the IDE has been deprecated, but I am wondering if there is a fix for an issue when inspecting the domain. When connecting to a domain, the IDE produces the following error:
Failed to fetch profile object from profile path: 'sca:///domain/DomainManager.dmd.xml?domain=REDHAWK_DEV&fs=IOR%3A000000000000001749444C3A43462F46696C654D616E616765723A312E300000000000010000000000000070010102000B00000031302E31312E322E3132000022CC00001C000000FF446F6D61696E4D616E61676572FEDD11376201003E5C00000000000200000000000000080000000100000000545441010000001C00000001000000010001000100000001000105090101000100000009010100#/'
The full stack trace indicates it is trying to access a file that doesn't exist (DomainManager.dmd.xml). Full stack:
org.eclipse.emf.ecore.resource.impl.ResourceSetImpl$1DiagnosticWrappedException: org.eclipse.core.runtime.CoreException: File cache error: /home/centos/workspace/.metadata/.plugins/gov.redhawk.sca.efs/fileCache/sdr_REDHAWK_DEV/dom/domain/DomainManager.dmd.xml
at org.eclipse.emf.ecore.resource.impl.ResourceSetImpl.handleDemandLoadException(ResourceSetImpl.java:319)
at org.eclipse.emf.ecore.resource.impl.ResourceSetImpl.demandLoadHelper(ResourceSetImpl.java:278)
at org.eclipse.emf.ecore.resource.impl.ResourceSetImpl.getResource(ResourceSetImpl.java:406)
at org.eclipse.emf.ecore.resource.impl.ResourceSetImpl.getEObject(ResourceSetImpl.java:220)
at gov.redhawk.model.sca.ProfileObjectWrapper$Util.fetchProfileObject(ProfileObjectWrapper.java:225)
at gov.redhawk.model.sca.impl.ScaPropertyContainerImpl.fetchProfileObject(ScaPropertyContainerImpl.java:379)
at gov.redhawk.model.sca.impl.ScaPropertyContainerImpl.fetchPropertyDefinitions(ScaPropertyContainerImpl.java:708)
at gov.redhawk.model.sca.impl.ScaPropertyContainerImpl.fetchProperties(ScaPropertyContainerImpl.java:611)
at gov.redhawk.model.sca.impl.ScaDomainManagerImpl.fetchAttributes(ScaDomainManagerImpl.java:2187)
at gov.redhawk.model.sca.impl.CorbaObjWrapperImpl.refresh(CorbaObjWrapperImpl.java:761)
at gov.redhawk.sca.model.provider.refresh.internal.RefresherSwitch$1.refresh(RefresherSwitch.java:90)
at gov.redhawk.sca.model.provider.refresh.internal.RefresherSwitch$1.refresh(RefresherSwitch.java:84)
at gov.redhawk.sca.model.provider.refresh.internal.RefreshTasker.refresh(RefreshTasker.java:264)
at gov.redhawk.sca.model.provider.refresh.internal.RefreshTasker$RefreshTask.run(RefreshTasker.java:214)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.eclipse.core.runtime.CoreException: File cache error: /home/centos/workspace/.metadata/.plugins/gov.redhawk.sca.efs/fileCache/sdr_REDHAWK_DEV/dom/domain/DomainManager.dmd.xml
at gov.redhawk.efs.sca.internal.cache.FileCache.downloadFile(FileCache.java:159)
at gov.redhawk.efs.sca.internal.cache.FileCache.update(FileCache.java:74)
at gov.redhawk.efs.sca.internal.cache.FileCache.openInputStream(FileCache.java:84)
at gov.redhawk.efs.sca.internal.ScaFileStore.openInputStream(ScaFileStore.java:306)
at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.eclipse.emf.ecore.resource.impl.EFSURIHandlerImpl.createInputStream(EFSURIHandlerImpl.java:249)
at org.eclipse.emf.ecore.resource.impl.ExtensibleURIConverterImpl.createInputStream(ExtensibleURIConverterImpl.java:360)
at org.eclipse.emf.ecore.resource.impl.ResourceImpl.load(ResourceImpl.java:1314)
at org.eclipse.emf.ecore.resource.impl.ResourceSetImpl.demandLoad(ResourceSetImpl.java:259)
at org.eclipse.emf.ecore.resource.impl.ResourceSetImpl.demandLoadHelper(ResourceSetImpl.java:274)
... 19 more
Caused by: java.util.concurrent.ExecutionException: org.omg.CORBA.MARSHAL: Server-side Exception: null vmcid: 0x41540000 minor code: 10 completed: No
at mil.jpeojtrs.sca.util.ProtectedThreadExecutor.submit(ProtectedThreadExecutor.java:86)
at gov.redhawk.efs.sca.internal.cache.FileCache.readProtected(FileCache.java:199)
at gov.redhawk.efs.sca.internal.cache.FileCache.copyLarge(FileCache.java:187)
at gov.redhawk.efs.sca.internal.cache.FileCache.downloadFile(FileCache.java:137)
... 30 more
Caused by: org.omg.CORBA.MARSHAL: Server-side Exception: null vmcid: 0x41540000 minor code: 10 completed: No
at sun.reflect.GeneratedConstructorAccessor26.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.jacorb.orb.SystemExceptionHelper.read(SystemExceptionHelper.java:222)
at org.jacorb.orb.ReplyReceiver.getReply(ReplyReceiver.java:456)
at org.jacorb.orb.Delegate._invoke_internal(Delegate.java:1419)
at org.jacorb.orb.Delegate.invoke_internal(Delegate.java:1188)
at org.jacorb.orb.Delegate.invoke(Delegate.java:1176)
at org.omg.CORBA.portable.ObjectImpl._invoke(ObjectImpl.java:475)
at CF._FileStub.read(_FileStub.java:66)
at gov.redhawk.efs.sca.internal.ScaFileInputStream.read(ScaFileInputStream.java:84)
at java.io.InputStream.read(InputStream.java:101)
at gov.redhawk.efs.sca.internal.cache.FileCache$1.call(FileCache.java:203)
at gov.redhawk.efs.sca.internal.cache.FileCache$1.call(FileCache.java:1)
at mil.jpeojtrs.sca.util.ProtectedThreadExecutor.submit(ProtectedThreadExecutor.java:84)
... 33 more
I confirmed the files exist on a Redhawk v2 system, and I removed and re-added the workspace directory.
It seems that the issue was fixed in v3.0.1 - https://github.com/RedhawkSDR/redhawk/releases/tag/3.0.1.
The effort in REDHAWK 3.0.1 focused on:
Replace log4j with reload4j in the Core Framework and in the IDE.
Resolved bug preventing fei3-generator from generating python code
Resolved bug preventing IDE from connecting to a running redhawk-3.0.0 domain
After running the command
spark-submit --class org.apache.spark.examples.SparkPi --proxy-user yarn --master yarn --deploy-mode cluster --driver-memory 4g --executor-memory 2g --executor-cores 1 --queue default ./examples/jars/spark-examples_2.11-2.3.0.jar 10000
I get this in the output, and it keeps on retrying. Where am I going wrong? Am I missing some configuration?
I have created a new user for YARN and am running the job as that user.
WARN Utils:66 - Your hostname, ukaleem-HP-EliteBook-850-G3 resolves to a loopback address: 127.0.1.1; using 10.XX.XX.XX instead (on interface enp0s31f6)
2018-06-14 16:50:41 WARN Utils:66 - Set SPARK_LOCAL_IP if you need to bind to another address
Warning: Local jar /home/yarn/Documents/Scala-Examples/./examples/jars/spark-examples_2.11-2.3.0.jar does not exist, skipping.
2018-06-14 16:50:42 INFO RMProxy:98 - Connecting to ResourceManager at /0.0.0.0:8032
2018-06-14 16:50:44 INFO Client:871 - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
And in the end, it gives this exception:
Exception in thread "main" java.net.ConnectException: Call From ukaleem-HP-EliteBook-850-G3/127.0.1.1 to 0.0.0.0:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.GeneratedConstructorAccessor4.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
at org.apache.hadoop.ipc.Client.call(Client.java:1479)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy8.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:206)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy9.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:487)
at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:155)
at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:155)
at org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
at org.apache.spark.deploy.yarn.Client.logInfo(Client.scala:59)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:154)
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1146)
at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1518)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:179)
at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:177)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:177)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
at org.apache.hadoop.ipc.Client.call(Client.java:1451)
... 28 more
2018-06-14 17:10:53 INFO ShutdownHookManager:54 - Shutdown hook called
2018-06-14 17:10:53 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-5bddb7f3-165f-451c-8ab4-bb7729f4237c
EDIT: After adding config files to my spark/conf dir, I get this error now.
The files I added are:
core-site.xml
dfs.hosts
masters
slaves
yarn-site.xml
And some more. What I understand is that I only need yarn-site.xml to tell Spark the location of the YARN cluster (IDs, addresses, hostnames, etc.).
All this time I had been thinking that even when we want to submit a job on YARN, these configs need to go in the /etc/hadoop dir, not in spark/conf. What's the purpose of installing Hadoop then (other than communicating)?
And a follow-up question: if the configs need to go in spark/conf, should HADOOP_CONF_DIR and YARN_CONF_DIR point to the etc/hadoop dir or to spark/conf?
INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
18/06/19 11:04:50 INFO retry.RetryInvocationHandler: Exception while invoking getClusterMetrics of class ApplicationClientProtocolPBClientImpl over rm2 after 1 fail over attempts. Trying to fail over after sleeping for 38176ms.
java.net.ConnectException: Call From ukaleem-HP-EliteBook-850-G3/127.0.1.1 to svc-hadoop-mgnt-pre-c2-01.jamba.net:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
at org.apache.hadoop.ipc.Client.call(Client.java:1479)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy13.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:206)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy14.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:487)
at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:155)
at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:155)
at org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
at org.apache.spark.deploy.yarn.Client.logInfo(Client.scala:59)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:154)
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1146)
at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1518)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:179)
at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:177)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:177)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
at org.apache.hadoop.ipc.Client.call(Client.java:1451)
... 29 more
Assuming you have a fully distributed YARN cluster: your spark-submit script is unable to find the configuration for the YARN ResourceManager (basically the YARN master node). Ensure you have HADOOP_CONF_DIR properly set in your environment, and that it points to your cluster's configuration, specifically your yarn-site.xml.
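For example, a minimal sketch, assuming your cluster's client configs live in /etc/hadoop/conf (adjust the path to wherever they actually live on your workstation):

export HADOOP_CONF_DIR=/etc/hadoop/conf
export YARN_CONF_DIR=/etc/hadoop/conf
spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster ./examples/jars/spark-examples_2.11-2.3.0.jar 10000

This also answers the follow-up question above: these variables point at the directory holding the Hadoop/YARN client configs (typically /etc/hadoop/conf), not at spark/conf; Spark reads them from there.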
Edit: more detail
The hadoop package comes with both server and client software. The server software consists of the many daemons that make up the cluster. If your workstation is acting as a client (using that term loosely, not fully related to Spark's --deploy-mode), then the hadoop client software must know the network locations of the server daemons running in the cluster. If your yarn-site.xml is empty, then it is pulling its default values from yarn-default.xml (which is hard-coded, I believe).
Assuming your cluster is not running in HA mode, and is a mostly default configuration, then your workstation's yarn-site.xml should contain at least an entry like the following:
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>rm-host.yourdomain.com</value>
</property>
Obviously, replace the hostname with the hostname where your actual resource manager is running. Of course, any spark interaction with HDFS will require a properly configured hdfs-site.xml, etc.
Some cluster managing software will have something like "generate client configs" (thinking of my cloudera experience specifically), which will give you a .tar.gz with all of the config files correctly populated to access the cluster from an external workstation.
Further recommendations:
If you plan to do Spark on YARN a lot in this cluster, Spark recommends making sure that you have the external shuffle service configured to launch with your YARN node managers. (Please bear in mind, this config directive would have to be present in the yarn-site.xml where YARN's node manager services are running, not on your workstation.)
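As a hedged sketch based on the Spark "Running on YARN" documentation, the node-manager-side entries would look something like this:

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
    <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>

The spark-<version>-yarn-shuffle.jar must also be on the node managers' classpath, and the node managers restarted, for the service to load.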
If you are running this on your local machine, update your /etc/hosts file and map your hostname to 127.0.0.1.
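For example (hostname taken from the logs above; adjust to your own):

127.0.0.1    ukaleem-HP-EliteBook-850-G3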
I am trying to run a Spark job on a cluster that creates a JanusGraph.
I have an instance of JanusGraph Server, Cassandra, and ES running on a single machine; only the Spark computation occurs on the cluster. (Basically, I did a janusgraph.sh start on that machine.)
My configuration is as follows (x is the IP of the machine I am running the above instances on):
def getGraph(): JanusGraph = {
    val config = JanusGraphFactory.build()
    config.set("storage.backend", "cassandrathrift")
    config.set("storage.cassandrathrift.keyspace", "jgex")
    config.set("storage.hostname", "x")
    config.set("index.jgex.backend", "elasticsearch")
    config.set("index.jgex.index-name", "jgex")
    config.set("jgex.hostname", "x")
    config.open()
}
But when I do a spark-submit of the fat jar on the cluster, I get this:
java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.cassandra.thrift.CassandraThriftStoreManager
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:69)
at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:477)
at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:409)
at org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1376)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:164)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:133)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:123)
at org.janusgraph.core.JanusGraphFactory$Builder.open(JanusGraphFactory.java:264)
at janus_create$.getGraph(janus_create.scala:66)
at janus_create$.makePropertiesandIndexes(janus_create.scala:830)
at janus_create$.main(janus_create.scala:921)
at janus_create.main(janus_create.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:627)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:58)
... 16 more
Caused by: org.janusgraph.diskstorage.TemporaryBackendException: Temporary failure in storage backend
at org.janusgraph.diskstorage.cassandra.thrift.CassandraThriftStoreManager.getCassandraPartitioner(CassandraThriftStoreManager.java:219)
at org.janusgraph.diskstorage.cassandra.thrift.CassandraThriftStoreManager.<init>(CassandraThriftStoreManager.java:198)
... 21 more
Caused by: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
at org.apache.thrift.transport.TSocket.open(TSocket.java:187)
at org.apache.thrift.transport.TFramedTransport.open(TFramedTransport.java:81)
at org.janusgraph.diskstorage.cassandra.thrift.thriftpool.CTConnectionFactory.makeRawConnection(CTConnectionFactory.java:110)
at org.janusgraph.diskstorage.cassandra.thrift.thriftpool.CTConnectionFactory.makeObject(CTConnectionFactory.java:74)
at org.janusgraph.diskstorage.cassandra.thrift.thriftpool.CTConnectionFactory.makeObject(CTConnectionFactory.java:43)
at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1179)
at org.janusgraph.diskstorage.cassandra.thrift.CassandraThriftStoreManager.getCassandraPartitioner(CassandraThriftStoreManager.java:216)
... 22 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.thrift.transport.TSocket.open(TSocket.java:182)
... 28 more
I tried switching between cassandra and cassandrathrift, but neither worked. Also, where do I specify where my Gremlin Server is running? Is that relevant?
The prepackaged distribution assumes a single localhost node each of Cassandra, Elasticsearch, and Gremlin Server. Notice the java.net.ConnectException: Connection refused at the bottom of your stack trace. If you have Spark running on a remote cluster, you need to ensure that the servers are available on a non-localhost address.
Stop the distribution first with bin/janusgraph.sh stop
Update the listen_address and rpc_address in $JANUSGRAPH_HOME/conf/cassandra/cassandra.yaml using the IP address of the machine (Cassandra docs)
Add network.host to $JANUSGRAPH_HOME/elasticsearch/config/elasticsearch.yml using the IP address of the machine (Elasticsearch docs)
Update the host in $JANUSGRAPH_HOME/conf/gremlin-server/gremlin-server.yaml using the IP address of the machine (JanusGraph docs)
Assuming you are using the default Gremlin Server configuration gremlin-server.yaml, you need to update the properties file in $JANUSGRAPH_HOME/conf/gremlin-server/janusgraph-cassandra-es-server.properties using the IP address of the machine. Update storage.hostname and index.search.hostname using the IP address of the machine, matching the server settings above.
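A combined sketch of the edits from the steps above, using 10.0.0.5 as a stand-in for the machine's IP address (substitute your actual address):

# conf/cassandra/cassandra.yaml
listen_address: 10.0.0.5
rpc_address: 10.0.0.5

# elasticsearch/config/elasticsearch.yml
network.host: 10.0.0.5

# conf/gremlin-server/gremlin-server.yaml
host: 10.0.0.5

# conf/gremlin-server/janusgraph-cassandra-es-server.properties
storage.backend=cassandrathrift
storage.hostname=10.0.0.5
index.search.backend=elasticsearch
index.search.hostname=10.0.0.5

After editing, restart with bin/janusgraph.sh start and confirm each service is listening on the new address.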
Update: It seems that there are errors in your graph connection properties that might be contributing to the problem:
config.set("storage.cassandrathrift.keyspace", "jgex") -- should be "storage.cassandra.keyspace"
config.set("jgex.hostname", "x") -- should be "index.jgex.hostname"
I'm trying to configure the Mylyn GitLab Connector for Eclipse Oxygen v4.7.1a, but when I try to add a new task it throws an exception and does not let me continue with the creation of the new task.
I entered my data and the URL of the GitLab repository correctly, and I even tried several URLs; all of them throw the same exception.
Exception Stack Trace:
java.lang.reflect.InvocationTargetException
at org.eclipse.jface.operation.ModalContext.run(ModalContext.java:398)
at org.eclipse.jface.wizard.WizardDialog.run(WizardDialog.java:980)
at org.eclipse.mylyn.tasks.ui.wizards.NewTaskWizard.performFinish(NewTaskWizard.java:113)
at org.eclipse.mylyn.internal.tasks.ui.wizards.SelectRepositoryPage.performFinish(SelectRepositoryPage.java:295)
at org.eclipse.mylyn.internal.tasks.ui.wizards.MultiRepositoryAwareWizard.performFinish(MultiRepositoryAwareWizard.java:53)
at org.eclipse.jface.wizard.WizardDialog.finishPressed(WizardDialog.java:778)
at org.eclipse.jface.wizard.WizardDialog.buttonPressed(WizardDialog.java:417)
at org.eclipse.jface.dialogs.Dialog.lambda$0(Dialog.java:619)
at org.eclipse.swt.events.SelectionListener$1.widgetSelected(SelectionListener.java:81)
at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:249)
at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:86)
at org.eclipse.swt.widgets.Display.sendEvent(Display.java:5268)
at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1348)
at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:4522)
at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:4107)
at org.eclipse.jface.window.Window.runEventLoop(Window.java:818)
at org.eclipse.jface.window.Window.open(Window.java:794)
at org.eclipse.mylyn.tasks.ui.TasksUiUtil.openNewTaskEditor(TasksUiUtil.java:237)
at org.eclipse.mylyn.tasks.ui.TasksUiUtil.openNewTaskEditor(TasksUiUtil.java:253)
at org.eclipse.mylyn.internal.tasks.ui.actions.NewTaskAction.run(NewTaskAction.java:95)
at org.eclipse.mylyn.internal.tasks.ui.actions.NewTaskAction.run(NewTaskAction.java:102)
at org.eclipse.ui.internal.PluginAction.runWithEvent(PluginAction.java:247)
at org.eclipse.jface.action.ActionContributionItem.handleWidgetSelection(ActionContributionItem.java:565)
at org.eclipse.jface.action.ActionContributionItem.lambda$4(ActionContributionItem.java:397)
at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:86)
at org.eclipse.swt.widgets.Display.sendEvent(Display.java:5268)
at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1348)
at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:4522)
at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:4107)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$5.run(PartRenderingEngine.java:1150)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:336)
at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.run(PartRenderingEngine.java:1039)
at org.eclipse.e4.ui.internal.workbench.E4Workbench.createAndRunUI(E4Workbench.java:153)
at org.eclipse.ui.internal.Workbench.lambda$3(Workbench.java:680)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:336)
at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:594)
at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:148)
at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:151)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:134)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:388)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:243)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:653)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:590)
at org.eclipse.equinox.launcher.Main.run(Main.java:1499)
at org.eclipse.equinox.launcher.Main.main(Main.java:1472)
Caused by: java.lang.Error: de.weingardt.mylyn.gitlab.core.exceptions.GitlabException: Invalid path in host
at de.weingardt.mylyn.gitlab.core.GitlabTaskDataHandler.getAttributeMapper(GitlabTaskDataHandler.java:67)
at org.eclipse.mylyn.internal.tasks.ui.util.TasksUiInternal.createTaskData(TasksUiInternal.java:872)
at org.eclipse.mylyn.tasks.ui.wizards.NewTaskWizard$1.run(NewTaskWizard.java:103)
at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:119)
Caused by: de.weingardt.mylyn.gitlab.core.exceptions.GitlabException: Invalid path in host
at de.weingardt.mylyn.gitlab.core.exceptions.GitlabExceptionHandler.handle(GitlabExceptionHandler.java:20)
at de.weingardt.mylyn.gitlab.core.ConnectionManager.validate(ConnectionManager.java:139)
at de.weingardt.mylyn.gitlab.core.ConnectionManager.get(ConnectionManager.java:159)
at de.weingardt.mylyn.gitlab.core.ConnectionManager.get(ConnectionManager.java:45)
at de.weingardt.mylyn.gitlab.core.GitlabTaskDataHandler.getAttributeMapper(GitlabTaskDataHandler.java:65)
... 3 more
Any suggestions on how to solve this problem?
Right-click on the Mylyn repository and select Repository -> Properties.
Expand Additional Settings and check "Use private token".
This works well for me on Eclipse Oxygen.2 Release (4.7.2).
I have installed AppDynamics Lite on my server, and it worked fine when I ran my Tomcat instance as root. But since I created a new user "tomcat" and started running Apache Tomcat as this user, I am not able to run AppDynamics. I have copied the javaagent to /home/tomcat/profiler/AppServerLite with all rights (read, write, execute) for tomcat. It throws an exception as follows:
Install Directory resolved to[/home/tomcat/profiler/AppServerLite]
java.lang.RuntimeException: Invalid Agent Installation Directory [/home/tomcat/profiler/AppServerLite]
at com.singularity.ee.agent.appagent.AgentEntryPoint.addThirdPartyURLs(AgentEntryPoint.java:190)
at com.singularity.ee.agent.appagent.AgentEntryPoint.premain(AgentEntryPoint.java:144)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:343)
at sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:358)
18 Apr, 2013 1:57:03 PM org.apache.catalina.core.AprLifecycleListener init
You should copy all of the files needed to run the agent, not just javaagent.jar.
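A hedged sketch of what that looks like (the zip name here is hypothetical; use whatever archive you downloaded from AppDynamics):

# Unpack the full agent distribution, conf/ directory and all, not just the jar
unzip AppDynamicsLite.zip -d /home/tomcat/profiler/AppServerLite
chown -R tomcat:tomcat /home/tomcat/profiler/AppServerLite
# Then point Tomcat at it, e.g. in $CATALINA_HOME/bin/setenv.sh:
export CATALINA_OPTS="$CATALINA_OPTS -javaagent:/home/tomcat/profiler/AppServerLite/javaagent.jar"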
Here is a thread about it: http://appsphere.appdynamics.com/t5/Lite-for-Java/Keep-getting-a-quot-Invalid-Agent-Installation-Directory-quot-on/td-p/1151
A great place to ask this question would be the AppDynamics discussion forums, where AppDynamics support can answer you directly: http://appsphere.appdynamics.com/t5/Discussions/ct-p/Discussions
I'm guessing there is a permissions issue somewhere.