There is an SFTP server, and I have read/write access to one particular folder on it. I am not allowed to access any other location (folder) on this server.
I can connect to this server with the WinSCP client or PuTTY and successfully read from and write to the folder assigned to me.
I am using user/password-based authentication.
When I try to connect to the same server and location using Spring Integration, I get the following exception.
Caused by: java.lang.IllegalStateException: failed to create SFTP Session
at org.springframework.integration.sftp.session.DefaultSftpSessionFactory.getSession(DefaultSftpSessionFactory.java:354)
at org.springframework.integration.file.remote.RemoteFileTemplate.execute(RemoteFileTemplate.java:300)
... 13 more
Caused by: java.lang.IllegalStateException: failed to connect
at org.springframework.integration.sftp.session.SftpSession.connect(SftpSession.java:250)
at org.springframework.integration.sftp.session.DefaultSftpSessionFactory.getSession(DefaultSftpSessionFactory.java:349)
... 14 more
Caused by: com.jcraft.jsch.JSchException: SSH_MSG_DISCONNECT: 2 Too many authentication failures for root
at com.jcraft.jsch.Session.read(Session.java:987)
at com.jcraft.jsch.UserAuthPassword.start(UserAuthPassword.java:91)
at com.jcraft.jsch.Session.connect(Session.java:463)
at com.jcraft.jsch.Session.connect(Session.java:183)
at org.springframework.integration.sftp.session.SftpSession.connect(SftpSession.java:241)
The configuration I am using is:
<bean id="sftpServerFactory" class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
<property name="host" value="${serverLocation}"/>
<property name="port" value="${port}"/>
<property name="user" value = "${username}"/>
<property name="password" value="${userpassword}"/>
</bean>
Please guide me in solving this issue.
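For reference, JSch raises "Too many authentication failures" when the server drops the connection after too many authentication attempts, often because key-based mechanisms are tried before the password. Note also that the trace reports failures for user root, so it is worth verifying that `${username}` actually resolves to the intended account. A hedged sketch (assuming the `sessionConfig` property of `DefaultSftpSessionFactory`, which passes JSch session options) that restricts authentication to password only:

```xml
<bean id="sftpServerFactory" class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
    <property name="host" value="${serverLocation}"/>
    <property name="port" value="${port}"/>
    <property name="user" value="${username}"/>
    <property name="password" value="${userpassword}"/>
    <!-- Restrict JSch to password auth so public-key attempts do not
         exhaust the server's allowed authentication tries. -->
    <property name="sessionConfig">
        <props>
            <prop key="PreferredAuthentications">password</prop>
        </props>
    </property>
</bean>
```

This is a sketch under the assumption that the failures come from extra authentication mechanisms being offered; if the server-side limit is simply very low, the server's `MaxAuthTries` setting may also need attention.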
I'm trying to store a Spark in-memory table into Ignite. When I try to do that, I get this error message:
Failed to find affinity server node with data storage configuration for starting cache [cacheName=SQL_PUBLIC_JSON_TBL, aliveSrvNodes=[]].
I'm running it on an HDP cluster set up on an EC2 machine. The same code works perfectly on the cluster machine here, but not on the EC2 machine.
Thanks in advance.
UPDATE:
I'm using Spark shell. Here's the code.
val df = sqlContext.read.json("~/responses")
val s = df.select("response.id","response.name")
s.write.format(IgniteDataFrameSettings.FORMAT_IGNITE)
  .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE, "~/apache-ignite-fabric-2.6.0-bin/examples/config/spark/example-shared-rdd.xml")
  .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS, "id")
  .option(IgniteDataFrameSettings.OPTION_TABLE, "json_table")
  .save()
Here is the xml config file that I use for my single Ignite server:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="cacheConfiguration">
<!-- SharedRDD cache example configuration (Atomic mode). -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<!-- Set a cache name. -->
<property name="name" value="sharedRDD"/>
<!-- Set a cache mode. -->
<property name="cacheMode" value="PARTITIONED"/>
<!-- Index Integer pairs used in the example. -->
<property name="indexedTypes">
<list>
<value>java.lang.Integer</value>
<value>java.lang.Integer</value>
</list>
</property>
<!-- Set atomicity mode. -->
<property name="atomicityMode" value="ATOMIC"/>
<!-- Configure a number of backups. -->
<property name="backups" value="1"/>
</bean>
</property>
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>127.0.0.1:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
Here's the full log.
18/10/30 12:32:54 WARN GridDiagnostic: Initial heap size is 252MB (should be no less than 512MB, use -Xms512m -Xmx512m).
18/10/30 12:32:54 WARN TcpCommunicationSpi: Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
18/10/30 12:32:55 WARN NoopCheckpointSpi: Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
18/10/30 12:32:55 WARN GridCollisionManager: Collision resolution is disabled (all jobs will be activated upon arrival).
18/10/30 12:32:57 WARN TcpDiscoverySpi: Failed to read message due to ClassNotFoundException (make sure same versions of all classes are available on all nodes) [rmtNodeId=3085dfa9-58ba-4ac0-a7f8-f78e2901a699, err=o.a.i.i.processors.hadoop.HadoopAttributes]
18/10/30 12:32:57 WARN IgniteAuthenticationProcessor: Cannot find the server coordinator node. Possible a client is started with forceServerMode=true. Security warning: user authentication will be disabled on the client.
18/10/30 12:32:58 ERROR ClusterCachesInfo: Failed to find affinity server node with data storage configuration for starting cache [cacheName=SQL_PUBLIC_JSON_TBL6, aliveSrvNodes=[]]
18/10/30 12:32:58 WARN CacheAffinitySharedManager: No server nodes found for cache client: SQL_PUBLIC_JSON_TBL
The log contains the following record:
12:32:57 WARN TcpDiscoverySpi: Failed to read message due to ClassNotFoundException (make sure same versions of all classes are available on all nodes) [rmtNodeId=3085dfa9-58ba-4ac0-a7f8-f78e2901a699, err=o.a.i.i.processors.hadoop.HadoopAttributes]
It says that a discovery message cannot be deserialized because the HadoopAttributes class is not on the classpath. This can lead to connectivity problems and affect the ability of nodes to see each other.
Make sure that all nodes have the ignite-hadoop module on their classpath, or get rid of this dependency.
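If the Hadoop integration is intended, a sketch of the Maven dependency to put on each node's classpath (the version is an assumption, matching the 2.6.0 release visible in the config path above; it should match the Ignite version actually in use):

```xml
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-hadoop</artifactId>
    <!-- must match the Ignite version on every node -->
    <version>2.6.0</version>
</dependency>
```

If the Hadoop integration is not needed, removing the module from the nodes that do have it is the simpler route, so that all nodes exchange the same set of discovery attributes.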
I'm using GemFire as a metadata store:
<gfe:cache/>
<gfe:replicated-region id="region" />
<bean id="metadataStore" class="org.springframework.integration.metadata.PropertiesPersistingMetadataStore"/>
<bean id="compositeFilter" class="org.springframework.integration.file.filters.CompositeFileListFilter">
<constructor-arg>
<list>
<bean id="filterAllFiles" class="id.lsa.scb.spring.integration.filter.EntireFileFilter">
<property name="adrUtil" ref="pojoUtil"/>
</bean>
<bean id="acceptOnceFilter"
class="org.springframework.integration.file.filters.FileSystemPersistentAcceptOnceFileListFilter">
<constructor-arg name="store" ref="metadataStore"/>
<constructor-arg name="prefix" value="test-"/>
<property name="flushOnUpdate" value="true"/>
</bean>
</list>
</constructor-arg>
</bean>
but I got this error:
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'gemfireCache': FactoryBean threw exception on object creation; nested exception is com.gemstone.gemfire.SystemConnectException: Unable to find a free port in the membership-port-range
at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:175)
at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:103)
at org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1590)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:254)
at org
How can I solve that? Can I set the port manually?
How can I read metadata from GemFire manually? Can I use something like an SQL query?
That is more a question for GemFire. I'm not sure where you can find their forum, mailing list, or other support channels.
The message
Unable to find a free port in the membership-port-range
refers to the membership-port-range property, which you can configure in gemfire.properties: http://gemfire702.docs.pivotal.io/7.0.2/userguide/reference/topics/gemfire_properties.html#gemfire_properties
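As an illustrative sketch, the property goes into gemfire.properties as a simple range; the values below are an assumption and should be replaced with ports that are actually free and open between your members:

```properties
# gemfire.properties
# Restrict peer membership ports to a range known to be free/open
# in this environment (example values only).
membership-port-range=10901-10920
```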
From your config, I don't see that you actually use GemfireMetadataStore.
I think you can read that metadata manually and even use queries. GemfireMetadataStore is fully based on the injected Region<String, String>: http://docs.spring.io/spring-integration/reference/html/gemfire.html#gemfire-metadata-store
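For what it's worth, a sketch of wiring the actual GemfireMetadataStore against the replicated region declared in the question (bean names taken from that config; the Region-based constructor is per the linked docs):

```xml
<!-- Replaces the PropertiesPersistingMetadataStore bean: back the
     metadata store with the GemFire replicated region instead. -->
<bean id="metadataStore" class="org.springframework.integration.gemfire.metadata.GemfireMetadataStore">
    <constructor-arg ref="region"/>
</bean>
```

With that in place, the entries written by FileSystemPersistentAcceptOnceFileListFilter live in the region as String key/value pairs, which is what makes querying them directly possible.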
I installed CassandraVM-2.0.7.ova. Which version of Kundera works properly with this version of Cassandra? I'm facing too many issues trying to get this information. It could be my mistake in configuration. Is this information documented somewhere?
I tried with this maven dependency:
<dependency>
<groupId>com.impetus.kundera.client</groupId>
<artifactId>kundera-cassandra</artifactId>
<version>3.2</version>
</dependency>
The properties in persistence.xml are as follows:
<persistence-unit name="cassandra_pu">
<provider>com.impetus.kundera.KunderaPersistence</provider>
<properties>
<property name="kundera.nodes" value="a.b.com" />
<property name="kundera.port" value="9042"/>
<property name="kundera.keyspace" value="KunderaExamples" />
<property name="kundera.dialect" value="cassandra" />
<property name="kundera.client.lookup.class" value="com.impetus.client.cassandra.thrift.ThriftClientFactory" />
<property name="kundera.ddl.auto.prepare" value="create" />
</properties>
</persistence-unit>
While trying to execute the below statement,
EntityManagerFactory emf = Persistence.createEntityManagerFactory("cassandra_pu");
this exception occurred:
Exception in thread "main" com.impetus.kundera.configure.schema.SchemaGenerationException: org.apache.thrift.transport.TTransportException: Read a negative frame size (-2113929216)!
at com.impetus.client.cassandra.schemamanager.CassandraSchemaManager.create(CassandraSchemaManager.java:264)
at com.impetus.kundera.configure.schema.api.AbstractSchemaManager.handleOperations(AbstractSchemaManager.java:264)
at com.impetus.kundera.configure.schema.api.AbstractSchemaManager.exportSchema(AbstractSchemaManager.java:115)
at com.impetus.client.cassandra.schemamanager.CassandraSchemaManager.exportSchema(CassandraSchemaManager.java:166)
at com.impetus.kundera.configure.SchemaConfiguration.configure(SchemaConfiguration.java:188)
at com.impetus.kundera.configure.ClientMetadataBuilder.buildClientFactoryMetadata(ClientMetadataBuilder.java:48)
at com.impetus.kundera.persistence.EntityManagerFactoryImpl.configureClientFactories(EntityManagerFactoryImpl.java:408)
at com.impetus.kundera.persistence.EntityManagerFactoryImpl.configure(EntityManagerFactoryImpl.java:161)
at com.impetus.kundera.persistence.EntityManagerFactoryImpl.&lt;init&gt;(EntityManagerFactoryImpl.java:135)
at com.impetus.kundera.KunderaPersistence.createEntityManagerFactory(KunderaPersistence.java:85)
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:79)
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:54)
at Main.main(Main.java:16)
I then tried with a much older release of Kundera (2.4) and faced the same issue. I guess I am missing something fundamental.
I removed the automatic schema generation option in persistence.xml and am still facing the same issue (Unable to create a new Cassandra connection. org.apache.thrift.transport.TTransportException: Read a negative frame size (-2113929216)!).
On the server, this is the exception:
java.lang.ArrayIndexOutOfBoundsException: 47
at org.apache.cassandra.transport.Message$Type.fromOpcode(Message.java:106)
at org.apache.cassandra.transport.Frame$Decoder.decode(Frame.java:168)
at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)
Thrift clients need to use port 9160, not 9042. Now it is working fine.
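To spell that out: ThriftClientFactory speaks the Thrift protocol, which Cassandra serves on 9160, while 9042 is the CQL native protocol port (hence the "negative frame size" garbage when Thrift framing meets a CQL listener). A sketch of the corrected property, with everything else in the persistence unit unchanged:

```xml
<property name="kundera.port" value="9160"/>
```

Note that start_rpc must be enabled in cassandra.yaml for the Thrift port to be open at all on some Cassandra versions.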
I am using Spring Integration Kafka to talk to Kafka.
The following is the configuration I used for the message-driven channel adapter.
<!-- kafka MessageDriven Channel Adapter for ProcessEvent -->
<int-kafka:message-driven-channel-adapter
listener-container="listnerContainer" payload-decoder="kafkaReflectionDecoder"
key-decoder="kafkaReflectionDecoder" channel="storeOffsetsChannel"
auto-startup="true"/>
<bean id="zkConfiguration"
class="org.springframework.integration.kafka.core.ZookeeperConfiguration">
<constructor-arg ref="zookeeperConnect"></constructor-arg>
</bean>
<bean id="kafkaConnectionFactory"
class="org.springframework.integration.kafka.core.DefaultConnectionFactory">
<constructor-arg ref="zkConfiguration"></constructor-arg>
</bean>
<bean id="listnerContainer"
class="org.springframework.integration.kafka.listener.KafkaMessageListenerContainer">
<constructor-arg ref="kafkaConnectionFactory"></constructor-arg>
<constructor-arg value="${listed.accounts.topic}"></constructor-arg>
</bean>
<!-- Zookeeper connect needed for Kafka Consumer -->
<int-kafka:zookeeper-connect id="zookeeperConnect"
zk-connect="${app.zookeeper.servers}" zk-connection-timeout="6000"
zk-session-timeout="6000" zk-sync-time="2000" />
Sometimes when I start my application, it won't start and fails with the following error; other times it starts fine.
2015-09-03 11:53:32.647 ERROR 28883 --- [ main] o.s.boot.SpringApplication : Application startup failed
org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter#0'; nested exception is kafka.common.KafkaException: fetching topic metadata for topics [Set(fulfillment.payments.autopay.listeddueaccounts)] from broker [List()] failed
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:176)
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:51)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:346)
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:149)
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:112)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:770)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.finishRefresh(EmbeddedWebApplicationContext.java:140)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:483)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:118)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:686)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:320)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:957)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:946)
at com.capitalone.payments.autopay.autopayprocesstransrecon.AutopayProcessTransactionRecon.main(AutopayProcessTransactionRecon.java:26)
Caused by: kafka.common.KafkaException: fetching topic metadata for topics [Set(fulfillment.payments.autopay.listeddueaccounts)] from broker [List()] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at org.springframework.integration.kafka.core.DefaultConnectionFactory.refreshMetadata(DefaultConnectionFactory.java:178)
at org.springframework.integration.kafka.core.DefaultConnectionFactory.getPartitions(DefaultConnectionFactory.java:221)
at org.springframework.integration.kafka.listener.KafkaMessageListenerContainer$GetPartitionsForTopic.safeValueOf(KafkaMessageListenerContainer.java:611)
at org.springframework.integration.kafka.listener.KafkaMessageListenerContainer$GetPartitionsForTopic.safeValueOf(KafkaMessageListenerContainer.java:600)
at com.gs.collections.impl.block.function.checked.CheckedFunction.valueOf(CheckedFunction.java:30)
at com.gs.collections.impl.utility.ArrayIterate.flatCollect(ArrayIterate.java:933)
at com.gs.collections.impl.utility.ArrayIterate.flatCollect(ArrayIterate.java:919)
at org.springframework.integration.kafka.listener.KafkaMessageListenerContainer.getPartitionsForTopics(KafkaMessageListenerContainer.java:332)
at org.springframework.integration.kafka.listener.KafkaMessageListenerContainer.start(KafkaMessageListenerContainer.java:294)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.doStart(KafkaMessageDrivenChannelAdapter.java:137)
at org.springframework.integration.endpoint.AbstractEndpoint.start(AbstractEndpoint.java:94)
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:173)
... 13 common frames omitted
Is my message-driven adapter configuration wrong?
To add to that, I also have an int-kafka:inbound-channel-adapter configured in my application that reads from a different topic. The configuration for that adapter is below:
<int-kafka:inbound-channel-adapter
id="kafkaInboundChannelAdapter" channel="domainEventChannel"
kafka-consumer-context-ref="consumerContext" auto-startup="true">
<int:poller fixed-delay="10" time-unit="MILLISECONDS"></int:poller>
</int-kafka:inbound-channel-adapter>
<int-kafka:consumer-context id="consumerContext"
zookeeper-connect="zookeeperConnect" consumer-properties="consumerProperties"
consumer-timeout="1000">
<int-kafka:consumer-configurations>
<int-kafka:consumer-configuration
group-id="autopayProcessTransactionRecon" value-decoder="kafkaReflectionDecoder"
key-decoder="kafkaReflectionDecoder" max-messages="1">
<int-kafka:topic streams="2" id="${domain.event.topic}" />
</int-kafka:consumer-configuration>
</int-kafka:consumer-configurations>
</int-kafka:consumer-context>
<!-- Zookeeper connect needed for Kafka Consumer -->
<int-kafka:zookeeper-connect id="zookeeperConnect"
zk-connect="${app.zookeeper.servers}" zk-connection-timeout="6000"
zk-session-timeout="6000" zk-sync-time="2000" />
Based on the
broker [List()]
part of the trace, it looks like you are not configuring the host/broker IP. You might consider using a different Configuration implementation as the constructor arg of the kafkaConnectionFactory, e.g.
<bean id="brokerConfiguration" class="org.springframework.integration.kafka.core.BrokerAddressListConfiguration">
<constructor-arg>
<bean class="org.springframework.integration.kafka.core.BrokerAddress">
<constructor-arg type="java.lang.String" value="localhost"/>
</bean>
</constructor-arg>
</bean>
<bean id="kafkaConnectionFactory" class="org.springframework.integration.kafka.core.DefaultConnectionFactory">
<constructor-arg ref="brokerConfiguration"></constructor-arg>
</bean>
Issue: I'm getting a resource exception at runtime when attempting to connect to WebSphere MQ to get a JMS message using Spring. I just can't determine what I am missing.
Description:
Attempting to set up the example here. Spring MDP Activation Spec for Websphere MQ.
http://stackoverflow.com/questions/14523572/spring-jms-and-websphere-mq
Maven Dependencies
Note: The version numbers for the IBM jars look odd because I created a local repo in my project to add the third-party libraries. I'm taking the IBM jars from my local WebSphere SDP installation for WebSphere 7.5. I also tried directly adding the jar dependencies in the STS Spring package and got the same error.
Spring Config XML
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:int="http://www.springframework.org/schema/integration"
xmlns:int-jms="http://www.springframework.org/schema/integration/jms"
xmlns:jms="http://www.springframework.org/schema/jms"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration-4.1.xsd
http://www.springframework.org/schema/integration/jms http://www.springframework.org/schema/integration/jms/spring-integration-jms-4.1.xsd
http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms-4.1.xsd">
<bean id="messageListener" class="myproject.spring.integration.mq.SpringMdp" />
<!-- Duplicate bean id from the original example; two beans cannot share the id "messageListener":
<bean id="messageListener" class="com.rohid.samples.SpringMdp" /> -->
<bean class="org.springframework.jms.listener.endpoint.JmsMessageEndpointManager">
<property name="activationSpec">
<bean class="com.ibm.mq.connector.inbound.ActivationSpecImpl">
<property name="destinationType" value="javax.jms.Queue"/>
<property name="destination" value="QUEUE1"/>
<property name="hostName" value="A.B.C"/>
<property name="queueManager" value="QM_"/>
<property name="port" value="1414"/>
<property name="channel" value="SYSTEM.ADMIN.SVNNN"/>
<property name="transportType" value="CLIENT"/>
<property name="userName" value="abc"/>
<property name="password" value="jabc"/>
</bean>
</property>
<property name="messageListener" ref="messageListener"/>
<property name="resourceAdapter" ref="myResourceAdapterBean"/>
</bean>
<bean id="myResourceAdapterBean" class="org.springframework.jca.support.ResourceAdapterFactoryBean">
<property name="resourceAdapter">
<bean class="com.ibm.mq.connector.ResourceAdapterImpl">
<property name="maxConnections" value="50"/>
</bean>
</property>
<property name="workManager">
<bean class="org.springframework.jca.work.SimpleTaskWorkManager"/>
</property>
</bean>
</beans>
Stack Trace:
Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.jms.listener.endpoint.JmsMessageEndpointManager#0' defined in class path resource [context.xml]: Instantiation of bean failed; nested exception is java.lang.NoClassDefFoundError: javax/resource/ResourceException
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1101)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1046)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:504)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:476)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:303)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:299)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:755)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:757)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:480)
at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:139)
at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:83)
at myproject.spring.integration.mq.Main.main(Main.java:9)
Caused by: java.lang.NoClassDefFoundError: javax/resource/ResourceException
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Unknown Source)
at java.lang.Class.getConstructor0(Unknown Source)
at java.lang.Class.getDeclaredConstructor(Unknown Source)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:80)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1094)
... 13 more
Caused by: java.lang.ClassNotFoundException: javax.resource.ResourceException
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
... 19 more
Update - solution: the required IBM jar dependencies were missing.
Generally the easiest way to get the classpath right in RAD or other IBM tools for applications that are ultimately going to be deployed to WAS is to load the server runtime library of the WAS version into your app.
It is very easy. Go to the app properties page, open the build path, select the Libraries tab, and click Add Library. Choose Server Runtimes and, as long as you have installed the correct WAS versions as part of RAD, you will see their runtimes.
This is generally the best way to go, since it allows you to keep your WAS libraries separate from your app while still making them available for compilation. The worst thing you can do is embed WAS libraries in your app. If you do that and deploy the app to different versions of WAS, you will get weird classpath or runtime errors. On top of that, things that were working might stop working after fixpacks or other software adjustments are applied.
It looks like you are missing some dependencies here. Can you try adding javaee-api to your pom file?
<dependency>
<groupId>javax</groupId>
<artifactId>javaee-api</artifactId>
<version>6.0</version> <!-- or take version 7.0 if needed -->
</dependency>
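One caveat, as an aside: the javaee-api artifacts on Maven Central contain stripped method bodies and are suitable for compilation only, so if javax.resource classes are also needed at application runtime outside a container, a hedged alternative is the standalone JCA API artifact (coordinates as published on Maven Central):

```xml
<!-- Standalone JCA 1.5 API jar; unlike javaee-api, usable at runtime
     outside a full Java EE container. -->
<dependency>
    <groupId>javax.resource</groupId>
    <artifactId>connector-api</artifactId>
    <version>1.5</version>
</dependency>
```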