Error using Kafka message-driven channel adapter - spring-integration

I am using spring-integration-kafka to talk to Kafka.
The following is the configuration I used for the message-driven channel adapter:
<!-- kafka MessageDriven Channel Adapter for ProcessEvent -->
<int-kafka:message-driven-channel-adapter
    listener-container="listnerContainer" payload-decoder="kafkaReflectionDecoder"
    key-decoder="kafkaReflectionDecoder" channel="storeOffsetsChannel"
    auto-startup="true"/>

<bean id="zkConfiguration"
    class="org.springframework.integration.kafka.core.ZookeeperConfiguration">
    <constructor-arg ref="zookeeperConnect"/>
</bean>

<bean id="kafkaConnectionFactory"
    class="org.springframework.integration.kafka.core.DefaultConnectionFactory">
    <constructor-arg ref="zkConfiguration"/>
</bean>

<bean id="listnerContainer"
    class="org.springframework.integration.kafka.listener.KafkaMessageListenerContainer">
    <constructor-arg ref="kafkaConnectionFactory"/>
    <constructor-arg value="${listed.accounts.topic}"/>
</bean>

<!-- Zookeeper connect needed for Kafka Consumer -->
<int-kafka:zookeeper-connect id="zookeeperConnect"
    zk-connect="${app.zookeeper.servers}" zk-connection-timeout="6000"
    zk-session-timeout="6000" zk-sync-time="2000"/>
Sometimes when I start my application it fails with the following error, but at other times the application starts fine:
2015-09-03 11:53:32.647 ERROR 28883 --- [ main] o.s.boot.SpringApplication : Application startup failed
org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter#0'; nested exception is kafka.common.KafkaException: fetching topic metadata for topics [Set(fulfillment.payments.autopay.listeddueaccounts)] from broker [List()] failed
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:176)
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:51)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:346)
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:149)
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:112)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:770)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.finishRefresh(EmbeddedWebApplicationContext.java:140)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:483)
at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:118)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:686)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:320)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:957)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:946)
at com.capitalone.payments.autopay.autopayprocesstransrecon.AutopayProcessTransactionRecon.main(AutopayProcessTransactionRecon.java:26)
Caused by: kafka.common.KafkaException: fetching topic metadata for topics [Set(fulfillment.payments.autopay.listeddueaccounts)] from broker [List()] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at org.springframework.integration.kafka.core.DefaultConnectionFactory.refreshMetadata(DefaultConnectionFactory.java:178)
at org.springframework.integration.kafka.core.DefaultConnectionFactory.getPartitions(DefaultConnectionFactory.java:221)
at org.springframework.integration.kafka.listener.KafkaMessageListenerContainer$GetPartitionsForTopic.safeValueOf(KafkaMessageListenerContainer.java:611)
at org.springframework.integration.kafka.listener.KafkaMessageListenerContainer$GetPartitionsForTopic.safeValueOf(KafkaMessageListenerContainer.java:600)
at com.gs.collections.impl.block.function.checked.CheckedFunction.valueOf(CheckedFunction.java:30)
at com.gs.collections.impl.utility.ArrayIterate.flatCollect(ArrayIterate.java:933)
at com.gs.collections.impl.utility.ArrayIterate.flatCollect(ArrayIterate.java:919)
at org.springframework.integration.kafka.listener.KafkaMessageListenerContainer.getPartitionsForTopics(KafkaMessageListenerContainer.java:332)
at org.springframework.integration.kafka.listener.KafkaMessageListenerContainer.start(KafkaMessageListenerContainer.java:294)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.doStart(KafkaMessageDrivenChannelAdapter.java:137)
at org.springframework.integration.endpoint.AbstractEndpoint.start(AbstractEndpoint.java:94)
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:173)
... 13 common frames omitted
Is my message-driven adapter configuration wrong?
To add to this, I also have an int-kafka:inbound-channel-adapter configured in my application, which reads from a different topic. The configuration for that adapter is below:
<int-kafka:inbound-channel-adapter
    id="kafkaInboundChannelAdapter" channel="domainEventChannel"
    kafka-consumer-context-ref="consumerContext" auto-startup="true">
    <int:poller fixed-delay="10" time-unit="MILLISECONDS"/>
</int-kafka:inbound-channel-adapter>

<int-kafka:consumer-context id="consumerContext"
    zookeeper-connect="zookeeperConnect" consumer-properties="consumerProperties"
    consumer-timeout="1000">
    <int-kafka:consumer-configurations>
        <int-kafka:consumer-configuration
            group-id="autopayProcessTransactionRecon" value-decoder="kafkaReflectionDecoder"
            key-decoder="kafkaReflectionDecoder" max-messages="1">
            <int-kafka:topic streams="2" id="${domain.event.topic}"/>
        </int-kafka:consumer-configuration>
    </int-kafka:consumer-configurations>
</int-kafka:consumer-context>

<!-- Zookeeper connect needed for Kafka Consumer -->
<int-kafka:zookeeper-connect id="zookeeperConnect"
    zk-connect="${app.zookeeper.servers}" zk-connection-timeout="6000"
    zk-session-timeout="6000" zk-sync-time="2000"/>

Based on the
broker [List()]
part of the trace, it looks like you are not configuring the broker host/IP, so the broker list resolves to empty. You might consider using a different Configuration implementation as the constructor arg of the kafkaConnectionFactory, e.g.
<bean id="brokerConfiguration" class="org.springframework.integration.kafka.core.BrokerAddressListConfiguration">
    <constructor-arg>
        <bean class="org.springframework.integration.kafka.core.BrokerAddress">
            <constructor-arg type="java.lang.String" value="localhost"/>
        </bean>
    </constructor-arg>
</bean>

<bean id="kafkaConnectionFactory" class="org.springframework.integration.kafka.core.DefaultConnectionFactory">
    <constructor-arg ref="brokerConfiguration"/>
</bean>


Failed to find affinity server node with data storage configuration for starting cache

I'm trying to store a Spark in-memory table into Ignite. When I try to do that, I get the following error message:
Failed to find affinity server node with data storage configuration for starting cache [cacheName=SQL_PUBLIC_JSON_TBL, aliveSrvNodes=[]].
I'm running this on an HDP cluster set up on an EC2 machine. When I run the same code on the cluster machine here it works perfectly, but it does not work on the EC2 machine.
Thanks in advance.
UPDATE:
I'm using Spark shell. Here's the code.
val df = sqlContext.read.json("~/responses")
val s = df.select("response.id","response.name")
s.write
  .format(IgniteDataFrameSettings.FORMAT_IGNITE)
  .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE, "~/apache-ignite-fabric-2.6.0-bin/examples/config/spark/example-shared-rdd.xml")
  .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS, "id")
  .option(IgniteDataFrameSettings.OPTION_TABLE, "json_table")
  .save()
Here is the xml config file that I use for my single Ignite server:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="cacheConfiguration">
            <!-- SharedRDD cache example configuration (Atomic mode). -->
            <bean class="org.apache.ignite.configuration.CacheConfiguration">
                <!-- Set a cache name. -->
                <property name="name" value="sharedRDD"/>
                <!-- Set a cache mode. -->
                <property name="cacheMode" value="PARTITIONED"/>
                <!-- Index Integer pairs used in the example. -->
                <property name="indexedTypes">
                    <list>
                        <value>java.lang.Integer</value>
                        <value>java.lang.Integer</value>
                    </list>
                </property>
                <!-- Set atomicity mode. -->
                <property name="atomicityMode" value="ATOMIC"/>
                <!-- Configure a number of backups. -->
                <property name="backups" value="1"/>
            </bean>
        </property>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
                        <property name="addresses">
                            <list>
                                <!-- In distributed environment, replace with actual host IP address. -->
                                <value>127.0.0.1:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>
Here's the full log.
18/10/30 12:32:54 WARN GridDiagnostic: Initial heap size is 252MB (should be no less than 512MB, use -Xms512m -Xmx512m).
18/10/30 12:32:54 WARN TcpCommunicationSpi: Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
18/10/30 12:32:55 WARN NoopCheckpointSpi: Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
18/10/30 12:32:55 WARN GridCollisionManager: Collision resolution is disabled (all jobs will be activated upon arrival).
18/10/30 12:32:57 WARN TcpDiscoverySpi: Failed to read message due to ClassNotFoundException (make sure same versions of all classes are available on all nodes) [rmtNodeId=3085dfa9-58ba-4ac0-a7f8-f78e2901a699, err=o.a.i.i.processors.hadoop.HadoopAttributes]
18/10/30 12:32:57 WARN IgniteAuthenticationProcessor: Cannot find the server coordinator node. Possible a client is started with forceServerMode=true. Security warning: user authentication will be disabled on the client.
18/10/30 12:32:58 ERROR ClusterCachesInfo: Failed to find affinity server node with data storage configuration for starting cache [cacheName=SQL_PUBLIC_JSON_TBL6, aliveSrvNodes=[]]
18/10/30 12:32:58 WARN CacheAffinitySharedManager: No server nodes found for cache client: SQL_PUBLIC_JSON_TBL
The log contains the following record:
12:32:57 WARN TcpDiscoverySpi: Failed to read message due to ClassNotFoundException (make sure same versions of all classes are available on all nodes) [rmtNodeId=3085dfa9-58ba-4ac0-a7f8-f78e2901a699, err=o.a.i.i.processors.hadoop.HadoopAttributes]
It says that a discovery message cannot be deserialized because the HadoopAttributes class is not on the classpath. This can lead to connectivity problems and affect the ability of nodes to see each other.
Make sure that all nodes have the ignite-hadoop module on their classpath, or get rid of this dependency.
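If the nodes are built with Maven, one way to keep the Hadoop classes resolvable everywhere is to add the matching ignite-hadoop artifact to every node's build. This is a sketch: the version shown assumes the Ignite 2.6.0 distribution mentioned in the question, so match it to the release you actually run.

```xml
<!-- Illustrative pom.xml fragment: keeps o.a.i.i.processors.hadoop classes on the classpath.
     Match the version to the Ignite release you actually run (2.6.0 per the question). -->
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-hadoop</artifactId>
    <version>2.6.0</version>
</dependency>
```

If you run the binary distribution instead, the usual way to enable a module is to move its folder (here, ignite-hadoop) from libs/optional/ into libs/ on each node, then restart the node.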

Kundera with Datastax DS Driver Connection Port

I'm using Kundera with Datastax DS Driver for Cassandra connection.
In persistence.xml, I defined the port as 9042.
However, I noticed that Kundera failed to connect to Cassandra:
Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.503 sec <<< FAILURE! - in com.abc.DataServiceImplTest
writeReadTest(com.abc.DataServiceImplTest) Time elapsed: 0.38 sec <<< ERROR!
com.impetus.kundera.configure.schema.SchemaGenerationException: Error while opening socket, Caused by: .
at com.impetus.client.cassandra.schemamanager.CassandraSchemaManager.initiateClient(CassandraSchemaManager.java:718)
at com.impetus.kundera.configure.schema.api.AbstractSchemaManager.exportSchema(AbstractSchemaManager.java:112)
at com.impetus.client.cassandra.schemamanager.CassandraSchemaManager.exportSchema(CassandraSchemaManager.java:166)
at com.impetus.kundera.configure.SchemaConfiguration.configure(SchemaConfiguration.java:191)
at com.impetus.kundera.configure.ClientMetadataBuilder.buildClientFactoryMetadata(ClientMetadataBuilder.java:48)
at com.impetus.kundera.persistence.EntityManagerFactoryImpl.configureClientFactories(EntityManagerFactoryImpl.java:408)
at com.impetus.kundera.persistence.EntityManagerFactoryImpl.configure(EntityManagerFactoryImpl.java:161)
at com.impetus.kundera.persistence.EntityManagerFactoryImpl.<init>(EntityManagerFactoryImpl.java:135)
at com.impetus.kundera.KunderaPersistence.createEntityManagerFactory(KunderaPersistence.java:85)
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:79)
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:54)
And once I enabled the Thrift port (9160) in /etc/cassandra/cassandra.yaml, things started working.
Here's the persistence.xml:
<persistence-unit name="cassandra_pu">
    <provider>com.impetus.kundera.KunderaPersistence</provider>
    <class>com.abc.Person</class>
    <exclude-unlisted-classes>false</exclude-unlisted-classes>
    <properties>
        <property name="kundera.nodes" value="localhost" />
        <property name="kundera.port" value="9042" />
        <property name="kundera.keyspace" value="testkeyspace" />
        <property name="kundera.ddl.auto.prepare" value="update" />
        <property name="kundera.dialect" value="cassandra" />
        <property name="kundera.client.lookup.class"
            value="com.impetus.kundera.client.cassandra.dsdriver.DSClientFactory" />
        <property name="kundera.annotations.scan.package" value="com.abc.impl"/>
    </properties>
</persistence-unit>
I wonder if I need to enable 9160 even when using the DS driver?
Thanks!
kundera.ddl.auto.prepare works through Thrift, which uses port 9160, but you are using com.impetus.kundera.client.cassandra.dsdriver.DSClientFactory, which is backed by the DataStax driver and works on port 9042, as you have also set in your persistence.xml. So, removing this line
<property name="kundera.ddl.auto.prepare" value="update" />
from your persistence.xml will solve this issue.
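For reference, the properties block would then look something like this (a sketch based on the question's persistence.xml, with only the kundera.ddl.auto.prepare line removed):

```xml
<properties>
    <property name="kundera.nodes" value="localhost" />
    <property name="kundera.port" value="9042" />
    <property name="kundera.keyspace" value="testkeyspace" />
    <!-- kundera.ddl.auto.prepare removed: schema auto-preparation goes through Thrift on 9160 -->
    <property name="kundera.dialect" value="cassandra" />
    <property name="kundera.client.lookup.class"
        value="com.impetus.kundera.client.cassandra.dsdriver.DSClientFactory" />
    <property name="kundera.annotations.scan.package" value="com.abc.impl" />
</properties>
```

With DDL auto-preparation disabled, the keyspace and tables have to exist before the application starts, e.g. created via cqlsh or a migration script.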

How to set the Spring Integration GemFire port

I'm using GemFire as a metadata store:
<gfe:cache/>
<gfe:replicated-region id="region"/>

<bean id="metadataStore" class="org.springframework.integration.metadata.PropertiesPersistingMetadataStore"/>

<bean id="compositeFilter" class="org.springframework.integration.file.filters.CompositeFileListFilter">
    <constructor-arg>
        <list>
            <bean id="filterAllFiles" class="id.lsa.scb.spring.integration.filter.EntireFileFilter">
                <property name="adrUtil" ref="pojoUtil"/>
            </bean>
            <bean id="acceptOnceFilter"
                class="org.springframework.integration.file.filters.FileSystemPersistentAcceptOnceFileListFilter">
                <constructor-arg name="store" ref="metadataStore"/>
                <constructor-arg name="prefix" value="test-"/>
                <property name="flushOnUpdate" value="true"/>
            </bean>
        </list>
    </constructor-arg>
</bean>
but I got this error:
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'gemfireCache': FactoryBean threw exception on object creation; nested exception is com.gemstone.gemfire.SystemConnectException: Unable to find a free port in the membership-port-range
at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:175)
at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:103)
at org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1590)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:254)
at org
How can I solve that? Can I set the port manually?
And how can I read the metadata from GemFire manually? Can I use something like a SQL query?
That is more a question for GemFire; I'm not sure where you can find their forum, mailing list, or any other support.
The message
Unable to find a free port in the membership-port-range
is for this membership-port-range property, which you can configure in the gemfire.properties: http://gemfire702.docs.pivotal.io/7.0.2/userguide/reference/topics/gemfire_properties.html#gemfire_properties
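For example, a gemfire.properties with an explicitly configured range might look like this (the port numbers are illustrative; pick a range that is actually free and open on your hosts):

```properties
# gemfire.properties
# The format is "start-end"; make sure the range is open in your firewall
# and large enough for the number of members running on the host.
membership-port-range=10901-10999
```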
I don't see from your config that you really use GemfireMetadataStore; the metadataStore bean you declare is a PropertiesPersistingMetadataStore.
I think you can read that metadata manually and even use queries. GemfireMetadataStore is fully based on the Region<String, String> injected: http://docs.spring.io/spring-integration/reference/html/gemfire.html#gemfire-metadata-store
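If the intent was to actually back the metadata with GemFire (rather than the PropertiesPersistingMetadataStore declared in the question), the wiring would look roughly like this sketch, with GemfireMetadataStore taking the region in its constructor:

```xml
<gfe:cache/>
<gfe:replicated-region id="metadataRegion"/>

<!-- Backs the metadata store with the GemFire region instead of a local properties file -->
<bean id="metadataStore"
    class="org.springframework.integration.gemfire.metadata.GemfireMetadataStore">
    <constructor-arg ref="metadataRegion"/>
</bean>
```

The FileSystemPersistentAcceptOnceFileListFilter from the question can then reference this bean unchanged via its store constructor argument, and the keys/values land in the region as plain strings, which is what makes manual reads and queries possible.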

Spring jms failing to connect to Websphere MQ - Resource Exception

Issue: I'm getting a resource exception at runtime while attempting to connect to WebSphere MQ to receive a JMS message using Spring. I can't determine what I am missing.
Description:
I'm attempting to set up the example described here: Spring MDP Activation Spec for WebSphere MQ.
http://stackoverflow.com/questions/14523572/spring-jms-and-websphere-mq
Maven Dependencies
Note: The version numbers for the IBM jars look odd because I created a local repo in my project to add the third-party libraries. I'm taking the IBM jars from my local WebSphere SDP installation for WebSphere 7.5. I also tried directly adding the jar dependencies to the STS Spring package and got the same error.
Spring Config XML
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:int="http://www.springframework.org/schema/integration"
    xmlns:int-jms="http://www.springframework.org/schema/integration/jms"
    xmlns:jms="http://www.springframework.org/schema/jms"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration-4.1.xsd
        http://www.springframework.org/schema/integration/jms http://www.springframework.org/schema/integration/jms/spring-integration-jms-4.1.xsd
        http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms-4.1.xsd">

    <bean id="messageListener" class="myproject.spring.integration.mq.SpringMdp"/>

    <bean class="org.springframework.jms.listener.endpoint.JmsMessageEndpointManager">
        <property name="activationSpec">
            <bean class="com.ibm.mq.connector.inbound.ActivationSpecImpl">
                <property name="destinationType" value="javax.jms.Queue"/>
                <property name="destination" value="QUEUE1"/>
                <property name="hostName" value="A.B.C"/>
                <property name="queueManager" value="QM_"/>
                <property name="port" value="1414"/>
                <property name="channel" value="SYSTEM.ADMIN.SVNNN"/>
                <property name="transportType" value="CLIENT"/>
                <property name="userName" value="abc"/>
                <property name="password" value="jabc"/>
            </bean>
        </property>
        <property name="messageListener" ref="messageListener"/>
        <property name="resourceAdapter" ref="myResourceAdapterBean"/>
    </bean>

    <bean id="myResourceAdapterBean" class="org.springframework.jca.support.ResourceAdapterFactoryBean">
        <property name="resourceAdapter">
            <bean class="com.ibm.mq.connector.ResourceAdapterImpl">
                <property name="maxConnections" value="50"/>
            </bean>
        </property>
        <property name="workManager">
            <bean class="org.springframework.jca.work.SimpleTaskWorkManager"/>
        </property>
    </bean>
</beans>
Stack Trace:
Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.jms.listener.endpoint.JmsMessageEndpointManager#0' defined in class path resource [context.xml]: Instantiation of bean failed; nested exception is java.lang.NoClassDefFoundError: javax/resource/ResourceException
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1101)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1046)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:504)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:476)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:303)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:299)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:194)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:755)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:757)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:480)
at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:139)
at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:83)
at myproject.spring.integration.mq.Main.main(Main.java:9)
Caused by: java.lang.NoClassDefFoundError: javax/resource/ResourceException
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Unknown Source)
at java.lang.Class.getConstructor0(Unknown Source)
at java.lang.Class.getDeclaredConstructor(Unknown Source)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:80)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1094)
... 13 more
Caused by: java.lang.ClassNotFoundException: javax.resource.ResourceException
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
... 19 more
Update (solution): the required IBM jar dependencies needed to be added.
Generally, the easiest way to get the classpath right in RAD or other IBM tools, for applications that are ultimately going to be deployed to WAS, is to load the server runtime library of the WAS version into your app.
It is very easy: go to the app's properties page, open the build path, select the Libraries tab, and click Add Library. Choose Server Runtimes; as long as you have installed the correct WAS versions as part of RAD, you will see their runtimes.
This is generally the best way to go, since it allows you to keep your WAS libraries separate from your app while still making them available for compilation. The worst thing you can do is embed WAS libraries as part of your app: if you do, and you deploy the app to different versions of WAS, you will get weird classpath or runtime errors. On top of that, things that were working might stop working after fixpacks or other software adjustments are applied.
It looks like you are missing some dependencies here. Can you try adding javaee-api to your pom file?
<dependency>
<groupId>javax</groupId>
<artifactId>javaee-api</artifactId>
<version>6.0</version> <!-- or take version 7.0 if needed -->
</dependency>
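Since the missing class is specifically javax.resource.ResourceException, the narrower JCA API artifact may also be enough on its own; this is a sketch, so verify the coordinates against your repository before relying on them:

```xml
<!-- Supplies the javax.resource.* classes (including ResourceException) -->
<dependency>
    <groupId>javax.resource</groupId>
    <artifactId>connector-api</artifactId>
    <version>1.5</version>
</dependency>
```

If the app ultimately runs inside WAS, the server already provides these classes, so whichever API artifact you choose should typically use provided scope to avoid shipping a duplicate copy.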

Spring Quartz Blocking Threads

Currently, I am using Spring Quartz to fire three different jobs at different times, with a separate scheduler for each trigger.
This scenario works fine for the first couple of hours, but then all Quartz threads become blocked. The following is my bean definition:
<!-- TRAFFIC POLLER DECLARATION -->
<!-- job -->
<bean id="TriggerJob" class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
    <property name="targetObject" ref="TriggerClass"/>
    <property name="targetMethod" value="trigger"/>
</bean>

<!-- trigger -->
<bean id="trigger1" class="org.springframework.scheduling.quartz.CronTriggerBean">
    <property name="jobDetail" ref="TriggerJob"/>
    <property name="cronExpression" value="${trigger.cron}"/>
</bean>

<!-- SCHEDULER -->
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
    <property name="waitForJobsToCompleteOnShutdown" value="true"/>
    <property name="triggers">
        <list>
            <ref bean="trigger1"/>
        </list>
    </property>
</bean>
The following is the stack trace from my jconsole of the blocked threads:
Name: org.springframework.scheduling.quartz.SchedulerFactoryBean#1_Worker-5
State: BLOCKED on java.lang.Object#131b502 owned by: org.springframework.scheduling.quartz.SchedulerFactoryBean#1_Worker-6
Total blocked: 20 Total waited: 101,919
Stack trace:
com.ecs.Trigger.TriggerClass trigger(TriggerClass.java:89)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
java.lang.reflect.Method.invoke(Method.java:597)
org.springframework.util.MethodInvoker.invoke(MethodInvoker.java:273)
org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean$MethodInvokingJob.executeInternal(MethodInvokingJobDetailFactoryBean.java:264)
org.springframework.scheduling.quartz.QuartzJobBean.execute(QuartzJobBean.java:86)
org.quartz.core.JobRunShell.run(JobRunShell.java:216)
org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:549)
Does anyone have any idea why I am seeing this? Thanks in advance.
It seems that one job (org.springframework.scheduling.quartz.SchedulerFactoryBean#1_Worker-5) is synchronizing with another one (org.springframework.scheduling.quartz.SchedulerFactoryBean#1_Worker-6) and is waiting for it to release a resource.
Do you have some wait()/notify() instructions in your jobs? Or do you synchronize the same job on some shared variable?
You did not set concurrent="false", so it may be that two instances of the same job are running and one is waiting for the same job executed in another thread.
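If the job must not run concurrently with itself, MethodInvokingJobDetailFactoryBean supports a concurrent flag. Applied to the job definition from the question, the one-property change would look like this:

```xml
<bean id="TriggerJob" class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
    <property name="targetObject" ref="TriggerClass"/>
    <property name="targetMethod" value="trigger"/>
    <!-- Makes the resulting JobDetail stateful, so Quartz serializes executions of this job -->
    <property name="concurrent" value="false"/>
</bean>
```

Note that this only prevents overlapping runs of the same job; if trigger() itself blocks on a shared lock with the other jobs, that contention still has to be fixed in the job code.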
