We have a job running in Spring Batch each weekday, triggered from another system. Sometimes several instances of the job need to run on the same day, each one triggered from the other system.
Every job runs for about an hour, and when several job instances run we experience problems with the data.
We would like to optimize this step as follows: if no job instance is running, start a new one; if a job instance is already running, put the new one in a queue.
Each job instance must be COMPLETED before the next one is triggered. If one fails, the next one must wait.
The job parameters are an incrementer and a timestamp.
I've Googled a bit but can't find anything that I find useful.
So I wonder: is it doable to queue job instances in Spring Batch?
If so, how do I do it? I have looked into Spring Integration and the job-launching-gateway, but I don't really see how to implement it; I guess I don't understand how it works. I have tried to read up on these things but I still don't understand.
Maybe I have the wrong versions of Spring Batch? Maybe I am missing something?
If you need more information from me please let me know!
Thank you!
We are using spring-core and spring-beans 3.2.5, spring-batch-integration 1.2.2, spring-integration-core 3.0.5, and spring-integration-file, -http, -sftp and -stream 2.0.3.
Well, if you are OK with having Spring Integration in your application alongside Spring Batch, it really would be a great idea to leverage the job-launching-gateway capability.
Right: you can place your tasks into a queue, essentially a QueueChannel.
The endpoint that polls that channel can be configured with max-messages-per-poll="1" to take only one task from the internal queue at a time.
When you have polled one message, send it to the job-launching-gateway and, at the same time, send the Control Bus component a command to stop that polling endpoint, so it does not touch the other messages in the queue until the current job finishes. When the job is COMPLETED, you can send one more control message to start the polling endpoint again.
Be sure that you use all the Spring Integration modules at the same version: spring-integration-core 3.0.5, and spring-integration-file, -http, -sftp and -stream 3.0.5 as well.
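A rough sketch of that wiring (the channel ids, the poller settings, and the batch-int namespace prefix here are illustrative, not taken from your project):

```xml
<!-- Launch requests (JobLaunchRequest payloads) queue up here. -->
<int:channel id="jobRequests">
    <int:queue/>
</int:channel>

<!-- Polls exactly one request per poll and bridges it to the gateway. -->
<int:bridge id="jobLaunchPoller" input-channel="jobRequests"
            output-channel="jobLaunches">
    <int:poller fixed-delay="1000" max-messages-per-poll="1"/>
</int:bridge>

<batch-int:job-launching-gateway request-channel="jobLaunches"
                                 reply-channel="jobResults"
                                 job-launcher="jobLauncher"/>

<!-- Send '@jobLaunchPoller.stop()' here after dispatching a request,
     and '@jobLaunchPoller.start()' when the job instance completes. -->
<int:control-bus input-channel="controlChannel"/>
```

A JobExecutionListener (or a subscriber on the gateway's reply channel) is a natural place to send the start command back to the control bus once the job reaches COMPLETED.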
If you still need an answer: you could use a ThreadPoolTaskExecutor with a core size of 1, a max size of 1, and whatever queue size you desire.
i.e.
<bean id="jobLauncherTaskExecutor"
      class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
    <property name="corePoolSize" value="1" />
    <property name="maxPoolSize" value="1" />
    <property name="queueCapacity" value="200" />
</bean>
and then pass that to the SimpleJobLauncher
i.e.
<bean id="jobLauncher" class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
    <property name="jobRepository" ref="jobRepository" />
    <property name="taskExecutor" ref="jobLauncherTaskExecutor" />
</bean>
We are using Spring Batch and partitioning of jobs in a 10-server JBoss EAP 5.2 cluster. Because of a problem in JBoss Messaging, we needed to use a topic for the reply messages from the partitioned steps. All had been working fine until we saw JBoss Messaging glitches (on the server that launches the job) that dropped it from the cluster. It recovers, but the main partition does not pick up the messages sent from the partition steps. I see the messages in the topic in the JMX-Console, but I also see that the subscription and the messages are non-durable. Therefore I would like to make the communication for the partition-step replies use a durable subscription. I can't seem to find a documented way to do this. This is my current configuration of the partitioned step and associated beans.
Inbound Gateway Configuration
<int:channel id="springbatch.slave.jms.request"/>
<int:channel id="springbatch.slave.jms.response" />
<int-jms:inbound-gateway
id="springbatch.master.inbound.gateway"
connection-factory="springbatch.listener.jmsConnectionFactory"
request-channel="springbatch.slave.jms.request"
request-destination="springbatch.partition.jms.requestsQueue"
reply-channel="springbatch.slave.jms.response"
concurrent-consumers="${springbatch.partition.concurrent.consumers}"
max-concurrent-consumers="${springbatch.partition.concurrent.maxconsumers}"
max-messages-per-task="${springbatch.partition.concurrent.maxmessagespertask}"
reply-time-to-live="${springbatch.partition.reply.time.to.live}"
/>
Outbound Gateway Configuration
<int:channel id="jms.requests">
<int:dispatcher task-executor="springbatch.partitioned.jms.taskExecutor" />
</int:channel>
<int:channel id="jms.reply" />
<int-jms:outbound-gateway id="outbound-gateway"
auto-startup="false" connection-factory="springbatch.jmsConnectionFactory"
request-channel="jms.requests"
request-destination="springbatch.partition.jms.requestsQueue"
reply-channel="jms.reply"
reply-destination="springbatch.partition.jms.repliesQueue"
correlation-key="JMSCorrelationID">
<int-jms:reply-listener />
</int-jms:outbound-gateway>
Further to Michael's comment: there is currently no way to configure a topic for the <reply-listener/>. It's rather unusual to use a topic in a request/reply scenario, and we didn't anticipate that requirement.
Feel free to open a JIRA Issue against Spring Integration.
An alternative would be to wire in an outbound-channel-adapter for the requests and an inbound-channel-adapter for the replies. However, some special handling of the replyChannel header is needed when doing that; see the docs here for more information.
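A sketch of that adapter-based alternative, using an explicitly defined listener container so the durable-subscription properties are visible (the reply-topic bean, the subscription name, and the channel ids are illustrative assumptions):

```xml
<!-- Requests leave through a one-way adapter instead of a gateway. -->
<int-jms:outbound-channel-adapter channel="jms.requests"
    connection-factory="springbatch.jmsConnectionFactory"
    destination="springbatch.partition.jms.requestsQueue"/>

<!-- Replies arrive on a durable topic subscription. -->
<bean id="replyListenerContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="springbatch.jmsConnectionFactory"/>
    <property name="destination" ref="springbatch.partition.jms.repliesTopic"/>
    <property name="pubSubDomain" value="true"/>
    <property name="subscriptionDurable" value="true"/>
    <property name="durableSubscriptionName" value="partitionReplies"/>
    <!-- A durable subscription also requires a clientId on the
         underlying connection factory. -->
</bean>

<int-jms:message-driven-channel-adapter channel="jms.reply"
    container="replyListenerContainer"/>
```

Remember the caveat above: with plain adapters you still have to correlate replies to requests yourself, because the replyChannel header does not survive the JMS hop.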
Issue:
Have one active MDP (jmsIn below) attached to a single queue and keep the second, clustered MDP on server 2 passive. I require only a single active process because I want to perform aggregation and not lose messages.
I have been reading about the control bus, but since it's a clustered environment, the channel id and the jms:message-driven-channel-adapter id have the same names on both servers. Is it possible for the control bus to deactivate the adapter on another server using JMX even though they have the same ids? Or could the message-driven-channel-adapter first check whether there is already an active consumer on the queue itself?
Message-driven-channel-adapter Sample Code:
<jms:message-driven-channel-adapter id="jmsIn"
destination="requestQueue"
channel="jmsInChannel" />
<channel id="jmsInChannel" />
<beans:beans profile="default">
<stream:stdout-channel-adapter id="stdout" channel="jmsInChannel" append-newline="true"/>
</beans:beans>
<beans:beans profile="testCase">
<bridge input-channel="jmsInChannel" output-channel="queueChannel"/>
<channel id="queueChannel">
<queue />
</channel>
</beans:beans>
There is no reason to worry about a clustered singleton for the <message-driven-channel-adapter>, because any queuing system (like JMS) is designed to deliver a particular message to a single consumer. In other words, independently of the number of processes on the queue, only one of them picks up a given message and processes it.
The transaction semantics of JMS help you avoid losing messages: if there is an exception, the TX rolls back and the message is returned to the queue, where it can be picked up by another consumer.
For the aggregator you really can use a distributed persistent MessageStore, where all your consumers send their messages to the same <aggregator> component, which uses the shared message store for its aggregation logic.
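For example, a JDBC-backed store that every node points at via the common database (a sketch; the channel ids and the dataSource bean are illustrative):

```xml
<!-- One store, shared by every cluster member through the database. -->
<bean id="sharedMessageStore"
      class="org.springframework.integration.jdbc.JdbcMessageStore">
    <constructor-arg ref="dataSource"/>
</bean>

<int:aggregator input-channel="partialReplies"
                output-channel="aggregatedReplies"
                message-store="sharedMessageStore"/>
```

Whichever node receives the last message in a group releases the aggregate, since all of them see the same group state in the shared store.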
Gridgain has failover spi mechanism for failure of jobs on nodes.
However, we would like to configure a failure mechanism that throws an exception when one of the configured data nodes goes down.
How can we do this?
Are you trying to prevent failover for your tasks and throw an exception if a node that was in the process of executing a job fails? (I'm not sure I understood you correctly, so please correct me if I'm wrong.)
If I'm right, the easiest way is to configure NeverFailoverSpi, like this:
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
    ...
    <property name="failoverSpi">
        <bean class="org.apache.ignite.spi.failover.never.NeverFailoverSpi"/>
    </property>
</bean>
Another option is to use the IgniteCompute.withNoFailover() method. It's useful if you want to disable failover for a small subset of tasks but still use the default mechanism for the others. Here is an example:
IgniteCompute compute = ignite.compute().withNoFailover();
// Tasks executed with this compute instance will never failover.
compute.execute(MyTask1.class, "arg");
My requirement is to use a transactional session with the message-driven-channel-adapter (JmsMessageDrivenEndpoint). I was able to set up the configuration by using sessionTransacted = true on the DefaultMessageListenerContainer.
Work flow: receive a message -> call the service activator -> the service activator calls a DAO class.
On a successful commit to the database, commit() is called by the Spring framework, and on any runtime exception rollback() is called, which works just fine. When a rollback happens, the JMS broker sends the message to my application again.
For a specific type of exception in the DAO I want to add a message header (i.e. a redelivery time) so that the JMS broker will not send the message again right away. How can I do that?
For another specific type of exception in the DAO I want to use the control bus to stop the endpoint (the message-driven-channel-adapter) and, before stopping it, roll back the previous transaction. How can I do that?
Can anyone help me out?
Using the Control Bus to start and stop endpoints is straightforward:
<int:control-bus input-channel="controlChannel"/>
<int-jms:message-driven-channel-adapter id="jmsInboundEndpoint"/>
<int:transformer input-channel="stopJmsInboundEndpointChannel"
                 output-channel="controlChannel"
                 expression="'@jmsInboundEndpoint.stop()'"/>
Or you can send the same command string to the controlChannel from any place in your code.
But that doesn't mean the last transaction will be rolled back. That depends on your 'unit of work' (in other words, the behaviour of your service).
However, at the same time as you send the 'stop' command, you can mark the current transaction for rollback:
TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
Your other question, about 'adding some message header', goes against how messaging works.
If you change the message it becomes a new one, and you can't roll a message back to the queue with new info attached.
Of course, you can do it anyway and produce a new message, but then you should resend it, not roll it back. So you would commit the transaction anyway and send that new message somewhere (perhaps to the same queue); it will be a new message for the broker as well as for your application. And one more time: in that case you have to commit the transaction.
Not sure that this is very clear or that I'm on the right track in my answer, but I hope it helps a bit.
You cannot modify the message (add a header) before rollback. You could, of course, requeue it as a new message after catching the exception. Some brokers (e.g. ActiveMQ) provide a back-off retry policy after a rollback. That might be a better solution if your broker supports it.
You can use the control bus to stop the container, but you will probably have to do it asynchronously (invoke the stop on another thread, e.g. by using an ExecutorChannel on the control bus). Otherwise, depending on your environment, the stop may wait for the container thread to exit, so you shouldn't execute the stop on the container thread itself.
Best thing to do is experiment.
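For the back-off idea mentioned above: in ActiveMQ the redelivery policy lives on the client's connection factory rather than on individual messages. A sketch (the delay and retry values are illustrative):

```xml
<bean id="redeliveryPolicy" class="org.apache.activemq.RedeliveryPolicy">
    <property name="initialRedeliveryDelay" value="5000"/>
    <property name="useExponentialBackOff" value="true"/>
    <property name="backOffMultiplier" value="2"/>
    <property name="maximumRedeliveries" value="6"/>
</bean>

<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL" value="tcp://localhost:61616"/>
    <property name="redeliveryPolicy" ref="redeliveryPolicy"/>
</bean>
```

After a rollback, the broker-side client then waits 5s, 10s, 20s, ... before redelivering, and moves the message to the dead-letter queue after six attempts.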
Thanks Gary and Artem. The solution is working. I am using the configuration below:
<jms:message-driven-channel-adapter id="jmsMessageDrivenChannelAdapter"
    connection-factory="connectionFactory"
    destination="destination"
    transaction-manager="jmsTransactionManager"
    channel="serviceChannel"
    error-channel="ultimateErrorChannel" />
<si:service-activator input-channel="ultimateErrorChannel" output-channel="controlChannel">
<bean class="play.spring.integration.TestErrorHandler">
<property name="adapterNeedToStop" value="jmsMessageDrivenChannelAdapter" />
<property name="exceptionWhenNeedToStop" value="play.spring.integration.ShutdownException" />
</bean>
</si:service-activator>
<si:channel id="controlChannel">
<si:dispatcher task-executor="controlBusExecutor" />
</si:channel>
<task:executor id='controlBusExecutor' pool-size='10' queue-capacity='50' />
<si:control-bus input-channel="controlChannel" />
Now my question is: if I want to stop multiple inbound adapters, how can I send a single message to the control bus for all of them?
I am going to study SpEL. I would appreciate it if someone who already knows it can help.
Thanks
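The control bus processes one command per message, so one way to fan a single request out (a sketch; the channel id is illustrative) is to put a default splitter in front of controlChannel:

```xml
<!-- A default splitter turns a List payload into one message per element. -->
<si:splitter input-channel="stopAllChannel" output-channel="controlChannel"/>
```

Sending one message to stopAllChannel whose payload is a List of command strings, e.g. "@jmsMessageDrivenChannelAdapter.stop()" and "@anotherAdapter.stop()", then results in one control-bus command per adapter.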
Having defined a channel adapter as:
<int:channel id="target">
<int:queue />
</int:channel>
<int-jdbc:inbound-channel-adapter id="adapter" channel="target"
        query="${int.poll.query}" update="${int.update.query}"
        data-source="mock-datasource">
    <int:poller fixed-rate="5000"/>
</int-jdbc:inbound-channel-adapter>
I wonder why I cannot modify the polling rate at runtime, as follows:
SourcePollingChannelAdapter adapter = applicationContext.getBean("adapter",SourcePollingChannelAdapter.class);
adapter.setTrigger(new PeriodicTrigger(1000));
When I debug this solution, I can see that the adapter has the new trigger attached to it; however, the polling rate remains unchanged (every 5 seconds). I also tried stop() and start() on the adapter, with similar luck.
Can anyone point out what I am doing wrong?
Thanks
[RESOLVED]
It has been confirmed by members of the Spring team that a trigger cannot be modified at runtime. So if you want to modify the polling rate dynamically, for example to throttle inbound messages, you will have to roll your own Trigger implementation and add a setter for the polling interval.
I leave here the changes made to my configuration:
<int-jdbc:inbound-channel-adapter id="bancsAdapter" channel="target"
        query="${int.bancs.poll.query}" update="${int.bancs.update.query}"
        data-source="bancsMockDB">
    <int:poller trigger="dynamicTrigger" />
</int-jdbc:inbound-channel-adapter>
<bean id="dynamicTrigger" class="directlabs.integration.DynamicTrigger">
<constructor-arg value="5000" />
</bean>
So for throttling, you only need to do the following:
applicationContext.getBean("dynamicTrigger",DynamicTrigger.class).setPeriod(1000);
The implementation of the DynamicTrigger can be found here
The original comments from the Spring team members can be found here.
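For reference, the core of such a trigger can be sketched as below. The real class would implement org.springframework.scheduling.Trigger and read the last completion time from the TriggerContext; here the scheduling arithmetic is shown standalone (no Spring on the classpath), and the class and method names are illustrative.

```java
import java.util.Date;
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: in the real class this logic lives inside
// Trigger.nextExecutionTime(TriggerContext).
class DynamicTrigger {

    private final AtomicLong periodMillis; // mutable polling interval

    DynamicTrigger(long initialPeriodMillis) {
        this.periodMillis = new AtomicLong(initialPeriodMillis);
    }

    // Called from application code to throttle or speed up polling.
    void setPeriod(long newPeriodMillis) {
        periodMillis.set(newPeriodMillis);
    }

    long getPeriod() {
        return periodMillis.get();
    }

    // Schedule the next poll one period after the last completion
    // (or from "now" on the very first run).
    Date nextExecutionTime(Date lastCompletion) {
        long base = (lastCompletion != null)
                ? lastCompletion.getTime()
                : System.currentTimeMillis();
        return new Date(base + periodMillis.get());
    }
}
```

Because the period is read on every scheduling decision, a call to setPeriod() takes effect on the very next poll without stopping the adapter.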
While space here does not allow for a full example, we created a service that uses the Quartz Scheduler as a triggering mechanism. It accepts an XML document with the Quartz jobs and triggers defined (this Stack Overflow answer describes the process: Use simple xml to drive the Quartz Sheduler).
The input channel accepts the XML used to set the schedules in Quartz; it can then be used to accept dynamic updates of jobs and triggers.
The XML entries in the job-map-data have an "output" channel defined, and one can add other job-map-data entries that can be set in the output message header to allow for routing.
We constantly re-use this Service in many of our Spring Integration contexts.
Hope this helps.