Issue:
I have one active MDP (jmsIn below) attached to a single queue, and I want to keep the second, clustered MDP on server 2 passive. I require only a single active process because I want to perform aggregation and not lose messages.
I have been reading about the control bus, but since it's a clustered environment, the channel id and the jms:message-driven-channel-adapter id would have the same name on both servers. Is it possible for the control bus to deactivate the adapter on another server using JMX even though they have the same ids? Or could the message-driven-channel-adapter first check whether there is already an active consumer on the queue itself?
Message-driven-channel-adapter Sample Code:
<jms:message-driven-channel-adapter id="jmsIn"
    destination="requestQueue"
    channel="jmsInChannel" />

<channel id="jmsInChannel" />

<beans:beans profile="default">
    <stream:stdout-channel-adapter id="stdout" channel="jmsInChannel" append-newline="true"/>
</beans:beans>

<beans:beans profile="testCase">
    <bridge input-channel="jmsInChannel" output-channel="queueChannel"/>
    <channel id="queueChannel">
        <queue />
    </channel>
</beans:beans>
There is no need to worry about a clustered singleton for the <message-driven-channel-adapter>, because any queuing system (like JMS) delivers a particular message to only one consumer. In other words, regardless of how many processes are listening on the queue, only one of them picks up a given message and processes it.
The transaction semantics of JMS help you avoid losing messages: if an exception is thrown, the transaction rolls back and the message is returned to the queue, where it can be picked up by another consumer.
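For example, marking the adapter from the question as transacted is enough for a failed flow to roll back and for the broker to redeliver the message (a minimal sketch based on the jmsIn adapter above):

<jms:message-driven-channel-adapter id="jmsIn"
    destination="requestQueue"
    channel="jmsInChannel"
    acknowledge="transacted" />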
For the aggregator you can use a distributed, persistent MessageStore, where all your consumers send their messages to the same <aggregator> component, which works against the shared store to perform its aggregation logic.
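A minimal sketch of that idea, assuming a shared database reachable from every node (the dataSource bean, the JdbcMessageStore wiring and the channel names here are illustrative, not part of the original configuration):

<beans:bean id="jdbcMessageStore" class="org.springframework.integration.jdbc.JdbcMessageStore">
    <beans:constructor-arg ref="dataSource"/>
</beans:bean>

<aggregator input-channel="jmsInChannel" output-channel="aggregatedChannel"
    message-store="jdbcMessageStore"/>

Because every node talks to the same store, it does not matter which consumer receives which message; the group completes on whichever node receives the last message.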
Related
<int:channel id="emailInputChannel"/>
<!-- Email Poller. Only one poller thread -->
<task:executor id="emailPollingExecutor" pool-size="1" />
<int-mail:inbound-channel-adapter id="pop3EmailAdapter"
        store-uri="pop3://${pop3.user}:${pop3.pwd}#${pop3.server.host}/Inbox"
        channel="emailInputChannel" should-delete-messages="true"
        auto-startup="true" java-mail-properties="javaMailProperties">
    <int:poller max-messages-per-poll="1" fixed-delay="${email.poller.delay}" task-executor="emailPollingExecutor"/>
</int-mail:inbound-channel-adapter>

<!-- Java Mail POP3 properties -->
<util:properties id="javaMailProperties">
    <beans:prop key="mail.debug">true</beans:prop>
    <beans:prop key="mail.pop3.port">${pop3.server.port}</beans:prop>
</util:properties>
This application polls for emails whose file attachments contain the data to process. The attachments typically arrive a few times a day and are relatively sporadic. Since the files contain data for bulk loading, we resorted to this configuration with a single poller for the inbound POP3 mail adapter: having multiple pollers caused duplicate invocations that pulled the same email while another poller was still processing it. With this configuration, however, the single poller hangs after some time with no indication of the problem in the logs. Please review what is wrong with this configuration. Also, is there an alternative way to trigger the email adapter (e.g. cron at a periodic interval)? I am using Spring Integration 2.1.
A hung poller is most likely caused by the thread being stuck in user code. I see you have mail.debug=true; if that shows no activity, then a hung thread is probably the cause. Use jstack to take a thread dump.
Yes, you can use a cron expression but that's unlikely to change things.
2.1 is extremely old but I still think a hung thread is the cause.
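If you do want a schedule rather than a fixed delay, the same adapter can be driven by a cron trigger on the poller; a sketch (the cron expression here is only an example):

<int-mail:inbound-channel-adapter id="pop3EmailAdapter"
        store-uri="pop3://${pop3.user}:${pop3.pwd}#${pop3.server.host}/Inbox"
        channel="emailInputChannel" should-delete-messages="true"
        auto-startup="true" java-mail-properties="javaMailProperties">
    <!-- poll every 15 minutes instead of using fixed-delay -->
    <int:poller cron="0 0/15 * * * ?" max-messages-per-poll="1" task-executor="emailPollingExecutor"/>
</int-mail:inbound-channel-adapter>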
Requirement:
We need to retrieve a message from a JMS queue (published by a different application) and persist the message to our JMS queue. The entire flow needs to be transactional, so that if a message cannot be persisted to the downstream JMS queue, the message received from the upstream JMS queue is not acknowledged.
My configuration is as below
<int-jms:message-driven-channel-adapter id="MessageDrivenAdapter"
    channel="jmsMessageChannel"
    destination="sourceDestination"
    connection-factory="CF1"
    acknowledge="transacted" />

<int:channel id="jmsMessageChannel" />

<int-jms:outbound-channel-adapter id="sendsomemsg"
    channel="jmsMessageChannel"
    destination="finalDestination"
    connection-factory="CF2"
    session-transacted="true" />
Do I need to use a JmsTransactionManager in this scenario, or should the above configuration suffice? We can handle duplicate messages, so I believe we do not need an XA transaction.
You definitely need an XA transaction here because you are using several separate transactional resources. Even if they are both JMS, that doesn't mean they can share a transaction.
On the other hand, you can try a solution like ChainedTransactionManager and chain two JmsTransactionManagers - one for each of your JMS resources.
More info is in Dave Syer's article.
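A sketch of that approach, reusing the CF1/CF2 connection factories from the question (ChainedTransactionManager comes from spring-data-commons; this is best-efforts one-phase commit, not XA, so duplicates remain possible after a crash):

<bean id="upstreamTxManager" class="org.springframework.jms.connection.JmsTransactionManager">
    <property name="connectionFactory" ref="CF1"/>
</bean>

<bean id="downstreamTxManager" class="org.springframework.jms.connection.JmsTransactionManager">
    <property name="connectionFactory" ref="CF2"/>
</bean>

<!-- Commits run in reverse declaration order: the CF2 send commits first, the CF1 receive last,
     so a failed downstream send still rolls back the upstream receive and the message is redelivered. -->
<bean id="chainedTxManager" class="org.springframework.data.transaction.ChainedTransactionManager">
    <constructor-arg>
        <list>
            <ref bean="upstreamTxManager"/>
            <ref bean="downstreamTxManager"/>
        </list>
    </constructor-arg>
</bean>

The chained manager would then be referenced from the inbound adapter, e.g. transaction-manager="chainedTxManager".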
As long as you don't hand off to another thread (queue channel, task executor), and both components use the same connection factory, the outbound operation will run in the same transaction as the inbound - the underlying JmsTemplate in the outbound adapter will use the same session that the listener container delivered the message on.
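As a sketch, if both destinations live on the same broker, the requirement above could therefore be covered with one shared connection factory and a plain direct channel (the sharedCF name is illustrative):

<int-jms:message-driven-channel-adapter id="MessageDrivenAdapter"
    channel="jmsMessageChannel"
    destination="sourceDestination"
    connection-factory="sharedCF"
    acknowledge="transacted" />

<!-- direct channel: no thread hand-off, the send happens on the listener thread -->
<int:channel id="jmsMessageChannel" />

<!-- reuses the session the listener container received the message on -->
<int-jms:outbound-channel-adapter id="sendsomemsg"
    channel="jmsMessageChannel"
    destination="finalDestination"
    connection-factory="sharedCF" />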
I have a scenario where I would like to separate the flow into a number of transactions. I am using queue channels based on a JdbcChannelMessageStore to do so, and that works excellently; it's robust and it just works. But because these JDBC-backed queues (the database) are polled by the executors, I get a natural limit on throughput (I don't really want to configure the poller to poll every millisecond). So my question is this: is there a way for the queue channel to notify the consumer of that channel that a new message has been queued, and then trigger the "poller" to look in the database to see what has to be consumed?
So the simple scenario:
1. A queue channel where someone puts a message
2. A service activator that will process that message (in parallel)
<int:channel id="InputChannel">
<int:queue message-store="jdbcChannelStore"/>
</int:channel>
<task:executor id="TradeTransformerExecutor" pool-size="2-20" queue-capacity="20" rejection-policy="CALLER_RUNS"/>
<int:service-activator id="TradeConverter" input-channel="InputChannel" output-channel="TradeChannel" method="transform">
<beans:bean class="com.service.TradeConverter"/>
<int:poller task-executor="TradeTransformerExecutor" max-messages-per-poll="-1" receive-timeout="0" fixed-rate="100">
<int:transactional transaction-manager="dbTransactionManager"/>
</int:poller>
</int:service-activator>
<int:channel id="TradeChannel"></int:channel>
So how could I make this InputChannel notify the poller (or something else) to start executing the message right away and not wait for 100ms?
Also I don't want to use DirectChannels as I do want some persistence between defined flows for robustness reasons.
Cheers guys.
Jonas
There's no way to change a trigger on demand; you can have a dynamic trigger, but changes only take effect after the next poll.
Instead of using a JDBC-backed channel, consider using an outbound channel adapter to store the data and a JDBC outbound gateway (with just a query, no update).
Use a pub-sub channel and, after storing, send the message (perhaps via a bridged ExecutorChannel) to the gateway.
Alternatively, simply inject your queue channel into a service and invoke it via a <service-activator/>. You would need a pub-sub channel bridged to your queue channel, with the second subscriber being a service activator which, when it receives the message, calls receive() on the channel.
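A rough sketch of that second alternative (the QueueDrainer class and the pub-sub channel name are hypothetical; the order attributes make sure the message is persisted before the drainer is nudged):

<int:publish-subscribe-channel id="tradeInputPubSub"/>

<!-- subscriber 1: persist the message into the JDBC-backed queue channel first -->
<int:bridge input-channel="tradeInputPubSub" output-channel="InputChannel" order="1"/>

<!-- subscriber 2: nudge a service that calls receive() on the queue channel -->
<int:service-activator input-channel="tradeInputPubSub" ref="queueDrainer" method="drain" order="2"/>

<beans:bean id="queueDrainer" class="com.example.QueueDrainer">
    <beans:property name="queueChannel" ref="InputChannel"/>
</beans:bean>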
Finally, consider using a JMS- or RabbitMQ-backed channel for high-performance persistence instead - they are much better at queueing than a database.
My requirement is to use a transacted session with the message-driven-channel-adapter (JmsMessageDrivenEndpoint). I am able to set up the configuration by using sessionTransacted = true on the DefaultMessageListenerContainer.
Work flow: receive a message -> call the service activator -> service activator calls dao class
On a successful commit to the database, commit() is called by the Spring framework, and on any runtime exception rollback() is called, which works just fine. When a rollback happens, the JMS broker sends the message back to my application.
For a specific type of exception in dao I want to add a message header (i.e redelivery time) so that JMS Broker will not send the message again right away. How can I do it?
For another specific type of exception in dao I want to use control bus to stop the end point (message-driven-channel-adapter) and before stopping it rollback the previous transaction. How can I do it?
Can anyone help me out?
There is nothing special about using the Control Bus to start/stop endpoints:
<int:control-bus input-channel="controlChannel"/>

<int-jms:message-driven-channel-adapter id="jmsInboundEndpoint"/>

<int:transformer input-channel="stopJmsInboundEndpointChannel"
    output-channel="controlChannel"
    expression="'@jmsInboundEndpoint.stop()'"/>
Or you can send the same command string to the controlChannel from anywhere in your code.
But that doesn't mean the last transaction will be rolled back; that depends on your 'unit of work' (in other words, the behaviour of your service).
However, at the same time as you send the 'stop' command, you can mark the current transaction for rollback:
TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
Your other question, about 'adding some message header', goes against messaging principles.
If you change the message it becomes a new one, and you can't roll a message back to the queue with some new info.
Of course, you can do it anyway and end up with a new message, but then you should resend it, not roll back. So you would have to commit the transaction anyway and send that new message somewhere (possibly the same queue), but it will be a new message both for the broker and for your application. And one more time: for this case you have to commit the transaction.
Not sure that this is very clear or that my answer goes in the right direction, but I hope it helps you a bit.
You cannot modify the message (add a header) before rollback. You could, of course, requeue it as a new message after catching the exception. Some brokers (e.g. ActiveMQ) provide a back-off retry policy after a rollback. That might be a better solution if your broker supports it.
You can use the control bus to stop the container, but you will probably have to do it asynchronously (invoke the stop on another thread, e.g. by using an ExecutorChannel on the control bus). Otherwise, depending on your environment you might have problems with the stop waiting for the container thread to exit, so you shouldn't execute the stop on the container thread itself.
Best thing to do is experiment.
Thanks Gary and Artem. The solution is working. I am using the below configuration:
<jms:message-driven-channel-adapter id="jmsMessageDrivenChannelAdapter"
    connection-factory="connectionFactory"
    destination="destination" transaction-manager="jmsTransactionManager"
    channel="serviceChannel" error-channel="ultimateErrorChannel" />

<si:service-activator input-channel="ultimateErrorChannel" output-channel="controlChannel">
    <bean class="play.spring.integration.TestErrorHandler">
        <property name="adapterNeedToStop" value="jmsMessageDrivenChannelAdapter" />
        <property name="exceptionWhenNeedToStop" value="play.spring.integration.ShutdownException" />
    </bean>
</si:service-activator>

<si:channel id="controlChannel">
    <si:dispatcher task-executor="controlBusExecutor" />
</si:channel>

<task:executor id="controlBusExecutor" pool-size="10" queue-capacity="50" />

<si:control-bus input-channel="controlChannel" />
Now my question is: if I want to stop multiple inbound adapters, how can I send a single message to the control bus for all these adapters?
I am going to study SpEL; I would appreciate it if someone already knows how to do this.
Thanks
Having defined a channel adapter as:
<int:channel id="target">
<int:queue />
</int:channel>
<int-jdbc:inbound-channel-adapter id="adapter" channel="target" query="${int.poll.query}" update="${int.update.query}" data-source="mock-datasource">
<int:poller fixed-rate="5000"/>
</int-jdbc:inbound-channel-adapter>
I wonder why I cannot modify the polling rate on runtime, as follows:
SourcePollingChannelAdapter adapter = applicationContext.getBean("adapter",SourcePollingChannelAdapter.class);
adapter.setTrigger(new PeriodicTrigger(1000));
When I debug this solution, I can see that the adapter has the new trigger attached to it; however, the polling rate remains unchanged (every 5 secs). I also tried to stop() and start() the adapter, with similar luck.
Can anyone point out what I am doing wrong?
Thanks
[RESOLVED]
It has been confirmed by members of the Spring team that a trigger cannot be modified at runtime. So if you want to modify the polling rate dynamically, for example to throttle inbound messages, you will have to roll your own Trigger implementation and add a setter for the polling interval.
I leave here the changes done in my configuration:
<int-jdbc:inbound-channel-adapter id="bancsAdapter" channel="target"
        query="${int.bancs.poll.query}" update="${int.bancs.update.query}" data-source="bancsMockDB">
    <int:poller trigger="dynamicTrigger" />
</int-jdbc:inbound-channel-adapter>

<bean id="dynamicTrigger" class="directlabs.integration.DynamicTrigger">
    <constructor-arg value="5000" />
</bean>
So for throttling, you only need to do the following:
applicationContext.getBean("dynamicTrigger",DynamicTrigger.class).setPeriod(1000);
The implementation of the DynamicTrigger can be found here
The original comments from the Spring team members can be found here.
While space here does not allow for a full example, we created a service that uses the Quartz Scheduler as a triggering mechanism. It accepts an XML document with the Quartz jobs and triggers defined (this Stack Overflow question describes the process: Use simple xml to drive the Quartz Scheduler).
The input channel accepts the XML used to set the schedules in Quartz; it can then also be used to accept dynamic updates of jobs and triggers.
The XML entries in the job-map-data have an "output" channel defined, and one can add other job-map-data entries to be set in the output message header to allow for routing, for example:
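Purely as an illustration (the job class, channel name and cron expression below are hypothetical, and the layout follows Quartz's standard job-scheduling-data XML format), such an input document might look like:

<job-scheduling-data xmlns="http://www.quartz-scheduler.org/xml/JobSchedulingData" version="2.0">
    <schedule>
        <job>
            <name>emailPollJob</name>
            <job-class>com.example.ChannelTriggerJob</job-class>
            <job-data-map>
                <!-- the triggering service copies these entries into message headers for routing -->
                <entry><key>output</key><value>emailInputChannel</value></entry>
            </job-data-map>
        </job>
        <trigger>
            <cron>
                <name>emailPollTrigger</name>
                <job-name>emailPollJob</job-name>
                <cron-expression>0 0/15 * * * ?</cron-expression>
            </cron>
        </trigger>
    </schedule>
</job-scheduling-data>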
We constantly re-use this Service in many of our Spring Integration contexts.
Hope this helps.