Is there a configurable failure mechanism that can throw an exception if nodes go down from the grid? - GridGain

GridGain has a failover SPI mechanism for handling job failures on nodes.
However, we would like to configure a failure mechanism that throws an exception when one of the configured data nodes goes down.
How can we do this?

Are you trying to prevent failover for your tasks and throw an exception if a node that was in the process of executing a job fails? (I'm not sure I understood you correctly, so please correct me if I'm wrong.)
If I'm right, the easiest way is to configure NeverFailoverSpi, like this:
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
...
<property name="failoverSpi">
<bean class="org.apache.ignite.spi.failover.never.NeverFailoverSpi"/>
</property>
</bean>
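For completeness, if you start the node programmatically instead of via Spring XML, the equivalent Java configuration would look roughly like this (a sketch, assuming you bootstrap the node yourself with Ignition.start()):
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.failover.never.NeverFailoverSpi;

public class NoFailoverNodeStartup {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        // With NeverFailoverSpi, failed jobs are not re-routed to other nodes;
        // the compute call fails with an exception instead.
        cfg.setFailoverSpi(new NeverFailoverSpi());
        Ignite ignite = Ignition.start(cfg);
    }
}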
Another option is to use the IgniteCompute.withNoFailover() method. It's useful if you want to disable failover for a small subset of tasks but still use the default mechanisms for the others. Here is an example:
IgniteCompute compute = ignite.compute().withNoFailover();
// Tasks executed with this compute instance will never fail over.
compute.execute(MyTask1.class, "arg");

Related

Queue job instances in Spring Batch

We have a job running in Spring Batch each weekday, triggered from another system. Sometimes there are several instances of the job to be run on the same day, each one triggered from the other system.
Every job runs for about an hour, and if there are several job instances to be run we experience some problems with the data.
We would like to optimize this step as follows: if no job instance is running, start a new one; if a job instance is already running, put the new one in a queue.
Each job instance must be COMPLETED before the next one is triggered. If one fails, the next one must wait.
The job parameters are an incrementer and a timestamp.
I've Googled a bit but can't find anything that I find useful.
So I wonder if this is doable: to queue job instances in Spring Batch?
If so, how do I do this? I have looked into Spring Integration and the job-launching-gateway, but I don't really see how to implement it; I guess I don't understand how it works. I have tried to read about these things but I still don't understand.
Maybe I have the wrong versions of Spring Batch? Maybe I am missing something?
If you need more information from me please let me know!
Thank you!
We are using spring-core and spring-beans 3.2.5, spring-batch-integration 1.2.2, spring-integration-core 3.0.5, spring-integration-file, -http, -sftp, -stream 2.0.3
Well, if you are OK with having Spring Integration in your application alongside Spring Batch, it really would be a great idea to leverage the job-launching-gateway capability.
Right, you can place your tasks into a queue - essentially a QueueChannel.
The endpoint that polls that channel can be configured with max-messages-per-poll="1" to take only one task at a time from the internal queue.
When you have polled one message, send it to the job-launching-gateway and, at the same time, send a command to the Control Bus component to stop that polling endpoint, so that it does not touch the other messages in the queue until the current job finishes. When the job is COMPLETED, you can send one more control message to start that polling endpoint again.
Be sure that you use all the Spring Integration modules in the same version: spring-integration-core 3.0.5 and spring-integration-file, -http, -sftp, -stream 3.0.5 as well.
If you still require an answer, one could use a ThreadPoolTaskExecutor with a core size of 1, a max size of 1, and whatever queue size you desire.
i.e.
<bean id="jobLauncherTaskExecutor"
class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
<property name="corePoolSize" value="1" />
<property name="maxPoolSize" value="1" />
<property name="queueCapacity" value="200" />
</bean>
and then pass that to the SimpleJobLauncher
i.e.
<bean id="jobLauncher" class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
<property name="taskExecutor" ref="jobLauncherTaskExecutor" />
</bean>
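With this setup, jobLauncher.run(...) hands each launch to the single-threaded executor and returns right away, so additional launches simply wait in the executor's queue until the running job instance finishes. A rough sketch of triggering a launch is below; the context file name, the job bean name, and the parameter keys are illustrative assumptions, not taken from the question:
import java.util.Date;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class QueuedJobTrigger {
    public static void main(String[] args) throws Exception {
        // Hypothetical context file and job bean name, used only for illustration.
        ApplicationContext ctx = new ClassPathXmlApplicationContext("batch-context.xml");
        JobLauncher jobLauncher = ctx.getBean("jobLauncher", JobLauncher.class);
        Job job = ctx.getBean("myJob", Job.class);

        // Incrementer/timestamp-style parameters, as described in the question.
        JobParameters params = new JobParametersBuilder()
                .addLong("run.id", System.currentTimeMillis())
                .addDate("timestamp", new Date())
                .toJobParameters();

        // Returns as soon as the launch is queued on the single-threaded executor;
        // the actual execution waits until the previous job instance has finished.
        jobLauncher.run(job, params);
    }
}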

JMS connections exhausted using WebSphere MQ

I have configured a CachingConnectionFactory that wraps an MQTopicConnectionFactory and an MQQueueConnectionFactory, with the cache size set to 10 for each.
These are then used in several jms:outbound-channel-adapter or jms:message-driven-channel-adapter elements as part of various Spring Integration workflows in my application.
I have noticed that once in a while the connection count on the MQ channel reaches the maximum allowed (about 1000), at which point the process stops functioning. This is a serious problem for a production application.
Bringing the application down does not reduce the connection count, so it looks like there are orphaned connections on the MQ side? I am not sure if I am missing anything in my Spring JMS / SI configuration that could resolve this issue; any help would be highly appreciated.
Also, I would like to log connection opens and closes from the application but don't see a way to do that.
<bean id="mqQcf" class="com.ibm.mq.jms.MQQueueConnectionFactory">
//all that it needs host/port/ queue manager /channel
</bean>
<bean id="qcf" class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref=" mqQcf "/>
<property name="sessionCacheSize" value="10"/>
</bean>
<bean id="mqTcf" class="com.ibm.mq.jms.MQTopicConnectionFactory">
//all that it needs host/port/ queue manager /channel
</bean>
<bean id="tcf" class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref=" mqTcf "/>
<property name="sessionCacheSize" value="10"/>
</bean>
//Qcf and tcf are than used in spring integration configuration as required
You really need to show your configuration, but the Spring CachingConnectionFactory only creates a single connection that is shared by all sessions. Turning on INFO logging for the CCF category emits this log line when a new connection is created...
if (logger.isInfoEnabled()) {
    logger.info("Established shared JMS Connection: " + this.target);
}
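To address the side question about logging connection opens and closes: the simplest option is to raise the log level for the org.springframework.jms.connection category, either in your logging configuration or programmatically. A minimal sketch assuming log4j 1.x, which was typical with these Spring versions:
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class JmsConnectionLogging {
    public static void enable() {
        // At INFO, the connection factories log lines such as
        // "Established shared JMS Connection: ..." whenever the shared
        // connection is (re)created.
        Logger.getLogger("org.springframework.jms.connection").setLevel(Level.INFO);
    }
}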
EDIT:
There's nothing in your config that stands out. As I said, each CCF will have at most 1 connection open at a time.
One possibility, if you have idle times, is that the network (a switch or firewall) might be silently dropping connections without telling the client or server. The next time the client tries to use its connection it will fail and create a new one but the server may never find out that the old one is dead.
Typically, for such situations, enabling heartbeats or keepalives would keep the connection active (or at least allow the server to know it's dead).
I was debugging a similar issue in my application regarding the number of open output counts in MQ when only one connection is opened by the connection factory.
The open output count in MQ Explorer is the number of connection handles created by the IBM MQ classes. Per the IBM documentation, a Session object encapsulates an IBM MQ connection handle, which therefore defines the transactional scope of the session.
Since the session cache size was 10 in my application, there were 10 IBM MQ connection handles created (one for each session) that stayed open for days with the handle state inactive.
More info can be found at:
https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.dev.doc/q031960_.htm
As Gary Russell mentioned, Spring doesn't provide a way to configure timeouts for these idle connections. IBM has built-in properties on MQConnectionFactory which can be configured to set up reconnect timeouts.
More info can be found at:
https://www.ibm.com/developerworks/community/blogs/messaging/entry/simplify_your_wmq_jms_client_with_automatic_client_reconnection19?lang=en
Reconnect on exception is true by default for the CCF, so care should be taken if IBM throws an exception after the timeout interval. I am not sure if there is a maximum number of reconnect attempts before an exception is thrown in the CCF.

Using Control bus to stop message-driven-channel-adapter that uses transactional session

My requirement is to use a transactional session with message-driven-channel-adapter (JmsMessageDrivenEndpoint). I am able to set up the configuration by using sessionTransacted = true for the DefaultMessageListenerContainer.
Workflow: receive a message -> call the service activator -> service activator calls the DAO class.
On a successful commit to the database, a commit() is called by the Spring framework, and on any runtime exception a rollback() is called, which works just fine. When a rollback happens, the JMS broker sends the message back to my application again.
For a specific type of exception in the DAO, I want to add a message header (i.e. a redelivery time) so that the JMS broker will not send the message again right away. How can I do that?
For another specific type of exception in the DAO, I want to use the control bus to stop the endpoint (message-driven-channel-adapter) and, before stopping it, roll back the previous transaction. How can I do that?
Can anyone help me out?
There is no mystery in how to use the Control Bus to start/stop endpoints:
<int:control-bus input-channel="controlChannel"/>

<int-jms:message-driven-channel-adapter id="jmsInboundEndpoint"/>

<int:transformer input-channel="stopJmsInboundEndpointChannel"
                 output-channel="controlChannel"
                 expression="'@jmsInboundEndpoint.stop()'"/>
Or you can send the same command string to the controlChannel from any place in your code.
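For example, sending the stop command from code could look like this (a sketch assuming Spring Integration 4.x or later, where the message classes come from spring-messaging; on the 3.x line the equivalent types live under org.springframework.integration):
import org.springframework.context.ApplicationContext;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.GenericMessage;

public class StopAdapterCommand {
    public static void stopJmsAdapter(ApplicationContext ctx) {
        MessageChannel controlChannel = ctx.getBean("controlChannel", MessageChannel.class);
        // The Control Bus evaluates the payload as a SpEL command against the
        // referenced bean, here stopping the message-driven channel adapter.
        controlChannel.send(new GenericMessage<String>("@jmsInboundEndpoint.stop()"));
    }
}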
But that does not mean the last transaction will be rolled back; it depends on your 'unit of work' (in other words, the behaviour of your service).
However, at the same time that you send the 'stop' command, you can mark the current transaction for rollback:
TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
Your other question, about 'adding some message header', goes against Messaging principles in general.
If you change the message it becomes a new one, and you can't roll a message back to the queue with new info attached.
Of course, you can do it anyway and end up with a new message, but then you should resend it, not roll back. So you have to commit the transaction anyway and send that new message somewhere (possibly to the same queue), and it will be a new message as far as both the broker and your application are concerned. Once more: for this case you have to commit the transaction.
I'm not sure this is very clear or that I'm on the right track with my answer, but I hope it helps you a bit.
You cannot modify the message (add a header) before a rollback. You could, of course, requeue it as a new message after catching the exception. Some brokers (e.g. ActiveMQ) provide a back-off redelivery policy after a rollback; that might be a better solution if your broker supports it.
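For illustration only, that kind of back-off is configured on the ActiveMQ client connection factory roughly as follows (a sketch assuming the standard activemq-client library; it only applies if your broker is ActiveMQ):
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class BackOffRedeliveryConfig {
    public static ActiveMQConnectionFactory connectionFactory(String brokerUrl) {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(brokerUrl);
        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        // Wait 5 seconds before the first redelivery, back off exponentially,
        // and give up (routing to the DLQ) after 6 attempts.
        policy.setInitialRedeliveryDelay(5000);
        policy.setUseExponentialBackOff(true);
        policy.setMaximumRedeliveries(6);
        return factory;
    }
}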
You can use the control bus to stop the container, but you will probably have to do it asynchronously (invoke the stop on another thread, e.g. by using an ExecutorChannel on the control bus). Otherwise, depending on your environment you might have problems with the stop waiting for the container thread to exit, so you shouldn't execute the stop on the container thread itself.
Best thing to do is experiment.
Thanks Gary and Artem. The solution is working. I am using the below configuration:
<jms:message-driven-channel-adapter id="jmsMessageDrivenChannelAdapter"
        connection-factory="connectionFactory"
        destination="destination"
        transaction-manager="jmsTransactionManager"
        channel="serviceChannel"
        error-channel="ultimateErrorChannel" />

<si:service-activator input-channel="ultimateErrorChannel" output-channel="controlChannel">
    <bean class="play.spring.integration.TestErrorHandler">
        <property name="adapterNeedToStop" value="jmsMessageDrivenChannelAdapter" />
        <property name="exceptionWhenNeedToStop" value="play.spring.integration.ShutdownException" />
    </bean>
</si:service-activator>

<si:channel id="controlChannel">
    <si:dispatcher task-executor="controlBusExecutor" />
</si:channel>

<task:executor id="controlBusExecutor" pool-size="10" queue-capacity="50" />

<si:control-bus input-channel="controlChannel" />
Now my question is: if I want to stop multiple inbound adapters, how can I send a single message to the control-bus for all these adapters?
I am going to study SpEL. I would appreciate it if someone who already knows could share how.
Thanks

UnavailableException for the CassandraLog4net appender

I want to develop a logging technique using the CassandraLog4net appender. I am getting an UnavailableException.
Can you tell me whether I have to create a keyspace or database before running this code?
Also, I am not able to use nodetool. When I click on it, it disappears again.
What changes should I make?
Please find the configuration details of the CassandraLog4netAppender below.
<KeyspaceName value="Logging" />
<ColumnFamily value="LogEntries" />
<PlacementStrategy value="org.apache.cassandra.locator.NetworkTopologyStrategy" />
<StrategyOptions value="Datacentre1:1" />
<ReplicationFactor value="1" />
<ConsistencyLevel value="QUORUM" />
<MaxBufferedRows value="1" />
UnavailableException means there aren't enough replicas available to satisfy your query. From your configuration I see a lot of inconsistency in your cluster config. Your log4net appender strategy options point to "Datacentre1"; your topology file lists a bunch of machines in "DC1", "DC2", and "DC3" with multiple racks; your keyspace is set up with only one DC called "DC1"; nodetool shows a single node listening on 127.0.0.1 (which doesn't correlate to any of your configured machines). So you're getting UnavailableException because you're asking for something that doesn't exist. You need to have a consistent configuration across the various pieces.

Problems while trying to modify polling rate on runtime using Spring Integration

Having defined a channel adapter as:
<int:channel id="target">
<int:queue />
</int:channel>
<int-jdbc:inbound-channel-adapter id="adapter" channel="target" query="${int.poll.query}" update="${int.update.query}" data-source="mock-datasource">
<int:poller fixed-rate="5000"/>
</int-jdbc:inbound-channel-adapter>
I wonder why I cannot modify the polling rate at runtime, as follows:
SourcePollingChannelAdapter adapter = applicationContext.getBean("adapter",SourcePollingChannelAdapter.class);
adapter.setTrigger(new PeriodicTrigger(1000));
When I debug this solution, I can see that the adapter has the new trigger attached to it, but the polling rate remains unchanged (every 5 seconds). I also tried to stop() and start() the adapter, with similar luck.
Can anyone point out what I am doing wrong?
Thanks
[RESOLVED]
It has been confirmed by members of the Spring team that a trigger cannot be modified at runtime. So if you want to modify the polling rate dynamically, for example to throttle inbound messages, you will have to roll your own Trigger implementation and add a setter for the polling interval.
I leave here the changes done in my configuration:
<int-jdbc:inbound-channel-adapter id="bancsAdapter" channel="target" query="${int.bancs.poll.query}" update="${int.bancs.update.query}" data-source="bancsMockDB">
<int:poller trigger="dynamicTrigger" />
</int-jdbc:inbound-channel-adapter>
<bean id="dynamicTrigger" class="directlabs.integration.DynamicTrigger">
<constructor-arg value="5000" />
</bean>
So for throttling, you only need to do the following:
applicationContext.getBean("dynamicTrigger",DynamicTrigger.class).setPeriod(1000);
The implementation of the DynamicTrigger can be found here
The original comments from the Spring team members can be found here.
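In case those links are no longer available, here is a rough sketch of what such a resettable trigger could look like; this is an assumption modelled on fixed-delay PeriodicTrigger semantics, not the exact code that was referenced:
import java.util.Date;

import org.springframework.scheduling.Trigger;
import org.springframework.scheduling.TriggerContext;

public class DynamicTrigger implements Trigger {

    private volatile long period;

    public DynamicTrigger(long period) {
        this.period = period;
    }

    // Changing the period takes effect when the next execution time is computed.
    public void setPeriod(long period) {
        this.period = period;
    }

    @Override
    public Date nextExecutionTime(TriggerContext triggerContext) {
        Date lastCompletion = triggerContext.lastCompletionTime();
        if (lastCompletion == null) {
            // First poll: fire one period from now.
            return new Date(System.currentTimeMillis() + this.period);
        }
        return new Date(lastCompletion.getTime() + this.period);
    }
}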
While space here does not allow for a full example, we created a service that uses the Quartz Scheduler as a triggering mechanism. It accepts an XML document with the Quartz jobs and triggers defined (this Stack Overflow question describes the process: Use simple xml to drive the Quartz Sheduler).
The input channel will accept the XML used for setting the schedules in Quartz; that same channel can then be used to accept dynamic updates of jobs and triggers.
The XML entries in the job-map-data will have an "output" channel defined, and one can add other job-map-data entries that can be set in the output message header to allow for routing.
We constantly re-use this Service in many of our Spring Integration contexts.
Hope this helps.
