transaction with jms:inbound-channel-adapter - spring-integration

I want to use jms:inbound-channel-adapter to read a JMS message and apply some processing; if the processing throws an exception, I want the broker to keep the message.
<int-jms:inbound-channel-adapter
        id="jmsAdapter"
        session-transacted="true"
        destination="destination"
        connection-factory="cachedConnectionFactory"
        channel="inboundChannel"
        auto-startup="false">
    <int:poller fixed-delay="100"/>
</int-jms:inbound-channel-adapter>
I looked at the code of JmsTemplate.doReceive:
Message message = doReceive(consumer, timeout);
if (session.getTransacted()) {
    // Commit necessary - but avoid commit call within a JTA transaction.
    if (isSessionLocallyTransacted(session)) {
        // Transacted session created by this template -> commit.
        JmsUtils.commitIfNecessary(session);
    }
}
else if (isClientAcknowledge(session)) {
    // Manually acknowledge message, if any.
    if (message != null) {
        message.acknowledge();
    }
}
So the session is committed (or the message acknowledged) immediately after the read, before my processing runs.
How can I achieve this?
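One way to get that behavior, and essentially what the first related answer below suggests, is to make the poller transactional with a JmsTransactionManager, so the receive is committed only after the downstream flow succeeds and is rolled back to the broker if it throws. A minimal sketch in the Java DSL (the bean names and Java-config style are assumptions; the destination, channel and connection factory come from the XML above):

@Bean
public JmsTransactionManager jmsTransactionManager(ConnectionFactory cachedConnectionFactory) {
    // local JMS transaction manager for the same connection factory the adapter uses
    return new JmsTransactionManager(cachedConnectionFactory);
}

@Bean
public IntegrationFlow jmsPollingFlow(ConnectionFactory cachedConnectionFactory,
                                      JmsTransactionManager jmsTransactionManager) {
    return IntegrationFlows
            .from(Jms.inboundAdapter(cachedConnectionFactory).destination("destination"),
                  // each poll (receive + downstream handling) runs in one JMS transaction;
                  // an exception anywhere in the flow rolls the message back to the broker
                  e -> e.poller(p -> p.fixedDelay(100).transactional(jmsTransactionManager)))
            .channel("inboundChannel")
            .get();
}

(The same effect can also be achieved without a poller by using a message-driven channel adapter with a transacted session.)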

Related

Spring Integration - How to make a message survive in a JDBC message store in case of an error and/or shutdown in the consuming handler

I have a flow that stores messages into a JDBC message store:
...
.channel { c -> c.queue(jdbcChannelMessageStore, "persist") }
.handle(MessageHandler {
    Thread.sleep(3000)
    throw RuntimeException()
}) { e -> e.poller { it.fixedDelay(1000) } }
How do I make sure that the message is not deleted until the handler successfully finishes?
Make the poller .transactional() so that the downstream flow runs in a transaction; the removal won't be committed until the flow ends (or hands off to another thread).
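A minimal sketch of that change in the Java DSL (transactionManager here is an assumption: a JDBC PlatformTransactionManager for the same DataSource the JdbcChannelMessageStore uses):

@Bean
public IntegrationFlow persistedFlow(JdbcChannelMessageStore jdbcChannelMessageStore,
                                     PlatformTransactionManager transactionManager) {
    return IntegrationFlows
            .from("input")
            .channel(c -> c.queue(jdbcChannelMessageStore, "persist"))
            .handle((MessageHandler) m -> {
                        // simulated failing handler
                        throw new RuntimeException("boom");
                    },
                    // the poll and the handler run in one JDBC transaction, so the row is
                    // only removed from the message store when the transaction commits
                    e -> e.poller(p -> p.fixedDelay(1000).transactional(transactionManager)))
            .get();
}

If the handler throws, the transaction rolls back and the message stays in the store for the next poll; if it completes, the removal is committed.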

How to handle errors after message has been handed off to QueueChannel?

I have 10 rabbitMQ queues, called event.q.0, event.q.2, <...>, event.q.9. Each of these queues receives messages routed from the event.consistent-hash exchange. I want to build a fault tolerant solution that will consume messages for a specific event in a sequential manner, since ordering is important. For this, I have set up a flow that listens to those queues and routes messages based on event ID to a specific worker flow. Worker flows work based on queue channels, so that should guarantee FIFO order for an event with a specific ID. I have come up with the following setup:
@Bean
public IntegrationFlow eventConsumerFlow(RabbitTemplate rabbitTemplate, Advice retryAdvice) {
    return IntegrationFlows
            .from(
                Amqp.inboundAdapter(new SimpleMessageListenerContainer(rabbitTemplate.getConnectionFactory()))
                    .configureContainer(c -> c
                        .adviceChain(retryAdvice())
                        .addQueueNames(queueNames)
                        .prefetchCount(amqpProperties.getPreMatch().getDefinition().getQueues().getEvent().getPrefetch())
                    )
                    .messageConverter(rabbitTemplate.getMessageConverter())
            )
            .<Event, String>route(e -> String.format("worker-input-%d", e.getId() % numberOfWorkers))
            .get();
}

private Advice deadLetterAdvice() {
    return RetryInterceptorBuilder
            .stateless()
            .maxAttempts(3)
            .recoverer(recoverer())
            .backOffPolicy(backOffPolicy())
            .build();
}

private ExponentialBackOffPolicy backOffPolicy() {
    ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setInitialInterval(1000);
    backOffPolicy.setMultiplier(3.0);
    backOffPolicy.setMaxInterval(15000);
    return backOffPolicy;
}

private MessageRecoverer recoverer() {
    return new RepublishMessageRecoverer(
            rabbitTemplate,
            "error.exchange.dlx"
    );
}

@PostConstruct
public void init() {
    for (int i = 0; i < numberOfWorkers; i++) {
        flowContext.registration(workerFlow(MessageChannels.queue(String.format("worker-input-%d", i), queueCapacity).get()))
                .autoStartup(false)
                .id(String.format("worker-flow-%d", i))
                .register();
    }
}

private IntegrationFlow workerFlow(QueueChannel channel) {
    return IntegrationFlows
            .from(channel)
            .<Object, Class<?>>route(Object::getClass, m -> m
                .resolutionRequired(true)
                .defaultOutputToParentFlow()
                .subFlowMapping(EventOne.class, s -> s.handle(oneHandler))
                .subFlowMapping(EventTwo.class, s -> s.handle(anotherHandler))
            )
            .get();
}
Now, when, let's say, an error happens in eventConsumerFlow, the retry mechanism works as expected, but when an error happens in workerFlow, the retry doesn't work anymore and the message doesn't get sent to the dead letter exchange. I assume this is because once the message is handed off to a QueueChannel, it gets acknowledged automatically. How can I make the retry mechanism work in workerFlow as well, so that if an exception happens there, it can retry a couple of times and send the message to the DLX when the retries are exhausted?
If you want resiliency, you shouldn't be using queue channels at all; the message will be acknowledged immediately after it is put in the in-memory queue, and if the server crashes, those messages will be lost.
You should configure a separate adapter for each queue if you want no message loss.
That said, to answer the general question, any errors on downstream flows (including after a queue channel) will be sent to the errorChannel defined on the inbound adapter.
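As a sketch of that suggestion, reusing the names from the question (the loop-registration style mirrors the existing @PostConstruct and is an assumption about how you'd wire it; rabbitTemplate and flowContext are assumed to be injected fields):

@PostConstruct
public void registerEventFlows() {
    for (int i = 0; i < numberOfWorkers; i++) {
        String queueName = String.format("event.q.%d", i);
        IntegrationFlow flow = IntegrationFlows
                .from(Amqp.inboundAdapter(rabbitTemplate.getConnectionFactory(), queueName)
                        .configureContainer(c -> c
                                .adviceChain(retryAdvice())
                                .prefetchCount(1))
                        .messageConverter(rabbitTemplate.getMessageConverter()))
                // no QueueChannel hand-off: an exception here propagates back to the listener
                // container, triggers the retry advice and, once retries are exhausted, the
                // RepublishMessageRecoverer sends the message to the DLX
                .<Object, Class<?>>route(Object::getClass, m -> m
                        .subFlowMapping(EventOne.class, s -> s.handle(oneHandler))
                        .subFlowMapping(EventTwo.class, s -> s.handle(anotherHandler)))
                .get();
        flowContext.registration(flow)
                .id(String.format("event-flow-%d", i))
                .register();
    }
}

With the default single consumer per container, per-queue ordering is preserved, which is what the consistent-hash routing relies on.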

Can I filter an Azure ServiceBusService using node.js SDK?

I have millions of messages in a queue and the first ten million or so are irrelevant. Each message has a sequential ActionId so ideally anything < 10000000 I can just ignore or better yet delete from the queue. What I have so far:
let azure = require("azure");

function processMessage(sb, message) {
    // Deserialize the JSON body into an object representing the ActionRecorded event
    var actionRecorded = JSON.parse(message.body);
    console.log(`processing id: ${actionRecorded.ActionId} from ${actionRecorded.ActionTaken.ActionTakenDate}`);
    if (actionRecorded.ActionId < 10000000) {
        // When done, delete the message from the queue
        console.log(`Deleting message: ${message.brokerProperties.MessageId} with ActionId: ${actionRecorded.ActionId}`);
        sb.deleteMessage(message, function(deleteError, response) {
            if (deleteError) {
                console.log("Error deleting message: " + message.brokerProperties.MessageId);
            }
        });
    }
    // immediately check for another message
    checkForMessages(sb);
}

function checkForMessages(sb) {
    // Checking for messages
    sb.receiveQueueMessage("my-queue-name", { isPeekLock: true }, function(receiveError, message) {
        if (receiveError && receiveError === "No messages to receive") {
            console.log("No messages left in queue");
            return;
        } else if (receiveError) {
            console.log("Receive error: " + receiveError);
        } else {
            processMessage(sb, message);
        }
    });
}

let connectionString = "Endpoint=sb://<myhub>.servicebus.windows.net/;SharedAccessKeyName=KEYNAME;SharedAccessKey=[mykey]";
let serviceBusService = azure.createServiceBusService(connectionString);
checkForMessages(serviceBusService);
I've tried looking at the docs for withFilter but it doesn't seem like that applies to queues.
I don't have access to create or modify the underlying queue aside from the operations mentioned above, since the queue is provided by a client.
Can I either:
Filter my results that I get from the queue, or
speed up the queue processing somehow?
Filter my results that I get from the queue
As you found, filters as a feature are only applicable to Topics & Subscriptions.
speed up the queue processing somehow
If you were to use the @azure/service-bus package, which is the newer, faster library to work with Service Bus, you could receive the messages in ReceiveAndDelete mode until you reach the message with ActionId 9999999, close that receiver and then create a new receiver in PeekLock mode. For more on these receive modes, see https://learn.microsoft.com/en-us/azure/service-bus-messaging/message-transfers-locks-settlement#settling-receive-operations
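Purely as an illustration of that two-phase approach (sketched here with the azure-messaging-servicebus Java client rather than the @azure/service-bus Node package the answer refers to; the queue name and ActionId threshold come from the question, extractActionId is a hypothetical JSON helper, and connectionString is assumed to be defined):

// Phase 1: drain the irrelevant messages in RECEIVE_AND_DELETE mode (no per-message settlement).
ServiceBusReceiverClient drainReceiver = new ServiceBusClientBuilder()
        .connectionString(connectionString)
        .receiver()
        .queueName("my-queue-name")
        .receiveMode(ServiceBusReceiveMode.RECEIVE_AND_DELETE)
        .buildClient();

boolean done = false;
while (!done) {
    boolean gotAny = false;
    // receive up to 100 messages, waiting at most 5 seconds for a batch
    for (ServiceBusReceivedMessage message : drainReceiver.receiveMessages(100, Duration.ofSeconds(5))) {
        gotAny = true;
        long actionId = extractActionId(message.getBody().toString()); // hypothetical JSON helper
        if (actionId >= 9_999_999L) {   // reached the last irrelevant ActionId
            done = true;
            break;
        }
    }
    if (!gotAny) {
        done = true; // queue drained before reaching the threshold
    }
}
drainReceiver.close();

// Phase 2: process the remaining (relevant) messages with a PEEK_LOCK receiver,
// so each one must be explicitly completed or abandoned.
ServiceBusReceiverClient receiver = new ServiceBusClientBuilder()
        .connectionString(connectionString)
        .receiver()
        .queueName("my-queue-name")
        .receiveMode(ServiceBusReceiveMode.PEEK_LOCK)
        .buildClient();

Because RECEIVE_AND_DELETE settles whole batches on receipt, a few messages just past the threshold may also be consumed; reduce the batch size near the boundary if that matters.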

Complete a message in a dead letter queue on Azure Service Bus

I want to be able to remove selected messages from my deadletter queue.
How is this accomplished?
I constantly get an error:
The operation cannot be completed because the RecieveContext is Null
I have tried every approach I can think of and read about, here is where I am now:
public void DeleteMessageFromDeadletterQueue<T>(string queueName, long sequenceNumber)
{
    var client = GetQueueClient(queueName, true);
    var messages = GetMessages(client);
    foreach (var m in messages)
    {
        if (m.SequenceNumber == sequenceNumber)
        {
            m.Complete();
        }
        else
        {
            m.Abandon();
        }
    }
}

/// <summary>
/// Return a list of all messages in a Queue
/// </summary>
/// <param name="client"></param>
/// <returns></returns>
private IEnumerable<BrokeredMessage> GetMessages(QueueClient client)
{
    var peekedMessages = client.PeekBatch(0, peekedMessageBatchCount).ToList();
    bool getmore = peekedMessages.Count() == peekedMessageBatchCount;
    while (getmore)
    {
        var moreMessages = client.PeekBatch(peekedMessages.Last().SequenceNumber, peekedMessageBatchCount);
        peekedMessages.AddRange(moreMessages);
        getmore = moreMessages.Count() == peekedMessageBatchCount;
    }
    return peekedMessages;
}
Not sure why this seems to be such a difficult task.
The issue here is that you've called PeekBatch, which returns messages that are only peeked. There is no receive context that you can then use to either complete or abandon the message. The Peek and PeekBatch operations only return the messages and do not lock them at all, even if the ReceiveMode is set to PeekLock. They are mostly for skimming the queue; you can't take an action on the messages they return. Note that the docs for both Abandon and Complete state they "must only be called on a message that has been received by using a receiver operating in Peek-Lock ReceiveMode." It's not clear there, but Peek and PeekBatch don't count, since they don't actually get a receive context. This is why it fails when you attempt to call Abandon. If you had actually found the one you were looking for, it would throw a different error when you called Complete.
What you want to do is use a ReceiveBatch operation instead, with the receiver in PeekLock ReceiveMode. This will actually pull a batch of messages back, and when you look through them and find the one you want, you can complete it. When you call Abandon, it will immediately release the messages that aren't the one you want back to the queue.
If your dead letter queue is pretty small, this usually won't be bad. If it is really large, then this approach isn't the most efficient. You're treating the dead letter queue more like a heap and digging through it rather than processing the messages "in order". This isn't uncommon when dealing with dead letter queues, which require manual intervention, but if you have a LOT of them it may be better to have something that processes the dead letter queue into a different type of store where you can more easily find and destroy messages, while still being able to recreate the messages that are okay and push them to a different queue to reprocess.
There may be other options, such as using Defer, if you are manually dead lettering things. See How to use the MessageReceiver.Receive method by sequenceNumber on ServiceBus.
I was unsuccessful with MikeWo's suggestion, because when I used the combination of instantiating the DLQ QueueClient with ReceiveMode.PeekLock and pulling messages with ReceiveBatch, I was using the versions of Receive/ReceiveBatch that request the message by its SequenceNumber.
[aside: in my app, I peek all the messages and list them, and have another handler to re-queue to the main queue a dead-lettered message based on its specific sequence number...]
But the call to Receive(long sequenceNumber) or ReceiveBatch(IEnumerable sequenceNumber) on the DLQ client always throws the exception "Failed to lock one or more specified messages. The message does not exist." (even when I only passed 1 and it is definitely in the queue).
Additionally, for reasons that aren't clear, ReceiveBatch(int messageCount) always returns only the next message in the queue, no matter what value is used as the messageCount.
What finally worked for me was the following:
QueueClient queueClient, deadLetterClient;
GetQueueClients(qname, ReceiveMode.PeekLock, out queueClient, out deadLetterClient);

BrokeredMessage msg = null;
var mapSequenceNumberToBrokeredMessage = new Dictionary<long, BrokeredMessage>();
while (msg == null)
{
#if UseReceive
    var message = deadLetterClient.Receive();
#elif UseReceiveBatch
    var messageEnumerable = deadLetterClient.ReceiveBatch(CnCountOfMessagesToPeek).ToList();
    if ((messageEnumerable == null) || (messageEnumerable.Count == 0))
        break;
    else if (messageEnumerable.Count != 1)
        throw new ApplicationException("Invalid expectation that there'd always be only 1 msg returned by ReceiveBatch");

    // I've yet to get back more than one in the deadletter queue, but...
    var message = messageEnumerable.First();
#endif

    if (message.SequenceNumber == lMessageId)
    {
        msg = message;
        break;
    }
    else if (mapSequenceNumberToBrokeredMessage.ContainsKey(message.SequenceNumber))
    {
        // this means it's started the list over, so we didn't find it...
        break;
    }
    else
        mapSequenceNumberToBrokeredMessage.Add(message.SequenceNumber, message);

    message.Abandon();
}

if (msg == null)
    throw new ApplicationException("Unable to find a message in the deadletter queue with the SequenceNumber: " + msgid);

var strMessage = GetMessage(msg);
var newMsg = new BrokeredMessage(strMessage);
queueClient.Send(newMsg);
msg.Complete();

WMQ transaction rollback using .NET explicit transactions not working

I have used .NET C# code to put messages on a queue and get them back. I have no problem accessing the queue and getting messages. Now I want the get calls to run under a transaction, so I used the explicit transaction option to commit and roll back the messages.
try
{
    MQQueueManager queueManager;
    MQEnvironment.Hostname = hostName;
    MQEnvironment.Channel = channelName;
    MQEnvironment.Port = 1414;
    MQEnvironment.properties.Add(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES);
    queueManager = new MQQueueManager(queueManagerName);

    // obtain a read/write queue reference
    var queue = queueManager.AccessQueue(queueName, MQC.MQOO_INPUT_AS_Q_DEF + MQC.MQOO_INQUIRE + MQC.MQOO_FAIL_IF_QUIESCING);
    IList<string> Messages = new List<string>();

    using (var scope = new CommittableTransaction())
    {
        CommittableTransaction.Current = scope;
        var message = new MQMessage();
        try
        {
            var getMessageOptions = new MQGetMessageOptions();
            getMessageOptions.Options += MQC.MQGMO_SYNCPOINT;
            int i = queue.CurrentDepth;
            queue.Get(message, getMessageOptions);
            Console.WriteLine(message.ReadString(message.MessageLength));
            scope.Rollback();
        }
        catch (MQException mqe)
        {
            if (mqe.ReasonCode == 2033)
            {
                Console.WriteLine("No more message available");
                Console.ReadLine();
                scope.Rollback();
            }
            else
            {
                Console.WriteLine("MQException caught: {0} - {1}", mqe.ReasonCode, mqe.Message);
                Console.ReadLine();
                scope.Rollback();
            }
        }
        CommittableTransaction.Current = null;
    }

    // closing queue
    queue.Close();
    // disconnecting queue manager
    queueManager.Disconnect();
    Console.ReadLine();
}
catch (MQException mqe)
{
    Console.WriteLine("");
    Console.WriteLine("MQException caught: {0} - {1}", mqe.ReasonCode, mqe.Message);
    Console.WriteLine(mqe.StackTrace);
    Console.ReadLine();
}
The first problem I faced was related to access to the System.Dotnet.XARecovery queue. Even though I had access to get messages from the application queue, the program started to fail because of access rights on the recovery queue when the line below was invoked:
queue.Get(message, getMessageOptions);
Then I was granted access to the recovery queue and the access-denied problem was resolved. Now, after getting the message from the queue, the messages are not rolled back after scope.Rollback() is called.
I checked the System.Dotnet.XARecovery queue and the dead letter queue, and there was nothing there either.
Why am I not able to see the rolled-back messages in the WMQ queue?
You have a scope.Commit() after queue.Get(message); after getting the message you are explicitly calling Commit. If the Get is successful, the Commit call tells the queue manager to remove the message from the queue, so there is no chance of the message being rolled back.
EDIT: The GMO_SYNCPOINT option is missing in your code. You need something like this:
MQGetMessageOptions getMessageOptions = new MQGetMessageOptions();
getMessageOptions.Options += MQC.MQGMO_SYNCPOINT;
queue.Get(message, getMessageOptions);
I figured out the solution to my problem. In my code above, if I change the line from
MQEnvironment.properties.Add(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES);
to
MQEnvironment.properties.Add(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES_MANAGED);
then it starts to register the transactions with the local DTC, and rolling back or committing a message on the queue works fine.
