How to Nak a ServiceStack RabbitMQ message within the RegisterHandler?

I'd like to be able to requeue a message from within my Service endpoint that has been wired up through the RegisterHandler method of RabbitMqServer, e.g.
mqServer.RegisterHandler<OutboundILeadPhone>(m =>
{
    var db = container.Resolve<IFrontEndRepository>();
    db.SaveMessage(m as Message);
    return ServiceController.ExecuteMessage(m);
}, noOfThreads: 1);
or here.
public object Post(OutboundILeadPhone request)
{
    throw new OutBoundAgentNotFoundException(); // added after mythz posted his first response
}
I don't see any examples of how this is accomplished, so I'm starting to believe it may not be possible with the ServiceStack abstraction. On the other hand, this looks promising.
Thank you, Stephen
Update
Throwing an exception in the Service does Nak it, but then the message is sent to the OutboundILeadPhone.dlq, which is normal ServiceStack behavior. I guess what I'm looking for is a way for the message to stay in the OutboundILeadPhone.inq queue.

Throwing an exception in your Service will automatically Nak the message. This default exception handling behavior can also be overridden with RabbitMqServer's RegisterHandler API that takes an Exception callback, i.e.:
void RegisterHandler<T>(
    Func<IMessage<T>, object> processMessageFn,
    Action<IMessage<T>, Exception> processExceptionEx);

void RegisterHandler<T>(
    Func<IMessage<T>, object> processMessageFn,
    Action<IMessage<T>, Exception> processExceptionEx,
    int noOfThreads);
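For example, one possible way to keep the message in OutboundILeadPhone.inq instead of letting it dead-letter is to re-publish it from the exception callback. This is only a sketch of that idea, not ServiceStack's built-in retry behavior; QueueNames<T>.In and CreateMessageQueueClient() are the standard ServiceStack APIs used here:

mqServer.RegisterHandler<OutboundILeadPhone>(m =>
    {
        var db = container.Resolve<IFrontEndRepository>();
        db.SaveMessage(m as Message);
        return ServiceController.ExecuteMessage(m);
    },
    (message, exception) =>
    {
        // Assumption: put the failed message back on the .inq so it is retried,
        // rather than letting it end up in OutboundILeadPhone.dlq.
        using (var mqClient = mqServer.CreateMessageQueueClient())
        {
            mqClient.Publish(QueueNames<OutboundILeadPhone>.In, message);
        }
    },
    noOfThreads: 1);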

Related

Send an email when any errors reach my errorChannel using the Spring Integration DSL

I am developing an API in spring-integration using the DSL. This is how it works:
A JDBC polling adapter initiates the flow, gets some data from tables, and sends it to the DefaultRequestChannel; from here the message is handled and flows through various channels.
Now I am trying to:
1. Send an email if any errors (e.g. a connectivity issue, or a bad record found while polling the data) are detected in my error channel.
2. After sending the email to my support group, suspend my flow for 15 minutes and then resume automatically.
I tried creating a sendEmailChannel (a recipient of my errorChannel), but it didn't work for me, so I just created a transformer method like the one below.
This code is running fine, but is it good practice?
@Transformer(inputChannel = "errorChannel", outputChannel = "suspendChannel")
public Message<?> errorChannelHandler(ErrorMessage errorMessage)
        throws RuntimeException, MessagingException, InterruptedException {
    Exception exception = (Exception) errorMessage.getPayload();
    String errorMsg = errorMessage.toString();
    String subject = "API issue";
    if (exception instanceof RuntimeException) {
        errorMsg = "Run time exception";
        subject = "Critical Alert";
    }
    if (exception instanceof JsonParseException) {
        errorMsg = ....;
        subject = .....;
    }
    MimeMessage message = sender.createMimeMessage();
    MimeMessageHelper helper = new MimeMessageHelper(message);
    helper.setFrom(senderEmail);
    helper.setTo(receiverEmail);
    helper.setText(errorMsg);
    helper.setSubject(subject);
    sender.send(message);
    kafkaProducerSwitch.isKafkaDown();
    return MessageBuilder.withPayload(exception.getMessage())
            .build();
}
I am looking for a better way of handling the above logic, and also for any suggestions on how to suspend my flow for a few minutes.
You can definitely use the mail-sending channel adapter that Spring Integration provides out of the box to send those messages from the error channel: https://docs.spring.io/spring-integration/docs/5.1.5.RELEASE/reference/html/#mail-outbound. The Java DSL variant looks like this:
.handle(Mail.outboundAdapter("gmail")
        .port(smtpServer.getPort())
        .credentials("user", "pw")
        .protocol("smtp")))
The suspend can be done via a CompoundTriggerAdvice extension, where you check some AtomicBoolean bean for the state and activate one trigger or the other in the beforeReceive() implementation. Such an AtomicBoolean can change its state in one more subscriber to that errorChannel, because this channel is a PublishSubscribeChannel by default. Don't forget to bring the state back to normal after you return false from beforeReceive(); that is enough to mark your system as normal at that point, since it is only going to run again after 15 minutes.

How to handle exceptions from WebJobs in Application Insights?

When an exception is thrown from the WebJob, it exits without logging to Application Insights. I have observed that flushing the logs to Application Insights takes a few minutes, so we are missing the exceptions here. How do I handle this?
Also, is there a way to move the message that hit the exception to the poison queue automatically, without manually inserting that message into the poison queue?
I am using the latest stable 3.x versions of the two NuGet packages:
Microsoft.Azure.WebJobs and Microsoft.Azure.WebJobs.Extensions
I created a host (implementing IHost) as below:
var builder = new HostBuilder()
    .UseEnvironment("Development")
    .ConfigureWebJobs(b =>
    {
        ...
    })
    .ConfigureLogging((context, b) =>
    {
        string appInsightsKey = context.Configuration["APPINSIGHTS_INSTRUMENTATIONKEY"];
        if (!string.IsNullOrEmpty(appInsightsKey))
        {
            b.AddApplicationInsights(o => o.InstrumentationKey = appInsightsKey);
            appInsights.TrackEvent("Application Insights is starting!!");
        }
    })
    .ConfigureServices(services =>
    {
        ...
    })
    .UseConsoleLifetime();

var host = builder.Build();
using (host)
{
    host.RunAsync().Wait();
}
and Function.cs
public static async void ProcessQueueMessageAsync([QueueTrigger("queue")] Message message, int dequeueCount, IBinder binder, ILogger logger)
{
    switch (message.Name)
    {
        case blah:
            ...
            break;
        default:
            logger.LogError("Invalid Message object in the queue.", message);
            logger.LogWarning("Current dequeue count: " + dequeueCount);
            throw new InvalidOperationException("Simulated Failure");
    }
}
My questions here are:
1) When the default case is hit, the WebJob terminates immediately and the loggers do not get flushed to Application Insights, even after waiting and starting the WebJob again. Since it takes a few minutes for logs to show up in Application Insights and the WebJob stops before that, I am losing the error logs. How do I handle this?
2) In the sample WebJobs here, https://github.com/Azure/azure-webjobs-sdk-samples/blob/master/BasicSamples/QueueOperations/Functions.cs, they use JobHost host = new JobHost();, and if the 'FailAlways' function fails, it automatically retries 5 times and then pushes the message into the poison queue. But this is not happening in my code. Is it because of the different hosts, or do I have to add more configuration?
Try changing your function to return Task instead of void:
public static async Task ProcessQueueMessageAsync([QueueTrigger("queue")] Message message, int dequeueCount, IBinder binder, ILogger logger)
This worked for me in a case where, even though I was logging the error and throwing the exception, Application Insights would either show a successful invocation or no invocation at all.
After inspecting the source code of the Application Insights SDK, it became apparent that to get an Exception in Application Insights you must pass an exception object into the LogError call.
log.Error(ex, "my error message") - will result in an Application Insights Exception
log.Error("my error message") - will result in an Application Insights Trace
is there a way to move the message which hit the exception to poison queue automatically without manually inserting that message to poison queue?
You could set config.Queues.MaxDequeueCount = 1; in the WebJob. This is the number of times to try processing a message before moving it to the poison queue.
And where should the MaxDequeueCount configuration be added in the code?
You could set the property on JobHostConfiguration in Program.cs.
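On the 2.x JobHost that looks like the sketch below; for the 3.x HostBuilder used in the question, the assumed equivalent is to configure QueuesOptions when the storage bindings are added (the AddAzureStorage callback comes from Microsoft.Azure.WebJobs.Extensions.Storage and may differ slightly between package versions):

// WebJobs SDK 2.x (Program.cs): the setting lives on JobHostConfiguration.Queues
var config = new JobHostConfiguration();
config.Queues.MaxDequeueCount = 5; // attempts before the message is moved to the poison queue
new JobHost(config).RunAndBlock();

// WebJobs SDK 3.x (assumed equivalent for the HostBuilder in the question):
// .ConfigureWebJobs(b =>
// {
//     b.AddAzureStorage(queues => queues.MaxDequeueCount = 5);
// })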

Spring Integration: SimpleAsyncTaskExecutor

I have read another question on this site like mine, but I don't understand how to resolve the issue:
Spring Integration: Application leaking SimpleAsyncTaskExecutor threads?
My error is similar to the one in that link:
"SimpleAsyncTaskExecutor-2327" - Thread t#2405
java.lang.Thread.State: WAITING
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <7a224c1> (a java.util.concurrent.CountDownLatch$Sync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel.receive(GenericMessagingTemplate.java:199)
at org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel.receive(GenericMessagingTemplate.java:192)
at org.springframework.messaging.core.GenericMessagingTemplate.doReceive(GenericMessagingTemplate.java:130)
at org.springframework.messaging.core.GenericMessagingTemplate.doSendAndReceive(GenericMessagingTemplate.java:157)
at org.springframework.messaging.core.GenericMessagingTemplate.doSendAndReceive(GenericMessagingTemplate.java:45)
at org.springframework.messaging.core.AbstractMessagingTemplate.sendAndReceive(AbstractMessagingTemplate.java:42)
at org.springframework.integration.core.MessagingTemplate.sendAndReceive(MessagingTemplate.java:97)
at org.springframework.integration.core.MessagingTemplate.sendAndReceive(MessagingTemplate.java:38)
at org.springframework.messaging.core.AbstractMessagingTemplate.convertSendAndReceive(AbstractMessagingTemplate.java:79)
at org.springframework.messaging.core.AbstractMessagingTemplate.convertSendAndReceive(AbstractMessagingTemplate.java:70)
at org.springframework.integration.gateway.MessagingGatewaySupport.doSendAndReceive(MessagingGatewaySupport.java:449)
I'm using
spring-integration-java-dsl-1.2.3.RELEASE
spring-integration-ip-4.3.17.RELEASE
spring-integration-http-4.3.17.RELEASE
My scenario is the following: I receive a message through an API controller, and this message is sent to a TCP socket.
I have defined a @MessagingGateway interface:
@MessagingGateway(defaultRequestChannel = "toTcp.input")
public interface MessageTcpGateway {

    @Gateway
    public ListenableFuture<Void> sendTcpChannel(byte[] data,
            @Header("connectionId") String connectionId);

}
Then I use this interface in a service class like this:
public void sendMessageTcpGateway(final String bridgeId, final String connectionId, final byte[] message) {
    LOGGER.debug("sendMessageTcpGateway connectionId:{} - message:{}", connectionId, message);
    if (holder.existsConnection(connectionId) != null) {
        gatewayTcp.sendTcpChannel(message, connectionId);
    } else {
        LOGGER.error("Not send message connectionId:{} - message:{}", connectionId, message);
    }
}
Why is the thread waiting? Is my process waiting for some kind of signal that I haven't accounted for? I assumed that if the connection is not available, or on any other kind of error, spring-integration would throw an exception.
How can I resolve this issue?
I wonder why you don't follow the recommendations from that SO thread...
The ListenableFuture<Void> is the bottleneck in your solution. As you can see from the stack trace, you end up in doSendAndReceive(), but I guess your target solution is really one-way and doesn't return anything to the replyChannel in the headers.
You should consider having just a plain void return type and an ExecutorChannel downstream.
Unfortunately we can't detect such a situation on the framework side, since a Future return type on the gateway method indicates that you are going to perform request-reply in an async manner. In your case it is just an async request, nothing more.

TaskCanceledException on azure function (Service bus trigger)

I have a Service Bus Trigger Azure function, which is triggered every time a topic receives a message.
Messages arrive at regular intervals, for example every 30 minutes. Between batches there is no activity.
The function does nothing special; it posts the message asynchronously via HttpClient. The function regularly fails with a TaskCanceledException.
The HttpClient is static:
public static class SampleEventTrigger
{
    private static DefaultHttpWebHook webHook = new DefaultHttpWebHook(new Uri("https://nonexistent.invalid/sampleWebHook"), "/event/sampleEvent");

    [FunctionName("SampleEventTrigger")]
    public static async Task Run(
        [ServiceBusTrigger("sampleevent", "SampleEvent.Subs", AccessRights.Manage, Connection = GlobalConfiguration.ServiceBusConnection)] BrokeredMessage message,
        TraceWriter log)
    {
        log.Info("launch sample event subscription");
        try
        {
            var resp = await webHook.Post(message, log);
            log.Info($"{resp.StatusCode}, {resp.ReasonPhrase}");
        }
        catch (Exception ex)
        {
            log.Error($"exception in webhook: {ex.Message}", ex);
            throw;
        }
    }
}
If I trigger it again just afterwards, it goes through that time.
Where does this exception come from? How do we avoid it?
Is it related to a timeout, or to the function starting up too slowly?
My function runs on the Consumption plan.
Chances are that your HTTP call is timing out. Awaited HTTP calls that time out throw TaskCanceledException. I'm not sure what your DefaultHttpWebHook class does under the covers, but it should be using PostAsync in the Post method (which itself should have the Async suffix).
To verify, you could catch TaskCanceledException and examine the inner exception. If you are still struggling, convert your code to non-async during local development to get a better handle on what's happening - it will give you back a true exception rather than bubbling it up as a TaskCanceledException.
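As a minimal sketch of that verification step inside the question's existing try block (DefaultHttpWebHook is the question's own class and is not shown here; only the catch pattern is the point):

try
{
    var resp = await webHook.Post(message, log);
    log.Info($"{resp.StatusCode}, {resp.ReasonPhrase}");
}
catch (TaskCanceledException tce)
{
    // An awaited HttpClient call that times out surfaces as TaskCanceledException;
    // the inner exception (when present) helps distinguish a timeout from a real cancellation.
    log.Error($"webhook call cancelled, inner: {tce.InnerException?.Message ?? "none"}", tce);
    throw;
}
catch (Exception ex)
{
    log.Error($"exception in webhook: {ex.Message}", ex);
    throw;
}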

Queue messages that are moved to the poison queue still show in the queue count, but stay hidden

I am testing the poison-message handling of the WebJob that I am building.
Everything seems to be working as expected except one strange thing:
When a message is moved to the “-poison” queue, its ghost seems to remain hidden (invisible) in the main job queue. That means if I have 6 poison messages moved to the “-poison” queue, Storage Explorer shows “Showing 0 of 6 messages in queue”. I cannot see the 6 hidden messages in Storage Explorer.
I tried deleting the job queue and recreating it, but the strange issue still happens after I run my tests. Storage Explorer shows “Showing 0 of 6 messages in queue”.
What is happening behind the scenes?
Update 1
I did some investigation and I think the WebJobs SDK does not delete the original message after it is copied to the poison queue.
I went through the WebJobs SDK source code and I think this line of code is not being executed for some reason:
https://github.com/Azure/azure-webjobs-sdk/blob/dev/src/Microsoft.Azure.WebJobs.Host/Queues/QueueProcessor.cs#L119
Here is my function, which can help reproduce the issue:
public class Functions
{
    public static void ProcessQueueMessage([QueueTrigger("%QueueName%")] string message, TextWriter log)
    {
        if (message.Contains("Break"))
        {
            throw new Exception($"Error while processing message {message}");
        }
        log.WriteLine($"Processed message {message}");
    }
}
Update 2
Here is the WebJob SDK I am using:
As far as I know, the Azure Storage SDK 8.x does not work well with the Azure WebJobs SDK 2.0 (related issue).
If you use Storage SDK 8.x, the poison messages stay undeleted but invisible.
A workaround is to use the lower Azure Storage SDK version 7.2.1, which works well.
This issue should be solved in a future SDK version.
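If you go that route, pinning the older package is enough, e.g. from the NuGet Package Manager Console (WindowsAzure.Storage is the classic storage package name):

Install-Package WindowsAzure.Storage -Version 7.2.1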
I have the same problem.
The problem is that the message copy placed in the poison queue is passed by reference, without a visibility time (https://github.com/Azure/azure-webjobs-sdk/blob/dev/src/Microsoft.Azure.WebJobs.Host/Queues/QueueProcessor.cs#L145), and when the SDK then tries to delete the message from the original queue the service returns 404 Not Found. It is a problem in azure-webjobs-sdk, and the solution is to make this change:
await AddMessageAndCreateIfNotExistsAsync(poisonQueue, new CloudQueueMessage(message.AsString), cancellationToken);
in https://github.com/Azure/azure-webjobs-sdk/blob/dev/src/Microsoft.Azure.WebJobs.Host/Queues/QueueProcessor.cs#L145
We are waiting for a new version with this fix.
Custom solution
To solve this, create your own custom QueueProcessor and, in the CopyMessageToPoisonQueueAsync function, create a new CloudQueueMessage from the original to pass to the poison queue; see the example below.
var config = new JobHostConfiguration();
config.Queues.QueueProcessorFactory = new CustomQueueProcessorFactory();

public QueueProcessor Create(QueueProcessorFactoryContext context)
{
    // demonstrates how the Queue.ServiceClient options can be configured
    context.Queue.ServiceClient.DefaultRequestOptions.ServerTimeout = TimeSpan.FromSeconds(30);
    // demonstrates how queue options can be customized
    context.Queue.EncodeMessage = true;
    // return the custom queue processor
    return new CustomQueueProcessor(context);
}
/// <summary>
/// Custom QueueProcessor demonstrating some of the virtuals that can be overridden
/// to customize queue processing.
/// </summary>
private class CustomQueueProcessor : QueueProcessor
{
    private QueueProcessorFactoryContext _context;

    public CustomQueueProcessor(QueueProcessorFactoryContext context)
        : base(context)
    {
        _context = context;
    }

    public override async Task CompleteProcessingMessageAsync(CloudQueueMessage message, FunctionResult result, CancellationToken cancellationToken)
    {
        await base.CompleteProcessingMessageAsync(message, result, cancellationToken);
    }

    protected override async Task CopyMessageToPoisonQueueAsync(CloudQueueMessage message, CloudQueue poisonQueue, CancellationToken cancellationToken)
    {
        var msg = new CloudQueueMessage(message.AsString);
        await base.CopyMessageToPoisonQueueAsync(msg, poisonQueue, cancellationToken);
    }

    protected override void OnMessageAddedToPoisonQueue(PoisonMessageEventArgs e)
    {
        base.OnMessageAddedToPoisonQueue(e);
    }
}
For anyone out there still having this issue: this should be fixed since 2.1.0-beta1-10851. The downside is that there is currently no stable release of 2.1.0 yet.
