Is there any way to get a timestamp for when a message was placed on my Azure queue? What is the best way?
For example, a partner sends a message to my queue and I want to know the time the partner placed a specific message in the queue.
Thanks
If you're using the .NET API, the InsertionTime property on the CloudQueueMessage you get when fetching a message or peeking the queue will contain:
The time that the message was added to the queue.
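For example, a minimal .NET sketch (assuming the WindowsAzure.Storage client; the connection string and queue name are placeholders) that peeks a message and reads its InsertionTime:
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class PeekInsertionTime
{
    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse("<storage-connection-string>");
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("queue1");

        // PeekMessage reads the front message without dequeuing it.
        CloudQueueMessage message = queue.PeekMessage();
        if (message != null)
        {
            // InsertionTime is the UTC time at which the message was added to the queue.
            Console.WriteLine(message.InsertionTime);
        }
    }
}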
Similarly, the following Java code retrieves a message (and gets its insertion time), then peeks at the next message (and gets its insertion time). It also shows how access to the queue can be controlled with a revocable stored access policy.
// Imports below assume the com.microsoft.azure.storage Java client.
import java.net.URI;
import java.net.URISyntaxException;
import java.security.InvalidKeyException;
import com.microsoft.azure.storage.StorageCredentialsSharedAccessSignature;
import com.microsoft.azure.storage.StorageException;
import com.microsoft.azure.storage.queue.CloudQueue;
import com.microsoft.azure.storage.queue.CloudQueueClient;
import com.microsoft.azure.storage.queue.CloudQueueMessage;

public class Dequeue {
    public static void main(String[] args) throws InvalidKeyException, URISyntaxException, StorageException {
        // Shared access signature referencing the revocable stored access policy "heath".
        String sas = "sv=2012-02-12&sig=XUHWUy2ayrcEjNvZUhcgdPbgKZflwSXxtr0BH87XRII%3D&si=heath";
        StorageCredentialsSharedAccessSignature credentials = new StorageCredentialsSharedAccessSignature(sas);
        URI baseuri = new URI("http://grassy.queue.core.windows.net");
        CloudQueueClient queueClient = new CloudQueueClient(baseuri, credentials);
        CloudQueue queue = queueClient.getQueueReference("queue1");

        // Retrieve (dequeue) a message, print its content and insertion time, then delete it.
        CloudQueueMessage retrievedMessage = queue.retrieveMessage();
        System.out.println(retrievedMessage.getMessageContentAsString());
        System.out.println(retrievedMessage.getInsertionTime());
        queue.deleteMessage(retrievedMessage);

        // Peek at the next message (does not dequeue it) and print its insertion time.
        CloudQueueMessage peekedMessage = queue.peekMessage();
        System.out.println(peekedMessage.getMessageContentAsString());
        System.out.println(peekedMessage.getInsertionTime());
    }
}
It really annoys me that we're unable to move messages from a dead-letter queue back to the original queue for processing when using Azure Service Bus. So I figured I would try to implement this feature myself. We are using MassTransit to publish events, and the queue name in ASB will be an event's full assembly name.
I've created a REST endpoint in my application to move messages from the DLQ to the original queue for reprocessing. This is where I'm stuck at the moment.
To get all messages in a DLQ, the user gives me the queue name and I format it to point at the dead-letter sub-queue, like this:
myproject.events.usercreatedevent -> myproject.events.usercreatedevent/$DeadLetterQueue
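(As an aside, the Microsoft.Azure.ServiceBus package's EntityNameHelper can build that suffix instead of hand-formatting the string; a small sketch, using the queue name from the example above:)
// Sketch using the Microsoft.Azure.ServiceBus package; the queue name is the example above.
using System;
using Microsoft.Azure.ServiceBus;

string queueName = "myproject.events.usercreatedevent";

// Produces "myproject.events.usercreatedevent/$DeadLetterQueue".
string dlqPath = EntityNameHelper.FormatDeadLetterPath(queueName);
Console.WriteLine(dlqPath);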
I get all the messages from this queue using classes from the NuGet package Microsoft.Azure.ServiceBus:
public async Task RequeueMessagesAsync(string queueName)
{
    var msg = new MessageReceiver(BuildConnectionString(), queueName);
    var messages = await msg.PeekAsync(50);

    foreach (var message in messages)
    {
        var content = Encoding.UTF8.GetString(message.Body);
        var jsonObject = JsonConvert.DeserializeObject<JObject>(content);

        var destinationAddress = jsonObject["destinationAddress"].ToString();
        var messageContent = jsonObject["message"].ToString();
        var messageType = destinationAddress.Split("/").Last();

        await _bus.SendAsync(jsonObject, messageType);
    }
}
When calling _bus.SendAsync(object, address), the message ends up in a _skipped queue. I think the reason is that the message headers say the type is JObject rather than the actual message type. I cannot use reflection to recreate the event either, as we have a lot of microservices and the source code of the event is not necessarily available. The code behind _bus.SendAsync(object, address) looks like this:
public async Task SendAsync(object message, string queueName, CancellationToken cancellationToken = default)
{
    ISendEndpoint sender = await GetSenderAsync(queueName);
    sender.ConnectSendObserver(new ErrorQueueConfiguration(_addressProvider.GetAddress("error")));
    await sender.Send(message, cancellationToken);
}
Can I trick MassTransit into forwarding this "unknown" type to my consumer by changing the message headers somehow? Has anyone successfully moved messages from a DLQ back to its original queue?
I have written code to receive queue data using QueueClient.Receive() with BrokeredMessage:
BrokeredMessage deadmessage = client.Receive();
byte[] dataRaw = deadmessage.GetBody<byte[]>();
Due to some corrupted data, I got an exception on the second line while getting the body of the brokered message. So I tried to get the body of the message in a catch block with a StreamReader:
Stream stream = deadmessage.GetBody<Stream>();
StreamReader reader = new StreamReader(stream);
I got the exception below. Could anyone help me with an appropriate fix?
Exception details:
The message body cannot be read multiple times. To reuse it store the value after reading.
To be able to make multiple attempts at reading the message body, you need to read it as a stream first:
deadmessage.GetBody<Stream>()
Then you can try to interpret it in different ways. For example, a message can be serialized directly in the following way:
var brokeredMessage = new BrokeredMessage(message);
serviceBusClient.Send(brokeredMessage);
But it's better to serialize it to JSON first:
var brokeredMessage = new BrokeredMessage(JsonConvert.SerializeObject(message));
serviceBusClient.Send(brokeredMessage);
This is safer in my view, because JSON serialization ignores the namespace of the message type, so you will not break your process when you move the message class to another namespace.
Suppose you start sending and reading messages serialized as JSON, but some old messages are still binary-serialized. In that case you can use the following logic:
public static T DeserializeMessage<T>(BrokeredMessage brokeredMessage)
{
    using (var stream = brokeredMessage.GetBody<Stream>())
    using (var streamReader = new StreamReader(stream))
    {
        string bodyText = streamReader.ReadToEnd();
        try
        {
            return JsonConvert.DeserializeObject<T>(bodyText);
        }
        catch (JsonReaderException)
        {
            stream.Position = 0;
            var reader = XmlDictionaryReader.CreateBinaryReader(stream, XmlDictionaryReaderQuotas.Max);
            var serializer = new DataContractSerializer(typeof(T));
            var msgBody = (T)serializer.ReadObject(reader);
            return msgBody;
        }
    }
}
If you need to try to deserialize the message as another type, catch System.Runtime.Serialization.SerializationException on serializer.ReadObject(reader).
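For instance, here is a sketch of that fallback layered onto the method above; the TPrimary/TFallback type parameters are placeholders, not part of the original answer:
// Sketch only: try JSON first, then binary deserialization as TPrimary,
// and finally fall back to TFallback if the contract does not match.
public static object DeserializeWithFallback<TPrimary, TFallback>(BrokeredMessage brokeredMessage)
{
    using (var stream = brokeredMessage.GetBody<Stream>())
    using (var streamReader = new StreamReader(stream))
    {
        string bodyText = streamReader.ReadToEnd();
        try
        {
            return JsonConvert.DeserializeObject<TPrimary>(bodyText);
        }
        catch (JsonReaderException)
        {
            stream.Position = 0;
            var reader = XmlDictionaryReader.CreateBinaryReader(stream, XmlDictionaryReaderQuotas.Max);
            try
            {
                return (TPrimary)new DataContractSerializer(typeof(TPrimary)).ReadObject(reader);
            }
            catch (System.Runtime.Serialization.SerializationException)
            {
                // The body was not a TPrimary contract; rewind and try the fallback type.
                stream.Position = 0;
                reader = XmlDictionaryReader.CreateBinaryReader(stream, XmlDictionaryReaderQuotas.Max);
                return (TFallback)new DataContractSerializer(typeof(TFallback)).ReadObject(reader);
            }
        }
    }
}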
As Sean Feldman mentioned, if a message is corrupted it will be handled by the dead-letter queue.
Service Bus queues and topic subscriptions provide a secondary sub-queue, called a dead-letter queue (DLQ). The dead-letter queue does not need to be explicitly created and cannot be deleted or otherwise managed independent of the main entity.
The purpose of the dead-letter queue is to hold messages that cannot be delivered to any receiver, or simply messages that could not be processed.
If you need to know how to create and use a Service Bus queue, refer to Get started with Service Bus queues.
To reuse it store the value after reading.
If it can be read correctly, we can store its message ID and value for reuse.
The DLQ is mostly similar to any other queue.
If the data is corrupted, we can still receive the message from the dead-letter queue just as from a normal queue:
string connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
QueueClient client = QueueClient.CreateFromConnectionString(connectionString, deadLetterQueueName);
var message = client.Receive(TimeSpan.FromSeconds(3));
if (message != null)
{
    var ret = message.GetBody<Stream>();
    message.Complete();
}
I found another cause of this exception. While debugging a different problem I kept hitting it. After some experimenting I realized that Visual Studio reads the message body behind the scenes to show it, for example in the Watch panel, so by the time my code tried to get the body it had already been read by Visual Studio.
To avoid this, wrap the message body in a property with a backing field that stores the value; in hindsight, that is exactly what the exception message is telling you to do.
So keep in mind that the body can be read behind the scenes by the debugger.
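A minimal sketch of such a wrapper, with made-up class and property names, assuming the classic BrokeredMessage API:
using System;
using Microsoft.ServiceBus.Messaging;

// Sketch only: read the body from the BrokeredMessage exactly once and cache it,
// so whoever evaluates the property first (your code or the debugger's Watch
// window) stores the value for everyone else.
public class ReceivedMessage<T>
{
    private readonly Lazy<T> _body;

    public ReceivedMessage(BrokeredMessage message)
    {
        // GetBody<T>() is only ever invoked once, on first access to Body.
        _body = new Lazy<T>(() => message.GetBody<T>());
    }

    public T Body => _body.Value;
}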
I have a TimerTrigger function and the output binding is an Azure Queue.
The idea is that every 10 minutes the timer will run, it will look at a view in my database and iterate through any rows returned adding them to the queue as messages.
Below is my sample TimerTrigger. It worked fine adding messages to the Queue.
However in my real world scenario some of the rows will require immediate execution while others will have a delay of some minutes (varies per row). I plan on handling the delay by using the VisibilityTimeout for the message.
Unfortunately the binding via a string wouldn't let me set the value, and CloudQueueMessage.NextVisibleTime (used below) is read-only.
#r "Microsoft.WindowsAzure.Storage"
using System;
using Microsoft.WindowsAzure.Storage.Queue;
public static void Run(TimerInfo myTimer, ICollector<CloudQueueMessage> outputQueueItem, TraceWriter log)
{
log.Info($"C# Timer trigger function executed at: {DateTime.Now}");
//- Add a message to be processed now.
CloudQueueMessage msg = new CloudQueueMessage("Now");
outputQueueItem.Add(msg);
//- Add a message to be processed later.
//- this code below won't work because NextVisibleTime is readonly.
//- is there some way to set the VisibilityTimeout property before queueing?
msg = new CloudQueueMessage("Later");
DateTime otherDate = DateTime.Now.AddMinutes(3);
msg.NextVisibleTime = otherDate;
outputQueueItem.Add(msg);
}
Is there any way to have the binding add messages to the queue and let me set the VisibilityTimeout message by message as appropriate?
The Azure Functions Storage Queue output binding only gives us access to the CloudQueueMessage, which doesn't let us set the visibility timeout per message.
I rewrote my code to connect to the Azure Storage queue and post the messages manually rather than through the Azure Function output binding.
See below:
#r "Microsoft.WindowsAzure.Storage"
using System;
using System.Configuration;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;
public static void Run(TimerInfo myTimer, TraceWriter log)
{
log.Info($"Queue Notifications: {DateTime.Now}, {myTimer.Schedule}, {myTimer.ScheduleStatus}, {myTimer.IsPastDue}");
//* Setup the connection to q-notifications, create it if it doesn't exist.
var connectionString = ConfigurationManager.AppSettings["AzureWebJobsStorage"];
var storageAccount = CloudStorageAccount.Parse(connectionString);
var queueClient = storageAccount.CreateCloudQueueClient();
var queue = queueClient.GetQueueReference("q-notifications");
queue.CreateIfNotExistsAsync();
//* Eventually this will come from iterating through a SQL Database View of messages that need queueing.
//* For testing just show we can add two messages with different Visibilty times.
CloudQueueMessage message;
TimeSpan delay;
//* Queue Message for Immediate Processing.
message = new CloudQueueMessage("Now Message");
queue.AddMessageAsync(message, null, null, null, null);
//* Queue Message for Later Processing.
delay = DateTime.UtcNow.AddMinutes(3) - DateTime.UtcNow;
message = new CloudQueueMessage("Later Message");
queue.AddMessageAsync(message, null, delay, null, null);
//* Queue Message for Even Later Processing.
delay = DateTime.UtcNow.AddMinutes(12) - DateTime.UtcNow;
message = new CloudQueueMessage("Even Later Message");
queue.AddMessageAsync(message, null, delay, null, null);
}
I have created a Service Bus queue in Azure and it works well. If a message is not successfully processed within the default number of delivery attempts (10), it is correctly moved to the dead-letter queue.
Now I would like to resubmit this message from the dead-letter queue back to the queue it originated from and see if it works again. I have tried this using Service Bus Explorer, but the message gets moved back to the dead-letter queue immediately.
Is it possible to do this, and if so, how?
You'd need to send a new message with the same payload. ASB by design doesn't support message resubmission.
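A minimal sketch of that idea with the newer Azure.Messaging.ServiceBus package (the connection string and queue name are placeholders; the fuller examples below cover the older SDK and batch processing):
// Sketch, assuming Azure.Messaging.ServiceBus; connection string and queue name are placeholders.
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<connection-string>");
ServiceBusSender sender = client.CreateSender("my-queue");
ServiceBusReceiver dlq = client.CreateReceiver("my-queue",
    new ServiceBusReceiverOptions { SubQueue = SubQueue.DeadLetter });

ServiceBusReceivedMessage dead = await dlq.ReceiveMessageAsync();
if (dead != null)
{
    // Build a brand-new message with the same payload and send it to the original queue.
    await sender.SendMessageAsync(new ServiceBusMessage(dead.Body));
    await dlq.CompleteMessageAsync(dead);
}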
We had a batch of around 60k messages that needed to be reprocessed from the dead-letter queue. Peeking and sending the messages back via Service Bus Explorer took around 6 minutes per 1k messages from my machine. I solved the issue by setting a forward rule for DLQ messages to another queue, and from there auto-forwarding them to the original queue. This took around 30 seconds for all 60k messages.
Try removing the dead-letter reason properties:
resubmittableMessage.Properties.Remove("DeadLetterReason");
resubmittableMessage.Properties.Remove("DeadLetterErrorDescription");
Full code:
using Microsoft.ServiceBus.Messaging;
using System.Transactions;

namespace ResubmitDeadQueue
{
    class Program
    {
        static void Main(string[] args)
        {
            var connectionString = "";
            var queueName = "";

            var queue = QueueClient.CreateFromConnectionString(connectionString, QueueClient.FormatDeadLetterPath(queueName), ReceiveMode.PeekLock);
            var client = QueueClient.CreateFromConnectionString(connectionString, queueName);

            BrokeredMessage originalMessage;
            do
            {
                originalMessage = queue.Receive();
                if (originalMessage != null)
                {
                    using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
                    {
                        // Create new message
                        var resubmittableMessage = originalMessage.Clone();

                        // Remove dead letter reason and description
                        resubmittableMessage.Properties.Remove("DeadLetterReason");
                        resubmittableMessage.Properties.Remove("DeadLetterErrorDescription");

                        // Resend cloned DLQ message and complete original DLQ message
                        client.Send(resubmittableMessage);
                        originalMessage.Complete();

                        // Complete transaction
                        scope.Complete();
                    }
                }
            } while (originalMessage != null);
        }
    }
}
Thanks to some other responses here!
We regularly need to resubmit messages. The answer from @Baglay-Vyacheslav helped a lot. I've pasted some updated C# code that works with the latest Azure.Messaging.ServiceBus NuGet package.
It makes it much quicker and easier to process the DLQ on both queues and topic subscriptions.
using Azure.Messaging.ServiceBus;
using System.Collections.Generic;
using System.Threading.Tasks;
using NLog;

namespace ServiceBus.Tools
{
    class TransferDeadLetterMessages
    {
        // https://github.com/Azure/azure-sdk-for-net/blob/Azure.Messaging.ServiceBus_7.2.1/sdk/servicebus/Azure.Messaging.ServiceBus/README.md

        private static Logger logger = LogManager.GetCurrentClassLogger();
        private static ServiceBusClient client;
        private static ServiceBusSender sender;

        public static async Task ProcessTopicAsync(string connectionString, string topicName, string subscriberName, int fetchCount = 10)
        {
            try
            {
                client = new ServiceBusClient(connectionString);
                sender = client.CreateSender(topicName);

                ServiceBusReceiver dlqReceiver = client.CreateReceiver(topicName, subscriberName, new ServiceBusReceiverOptions
                {
                    SubQueue = SubQueue.DeadLetter,
                    ReceiveMode = ServiceBusReceiveMode.PeekLock
                });

                await ProcessDeadLetterMessagesAsync($"topic: {topicName} -> subscriber: {subscriberName}", fetchCount, sender, dlqReceiver);
            }
            catch (Azure.Messaging.ServiceBus.ServiceBusException ex)
            {
                if (ex.Reason == Azure.Messaging.ServiceBus.ServiceBusFailureReason.MessagingEntityNotFound)
                {
                    logger.Error(ex, $"Topic:Subscriber '{topicName}:{subscriberName}' not found. Check that the name provided is correct.");
                }
                else
                {
                    throw;
                }
            }
            finally
            {
                await sender.CloseAsync();
                await client.DisposeAsync();
            }
        }

        public static async Task ProcessQueueAsync(string connectionString, string queueName, int fetchCount = 10)
        {
            try
            {
                client = new ServiceBusClient(connectionString);
                sender = client.CreateSender(queueName);

                ServiceBusReceiver dlqReceiver = client.CreateReceiver(queueName, new ServiceBusReceiverOptions
                {
                    SubQueue = SubQueue.DeadLetter,
                    ReceiveMode = ServiceBusReceiveMode.PeekLock
                });

                await ProcessDeadLetterMessagesAsync($"queue: {queueName}", fetchCount, sender, dlqReceiver);
            }
            catch (Azure.Messaging.ServiceBus.ServiceBusException ex)
            {
                if (ex.Reason == Azure.Messaging.ServiceBus.ServiceBusFailureReason.MessagingEntityNotFound)
                {
                    logger.Error(ex, $"Queue '{queueName}' not found. Check that the name provided is correct.");
                }
                else
                {
                    throw;
                }
            }
            finally
            {
                await sender.CloseAsync();
                await client.DisposeAsync();
            }
        }

        private static async Task ProcessDeadLetterMessagesAsync(string source, int fetchCount, ServiceBusSender sender, ServiceBusReceiver dlqReceiver)
        {
            var wait = new System.TimeSpan(0, 0, 10);

            logger.Info($"fetching messages ({wait.TotalSeconds} seconds retrieval timeout)");
            logger.Info(source);

            IReadOnlyList<ServiceBusReceivedMessage> dlqMessages = await dlqReceiver.ReceiveMessagesAsync(fetchCount, wait);

            logger.Info($"dl-count: {dlqMessages.Count}");

            int i = 1;
            foreach (var dlqMessage in dlqMessages)
            {
                logger.Info($"start processing message {i}");
                logger.Info($"dl-message-dead-letter-message-id: {dlqMessage.MessageId}");
                logger.Info($"dl-message-dead-letter-reason: {dlqMessage.DeadLetterReason}");
                logger.Info($"dl-message-dead-letter-error-description: {dlqMessage.DeadLetterErrorDescription}");

                ServiceBusMessage resubmittableMessage = new ServiceBusMessage(dlqMessage);

                await sender.SendMessageAsync(resubmittableMessage);
                await dlqReceiver.CompleteMessageAsync(dlqMessage);

                logger.Info($"finished processing message {i}");
                logger.Info("--------------------------------------------------------------------------------------");

                i++;
            }

            await dlqReceiver.CloseAsync();

            logger.Info($"finished");
        }
    }
}
It may be "duplicate message detection" as Peter Berggreen indicated or more likely if you are directly moving the BrokeredMessage from the dead letter queue to the live queue then the DeliveryCount would still be at maximum and it would return to the dead letter queue.
Pull the BrokeredMessage off the dead letter queue, get the content using GetBody(), create in new BrokeredMessage with that data and send it to the queue. You can do this in a safe manor, by using peek to get the message content off the dead letter queue and then send the new message to the live queue before removing the message from the dead letter queue. That way you won't lose any crucial data if for some reason it fails to write to the live queue.
With a new BrokeredMessage you should not have an issue with "duplicate message detection" and the DeliveryCount will be reset to zero.
The Service Bus Explorer tool always creates a clone of the original message when you repair and resubmit a message from the dead-letter queue. It could not do otherwise, as by default Service Bus messaging does not provide any message repair-and-resubmit mechanism. I suggest you investigate why your message, as well as its clone when you resubmit it, ends up in the dead-letter queue. Hope this helps!
It sounds like it could be related to ASB's "duplicate message detection" functionality.
When you resubmit a message in Service Bus Explorer it clones the message, so the new message will have the same ID as the original message in the dead-letter queue.
If you have enabled "Requires Duplicate Detection" on the queue/topic and you try to resubmit the message within the "Duplicate Detection History Time Window", the message will immediately be moved to the dead-letter queue again.
If you want to use Service Bus Explorer to resubmit deadletter messages, then I think that you will have to disable "Requires Duplicate Detection" on the queue/topic.
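To check whether that is what is happening, here is a small sketch (assuming the Microsoft.Azure.ServiceBus management client; the connection string and queue name are placeholders) that reads the queue's duplicate-detection settings:
// Sketch, assuming Microsoft.Azure.ServiceBus.Management; names are placeholders.
using System;
using Microsoft.Azure.ServiceBus.Management;

var management = new ManagementClient("<connection-string>");
QueueDescription queue = await management.GetQueueAsync("my-queue");

// If this is true, a resubmitted clone with the same MessageId is treated as a
// duplicate within the detection window.
Console.WriteLine(queue.RequiresDuplicateDetection);
Console.WriteLine(queue.DuplicateDetectionHistoryTimeWindow);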
Is there a way to get the message ID after inserting it into an Azure queue?
CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectionString);
CloudQueueClient queueClient = storageAccount.createCloudQueueClient();
CloudQueue queue = queueClient.getQueueReference("myqueue");
queue.createIfNotExist();

CloudQueueMessage message = new CloudQueueMessage("Hello, World");
queue.addMessage(message);
// Get message ID here?
I realize it has been 5 years since this was originally asked; however, it is now possible to achieve this.
CloudQueueMessage message = new CloudQueueMessage("Hello, World");
queue.AddMessage(message);
// here's how you get the id
string id = message.Id;
Otherwise, the only way you could get the message ID is by getting the message back, so you would have to fetch messages from the queue using the GetMessage or GetMessages method. However, there is no guarantee that you will get the message you just created, as GetMessages can only return up to 32 visible messages from the top of the queue.
Since a queue works on the "First In, First Out" (FIFO) principle, you can't just fetch a particular message whenever you want; you have to use GetMessage(s) and iterate over the results.
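For example, a rough sketch of that iteration with the classic .NET storage SDK, reusing the queue reference from the question:
// Sketch, assuming Microsoft.WindowsAzure.Storage.Queue and an existing CloudQueue
// reference named "queue". GetMessages returns at most 32 visible messages per call.
foreach (CloudQueueMessage msg in queue.GetMessages(32))
{
    Console.WriteLine($"{msg.Id}: {msg.AsString}");
}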