How to consume messages from Azure Event Hub with only a subscriber? - azure

I can send messages to the Azure event hub, but I am not able to read them back from the event hub.
string eventHubConnectionString = "<connection string>";
string eventHubName = "<event Hub name>";
string storageAccountName = "<event hub storage>";
string storageAccountKey = "<storage Key>";
string storageConnectionString = string.Format("DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}",storageAccountName, storageAccountKey);
// The first argument ("message" here) is the event processor host name: any name that uniquely identifies this host instance.
EventProcessorHost eventProcessorHost = new EventProcessorHost("message", eventHubName, EventHubConsumerGroup.DefaultGroupName, eventHubConnectionString, storageConnectionString);
eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor>().Wait();
The IEventProcessor implementation:
class SimpleEventProcessor : IEventProcessor
{
    Stopwatch checkpointStopWatch;

    async Task IEventProcessor.CloseAsync(PartitionContext context, CloseReason reason)
    {
        Console.WriteLine(string.Format("Processor Shutting Down. Partition '{0}', Reason: '{1}'.", context.Lease.PartitionId, reason.ToString()));
        if (reason == CloseReason.Shutdown)
        {
            await context.CheckpointAsync();
        }
    }

    Task IEventProcessor.OpenAsync(PartitionContext context)
    {
        Console.WriteLine(string.Format("SimpleEventProcessor initialized. Partition: '{0}', Offset: '{1}'", context.Lease.PartitionId, context.Lease.Offset));
        this.checkpointStopWatch = new Stopwatch();
        this.checkpointStopWatch.Start();
        return Task.FromResult<object>(null);
    }

    async Task IEventProcessor.ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (EventData eventData in messages)
        {
            string data = Encoding.UTF8.GetString(eventData.GetBytes());
            Console.WriteLine(string.Format("Message received. Partition: '{0}', Data: '{1}'",
                context.Lease.PartitionId, data));
        }

        // Checkpoint every 5 minutes, so that the worker can resume processing from at most 5 minutes back if it restarts.
        if (this.checkpointStopWatch.Elapsed > TimeSpan.FromMinutes(5))
        {
            await context.CheckpointAsync();
            lock (this)
            {
                // Restart (not just Reset) so the stopwatch keeps running and the next 5-minute window is measured.
                this.checkpointStopWatch.Restart();
            }
        }
    }
}
It shows the following error:
AggregateException: One or more errors occurred.
Message details: No such host is known
What is the EventProcessorHost host name?
The error is thrown at this line: eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor>().Wait();
It never calls into IEventProcessor. Is there any other method for consuming messages from an event hub?

You can trace your exception and look at the inner exception while debugging; that should give you a clue about the real reason. I had this exception too, and it was because the eventHubName you pass to EventProcessorHost has to be lowercase (containing only letters, numbers and '-', where every '-' has to be followed by a letter or number, which means '--' is not supported; the name also has to begin with a letter).
Even if the event hub name is "myEventHub123", your variable has to be:
string eventHubName = "myeventhub123";
Hope this helps someone.
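For what it's worth, a minimal sketch of how to surface those inner exceptions (plain BCL code; the variable names match the question):
try
{
    eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor>().Wait();
}
catch (AggregateException ex)
{
    // Flatten() unwraps nested AggregateExceptions; the inner exceptions carry
    // the real cause, e.g. a DNS failure such as "No such host is known".
    foreach (var inner in ex.Flatten().InnerExceptions)
    {
        Console.WriteLine(inner.GetType().Name + ": " + inner.Message);
    }
}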

I have had success building an event processor using the sample code located here.
It is hard to tell from your sample code what the error is, because it could be a typo in your connection string, event hub name or storage account name, since these are not provided (and you did right, don't post a connection string with sensitive data).
The difference between the way the example loads the event hub information from the connection string and the way your code does it is how the information is provided through the EventHubClient. Try updating the way you build your EventProcessorHost like the example below:
EventHubClient eventHubClient = EventHubClient.CreateFromConnectionString(eventHubConnectionString, this.eventHubName);
// Get the default Consumer Group
defaultConsumerGroup = eventHubClient.GetDefaultConsumerGroup();
string blobConnectionString = ConfigurationManager.AppSettings["AzureStorageConnectionString"]; // Required for checkpoint/state
eventProcessorHost = new EventProcessorHost("singleworker", eventHubClient.Path, defaultConsumerGroup.GroupName, this.eventHubConnectionString, blobConnectionString);
eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor>().Wait();
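As an extra diagnostic (not part of the original answer, but a sketch using the same Microsoft.ServiceBus.Messaging SDK), you can register with EventProcessorOptions and hook its ExceptionReceived event, which usually reports the underlying connectivity error behind "No such host is known":
var options = new EventProcessorOptions();
options.ExceptionReceived += (sender, e) =>
{
    // e.Exception is the underlying error (e.g. a DNS/connection failure); e.Action says which operation failed.
    Console.WriteLine("Exception during {0}: {1}", e.Action, e.Exception);
};
eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor>(options).Wait();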

Related

Azure Service Bus - Message body not getting printed in console; code never enters the for loop and takes a lot of time

with servicebus_client:
    receiver = servicebus_client.get_subscription_receiver(topic_name=TOPIC_NAME, subscription_name=SUBSCRIPTION_NAME, max_wait_time=5)
    with receiver:
        for msg in receiver:
            print("Received: " + str(msg))
            receiver.complete_message(msg)
This code is taken from the Azure Service Bus code snippet in the Microsoft portal. The code works fine up to the point where it sends the messages, but when it reaches this part of the code it never enters the for loop, and the print statement is not executed. However, I can see the incoming and outgoing messages on the metrics page for that topic. Can I get some help here, as I am new to Azure Service Bus?
Here is code for receiving messages from a topic subscription; it works as expected. Try again with all the library references and follow all the steps.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

namespace SubscriptionReceiver
{
    internal class Program
    {
        // connection string to your Service Bus namespace
        static string connectionString = "Endpoint=sb://xxx/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=xxx";
        // name of the Service Bus topic
        static string topicName = "mytopic";
        // name of the subscription to the topic
        static string subscriptionName = "S1";
        // the client that owns the connection and can be used to create senders and receivers
        static ServiceBusClient client;
        // the processor that reads and processes messages from the subscription
        static ServiceBusProcessor processor;

        static async Task Main()
        {
            // The Service Bus client types are safe to cache and use as a singleton for the lifetime
            // of the application, which is best practice when messages are being published or read
            // regularly.
            //
            // Create the clients that we'll use for sending and processing messages.
            client = new ServiceBusClient(connectionString);

            // create a processor that we can use to process the messages
            processor = client.CreateProcessor(topicName, subscriptionName, new ServiceBusProcessorOptions());

            try
            {
                // add handler to process messages
                processor.ProcessMessageAsync += MessageHandler;

                // add handler to process any errors
                processor.ProcessErrorAsync += ErrorHandler;

                // start processing
                await processor.StartProcessingAsync();

                Console.WriteLine("Wait for a minute and then press any key to end the processing");
                Console.ReadKey();

                // stop processing
                Console.WriteLine("\nStopping the receiver...");
                await processor.StopProcessingAsync();
                Console.WriteLine("Stopped receiving messages");
            }
            finally
            {
                // Calling DisposeAsync on client types is required to ensure that network
                // resources and other unmanaged objects are properly cleaned up.
                await processor.DisposeAsync();
                await client.DisposeAsync();
            }
        }

        // handle received messages
        static async Task MessageHandler(ProcessMessageEventArgs args)
        {
            string body = args.Message.Body.ToString();
            Console.WriteLine($"Received: {body} from subscription: {subscriptionName}");

            // complete the message; the message is deleted from the subscription.
            await args.CompleteMessageAsync(args.Message);
        }

        // handle any errors when receiving messages
        static Task ErrorHandler(ProcessErrorEventArgs args)
        {
            Console.WriteLine(args.Exception.ToString());
            return Task.CompletedTask;
        }
    }
}
Output: messages received from the topic subscription.
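If the processor still prints nothing, a quick way to check whether the messages actually land in that subscription (rather than in a different subscription or its dead-letter sub-queue) is to query the runtime counts. A minimal sketch, assuming the Azure.Messaging.ServiceBus.Administration types from the same package:
var adminClient = new Azure.Messaging.ServiceBus.Administration.ServiceBusAdministrationClient(connectionString);
var runtime = await adminClient.GetSubscriptionRuntimePropertiesAsync(topicName, subscriptionName);
// If ActiveMessageCount is 0, the messages were delivered elsewhere (check the subscription's filters/rules).
Console.WriteLine($"Active: {runtime.Value.ActiveMessageCount}, DeadLettered: {runtime.Value.DeadLetterMessageCount}");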

Azure Service Bus, MassTransit and DLQs: moving messages from the DLQ to the original queue

It really annoys me that we're unable to move messages from a dead letter queue back to the original queue for processing when using Azure Service Bus. So I figured I would try to implement this feature myself. We are using MassTransit to publish events, and the queue name in ASB will be an event's full assembly name.
I've created a REST endpoint in my application to move messages from the DLQ to the original queue for reprocessing. This is where I'm stuck at the moment.
To get all messages in a DLQ, the user gives me the queue name and I format it to include the dead letter queue path, like this:
myproject.events.usercreatedevent -> myproject.events.usercreatedevent/$DeadLetterQueue
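(As an aside, a one-line sketch: the Microsoft.Azure.ServiceBus package mentioned below can build this path via its EntityNameHelper, so the suffix doesn't have to be hard-coded.)
string dlqPath = EntityNameHelper.FormatDeadLetterPath("myproject.events.usercreatedevent");
// dlqPath == "myproject.events.usercreatedevent/$DeadLetterQueue"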
I get all the messages from this queue by using classes from the NuGet package Microsoft.Azure.ServiceBus:
public async Task RequeueMessagesAsync(string queueName)
{
    var msg = new MessageReceiver(BuildConnectionString(), queueName);
    var messages = await msg.PeekAsync(50);

    foreach (var message in messages)
    {
        var content = Encoding.UTF8.GetString(message.Body);
        var jsonObject = JsonConvert.DeserializeObject<JObject>(content);
        var destinationAddress = jsonObject["destinationAddress"].ToString();
        var messageContent = jsonObject["message"].ToString();
        var messageType = destinationAddress.Split("/").Last();
        await _bus.SendAsync(jsonObject, messageType);
    }
}
When calling _bus.SendAsync(object, address), the message ends up in a _skipped queue. I think the reason for this is that the message type header is set to JObject, not the actual message type. I cannot use reflection to recreate the event either, as we have a lot of microservices and the source code of the event is not necessarily available. The code behind _bus.SendAsync(object, address) looks like this:
public async Task SendAsync(object message, string queueName, CancellationToken cancellationToken = default)
{
    ISendEndpoint sender = await GetSenderAsync(queueName);
    sender.ConnectSendObserver(new ErrorQueueConfiguration(_addressProvider.GetAddress("error")));
    await sender.Send(message, cancellationToken);
}
Can I trick MassTransit into forwarding this "unknown" type to my consumer by changing the message headers somehow? Has anyone successfully moved messages from a DLQ back to its original queue?

Resubmitting a message from dead letter queue - Azure Service Bus

I have created a Service Bus queue in Azure and it works well. If a message is not successfully processed within the default number of attempts (10), it is correctly moved to the dead letter queue.
Now I would like to resubmit this message from the dead letter queue back to the queue where it originated and see if it works again. I tried this using Service Bus Explorer, but the message gets moved back to the dead letter queue immediately.
Is it possible to do this, and if so, how?
You'd need to send a new message with the same payload. ASB by design doesn't support message resubmission.
We had a batch of around 60k messages which needed to be reprocessed from the dead letter queue. Peeking and sending the messages back via Service Bus Explorer took around 6 minutes per 1k messages from my machine. I solved the issue by setting a forward rule for DLQ messages to another queue and from there auto-forwarding them to the original queue. This solution took around 30 seconds for all 60k messages.
Try removing the dead letter reason:
resubmittableMessage.Properties.Remove("DeadLetterReason");
resubmittableMessage.Properties.Remove("DeadLetterErrorDescription");
Full code:
using Microsoft.ServiceBus.Messaging;
using System.Transactions;

namespace ResubmitDeadQueue
{
    class Program
    {
        static void Main(string[] args)
        {
            var connectionString = "";
            var queueName = "";
            var queue = QueueClient.CreateFromConnectionString(connectionString, QueueClient.FormatDeadLetterPath(queueName), ReceiveMode.PeekLock);
            var client = QueueClient.CreateFromConnectionString(connectionString, queueName);

            BrokeredMessage originalMessage;
            do
            {
                originalMessage = queue.Receive();
                if (originalMessage != null)
                {
                    using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
                    {
                        // Create new message
                        var resubmittableMessage = originalMessage.Clone();

                        // Remove dead letter reason and description
                        resubmittableMessage.Properties.Remove("DeadLetterReason");
                        resubmittableMessage.Properties.Remove("DeadLetterErrorDescription");

                        // Resend cloned DLQ message and complete original DLQ message
                        client.Send(resubmittableMessage);
                        originalMessage.Complete();

                        // Complete transaction
                        scope.Complete();
                    }
                }
            } while (originalMessage != null);
        }
    }
}
Thanks to some of the other responses here!
We regularly need to resubmit messages, and the answer from Baglay-Vyacheslav helped a lot. Below is updated C# code that works with the latest Azure.Messaging.ServiceBus NuGet package.
It makes it much quicker/easier to process the DLQ for both queues and topic subscriptions.
using Azure.Messaging.ServiceBus;
using System.Collections.Generic;
using System.Threading.Tasks;
using NLog;

namespace ServiceBus.Tools
{
    class TransferDeadLetterMessages
    {
        // https://github.com/Azure/azure-sdk-for-net/blob/Azure.Messaging.ServiceBus_7.2.1/sdk/servicebus/Azure.Messaging.ServiceBus/README.md
        private static Logger logger = LogManager.GetCurrentClassLogger();
        private static ServiceBusClient client;
        private static ServiceBusSender sender;

        public static async Task ProcessTopicAsync(string connectionString, string topicName, string subscriberName, int fetchCount = 10)
        {
            try
            {
                client = new ServiceBusClient(connectionString);
                sender = client.CreateSender(topicName);

                ServiceBusReceiver dlqReceiver = client.CreateReceiver(topicName, subscriberName, new ServiceBusReceiverOptions
                {
                    SubQueue = SubQueue.DeadLetter,
                    ReceiveMode = ServiceBusReceiveMode.PeekLock
                });

                await ProcessDeadLetterMessagesAsync($"topic: {topicName} -> subscriber: {subscriberName}", fetchCount, sender, dlqReceiver);
            }
            catch (Azure.Messaging.ServiceBus.ServiceBusException ex)
            {
                if (ex.Reason == Azure.Messaging.ServiceBus.ServiceBusFailureReason.MessagingEntityNotFound)
                {
                    logger.Error(ex, $"Topic:Subscriber '{topicName}:{subscriberName}' not found. Check that the name provided is correct.");
                }
                else
                {
                    throw;
                }
            }
            finally
            {
                await sender.CloseAsync();
                await client.DisposeAsync();
            }
        }

        public static async Task ProcessQueueAsync(string connectionString, string queueName, int fetchCount = 10)
        {
            try
            {
                client = new ServiceBusClient(connectionString);
                sender = client.CreateSender(queueName);

                ServiceBusReceiver dlqReceiver = client.CreateReceiver(queueName, new ServiceBusReceiverOptions
                {
                    SubQueue = SubQueue.DeadLetter,
                    ReceiveMode = ServiceBusReceiveMode.PeekLock
                });

                await ProcessDeadLetterMessagesAsync($"queue: {queueName}", fetchCount, sender, dlqReceiver);
            }
            catch (Azure.Messaging.ServiceBus.ServiceBusException ex)
            {
                if (ex.Reason == Azure.Messaging.ServiceBus.ServiceBusFailureReason.MessagingEntityNotFound)
                {
                    logger.Error(ex, $"Queue '{queueName}' not found. Check that the name provided is correct.");
                }
                else
                {
                    throw;
                }
            }
            finally
            {
                await sender.CloseAsync();
                await client.DisposeAsync();
            }
        }

        private static async Task ProcessDeadLetterMessagesAsync(string source, int fetchCount, ServiceBusSender sender, ServiceBusReceiver dlqReceiver)
        {
            var wait = new System.TimeSpan(0, 0, 10);

            logger.Info($"fetching messages ({wait.TotalSeconds} seconds retrieval timeout)");
            logger.Info(source);

            IReadOnlyList<ServiceBusReceivedMessage> dlqMessages = await dlqReceiver.ReceiveMessagesAsync(fetchCount, wait);

            logger.Info($"dl-count: {dlqMessages.Count}");

            int i = 1;
            foreach (var dlqMessage in dlqMessages)
            {
                logger.Info($"start processing message {i}");
                logger.Info($"dl-message-dead-letter-message-id: {dlqMessage.MessageId}");
                logger.Info($"dl-message-dead-letter-reason: {dlqMessage.DeadLetterReason}");
                logger.Info($"dl-message-dead-letter-error-description: {dlqMessage.DeadLetterErrorDescription}");

                ServiceBusMessage resubmittableMessage = new ServiceBusMessage(dlqMessage);
                await sender.SendMessageAsync(resubmittableMessage);
                await dlqReceiver.CompleteMessageAsync(dlqMessage);

                logger.Info($"finished processing message {i}");
                logger.Info("--------------------------------------------------------------------------------------");
                i++;
            }

            await dlqReceiver.CloseAsync();
            logger.Info($"finished");
        }
    }
}
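A possible invocation, from an async context (the entity names are placeholders):
await TransferDeadLetterMessages.ProcessQueueAsync(connectionString, "myqueue", fetchCount: 50);
await TransferDeadLetterMessages.ProcessTopicAsync(connectionString, "mytopic", "S1", fetchCount: 50);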
It may be "duplicate message detection" as Peter Berggreen indicated or more likely if you are directly moving the BrokeredMessage from the dead letter queue to the live queue then the DeliveryCount would still be at maximum and it would return to the dead letter queue.
Pull the BrokeredMessage off the dead letter queue, get the content using GetBody(), create in new BrokeredMessage with that data and send it to the queue. You can do this in a safe manor, by using peek to get the message content off the dead letter queue and then send the new message to the live queue before removing the message from the dead letter queue. That way you won't lose any crucial data if for some reason it fails to write to the live queue.
With a new BrokeredMessage you should not have an issue with "duplicate message detection" and the DeliveryCount will be reset to zero.
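A rough sketch of that approach with the older Microsoft.ServiceBus.Messaging SDK (connectionString and queueName are placeholders; System.IO is needed for Stream):
var dlqClient = QueueClient.CreateFromConnectionString(connectionString, QueueClient.FormatDeadLetterPath(queueName), ReceiveMode.PeekLock);
var liveClient = QueueClient.CreateFromConnectionString(connectionString, queueName);
BrokeredMessage dlqMessage;
while ((dlqMessage = dlqClient.Receive(TimeSpan.FromSeconds(5))) != null)
{
    // A brand-new message gets a fresh MessageId and a DeliveryCount of zero.
    var fresh = new BrokeredMessage(dlqMessage.GetBody<Stream>());
    foreach (var prop in dlqMessage.Properties)
    {
        if (prop.Key != "DeadLetterReason" && prop.Key != "DeadLetterErrorDescription")
        {
            fresh.Properties[prop.Key] = prop.Value;
        }
    }
    liveClient.Send(fresh);   // send to the live queue first...
    dlqMessage.Complete();    // ...then remove the original from the DLQ
}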
The Service Bus Explorer tool always creates a clone of the original message when you repair and resubmit a message from the dead letter queue. It could not be any different, as by default Service Bus messaging does not provide any message repair and resubmit mechanism. I suggest you investigate why your message, and its clone when you resubmit it, ends up in the dead letter queue. Hope this helps!
It sounds like it could be related to ASB's "duplicate message detection" functionality.
When you resubmit a message in ServiceBus Explorer it will clone the message and thereby the new message will have the same Id as the original message in the deadletter queue.
If you have enabled "Requires Duplicate Detection" on the queue/topic and you try to resubmit the message within the "Duplicate Detection History Time Window", then the message will immediately be moved to the deadletter queue again.
If you want to use Service Bus Explorer to resubmit deadletter messages, then I think that you will have to disable "Requires Duplicate Detection" on the queue/topic.
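Alternatively (a sketch, assuming the older Microsoft.ServiceBus.Messaging SDK used elsewhere on this page), giving the resubmitted clone a fresh MessageId side-steps duplicate detection without changing the queue configuration:
var resubmittableMessage = originalMessage.Clone();
resubmittableMessage.MessageId = Guid.NewGuid().ToString();   // new Id, so duplicate detection won't discard it
resubmittableMessage.Properties.Remove("DeadLetterReason");
resubmittableMessage.Properties.Remove("DeadLetterErrorDescription");
client.Send(resubmittableMessage);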

ApproximateMessageCount always null after calling FetchAttributesAsync in a Universal Windows App

I am making a small App that should list the number of items in my Azure queues.
When I use FetchAttributesAsync and ApproximateMessageCount in a Console App, I get the expected result in ApproximateMessageCount after a call to FetchAttributesAsync (or FetchAttributes).
When I use the same in a Universal Windows app, ApproximateMessageCount remains stuck at null after a call to FetchAttributesAsync (FetchAttributes is not available there).
Console code:
CloudStorageAccount _account;
if (CloudStorageAccount.TryParse(_connectionstring, out _account))
{
    var queueClient = _account.CreateCloudQueueClient();
    Console.WriteLine(" {0}", _account.QueueEndpoint);
    Console.WriteLine(" ----------------------------------------------");
    var queues = (await queueClient.ListQueuesSegmentedAsync(null)).Results;
    foreach (CloudQueue q in queues)
    {
        await q.FetchAttributesAsync();
        Console.WriteLine($" {q.Name,-40} {q.ApproximateMessageCount,5}");
    }
}
Universal App code:
IEnumerable<CloudQueue> queues;
CloudStorageAccount _account;
CloudQueueClient queueClient;

CloudStorageAccount.TryParse(connectionstring, out _account);
queueClient = _account.CreateCloudQueueClient();
queues = (await queueClient.ListQueuesSegmentedAsync(null)).Results;
foreach (CloudQueue q in queues)
{
    await q.FetchAttributesAsync();
    var count = q.ApproximateMessageCount;
    // count is always null here!!!
}
I have tried all kinds of alternatives, like Wait()s and such on the awaitables. Whatever I try, ApproximateMessageCount stubbornly stays null.
Am I missing something?
I think you have discovered a bug in the storage client library. I looked up the code on GitHub and, essentially, instead of reading the value of the Approximate Message Count header, the code reads the value of the Lease Status header.
In the QueueHttpResponseParsers.cs class:
public static string GetApproximateMessageCount(HttpResponseMessage response)
{
    return response.Headers.GetHeaderSingleValueOrDefault(Constants.HeaderConstants.LeaseStatus);
}
This method should have been:
public static string GetApproximateMessageCount(HttpResponseMessage response)
{
    return response.Headers.GetHeaderSingleValueOrDefault(Constants.HeaderConstants.ApproximateMessagesCount);
}
I have submitted a bug for this: https://github.com/Azure/azure-storage-net/issues/155.
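Until that fix ships, one possible workaround (a sketch, not from the original answer; the SAS-token parameter is an assumption) is to call the Get Queue Metadata REST operation directly and read the x-ms-approximate-messages-count response header:
// Requires System.Net.Http, System.Collections.Generic and System.Threading.Tasks.
// queueSasToken is assumed to be a queue-level SAS with Read permission.
public static async Task<int?> GetApproximateMessageCountAsync(string accountName, string queueName, string queueSasToken)
{
    var uri = $"https://{accountName}.queue.core.windows.net/{queueName}?comp=metadata&{queueSasToken.TrimStart('?')}";
    using (var http = new HttpClient())
    using (var response = await http.GetAsync(uri))
    {
        response.EnsureSuccessStatusCode();
        // The service returns the count in this response header.
        IEnumerable<string> values;
        if (response.Headers.TryGetValues("x-ms-approximate-messages-count", out values))
        {
            foreach (var v in values) return int.Parse(v);
        }
        return null;
    }
}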

How do I prevent a duplicate message from being inserted in a Service Bus queue while a WebJob is processing?

I want to be sure that if the same message is already present in the queue, a second copy is ignored (not inserted into the queue) while the WebJob is processing the first message.
I tried the following code:
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
if (!namespaceManager.QueueExists(queueName))
{
    namespaceManager.CreateQueue(new QueueDescription(queueName) { RequiresDuplicateDetection = true });
}
The RequiresDuplicateDetection property should take care of duplicate messages.
// Get messageFactory for runtime operation
MessagingFactory messagingFactory = MessagingFactory.CreateFromConnectionString(connectionString);
QueueClient queueClient = messagingFactory.CreateQueueClient("TestQueue");
BrokeredMessage message = new BrokeredMessage();
message.MessageId = "Localization";
queueClient.Send(message);
But the WebJob gets triggered for every message, regardless of MessageId. I added a sleep of 150000 milliseconds, and before it elapsed I tried to insert the same message into the same queue; it should not have been inserted because of duplicate detection.
I tried the MSDN sample, but it is not working in the Azure WebJob.
WebJob code:
public static void ProcessQueueMessage([ServiceBusTrigger("TestQueue")] BrokeredMessage message, TextWriter log)
{
    log.WriteLine("Webjob Start" + message.MessageId + DateTime.Now);
    Thread.Sleep(150000);
    log.WriteLine("Webjob End" + message.MessageId + DateTime.Now);
}
Duplicate message detection is based on the MessageId of the BrokeredMessage. The Azure Product Team has a sample illustrating this feature here.
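For completeness, a small sketch (not the linked sample) of the pieces that have to line up for duplicate detection: the queue must be created with RequiresDuplicateDetection = true (this cannot be enabled on an existing queue), and the second message with the same MessageId has to arrive within the detection window:
var ns = NamespaceManager.CreateFromConnectionString(connectionString);
if (!ns.QueueExists("TestQueue"))
{
    ns.CreateQueue(new QueueDescription("TestQueue")
    {
        RequiresDuplicateDetection = true,
        // Duplicates are only detected inside this window.
        DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(30)
    });
}
var factory = MessagingFactory.CreateFromConnectionString(connectionString);
var queueClient = factory.CreateQueueClient("TestQueue");
queueClient.Send(new BrokeredMessage("payload") { MessageId = "Localization" });
// Sent inside the window with the same MessageId, this second message is silently dropped by the broker,
// so the WebJob should only be triggered once.
queueClient.Send(new BrokeredMessage("payload") { MessageId = "Localization" });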
