I have created a simple event processor host which reads events from an Azure IoT Hub configured with the default options for lease management. But at regular intervals I get the following exception, even though I am running a single instance of the EventProcessorHost:
ReceiverDisconnectedException: New receiver with higher epoch is created hence current receiver with epoch is getting disconnected. Reason: LeaseLost
The client then re-acquires the lease and continues processing messages.
Here is the initialization of the EventProcessorHost:
EventProcessorHost eventProcessorHost = new EventProcessorHost(hostName, eventHubName, EventHubConsumerGroup.DefaultGroupName,
eventHubConnectionString, storageAccountConnectionString);
var options = new EventProcessorOptions();
options.ExceptionReceived += (sender, e) =>
{
Console.WriteLine(e.Exception);
};
eventProcessorHost.PartitionManagerOptions = PartitionManagerOptions.DefaultOptions;
eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor>(options).Wait();
Is this behaviour normal, or do I need to do something to avoid it when running a single instance?
Below is my EventProcessor:
public class SimpleEventProcessor : IEventProcessor
{
Stopwatch checkpointStopWatch;
RawDataService rawDataService; // field backing this.rawDataService, assigned in OpenAsync below
public async Task CloseAsync(PartitionContext context, CloseReason reason)
{
Console.WriteLine("Processor Shutting Down. Partition '{0}', Reason: '{1}'.", context.Lease.PartitionId, reason);
await context.CheckpointAsync();
}
public Task OpenAsync(PartitionContext context)
{
Console.WriteLine("SimpleEventProcessor initialized. Partition: '{0}', Offset: '{1}'", context.Lease.PartitionId, context.Lease.Offset);
this.checkpointStopWatch = new Stopwatch();
this.checkpointStopWatch.Start();
this.rawDataService = new RawDataService();
return Task.FromResult<object>(null);
}
public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
{
foreach (EventData eventData in messages)
{
string data = Encoding.UTF8.GetString(eventData.GetBytes());
Console.WriteLine(string.Format("Message received. Partition: '{0}', Data: '{1}'", context.Lease.PartitionId, data));
}
}
}
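For what it's worth, the lease timing can be tuned rather than taking PartitionManagerOptions.DefaultOptions. A commonly suggested mitigation for sporadic LeaseLost on a single host is to widen the gap between the lease duration and the renew interval, so that one slow blob-storage call does not cost the lease. A minimal sketch; the property names follow the older Microsoft.ServiceBus.Messaging SDK used above, so verify them against your installed package version:
// Sketch only: lengthen the lease relative to the renewal cadence so a
// transient storage delay does not surface as a LeaseLost disconnect.
eventProcessorHost.PartitionManagerOptions = new PartitionManagerOptions
{
    LeaseInterval = TimeSpan.FromSeconds(60), // how long each partition lease is held
    RenewInterval = TimeSpan.FromSeconds(20)  // how often the host renews it
};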
When I create a scheduled Service Bus message, both in the Azure Portal and in my app using the Service Bus producer code below, I receive a sequence number, which I save in my DB.
Problem: when my Service Bus consumer code is triggered by the dequeue of the scheduled message, the sequence number is different from the one initially given to me by both the producer code and the Azure Portal.
Shown here, where '13' is the sequence number displayed on the Azure Portal screen.
Here is the code that receives the scheduled message, and you can see the sequence number is different!
Here is my consumer code (I don't think it matters):
private async Task MessageHandler(ProcessMessageEventArgs args)
{
string body = args.Message.Body.ToString();
JObject jsonObject = JObject.Parse(body);
var eventStatus = (string)jsonObject["EventStatus"];
await args.CompleteMessageAsync(args.Message);
// fetch row here by sequence number
// edit some data from entity, then save
int result = await dbContext.SaveChangesAsync();
}
Here is my producer code:
public async Task<long> SendMessage(string messageBody, DateTimeOffset scheduledEnqueueTimeUtc)
{
await using (ServiceBusClient client = new ServiceBusClient(_config["ServiceBus:Connection"]))
{
ServiceBusSender sender = client.CreateSender(_config["ServiceBus:Queue"]);
ServiceBusMessage message = new ServiceBusMessage(messageBody);
var sequenceNumber = await sender.ScheduleMessageAsync(message, scheduledEnqueueTimeUtc);
return sequenceNumber;
}
}
From the documentation:
The SequenceNumber for a scheduled message is only valid while the message is in this state. As the message transitions to the active state, the message is appended to the queue as if it had been enqueued at the current instant, which includes assigning a new SequenceNumber.
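Given that behavior, a more stable correlation key is one you assign yourself: a sender-assigned MessageId survives the scheduled-to-active transition, so you can save that in your DB instead of the sequence number. A minimal sketch against the producer code above (the Guid-based id is just an illustration):
ServiceBusMessage message = new ServiceBusMessage(messageBody)
{
    // Unlike the sequence number, a sender-assigned MessageId is not
    // rewritten when the scheduled message becomes active.
    MessageId = Guid.NewGuid().ToString()
};
long scheduledSequenceNumber = await sender.ScheduleMessageAsync(message, scheduledEnqueueTimeUtc);
// Persist message.MessageId as the lookup key; keep the scheduled-state
// sequence number only if you may need CancelScheduledMessageAsync later.
On the consumer side, args.Message.MessageId will then match what was stored.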
This is the code on my side:
using System;
using System.Threading;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
namespace ConsoleApp3
{
class Program
{
static string connectionString = "xxxxxx";
static string queueName = "myqueue";
static ServiceBusClient client;
static ServiceBusProcessor processor;
static async Task Main(string[] args)
{
client = new ServiceBusClient(connectionString);
processor = client.CreateProcessor(queueName, new ServiceBusProcessorOptions());
try
{
processor.ProcessMessageAsync += MessageHandler;
processor.ProcessErrorAsync += ErrorHandler;
await processor.StartProcessingAsync();
Console.WriteLine("Wait for a minute and then press any key to end the processing");
Console.ReadKey();
Console.WriteLine("\nStopping the receiver...");
await processor.StopProcessingAsync();
Console.WriteLine("Stopped receiving messages");
}
finally
{
await processor.DisposeAsync();
await client.DisposeAsync();
}
}
static async Task MessageHandler(ProcessMessageEventArgs args)
{
string body = args.Message.Body.ToString();
Console.WriteLine($"Received: {body}");
Console.WriteLine($"ID: {args.Message.MessageId}");
await args.CompleteMessageAsync(args.Message);
}
static Task ErrorHandler(ProcessErrorEventArgs args)
{
Console.WriteLine(args.Exception.ToString());
return Task.CompletedTask;
}
}
}
And there seems to be no problem on my side:
The MessageId would only change if the message had been thrown back (re-enqueued) for some reason.
I'm facing a strange issue, and I have run out of possible causes. The scenario is:
Fetch an incoming message from a queue
Process it, then add a new message to another queue
But the thing is: if I finish the long-running task for the incoming message and then try to add the new message to the other queue, I never receive it. If I first add a fake message to that other queue, then I am able to receive the real message after the long-running operation finishes. But why? I don't want to put any fake messages on the queue, but without that my scenario doesn't work. Any ideas?
public class WorkerRole : RoleEntryPoint
{
// QueueClient is thread-safe. Recommended that you cache
// rather than recreating it on every request
Microsoft.ServiceBus.Messaging.QueueClient Client;
ManualResetEvent CompletedEvent = new ManualResetEvent(false);
public override void Run()
{
MyResult result = null;
// Note: this is a client from the newer Microsoft.Azure.ServiceBus package,
// while OnMessage below runs on the older Microsoft.ServiceBus.Messaging
// client created in OnStart, so two different SDKs are in play here.
var queueClient = new Microsoft.Azure.ServiceBus.QueueClient("QueueConnectionString", "QueueName");
Client.OnMessage(async (receivedMessage) =>
{
try
{
using (Stream stream = receivedMessage.GetBody<Stream>())
{
using (StreamReader reader = new StreamReader(stream))
{
string json = reader.ReadToEnd();
var item = JsonConvert.DeserializeObject<IncomingClass>(json);
var someClass = new OCRManager();
var message = new Message(Encoding.UTF8.GetBytes("test 1"));
await queueClient.SendAsync(message);
result = new SomeManager().RunLongRunningTask(item); //it runs for 1-2min
}
}
}
catch (Exception ex) { }
finally
{
var json = JsonConvert.SerializeObject(result);
var message = new Message(Encoding.UTF8.GetBytes(json));
await queueClient.SendAsync(message);
}
});
CompletedEvent.WaitOne();
}
public override bool OnStart()
{
ServicePointManager.DefaultConnectionLimit = 12;
string connectionString = CloudConfigurationManager.GetSetting("Queue.ConnectionString");
// QueueClient.Create expects a queue path, not a connection string;
// CreateFromConnectionString is the overload that takes one.
Client = Microsoft.ServiceBus.Messaging.QueueClient.CreateFromConnectionString(connectionString, "QueueName");
return base.OnStart();
}
public override void OnStop()
{
Client.Close();
CompletedEvent.Set();
base.OnStop();
}
}
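For comparison, here is a minimal sketch of the same receive-then-forward flow kept within a single SDK (Microsoft.ServiceBus.Messaging throughout, matching the client created in OnStart). The queue names and the IncomingClass/SomeManager/MyResult types are the question's own placeholders, not verified names:
// Sketch: one SDK end to end; the forward happens on a second queue client
// from the same Microsoft.ServiceBus.Messaging package.
string connectionString = CloudConfigurationManager.GetSetting("Queue.ConnectionString");
var incoming = Microsoft.ServiceBus.Messaging.QueueClient.CreateFromConnectionString(connectionString, "IncomingQueue");
var outgoing = Microsoft.ServiceBus.Messaging.QueueClient.CreateFromConnectionString(connectionString, "OutgoingQueue");
var onMessageOptions = new OnMessageOptions
{
    AutoComplete = true,
    AutoRenewTimeout = TimeSpan.FromMinutes(5) // keep the peek-lock alive across the 1-2 min task
};
incoming.OnMessage(receivedMessage =>
{
    string json;
    using (var reader = new StreamReader(receivedMessage.GetBody<Stream>()))
    {
        json = reader.ReadToEnd();
    }
    var item = JsonConvert.DeserializeObject<IncomingClass>(json);
    MyResult result = new SomeManager().RunLongRunningTask(item);
    // BrokeredMessage(object) round-trips the payload via DataContractSerializer;
    // the consumer reads it back with GetBody<string>().
    outgoing.Send(new BrokeredMessage(JsonConvert.SerializeObject(result)));
}, onMessageOptions);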
I'm using the packages Microsoft.Azure.EventHubs (2.0.0) and Microsoft.Azure.EventHubs.Processor (2.0.1) to read from an Azure Event Hub. It works when running the app on the iPhone X iOS 11.2 simulator, but if I run the app on my iPhone 6s iOS 11.3 device, the app doesn't connect to the Event Hub.
This is my code to connect to the Event Hub:
public async Task StartReadingFromHub () {
_eventProcessorHost = new EventProcessorHost (
EhEntityPath,
PartitionReceiver.DefaultConsumerGroupName,
EhConnectionString,
StorageConnectionString,
LeaseContainerName);
var options = new EventProcessorOptions () {
MaxBatchSize = 10
};
await _eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor> (options);
}
This is my EventProcessor:
public class SimpleEventProcessor : IEventProcessor {
public Task CloseAsync (PartitionContext context, CloseReason reason) {
return Task.CompletedTask;
}
public Task OpenAsync (PartitionContext context) {
return Task.CompletedTask;
}
public Task ProcessErrorAsync (PartitionContext context, Exception error) {
return Task.CompletedTask;
}
public Task ProcessEventsAsync (PartitionContext context, IEnumerable<EventData> messages) {
foreach (var eventData in messages) {
var data = Encoding.UTF8.GetString (eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
var alert = JsonConvert.DeserializeObject<Alert> (data);
HubMessages.receivedMessages.Add (alert);
}
return context.CheckpointAsync ();
}
}
And the OpenAsync method is never entered.
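It is hard to say more without an error surfacing. One way to get diagnostics out of the processor library is its host-level exception handler, which fires for lease/storage/connectivity failures that never reach the IEventProcessor. A sketch against the registration code above; EventProcessorOptions.SetExceptionHandler should exist in the Microsoft.Azure.EventHubs.Processor 2.x line, but verify against your version:
var options = new EventProcessorOptions () {
    MaxBatchSize = 10
};
options.SetExceptionHandler (args => {
    // args.Action describes what the host was doing when the error occurred.
    Console.WriteLine ($"Host error during {args.Action}: {args.Exception}");
});
await _eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor> (options);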
I ran into an issue with IMessageSessionAsyncHandlerFactory where new instances of IMessageSessionAsyncHandler are not created when the write volume drops to 0 and then climbs back to a normal level.
To be more precise, I'm using SessionHandlerOptions with MaxConcurrentSessions set to 500. This allows reading at more than 1k msg/s.
The queue I'm reading from is a partitioned queue.
The volume of messages in the queue is fairly constant, but from time to time it drops to 0. When the volume gets back to its normal level, the session factory does not spawn any handlers, so I'm no longer able to read messages. It's as if the sessions were not correctly recycled, or are held in a sort of continuous wait.
Here is the code that registers the factory:
private void RegisterHandler()
{
var sessionHandlerOptions = new SessionHandlerOptions
{
AutoRenewTimeout = TimeSpan.FromMinutes(1),
MessageWaitTimeout = TimeSpan.FromSeconds(1),
MaxConcurrentSessions = 500
};
_queueClient.RegisterSessionHandlerFactoryAsync(new SessionHandlerFactory(_callback), sessionHandlerOptions);
}
The factory class:
public class SessionHandlerFactory : IMessageSessionAsyncHandlerFactory
{
private readonly Action<BrokeredMessage> _callback;
public SessionHandlerFactory(Action<BrokeredMessage> callback)
{
_callback = callback;
}
public IMessageSessionAsyncHandler CreateInstance(MessageSession session, BrokeredMessage message)
{
return new SessionHandler(session.SessionId, _callback);
}
public void DisposeInstance(IMessageSessionAsyncHandler handler)
{
var disposable = handler as IDisposable;
disposable?.Dispose();
}
}
And the handler:
public class SessionHandler : MessageSessionAsyncHandler
{
private readonly Action<BrokeredMessage> _callback;
public SessionHandler(string sessionId, Action<BrokeredMessage> callback)
{
SessionId = sessionId;
_callback = callback;
}
public string SessionId { get; }
protected override async Task OnMessageAsync(MessageSession session, BrokeredMessage message)
{
try
{
_callback(message);
}
catch (Exception ex)
{
Logger.Error(...);
}
}
}
I can see that the session handlers are closed and the factories disposed while writing/reading is at a normal level. However, once the queue empties, no new session handlers are ever created. Is there a policy for allocating session IDs that forbids reallocating the same sessions after a period of inactivity?
Edit 1:
I'm adding two pictures to illustrate the behavior:
When the writer is stopped and restarted, the running reader is not able to read as much as before.
The number of sessions created after that moment is also much lower than before.
When using IMessageSessionAsyncHandlerFactory to control how the IMessageSessionAsyncHandler instances are created, you could try logging the creation and destruction of all your IMessageSessionAsyncHandler instances.
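For example, a minimal logging wrapper along those lines, reusing the question's SessionHandler and factory shape (the Console.WriteLine calls are purely diagnostic):
public class LoggingSessionHandlerFactory : IMessageSessionAsyncHandlerFactory
{
    private readonly Action<BrokeredMessage> _callback;
    public LoggingSessionHandlerFactory(Action<BrokeredMessage> callback)
    {
        _callback = callback;
    }
    public IMessageSessionAsyncHandler CreateInstance(MessageSession session, BrokeredMessage message)
    {
        Console.WriteLine($"{DateTime.UtcNow:O} handler created for session '{session.SessionId}'");
        return new SessionHandler(session.SessionId, _callback);
    }
    public void DisposeInstance(IMessageSessionAsyncHandler handler)
    {
        Console.WriteLine($"{DateTime.UtcNow:O} handler disposed for session '{(handler as SessionHandler)?.SessionId}'");
        (handler as IDisposable)?.Dispose();
    }
}
Comparing the two logs across an idle period should show whether handlers stop being created, or are created and immediately disposed.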
Based on your code, I created a console application to reproduce this issue on my side. Here is my code snippet for initializing the queue client and handling messages:
InitializeReceiver
static void InitializeReceiver(string connectionString, string queuePath)
{
_queueClient = QueueClient.CreateFromConnectionString(connectionString, queuePath, ReceiveMode.PeekLock);
var sessionHandlerOptions = new SessionHandlerOptions
{
AutoRenewTimeout = TimeSpan.FromMinutes(1),
MessageWaitTimeout = TimeSpan.FromSeconds(5),
MaxConcurrentSessions = 500
};
_queueClient.RegisterSessionHandlerFactoryAsync(new SessionHandlerFactory(OnMessageHandler), sessionHandlerOptions);
}
OnMessageHandler
static void OnMessageHandler(BrokeredMessage message)
{
var body = message.GetBody<Stream>();
dynamic recipeStep = JsonConvert.DeserializeObject(new StreamReader(body, true).ReadToEnd());
lock (Console.Out)
{
Console.ForegroundColor = ConsoleColor.Cyan;
Console.WriteLine(
"Message received: \n\tSessionId = {0}, \n\tMessageId = {1}, \n\tSequenceNumber = {2}," +
"\n\tContent: [ title = {3} ]",
message.SessionId,
message.MessageId,
message.SequenceNumber,
recipeStep.title);
Console.ResetColor();
}
Task.Delay(TimeSpan.FromSeconds(3)).Wait();
message.Complete();
}
Per my test, the SessionHandler worked as expected while the volume of messages in the queue went from normal to zero and from zero back to normal over some time, as follows:
I also tried leveraging QueueClient.RegisterSessionHandlerAsync to test this issue, and it works as well. Additionally, I found this GitHub sample about Service Bus sessions that you could refer to.
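For reference, the RegisterSessionHandlerAsync variant mentioned above skips the factory entirely and lets the SDK construct the handler type itself; a sketch, assuming the handler exposes a parameterless constructor (the question's SessionHandler would need one in this form):
// Sketch: type-based registration instead of a factory. The SDK
// instantiates the handler per session via its parameterless constructor.
await _queueClient.RegisterSessionHandlerAsync(typeof(SessionHandler), new SessionHandlerOptions
{
    AutoRenewTimeout = TimeSpan.FromMinutes(1),
    MessageWaitTimeout = TimeSpan.FromSeconds(5),
    MaxConcurrentSessions = 500
});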
I was hoping for some guidance on how to use the EventProcessorHost with a worker role. Basically, I am hoping to have the EventProcessorHost process the partitions in parallel, and I'm wondering where I should place this type of code within the worker role and whether I'm missing anything key.
var manager = NamespaceManager.CreateFromConnectionString(connectionString);
var desc = manager.CreateEventHubIfNotExistsAsync(path).Result;
var client = Microsoft.ServiceBus.Messaging.EventHubClient.CreateFromConnectionString(connectionString, path);
var host = new EventProcessorHost(hostname, path, consumerGroup, connectionString, blobStorageConnectionString);
EventHubProcessorFactory<EventData> factory = new EventHubProcessorFactory<EventData>();
host.RegisterEventProcessorFactoryAsync(factory);
Everything I've read says the EventProcessorHost will divide up the partitions on its own, but is the above code sufficient to process all the partitions asynchronously?
Here's a simplified version of how we process our event hub from a Worker Role. We keep the EventProcessorHost instance in the main worker role and register the IEventProcessor to start processing.
This way we can start it up and close it down when the worker responds to shutdown events, etc.
EDIT:
As for processing in parallel: the IEventProcessor class will just grab 10 more events from the event hub when it's finished processing the current batch, handling all the fancy partition leasing for you.
It's a synchronous workflow. When I scale to multiple worker roles, I start to see the partitions get split between instances, and it gets faster, etc. You'd have to roll your own solution if you wanted to process the event hub in a different way.
public class WorkerRole : RoleEntryPoint
{
private readonly CancellationTokenSource _cancellationTokenSource = new CancellationTokenSource();
private readonly ManualResetEvent _runCompleteEvent = new ManualResetEvent(false);
private EventProcessorHost _eventProcessorHost;
public override bool OnStart()
{
ThreadPool.SetMaxThreads(4096, 2048);
ServicePointManager.DefaultConnectionLimit = 500;
ServicePointManager.UseNagleAlgorithm = false;
ServicePointManager.Expect100Continue = false;
var eventClient = EventHubClient.CreateFromConnectionString("consumersConnectionString",
"eventHubName");
_eventProcessorHost = new EventProcessorHost(Dns.GetHostName(), eventClient.Path,
eventClient.GetDefaultConsumerGroup().GroupName,
"consumersConnectionString", "blobLeaseConnectionString");
return base.OnStart();
}
public override void Run()
{
try
{
RunAsync(this._cancellationTokenSource.Token).Wait();
}
finally
{
_runCompleteEvent.Set();
}
}
private async Task RunAsync(CancellationToken cancellationToken)
{
// starts processing here
await _eventProcessorHost.RegisterEventProcessorAsync<EventProcessor>();
while (!cancellationToken.IsCancellationRequested)
{
await Task.Delay(TimeSpan.FromMinutes(1));
}
}
public override void OnStop()
{
_eventProcessorHost.UnregisterEventProcessorAsync().Wait();
_cancellationTokenSource.Cancel();
_runCompleteEvent.WaitOne();
base.OnStop();
}
}
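As an aside, the RegisterEventProcessorAsync call in RunAsync above also accepts EventProcessorOptions if you want to tune batching; a small sketch (property names from the older Microsoft.ServiceBus.Messaging SDK used here; verify against your version):
// Sketch: controlling batch size and prefetch at registration time.
var options = new EventProcessorOptions
{
    MaxBatchSize = 100,  // max events handed to ProcessEventsAsync per call
    PrefetchCount = 300  // events the receiver buffers ahead
};
options.ExceptionReceived += (sender, e) => Console.WriteLine(e.Exception);
await _eventProcessorHost.RegisterEventProcessorAsync<EventProcessor>(options);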
I have multiple processors for specific partitions (you can guarantee FIFO this way), but you can easily implement your own logic, i.e. skip the EventDataProcessor class and dictionary lookup in my example and just implement some logic directly within the ProcessEventsAsync method.
public class EventProcessor : IEventProcessor
{
private readonly Dictionary<string, IEventDataProcessor> _eventDataProcessors;
public EventProcessor()
{
_eventDataProcessors = new Dictionary<string, IEventDataProcessor>
{
{"A", new EventDataProcessorA()},
{"B", new EventDataProcessorB()},
{"C", new EventDataProcessorC()}
};
}
public Task OpenAsync(PartitionContext context)
{
return Task.FromResult<object>(null);
}
public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
{
foreach(EventData eventData in messages)
{
// implement your own logic here, you could just process the data here, just remember that they will all be from the same partition in this block
try
{
IEventDataProcessor eventDataProcessor;
if(_eventDataProcessors.TryGetValue(eventData.PartitionKey, out eventDataProcessor))
{
await eventDataProcessor.ProcessMessage(eventData);
}
}
catch (Exception ex)
{
// log exception
}
}
await context.CheckpointAsync();
}
public async Task CloseAsync(PartitionContext context, CloseReason reason)
{
if (reason == CloseReason.Shutdown)
await context.CheckpointAsync();
}
}
Example of one of our EventDataProcessors:
public interface IEventDataProcessor
{
Task ProcessMessage(EventData eventData);
}
public class EventDataProcessorA : IEventDataProcessor
{
public async Task ProcessMessage(EventData eventData)
{
// Do Something specific with data from Partition "A"
}
}
public class EventDataProcessorB : IEventDataProcessor
{
public async Task ProcessMessage(EventData eventData)
{
// Do Something specific with data from Partition "B"
}
}
Hope this helps; it's been rock solid for us so far and scales easily to multiple instances.