Should Azure ServiceBusConnection be singleton? - azure

In the old NuGet package, it was good practice to retain the MessagingFactory instance.
Should we retain a single instance of ServiceBusConnection (from the new NuGet package) and inject it into multiple clients?
EDIT 2018-07-25 00:22:00 UTC+8:
Added clarification that ServiceBusConnection is from the new NuGet package.

Azure Service Bus .NET Standard client connections are not managed via QueueClient or any factory, as they were with the predecessor library. QueueClient is an abstraction on top of MessageSender and MessageReceiver that can take a ServiceBusConnection shared by multiple senders/receivers. You're free to choose whether or not to share the same connection object.
var connection = new ServiceBusConnection(connectionString);
var queueClient1 = new QueueClient(connection, "queue1", ReceiveMode.PeekLock, RetryPolicy.Default);
var queueClient2 = new QueueClient(connection, "queue2", ReceiveMode.PeekLock, RetryPolicy.Default);
// Both queue clients share the same underlying connection
var message1 = new Message(Encoding.UTF8.GetBytes("Message1"));
var message2 = new Message(Encoding.UTF8.GetBytes("Message2"));
await queueClient1.SendAsync(message1);
await queueClient2.SendAsync(message2);
Based on the namespace tier you're using, you'll have to benchmark and see what works better for you. My findings showed that with Standard tier having multiple connections helps throughput, while with Premium tier it doesn't. This post hints at the reasons why it's so.

Not necessarily a singleton for all.
The right level of granularity is the MessagingFactory. You may connect to a number of namespaces in your application; each will require its own MessagingFactory and connection.
Alternatively, you may have code that is not very async in its nature, and in that case multiple connections (factories) make sense. For example, if you have two processes in your code, one doing a lot of work in a loop to send messages and another receiving, two factories may be useful, or you may refactor to make your code more async.
From the documentation:
It is recommended that you do not close messaging factories or queue,
topic, and subscription clients after you send a message, and then
re-create them when you send the next message. Closing a messaging
factory deletes the connection to the Service Bus service.
In short, you should be re-using your messaging factories, which maintain a connection to the Service Bus service. Depending on how your code is written, you may want to have multiple factories.
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-performance-improvements#reusing-factories-and-clients
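A minimal sketch of that guidance, assuming the older WindowsAzure.ServiceBus package (connection string and queue name are placeholders):

```csharp
using Microsoft.ServiceBus.Messaging;

// Create the factory once and keep it for the application's lifetime;
// it owns the connection to the Service Bus service.
MessagingFactory factory = MessagingFactory.CreateFromConnectionString("<connection-string>");

// Clients created from the factory share its connection; reuse them too.
QueueClient queueClient = factory.CreateQueueClient("myqueue");
await queueClient.SendAsync(new BrokeredMessage("payload"));
// Do not close the factory between sends; closing it drops the connection.
```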

Related

Azure Service Bus Send throughput .Net SDK

I am currently implementing a library to send messages faster to a Service Bus queue. What I observed is that if I use the same ServiceBusClient and the same sender to send messages in a Parallel.For, the throughput is not very high and my network upload speed is not fully utilized. The moment I create individual clients and use them to send, the throughput increases drastically and utilizes my upload bandwidth well.
Is my understanding correct, or should a single client and sender suffice? Also, I am averse to creating multiple clients, as it will use a lot of resources to establish each connection. Any articles that throw some light on this?
There is a throughput test tool, and its code also creates multiple clients.
protected override Task OnStartAsync()
{
    for (int i = 0; i < this.Settings.SenderCount; i++)
    {
        this.senders.Add(Task.Run(SendTask));
    }
    return Task.WhenAll(senders);
}

async Task SendTask()
{
    var client = new ServiceBusClient(this.Settings.ConnectionString);
    ServiceBusSender sender = client.CreateSender(this.Settings.SendPath);
    var payload = new byte[this.Settings.MessageSizeInBytes];
    var semaphore = new DynamicSemaphoreSlim(this.Settings.MaxInflightSends.Value);
    var done = new SemaphoreSlim(1);
    done.Wait();
    long totalSends = 0;
    // ... (snippet truncated; see the repository for the full send loop)
}
https://github.com/Azure-Samples/service-bus-dotnet-messaging-performance
Is there a library to manage the connections in a pool?
From the patterns in your code, I'm assuming that you're using the Azure.Messaging.ServiceBus package. If that isn't the case, please ignore the remainder of this post.
ServiceBusClient represents a single AMQP connection to the service. Any senders, receivers, and processors spawned from this client will share that connection. This gives your application the ability to control the number of connections used and pool them in the manner that works best in your context.
It is recommended to reuse clients, senders, receivers, and processors for the lifetime of your application; though the connection is shared, each time a new child type is spawned, it must establish a new AMQP link and perform the authorization handshake - which is non-trivial overhead.
These types are self-managing with respect to resources. For idle periods, connections and links will be closed to avoid waste, and they'll automatically be recreated for the first operation that requires them.
With respect to using multiple clients, senders, receivers, and processors - it is a valid approach and can yield better performance in some scenarios. The one caveat that I'll mention is that using more clients than the number of CPU cores in your host environment comes with an increased risk of causing contention in the thread pool. The Service Bus library is highly asynchronous, and its performance relies on continuations for async calls being scheduled in a timely manner.
Unfortunately, performance tuning is very difficult to generalize due to how much it varies for different application and hosting contexts. To find the right number of senders to maximize throughput for your application, we recommend that you spend time testing different values and observing the performance characteristics in your specific system.
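As a sketch of the trade-off described above (connection string and queue names are placeholders), one ServiceBusClient shares a single AMQP connection across all of its senders, while separate clients open separate connections:

```csharp
using Azure.Messaging.ServiceBus;

// One client == one AMQP connection; senders created from it share that connection.
await using var sharedClient = new ServiceBusClient("<connection-string>");
ServiceBusSender senderA = sharedClient.CreateSender("queue-a"); // new AMQP link, same connection
ServiceBusSender senderB = sharedClient.CreateSender("queue-b"); // new AMQP link, same connection

// To spread load across connections, create multiple clients - but keep the
// count at or below the host's CPU core count to limit thread pool contention.
var clients = new List<ServiceBusClient>();
for (int i = 0; i < Environment.ProcessorCount; i++)
{
    clients.Add(new ServiceBusClient("<connection-string>"));
}
```

Keep the clients and senders alive for the lifetime of the application; each new sender pays an AMQP link and authorization handshake even on a shared connection.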
For the new SDK, the same principle of connection management applies, i.e., re-creating connections is expensive.
You can connect client objects directly to the bus, or, by creating a ServiceBusConnection, share a single connection between clients.
For the scenario where you want to send as many messages as possible to a single queue, you can increase throughput by spinning up multiple ServiceBusConnection and client objects on separate threads.
Is there a library to manage the connections in a pool?
There's no connection pooling happening under the hood, and new connections are relatively expensive to create. With the previous SDK, the advice was to re-use factories and clients where possible.
Refer to this article for more information.

How do I reuse QueueClient instances to send response messages from Azure Function?

The best practices article from Azure docs recommends reusing QueueClient to send multiple messages to obtain faster performance.
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-performance-improvements#reusing-factories-and-clients
I have a scenario of request-response messaging where Azure function is sending responses after being triggered by a service bus queue trigger (which is a request message).
The performance of QueueClient.SendAsync is slow (1.5 to 3 seconds) if I create a new QueueClient every time the function executes.
If I reuse the QueueClient in a static variable, the SendAsync time drops to 50 ms for subsequent calls. However, a static reference does not seem like a reliable way to reuse the QueueClient connection, since I do not know when to close it.
Is there a reliable way to reuse and then close the QueueClient across multiple function executions?
Can Azure-Function runtime provide any event where I can reliably close the queue client?
I think the challenge here is that the Azure Function doesn't know when you are "done" sending messages. If your Function has gone idle for some time we may remove instances, but I don't believe there is any downside to using a static QueueClient in a Function without ever explicitly closing it; in fact, it would be a best practice.
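A minimal sketch of that static-client pattern, assuming the Microsoft.Azure.ServiceBus package (function, queue, and connection names are illustrative):

```csharp
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.WebJobs;

public static class ResponderFunction
{
    // Created once per host instance and reused across executions; never explicitly closed.
    private static readonly QueueClient ResponseClient =
        new QueueClient("<connection-string>", "responses");

    [FunctionName("Responder")]
    public static async Task Run(
        [ServiceBusTrigger("requests", Connection = "ServiceBusConnection")] string request)
    {
        // Reusing the client keeps the connection warm, avoiding the 1.5-3 s cold setup.
        await ResponseClient.SendAsync(new Message(Encoding.UTF8.GetBytes("reply: " + request)));
    }
}
```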
I suggest you switching to Service Bus output binding instead of sending messages manually with QueueClient. This way runtime will manage the instances of QueueClient and their configuration for you. See Azure Functions Service Bus bindings, "Service Bus output binding" section.
Otherwise, a static QueueClient, as suggested by @jeffhollan, is a viable option.

Azure IOT ServiceClient / RegistryClient: What is the recommended frequency of CloseAsync?

Microsoft.Azure.Devices.ServiceClient and Microsoft.Azure.Devices.RegistryManager both have CreateFromConnectionString and CloseAsync methods.
Should we use them like other .NET connection-close patterns, such as ADO.NET connections, Redis connections, Sockets, etc.? When I use objects like those, I try to Close() or Dispose() them as soon as possible.
What are the upsides and downsides of doing the same with the Microsoft.Azure.Devices objects when accessing the same IoT Hub? I have running code that treats the individual RegistryManager and ServiceClient instances as singletons, used throughout the application's lifetime, which may be weeks or months. Are we short-circuiting ourselves by keeping these objects "open" for this duration?
As pointed out by @David R. Williamson, and this discussion on GitHub, it is generally recommended to register RegistryManager and ServiceClient as singletons.
For ASP NET Core or Azure Functions, it can be done as follows at startup:
public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<RegistryManager>(
        RegistryManager.CreateFromConnectionString("connectionstring"));
}
I'm on the Azure IoT SDK team. We recommend keeping the service client instances around, depending on your usage pattern.
While it is generally fine to init/close/dispose the Azure IoT service clients, you can run into the HttpClient socket exhaustion problem. We've done work to prevent it from happening if you keep a singleton of the client around, but when the client is disposed, two HttpClient instances are also disposed.
There's little harm in keeping them around, and doing so saves your program the execution time of init/dispose.
We're working on a v2 where we'll allow the user to pass in their own copy of HttpClient, so you could keep that around and shut down our client more frequently.

Azure Queue Storage Proper Call Pattern in 2015?

What is the proper call/code pattern to write to Azure Queue Storage in a performant way?
Right now, the pseudo-code is
Create a static class with StorageCredentials and CloudStorageAccount properties. At application startup, read values from the config file into {get;}-only properties.
Create a class with an async Task method taking my application message type as input. The method serializes the type and creates a new CloudQueueMessage, a new CloudQueueClient, and a new CloudQueue reference. Where configuration information is needed, it is read from the static class. My code then calls:
await Task.Run(() => theref.AddMessage(themessage));
It looks to me as if I have some redundancy in the code, and I am not sure if/how connections might be pooled to the queue, and also whether I require retry logic as I would with database (SQL Server etc.) connectivity.
I am trying to understand which queue accessing steps can be reduced or optimized in any way.
All ideas appreciated.
Using .NET 4.5.2, C#. Code is executing in a Cloud Service (Worker Role).
Thanks.
Azure Storage Client Library already retries for you by default in case of service/network errors. It will retry up to 3 times per operation.
You can change your call to await theref.AddMessageAsync(themessage) instead of blocking on the synchronous AddMessage call on a separate thread.
As of the latest library, you can reuse the CloudQueueClient object to get a new reference to CloudQueue.
As long as you are calling AddMessageAsync sequentially, the same connection will be reused whenever possible. If you call it concurrently, more connections will be created, up to ServicePointManager.DefaultConnectionLimit connections. So, if you would like concurrent access to the queue, you might want to increase this number.
Disabling Nagle algorithm through ServicePointManager.UseNagleAlgorithm is also recommended given the size of queue messages.
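A sketch combining those recommendations, assuming the classic Microsoft.WindowsAzure.Storage SDK (queue name and connection string are placeholders):

```csharp
using System.Net;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

// Tune the service point before making any requests.
ServicePointManager.DefaultConnectionLimit = 100; // raise for concurrent AddMessageAsync calls
ServicePointManager.UseNagleAlgorithm = false;    // small queue messages benefit from disabling Nagle

var account = CloudStorageAccount.Parse("<connection-string>");
CloudQueueClient queueClient = account.CreateCloudQueueClient(); // reusable
CloudQueue queue = queueClient.GetQueueReference("myqueue");     // cache and reuse this reference

// Async add; the client library retries transient failures (up to 3 times) by default.
await queue.AddMessageAsync(new CloudQueueMessage("hello"));
```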
I would cache your CloudQueue reference and re-use it. Each time you add a message to the queue, this class constructs a REST call using HttpClient. Since your credentials and storage/queue Uri are already known this could save a few cycles.
Also, using AddMessageAsync instead of AddMessage is recommended.
As a reference, you can see the implementation in the storage client libraries here.

Consuming MSMQ with WCF

New at MSMQ and WCF.
I want to be able to process incoming MSMQ messages at a high rate. I want to make it Multithreaded (and transactional).
What is the best way of doing this? Any examples, code snippets, theories are very much welcome.
Also, how is WCF able to know there is a message in the MSMQ? Or would I have to create a Windows Service that polls the MSMQ and, for each message found, starts a new thread and invokes the WCF service, passing the message to it?
What is the best way?
Many thanks
The answer here was to use WCF and create a data contract of service known types.
These known types are the objects expected from the queue being read.
To make it multithreaded and transactional, not only does the queue need to be transactional, but you must also decorate the service with:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, InstanceContextMode = InstanceContextMode.PerSession, ReleaseServiceInstanceOnTransactionComplete = false)]
InstanceContextMode is PerSession by default.
You also need to set up the bindings in your config file.
Example: http://msdn.microsoft.com/en-us/library/ms751493.aspx
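A minimal sketch of such a service (type and operation names are illustrative, assuming the standard netMsmqBinding approach where WCF itself listens on the queue, so no polling service is needed):

```csharp
using System.ServiceModel;

[ServiceContract]
[ServiceKnownType(typeof(OrderMessage))] // the known type expected from the queue
public interface IOrderProcessor
{
    [OperationContract(IsOneWay = true)] // MSMQ operations must be one-way
    void ProcessOrder(OrderMessage order);
}

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple,
                 InstanceContextMode = InstanceContextMode.PerSession,
                 ReleaseServiceInstanceOnTransactionComplete = false)]
public class OrderProcessor : IOrderProcessor
{
    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void ProcessOrder(OrderMessage order)
    {
        // The dequeue participates in the ambient transaction; an exception
        // thrown here rolls the message back onto the (transactional) queue.
    }
}

public class OrderMessage { public string OrderId { get; set; } }
```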
