Microsoft.Azure.Devices.ServiceClient and Microsoft.Azure.Devices.RegistryManager both have CreateFromConnectionString and CloseAsync methods.
Should we use them like we use other .NET connection-close patterns, such as ADO.NET connections, Redis connections, sockets, etc.? When I use objects like those, I try to Close() or Dispose() them as soon as possible.
What's the upside and downside to doing the same with the Microsoft.Azure.Devices objects when accessing the same IoT Hub? I have running code that treats the individual RegistryManager and ServiceClient instances as singletons, which are used throughout the application's lifetime, which may be weeks or months. Are we short-circuiting ourselves by keeping these objects "open" for this duration?
As pointed out by David R. Williamson and this discussion on GitHub, it is generally recommended to register RegistryManager and ServiceClient as singletons.
For ASP.NET Core or Azure Functions, it can be done as follows at startup:
public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<RegistryManager>(
        RegistryManager.CreateFromConnectionString("connectionstring"));
}
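ServiceClient can be registered the same way (a sketch, assuming the same connection string):

services.AddSingleton<ServiceClient>(
    ServiceClient.CreateFromConnectionString("connectionstring"));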
I'm on the Azure IoT SDK team. We recommend keeping the service client instances around, depending on your usage pattern.
While it is generally fine to init/close/dispose the Azure IoT service clients, you can run into the HttpClient socket exhaustion problem. We've done work to prevent that from happening if you keep a singleton of the client around, but when the client is disposed, two HttpClient instances are disposed along with it.
There's little harm in keeping them around, and doing so will save your program the execution time of repeated init/dispose cycles.
We're working on a v2 where we'll allow the user to pass in their own copy of HttpClient, so you could keep that around and shut down our client more frequently.
Our existing system uses App Services with API controllers.
This is not a good setup because our scaling support is poor; it's basically all or nothing
I am looking at changing over to use Azure Functions
So effectively each method in a controller would become a new function
Let's say that we have a taxi booking system
So we have the following
Taxis
GetTaxis
GetTaxiDrivers
Drivers
GetDrivers
GetDriversAvailableNow
In the App Service approach we would simply have a TaxiController and DriverController with the methods as routes
How can I achieve the same thing with Azure Functions?
Ideally, I would have 2 function apps, Taxis and Drivers, with functions inside each
The problem with that approach is that 2 function apps means 2 config settings, and if that is expanded throughout the system it's far too big a change to make right now
Some of our routes are already quite long, so I can't really add the "controller" name to my function name because I would exceed the 32-character limit
Has anyone had similar issues migrating from App Services to Azure Functions?
Paul
The problem with that approach is that 2 function apps means 2 config settings, and if that is expanded throughout the system it's far too big a change to make right now
This is why application settings should be part of the release process. You should compile once and deploy as many times as you want, to different environments, using the same binaries from the compile step. If you're not there yet, I strongly recommend you start by automating the CI/CD pipeline.
Now to answer your question: the proper way (IMHO) is to decouple taxis and drivers. When a taxi is requested, your controller should add a message to a queue, and an Azure Function listening to that queue gets triggered automatically to dequeue and process whatever needs processing (a sketch follows the list below).
Advantages:
Your controller's response time will get faster, as it hands the processing off to another process
The more messages in the queue, the more instances of the function are spun up to consume them, so it scales only when needed.
HTTP requests (from one controller to another) are not reliable unless you properly implement a circuit breaker and a retry policy. With the proposed architecture, if something goes wrong, the message either remains in the queue or, if the Azure Function fails to complete it, returns to the queue.
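For illustration, a minimal sketch of the queue-listening function (the queue name, connection setting, and class names are illustrative, not from the original post):

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessTaxiRequest
{
    // Fires once per message the controller enqueues; a failed message
    // returns to the queue and is retried automatically
    [FunctionName("ProcessTaxiRequest")]
    public static void Run(
        [QueueTrigger("taxi-requests", Connection = "AzureWebJobsStorage")] string message,
        ILogger log)
    {
        log.LogInformation($"Processing taxi request: {message}");
        // process the booking here
    }
}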
I am attempting to connect my application to multiple IRC channels to read incoming chat messages and send them to my users. New channels may be added or existing channels may be removed at any time during the day, and the application must pick up on this in near real-time. I am currently using Microsoft Azure for my infrastructure and am using App Services for client-facing compute and Azure Functions on an App Service plan for background tasks (not the Consumption billing model).
My current implementation is in C#/.NET Core 3.1 and uses a TcpClient over an SslStream to watch each channel. I then use a StreamReader and await reader.ReadLineAsync() to watch for new messages. The problem I am running into is that neither App Services nor Azure Functions seems to be an appropriate place to host a watcher like this.
At first, I tried hosting it in the Azure Function app, as this clearly seems like a task for a background worker; however, Azure Functions inherently want to be triggered by a specific event, run some code, and then end. In my implementation, the call to await reader.ReadLineAsync() halts processing until a message is received. In other words, the call running the watcher needs to run in perpetuity, which goes against the grain of an Azure Function. In my attempt, the Azure Function service eventually crashes, the host unloads, all functions on the service cease, and then restart a few minutes later when the host reloads. I am unable to find any way to tell what is causing the crash. This is clearly not the solution I want. If I could find an IrcMessageTrigger Azure Function trigger, this would probably be the best option.
Theoretically, I could host the watcher in my App Service; however, when I scale out I would run into a problem due to having multiple servers connecting to each channel at once. New messages would be sent to each server, and my users would receive duplicates. I could probably deal with this, but the solution would likely be hacky, and I feel like the real solution is to architect it better in the first place.
Anyone have an idea? I am open to changing the code or using a different Azure service (assuming it isn't too expensive) but I will be sticking with C# and .NET Core on Azure infrastructure for this project.
Below is a part of my watcher code to provide some context.
while (client.Connected)
{
    // This line will halt execution until a message is received
    var data = await reader.ReadLineAsync();

    if (data == null)
    {
        continue;
    }

    var dataArray = data.Split(' ');

    // Respond to keep-alives; most IRC servers expect the PING token echoed back
    if (dataArray[0] == "PING")
    {
        await writer.WriteLineAsync(data.Replace("PING", "PONG"));
        await writer.FlushAsync();
        continue;
    }

    if (dataArray.Length > 1)
    {
        switch (dataArray[1])
        {
            case "PRIVMSG":
                HandlePrivateMessage(data, dataArray);
                break;
        }
    }
}
Thanks in advance!
Results are preliminary, but it appears that the correct approach is to use Azure WebJobs running continuously to accomplish what I am trying to achieve. I did not consider WebJobs initially because they are older technology than Azure Functions and essentially do the same work at a lower level of abstraction. In this case, however, WebJobs appear to handle a use case that Functions are not intended to support.
To learn more about WebJobs (including continuous WebJobs) and what they are capable of, see the Microsoft documentation.
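For illustration, a continuous WebJob is essentially a console app deployed under App_Data/jobs/continuous that the platform keeps alive; a minimal sketch, where IrcWatcher is a hypothetical wrapper around the watcher code above:

using System.Threading.Tasks;

public static class Program
{
    public static async Task Main()
    {
        // IrcWatcher is a hypothetical wrapper around the TcpClient/SslStream loop above
        var watcher = new IrcWatcher();

        // Runs in perpetuity; deployed as a continuous WebJob, the host
        // restarts the process if it ever exits
        await watcher.RunAsync();
    }
}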
In the old NuGet package, it was good practice to retain the MessagingFactory instance.
Should we retain a single instance of ServiceBusConnection (from the new NuGet package) to be injected into multiple clients?
EDIT 2018-07-25 00:22:00 UTC+8:
added a bit of clarity that ServiceBusConnection is from the new NuGet package.
Azure Service Bus .NET Standard client connections are not managed via QueueClient or any factory, unlike its predecessor. QueueClient is an abstraction on top of MessageSender and MessageReceiver that can take a ServiceBusConnection shared by multiple senders/receivers. You're free to choose whether or not to share the same connection object.
using System.Text;
using Microsoft.Azure.ServiceBus;

var connection = new ServiceBusConnection(connectionString);

// Queue clients share the same connection
var queueClient1 = new QueueClient(connection, "queue1", ReceiveMode.PeekLock, RetryPolicy.Default);
var queueClient2 = new QueueClient(connection, "queue2", ReceiveMode.PeekLock, RetryPolicy.Default);

var message1 = new Message(Encoding.UTF8.GetBytes("Message1"));
var message2 = new Message(Encoding.UTF8.GetBytes("Message2"));

await queueClient1.SendAsync(message1);
await queueClient2.SendAsync(message2);
Based on the namespace tier you're using, you'll have to benchmark and see what works better for you. My findings showed that with the Standard tier, having multiple connections helps throughput, while with the Premium tier it doesn't. This post hints at the reasons why.
Not necessarily a singleton for all.
The right level of granularity is the MessagingFactory. You may connect to a number of namespaces in your application; each will require its own MessagingFactory and connection.
Alternatively, you may have code that is not very async in its nature, and in this instance multiple connections (factories) make sense. For example, you have two processes in your code: one that does a lot of work in a loop to send messages and another that receives. In this case two factories may be useful, or you may refactor to make your code more async.
From the documentation:
It is recommended that you do not close messaging factories or queue, topic, and subscription clients after you send a message, and then re-create them when you send the next message. Closing a messaging factory deletes the connection to the Service Bus service.
In short, you should be re-using your messaging factories, which then maintain a connection to the Service Bus. Depending on how your code is written, you may want to have multiple factories.
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-performance-improvements#reusing-factories-and-clients
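A minimal sketch of that reuse with the older WindowsAzure.ServiceBus client (the queue name is illustrative):

using Microsoft.ServiceBus.Messaging;

// Create the factory once and cache it for the application's lifetime
var factory = MessagingFactory.CreateFromConnectionString("connectionString");

// Senders created from the same factory share its connection
var sender = factory.CreateMessageSender("myqueue");
await sender.SendAsync(new BrokeredMessage("payload1"));
await sender.SendAsync(new BrokeredMessage("payload2"));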
A really common pattern that I need in multi-instance web applications is invalidating MemoryCaches across all instances and waiting for confirmation that this has been done. (Because a user might otherwise, after a refresh, suddenly see old data on another instance.)
We can make this with a combination of:
Azure Service Bus,
sending a message to a topic,
other instances send a message back with ReplyTo to the original instance,
have a wait loop waiting for the messages back,
be aware of how many other instances there are in the first place.
probably some timeout, because what happens if an instance crashes in between?
I think working out all these little edge cases might be a lot of work - so before we reinvent the wheel - is there already a common pattern or library for this?
(Of course one solution would be using a shared cache like Redis, but in some situations a MemoryCache is a lot faster.)
Have a look at Azure Durable Functions, e.g. the Fan-Out/Fan-In scenario. They use Azure Storage queues underneath, but provide higher-level abstractions.
Note that Durable Functions are still in early preview (as of August 2017), so not suitable for production use yet.
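A rough sketch of the fan-out/fan-in shape (the activity names are hypothetical, and the API has evolved since that preview):

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class InvalidateAllCaches
{
    [FunctionName("InvalidateAllCaches")]
    public static async Task Run([OrchestrationTrigger] DurableOrchestrationContext context)
    {
        // Fan-out: ask every instance to invalidate its MemoryCache
        string[] instances = await context.CallActivityAsync<string[]>("GetInstances", null);

        var confirmations = new List<Task>();
        foreach (var instance in instances)
        {
            confirmations.Add(context.CallActivityAsync("InvalidateInstanceCache", instance));
        }

        // Fan-in: completes only once every instance has confirmed
        await Task.WhenAll(confirmations);
    }
}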
I think working out all these little edge cases might be a lot of work - so before we reinvent the wheel - is there already a common pattern or library for this?
Indeed. This sounds like a candidate for a middleware framework such as NServiceBus or MassTransit.
Azure Service Bus
Both NServiceBus and MassTransit support Azure Service Bus as the transport.
sending a message to a topic
Both NServiceBus and MassTransit can Publish messages (events) to topics.
other instances send a message back with ReplyTo to the original instance
Both NServiceBus and MassTransit can send messages to a specific destination. NServiceBus can also Reply to the originator of an incoming message using a request/reply pattern.
have a wait loop waiting for the messages back
Both NServiceBus and MassTransit support Sagas, also known as Process Coordinator pattern.
be aware of how many other instances there are in the first place.
Not sure about this requirement. When you scale out, you're running with competing consumers and shouldn't care about the number of instances of an endpoint.
probably some timeout, because what happens if an instance crashes in between?
If you refer to retries and recovery, then both NServiceBus and MassTransit support retries.
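As a rough sketch of the reply piece with NServiceBus (the message types and handler name are illustrative, not from the original post):

using System.Threading.Tasks;
using NServiceBus;

public class InvalidateCache : IMessage { }
public class CacheInvalidated : IMessage { }

public class InvalidateCacheHandler : IHandleMessages<InvalidateCache>
{
    public Task Handle(InvalidateCache message, IMessageHandlerContext context)
    {
        // clear the local MemoryCache here, then confirm to the originator
        return context.Reply(new CacheInvalidated());
    }
}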
You can use the Azure Redis Cache pub/sub model to do this.
1) Subscribe to the Redis multiplexer
connectionMultiplexer.GetSubscriber().Subscribe(
    "SubscribeChannelName",
    (channel, message) =>
    {
        // invalidate the local cache here, then publish the confirmation:
        connectionMultiplexer.GetSubscriber()
            .PublishAsync("PublishChannelName", "Cache invalidated for instance")
            .Wait();
    });
2) Publish the cache invalidation and subscribe for confirmations from instances
var connection = ConnectionMultiplexer.Connect("redis connection string");
var redisSubscriber = connection.GetSubscriber();
redisSubscriber.Subscribe(
    "PublishChannelName",
    (channel, message) =>
    {
        // verify here that all instances have confirmed the cache invalidation
    });
redisSubscriber.PublishAsync("SubscribeChannelName", "invalidate cache").Wait();
I recently created an error manager to take logged errors from clients on our network and put them into an MSMQ for processing. I have a separate Windows Service running on the server to pick items off the queue and push them into a database.
When I wrote and tested it, everything worked great; however, I neglected to consider that at deploy time, having 100 clients all sending to a public queue might not be performant in the best case, and in the worst case there could be all kinds of collisions, it seems to me.
My thought right now is to front the MSMQ with a WCF service and make everyone go through that. The logic being that at that point I could employ some locking, etc. If I went with a service, I think I could employ a private queue instead of a public one, which would be tons faster as well.
What I'm not sure about is: am I overthinking it? MSMQ is pretty robust, and the methods are, I think, thread-safe. Should I just leave it alone and see what happens? If I do put in the service, how much management would I need to have in place?
I recently created an error manager to take logged errors from clients on our network and put them into an MSMQ for processing
I assume you're using System.Messaging for this? If so there is nothing at all wrong with your approach.
having 100 clients all sending to a public queue might not be performant
MSMQ was designed from the bottom up to handle high load. Depending on the size of the individual messages and the storage threshold of the machine, a queue can hold tens of thousands of messages without any noticeable performance impact.
Because a "send" in MSMQ involves the queue manager on each machine writing messages locally before transmission (in a store and forward messaging pattern), there is almost no chance of "collisions" or any other forms of contention happening; if the sender is unable to transmit the message it simply "sends" it to a temporary local queue and then the actual transmission happens in the background and is mediated by the fault tolerant and very reliable msmq protocol.
My thought right now is to front the MSMQ with a WCF service and make everyone go through that
This would be a valid choice if you were starting from nothing. As another poster has stated, WCF does hide some of the MSMQ voodoo from you by removing the need to use System.Messaging. However, you've already written the code, so I see little benefit in exposing a netMsmqBinding endpoint.
If I went with a service, I think I could employ a private queue instead of a public one
As far as I understand it from your description, there's nothing to stop you using a private queue in your current scenario. In fact, I'd recommend always using private queues, as they're much simpler.
If I do put in the service, how much management would I need to have in place?
You will have more management overhead with a WCF service. Because you're wrapping each end of a send-receive in the WCF stack, there is more code to spin up and therefore potentially fail. WCF stack exceptions are famously difficult to troubleshoot without full service logging enabled.
EDIT - in response to comments
I think for a private queue you have to actually be writing FROM the machine the queue sits on, which would not work in a networked environment
Untrue. MSMQ supports transactional reads from and writes to any private queue, regardless of whether the queue is local or remote.
This is because any time a message is sent from one machine to another in MSMQ, regardless of the queue address, the following happens:
1. The queue manager on the sending machine writes the message to a temporary local "outbound" queue.
2. The queue manager on the sending machine contacts the queue manager on the receiving machine and transmits the message.
3. The queue manager on the receiving machine places the message into the destination queue.
If you are using transactions, the above steps will comprise three distinct transactions.
Something to remember: the safest paradigm in exchanging messages between queues on different machines is send remote, read local.
So this means when you send a message, you're instructing MSMQ to send to a remote queue address. However, when someone sends something to you, they must do the same. So you end up reading only from local queues, and sending only to remote queues.
This way you get the most reliable messaging setup, because when reading, a local queue will always be available.
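For illustration, a minimal System.Messaging sketch of that paradigm (the queue paths are illustrative):

using System.Messaging;

// Send to a remote private queue using a direct format name (no AD lookup needed)
using (var remote = new MessageQueue(@"FormatName:DIRECT=OS:otherserver\private$\errors"))
{
    remote.Send("logged error", MessageQueueTransactionType.Single);
}

// Read from a local private queue, which is always available
using (var local = new MessageQueue(@".\private$\errors"))
{
    Message message = local.Receive(); // blocks until a message arrives
}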
Try it! I've been using MSMQ for cross-machine communication for nearly 10 years and I've never used a public queue. I don't even know what they're for!
I would expose a WCF "IsOneWay" method.
And then host your WCF in IIS.
The IsOneWay method will wire up to MSMQ.
This way you have the robustness of IIS hosting. You can expose any endpoint you want.
But eventually the request makes it to MSMQ.
One of the reasons is the ease of using MSMQ with WCF. Having written and used MSMQ "pre-WCF", I found the code (pulling messages off the queue and error handling) to be difficult and problematic. That alone would push me to WCF hosting.
And as you mention, the security around a local queue is much easier to deal with.
Bottom line: let WCF handle the MSMQ voodoo for you.
Simple example below.
using System.ServiceModel;

[ServiceContract]
public interface IMyController
{
    [OperationContract(IsOneWay = true)]
    void SubmitRequest(MyObject obj);
}
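And a rough sketch of a client-side call over MSMQ (the binding settings and queue address are illustrative):

using System.ServiceModel;

// Queued, fire-and-forget call: returns as soon as the message is written to the queue
var binding = new NetMsmqBinding(NetMsmqSecurityMode.None);
var address = new EndpointAddress("net.msmq://localhost/private/MyQueue");
var factory = new ChannelFactory<IMyController>(binding, address);

IMyController proxy = factory.CreateChannel();
proxy.SubmitRequest(new MyObject());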
http://msdn.microsoft.com/en-us/library/ms733035%28v=vs.110%29.aspx
http://msdn.microsoft.com/en-us/library/system.servicemodel.operationcontractattribute.isoneway%28v=vs.110%29.aspx
What happens in WCF to methods with IsOneWay=true at application termination
http://blogs.msdn.com/b/tomholl/archive/2008/07/12/msmq-wcf-and-iis-getting-them-to-play-nice-part-1.aspx