Maximum number of concurrent threads inside Windows Azure Worker Role - multithreading

We are developing a server application that needs to hold open a large number of concurrent TCP/IP connections, each waiting for small notifications.
We are planning to use Windows Azure Cloud Services so we can scale the server easily, but we have one question.
What is the maximum number of concurrent threads (or TCP/IP connections waiting for messages) that a single Windows Azure Worker Role instance can support?

Windows Azure instances inside Worker Roles are regular Windows Server VMs that are managed by the Azure Fabric Controller.
As such, there is no Azure-specific limitation on the number of threads or connections that each server can logically support.
However, be advised that servers within Azure come in different sizes (power) and will physically be able to handle different numbers of running threads or open connections.
The theoretical maximum also depends on the threads/connections themselves (how many resources each one consumes is key).
Running a load test on a deployed solution will help you find the maximum number of threads/connections you can open while still performing adequately.
Furthermore, since Windows Azure supports scaling out, you can use something like AzureWatch to monitor performance counters for the number of running threads or TCP/IP connections and automatically add or remove servers from your deployment.
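As an illustration of why the answer is workload-dependent: if each connection is awaited asynchronously instead of being parked on a dedicated thread, a single instance can hold far more connections open. Below is a minimal sketch in C# (not Azure-specific; the port number, buffer size, and console logging are illustrative assumptions):

    // Accept many TCP clients without dedicating a thread per connection.
    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;
    using System.Threading.Tasks;

    class NotificationListener
    {
        static async Task Main()
        {
            var listener = new TcpListener(IPAddress.Any, 9000); // port is illustrative
            listener.Start();
            while (true)
            {
                TcpClient client = await listener.AcceptTcpClientAsync();
                _ = HandleClientAsync(client); // fire-and-forget; no dedicated thread
            }
        }

        static async Task HandleClientAsync(TcpClient client)
        {
            using (client)
            {
                NetworkStream stream = client.GetStream();
                var buffer = new byte[1024];
                int read;
                // The thread is released back to the pool while awaiting data.
                while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                {
                    Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, read));
                }
            }
        }
    }

With this pattern the practical ceiling becomes sockets and per-connection memory rather than the thread count, which is why load testing on the actual instance size is the only reliable answer.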

Related

Azure function apps and web sockets

I can see in multiple places that web sockets are not supported in Function Apps. I just want to create a web socket for a few seconds and then close it down again, so I have no need for a complex socket framework. I was wondering why this setting is present if it is not supported? Has Microsoft started supporting this feature?
Azure Functions are generally hosted in 2 ways:
Consumption Plan (Serverless)
App Service Plan (Dedicated)
Consumption Plan (Serverless)
In this plan, the underlying machine is de-provisioned when the app is idle, so you may lose your active WebSocket connections whenever that de-provisioning happens.
Also, below is the statement from the Microsoft Azure Function team:
There are some challenges here because WebSocket is really a stateful protocol (you have a long lived connection between a given client and a given server) while Azure Functions is designed to be stateless. For example, we will often deprovision a Function App from one VM and start it on a different VM (perhaps in a different scale unit) based on capacity constraints. This is safe to do for us today because of our stateless design - but if we did it when there were WebSockets connections to that VM, we'd be terminating those connections. Source: GitHub
App Service Plan (Dedicated)
If you are using a dedicated App Service Plan, then WebSockets will work, because there is a machine in the background that is not serverless (it is always available).
Just make sure you have enabled Web Sockets in the configuration (as you have done already).
Check the WebSocket connection limits for App Service plans here:
App Service limits
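For the short-lived use case in the question (open a socket for a few seconds, then close it), here is a minimal sketch using .NET's built-in ClientWebSocket; the wss:// endpoint and the payload are placeholder assumptions:

    using System;
    using System.Net.WebSockets;
    using System.Text;
    using System.Threading;
    using System.Threading.Tasks;

    class ShortLivedSocket
    {
        static async Task Main()
        {
            using (var socket = new ClientWebSocket())
            {
                // Placeholder endpoint; requires Web Sockets enabled on the app.
                await socket.ConnectAsync(new Uri("wss://example.com/socket"), CancellationToken.None);

                var message = Encoding.UTF8.GetBytes("hello");
                await socket.SendAsync(new ArraySegment<byte>(message),
                    WebSocketMessageType.Text, true, CancellationToken.None);

                var buffer = new byte[1024];
                WebSocketReceiveResult result =
                    await socket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
                Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, result.Count));

                // Close cleanly once done, freeing the connection promptly.
                await socket.CloseAsync(WebSocketCloseStatus.NormalClosure, "done", CancellationToken.None);
            }
        }
    }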

programmatically monitoring of outbound azure app service connection

We have an Azure App Service backed by an S3 plan hosting plenty of WebJobs (dotnet/nodejs) performing outbound connections (Azure Service Bus, external REST APIs, telemetry, etc.).
At peak usage we start to encounter timeout issues from the above services, because we exceed the maximum number of outbound connections for an S3 instance: 8,064 (SNAT port exhaustion).
We already have plenty of pooling and caching mechanisms, but they are complex to tune because they are spread over more than 15 different WebJobs.
Is there a way to get fine-grained tracking of SNAT exhaustion per process? (The current report provides only the machine name, and TCP connection reports only track remote addresses.)
Is there a way to retrieve this metric live programmatically (C#/nodejs) in order to attempt dynamic internal pool adjustment?
Is there a better approach? Peaks occur only seldom in our enterprise service bus scenario, and the timeout errors can be retried and recovered afterwards. Buying a bigger App Service plan instance or isolating the WebJobs isn't reasonable so far :)
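For the "retrieve this metric live programmatically" part, the closest thing available in-process from C# is the machine-wide TCP table via System.Net.NetworkInformation. A minimal sketch follows; note that it cannot attribute connections per process (exactly the gap described above), and how much of the SNAT picture the App Service sandbox exposes this way is an assumption worth verifying:

    using System;
    using System.Linq;
    using System.Net.NetworkInformation;

    class ConnectionProbe
    {
        static void Main()
        {
            // Machine-wide (not per-process) snapshot of active TCP connections.
            var connections = IPGlobalProperties.GetIPGlobalProperties()
                .GetActiveTcpConnections();

            Console.WriteLine($"Total active TCP connections: {connections.Length}");

            // Group by remote address to see where connections concentrate.
            var byRemote = connections
                .GroupBy(c => c.RemoteEndPoint.Address)
                .OrderByDescending(g => g.Count());

            foreach (var group in byRemote.Take(10))
                Console.WriteLine($"{group.Key}: {group.Count()}");
        }
    }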

Azure App Service TCP/IP Port Exhaustion

I recently got a "recommendation" from Azure regarding reaching the upper limit for TCP/IP ports in my App Service.
TCP/IP ports near exhaustion: Your App Service plan containing app ****** is configured to use medium instances. The apps hosted in that App Service plan are using more than 90% of the 4096 available TCP/IP ports available per medium instance. You can upgrade the instance size to increase the outbound connection limit, or configure connection pooling for more efficient use.
Is there a difference in limits between App Service plans (scale up)? Can I configure my App Service to use more ports? Or is there another solution?
An obvious solution would be scaling out, but since CPU and memory usage is low I would rather not use this option unless necessary.
As background, the service is an API built with ASP.NET Core MVC using .NET 4.6.
Yes, there is a difference in limits between App Service plans (scale up).
The maximum connection limits are as follows:
1,920 connections per B1/S1/P1 instance
3,968 connections per B2/S2/P2 instance
8,064 connections per B3/S3/P3 instance
Regarding the other services (Cassandra, MSSQL, RabbitMQ, etc.) whose connection counts you are unsure of:
calls to these services also create TCP connections and need to be counted as well.
Most Azure services have their own diagnostics and dashboards that you can correlate while debugging. In my case, the MSSQL DTU allocation was not sufficient to handle the number of concurrent requests, and because of that the connections were piling up.
Source:
https://blogs.technet.microsoft.com/latam/2015/06/01/how-to-deal-with-the-limits-of-azure-sql-database-maximum-logins/
https://blogs.msdn.microsoft.com/appserviceteam/2018/03/01/deep-dive-into-tcp-connections-in-app-service-diagnostics/
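On the "configure connection pooling" recommendation from the advisory: for SQL, ADO.NET pools connections automatically as long as every caller uses an identical connection string and disposes the connection promptly, which returns it to the pool instead of holding a port. A minimal sketch (the connection string, Max Pool Size value, and Orders table are illustrative assumptions):

    using System.Data.SqlClient;
    using System.Threading.Tasks;

    static class Db
    {
        // Identical connection strings share one pool; the pool size is illustrative.
        private const string ConnectionString =
            "Server=tcp:example.database.windows.net;Database=mydb;" +
            "User ID=user;Password=secret;Max Pool Size=100;";

        public static async Task<int> CountOrdersAsync()
        {
            // Dispose returns the connection to the pool rather than closing the socket.
            using (var conn = new SqlConnection(ConnectionString))
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
            {
                await conn.OpenAsync();
                return (int)await cmd.ExecuteScalarAsync();
            }
        }
    }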
Usually in .NET we instantiate a client and dispose of it after making a call, but there is a gotcha with the HttpClient class: we should reuse the same instance throughout the lifecycle of the application.
Outbound ports are limited within Azure's computing environment, so you will hit exhaustion sooner than on a standard server.
Read through this below:
Reusing HttpClient
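The gist of that advice, as a minimal sketch (the ApiClient class name and the url parameter are illustrative, not a prescribed pattern):

    using System.Net.Http;
    using System.Threading.Tasks;

    static class ApiClient
    {
        // One shared instance for the app's lifetime; HttpClient is thread-safe
        // for concurrent requests, and reuse avoids leaving sockets in TIME_WAIT.
        private static readonly HttpClient Client = new HttpClient();

        public static async Task<string> GetStringAsync(string url)
        {
            HttpResponseMessage response = await Client.GetAsync(url);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }

Creating and disposing a new HttpClient per request does not release the underlying socket immediately, so under load the ports pile up; a single reused instance keeps connections pooled.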

Are Web Apps inside an Azure App Service Plan implemented as virtual web servers in IIS? Are web gardens used?

If Azure App Service plans are virtual machines dedicated to the Web, API, Logic, and Mobile apps defined within them, does that mean that a web app in an App Service plan is an instance of a virtual web server in IIS on that virtual machine?
Assuming this is the case and that each virtual web site gets its own application pool, is there an Azure scaling strategy or scenario where more than one worker process will run in that app pool, creating a web garden? My understanding of web app scale-out is that it results in additional VMs being allocated, not additional worker processes.
The scaling strategy will depend upon the pricing tier you have opted for.
Basically, each Service Plan contains a collection of Web, API, Logic, and Mobile apps. These form a web garden within the Service Plan server you choose.
If you initially choose a single B1 Basic Service Plan, you get a single virtual machine with all of your applications running on it. As the load on that server increases, you can scale it up to larger servers, but everything will still be running on a single server.
If you then choose to create a second instance (and a 3rd, 4th, 5th...), that second server will be a replica of the first, with the load balanced between them.
While I've not seen documentation for this, I would imagine that each Web, API, etc. app runs under its own application pool / worker process, and that scale-out simply duplicates instances.
I'm not sure what a virtual server is, but each app runs in its own dedicated application pool and w3wp.exe process. There is only a single w3wp.exe process per application pool, so no web gardens.
Is there a specific reason you think you need web gardens to scale your apps? In most cases they are the wrong way to scale, as adding more processes can cause unnecessary overhead (amongst other problems; you can find some useful resources on the web). You almost always want to prefer threads over processes for improving concurrency. If you're running out of physical resources (CPU, memory, etc.), the correct way to scale is to add additional VMs.

Windows Azure cloud service resource usage with RDP enabled

How does enabling RDP on Windows Azure cloud service instances affect them? Do they consume significantly fewer resources with RDP disabled? Are they running in Server Core mode?
Merely enabling RDP does little to the performance, resource usage, or bandwidth of a VM.
However, when you actually connect to a server over RDP, a certain amount of bandwidth, RAM, and CPU cycles is drawn away from the server to support the desktop experience for the RDP'ed user. It is not super significant, but it does exist. It is hard to predict how much, as it depends on the activity of the RDP'ed user.
HTH