Azure App Service TCP/IP Port Exhaustion - azure

I recently got a "recommendation" from Azure regarding reaching the upper limit for TCP/IP ports in my App Service.
TCP/IP ports near exhaustion Your app service plan containing app
****** is configured to use medium instances. The apps hosted in
that App Service plan are using more than 90% of the 4096 available
TCP/IP ports available per medium instance. You can upgrade the
instance size to increase the outbound connection limit or configure
connection pooling for more efficient use.
Is there a difference in limits between App Service Plans (scale up)? Can I configure my App Service to use more ports? Or is there any other solution for this?
An obvious solution would be scaling out, but since CPU and memory usage is low I would rather not use this option if not necessary.
As background, the service is an API built with ASP.NET Core MVC running on .NET Framework 4.6.

Yes, there is a difference in limits between App Service Plans (scale up):
The maximum connection limits are the following:
1,920 connections per B1/S1/P1 instance
3,968 connections per B2/S2/P2 instance
8,064 connections per B3/S3/P3 instance
Regarding other services (Cassandra, MSSQL, RabbitMQ, etc.): I am not sure of their exact connection counts either.
Calls to these services also result in TCP connection creation and need to be counted as well.
Most Azure services have their own diagnostics and dashboards that you can correlate while debugging; in my case the MSSQL DTU was not sufficient to handle the number of concurrent requests, and because of that the connections were piling up.
Source:
https://blogs.technet.microsoft.com/latam/2015/06/01/how-to-deal-with-the-limits-of-azure-sql-database-maximum-logins/
https://blogs.msdn.microsoft.com/appserviceteam/2018/03/01/deep-dive-into-tcp-connections-in-app-service-diagnostics/
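To illustrate the "configure connection pooling" part of the recommendation, here is a minimal ADO.NET sketch, assuming SQL Server and placeholder details (the server, database and Orders table are made up); short-lived SqlConnection objects opened this way are served from the built-in pool, so the same underlying TCP connections get reused instead of a new socket being opened per call:

```csharp
using System.Data.SqlClient;

public static class OrderRepository
{
    // Placeholder connection string; pooling is on by default, and
    // Max Pool Size caps how many underlying TCP connections the pool opens.
    private const string ConnectionString =
        "Server=tcp:myserver.database.windows.net;Database=mydb;" +
        "User ID=user;Password=...;Max Pool Size=100;Min Pool Size=5;";

    public static int GetOrderCount()
    {
        // Disposing returns the connection to the pool; the underlying
        // TCP connection stays open and is reused by the next caller.
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}
```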

Usually in .NET we instantiate an object, make a call, and dispose of it, but there is a gotcha with the HttpClient class: we should reuse the same instance throughout the lifetime of the application.
Outbound ports are limited in Azure's compute environment, so you will hit exhaustion sooner than on a standard server.
Read through this below:
Reusing HttpClient
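A minimal sketch of the reuse pattern (the class name and URL are illustrative): create one shared HttpClient instead of wrapping each request in a using block, so sockets are not left behind in TIME_WAIT after every call.

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class ApiClient
{
    // One shared instance for the lifetime of the process;
    // HttpClient is thread-safe for concurrent requests.
    private static readonly HttpClient Client = new HttpClient();

    public static Task<string> GetStatusAsync()
    {
        // Each call reuses pooled connections instead of opening new sockets.
        return Client.GetStringAsync("https://example.com/api/status");
    }
}
```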

Related

Azure function apps and web sockets

I can see in multiple places that web sockets are not supported in Function Apps. I just want to create a web socket for a few seconds and close it down again, so I do not need a complex socket framework. I was wondering why this setting is present if it is not supported. Has Microsoft started supporting this feature?
Azure Functions are generally hosted in 2 ways:
Consumption Plan (Serverless)
App Service Plan (Dedicated)
Consumption Plan (Serverless)
In this plan, the underlying machine is de-provisioned when it is idle, so you may lose your active WebSocket connections when that happens.
Also, below is a statement from the Microsoft Azure Functions team:
There are some challenges here because WebSocket is really a stateful protocol (you have a long lived connection between a given client and a given server) while Azure Functions is designed to be stateless. For example, we will often deprovision a Function App from one VM and start it on a different VM (perhaps in a different scale unit) based on capacity constraints. This is safe to do for us today because of our stateless design - but if we did it when there were WebSockets connections to that VM, we'd be terminating those connections. Source: GitHub
App Service Plan (Dedicated)
If you are using a dedicated App Service Plan, then Web Sockets will work for sure, because there is a machine in the background which is not serverless (always available).
Just make sure you have enabled Web Sockets in the configuration (as you have done already).
Check the WebSocket connection limits for App Service Plans here:
App Service limits
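For the "open a socket for a few seconds and close it down again" scenario from the question, a minimal client-side sketch with System.Net.WebSockets could look like the following (the endpoint URL is a placeholder; on a dedicated plan the server side can keep the connection open):

```csharp
using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

public static class ShortLivedSocket
{
    public static async Task RunAsync()
    {
        using (var socket = new ClientWebSocket())
        {
            // Placeholder endpoint; replace with your app's WebSocket URL.
            await socket.ConnectAsync(new Uri("wss://myapp.azurewebsites.net/ws"),
                                      CancellationToken.None);

            // Wait for a single small notification, then close cleanly.
            var buffer = new ArraySegment<byte>(new byte[4096]);
            var result = await socket.ReceiveAsync(buffer, CancellationToken.None);
            Console.WriteLine(Encoding.UTF8.GetString(buffer.Array, 0, result.Count));

            await socket.CloseAsync(WebSocketCloseStatus.NormalClosure,
                                    "done", CancellationToken.None);
        }
    }
}
```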

Monitoring of Azure Hybrid Connections and OnPremise Data Gateway

I'm using the Hybrid Connection Manager and also the On Premise Data Gateway for several projects hosted in the Azure cloud.
There are more and more use cases for these two components, and I need to set up clean monitoring to detect connection troubles (for example, when there is a network issue or a reboot of the servers hosting the gateways).
For the HCM, there are Relay metrics I can rely on, but I have seen that some of those counters are not reliable. I had issues with my connection in the past few days, and when I check the ListenerConnections-ClientError or ListenerConnections-ServerError counters, they always equal 0, which seems very strange.
Regarding the On-Premises Data Gateway, since it also relies on Service Bus Relay, I assume I should use the same metrics?

Programmatically monitoring outbound Azure App Service connections

We have an Azure App Service backed by an S3 plan hosting plenty of WebJobs (dotnet/nodejs) performing outbound connections (Azure Service Bus, external REST APIs, telemetry, etc.).
At peak usage, we start to encounter timeout issues from the above services because we exceed the maximum number of outbound connections for an S3 instance: 8,064 (SNAT port exhaustion).
We already have plenty of pooling and caching mechanisms, but they are complex to tune because they are spread over more than 15 different WebJobs.
Is there a way to get fine-grained tracking of SNAT exhaustion per process? (The current report provides only the machine name, and the TCP connections report only tracks remote addresses.)
Is there a way to retrieve this metric live programmatically (C# / Node.js) in order to attempt dynamic internal pool adjustment?
Is there a better approach? Peaks occur seldom in an enterprise service bus scenario, and the timed-out operations can be resumed and fixed afterwards; buying a bigger App Service plan instance or isolating the WebJobs isn't reasonable so far :)

I want to load balance my azure website

I have my website (abc.azurewebsites.net) hosted to Azure Web Apps using Visual Studio.
Now, after one month, I am facing problems with traffic management. My CPU is always at 90-95% because the number of requests is too high.
Does anyone know how to add traffic management to this web app without changing the domain abc.azurewebsites.net? Is it hard-coded in my application?
I thought of changing the web app to a Virtual Machine, but now that it's already deployed I am scared of losing the domain.
When you Scale your Web App you add instances of your current pricing tier and Azure deploys your Web App package to each of them.
There's a Load Balancer over all your instances, so, traffic is automatically load balanced between them. You shouldn't need a Virtual Machine for this and you don't need to configure any extra Traffic Manager.
I can vouch that my company is using Azure Web Apps to serve more than 1,000 concurrent users making thousands of requests with just 2-3 instances. It all depends on what your application does and what other resources it accesses, whether or not you have implemented a caching strategy, and what kind of data storage you are using.
High CPU does not always mean high traffic; it's the mix of CPU and HTTP Queue Length that gives you an idea of how well your instances are handling traffic.
Your solution might involve implementing a combination of things:
Performance tweak your application
Add caching strategies (a distributed cache like Azure Redis is a good option; a minimal sketch follows this list)
Increase Web App instances by configuring Auto-Scaling based on HTTP Queue Length / CPU.
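As mentioned in the caching item above, here is a minimal sketch of wiring up a distributed Redis cache in ASP.NET Core, assuming the Microsoft.Extensions.Caching.StackExchangeRedis package; the connection string, cache key, and CatalogService are placeholders, not part of the original answer:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Placeholder Azure Cache for Redis connection string.
        services.AddStackExchangeRedisCache(options =>
            options.Configuration =
                "mycache.redis.cache.windows.net:6380,password=...,ssl=True,abortConnect=False");
    }
}

public class CatalogService
{
    private readonly IDistributedCache _cache;

    public CatalogService(IDistributedCache cache) => _cache = cache;

    public async Task<string> GetCatalogJsonAsync()
    {
        // Serve repeated requests from Redis to keep CPU and backend load down.
        var cached = await _cache.GetStringAsync("catalog");
        if (cached != null) return cached;

        var json = await LoadCatalogFromDatabaseAsync(); // hypothetical expensive call
        await _cache.SetStringAsync("catalog", json, new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
        });
        return json;
    }

    private static Task<string> LoadCatalogFromDatabaseAsync() =>
        Task.FromResult("{ }"); // stand-in for the real data access code
}
```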
You should not have to change your domain to autoscale a Web App, but you may have to change your pricing tier. Scaling to multiple instances is available at the Basic pricing tier, and autoscaling starts at the Standard tier. Custom domains are allowed at these levels, but you don't have to change your domain if you don't want to.
Here is the overview of scaling a web app https://azure.microsoft.com/en-us/documentation/articles/web-sites-scale/
Adding a Virtual Machine (VM) is very costly compared to adding an instance. On top of that, redundancy (recommended) for the VMs, adding NICs, etc. will blow up the cost, and maintenance is another challenge. PaaS (Web Apps, etc.) is usually a better option than IaaS.
Serverless offerings like Azure Functions can also be considered. They support HTTP triggers and scale really well.
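A minimal sketch of such an HTTP-triggered function (in-process C# model; the function name and response are illustrative):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class PingFunction
{
    // Illustrative HTTP-triggered function; on the Consumption plan the
    // platform adds and removes instances automatically based on load.
    [FunctionName("Ping")]
    public static Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
    {
        return Task.FromResult<IActionResult>(new OkObjectResult("pong"));
    }
}
```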

Maximum number of concurrent threads inside Windows Azure Worker Role

We are developing a server application that needs to open a lot of TCP/IP connections concurrently, each awaiting small notifications.
We are planning to use Windows Azure Cloud Services to scale the server easily, but we have one doubt.
What is the maximum number of concurrent threads (or TCP/IP connections awaiting messages) that a single Windows Azure Worker Role instance can have?
Windows Azure instances inside Worker Roles are regular Windows Server VMs managed by the Azure fabric controller.
As such, there is no Azure-specific limitation on the number of threads or connections that each server can logically support.
However, be advised that servers within Azure come in different sizes (power) and will physically be able to handle different numbers of running threads or open connections.
The theoretical maximum also depends on the threads/connections themselves (how many resources each one takes is key).
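To illustrate why connection count and thread count are not the same thing, here is a minimal sketch (the host and port are placeholders) that keeps many TCP connections open and awaits notifications asynchronously, so idle connections do not each consume a thread:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Sockets;
using System.Threading.Tasks;

public static class NotificationListener
{
    public static async Task ListenManyAsync(int connectionCount)
    {
        var tasks = new List<Task>();
        for (int i = 0; i < connectionCount; i++)
            tasks.Add(ListenOneAsync("notifications.example.com", 9000)); // placeholder endpoint

        await Task.WhenAll(tasks);
    }

    private static async Task ListenOneAsync(string host, int port)
    {
        using (var client = new TcpClient())
        {
            await client.ConnectAsync(host, port);
            var stream = client.GetStream();
            var buffer = new byte[256];

            // Await small notifications; the thread is released while the
            // connection is idle, so thousands of open connections need
            // only a handful of threads.
            int read;
            while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                Console.WriteLine($"Received {read} bytes");
            }
        }
    }
}
```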
Running a load test on a deployed solution may help you find the maximum number of threads/connections that you can open while still performing adequately.
Furthermore, since Windows Azure supports scaling out, you can use something like AzureWatch to monitor performance counters for the number of running threads or TCP/IP connections and automatically add or remove servers from your deployment.
