Azure Function Apps and WebSockets

I can see in multiple places that WebSockets are not supported in Function Apps. I just want to create a WebSocket for a few seconds and then close it down again, so I have no need for a complex socket framework. I was wondering why this setting is present if it is not supported. Has Microsoft started supporting this feature?

Azure Functions are generally hosted in one of two ways:
Consumption Plan (Serverless)
App Service Plan (Dedicated)
Consumption Plan (Serverless)
In this plan, the underlying machine is de-provisioned when the app is idle, so you may lose your active WebSocket connections whenever that happens.
Also, below is a statement from the Microsoft Azure Functions team:
There are some challenges here because WebSocket is really a stateful protocol (you have a long lived connection between a given client and a given server) while Azure Functions is designed to be stateless. For example, we will often deprovision a Function App from one VM and start it on a different VM (perhaps in a different scale unit) based on capacity constraints. This is safe to do for us today because of our stateless design - but if we did it when there were WebSockets connections to that VM, we'd be terminating those connections. Source: GitHub
App Service Plan (Dedicated)
If you are using a dedicated App Service Plan, then WebSockets will work, because there is a dedicated machine behind it that is always available rather than serverless.
Just make sure you have enabled Web Sockets in the configuration (as you have done already).
Check the WebSocket connection limits for App Service plans here:
App Service limits
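If you are on a dedicated plan and only need a connection for a few seconds, a minimal sketch along these lines, using .NET's System.Net.WebSockets.ClientWebSocket, should be enough (the wss://example.com/socket endpoint is a placeholder):

    using System;
    using System.Net.WebSockets;
    using System.Text;
    using System.Threading;
    using System.Threading.Tasks;

    class ShortLivedSocket
    {
        static async Task RunAsync()
        {
            using (var socket = new ClientWebSocket())
            using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10)))
            {
                // Placeholder endpoint; substitute your own WebSocket server.
                await socket.ConnectAsync(new Uri("wss://example.com/socket"), cts.Token);

                var message = new ArraySegment<byte>(Encoding.UTF8.GetBytes("ping"));
                await socket.SendAsync(message, WebSocketMessageType.Text, true, cts.Token);

                var buffer = new ArraySegment<byte>(new byte[4096]);
                WebSocketReceiveResult result = await socket.ReceiveAsync(buffer, cts.Token);
                Console.WriteLine(Encoding.UTF8.GetString(buffer.Array, 0, result.Count));

                // Close cleanly rather than letting the host tear the connection down.
                await socket.CloseAsync(WebSocketCloseStatus.NormalClosure, "done", cts.Token);
            }
        }
    }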

Related

Azure App Service TCP/IP Port Exhaustion

I recently got a "recommendation" from Azure regarding reaching the upper limit for TCP/IP ports in my App Service.
TCP/IP ports near exhaustion: Your App Service plan containing app ****** is configured to use medium instances. The apps hosted in that App Service plan are using more than 90% of the 4096 TCP/IP ports available per medium instance. You can upgrade the instance size to increase the outbound connection limit, or configure connection pooling for more efficient use.
Is there a difference in limits between App Service Plan tiers (scale up)? Can I configure my App Service to use more ports? Or is there another solution?
An obvious solution would be scaling out, but since CPU and memory usage are low, I would rather avoid that option unless necessary.
As background, the service is an API built with ASP.NET Core MVC on .NET 4.6.
Yes, there is a difference in limits between App Service Plan tiers (scale up). The maximum connection limits are the following:
1,920 connections per B1/S1/P1 instance
3,968 connections per B2/S2/P2 instance
8,064 connections per B3/S3/P3 instance
Regarding other services (Cassandra, MSSQL, RabbitMQ, etc.): calls to these services also create TCP connections and need to be counted as well.
Most Azure services have their own diagnostics and dashboards that you can correlate while debugging. In my case, the MSSQL DTU allocation was not sufficient to handle the number of concurrent requests, so connections were piling up.
Source:
https://blogs.technet.microsoft.com/latam/2015/06/01/how-to-deal-with-the-limits-of-azure-sql-database-maximum-logins/
https://blogs.msdn.microsoft.com/appserviceteam/2018/03/01/deep-dive-into-tcp-connections-in-app-service-diagnostics/
Usually in .NET we instantiate a client, make a call, and dispose of it, but there is a gotcha with the HttpClient class: you should reuse the same instance throughout the lifetime of the application.
Ports are limited within Azure's computing environment, so you will hit that limit sooner than on a standard server.
Read through this below:
Reusing HttpClient
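To make the gotcha concrete, here is a minimal sketch of the pattern: one shared HttpClient for the application's lifetime instead of one per request (the ApiClient name and timeout value are illustrative, not required):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class ApiClient
    {
        // One shared HttpClient for the whole application. Creating and
        // disposing a new instance per request leaves sockets in TIME_WAIT
        // and is a classic cause of outbound port exhaustion.
        private static readonly HttpClient Client = new HttpClient
        {
            Timeout = TimeSpan.FromSeconds(30)
        };

        public static async Task<string> GetStringAsync(string url)
        {
            using (var response = await Client.GetAsync(url))
            {
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();
            }
        }
    }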

SignalR Actors or Stateless Services

I'm looking into migrating an application to Service Fabric running on Azure. It's a realtime chat-style application using SignalR. I'd like to have an instance of a service running, self-hosting a SignalR hub (via OWIN) for each "affinity group" in which users are communicating. This is so I can avoid having to scale out SignalR with a backplane. I'd like to be able to spin these services up and down as groups of users enter and leave my application. I would expect I could host tens of these services per VM with a typical load of hundreds of users per group.
My idea is that I'd have a service locator that clients connect to initially to discover which service (port) is hosting their group. I would also have a service that spun up an instance of the chat service when the first request for that group came in.
How would I architect this in Service Fabric on Azure so that (a) each of the services/actors is accessible with a SignalR client from the internet, and (b) I'm only running as many services as necessary to serve m active groups out of n total groups? The demand for this app is very transient and spiky, so I'm hoping to take advantage of the fact that services are simply processes and can be provisioned in a matter of seconds, versus my current scenario where I have to spin up entire cloud services and wait tens of minutes to handle spikes (at which point it's too late).
You would do a few things:
Have a "service manager service" that intercepts initial join requests and creates new Service Fabric services on the fly if they don't already exist; if they do exist, it resolves the service's current location and returns that address to the client (a rough sketch appears below).
Alternatively, the manager could just pass back the internal service name (if you're OK exposing that information) and the client could do the resolution and then connect. To some degree this depends on how much information you want to expose to the client, and whether you can or want to modify it to "know about" Service Fabric.
The client would then connect to the actual backing service directly.
You would have to come up with some mechanism for the chat services themselves to know when nobody is left, and to either delete themselves or go back through the manager.
You probably would be best off modeling the chat service as a Reliable Service rather than an actor as the Reliable Services stack allows more flexibility around communication protocols/stacks.
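As a rough sketch of the "service manager" idea using FabricClient (the fabric:/ChatApp application name, ChatServiceType service type, and Group-{id} naming scheme are all assumptions for illustration, not anything Service Fabric prescribes):

    using System;
    using System.Fabric;
    using System.Fabric.Description;
    using System.Threading.Tasks;

    public class ChatServiceManager
    {
        private readonly FabricClient _fabric = new FabricClient();

        public async Task<string> GetOrCreateGroupServiceAsync(string groupId)
        {
            var serviceName = new Uri("fabric:/ChatApp/Group-" + groupId);

            try
            {
                // Spin up a dedicated chat service for this group on the fly.
                await _fabric.ServiceManager.CreateServiceAsync(new StatelessServiceDescription
                {
                    ApplicationName = new Uri("fabric:/ChatApp"),
                    ServiceName = serviceName,
                    ServiceTypeName = "ChatServiceType",
                    InstanceCount = 1,
                    PartitionSchemeDescription = new SingletonPartitionSchemeDescription()
                });
            }
            catch (FabricElementAlreadyExistsException)
            {
                // The group's service is already running; fall through to resolution.
            }

            // Resolve the service's current location so the client can connect directly.
            ResolvedServicePartition partition =
                await _fabric.ServiceManager.ResolveServicePartitionAsync(serviceName);
            return partition.GetEndpoint().Address;
        }
    }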

Are Web Apps inside an Azure App Service Plan implemented as virtual web servers in IIS? Are web gardens used?

If Azure App Service plans are virtual machines dedicated to the Web, API, Logic, and Mobile apps defined within them, does that mean that a web app in an app service plan is an instance of a virtual web server in IIS on that virtual machine?
Assuming this is the case and that each virtual web site gets its own application pool, is there an Azure scaling strategy or scenario where more than one worker process runs in that app pool, creating a web garden? My understanding of web app scale-out is that it results in additional VMs being allocated, not additional worker processes.
The scaling strategy will depend upon the pricing tier you have opted for.
Basically, each Service Plan contains a collection of Web, API, Logic, and Mobile apps. These will form a web garden within the Service Plan server you choose.
If you initially choose a single B1 Basic Service Plan, you get a single virtual machine with all of your applications running on it. As the load on that server increases, you can scale it up to larger servers, but everything will still be running on a single server.
If you then choose to create a second instance (and a 3rd, 4th, 5th, ...), that second server will be a replica of the first, with the load balanced between them.
While I've not seen documentation for this, I would imagine that each Web, API, etc. app runs under its own application pool / worker process, and scale-out simply duplicates instances.
I'm not sure what a Virtual Server is, but each app runs in its own dedicated application pool and w3wp.exe process. There is only a single w3wp.exe process per application pool, so no web gardens.
Is there a specific reason you think you need these to scale your apps? In most cases, using web gardens is the wrong way to scale, as adding more processes can cause unnecessary overhead (amongst other problems - you can find some useful resources on the web). You almost always want to prefer threads over processes for improving concurrency. If you're running out of physical resources (CPU, memory, etc), then the correct way to scale is to add additional VMs.

Difference Between Windows Azure Service Bus and Windows Azure Virtual Network

I want to connect to an on-premises database from Azure. Basically, I will be hosting my web application on Azure and will be using a database that is on-premises.
According to www.WindowsAzure.com, both Azure Service Bus and Windows Azure Virtual Network can be used for connecting to an on-premises database. But what is the difference between the two, and in which situations should each be used?
There is a big difference between the two approaches:
Service Bus provides connectivity at the application or messaging level. Here you have two options:
Service Bus Relay: you expose a web service (one that connects to your local database) over the Relay binding. This makes the service publicly reachable in a firewall-friendly way. It is mostly a synchronous approach; see the sketch after this list.
Service Bus messaging: you run a local process that listens for messages or events that your application puts on a queue or a topic/subscription. This is mostly an asynchronous approach.
Virtual Networking: here you set up connectivity at the network level, and you can connect to your database as if it were on the same network as your cloud-based application. The advantage is that your code does not have to change compared to a standard application (except for adding connection retries).
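As a rough illustration of the Relay option, a self-hosted WCF service over NetTcpRelayBinding might look like the sketch below (IDatabaseService/DatabaseService, the mynamespace Service Bus namespace, and the key are all placeholders):

    using System;
    using System.ServiceModel;
    using Microsoft.ServiceBus;

    class RelayHost
    {
        static void Main()
        {
            // DatabaseService / IDatabaseService are a hypothetical WCF service
            // and contract wrapping access to the on-premises database.
            var host = new ServiceHost(typeof(DatabaseService));

            var endpoint = host.AddServiceEndpoint(
                typeof(IDatabaseService),
                new NetTcpRelayBinding(),
                ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", "database"));

            // Authenticate this listener against the Service Bus namespace.
            endpoint.Behaviors.Add(new TransportClientEndpointBehavior
            {
                TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
                    "RootManageSharedAccessKey", "<your-key>")
            });

            host.Open();
            Console.WriteLine("Listening on the relay. Press Enter to exit.");
            Console.ReadLine();
            host.Close();
        }
    }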
Both approaches are quite different, but either can be valid, depending on your architectural preference (web-service oriented, network-level connectivity, or asynchronous processing).
Hope this helps.

Azure cloud service and web sites communication lock down

I have an Azure cloud service (a server) where I host a Redis database. I also have a web site hosted in Azure Web Sites. I want the web site to be able to talk to the Redis DB on port 6379. I know I can configure a public endpoint for that port on my server, but that would open it to the whole Internet. I want it open only to Azure Web Sites (or even better, only to my web site). How can I do this?
Windows Azure Web Sites is in an isolation bubble separate from your Cloud Services and there's no way to bridge that gap. Ideally you'd do this by connecting the web site machine to other Azure services via a Virtual Network, but this FAQ confirms you can't do that right now:
Can I use Windows Azure websites with Virtual Network?
No. We do not support websites with virtual networks.
Opening Redis up over the internet shouldn't even be considered: it doesn't have the kind of out-of-the-box security you'd want when exposing its port publicly, as it is meant to be co-located with your application. Never mind the added network overhead, which will eat into the performance you expect to get by leveraging something like Redis in the first place.
I believe your best bet, given your current configuration, is to add a Web Role that's part of the same Azure Cloud Service and run your web-based application out of that, so it can communicate with the worker role. It only requires a little configuration to get this going (i.e., adding an InternalEndpoint to the Redis Worker Role); see the sketch below. While I realize Web Roles don't offer as frictionless a development model as Web Sites, you have to choose the right tool for the job.
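For example (the role and endpoint names here are assumptions), the InternalEndpoint would be declared in ServiceDefinition.csdef, and the web role could resolve it at runtime like this:

    using System.Net;
    using Microsoft.WindowsAzure.ServiceRuntime;

    // Assumed ServiceDefinition.csdef fragment for the Redis worker role:
    //   <WorkerRole name="RedisWorkerRole" vmsize="Small">
    //     <Endpoints>
    //       <InternalEndpoint name="Redis" protocol="tcp" port="6379" />
    //     </Endpoints>
    //   </WorkerRole>

    public static class RedisLocator
    {
        public static IPEndPoint GetRedisEndpoint()
        {
            // Internal endpoints are reachable only from within the same
            // cloud service deployment, which gives you the lock-down you want.
            RoleInstance instance = RoleEnvironment.Roles["RedisWorkerRole"].Instances[0];
            return instance.InstanceEndpoints["Redis"].IPEndpoint;
        }
    }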
Another option, if you want to set up your Redis on a VM instead of tying it to the Cloud Service directly, is to create a Virtual Network, put the Redis VM on it, configure the Cloud Service to be part of the same affinity group, and add the NetworkConfiguration/VirtualNetworkSite configuration section to the Cloud Service's .cscfg.
Which approach makes more sense depends on how you use your Redis instance, but the main benefit of the latter approach is that the Redis instance is not recreated each time you deploy your Cloud Service, so any data in it stays available between deployments. Another benefit is that if you want to build a Redis cluster spanning multiple Cloud Services, this approach enables that.
