SignalR: worker process is limited to 10 concurrent requests - IIS

I've deployed a SignalR-based app onto my on-premises server, and it is crashing when there are more than ten concurrent users.
The long-running requests for the worker process associated with my app pool all have URLs of the form /signalr/reconnect?transport=serverSentEvents&connectionToken=....
As soon as more than ten users connect, the limit of ten concurrent requests is breached and the application hangs.
Do I need to change any IIS settings to allow SignalR to scale in this instance? If I'm deploying to Azure, how would I configure the settings to take account of this?

Workstation OSes (Windows XP, Vista, 7, 8, etc.) have a limit of at most 10 concurrent connections in IIS. These OSes are not intended for use as a server (and, as I recall, the EULA prohibits it). The concurrent connection limit is one way that Microsoft enforces the "not a server" restriction.
Windows Server OSes (Server 2008 R2, Server 2012, etc.) do not have this restriction, so you won't have this problem when you deploy your application to a production server. Windows Azure is fine too (it is Windows Server behind the scenes).
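For reference, on a Server SKU (where the limit is actually configurable rather than baked into the edition), the per-site connection settings live in the <limits> element of applicationHost.config. The site name and values below are illustrative only:

    <!-- applicationHost.config (illustrative; on workstation OSes the concurrent-request cap is fixed by the edition and cannot be raised here) -->
    <sites>
      <site name="MySignalRSite" id="1">
        <limits maxConnections="4294967295" connectionTimeout="00:02:00" />
      </site>
    </sites>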

Related

Max Pool Size in web.config running on IIS Web Server

I have a cloud-based application where there can be up to 5000 "worker nodes" communicating with a web server. All 5000 could be started simultaneously and, once online, would call an API method to pick up a job. This API method in turn calls a SQL Server stored procedure to get the job details and return a response to the individual node. To prevent multiple nodes getting the same job, the stored procedure puts a lock on a particular database table while it executes.
The web server is Windows Server 2016 running IIS 10.
The database is Microsoft SQL Server 2017 Standard.
My question is how best to manage these connections through web.config or IIS settings. As far as I know, my main levers are:
Max Pool Size in web.config
Connection time-out for IIS Site
I can of course increase the pool size to 10,000 and set the connection time-out to 2 or 3 minutes, but I'm not sure whether there's a best practice for managing something like this.
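For what it's worth, here is a sketch of where those two levers live; the connection-string name, server names, and values below are illustrative assumptions, not recommendations:

    <!-- web.config: Max Pool Size and Connect Timeout are connection-string keywords -->
    <connectionStrings>
      <add name="JobDb"
           connectionString="Server=sqlserver01;Database=Jobs;Integrated Security=true;Max Pool Size=10000;Connect Timeout=120"
           providerName="System.Data.SqlClient" />
    </connectionStrings>

    <!-- applicationHost.config: the IIS site connection time-out (2 minutes here) -->
    <site name="JobApi" id="1">
      <limits connectionTimeout="00:02:00" />
    </site>

Bear in mind that the ADO.NET pool is per connection string per worker process, so the effective number of pooled connections also depends on how many worker processes serve the site.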

Will "Always On" setting prevent BOTH idleTimeout and periodicRestart?

As you may know, web sites hosted under the Microsoft Azure Web Sites service are by default configured to time out after idling for 20 minutes (idleTimeout), and the application pool restarts every 29 hours (periodicRestart). This causes the web site to be slow for the first user accessing it.
I would like to know if the new "Always On" setting available in Standard mode will prevent both situations from happening.
I found a few articles mentioning the feature; they are all very clear that the idle timeout will be avoided, but none of them explicitly talks about the periodic restart:
One of the other useful Web Site features that we are introducing today is a feature we call "Always On". When Always On is enabled on a site, Windows Azure will automatically ping your Web Site regularly to ensure that the Web Site is always active and in a warm/running state. This is useful to ensure that a site is always responsive (and that the app domain or worker process has not paged out due to lack of external HTTP requests).
http://weblogs.asp.net/scottgu/archive/2014/01/16/windows-azure-staging-publishing-support-for-web-sites-monitoring-improvements-hyper-v-recovery-manager-ga-and-pci-compliance.aspx
The Azure documentation is not very explicit either:
Always On - By default, web sites are unloaded if they have been idle for some period of time. This lets the system conserve resources. You can enable the Always On setting for a site in Standard mode if the site needs to be loaded all the time. Because continuous web jobs may not run reliably if Always On is disabled, you should enable Always On when you have continuous web jobs running on the site.
http://www.windowsazure.com/en-us/documentation/articles/web-sites-configure/
Yes, both of them will be prevented.
The default 29-hour periodicRestart was never in effect on Azure Websites. That recycle interval is an IIS feature, enforced by WAS (the Windows Process Activation Service), and designed to operate at the server level, restarting the worker processes on a single IIS box. Neither of those things (WAS and a stand-alone IIS server) applies to Azure Websites: WAS was the process-management component of IIS and was specific to a one-box setup, whereas Azure Websites uses a different process-management component that has no periodicRestart.
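For context, these are the IIS defaults being discussed; on a regular self-hosted IIS server they look roughly like this in applicationHost.config (a sketch of the default values, not something you set on Azure Websites):

    <!-- applicationHost.config defaults on a stand-alone IIS server -->
    <applicationPools>
      <add name="MyAppPool">
        <!-- worker process shuts down after 20 idle minutes -->
        <processModel idleTimeout="00:20:00" />
        <!-- worker process is recycled every 29 hours (1 day, 5 hours) -->
        <recycling>
          <periodicRestart time="1.05:00:00" />
        </recycling>
      </add>
    </applicationPools>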

Maximum number of concurrent threads inside Windows Azure Worker Role

We are developing a server application that needs to open a lot of TCP/IP connections concurrently, waiting for small notifications.
We are planning to use Windows Azure Cloud Services so the server can scale easily, but we have one question.
What is the maximum number of concurrent threads (or TCP/IP connections awaiting messages) that a single Windows Azure Worker Role instance can have?
Windows Azure instances inside Worker Roles are regular Windows Server VMs that are managed by the Azure AppFabric controller.
As such, there is no Azure-specific limitation on the number of threads or connections that each server can logically support.
However, be advised that servers within Azure come in different sizes (power) and will physically be able to handle different numbers of running threads or open connections.
The theoretical maximum also depends on the threads/connections themselves (how many resources each one takes is key).
Running a load test on a deployed solution will help you find the maximum number of threads/connections you can open while still performing adequately.
Furthermore, since Windows Azure supports scaling out, you can use something like AzureWatch to monitor performance counters for the number of running threads or TCP/IP connections and automatically add or remove instances.

Can I Programmatically Discern IIS Server Settings? (MaxConnections, etc.)

I am attempting to run a SignalR application (.NET 4.5) on an inexpensive shared hosting provider, and I have read that IIS can begin to throttle connections if the server settings are allocated too low. Since I don't have admin control of this server, is there a way I can programmatically check the server settings like the maximum connections per CPU, etc?

What is the difference between DefaultAppPool and Classic .NET AppPool in IIS7?

I have a problem with timeouts in IIS. In the web.config the session timeout was set to 60 minutes but after 20 minutes the session ends.
This problem only occurs in IIS7 and not in IIS5.
After some investigation, I discovered it was due to the application pool's timeout. If the App Pool is left 20 minutes without doing anything, IIS ends the session.
If the application is using the DefaultAppPool this always happens, but if I change the App Pool to the Classic .NET App Pool, the timeout does not occur.
Both pools have an idle timeout, but the problem only occurs with the DefaultAppPool.
Why is this?
What is the difference between be a Classic .NET AppPool and DefaultAppPool?
What is the difference in the pipeline, between Classic and Integrated?
IIS7 has some major changes to better support WCF and one of the key pieces is the new integrated application pool. This session from PDC talks about some of these challenges from the perspective of making WCF services perform better: http://channel9.msdn.com/pdc2008/TL38/
This page has a good overview of IIS7 architecture: http://learn.iis.net/page.aspx/101/introduction-to-iis7-architecture/.
I've included some of the key information from this article on the purpose of the two different kinds of app pools below:
Integrated application pool mode
When an application pool is in Integrated mode, you can take advantage of the integrated request-processing architecture of IIS and ASP.NET. When a worker process in an application pool receives a request, the request passes through an ordered list of events. Each event calls the necessary native and managed modules to process portions of the request and to generate the response. There are several benefits to running application pools in Integrated mode. First, the request-processing models of IIS and ASP.NET are integrated into a unified process model. This model eliminates steps that were previously duplicated in IIS and ASP.NET, such as authentication. Additionally, Integrated mode enables the availability of managed features to all content types.
Classic application pool mode
When an application pool is in Classic mode, IIS 7.0 handles requests as in IIS 6.0 worker process isolation mode. ASP.NET requests first go through native processing steps in IIS and are then routed to Aspnet_isapi.dll for processing of managed code in the managed runtime. Finally, the request is routed back through IIS to send the response. This separation of the IIS and ASP.NET request-processing models results in duplication of some processing steps, such as authentication and authorization. Additionally, managed code features, such as forms authentication, are only available to ASP.NET applications or applications for which you have script mapped all requests to be handled by aspnet_isapi.dll. Be sure to test your existing applications for compatibility in Integrated mode before upgrading a production environment to IIS 7.0 and assigning applications to application pools in Integrated mode. You should only add an application to an application pool in Classic mode if the application fails to work in Integrated mode. For example, your application might rely on an authentication token passed from IIS to the managed runtime, and, due to the new architecture in IIS 7.0, the process breaks your application.
The Classic pool processes requests in the app pool using separate processing pipelines for IIS and ISAPI. Integrated mode uses a single integrated pipeline for IIS and ASP.NET, which takes advantage of the improved features of IIS 7.0 using only one process and gives better performance.
Good practice is to create a new application pool for each application, then configure each pool separately according to the application's requirements.
Classic mode follows the steps below:
1. The incoming HTTP request is received through the IIS core.
2. The request is processed through ISAPI.
3. The request is processed through ASP.NET.
4. The request passes back through ISAPI.
5. The request passes back through the IIS core, where the HTTP response is finally delivered.
Integrated mode follows these steps:
1. The incoming HTTP request is received through the IIS core and ASP.NET.
2. The appropriate handler executes the request and delivers the HTTP response.
Increase the session timeout in web.config (see the sketch below), but remember that increasing it causes the application to consume more resources, e.g. memory.
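A minimal sketch of that web.config setting, using the 60-minute value from the question:

    <!-- web.config: session timeout is specified in minutes -->
    <system.web>
      <sessionState timeout="60" />
    </system.web>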
I think your question has the answer in it. IIS 6 and 7 have a concept of an application pool timeout, which is different from the session timeout.
The difference between the modes has already been addressed. I'm uncertain how your questions regarding pipelines and the differences between modes relate to your problem - the timeouts.
Some perspective: idle timeout won't occur on a web site with any traffic. You've probably got a problem that only occurs on a QA site or your dev box. The idle timeout setting exists to save resources on your dev box and at $5/month hosting companies with lots of underused web sites (e.g. my blog). You probably do not want an idle timeout on a public site.
Session timeout - set in web.config; if a user doesn't hit the server within that period, their session times out.
Idle timeout - no one touches the web server at all for 20 minutes, so the worker process shuts down to save resources. In IIS 6, this is on the Performance tab of the app pool and is easy to disable. In IIS 7, you can set it in the application pool's advanced settings or in the processModel element. I don't run as much IIS 7 as IIS 6, but it looks like removing the setting, or setting it to 0, gives an infinite idle timeout.
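A sketch of the IIS 7 equivalent in applicationHost.config, assuming the DefaultAppPool and a value of 0 (never time out):

    <!-- applicationHost.config: 00:00:00 disables the idle shutdown -->
    <applicationPools>
      <add name="DefaultAppPool">
        <processModel idleTimeout="00:00:00" />
      </add>
    </applicationPools>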
The DefaultAppPool ignores the session timeout settings in web.config, but the Classic .NET App Pool will use them.
