We are receiving feedback that users are being randomly logged out. Some are logged out after 5 minutes, others after 30 or more.
The strange thing is, we only see this on one server. There is absolutely no problem on our test and staging servers, but those are under much lower load.
We think this is related to threads shutting down in the background, causing our users to be logged off. Another strange thing is that we don't see any notification about the recycling in the event logs, even though we enabled it. We also haven't specified an expiration timeout for the session in our code.
What is the default expiration time for Identity in .NET Core 3.1? The docs aren't clear about this.
How can we increase the number of threads in IIS 10 (on Windows Server 2016)?
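For reference, we haven't set anything like the following in Startup.ConfigureServices; this sketch shows how we could pin the cookie lifetime explicitly to rule out the default (the 60-minute value is just an example, not something we currently set):

    using System;
    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            // ... existing AddDefaultIdentity / AddIdentity registration goes here ...

            // Sketch: pin the Identity application cookie lifetime explicitly
            // instead of relying on the default. Values below are examples only.
            services.ConfigureApplicationCookie(options =>
            {
                options.ExpireTimeSpan = TimeSpan.FromMinutes(60); // example lifetime
                options.SlidingExpiration = true;                  // renew on activity
            });
        }
    }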
I'm having an issue where every so often Akka messages are being held in an actor's mailbox for longer than necessary, sometimes for up to around 3 seconds.
This is only happening when I run the app (ASP.NET web app) in Azure - there are no delays when I run the code on my laptop.
The app is fairly lightweight, and the CPU and memory on the service plan never go above 60% and 70% respectively. I even tried temporarily upgrading the service plan but that had no impact on the wait times.
I've added some custom logging to flag when a message has waited in the mailbox for a significant amount of time. (Note the entries are ordered by descending timestamp.)
I've highlighted the messages that are causing the issue - these are NOT waiting for a previous message to finish processing in the actor, so for these the MessageEntered time should be the same as the MessageCreated time.
I'm wondering if there is some kind of thread limit being hit within the app, as lots of other actors are processing messages in parallel...?
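To test that theory, I was planning to log the thread pool state whenever a delayed message is detected, roughly like this (a sketch; it assumes the actors run on the default .NET thread pool rather than a dedicated Akka dispatcher):

    using System;
    using System.Threading;

    // Sketch: helper to log thread pool headroom when a delayed message is seen.
    // Assumes the actor system is using the default .NET thread pool.
    public static class ThreadPoolDiagnostics
    {
        public static void LogState()
        {
            ThreadPool.GetMinThreads(out int minWorker, out int minIo);
            ThreadPool.GetMaxThreads(out int maxWorker, out int maxIo);
            ThreadPool.GetAvailableThreads(out int availWorker, out int availIo);

            Console.WriteLine(
                $"Worker: min={minWorker} max={maxWorker} available={availWorker} | " +
                $"IO: min={minIo} max={maxIo} available={availIo}");

            // If the pool looks starved, raising the minimum is a quick experiment:
            // ThreadPool.SetMinThreads(200, minIo);  // example value only
        }
    }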
Any ideas on what's going on / how to resolve this?
Thanks
I have a website on IIS 8.5.9600.16384; we communicate with thousands of mobile devices through cyclic synchronisation and through SignalR 2.3.0.
This morning we had an application pool reset during working hours, which caused SignalR to call "OnReconnect" on all our mobile devices at the same time.
I thought that IIS started the new process first and then killed the old one, so there would be no downtime.
Can somebody tell me exactly what happens on the SignalR side when IIS recycles its application pool? And in which cases can there be connection downtime? (e.g. if the server is busy?)
Edit: The application pool was recycled by IIS because of the "time limit". The IT team will change this setting so that the application pools recycle every night, when it has a lower impact on our applications.
A worker process with process id of '8720' serving application pool 'DefaultAppPool' has requested a recycle because the worker process reached its allowed processing time limit.
Also confirmed that disallowOverlappingRotation is not set to True. Any hint would help.
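For reference, the change the IT team is planning is roughly this (a sketch using the Microsoft.Web.Administration API; the pool name and the 03:00 schedule are examples - the same change can be made in IIS Manager or with appcmd):

    using System;
    using Microsoft.Web.Administration;

    class ScheduleNightlyRecycle
    {
        static void Main()
        {
            // Sketch: disable the time-based recycle and recycle once per night instead.
            // "DefaultAppPool" and 03:00 are example values.
            using (var serverManager = new ServerManager())
            {
                var pool = serverManager.ApplicationPools["DefaultAppPool"];

                // Turn off the "allowed processing time limit" recycle.
                pool.Recycling.PeriodicRestart.Time = TimeSpan.Zero;

                // Recycle at a fixed time of day instead.
                pool.Recycling.PeriodicRestart.Schedule.Clear();
                pool.Recycling.PeriodicRestart.Schedule.Add(new TimeSpan(3, 0, 0));

                serverManager.CommitChanges();
            }
        }
    }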
A few years later, I'm still having some problems with the application pool recycle and SignalR. We are occasionally seeing thousands of SignalR reconnections while the application pool recycle occurs, opening more than 60k TCP/IP ports and causing a crash in IIS.
We managed to have it run "okay" for quite some time, but it still crashes. Any hint would help. Thanks.
I'd first identify how IIS was reset. If you experienced a crash or performed an IISReset, the process would be down before a new one stood back up. If, on the other hand, you configured AppPool recycling, then the overlapping processes should occur as you mention. I would check the System Event Log for recycling messages. Note that not all recycle reasons are logged by default.
You may also check to make sure disallowOverlappingRotation is not set to True.
Specifies whether the WWW Service should start another worker process to replace the existing worker process while that process is shutting down. The value of this property should be set to true if the worker process loads any application code that does not support multiple worker processes.
https://learn.microsoft.com/en-us/iis/configuration/system.applicationhost/applicationpools/add/recycling/
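If you'd rather check it programmatically than in IIS Manager, something along these lines should do it (a sketch with Microsoft.Web.Administration; "DefaultAppPool" is a placeholder for your pool name):

    using System;
    using Microsoft.Web.Administration;

    class CheckOverlappingRotation
    {
        static void Main()
        {
            using (var serverManager = new ServerManager())
            {
                var pool = serverManager.ApplicationPools["DefaultAppPool"];

                // true would mean IIS does NOT start the new worker process
                // before shutting down the old one.
                Console.WriteLine("disallowOverlappingRotation = " +
                    pool.Recycling.DisallowOverlappingRotation);
            }
        }
    }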
In our still-in-development project we have noticed sudden delays when accessing our ASP.NET Web API services. Using the awesome Mini Profiler, we pinned it down to connections to the Azure Data Cache (Preview) service being dropped and having to be re-established. This process takes about 3.3 seconds. After reconnecting, getting an object from the cache takes 1.4 ms.
When I increased maxConnectionsToServer from 1 to 20, I noticed another thing. If I don't make requests to the Web API for 1 or 2 minutes (that's usually when the connections are dropped) and then start making calls, the next 20 requests are delayed by 3.3 seconds, which I guess is how connection pooling works (each of the pooled connections has to be re-established).
Both the Web API and the caching service are hosted in the East US region; we have disabled the local cache, SSL is disabled, and auto-discover is enabled.
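For context, the connection setting in question corresponds roughly to the following (a sketch; endpoint and security settings are omitted here, and the same property can also be set via the dataCacheClient config section):

    using Microsoft.ApplicationServer.Caching;

    class CacheClientSetup
    {
        // Sketch: create a cache client with a larger connection pool.
        // Endpoint and security settings are omitted (they can live in the
        // dataCacheClient section of web.config instead).
        static DataCache CreateCache()
        {
            var config = new DataCacheFactoryConfiguration
            {
                MaxConnectionsToServer = 20   // was 1
            };
            return new DataCacheFactory(config).GetDefaultCache();
        }
    }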
So, I'm wondering if something is wrong with our configuration, or is this happening because Azure Cache is still in preview?
Any information would be appreciated.
Thanks!
It sounds like your shared cache is being offloaded due to inactivity. One way to test this would be to add an In-Role Cache to an existing service (if available) and swap your cache usage to this new cache. In-Role cache is described here.
Once the cache is moved off the shared offering, wait the requisite 1-2 minutes for the idle timeout, then retry the connection; the delay should not be present.
Assuming you want to stick with the shared cache option after isolating the problem, the only current workaround that I am aware of is running a background task that will periodically ping the cache to keep it alive.
If you are running a full Web role you can launch a background task on application start up.
If you are deploying via Mobile Services, then you can run the "ping" via Scheduled Jobs. The only issue you may run into here is that the minimum interval for a scheduled job is 1 minute, which may not be aggressive enough to keep your cache alive 100% of the time.
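As a rough illustration of the ping approach for a Web role (a sketch; the interval and key name are placeholders, and you would call Start() once from Application_Start or the role's OnStart):

    using System;
    using System.Threading;
    using Microsoft.ApplicationServer.Caching;

    // Sketch: periodically touch the cache so the connection is not idled out.
    public static class CacheKeepAlive
    {
        private static readonly DataCache Cache = new DataCacheFactory().GetDefaultCache();
        private static Timer _timer;   // keep a reference so the timer is not collected

        // Call once from Application_Start (or the role's OnStart).
        public static void Start()
        {
            _timer = new Timer(
                _ => { try { Cache.Get("keep-alive"); } catch { /* log and swallow */ } },
                null,
                TimeSpan.Zero,
                TimeSpan.FromSeconds(60));   // placeholder interval
        }
    }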
Nothing that I see points to you doing anything wrong per se. It may be that Azure is genuinely having problems getting the cache connections up and running quickly. According to several best-practices documents and MSDN posts, you want to increase the number of connections to the cache to allow for failover to an active connection, which you've effectively done with your configuration change.
Try making sure that your cache accessor is a static object (another MSDN recommendation). This may be a long shot, but also consider using the Sliding Window option for object expiration and see if that not only resets the countdown for the stored object, but also prompts the cache service to reset the connection.
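A minimal sketch of the static accessor idea (the factory is the expensive part, so create it once; names and the TTL are placeholders):

    using System;
    using Microsoft.ApplicationServer.Caching;

    // Sketch: create the DataCacheFactory once and reuse the cache client everywhere.
    public static class CacheAccessor
    {
        private static readonly Lazy<DataCache> LazyCache =
            new Lazy<DataCache>(() => new DataCacheFactory().GetDefaultCache());

        public static DataCache Cache
        {
            get { return LazyCache.Value; }
        }
    }

    // Usage (key and TTL are examples):
    // var value = CacheAccessor.Cache.Get("some-key");
    // CacheAccessor.Cache.Put("some-key", someValue, TimeSpan.FromMinutes(20));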
I am working on a Windows Azure cloud service. The first time I click the login button it takes 6 to 7 seconds, but when I click the same login button again a little later it takes 2 seconds. I am not able to understand why this happens: the server-side code is the same in both cases, yet subsequent calls are much faster than the first call.
"First-hit" delay is very common with ASP.NET applications. There is the overhead of JIT compilation, and various "pools" (database connections, threads, etc) may not be initialized. If you have an ASP.NET Web Forms application, each .aspx page is compiled the first time it is accessed, not when the server starts up. Also the various caching mechanisms (server or client) that make subsequent requests faster are not initialized on that first hit. And on the very first hit, any code in Application_Start will be run, setting up routing tables and doing any other initialization.
There are various things you can do to prevent your users from seeing this delay. The simplest is to write some kind of automated process that hits every page, and run it after deploying a new release. There are also modules for IIS that will run code ahead of Application_Start, when the site is actually deployed. Search for "ASP.NET warmup" to find those.
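A trivial version of such a warm-up step might look like this (a sketch; the URL list is a placeholder for your own pages):

    using System;
    using System.Net;

    // Sketch: hit a few key pages after a deployment so the first real user
    // does not pay the JIT / compilation / cache-priming cost.
    class Warmup
    {
        static void Main()
        {
            var urls = new[]
            {
                "https://example.com/",          // placeholder URLs
                "https://example.com/login",
                "https://example.com/dashboard"
            };

            using (var client = new WebClient())
            {
                foreach (var url in urls)
                {
                    try
                    {
                        client.DownloadString(url);
                        Console.WriteLine("Warmed: " + url);
                    }
                    catch (WebException ex)
                    {
                        Console.WriteLine("Failed: " + url + " - " + ex.Message);
                    }
                }
            }
        }
    }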
You may also experience delays after a period of inactivity, if your ASP.NET App Pool is recycled - this resets a bunch of things and causes start-up code to be run again on the next request. You can ameliorate this effect by setting up something to ping a page on your site frequently so that if the app pool is recycled it is warmed up again automatically, instead of on the next actual user request. Using an uptime monitoring service will work for this, or a Scheduled Task within the Azure ecosystem itself.
I have a classic ASP application that has been stable for years, and now we're having all kinds of problems with it. After moving the app between machines and wiping the original so we could have a fresh install of Windows, we've arrived at the following "symptom": the app pools do not appear to allow multiple simultaneous requests. Here's what we are seeing:
The app runs normally for most people, but when someone within one of the app pools accesses a long-running script (usually one with lots of DB access) all of the other users in the pool must wait for that script to complete. Once the script completes, everyone else's requests run. This initially made us suspect the DB connection string or something.
UNTIL... we noticed that large file uploads into our system also cause the app pool to stop responding. What's interesting about this is that we're using the SAFileup COM+ object to do our uploads, which has a progress display in a pop-up window. When you go to upload a file, the progress display comes up, but it never refreshes to show upload progress. If you wait it out, however, the file will eventually upload and the other pending requests will then process as normal.
Our app pools are in the default configuration, using the IWAM account to launch. I checked to ensure that the IWAM account has all the appropriate permissions. It does.
We've tried a variety of DB connection strings, none solved the problem (though I'm thinking it's not the DB connection string). Just in case someone thinks it is, here's our connection string: "Provider=SQLNCLI;Trusted_Connection=yes;Server=(local);Database=demo;". It couldn't be simpler. This string was previously not a problem.
I fussed with the web gardens setting and it does, indeed, make the system respond to multiple requests, but each worker process in the garden has its own session state, which causes our users to get booted when their request gets randomly assigned to a different worker process. Having only a single worker process in the garden was never an issue before anyway.
I've used SQL Profiler and sp_who2 to see if during the long-running scripts there are any deadlocks or blocks on the SQL Server. There are not.
The issues initially started after we had installed some patches from Microsoft. We wiped a machine clean and installed Win2k3 server, then SP2, and then didn't patch anymore after that. The problem remained, so it doesn't appear to have been a patch.
I'm pretty much at a loss now... does anyone have any experience with similar issues? If so, how were they fixed?
Check that you don't have ASP debugging enabled on the server. This will force the ASP script engine to run on a single thread.
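If you want to verify that without clicking through IIS Manager, something like this should read the flag from the IIS 6 metabase (a sketch; "W3SVC/1/ROOT" is a placeholder for your site/application path):

    using System;
    using System.DirectoryServices;

    // Sketch: read the server-side ASP debugging flag from the IIS 6 metabase.
    // "W3SVC/1/ROOT" is a placeholder for your site/application path.
    class CheckAspDebugging
    {
        static void Main()
        {
            using (var app = new DirectoryEntry("IIS://localhost/W3SVC/1/ROOT"))
            {
                Console.WriteLine("AppAllowDebugging = " +
                    app.Properties["AppAllowDebugging"].Value);
            }
        }
    }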
Sounds like a limit on the number of concurrent incoming requests to IIS or Windows Server.
Check out http://blogs.msdn.com/b/david.wang/archive/2006/04/12/howto-maximize-the-number-of-concurrent-connections-to-iis6.aspx and http://forums.iis.net/p/1152112/1880908.aspx#1880908 on how to tweak the settings.