We have a classic ASP application running on Windows Server 2012 with IIS 8 as the web server, and we had to modify a page to allow retrieval of a larger data set from the database. When we run this without amending any IIS settings we receive the error below in IE:
We have tried amending the buffering limit at both the site and IIS application level from the standard 4194304-byte (4 MB) limit to 20971520 bytes (20 MB), but when we do, the output changes to the image below in IE, and in Chrome it continually asks for credentials every 20 seconds or so.
Why is this happening, and how do we resolve it?
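(For reference, the 4194304-byte default mentioned above matches the ASP bufferingLimit attribute, so assuming that is the setting being changed, the adjustment would look something like the following appcmd command; the value is illustrative.)
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/asp /bufferingLimit:"20971520" /commit:apphost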
You're probably best off disabling the buffer using Response.Buffer = False.
By default, IIS buffers all output, which means that as a web page is being built it gets stored in memory (a buffer) until your script has finished executing, and then the whole page is sent from the buffer to the client's machine as one file. If you're constructing a very large page with a lot of data, you risk overflowing the buffer. Increasing the buffer size limit is one solution, although I can't see why it would start asking for credentials; you must have misconfigured something in IIS.
Another solution would be to use Response.Flush() to intermittently flush data from the buffer and send the HTML to the client's machine in chunks. But disabling the buffer entirely will do this for you without the need for Response.Flush().
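A rough sketch of the chunked approach (assuming rs is an already-opened ADO recordset; the field name and chunk size are illustrative):
<%
' Flush the buffer to the client every 500 rows instead of holding the whole page.
' Alternatively, put Response.Buffer = False at the very top of the page (before any
' output is written) and IIS will stream the page without any explicit Flush calls.
Dim rowCount : rowCount = 0
Response.Write "<table>"
Do While Not rs.EOF
    Response.Write "<tr><td>" & Server.HTMLEncode(rs("SomeField") & "") & "</td></tr>"
    rowCount = rowCount + 1
    If rowCount Mod 500 = 0 Then Response.Flush
    rs.MoveNext
Loop
Response.Write "</table>"
%>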
Related
We're sending rather large chunks of data over websockets from an Azure Web App. It all works fine, but for some reason, the outgoing buffer size is 4096 bytes, which gives a lot of overhead for large data transmissions.
On a local developer machine this IIS/.NET buffer seems to be 16384 (or possibly 16383, because I'm getting the stream in one frame with 1 byte, then the next frame with 16383, and so on). The receiving buffer in the client is 65536 for each frame.
All code is written in .NET, so the sending side is simply creating a large ArraySegment and sending it with ClientWebSocket.SendAsync, which is much too high up the chain to actually decide how it's sent.
My question is, is it possible to change the size of the frames/buffers on the Azure Web App?
Clearly it's possible to change it at either the OS or IIS level (http.sys?), since our Windows 10 dev machines have a different send buffer, but I really can't find where and how.
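For reference, the send path being described looks roughly like the sketch below, and one workaround sometimes suggested is to split the payload yourself and drive the endOfMessage flag, so the application rather than the IIS buffer chooses the fragment size. This is only a sketch under that assumption; the 16 KB chunk size is arbitrary, and whether the Azure front end re-fragments these frames is exactly what the question is asking.
using System;
using System.Net.WebSockets;
using System.Threading;
using System.Threading.Tasks;

static class ChunkedSender
{
    // Send 'payload' as one logical message, but in explicit 16 KB frames, so the
    // application (not the IIS/http.sys output buffer) decides the fragment size.
    public static async Task SendInChunksAsync(ClientWebSocket socket, byte[] payload,
                                               CancellationToken ct)
    {
        const int chunkSize = 16 * 1024;
        for (int offset = 0; offset < payload.Length; offset += chunkSize)
        {
            int count = Math.Min(chunkSize, payload.Length - offset);
            bool lastChunk = offset + count >= payload.Length;
            await socket.SendAsync(new ArraySegment<byte>(payload, offset, count),
                                   WebSocketMessageType.Binary,
                                   lastChunk,
                                   ct);
        }
    }
}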
I have been asked to create a message processing system as described below. As I am not sure if this is the right place to post this, feel free to move it to any other appropriate group.
Problem
The server has about 100 to 500 clients connected at any given moment. When a client connects, the server loads part of their data and caches it in memory for faster access. The server will receive between 200 and 1,000 messages per second across all clients. These messages are relatively small (about 500 bytes). Any change to data in the cache should be saved to disk as soon as possible. When a client disconnects, all their data is saved to disk and removed from the cache. Each message contains an instruction and a text message which will be saved as a file. Instructions should be executed as fast as possible (near instant), and all clients using that file should get the update. Only writing the modified message to disk can be delayed.
Here is my solution in a diagram
My solution consists of a web server (HTTP or socket), a message queue, and two or more instances of a file server and an instruction server.
The web server grabs client messages and, if there is a message available for a client in the message queue, pushes it back to that client.
The instruction processor grabs instructions from the queue, creates the necessary message to be processed by the file server (get/set file), waits for the file to become available in the queue, and then does further processing to create another message for the client.
The file server only provides the files, either from cache or from the physical file depending on the type of file.
Concerns:
There are peak times when the total number of connected clients might go over 10,000 at once and the total messages received from clients increase to 10-15K.
I should be able to clear the queue and return to a normal state as soon as possible (while still processing requests, obviously).
I should be able to add extra instruction processors and file servers on the fly without having to shut down the other instances.
In case the file server crashes it shouldn't lose files, so it has to write files to disk as soon as there are any changes and processing time is available.
The file store should use a B+ tree format so that some applications (local reporting apps) can easily access files without having to go through the queue server.
My Solution
I am thinking of using node.js for the socket/web server, perhaps a NoSQL database for the file server, and a queue server such as RabbitMQ, or Redis via node_redis.
Questions:
Is there a better way of structuring this system?
What are my other options for components of this system?
Is it possible to run all the instances on the same server machine, or even in the same application (in different threads)?
You have a couple of holes here, mostly around the web server "pushing" the message back to the client. That doesn't really work in a web-based world. You can try to use websockets, but generally this ends up being polling-based.
I don't know what the "instructions" are to be executed, but saving 1,000 500-byte messages per second is trivial. Many NoSQL solutions boast million-plus writes per second, especially if you let the commit to disk lag.
Don't bother with the queue for the return of the file. A good NoSQL solution will scale better. Build out a Cassandra cluster and load test it until it can handle your peak load.
This simplifies your architecture to one or more web servers, clients polling those servers for file updates, a queue for submitting "messages" to the "instruction server" (also known as an application server in web-developer terms), and a NoSQL database for the instruction server to write files to.
This makes scaling easy: you can always add more web servers, and with a decent cluster size for your NoSQL store you should be able to scale horizontally there as well. Your only real bottleneck is the instruction server queue, which you can always throw more instruction servers at.
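A minimal sketch of the polling-based shape described above, assuming the node.js web server the question proposes, Express for HTTP, and Redis lists as the queue (the route names, keys, and libraries here are illustrative assumptions, not part of the original design):
// npm install express ioredis
const express = require('express');
const Redis = require('ioredis');

const app = express();
const redis = new Redis();
app.use(express.json());

// Client submits a message: push it onto the instruction queue for the
// instruction servers to consume.
app.post('/messages/:clientId', async (req, res) => {
  await redis.lpush('instructions',
    JSON.stringify({ clientId: req.params.clientId, body: req.body }));
  res.sendStatus(202);
});

// Client polls for updates: the instruction/file servers push finished work
// onto a per-client list, and the web server simply hands it back.
app.get('/updates/:clientId', async (req, res) => {
  const update = await redis.rpop('updates:' + req.params.clientId);
  res.json(update ? JSON.parse(update) : null);
});

app.listen(3000);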
Good Afternoon Everyone,
I am load testing my .NET Web API, which is hosted on a Windows Server 2008 virtual machine. I am using Visual Studio 2012 Load Test. However, once my load test reaches 780 concurrent users, the CPU % starts to decrease, as shown in the image attached. The load test reaches a maximum of 1000 concurrent users, but the CPU % is still decreasing at the highest user load. I cannot explain why. Is any kind of IIS limit being reached? Why does this occur? Has the maximum user load been reached for this function?
Just looking for an explanation of this result and some guidance.
Thank you
IIS does have separate output cache settings, enabled by default, which start to make sense once you consider how it handles dynamic content with static responses and cache worthiness:
The IIS output caching feature targets semi-dynamic content. It lets you cache static responses for dynamic requests and increase scalability.
Configuring Cache Worthiness:
Even if you enable output caching, IIS does not immediately cache a request. It must be requested a few times before IIS considers a request to be "cache worthy." Cache worthiness can be configured via the ServerRuntime section.
Two properties determine cache worthiness:
frequentHitTimePeriod
frequentHitThreshold
A request is only cached if more than frequentHitThreshold requests for a cacheable URL arrive within the frequentHitTimePeriod.
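For example, to only cache a URL after it has been requested frequently enough (here, a threshold of 5 hits within 30 seconds), the serverRuntime section could be set roughly like this in web.config (the values are illustrative; the defaults are 2 hits within 10 seconds):
<configuration>
  <system.webServer>
    <serverRuntime frequentHitThreshold="5"
                   frequentHitTimePeriod="00:00:30" />
  </system.webServer>
</configuration>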
This was a good explanation: http://www.iis.net/learn/manage/managing-performance-settings/configure-iis-7-output-caching
This may be related to platforms other than ColdFusion.
The IIS 6 log reports a "time-taken" value much longer (30 minutes) than the 120 seconds set in Connection Timeout for several requests to a ColdFusion page.
I assume that ColdFusion was unresponsive at the time. I would like IIS to stop the request rather than wait this long.
Is there an IIS setting that would force this?
Not really, because IIS is no longer handling the request once it has been passed to CF. You could try playing with the application pool timeout and see if you can get that to throw an error.
This scenario can also be considered a slow HTTP DoS attack when it is caused by the client. IIS doesn't provide much protection against it (at least for a slow POST body), because Microsoft considers it a protocol bug rather than an IIS weakness. Although I think in this case it is your server doing it to itself.
Things to check:
You didn't mention whether it is the request that is slow or the server's response. You could try tweaking your MinFileBytesPerSec parameter if it's the response that is slow. By default it will drop the connection if the client is downloading at less than 240 bytes per second.
Remember, that 120-second IIS timeout is an idle timeout. As long as the client sends or receives a few bytes within 120 seconds, that timer will keep getting reset.
You didn't mention if this long wait is happening on all pages or always on a few specific ones. It is possible that your CF script is making another external connection, e.g. via CFQUERY, which is not subject to CF timeouts but to the timeouts of the server it is connecting to. Using the timeout attribute inside CFQUERY may prevent this (see the example after this list).
You also didn't mention what your ColdFusion settings are. Maybe the IIS timeout setting is being ignored by the ColdFusion JRun connector ISAPI filter, so you should check the settings in ColdFusion Administrator. In particular, check whether "Timeout Requests after" has been changed. If it's still at the default of 60 seconds, check your code to see if it has been overridden there, e.g.
<cfsetting requestTimeOut = "3600">
Finally, there is the matter of the peculiar behavior of CF's requestTimeout, which you might have to work around by replacing some cfscript blocks with CFML tags.
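To illustrate the CFQUERY and cfsetting points above (the query name, datasource, and values are placeholders):
<!--- Cap a long-running query at 30 seconds instead of inheriting whatever
      timeout the database server enforces --->
<cfquery name="qReport" datasource="myDSN" timeout="30">
    SELECT * FROM big_table
</cfquery>

<!--- A page-level override like this quietly defeats the "Timeout Requests
      after" setting in ColdFusion Administrator --->
<cfsetting requestTimeout="3600">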
I have an intranet Sitecore website set up on IIS 7 which randomly throws the following error message:
HTTP Error 503.2 - Service Unavailable
The serverRuntime#appConcurrentRequestLimit setting is being exceeded.
To fix this issue, I have made the following changes:
Increased the Queue Length of application pool myrjetAppPool from 1000 to 65535.
Modified machine.config to increase the requestQueueLimit property of the processModel element to 100000.
Increased appConcurrentRequestLimit to 100000 by running:
C:\Windows\System32\inetsrv\appcmd.exe set config /section:serverRuntime /appConcurrentRequestLimit:100000
But I'm still getting the same error. Any help is greatly appreciated.
You might check to see where all your threads are going. We had occurrences where threads for Media Library assets were hanging and blocking up the queue.
In IIS Manager, select the server node from the tree, then the "Worker Processes" feature icon, then right-click the application pool of interest and select "View current requests". You might find something is getting stuck. I sometimes hit F5 on this screen a few dozen times in very quick succession to see the rate the requests are going through (of course Performance Monitor is better for viewing metrics but it won't tell you what URLs are being processed).
Investigate the references in the linked URL to 'MaxConcurrentRequestsPerCPU', which you may need to set by creating a new registry key, depending on your OS and framework version.
https://learn.microsoft.com/en-us/archive/blogs/tmarq/asp-net-thread-usage-on-iis-7-5-iis-7-0-and-iis-6-0
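On .NET 4.x the same limit can also be raised without touching the registry, via the applicationPool element in aspnet.config (a sketch only; 5000 is already the .NET 4 default, so check the linked article for what applies to your OS and framework combination):
<!-- %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet.config -->
<configuration>
  <system.web>
    <applicationPool maxConcurrentRequestsPerCPU="5000"
                     maxConcurrentThreadsPerCPU="0"
                     requestQueueLimit="5000" />
  </system.web>
</configuration>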
As already commented, check the actual concurrent request count using performance counters to determine which limit you're hitting, i.e. whether it is a limit of 5000 or maybe 12 (per CPU).
Edit: I realise this may look like I'm talking about a different setting entirely, but I believe there is overlap here.
We got this problem after the installation of an IIS plugin. After a long investigation we saw that the config file C:\Windows\System32\inetsrv\config\applicationHost.config had an extra location tag for the site with the problem. After removing the extra entry and doing an iisreset, the site/server worked normally again. So something must have gone wrong during the installation.