Status 200 with sc-win32-status of 64 on large requests - IIS

Windows Server 2016, .NET Core 3.1. The first request takes more than 25 s on the server, but only 4 s on my machine. The client sends only 1 request/s, yet the IIS log records more than 200 entries/s. The log entries look like this:
sc-win32-status of 64, time-taken 0
sc-win32-status of 64, time-taken 10 ms
....

sc-status (the HTTP status code) = 200 with sc-win32-status = 64 can happen in the following scenarios.
Scenario 1:
IIS receives the request from the client and executes it without problems, then sends back the response, so sc-status is 200; at that point we don't yet know whether the client has received the response, which is what sc-win32-status reflects.
IIS then tries to send the response to the client, but the connection was already lost (during execution of the request) or gets lost during transmission (a network issue). The IIS log records sc-win32-status = 64, which means "the specified network name is no longer available".
Scenario 2:
IIS receives the request from the client and executes it without problems, then sends back the response, so sc-status is 200; again, we don't yet know whether the client has received it.
IIS then sends the response to the client and waits for an ACK, but the client does not send one, even though it received the response. Instead, the client resets the connection to free up resources (rather than leaving the connection in the more common TIME_WAIT/CLOSE_WAIT state). Since IIS never got the ACK, it logs sc-win32-status = 64.
As a result:
If you see sc-status = 200, sc-win32-status = 64 and a large time-taken value on the first access after the site is published, it is normal for that first request to take a long time (application startup).
You can try a different network; sometimes the network causes problems we can do nothing about.
You can also check your code and your database connections; some resources, connections or processing logic may simply take a long time.

Possibly specific to Server 2016: I have observed that only HTTPS triggers this problem; plain HTTP is fine.

Related

Azure API management gives 200 [not sent in full (see exception telemetries)]

We have a few APIs that are being long-polled through Azure API Management. For some reason, we are receiving a response of 200 [not sent in full (see exception telemetries)] and then a System.Exception: A task was canceled. exception in App Insights.
Looking at the server App Service telemetry, the requests completed without any exception there.
Can anyone help me figure out what this status response means and why we are getting this exception?
These errors mean that APIM started to send the response to the client: it sent the status code, the description, and some portion of the headers and body. These traces should be accompanied by exception telemetry, as the response annotation suggests. Depending on what you see there, it may be:
Client connectivity error - the client terminated the connection before the response was sent in full
Backend connectivity error - the backend terminated the connection before providing the full response
The reasons for both may vary a lot, but given the small duration I'd suspect the client closing the connection. For example, if this API is called from a browser, it is normal for the browser to terminate the connection and abort reading the response when the user navigates away from the page that made the call.

Poco::Net::HTTPSClientSession receiveResponse always truncated abnormally

I have encountered a very weird issue.
I have two VMs running CentOS Linux.
The server side exposes a REST API (using non-Poco sockets), and one of the endpoints responds to a POST.
On the client side, I use the POCO library to call the REST API.
If the returned message is long, it gets truncated at 176 KB, 240 KB, or 288 KB.
The same code in the same environment, run on the server side: good.
On the client VM, using Python to make the REST call: good.
It fails only when I run the same, otherwise good, code on the client VM.
When the message gets truncated, the HTTPS status code is still 200.
On the server side, I logged the response message every time it was sent; everything looks normal.
I have tried a whole bunch of things, like:
set the socket and receiveResponse timeouts to an hour
wait 2 seconds after sending the request, before calling receive
set the receive buffer big enough
try a whole bunch of approaches to make sure the receive stream is empty, with no more data
It just does not work.
Has anyone seen a similar issue? I've started pulling my hair out... Please talk to me, anything... before I go bald.

Is there a timeout on warp server response?

I have a web application using warp, and while querying some large-ish data using curl I noticed the connection gets shut down after exactly 1 minute of transfer. I increased curl's own timeout, but this did not change anything, so I assume the limit is set on the server side.
Is it actually the case that warp has a 60-second timeout for sending a response, and if so, how can I control it?

Set timeout for request to my server

I want all requests to my server to get a response within 2 seconds.
If my server has an issue (for example, it's turned off), the user should get an error response after 2 seconds.
Right now, if there is an issue with my server, the user's browser tries to connect for a long time. I don't want this.
Currently I am not using any load balancer or CDN.
Sometimes my server goes down, and I don't want my users to wait forever for a response while the browser hangs.
I think a load-balancing service or a CDN could help.
What I want is for the service in front of my server to return a default error message after 2 seconds.
Which service can handle this for me?
I checked out CloudFront and CloudFlare and didn't find anything like that.
More info:
1. Caching cannot help, because my server returns different results for every request.
2. I cannot use async code.
Thank you.
You can't configure a 2-second timeout in CloudFront, but you can configure it to return an error page (which you can host anywhere outside your server) when the server is not responding properly.
Take a look here: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html
Moreover, these error responses are cached (you can specify for how long), so subsequent users will get the error right away.

IIS ARR - Is it possible to forward/stream a POST request as-is, without buffering or adding headers?

I'm trying to implement WebSocket-type functionality using only plain HTTP. There's a connection over GET that gets data from the server, and a connection with POST that sends data to the server. Both connections send/receive streaming data and use Content-Length = 1 GB with Content-Type = application/octet-stream. The two connections remain open until the server sends an "end" command over the GET connection.
Without ARR, things work (the back-end server is not IIS).
I added ARR to the setup and did all the configuration steps (including response buffer threshold = 0 and max request size = 2 GB). Hitting normal URLs and submitting forms from the browser through ARR works perfectly.
However, the POST connection used to send data to the server does not work. The client sends the data, but the back-end server does not receive anything (verified using Wireshark). It remains hung for a while; if I terminate the client, an ARR failure trace log is generated showing a 400 error response, which seems to be because the actual amount of data was not 1 GB.
The GET connection works. (If I point the POST connection directly at the back-end server, things work.)
So my question is: is it possible to send streaming data as a POST request via ARR? Any suggestions on how to achieve it?
Thanks
