524 response has no CORS headers - couchdb

I have a Couch/Pouch app that seems to work correctly, but it suffers strange delays and fills the browser log with CORS errors. The CORS errors only occur on timed-out GETs, because their responses don't supply CORS headers.
Using browser dev tools I can see many successful polling requests that look like this:
GET https://couch.server/mydb/_changes
?style=all_docs
&filter=_view
&view=visible/person
&since=369-69pI ...... bTC_376CZ
&limit=100
response 200
... and a bad one like this ...
OPTIONS https://couch.server/mydb/_changes
?style=all_docs
&feed=longpoll
&heartbeat=10000
&filter=_view
&view=visible/person
&since=369-69pI ...... bTC_376CZ
&limit=100
response 200
GET https://couch.server/mydb/_changes
?style=all_docs
&feed=longpoll
&heartbeat=10000
&filter=_view
&view=visible/person
&since=369-69pI ...... bTC_376CZ
&limit=100
response 524
So there are just two differences between the good case and the bad case. In the bad case, PouchDB:
precedes the GET request with an OPTIONS request
specifies a longpoll feed with a 10-second heartbeat
The defect, apparently, is that CouchDB's 524 response has no CORS headers!
I have four such live: true, retry: true replicators, so my browser logs are showing four red-inked errors every ten seconds.
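Each of these replicators is set up more or less like this (simplified; the filter and view come from the traces above, the rest is boilerplate):
const local = new PouchDB('mydb');
const remote = new PouchDB('https://couch.server/mydb');
local.replicate.from(remote, {
  live: true,   // keep a continuous _changes longpoll open
  retry: true,  // back off and retry when a request fails
  filter: '_view',
  view: 'visible/person'
});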
I would post this as an issue in the CouchDB repository, but I wanted some feedback here first; I could easily be misunderstanding something.
Other factors:
I host the client code on GitHub Pages and serve it through Cloudflare.
Access to CouchDB from clients also goes through Cloudflare.
CouchDB sits behind Nginx on my VPS.
Let me know if there are further details I should be providing, please.

Credit for answering this question should actually go to @sideshowbarker, since he made me realize that the error was not in CouchDB but in my Cloudflare settings.
In those settings I had my CouchDB site set to use DNS and HTTP proxy (CDN) (orange cloud icon) mode rather than DNS only (grey cloud icon). Switching to DNS only and, perhaps unnecessarily, wiping the cache, solved the problem after a considerable delay (over an hour?).

Related

REST API In Node Deployed as Azure App Service 500 Internal Server Errors

I have looked at the request trace for several requests that resulted in the same outcome.
What will happen is I'll get a HttpModule="iisnode", Notification="EXECUTE_REQUEST_HANDLER", HttpStatus=500, HttpReason="Internal Server Error", HttpSubstatus=1013, ErrorCode="The pipe has been ended. (0x6d)"
This is a production API. Fewer than 1% of requests get this result but it's not the requests themselves - I can reissue the same request and it'll work.
I log telemetry for every API request - basics on the way in, things like http status and execution time as the response is on its way out.
None of the requests that get this error are in telemetry, which makes me think something is happening somewhere between IIS and iisnode.
If anyone has resolved this or has solid thoughts on how to pin down what the root issue is I'd appreciate it.
Well, for me, what's described here covered the bulk of the issue: github.com/Azure/iisnode/issues/57. Setting keepAliveTimeout to 0 on the Express server reduced the 500s significantly.
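In code terms, that change is just one line on the HTTP server Express creates (a sketch; the port fallback is arbitrary):
const express = require('express');
const app = express();
// ...routes go here...
const server = app.listen(process.env.PORT || 3000);
// Setting keepAliveTimeout to 0 disables Node's keep-alive socket timeout,
// which is the change described in the linked iisnode issue.
server.keepAliveTimeout = 0;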
Once the majority of the "noise" was eliminated it was much easier to correlate the remaining 500s that would occur to things I could see in my logs. For example, I'm using a 3rd party node package to resize images and a couple of the "images" that were loaded into the system weren't images. Instead of gracefully throwing an exception, the package seems to exit out of the running node process. True story. So on Azure, it would get restarted, but while that was happening requests would get a 500 internal server error.

ForEach Bulk Post Requests are failing

I have this script where I'm taking a large dataset and calling a remote API with request-promise, using a POST method. If I do this individually, the request works just fine. However, if I loop through a sample set of 200 records using forEach and async/await, only about 6-15 of the requests come back with a status of 200; the others return a 500 error.
I've worked with the owner of the API, and their logs only show the requests that returned 200. So I don't think Node is actually sending out the ones that come back as 500.
Has anyone run into this, and/or know how I can get around this?
To my knowledge, there's no code in Node.js that automatically makes a 500 HTTP response for you. Those 500 responses are apparently coming from the target server's network. You could look at a network trace on your server machine to see for sure.
If they are not in the target server logs, then it's probably coming from some defense mechanism deployed in front of their server to stop misuse or overuse of their server (such as rate limiting from one source) and/or to protect its ability to respond to a meaningful number of requests (proxy, firewall, load balancer, etc...). It could even be part of a configuration in the hosting facility.
You will likely need to find out how many simultaneous requests the target server will accept without error and then modify your code to never send more than that number of requests at once. They could also be measuring requests/sec, so it might not only be an in-flight count but also the rate at which requests are sent.
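As a sketch of that kind of throttling (the endpoint URL and batch size here are placeholders; tune the batch size to whatever the target server tolerates):
const rp = require('request-promise');
async function postInBatches(records, batchSize) {
  const results = [];
  for (let i = 0; i < records.length; i += batchSize) {
    const batch = records.slice(i, i + batchSize);
    // Wait for this batch to settle before starting the next one,
    // so no more than batchSize requests are ever in flight.
    const responses = await Promise.all(
      batch.map(record =>
        rp({
          method: 'POST',
          uri: 'https://api.example.com/records', // placeholder endpoint
          body: record,
          json: true
        }).catch(err => ({ failed: true, statusCode: err.statusCode }))
      )
    );
    results.push(...responses);
  }
  return results;
}
// e.g. postInBatches(sampleSet, 5).then(results => console.log(results.length));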

Apache & Node Reverse proxy, Socket Timeout, Keepalive

I have an API application built with Node and Express. I'm using Apache as a reverse proxy, with keep-alive enabled.
Some requests (specifically POST/PUT) will end up hanging for 2 minutes due to the default 2-minute socket timeout. It doesn't always happen, but it happens often. As soon as the timeout is hit, the client then gets the response and continues sending other requests.
It seems to be due to Keep-Alive, although I'm not 100% sure.
Adding the header:
res.set('Connection', 'close');
makes the problem go away, which is why I think this is related to keep-alive.
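To try it across the whole API, the header can be set on every response with a small middleware, e.g.:
app.use((req, res, next) => {
  res.set('Connection', 'close'); // opt out of keep-alive for this response
  next();
});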
Been researching the issue for 2 days with no success.
Is it worth setting the header and accepting the consequences, or is there any other solution/explanation to this behaviour?
Turns out this was all caused by a "204 - No Content" response on DELETE requests sent before the PUT/POST requests. Changing from
res.send(data)
To
res.status(204).end()
Fixed the issue for me.
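In context, the handler change looked roughly like this (the route and delete logic are illustrative, not the actual app):
app.delete('/items/:id', async (req, res) => {
  await removeItem(req.params.id); // placeholder for the real delete logic
  // Previously this was res.send(data); returning the 204 with no body at all
  // stopped the following PUT/POST requests from hanging behind the proxy.
  res.status(204).end();
});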

Response body missing characters

I've seen this issue happen on multiple machines, using different languages and server-side environments. It seems to always be IIS, but it may be more widespread.
On slower connections, characters are occasionally missing from the response body. It happens somewhere between 25% and 50% of the time but only on certain pages, and only on a slow connection such as VPN. A refresh usually fixes the issue.
The current application in question is .NET 4 with SQL Server.
Example:
<script>
document.write('Something');
</script>
is being received by the client as
<scrit>
document.write('Something');
</script>
This causes the JavaScript inside the tag to be printed on the page instead of executing.
Does anyone know why this occurs? Is it specific to IIS?
Speaking generally, the problem you describe would require corruption at the HTTP layer or above, since TCP/IP has checksums, packet lengths, sequence numbers, and re-transmissions to avoid this sort of issue.
That leaves:
The application generating the data
Any intermediate filters between the application and the server
The HTTP server returning the data
Any intermediary HTTP proxies, transparent or otherwise
The HTTP client requesting the data
The user-agent interpreting the data
You can diagnose further based on network captures performed at the server edge and at the client edge.
Examine the request made by the client at the client edge to verify that the client is making a request for the entire document, and is not relying upon cache (no Range or If-* headers).
If the data is correct when it leaves the server (pay particular attention to the Content-Length header and verify it is a 200 response), neither the server nor the application are at fault.
If the data is correct as received by the client, you can rule out intermediary proxies.
If there is still an issue, it is a user-agent issue.
If I had to psychically debug such a problem, I would look first at the application to ensure it is generating the correct document, then assume some interloper is modifying the data in transit. (Some HTTP proxy for wan-acceleration, aggressive caching, virus scanning, etc...) I might also assume some browser plugin or ad blocker is modifying the response after it is received.
I would not, however, assume it is the HTTP server without very strong evidence. If corruption is detected on the client, but not the server, I might disable TCP Offload and look for an updated NIC Driver.
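If you want a quick check of whether the bytes received match what the server declared, without a full packet capture, something like this plain Node script (the URL is a placeholder) can be run from both the client side and the server side:
const https = require('https');
https.get('https://example.com/affected-page', (res) => {
  let received = 0;
  res.on('data', (chunk) => { received += chunk.length; });
  res.on('end', () => {
    const declared = parseInt(res.headers['content-length'], 10);
    console.log(`status=${res.statusCode} declared=${declared} received=${received}`);
    // A mismatch (or malformed markup in the saved body) points at truncation
    // or modification somewhere between the server and this client.
  });
});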

Set timeout for request to my server

I want all requests to my server to get a response within 2 seconds.
If my server has an issue (for example: it's turned off), the user should get an error response after 2 seconds.
The situation now is that if there is an issue with my server, the user's browser tries to connect for a long time. I don't want this.
Currently I am not using any load balancer or CDN.
Sometimes my server goes down. I don't want my users to wait forever for a response and hang the browser.
I think a load-balancing service or a CDN could help.
What I want is that after 2 seconds, the service in front of my server returns a default error message.
Which service can handle this for me?
I checked out CloudFront and Cloudflare, and didn't find anything like that.
More info:
1. Caching cannot help, because my server returns different results for every request.
2. I cannot use async code.
Thank you.
You can't configure a 2-second timeout in CloudFront; however, you can configure it to return an error page (which you might host anywhere outside of your server) if the server is not responding properly.
Take a look here: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html
Moreover, these error responses are cached (you can specify for how long they will be cached), so subsequent users will get the error right away.
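If you are willing to run a small front-end of your own rather than a managed service, the behaviour asked for (give up after 2 seconds and return a default error) looks roughly like this in plain Node; the hostname and ports are placeholders and this is only a sketch, not a production-ready proxy:
const http = require('http');
http.createServer((clientReq, clientRes) => {
  const upstream = http.request({
    host: 'origin.internal',          // placeholder: your real server
    port: 8080,                       // placeholder
    path: clientReq.url,
    method: clientReq.method,
    headers: clientReq.headers
  }, (upstreamRes) => {
    clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
    upstreamRes.pipe(clientRes);
  });
  // If the origin is silent for 2 seconds, give up and send a default error.
  upstream.setTimeout(2000, () => {
    upstream.destroy();
    if (!clientRes.headersSent) {
      clientRes.writeHead(504, { 'Content-Type': 'text/plain' });
      clientRes.end('Sorry, the server did not respond in time.');
    }
  });
  upstream.on('error', () => {
    if (!clientRes.headersSent) {
      clientRes.writeHead(502, { 'Content-Type': 'text/plain' });
      clientRes.end('Sorry, the server is unavailable.');
    }
  });
  clientReq.pipe(upstream);
}).listen(3000);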
