I want every request to my server to get a response within 2 seconds.
If my server has an issue (for example, it is down), the user should get an error response after 2 seconds.
Right now, if there is an issue with my server, the browser keeps trying to connect for a long time. I don't want this.
Currently I am not using any load balancer or CDN.
Sometimes my server goes down. I don't want my users to wait forever for a response and have the browser hang.
I think a load balancing service or a CDN could help.
What I want is that after 2 seconds, the service sitting in front of my server returns a default error message.
Which service can handle this for me?
I checked out CloudFront and Cloudflare, and didn't find anything like that.
More info:
1. Caching cannot help, because my server returns different results for every request.
2. I cannot use async code.
Thank you.
You can't configure a 2-second timeout in CloudFront; however, you can configure it to return an error page (which you might host anywhere outside of your server) if your server is not responding properly.
Take a look here: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html
Moreover, these error responses are cached (you can specify how long they are cached for), so subsequent users will get the error right away.
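For illustration, here is roughly what adding such a custom error response could look like with the AWS SDK for JavaScript (a sketch only: the distribution ID and error-page path are placeholders, and the error page has to be reachable through one of the distribution's origins):
const AWS = require('aws-sdk');
const cloudfront = new AWS.CloudFront();

async function addTimeoutErrorPage(distributionId) {
  // Fetch the current distribution config, then add a custom error response to it.
  const { DistributionConfig, ETag } = await cloudfront
    .getDistributionConfig({ Id: distributionId })
    .promise();

  DistributionConfig.CustomErrorResponses = {
    Quantity: 1,
    Items: [{
      ErrorCode: 504,                       // origin timed out
      ResponsePagePath: '/errors/504.html', // placeholder static page served by another origin
      ResponseCode: '504',
      ErrorCachingMinTTL: 10                // cache the error response for 10 seconds
    }]
  };

  await cloudfront
    .updateDistribution({ Id: distributionId, IfMatch: ETag, DistributionConfig })
    .promise();
}
The ErrorCachingMinTTL value is what controls how long the error page stays cached, so subsequent users see it immediately instead of waiting on the origin.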
Related
I have built a simple Python/Flask app for sending automatic messages to Slack and Telegram after receiving a POST request of the form:
response = requests.post(url='https://my-cool-endpoint.a.run.app/my-app/api/v1.0/',
                         json={'message': msg_body, 'urgency': urgency, 'app_name': app_name},
                         auth=(username, password))
or even a similar curl request. It works well on localhost, as well as in a containerized application. However, after deploying it to Cloud Run, the requests keep resulting in the following 503 error:
POST 503 663 B 10.1 s python-requests/2.24.0 The request failed because either the HTTP response was malformed or connection to the instance had an error.
Does it have anything to do with a Flask timeout or something like that? I really don't understand what is happening, because the response doesn't (and shouldn't) take more than a few seconds (usually less than 5 s).
Thank you all.
--EDIT
Problem solved after thinking about AhmetB's reply. I found that I was setting the host to the public IP address of the SQL instance, which does not work once the app runs on Cloud Run. For it to work, you must replace the host with a unix_socket and set its path (typically /cloudsql/<PROJECT>:<REGION>:<INSTANCE>).
Thank you all! This question is closed.
I have a Couch/Pouch app that seems to work correctly, but suffers strange delays and fills the browser log with CORS errors. The CORS errors only occur on timed out GETs, because their responses don't supply CORS headers.
Using browser dev tools I can see many successful polling requests that look like this:
GET https://couch.server/mydb/_changes
?style=all_docs
&filter=_view
&view=visible/person
&since=369-69pI ...... bTC_376CZ
&limit=100
response 200
... and a bad one like this ...
OPTIONS https://couch.server/mydb/_changes
?style=all_docs
&feed=longpoll
&heartbeat=10000
&filter=_view
&view=visible/person
&since=369-69pI ...... bTC_376CZ
&limit=100
response 200
GET https://couch.server/mydb/_changes
?style=all_docs
&feed=longpoll
&heartbeat=10000
&filter=_view
&view=visible/person
&since=369-69pI ...... bTC_376CZ
&limit=100
response 524
So there are just two differences between the good case and the bad case. In the bad case PouchDB:
precedes the GET request with an OPTIONS request
specifies a longpoll feed with a 10-second heartbeat
The defect, apparently, is that CouchDB's 524 response has no CORS headers!
I have four such live: true, retry: true replicators, so my browser logs are showing four red-inked errors every ten seconds.
I would post this as an issue in the CouchDB repository, but I wanted some feedback here first; I could easily be misunderstanding something.
Other factors:
I host the client code on GitHub Pages and serve it through Cloudflare.
Access to CouchDB from clients also goes through Cloudflare.
CouchDB sits behind Nginx on my VPS.
Let me know if there are further details I should be providing, please.
Credit for answering this question should actually go to #sideshowbarker, since he made me realize that the error was not in CouchDB but in my Cloudflare settings.
In those settings I had my CouchDB site set to DNS and HTTP proxy (CDN) mode (orange cloud icon) rather than DNS only (grey cloud icon). Switching to DNS only and, perhaps unnecessarily, wiping the cache solved the problem after a considerable delay (over an hour?).
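As a side note, a quick way to confirm that the 524 comes from Cloudflare rather than CouchDB is to request the failing longpoll URL from outside the browser (so missing CORS headers don't hide the response) and inspect the headers. A rough sketch for Node 18+, with the URL as a placeholder:
// Run from Node, not the browser, so CORS restrictions don't apply.
const url = 'https://couch.server/mydb/_changes?feed=longpoll&heartbeat=10000&limit=100';

(async () => {
  const res = await fetch(url);
  // A response that passed through Cloudflare's proxy carries a cf-ray header and
  // reports "server: cloudflare"; a direct CouchDB reply carries CouchDB's own Server header.
  console.log(res.status, res.headers.get('server'), res.headers.get('cf-ray'));
})();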
We are working with the following stack:
A Node Express middleware running behind Nginx communicates with an Apache server, which proxies the requests to a Tomcat instance located on another server. Now, when an operation takes more than 15 seconds to complete, another identical request is sent. There is obviously a 15-second retry policy somewhere.
So far I have been unable to detect exactly which component is doing this, and my Google searches have also been fruitless. So my question is whether anyone has experience with something like this: could it be Node, Nginx, or Apache that is sending the second request?
Any suggestions on where the double requests are coming from and what property I need to adjust to turn them off would be greatly appreciated.
The solution was to set the socket timeout property in Apache's mod_jk to 0.
I could be overthinking this, but I just wanted a sanity check:
I'd like my slackbot to ping my server every minute
On receiving a 404, it will stop pinging the server and message me to inform me that the server is down.
Would I just... have a setTimeout function that makes a request and handle errors/success from there?
Or am I missing something...?
Thanks!
Yes, this is called a healthcheck.
Typically what you want is to add a route to your server, say /healthcheck, which just returns a 200 status and an empty page. There is no need to overload your server by requesting a full set of assets every minute for no reason.
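For example, a minimal sketch of such a route, assuming an Express server (adapt it to whatever framework your bot's server uses):
const express = require('express');
const app = express();

// Empty 200 response: the pinger only needs to know the server is alive,
// not to download a full page of assets.
app.get('/healthcheck', (req, res) => res.sendStatus(200));

app.listen(process.env.PORT || 3000);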
Then, as you said, something like:
const request = require('request');

// Healthcheck target; replace with the real URL of your /healthcheck route.
const options = { url: 'https://my-server.example.com/healthcheck' };

// Ping the healthcheck route once a minute.
setInterval(checkStatus, 60000);

function checkStatus() {
  request.get(options, (err, res, body) => {
    if (err || res.statusCode !== 200) {
      // handle the network error / bad status code, e.g. message yourself on Slack
    }
  });
}
Instead of using a custom script to ping and message you, you could also use an uptime service to monitor your bot. There are many to choose from; some, like uptimerobot.com, are even free for small-scale use. I use it for all of my Slack bots and apps, and it works pretty well.
You can also use Google Stackdriver (not sure if it's free). It pings your server at a given interval from various locations around the globe. You can integrate it with your Slack workspace too, and Stackdriver will post a message, just like your custom Slack bot, whenever it doesn't receive a 200 OK from your server.
Hope this helps!
Currently, I am working on a REST API using the Node hapi.js framework. The API is deployed on Heroku.
There is a GET endpoint in the API that makes a request to retrieve data from a third party and processes the data before sending a reply. This particular endpoint times out from time to time. When the endpoint times out, Heroku returns an H12 error. Once it has timed out, subsequent requests to that endpoint also result in H12 errors. I have to restart the application on Heroku to get that endpoint working again. No other endpoints in the API are affected in any way by this error, and they continue to work just fine even after the error has occurred.
In my debugging process and looking through the logs, it seems that there are times when a response is not returned from the third party API, causing the error.
I've tried the following to solve the problem:
I am using the request library to make requests, so I've tried setting a timeout of 5000 ms as part of the options passed to request. It has worked at times: the timeout is triggered and the endpoint sends the timeout error associated with request. This is the behavior I would like, since subsequent requests to the endpoint then work. However, there are times when the request timeout is not triggered but Heroku still returns an H12 error (always after 30 seconds, the Heroku default). After that, subsequent requests to that endpoint return the H12 error (also after 30 seconds). It seems that some sort of process gets "stuck" on Heroku and is not terminated until I restart the app.
I've tried adding a timeout to the hapi.js route config object. I get the same results as above. (Both attempts are sketched below.)
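Roughly, the two attempts look like this (a sketch only; the URL, timeout values, and route path are made up, and newer hapi versions use options where older ones used config):
const Hapi = require('@hapi/hapi');
const request = require('request');

const server = Hapi.server({ port: process.env.PORT || 3000 });

server.route({
  method: 'GET',
  path: '/data',
  options: {
    // Attempt 2: route-level timeout so hapi gives up well before Heroku's 30 s limit.
    timeout: { server: 25000, socket: 26000 }
  },
  handler: (req, h) =>
    new Promise((resolve, reject) => {
      // Attempt 1: outbound timeout on the third-party call made with the request library.
      request.get(
        { url: 'https://third-party.example.com/data', timeout: 5000 },
        (err, res, body) => (err ? reject(err) : resolve(body))
      );
    })
});

server.start();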
I've continued doing research and suspect that the issue has to do with the descriptions given here and here. It seems like setting a timeout at the app server level that can send a SIGKILL to the Heroku worker might do the trick. That seems fairly straightforward in Ruby, but I cannot find much information on how to do this in Node.
Any insight is greatly appreciated. I am aware that a timeout might occur when making a request to a third party. That is not the issue. The issue is that the endpoint seems to get "stuck" on Heroku after a timeout and it becomes unresponsive.
Thanks for the help!
I had a similar issue and after giving up for a day, I came back to it and found my error. I wasn't sending a response to the client when an error occurred on the server side. Make sure you are returning a response no matter what the result of your server side algorithm is. If there is an error, return that. If the request was successful, return that response. I hope that helps.
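For example, a hapi route along these lines always replies, even when the upstream call fails (a sketch only; fetchThirdPartyData is a hypothetical helper wrapping the third-party request, and the status codes are just one reasonable choice):
// 'server' is your existing Hapi.server instance (hapi v17+ handler signature).
server.route({
  method: 'GET',
  path: '/data',
  handler: async (request, h) => {
    try {
      const data = await fetchThirdPartyData(); // hypothetical helper for the outbound call
      return h.response(data).code(200);
    } catch (err) {
      // Reply even on failure, so the dyno's request slot is released instead of
      // hanging until Heroku's 30-second H12 timeout fires.
      return h.response({ error: 'Upstream request failed' }).code(502);
    }
  }
});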
If that doesn't help, check Heroku's guide on handling request timeouts; the "Debugging request timeouts" section in particular could be of help.