504 Gateway Timeout in Web App using Docker - Azure

I am deploying a Python-based web application to Azure, using a Docker container via the Azure Portal. When I open the app URL it shows this error, and the logs display: "Waiting for response to warmup request for container". Please help me with this.
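The log line "Waiting for response to warmup request for container" is what Azure App Service prints while it waits for a custom container to answer its warmup ping; if the container needs longer than the default (around 230 seconds) to start, or listens on a port App Service doesn't know about, the request dies with a 504. A sketch of the usual first checks, assuming the az CLI and placeholder resource names:

# Give the container up to 30 minutes to warm up (1800 is the maximum),
# and tell App Service which port the app listens on (8000 is illustrative).
az webapp config appsettings set \
  --resource-group <my-resource-group> \
  --name <my-app-name> \
  --settings WEBSITES_CONTAINER_START_TIME_LIMIT=1800 WEBSITES_PORT=8000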

Try increasing the timeout on your web server (e.g. Nginx) and/or at the Python level, to allow your page to execute for longer.
You receive a 504 error when the web server waits too long for a page to be generated: it closes the HTTP request because it has reached its timeout.
Here are some Nginx directives to increase the timeouts:
server {
    server_name _;
    root /var/www/html/public/;

    ###
    # Increase timeouts to avoid 504 gateway timeouts; values are in seconds.
    # client_max_body_size 0 removes the size limit, for big file uploads.
    ###
    client_max_body_size 0;
    proxy_send_timeout 3600;
    proxy_read_timeout 3600;
    fastcgi_send_timeout 3600;
    fastcgi_read_timeout 3600;

    # ... your config
}
I'm not a Python developer, but you may find timeout settings at the Python level too (PHP, for comparison, has max_execution_time to define how long a page may run).
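For a Python app the equivalent knob usually lives on the WSGI server rather than in Python itself. A minimal sketch, assuming the container serves the app with Gunicorn (app:app is a placeholder module:callable); Gunicorn kills and restarts any worker that stays silent for more than --timeout seconds, which also surfaces as 504s upstream:

# Allow requests to run for up to an hour before the worker is recycled.
gunicorn --bind 0.0.0.0:8000 --timeout 3600 app:app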

Related

How to solve AWS Elastic Beanstalk 504 Timeout Error?

I am using AWS Elastic Beanstalk to host an Express/Node.js API server.
It works well for normal APIs, but I am getting a 504 Timeout error on one API that can take up to 20 minutes.
So I thought I needed to increase the maximum request time of Nginx and the Node.js server, and I did so by configuring AWS EB .ebextensions and .platform files.
Here is what I did.
.platform/nginx/conf.d/timeout.conf
client_header_timeout 3000s;
client_body_timeout 3000s;
send_timeout 3000s;
proxy_connect_timeout 3000s;
proxy_read_timeout 3000s;
proxy_send_timeout 3000s;
.ebextensions/network.config
option_settings:
  - namespace: aws:elasticbeanstalk:command
    option_name: Timeout
    value: 3000
But I am still getting this error, and I can't understand why.
One more note: the Elastic Beanstalk server sits behind CloudFront and AWS Route 53, which give it a public domain and an HTTPS connection.
If somebody knows how to fix this, it would be appreciated a lot.
If you are using a "Load balanced" environment type, check the "Connection idle timeout" setting of the load balancer.
To check whether your environment uses an ELB, go to "Elastic Beanstalk" -> "<your_environment>" -> "Configuration" and see whether the "Load balancer" category is present; it also shows which type of ELB you are using. Then change the connection idle timeout setting in the EC2 console to a suitable value.
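Note also that the request path here includes CloudFront, which enforces its own origin response timeout (30 seconds by default), so a 20-minute API call will be cut off there regardless of the Nginx and load balancer settings. The load balancer idle timeout can also be kept in source control next to the other settings; a sketch assuming a Classic Load Balancer (an Application Load Balancer uses the aws:elbv2:loadbalancer namespace with the IdleTimeout option instead):

.ebextensions/elb-idle-timeout.config
option_settings:
  # Keep idle connections open long enough for the slow API to respond.
  - namespace: aws:elb:policies
    option_name: ConnectionSettingIdleTimeout
    value: 3000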

Nginx causes CORS error 1 minute after starting a file upload

I am using NestJS as the backend behind Nginx, and I get a CORS error one minute after starting a file upload. At first the error appeared as soon as the upload began, but I solved that by editing the Nginx config and increasing client_max_body_size; now the error occurs one minute into the upload. I tried to increase the timeouts by adding
server {
    ...
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    ...
}
but this did not solve my problem.
I found that the problem was on the client side: the React app's axios client was configured with a 60 s timeout.
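For reference, a minimal sketch of that client-side fix (the instance name and values are illustrative; axios timeouts are in milliseconds, and 0 disables the timeout entirely):

import axios from "axios";

// A 60 000 ms timeout aborts the upload after one minute; the aborted
// request then surfaces in the browser as a CORS-looking error.
const api = axios.create({
  baseURL: "/api",         // illustrative
  timeout: 5 * 60 * 1000,  // allow long uploads, or 0 for no timeout
});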

Nginx as proxy for node.js - failing requests

I have nginx set up as a proxy to node.js for long polling, like this:
location /node1 {
    access_log on;
    log_not_found on;
    proxy_pass http://127.0.0.1:3001/node;
    proxy_buffering off;      # stream long-poll responses immediately
    proxy_read_timeout 60;    # seconds to wait for the upstream response
    break;                    # stop processing rewrite-module directives
}
Unfortunately, about half of the long-poll requests return an error with an empty response. My nginx is the version DreamHost offers, 0.8.53, and a long-poll request should be held on the server for about 30 seconds.
The case is that:
querying node.js directly like:
curl --connect-timeout 60 --max-time 60 --form "username=User" http://127.0.0.1:3001/node/poll/2/1373730895/0/0
works fine, but going through nginx:
curl --connect-timeout 60 --max-time 60 --form "username=User" http://www.mydomain.com/node1/poll/2/1373730895/0/0
fails in about half of the cases. The failed requests do not appear in the nginx access_log (the successful ones are there), and curl returns:
curl: (52) Empty reply from server
It might also be connected with higher traffic volume, as I don't see this yet on another site that has lower traffic and pretty much the same settings.
I would be very grateful for any help with this issue, or for hints on how to debug it further.
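One way to debug further: requests that never make it into the access_log often do leave a trace in the error log. A sketch, assuming you can edit the server config and reload nginx:

# In the main or server context; use "debug" instead of "info"
# only if nginx was built with --with-debug.
error_log /var/log/nginx/error.log info;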

Nginx upstream configuration

I am trying to configure nginx with an upstream block.
We have 3 machines running the application server, and nginx proxies all requests to them.
I used the following configuration in nginx:
upstream appcluster {
    server host1.example.com:8080 max_fails=2 fail_timeout=300s;
    server host2.example.com:8080 max_fails=2 fail_timeout=300s;
}
Now the issue is that if a request comes to nginx while one server is down for unknown reasons, it waits a long time for a response, or sometimes gets a connection timeout.
Can someone suggest the right configuration so that appcluster returns a response without this latency or connection timeout whenever a server doesn't respond?
Then this can help: check proxy_next_upstream.
This directive determines in which cases the request is passed on to the next server.
Your server block should look like this, for example:
server {
    location / {
        proxy_pass http://appcluster;
        proxy_next_upstream error timeout http_404;
    }
}
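To reduce the latency of the failover itself, it may also help to shorten how long nginx waits before giving up on a dead host; a sketch with an illustrative value (the default proxy_connect_timeout is 60s, which matches the long waits described):

server {
    location / {
        proxy_pass http://appcluster;
        proxy_next_upstream error timeout http_404;
        proxy_connect_timeout 3s;   # fail over to the next upstream quickly
    }
}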

socket.io slow response after using nginx

I used my local setup without nginx to serve my node.js application; I was using socket.io, and the performance was quite good.
Now I am using nginx to proxy my requests, and I see that socket.io has a huge response time: my page renders fast, but the data rendered by socket.io is an order of magnitude slower than before.
I am using NGINX 1.1.16 and here is the conf:
gzip on;
server {
    listen 80;
    server_name localhost;
    #charset koi8-r;
    access_log logs/host.access.log main;

    location / {
        proxy_pass http://localhost:9999;
        root html;
        index index.html index.htm;
    }
}
Even though everything is working, I have 2 issues:
1. The socket.io response is slower than before. With NGINX the response time is around 12-15 seconds; without it, it's barely 300 ms (tried this with Apache Benchmark).
2. I see this message in the console, which was not there before using NGINX:
[2012-03-08 09:50:58.889] [INFO] console - warn - 'websocket connection invalid'
You could try adding:
proxy_buffering off;
See the docs for more info, but I've seen some chatter on various forums that buffering increases the response time in some cases.
Is the console message from NGINX or Socket.IO?
The NGINX proxy does not talk HTTP/1.1, which may be why the websocket connection is not working.
Update:
Found a blog post about it: http://www.letseehere.com/reverse-proxy-web-sockets
A proposed solution:
http://blog.mixu.net/2011/08/13/nginx-websockets-ssl-and-socket-io-deployment/
Nginx supports websockets starting from version 1.3.13, and it should be straightforward to set up. Check the link below:
http://nginx.org/en/docs/http/websocket.html
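For nginx >= 1.3.13, the core of that setup is forwarding the websocket Upgrade handshake over HTTP/1.1; a minimal sketch following the nginx docs (the upstream address mirrors the config above):

location /socket.io/ {
    proxy_pass http://localhost:9999;
    proxy_http_version 1.1;                    # websockets require HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;    # pass the Upgrade handshake through
    proxy_set_header Connection "upgrade";
}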
