I have an endpoint in my API which takes some time to return a response (>1 min).
I have deployed my API to Elastic Beanstalk, and now when I try to access it I get a 504 Gateway Time-out from nginx:
<html>
<head><title>504 Gateway Time-out</title></head>
<body>
<center><h1>504 Gateway Time-out</h1></center>
</body>
</html>
How can I fix that?
Timeout errors like yours should ideally be fixed by improving the software itself, but if that isn't possible for some reason, you can increase the timeouts of nginx and of the Load Balancer.
On previous versions of Amazon Linux you would deploy your code with a custom nginx configuration inside a directory named .ebextensions.
With Amazon Linux 2 things work much the same way, with one difference: instead of .ebextensions, you put your platform (nginx) configuration in the .platform folder.
So, inside your app's Elastic Beanstalk deployment package, create the following structure:
eb-package
└── src
    ├── .ebextensions
    └── .platform
        └── nginx
            └── conf.d
                └── timeout.conf
Then add the following content to your timeout.conf file:
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;
Be aware that in some cases you will also need to increase your Load Balancer's timeout, either by configuring it manually in the AWS Console (under EC2) or by providing a configuration file inside the .ebextensions directory.
For example (note: this configuration varies with the type of Load Balancer you use):
option_settings:
  - namespace: aws:elb:policies
    option_name: ConnectionSettingIdleTimeout
    value: 300
Note the difference between Classic Load Balancer and Application Load Balancer configuration: the Classic Load Balancer uses the aws:elb:policies namespace and defaults to a 60-second idle timeout, whereas the newer Application Load Balancer uses the aws:elbv2:loadbalancer namespace, for which AWS's docs (as of 08/29/2021) list no default timeout.
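If your environment uses an Application Load Balancer instead, a roughly equivalent .ebextensions snippet would be the sketch below (it uses the IdleTimeout option of the aws:elbv2:loadbalancer namespace; adjust the value to your needs):
option_settings:
  - namespace: aws:elbv2:loadbalancer
    option_name: IdleTimeout
    value: 300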
For more info, see:
Extending Elastic Beanstalk Linux platforms
Elastic Beanstalk general configuration options for all environments
Related
I am using AWS Elastic Beanstalk to host an Express/Node.js API server.
It works well with normal APIs, but I am getting this 504 timeout error with one API that can take up to 20 minutes to respond.
So I thought I needed to increase the maximum request time of nginx and the Node.js server, and I did that by adding configuration under AWS EB's .ebextensions and .platform directories.
Here is what I did.
.platform/nginx/conf.d/timeout.conf
client_header_timeout 3000s;
client_body_timeout 3000s;
send_timeout 3000s;
proxy_connect_timeout 3000s;
proxy_read_timeout 3000s;
proxy_send_timeout 3000s;
.ebextensions/network.config
option_settings:
  - namespace: aws:elasticbeanstalk:command
    option_name: Timeout
    value: 3000
But I am still getting this error and I can't understand why.
Note also that the Elastic Beanstalk server sits behind CloudFront and AWS Route 53, which provide the public domain name and the HTTPS connection.
If somebody knows how to fix this, it would be appreciated a lot.
If you are using a "Load balanced" environment type, check the "Connection idle timeout" setting of the Load Balancer.
To verify whether your environment uses an ELB, go to "Elastic Beanstalk" -> "<your_environment>" -> "Configuration" and check whether the "Load balancer" category is present; it also shows which type of ELB you are using. Then change the "Connection idle timeout" setting in the EC2 console to an appropriate value.
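If you prefer the CLI to the console, something along these lines should raise the idle timeout of an Application Load Balancer (a sketch only; substitute your own load balancer ARN and value — for a Classic Load Balancer use the console route described above instead):
aws elbv2 modify-load-balancer-attributes \
    --load-balancer-arn <your-load-balancer-arn> \
    --attributes Key=idle_timeout.timeout_seconds,Value=3000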
I have a Node 10 app running on Elastic Beanstalk, and it throws 413 errors when the request payload is larger than ~1MB.
<html>
<head>
<title>413 Request Entity Too Large</title>
</head>
<body>
<center>
<h1>413 Request Entity Too Large</h1>
</center>
<hr>
<center>nginx/1.16.1</center>
</body>
</html>
The request is not hitting my app at all; it's being rejected by nginx.
I have tried configuring AWS to increase the size of the allowed request body based on this answer, to no avail.
I've tried adding a file at .ebextensions/01_files.config with the contents:
files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      client_max_body_size 20M;
That didn't work, so I tried adding the file directly to .ebextensions/nginx/conf.d/proxy.conf with only:
client_max_body_size 20M;
And this also didn't work. Then I SSH'ed into the instance and added the file directly. Upon re-deploy, the entire conf.d directory was deleted and re-written, without this file.
How can I get AWS Elastic Beanstalk with Node.js 10 running on 64bit Amazon Linux 2/5.1.0 to accept nginx configuration?
The nginx path you are trying to use (/etc/nginx/conf.d/proxy.conf) applies to Amazon Linux 1.
Since you are using Amazon Linux 2, you need to configure nginx differently. On AL2, nginx settings go in .platform/nginx/conf.d/, not in .ebextensions, as shown in the docs.
Therefore, you could have the following .platform/nginx/conf.d/myconfig.conf with content:
client_max_body_size 20M;
The above is only an example of the config file. I can't verify whether that particular setting will work, but you are definitely using the wrong folders to set nginx options.
My recommendation would be to first make it work manually over SSH, as you are attempting now. If nothing else works, you may find that you need to override the entire nginx configuration by providing your own .platform/nginx/nginx.conf file.
Adding client_max_body_size 20M; in a file at .platform/nginx/conf.d/proxy.conf fixed it for me. Just make sure you are actually running Amazon Linux 2.
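For reference, the resulting layout in the deployment bundle would look roughly like this (the file name proxy.conf is arbitrary; any .conf file in that folder is picked up):
your-app
└── .platform
    └── nginx
        └── conf.d
            └── proxy.conf   # contains: client_max_body_size 20M;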
I have a Node.js application on Elastic Beanstalk. We are considering using Socket.IO for a feature.
I read in some places that Socket.IO support has to be manually enabled in AWS Elastic Beanstalk, specifically when it uses the default nginx proxy.
I read that, by default, an Elastic Beanstalk instance has an nginx proxy in front of it that is not configured to allow WebSockets.
Is this correct? If so, how do I enable Socket.IO support in AWS EB?
This is correct. You'll need to do some additional configuration for your Elastic Beanstalk deployment to get WebSockets (Socket.IO or otherwise) to work.
Once you create your Elastic Beanstalk Environment, you'll need to configure your load balancer to accept TCP connections, and add a configuration file to your node project's root directory:
Configure the Load Balancer:
Head over to your EC2 console and select the Load Balancers tab
Select the load balancer that belongs to your ELB environment from the list
Select the Listeners tab
Change the default entries' Instance Protocol to TCP (an .ebextensions equivalent is sketched below)
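If you would rather keep this in code than click through the console, a rough .ebextensions equivalent is the sketch below (assuming a Classic Load Balancer with a listener on port 80; adjust the namespace port to match your environment):
option_settings:
  - namespace: aws:elb:listener:80
    option_name: ListenerProtocol
    value: TCP
  - namespace: aws:elb:listener:80
    option_name: InstanceProtocol
    value: TCP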
Add the Configuration File:
In the root directory of your node project, create a folder called .ebextensions
Create a file called enable-websockets.config in your new .ebextensions folder with the following contents:
container_commands:
  enable_websockets:
    command: |
      sed -i '/\s*proxy_set_header\s*Connection/c \
              proxy_set_header Upgrade $http_upgrade;\
              proxy_set_header Connection "upgrade";\
              proxy_pass_request_headers on;\
      ' /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf
This file tells the NGINX reverse proxy how to handle the HTTP 101 upgrade status code that WebSockets need to communicate with your application server.
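Note that the container_commands approach above targets the older Amazon Linux platform (it edits the staged 00_elastic_beanstalk_proxy.conf). On Amazon Linux 2, where nginx is configured through the .platform directory as described earlier in this page, a rough equivalent would be to drop a location block into the server block via a file such as .platform/nginx/conf.d/elasticbeanstalk/websockets.conf (the file name is arbitrary, and this sketch assumes your Node.js app listens on the platform's default port 8080):
location /socket.io/ {
    proxy_http_version 1.1;
    proxy_set_header   Upgrade $http_upgrade;
    proxy_set_header   Connection "upgrade";
    proxy_pass         http://127.0.0.1:8080;
}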
I have a small sample project located here that illustrates the problem I am seeing when working with an nginx + node + host Docker stack.
I have 2 containers:
A Node (Express) application that simply returns a JSON object. It is CORS-enabled based on this website. Its port is published to the host via 3000:80
An nginx server that is also CORS-enabled based on this website. It only serves static content (index.html and main.js) from the default location (/usr/share/nginx/html). Its port is published via 8080:80.
When running the containers individually from host I can access the node server and see the JSON object being returned. When I access the nginx server, I see my index.html and the javascript code from main.js runs.
Now I have the node app container linked to the nginx server container. From inside my main.js file of the nginx container, I attempt to access the server at http://nodeapp/api. I get a CORS error:
XMLHttpRequest cannot load http://nodeapp/api. No
'Access-Control-Allow-Origin' header is present on the requested
resource. Origin 'http://localhost:8080' is therefore not allowed
access.
The strange thing is, the response header indicates it is coming from nginx and not my express application as I would expect. The nginx container is also not logging anything.
Things that worked
If I change the url for the XMLHttpRequest to the node container's IP (say 172.17.0.2) it works as expected and the response header indicates it is coming from the express server. In my /etc/hosts file there is an entry:
172.17.0.2 nodeapp abc123ContainerId quickserve_nodeapp_run_1
When I curl the node container from an interactive tty container it also works as expected.
If I load the node container and use http-server (server on host) it works as expected and the response header indicates it is coming from the express server.
Just in case it has an influence: an old thread (2013) mentioned a CORS option on the Docker daemon.
Nowadays (Q4 2015), the docker daemon includes:
--api-cors-header="" Set CORS headers in the remote API
To set cross origin requests to the remote api please give values to --api-cors-header when running Docker in daemon mode. Set * (asterisk) allows all, default or blank means CORS disabled
$ docker -d -H="192.168.1.9:2375" --api-cors-header="http://foo.bar"
That might be a setting to use in your case.
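Depending on your Docker version, the same setting can usually also be placed in the daemon configuration file rather than on the command line; a sketch, assuming your installation reads /etc/docker/daemon.json and still supports this key:
{
  "api-cors-header": "http://foo.bar"
}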
I deploy a Node.js application on AWS Elastic Beanstalk and want to use the Socket.IO feature, which is based on the WebSocket protocol. I know there's a discussion here about connecting directly to the Node.js servers instead of using nginx as a proxy, but I still want to keep nginx as the proxy server because of the extra features it provides, such as serving static files, etc.
I found that nginx has supported WebSocket proxying since 1.3.13, but AWS Elastic Beanstalk still seems to use nginx 1.2.x.
So I am wondering whether there's any way to upgrade the nginx version under Beanstalk, and how to enable WebSocket proxying to the Node.js server.
Thanks
We use Elastic Beanstalk with multiple Docker containers (which lets you bring your own nginx version) with the following:
1. Nginx config
location /ws/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://unix:/<<socket>>;
}
2. Enable TCP mode load balancing in the Elastic Load Balancer, if you are using one.
You would need an additional module enabled, which can be done when compiling nginx.
To do that, add the line below to your configure script:
--add-module=/root/nginx_patched/nginx_tcp_proxy_module
It is required if you want to get sockets working, for example for Node.js Socket.IO. A full tutorial can be found here.
Sorry for just giving a link, but it is quite a broad topic, and you may need a step-by-step guide if you are starting from scratch.
Hope it helps.