I have a Node 10 app running on Elastic Beanstalk, and it throws 413 errors when the request payload is larger than ~1MB.
<html>
<head>
<title>413 Request Entity Too Large</title>
</head>
<body>
<center>
<h1>413 Request Entity Too Large</h1>
</center>
<hr>
<center>nginx/1.16.1</center>
</body>
</html>
The request is not hitting my app at all; it's being rejected by nginx.
I have tried configuring AWS to increase the size of the allowed request body based on this answer, to no avail.
I've tried adding a file at .ebextensions/01_files.config with the contents:
files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      client_max_body_size 20M;
That didn't work, so I tried adding the file directly to .ebextensions/nginx/conf.d/proxy.conf with only:
client_max_body_size 20M;
And this also didn't work. Then I SSH'ed into the instance and added the file directly. Upon re-deploy, the entire conf.d directory was deleted and re-written, without this file.
How can I get AWS Elastic Beanstalk with Node.js 10 running on 64bit Amazon Linux 2/5.1.0 to accept nginx configuration?
The nginx setting you are trying to use (/etc/nginx/conf.d/proxy.conf) is for Amazon Linux 1.
Since you are using Amazon Linux 2, you need to configure nginx through different files. For AL2, the nginx settings go in .platform/nginx/conf.d/, not in .ebextensions, as shown in the docs.
Therefore, you could create the following file, .platform/nginx/conf.d/myconfig.conf, with the content:
client_max_body_size 20M;
The above is only an example of the config file. I can't verify whether this particular setting will solve your problem, but you are definitely using the wrong folders for setting nginx options.
My recommendation would be to first try to make it work manually over SSH, as you are attempting now. If nothing else works, you may find that you need to override the entire nginx configuration by providing your own .platform/nginx/nginx.conf file.
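For reference, the layout would look something like this (the name your-app is just a placeholder for the root of the bundle you deploy, and myconfig.conf is an arbitrary file name; on AL2 any *.conf file placed in that directory should be picked up by the platform's nginx):
your-app
├── .ebextensions
└── .platform
    └── nginx
        └── conf.d
            └── myconfig.conf    (contains: client_max_body_size 20M;)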
Adding client_max_body_size 20M; in the file
.platform/nginx/conf.d/proxy.conf fixed it for me. Just make sure to check whether you are using Amazon Linux 2.
Related
I'm running into status code 413 Request Entity Too Large. I'm running an Amazon Linux 2 AMI instance on AWS's Elastic Beanstalk, which runs an Express server with a POST route that uploads files to an S3 bucket, adds some data to a table, and produces a Kafka message. Everything works properly for files under 1MB.
I understand that nginx's default client_max_body_size is 1MB and that I must change it.
I tried every answer in this thread: Increasing client_max_body_size in Nginx conf on AWS Elastic Beanstalk. The client_max_body_size 10M; directive did end up inside the nginx.conf file, I restarted nginx every time I changed a configuration, and nginx -t reported no syntax problems. Despite all of this, the configuration seems to be completely ignored by my microservice whenever I try to POST a file larger than 1MB.
I also added client_max_body_size 10M; manually to confirm it was picked up: when testing, nginx reported a duplicate directive, proving the line was already included in the nginx.conf file.
I also tried putting my conf files inside a .platform/conf.d/ structure, which did get client_max_body_size 10M; into the nginx.conf file, but it still made no difference for my request.
I've also tried reloading and restarting the nginx service, both to no avail.
I don't have many ideas on where to proceed from here. Any tips?
The link you are giving is for Amazon Linux 1 (AL1). These days all EB platforms are based on AL2, and nginx is configured differently. Namely, you should create a .platform/nginx/conf.d/myconfig.conf file in the root of your application, with the content:
client_max_body_size 10M;
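For what it's worth, that is all the file needs to contain. As far as I can tell, on the AL2 platform these conf.d files are pulled into the http context of the platform-generated nginx.conf, so the limit applies to the proxied application. A sketch of the file, assuming the default EB nginx setup:
# .platform/nginx/conf.d/myconfig.conf
client_max_body_size 10M;
After deploying, you can confirm the directive is active on the instance with something like sudo nginx -T | grep client_max_body_size.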
I have an endpoint in my API which takes some time to return a response (>1 min).
I have deployed my API to Elastic Beanstalk, and now when I try to access that endpoint I get a 504 Gateway Time-out from nginx:
<html>
<head><title>504 Gateway Time-out</title></head>
<body>
<center><h1>504 Gateway Time-out</h1></center>
</body>
</html>
How can I fix that?
Timeout errors such as yours should ideally be fixed by improving the software itself, but if that cannot be done for any reason, you can increase the timeouts of nginx and of your Load Balancer.
In previous versions of Amazon Linux you would deploy your code with custom nginx configuration inside a directory named .ebextensions.
With Amazon Linux 2 things are quite similar, with one difference: instead of .ebextensions, you need to use the .platform folder for your platform's configuration.
So, inside your app's intended Elastic Beanstalk package, create the following structure:
eb-package
├── src
├── .ebextensions
└── .platform
    └── nginx
        └── conf.d
            └── timeout.conf
And add the following content to your timeout.conf file
proxy_connect_timeout 600;  # max time to establish a connection with the upstream app
proxy_send_timeout    600;  # max time between two successive writes to the upstream
proxy_read_timeout    600;  # max time between two successive reads from the upstream
send_timeout          600;  # max time between two successive writes back to the client
You should be aware that in some cases you'll also need to increase your Load Balancer's timeout, either by configuring it manually in the AWS Console (under EC2) or by providing a configuration file inside the .ebextensions directory.
For example (note: this configuration will vary with the type of Load Balancer you use):
option_settings:
  - namespace: aws:elb:policies
    option_name: ConnectionSettingIdleTimeout
    value: 300
Note the difference between Classic Load Balancer and Application Load Balancer configuration: per AWS's docs (as of 08/29/2021), the newer Application Load Balancer has no default timeout listed, while the Classic Load Balancer has a default timeout of 60 seconds. The corresponding namespaces are aws:elb:policies (Classic) vs aws:elbv2:loadbalancer (Application).
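For an Application Load Balancer, the equivalent idle-timeout option lives under the aws:elbv2:loadbalancer namespace. A hedged sketch only; check the option name against the current EB options documentation before relying on it:
option_settings:
  - namespace: aws:elbv2:loadbalancer
    option_name: IdleTimeout
    value: 300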
For more info, see
Extending Elastic Beanstalk Linux platforms
AWS Environments Options
I need the right nginx configuration for my problem.
Suppose the nginx and Node.js server programs are running on the same Debian machine.
For simplicity, the domain name for my website is just webserver.com (with www.webserver.com as an alias).
Now, when someone browses to "webserver.com/", nginx should pass the request to the Node.js application, which runs on a specific port, say 3000. But the images and CSS files should be served by nginx as static files, under paths like webserver.com/images or webserver.com/css; for those, nginx should act as a plain static server.
Now it gets tricky:
But when someone browses to webserver.com/staticsite001 or webserver.com/staticsite002, the request should be served by nginx only; no need for Node.js there.
As for the Node.js side, I am simply setting up my application on port 3000 (for example) to receive the proxied requests from nginx for webserver.com/.
To put it more plainly: when someone browses to webserver.com/staticsite001, nginx should NOT pass the request to the Node application. It should only pass a request to the Node application if it targets the top-level webserver.com/ path that outsiders can see. webserver.com/staticsite001 should be served by nginx alone.
So, how do I do that? And what should the http and server blocks of the nginx configuration look like?
I am familiar with Node.js, but I am new to nginx and to reverse proxying.
Thanks.
The file structure on the Debian hard drive looks like:
/home/wwwexample/staticsite001 (for www.webserver.com/staticsite001/), handled only by nginx
/home/wwwexample/staticsite002 (for www.webserver.com/staticsite002/), handled only by nginx
/home/wwwexample/images
/home/wwwexample/css
and my Node.js application is in /home/nodeapplication
This server block should work:
server {
    listen 80;
    server_name webserver.com www.webserver.com;
    root /home/wwwexample;

    # Everything that doesn't match a more specific location below
    # is proxied to the Node.js app on port 3000.
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
    }

    # Empty locations fall back to nginx's default behavior:
    # serving static files from the root defined above.
    location /staticsite001 {
    }
    location /staticsite002 {
    }
    location /images {
    }
    location /css {
    }
}
The first location makes nginx proxy everything to localhost:3000. The following empty locations instruct nginx to use its default behavior, which is to serve static files from the root directory.
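To make that concrete, with the root above, requests map roughly like this (the file names are just examples):
GET /                      ->  proxied to http://localhost:3000/
GET /staticsite001/a.html  ->  served from /home/wwwexample/staticsite001/a.html
GET /images/logo.png       ->  served from /home/wwwexample/images/logo.png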
Put this code into the file /etc/nginx/sites-available/my-server and create a symlink to it in /etc/nginx/sites-enabled. There is a default config there which you can use as a reference.
After that, you can use the command sudo /usr/sbin/nginx -t to check the configuration. If everything is OK, use /etc/init.d/nginx reload to apply the new configuration.
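Concretely, those steps would look something like this (paths assume the Debian-style nginx layout used in this answer):
sudo ln -s /etc/nginx/sites-available/my-server /etc/nginx/sites-enabled/my-server
sudo /usr/sbin/nginx -t        # check the configuration syntax
sudo /etc/init.d/nginx reload  # apply the new configuration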
I don't have write access to the /etc/nginx/nginx.conf file, and I can see that client_max_body_size is set to a lower value than I need. Before I contact the server administrator to get that bumped up, is it possible to override it using php.ini?
The setup is Drupal running on nginx on a CentOS machine.
No way.
This is an nginx-related setting. PHP does not run inside nginx (it is a separate server, almost certainly a php-fpm daemon), so there is no way for any PHP setting to alter the web server's configuration.
Alright, so I set up a Node.js server quite a while ago on an AWS EC2 micro instance. I was completely new to it and followed various tutorials to get it up and running. It used nginx as a reverse proxy (I believe), and the server was listening on port 8124.
Now the instance got restarted, and I can't for the life of me get access to my server back. I can SSH to it. I can start the server. I can send POST/PUT requests to it through my local command line, but my web browser gives me the nginx 404 page.
This is driving me up the wall - where in the browser/nginx/nodejs chain are things breaking down?
Please help - I'm horribly new at this and it must be a single line somewhere that's broken. I just don't know enough to find it.
My /etc/nginx/sites-enabled/default file simply contains
location / {
    proxy_pass http://127.0.0.1:8124/;
}
Okay, I figured it out. I had to go directly into /etc/nginx/nginx.conf, and in the server block that was there
location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
}
I added the line
proxy_pass http://127.0.0.1:8124/;
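So the resulting block presumably ended up looking roughly like this:
location / {
    root   /usr/share/nginx/html;
    index  index.html index.htm;

    # forward requests to the Node app listening on port 8124
    proxy_pass http://127.0.0.1:8124/;
}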
Oh thank god. That was going to kill me.