Source: AWS Elastic Beanstalk (Amazon Linux 2), .NET Core API
After receiving the 413 Request Entity Too Large error, I researched a solution. Everything works when I connect to the machine over SSH, open /etc/nginx/nginx.conf with nano, and add client_max_body_size 20M;.
But after each new deployment, the configuration reverts to the old one.
Based on my research, I created a .platform/nginx/conf.d/proxy.conf file in the root directory of the project.
proxy.conf content:
client_max_body_size 1024M;
I also created the .platform/00_myconf.config file.
00_myconf.config content:
container_commands:
  01_reload_nginx:
    command: "service nginx reload"
I keep getting the same error (413 Request Entity Too Large) when I upload and deploy after adding these.
When I connect to the machine over SSH and open /etc/nginx/nginx.conf with nano, I can't find the client_max_body_size line in the file.
This is old and hopefully you figured it out already.
If not, the reason you can't find the client_max_body_size line is that it isn't there by default, you have to add it.
SSH into the instance, then:
cd /etc/nginx/conf.d/
sudo nano proxy.conf
paste in:
client_max_body_size 1024M;
and save and exit.
finally run:
sudo service nginx restart
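To confirm nginx actually picked up the directive (a quick check of my own, not part of the original steps), you can validate and dump the effective configuration:
sudo nginx -t
sudo nginx -T | grep client_max_body_size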
Unfortunately, the files stored on EC2 are ephemeral and will be lost on each deploy or if an instance goes down. As yet I have not been able to find an in-code solution for dotnet, but will update when I do.
Edit: Okay, here is the in-code solution (i.e., not dependent on EC2 ephemeral storage):
Make a file in /.platform/nginx/conf.d/proxy.conf with the content:
client_max_body_size 100M;
In the file's properties in Visual Studio, make sure "Copy to output directory" is set to "Always" and the build action is set to "Content".
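If Visual Studio keeps dropping the file from the publish output, you can also pin it in the .csproj directly. A minimal sketch; the ItemGroup below is illustrative and assumes the file sits under the project root:
<!-- Force .platform/nginx/conf.d/proxy.conf into the publish output. -->
<ItemGroup>
  <Content Include=".platform\nginx\conf.d\proxy.conf">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </Content>
</ItemGroup>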
Related
I have a Node 10 app running on Elastic Beanstalk, and it throws 413 errors when the request payload is larger than ~1MB.
<html>
<head><title>413 Request Entity Too Large</title></head>
<body>
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx/1.16.1</center>
</body>
</html>
The request is not hitting my app at all; it's being rejected by nginx.
I have tried configuring AWS to increase the size of the allowed request body based on this answer, to no avail.
I've tried adding a file at .ebextensions/01_files.config with the contents:
files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      client_max_body_size 20M;
That didn't work, so I tried adding the file directly at .ebextensions/nginx/conf.d/proxy.conf, containing only:
client_max_body_size 20M;
And this also didn't work. Then I SSH'ed into the instance and added the file directly. Upon re-deploy, the entire conf.d directory was deleted and re-written, without this file.
How can I get AWS Elastic Beanstalk with Node.js 10 running on 64bit Amazon Linux 2/5.1.0 to accept nginx configuration?
The nginx path you are trying to use (/etc/nginx/conf.d/proxy.conf) is for Amazon Linux 1.
Since you are using Amazon Linux 2, you should use different files to configure nginx. For AL2, the nginx settings belong in .platform/nginx/conf.d/, not in .ebextensions, as shown in the docs.
Therefore, you could have the following .platform/nginx/conf.d/myconfig.conf with content:
client_max_body_size 20M;
The above is only an example of the config file. I can't verify whether the setting itself will work, but you are definitely using the wrong folders to set nginx options.
My recommendation would be to first make it work manually through SSH, as you are attempting now. If nothing else works, you may find that you need to override the entire nginx configuration by providing your own .platform/nginx/nginx.conf file.
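For reference, a full override could look roughly like the sketch below. This is an illustrative skeleton, not the verbatim Elastic Beanstalk default, and the upstream port (5000) is an assumption; start from the nginx.conf already on your instance and add only the directive you need:
# Sketch of .platform/nginx/nginx.conf (full override).
# Assumption: the app listens on 127.0.0.1:5000; check your platform's default.
user                 nginx;
error_log            /var/log/nginx/error.log warn;
pid                  /var/run/nginx.pid;
worker_processes     auto;

events {
    worker_connections 1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    # the directive this whole thread is about
    client_max_body_size 20M;

    server {
        listen 80;

        location / {
            proxy_pass        http://127.0.0.1:5000;
            proxy_set_header  Host $host;
            proxy_set_header  X-Real-IP $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}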
Adding client_max_body_size 20M; in the file
.platform/nginx/conf.d/proxy.conf fixed it for me. Please make sure to check whether you are using Amazon Linux 2.
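A quick way to check which Amazon Linux generation the instance runs (my addition, not from the answer):
cat /etc/os-release   # reports "Amazon Linux 2" on AL2; AL1 identifies itself as "Amazon Linux AMI"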
I've been moving over to Elastic Beanstalk using Amazon Linux 2 and I'm having a problem overwriting the default nginx.conf file. I'm following the AL2 docs for the reverse proxy.
They say, "To override the Elastic Beanstalk default nginx configuration completely, include a configuration in your source bundle at .platform/nginx/nginx.conf:"
My app's folder structure (screenshot omitted) has the file at .platform/nginx/nginx.conf.
When I run my deploy though, I get the error
CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment: Elastic Beanstalk ignored your '.ebextensions/nginx' configuration directory. To include these configuration files, move them to '.platform/nginx'.","timestamp":1598554657,"severity":"WARN"},{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1598554682,"severity":"ERROR"}]}]}
The main part of the error is:
"Elastic Beanstalk ignored your '.ebextensions/nginx' configuration directory. To include these configuration files, move them to '.platform/nginx'."
Which confuses me, because .platform/nginx is exactly where I've put the file/folder.
I've tried completely removing the .ebextensions folder and got the same error.
I've tried starting from a completely fresh Beanstalk environment and still got that error. I don't understand how Beanstalk is managing this.
Based on the comments.
The issue was caused by duplicate locations of the nginx config file. This was due to deleting the default nginx path in .ebextensions while EB kept re-creating it.
Since this seems to be a bug, an AWS support ticket was created.
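If you hit the same warning, one way to check whether a stale .ebextensions/nginx directory is sneaking into the deployment bundle (my suggestion, not from the original discussion) is to list what actually gets packaged:
git archive HEAD | tar -tf - | grep nginx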
I have looked at all the relevant Stack Overflow posts and tried all the different approaches. None worked. There also seems to be no official documentation on this. Everything works fine in my local app, and I can upload images of any size, but as soon as it's deployed to my Elastic Beanstalk environment, I seem to have a limit of 1M per image upload.
The Problem:
Every time a user posts an image that is larger than 1MB, I receive the 413 error message with nginx.
Elastic Beanstalk Log:
2020/07/28 17:22:53 [error] 10404#0: *62 client intended to send too large body: 2800500 bytes, client: 172.31.18.162, server: , request: "POST /comment/image_post/11003031 HTTP/1.1", host: "myapp.com", referrer: "https://myapp.com/11003031"
What I did to try to solve the problem:
I created a .ebextensions folder in my Node.js application's root folder, added the code below, called the file proxi.config, and pushed it to my GitHub repository, which deploys to Elastic Beanstalk via a pipeline. I can see proxi.config in my repository, but for some reason it is automatically overwritten (by the load balancer, I guessed from what I have been reading).
proxi.config
container_commands:
  01_reload_nginx:
    command: "service nginx reload"
files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      client_max_body_size 25M;
If this is complicated to solve, is there some other way to increase the 1M limit?
The probable reason why your proxy.conf is not being used is that you are on the current version of EB, which runs on Amazon Linux 2 (AL2), while the proxy settings you are trying to use are for the old version of EB running on AL1.
For AL2, the nginx settings should be placed in the .platform folder, not in .ebextensions, as shown in the docs.
Thus you can try the following:
Create the file .platform/nginx/conf.d/myconfig.conf with the content:
client_max_body_size 25M;
Please note that I can't verify the nginx setting itself. It may still not work, as it may be the wrong setting or have the wrong form. But the use of .ebextensions instead of .platform is definitely an issue on AL2 EB environments.
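For orientation, the resulting project layout would look roughly like this (myconfig.conf is an example name; any .conf file in that folder is picked up):
your-app/
├── .ebextensions/              # option settings, container_commands, etc.
├── .platform/
│   └── nginx/
│       └── conf.d/
│           └── myconfig.conf   # contains: client_max_body_size 25M;
└── package.json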
So I've just pushed my app to Dokku (on DigitalOcean) and get the following error returned from an AJAX POST:
POST http://example.com/foo 413 (Request Entity Too Large)
A quick Google search shows this problem is due to client_max_body_size being too low. So I've SSH'd into the server, opened up the app's nginx.conf, and increased it per the instructions here:
client_max_body_size 100M;
https://github.com/progrium/dokku/issues/802
However, I still have the same problem. Do I need to restart a process or something? I tried restarting the Dokku app, but all that did was overwrite my nginx.conf file.
@Rob's answer is correct, but has the problem that the changes are not persisted, because nginx.conf may be regenerated, e.g. when deploying.
The solution I use is outlined in this github commit https://github.com/econya/dokku/commit/d4ea8520ac3c9e90238e75866906a5d834539129 .
Basically, Dokku's default nginx template includes every file in the nginx.conf.d/ subfolder in the main server configuration block, thus:
mkdir /home/dokku/myapp/nginx.conf.d/
echo 'client_max_body_size 50M;' > /home/dokku/myapp/nginx.conf.d/upload.conf
chown dokku:dokku /home/dokku/myapp/nginx.conf.d/upload.conf
service nginx reload
This will create a file that is merged into nginx.conf (at nginx startup time, I believe) and kept untouched by Dokku as long as you do not use interfering plugins or define another nginx template (as of 2017/08).
This has been updated in Dokku and can be done from the CLI: dokku nginx:set node-js-app client-max-body-size 50m. https://dokku.com/docs/networking/proxies/nginx/#specifying-a-custom-client_max_body_size
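For completeness, a sketch of that workflow (verify the subcommands against your Dokku version; proxy:build-config and nginx:show-config are from the current docs, to the best of my knowledge):
dokku nginx:set node-js-app client-max-body-size 50m
dokku proxy:build-config node-js-app    # regenerate the app's nginx config
dokku nginx:show-config node-js-app | grep client_max_body_size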
I figured it out: I had to cd into the app's directory (per the GitHub instructions: https://github.com/progrium/dokku/issues/802)
The right file to modify is /home/dokku/<app>/nginx.conf, and as @dipankar mentioned, you should add a client_max_body_size 20M; line to the server scope.
and then I typed
reload nginx
into the command line. All works :)
I have an Ubuntu VirtualBox VM that's set up by Vagrant. It's running nginx to serve some static files and a Django app.
I have the source folder synced via Vagrant to the repo on my host (Windows). I can make changes to a JavaScript file in Windows and verify that the changes are made to my file in the VM by SSH'ing in and opening the file in nano.
However, when I make the changes remotely, nginx seems to serve up the unchanged version with "illegal" characters added to the end (which really freaks out browsers). I get the same file when I curl localhost while SSH'd into the VM. EDIT: It actually does the same thing when I edit the file via SSH.
I can reload the vm via vagrant (which re-syncs the folders) and it works fine until the next remote change.
Restarting nginx and gunicorn doesn't help.
Does vagrant lock the files so that nginx has to rely on a cache? What might be going on here?
Thanks!
Apparently my coworker has better Google-fu than I do.
This is apparently a known issue with VirtualBox and nginx that has to do with nginx's sendfile optimization. You can simply add sendfile off; in either your server or location blocks in the nginx config. Here's a blog post about it: nginx virtualbox static files
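A minimal sketch of where the directive goes; the site file path, synced-folder path, and upstream port are illustrative assumptions for a typical Vagrant + Django + gunicorn setup:
# e.g. /etc/nginx/sites-available/myapp (illustrative path)
server {
    listen 80;

    # VirtualBox shared folders confuse nginx's sendfile optimization,
    # which then serves stale file contents; disable it in development.
    sendfile off;

    location /static/ {
        alias /vagrant/static/;            # assumption: synced-folder path
    }

    location / {
        proxy_pass http://127.0.0.1:8000;  # assumption: gunicorn port
    }
}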