If I spin up a Node.js server and use busboy, I am able to upload large files (10+ GB), but when I use the same Node.js code behind nginx as a reverse proxy, nginx throws "413 Request Entity Too Large".
Has anyone encountered this issue? How do we solve it? I know I can set the "client_max_body_size" directive, but that would still mean there is a hard limit on the file size.
My nginx config looks like the following:
server {
    listen 80;
    server_name *.example.local;

    location /api {
        proxy_pass http://host.docker.internal:5000;
        proxy_pass_header Accept;
        proxy_pass_header Server;
        keepalive_requests 1000;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_connect_timeout 600;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        send_timeout 600;
        client_max_body_size 100M;
    }
}
client_max_body_size size;
Sets the maximum allowed size of the client request body. If the size
in a request exceeds the configured value, the 413 (Request Entity Too
Large) error is returned to the client. Please be aware that browsers
cannot correctly display this error. Setting size to 0 disables
checking of client request body size.
Source: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size
In your nginx configuration you have client_max_body_size 100M;. You need to adjust it according to your needs, e.g. 15G. This directive defines the maximum size of the body payload a client can send to nginx. If the payload is larger than this, a 413 HTTP status is returned to the client. Setting it to 0 (as suggested by Taxel) will disable the payload size check, but this exposes the server to abuse: a malicious user can keep your server busy by sending arbitrarily large files, which can degrade server performance.
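If you want to avoid raising the limit globally, one option is to raise it only for the upload route. A minimal sketch, assuming a hypothetical /api/upload path for the uploads (the 15G value is only an example):
server {
    listen 80;
    # Keep a small default limit for everything else.
    client_max_body_size 1M;

    # Hypothetical upload route; only here is the large limit allowed.
    location /api/upload {
        client_max_body_size 15G;
        proxy_pass http://host.docker.internal:5000;
    }
}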
Answering my own question:
In order to disable request/proxy buffering in my nginx reverse proxy, I had to add the following directives to the nginx config file:
proxy_buffering off;
proxy_request_buffering off;
So, now my nginx config file looks like the following:
server {
    listen 80;
    server_name *.example.local;

    location /api {
        proxy_pass http://host.docker.internal:5000;
        proxy_pass_header Accept;
        proxy_pass_header Server;
        keepalive_requests 1000;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_connect_timeout 600;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        send_timeout 600;
        client_max_body_size 0;
        proxy_buffering off; # <-----
        proxy_request_buffering off; # <-----
    }
}
I am hosting my web application on an NGINX server. Until now it worked fine, but I don't know why I am getting the errors shown in the image below.
I don't know why these errors occur, but as a trial-and-error step I thought my SSL certificate had expired, so I updated it. The same errors were repeated. I also checked my conf.d file, and I am not sure that everything there is good.
Here is my conf file
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    send_timeout 100s;
    keepalive_timeout 95;
    #ssl_session_cache shared:SSL:10m;
    #ssl_session_timeout 10m;
    client_body_in_file_only clean;
    client_body_buffer_size 32K;
    client_max_body_size 300M;

    server {
        listen 80;
        listen 443 ssl;
        server_name sample.com;
        ssl_certificate ..\ssl\mbxxxx.crt;
        ssl_certificate_key ..\ssl\mbkey.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

        location / {
            proxy_http_version 1.1;
            client_max_body_size 300M;
            proxy_read_timeout 300s;
            proxy_connect_timeout 95s;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header X-Real-IP $http_referer;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header content-type "application/json";
            proxy_set_header X-NginX-Proxy true;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header REMOTE_ADDR $remote_addr;
            proxy_set_header Access-Control-Allow-Origin *;
            proxy_set_header Connection 'upgrade';
            proxy_pass http://127.0.0.1:xxxx;
        }

        location /api {
            proxy_http_version 1.1;
            client_max_body_size 300M;
            proxy_read_timeout 300s;
            proxy_connect_timeout 95s;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            proxy_set_header X-Real-IP $http_referer;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header content-type "application/json";
            proxy_set_header X-NginX-Proxy true;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header REMOTE_ADDR $remote_addr;
            proxy_set_header Access-Control-Allow-Origin *;
            proxy_set_header Connection 'upgrade';
            proxy_pass http://127.0.0.1:xxxx;
        }

        error_page 405 =200 $uri;
        # error_page 500 502 503 504 /50x.html;
        #location = /50x.html {
        #    root html;
        #}
    }
}
And there are no CORS restrictions. Any suggestions and reference docs would be a great help.
I am also not sure whether this question serves my request or not.
Thanks in advance.
While doing some research on how to solve this issue, I found an answer saying I had to remove the passphrase from the SSL private key, but I didn't get it. So what I did was update the SSL certificate and then run my application, but that did not succeed either. Then I thought nginx should be restarted after updating the SSL certificate. Surprisingly, after restarting nginx, it worked fine.
You can put the passphrase in a text file and reference it via the ssl_password_file directive. Something like this:
listen 3001 ssl;
ssl_certificate cert.pem;
ssl_certificate_key key.pem;
ssl_password_file pass.txt;
I'm using a Node server with an Express app which handles a Server-Sent Events stream. This is proxied via nginx with http2 enabled. The SSE events are consumed via EventSource in a React app. I'm sending a heartbeat message every 10 seconds to keep the connection alive.
This all works great until there is some form of network interruption, such as putting my laptop to sleep and then waking it again.
From that point on, the stream errors every 40 or so seconds with net::ERR_HTTP2_PROTOCOL_ERROR 200 and then reconnects, instead of reconnecting once and staying steady.
Firefox works correctly. It doesn't error and reconnects only once.
If, as a test, I configure Node to serve http2 directly (via the spdy library) instead of going through nginx, everything works as expected, so I don't think this is a Node issue; I must be missing something in my nginx configuration and Chrome.
Nginx config as follows (location /stream is the SSE proxy)
server {
    listen 28443 ssl http2;
    listen [::]:28443 ssl http2;
    server_name example.com;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    http2_max_field_size 16k;
    http2_max_header_size 128k;

    root /var/www/example.com;
    index index.html index.htm;

    location / {
        client_max_body_size 100M;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://localhost:28080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'Upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /stream {
        proxy_http_version 1.1;
        # proxy_request_buffering off;
        # proxy_buffering off;
        # proxy_cache off;
        # chunked_transfer_encoding off;
        proxy_set_header Connection '';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:28080/stream;
    }
}
I've tried various combinations of proxy_buffering off, keepalive settings, etc., which only seem to affect the time between errors, i.e. 5 minutes instead of 40 seconds.
Not sure if you ever figured this out. I got the same issue recently; increasing the sizes fixed it:
http2_max_field_size 64k;
http2_max_header_size 512k;
Chrome's net::ERR_HTTP2_PROTOCOL_ERROR was gone.
Also, not sure if this applies to you, but if I use Firefox I can actually visit my site correctly.
The http2_max_field_size and http2_max_header_size directives are obsolete since nginx version 1.19.7.
Instead, something like the following could be used:
large_client_header_buffers 4 64k;
The relevant directive is now large_client_header_buffers. In my environment I just set:
large_client_header_buffers 10 512k;
and the error was gone.
Source:
http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers
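For context, here is a minimal sketch of where the directive can go (it is valid in the http and server contexts; the values here are only examples, not a recommendation):
http {
    # Replaces the obsolete http2_max_* directives; up to 4 buffers of 64k each.
    large_client_header_buffers 4 64k;

    server {
        listen 28443 ssl http2;
        server_name example.com;
        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        # ... locations from the original config ...
    }
}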
I'm trying to get nginx to proxy my Node.js app and use a domain with it. I'm going to have many domains mapped to the server, so I'm using a separate .conf file for each server block. The issue I'm having right now is that I can only seem to get the default nginx page to show up when I go to the domain. I'll try to explain the current setup as clearly as possible; if you need any more information, please let me know.
nginx.conf changes
I set the root path to where my app files are, root /var/www;, so for example an app would be deployed to the folder /var/www/example.com.
server block config
I created a new file for the server block /etc/nginx/conf.d/example_com.conf which contains
server
{
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    location /var/www
    {
        proxy_pass http://localhost:3103;
        include /etc/nginx/proxy_params;
    }
}
Please note that going to http://myip:3103 renders the app as it should, and the file /etc/nginx/proxy_params contains:
proxy_buffers 16 32k;
proxy_buffer_size 64k;
proxy_busy_buffers_size 128k;
proxy_cache_bypass $http_pragma $http_authorization;
proxy_connect_timeout 59s;
proxy_hide_header X-Powered-By;
proxy_http_version 1.1;
proxy_ignore_headers Cache-Control Expires;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_404;
proxy_no_cache $http_pragma $http_authorization;
proxy_pass_header Set-Cookie;
proxy_read_timeout 600;
proxy_redirect off;
proxy_send_timeout 600;
proxy_temp_file_write_size 64k;
proxy_set_header Accept-Encoding '';
proxy_set_header Cookie $http_cookie;
proxy_set_header Host $host;
proxy_set_header Proxy '';
proxy_set_header Referer $http_referer;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Original-Request $request_uri;
Is there anything I am doing wrong here? Do you need any more info? Please let me know! nginx is pretty new to me, and I feel like I'm super close; I'm just not understanding something. Thanks!
Your configuration processes requests like this:
- requests to http://[www.]example.com/var/www[*] will be proxy_passed to your app
- all other requests will be served as static files from the default nginx root directory
If you have no static files and all requests have to be processed by the app, then you should fix your configuration like this:
location /
{
    proxy_pass http://localhost:3103;
    include /etc/nginx/proxy_params;
}
If you have static files that can be served by nginx directly, then you will need to make your configuration a bit more complex.
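A minimal sketch of that combined setup, assuming the static assets live under /var/www/example.com and anything not found on disk should fall through to the Node app:
# Sketch only: serve files from disk when they exist, otherwise proxy to the app.
server
{
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    root /var/www/example.com;

    location /
    {
        # Look for a matching file first, then hand off to the app.
        try_files $uri @app;
    }

    location @app
    {
        proxy_pass http://localhost:3103;
        include /etc/nginx/proxy_params;
    }
}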
Here is the documentation for understanding how nginx works.
I want to set up an nginx server listening on one port, proxying connections to a Node.js application on a different port. The problem is that I get a 500 error - "worker_connections are not enough while connecting to upstream".
Nginx config:
upstream node {
    server 127.0.0.1:1235;
    keepalive 8;
}

server {
    listen 1234;
    server_name http://123.123.123.123:1234 node;
    access_log off;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://123.123.123.123:1234/;
        proxy_redirect off;
    }
}
What's wrong?
You should correct your proxy_pass, since you are proxying requests back to nginx itself: every request arriving on port 1234 is proxied to port 1234 again, so each client request keeps opening new connections until worker_connections is exhausted. According to your config it should be:
proxy_pass http://node/;
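Put together, a sketch of the corrected location block (the rest of the server block is unchanged):
location / {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    # Point at the "node" upstream (127.0.0.1:1235), not back at nginx's own port.
    proxy_pass http://node/;
    proxy_redirect off;
}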
You may need to add:
proxy_responses 0;
to your nginx config.
I have a simple example running nginx that proxies to my Node.js app running on localhost:3001. Now I want to add some optimizations, and the problem is I'm not sure I completely understand the way nginx config files work.
What I want to do is serve index.html, about.html and main.js from the CDN via a proxy-forward through nginx. I imagine I need to add something like a rewrite just for those files (and an entire images and css directory eventually).
So the user goes to mydomain.com, nginx kicks in, and delivers index.html from cdn.mydomain.com/index.html.
Here is what I have now:
===================
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
send_timeout 600;
proxy_buffering off;
####
# the IP(s) on which your node server is running i choose the port 3001
upstream app_yourdomian {
    server 127.0.0.1:3001;
}

# the nginx server instance
server {
    listen 0.0.0.0:80;
    server_name ec2-75-101-203-200.compute-1.amazonaws.com ec2-75-101-203-200.compute-1.amazonaws;
    access_log /var/log/nginx/yourdomain.log;

    # pass the request to the node.js server with the correct headers and much more can be added, see nginx config options
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:3001;
        proxy_redirect off;
    }
}
============================
If you really need to proxy (rather than redirect to) index, about and main.js, then it would be something like adding three more simple locations, one for each of the above:
location = /index.html {
    proxy_pass ...
}
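For example, a sketch with the three exact-match locations, assuming cdn.mydomain.com is the CDN host mentioned in the question:
# Sketch: exact-match (location =) blocks take precedence over "location /",
# so only these paths are fetched from the CDN; everything else still hits the app.
location = /index.html {
    proxy_pass http://cdn.mydomain.com/index.html;
}
location = /about.html {
    proxy_pass http://cdn.mydomain.com/about.html;
}
location = /main.js {
    proxy_pass http://cdn.mydomain.com/main.js;
}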
You might also want to take a look at http://wiki.nginx.org/HttpCoreModule#location
For locations without a regex, the most specific match is used.
Feel free to ask more on the mailing list: http://mailman.nginx.org/mailman/listinfo/nginx