nginx client_header_buffer_size Not effective - node.js

An API service I built runs over HTTPS. When a GET request carries a very long query parameter, the browser cannot reach the URL at all; if the parameters in the URL are shortened, it works.
Testing the Node.js server locally, it handles parameters of that length without trouble, so it looks like an nginx problem. But after setting client_header_buffer_size and large_client_header_buffers and restarting nginx, I get the same result.
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
...
client_body_buffer_size 600k;
client_max_body_size 600k;
client_header_buffer_size 600k;
large_client_header_buffers 4 600k;
...
}
Has anyone run into this situation and found a solution?
In another case, I used Safari to access the URL with the long parameter and the page showed a 303 error (with a shortened URL it loads normally).

According to the description of client_max_body_size, it sets the maximum allowed size of the client request body. Try setting it to a large value, say
client_max_body_size 100M;
or set it to 0 to remove the check entirely, which is not recommended:
client_max_body_size 0;
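Since the problem here is a long URL (i.e. the request line) rather than a large request body, the directives that actually govern it are client_header_buffer_size and large_client_header_buffers. One hedged guess as to why they seemed to have no effect: nginx may read the request line using the configuration of the default server for that listen socket rather than the matched virtual host, so setting them at the http level is worth a try. A minimal sketch (the 600k values are simply carried over from the question, not recommendations):
http {
# applies to every virtual host served by this nginx instance
client_header_buffer_size 600k;
large_client_header_buffers 4 600k;
...
}
Also worth noting: the question's server block enables http2, and on older nginx versions the header limits for HTTP/2 were controlled by the separate http2_max_field_size and http2_max_header_size directives, so depending on the nginx version those may be the ones that need raising.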

Related

Jelastic Enforce SSL on NodeJS

I'm trying to enforce the SSL protocol in a Jelastic Environment.
My setup is:
one node, with a Nginx Load balancer (+ public ip + custom ssl certificate) and a NodeJS application server.
The SSL setup is working, but I want to enforce the use of HTTPS instead of HTTP (a redirect).
I've tried modifying nginx.conf, but with no success.
Any ideas on how I should do that?
Create the config file /etc/nginx/conf.d/nginx_force_https.conf and add the lines below:
server {
listen 80;
server_name _;
return 301 https://$host$request_uri;
}
It will redirect all configured sites to https.
If you want to redirect only the exact site example.com:
server {
listen 80;
server_name example.com;
return 301 https://example.com$request_uri;
}
Make sure you have these includes enabled. In /etc/nginx/nginx.conf:
include /etc/nginx/nginx-jelastic.conf;
in /etc/nginx/nginx-jelastic.conf:
include /etc/nginx/conf.d/*.conf;
Check for errors in the configuration:
sudo service nginx configtest
Reload configuration (this would be enough to make changes "work"):
sudo service nginx reload
Check if all works as expected. Restart the whole webserver (if needed):
sudo service nginx restart
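If the service wrapper isn't available on the node, the same checks can be done with the nginx binary and systemd; these are generic commands, not Jelastic-specific ones:
sudo nginx -t
sudo systemctl reload nginx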
A more detailed answer can be found in the post Force www. and https in nginx.conf (SSL).

Block direct IP access using nginx

I have the following nginx configuration:
if ($host != mydomain.com) {
return 403;
}
When I hit the URL http://127.0.0.1/test/test2/index.php (from Postman) I get 403. Fine. But after adding a Host: mydomain.com header I get 200.
When I added add_header Host "$host"; to the nginx configuration, I noticed in the response that nginx has mydomain.com in its $host variable. I know that, according to the nginx documentation, explicitly setting the Host header in an HTTP request overrides 127.0.0.1.
But this way an attacker can send requests directly to the web server, bypassing the Cloudflare WAF. So what is the solution to block such requests in nginx?
I have tried the following solutions, but they didn't work for me:
https://www.digitalocean.com/community/questions/how-to-block-access-using-the-server-ip-in-nginx
https://blog.knoldus.com/nginx-disable-direct-access-via-http-and-https-to-a-website-using-ip/
When I hit the URL http://127.0.0.1/test/test2/index.php (from Postman) I get 403. Fine. But after adding a Host: mydomain.com header I get 200.
If I understand correctly, you seem to think that adding a Host header to your request is somehow a bypass. It's not; it's how hostnames work in HTTP.
A server doesn't magically know that you typed http://domain.tld/test/ in your browser address bar. Your browser makes a DNS lookup for domain.tld and establishes a TCP connection with the resolved IP address; it then sends headers, which is where the server gets the information from:
GET /test/ HTTP/1.1
Host: domain.tld
That's the only way the server knows you requested http://domain.tld/test/.
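You can see the same thing with curl by setting the header yourself (using 127.0.0.1 from the question and the placeholder hostname from the example above):
curl -v -H "Host: domain.tld" http://127.0.0.1/test/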
Add this server block:
server {
listen 80 default_server;
server_name "";
return 444;
}
OR
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
return 444;
}
The default_server parameter cannot be present in any other server block listening on the same address and port. See: NGINX Block direct IP access.

HTTP to HTTPS redirect - Nginx vs Node Express

I want to redirect HTTP traffic to the secure HTTPS version of my website. I am running a Node.js Express app behind an nginx server. What would be the best way to do the redirect: in nginx or in Express? Is there any significant difference between the two options, performance for example?
It all depends on how you do it, but the performance difference will likely be insignificant. What I usually do is: when nginx handles the SSL keys and certificates, I also let it take care of the redirects. That way the Node app doesn't even need to know about HTTP versus HTTPS; all it cares about is serving the requests coming from the reverse proxy.
Example nginx config:
server {
listen 80;
server_name example.com;
add_header Strict-Transport-Security "max-age=3600";
root /www/example.com/html;
index index.html index.htm;
location / {
return 302 https://example.com$request_uri;
}
}
Keep in mind that if you're using Let's Encrypt you will need to turn off the redirect to HTTPS temporarily, but only for the duration of certificate renewal; this is worth noting because it can be hard to diagnose when your certificate renewal fails.
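For comparison, if you preferred to do the redirect in Express instead, a minimal sketch might look like the following; it assumes nginx (or another proxy) terminates TLS and sets X-Forwarded-Proto, and the port number is made up for the example:
const express = require('express');
const app = express();

// Trust the reverse proxy so req.secure reflects the X-Forwarded-Proto header.
app.set('trust proxy', true);

// Redirect anything that did not arrive over HTTPS.
app.use((req, res, next) => {
  if (req.secure) return next();
  res.redirect(301, 'https://' + req.headers.host + req.originalUrl);
});

app.get('/', (req, res) => res.send('Hello over HTTPS'));

app.listen(3000);
Functionally the result is the same; the nginx approach simply keeps protocol concerns out of the application code.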

nginx responding "301 moved permanently"

Consider the following nginx config file:
server {
listen 443;
ssl on;
ssl_certificate /etc/tls/cert.pem;
ssl_certificate_key /etc/tls/key.pem;
location / {
proxy_pass http://api.default.svc.cluster.local;
}
}
All incoming requests on 443 should be proxied to my server running on api.default.svc.cluster.local:80 (which is a Node REST API, by the way). This works fine: I can curl https://<nginx-IP>/ and get a correct response, as expected.
Now I'd like to change the location from / to /api, so that curl https://<nginx-IP>/api returns the same response as before.
1. Attempt
So I change the location line in the config to:
location /api {
Unfortunately this doesn't work; instead I get the error Cannot GET /api, which is a Node error, so the request obviously reaches the API, but something is still off.
2. Attempt
It seems the trailing slash in the URI is required, so I added it to the location:
location /api/ {
Now something changed: I no longer get the same error as before; instead I get a 301 Moved Permanently. How can I fix my nginx config file?
Additional information regarding the environment
I'm using a Kubernetes Deployment that deploys the nginx reverse proxy, including the config shown above. I then expose nginx with a Kubernetes Service. I also tried using a Kubernetes Ingress with the same routes, but the Ingress would respond with a default backend - 404 message.
As mentioned in the question, trailing slashes in URIs are important. I had fixed this in the location, but I had not added one to the URI passed to proxy_pass. When proxy_pass is given with a URI part (even just /), nginx replaces the matched location prefix with that URI, so /api/foo is forwarded upstream as /foo; without it, the full /api/foo path is passed through unchanged.
As for the nginx proxy, I got it to work with the following config:
server {
listen 443;
ssl on;
ssl_certificate /etc/tls/cert.pem;
ssl_certificate_key /etc/tls/key.pem;
location /api/ {
proxy_pass http://api.default.svc.cluster.local/;
}
}
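With that change in place, a quick sanity check against the proxy (reusing the <nginx-IP> placeholder from the question; -k is only needed if the certificate isn't trusted locally):
curl -k https://<nginx-IP>/api/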
Concerning the Ingress solution, I was not able to get it to work by adding the missing trailing slash to the path. The backend service is referenced by its name, so no trailing slash can be added there (it would result in an error).

node.js with nginx, how to remove direct ip:port access

I inherited a Node.js project and I am very new to the platform/language.
The application is still in development, so it is a work in progress. In its current state it runs on port 7576, so you access it like this: server_ip:7576.
I've been tasked with putting this prototype on a live server so my boss can show it to investors etc., but I have to password-protect it.
So I got it running on the live server and then put it behind an nginx vhost like this:
server {
listen 80;
auth_basic "Restricted";
auth_basic_user_file /usr/ssl/htpasswd;
access_log /etc/nginx/logs/access/wip.mydomain.com.access.log;
error_log /etc/nginx/logs/error/wip.mydomain.com.error.log;
server_name wip.mydomain.com;
location / {
proxy_pass http://127.0.0.1:7576;
root /var/app;
expires 30d;
#uncomment this if you want to name an index file:
#index index.php index.html;
access_log off;
}
location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$ {
root /var/app/public;
}
}
This got the job done: I can now access my app by going to wip.mydomain.com, and I can easily password-protect it via nginx.
My problem is that the app is still accessible via ip:port, and I don't know how to prevent that.
Any help is appreciated.
Thanks
In your Node.js code, you need to explicitly bind to the loopback IP:
server.listen(7576, '127.0.0.1');
(You are looking for the call to .listen(<port>) to change; the variable may be called app or something else.)
Any IP address starting with 127. is a loopback address that can only be reached from the same machine (it never actually goes out on the network).
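For completeness, a minimal sketch of what that typically looks like; the plain http module and the handler body are assumptions, since the question doesn't show the startup code:
const http = require('http');

const server = http.createServer((req, res) => {
  res.end('hello from the app behind nginx');
});

// Bind to the loopback interface only, so the app is reachable solely
// through the nginx proxy (wip.mydomain.com) and not via server_ip:7576.
server.listen(7576, '127.0.0.1');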
