I am trying to fix a security vulnerability where a 500 Internal Server Error discloses the location of a file on the server.
My issue is similar to the one shown here (https://cdn-images-1.medium.com/max/1600/1*2DAwIEJhgLQd82t5WTgydA.png), from this article:
(https://medium.com/volosoft/running-penetration-tests-for-your-website-as-a-simple-developer-with-owasp-zap-493d6a7e182b)
I tried proxy_intercept_errors on; and an error_page 500 redirect, but it didn't help.
Any help on this?
This is a basic example of implementing proxy_intercept_errors on;
upstream foo {
    server unix:/tmp/foo.sock;
    keepalive 60;
}

server {
    listen 8080 default_server;
    server_name _;

    location = /errors/5xx.html {
        internal;
        root /tmp;
    }

    location / {
        proxy_pass http://foo;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_intercept_errors on;
        error_page 500 501 502 503 504 505 404 =200 /errors/5xx.html;
    }
}
Notice the:
error_page 500 501 502 503 504 505 404 =200 /errors/5xx.html;
This will intercept the listed 5xx errors plus the 404 and return them with a 200 status.
Also, note that the /errors/5xx.html location uses root /tmp;, so you still need to create the file /tmp/errors/5xx.html:
$ mkdir /tmp/errors
$ echo "intercepting errors" > /tmp/errors/5xx.html
You don't necessarily need a file to answer the request; you could also use something like this:
location = /errors/5xx.html {
    internal;
    default_type text/plain;
    return 200 'Hello world!';
}
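If the clients behind the proxy expect JSON rather than plain text, a similar hypothetical variant (the /errors/5xx.json name is just an example) could be:

location = /errors/5xx.json {
    internal;
    default_type application/json;
    return 200 '{"error": "internal server error"}';
}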
In your case, the 404 File Not Found could be handled differently, for example:
upstream failover {
    server server2:8080;
}

server {
    listen 80;
    server_name example.com;
    root /tmp/test;

    location ~* \.(mp4)$ {
        try_files $uri @failover;
    }

    location @failover {
        proxy_pass http://failover;
    }
}
In this case, if a file ending in .mp4 is not found, it will try another server; then, if required, you can still intercept the error there.
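For example, a minimal sketch of intercepting errors on that second server as well (reusing the failover upstream above and an /errors/5xx.html location like the one defined earlier) could look like this:

location @failover {
    proxy_pass http://failover;
    # mask errors coming back from the failover server too
    proxy_intercept_errors on;
    error_page 500 502 503 504 =200 /errors/5xx.html;
}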
I'm setting up a web app using Node.js + React + NGINX on AWS. When I try to access the URL /auth, it returns some HTML code instead of the JSON I expect. I tested the code on localhost and it works fine.
I tried setting the folder permissions, because I thought user permissions might be the problem, and I also tried editing some settings in nginx.conf.
Below is my app.conf for nginx:
upstream webapp {
    server 127.0.0.1:3018;
}

server_names_hash_bucket_size 64;
server_names_hash_max_size 512;

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name 127.0.0.1;
    server_name_in_redirect off;

    if ($http_x_forwarded_proto = 'http') {
        return 301 https://$host$request_uri;
    }

    location / {
        root /home/website/client/build;
        try_files $uri /index.html;
        log_not_found off;
        access_log off;
    }

    #error_page 405 =200 $uri;

    if ( $http_user_agent ~* (nmap|nikto|wikto|sf|sqlmap|bsqlbf|w3af|acunetix|havij|appscan) ) {
        return 403;
    }

    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block;";
    add_header Strict-Transport-Security "max-age=2592000; includeSubDomains" always;
}
action {…}
payload: Object { isAuth: false, error: true }
type: "auth_member"
I expect the output to look like the object above, but instead the site gives me something else (HTML).
You have to define the upstream location to be proxied to:

# this is an example; adjust the path to match your routes
location /api {
    proxy_pass http://webapp;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
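Since the route you are requesting is /auth, a minimal variant of the same idea (a sketch rather than a drop-in config, assuming your Express app serves that route directly on the webapp upstream) would be:

location /auth {
    # send /auth to the Node/Express upstream instead of the static React build
    proxy_pass http://webapp;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}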
I have two Vue.js apps that I want to run on the same domain (e.g., https://localhost:8080/app1 and https://localhost:8080/app2). Both apps run in separate Docker containers, and I have set up a third Docker container running nginx as a reverse proxy in order to have SSL.
I am able to visit the apps at the wanted locations, but some resources are missing (images, fonts, etc.). I realize that my nginx server looks for them at https://localhost:8080/my_resource, but I can't figure out how to forward these requests to the correct locations (i.e., https://localhost:8080/app1/my_resource, and similar for app2).
I've tried using the "try_files" directive in nginx, like so:
location / {
    try_files $uri $uri/ http://app1:8080 http://app2:8080;
}
but it does not work.
Here is my nginx config file:
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    return 301 https://$server_name$request_uri;
}

# Change the default configuration to enable ssl
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    ssl_certificate /etc/nginx/certs/my_app.crt;
    ssl_certificate_key /etc/nginx/certs/my_app.key;

    server_name localhost;
    server_tokens off;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    location / {
        if ($http_referer = "https://localhost:8080/app1/") {
            proxy_pass http://app1:8080;
            break;
        }
        if ($http_referer = "https://localhost:8080/app2/") {
            proxy_pass http://app2:8080;
            break;
        }
    }

    location /app1/ {
        proxy_pass http://app1:8080/;
    }

    location /app2/ {
        proxy_pass http://app2:8080/;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
And this is my docker-compose file:
version: "3.6"
services:
  app1:
    image: "app1"
    expose:
      - "8080"
    command: ["serve", "-s", "/app/app1/dist", "-l", "8080"]
  app2:
    image: "app2"
    expose:
      - "8080"
    command: ["serve", "-s", "/app/app2/dist", "-l", "8080"]
  nginx:
    image: "nginx"
    ports:
      - "8080:443"
    depends_on:
      - "app1"
      - "app2"
Thanks for any input :)
After a lot of trial and error, I found a solution. I do not think this is the optimal solution, but it's working. Here is my nginx configuration:
# Pass any http request to the https service
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    return 301 https://$server_name$request_uri;
}

# Configure the ssl service
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    ssl_certificate /etc/nginx/certs/my_app.crt;
    ssl_certificate_key /etc/nginx/certs/my_app.key;

    server_name localhost;
    server_tokens off;

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    location / {
        # if app1 does not have the requested resource, fall back to app2
        proxy_intercept_errors on;
        error_page 404 = @second;
        proxy_pass http://app1:80;
    }

    location @second {
        proxy_pass http://app2:80;
    }

    location /app1/ {
        # strip the /app1 prefix so the container sees root-relative paths
        rewrite ^/app1(.*) /$1 break;
        proxy_pass http://app1:80;
    }

    location /app2/ {
        # strip the /app2 prefix so the container sees root-relative paths
        rewrite ^/app2(.*) /$1 break;
        proxy_pass http://app2:80;
    }

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
I am trying to proxy pass a node.js socket.io app with nginx.
The client is an HTML file with some JavaScript in it:
<html>
  <head>
    <script src="socket.io.js"></script>
    <script>
      var socket = io('http://localhost:80');
      socket.on('welcome', function(data){
        console.log('Server says:' + data);
        socket.emit('client-response', 'thank you!');
      });
    </script>
  </head>
  <body>
    Socket.io
  </body>
</html>
And this is the server block in the nginx.conf file that is supposed to do the proxy pass:
server {
    listen 80;
    listen [::]:80;
    #root /usr/share/nginx/html;

    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass "http://localhost:2156";
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
I have my node.js app up and running on port 2156.
When I test this, the client tries to reach socket.io on port 80 and fails with a 404 error (because nginx was supposed to proxy the request to port 2156, but it didn't).
What am I missing here?
Edit: I've changed the client to connect at "http://localhost/socket.io/" and rewrote the nginx.conf like this:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /usr/share/nginx/html;

    location /socket.io/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_pass http://localhost:2156/socket.io/;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
And it worked.
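One optional refinement, taken from the standard nginx WebSocket proxying pattern rather than anything specific to this app: derive the Connection header from $http_upgrade with a map block, so plain polling requests don't send Connection: Upgrade:

# in the http context
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# then, inside the server block shown above
location /socket.io/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $host;
    proxy_pass http://localhost:2156/socket.io/;
}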
If I understand things correctly, I can set up nginx so that it handles crawlers (instead of Node.js doing it). So I removed app.use(require('prerender-node').set('prerenderToken', 'token')) from the Express configuration and made the following nginx setup (I do not use a Prerender token):
# Proxy / load balance (if more than one node.js server used) traffic to our node.js instances
upstream my_server_upstream {
    server 127.0.0.1:9000;
    keepalive 64;
}

server {
    listen 80;
    server_name test.local.io;

    access_log /var/log/nginx/test_access.log;
    error_log /var/log/nginx/test_error.log;

    root /var/www/client;

    # Static content
    location ~ ^/(components/|app/|bower_components/|assets/|robots.txt|humans.txt|favicon.ico) {
        root /;
        try_files /var/www/.tmp$uri /var/www/client$uri =404;
        access_log off;
        sendfile off;
    }

    # Route traffic to node.js for specific routes: e.g. /socket.io-client
    location ~ ^/(api/|user/|en/user/|ru/user/|auth/|socket.io-client/|sitemap.xml) {
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_pass_header X-CSRFToken;
        sendfile off;
        # Tells nginx to use the upstream server
        proxy_pass http://my_server_upstream;
    }

    location / {
        root /var/www/client;
        index index.html;
        try_files $uri @prerender;
        access_log off;
        sendfile off;
    }

    location @prerender {
        set $prerender 0;
        if ($http_user_agent ~* "baiduspider|twitterbot|facebookexternalhit|rogerbot|linkedinbot|embedly|quora link preview|showyoubot|outbrain|pinterest|slackbot|vkShare|W3C_Validator") {
            set $prerender 1;
        }
        if ($args ~ "_escaped_fragment_") {
            set $prerender 1;
        }
        if ($http_user_agent ~ "Prerender") {
            set $prerender 0;
        }

        # resolve using Google's DNS server to force DNS resolution and prevent caching of IPs
        resolver 8.8.8.8;

        if ($prerender = 1) {
            # setting prerender as a variable forces DNS resolution since nginx caches IPs and doesn't play well with load balancing
            set $prerender "127.0.0.1:3000";
            rewrite .* /$scheme://$host$request_uri? break;
            proxy_pass http://$prerender;
        }
        if ($prerender = 0) {
            rewrite .* /index.html$is_args$args break;
        }
    }
}
But when I test it with curl test.local.io?_escaped_fragment_= I get got 504 in 344ms for http://test.local.io.
The Node version is 6.9.1. I use Vagrant to set up the environment.
The above configuration works fine. All it was missing was an entry in /etc/hosts: 127.0.0.1 test.local.io
I have the following Node.js project structure, which resides in the /home/ubuntu/project directory:
server
site
|-css
| |-styles.css
|-img
| |-sprite.png
|-js
| |-script.js
I'm trying to serve the static assets with nginx, so I wrote the following configuration:
upstream myapp_upstream {
    server 127.0.0.1:3000;
    keepalive 64;
}

server {
    listen 80;
    server_name www.myapp.com;

    error_page 400 404 500 502 503 504 /50x.html;
    location /50x.html {
        internal;
        root /usr/share/nginx/www;
    }

    location ~ ^/(images/|img/|javascript/|js/|css/|stylesheets/|flash/|media/|static/|robots.txt|humans.txt|favicon.ico|home/|html|xml) {
        root /home/ubuntu/project/site;
        access_log off;
        expires max;
    }

    location / {
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_pass http://myapp_upstream;
        proxy_intercept_errors on;
    }
}
But when I try to open my site in a browser, all the requested assets fail to load. What's the problem?
EDIT:
My route to the CSS, for example, is:
http://www.myapp.com/css/styles.css
Well,
Add a / to the root path.
root /usr/share/nginx/www;
should be
root /usr/share/nginx/www/;
Use an alias for the assets, like:
alias /home/ubuntu/project/site/$1; (since these locations are regexes, the alias needs to reference a capture)
This is a mess to me:
location ~ ^/(images/|img/|javascript/|js/|css/|stylesheets/|flash/|media/|static/|robots.txt|humans.txt|favicon.ico|home/|html|xml)
You should check this: http://wiki.nginx.org/NginxHttpCoreModule#location
I don't see the folders images/, javascript/, stylesheets/, flash/, media/, static/ or home/ in your directory tree.
And both |html|xml match the routes /html or /xml, not .html or .xml files.
Then try:
location ~ ^/(robots\.txt|humans\.txt)$ {
    alias /home/ubuntu/project/site/$1;
    access_log off;
    expires max;
}

location ~* ^/(.+\.(?:ico|css|js|gif|jpe?g|png))$ {
    # add all the file extensions you need to the list above
    alias /home/ubuntu/project/site/$1;
    access_log off;
    expires max;
}