Linux Nginx Reverse Proxy Does Not Serve Custom error.html [duplicate] - linux

I have a Sinatra application hosted with Unicorn, and nginx in front of it. When the Sinatra application errors out (returns 500), I'd like to serve a static page, rather than the default "Internal Server Error". I have the following nginx configuration:
server {
listen 80 default;
server_name *.example.com;
root /home/deploy/www-frontend/current/public;
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 5;
proxy_read_timeout 240;
proxy_pass http://127.0.0.1:4701/;
}
error_page 500 502 503 504 /50x.html;
}
The error_page directive is there, and I have sudo'd as www-data (Ubuntu) and verified I can cat the file, so it's not a permission problem. With the above config file in place, and after service nginx reload, the page I receive on error is still the same "Internal Server Error".
What's my error?

error_page handles errors that are generated by nginx. By default, nginx will return whatever the proxy server returns regardless of http status code.
What you're looking for is proxy_intercept_errors
This directive decides if nginx will intercept responses with HTTP
status codes of 400 and higher.
By default all responses will be sent as-is from the proxied server.
If you set this to on then nginx will intercept status codes that are
explicitly handled by an error_page directive. Responses with status
codes that do not match an error_page directive will be sent as-is
from the proxied server.

You can set proxy_intercept_errors specifically for that location:
location /some/location {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 5;
proxy_read_timeout 240;
proxy_pass http://127.0.0.1:4701/;
proxy_intercept_errors on; # see http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_intercept_errors
error_page 400 500 404 ... other statuses ... =200 /your/path/for/custom/errors;
}
and instead of 200 you can use whatever other status code you need
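Applied to the original server block from the question, a minimal sketch (untested) would be:
server {
listen 80 default;
server_name *.example.com;
root /home/deploy/www-frontend/current/public;
location / {
proxy_pass_header Server;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Scheme $scheme;
proxy_connect_timeout 5;
proxy_read_timeout 240;
proxy_pass http://127.0.0.1:4701/;
proxy_intercept_errors on; # let the error_page below handle the upstream's 5xx responses
}
error_page 500 502 503 504 /50x.html; # /50x.html is served from the root defined above
}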

People who are using FastCGI as their upstream need the equivalent directive turned on:
fastcgi_intercept_errors on;
For my PHP application, I use it in the location block that passes requests to my PHP upstream:
location ~ \.php$ { ## Execute PHP scripts
fastcgi_pass php-upstream;
fastcgi_intercept_errors on;
error_page 500 /500.html;
}

As mentioned by Stephen in this response, using proxy_intercept_errors on; can work.
Though in my case, as seen in this answer, using uwsgi_intercept_errors on; did the trick...
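For completeness, a minimal sketch of the uWSGI variant (the upstream address and error page path here are hypothetical):
location / {
include uwsgi_params;
uwsgi_pass 127.0.0.1:3031; # hypothetical uWSGI backend
uwsgi_intercept_errors on; # let error_page handle upstream error responses
error_page 500 502 503 504 /50x.html;
}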

Related

nginx throws 404 on redirecting through proxy_pass to nodejs app

I'm serving multiple nodejs apps on a single server through pm2 and using nginx to manage reverse proxies. Right now if I use the server's ip and app port to reach the apps directly it all works fine. But if I try to navigate to my apps through the location paths set in the nginx config then I get 404 errors.
Below is my nginx default config:
upstream frontend {
server localhost:3000;
}
upstream backend {
server localhost:8000;
}
server {
listen 443 ssl;
server_name <redacted>;
ssl_certificate <redacted>.cer;
ssl_certificate_key <redacted>.key;
error_page 497 301 =307 https://$host:$server_port$request_uri;
location /app/frontend {
proxy_pass http://frontend;
proxy_redirect off;
proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Ssl on;
}
location /api {
proxy_pass http://backend;
proxy_redirect off;
proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Ssl on;
}
}
server {
listen 80;
server_name <redacted>;
return 301 https://$server_name$request_uri;
}
Now when I try to go to https://<server ip>:3000, the frontend loads just fine, but if I go to https://<server ip>/app/frontend, I get a 404 error.
Although the index.html loads up, it tries to find the static assets at https://<server ip>/ when it should instead look for them at https://<server ip>:3000, which is the exact behaviour I'm trying to achieve.
What I have tried so far:
Using rewrites
Adding trailing slashes to both the location path and proxy_pass (see the sketch after this list)
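For reference, that trailing-slash variant would look roughly like this (a sketch of what was tried, not a confirmed fix):
location /app/frontend/ {
proxy_pass http://frontend/; # trailing slash makes nginx replace /app/frontend/ with / before proxying
proxy_redirect off;
proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Ssl on;
}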
I know this can be solved by changing the app's base url or the build directory but that is not what I'm looking for.
Any help would be highly appreciated.

can Nginx randomly stop working by certain requests?

I'm currently having issues with my website. Sometimes, after a fresh restart of the nginx service, the URL of my website works just fine in the browser; it redirects successfully to the .NET Core web app running on Kestrel. If I type the IP of my VPS it also works just fine. But suddenly and randomly nginx stops serving the website and the browser just shows err_connection_closed.
Some technical information:
Kestrel is running on localhost:5000, Nginx TCP ports are managed by ufw and opened for: 80 and 443.
I'm using Ubuntu 16.04, nginx, and a .NET Core 3.1 web app. I followed the steps in the guide Host and Deploy using Linux and Kestrel.
Something I have noticed in the syslog file is that some IPs are blocked by ufw, though I'm not sure why they come from China, Mongolia, or even Poland, as the initial marketing campaign is targeted only at Mexico.
The other log file I searched was /var/log/nginx/access.log. Here, some IPs request random URLs on my website, like GET /Telerik.Web.UI.WebResource.axd?type=rau HTTP/1.1" 404 0 "-" or "GET /phpmyadmin/ HTTP/1.1" 301 178 "-", which is absolutely not me because I'm using PostgreSQL. I have to say that nginx seems to stop working after these requests are made, but I'm not 100% sure that's accurate; as the title says, it's very random.
Some config files for nginx:
/etc/nginx/sites-available/default
# Default server configuration
#
server {
listen 80;
server_name keecheeapp.com *.keecheeapp.com;
location / {
proxy_pass http://localhost:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection keep-alive;
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
/etc/nginx/proxy.conf
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;
/etc/nginx/nginx.conf
#other directives
events {
worker_connections 768;
# multi_accept on;
}
http {
include /etc/nginx/proxy.conf;
limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s;
server_tokens off;
sendfile on;
keepalive_timeout 29; # Adjust to the lowest possible value that makes sense for your use case.
client_body_timeout 10; client_header_timeout 10; send_timeout 10;
upstream keecheeapp {
server localhost:5000;
}
server {
listen *:80;
add_header Strict-Transport-Security max-age=15768000;
return 301 https://$host$request_uri;
}
server {
listen *:443 ssl;
server_name keecheeapp.com;
ssl_certificate /etc/ssl/certs/keecheeapp.com-concat-certs.crt;
ssl_certificate_key /etc/ssl/certs/private_new.key;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
ssl_ecdh_curve secp384r1;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
#Redirects all traffic
location / {
proxy_pass http://www.keecheeapp.com;
limit_req zone=one burst=10 nodelay;
}
}
}
There are several issues with your Nginx configuration:
In the file /etc/nginx/nginx.conf
The combination of limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s; and limit_req zone=one burst=10 nodelay; will limit the request processing rate per client to 5 requests/second. If you send too many requests per second then you will get error messages from Nginx. So if you want to keep the limit feature, try to increase the existing value to, for example, rate=50r/s and burst=100. If you want to disable this feature, delete or comment out those lines. You can learn more about this feature here.
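For example, keeping the feature but relaxing the limits as suggested might look like this (the exact values are only illustrative):
limit_req_zone $binary_remote_addr zone=one:10m rate=50r/s; # in the http block (was rate=5r/s)
limit_req zone=one burst=100 nodelay; # in the location block (was burst=10)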
The value http://www.keecheeapp.com for the proxy_pass directive is wrong. The correct value is http://keecheeapp, using the name defined by the upstream keecheeapp {...} block. So change proxy_pass http://www.keecheeapp.com; to proxy_pass http://keecheeapp;
The server block in the file /etc/nginx/sites-available/default instructs Nginx to serve your website using HTTP.
The following server block in the file /etc/nginx/nginx.conf instructs Nginx to serve your website using HTTPS.
server {
listen *:443 ssl;
server_name keecheeapp.com;
...
}
So your website is accessible over both HTTP and HTTPS. It's not a good idea. You should redirect all HTTP requests to HTTPS as follows:
Delete or comment out the server block in the file /etc/nginx/sites-available/default
Modify the following server block in the file /etc/nginx/nginx.conf
server {
listen *:80;
add_header Strict-Transport-Security max-age=15768000;
return 301 https://$host$request_uri;
}
To:
server {
listen *:80;
server_name keecheeapp.com *.keecheeapp.com;
add_header Strict-Transport-Security max-age=15768000;
return 301 https://$host$request_uri;
}
With your given configuration, Nginx is passing all requests to Kestrel, including static file requests (images, JS, CSS, etc.). This is inefficient. Let Nginx handle static files and let Kestrel handle dynamic requests. Please change the following configuration block:
#Redirects all traffic
location / {
proxy_pass http://www.keecheeapp.com;
limit_req zone=one burst=10 nodelay;
}
To:
root /path/to/your/static/folder;
# Serve static file requests
location / {
try_files $uri $uri/ @kestrel;
}
# Pass dynamic requests to Kestrel
location @kestrel {
proxy_pass http://keecheeapp;
limit_req zone=one burst=10 nodelay;
}
Change /path/to/your/static/folder to the actual folder on your server.
After editing, don't forget to test Nginx configuration with sudo nginx -t, then reload it with sudo systemctl reload nginx.service.

Lazyload in Safari not working when Nginx reverse proxy to node.js

Here is the problem:
I have a node.js server which is behind a Nginx proxy. Nginx is configured to serve the static contents and proxy_pass others to node.js
Also, in the node app, the images are loaded by a lazyloading solution.
In Chrome, when I access my app (example.com), everything works as it should. The lazyload works fine and images are loaded and served by nginx.
In Safari, my app (example.com) loads fine (which means the node.js server and the nginx proxy are working), but the images are not loaded! It seems the requests from lazyload are not sent or do not get any response.
If I enter an image's URI directly in Safari, it loads fine.
I should mention that when I use the node.js server locally (without nginx) there is no problem, even in Safari.
So it seems there is a problem between lazyload, Nginx, Node.js and Safari, since everything is OK in Chrome.
Below you find my nginx.conf:
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
server {
server_name example.com www.example.com;
access_log /var/log/nginx/example.com/nginx.access.log;
error_log /var/log/nginx/example.com/nginx.error.log;
location /img/ {
root /var/nginx/html;
}
location / {
proxy_pass http://127.0.0.1:3000; #nodejs server
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
I would first try removing these two headers if you're also setting X-Forwarded-Proto:
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
This allows you to terminate the SSL at the proxy while speaking plain HTTP downstream, so you should not need to upgrade the connection.
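In other words, the proxy location from the question would become something like this (a sketch based on the configuration above):
location / {
proxy_pass http://127.0.0.1:3000; #nodejs server
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
# Upgrade/Connection headers removed as suggested above
}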
I just found the answer to my question myself. I post here in case it could be helpful for others.
I noticed in the Safari console that it blocks the images from loading because their URIs begin with http rather than https (insecure content over an https page), so they are not loaded.
Chrome, on the other hand, still shows insecure content over https, so the images are loaded; it just shows a warning beside the URL field.
So, it has nothing to do with node.js or nginx reverse proxy :)

nodejs nginx 502 gateway error

I am trying to use a nodejs app behind an nginx reverse proxy to handle the SSL.
I have my app running on localhost:2000. I can confirm this as working with a curl command.
This is my nginx setup:
# the IP(s) on which your node server is running. I chose port 3000.
upstream dreamingoftech.uk {
server 127.0.0.1:2000;
keepalive 16;
}
# the nginx server instance
server {
listen 0.0.0.0:80;
server_name dreamingoftech.uk;
return 301 https://$host$request_uri;
}
#HTTPS
server {
listen 443 ssl http2;
server_name dreamingoftech.uk;
access_log /var/log/nginx/dreamingoftech.log;
error_log /var/log/nginx/dreamingoftech.error.log debug;
ssl_certificate /etc/letsencrypt/live/dreamingoftech.uk/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/dreamingoftech.uk/privkey.pem;
include snippets/ssl-params.conf;
# pass the request to the node.js server with the correct headers and much more can be added, see nginx config options
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://dreamingoftech.uk/;
proxy_redirect off;
#proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "";
proxy_ssl_session_reuse off;
proxy_cache_bypass $http_upgrade;
}
}
If I now curl https://dreamingoftech.uk, it takes a while but I do get the webpage delivered, albeit with the message:
curl: (18) transfer closed with 1 bytes remaining to read
However when viewed from a browser I get a 502 gateway error.
I have checked the error log and this is the result: ERROR LOG
I can't understand why the reverse proxy is adding such a time delay into the process. Any ideas would be greatly appreciated.
PS: in the upstream config I have tried localhost instead of 127.0.0.1 to no avail
I have almost the same configuration. Can you try the following?
You can redirect all HTTP to HTTPS:
server {
listen 80;
return 301 https://$host$request_uri;
}
or for a specific site like this
server {
server_name dreamingoftech.uk;
return 301 https://dreamingoftech.uk$request_uri;
}
but choose only one for your case
Then make sure your node server is running in HTTP mode and not HTTPS.
Also, you mentioned that you run node on port 3000, so use port 3000 and not 2000 as I can see in your config.
After you confirm the above, proxy all requests to localhost like this:
server {
listen 443;
server_name dreamingoftech.uk;
ssl_certificate /etc/letsencrypt/live/dreamingoftech.uk/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/dreamingoftech.uk/privkey.pem;
ssl on;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
ssl_prefer_server_ciphers on;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Fix the "It appears that your reverse proxy set up is broken" error.
proxy_pass http://localhost:3000;
proxy_read_timeout 90s;
proxy_redirect http://localhost:3000 https://dreamingoftech.uk;
}
}
Create a file containing the above code, put it in sites-available with a name like dreamingoftech.uk, and then use ln -s to create a symlink to it in sites-enabled. Go to your nginx.conf and make sure it includes the sites-enabled folder.
Then restart nginx to check if it works.
@Stamos Thanks for your reply. I tried that but unfortunately it didn't work. I decided to try the most basic node app I could, still using the basic modules I am using.
I tried this and it worked straight away.
The problem is therefore with my app. I will spend time rebuilding and testing step by step until I find the issue.
Thanks for your time!

nginx not caching static assets

I have a nodejs server and SSL enabled nginx on 2 separate machines. Request/response all work properly, however I have some problems getting nginx to cache stuff. My server configuration is below. Initially, I had the proxy cache statement in the 'location /' block, and at the time it was caching only my index page. I read that nginx won't cache requests with set-cookie headers, so I ignored them as well (although it didn't stop my index page from getting cached earlier). I tried fiddling with this for a whole day, but couldn't get nginx to cache my js and css files. All such requests are getting routed back to my node server. Access logs and error logs don't have any unusual entries. What am I doing wrong?
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /webserver/nginx/credentials/cert;
ssl_certificate_key /webserver/nginx/credentials/key;
ssl_session_cache shared:SSL:10m;
location ~ .*\.(ico|css|js|gif|jpe?g|png)$ {
proxy_pass http://somewhere:80;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_redirect http:// https://;
proxy_ignore_headers "Set-Cookie";
proxy_cache one;
proxy_cache_valid 200 1d;
proxy_cache_valid any 1m;
expires 7d;
proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
}
location / {
proxy_pass http://somewhere:80;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_redirect http:// https://;
}
}
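(As an aside: the one zone referenced by proxy_cache has to be declared at the http level with proxy_cache_path, roughly like the following sketch with a hypothetical path and sizes; if it were missing, nginx would refuse to start.)
proxy_cache_path /var/cache/nginx/one levels=1:2 keys_zone=one:10m max_size=1g inactive=7d; # in the http block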
This is what I'm using (I don't have SSL enabled but I don't think that is the problem). You're missing the try_files line that tells nginx to look for the files in the root before passing off to the proxy. Also, it's not really a caching problem - none of the static file requests should ever be hitting your node.js backend with this configuration.
server {
root /public;
listen 80;
server_name _;
index index.html index.htm;
charset utf-8;
# proxy request to node
location @proxy {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://127.0.0.1:3010;
proxy_redirect off;
break;
}
location / {
try_files $uri.html $uri $uri/ @proxy;
}
# static content
location ~ \.(?:ico|jpg|css|png|js|swf|woff|eot|svg|ttf|html|gif)$ {
access_log off;
log_not_found off;
add_header Pragma "public";
add_header Cache-Control "public";
expires 30d;
}
location ~ /\. {
access_log off;
log_not_found off;
deny all;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
error_page 404 /404.html;
location = /404.html {
}
}
