I am trying to set up an Nginx proxy for multiple applications running on multiple servers.
server {
    listen 80;
    listen 443 ssl;
    server_name 192.168.2.28;
    ssl on;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    location /dashboard/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_connect_timeout 300;
        port_in_redirect off;
        proxy_pass http://192.168.1.250/;
    }
}
When I open https://192.168.2.28/dashboard in the browser, only the files at the root (e.g. /favicon.png) load; assets inside subfolders such as js/ and css/ do not resolve through the location.
How can I get the proxied location to resolve paths inside those subdirectories as well? I have also attached a screenshot. Could anyone please take a look?
[Screenshot: Nginx SSL proxy error]
If I understand your problem correctly, you want nginx to answer requests for static files directly while proxying everything else to your Django backend.
Try adding this to your server config:
location /static/ {
    alias /path/to/static/directory/;
}
as described in detail here.
In case you want a location to represent a remote path, nginx can rewrite requests like so:
location ~ /static/ {
    rewrite (.*)/(.*) http://external.tld/static/$2;
}
More on that option here.
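Applied to the original /dashboard/ setup, a minimal sketch of that second option might look like the following. It assumes the dashboard's HTML references its assets with absolute paths such as /js/... and /css/..., and that the browser can reach 192.168.1.250 directly; both are assumptions, not something the question confirms.

# Hypothetical sketch: redirect absolute asset requests to the upstream host,
# mirroring the rewrite option above. The /js/ and /css/ prefixes are assumed.
location ~ ^/(js|css)/ {
    rewrite ^/(.*)$ http://192.168.1.250/$1 redirect;
}

If the upstream host is not reachable from clients, proxying those prefixes through nginx instead of redirecting would be the alternative.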
I'm serving multiple Node.js apps on a single server through pm2 and using nginx to manage the reverse proxies. If I use the server's IP and the app's port to reach an app directly, everything works fine. But if I try to navigate to the apps through the location paths set in the nginx config, I get 404 errors.
Below is my nginx default config:
upstream frontend {
    server localhost:3000;
}

upstream backend {
    server localhost:8000;
}

server {
    listen 443 ssl;
    server_name <redacted>;
    ssl_certificate <redacted>.cer;
    ssl_certificate_key <redacted>.key;
    error_page 497 301 =307 https://$host:$server_port$request_uri;

    location /app/frontend {
        proxy_pass http://frontend;
        proxy_redirect off;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
    }

    location /api {
        proxy_pass http://backend;
        proxy_redirect off;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
    }
}

server {
    listen 80;
    server_name <redacted>;
    return 301 https://$server_name$request_uri;
}
Now when I try to go to https://<server ip>:3000, the frontend loads just fine, but if I go to https://<server ip>/app/frontend, I get the following 404 error:
Although the index.html loads, it tries to find the static assets on https://<server ip>/ when it should instead be finding them on https://<server ip>:3000. That is the exact behaviour I'm trying to achieve.
What I have tried so far:
- Using rewrites
- Adding trailing slashes to both the location path and proxy_pass (see the note after this question)
I know this can be solved by changing the app's base URL or the build directory, but that is not what I'm looking for.
Any help would be highly appreciated.
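A note on the trailing-slash attempt above: when proxy_pass has a URI part, nginx replaces the matched location prefix with that URI; without one, it passes the request URI unchanged. A minimal sketch using the upstream name from the config above:

# Without a URI part, the upstream receives the original URI unchanged,
# e.g. /app/frontend/static/main.js.
location /app/frontend {
    proxy_pass http://frontend;
}

# With trailing slashes, the /app/frontend/ prefix is replaced by /,
# so the upstream receives /static/main.js instead.
location /app/frontend/ {
    proxy_pass http://frontend/;
}

Neither variant changes the absolute asset URLs that the app's index.html emits, which is presumably why those asset requests still fall outside the /app/frontend location.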
I have two services, a backend and a frontend (Node.js), running in Docker and served through nginx (also in Docker).
Nginx config:
server {
    listen 80;
    listen 443 http2;
    set_real_ip_from 0.0.0.0/0;
    real_ip_header X-Forwarded-For;
    server_name example.com;

    location /backend/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://backend-admin:2082/;
    }

    location ^~ / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://frontend-admin:8080;
    }

    location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|pdf|ppt|txt|bmp|rtf|ttf|svg|js)$ {
        expires 2d;
        add_header Cache-Control public;
    }
}
I use the nginx location /backend/ to proxy all requests for example.com/backend/... to example.com:2082/..., where Node.js is listening.
The main problem is that nginx won't serve the static files that come through proxy_pass from backend-admin:2082.
My backend service has uploaded images at paths like /uploads/events/1.jpg. If I open it directly as http://example.com:2082/uploads/events/1.jpg, it works, but via nginx as http://example.com/backend/uploads/events/1.jpg it does not. I think nginx doesn't even try to reach the image via proxy_pass here.
Any ideas?
The regular expression location for the static files takes precedence over the /backend/ prefix location. The location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|pdf|ppt|txt|bmp|rtf|ttf|svg|js)$ matches any request whose URI ends with one of the listed suffixes; the greedy .+ part matches whatever comes before the suffix. In nginx, after the longest matching prefix location is found, regex locations are still checked, and a matching regex wins unless the prefix location was declared with ^~ or =. Therefore anything ending in one of those suffixes, including /backend/uploads/events/1.jpg, is matched by the regex block and served from local files instead of being sent to the /backend/ proxy.
There are multiple ways to fix this, depending on what you actually want to achieve. One approach is to add another location just for the static files that are served from the container, as follows:
location ~* ^/backend/.+\.(jpg|jpeg|gif|png|ico|css|pdf|ppt|txt|bmp|rtf|ttf|svg|js)$ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    # Strip the /backend/ prefix so the upstream sees the same path it serves directly
    rewrite ^/backend/(.*) /$1 break;
    proxy_pass http://backend-admin:2082;
    expires 2d;
    add_header Cache-Control public;
}
This fix lets you keep caching local files while diverting all files under the /backend/ prefix to the container.
To better understand how the matching is done, you can use one of the online location-match simulators. Here is the one I use: Nginx location match tester.
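Since the answer notes there are multiple ways to fix this, another sketch worth mentioning (not part of the original answer) is the ^~ modifier, which tells nginx to skip regex locations whenever this prefix is the longest match:

# With ^~, once /backend/ is the longest matching prefix, nginx stops searching
# and never evaluates the static-suffix regex location for these requests.
location ^~ /backend/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://backend-admin:2082/;
}

The trade-off is that static files coming from the backend no longer pick up the expires/Cache-Control headers that the regex location adds.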
I have a DigitalOcean droplet running Ubuntu 18.04, and inside it is an LXC container. I have two applications in that container.
The first application (a client) lives at /var/www/html and the second one is a Node.js application that lives at /var/www/my-site/. The Node application inside the container is managed by pm2, and everything seems to be working fine so far, because when I run curl http://localhost:3000 in the container's terminal I get the expected output.
Inside the main droplet (not the container), under /etc/nginx/sites-available, I have the following two server blocks: default and my-site.
The first app works fine when I access it through the browser via my domain, but the Node.js application returns a 502 Bad Gateway when I try to access it through sub.mydomain.com. Running pm2 start inside the container tells me the Node application's status is online.
Here is my default server block file. This works: when I visit mydomain.com, my site shows up fine.
# HTTP — redirect all traffic to HTTPS
server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;
    return 301 https://$host$request_uri;
}

server {
    # Enable HTTP/2
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name mydomain.com;

    # Use the Let’s Encrypt certificates
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;

    # Include the SSL configuration from cipherli.st
    include snippets/ssl-params.conf;

    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://container_ip_address/;
    }
}
Now here is the other server block - my-site.
# Upstream config
upstream site_upstream {
    server 127.0.0.1:3000;
    keepalive 64;
}

server {
    # Enable HTTP/2
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name sub.mydomain.com www.sub.mydomain.com;
    root /var/www/my-site;

    # Use the Let’s Encrypt certificates
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;

    # Include the SSL configuration from cipherli.st
    include snippets/ssl-params.conf;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://site_upstream;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
    }
}
I have set the A record for my subdomain in my domain's DNS settings to my droplet's IP address, and I have also created a symbolic link in /etc/nginx/sites-enabled for the my-site server block.
I have scoured the internet for a solution to this problem, but nothing seems to be working. What am I missing?
Your help would be greatly appreciated. Thanks.
The problem here was that requests to the subdomain were not being directed to the LXC container.
I solved this by adding the following inside the my-site server block.
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://container_ip/;
}
After that I added an asterisk to the next location block.
location /* {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://site_upstream;
    proxy_ssl_session_reuse off;
    proxy_set_header Host $http_host;
    proxy_cache_bypass $http_upgrade;
    proxy_redirect off;
}
Another way around this issue was to include the sub-domain in the server_name directive of the default server block. This also worked; the only problem was that nginx -t would warn that it was ignoring the conflicting server block I had set up in my-site. Otherwise, it worked just fine.
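For completeness, the alternative mentioned in the last paragraph amounts to listing both names in the default block. A rough sketch, with the caveat about the conflicting server_name warning still applying:

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    # Adding the sub-domain here means its requests are handled by this block
    # (which proxies to the container) instead of the one in my-site.
    server_name mydomain.com sub.mydomain.com;

    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
    include snippets/ssl-params.conf;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://container_ip/;
    }
}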
I am setting up a Ghost blog whose frontend works as a React-based SPA. Everything is hosted on DigitalOcean.
That means I don't have a great way to tinker with Express, which is what powers Ghost.
For my frontend, I need to always serve my index response, regardless of the URL.
This is my nginx config right now, with everything working except the SPA behaviour.
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name SITE;
    root /var/www/ghost/system/nginx-root;

    ssl_certificate /etc/letsencrypt/SITE/fullchain.cer;
    ssl_certificate_key /etc/letsencrypt/SITE/SITE.key;
    include /etc/nginx/snippets/ssl-params.conf;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:2369;
    }

    location ~ /.well-known {
        allow all;
    }

    client_max_body_size 50m;
}
I've tried many of the existing answers here on setting up nginx for SPAs, with no success, for example try_files $uri $uri/index.html =404;.
Because I'm using proxy_pass, am I limited to handling this sort of behaviour in my Express app? That would not be ideal, because editing the blog code will break my upgrades.
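For context, the try_files line quoted above is the usual static-SPA fallback; it assumes index.html exists on disk under root, which is why it has no effect on responses produced by the proxied Ghost app. The standard static form, shown here only as an illustration and not as a fix for the proxied case, looks like this:

# Static-SPA fallback: serve the requested file if it exists on disk,
# otherwise fall back to the local index.html. This only applies to files
# under root, not to content generated behind proxy_pass.
location / {
    try_files $uri $uri/ /index.html;
}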
I am trying to run a Node.js app behind an nginx reverse proxy that handles the SSL.
I have my app running on localhost:2000; I can confirm this works with a curl command.
This is my nginx setup:
# the IP(s) on which your node server is running. I chose port 3000.
upstream dreamingoftech.uk {
    server 127.0.0.1:2000;
    keepalive 16;
}

# the nginx server instance
server {
    listen 0.0.0.0:80;
    server_name dreamingoftech.uk;
    return 301 https://$host$request_uri;
}

# HTTPS
server {
    listen 443 ssl http2;
    server_name dreamingoftech.uk;

    access_log /var/log/nginx/dreamingoftech.log;
    error_log /var/log/nginx/dreamingoftech.error.log debug;

    ssl_certificate /etc/letsencrypt/live/dreamingoftech.uk/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/dreamingoftech.uk/privkey.pem;
    include snippets/ssl-params.conf;

    # pass the request to the node.js server with the correct headers and much more can be added, see nginx config options
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://dreamingoftech.uk/;
        proxy_redirect off;

        #proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "";
        proxy_ssl_session_reuse off;
        proxy_cache_bypass $http_upgrade;
    }
}
If I now curl https://dreamingoftech.uk, it takes a while but I do get the webpage delivered, albeit with the message:
curl: (18) transfer closed with 1 bytes remaining to read
However, when viewed from a browser I get a 502 gateway error.
I have checked the error log and this is the result: ERROR LOG
I can't understand why the reverse proxy is adding such a delay to the process. Any ideas would be greatly appreciated.
PS: in the upstream config I have tried localhost instead of 127.0.0.1, to no avail.
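One documented detail about the config above, offered as an observation rather than a confirmed cause of the 502: nginx's upstream keepalive (the keepalive 16 directive) requires HTTP/1.1 towards the upstream together with a cleared Connection header, and in the posted config the proxy_http_version line is commented out. The documented pairing looks like this:

# Documented pairing for reusing upstream keepalive connections:
proxy_http_version 1.1;
proxy_set_header Connection "";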
I have almost the same configuration. Can you try the following?
You can redirect all HTTP to HTTPS:
server {
    listen 80;
    return 301 https://$host$request_uri;
}
or, for a specific site, like this:
server {
    server_name dreamingoftech.uk;
    return 301 https://dreamingoftech.uk$request_uri;
}
but choose only one of them for your case.
Then make sure your Node server is running in HTTP mode, not HTTPS.
Also, you mentioned that you run Node on port 3000, so use port 3000 rather than the 2000 I see in your config.
After you confirm the above, forward all requests to localhost like this:
server {
    listen 443;
    server_name dreamingoftech.uk;

    ssl_certificate /etc/letsencrypt/live/dreamingoftech.uk/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/dreamingoftech.uk/privkey.pem;

    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass http://localhost:3000;
        proxy_read_timeout 90s;
        proxy_redirect http://localhost:3000 https://dreamingoftech.uk;
    }
}
Create a file with the above code, put it in sites-available under a name like dreamingoftech.uk, and then use ln -s to create a symlink to it in sites-enabled. Go to your nginx.conf and make sure it includes the sites-enabled folder.
Then restart nginx and check whether it works.
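For the include step mentioned above, the http block of nginx.conf normally contains a line like the following (path assumed; on Debian/Ubuntu installs it is present by default):

# Inside the http { } block of /etc/nginx/nginx.conf:
include /etc/nginx/sites-enabled/*;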
@Stamos Thanks for your reply. I tried that but unfortunately it didn't work. I decided to try the most basic Node app I could, still using the same basic modules I'm using.
I tried this and it worked straight away.
The problem is therefore with my app. I will spend time rebuilding and testing step by step until I find the issue.
Thanks for your time!