How to use TOTP codes for NGINX authentication? - security

I have a very basic NGINX configuration (I've removed the irrelevant parts of the config):
events { }

http {
    include /etc/nginx/mime.types;

    server {
        listen 80 default_server;
        server_name _;
        return 301 https://$host$request_uri;
    }

    server {
        server_name files.example.org;
        include nginx-wildcard-ssl.conf;

        root /files;
        autoindex on;

        location / {
            try_files $uri $uri/ =404;
        }
    }
}
nginx-wildcard-ssl.conf is a simple file for doing SSL. Here it is, in case you're wondering (I've removed the paths to the certificates):
listen 443 ssl;
ssl_certificate /.../cert.pem;
ssl_certificate_key /.../privkey.pem;
That configuration serves the files in /files at files.example.org (with my real domain), provides a directory listing, and lets me view the files as expected.
However, some of those files contain private information. NGINX has a guide to using basic HTTP authentication. But in addition to the basic-authentication password, I'd also like to require a 2FA TOTP code to sign in. If it matters, the server is running Debian 11, and I am the sole user of it (and so have root privileges). I'm already using SSL, so I'm not too concerned about using basic authentication.
How can I configure NGINX to require TOTP codes for 2FA combined with basic authentication?
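One approach I'm aware of is to let NGINX hand the basic-auth credentials to PAM and have PAM verify the TOTP code. On Debian the libnginx-mod-http-auth-pam module provides the auth_pam directives, and libpam-google-authenticator provides the TOTP check. The sketch below assumes both packages are installed, a TOTP secret has been created with the google-authenticator tool, and a PAM service named "nginx" exists in /etc/pam.d/ (the service name is my assumption), so treat it as a starting point rather than a tested config:

server {
    server_name files.example.org;
    include nginx-wildcard-ssl.conf;

    root /files;
    autoindex on;

    location / {
        # Hand the basic-auth username/password to the PAM stack
        # defined in /etc/pam.d/nginx (hypothetical service name).
        auth_pam              "Private files";
        auth_pam_service_name "nginx";
        try_files $uri $uri/ =404;
    }
}

The matching /etc/pam.d/nginx would then chain the TOTP check with an ordinary password check, for example pam_google_authenticator.so with forward_pass followed by pam_unix.so use_first_pass, so you type your password immediately followed by the 6-digit code into the single basic-auth password field. The usual catch is that the NGINX worker user must be able to read the TOTP secret (and whatever the password module checks); if that gets awkward, an auth_request subrequest to a small helper service is a common alternative.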

Related

ReactJS Router Dom not navigating to dashboard after logging in

Please help. I have my ReactJS/Node.js app deployed behind NGINX on Linux. I can log in from my development machine, from my phone, and from the Linux OS running in Oracle VM on my computer; the application works fine on these devices, even after I changed IP address. But as soon as another device (e.g. somebody else's computer or phone) tries to log in, it never redirects to the dashboard, while my three devices (laptop, phone, and the Linux VM) still redirect to the dashboard successfully after logging in.
I have tried everything that people offered as solutions on the page below, but with no success:
https://stackoverflow.com/questions/43951720/react-router-and-nginx
In fact, I don't even see their login attempts in the running Node.js API. My ReactJS build files are in /etc/nginx/sites-available/mydomain/ and this is my nginx.conf below:
user www www; ## Default: nobody

server { # simple reverse-proxy
    listen 80;
    server_name mydomain;

    # serve static files
    #location ~ ^/(images|javascript|js|css|flash|media|static)/ {
    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        root /etc/nginx/sites-available/mydomain;
        index index.html;
        try_files $uri /index.html;
    }

    location /api {
        proxy_pass https://localhost:8084;
        root /etc/nginx/sites-available/mydomain;
        index index.html;
    }
}
server {
    listen 443 ssl;
    server_name mydomain;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        root /etc/nginx/sites-available/mydomain;
        index index.html;
        try_files $uri /index.html;
    }

    location /api {
        proxy_pass https://localhost:8084;
        root /etc/nginx/sites-available/mydomain;
        index index.html;
    }

    ssl_certificate /etc/letsencrypt/live/mydomain/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mydomain/privkey.pem; # managed by Certbot
    #...
}
What difference does it make that I have my NGINX configuration in nginx.conf and not in default?
Since I configured nginx.conf, do I still need to do the same in /etc/nginx/sites-available/default, and what file extension does that default config file use?
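For reference, the layout I usually see for a React SPA plus an API on one host is a single HTTPS server block along the lines of the sketch below (names, paths and ports are taken from the question; this is a general pattern, not a diagnosis of the login problem). Note that the proxy usually speaks plain http to the Node process unless the backend itself terminates TLS on 8084, which is worth double-checking:

server {
    listen 443 ssl;
    server_name mydomain;

    ssl_certificate /etc/letsencrypt/live/mydomain/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain/privkey.pem;

    root /etc/nginx/sites-available/mydomain;
    index index.html;

    # SPA fallback: unknown paths go to index.html so React Router can handle them.
    location / {
        try_files $uri /index.html;
    }

    # API: pass requests straight to the Node process.
    location /api {
        proxy_pass http://localhost:8084;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}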

Nginx With SSL and Frontend + Backend same server

I have a server (an Amazon VPS) running Ubuntu. On this server run my Node backend and my React frontend.
My React app is served by nginx and my backend runs under pm2.
In my React app I defined REACT_APP_BASE_URL: http://[my_ip_server]:4000.
Everything was working OK, but after I configured SSL in nginx I can still access my frontend login page; when I send the request, however, I get the following errors:
a) If I set https in my REACT_APP_BASE_URL (https://[my_ip_server]:4000), I get this error: ERR_SSL_PROTOCOL_ERROR.
b) If I leave it as http, I get a Mixed Content error.
Does anyone know how I can make this work?
Thanks a lot!
My nginx.conf (at the moment I'm using just port 80 until I solve my problem):
server {
    #listen [::]:443 ssl ipv6only=on; # managed by Certbot
    #listen 443 ssl; # managed by Certbot
    #ssl_certificate /etc/letsencrypt/live/mysite.com.br/fullchain.pem; # managed by Certbot
    #ssl_certificate_key /etc/letsencrypt/live/mysite.com.br/privkey.pem; # managed by Certbot
    #include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    #ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    #if ($host = surveys.alcancenet.com.br) {
    #    return 301 https://$host$request_uri;
    #} # managed by Certbot

    listen 80 default_server;
    listen [::]:80 default_server;
    server_name mysite.com.br;
    #return 404; # managed by Certbot

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri /index.html;
    }
}
With help from @Markido, I managed to solve it.
I added the default route "/api" to my backend, and after that I put the following in my nginx config:
location /api {
    proxy_pass http://localhost:4000;
}
Thanks!!!
First off, there is a difference between running the applications (which is what I assume you are using PM2 for) and exposing them through an nginx proxy.
It would be most helpful to show us your nginx config file, and also tell us which port your backend runs on (assuming the frontend runs on port 4000).
Edit:
Thanks for the config and backend port.
I don't think you need to set the Create React App base URL to https; just set the port and run it on the VPS using PM2.
I can't see any proxy at all pointing to 4000 in your config - do you not expose the backend?
The only exposed part is the static HTML files. The relevant code is:
root /var/www/html;

# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;

location / {
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    try_files $uri $uri /index.html;
}
If you want to call the backend over HTTPS, or build your site with a tool whose process entails HTTPS calls, you need to configure that correctly in the frontend. In other words, something doesn't add up here.
The usual approach is:
Expose the backend and the frontend on port 443 (SSL only) using different sub-domains (e.g. api.mydomain.com), and then use the nginx proxy to route 443 traffic for each domain to the corresponding local port (4000 for the backend, and the frontend port or, more likely, a static files directory); see the sketch after the snippet below.
Instead of:
location / {
    try_files $uri $uri /index.html;
}
Use something like:
location / {
    proxy_pass http://localhost:4000;
}
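For the sub-domain approach described above, a rough sketch could look like the following (api.mydomain.com is just an example name, and it assumes your certificate covers both hostnames, e.g. a wildcard or multi-SAN certificate):

# Frontend: serve the built React app over HTTPS
server {
    listen 443 ssl;
    server_name mydomain.com;
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;

    root /var/www/html;
    index index.html;

    location / {
        try_files $uri /index.html;
    }
}

# Backend: proxy the API sub-domain to the Node process on port 4000
server {
    listen 443 ssl;
    server_name api.mydomain.com;
    ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;

    location / {
        proxy_pass http://localhost:4000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}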

How to remove the port number from URL in a nginx site

In the URL I see something like http://example.com:8000/page. Even though I manage to access the site by typing the URL without the port, the port gets added as soon as I go to any sub-page.
I used to have a Python-based web server instead of nginx; it worked awfully, but the port number was never there. This is why I still hope there is a way to hide a non-standard port (without changing it to the standard one), since the Python web server didn't show it.
I redirect port 80 to 8000, by the way.
Is there a feature in the nginx config to hide a non-standard port from a site? If not, maybe some other method?
My config:
server {
    listen 8000;

    access_log /logs/access.log;
    error_log /logs/error.log;

    index index.html;
    server_name example.com;
    error_page 404 errors/404.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
I've just figured out that
port_in_redirect off;
solves it.
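For context, port_in_redirect controls whether nginx appends its listen port to redirects it generates itself (for example the trailing-slash redirect onto a directory). Placed in the server block from the question, it would look roughly like this:

server {
    listen 8000;
    server_name example.com;

    # Don't append :8000 to redirects nginx generates; clients keep
    # talking to port 80, which is forwarded to 8000.
    port_in_redirect off;

    index index.html;
    error_page 404 errors/404.html;

    location / {
        try_files $uri $uri/ =404;
    }
}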

Redirection issue with two websites running Node.js on Nginx and CloudFlare

I have two files in sites-available, one for each website running on the machine. Both have identical code with only the domain name and port for the website replaced. Both sites are also symlinked to sites-enabled.
reesmorris.co.uk (which works fine):
# Remove WWW from HTTP
server {
    listen 80;
    server_name www.reesmorris.co.uk reesmorris.co.uk;
    return 301 https://reesmorris.co.uk$request_uri;
}

# Remove WWW from HTTPS
server {
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/reesmorris.co.uk/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/reesmorris.co.uk/privkey.pem;
    server_name www.reesmorris.co.uk;
    return 301 https://reesmorris.co.uk$request_uri;
}

# HTTPS request
server {
    # Enable HTTP/2
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name reesmorris.co.uk;
    ssl_certificate /etc/letsencrypt/live/reesmorris.co.uk/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/reesmorris.co.uk/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        include /etc/nginx/proxy_params;
    }
}
francescoiacono.co.uk (has a redirection loop):
# Remove WWW from HTTP
server {
    listen 80;
    server_name www.francescoiacono.co.uk francescoiacono.co.uk;
    return 301 https://francescoiacono.co.uk$request_uri;
}

# Remove WWW from HTTPS
server {
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/francescoiacono.co.uk/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/francescoiacono.co.uk/privkey.pem;
    server_name www.francescoiacono.co.uk;
    return 301 https://francescoiacono.co.uk$request_uri;
}

# HTTPS request
server {
    # Enable HTTP/2
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name francescoiacono.co.uk;
    ssl_certificate /etc/letsencrypt/live/francescoiacono.co.uk/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/francescoiacono.co.uk/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:4000;
        include /etc/nginx/proxy_params;
    }
}
To experiment, I changed the return value in the first server block of the broken website to a 403, which seems to be shown even when the site is accessed over HTTPS. Additionally, removing the first server block on the broken website altogether causes the website to redirect entirely to the already-working website.
Both websites use CloudFlare for DNS routing. CloudFlare is 'paused' on both websites, which means it only handles the routing; both websites have identical routing to the server, with the same A and AAAA records.
I'm not too familiar with server blocks, so if anybody has any ideas as to what is happening then it would be greatly appreciated.
It appears this issue was resolved by making both websites 'paused' in the CloudFlare DNS. This had been done originally, though it may not have had enough time to propagate.
I had not modified the configuration since this post was created, but I did make sure that both sites were 'paused' on CloudFlare (reesmorris.co.uk was, however francescoiacono.co.uk was not) - so it seems to have been a misconfiguration issue with CloudFlare.

NginX : Serve static content and proxy pass to Node API

I'm running a website composed of an API in a Docker container on port 8080, and a static folder containing the front-end React app.
The API is at /api.
I'm trying to configure NGINX to serve this setup correctly, but I can't seem to figure out how to have both working.
If I set up my default like this:
server {
    root /var/www/html;
    server_name DOMAIN; # managed by Certbot

    location @nodeapp {
        # Redirect to the api
        proxy_pass http://localhost:8080;
    }

    location / {
        # Map all the routes to the index.html
        try_files $uri @nodeapp;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/DOMAIN/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/DOMAIN/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
Then everything is redirected to my API, and the static content is not delivered unless explicitly requested (e.g. going to https://host/index.html).
If I add try_files $uri $uri/ @nodeapp; to serve the root folder as a directory, the index gets served, but I can't access the API anymore; it always serves the React app.
I also tried adding
location /api/ {
    proxy_pass http://localhost:8080;
}
but it made no difference.
What am I doing wrong?
I found the answer: the React app I was using was set up with a service worker, which was messing with the routing. I removed it and it works like a charm now!
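For anyone landing here for the nginx side of things, once the service worker is out of the way the static-plus-API split from the question typically ends up looking something like the sketch below (same placeholder domain, paths and port as in the question; the ordering just makes explicit that /api requests are proxied and everything else falls back to the SPA entry point):

server {
    listen 443 ssl;
    server_name DOMAIN;
    ssl_certificate /etc/letsencrypt/live/DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/DOMAIN/privkey.pem;

    root /var/www/html;
    index index.html;

    # API: anything under /api goes to the Docker container on 8080.
    location /api/ {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
    }

    # Everything else: try the static file, then fall back to the SPA entry point.
    location / {
        try_files $uri /index.html;
    }
}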
