My nginx server is currently proxying my Node backend (which listens on port 3000) with a simple:
location /api/ {
    proxy_pass http://upstream_1;
}
Where upstream_1 is my node cluster defined in nginx.conf (on port 3000).
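For reference, that upstream definition presumably looks something like this (a sketch reconstructed from the description; the exact entries are an assumption):

upstream upstream_1 {
    # Node cluster from the question, listening on port 3000
    server 127.0.0.1:3000;
}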
I now need to add SSL on top of these HTTP connections, so I have the following question: do I only need to configure nginx to enable SSL, so that it automatically decrypts the request and passes it unencrypted to Node, which can handle it normally? Or do I need to configure Node.js to support SSL as well?
If you're using nginx to handle SSL (i.e. terminate TLS), then your Node server will just use plain HTTP; a minimal sketch of the Node side follows the config below.
upstream nodejs {
    server 127.0.0.1:4545 max_fails=0;
}
server {
    listen 443 ssl;

    ssl_certificate newlocalhost.crt;
    ssl_certificate_key newlocalhost.key;

    server_name nodejs.newlocalhost.com;

    add_header Strict-Transport-Security max-age=500;

    location / {
        proxy_pass http://nodejs;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
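For completeness, here is a minimal sketch of the Node side under this setup: plain HTTP, no TLS code. The port matches the upstream above, but the rest (including reading X-Forwarded-Proto) is illustrative, not taken from the answer:

// Plain HTTP server; TLS is terminated by nginx in front of it.
var http = require('http');

http.createServer(function (req, res) {
    // Set by the nginx config above; tells the app which scheme the client used.
    var proto = req.headers['x-forwarded-proto']; // "https"
    res.end('Hello over ' + proto);
}).listen(4545, '127.0.0.1'); // matches "server 127.0.0.1:4545" in the upstream block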
I'm learning reverse proxying with nginx for the first time, and the following isn't working for me.
I'm trying to reroute requests from http://localhost to an API server I have running at http://localhost:8080:
server {
    listen 80;

    location / {
        proxy_pass http://localhost:8080;
    }
}
When I hit http://localhost, I'm simply shown the "Welcome to nginx" splash screen.
If I hit http://localhost:8080, I see my API.
I have a Node Express service running at :8080, which I can hit directly, but shouldn't http://localhost be proxied there too?
When I set up an nginx domain that forwards requests to a Node server, it looks like the config below. For server_name, you can use localhost as a value so the site can be reached via localhost. You can also add default_server to make this the default server config.
Note: Only one active config can contain default_server, otherwise nginx will throw errors.
Note: When using default_server, nginx will catch localhost in that server config. Otherwise you need to specify localhost in the list of server_name values (separated by spaces).
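A hypothetical example of such a server_name list (the domain is a placeholder):

server_name example.com localhost;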
server {
    # Setup the domain name(s)
    server_name example.com;
    listen 80 default_server;

    # If you would like to gzip your stuff
    gzip on;
    gzip_min_length 1;
    gzip_types *;

    # Setup the proxy
    # This will forward all requests to the server
    # and then it will relay the server's response back to the client
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache_bypass $http_upgrade;
    }
}
Found out that adding this to my nginx.conf fixes the issue:
listen [::]:80;
For some reason listen 80; doesn't catch my http://localhost requests.
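In context, the server block from the question above would then look something like this (a sketch; the proxy target is unchanged):

server {
    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://localhost:8080;
    }
}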
I have a Node Express web server starting up on my Debian Linux box on ports 8080-8083 using a pm2 cluster.
I have an nginx reverse proxy set up on the server to route correctly to the Node Express server, with the following /etc/nginx/sites-available/default:
server {
    listen 80;
    listen [::]:80;
    server_name a.registered.dns.domain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name a.registered.dns.domain.com;

    ssl_certificate /home/admin/certs/a.registered.dns.domain.com.chained.crt;
    ssl_certificate_key /home/admin/certs/a.registered.dns.domain.com.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://nodes;
    }

    location /socket.io {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://nodes;

        # enable WebSockets
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
    }
}

upstream nodes {
    # enable sticky session based on IP
    ip_hash;
    server 127.0.0.1:8080 fail_timeout=20s;
    server 127.0.0.1:8081 fail_timeout=20s;
    server 127.0.0.1:8082 fail_timeout=20s;
    server 127.0.0.1:8083 fail_timeout=20s;
}
This creates the websocket just fine between the server and the client: the connection is upgraded from long polling to a websocket with status 101. If I do something from the site that sends an emit over the socket, the server receives it and acts on it appropriately. So far so good.
However, if I do something elsewhere that causes the server to emit out to the client, I can see on the server (running with DEBUG='socket.io*' pm2 restart http-server --update-env) that the socket information is received and emitted out, but the client never receives the data packet it should. I can confirm this by running localStorage.debug = '*'; in the console of my Chrome dev tools: I see the emit go out, but nothing except ping and pong packets on the websocket.
This all works correctly if I open ports 8080-8083 and use a plain HTTP connection, so it feels as if there is some issue with the nginx reverse proxy for the SSL connection of my site.
My current nginx setup takes all requests over HTTP, redirects them to HTTPS, and then passes the request to my Node server running on localhost on my Ubuntu box.
My question is: how do I make it only accept requests for my api.domain.com (which is hosted on my Ubuntu cloud server) that come from my app.domain.com (which is hosted somewhere else)? So if you visit api.domain.com directly you never get passed to the Node server, and if you send a request from anywhere other than app.domain.com you also never get passed to the Node server.
server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;
    return 301 https://$host$request_uri;
}

# HTTPS — proxy all requests to the Node app
server {
    # Enable HTTP/2
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name api.maindomain.com;

    # Use the Let’s Encrypt certificates
    ssl_certificate /etc/letsencrypt/live/api.maindomain.com/xxxx.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.maindomain.com/xxx.pem;

    # Include the SSL configuration from cipherli.st
    include snippets/ssl-params.conf;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:xxxx/;
        proxy_ssl_session_reuse off;
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
        proxy_redirect off;
    }
}
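One common building block for the first half of this (serving only api.maindomain.com) is a catch-all default server that drops requests for any other Host name before they reach the proxied block; restricting callers to app.domain.com is a separate problem and would need something like an Origin/Referer check or client authentication, since server_name only looks at the requested Host. A sketch of that catch-all, reusing the certificate paths above; the details are an assumption, not part of the original setup:

# Hypothetical catch-all: anything not matching server_name api.maindomain.com
# lands here and the connection is closed without a response (444).
server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    server_name _;

    ssl_certificate /etc/letsencrypt/live/api.maindomain.com/xxxx.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.maindomain.com/xxx.pem;

    return 444;
}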
I have a backend server on Node.js and I am trying to set up two-way SSL between nginx and this backend server, but I get the following error:
2015/11/02 06:51:02 [error] 12840#12840: *266 upstream SSL certificate does not match "myLocalMachine" while SSL handshaking to upstream
This happens when I set proxy_ssl_verify on; if it's off, it works fine. Here is my nginx setup:
upstream myLocalMachine {
    server MyPublicIP:8888;
}

server {
    listen 8222 ssl;

    proxy_cache two;

    ssl_certificate /etc/nginx/ssl/server-cert.pem;
    ssl_certificate_key /etc/nginx/ssl/server-key.pem;
    ssl_client_certificate /etc/nginx/ssl/client-cert.pem;
    ssl_verify_client on;

    location / {
        proxy_ssl_session_reuse off;
        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate /etc/nginx/ssl/backend-server-cert.pem;
        proxy_ssl_certificate /etc/nginx/ssl/server-cert.pem;
        proxy_ssl_certificate_key /etc/nginx/ssl/server-key.pem;
        proxy_ssl_password_file /etc/nginx/ssl/pwd.pass;

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass https://myLocalMachine;
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        proxy_cache_valid any 1m;
        proxy_cache_min_uses 1;
        #proxy_cache_bypass $cookie_nocache $arg_nocache$arg_comment;
        proxy_cache_methods GET HEAD POST;
        proxy_cache_key "$request_body";
    }
}
Solution:
I had used the URL (say, example.com) in the certificates, but a custom name, myLocalMachine, in the upstream and proxy_pass. Replacing the custom name with the URL fixed it.
Use the URL in the upstream block and in proxy_pass, as below:
upstream example.com {
    # ip & port are just examples
    server 11.11.11.11:2222;
}

proxy_pass https://example.com;
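An alternative worth noting (not from the original answer): nginx can keep a custom upstream name as long as proxy_ssl_name tells it which name to verify on the upstream certificate, with proxy_ssl_server_name on to send that name as SNI. A sketch, where example.com stands for whatever name the backend certificate was actually issued for:

location / {
    proxy_pass https://myLocalMachine;
    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /etc/nginx/ssl/backend-server-cert.pem;
    # Verify the upstream certificate against the name it was issued for
    proxy_ssl_name example.com;
    # Also send that name via SNI during the upstream handshake
    proxy_ssl_server_name on;
}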
I'm using this buildpack to serve static files on Heroku with a Node + nginx setup. While static assets are served properly, trying to serve content through Node results in a 502 Bad Gateway. Node on its own works fine, and so does nginx; the problem is when the two need to work together, which I guess is because I haven't configured the nginx upstream settings correctly.
Here's my nginx conf:
worker_processes 1;
error_log /app/nginx/logs/error.log;
daemon off;

events {
    worker_connections 1024;
}

http {
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    gzip on;

    upstream node_conf {
        server 127.0.0.1:<%= ENV['PORT'] %>;
        keepalive 64;
    }

    server {
        listen <%= ENV['PORT'] %>;
        server_name localhost;

        location / {
            root html;
            index index.html index.htm;

            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header X-NginX-Proxy true;
            proxy_set_header Connection "";
            proxy_http_version 1.1;
            proxy_pass http://node_conf;
        }

        location ~* ^.+\.(jpg|gif|png|ico|css|js|html|htm)$ {
            root /app;
            access_log off;
            expires max;
        }

        location /static {
            root /app;
            index index.html index.htm;
        }
    }
}
My _static.cfg:
SERVER_TYPE="nginx"
BUILD_WEB_ASSETS="true"
My node server:
var app = require('express')()
app.get('/', function(req, res) { res.send('This is from Node.') })
app.listen(process.env.PORT)
I also have a sample html file in /static to test if nginx works:
<html>This is from nginx.</html>
With this config, appname.herokuapp.com should display "This is from Node." but instead I get the 502.
appname.herokuapp.com/static displays "This is from nginx" as it should, so no problems with nginx and static content.
I have tried every combination of values for upstream in nginx server settings but none have worked. What else can I try to make nginx proxy requests to node?
Here's my Heroku Procfile in case it helps: web: bin/start_nginx
I am not really familiar with Heroku, and pretty new to nginx, but I'll give it a shot: to me it looks like your nginx config has nginx and the Node.js app using the same port (<%= ENV['PORT'] %>).
What you want is for nginx to listen for incoming connections (usually on port 80) and forward them to the Node.js app. Here is an example nginx config:
# the IP(s) on which your node server is running. I chose port 4000.
upstream xxx.xxx.xxx.xxx { # your IP address as seen from the internet
    server 127.0.0.1:4000; # your local node.js process
}

# the nginx server instance
server {
    listen 0.0.0.0:80; # have nginx listen for all incoming connections on port 80
    server_name my-site;
    access_log /var/log/nginx/my-site.log;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://xxx.xxx.xxx.xxx/; # your IP address as seen from the internet
        proxy_redirect off;
    }
}
This config is working for me on a web server I am hosting in my living room. Good luck!
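Applied to the Heroku setup from the question, the same idea would mean letting nginx keep <%= ENV['PORT'] %> and moving the Express app to a separate internal port that only nginx talks to; the upstream node_conf block would then point at that internal port instead of <%= ENV['PORT'] %>. A one-line sketch (the NODE_PORT variable name is hypothetical):

// Express listens on an internal port; nginx binds Heroku's $PORT and proxies to it.
app.listen(process.env.NODE_PORT || 4000, '127.0.0.1')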
Here's a README with information on getting nginx and Node.js working together, from a project I created a while back. An example nginx.conf is also included.
As an overview, you're basically just creating sockets with Node and then configuring nginx to proxy to them as an upstream. I commonly use this when I want to run multiple Node processes and have nginx stand in front of them.
It also covers working with socket.io out of the box, so you can use it to see how to configure your Node instance as well.
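As a rough illustration of that pattern (names, ports, and paths here are hypothetical, not taken from the linked README): each Node process listens on a local port or Unix socket, and nginx load-balances across them in an upstream block while still allowing socket.io's WebSocket upgrade:

upstream node_app {
    # one entry per Node process; unix:/tmp/app.sock style entries also work
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://node_app;
        # required for socket.io / WebSocket upgrades
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}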