I'm running two websites on a DigitalOcean droplet, and I've now upgraded both of them to HTTPS. One of the sites makes a request to a Node server running on the same droplet to send emails, but since enabling HTTPS the browser blocks this request because it isn't HTTPS (logically, since it's mixed content). So I tried to expose the Node mail server through a proxy_pass on one of the sites that has HTTPS enabled.
To test, the Node server I want to proxy to looks like this:
const express = require('express');
const bodyParser = require('body-parser');
const cors = require('cors');
const app = express();

exports.start = () => {
  app.use(bodyParser.json());
  app.use(cors());
  app.get('/', (req, res) => res.send('Hello World!'));
  app.listen(4004, () => console.log('Server listening on port 4004'));
};
this.start();
My Nginx sites-enabled/default looks like this:
server {
server_name mywebsite.nl www.mywebsite.nl;
location / {
proxy_pass http://<PRIVATE IP>:4000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
location /ws {
proxy_pass http://<PRIVATE IP>:4002;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
}
location /mailmeister {
proxy_pass http://<PRIVATE IP>:4004;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/mywebsite.nl/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/mywebsite.nl/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
So /mailmeister is where I want all requests to the Node mail server to end up.
When I run curl localhost:4004 or curl <PRIVATE IP>:4004 I get the expected Hello World! response. But when I run curl https://mywebsite.nl/mailmeister it returns a page that says
Cannot GET /mailmeister
Anyone know how to fix this?
In a nutshell, since you have:
location /mailmeister {
proxy_pass http://<PRIVATE IP>:4004;
}
Since that proxy_pass has no URI part, nginx forwards the original request path (/mailmeister) unchanged to the backend, so the Express app has to handle it like so:
app.get('/mailmeister', (req, res) => res.send('Hello World!'));
I haven't tested this particular setup. There is a related answer where this is explained further; once you get it, the documentation makes more sense.
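If you would rather keep the route definitions free of the /mailmeister prefix, another option is to mount an Express router under the path nginx forwards. This is only a sketch of that idea, not tested against this exact setup:

const express = require('express');
const app = express();
const mailRouter = express.Router();

// Routes on the router are written without the prefix...
mailRouter.get('/', (req, res) => res.send('Hello World!'));

// ...because Express strips the mount path before matching,
// so a proxied request for /mailmeister reaches the '/' handler above.
app.use('/mailmeister', mailRouter);

app.listen(4004, () => console.log('Server listening on port 4004'));

Either way, the key point is the same: whatever URI nginx forwards has to have a matching handler on the Node side.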
Related
I have a websocket server (Node.js) which runs fine on localhost and on a previous Heroku deployment. I am now migrating to Google Compute Engine and running into some issues.
The websocket handshake is failing, returning a 301 error. As pointed out in this answer, this can be due to the request going through front-end servers that do not support a websocket connection, and it can be worked around by targeting ws://my_external_gce_ip directly. I am wondering if there is some load-balancing configuration I can update so that I can address my backend using the custom domain name.
While I understand the problem, it seems to me that the domain should resolve to the external IP after a DNS lookup, so I don't really understand the constraint.
Sorry if this is very obvious. I'm new to GCE and have been googling all day trying to figure this out. I will paste my code below as well as the nginx config, but I don't think either is particularly helpful, as everything works fine when addressing by IP.
index.js:
/* requirements */
var bodyParser = require("body-parser");
const WebSocket = require("ws");
const http = require("http");
const express = require("express");
const port = process.env.PORT || 3000;
/*
server definition and config
*/
const app = express();
app.use(bodyParser.json());
const server = http.createServer(app);
/*
web socket stuff
*/
const webSocketServer = new WebSocket.Server({
server,
});
webSocketServer.on("connection", (webSocket) => {
console.log("board trying to connect...");
webSocket.on("message", (data) => {
webSocketServer.clients.forEach((client) => {
if (client === webSocket && client.readyState === WebSocket.OPEN) {
client.send("[SERVER MESSAGE]: You are connected to the server :)");
}
});
});
});
/*
activate server
*/
server.listen(port, () => {
console.log(`Server is now running on port ${port}\n`);
});
nginx config
server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name _;
location / {
# we're actually going to proxy all requests to
# a Nodejs backend
proxy_pass http://localhost:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
# I added this baby in
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
server
{
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name my_domain; # managed by Certbot
location / {
# we're actually going to proxy all requests to
# a Nodejs backend
proxy_pass http://localhost:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
listen [::]:443 ssl ipv6only=on; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/my_domain/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/my_domain/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server
{
if ($host = my_domain) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80 ;
listen [::]:80 ;
server_name my_domain;
return 404; # managed by Certbot
}
Thanks very much in advance, and sorry if this is a noob question. Total noob when it comes to load balancing.
Solved. After a lot more googling I found this thread of people facing the same problem, most of them using either Apache servers or Elastic Beanstalk, so not nginx.
It seems that a lot of people get websockets to "work" using socket.io, but they don't really have a duplex connection, as it falls back to long polling.
In my case the answer was simple: I didn't include the server name in my nginx config (facepalm), and I may have forgotten to include a header. The HTTPS forwarding now looks like this (and addressing using my domain works):
location / {
# we're actually going to proxy all requests to
# a Nodejs backend
proxy_pass http://localhost:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 90;
}
Don't forget to restart nginx after you update the config:
sudo systemctl restart nginx
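To confirm the handshake really succeeds through the domain, a quick client check helps. This is just a sketch that assumes the ws package is installed and uses my_domain as a placeholder for the real domain:

// ws-check.js: minimal connectivity test for the proxied WebSocket endpoint
const WebSocket = require("ws");

// wss:// terminates at nginx on 443, which proxies plain ws to the Node backend
const socket = new WebSocket("wss://my_domain");

socket.on("open", () => socket.send("hello from the test client"));
socket.on("message", (data) => console.log("server replied:", data.toString()));
socket.on("error", (err) => console.error("handshake failed:", err.message));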
I have a Node.js application that is running on port 3000.
In front of it I run an nginx reverse proxy. It works fine on port 80. I have installed a certificate with Certbot. Now I have the certificate and have set up my proxy to redirect all non-HTTPS traffic to HTTPS; on port 443 I listen and pass the connection to my application. Somehow my browser just hangs and I don't know why.
Here I have added two server blocks:
server {
server_name mywebsite.at www.mywebsite.at;
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/mywebsite.at/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mywebsite.at/privkey.pem;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
server {
server_name mywebsite.at www.mywebsite.at;
listen 80;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
In this case I can reach http://mywebsite.at but I can't reach https://mywebsite.at. It says "can't reach the website". Any idea why this error appears?
I have run sudo nginx -t and there are no errors.
I have found the problem, guys. Port 443 was not open; ufw allow 443 fixed the issue.
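For anyone hitting the same symptom, a quick way to check and fix this (assuming ufw is the active firewall, as it was here):

sudo ufw status      # check whether 443 is currently allowed
sudo ufw allow 443   # open the HTTPS port if it is not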
I am setting up nginx so that I can access my API, built using Express, through a URL like example.com/api.
Here is my nginx config
upstream appfrontend {
server localhost:9008 fail_timeout=0;
}
upstream api {
server localhost:3001;
}
server {
listen 80;
listen [::]:80;
server_name hospoline.com www.hospoline.com;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
listen [::]:443;
server_name example.com; # replace this with your domain
root /var/www/html/example-certbot-webroot;
# The public and private parts of the certificate are linked here
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
location /.well-known {
root /var/www/html/example;
}
location / {
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://appfrontend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 900s;
}
location /api {
proxy_pass http://api;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
When I visit my site example.com, the front end loads perfectly.
When I visit example.com/api/fetch_doctors, I get a 502 Bad Gateway error.
My API works fine on localhost. When I send a request to localhost:3001 on my local computer, I get the list of doctors.
Both my front-end server and backend server are run using forever.
I am really lost and have been breaking my head over this for a full day. I have no idea where I'm going wrong. Any help in the right direction would be great! Thank you all!
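Since nginx returns 502 Bad Gateway when it cannot get a valid response from the upstream, one way to narrow this down is to test the upstream directly from the droplet itself rather than from your local machine. The commands below are only a suggested starting point; the path mirrors the failing request because proxy_pass without a URI forwards the original /api/... path unchanged:

# Run these on the droplet, not locally
curl -v http://localhost:3001/api/fetch_doctors   # does the Express app answer on the port nginx proxies to?
sudo ss -tlnp | grep 3001                         # is anything listening on 3001 at all?
forever list                                      # is the API process still running under forever?

If the direct request works but nginx still returns 502, the nginx error log (/var/log/nginx/error.log) usually says why the connection to the upstream failed.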
I have set up my Express & Node app to use my Let's Encrypt SSL certs like so:
var fs = require('fs');
var http = require('http');
var https = require('https');
var express = require('express');
var app = express();

var options = {
  key: fs.readFileSync('/etc/letsencrypt/live/example.com/privkey.pem'),
  cert: fs.readFileSync('/etc/letsencrypt/live/example.com/cert.pem')
};
// Create an HTTP service.
http.createServer(app).listen(3060);
// Create an HTTPS service identical to the HTTP service.
https.createServer(options, app).listen(3061);
I can reach my API at example.com (hitting port 80 in nginx).
But I can't reach it if I hit https://example.com (port 443).
However, I can reach it if I open up port 3061 and request https://example.com:3061, and it works correctly with SSL.
My question is: how do I set up nginx to correctly forward requests on port 443 to my server on port 3061 for SSL?
Do I need to include the cert information as suggested elsewhere, if my app is dealing with it?
My nginx config is like this:
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://localhost:3060;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
server {
listen 443 ssl;
server_name example.com;
location / {
proxy_pass https://localhost:3061;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
Thanks
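For what it's worth, the common pattern here is to let nginx terminate TLS itself and proxy plain HTTP to the app, because a listen 443 ssl server block needs its own ssl_certificate directives regardless of what the backend does. A sketch along those lines, assuming the usual /etc/letsencrypt/live/example.com/ paths:

server {
    listen 443 ssl;
    server_name example.com;

    # nginx owns the TLS handshake...
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # ...and talks plain HTTP to the app, so proxying to the HTTP listener on 3060 is enough
        proxy_pass http://localhost:3060;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Keeping proxy_pass https://localhost:3061 can also work, but the 443 server block still needs the ssl_certificate lines so nginx can accept the browser's TLS connection in the first place.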
I need to access my Node.js API, which is reverse proxied by an nginx server, over HTTPS. To achieve that I generated a self-signed cert in /etc/nginx and then modified the nginx configuration like below:
server {
listen 443;
server_name api.something.com;
ssl_certificate /etc/nginx/cert.crt;
ssl_certificate_key /etc/nginx/cert.key;
ssl on;
add_header Strict-Transport-Security max-age=500;
location / {
proxy_pass http://10.132.176.xx:8888; #private ip
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
Now whenever I try to access https://api.something.com I always get a connection refused error in Google Chrome. Plain HTTP works fine. Any clues?
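A connection refused error usually means nothing is accepting connections on 443 at all, so a few checks on the server itself can narrow it down (the commands assume a typical Ubuntu/Debian setup, in line with the firewall issue from the earlier question):

sudo nginx -t                  # is the config valid, and has nginx been reloaded since the change?
sudo ss -tlnp | grep ':443'    # is nginx actually listening on 443?
sudo ufw status                # if a firewall such as ufw is active, is 443 allowed through?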