Proxy Passing Socket.IO connections on nginx not working - node.js

I am trying to proxy pass a Node.js/Socket.IO app with nginx.
The client is an HTML file with some JavaScript in it:
<html>
<head>
    <script src="socket.io.js"></script>
    <script>
        var socket = io('http://localhost:80');
        socket.on('welcome', function (data) {
            console.log('Server says: ' + data);
            socket.emit('client-response', 'thank you!');
        });
    </script>
</head>
<body>
Socket.io
</body>
</html>
And the server block in the nginx.conf file that is supposed to do the proxy pass is this:
server {
    listen 80;
    listen [::]:80;
    #root /usr/share/nginx/html;
    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_pass "http://localhost:2156";
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
I have my Node.js app up and running on port 2156.
When I test this, the client tries to reach Socket.IO on port 80 and fails with a 404 error (nginx was supposed to proxy the request to port 2156, but it didn't).
What am I missing here?

Edit: I've changed the client to connect at "http://localhost/socket.io/" and rewrote the nginx.conf like this:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /usr/share/nginx/html;

    location /socket.io/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_pass http://localhost:2156/socket.io/;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
And it worked.
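As a side note (not part of the original fix): the nginx documentation's WebSocket proxying example derives the Connection header with a map, so plain polling requests are not forced to carry Connection: Upgrade. A sketch adapted to this config (the map block must live at the http level):

# At the http level:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Inside the server block, in place of the hard-coded Connection header:
location /socket.io/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header Host $host;
    proxy_pass http://localhost:2156/socket.io/;
}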

Related

Ubuntu + Nginx + NodeJS: 502 Bad Gateway

Currently I am trying to build a small Node.js API which should work on my server behind an already existing and working nginx setup.
nginx.conf:
server {
    listen 80;
    listen [::]:80;
    server_name *.mydomain.com;

    if ($host = www.mydomain.com) {
        return 301 https://$host$request_uri;
    }
    if ($host = mydomain.com) {
        return 301 https://$host$request_uri;
    }
    if ($host = hello.mydomain.com) {
        return 301 https://$host$request_uri;
    }
    return 404;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name mydomain.com www.mydomain.com;
    root /var/www/html;
    index index.html;

    ssl_certificate /etc/ssl/cert.pem;
    ssl_certificate_key /etc/ssl/key.pem;
    ssl_client_certificate /etc/ssl/cloudflare.crt;
    ssl_verify_client on;

    location / {
        try_files $uri/index.html $uri.html $uri/ $uri =404;
    }
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name hello.mydomain.com;
    root /var/www/hello;

    ssl_certificate /etc/ssl/cert.pem;
    ssl_certificate_key /etc/ssl/key.pem;
    ssl_client_certificate /etc/ssl/cloudflare.crt;
    ssl_verify_client on;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://127.0.0.1:3000$request_uri;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
index.js:
const express = require('express');
const app = express();

app.get("/", (request, response) => {
    response.end("hello world");
});

app.listen(3000, () => console.log('listening'));
So I have mydomain.com and www.mydomain.com; they have nothing to do with Node.js and work fine.
The Node.js site sits behind hello.mydomain.com and just returns a 502 Bad Gateway error. When I am on my server (where everything lives) and simply do:
curl localhost:3000
I get the right response. So the Node.js code works (I even verified it locally), but nginx is not able to act as a proxy and "speak" with the local Node.js/Express app.
Does anyone know why this does not work? I already searched through many tutorials, but I just cannot find the solution. :/
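One thing I would double-check (an assumption on my part, not something stated in the question) is whether nginx can actually reach the address it proxies to, independent of the variable URI. A minimal sketch of the proxied location with nothing extra on the proxy_pass line:

location / {
    proxy_http_version 1.1;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # No URI part after the port, so the original request path is
    # forwarded to the Express app unchanged.
    proxy_pass http://127.0.0.1:3000;
}

If it still returns 502, the nginx error log (/var/log/nginx/error.log by default) records why the upstream connection failed, which narrows it down to a wrong address, a permissions problem, or the app not listening.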

Timeout with socket io

I have an application using socket.io with Node and Express. I'm also using AWS EC2 and Nginx.
I'm getting a timeout with socket io.
The error is:
GET https://vusgroup.com/socket.io/?EIO=3&transport=polling&t=MnUHunS 504 (Gateway Time-out)
Express file:
var port = 8090;
host = 'https://18.237.109.96'

var app = express(host);
var webServer = http.createServer(app);

...

// Start Socket.io so it attaches itself to Express server
var socketServer = socketIo.listen(webServer, {"log level": 1});

// listen on port
webServer.listen(port, function () {
    console.log('listening on http://localhost:' + port);
});
Nginx file:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_pass http://18.237.109.96:8090/;
    }
}

server {
    server_name vusgroup.com www.vusgroup.com; # managed by Certbot
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot

    ...
    ssl stuff
    ...

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_pass http://18.237.109.96:8090/;
    }

    location /socket.io/ {
        proxy_pass http://18.237.109.96:3000;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
I've tried changing the proxy_pass for socket.io to http://18.237.109.96:8090; but that gave me a 400 error.
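For comparison, here is the /socket.io/ block from the first question above, adapted to the port that webServer.listen() actually uses in the Express file (8090). This is a sketch based on my reading of the question, not a confirmed fix for the 400 error:

location /socket.io/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # 8090 is where the Node/Socket.IO server listens; nothing in the
    # question listens on 3000, which is consistent with the polling
    # request timing out.
    proxy_pass http://18.237.109.96:8090;
}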

How to set up nginx reverse proxy with multiple node apps

I have two Vue.js apps that I want to run on the same domain (e.g., https://localhost:8080/app1 and https://localhost:8080/app2). Both apps run in separate Docker containers, and I have set up a third Docker container running nginx as a reverse proxy in order to have SSL.
I am able to visit the apps at the wanted locations, but there are some resources missing (images, fonts etc). I realize that my nginx server looks for them at https://localhost:8080/my_resource, but I can't figure out how to forward these to the correct locations (i.e., https://localhost:8080/app1/my_resource, and similar for app2).
I've tried using the "try_files" directive in nginx, like so:
location / {
    try_files $uri $uri/ http://app1:8080 http://app2:8080;
}
but it does not work.
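As far as I know, that is expected: the fallback parameters of try_files can only be internal URIs, a named location, or a status code; they cannot point at another server by URL. A minimal sketch of the named-location form (illustrative only, reusing the app1 upstream name from the config below):

location / {
    # try_files checks files on disk, then falls back to the named
    # location, which is free to proxy the request.
    try_files $uri $uri/ @app1;
}

location @app1 {
    proxy_pass http://app1:8080;
}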
Here is my nginx config file:
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    return 301 https://$server_name$request_uri;
}

# Change the default configuration to enable ssl
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    ssl_certificate /etc/nginx/certs/my_app.crt;
    ssl_certificate_key /etc/nginx/certs/my_app.key;

    server_name localhost;
    server_tokens off;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    location / {
        if ($http_referer = "https://localhost:8080/app1/") {
            proxy_pass http://app1:8080;
            break;
        }
        if ($http_referer = "https://localhost:8080/app2/") {
            proxy_pass http://app2:8080;
            break;
        }
    }

    location /app1/ {
        proxy_pass http://app1:8080/;
    }

    location /app2/ {
        proxy_pass http://app2:8080/;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
And this is my docker-compose file:
version: "3.6"
services:
  app1:
    image: "app1"
    expose:
      - "8080"
    command: ["serve", "-s", "/app/app1/dist", "-l", "8080"]
  app2:
    image: "app2"
    expose:
      - "8080"
    command: ["serve", "-s", "/app/app2/dist", "-l", "8080"]
  nginx:
    image: "nginx"
    ports:
      - "8080:443"
    depends_on:
      - "app1"
      - "app2"
Thanks for any input :)
After a lot of trial and error, I found a solution. I do not think this is the optimal solution, but it's working. Here is my nginx configuration:
# Pass any http request to the https service
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    return 301 https://$server_name$request_uri;
}

# Configure the ssl service
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    ssl_certificate /etc/nginx/certs/my_app.crt;
    ssl_certificate_key /etc/nginx/certs/my_app.key;

    server_name localhost;
    server_tokens off;

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    location / {
        # If app1 cannot serve the request, retry it against app2
        # via the named location below.
        proxy_intercept_errors on;
        error_page 404 = @second;
        proxy_pass http://app1:80;
    }

    location @second {
        proxy_pass http://app2:80;
    }

    location /app1/ {
        # Strip the /app1 prefix before handing the request to the app.
        rewrite ^/app1(.*) /$1 break;
        proxy_pass http://app1:80;
    }

    location /app2/ {
        rewrite ^/app2(.*) /$1 break;
        proxy_pass http://app2:80;
    }

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

Nginx handle 500 internal server error security issue

I am trying to fix a security vulnerability where a 500 Internal Server Error discloses the location of a file on the server.
My issue is similar to the one shown here (https://cdn-images-1.medium.com/max/1600/1*2DAwIEJhgLQd82t5WTgydA.png) and described in this article (https://medium.com/volosoft/running-penetration-tests-for-your-website-as-a-simple-developer-with-owasp-zap-493d6a7e182b).
I tried
proxy_intercept_errors on;
and an
error_page 500
redirect, but it didn't help.
Any help on this?
This is a basic example of implementing proxy_intercept_errors on;
upstream foo {
    server unix:/tmp/foo.sock;
    keepalive 60;
}

server {
    listen 8080 default_server;
    server_name _;

    location = /errors/5xx.html {
        internal;
        root /tmp;
    }

    location / {
        proxy_pass http://foo;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_intercept_errors on;
        error_page 500 501 502 503 504 505 404 =200 /errors/5xx.html;
    }
}
Notice the:
error_page 500 501 502 503 504 505 404 =200 /errors/5xx.html;
This will intercept the listed 5xx errors plus the 404 and return them with a 200 status.
Also, notice that the /errors/5xx.html location uses root /tmp; therefore you still need to create the file /tmp/errors/5xx.html:
$ mkdir /tmp/errors
$ echo "intercepting errors" > /tmp/errors/5xx.html
You don't necessarily need a file to reply to the request; you could also use something like this:
location = /errors/5xx.html {
    internal;
    default_type text/plain;
    return 200 'Hello world!';
}
In your case, the 404 File Not Found could be handled differently, for example:
upstream failover {
    server server2:8080;
}

server {
    listen 80;
    server_name example.com;
    root /tmp/test;

    location ~* \.(mp4)$ {
        try_files $uri @failover;
    }

    location @failover {
        proxy_pass http://failover;
    }
}
In this case, if a file ending in .mp4 is not found, it will try another server; then, if required, you can still intercept the error there.

How to use nginx proxy_pass subroutes from node app?

I have a Node app running on port 8002 with different subroutes like '/login' or '/facebook'. I also have nginx (v1.6.0) and the following config:
server {
    listen 80;
    server_name my-ghost-blog.com;
    client_max_body_size 10M;

    location / {
        proxy_pass http://localhost:2368/;
        proxy_set_header Host $host;
        proxy_buffering off;
    }

    location ~ ^/(sitemap.xml) {
        root /var/www/ghost;
    }

    location ~ ^/(robots.txt) {
        root /var/www/ghost;
    }

    # proxy to a node app running on port 8002
    location ^~ /auth/ {
        proxy_pass http://localhost:8002/;
    }
}
When I go to '/auth/' it works, but when I try to go to one of the Node app's subroutes, a 404 appears because nginx doesn't know how to handle it.
Any ideas?
Thanks
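One detail that often matters here (an observation on my part, not a confirmed diagnosis) is whether proxy_pass carries a URI part, because that decides what path the Node app actually sees. A short sketch of the two behaviours:

location ^~ /auth/ {
    # With a URI part ("/"), nginx replaces the matched "/auth/" prefix,
    # so a request for /auth/login reaches the app as /login.
    proxy_pass http://localhost:8002/;
}

# location ^~ /auth/ {
#     # Without a URI part, the path is forwarded unchanged,
#     # so /auth/login reaches the app as /auth/login.
#     proxy_pass http://localhost:8002;
# }

Which form is correct depends on whether the app defines its routes as /login or as /auth/login.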
