We have recently configured HTTPS for our backend server and are now running into some issues with sockets. The sockets sort of work, but data doesn't seem to be transferred between devices as intended. The socket server is on port 3000, and instead of configuring SSL certificates for the sockets themselves I just proxy_pass HTTPS requests to localhost port 3000. My suspicion is that the problem is in my nginx config; any ideas where I might be going wrong?
server {
    server_name app.domain.com;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_pass http://localhost:3000;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/app.domain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/app.domain.com/privkey.pem; # managed by Certbot
}
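One refinement worth trying (a sketch, not part of the config above) is to derive the Connection header from $http_upgrade with a map block at the http level, so plain requests aren't proxied with Connection: Upgrade while websocket handshakes still are:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

and then, inside the location block, replace the hard-coded header with:

    proxy_set_header Connection $connection_upgrade;

The map has to live outside the server block (for example in nginx.conf or a conf.d include).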
Related
This was working when testing out the app. When I switched the DNS over to the server and then added the SSL cert, SignalR stopped working (my chat). I presume it's to do with the proxy now redirecting to port 443. The rest of the website works, just not its chat functionality.
Firefox can’t establish a connection to the server at wss://www.my-website.com/chatHub?id=qDsSrV-APYXpnyk_EfsrXw. signalr.min.js:16:110126
Uncaught (in promise) Error: Server returned handshake error: Handshake was canceled.
And the nginx config:
server {
    server_name www.my-website.com;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/www.my-website.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/www.my-website.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
}

server {
    if ($host = www.my-website.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name www.my-website.com;
    return 404; # managed by Certbot
}
Any help getting SignalR working again would be greatly appreciated, thanks.
So, it turns out that when Certbot edited the config, it added an extra, unnecessary }, and that's all that was breaking it. The config was broken, so nginx kept serving a cached state: I was viewing the website via https:// but was trying to make the websocket connection on port 80, and it was failing because it was insecure.
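For anyone hitting the same thing: a plain syntax check would have flagged the stray brace before the reload, since nginx keeps serving the last good configuration when the new one fails to parse, which is exactly the "cached state" above.

sudo nginx -t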
I have bought a domain (http://qify.app) on Google Domains.
When I open it in Chromium or Firefox, nothing comes up (ERR_CONNECTION_REFUSED).
My current setup:
An EC2 AWS machine running my Node.js backend on port 3000 (localhost)
An nginx reverse proxy that forwards all inbound traffic on port 80 to port 3000 (the backend). Current nginx config at /etc/nginx/sites-enabled/default:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name localhost;
    root /usr/share/nginx/html;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Also, I can curl 15.237.134.217 just as well as curl qify.app (and get the correct HTML):
<!DOCTYPE html><html>
...
</html>
Final nginx config (working for me; I needed two server blocks):
server {
    listen 443 ssl http2 ipv6only=off;
    server_name qify.app;

    ssl_certificate /etc/letsencrypt/archive/qify.app/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/archive/qify.app/privkey.pem;
    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    keepalive_timeout 70;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

server { # Redirects all port 80 to 443 (with a 301 redirect)
    listen [::]:80 http2 ipv6only=off;
    server_name qify.app www.qify.app;
    return 301 https://qify.app$request_uri;
}
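After editing, a config test and reload pick the change up; the commands below are standard nginx/systemd and may differ slightly on other setups:

sudo nginx -t
sudo systemctl reload nginx
curl -I https://qify.app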
The .app TLD has a baked-in HSTS policy: HTTPS is required on every .app domain. Chrome and Firefox, along with several other browsers, include .app in their preloaded HSTS list, which means they will always go straight to https on port 443. See https://blog.google/technology/developers/introducing-app-more-secure-home-apps-web/ as a reference for this HTTPS requirement.
The nginx config you showed is only listening on port 80. That is why curl http://qify.app works: curl uses port 80 and doesn't carry the preloaded HSTS list that those browsers do.
Generate a certificate for your domain, configure nginx to listen on port 443, and your browsers will be able to access it that way.
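If you use Certbot with its nginx plugin, something along these lines will obtain the certificate and add the 443 listener for you (the domain names here just mirror the question; adjust to your setup):

sudo certbot --nginx -d qify.app -d www.qify.app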
I am jumping in on a project with some socket issues over SSL and Cloudflare... I know. I have read about 50 different Stack Overflow posts and 200 blog posts trying to figure this out. The project works just fine on my local dev server/computer.
I think I am on the right track, but I could use some help/pointers if y'all can.
First, I thought it was weird that the /socket.io/ proxy_pass pointed at port 6379, the same port as Redis... Maybe it should? With it set to 6379, the socket connection would not connect, with or without Cloudflare enabled (I paused Cloudflare to test this).
I read through the Express server code and saw that the socket server seems to be attached to the Express server on port 4000, so I changed the proxy_pass for /socket.io/ to port 4000 and it reconnects (the adjusted location block is shown after the config below). This works with Cloudflare paused or running... so maybe it's not Cloudflare after all. Still, even though the browser says the socket has reconnected, nothing is working.
I'll start by sharing my nginx config. Let me know what else y'all need to see, please. Thanks for taking the time to help me out and point me in the right direction! I really appreciate learning about this stuff.
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name dev-app.myapp.com;

    location / {
        root /var/www/myapp_frontend/build/;
        try_files $uri $uri/ /index.html;
        #proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }

    location /api/ {
        proxy_pass http://localhost:4000/;
        include /etc/nginx/proxy_params;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    location /socket.io/ {
        proxy_pass http://localhost:6379;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_read_timeout 86400;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }

    ssl_certificate /etc/letsencrypt/live/dev-app.myapp.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/dev-app.myapp.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = dev-app.myapp.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name dev-app.myapp.com;
    listen 80 default_server;
    listen [::]:80 default_server;
    return 404; # managed by Certbot
}
Edit-1
I did see that Cloudflare requires certain ports... Am I wrong to think that those ports only refer to the public-facing listening port (443 above), since the proxy_pass targets all point at localhost?
I have a Node/Express web server starting up on my Debian Linux box on ports 8080-8083 using a PM2 cluster.
I have an nginx reverse proxy set up on the server to route requests correctly to the Node/Express server, with the following /etc/nginx/sites-available/default:
server {
    listen 80;
    listen [::]:80;
    server_name a.registered.dns.domain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name a.registered.dns.domain.com;

    ssl_certificate /home/admin/certs/a.registered.dns.domain.com.chained.crt;
    ssl_certificate_key /home/admin/certs/a.registered.dns.domain.com.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://nodes;
    }

    location /socket.io {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://nodes;

        # enable WebSockets
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_cache_bypass $http_upgrade;
    }
}

upstream nodes {
    # enable sticky session based on IP
    ip_hash;

    server 127.0.0.1:8080 fail_timeout=20s;
    server 127.0.0.1:8081 fail_timeout=20s;
    server 127.0.0.1:8082 fail_timeout=20s;
    server 127.0.0.1:8083 fail_timeout=20s;
}
This creates the websocket just fine between the server and the client: the connection is upgraded from long polling to the websocket with status 101. If I do something on the site that sends an emit over the socket, the server receives it and acts on it appropriately. So far so good.
However, if I do something elsewhere that causes the server to emit out to the client, I can see (using DEBUG='socket.io*' pm2 restart http-server --update-env on the server) that the socket information is received and emitted out, but the client never receives the data packet it should. I can confirm this by running localStorage.debug = '*'; in the console of my Chrome dev tools:
I see the emit go out on the server, but nothing except ping and pong packets on the websocket.
This all works correctly if I open ports 8080-8083 and use only an HTTP connection, so it feels as if there is some issue with the nginx reverse proxy for the SSL connection of my site.
I have a problem with Socket.IO. When I start my Node.js app, sockets work correctly, but after a few minutes the websocket connection is closed and, after reconnecting, Socket.IO fires the emit again.
I'm using an NGINX proxy, and I have noticed that bypassing NGINX solves the problem. Which part of the configuration do I need to edit? I think the problem is my nginx configuration.
This is my NGINX default config:
server {
    listen 80; # listen for all the HTTP requests
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}

server {
    server_name example.com;
    listen 443 ssl http2;

    # Optimize webserver work
    #client_max_body_size 16M;
    keepalive_timeout 20;

    ssl on;
    ssl_certificate /root/social/ssl/cert.pem;
    ssl_certificate_key /root/social/ssl/key.pem;

    location / {
        proxy_pass http://localhost:5430;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

upstream io_nodes {
    server 127.0.0.1:5430;
    keepalive 20;
}
Please help
You should add another parameter:
proxy_read_timeout 96000;
The default value is 60s. With the default, you will get a "lost connect" message after 60 seconds of idle time.
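For example, inside the location block that proxies to the Node app (proxy_read_timeout and proxy_send_timeout are standard nginx directives; 96000 is just a generous value, not a magic number):

location / {
    proxy_pass http://localhost:5430;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    proxy_read_timeout 96000;
    # optionally raise the send timeout alongside it
    proxy_send_timeout 96000;
}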