I followed the instructions and installed Ogar on my CentOS server successfully. But every time my friends want to play on my server, they have to open Google Chrome, go to the developer console, and type 'connect("ws://agar.davidchen.com:443")'. That's not great, because they expect that typing a domain name (like 'agar.davidchen.com') is all it takes to play the game, just like typing 'agar.io'. Is there any solution to this issue? Thanks!
You need to proxy the HTTP requests through to your socket connection via a web server like Nginx, so players can use http://agar.davidchen.com to reach your WebSocket.
Install Nginx (version >= 1.3), then configure your virtual host with something like this:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream websocket {
    # This is where your web socket runs
    server 127.0.0.1:443;
}

server {
    listen 80;
    server_name agar.davidchen.com;

    location / {
        proxy_pass http://websocket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
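With that in place, the handshake goes to port 80 and Nginx forwards it to the game server, so your friends no longer need to pass a port. A minimal browser-side sketch of what the connection now looks like (plain WebSocket API, not the Ogar client itself):

// The request hits Nginx on port 80 and is proxied to the Ogar server on 127.0.0.1:443.
var socket = new WebSocket('ws://agar.davidchen.com');
socket.onopen = function () {
  console.log('connected through the Nginx proxy');
};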
Reference: https://www.nginx.com/blog/websocket-nginx/
I've been learning frontend development only, and just recently went over the basics of Node.js. I know that I would connect to a certain port number when developing in Node.js alone. However, I'm confused about how to connect a Vue application (built with Vue CLI) to the backend, since npm run serve automatically serves on port 8080 by default.
My ultimate goal is to connect MongoDB to my application. The current error I'm getting is Error: Can't resolve 'dns'.
TL;DR: Could someone please explain in newbie terms how I can connect a Vue application with MongoDB?
In my opinion, you have two ways of solving this:
First, there is a field called devServer through which you can tweak the configuration of the dev server that starts when you run npm run serve. Specifically, pay attention to the proxy field, with which you can ask the dev server to route certain requests to your Node backend.
Second, depending on your setup, you could use a different host altogether to handle backend calls. For example, as you mentioned, the dev server runs on 8080 by default. You could set up your Node backend to run on, say, 8081, and all backend requests that you make in your Vue app would then explicitly use the host <host>:8081 (see the sketch below). When you decide to move your code into production and get SSL certificates, you can have a reverse-proxy server like Nginx forward all requests from, say, api.example.com to port 8081.
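As a rough sketch of that second approach (the hostnames, port, and route here are placeholders, not taken from your project):

// In development the Vue app calls the Node backend on its own port;
// in production the same code would go through https://api.example.com.
const API_BASE = process.env.NODE_ENV === 'production'
  ? 'https://api.example.com'
  : 'http://localhost:8081';

fetch(API_BASE + '/api/items')
  .then(function (res) { return res.json(); })
  .then(function (items) { console.log(items); });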
As for connections to MongoDB, IMO, here's a question you should be asking yourself:
Is it safe to provide clients direct access to the database?
If the answer is yes, then by all means, ensure the MongoDB server starts with its HTTP interface enabled, set up some access restrictions, update the proxy and/or Nginx config, and you're good to go.
If the answer is no, then you're going to have to write lightweight API endpoints in your Node.js app. For example, instead of allowing users to talk to the database directly to get their list of privileges, they make a request to your Node.js app via GET /api/privileges, and your Node.js app in turn queries the database and returns the data to the client.
Another benefit of having the backend talk to your database rather than the client is that your database instance's details are never exposed to malicious clients.
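A minimal sketch of such an endpoint, assuming Express and the official MongoDB Node driver (the URI, database name, collection name, and port are placeholders):

const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();
const client = new MongoClient('mongodb://127.0.0.1:27017');

// The client never talks to MongoDB directly; it only sees this JSON endpoint.
app.get('/api/privileges', async (req, res) => {
  try {
    const privileges = await client.db('appdb').collection('privileges').find({}).toArray();
    res.json(privileges);
  } catch (err) {
    res.status(500).json({ error: 'database unavailable' });
  }
});

// Connect once at startup, then start the HTTP server.
client.connect().then(() => app.listen(8081));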
Here's a sample vue.config.js setup that I have on one of my websites:
const proxyPath = 'https://api.example.com'

module.exports = {
  devServer: {
    port: 8115, // Change the port from 8080
    public: 'dev.example.com',
    proxy: {
      '/api/': {
        target: proxyPath
      },
      '/auth/': {
        target: proxyPath
      },
      '/socket.io': {
        target: proxyPath,
        ws: true
      },
      '^/websocket': {
        target: proxyPath,
        ws: true
      }
    }
  }
}
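With that proxy in place, the frontend can use relative URLs and let the dev server forward them; a minimal sketch (the /api/privileges route is the hypothetical one from above):

// The browser requests dev.example.com:8115/api/privileges;
// the dev server forwards it to https://api.example.com/api/privileges.
fetch('/api/privileges')
  .then((res) => res.json())
  .then((privileges) => console.log(privileges));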
Here's the nginx config for the same dev server. I quickly pulled what I could from our production config and obscured certain fields for safety. Consider this as pseudo-code (pseudo-config?).
server {
    listen 443 ssl;
    server_name dev.example.com;
    root "/home/www/workspace/app-dev";

    set $APP_PORT "8115";

    location / {
        # Don't allow robots to access the dev server
        if ($http_user_agent ~* "baiduspider|twitterbot|facebookexternalhit|rogerbot|linkedinbot|embedly|quora link preview|showyoubot|outbrain|pinterest|slackbot|vkShare|W3C_Validator|Googlebot") {
            return 404;
        }

        # Redirect all requests to the vue dev server at localhost:$APP_PORT
        proxy_pass $scheme://127.0.0.1:$APP_PORT$request_uri;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 443 ssl;
    server_name api.example.com;

    set $APP_PORT "8240";

    location / {
        # Don't allow robots to access the dev server
        if ($http_user_agent ~* "baiduspider|twitterbot|facebookexternalhit|rogerbot|linkedinbot|embedly|quora link preview|showyoubot|outbrain|pinterest|slackbot|vkShare|W3C_Validator|Googlebot") {
            return 404;
        }

        # Redirect all requests to the NodeJS backend at localhost:$APP_PORT
        proxy_pass $scheme://127.0.0.1:$APP_PORT$request_uri;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
I have a simple Node.js app running on an EC2 instance with the Nginx config below.
When I try to access the app from the browser, it gives me "ec2-18-223-0-201.us-east-2.compute.amazonaws.com refused to connect."
When trying to curl it from the VM:
curl http://localhost:3000 works correctly, but curl http://127.0.0.1:3000 gives me this output:
Found. Redirecting to https://127.0.0.1:3000/
Here's my Nginx config:
upstream test {
    server 127.0.0.1:3000;
}

server {
    listen 80;
    server_name ec2-18-223-0-201.us-east-2.compute.amazonaws.com www.ec2-18-223-0-201.us-east-2.compute.amazonaws.com;

    location / {
        client_max_body_size 20M;
        client_body_buffer_size 128k;

        proxy_pass http://test;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
One thing should be made clear before getting to the actual problem: is there a redirect policy in the Node app that returns the output below?
curl http://127.0.0.1:3000 gives me this output: Found. Redirecting to https://127.0.0.1:3000/
Redirection is expected from Nginx, not from the Node app. But I am fairly sure the problem is with Nginx rather than the Node app, since the app is able to respond on local port 3000.
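For reference, a body like "Found. Redirecting to https://..." is what Express produces from res.redirect(), typically via force-HTTPS middleware along these lines (a hypothetical sketch, not your actual code):

// Hypothetical force-HTTPS middleware: anything not detected as HTTPS
// (and not hitting the app as "localhost") gets a 302 redirect,
// which curl prints as "Found. Redirecting to https://...".
app.use(function (req, res, next) {
  if (req.headers['x-forwarded-proto'] !== 'https' && req.hostname !== 'localhost') {
    return res.redirect('https://' + req.headers.host + req.url);
  }
  next();
});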
"Refused to connect" means that the server is not running at all, or that the port may be blocked by a firewall.
Two possible reasons:
1. Port 80 is not allowed in the Security Group of the instance, so allow 80 in the security group of the AWS instance.
2. Nginx is not running. Check the log with tail -f /var/log/nginx/error.log; the reason might be the long DNS name in the server section.
So there are two suggestions for the Nginx config:
1. Update your Nginx config to support long DNS names.
Open vim /etc/nginx/nginx.conf and add this value under the http section of the config:
http {
    server_names_hash_bucket_size 512;
    ....
}
2. Remove the redundant name from the config. It may not be the cause, but you should drop the www variant from server_name ec2-18-223-0-201.us-east-2.compute.amazonaws.com www.ec2-18-223-0-201.us-east-2.compute.amazonaws.com; and keep only the first name.
I have a node express application which communicates with mongodb and serves back the response in JSON format after doing some processing. The application works as expected when run on a local machine.
This is how my connect code looks
await MongoClient.connect(uri, async function (err, client) {
  ...
});
However, I have deployed the application to an aws ec2 instance following this tutorial where I added nginx as a layer on top of my node application. Now I get a 504 Gateway Time-out on any routes that try to connect to mongodb.
The server block in my nginx configuration
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name localhost;
    root /usr/share/nginx/html;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
I understand that mongodb does not use HTTP, which is what nginx uses for communication. So I have tried to follow this tutorial but have had no luck.
Can anybody point me in the right direction?
It turns out I had completely forgotten to whitelist my server's IP address when I deployed the app to an EC2 instance, which is why everything worked as expected locally (my local IP address was whitelisted).
This had nothing to do with Nginx. My mistake.
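As an aside, a shorter server selection timeout makes this kind of misconfiguration fail fast with a clear driver error in the logs instead of an Nginx 504 (a sketch; the same uri variable from the question is assumed):

const { MongoClient } = require('mongodb');

// If the cluster is unreachable (e.g. the EC2 IP is not whitelisted),
// the driver rejects after 5 seconds instead of hanging until the proxy times out.
const client = new MongoClient(uri, { serverSelectionTimeoutMS: 5000 });

client.connect()
  .then(() => console.log('connected to MongoDB'))
  .catch((err) => console.error('MongoDB connection failed:', err.message));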
I have a back-end written in Node.js + Express with the express-ws dependency.
Locally everything works as it should. Previously it was deployed to Red Hat OpenShift, also without any problems. Yesterday I bought a VPS, configured it, and deployed there. Everything works except WebSockets.
I have Nginx with SSL enabled; the config contains the following lines related to the server:
server {
    listen ipaddresshere:80 default;
    server_name _;

    location / {
        proxy_pass http://ipaddresshere:8080;
    }

    location /ws {
        proxy_pass http://ipaddresshere:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
There are other config files, but they were generated by VestaCP and https://certbot.eff.org/.
What I know is that requests to the /ws route do reach the Node.js app (I am logging them), but they don't go to this handler:
app.ws('/ws', SocketsHandler.registerWs);
In the end the request matches my last handler and returns a 404:
app.get('*', ErrorHandler.notFound);
The question: what could cause the WS library not to work in the VPS environment, with no errors in the console?
P.S. Locally I run the app without SSL and Nginx.
wsServer.on('connection', function (socket) {...})
I found that my config was being overridden by some other file, so instead of Connection "upgrade" the server was receiving Connection "close".
I'm writing a WebSocket project. Everything works as expected (locally). I'm using:
NGINX as a WebSockets Proxy
NODEJS as a backend server
WS as websocket module: ws
NGINX configuration:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream backend_cluster {
    server 127.0.0.1:5050;
}

# Only retry if there was a communication error, not a timeout.
proxy_next_upstream error;

server {
    access_log /code/logs/access.log;
    error_log  /code/logs/error.log info;

    listen 80;
    listen 443 ssl;

    server_name mydomain;
    root html;

    ssl_certificate     /code/certs/sslCert.crt;
    ssl_certificate_key /code/certs/sslKey.key;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; # basically same as apache [all -SSLv2]
    ssl_ciphers HIGH:MEDIUM:!aNULL:!MD5;

    location /websocket/ws {
        proxy_pass http://backend_cluster;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
As I mentioned, this works just fine locally and on a single machine in the development environment. The issue I'm worried about is production: the production environment will have more than one Node.js server.
In production the configuration for nginx will be something like:
upstream backend_cluster {
    server domain1:5050;
    server domain2:5050;
}
So I don't know how NGINX handles stickiness here, meaning: after the handshake/upgrade is done with one server, how will it know to keep working with that same server? Is there a way to tell NGINX to stick to the same server?
I hope I made myself clear.
Thanks in advance.
Use this configuration:
upstream backend_cluster {
    ip_hash;
    server domain1:5050;
    server domain2:5050;
}
clody69's answer is pretty standard. However, I prefer the following configuration, for two reasons:
Users connecting from the same public IP should be able to land on two different servers if needed; ip_hash enforces one server per public IP.
If user 1 is maxing out server 1's capacity, I want them to still be able to use the application smoothly if they open another tab; ip_hash doesn't allow that.
upstream backend_cluster {
    hash $content_type;
    server domain1:5050;
    server domain2:5050;
}