How to use Socket.io with Nginx load balancer? - node.js

I was going over this article: https://medium.com/#feritzcan/node-js-socket-io-1cde93315a7d, specifically the section "CHAPTER 9 — NGINX Load Balancer", where it says:
Imagine you have 3 servers on ports 3000, 3001, 3002. A user connects to a server through Nginx and is forwarded to the server on port 3000. Then a socket connection between the client and server:3000 is established. However, when the client emits any event to the server through Nginx, the client is not guaranteed to be forwarded to server:3000 again. Most of the time it will be forwarded to server:3001 or server:3002, which don't recognize the client, and that will result in an error.
In the round-robin algorithm, requests are distributed sequentially among the servers.
And then, below it is said that:
IP hashing is a proxying algorithm like round-robin. The algorithm always forwards clients from the same IP to the same server, so a user will always connect to the same server and be recognized.
Edit your config file again and set IP hashing this time:
sudo nano /etc/nginx/sites-available/yourdomain.com
upstream app_servers {
    ip_hash;
    server 142.93.111.111:3001;
    server 142.93.111.111:3002;
    server 142.93.111.111:3003;
}
…
After restarting Nginx, it will use IP hashing and will always forward each client to the same server it was forwarded to before.
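For reference, a hedged sketch of the server block that would sit in front of this upstream (the domain is a placeholder, and the WebSocket upgrade headers are an assumption not shown in the article's snippet; Socket.IO connections fail through a proxy without them):

```nginx
server {
    listen 80;
    server_name yourdomain.com;  # placeholder

    location / {
        proxy_pass http://app_servers;

        # Pass the WebSocket upgrade through to the Node servers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

With this in place, the frontend creates its socket against the Nginx domain, not against an individual backend port; Nginx then picks the backend.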
Now my question is: OK, I get how the Nginx load balancer ensures that the same client is always connected to the same server, but what about the frontend and the way the socket.io-client is created? For example, if we use server:3000 as the URL when creating the socket on the frontend, does this mean that every client will always be connected to server:3000?
That would seem to defeat the purpose of distributing the load between multiple servers, so I wonder if I am missing something.

Related

How might one set up a reverse proxy that cannot decrypt traffic?

I'd like to have a reverse HTTPS proxy that CANNOT decrypt proxied traffic (i.e. an HTTPS passthrough/tunnel). The idea is to run this proxy on a VPS and point a domain to it, allowing the IP address of the origin server to remain unexposed while maintaining end-to-end encryption.
Is this possible? I could probably proxy requests without difficulty since the destination address in that direction is fixed, but proxying responses seems problematic given that the proxy would be unable to read the client IP within an encrypted response.
A potential solution is to have the origin server package the encrypted response and destination address in a request made to the proxy, but I am unsure as to how I might generate the encrypted request without sending it (using node.js, which is the application running on the origin server).
From your question, I understand that you want to listen for requests on your VPS and pass them on to your other server, which has to remain unexposed.
This can be configured with the web server you are using as the proxy (assuming your provider allows it; AWS, for instance, allows port forwarding from a VPN server to a non-VPN server).
I prefer doing this with Nginx, as it is easy and open source, with little configuration and a lot of functionality.
There is a concept of load balancing which does the same as you mentioned above.
Steps:
Install Nginx and keep it active.
Create a new configuration file in /etc/nginx/sites-enabled.
Write the code below with your modifications:
upstream myapp1 {
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}

server {
    listen 80;

    location / {
        proxy_pass http://myapp1;
    }
}
In place of srv1.example.com, srv2.example.com, and srv3.example.com, put the addresses of the servers you want requests forwarded to.
Save the file and restart Nginx.
Boom!! It should now forward all incoming requests to your application.
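Note that the setup above proxies at the HTTP layer, which means Nginx terminates TLS for HTTPS traffic. For the requirement in the question (a proxy that cannot decrypt), Nginx's stream module can forward raw TCP instead, so the TLS session stays end to end between client and origin. A minimal sketch, with a placeholder origin hostname (this block goes at the top level of nginx.conf, not inside http):

```nginx
stream {
    server {
        listen 443;
        # Forward raw TLS bytes unchanged; this proxy never holds the
        # certificate and cannot decrypt the traffic.
        proxy_pass origin.example.com:443;
    }
}
```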

Redirect all requests from one server to another

I have two servers, Server 1 (A Windows Server VPS) and Server 2 (A Linux VPS).
Both servers are running NodeJS API using PM2 without anything like apache or nginx or whatever.
What I want is to redirect all requests from Server 1 to Server 2 because I want to shut Server 1 down after a while.
Server 1 address: www.pharmart.sy
Server 2 address: www.pharmartco.com
I don't want to redirect using the res.redirect method because that would break my application.
The application is a mobile application, which is why I don't want to use the res.redirect method. The server link is hardcoded in the app, so I would have to release a new version just to change it to the second server, and I can't make sure that everyone updates the app. That's why I need to redirect all the requests to the second server.
So all the redirection handling should be done on the Windows machine.
What is the best way of doing that?
Here are a couple ideas:
DNS
Change the DNS for the server 1 domain to point to the server 2 host. So, then all requests to either DNS name will go to server 2. You may have to wait a little while until any local DNS caching effects time out. An advantage of this approach is that while you are waiting for DNS caching effects to expire, everything stays up as requests either go to a working server1 or a working server2. When cached DNS has finally expired, all requests will be going to server2 and you can then take server1 down.
Your Proxy
You could replace the current server 1 process with a proxy that points to server 2. So, any requests incoming to server 1 will be proxied over to server 2. Since a change like this probably can't be made instantly, you might have a short amount of downtime for requests coming into server1.
Hosting Proxy
If this is running at a hosting provider, then they probably have a proxy service already running that proxies your public DNS IP address to your actual host. That hosting proxy could be reconfigured to direct requests for server1 to server2 instead.

Beginner question about Ports and server listening

I've just completed a couple of Node.js tutorials and would like a bit of clarification about server port listening and communication between computers in general.
I've created a couple of basic Node.js servers, and what I've learned in that you have to tell your server to listen on a certain port. My questions are as follows:
Say my computer (PC1) is listening on port 3000, does that mean when a client (say PC2) is trying to connect to my server through the internet, the client must be sending their request via port 3000 on their side for my server to receive and respond to the request?
Following on from Q1 - And if PC2 (client) is trying to connect to PC1 (server) and the client's port is different to what the server is listening for, does that mean nothing happens?
This is a very "beginnerish" question. Say I've got a basic Node.js server up and running, and a client makes a request. Before the client information even reaches the server application running in Node.js, a connection between the client and server through the internet must first be established through their IP addresses, right? Then after that, the server application will respond if the port it's listening on is the same port that the client sent the request?
I realise these are basic questions, but I'd really like to firmly grasp these concepts before moving forward with my backend adventure.
Feel free to direct me to any resources you think would help me understand these concepts better.
Many thanks!
If your firewall is off, and you are behind a router that forwards port 3000 to your machine, then yes, your 3000 is open and anyone can connect to your publicIP:3000. Note that the client does not have to send from port 3000 on its side: its operating system picks a random ephemeral source port, and 3000 is only the destination port.
What we normally do is listen on localhost:3000 but firewall port 3000 from the outside. Then we have a frontend reverse proxy (nginx, HAProxy) that forwards port 80 (http)/443 (https) to 3000.
A reverse proxy also allows listening for different domains, so you can have one app on 3000 and another app on 3001, and route each domain to the right one.
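As an illustration, a minimal Nginx server block for the setup just described (domain and port are placeholders):

```nginx
server {
    listen 80;
    server_name app.example.com;  # placeholder domain

    location / {
        # Forward public port 80 to the Node app on localhost:3000
        proxy_pass http://127.0.0.1:3000;
    }
}
```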
If you cannot access it from your public IP because you are behind a local area network, you can install something like ngrok, which tunnels connections from the internet to your machine.

Can nginx redirect instead of proxy

Suppose I have 1 nginx server and many Websocket servers:
server1.example.com
server2.example.com
server3.example.com
When there are only a few concurrent connections, Nginx can handle the traffic to and from the servers using upstream and proxy_pass.
Because users can send many more WebSocket messages than HTTP requests in a given time, there is going to be a turning point where Nginx can't handle any more messages because it is out of network resources.
Adding more upstream servers won't help, because the bottleneck is the server all users connect to: the Nginx server.
If Nginx could issue a redirect when the WebSocket handshake happens, the browser could use that direct connection to each server and leave Nginx out of it.
If these servers can't handle the traffic I can just add more.
Is it possible to do redirection with Nginx instead of proxying? I can't find any demo or documentation about it.
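For illustration only, a hypothetical sketch of such a redirect: split_clients hashes the client address to pick a backend, and the server answers the handshake with an HTTP redirect instead of proxying it. This is an assumption, not an established pattern: WebSocket clients are not guaranteed to follow HTTP redirects during the handshake, so the client may need to handle the redirect itself (for example by first asking a plain HTTP endpoint which host to connect to).

```nginx
# Inside the http block
split_clients "${remote_addr}" $ws_backend {
    34%  server1.example.com;
    33%  server2.example.com;
    *    server3.example.com;
}

server {
    listen 80;
    # Answer with a redirect instead of proxying; the client then
    # connects to the chosen backend directly.
    return 307 http://$ws_backend$request_uri;
}
```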

How to set up nginx upstream to check response and set this server to 'down' when response is a 40x

I have three servers, all running nginx 1.10.3. Two of these servers also host a nodejs application which listens only on port 2403; the corresponding nginx instances serve these nodejs apps on port 80. The third server acts as a load balancer and does a simple round robin via upstream to the other two servers.
As far as I know, when one of the upstream servers is down, nginx removes it from the upstream and health-checks it every n seconds. If the server is back online, it is re-added.
But now I have the following situation:
One of the nodejs applications crashes. The nginx instance is still running and listens on port 80. The nginx load balancer checks port 80, gets a response, and therefore continues with the round robin (but the app isn't reachable, of course). Now I would like to set up nginx so that, for example, it checks the response from the upstream and, when it sees an error status (there should be a 403/503 response, shouldn't there?), removes the broken/down server and only uses the remaining one until it's working again.
How can I do that?
I can provide the nginx configs if you like.
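With open-source nginx (which only does passive health checks), the closest built-in mechanism is proxy_next_upstream combined with max_fails/fail_timeout. A hedged sketch with placeholder addresses: 403/503 responses listed in proxy_next_upstream make nginx retry the request on the other server, though per the nginx docs only errors, timeouts, and the listed 5xx responses count toward max_fails, so an http_403 triggers a retry but does not mark the server down. Active checks that inspect the response require NGINX Plus or a third-party module.

```nginx
upstream app_servers {
    # Take a server out of rotation for 30s after 3 failed attempts
    server 10.0.0.1:80 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:80 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;
        # Retry the request on the next upstream for these outcomes
        proxy_next_upstream error timeout http_403 http_503;
    }
}
```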
