Bypassing socket connections in node.js

I'm working on a project where we need to connect clients to devices behind LAN networks.
Brief description: there are "devices" connected, in a home for example, on a LAN created by a router. Each device runs a full web server on Linux, using Node.js as the backend implementation language. The devices also have access to the Internet through the public IP of the router. On the other side, there are clients which can choose which device to connect to.
The goal is to connect the clients with the web server created by any device.
Up to now, my idea is to implement something similar to how TeamViewer works. As I understand it, TeamViewer has a central server which the agents connect to. When an agent connects to the central server, the server gets hold of the TCP connection and keeps it alive. When another client wants to access the first client, the server bridges the two TCP connections. That way the server acts like a proxy that additionally routes the TCP connections. This also makes it possible to reach clients behind LANs or firewalls (because the connections are always created from the clients' side).
If this is correct, what I would like to implement is a central server, also in Node.js, which manages a pool of socket connections coming from the different active devices; when a client wants to connect to one specific device, the server bridges the incoming TCP connection of the client with the already existing connection of that device.
What I would first like to know is whether this is possible in Node.js. My idea is to keep the device connections alive, so clients can immediately connect to them, creating some sort of pool of device connections.
If this were implemented in C, I guess I could get hold of the socket descriptor, keep it alive, and hand it over to the incoming client request. But in Node.js I can't seem to find any modules that manage TCP connections at that level.
Are there any high-level npm packages which do this? If not, is it possible to use lower-level modules (like net) which provide this functionality?
Ideally I would like to implement it with high-level modules (Express), but if that's not possible, I could always rewrite the server using lower-level modules.
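To make the idea concrete, this is roughly the kind of bridging I have in mind, sketched with the built-in net module (the ports and the little "send your ID first" handshake are invented for illustration, not code from our project):

    const net = require('net');

    const devicePool = new Map(); // deviceId -> long-lived socket from that device

    // Devices dial in from behind their routers and stay connected.
    net.createServer((deviceSocket) => {
      deviceSocket.once('data', (chunk) => {
        const deviceId = chunk.toString().trim(); // pretend the first line is the device ID
        deviceSocket.setKeepAlive(true, 30000);
        devicePool.set(deviceId, deviceSocket);
        deviceSocket.on('close', () => devicePool.delete(deviceId));
      });
    }).listen(9000);

    // Clients connect, name a device, and get piped to that device's socket.
    net.createServer((clientSocket) => {
      clientSocket.once('data', (chunk) => {
        const deviceId = chunk.toString().trim();
        const deviceSocket = devicePool.get(deviceId);
        if (!deviceSocket) return clientSocket.end('device offline\n');
        clientSocket.pipe(deviceSocket);
        deviceSocket.pipe(clientSocket);
      });
    }).listen(9001);

Something along these lines is what I mean by "bridging": once the two sockets are piped into each other, the central server is only relaying bytes.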
Thanks in advance

Related

TCP hole punching in Node without a server

I'm trying to follow the code given here to implement NAT hole punching in Node.js. I'd like to know if the server is strictly necessary. Having read about hole punching, I am under the impression that the purpose of the server is to allow the clients to exchange some information (including, but not limited to, their addresses and the ports they want to communicate on) so that they can proceed to talk directly. Assuming the clients already had each other's information (again, including but not limited to their addresses and ports), would the server still be necessary? If so, why? And if not, how could this be implemented?
For instance, say one were to build an application where client_A prints out all information that would have been transmitted to the server for user_A to read, who then sends this to user_B, who then submits this info to client_B (this could be done via email for example). Wouldn't this avoid the need for a server?
Here is another explanation of why I think it might be possible to remove the server in the middle:
In NAT hole punching (assuming I understand it correctly), the communication begins when client_A sends a message to the server. The message contains some information that the server then passes on to client_B when client_B contacts the server. After this point, client_A and client_B are able to communicate directly without the need for the server. I am under the impression that once a direct connection between client_A and client_B has been established, the server could go offline and the two clients would still be able to communicate directly with one another. If this is the case, then I would imagine that any information that is being used to maintain this connection (be that addresses, ports, or any other kind of info) could be exchanged through any other channel (e.g. email, a handwritten letter, a voice call, etc.) at the beginning of the protocol, and then the connection could be established without ever needing the server.
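Concretely, what I imagine the "serverless" version boiling down to is each side, after learning the other's public IP and port out of band, just repeatedly sending to it. A sketch with Node's dgram module (all addresses and ports are placeholders that user_A and user_B would have exchanged by email or similar):

    const dgram = require('dgram');

    const LOCAL_PORT = 50000;             // port this client agreed to use
    const PEER_PUBLIC_IP = '203.0.113.7'; // learned out of band (email, letter, call...)
    const PEER_PUBLIC_PORT = 50001;       // learned out of band

    const sock = dgram.createSocket('udp4');
    sock.bind(LOCAL_PORT);

    // Keep sending so the NAT creates (and keeps) an outbound mapping toward the peer.
    setInterval(() => {
      sock.send(Buffer.from('punch'), PEER_PUBLIC_PORT, PEER_PUBLIC_IP);
    }, 1000);

    sock.on('message', (msg, rinfo) => {
      console.log(`got "${msg}" from ${rinfo.address}:${rinfo.port}`);
    });

If both sides run something like this at roughly the same time, my understanding is that the packets should eventually get through, which is why I think the server is only needed for the address exchange.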
Regarding 'tricking' the router
As manishig pointed out to me in a comment (thanks), NAT hole punching also requires tricking the router. If I understand correctly (please correct me if not), the router is tricked into storing the info for directing incoming packets from the server to client_A; however, after the initial phase of the protocol these packets are actually coming from client_B. If this is a correct description of the problem, is there a way to trick the router that doesn't require using a server?
There are ways to communicate between two remote computers over the internet without an intermediate server, but IMO it is not the preferred way.
Why is an intermediate server needed?
If client_A and client_B are both on the same LAN (e.g. your home/office network), you can make sure (by configuring the clients and/or the router) that they have static IP addresses on that LAN, and then they can just talk freely.
E.g.: if client_A is listening on port 8080, client_B can create a connection to client_A_ip on port 8080.
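A minimal sketch of that LAN case with Node's net module (the address and port are just examples):

    const net = require('net');

    // client_A: listen on port 8080 on all LAN interfaces
    net.createServer((socket) => {
      socket.on('data', (data) => console.log('client_A got:', data.toString()));
      socket.write('hello from client_A\n');
    }).listen(8080, '0.0.0.0');

    // client_B (on another machine in the same LAN): connect to client_A's LAN IP
    const toA = net.connect(8080, '192.168.1.10', () => {
      toA.write('hello from client_B\n');
    });
    toA.on('data', (data) => console.log('client_B got:', data.toString()));

Over a LAN that's all there is to it; the complications below only appear once NAT gets involved.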
Over the internet, any packet you send usually passes through NAT at least twice: once when it leaves your LAN (e.g. your home/office router) and at least once at an ISP endpoint. This means you have no control over the public IP and port assigned to your packet.
Not only do you have no control over your packet's assigned public IP and port, these are also not static. They won't change while you have an active TCP connection, but you have no other guarantee from your ISP regarding your assigned public IP and port.
The intermediate server's purpose is to dynamically update each client with its peer's info and also to keep the TCP connection open, so that peer-to-peer communication remains available.
Alternative solution to an intermediate server (not recommended)
If you want your clients to communicate without an intermediate server, you can buy a public static IP from your ISP (if they support it), and then with some configuration you can arrange for one of your clients to have a public static IP and port that the other client can connect to.
But I wouldn't recommend it, since it requires some IT know-how and carries security risks.
Also, if both clients are portable and connect to different networks all the time, it's not a valid solution.

How do I block all internet connections but permit connections within localhost?

I'm testing a program I built with regard to its ethernet/internet behaviour. It's a server/client application. I need to block the server's internet connection while the client still has access to the internet (but I'm running both on the same machine, for testing, trying to simulate what will happen when I get this onto the proper hardware). Although the server must not have access to the internet, server and client must still communicate over the localhost address for database communication. VMs are not an option as of now, although they could be perfect.
The problem is that if I try to block internet connections in my firewall for the server process, I also block it from communicating with the client running on the same PC (which tries to access it via localhost:port). Ideally, the server would be completely isolated, EXCEPT for local comms with the client program.
How do I properly block ONLY internet connections?
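For reference, the localhost channel that has to keep working looks roughly like this (sketched here in Node.js purely for illustration; the real program and port may differ):

    const net = require('net');

    // Server side: bind to 127.0.0.1 only, so the listener is reachable from this
    // machine but not from any external interface.
    net.createServer((socket) => {
      socket.on('data', (data) => socket.write('ack: ' + data));
    }).listen(5050, '127.0.0.1');

    // Client side (same machine): talks to the server over loopback.
    const conn = net.connect(5050, '127.0.0.1', () => {
      conn.write('db query goes here\n');
    });
    conn.on('data', (data) => console.log(data.toString()));

Whatever firewall rules end up blocking the server's internet access, traffic on the loopback interface like this is what needs to stay allowed.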

Horizontal scaling with a node.js app & socket io

My team and I are working on a digital signage platform.
We have ~2000 Raspberry Pis around the world connected to a Node.js server using Socket.IO. The Raspberry Pis initiate the connection.
We would like to be able to scale our application horizontally across multiple servers, but we have a problem that we can't figure out.
Basically, the application stores the sockets of the connected Raspberry Pis in an array.
We have an external program that calls an API on the server; the server then searches for the sockets that are "impacted" by the API call and sends them the information.
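Simplified, the current single-server code looks something like this (the variable, event, and route names are made up to show the shape, not our real code):

    const express = require('express');
    const app = express();
    const server = require('http').createServer(app);
    const io = require('socket.io')(server);

    const sockets = []; // sockets of the connected Raspberry Pis

    io.on('connection', (socket) => {
      // each Pi announces which group/screen it belongs to in the handshake query
      socket.groupId = socket.handshake.query.groupId;
      sockets.push(socket);
      socket.on('disconnect', () => {
        const i = sockets.indexOf(socket);
        if (i !== -1) sockets.splice(i, 1);
      });
    });

    // the external program calls this; we find the impacted sockets and push to them
    app.post('/api/update/:groupId', express.json(), (req, res) => {
      sockets
        .filter((s) => s.groupId === req.params.groupId)
        .forEach((s) => s.emit('update', req.body));
      res.sendStatus(200);
    });

    server.listen(3000);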
After lots of searching, we assume that we have to store the sockets (or their IDs) somewhere else (Redis?) to make the application stateless. Then any server could respond to an API call and look the sockets up in a central place.
Unfortunately, we can't find any detailed examples of how to do that.
Can you please help us?
Thanks
(You can't store sockets from multiple server instances in a shared datastore like redis: they only make sense in the context of the server where they were initiated).
You will need a cluster of node.js servers to handle this. There are various ways to make a cluster. They all involve directing incoming connections from your RPis to a "generic" hostname, for example server.example.com. Behind that server.example.com hostname will be multiple node.js servers.
Each incoming connection from each RPi connects to just one of those multiple servers. (You know this, I believe.) This means one node.js server in your cluster "owns" each individual RPi.
(Telling you how to rig up a cluster of node.js servers is beyond the scope of this answer. Hints: round-robin DNS or a reverse-proxy nginx front end.)
Then, you want to route -- to fan out -- the incoming data from each API call to each server in the cluster, so the server can route it to the RPis it owns.
Here's a good way to handle that:
Set up a redis cache or other shared data store. It can be very small.
When each node.js server starts, have it register itself as active. That is, have it place its own specific address for handling API calls into the shared data store. The specific address is probably of the form 12.34.56.78:3000: that is, an IP address and port.
Have each server update that address every so often, once a minute or so, to show it is still alive.
When an API call arrives at server.example.com, it will come to a more-or-less randomly chosen node.js server instance.
Get that server to read the list of server addresses from the Redis cache.
Get that server to repeat the API call to all servers except itself. Add a parameter like repeated=yes to the repeated API calls.
Then, each server looks at its list of connected sockets and does what your application requires.
On server shutdown, have the server unregister itself -- remove its address from redis -- if possible.
In other words, build a way of fanning out the API calls to all active node.js servers in your cluster.
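A rough sketch of the registration and fan-out pieces together, assuming the node-redis (v4) client, Express, and Node 18+ so the global fetch is available; the key names and ports are placeholders, and the repeated=yes query parameter follows the convention suggested above:

    const express = require('express');
    const { createClient } = require('redis');

    const MY_ADDRESS = process.env.MY_ADDRESS; // e.g. "12.34.56.78:3000", set per instance
    const redis = createClient({ url: process.env.REDIS_URL });

    async function register() {
      // Key expires after 90s; the interval below refreshes it while we're alive.
      await redis.set(`servers:${MY_ADDRESS}`, '1', { EX: 90 });
    }

    const app = express();
    app.use(express.json());

    app.post('/api/update', async (req, res) => {
      if (!req.query.repeated) {
        // First hop: fan the call out to every other registered server.
        const keys = await redis.keys('servers:*'); // fine for a handful of servers
        const others = keys
          .map((k) => k.slice('servers:'.length))
          .filter((addr) => addr !== MY_ADDRESS);
        await Promise.allSettled(
          others.map((addr) =>
            fetch(`http://${addr}/api/update?repeated=yes`, {
              method: 'POST',
              headers: { 'content-type': 'application/json' },
              body: JSON.stringify(req.body),
            })
          )
        );
      }
      // Every server, original and repeated, now handles the sockets it owns:
      // look through this instance's connected sockets and emit to the impacted ones.
      res.sendStatus(200);
    });

    async function main() {
      await redis.connect();
      await register();
      setInterval(register, 60 * 1000); // "still alive" refresh, once a minute
      process.on('SIGTERM', async () => {
        await redis.del(`servers:${MY_ADDRESS}`); // unregister on shutdown if possible
        process.exit(0);
      });
      app.listen(3000);
    }

    main();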
If this must scale up to a very large number (more than a hundred or so) of node.js servers, or to many hundreds of API calls a minute, you probably should investigate using message queuing software.
SECURE YOUR REDIS server from random cybercreeps on the internet.

Is it possible to both listen and send messages to a remote socket using node-red's websocket node?

The Almond Plus router and home automation hub now exposes the state of its registered z-wave and zigbee sensors via a websocket.
The websocket API is here:
https://wiki.securifi.com/index.php/Websockets_Documentation
I've aimed a node-red websocket node at the router's local IP address, and have included authentication information in the URL. This seems to work for receiving status changes to devices.
But I need to also be able to send commands over the websocket to flip switches and whatnot. When I create both 'listen on' and 'connect to' websocket nodes in node-red, only the node that's listening connects. Do I need to make two nodes at all? I would have hoped there'd be a way to make one connection to the websocket and use it for two-way communication, but maybe this question just exposes my ignorance of how either websockets or node-red function.
Thanks for any help or information.
You will need both a websocket in node and a websocket out node, but both should be set to "connect to" mode. These handle the input and output sides of the connection.
I'd have to double-check the code, but they should share the same client connection under the covers, so there will only be one real WebSocket connection. But even if the two nodes don't share the connection, having two separate WebSocket connections to the same address shouldn't be a problem.
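Outside of node-red, the underlying point is simply that one WebSocket connection is bidirectional. A plain Node.js sketch with the ws package (the URL and message payload are placeholders, not the actual format from the Securifi docs linked above):

    const WebSocket = require('ws');

    // One connection, used both to receive status updates and to send commands.
    const ws = new WebSocket('ws://ROUTER_IP:PORT/path-with-auth'); // placeholder URL

    ws.on('open', () => {
      // send a command over the same connection we receive events on
      ws.send(JSON.stringify({ example: 'command payload goes here' }));
    });

    ws.on('message', (data) => {
      console.log('status update from router:', data.toString());
    });

Whether node-red multiplexes both directions over one connection or opens two, the protocol itself is fine with either, as noted above.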

Is there a way to test if a computer's connection is firewalled?

I'm writing a piece of P2P software, which requires a direct connection to the Internet. It is decentralized, so there is no always-on server that it can contact with a request for the server to attempt to connect back to it in order to observe if the connection attempt arrives.
Is there a way to test the connection for firewall status?
I'm thinking that in my dream land, where wishes were horses, there would be some sort of third-party, public, already-existing servers to which I could send some simple command, and they would send a special ping back. Then I could simply listen to see if that arrives and know whether I'm behind a firewall.
Even if such a thing does not exist, are there any alternative routes available?
Nantucket - does your service listen on UDP or TCP?
For UDP - what you are sort of describing is something the STUN protocol was designed for. It matches your definition of "some sort of simple command, and they would send a special ping back".
STUN is a very "ping-like" (UDP) protocol in which a server echoes back to a client what IP and port it sees the client as. The client can then take the response from the server and compare it with what it thinks its locally enumerated IP address is. If the server's response matches the locally enumerated IP address, the client host can determine on its own that it is directly connected to the Internet. Otherwise, the client must assume it is behind a NAT - but for the majority of routers, you have just created a port mapping that can be used for other P2P connection scenarios.
Further, you can use the RESPONSE-PORT attribute in the STUN binding request to have the server respond back to a different port. This effectively allows you to detect whether you are firewalled.
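As an illustration of that compare-with-local-addresses flow in Node.js, assuming the "stun" npm package (nodertc) and its request/getXorAddress API; treat this as a sketch rather than production code:

    const os = require('os');
    const stun = require('stun'); // assumed: the nodertc "stun" package

    stun.request('stun.l.google.com:19302', (err, res) => {
      if (err) return console.error(err);

      const { address, port } = res.getXorAddress(); // the IP:port the server saw us as
      const localAddresses = Object.values(os.networkInterfaces())
        .flat()
        .map((iface) => iface.address);

      if (localAddresses.includes(address)) {
        console.log(`Directly connected to the Internet as ${address}:${port}`);
      } else {
        console.log(`Behind a NAT; the public mapping is ${address}:${port}`);
      }
    });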
For TCP - this gets a little tricky. STUN can partially be used to determine if you are behind a NAT, or you can simply make an HTTP request to whatismyip.com and parse the result to see if there's a NAT. But it stays tricky, as there's no service on the internet that I know of that will test a TCP connection back to you.
With all the above in mind, the vast majority of broadband users are likely behind a NAT that also acts as a firewall, either provided by their ISP or by their own wireless router. And even if they are not, most operating systems have some sort of minimal firewall to block unsolicited traffic. So it's very limiting to have a P2P client out there that can only work on direct connections.
With that said, on Windows (and likely other platforms), you can have your app's install package register with the Windows firewall so your app is not blocked. But if you aren't targeting Windows, you may have to ask the user to manually adjust their firewall software.
Oh shameless plug. You can use this open source STUN server and client library which supports all of the semantics described above. Follow up with me offline if you need access to a stun service.
You might find this article useful
http://msdn.microsoft.com/en-us/library/aa364726%28v=VS.85%29.aspx
I would start by asking each OS whether its firewall services are turned on. Secondly, I would attempt the socket connections and determine from the error codes whether connections are being reset or are timing out. I'm only familiar with Winsock coding, so I can't really say much for Linux or macOS.
