run shell script on client remotely from server - node.js

I have one server and multiple clients. The server should be able to run a shell script on whichever device it chooses. It's not possible with a simple one-socket-per-device approach because we may have thousands of devices, and the server and devices would have to stay permanently connected via those sockets. After a lot of searching I found that the solution might be NAT-T, but I still don't know how to use it, or whether there is another solution.
Please help me understand what I should do on the clients and on the server.

If you don't know the clients' addresses and ports upfront, you need to have the clients connect to the server. Thousands of devices are no problem; you only run into a socket limit at around 65,000 open ports (check ulimit). Build an object stream between client and server and execute the script based on the object the client receives. Alternatively, you could set an interval on the clients and let them check over simple HTTP(S) every n seconds whether there is something for them to do.
See, for example, the Node Stream Docs or the Node HTTP Docs.
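For the object-stream idea, a minimal client-side sketch (the host, port, message format and script names here are invented for illustration, not taken from the question): the client keeps one long-lived TCP connection to the server, reads newline-delimited JSON commands, and runs the named script with child_process.

```js
// client.js -- minimal sketch. Assumes the server sends newline-delimited JSON
// commands such as {"cmd": "update.sh"} over a plain TCP socket; host, port
// and script names are placeholders.
const net = require('net');
const { execFile } = require('child_process');

const socket = net.connect({ host: 'server.example.com', port: 4000 });

let buffer = '';
socket.on('data', (chunk) => {
  buffer += chunk.toString('utf8');
  let newline;
  while ((newline = buffer.indexOf('\n')) !== -1) {
    const line = buffer.slice(0, newline);
    buffer = buffer.slice(newline + 1);
    if (!line.trim()) continue;
    const msg = JSON.parse(line);                   // e.g. {"cmd": "update.sh"}
    execFile('/bin/sh', [msg.cmd], (err, stdout, stderr) => {
      // Report the result back to the server over the same connection.
      socket.write(JSON.stringify({
        cmd: msg.cmd,
        error: err ? err.message : null,
        stdout,
        stderr,
      }) + '\n');
    });
  }
});

// Let a process manager (systemd, pm2, ...) restart the client if the link drops.
socket.on('close', () => process.exit(1));
```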

Related

Horizontal scaling with a node.js app & socket io

My team and I are working on a digital signage platform.
We have ~2,000 Raspberry Pis around the world connected to a Node.js server using Socket.IO. The Raspberry Pis initiate the connection.
We would like to be able to scale our application horizontally across multiple servers, but there is a problem we can't figure out.
Basically, the application stores the sockets of the connected Raspberry Pis in an array.
We have an external program that calls the API on the server; the server then searches for the sockets that are "impacted" by the API call and sends them the information.
After a lot of searching, we assume that we have to store the sockets (or their IDs) elsewhere (Redis?) to make the application stateless. Then any server could respond to an API call and look up the sockets in a central place.
Unfortunately, we can't find any detailed example of how to do that.
Can you please help us?
Thanks
(You can't store sockets from multiple server instances in a shared datastore like Redis: a socket object only makes sense in the context of the server where it was created.)
You will need a cluster of node.js servers to handle this. There are various ways to make a cluster. They all involve directing incoming connections from your RPis to a "generic" hostname, for example server.example.com. Behind that server.example.com hostname will be multiple node.js servers.
Each incoming connection from each RPi connects to just one of those multiple servers. (You know this, I believe.) This means one node.js server in your cluster "owns" each individual RPi.
(Telling you how to rig up a cluster of node.js servers is beyond the scope of this answer. Hints: round-robin DNS or a reverse-proxy nginx front end.)
Then, you want to route -- to fan out -- the incoming data from each API call to each server in the cluster, so each server can pass it on to the RPis it owns.
Here's a good way to handle that:
Set up a redis cache or other shared data store. It can be very small.
When each node.js server starts, have it register itself as active. That is, have it place its own specific address for handling API calls into the shared data store. The specific address is probably of the form 12.34.56.78:3000: that is, an IP address and port.
Have each server update that address every so often, once a minute or so, to show it is still alive.
When an API call arrives at server.example.com, it will come to a more-or-less randomly chosen node.js server instance.
Get that server to read the list of server addresses from the Redis cache.
Get that server to repeat the API call to all servers except itself. Add a parameter like repeated=yes to the repeated API calls.
Then, each server looks at its list of connected sockets and does what your application requires.
On server shutdown, have the server unregister itself -- remove its address from redis -- if possible.
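A rough sketch of the register/heartbeat and fan-out steps, assuming the node-redis v4 client, Express, and Node 18+ for the built-in fetch; the key names, port, API path and the repeated=yes flag are illustrative only:

```js
// Fan-out sketch -- assumes the node-redis v4 client, Express, and Node 18+
// (global fetch). Key names, the port, the API path and the "repeated" flag
// are made up for illustration.
const { createClient } = require('redis');
const express = require('express');

const MY_ADDRESS = '12.34.56.78:3000';   // this instance's own API address
const redis = createClient({ url: 'redis://redis.internal:6379' });

// Register as active; the 90 s expiry makes dead servers drop off the list.
async function register() {
  await redis.set(`servers:${MY_ADDRESS}`, '1', { EX: 90 });
}

async function main() {
  await redis.connect();
  await register();
  setInterval(register, 60 * 1000);      // heartbeat once a minute

  const app = express();
  app.use(express.json());

  app.post('/api/display', async (req, res) => {
    if (!req.query.repeated) {
      // First receiver: repeat the call to every other active server once.
      const keys = await redis.keys('servers:*');  // fine while the list is small
      const others = keys.map((k) => k.replace('servers:', ''))
                         .filter((addr) => addr !== MY_ADDRESS);
      await Promise.allSettled(others.map((addr) =>
        fetch(`http://${addr}/api/display?repeated=yes`, {
          method: 'POST',
          headers: { 'content-type': 'application/json' },
          body: JSON.stringify(req.body),
        })
      ));
    }
    // ...then handle the RPi sockets this server owns, as usual...
    res.sendStatus(204);
  });

  app.listen(3000);
}

main();
```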
In other words, build a way of fanning out the API calls to all active node.js servers in your cluster.
If this must scale up to a very large number (more than a hundred or so) node.js servers, or to many hundreds of API calls a minute, you probably should investigate using message queuing software.
SECURE YOUR REDIS server from random cybercreeps on the internet.

Multiple Socket.io app processes cause each client socket connects and disconnects repeatedly

I am working on a Node.js app with Socket.IO. I tested it in a single process using PM2 and there were no errors. Then I moved to our production environment (we use a Google Cloud Compute instance).
I run three app processes, and an iOS client connects to the server.
However, the iOS client doesn't keep its socket connection: it doesn't send a disconnect to the server, but it gets disconnected and reconnects to the server. This happens continuously.
I am not sure why the server disconnects the client.
If you have any hint or answer for this, I would appreciate it.
That's probably because requests end up on a different machine rather than the one they originated from.
Straight from Socket.io Docs: Using Multiple Nodes:
If you plan to distribute the load of connections among different processes or machines, you have to make sure that requests associated with a particular session id connect to the process that originated them.
What you need to do:
Enable session affinity, a.k.a. sticky sessions.
If you want to work with rooms/namespaces, you also need a centralised memory store to keep track of namespace information, such as Redis via the Redis adapter.
But I'd advise you to read the documentation piece I posted; things might have changed a bit since the last time I implemented something like this.
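For the Redis adapter part, the wiring is roughly the following on current Socket.IO (v4) with the @socket.io/redis-adapter package; older setups used the socket.io-redis package, so check the docs for your version. The Redis URL and port are placeholders:

```js
// Sketch for Socket.IO v4 with @socket.io/redis-adapter; the Redis URL and
// port are placeholders. Older versions used the socket.io-redis package.
const { createServer } = require('http');
const { Server } = require('socket.io');
const { createClient } = require('redis');
const { createAdapter } = require('@socket.io/redis-adapter');

const httpServer = createServer();
const io = new Server(httpServer);

const pubClient = createClient({ url: 'redis://redis.internal:6379' });
const subClient = pubClient.duplicate();

Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
  // Rooms/namespaces now work across all server instances sharing this Redis.
  io.adapter(createAdapter(pubClient, subClient));
  httpServer.listen(3000);
});
```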
By default, the socket.io client "tests out" the connection to its server with a couple of http requests. If you have multiple servers and those initial http requests don't go to the exact same server each time, then the socket.io connection will never get established properly, will never switch over to webSocket, and will keep attempting to use http polling.
There are two ways to fix this.
You can configure your clients to just assume the webSocket protocol will work. This initiates the connection with one and only one http connection, which is then immediately upgraded to the webSocket protocol (with socket.io running on top of that). In socket.io, this is a transport option specified with the initial connection (a client-side sketch follows at the end of this answer).
You can configure your server infrastructure to be sticky so that a request from a given client always goes back to the exact same server. There are lots of ways to do this depending upon your server architecture and how the load balancing is done between your servers.
If your servers keep any client state locally (rather than in a shared database that all servers can access), then even a dropped connection and reconnect will need to go back to the same server, and sticky connections are your only solution. You can read more about sticky sessions on the socket.io website here.
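For the first option (forcing the webSocket transport), the client-side change is a single connection option; a sketch with a placeholder URL:

```js
// socket.io-client sketch: skip the initial HTTP long-polling probes and
// connect straight over webSocket. The URL is a placeholder.
// In the browser io() is a global; in Node use:
// const { io } = require('socket.io-client');
const socket = io('https://server.example.com', {
  transports: ['websocket'],
});

socket.on('connect', () => console.log('connected'));
socket.on('disconnect', (reason) => console.log('disconnected:', reason));
```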
Thanks for your replies.
I finally figured out the issue. It was caused by the TTL of the backend service in the Google Cloud Load Balancer. The default TTL was 30 seconds, which made each socket connection drop and reconnect.
So I updated the value to 3600 seconds, and after that the connection stayed up.

Coordinating Client and Server side Ports (TCP/IP)

I recently started experimenting with TCP/IP in Java (the question is not Java-specific), and I was wondering whether there is a workaround for this problem: I want my client to be able to list all available local-network servers. This is easy when every server and client uses the same port, e.g. 48000: just cycle through every possible IP and ping any existing servers...
But what if a user already had a game open that was using port 48000? The game would just crash with no workaround. A fix would be for the server to loop and test every port between, say, 48000 and 60000, but then how would the client know which port to connect to?
I want to be able to list all available servers (as a game such as Halo does), without the user having to specify a port number.
Thanks for the help

how to efficiently transfer file between 2 node.js instances?

I'm developing a chat application using app.js, which is a WebKit + Node.js framework.
So I have a Node.js plus bridged web-browser environment on both sides.
I want to build a file-transfer feature somewhat similar to Skype's.
So, the initial idea is to:
1. Connect the clients to the main server.
2. Each client gets the IP of the opposite one.
3. Start a socket or WebSocket server on both clients and connect them to each other.
4. The sender reads the file and transmits it to the receiver.
The questions are:
1. I'm not really sure that one client can "see" the other.
2. A file is binary data, but WebSockets are made for text messages, so I need some kind of encoding/decoding. I thought about Base64, but it carries roughly 33% overhead, so I need something more efficient (Base128?).
3. If it is not efficient to use WebSockets, should I use TCP sockets instead? What problems could appear if I decide to use them?
Yeah, I know about node2node and BinaryJS; I just don't know whether I should use them or not. And I really want to do something myself.
OK, with your communication looking like this:
(C->N)<->N<->(N->C)
(...) is installed on one client's machine. The N's are Node servers, the C's are web clients.
1. This is out of your control. Some file-sharing apps send test packets from the central server to the clients to check whether ports are open, NAT rules are configured correctly, and so on. Your clients will start their own servers on some port; your master server can then open a test connection to those servers to see whether they have started correctly and are reachable from the web, BEFORE telling the other clients that they can send files.
2. WebSockets are great for status messages from your servers to the web GUIs and for general client-to-client communication. For the actual file transfers I would use TCP sockets; see the next point. On the other hand, Base64 encoding is really not a slow process; play with it and benchmark its performance, then make your decision with some data to back it up.
3. You could use a combination: WebSockets from your servers to the web GUIs, but plain TCP communication between the servers themselves. TCP servers (and streams) aren't hard to set up in Node, and I see no disadvantages. It might actually be less complicated than installing node2node on those servers, since TCP support is already built in.
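A minimal sketch of that TCP transfer with Node's built-in net and fs modules (the port, file names and single-process layout are just for demonstration; in practice the receiver and sender run on the two client machines):

```js
// One-process demo of a TCP file transfer with the built-in net and fs modules.
// In practice the receiver and sender run on the two client machines; the port
// and file names are made up.
const net = require('net');
const fs = require('fs');

// Receiver: stream incoming bytes straight to disk.
const server = net.createServer((socket) => {
  socket.pipe(fs.createWriteStream('received.bin'));
  socket.on('end', () => {
    console.log('transfer complete');
    server.close();
  });
});

server.listen(9000, () => {
  // Sender: connect and stream the file; pipe() ends the socket when done.
  const conn = net.connect({ host: '127.0.0.1', port: 9000 }, () => {
    fs.createReadStream('outgoing.bin').pipe(conn);
  });
});
```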

Non-blocking service to receive messages on port via UDP

I want to build a service on my Linux VPS which listens on a certain UDP port and does something with the (text) message it captures. The processing consists of appending the message to a locally stored text file and sending it via HTTP, as a POST variable, to another server.
I've looked into Nginx, but as far as I can see that server can only be bound to receive HTTP traffic, although it is asynchronous.
What is the best way to build this listening service on Linux, and what has the capabilities to do the processing described above?
Is, for instance, node.js a possibility? It looks great.
For simplicity you can use xinetd, and for the app you can use any scripting language that reads the packet from stdin and saves it to the file.
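If you do go the node.js route, the whole service is only a few lines with the built-in dgram, fs and http modules. A sketch, with the port, file path and target server as placeholders:

```js
// UDP listener sketch: append each text message to a file and forward it as a
// POST variable to another server. Port, file path and target host are placeholders.
const dgram = require('dgram');
const fs = require('fs');
const http = require('http');
const querystring = require('querystring');

const sock = dgram.createSocket('udp4');

sock.on('message', (msg) => {
  const text = msg.toString('utf8');

  // 1. Append the message to a local text file.
  fs.appendFile('messages.txt', text + '\n', (err) => {
    if (err) console.error('append failed:', err);
  });

  // 2. Forward it to the other server as an HTTP POST variable.
  const body = querystring.stringify({ message: text });
  const req = http.request({
    host: 'other-server.example.com',
    path: '/receive',
    method: 'POST',
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      'Content-Length': Buffer.byteLength(body),
    },
  });
  req.on('error', (err) => console.error('forward failed:', err));
  req.end(body);
});

sock.bind(41234, () => console.log('listening for UDP on port 41234'));
```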
