Connecting two client sockets - P2P

I have an application that has a client and a server. The server is basically only used to store the names of the files that the clients have, so that when other clients want to search for files, they can go to the server, find the client that has the file they want, and receive the file by connecting to it directly. So far, I can get the socket information of the client that has the file requested by the other client. However, I am now confused about how to connect these two clients. Do I have to create a separate client and server socket between the two clients, or are there other ways?

You have two choices:
Let the server keep its role and act as an intermediary between the two parties: it downloads the file from the client that has it and sends it (via any suitable protocol) to the client that requested it. This is the classic client-server architecture. It is the simpler approach and gives you benefits such as file caching, i.e. if the same file is requested again in the future, the server can send it directly without asking the client.
Keep the P2P architecture and create a separate socket between the two parties, as sketched below. This is not straightforward and needs special care when multiple processes are working simultaneously.
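For the second option, a minimal Node.js sketch of a direct peer-to-peer transfer (the file names, the port, and the peer address are illustrative; in practice the index server hands the requesting client the other peer's host and port):

```js
const net = require('net');
const fs = require('fs');

// On the client that HAS the file: listen for direct peer connections
// and stream the requested file to whoever connects.
const peerServer = net.createServer((peerSocket) => {
  fs.createReadStream('shared/movie.mp4').pipe(peerSocket);
});
peerServer.listen(9000); // this host:port is what the index server advertises

// On the client that WANTS the file: connect to the address returned by
// the index server and write the incoming bytes to disk.
const download = net.connect(9000, '203.0.113.7', () => {
  download.pipe(fs.createWriteStream('downloads/movie.mp4'));
});
```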

Related

Connecting multiple clients to a single server in Node.js

I need to create 20 clients that make simultaneous requests to a server in Node.js using WebSocket. I am able to create the connection between a single server and a client using WebSocket, but when it comes to creating 20 clients, I have no idea how to proceed. Please give me some suggestions.
You wouldn't need to create 20 HTML pages; the same HTML page can be loaded by multiple clients.
On the server side, the 'request' event will fire every time a client connects to your WebSocket server, so your WebSocket server will handle multiple clients out of the box. However, you will need to ascertain which client a particular request originated from. That can be done using tokens, credentials, or any other custom protocol you want to establish between your client and server.
Check the server-side usage example for the websocket module here: https://www.npmjs.com/package/websocket#server-example
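On the client side, a single Node process can open as many connections as you like in a loop; a minimal sketch with the same websocket package (the URL and protocol name are illustrative):

```js
const WebSocketClient = require('websocket').client;

for (let i = 0; i < 20; i++) {
  const client = new WebSocketClient();

  client.on('connect', (connection) => {
    // identify this client to the server, e.g. with a simple token
    connection.sendUTF(JSON.stringify({ clientId: i, text: 'hello' }));
    connection.on('message', (message) => {
      console.log('client ' + i + ' received:', message.utf8Data);
    });
  });

  client.on('connectFailed', (err) => {
    console.error('client ' + i + ' failed:', err.toString());
  });

  client.connect('ws://localhost:8080/', 'echo-protocol');
}
```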

Clustered HTTP server that syncs clients for file transfer

I'm writing a Node HTTP server that essentially only exists for NAT punchthrough. Its job is to facilitate a client sending a file, and another client receiving that file.
Edit: The clients are other Node processes, not browsers. We're using Websockets because some client locations won't allow non-HTTP/S port connections.
The overall process works like this:
All clients keep an open websocket connection.
The receiving client (Alice) tells the server via Websocket that it wants a file from another client (Bob).
The server generates a unique token for this transaction.
The server notifies Alice that it should download the file from /downloads?token=xxxx. Alice connects, and the connection is left open.
The server notifies Bob that it should upload the file to /uploads?token=xxxx. Bob connects and begins uploading the file, since Alice is already listening on the other side.
Once the transfer is complete, both connections are closed.
This is all accomplished by storing references to the HTTP req and res objects inside of a transfers object, indexed by the token. It all works great... as long as I'm not clustering the server.
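Roughly, the single-process version looks like this (a simplified, Express-style sketch with error handling pared down):

```js
const app = require('express')();
app.listen(8080);

const transfers = {}; // token -> { downloadRes }

// Alice connects first; her response stream is held open until Bob arrives.
app.get('/downloads', (req, res) => {
  transfers[req.query.token] = { downloadRes: res };
});

// Bob uploads; his request body is piped straight into Alice's response.
app.post('/uploads', (req, res) => {
  const transfer = transfers[req.query.token];
  req.pipe(transfer.downloadRes); // pipe ends Alice's response when Bob's upload ends
  req.on('end', () => {
    res.end('done');
    delete transfers[req.query.token];
  });
});
```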
In the past, when I've used clustering, I've converted the server to be stateless. However, that's not going to work here: I need the req and res objects to be stored in a state so that I can pipe the data.
I know that I could just buffer the transferring data to disk, but I would rather avoid that if at all possible: part of the requirements is that if I buffer anything to disk I must encrypt it, and I'd rather avoid putting the additional load of encrypting/decrypting transient data on the server.
Does anyone have any suggestions on how I could implement this in a way that supports clustering, or pointers to further research I could look at?
Thanks so much!

Streaming via socket.io with multiple servers from a single source

I have a single stream source S that produces ticker data.
I would like to integrate S into my Node app that uses socket.io. My app runs in a multiple-server environment in production, let's say servers A and B.
Initially, I thought I would simply:
Use the socket.io redis adapter: https://github.com/socketio/socket.io-redis on both A and B.
Connect both A and B to S and have A and B handle the chunks of data emanating from S by simply broadcasting the chunks into the appropriate rooms.
However, after thinking about this, I am realizing that I will probably run into an issue where both A and B broadcast the same data to the client (and the client receives duplicates of the same information). Am I thinking about this correctly? How can I avoid this?
Each client connects to one server, and that connection remains open on the same server; this is called session stickiness, so a client will never have two connections open. To achieve this, you should use a proxy that acts as a load balancer in front of your pool of servers; you can use nginx, for example.
All you have to do then is synchronise rooms across servers so that broadcasts correctly reach all users in a room (because some users in a room will be connected to server A and others to server B), as in the sketch below.
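A minimal sketch of that synchronisation with the socket.io-redis adapter linked above. Note that because the adapter relays broadcasts across servers, only one process should consume S and emit each chunk, which also avoids the duplicate delivery described in the question:

```js
const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');

// With this adapter, io.to(room).emit(...) on ONE server reaches
// clients in that room on EVERY server, via Redis pub/sub.
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

// Illustrative stand-in for however you actually consume the stream S.
function subscribeToTicker(onChunk) { /* e.g. open a feed to S here */ }

// Only one designated process consumes S, so each chunk is broadcast once.
if (process.env.STREAM_CONSUMER === 'true') {
  subscribeToTicker((chunk) => {
    io.to(chunk.room).emit('tick', chunk.data); // relayed to A and B alike
  });
}
```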
documentation about nginx and websockets:
https://www.nginx.com/blog/nginx-nodejs-websockets-socketio/
Hope it helps

How to send a message to individual clients with socket.io with multiple server processes?

I'm about to begin with socket.io and this is more of a theoretical question.
Let's say that I want to send a message to a specific user with socket.io. Normally I would store the socket id with the relevant user id and, when sending, look up the socket id and send to it.
But what if I have multiple server processes running? I'll have to make sure that the correct server, the one the client is actually connected to, does the sending. Is that possible?
For multiple server instances, you need a caching service (memcached, Redis) for authentication and a central message queue service (StormMQ, RabbitMQ, AQ, or another Java-based MQ) that all your Node instances bind to. Each Node instance binds to the message queue for each client / channel / whatever; the bound instances all receive the messages and forward them to their connected clients.
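A minimal sketch of the forwarding side, using Redis pub/sub as the message bus (the socketsByUserId map, the handshake query parameter, and the message shape are illustrative assumptions):

```js
const redis = require('redis');
const io = require('socket.io')(3000);
const sub = redis.createClient();

const socketsByUserId = {}; // sockets connected to THIS instance only

io.on('connection', (socket) => {
  const userId = socket.handshake.query.userId; // illustrative identification
  socketsByUserId[userId] = socket;
  socket.on('disconnect', () => delete socketsByUserId[userId]);
});

// Every instance receives every message; only the instance that actually
// holds the target user's socket delivers it.
sub.subscribe('messages');
sub.on('message', (channel, raw) => {
  const { userId, payload } = JSON.parse(raw);
  const socket = socketsByUserId[userId];
  if (socket) socket.emit('message', payload);
});
```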
The problem is typically about how to play with a WebSocket cluster:
Several front-end servers which will be in charge of handling bidirectional connections with each client. They form the WebSocket cluster.
Several back-end servers which will be in charge of handling the business logic of your application.
Each time the back-end wants to inform the client, it will send a request to the WebSocket cluster which has the responsibility to communicate with the client.
A possible scenario:
Identify each WebSocket cluster's server with a unique id.
Identify each client with a unique id.
Each time a client connects to one of your WebSocket cluster's servers, store its unique id along with the server's unique id in a distributed key/value database.
Thus you know which client is connected with which server.
The next time your back-end application wants to notify a client there are two possibilities:
The pair (clientId, serverId) is not present in the database and you cannot inform the client.
The pair (clientId, serverId) is present in the database; then you have to ask the server identified by serverId to notify the client identified by clientId (see the sketch below).
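A minimal sketch of that back-end lookup; the store client, the serverUrlById mapping, and the httpPost helper are all illustrative assumptions passed in as parameters:

```js
// Back-end side: find which cluster server holds the client, then ask it.
async function notifyClient(store, serverUrlById, httpPost, clientId, payload) {
  const serverId = await store.get('client:' + clientId);
  if (!serverId) return false;             // pair absent: cannot inform the client
  const baseUrl = serverUrlById[serverId]; // e.g. loaded from configuration
  await httpPost(baseUrl + '/notify/' + clientId, payload);
  return true;
}
```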
Notes:
Each WebSocket cluster's server can run a node.js instance supercharged with socket.io. It has to provide a route which takes the clientId as a parameter and uses socket.io to notify this client; indeed, socket.io is aware of which client is using which socket on this server.
Every time a server crashes, you have to clean your database and remove all pairs which contain that server's id.
Deploying a WebSocket cluster can be tedious, so there are commercial offerings like Kaazing.
A good distributed key/value database is Riak. It is better suited than Redis or Memcached for the above purpose because it can easily be distributed within a data-center and across several data-centers.
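A minimal sketch of the notify route each cluster server could expose (express, the clientId handshake parameter, and the shared-store write are illustrative assumptions):

```js
const app = require('express')();
const io = require('socket.io')(app.listen(3000));

const socketsByClientId = {}; // sockets held by THIS cluster server

io.on('connection', (socket) => {
  const clientId = socket.handshake.query.clientId;
  socketsByClientId[clientId] = socket;
  // here you would also write the (clientId, serverId) pair to the shared database
  socket.on('disconnect', () => delete socketsByClientId[clientId]);
});

// The back-end asks this server to notify one of its clients.
app.post('/notify/:clientId', (req, res) => {
  const socket = socketsByClientId[req.params.clientId];
  if (!socket) return res.sendStatus(404); // stale pair: client no longer here
  socket.emit('notification', { from: 'back-end' });
  res.sendStatus(200);
});
```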

Retrieve Socket.io Client from Redis

I'm building a real time data system that allows an Apache/PHP server to send data to my Node.js server, which will then immediately send that data to the associated client via socket.io. So the Apache/PHP server makes a request that includes the data, as well as a user token that tells Node.js which user to send the data to.
Right now this is working fine - I've got an associative array that ties the user's socket.io connection to their user token. The problem is that I need to start scaling this to multiple servers. Naturally, with the default configs of socket.io I can't share connections between node workers.
The solution I had in mind was to use the RedisStore functionality, and just have each of my workers looking at the same Redis store. I've been doing research and there's a lot of documentation on how to use pub/sub functionality for broadcasting messages to large groups (rooms). That's fine, but I need to be able to send messages to a single client, so I need some way to retrieve a user's socket.io connection from the RedisStore.
The only way I can think to do this right now is to create a ton of 'rooms' named with the user's token, and only have one user in each room. Then I could just emit to that room. However, that seems very inefficient.
Is there a better way that I can retrieve user's unique socket.io connections from Redis?
Once a socket connection is made to a given Node instance, it stays connected to that instance.
So it seems you need a way for your PHP server to know which Node server a client is connected to.
In your Redis store you could simply store the server's id as the value, keyed by the client's id. PHP then looks up which Node server to use and makes the request to it, as in the sketch below.
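A minimal sketch of the Node side of that mapping (the token handshake parameter and the key naming are illustrative assumptions):

```js
const redis = require('redis').createClient();
const io = require('socket.io')(8080);
const SERVER_ID = process.env.SERVER_ID || 'node-1'; // unique per instance

io.on('connection', (socket) => {
  const token = socket.handshake.query.token;
  redis.set('user:' + token, SERVER_ID); // PHP reads this key...
  socket.on('disconnect', () => redis.del('user:' + token));
});
// ...and then sends its HTTP request to the instance named by the value.
```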
