I am writing a decentralized chat application using Node.js, Express, AngularJS, Socket.IO and IPFS. I am using libp2p to form the nodes that will communicate with each other over an open connection. libp2p is a networking stack modularized out of the IPFS project.
libp2p allows me to build nodes which are capable of hosting a swarm or listening/dialing to one. I have developed to the point where several nodes can communicate with each other via inputs on an AngularJS webpage (supplemented by Socket.IO), but their IP addresses and TCP ports need to be hard-coded.
The problem I am facing is: if an unknown number of users join this system and set up their nodes, how do I handle that scenario? I have done a lot of research into DHTs, specifically their application in torrents, but am nowhere close to actually applying one.
I do not want to run a central system that keeps track of the users the way a tracker keeps track of seeders and leechers in torrents (a role now made somewhat redundant by the DHT).
In a centralized chat application, every time a user enters or leaves, I can emit an event from the server to all clients using Socket.IO announcing the change (see the sketch below). The equivalent in a decentralized chat app is something I am struggling with greatly.
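For reference, the centralized version is trivial; a minimal Socket.IO sketch (port number arbitrary):

```js
const io = require('socket.io')(3000)

io.on('connection', (socket) => {
  // announce the newcomer to every connected client
  io.emit('user-joined', socket.id)

  socket.on('disconnect', () => {
    io.emit('user-left', socket.id)
  })
})
```

It is this server-side vantage point, knowing about every join and leave, that disappears in the decentralized version.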
I need some guidance please.
You won't have to worry about that issue specifically, as libp2p will handle the discovery and connection of the nodes. In the end you get a primitive for process addressing which will always dial the process if it is reachable on the network.
I've been working recently on better documentation and tutorials for libp2p; please see https://github.com/libp2p/js-libp2p/tree/master/examples and https://github.com/libp2p/js-libp2p. More examples are coming next week, including Peer Routing + Content Routing (aka the DHT).
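In the meantime, here is a rough sketch of a node with automatic peer discovery. Module names and the exact config shape depend on your js-libp2p version (this follows the 0.2x-era API), so treat it as an outline and check the examples repo for the current form:

```js
const Libp2p = require('libp2p')
const TCP = require('libp2p-tcp')
const Mplex = require('libp2p-mplex')
const { NOISE } = require('libp2p-noise')
const MulticastDNS = require('libp2p-mdns')

async function start () {
  const node = await Libp2p.create({
    // listen on an OS-assigned port: nothing hard-coded
    addresses: { listen: ['/ip4/0.0.0.0/tcp/0'] },
    modules: {
      transport: [TCP],
      streamMuxer: [Mplex],
      connEncryption: [NOISE],
      peerDiscovery: [MulticastDNS] // finds peers on the local network
    }
  })

  node.on('peer:discovery', (peerId) => {
    console.log('discovered peer:', peerId.toB58String())
  })

  await node.start()
}

start()
```

Swap MulticastDNS for a bootstrap list or DHT-based discovery when your peers are not on the same LAN.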
Cheers!
I'm developing a video chat application for multiple users using socket.io and simple-peer. I'm using React for the front end and Node.js for the server. I deployed the server on Heroku (currently on free dynos only). I'm also using my own TURN server.
It is working without any trouble for four devices. One of the existing peers disconnects when a fifth one connects.
I couldn't find what I'm missing. I'm trying to connect 10 peers in a room.
Do I need a media server for streaming, or do I have to change anything in the signaling server or TURN server?
Any help would be appreciated.
The average user's computer cannot maintain many peer connections at the same time. If you use a mesh topology in your WebRTC app, the recommended number of users in a room is 4. Beyond that, each additional peer loads the CPU significantly more and the p2p connection to each peer becomes unstable. If you want your application to support more participants per room, you should integrate an SFU into your app (mediasoup, for example).
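For illustration, the server side of an SFU starts with a worker and a per-room router that forwards media between participants; a minimal sketch assuming mediasoup v3 (codec list abbreviated):

```js
const mediasoup = require('mediasoup')

async function createRoom () {
  // one worker (a separate media subprocess) can host many routers/rooms
  const worker = await mediasoup.createWorker()

  // a router is roughly "one room": it forwards streams between participants
  const router = await worker.createRouter({
    mediaCodecs: [
      { kind: 'audio', mimeType: 'audio/opus', clockRate: 48000, channels: 2 },
      { kind: 'video', mimeType: 'video/VP8', clockRate: 90000 }
    ]
  })

  return { worker, router }
}
```

Each participant then opens WebRTC transports to the router; their uplink (producer) is forwarded server-side to everyone else's downlinks (consumers), so every client keeps one connection instead of N-1.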
This is a bit of a stretch, but I hope someone can help.
I'm a PHP/iOS developer who's been working on an app that has a messaging component. The front end is Obj-C; the backend is currently PHP/MySQL. As I've gone further into development, I'm feeling the shortcomings of polling, and I've been looking for a more realtime solution. Sure enough, I've found the answer in WebSockets. PHP doesn't play too well in this domain, but I've been able to get things working locally by using Laravel + Redis + Node.js.
Next I needed to find a suitable host for the real-world app deployment, and this is where I'm running into my first major obstacle (or perceived obstacle?).
Heroku appears to have very low limits on the number of Redis connections allowable:
Link: https://elements.heroku.com/addons/heroku-redis
Free plan: 20 connections
$120/month: 400 connections
$1450/month: 5000 connections
The problem is, if this app does well and gains the kind of traction I want, a LOT of people will be using it at the same time all across the country, and these limits have me worried. Either these prices are a bit ridiculous, or I'm not looking at this correctly.
So my question is: does maintaining an open WebSocket (one user) mean that one of the Redis connections is used? Or am I looking at this completely wrong? I'm trying to decide if I need to just stick to polling or if there is a cost-efficient solution to this. I do want to stick with Laravel/Redis if possible, because I am not too familiar with JS and I feel that my backend will be much less secure if I try to go down that route at this point.
Proper design will use 2 Redis connections per server (or per Heroku dyno); see the sketch after these two points:
One connection will be used to Subscribe (to listen) to the app's channel(s). This connection cannot be used for other functions, so...
A second connection is used for all other Redis features, such as Database use and Publishing to the app's channel(s).
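In Node terms, and assuming the node-redis v4 API (older clients differ in method names but not in the two-connection pattern), that looks something like:

```js
const { createClient } = require('redis')

async function main () {
  // connection #1: publishing plus every other Redis command
  const pub = createClient({ url: 'redis://localhost:6379' })

  // connection #2: dedicated to SUBSCRIBE; once a connection subscribes,
  // Redis only allows (un)subscribe commands on it
  const sub = pub.duplicate()

  await pub.connect()
  await sub.connect()

  await sub.subscribe('app:global', (message) => {
    console.log('app-wide event:', message)
  })

  await pub.publish('app:global', 'hello from this dyno')
}

main()
```

The point for your pricing worry: the Redis connection count scales with the number of servers, not with the number of open WebSockets.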
I don't know if you're into Ruby, but I'm the author of the Plezi HTTP(REST)/WebSocket framework and had to build a solution for Plezi's scaling capabilities over Redis (it's an automated feature: you just tell Plezi the Redis server's address and you're good to go).
If you look over Plezi's Redis code, you will notice there are two connections, and that each server subscribes to two channels, a global channel and a private channel: one is used for application-wide events, while the other allows messages to be routed to specific connections based on the server they belong to (avoiding workload on unrelated servers).
Good luck!
I'm coding an online multiplayer game using Node.js and HTML5, and I'm at the point where I would like to have multiple maps for people to play on, but I'm having a scaling issue. The server I'm running this on isn't able to support the game loops for more than a few maps on its own, and even though it has 4 cores I can only utilize one with a single Node process.
I'd like to scale this so it isn't necessarily limited to a single server. I'd like to start a Node process for each map in the game, then have a master process that looks up which map a player is in and passes their connection to the correct subprocess for handling, game-state updates, etc.
I've found a few ways to use a proxy like nginx or the built-in Node cluster module to load balance, but from what I can tell the examples I've seen just hand a connection to whatever the next available process is, and I need to hand them out specifically. Is there some way for me to route a connection to a Node process based on a condition like that? I'm using Express to serve my static content and socket.io for client-server communication currently. The information about which map a player is in will be in MongoDB along with the rest of the player data, if that makes a difference.
There are many ways to address your problem; here are two suggestions based on your description.
1 - Use a router server which dispatches player queries to "area servers": in this topology all client queries arrive at your router server, which tags each query with a unique id and dispatches it to the right area server; the area server handles the query and sends it back to the router server, which recognizes it by the unique tag and sends the response back to the client.
This solution distributes the CPU/memory load, but not the bandwidth!
2 - Use an authentication server which redirects clients to the least-loaded server: in this case you'll have multiple identical servers and one authentication server; when a client authenticates, send the client the URL and an auth token for an available server, and send that server an authentication ticket.
The client then connects to that server, which recognizes it by matching the auth token against the auth ticket.
This solution distributes CPU/memory/bandwidth, but it might not suit all games, since you can be sent to a different server on each connection, and you won't see the players in the same area unless you are on the same server.
Those are only two simple suggestions; you can mix the two approaches or add other machinery (for example, inter-communication between area servers) which will solve the mentioned issues but will add complexity.
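To make suggestion 2 concrete, here is a hypothetical sketch of the lookup step with Express. lookupPlayerMap and registerTicket are stand-ins for your MongoDB queries, and the server URLs are made up:

```js
const express = require('express')
const crypto = require('crypto')

const app = express()

// made-up mapping of map ids to dedicated map-server URLs
const MAP_SERVERS = {
  forest: 'https://game1.example.com',
  desert: 'https://game2.example.com'
}

// hypothetical stand-ins for your MongoDB lookups
async function lookupPlayerMap (playerId) { return 'forest' }
async function registerTicket (mapId, playerId, token) { /* notify the map server */ }

app.get('/where-is/:playerId', async (req, res) => {
  const mapId = await lookupPlayerMap(req.params.playerId)
  const token = crypto.randomBytes(16).toString('hex') // one-time auth ticket
  await registerTicket(mapId, req.params.playerId, token)
  res.json({ url: MAP_SERVERS[mapId], token })
})

app.listen(8080)
```

The client then connects with something like io(url, { auth: { token } }) (socket.io v3+ syntax; older versions pass it in query), and the map server verifies the ticket in its connection middleware.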
I am trying to set up a POC for myself using nginx, Node.js and Socket.IO 1.0 with clustering on Rackspace. I am under the assumption that I need to use clustering because I want this to be scalable across multiple servers if needed. I want each node to have its own instance, and as of now I can't see any need for the instances to talk to each other for any reason. Again, as of now, I believe I need clustering simply because I may have many clients connecting to this server and I want it to be able to grow and shrink accordingly. My end goal is to build a little POC similar to what is shown here: https://cloud.google.com/developers/articles/real-time-gaming-with-node-js-websocket-on-gcp
I just got what I believe to be a valid setup of the new Socket.IO 1.0 established, but when connecting from different devices behind my router, they all show the same PID in my logging, and I assume this is due to the sticky sessions Socket.IO requires. I am not sure if this is the same as the worker process we used to get with clustering, but again, I am still trying to get my head wrapped around all this.
First, I want to know whether clustering and sticky sessions are required. Since only one PID is issued for the same external IP, is there any way to have each computer treated as its own instance? I do not want to send back a response that updates everyone behind that IP.
My second question may be a stupid one, but I'm asking anyway :) In reading about how to get sticky sessions working, I kept seeing people stating to "use sticky-sessions, like by IP address". The word "like" is what got me. I found people referring to using sticky sessions with IPs and with cookies. Can you do it by anything else, such as a username or an issued token? My concern is that if someone is playing this on a mobile device and they switch towers, the tower will issue a new IP, so in turn a new PID would get issued and essentially that player's game would be lost. Am I understanding this right?
Please forgive me, as I am new to Node.js, but I thought this would be a cool way to learn Node.js and clustering in the cloud. Any info or direction that anyone can provide would be of great help. Many of the tutorials seem to broadcast events to everyone, but I am looking for a scalable solution where each connection can be sent events individually most of the time. I also need to solve for a number of people behind the same firewall being treated as separate connections when the server communicates with them. Again, if there is any reading or tutorials that you feel may help me with Socket.IO 1.0 and what I am trying to do, please reply. Thanks!
In general, since you are using WebSockets, you don't need to worry about stickiness as long as the connection does not terminate. The communication is bi-directional and the HTTP connection is kept alive. If the connection drops, the client essentially reconnects and starts over. So yes, if anyone's IP gets renewed, you will get a new server socket.
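Note that within one process, every connection is already its own instance: each client gets its own socket object and id even when several devices share one external IP. A minimal sketch (socket.io 1.x/2.x standalone style):

```js
const io = require('socket.io')(3000)

io.on('connection', (socket) => {
  // socket.id is unique per connection, not per client IP
  console.log('new connection:', socket.id)

  socket.on('move', (data) => {
    // replies only to this player, not everyone behind the same NAT
    socket.emit('state', { you: socket.id, received: data })
  })
})
```

Stickiness only matters for routing a client back to the same process when you run multiple workers or servers.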
Refer to the socket.io article on using-multiple-nodes, where it states the sticky-session requirement for XHR/JSONP long-polling clients.
I don't believe nginx is capable of load balancing on things like MAC address, as per its documented load-balancing techniques.
I am thinking that you may need a more capable load balancer that can route on MAC addresses, a virtual port ID, or certain headers.
We are developing a JavaScript control which should be constantly connected to a server for receiving animation updates.
We are planning to host this on the Amazon cloud.
The scenario is like this: the server connects to an ActiveMQ queue and waits for updates; it broadcasts each update to all connected clients.
Is it even possible to handle such a load (on the order of 50k simultaneous clients) with Node.js + socket.io?
Will a single node.js server be able to handle such load?
How to organize fast transport between different nodes if we will have to use more than one node?
Will a single node.js server be able to handle such load? ... How to organize fast transport between different nodes if we have to use more than one node?
You say that you are planning to host on Amazon, so first off, nothing should be scoped to a single server. Amazon machines can simply "disappear"; you have to assume that you are going to use multiple machines.
...handling 50k simultaneous clients
So to start with, 50k connections for a single box is a very big number. Here's a very detailed blog post discussing "getting to 10k" with node.js + socket.io.
Here's a very telling quote:
it seemed as though 10,000 clients simply required more serialization than my server was able to handle.
So a key component of "getting to 50k" is going to be the amount of work required just to push data over the wire.
How to organize fast transport between different nodes if we will have to use more than one node.
That blog post is the first of three. When you're done with the first, read the other two. They should point you in the right direction.
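As for the transport between nodes: the usual starting point is the Redis adapter described in socket.io's using-multiple-nodes article, which fans broadcasts out across every process. A sketch assuming socket.io 2.x with the socket.io-redis package:

```js
const io = require('socket.io')(3000)
const redisAdapter = require('socket.io-redis')

// all socket.io nodes point at the same Redis instance
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }))

io.on('connection', (socket) => {
  socket.on('animation-update', (frame) => {
    // io.emit() now reaches clients connected to every node, not just this one
    io.emit('animation-update', frame)
  })
})
```

Each node stays stateless with respect to the others; Redis pub/sub carries the cross-node traffic.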