I am working on a WebRTC application where a P2P connection is established between a customer and free agents. The agents are fetched using an AJAX call in the application. I want to scale the application so that agents running on any node server can communicate with each other, and status updates on an agent (available, busy, unavailable) can be performed.
My problem statement is that the application is running on port 8040 and the agents service is running on port 8088, where the application makes AJAX calls to fetch the data. What is the best way to scale the agents service, or any idea about how to scale the application?
I followed https://github.com/rajaraodv/redispubsub using Redis pub/sub, but my problem is not resolved, as the agents are still being updated and fetched on another node using AJAX calls.
You didn't give enough info, but to scale your Node.js app you need a central place that holds all the needed info and that can itself scale; Redis scales easily, and you can also try socket.io, etc.
Now, once you have your cluster of Redis (for example), you need to make all your Node.js servers communicate with the Redis server. That way all your node servers have access to the same info; it is then up to you to send the right info to the right clients.
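As a minimal sketch of that idea, assuming the node redis client and Redis on localhost (the agent-status channel, the agents hash, and the setAgentStatus helper are illustrative names, not from the question):

    // Each node server keeps agent status in a shared Redis hash and
    // hears about changes made on other servers via pub/sub.
    // 'agent-status' and 'agents' are illustrative names.
    const { createClient } = require('redis');

    const pub = createClient({ url: 'redis://localhost:6379' });
    const sub = pub.duplicate(); // a subscriber needs its own connection

    async function start() {
      await pub.connect();
      await sub.connect();
      // React to status changes made on any other node server.
      await sub.subscribe('agent-status', (message) => {
        const { agentId, status } = JSON.parse(message);
        console.log(`agent ${agentId} is now ${status}`);
      });
    }

    // Called on whichever node handles the change (available/busy/unavailable).
    async function setAgentStatus(agentId, status) {
      await pub.hSet('agents', agentId, status); // shared source of truth
      await pub.publish('agent-status', JSON.stringify({ agentId, status }));
    }

    start();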
Message Bus approach:
An AJAX call will be sent to one of the Node.js servers. If the message doesn't find its destination on that server, it will be sent to the next one, and so on. So the signaling server must distribute the received message to all the other nodes in the cluster by establishing a message bus.
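A hedged sketch of that bus, again assuming Redis pub/sub as the transport (the signaling-bus channel and the localSockets map are assumptions): every node publishes each signaling message to the bus, and a node delivers it only if the target client is connected locally.

    // Every node joins the bus; only the node holding the target
    // client's connection actually delivers the message.
    const { createClient } = require('redis');

    const pub = createClient();
    const sub = pub.duplicate();
    const localSockets = new Map(); // clientId -> socket, filled on connection

    async function joinBus() {
      await pub.connect();
      await sub.connect();
      await sub.subscribe('signaling-bus', (raw) => {
        const msg = JSON.parse(raw);
        const socket = localSockets.get(msg.to);
        if (socket) socket.send(JSON.stringify(msg)); // destination is local
        // otherwise do nothing: another node in the cluster has the client
      });
    }

    // Any node can relay a message without knowing where `to` is connected.
    async function relay(msg) {
      await pub.publish('signaling-bus', JSON.stringify(msg));
    }

    joinBus();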
Related
I'm new to working with sockets and have a small system design question:
I have two separate node processes for a web app: one is a simulator that is constantly running, and the second is an API server. Both share the same MongoDB database, and we have a React app running for the client, served by the API server.
I'm looking to implement socket.io for real-time notifications and so I've set up a simple connection between the api and client.
My problem is that while the simulator runs, there are some events that I also want to trigger push notifications for, so my question is how to hook that into everything.
The file hierarchy is like:
app/
simulator/
api/
client/
I saw this article for communication between node processes and I currently have 3 solutions in mind:
1. Leave the hierarchy as it is and install the socket.io package inside simulator as well. I'm not sure if sockets work this way, but can both simulator and api connect to the same socket?
2. Move the simulator file into the api project and fork it as a child process so that the two processes can communicate via child/parent messaging. simulator will message api, which will then emit updates through the socket to client.
3. Leave the hierarchy as is and communicate via node-ipc. Same situation as above, with simulator messaging api first before api emits that to client.
If option 1 is possible, that seems like the best solution to me. It seems like extra work to add an additional layer of messaging for options 2 and 3.
Leave the hierarchy as it is and install the socket.io package inside simulator as well. I'm not sure if sockets work this way, but can both simulator and api connect to the same socket?
The client would have to create a separate socket.io connection to the simulator process. The client can then receive data from the API server over one connection and from the simulator over another: two separate, independent socket.io connections from the client, one to the API server and one to the simulator. The simulator and the API server cannot share the same socket unless they are in the same process.
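A minimal client-side sketch of that two-connection setup, assuming the API server listens on port 3000 and the simulator on port 3001 (the ports and event names are assumptions):

    // Browser client: one socket.io connection per server process.
    import { io } from 'socket.io-client';

    const apiSocket = io('http://localhost:3000'); // API server (assumed port)
    const simSocket = io('http://localhost:3001'); // simulator (assumed port)

    apiSocket.on('notification', (data) => console.log('from api:', data));
    simSocket.on('sim-event', (data) => console.log('from simulator:', data));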
Move the simulator file into the api project and fork it as a child process so that the two processes can communicate via child/parent messaging. simulator will message api, which will then emit updates through the socket to client.
This is really part of a broader option: the simulator communicates with the API server and sends it data, which the API server can then send to the client over the single socket.io connection the client made to the API server.
There are lots of different ways for the simulator process to communicate with the API server.
1. Since it's already an API server, you can just make an API for this (probably non-public). The simulator calls that API to send data, and the API server receives the data and sends it to the client.
2. As you suggest, if the simulator is run from the API server as a child process, then you can use the parent/child communication messaging built into node.js (sketched below). Note that you don't have to move the simulator files into the API project at all. You can just use child_process to launch the simulator as another nodejs app from another project; you just have to know the path to that other project.
3. You can use any other communication mechanism you want between the simulator process and the API server process: a socket.io connection between them, several forms of IPC, etc.
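For illustration, a hedged sketch of option 2, assuming a sibling simulator/ directory and a simple message shape (both assumptions):

    // api/server.js: launch the simulator as a child process and relay
    // its messages to connected clients over socket.io.
    const { fork } = require('child_process');
    const path = require('path');
    const { Server } = require('socket.io');

    const io = new Server(3000);

    // fork() sets up an IPC channel automatically; the path is assumed.
    const simulator = fork(path.join(__dirname, '../simulator/index.js'));

    // The simulator calls process.send({ type: 'notification', payload })
    // on its side; the API server relays it to every connected client.
    simulator.on('message', (msg) => {
      if (msg.type === 'notification') {
        io.emit('notification', msg.payload);
      }
    });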
If option 1 is possible, that seems like the best solution to me.
Your option 1 is not possible, as separate processes can't use the same socket.io connection.
It seems like extra work to add an additional layer of messaging for options 2 and 3.
My options #1 and #2 are not much code in each server. You're doing interprocess communication; you should expect to use some code to enable that, but it's not hard at all.
If the lifetime of the simulator server and the API server are always together (they have no independent uses), then I'd probably do the child process thing where the API server launches the simulator and then use parent/child messaging to communicate between them. You do NOT have to combine sources to do this.
The child_process module can run the simulator process by just knowing what directory it is located in.
Otherwise, I'd probably make a small web server on a non-public port in the API server and have the simulator just send data to that other web server. I often refer to this as a control port. It's a way of "controlling or diagnosing" the API server internals and can only be accessed from within the private network and/or with credentials. The reason I'd use a separate web server (in the same nodejs app as the API server) is to make it easy to secure so it can't be accessed from the outside world like the regular public APIs can. You just put the internal web server on a port that is not exposed to the outside world.
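A minimal sketch of that control-port idea, assuming Express and socket.io, with the public API on port 3000 and the internal control server on port 4000 bound to localhost (all of these are assumptions):

    // api/server.js: public API plus a private "control port".
    const express = require('express');
    const { Server } = require('socket.io');

    // Public API server, exposed to the outside world (assumed port 3000).
    const app = express();
    const httpServer = app.listen(3000);
    const io = new Server(httpServer);

    // Internal control server: same nodejs app, separate port, bound to
    // localhost only so it can't be reached from the outside world.
    const control = express();
    control.use(express.json());

    control.post('/notify', (req, res) => {
      io.emit('notification', req.body); // relay simulator data to clients
      res.sendStatus(204);
    });

    control.listen(4000, '127.0.0.1'); // assumed private port

The simulator then just POSTs its data to http://127.0.0.1:4000/notify, or over the private network, with credentials added as needed.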
You should check the Socket.IO docs about adapters and emitters. These let you reach connected sockets from different node processes and help with scaling.
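For example, a sketch using the @socket.io/redis-adapter and @socket.io/redis-emitter packages, assuming Redis on localhost (the event name is illustrative): the adapter lets multiple socket.io server instances share broadcasts, and the emitter lets a process that holds no socket connections emit to clients.

    // In the socket.io server process: attach the Redis adapter so that
    // several server instances behave as one.
    const { Server } = require('socket.io');
    const { createClient } = require('redis');
    const { createAdapter } = require('@socket.io/redis-adapter');

    const pubClient = createClient();
    const subClient = pubClient.duplicate();

    Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
      const io = new Server(3000, {
        adapter: createAdapter(pubClient, subClient),
      });
    });

    // In any other node process (e.g. a simulator): emit to clients
    // through Redis, without owning any socket connections.
    const { createClient } = require('redis');
    const { Emitter } = require('@socket.io/redis-emitter');

    const redisClient = createClient();
    redisClient.connect().then(() => {
      const emitter = new Emitter(redisClient);
      emitter.emit('notification', { text: 'hello from another process' });
    });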
I currently have a socket.io server spawned by a nodeJS web API server.
The UI runs separately and connects to the API via web socket. This is mostly used for notifications and connectivity status checks.
However, the API also acts as a gateway for several micro services. One of these is responsible for computing the data necessary for the UI to render some charts. This operation is long-running, and for various reasons the computation will only start when a request is received.
In a nutshell, the UI sends a REST request to the API, and the API currently uses gRPC to send the request to the micro service. This is bad because it blocks both the API and the UI.
To avoid that blocking, the socket server on the API should be able to relay both the UI request and the "computation ended" event received from the micro service; this way nothing would be blocked. This could eventually lead to the gRPC server on the micro service being removed.
Is this something achievable with socket.io?
If not, is the only way for the API to spawn a secondary socket connection to the micro service for each one received from the UI?
Is this a bad idea?
I hope this is clear, thanks.
I actually ended up not using socket.io. However, this can still be done with it: if the API spawns a socket.io server and has the different services connect as clients, https://socket.io/docs/rooms-and-namespaces/ can be used.
This way messages can be "relayed", and even broadcast from the server to both sides in case something happens.
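A hedged sketch of that layout, where a micro service joins an internal namespace as a client and the API relays its events to the UI through a room (the namespace, room, and event names are all assumptions):

    // API process: socket.io server relaying between services and UI.
    const { Server } = require('socket.io');
    const io = new Server(3000);

    // UI clients join a room per computation job they care about.
    io.on('connection', (socket) => {
      socket.on('watch-job', (jobId) => socket.join(`job:${jobId}`));
    });

    // Micro services connect to an internal namespace as plain clients.
    io.of('/services').on('connection', (socket) => {
      socket.on('computation-ended', ({ jobId, result }) => {
        io.to(`job:${jobId}`).emit('computation-ended', result); // relay
      });
    });

The micro service side is then just a socket.io-client connection to the /services namespace that emits computation-ended when it finishes.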
I am currently creating a horizontally scalable socket.io server, which looks like the following:
LoadBalancer (nginx)
Proxy1 Proxy2 Proxy3 Proxy{N}
BackEnd1 BackEnd2 BackEnd3 BackEnd4 BackEnd{N}
The proxies use sticky sessions + cluster, each with a socket.io server running on a core, and are load balanced by the nginx proxy.
Now to my question: these backend nodes use Redis pub/sub to communicate with the proxies, which handle all the client communication via the transport (websockets).
When a request is sent to a backend server by a proxy, the backend knows the user who requested it, along with the proxy the user is on. My fear is that, when a proxy server goes offline for whatever reason, any pending request on my backend nodes will fail to reach the user once the proxy comes back online, because the messages were sent while the server was offline. What can I implement to circumvent this issue and essentially have messages queued while any proxy server is offline, then delivered when it's back online?
Pub/sub doesn't persist messages. At all. In order to use Redis for this you would need a queue instead. For example, you can use a combination of list operations, where the producer pushes messages onto a list and your consumer uses BLPOP or BRPOP, depending on which end you push to and whether you want messages in FIFO or LIFO order.
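A minimal sketch with the node redis client, assuming one list per proxy (the queue:<proxyId> key is an assumption): LPUSH paired with BRPOP gives FIFO delivery, and anything pushed while the proxy is offline simply waits in the list until it reconnects.

    // Producer (backend node): enqueue instead of fire-and-forget pub/sub.
    const { createClient } = require('redis');

    const producer = createClient();
    const ready = producer.connect();

    async function enqueue(proxyId, message) {
      await ready;
      // The list persists in Redis until the consumer pops the entry.
      await producer.lPush(`queue:${proxyId}`, JSON.stringify(message));
    }

    // Consumer (proxy, on startup or reconnect): drain the backlog, then
    // keep blocking for new messages. BRPOP blocks, so give it a
    // dedicated connection rather than sharing one.
    async function consume(proxyId, deliver) {
      const consumer = createClient();
      await consumer.connect();
      for (;;) {
        const res = await consumer.brPop(`queue:${proxyId}`, 0); // 0 = block forever
        if (res) deliver(JSON.parse(res.element));
      }
    }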
I'm trying to setup RabbitMQ to take web application logs to a log server.
My log server will listen to one channel and store the logs that come in.
There are several web applications that need to send info to the log server.
With many connections (users) hitting the web server, what is the best design to publish messages to RabbitMQ without locking each other? Is it a good idea to keep opening a new connection to the MQ for each web request? Is there some sort of message queue pool?
I'm using IIS for a web server.
I assume you’re leveraging the .NET framework to build your application, given that it’s hosted in IIS. If so, you can also leverage Daishi.AMQP, which has a built-in QueuePool feature. Here is a tutorial that outlines the mechanism in full.
To answer your question, you should initially establish a connection to RabbitMQ from your application server. You can then initialise a Channel (a process that executes within the context of the underlying connection) to serve each HTTP request. It is not a good idea to establish a new connection for each request.
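Since you're on .NET, Daishi.AMQP's QueuePool is the natural fit; purely to illustrate the one-connection, channel-per-unit-of-work split, here is a sketch in Node.js with amqplib (this page is otherwise Node-centric, and the logs queue name is an assumption):

    // One long-lived connection per application server; channels are the
    // cheap per-request resource, connections are not.
    const amqp = require('amqplib');

    let connection;

    async function init() {
      connection = await amqp.connect('amqp://localhost');
    }

    // Called per web request.
    async function publishLog(entry) {
      const channel = await connection.createChannel();
      await channel.assertQueue('logs', { durable: true }); // assumed name
      channel.sendToQueue('logs', Buffer.from(JSON.stringify(entry)), {
        persistent: true, // survive broker restarts
      });
      await channel.close();
    }

In practice you would pool and reuse channels (which is exactly what a QueuePool does) rather than open and close one per request; the per-request channel above just marks where the boundary sits.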
RabbitMQ has a built-in queue feature. It is well documented; have a look at the official docs: http://www.rabbitmq.com/getstarted.html
We are developing a Javascript control which should be constantly connected to a server for receiving animation updates.
We are planning to host this stuff on an Amazon cloud.
The scenario is like this: the server connects to an ActiveMQ queue waiting for updates, and for each update it broadcasts it to all connected clients.
Is it even possible to handle such load with node.js + socket.io?
Will a single node.js server be able to handle such load?
How to organize fast transport between different nodes if we will have to use more than one node?
Will single node.js server be able to handle such load?.. How to organize fast transport between different nodes if we will have to use more than one node
You say that you are planning to host on Amazon. So first off, nothing should be scoped for a single server. Amazon machines will simply "disappear"; you have to assume that you are going to use multiple computers.
...handling 50k simultaneous clients
So to start with, 50k connections for a single box is a very big number. Here's a very detailed blog post discussing "getting to 10k" with node.js+socket.io.
Here's a very telling quote:
it seemed as though 10,000 clients simply required more serialization than my server was able to handle.
So a key component to "getting to 50k" is going to be the amount of work required just pushing data over the wire.
How to organize fast transport between different nodes if we will have to use more than one node.
That blog post is the first of three. When you're done with the first, read the other two. That should point you in the right direction.