When to create RabbitMQ channels in node.js

The common advice I've read for creating channels for RabbitMQ recommends using a single channel per thread. But in node.js, we don't manage threads at all. So when do we create channels?
My use case is a node web server, using amqplib, that needs a request/response pattern to communicate with a single RabbitMQ server. Each HTTP request may require multiple RabbitMQ requests in order to generate the HTTP response. I plan to use a single RabbitMQ connection per node process, but I'm not certain how much to reuse channels across the various requests and response queues.
An add-on question: If the answer is to use a channel for each separate request, then will there be much of a latency penalty for having to create a channel before each message sent?

Channels are an AMQP protocol-level construct. They really have nothing to do with the underlying connection (other than the obvious fact that a connection is required in order to have a channel). The .NET implementation of the RabbitMQ client is so poorly written that it thread-locks on channels, hence one channel per thread - that is a limitation of that code, not of the protocol.
There is a comment stating that there is a "heavy cost to creating" channels - I don't see how this could be true given what a channel actually is, but I don't know.
In any case, to answer your question: don't create more channels than you need. If you can operate using one channel (and it sounds like you can), do so. Don't create more work for yourself.
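For reference, here is a minimal sketch of that single-connection, single-channel approach using amqplib's promise API; the broker URL and queue name below are placeholders, not anything from the question.

// Minimal sketch, assuming the amqplib package and its promise API:
// one connection and one channel per node process, reused by every
// HTTP request that needs to talk to RabbitMQ.
const amqp = require('amqplib');

let channelPromise = null;

function getChannel() {
  // Lazily open the connection and a single channel, then reuse them.
  if (!channelPromise) {
    channelPromise = amqp
      .connect('amqp://localhost')              // placeholder broker URL
      .then((conn) => conn.createChannel());
  }
  return channelPromise;
}

// Inside an HTTP handler, several publishes can share the same channel
// without paying for a new channel per message.
async function handleRequest() {
  const ch = await getChannel();
  await ch.assertQueue('rpc_queue');            // hypothetical queue name
  ch.sendToQueue('rpc_queue', Buffer.from(JSON.stringify({ ping: true })));
}

As for the add-on question: opening a channel is a lightweight protocol exchange (channel.open / open-ok) over the already-open connection, so even a channel per request costs roughly one broker round trip rather than a new TCP handshake.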

Related

Difference between Spring Inbound channel adapters and application event listening message producers

I am working on a POC using Spring Integration and STOMP. The initial POC is successful.
I followed the adapters configuration mentioned in https://docs.spring.io/spring-integration/reference/html/stomp.html#stomp
In my POC, I did not include the last two @Bean definitions from the above link.
Inbound Channel Adapter and Message Handlers were sufficient to handle the incoming messages.
Now, my question is:
What is the difference between Inbound channel adapters and application event listening message producers?
Is ApplicationListener used when we follow the DSL as mentioned in an example here?
Thanks,
Mahesh
Well, as you noticed in that Spring Integration documentation about STOMP support, there are a bunch of ApplicationEvents emitted by the STOMP channel adapters. You can indeed handle them using a regular ApplicationListener (@EventListener) if your logic for handling those events is fairly simple and doesn't need further distribution. But if your logic is more complicated and you may need to store an event (or part of it) in a database, send it via email, or process it in parallel after some aggregation, etc., then the ApplicationEventListeningMessageProducer is a much better solution when you already have Spring Integration on board.
However, if you are asking about the nature of the StompInboundChannelAdapter and its relationship with those events, take a look at the StompIntegrationEvent implementations. You will quickly realize that there are no events for the payload of the STOMP frame. That is really what the StompInboundChannelAdapter does: it produces messages based on the body of the STOMP frame.
All the mentioned events emitted from that channel adapter are more about sharing the adapter's state for possible management in your application.

How to configure MassTransit in an unreliable network environment?

I'm trying to get my head around MassTransit in combination with RabbitMQ.
The basic concepts are working in a test project, but what I need is the following:
My system will have one or more servers that react to real-life events (telephony). These events will, by means of MassTransit and RabbitMQ, translate into messages that will be picked up by one or more receivers via a separate server set up as the RabbitMQ host. So far so good.
However, I cannot assume that I always have a connection between the publisher and the host machines. Just assume that the publishing server will continue to consume the real-life events, but now cannot publish its messages.
So, the question is: Does MassTransit have some kind of mechanism to store messages locally some way until the connection is re-established?
Or should I install RabbitMQ on every publishing server as well, in order to create a local exchange? Then I have to make the exchanges synchronize themselves after a reconnect.
You probably have to implement a store-and-forward policy. Instead of publishing your message directly through MassTransit and RabbitMQ, you can store the message in a persistent repository (a local database) and delegate to some other process the job of publishing the stored messages through MassTransit later. This approach is often referred to as "client high availability". It does not replace the standard HA (high availability) on the server side, like the HA implemented by RabbitMQ, but it is a good approach in a distributed system like the one you described, because it helps a lot in server-failure scenarios (e.g. an issue on the RabbitMQ server causes the loss of messages that you still have in some client's local store, so you can have them processed again).
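The store-and-forward idea itself is independent of MassTransit; very roughly, and sketched here in node.js only to illustrate the pattern, it looks like the following (the db handle, outbox table and publish function are all hypothetical):

// Illustrative store-and-forward (outbox) loop, not MassTransit-specific.
// `db` is a hypothetical local database handle and `publish` a hypothetical
// function that throws while the broker is unreachable.
async function enqueue(db, message) {
  // Always write locally first, even while the broker is down.
  await db.run('INSERT INTO outbox (payload) VALUES (?)', JSON.stringify(message));
}

async function forwardPending(db, publish) {
  const rows = await db.all('SELECT id, payload FROM outbox ORDER BY id');
  for (const row of rows) {
    try {
      await publish(JSON.parse(row.payload));               // hand off to the bus
      await db.run('DELETE FROM outbox WHERE id = ?', row.id);
    } catch (err) {
      break; // broker still unreachable; retry on the next timer tick
    }
  }
}

// A timer keeps retrying until the connection is re-established:
// setInterval(() => forwardPending(db, publish), 5000);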

What is the best way to communicate between two servers?

I am building a web app which has two parts. In one part it uses a real-time connection between the server and the client, and in the other part it does some CPU-intensive work to provide relevant data.
I am implementing the real-time communication in nodejs and the CPU-intensive part in python/java. What is the best way for the nodejs server to participate in duplex communication with the other server?
For a basic solution you can use Socket.IO if you are already using it and know how it works. It will get the job done, since it allows communication between a client and a server where the "client" can itself be a different server written in a different language.
If you want a more robust solution with additional options and controls, or one that can handle higher traffic throughput (though this shouldn't be an issue if you are ultimately just sending it over the relatively slow internet), you can look at something like ØMQ (ZeroMQ). It is a messaging library that gives you more control and many different communication patterns beyond just request-response.
Whichever you set up, I would recommend using your CPU-intensive server as the stable end (the server) and your web server(s) as the client(s). This assumes you are using a single server for your CPU-intensive tasks and running several NodeJS instances to take advantage of multiple cores for your web server; it simplifies your communication because there is a single point to connect to.
If you foresee needing multiple CPU servers, you will want to set up a routing server that can route between multiple web servers and multiple CPU servers, and in that case I would recommend the extra work of learning ØMQ.
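As a rough illustration of that "web server as client" topology using Socket.IO (the host, port and event name below are made up):

// Web (node.js) side: acts as a Socket.IO *client* of the CPU server.
// Assumes the socket.io-client package; host, port and event name are
// placeholders.
const { io } = require('socket.io-client');

const cpuServer = io('http://cpu-server.internal:4000');

function runHeavyTask(payload) {
  return new Promise((resolve) => {
    // Socket.IO acknowledgement callbacks give a simple request/response flow.
    cpuServer.emit('heavy-task', payload, (result) => resolve(result));
  });
}

// The CPU-intensive side would run a Socket.IO server (implementations exist
// for several languages) and answer via the acknowledgement, e.g.:
//   io.on('connection', (socket) => {
//     socket.on('heavy-task', (payload, ack) => ack(compute(payload)));
//   });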
You can use the http.request method provided by node to make HTTP (curl-style) requests from within your node code.
The http.request method is also commonly used for implementing authentication APIs.
You can attach your callback to the request's response, and when the response data arrives in node, send it back to the user.
Meanwhile, in the background, the java/python server can handle node's request and do the CPU-intensive work.
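A bare-bones sketch with node's built-in http module; the backend host, port and path are placeholders:

// Sketch using only node's built-in http module; host, port and path are
// placeholders for the python/java backend.
const http = require('http');

function callBackend(payload, callback) {
  const body = JSON.stringify(payload);
  const req = http.request(
    {
      hostname: 'cpu-backend.internal',      // hypothetical backend host
      port: 8080,
      path: '/compute',
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
    },
    (res) => {
      let data = '';
      res.on('data', (chunk) => (data += chunk));
      res.on('end', () => callback(null, JSON.parse(data)));
    }
  );
  req.on('error', (err) => callback(err));
  req.write(body);
  req.end();
}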
I maintain a node.js application that intercommunicates among 34 tasks spread across 2 servers.
In your case, for communication between the web server and the app server you might consider mqtt.
I use mqtt for this kind of communication. There are mqtt clients for most languages, including node/javascript, python and java. In my case I publish json messages using mqtt 'topics', and any task that has registered to subscribe to a 'topic' receives its data when published. If you google "pub sub", "mqtt" and "mosquitto" you'll find lots of references and examples. Mosquitto (now an Eclipse project) is only one of a number of mqtt brokers that are available. Another very good broker, written in Java, is HiveMQ.
This is a very simple, reliable solution that scales well. In my case literally millions of messages reliably pass through mqtt every day.
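A small sketch of that flow using the mqtt npm package; the broker URL and topic names are placeholders:

// Sketch assuming the `mqtt` npm package; broker URL and topics are placeholders.
const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://broker.internal:1883');

client.on('connect', () => {
  // Any task interested in a topic subscribes to it.
  client.subscribe('tasks/results');
});

client.on('message', (topic, message) => {
  // `message` is a Buffer containing the published JSON payload.
  const data = JSON.parse(message.toString());
  console.log('received on', topic, data);
});

// Elsewhere (possibly a python or java task), publish JSON to a topic.
client.publish('tasks/requests', JSON.stringify({ job: 42 }));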
You must be looking for Socket.IO.
Socket.IO enables real-time bidirectional event-based communication.
It works on every platform, browser or device, focusing equally on reliability and speed.
Sockets have traditionally been the solution around which most realtime systems are architected, providing a bi-directional communication channel between a client and a server.

Does each queue on ZeroMQ require its own port?

We are looking to build a facade in nodejs that will accept requests from a client and then farm the requests out, using a request/reply pattern, to a number of different backend services. We want these requests held on individual queues in the event that one of the backend services is down. From an initial reading of the ZeroMQ docs, it appears each queue is bound to its own port. When sending a message to a socket, there doesn't appear to be a way of naming a queue/topic to send to.
Is there a one-one mapping between ports and queues?
Thanks, Tom
ZeroMQ doesn't have the concept of "queues" or "topics". Your application consists of tasks, connected across some protocol, e.g. tcp://, and sending each other messages in various patterns. In your example one task will bind to an address:port and the workers will connect to it. The sender then sends requests to its socket, which deals them out to workers.
The best way to learn ZeroMQ is to work through at least the first couple of chapters of the Guide, before you design your own application. Many of the existing messaging concepts you're familiar with disappear into simpler patterns with ZeroMQ.
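To make that concrete, here is a rough sketch with the zeromq npm package's promise API (v6 style); the address and payloads are placeholders, and note that there is no queue name anywhere, only an address to bind or connect to:

// Sketch assuming the `zeromq` npm package (v6, promise API); the address
// and payloads are placeholders.
const zmq = require('zeromq');

// Sender: binds once; requests are dealt round-robin to connected workers.
async function sender() {
  const sock = new zmq.Request();
  await sock.bind('tcp://127.0.0.1:5555');
  await sock.send(JSON.stringify({ ask: 'something' }));
  const [reply] = await sock.receive();        // waits for a worker's reply
  console.log('got', reply.toString());
}

// Worker: connects to the sender's address and replies on the same socket.
async function worker() {
  const sock = new zmq.Reply();
  sock.connect('tcp://127.0.0.1:5555');
  for await (const [msg] of sock) {
    // Handle the request, then reply.
    await sock.send(JSON.stringify({ answer: 42 }));
  }
}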

Publish subscribe with nodejs and redis(node_redis)

I am trying to build a generic publish/subscribe server with nodejs and node_redis that receives requests from a browser with a channel name and responds with any data that has been published to that channel. To do this, I am using long-polling requests from the browser and dealing with these requests by sending a response when a message is received on a channel.
For each new request, an object is created for subscribing to the channel (if and only if it does not already exist).
var clients = {};
// when a request comes in, create a subscriber for this channel only once
if (!clients[channel]) {
  clients[channel] = redis.createClient();
  clients[channel].subscribe(channel);
}
Is this the best way to deal with the subscription channels, or is there some other, more intuitive way?
I don't know your design, but you can subscribe with one redis client to multiple channels (once a client has subscribed, that connection can only subscribe to other channels or unsubscribe: http://redis.io/commands/subscribe). When you receive a message, you have full information about which channel it came from, so you can distribute it to all interested clients.
This helped me, because I could encode the type of message in the channel name and then dynamically choose an action for each message in one small function, instead of creating a separate subscription with separate logic for each channel.
Inside my node.js server I have only 2 redis clients:
a simple client for all standard actions - lpush, sadd and so on
a subscriber client - which listens for messages on the subscribed channels; these messages are then distributed to all sessions (stored as sets for each channel type) using the first redis client.
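Roughly, with the classic node_redis callback API (the channel names here are made up), that layout looks like:

// Sketch assuming the classic node_redis callback API; channel names are
// placeholders.
const redis = require('redis');

const client = redis.createClient();   // normal client: lpush, sadd, ...
const sub = redis.createClient();      // dedicated subscriber connection

// One subscriber connection can listen on many channels.
sub.subscribe('chat:messages');
sub.subscribe('chat:presence');

sub.on('message', (channel, message) => {
  // The channel name says what kind of message this is, so one handler
  // can dispatch for every channel.
  if (channel === 'chat:messages') {
    // ... answer waiting long-poll requests
  } else if (channel === 'chat:presence') {
    // ... update session sets via `client`
  }
});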
I would like to point you to my post about pub/sub using socket.io together with redis. Socket.io is a very good library =>
How to use redis PUBLISH/SUBSCRIBE with nodejs to notify clients when data values change?
I think the design is very simple and it should also be very scalable.
That seems like a pretty reasonable solution to me. What don't you like about it?
Something to keep in mind is that you can have multiple subscriptions on each Redis connection. This might end up complicating your logic, which is the opposite of what you are asking for. However, at scale this might be necessary. Each Redis connection is relatively inexpensive, but it does require a file descriptor and some memory.
Complete Redis Pub/Sub Example (Real-time Chat using Hapi.js & Socket.io)
We were trying to understand Redis Publish/Subscribe ("Pub/Sub") and all the existing examples were either outdated, too simple or had no tests.
So we wrote a Complete Real-time Chat using Hapi.js + Socket.io + Redis Pub/Sub Example with End-to-End Tests!
https://github.com/dwyl/hapi-socketio-redis-chat-example
The Pub/Sub component is only a few lines of node.js code:
https://github.com/dwyl/hapi-socketio-redis-chat-example/blob/master/lib/chat.js#L33-L40
Rather than pasting it here (without any context) we encourage you to check out/try the example.
We built it using Hapi.js, but the chat.js file is decoupled from Hapi and can easily be used with a basic node.js http server or express (etc.)
