jvm-libp2p how to achieve programs communicating over pubsub on the level of go-ipfs or js-ipfs nodes? - p2p

If we connect two jvm-libp2p hosts to a public IPFS node and send pubsub messages between them, the messages are not passed.
Only if we connect them directly do they receive each other's pubsub messages, but that defeats the purpose of pubsub.
With js-ipfs or go-ipfs, however, running two or more peers and subscribing to a topic is enough to have near-instant message exchange between them, even when isolated with no mDNS, so it is clear that the messages travel through the IPFS network and remote peers.
How can the same be achieved on the JVM?

Related

Routing MQTT protocol to PM2

We have devices connected to an MQTT broker (mosquitto), publishing events. We want to capture all of these events through a Node application. One simple solution is to create a Node app as a client connected to the MQTT broker that listens for every event and does a specific job for each one. But from a scalability point of view, if we want to scale our Node app we have to run multiple instances of it and use PM2 as a load balancer. The problem is that when we create more than one instance, all instances receive the same event, and for that specific event they all do the same job, as many times as there are instances.
How can we route all MQTT events to PM2 load balancer?
You are possibly approaching the problem the wrong way.
You want to look at something called Shared Subscriptions. This is new in the MQTT v5 specification (though some brokers implemented proprietary versions of it for MQTT v3).
A Shared Subscription tells the broker to distribute incoming messages across a collection of clients, delivering each message to only one member of the group.
Mosquitto added support for Shared Subscriptions in version 1.6 (but you should make sure you are using the latest 1.6.x release).
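As a minimal sketch with the Node.js mqtt package (the broker URL, the "workers" group name and the topic filter are placeholder assumptions), each instance subscribes to the same shared subscription:
var mqtt = require('mqtt');
// connect with MQTT v5 so the broker accepts shared subscriptions
var client = mqtt.connect('mqtt://localhost:1883', { protocolVersion: 5 });
client.on('connect', function () {
  // every app instance subscribes to the same shared subscription; the broker
  // delivers each message to only one member of the "workers" group
  client.subscribe('$share/workers/devices/+/events');
});
client.on('message', function (topic, payload) {
  console.log('handling', topic, payload.toString());
});
Run two or more of these instances under PM2 and each published event should be handled by only one of them.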

Can two different MQTT brokers communicate with each other?

I am currently exploring the possibility of using the MQTT protocol in my program, and I have found that there are several different MQTT brokers. So, my question is: can you mix and match brokers for this communication? For instance, a Mosquitto broker on device 1 and an ActiveMQ broker on device 2. Will this work?
I think there might be a slight misunderstanding here.
In a simple deployment there is only one MQTT broker; multiple MQTT clients (on one or many devices) connect to this one broker and exchange messages on any topics. As long as all the clients conform to the MQTT specification, they should be able to connect successfully to any broker implementation.
If you want a more complex deployment, it is possible to have multiple brokers and have groups of clients connect to different brokers. You can then set up what is known as a bridge between the brokers, which allows them to share some or all of the topics. This allows messages to be shared by all clients regardless of which broker they connect to.
Assuming all the brokers conform to the MQTT spec (which is very likely), it should all just work, but how you configure bridges differs between broker implementations.
Be aware that a new version of the MQTT spec (v5) just went live (end of 2017); brokers and client libraries will be updating to support it over the coming weeks/months, so check which versions you are trying to connect with.
Usually there is a bridge mode to connect brokers together, even brokers of different kinds such as Mosquitto and ActiveMQ; this is not only a concept in MQTT brokers but also in other message queues. Some brokers also support clustering, such as RabbitMQ. Official Mosquitto only supports bridging, but there is a clustered Mosquitto implementation at hui6075/mosquitto-cluster, which is easy to deploy.
Besides, the most significant difference between a "cluster" and a "bridge" is that with a cluster the set of brokers looks like one logical broker to external clients, in terms of sessions, retained messages, QoS, and so on.
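As a rough illustration of the bridge approach, a Mosquitto bridge to a second broker is configured in mosquitto.conf along these lines (the connection name, remote address and topic pattern are placeholders):
# mosquitto.conf on broker 1: bridge part of the topic tree to a second broker
connection bridge-to-broker2
address broker2.example.com:1883
# share the sensors/ hierarchy in both directions at QoS 0
topic sensors/# both 0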

Send my message only to subscribed server and not to my other servers

How do I send a message published to a Redis channel only to the server the subscriber is connected to, and not to my other servers (where the required subscriber isn't connected)?
I'm using Socket.IO and Redis server.
Have you read the documentation?
Senders (publishers) are not programmed to send their messages to specific receivers (subscribers). Rather, published messages are characterized into channels, without knowledge of what (if any) subscribers there may be.
In other words, you cannot target a specific subscriber.
Depending on what you are trying to achieve, you can consider using multiple channels, with each consumer using its own.
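As a sketch of that idea with the Node.js redis client (the channel naming scheme and the server identifier are assumptions, not part of any standard), each server subscribes only to its own channel and the publisher targets the channel of the server that holds the subscriber:
var redis = require('redis');
// each server listens only on its own channel
var serverId = process.env.SERVER_ID || 'server-1';   // hypothetical identifier
var sub = redis.createClient();
sub.subscribe('messages:' + serverId);
sub.on('message', function (channel, message) {
  // forward to the Socket.IO clients connected to *this* server only
  console.log('received on', channel, message);
});
// the publisher picks the channel of the server the target subscriber is on
var pub = redis.createClient();
pub.publish('messages:server-1', JSON.stringify({ to: 'alice', body: 'hi' }));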

How to configure MassTransit in an unreliable network environment?

I'm trying to get my head around MassTransit in combination with RabbitMQ.
The basic concepts are working in a test project, but what I need is the following:
My system will have one or more servers that react to real-life events (telephony). These events will, by means of MassTransit and RabbitMQ, translate into messages that will be picked up by one or more receivers via a separate server set up as the RabbitMQ host. So far so good.
However, I cannot assume that I always have a connection between the publisher and the host machines. Assume that the publishing server keeps consuming the real-life events but now cannot publish its messages.
So, the question is: Does MassTransit have some kind of mechanism to store messages locally some way until the connection is re-established?
Or should I install RabbitMQ on every publishing server as well, in order to create a local exchange? Then I would have to make the exchanges synchronize themselves after a reconnect.
You probably have to implement a store-and-forward policy. Instead of publishing your message directly through MassTransit and RabbitMQ, you can store the message in a persistent repository (a local database) and delegate to some other process the job of publishing, through MassTransit, the messages stored earlier. This approach is often referred to as "client high availability". It does not replace the standard server-side HA (high availability) such as that implemented by RabbitMQ, but it is a good approach in a distributed system like the one you described, because it helps a lot in scenarios of server failure (e.g. an issue on the RabbitMQ server causes the loss of some messages that you still have in a client's store, so you can process them again).
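The pattern itself is independent of MassTransit; a minimal Node.js-style sketch of the store-and-forward loop (the in-memory outbox and tryPublish are stand-ins for a local database and the real bus publish call) might look like this:
var outbox = [];                                      // stand-in for a local database
function onTelephonyEvent(event) {
  outbox.push({ storedAt: Date.now(), body: event }); // always persist locally first
}
function tryPublish(message, callback) {
  // stand-in for the real publish call to the bus; pass an error to the
  // callback while the broker is unreachable
  callback(null);
}
// forwarding loop: retry until the broker accepts the message
setInterval(function () {
  if (outbox.length === 0) return;
  tryPublish(outbox[0], function (err) {
    if (!err) outbox.shift();                         // remove only after a successful publish
    // on error the message stays in the outbox and is retried on the next tick
  });
}, 1000);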

How does socket.io send messages across multiple servers?

The Socket.io API has the ability to send messages to all clients.
With one server and all sockets in memory, I understand how that one server can send a message to all its clients; that's pretty obvious. But what about multiple servers using Redis to store the sockets?
If I have client a connected to server y and client b connected to server z (and a Redis box for the store), and I do socket.broadcast.emit on one server, the client on the other server will receive this message. How?
How do the clients that are actually connected to the other server get that message?
Is one server telling the other server to send a message to its connected client?
Is the server establishing its own connection to the client to send that message?
Socket.io uses its MemoryStore by default, so all the connected clients are stored in memory, making it impossible (well, not quite, but more on that later) to send and receive events from clients connected to a different socket.io server.
One way to make all the socket.io servers receive all the events is for every server to use Redis's pub/sub: instead of using socket.emit, one can publish to Redis.
// publish the event to a Redis channel instead of emitting it directly
redis_client = require('redis').createClient();
redis_client.publish('channelName', data);
And all the socket servers subscribe to that channel through Redis and, upon receiving a message, emit it to the clients connected to them.
redis_sub = require('redis').createClient();
redis_sub.subscribe('channelName', 'moreChannels');
redis_sub.on("message", function (channel, message) {
  // re-emit to the clients connected to *this* server
  // ('io' is this server's socket.io instance)
  io.sockets.emit(channel, message);
});
Complicated stuff!! But wait, it turns out you don't actually need this sort of code to achieve the goal. Socket.io has a RedisStore which essentially does what the code above is supposed to do, in a nicer way, so that you can write Socket.io code as you would for a single server and it will still be propagated to the other socket.io servers through Redis.
To summarise, socket.io sends messages across multiple servers by using Redis as the channel instead of memory.
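For reference, wiring up the RedisStore in the old socket.io 0.9 API looked roughly like this (module paths and options vary by version; current socket.io releases use the separate socket.io-redis adapter instead):
var io = require('socket.io').listen(server);          // 'server' is your HTTP server
var RedisStore = require('socket.io/lib/stores/redis');
var redis = require('socket.io/node_modules/redis');
// three Redis connections: one for publishing, one for subscribing, one for commands
io.set('store', new RedisStore({
  redisPub: redis.createClient(),
  redisSub: redis.createClient(),
  redisClient: redis.createClient()
}));
// from here on, io.sockets.emit(...) is propagated to every socket.io server
// that shares the same Redis instance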
There are a few ways you can do this. More info in this question. A good explanation of how pub/sub in Redis works is here, in Redis' docs. An explanation of how the paradigm works in general is here, on Wikipedia.
Quoting the Redis docs:
SUBSCRIBE, UNSUBSCRIBE and PUBLISH implement the Publish/Subscribe messaging paradigm where (citing Wikipedia) senders (publishers) are not programmed to send their messages to specific receivers (subscribers). Rather, published messages are characterized into channels, without knowledge of what (if any) subscribers there may be. Subscribers express interest in one or more channels, and only receive messages that are of interest, without knowledge of what (if any) publishers there are. This decoupling of publishers and subscribers can allow for greater scalability and a more dynamic network topology.
