I'm evaluating Thrift as an RPC framework. I want to be able to do publish/subscribe logic with Thrift and was wondering how to do this.
A few different answers may help:
Is there a canonical way to do publish/subscribe with thrift?
Is there a way to stream results of a call (similar to zerorpc streaming)?
How do you solve this problem?
I've done my own research, and it looks like with Thrift you should serialize your messages and do pub/sub over some type of message queue like ZeroMQ or Redis.
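For example, here is a minimal sketch of that approach using the zeromq npm package (v6 API). The topic name is made up, and encodeEvent()/decodeEvent() are JSON stand-ins for real Thrift binary serialization:

```js
const zmq = require("zeromq");

// Stand-ins for Thrift serialization (e.g. TBinaryProtocol); JSON keeps the
// sketch runnable without a generated Thrift struct.
const encodeEvent = (event) => Buffer.from(JSON.stringify(event));
const decodeEvent = (buf) => JSON.parse(buf.toString());

async function publisher() {
  const pub = new zmq.Publisher();
  await pub.bind("tcp://*:5556");
  // First frame is the topic, second frame the serialized message.
  await pub.send(["sensor.updates", encodeEvent({ id: 1, temp: 21.5 })]);
}

async function subscriber() {
  const sub = new zmq.Subscriber();
  sub.connect("tcp://localhost:5556");
  sub.subscribe("sensor.updates"); // prefix-matched topic filter
  for await (const [topic, msg] of sub) {
    console.log(topic.toString(), decodeEvent(msg));
  }
}

subscriber().catch(console.error);
// Small delay as a crude guard against ZeroMQ's "slow joiner" race.
setTimeout(() => publisher().catch(console.error), 100);
```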
Have you taken a look at DDS? The Data Distribution Service is a full standard for doing publish/subscribe communication via topics.
I've created one producer script and one consumer script to publish and receive data from a topic. But how can I do the same for multiple clients? I'm using Node.js for the implementation.
Your question needs a bit more explanation, but maybe you can find what you are looking for here.
Can you please try to describe your issue in a bit more detail?
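If the question is how several consumers can all receive the same published messages, here is a minimal sketch using the mqtt npm package; the broker URL, topic name, and client count are made up for illustration:

```js
const mqtt = require("mqtt");

const BROKER = "mqtt://localhost:1883"; // assumed local broker, e.g. Mosquitto
const TOPIC = "news/tech";              // made-up topic

// Each consumer is simply its own connection; the broker fans out
// every publish on the topic to all subscribed clients.
function startConsumer(id) {
  const client = mqtt.connect(BROKER);
  client.on("connect", () => client.subscribe(TOPIC));
  client.on("message", (topic, payload) => {
    console.log(`consumer ${id} received "${payload}" on ${topic}`);
  });
}

[1, 2, 3].forEach(startConsumer);

const producer = mqtt.connect(BROKER);
producer.on("connect", () => producer.publish(TOPIC, "hello subscribers"));
```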
I need to build an MQTT broker with basic functions, but I cannot find any documentation about writing an MQTT broker.
Does anyone have any idea how to do this? What do I need to read?
Firstly, I just want the broker to be able to accept a connection using CONNECT and CONNACK.
The MQTT specification is available here; it outlines the protocol you will need to implement.
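To give a sense of scale for that first step, here is a minimal Node.js sketch of a TCP server that accepts an MQTT 3.1.1 CONNECT packet and answers with a CONNACK; it does no real parsing, keep-alive, or session handling:

```js
const net = require("net");

const server = net.createServer((socket) => {
  socket.once("data", (buf) => {
    // The MQTT control packet type is the high nibble of byte 0; CONNECT is 1.
    if (buf.length >= 2 && buf[0] >> 4 === 1) {
      // CONNACK: type 2, remaining length 2, session-present 0, return code 0 (accepted).
      socket.write(Buffer.from([0x20, 0x02, 0x00, 0x00]));
    } else {
      socket.destroy(); // anything other than CONNECT first is a protocol violation
    }
  });
});

server.listen(1883, () => console.log("toy broker listening on :1883"));
```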
If your question is, more generically, "How do I implement a network protocol?", then I would have to ask why you think you need to write your own broker and not just use one of the existing ones available. Even if the existing open source brokers don't do exactly what you want, adapting one of them will be much easier than starting from scratch. Brokers like Mosca and Moquette allow themselves to be embedded into other applications.
If you still feel you need to write your own, then I would start by picking one of the existing open source brokers and seeing how they have gone about it; picking one in a language similar to the one you intend to use would be the best bet.
I want to use MQTT as the communication protocol with the RabbitMQ message broker, but on the RabbitMQ website I found this paragraph:
These implementations are suitable for development but sometimes won't be for production needs. MQTT 3.1 specification does not define consistency or replication requirements for retained message stores, therefore RabbitMQ allows for custom ones to meet the consistency and availability needs of a particular environment. For example, stores based on Riak and Cassandra would be suitable for most production environments as those data stores provide tunable consistency.
https://www.rabbitmq.com/mqtt.html
So, from this paragraph, it seems I should use Cassandra as a database for RabbitMQ, but I didn't find anything about integrating Cassandra as a database for RabbitMQ.
Can you help me by pointing me to something that makes this possible?
NB: I'm a newbie with RabbitMQ.
The paragraph refers specifically to the "retained messages" part of the MQTT spec, i.e. the messages you want to keep for a long time, like a "last known configuration" that you may want to deliver to any MQTT subscriber, regardless of whether or not it was online and subscribed at the moment the message was published.
It's a very particular situation, and unless you need that feature you don't have to worry about using RabbitMQ as an MQTT broker. For regular messages, the built-in RabbitMQ replication options are perfectly suitable and production-ready.
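To make the feature concrete, here is a small sketch with the mqtt npm package showing a retained publish against RabbitMQ's MQTT plugin, assumed to be enabled on its default port 1883; the topic and payload are made up:

```js
const mqtt = require("mqtt");
const URL = "mqtt://localhost:1883"; // RabbitMQ with the rabbitmq_mqtt plugin enabled (assumed)

// Publish the "last known configuration" with the retain flag set, so the
// broker stores it as the topic's current value.
const pub = mqtt.connect(URL);
pub.on("connect", () => {
  pub.publish(
    "devices/42/config",
    JSON.stringify({ sampleRate: 5 }),
    { retain: true },
    () => pub.end()
  );
});

// A subscriber that connects *later* still receives the retained message
// immediately on subscribing; that store is what the quoted docs are about.
setTimeout(() => {
  const late = mqtt.connect(URL);
  late.on("connect", () => late.subscribe("devices/42/config"));
  late.on("message", (topic, payload) => console.log(topic, payload.toString()));
}, 1000);
```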
As of now, RabbitMQ doesn't support this feature,
so it's not possible to use another database instead of the Mnesia database.
I have a multi-server, multi-client application, and I would like to keep some common data managed by a single daemon (to avoid a concurrency nightmare), so the servers can just ask it when they need to manipulate the shared data.
I am already using libevent in the servers, so I would like to stick with it and use its RPC framework, but I could not find an example of it being used in the real world.
Google Protobuf provides an RPC framework. It is also used inside Google for RPC and many other things.
Protobuf is a library for data exchange.
It handles data serialization, deserialization, compression, and so on.
It was created and open-sourced by Google.
However, they didn't open-source the RPC implementation itself.
Protobuf only provides the framework: you define services in a .proto file, and the generated stubs leave the actual transport up to you.
You can integrate Protobuf with your existing libevent program.
I have personally implemented an RPC with Protobuf and libev (a project similar to libevent), and they work fine together.
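As a sketch of that split (stubs from Protobuf, transport from you), here is how it looks with the protobufjs package in Node.js. The greeter.proto file is hypothetical, and the canned rpcImpl stands in for whatever transport you supply; in the asker's case, that is where the libevent connection would go:

```js
const protobuf = require("protobufjs");

// greeter.proto (hypothetical) is assumed to define:
//   message HelloRequest { string name = 1; }
//   message HelloReply   { string message = 1; }
//   service Greeter { rpc SayHello (HelloRequest) returns (HelloReply); }
protobuf.load("greeter.proto", (err, root) => {
  if (err) throw err;
  const Greeter = root.lookupService("Greeter");
  const HelloReply = root.lookupType("HelloReply");

  // Protobuf gives you the stub; YOU supply rpcImpl, the actual transport.
  // Here a locally encoded reply stands in for the network round-trip.
  function rpcImpl(method, requestData, callback) {
    const fake = HelloReply.encode({ message: "hello from the other side" }).finish();
    callback(null, fake);
  }

  const greeter = Greeter.create(rpcImpl, false, false);
  greeter.sayHello({ name: "world" }, (err2, reply) => {
    if (err2) throw err2;
    console.log(reply.message);
  });
});
```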
MsgPack?
JSON-RPC?
Socket.io (is it possible? How?)
EDIT:
I am talking about two Node processes, each one on a different physical machine;
I don't understand how Redis can help me with this...
I'm not really clear on whether you are looking for ways to make two Node servers on two physical machines "talk to each other", or two Node.js server processes on one machine.
(You could edit your question to make it clearer).
You could look at:
protocol buffers for node
MsgPack-RPC for node
Websocket.MQ
dnode -- this uses socket.io as the transport layer
IPCNode
AMQP with node-amqp and something like RabbitMQ
Or you could go with a data store like Redis
Note: some of these may need some updating
I hope this helps
I would go for Redis. The pub/sub semantics are pretty sweet. The node_redis client library is very fast because it can use hiredis, the lightning-fast C extension library. I would just use JSON as my encoding; that will probably be more than fast enough.
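For what that looks like in practice, here is a minimal sketch with the node-redis v4 client; the channel name and payload are made up, and in your setup the publisher and subscriber would run in processes on different machines, both pointing at the same Redis host:

```js
const { createClient } = require("redis");

async function main() {
  const pub = createClient();  // defaults to redis://localhost:6379
  const sub = pub.duplicate(); // pub/sub needs its own dedicated connection
  await Promise.all([pub.connect(), sub.connect()]);

  await sub.subscribe("events", (message) => {
    const event = JSON.parse(message); // JSON as the wire encoding, as above
    console.log("received", event);
  });

  await pub.publish("events", JSON.stringify({ type: "ping", at: Date.now() }));
}

main().catch(console.error);
```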
You could also use DNode for your communication if you like. I also believe it has socket.io capabilities; you should have a look at the source code to verify this.
It is not really clear from your question what you mean by one Node server talking to another server. You can use anything from sending UDP packets, to making TCP or HTTP connections, to using any of the high-level mechanisms that others have already pointed out.
For an interesting scenario of Node processes communicating, you may take a look at the 2010 JSConf.eu talk by Mikeal Rogers, in which he explains how to use CouchDB to do that. Very interesting talk.