I need to send and receive multiple values between 2 tasks. I am currently using a tokio oneshot channel because I am only dealing with 2 tasks, but I can't seem to reuse the tx, presumably because of its one-message limit. How is this situation usually handled?
Do I create a new oneshot channel every time, or is there a way to reuse the channel?
Do I try packing all my interactions into one message and just do it once? -> seems very restrictive.
Do I use other channel types?
The tokio::sync::mpsc channel should be used in this situation.
I suspect you're worried by the fact that it supports multiple senders, but it's perfectly fine to use it when you only have one sender.
Related
At the bottom of the page on graceful shutdown in the Tokio docs it suggests a neat way to wait for tasks to finish using an mpsc channel and waiting for the channel to be closed, which happens when every sender has been dropped.
What are the advantages this offers over just using tokio::join! on the tasks you expect to finish? The channel method does mean you can avoid creating handles to all your tasks, if you don't otherwise care about their results. But it isn't clear it makes the code easier to read, given the additional drop(tx), and you're required to pass an extra _sender: Sender<()> parameter into each task.
Does it just boil down to style or is there another reason to favour channels here?
The advantage which I see to the channel approach is that it is more composable; it is easier to distribute through some complex system.
In their example — "spawn 10 tasks, then use an mpsc channel to wait for them to shut down" — the ten tasks are right there. But what if some tasks are spawned dynamically, later, or "privately" within some subsystem?
All of these can be handled by passing the sender clones down to all the parts. The receiver holder need not know what all those parts are.
> and you're required to pass an extra _sender: Sender<()> parameter into each task.
Note that, in a more complex situation than an example, the sender could be carried within some context object that the tasks need anyway. The sender also does not have to be cloned itself; an Arc containing some context struct containing the sender will do just as well.
I am trying to implement a messaging system between two processes with boost.interprocess and message_queue.
First problem: One queue can only be used for sending messages from process A to B, not B to A.
Thus, I am using two queues in both processes. Process A listens/receives at Queue-A and sends on Queue-B; Process B listens/receives at Queue-B and sends on Queue-A.
I am unable to get the system to work with both queues. Depending on the order in which the processes call
or
boost::interprocess::message_queue(boost::interprocess::open_only,...)
either one Queue works or the other or neither.
Even if Process A creates both Queue-A and Queue-B and Process B only opens them, in one direction boost::interprocess gets stuck in the receive function and never wakes up.
1) Is it possible to get bidirectional messaging/signalling to work with interprocess::message_queue using two queues in each process?
2) Is there a better way to get bidirectional messaging without using message_queue?
I did not receive any comments on this. The solution was to not use boost::interprocess::message_queue. With the help of boost/interprocess/shared_memory_object I wrote my own simple library for unidirectional interprocess messaging: https://github.com/svebert/InterprocessMsg
I have a process that's supposed to handle two different kinds of messages and process them similarly but differently.
Naturally I would use two separate queues, one per kind of message, and call consume() twice.
The other possibility would be to have just one queue, distinguish messages by some kind of "message type" property inside the content buffer, and handle each message in a switch case.
Which would be the more "recommended" way of doing this?
Are there any advantages/disadvantages when using any of the two approaches?
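The single-queue option can be sketched with an in-process queue and a tagged message type; the enum variants and handlers below are purely illustrative, standing in for whatever "message type" property the content buffer would carry.

```rust
use std::sync::mpsc;
use std::thread;

// The tag is the enum variant; the payload differs per kind of message.
enum Message {
    Order { id: u32 },
    Cancel { id: u32 },
}

fn main() {
    let (tx, rx) = mpsc::channel();

    let producer = thread::spawn(move || {
        tx.send(Message::Order { id: 1 }).unwrap();
        tx.send(Message::Cancel { id: 1 }).unwrap();
        // Sender dropped here, which ends the consumer loop below.
    });

    // One consume loop, one dispatch point -- the "switch case" option.
    for msg in rx {
        match msg {
            Message::Order { id } => println!("processing order {id}"),
            Message::Cancel { id } => println!("cancelling order {id}"),
        }
    }
    producer.join().unwrap();
}
```

With two queues the dispatch moves out of the consumer and into the topology; with one queue it lives in this match, which keeps ordering across message kinds but couples the consumer to every message type.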
I need to delay each message I produce with a specific time.
As far as I know the rabbitmq-delayed-message-exchange plugin allows me to do exactly that; however, I was warned that it doesn't scale well, and scaling is a definite requirement. (Have there been any updates lately fixing the scaling problems?)
So, the alternative was to use a TTL and a dead-letter queue (DLQ). With this approach, though, the time is set when the queue is declared rather than on the actual message, which means I wouldn't be able to set different times for different messages.
Did I miss something?
My use case: basically, I will be receiving specific "appointments" from clients, which I must store and send back to the client at a specific time supplied in the appointment object. I want to achieve this by specifying a delay on each message, so that my consumers don't have to implement waiting logic.
Why don't you use a per-queue message TTL, with a different queue for each distinct TTL you want to set, and initially publish the messages through a direct exchange with a routing key tied to the specific TTL?
Then, having configured the same dead-letter exchange for all those queues, the messages will end up in the "final" queue for your consumers with the desired delay.
Of course it wouldn't be great if the possible values for the delays were too numerous.
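As a sketch, each per-TTL queue would be declared with arguments along these lines (the exchange name and TTL values are hypothetical; `x-message-ttl` is in milliseconds, and `x-dead-letter-exchange` names the shared dead-letter exchange all delay queues point at):

```json
{
  "x-message-ttl": 30000,
  "x-dead-letter-exchange": "appointments.dlx"
}
```

A message routed to this queue sits for 30 seconds, expires, and is then dead-lettered to `appointments.dlx`, from which it reaches the final consumer queue; a sibling queue with `"x-message-ttl": 60000` would give a 60-second delay, and so on per distinct delay value.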
Right now I'm using rabbitMQ to send data between two programs - 1 queue, 1 channel, 1 exchange. I'm going to extend it to be multithreaded and I want to declare another queue on the second thread.
I understand in this case I should use another channel, but what I would like to know is whether it is also necessary to declare another exchange with a different name.
What exactly is the relationship between the two?
In what kind of situation would you need multiple exchanges?
As you figured out, the channel is the communication endpoint used to reach a rabbitMQ object.
There are so far 2 kinds of objects:
Queues, which are simply buffers for messages,
Exchanges, which are broadcasting devices.
As a simple analogy, queues are the pipes in which messages accumulate, and exchanges are the T-shaped, cross-shaped, and other connectors between pipes.
Perhaps it works even better to compare it with a physical network, where queues would be like cables, and exchanges like switches or smart hubs, for which different distribution strategies can be chosen.
So basically, what you need to do is create a new queue from the new consumer thread, bind it to the exchange (which you should refer to by name), and have the producer thread send its messages exclusively to the exchange. Any new thread can follow the same protocol.
The last important point is to pick the correct distribution method for the exchange (round-robin, fanout, etc.), depending on how you want your consumers to receive messages.
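One possible shape of that setup, sketched as rabbitmqadmin declarations (all names here are hypothetical):

```
# One named exchange shared by all threads; the producer publishes only to it.
rabbitmqadmin declare exchange name=work type=direct

# Each consumer thread declares its own queue and binds it to the exchange.
rabbitmqadmin declare queue name=thread-1-queue
rabbitmqadmin declare binding source=work destination=thread-1-queue routing_key=jobs

rabbitmqadmin declare queue name=thread-2-queue
rabbitmqadmin declare binding source=work destination=thread-2-queue routing_key=jobs
```

Note the distribution consequence: with separate bound queues, every queue gets its own copy of each matching message (fanout-style delivery per thread); if you instead want round-robin between threads, have them share a single queue with one consumer each.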
Take a look at our introduction to AMQP concepts
http://www.rabbitmq.com/tutorials/amqp-concepts.html