At the bottom of the page on graceful shutdown in the Tokio docs, there is a suggestion of a neat way to wait for tasks to finish: use an mpsc channel and wait for the channel to be closed, which happens once every sender has been dropped.
What are the advantages this offers over just calling tokio::join! on the tasks you're expecting to finish? The channel method does mean you can avoid creating handles to all your tasks, if you don't otherwise care about the results coming out of them. But it isn't clear that it makes the code easier to read, given the additional drop(tx), and you're required to pass an extra _sender: Sender<()> parameter into each task.
Does it just boil down to style or is there another reason to favour channels here?
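For concreteness, here is roughly the pattern I'm asking about, as a minimal sketch (the task bodies are placeholders I made up):

use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<()>(1);

    for i in 0..10 {
        let tx = tx.clone();
        tokio::spawn(async move {
            // ... the task's real work would go here ...
            println!("task {i} done");
            drop(tx); // moves the clone into the task and drops it when the work is done
        });
    }

    // Drop the original sender so only the tasks keep the channel open.
    drop(tx);

    // recv() returns None once every sender has been dropped,
    // i.e. once every task has finished.
    let _ = rx.recv().await;
}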
The advantage which I see to the channel approach is that it is more composable; it is easier to distribute through some complex system.
In their example — "spawn 10 tasks, then use an mpsc channel to wait for them to shut down" — the ten tasks are right there. But what if some tasks are spawned dynamically, later, or "privately" within some subsystem?
All of these cases can be handled by passing clones of the sender down to all the parts; the holder of the receiver need not know what all those parts are.
As for "you're required to pass an extra _sender: Sender<()> parameter into each task": note that, in a more complex situation than an example, the sender can be carried inside some context object that the tasks need anyway. The sender does not even have to be cloned itself; an Arc containing a context struct that contains the sender will do just as well.
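A rough sketch of that idea (Context and its fields are names I've made up for illustration, not anything from Tokio):

use std::sync::Arc;
use tokio::sync::mpsc;

// Hypothetical context struct: the shutdown sender just rides along
// with whatever the tasks already need.
struct Context {
    name: String,
    _shutdown_done: mpsc::Sender<()>,
}

async fn subsystem(ctx: Arc<Context>) {
    // The subsystem spawns its own tasks; each Arc clone keeps the
    // single sender alive until that task is finished.
    for i in 0..3 {
        let ctx = Arc::clone(&ctx);
        tokio::spawn(async move {
            println!("{}: worker {i} running", ctx.name);
        });
    }
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<()>(1);
    let ctx = Arc::new(Context { name: "demo".into(), _shutdown_done: tx });

    subsystem(Arc::clone(&ctx)).await;
    drop(ctx); // main gives up its own reference

    // Returns None once the last Arc<Context> (and thus the sender) is gone.
    let _ = rx.recv().await;
    println!("every part of the system has shut down");
}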
Related
I'm successfully using an mpsc::channel() to send messages from a producer thread to a consumer.
The consumer is only ever interested in the latest message. (It uses the message from the previous check if there is no new message.)
In consequence, I'm running the consumer's try_recv() in a loop until it fails to get a new message, and then using the last received message, or the old one if no new messages were found.
Memory is being wasted storing old messages which the consumer will throw away.
How would I build a one-element variant of mpsc::channel()?
(I've considered using sync::Mutex<Option<MyMessage>> but it is critical that the consuming thread blocks for as little time as possible. Also, I want ownership to pass from the producer to the consumer.)
You can do it with an AtomicPtr, whose compare_exchange method should compile to a simple cmpxchg instruction, allowing you to store either std::ptr::null_mut() or a pointer to an actual message.
There are quite a few possibilities, each with various trade-offs.
I'd recommend the arc-swap crate (see below) for a safe and fast interface, and the DIY Double Buffering approach if performance is that critical.
std::mpsc
A second option within std::mpsc is the sync_channel function, which creates a bounded channel where the sender blocks when the channel is full, until the receiver picks off a message.
I do not think that it is ideal for your use case, though.
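A quick illustration of why: once the buffer is full, the old message is not replaced; the sender just has to wait for the consumer (or try_send fails):

use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // Bounded channel holding at most one message.
    let (tx, rx) = sync_channel::<u32>(1);

    tx.send(1).unwrap();
    // The buffer is full: a blocking send would wait for the consumer,
    // and try_send refuses - the old message is not replaced.
    assert!(matches!(tx.try_send(2), Err(TrySendError::Full(2))));

    // The consumer still gets the stale value, not the latest one.
    assert_eq!(rx.recv().unwrap(), 1);
}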
Tokio Watch channel
The Tokio ecosystem has the watch channel, designed for propagating configuration changes.
Unfortunately it is designed for multiple consumers, so the consumers borrow the messages: there is no transfer of ownership.
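For reference, a minimal sketch of how the watch channel is used (assuming a reasonably recent Tokio; note that the consumer only ever borrows the value):

use tokio::sync::watch;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = watch::channel(String::from("initial config"));

    tx.send(String::from("updated config")).unwrap();

    // changed() waits until a new value has been published;
    // borrow_and_update() only hands out a reference to it.
    rx.changed().await.unwrap();
    println!("current: {}", *rx.borrow_and_update());
}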
Arc Swap
I believe the arc-swap crate may be closer to what you need. As the name implies, it provides the moral equivalent of an Atomic<Arc<T>>.
You can use ArcSwapOption<T> to get the equivalent of an Atomic<Option<Arc<T>>>: the consumer simply performs let new = atomic.swap(None); and then checks whether new is None (nothing new) or Some(Arc<T>), in which case it has received a new message.
Do be mindful of the cost of dropping the previous Arc<T> when swapping a new one in: free is typically more expensive than malloc.
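A sketch of how the producer and consumer sides could look with ArcSwapOption (assuming a recent version of the arc-swap crate):

use std::sync::Arc;
use arc_swap::ArcSwapOption;

fn main() {
    // Shared slot holding "the latest message, if any".
    let slot: ArcSwapOption<String> = ArcSwapOption::new(None);

    // Producer: publish a new message, replacing whatever was there.
    slot.store(Some(Arc::new(String::from("latest message"))));

    // Consumer: take the message out, leaving the slot empty.
    match slot.swap(None) {
        Some(msg) => println!("got a new message: {msg}"),
        None => println!("nothing new, keep using the previous one"),
    }
}

In a real program the slot would of course be shared between the producer and consumer threads (for example inside an Arc or a static); both sides are shown in main just to keep the calls visible.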
Back to std
You could use an AtomicPtr<T>. It'll require you to use unsafe, and would be a smidgen faster than ArcSwap by virtue of avoiding the reference counting.
It would suffer from the same drop issue, though.
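A sketch of what that could look like (Mailbox is just an illustrative name; single producer, single consumer assumed):

use std::ptr;
use std::sync::atomic::{AtomicPtr, Ordering};

// A single-slot "mailbox": the pointer is either null or a Box'ed message.
struct Mailbox<T> {
    slot: AtomicPtr<T>,
}

impl<T> Mailbox<T> {
    fn new() -> Self {
        Self { slot: AtomicPtr::new(ptr::null_mut()) }
    }

    // Producer: publish a message, dropping any unread previous one.
    fn publish(&self, msg: T) {
        let new = Box::into_raw(Box::new(msg));
        let old = self.slot.swap(new, Ordering::AcqRel);
        if !old.is_null() {
            // Reclaim the message the consumer never picked up.
            unsafe { drop(Box::from_raw(old)) };
        }
    }

    // Consumer: take the latest message, if any, leaving the slot empty.
    fn take(&self) -> Option<T> {
        let old = self.slot.swap(ptr::null_mut(), Ordering::AcqRel);
        if old.is_null() {
            None
        } else {
            let boxed = unsafe { Box::from_raw(old) };
            Some(*boxed)
        }
    }
}

impl<T> Drop for Mailbox<T> {
    fn drop(&mut self) {
        // Free a message that was never consumed.
        let old = *self.slot.get_mut();
        if !old.is_null() {
            unsafe { drop(Box::from_raw(old)) };
        }
    }
}

fn main() {
    let mailbox = Mailbox::new();
    mailbox.publish(String::from("latest message"));
    assert_eq!(mailbox.take(), Some(String::from("latest message")));
    assert_eq!(mailbox.take(), None);
}

Note that this still allocates a Box per message, which is the extra allocation the double-buffering approach below avoids.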
DIY Double Buffering
You could also simply Do It Yourself. A simple double-buffering storage would work.
By storing a plain Option<T>, you avoid the extra allocation (and thus the extra de-allocation), at the cost of making the check itself slower -- you may now need to check both buffers. Whether a single-buffer check is enough is not clear.
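Here is a sketch of the kind of thing I mean (assuming a single producer; each lock is held only long enough to move an Option in or out, and a truly lock-free version would need more care than this):

use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

struct DoubleBuffer<T> {
    buffers: [Mutex<Option<T>>; 2],
    latest: AtomicUsize, // index of the most recently written buffer
}

impl<T> DoubleBuffer<T> {
    fn new() -> Self {
        Self {
            buffers: [Mutex::new(None), Mutex::new(None)],
            latest: AtomicUsize::new(0),
        }
    }

    // Producer: write into the buffer the consumer is *not* pointed at,
    // then flip the index so the consumer sees the new message.
    fn publish(&self, msg: T) {
        let target = 1 - self.latest.load(Ordering::Acquire);
        *self.buffers[target].lock().unwrap() = Some(msg);
        self.latest.store(target, Ordering::Release);
    }

    // Consumer: take the most recently published message, if any.
    fn take(&self) -> Option<T> {
        let current = self.latest.load(Ordering::Acquire);
        self.buffers[current].lock().unwrap().take()
    }
}

fn main() {
    let slot = DoubleBuffer::new();
    slot.publish(1);
    slot.publish(2); // goes into the other buffer, then flips the index
    assert_eq!(slot.take(), Some(2));
    assert_eq!(slot.take(), None); // a stale value may linger in the other buffer
}

The leftover value in the non-current buffer is the "check both buffers" issue mentioned above: it is always older than the last message the consumer took, so ignoring it is fine for a latest-only slot, but it does sit there until the producer overwrites it.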
I took the sample code from Apache here: https://activemq.apache.org/components/cms/example
(the producer section specifically) and tried to rewrite it so it doesn't create any threads for producing. Instead, in my program's main thread, it creates a producer object and sets up the connection, session, destination, and so on, then sends messages using a message producer. This is all done in a singleton, so that my program has just one Producer object and goes to it whenever it needs to dump any message to one of my queues. The example code seems to create a producer for every thread, set it up every time just to send a message, and then delete everything. And it does this every time you want to produce something from your program.
I am crashing right when I try to call send on a message producer with any given message. I found out after some digging that after the send call it tries to lock a mutex and enter a critical section. I guess this is for threading? I don't use threads at all in my code, so I guess it crashes because of that... Does anyone know a way to bypass this? I don't want to use multiple threads, and I won't need to worry about two threads trying to call send at the same time, or whatever problem the mutex is there to solve.
You don't need to create a thread to run the producer in, but internally the library is going to use a couple of threads, as that is necessary for meeting the API requirements. Also, just because you don't use multiple threads doesn't mean others won't, so the mutex is an internal requirement.
You are free to modify the example to only create a producer inside the main thread of the application; the example uses two threads because it is acting as both a producer and a consumer.
One likely cause of the error you are receiving is that you did not initialize the ActiveMQ-CPP library:
activemq::library::ActiveMQCPP::initializeLibrary();
Okay, SO is warning me about a subjective title, so please let me explain. Right now I'm looking at Go; I've read the spec and watched a few I/O talks. It looks interesting, but I have some questions.
One of my favourite examples was a select statement that listened to a channel that came from "DoAfter()" or something; the channel would send something at a given time from now.
Something like this (this probably won't work; pseudo-Go if anything!):
to := Time.DoAfter(1000 * Time.MS)
select {
case <-to:
    return nil // we timed out
case d := <-waitingfor:
    return d
}
Suppose the thing we're waiting for happens really fast, so this function returns and is no longer listening on the to channel - what happens in DoAfter?
I know (and like the fact) that you ought not test the channel first, for example:
if chanToSendTimeOutOn.isOpen() {
    chanToSendTimeOutOn <- true
}
I like how channels synchronize things: with the check above, for example, it is possible that the function could return after the isOpen() test but before true is sent. I really am against the test; it defeats what channels are for - hiding the locks and whatnot.
I've read the spec and seen the run-time panics and recovery, but in this example where do we recover? Is the thing waiting to send the timeout a goroutine, or an "object" of sorts? I imagined an "object" that holds a sorted list of things it has to send to after given times, appends DoAfter requests to the queue in the right order, and works through it. I'm not sure where that would get an opportunity to actually recover.
If it spawned goroutines, each with their own timer (managed by the run-time of course, so threads don't actually block for time), what then would get the chance to recover?
The other part of my question is about the lifetime of channels. I would imagine they're ref-counted (well, that the readable ends are ref-counted), so that if nothing anywhere holds a readable reference, the channel is destroyed. I'd call this deterministic, and for the "point-to-point" topologies you can form, it will be, as long as you stick to Go's "send stuff via channels, don't access it" approach.
So here, for example, when the thing that wants a timeout returns, the to channel is no longer read by anyone. The goroutine behind it is pointless now; is there a way to make it return without doing its work?
Example:
A file-reading goroutine that has used defer to close the file when it is done: can it "sense" that the channel it is supposed to send on has been closed, and thus return without reading any more?
I'd also like to know why the select statement is "non-deterministic". I'd have quite liked it if the first case took priority when both the first and second are ready (for a non-blocking operation). I won't condemn it for that, but is there a reason? What's the implementation behind this?
Lastly, how are goroutines scheduled? Does the compiler add some sort of "yielding" every so many instructions, so that a running thread will switch between different goroutines? Where can I find info on the lower-level stuff?
I know Go touts that "you simply don't need to worry about this", but I like to know what the things I write actually hide (that could be a C++ thing) and the reasons why.
If you write to a closed channel, your program will panic (see http://play.golang.org/p/KU7MLrFQSx for example). You could potentially catch this error with recover, but being in a situation where you don't know whether the channel you are writing to is open is usually a sign of a bug in the program. The send side of the channel is responsible for closing it, so it should know the current state. If you have multiple goroutines sending on the channel, then they should coordinate in closing the channel (e.g. by using a sync.WaitGroup).
In your Time.DoAfter hypothetical, it would depend on whether the channel was buffered. If it was an unbuffered channel, then the goroutine writing to the timer channel would block until someone read from the channel. If that never happened, then the goroutine would remain blocked until the program completed. If the channel was buffered, the send would complete immediately. The channel could be garbage collected before anyone read from it.
The standard library's time.After behaves this way, returning a channel with a one-slot buffer.
Simply put, I want to manipulate two motors in parallel, then when both are ready, continue with a 3rd thread.
Below is an image of what I have now. In the two top threads, it sets motors B and C to "unlimited", then waits until both trigger the switches, then sets a separate boolean variable for each.
Then, in the 3rd thread, I poll these two variables at a 1-second interval until the AND of them makes the loop's termination condition true.
This is an embedded system and all, so it may be OK here, but in "PC programming" this kind of polling loop would be a rather horrible thing to do.
Question: can I do either or both of the following:
wait for a variable without this kind of polling loop?
wait for a thread to finish without using a variable at all?
Your question is a bit vague on what you actually want to achieve and in which language. As I understand it, you want to be able to implement a similar multithreaded motor-control mechanism in LabVIEW?
If so, then the answer to both of your questions is yes: you can implement the wait without an explicitly defined variable (other than the error cluster, which you would probably be passing around anyway). The easiest method is to pass an error cluster to both of your loops and then use Merge Errors to combine the generated errors once the loops are finished. Merge Errors waits until both inputs have data, merges the errors, and passes the merged error cluster on. By wiring the merged error cluster to your teardown function you effectively achieve the thread synchronization you described. If you require synchronization between the two control loops while they run, however, you would still have to use semaphores, rendezvous, notifiers, or the other built-in synchronization mechanisms.
In the image there's an init function that opens two serial devices (purple wire) and passes them to the control loops, which both run until an error (yellow-black wire) occurs. The errors from both are merged and passed to the teardown function that releases the serial devices. Notice that in this particular example the synchronization would occur at the end of the program, as long as there's at least one wire coming from each loop to the teardown function.
Similar functionality in a text based programming language would necessitate the use of more elaborate mechanisms, though some specialised language for parallel programming might help here.
What needs to be done to make InMemoryTransientMessageService run in a background thread? I publish things inside a service using
base.MessageProducer.Publish(new RequestDto());
and they are executed immediately inside the service request.
The project is self-hosted.
Here is a quick unit test showing the blocking of the current request instead of deferring it to the background:
https://gist.github.com/lmcnearney/5407097
There is nothing out of the box. You would have to build your own. Take a look at ServiceStack.Redis.Messaging.RedisMqHost - most of what you need is there, and it is probably simpler (one thread does everything) to get you going, compared to ServiceStack.Redis.Messaging.RedisMqServer (one thread for queue listening, one for each worker). I suggest you take that class and adapt it to your needs.
A few pointers:
ServiceStack.Message.InMemoryMessageQueueClient does not implement WaitForNotifyOnAny(), so you will need an alternative way of getting the background thread to wait for incoming messages.
Closely related: the ServiceStack.Redis implementation uses topic subscriptions, which in this class are used to transfer the WorkerStatus.StopCommand; this means you have to find an alternative way of getting the background thread to stop.
Finally, you may want to adapt ServiceStack.Redis.Messaging.RedisMessageProducer, as its Publish() method pushes the requested message to the queue and pushes the channel/queue name to the TopicIn queue. After reading the code you can see how the three points tie together.
Hope this helps...