I am trying to get the number of items left in a channel (both the receiver and sender side), much like a Python queue reports its size.
I need to know this so I can take certain actions when there are N items left in the channel on the receiver end.
Maybe it needs a counter somewhere? I am new to Rust, so this is all very vague to me.
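A counter is indeed one way to do it: std::sync::mpsc exposes no len() method, so you can wrap the channel ends with a shared atomic counter yourself (the crossbeam-channel crate, by contrast, provides len() out of the box). A minimal sketch, with illustrative names (CountedSender, counted_channel are not standard library types):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{mpsc, Arc};

// Wrap both ends of an mpsc channel with a shared counter:
// increment on send, decrement on recv.
struct CountedSender<T> {
    tx: mpsc::Sender<T>,
    count: Arc<AtomicUsize>,
}

struct CountedReceiver<T> {
    rx: mpsc::Receiver<T>,
    count: Arc<AtomicUsize>,
}

impl<T> CountedSender<T> {
    fn send(&self, t: T) -> Result<(), mpsc::SendError<T>> {
        self.tx.send(t)?;
        self.count.fetch_add(1, Ordering::SeqCst);
        Ok(())
    }
}

impl<T> CountedReceiver<T> {
    fn recv(&self) -> Result<T, mpsc::RecvError> {
        let t = self.rx.recv()?;
        self.count.fetch_sub(1, Ordering::SeqCst);
        Ok(t)
    }
    // Approximate number of items still queued in the channel.
    fn len(&self) -> usize {
        self.count.load(Ordering::SeqCst)
    }
}

fn counted_channel<T>() -> (CountedSender<T>, CountedReceiver<T>) {
    let (tx, rx) = mpsc::channel();
    let count = Arc::new(AtomicUsize::new(0));
    (
        CountedSender { tx, count: count.clone() },
        CountedReceiver { rx, count },
    )
}

fn main() {
    let (tx, rx) = counted_channel();
    tx.send(1).unwrap();
    tx.send(2).unwrap();
    tx.send(3).unwrap();
    rx.recv().unwrap();
    println!("items left: {}", rx.len()); // prints "items left: 2"
}
```

Note that with multiple producers the count is only a snapshot: it can change between the moment you read it and the moment you act on it.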
Related
I want to find a channel implementation like this one, but with the count of readers and writers inverted, i.e. only one writer allowed with an unlimited number of readers. Does such a thing exist, or do I need to write it myself?
The documentation page you linked to calls this kind of channel "mpsc", which stands for Multiple Producer, Single Consumer. The converse is therefore Single Producer, Multiple Consumer. Googling "rust spmc channel" leads to what you are looking for as the very first result.
I have this concurrent pattern that came up when trying to model my problem, and I don't know if there's a name for it. Having a design pattern reference or something like that could help me implement it more safely.
Concept:
The foreman (main thread) is asked to look for a series of objects in a big warehouse.
This warehouse has n floors. The foreman has a team of n workers (helper threads), each with a dedicated floor.
The foreman receives an object, and asks every worker to find it.
If a worker finds it on their floor, they return to the foreman with the appropriate information (location, status...).
The foreman then calls back all the other workers (since the item has been found there's no need for more searching), and moves on to the next object.
If everyone comes back saying "No it's not on my floor" we can act accordingly. (signal a missing product to management...)
The main problem I have is that I need to make sure threads don't waste calculation time when the item has already been found, and to ensure proper coordination.
I also can't give every thread the entire list of things to find, since this information is received item by item (e.g. via network).
Are you looking for the Observer pattern?
Once a worker finds the item and returns to the foreman, the foreman should notify all the workers that the item has been found, so all the threads will stop searching and return.
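The recall step maps naturally onto a cancellation channel: close a done channel to call everyone back. A hedged sketch (the names search, floors, and the int items are illustrative stand-ins for the warehouse):

```go
package main

import (
	"fmt"
	"sync"
)

// search asks one worker per floor to look for item; the first hit is
// reported on results, and closing done recalls everyone else.
func search(item int, floors [][]int) (floor int, found bool) {
	done := make(chan struct{})            // closed to call the workers back
	results := make(chan int, len(floors)) // buffered so workers never block
	var wg sync.WaitGroup
	for i, f := range floors {
		wg.Add(1)
		go func(i int, f []int) {
			defer wg.Done()
			for _, v := range f {
				select {
				case <-done: // foreman recalled us: stop searching
					return
				default:
				}
				if v == item {
					results <- i
					return
				}
			}
		}(i, f)
	}
	// Close results once every worker has come back, so the foreman can
	// tell "found on floor i" apart from "nobody found it".
	go func() { wg.Wait(); close(results) }()
	floor, found = <-results // zero value + false if all floors came up empty
	close(done)              // recall any workers still searching
	return floor, found
}

func main() {
	floors := [][]int{{1, 2}, {3, 4}, {5, 6}}
	f, ok := search(4, floors)
	fmt.Println(f, ok) // 1 true
}
```

The "missing product" case falls out for free: if every worker returns without a hit, results is closed and the receive yields found == false.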
I'm trying to determine if there's a way for Azure Service Bus to provide message collapsing. Specifically I'm after something like:
First event into a queue gets picked up straight away
All other events that are queued within the next N seconds, and match some criteria (e.g. matching message ids), have the schedule enqueue set to a value so they fire at the end of the N seconds. If a "waiting" message already exists it should be deleted.
After the N seconds has expired the newest scheduled message appears and is picked up.
Basically I need a way to get a good time-to-first-event, but provide protection from over processing events from chatty sources.
Does anyone have a pattern they've used to get something close to these semantics?
Update 1
The messages involved aren't true duplicates, rather they're the current state of an entity that is used for some processing (e.g. a message that's generated each time a file is updated). The result of the processing of an early message is fully replaced by that of later messages (e.g. the result is the size of the file). So we still need to guarantee we process the most recent message, but it's a waste to process all M within N seconds.
It sounds like you're talking about Duplicate Detection, especially in regards to matching MessageIds. If you want to evaluate some other attribute in the message for duplicate detection, maybe it's worth taking a step back and asking: why are my publishers sending so many duplicate messages? If it's unavoidable, maybe you can segregate your chatty consumers into a separate consumer group and manually handle the duplicate check, then re-enqueue (just thinking out loud).
socket i/o: I am using nodejs in my php application. I have an array of connected rooms, say rooms = {room1, room2, room3}, and an array of respective messages, say messages = {message1, message2, message3}.
So far I have been emitting my messages as below:
io.sockets.in('room1').emit('message', 'message1');
This works fine, but I am worried that once the number of rooms increases, looping over all of them will cause a performance hit as well as long delays.
Is there any way I can directly send an array of messages to an array of rooms, like the following?
io.sockets.in(rooms).emit('message', messages);
which should eventually send message1 to room1, and so on, respectively.
Thank you!
I don't know of a built-in method to do this. However, I also don't think it's necessary for you yet. Here's why I think you should just stick to using a loop for now.
How do you think your suggested calls would work? At some point, somebody's code is going to have to create a loop to touch every room and every message. A 3rd party library might have optimized their loop (or it might be really buggy and slow, just depends), but it's still going to have to loop.
In a lot of cases it is better to get your features working first and then go back to evaluate and fix slow points in your code. Read here for a lot more guidance on this topic. Note that I'm not advising you to blindly code without thinking ahead, just to not sweat the small stuff. I especially appreciate the way the second answer in the link suggests appropriate considerations for each stage of development.
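Concretely, the loop is just pairing each room with its message by index. A minimal sketch (sendAll is an illustrative helper name, and fakeIo is a stand-in for your real socket.io server so the example is self-contained):

```javascript
// There is no built-in batch emit, so pair rooms and messages by index.
function sendAll(io, rooms, messages) {
  rooms.forEach((room, i) => {
    io.sockets.in(room).emit('message', messages[i]);
  });
}

// Stand-in for a real socket.io server, just to record the calls made:
const calls = [];
const fakeIo = {
  sockets: {
    in: (room) => ({
      emit: (event, msg) => calls.push([room, event, msg]),
    }),
  },
};

sendAll(fakeIo, ['room1', 'room2', 'room3'], ['message1', 'message2', 'message3']);
console.log(calls);
// [ [ 'room1', 'message', 'message1' ], ... ]
```

With a real io instance, the loop body is exactly the call you are already making; the loop itself is cheap compared to the network I/O each emit performs.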
Okay, SO is warning me about a subjective title, so please let me explain. Right now I'm looking at Go; I've read the spec and watched a few I/O talks. It looks interesting, but I have some questions.
One of my favourite examples was this select statement that listened to a channel that came from "DoAfter()" or something, the channel would send something at a given time from now.
Something like this (this probably won't work; pseudo-Go, if anything!):
to := Time.DoAfter(1000 * Time.MS)
select {
case <-to:
    return nil // we timed out
case d := <-waitingfor:
    return d
}
Suppose the thing we're waiting for happens really fast, so this function returns and isn't listening on the to channel any more. What happens in DoAfter?
I like, and know, that you ought not test the channel first, for example:
if chanToSendTimeOutOn.isOpen() {
    chanToSendTimeOutOn <- true
}
I like how channels act as synchronization points; with the code above, for example, it is possible that the function could return after the isOpen() test but before true is sent. I really am against that test: it defeats the purpose of channels, which is to hide locks and the like.
I've read the spec and seen the run-time panics and recovery, but in this example where do we recover? Is the thing waiting to send the timeout a goroutine or an "object" of sorts? I imagined an "object" that had a sorted list of things it had to send to after given times, and that it'd just append TimeAfter requests to the queue in the right order and work through it. I'm not sure where that would get an opportunity to actually recover.
If it spawned goroutines, each with its own timer (managed by the run-time of course, so threads don't actually block for time), what then would get the chance to recover?
The other part of my question is about the lifetime of channels. I would imagine they're ref-counted (or at least the readable ends are), so that if nothing anywhere holds a readable reference, the channel is destroyed. I'd call this deterministic. For the "point-to-point" topologies you can form, it will be, if you stick to Go's "send stuff via channels, don't access it".
So here, for example, when the thing that wants a timeout returns, the to channel is no longer read by anyone. The goroutine is pointless now; is there a way to make it return without doing the work?
Example:
A file-reading goroutine that has used defer to close the file when it is done: can it "sense" that the channel it is supposed to send to has been closed, and thus return without reading any more?
I'd also like to know why the select statement is "nondeterministic". I'd have quite liked it if the first case took priority when the first and second are both ready (for a non-blocking operation). I won't condemn it for that, but is there a reason? What's the implementation of this?
Lastly, how are goroutines scheduled? Does the compiler add some sort of "yield" every so many instructions, so that a running thread will switch between different goroutines? Where can I find info on the lower-level stuff?
I know Go touts that "you simply don't need to worry about this", but I like to know what the things I write actually hide (that could be a C++ thing) and the reasons why.
If you write to a closed channel, your program will panic (see http://play.golang.org/p/KU7MLrFQSx for example). You could potentially catch this error with recover, but being in a situation where you don't know whether the channel you are writing to is open is usually a sign of a bug in the program. The send side of the channel is responsible for closing it, so it should know the current state. If you have multiple goroutines sending on the channel, then they should coordinate in closing the channel (e.g. by using a sync.WaitGroup).
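The panic-and-recover behavior described above can be shown directly. A small sketch (trySend is an illustrative helper, not a standard function):

```go
package main

import "fmt"

// trySend converts the "send on closed channel" panic into an error.
// Recovering like this is possible, but coordinating who closes the
// channel on the sender side is the idiomatic fix.
func trySend(ch chan int, v int) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("send failed: %v", r)
		}
	}()
	ch <- v
	return nil
}

func main() {
	ch := make(chan int, 1)
	fmt.Println(trySend(ch, 1)) // <nil>
	close(ch)
	fmt.Println(trySend(ch, 2)) // send failed: send on closed channel
}
```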
In your Time.DoAfter hypothetical, it would depend on whether the channel was buffered. If it was an unbuffered channel, then the goroutine writing to the timer channel would block until someone read from the channel. If that never happened, then the goroutine would remain blocked until the program completed. If the channel was buffered, the send would complete immediately. The channel could be garbage collected before anyone read from it.
The standard library time.After behaves this way, returning a channel with a one slot buffer.