I have a list of file descriptors that I have registered kevents for, and I'm trying to figure out whether there's any way to get the number of them that are ready for read or write access.
Is there any way to get a list of "ready" file descriptors, like what epoll_wait provides?
Events that occurred are placed into the eventlist buffer passed to the kevent call, so making this buffer sufficiently large will give you the list you are seeking. The return value of the kevent call tells you how many events are in the eventlist buffer.
If using a large buffer is not feasible for some reason, you can always loop, calling kevent with a zero timeout and a smaller buffer, until you get zero events back in the eventlist.
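For illustration, here is a minimal sketch of that drain loop in C, assuming kq is an existing kqueue with your descriptors already registered (error handling trimmed; drain_events is just a made-up name):

#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>

/* Drain all currently pending events from kq without blocking, using a
 * small eventlist and a zero timeout. */
void drain_events(int kq)
{
    struct kevent events[8];
    struct timespec zero = { 0, 0 };

    for (;;) {
        int n = kevent(kq, NULL, 0, events, 8, &zero);
        if (n <= 0)
            break;                      /* zero events left (or an error) */
        for (int i = 0; i < n; i++) {
            /* events[i].ident is the ready descriptor; events[i].filter
             * tells you whether it is EVFILT_READ or EVFILT_WRITE. */
        }
    }
}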
To give a little more context...
One of the expected scenarios with kevent() is that you will have a pool of threads calling it. If you had 3 pool threads all asking for 4 events, the OS would like to be able to pool and dispatch the actual events as it sees fit. If 7 events are available, the OS might want to hand them to just one or two of those threads, or it might want to dispatch to all 3 if it believed it had idle cores and lower overhead that way.
I'm not saying your scenario is invalid at all; just that the system is more or less designed to keep that information away from you, so it doesn't get into scenarios of saying 'well, 12 descriptors are ready. Oh, hrm, I just told you that, but 3 of them got surfaced before you had a chance to do anything'.
Grrr pretty much nailed the scenario. You register/deregister your descriptors once and the relevant descriptor will be provided back to you with the event when the event triggers.
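To make that concrete, a rough registration sketch in C (watch_read is a made-up name; kq and fd are assumed to already exist, and on most BSDs and macOS the udata field is a void * you can use to attach your own per-connection state):

#include <sys/types.h>
#include <sys/event.h>

/* Register fd for read-readiness once; every event for it will carry the
 * same ident (fd) and the udata pointer you attached here. */
int watch_read(int kq, int fd, void *ptr)
{
    struct kevent change;
    EV_SET(&change, fd, EVFILT_READ, EV_ADD, 0, 0, ptr);
    return kevent(kq, &change, 1, NULL, 0, NULL);
}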
The aim is to interact with an OpenEthereum server using JSON-RPC.
The problem is that once connected, I need to react only when receiving data, as the aim is to subscribe to an event, so I need the recv() function to be blocking.
But in that case, if I ask to read more into the buffer than what the server sent, the call will block.
The OpenEthereum server separates its messages with a linefeed (\n) character, but I don't know how this can help.
I know about simply waiting for recv() to time out, but I'm using C++ and IPC to get better latency than my competitors on arbitrage. This also means I need as few context switches as possible.
How can I efficiently read a message whose length cannot be determined in advance?
Is there a function for determining how many bytes are left to read on a unix domain socket?
No - just keep doing non-blocking reads until one returns EAGAIN or EWOULDBLOCK.
There may be a platform-specific ioctl or fcntl - but you haven't named a platform, and it's neither portable nor necessary.
How can I efficiently read a message whose length cannot be determined in advance?
Just do a non-blocking read into a buffer large enough to contain the largest message you might receive.
I need to react only when receiving data as the aim is to subscribe to an event so I need the recv() function to be blocking
You're confusing two things.
How to be notified when the socket becomes readable:
by using select or poll to wait until the socket is readable. Just read their manuals, that's their most common use case.
How to read everything available to read without blocking indefinitely:
by doing non-blocking reads until EWOULDBLOCK or EAGAIN is returned.
There is logically a third step, for stream-based protocols like this, which is correctly managing buffers in case of partial messages. Oh, and actually parsing the messages, but I assume you have a JSON library already.
This is entirely normal, basic UNIX I/O design. It is not an exotic optimization.
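Putting those pieces together - block until readable, drain with non-blocking reads, split on the '\n' delimiter - a rough sketch might look like this. Note the assumptions: handle_message is a placeholder callback, the fd is a connected non-blocking unix domain socket, and handling of a full buffer is omitted for brevity.

#include <errno.h>
#include <poll.h>
#include <string.h>
#include <unistd.h>

void handle_message(const char *msg, size_t len);  /* placeholder callback */

/* Wait for readability, drain everything available without blocking, and
 * hand off each complete newline-terminated JSON-RPC message. */
void event_loop(int fd)
{
    char buf[65536];
    size_t used = 0;

    for (;;) {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        if (poll(&pfd, 1, -1) <= 0)          /* blocks until data arrives */
            continue;

        for (;;) {
            ssize_t n = read(fd, buf + used, sizeof(buf) - used);
            if (n > 0)
                used += (size_t)n;
            else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
                break;                       /* drained all available data */
            else
                return;                      /* EOF or a real error */
        }

        /* Process complete lines; keep any trailing partial message. */
        char *start = buf, *nl;
        while ((nl = memchr(start, '\n', used - (size_t)(start - buf))) != NULL) {
            handle_message(start, (size_t)(nl - start));
            start = nl + 1;
        }
        used -= (size_t)(start - buf);
        memmove(buf, start, used);
    }
}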
I have an application A which calls another application B, which does some calculation and writes to a file File.txt.
A invokes multiple instances of B through multiple threads, and each instance tries to write to the same file File.txt.
Here comes the actual problem:
Since multiple threads try to access the same file, the file access throws errors, which is expected.
I tried an approach of using a concurrent queue in a singleton class: each instance of B adds to the queue, and another thread in this class takes care of dequeuing items from the queue and writing them to the file File.txt. The queue is consumed synchronously and the write operation succeeds. This works fine.
If I have too many threads and too many items in the queue, the file writing still works, but if for some reason my queue crashes or stops abruptly, all the information that was supposed to be written to the file is lost.
If I make the file writing synchronous from B without using the queue, it will be slow because it needs to check for file locking, but there is less chance of data being missed, since B writes to the file immediately.
What would be the best approach or design to handle this scenario? I don't need a response after the file writing is completed, and I can't make B wait for the file writing to be completed.
Would async/await file writing be of any use here?
I think what you've done is the best that can be done. You may have to tune your producer/consumer queue solution if there are still problems, but it seems to me that you've done rather well with this approach.
If an in-memory queue isn't the answer, perhaps externalizing that to a message queue and a pool of listeners would be an improvement.
Relational databases and transaction managers were born to solve this problem. Why continue with a file-based solution? Is it possible to explore an alternative?
is there a better approach or design to handle this scenario?
You can make each producer thread write to its own rolling file instead of queuing the operation, as sketched below. Every X seconds the producers move on to new files, and some aggregation thread wakes up, reads the previous files (one per producer) and writes the results to the final File.txt output file. No read/write locks are required here.
This ensures safe recovery, since the rolling files exist until you process and delete them.
It also means that you always write to disk, which is much slower than queuing tasks in memory and writing to disk in bulk. But that's the price you pay for consistency.
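A language-agnostic sketch of the producer side, shown in C only for concreteness (the file-name scheme, the 10-second roll interval and the function names are made-up examples):

#include <stdio.h>
#include <time.h>

/* Each producer appends to its own file, named by producer id and a time
 * bucket, so producers never contend on the same file. */
void producer_write(int producer_id, const char *line)
{
    char name[64];
    long bucket = (long)time(NULL) / 10;      /* roll to a new file every 10 s */

    snprintf(name, sizeof(name), "producer-%d-%ld.log", producer_id, bucket);

    FILE *f = fopen(name, "a");
    if (f != NULL) {
        fprintf(f, "%s\n", line);
        fclose(f);
    }
}

/* A separate aggregation thread would periodically pick up every file from
 * buckets older than the current one, append their contents to File.txt,
 * and delete each file only after its contents are safely written - which
 * is what makes the scheme recoverable after a crash. */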
Would async/await file writing be of any use here?
Using asynchronous IO has nothing to do with this. The problems you mentioned were 1) shared resources (the output file) and 2) lack of consistency (when the queue crashes), neither of which is what async programming is about.
The reason async is in the picture is that I don't want to delay the existing work done by B because of this file writing operation.
async would indeed help you with that. Whatever pattern you choose to implement (to solve the original problem), it can always be made async by merely using the asynchronous IO APIs.
I'm rather new to event-based programming. I'm experimenting with epoll's edge-triggered mode, which apparently only signals files that have become ready for read/write (as opposed to level-triggered mode, which signals all ready files, regardless of whether they were already ready or just became ready).
What's not clear to me is: in edge-triggered mode, am I informed of readiness events that happen while I'm not epoll_waiting? What about events on one-shot files that haven't been rearmed yet?
To illustrate why I'm asking that, consider the following scenario:
have 10 non-blocking sockets connected
configure epoll_ctl to react when the sockets are ready for read, in edge-mode + oneshot: EPOLLET | EPOLLONESHOT | EPOLLIN
epoll_wait for something to happen (reports max 10 events)
linux wakes my process and reports sockets #1 and #2 are ready
I read and process data from socket #1 (until EAGAIN)
I read and process data from socket #2 (until EAGAIN)
While I'm doing that, a socket S receives data
I processed all events, so I rearm the triggered files with epoll_ctl in EPOLL_CTL_MOD mode, because of oneshot
my loop goes back to epoll_waiting the next batch of events
Ok, so will the last epoll_wait always be notified of the readiness of socket S? Even if S is #1 (i.e. it's not rearmed)?
I'm experimenting with epoll's edge-triggered mode which apparently only signals files which have become ready for read/write (as opposed to level-triggered mode which signals all ready files, regardless of whether they were already ready or just became ready)
First, let's get a clear view of the system: you need an accurate mental model of how it works, and your view of epoll(7) is not really accurate.
The difference between edge-triggered and level-triggered is the definition of what exactly makes an event. The former generates one event for each action that has been subscribed on the file descriptor; once you consume the event, it is gone - even if you didn't consume all the data that generated such an event. OTOH, the latter keeps generating the same event over and over until you consume all the data that generated the event.
Here's an example that puts these concepts in action, blatantly stolen from man 7 epoll:
1. The file descriptor that represents the read side of a pipe (rfd) is registered on the epoll instance.
2. A pipe writer writes 2 kB of data on the write side of the pipe.
3. A call to epoll_wait(2) is done that will return rfd as a ready file descriptor.
4. The pipe reader reads 1 kB of data from rfd.
5. A call to epoll_wait(2) is done.
If the rfd file descriptor has been added to the epoll interface using the EPOLLET (edge-triggered) flag, the call to epoll_wait(2) done in step 5 will probably hang despite the available data still present in the file input buffer; meanwhile the remote peer might be expecting a response based on the data it already sent. The reason for this is that edge-triggered mode delivers events only when changes occur on the monitored file descriptor. So, in step 5 the caller might end up waiting for some data that is already present inside the input buffer.
In the above example, an event on rfd will be generated because of the write done in 2 and the event is consumed in 3. Since the read operation done in 4 does not consume the whole buffer data, the call to epoll_wait(2) done in step 5 might block indefinitely.
In short, the fundamental difference is in the definition of "event": edge-triggered treats events as a single unit that you consume once; level-triggered defines the consumption of an event as being equivalent to consuming all of the data belonging to that event.
Now, with that out of the way, let's address your specific questions.
in edge-triggered mode, am I informed of readiness events that happen while I'm not epoll_waiting?
Yes, you are. Internally, the kernel queues up the interesting events that happened on each file descriptor. They are returned on the next call to epoll_wait(2), so you can rest assured that you won't lose events. Well, maybe not exactly on the next call if there are other events pending and the events buffer passed to epoll_wait(2) can't accommodate them all, but the point is, eventually these events will be reported.
What about events on one-shot files that haven't been rearmed yet?
Again, you never lose events. If the file descriptor hasn't been rearmed yet, should any interesting event arise, it is simply queued in memory until the file descriptor is rearmed. Once it is rearmed, any pending events - including those that happened before the descriptor was rearmed - will be reported in the next call to epoll_wait(2) (again, maybe not exactly the next one, but they will be reported). In other words, EPOLLONESHOT does not disable event monitoring, it simply disables event notification temporarily.
Ok, so will the last epoll_wait always be notified of the readiness of socket S? Even if S is #1 (i.e. it's not rearmed)?
Given what I said above, by now it should be pretty clear: yes, it will. You won't lose any event. epoll offers strong guarantees, it's awesome. It's also thread-safe and you can wait on the same epoll fd in different threads and update event subscription concurrently. epoll is very powerful, and it is well worth taking the time to learn it!
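To make the wait / process / rearm cycle from your scenario concrete, here is a rough C sketch. Assumptions: the sockets are already non-blocking and registered with EPOLLIN | EPOLLET | EPOLLONESHOT (with data.fd set to the socket), and handle_until_eagain is a placeholder for your read-until-EAGAIN processing.

#include <sys/epoll.h>

void handle_until_eagain(int fd);   /* placeholder: read and process until EAGAIN */

/* Wait for a batch of events, process each descriptor, then rearm it
 * (required because EPOLLONESHOT disarms it after the first report). */
void event_loop(int epfd)
{
    struct epoll_event events[10];

    for (;;) {
        int n = epoll_wait(epfd, events, 10, -1);

        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;

            handle_until_eagain(fd);

            /* Rearm: events that fired while the descriptor was disarmed
             * are not lost; they are reported by a later epoll_wait once
             * the descriptor is armed again. */
            struct epoll_event ev = {
                .events = EPOLLIN | EPOLLET | EPOLLONESHOT,
                .data.fd = fd,
            };
            epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
        }
    }
}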
Okay, SO is warning me about a subjective title, so please let me explain. Right now I'm looking at Go. I've read the spec and watched a few IO talks; it looks interesting, but I have some questions.
One of my favourite examples was this select statement that listened on a channel that came from "DoAfter()" or something; the channel would send something at a given time from now.
Something like this (this probably won't work - pseudo-Go if anything!):
to := Time.DoAfter(1000 * Time.MS) // hypothetical API, akin to time.After(time.Second)
select {
case <-to:
    return nil // we timed out
case d := <-waitingfor:
    return d
}
Suppose the thing we're waiting for happens really fast, so this function returns and isn't listening on the to channel any more - what happens in DoAfter?
I like and know that you ought not test the channel, for example
if chanToSendTimeOutOn.isOpen() { // no such isOpen() test exists in Go; illustrative only
    chanToSendTimeOutOn <- true
}
I like how channels act as synchronization points; with this, for example, it is possible that the function above could return after the isOpen() test but before the sending of true. I really am against the test; it defeats what channels do - hide the locks and whatnot.
I've read the spec and seen the runtime panics and recovery, but in this example where do we recover? Is the thing waiting to send the timeout a goroutine or an "object" of sorts? I imagined this "object" which had a sorted list of things it had to send to after given times, and that it'd just append DoAfter requests to the queue in the right order and go through it. I'm not sure where that'd get an opportunity to actually recover.
If it spawned goroutines, each with their own timer (managed by the runtime of course, so threads don't actually block for time), what then would get the chance to recover?
The other part of my question is about the lifetime of channels. I would imagine they're ref-counted - well, the readable references are ref-counted - so that if nothing anywhere holds a readable reference, the channel is destroyed. I'd call this deterministic. For the "point-to-point" topologies you can form, it will be, if you stick to Go's "send stuff via channels, don't access it directly" approach.
So here, for example, when the thing that wants a timeout returns, the to channel is no longer read by anyone. The goroutine behind it is pointless now - is there a way to make it return without doing its work?
Example:
A file-reading goroutine that has used defer to close the file when it is done - can it "sense" that the channel it is supposed to send to has been closed, and thus return without reading any more?
I'd also like to know why the select statement is "nondeterministic". I'd have quite liked it if the first case took priority when both the first and the second are ready (for a non-blocking operation) - I won't condemn it for that, but is there a reason? What's the implementation of this?
Lastly, how are goroutines scheduled? Does the compiler add some sort of "yielding" every so many instructions, so that a running thread will switch between different goroutines? Where can I find info on the lower-level stuff?
I know Go touts that "you simply don't need to worry about this" but I like to know what things I write actually hide (that could be a C++ thing) and the reasons why.
If you write to a closed channel, your program will panic (see http://play.golang.org/p/KU7MLrFQSx for example). You could potentially catch this error with recover, but being in a situation where you don't know whether the channel you are writing to is open is usually a sign of a bug in the program. The send side of the channel is responsible for closing it, so it should know the current state. If you have multiple goroutines sending on the channel, then they should coordinate in closing the channel (e.g. by using a sync.WaitGroup).
In your Time.DoAfter hypothetical, it would depend on whether the channel was buffered. If it was an unbuffered channel, then the goroutine writing to the timer channel would block until someone read from the channel. If that never happened, then the goroutine would remain blocked until the program completed. If the channel was buffered, the send would complete immediately. The channel could be garbage collected before anyone read from it.
The standard library time.After behaves this way, returning a channel with a one-slot buffer.
Is
OutputDebugString(PAnsiChar(''));
thread safe?
I/we have been using it in threads for debugging, and it never occurred to me if I should be doing it a different way.
(Delphi 7)
Well, not that it isn't true - it is - but just so that you don't have to take Lieven's word for it:
Passing of data between the application and the debugger is done via a 4 kbyte chunk of shared memory, with a Mutex and two Event objects protecting access to it. These are the four kernel objects involved.
Understanding Win32 OutputDebugString is an excellent article on the matter.
Don't worry, it is.
When OutputDebugString() is called by an application, it takes these steps. Note that a failure at any point abandons the whole thing and treats the debugging request as a no-op (the string isn't sent anywhere).
Open DBWinMutex and wait until we have exclusive access to it.
Map the DBWIN_BUFFER segment into memory: if it's not found, there is no debugger running, so the entire request is ignored.
Open the DBWIN_BUFFER_READY and DBWIN_DATA_READY events. As with the shared memory segment, missing objects mean that no debugger is available.
Wait for the DBWIN_BUFFER_READY event to be signaled: this says that the memory buffer is no longer in use. Most of the time, this event will be signaled immediately when it's examined, but it won't wait longer than 10 seconds for the buffer to become ready (a timeout abandons the request).
Copy up to about 4 kbytes of data to the memory buffer, and store the current process ID there as well. Always put a NUL byte at the end of the string.
Tell the debugger that the buffer is ready by setting the DBWIN_DATA_READY event. The debugger takes it from there.
Release the mutex.
Close the Event and Section objects, though we keep the handle to the mutex around for later.
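If you're curious what the receiving side of that handshake looks like, a minimal listener is only a handful of calls. This is a rough, heavily trimmed C sketch based on the steps above (no error handling, no cross-session security attributes; the shared section starts with the sender's DWORD process ID, followed by the NUL-terminated string):

#include <stdio.h>
#include <string.h>
#include <windows.h>

/* Minimal debugger-side listener for OutputDebugString traffic. */
int main(void)
{
    HANDLE bufReady  = CreateEventA(NULL, FALSE, FALSE, "DBWIN_BUFFER_READY");
    HANDLE dataReady = CreateEventA(NULL, FALSE, FALSE, "DBWIN_DATA_READY");
    HANDLE section   = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                          PAGE_READWRITE, 0, 4096, "DBWIN_BUFFER");
    const char *shared = MapViewOfFile(section, FILE_MAP_READ, 0, 0, 0);

    for (;;) {
        SetEvent(bufReady);                        /* "the buffer is free" */
        WaitForSingleObject(dataReady, INFINITE);  /* a process wrote a string */

        DWORD pid;
        memcpy(&pid, shared, sizeof(pid));         /* first 4 bytes: sender PID */
        printf("%lu: %s\n", (unsigned long)pid, shared + sizeof(DWORD));
    }
}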
I've had trouble once, though, with strings in an ISAPI DLL. For some odd reason the IsMultiThread boolean defined in System.pas was not set! It was causing weird access violations once more than one thread was running... A simple IsMultiThread := True; in a unit's initialization section fixed it.