file descriptor starvation and blocking file descriptors - linux

In the linux programming interface book, (p.1367)
Starvation considerations can also apply when using signal-driven I/O,
since it also presents an edge-triggered notification mechanism. By
contrast, starvation considerations don’t necessarily apply in
applications employing a level-triggered notification mechanism. This
is because we can employ blocking file descriptors with
level-triggered notification and use a loop that continuously checks
descriptors for readiness, and then performs some I/O on the ready
descriptors before once more checking for ready file descriptors.
I don't understand what this 'blocking' part means. I think it's irrelevant whether we use blocking I/O or nonblocking I/O. (The author also says earlier in the chapter that nonblocking I/O is usually employed regardless of whether level-triggered or edge-triggered notification is used.)

SO, IO eh? Well, IO is "handling things" so we can go with a human metaphor. Imagine you are a process on a system getting stuff done for your boss.
Then blocking IO is like going to the dentist or having a face-to-face meeting with a customer. In both these scenarios, when you go to undertake that event, you're away from your desk and so totally unable to do anything else until you get back to your desk. Chances are you're going to waste some time in the waiting room, or in idle chatter while waiting for people to turn up to the meeting.
Blocking IO is like this - blocking IO "sacrifices" (I say this because you lose the thread, effectively) the thread to the task in question. You can't use it for any other purpose whilst it is blocked - it's waiting on that IO to happen.
Non-blocking IO, by contrast, is like being on the telephone. When you're on the phone, you can engage in that IO whilst writing an answer on Stack Overflow! Such IO is said to be asynchronous - in that you accept an IO request and start processing it, but can handle other requests whilst they complete.
Now, my favourite resource for this kind of thing is the c10k problem page here. I'd say you're right - 99% of the time you are going to use non-blocking IO (in fact, your OS is conducting non-blocking IO for you all the time), mostly because using a whole thread for each incoming IO task is incredibly inefficient, even in Linux where threads and processes are the same thing (tasks) and are fairly lightweight.
The difference between edge-triggered and level-triggered notification probably matters more for non-blocking connections, since it would be irrelevant in the blocking case anyway. As I understand it, an edge-triggered notification only marks a descriptor as ready when new data has arrived since the last time you asked for a status update, whereas level-triggered marks a descriptor as ready whenever there is data available. This makes edge-triggered interfaces a bit trickier, since you have to handle the incoming data when you see it; you won't be notified again for data that is already waiting. In theory, that should be more efficient (fewer notifications).
So, tl;dr - edge vs level readiness is a slightly different consideration from blocking vs non-blocking design; namely, there are several ways to do non-blocking IO and really only one way to do blocking IO.
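To make that concrete, here is a minimal C sketch (my own illustration, not taken from the book) of an epoll loop that can run either level-triggered or edge-triggered. It assumes a connected socket conn_fd that has already been put into non-blocking mode; the key difference is that in edge-triggered mode the handler must drain the descriptor until read() reports EAGAIN, whereas in level-triggered mode it may stop early and rely on being notified again.

    /* Hedged sketch: an epoll loop that can run level- or edge-triggered.
     * conn_fd is assumed to be a connected, non-blocking socket. */
    #include <errno.h>
    #include <stdio.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    static void handle_ready(int fd, int edge_triggered)
    {
        char buf[4096];
        for (;;) {
            ssize_t n = read(fd, buf, sizeof(buf));
            if (n > 0) {
                /* ... process n bytes ... */
                if (!edge_triggered)
                    return;         /* level-triggered: epoll will report the fd
                                       as ready again if data is left over */
                continue;           /* edge-triggered: must drain until EAGAIN */
            }
            if (n == 0)
                return;             /* peer closed the connection */
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                return;             /* nothing more to read for now */
            perror("read");
            return;
        }
    }

    int event_loop(int conn_fd, int edge_triggered)
    {
        int epfd = epoll_create1(0);
        struct epoll_event ev;
        ev.events = EPOLLIN | (edge_triggered ? EPOLLET : 0);
        ev.data.fd = conn_fd;
        if (epfd == -1 || epoll_ctl(epfd, EPOLL_CTL_ADD, conn_fd, &ev) == -1)
            return -1;

        for (;;) {
            struct epoll_event events[16];
            int n = epoll_wait(epfd, events, 16, -1);
            if (n == -1 && errno != EINTR)
                return -1;
            for (int i = 0; i < n; i++)
                handle_ready(events[i].data.fd, edge_triggered);
        }
    }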

Related

Will non-blocking I/O be put to sleep during copying data from kernel to user?

I ask this question because I am looking at multiplexing I/O in Go, which uses epollwait under the hood.
When a socket is ready, a goroutine is woken up and begins to read the socket in non-blocking mode. If the read system call still blocks while copying data from kernel space to user space, I assume the kernel thread the goroutine is attached to will be put to sleep as well.
I am not sure of that, hoping someone can help correct me if I am wrong.
I fail to quite parse what you've written.
I'll hazard a guess and conjecture that you might be overlooking the fact that the write(2) and read(2) syscalls (and those of their ilk such as send(2) and recv(2)) on sockets put into non-blocking mode are free to consume (and return, respectively) less data than requested.
In other words, a write(2) call on a non-blocking socket told to write 1 megabyte of data will consume only as much data as currently fits into the associated kernel buffer and return immediately, signalling that it consumed only that much data. The next immediate call to write(2) will likely return EWOULDBLOCK.
The same goes for the read(2) call: if you pass it a buffer large enough to hold 1 megabyte of data, and tell it to read that number of bytes, the call will only drain the contents of the kernel buffer and return immediately, signaling how much data it actually copied. The next immediate call to read(2) will likely return EWOULDBLOCK.
So, any attempt to get or put data to the socket succeeds almost immediately: either after the data had been shoveled between the kernel's buffer and the user space or right away—with the EAGAIN return code.
Sure, there's supposedly a possibility for an OS thread to be suspended right in the middle of performing such a syscall, but this does not count as "blocking in a syscall."
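As a hedged illustration of that short-write behaviour (my own sketch, not the OP's code): a helper that pushes as much of a buffer as the kernel will take right now, stops at EAGAIN/EWOULDBLOCK, and reports how much was actually consumed.

    /* Sketch of the behaviour described above: writing a large buffer to a
     * non-blocking socket consumes only what fits into the kernel buffer and
     * then reports EAGAIN/EWOULDBLOCK. sock_fd is assumed to be a connected
     * socket already put into non-blocking mode. */
    #include <errno.h>
    #include <unistd.h>

    ssize_t write_some(int sock_fd, const char *data, size_t len)
    {
        size_t total = 0;

        while (total < len) {
            ssize_t n = write(sock_fd, data + total, len - total);
            if (n > 0) {
                total += n;            /* a short write is perfectly normal */
                continue;
            }
            if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK))
                break;                 /* kernel buffer is full for now; wait
                                          for writability and retry later */
            return -1;                 /* a real error */
        }
        return (ssize_t)total;         /* how much was actually consumed */
    }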
Update to the original answer in response to the following comment of the OP:
<…>
This is what I see in the book
"UNIX Network Programming" (Volume 1, 3rd edition), chapter 6.2:
A synchronous I/O operation causes the requesting process
to be blocked until that I/O operation completes. Using these
definitions, the first four I/O models—blocking, nonblocking, I/O
multiplexing, and signal-driven I/O—are all synchronous because the
actual I/O operation (recvfrom) blocks the process.
It uses "blocks" to describe nonblocking I/O operation. That makes me confused.
I still don't understand why the book uses "blocks the process" if the process is actually not blocked.
I can only guess that the book's author intended to highlight that the process is indeed blocked from the moment it enters a syscall until it returns from it. Reads from and writes to a non-blocking socket do block to transfer the data, if available, between the kernel and user space. We colloquially say this does not block because we mean "it does not block waiting and doing nothing for an indeterminate amount of time".
The book's author might contrast this to the so-called asynchronous I/O (called "overlapped" on Windows™)—where you basically give the kernel a buffer with/for data and ask it to carry the operation out completely in parallel with your code—in the sense that the relevant syscall returns right away and the I/O is carried out in the background (with regard to your user-space code).
To my knowledge, Go does not use the kernel's async I/O facilities on any platform it supports. You might look there for developments regarding Linux and its contemporary io_uring subsystem.
Oh, and one more point. The book might (at that point in the narrative, at least) be discussing a simplified "classic" scheme where there are no in-process threads, and the sole unit of concurrency is the process (with a single thread of execution). In this scheme, any syscall obviously blocks the whole process. In contrast, Go only works on kernels which support threads, so in a Go program a syscall never blocks the whole process—only the thread it's called on.
Let me take yet another stab at explaining the problem as—I perceive—the OP stated it.
The problem of serving multiple client requests is not new—one of the more visible first statements of it is "The C10k problem".
To quickly recap it, a single threaded server with blocking operations on the sockets it manages is only realistically able to handle a single client at a time.
To solve it, there exist two straightforward approaches:
Fork a copy of the server process to handle each incoming client connection.
On an OS which supports threads, fork a new thread inside the same process to handle each incoming client.
They have their pros and cons, but they both suck with regard to resource usage, and—which is more important—they do not play well with the fact that most clients perform I/O at a relatively low rate and bandwidth compared to the processing resources available on a typical server.
In other words, when serving a typical TCP/IP exchange with a client, the serving thread most of the time sleeps in the write(2) and read(2) calls on the client socket.
This is what most people mean when talking about "blocking operations" on sockets: if a socket is blocking, an operation on it will block until it can actually be carried out, and the originating thread will be put to sleep for an indeterminate amount of time.
Another important thing to note is that when the socket becomes ready, the amount of work done is typically minuscule compared to the amount of time slept between the wakeups.
While the thread sleeps, its resources (such as memory) are effectively wasted, as they cannot be used to do any other work.
Enter "polling". It combats the problem of wasted resources by noticing that the points of readiness of networked sockets are relatively rare and far in between, so it makes sense to have lots of such sockets been served by a single thread: it allows to keep the thread almost as busy as theoretically possible, and also allows to scale out when needed: if a single thread is unable to cope with the data flow, add another thread, and so on.
This approach is definitely cool but it has a downside: the code which reads and writes data must be rewritten to use a callback style instead of the original plain sequential style. Writing with callbacks is hard: you usually have to implement intricate buffer management and state machines to deal with this.
The Go runtime solves this problem by adding another layer of scheduling for its execution flow units—goroutines: for goroutines, operations on the sockets are always blocking, but when a goroutine is about to block on a socket, this is transparently handled by suspending only the goroutine itself—until the requested operation is able to proceed—and using the thread the goroutine was running on to do other work¹.
This gives the best of both approaches: the programmer may write classic no-brainer sequential callback-free networking code, yet the threads used to handle networking requests are fully utilized².
As to the original question of blocking, both the goroutine and the thread it runs on are indeed blocked when the data transfer on a socket is happening, but since what happens is data shoveling between a kernel and a user-space buffer, the delay is most of the time small, and is no different to the classic "polling" case.
Note that performing syscalls—including I/O on non-pollable descriptors—in Go (at least up to and including Go 1.14) does block both the calling goroutine and the thread it runs on, but is handled differently from I/O on pollable descriptors: when a special monitoring thread notices a goroutine has spent more than a certain amount of time in a syscall (20 µs, IIRC), the runtime pulls the so-called "processor" (a runtime thing which runs goroutines on OS threads) from under the goroutine and tries to make it run another goroutine on another OS thread; if there is a goroutine wanting to run but no free OS thread, the Go runtime creates another one.
Hence "normal" blocking I/O is still blocking in Go in both senses: it blocks both goroutines and OS threads, but the Go scheduler makes sure the program as a whole is still able to make progress.
This could arguably be a perfect case for using true asynchronous I/O provided by the kernel, but it's not there yet.
¹ See this classic essay for more info.
² The Go runtime is certainly not the first one to pioneer this idea. For instance, look at the State Threads library (and the more recent libtask) which implement the same approach in plain C; the ST library has superb docs which explain the idea.

Grayzone between blocking and non-blocking I/O?

I am familiar with programming according to the two paradigms, blocking and non-blocking, on the JVM (Java/nio, Scala/Akka).
However, I see a kind of grayzone in between that confuses me.
Look at any non-blocking program of your choice: it is full of blocking statements!
For example, each assignment of a variable is a blocking operation that waits for CPU-registers and memory-reads to succeed.
Moreover, non-blocking programs even contain blocking statements that carry out computations on complex in-memory-collections, without violating the non-blocking paradigm.
In contrast to that, the non-blocking paradigm would clearly be violated if we would call some external web-service in a blocking way to receive its result.
But what is in between these extremes? What about reading/writing a tiny file, a local socket, or making an API-call to an embedded data storage engine (such as SQLite, RocksDb, etc.). Is it ok to do blocking reads/writes to these APIs? They usually give strong timing guarantees in practice (say << 1ms as long as the OS is not stalled), so there is almost no practical difference to pure in-memory-access. As a precise example: is calling RocksDBs get/put within an Akka Actor considered to be an inadvisable blocking I/O?
So, my question is whether there are rules of thumb or precise criteria that help me in deciding whether I may stick to a simple blocking statement in my non-blocking program, or whether I shall wrap such a statement into non-blocking boilerplate (framework-depending, e.g., outsourcing such calls to a separate thread-pool, nesting one step deeper in a Future or Monad, etc.).
for example, each assignment of a variable is a blocking operation that waits for CPU-registers and memory-reads to succeed
That's not really what is considered "blocking". Those operations are constant time, and that constant is very low (a few cycles in general) compared to the latency of any IO operations (anywhere between thousands and billions of cycles) - except for page faults due to swapped memory, but if those happen regularly you have a problem anyway.
And if we want to get all nitpicky, individual instructions do not fully block a CPU thread as modern CPUs can reorder instructions and execute ones that have no data dependencies out of order while waiting for memory/caches or other more expensive instructions to finish.
Moreover, non-blocking programs even contain blocking statements that carry out computations on complex in-memory-collections, without violating the non-blocking paradigm.
Those are not considered as blocking the CPU from doing work. They should not even block user interactivity if they are correctly designed to present the results to the user when they are done without blocking the UI.
Is it ok to do blocking reads/writes to these APIs?
That always depends on why you are using non-blocking approaches in the first place. What problem are you trying to solve? Maybe one API warrants a non-blocking approach while the other does not.
For example, most file IO methods are nominally blocking, but writes without fsync can be very cheap, especially if you're not writing to spinning rust, so it can be overkill to avoid those methods on your compute threadpool. On the other hand, one usually does not want to block a thread in a fixed threadpool while waiting for a multi-second database query.

What's the difference between event-driven and asynchronous? Between epoll and AIO?

Event-driven and asynchronous are often used as synonyms. Are there any differences between the two?
Also, what is the difference between epoll and aio? How do they fit together?
Lastly, I've read many times that AIO in Linux is horribly broken. How exactly is it broken?
Thanks.
Events are one of the paradigms used to achieve asynchronous execution.
But not all asynchronous systems use events. That is the semantic relationship between the two - one is a super-entity of the other.
epoll and aio use different metaphors:
epoll is a blocking operation (epoll_wait()) - you block the thread until some event happens and then you dispatch the event to different procedures/functions/branches in your code.
In AIO, you pass the address of your callback function (completion routine) to the system and the system calls your function when something happens.
Problem with AIO is that your callback function code runs on the system thread and so on top of the system stack. A few problems with that as you can imagine.
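To illustrate the completion-routine style in concrete terms (a sketch under my own assumptions, using POSIX AIO rather than any particular system's native AIO): aio_read() returns immediately, and the notification function runs later, on a thread the implementation provides, once the read has finished. On Linux/glibc this particular API is implemented with user-space threads, as a later answer points out; link with -lrt.

    /* Hedged sketch of the callback ("completion routine") style:
     * fd and the buffer are assumed to be set up elsewhere. */
    #include <aio.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>

    static char buf[4096];
    static struct aiocb cb;

    static void on_complete(union sigval sv)
    {
        (void)sv;
        /* real code would check aio_error(&cb) before aio_return() */
        ssize_t n = aio_return(&cb);       /* how many bytes were read */
        printf("read completed: %zd bytes\n", n);
    }

    int start_read(int fd)
    {
        memset(&cb, 0, sizeof(cb));
        cb.aio_fildes = fd;
        cb.aio_buf = buf;
        cb.aio_nbytes = sizeof(buf);
        cb.aio_offset = 0;
        cb.aio_sigevent.sigev_notify = SIGEV_THREAD;
        cb.aio_sigevent.sigev_notify_function = on_complete;

        return aio_read(&cb);              /* returns immediately; the callback
                                              fires when the I/O has completed */
    }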
They are completely different things.
The event-driven paradigm means that an object called an "event" is sent to the program whenever something happens, without that "something" having to be polled at regular intervals to discover whether it has happened. That "event" may be trapped by the program to perform some actions (i.e. a "handler") -- either synchronously or asynchronously.
Therefore, handling of events can either be synchronous or asynchronous. JavaScript, for example, uses a synchronous eventing system.
Asynchronous means that actions can happen independent of the current "main" execution stream. Mind you, it does NOT mean "parallel", or "different thread". An "asynchronous" action may actually run on the main thread, blocking the "main" execution stream in the meantime. So don't confuse "asynchronous" with "multi-threading".
You may say that, technically speaking, an asynchronous operation automatically assumes eventing -- at least "completed", "faulted" or "aborted/cancelled" events (one or more of these) are sent to the instigator of the operation (or the underlying O/S itself) to signal that the operation has ceased. Thus, async is always event-driven, but not the other way round.
Event-driven is a single thread where events are registered for a certain scenario. When that scenario occurs, the events are fired. However, even then, each of the events is fired in a sequential manner. There is nothing asynchronous about it. Node.js (webserver) uses events to deal with multiple requests.
Asynchronous is basically multitasking. It can spawn off multiple threads or processes to execute a certain function. It's totally different from event-driven in the sense that each thread is independent and hardly interacts with the main thread in an easy, responsive manner. Apache (webserver) uses multiple threads to deal with incoming requests.
Lastly, I've read many times that AIO in Linux is horribly broken. How exactly is it broken?
AIO as done via KAIO/libaio/io_submit comes with a lot of caveats and is tricky to use well if you want it to behave rather than silently block (e.g. it only works on certain types of fd, and when using files/block devices it only actually works for direct I/O, but those are the tip of the iceberg). It did eventually gain the ability to indicate file descriptor readiness with the 4.19 kernel, which is useful for programs using sockets.
POSIX AIO on Linux is actually a userspace threads implementation by glibc and comes with its own limitations (e.g. it's considered slow and doesn't scale well).
These days (2020), hope for doing arbitrary asynchronous I/O on Linux with less pain and fewer tradeoffs is coming from io_uring...
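For completeness, a minimal io_uring sketch via liburing (assuming liburing is available; link with -luring). A single read is handed to the kernel and the caller then waits for its completion event; the function name, offset, and sizes here are illustrative only.

    /* Hedged sketch: submit one read to the kernel and wait for its completion,
     * the "give the kernel a buffer and let it finish the I/O" model. */
    #include <liburing.h>

    int read_with_uring(int fd, char *buf, unsigned len)
    {
        struct io_uring ring;
        if (io_uring_queue_init(8, &ring, 0) < 0)
            return -1;

        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, len, 0);  /* queue one read at offset 0 */
        io_uring_submit(&ring);

        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);            /* block until it completes */
        int res = cqe->res;                        /* bytes read, or -errno */
        io_uring_cqe_seen(&ring, cqe);

        io_uring_queue_exit(&ring);
        return res;
    }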

What mechanism do PIPES use to "wake up" the recipient?

I have two questions in one here.
On Windows, I am familiar with pipes and how they work. However, I am curious as to what mechanism the OS uses to notify the recipient thread of a message arrival.
Does the thread "poll & sleep" continuously for data? Does the OS check to see if the thread is sleeping and wake it up? Or is there some other mechanism used?
Specifically, I want to build an IPC system where many threads need to pass messages. I don't need to use pipes, but I do need to know the most efficient notification method possible.
The developer can decide how they want to work with the pipe: whether they will sleep/poll, or whether they want to call blocking functions and wait until the data is available.
As for the mechanism the pipe has for waking up the process --assuming that the process is in a blocking read call-- it is not the pipe but the OS that takes charge, like in any other OS call: it registers the operation and blocks the process/thread until the data is available. When the data is available, it completes the system call.
This is an answer for Unix. I'd lay good money on Windows being pretty similar as the solution has been around a long time and is well known to be robust. The details will vary a bit (different API calls, specifics of semantics, etc.)
It depends on whether the other end is using the pipe's file descriptor in blocking or non-blocking mode.
In blocking mode, the process is waiting in the OS kernel for the data to become available. The way in which notification happens there depends on the OS. Chances are it involves a queue of processes that are considered to be runnable, and everything's made simpler by the fact that the kernel can (largely) control what interrupts it. In a simple (single processor) implementation you could go for something as trivial as noting on write to the pipe that the other process is waiting to read from it (via some kind of “interest set”), and so marking the reader as runnable at that point (at which time it becomes up to the scheduler to decide).
In non-blocking mode, either the process is polling from time to time (yuck!) or they're using a system call like select() or poll() (there are some higher-performance variants too). That's very much like the Windows call WaitForMultipleObjects() and works just great with pipes. That in turn ends up back at that runnable process queue, the interest set, and the scheduler.
It also doesn't really matter too much whether it's blocking because the pipe is full or the pipe is empty, as the control flow is pretty much symmetric between readers and writers. (Unlike the data flow, of course.)
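A small Unix illustration of the blocking case described above (a sketch, not tied to any particular program): the parent sleeps inside read(2) on the pipe, and the kernel marks it runnable again when the child writes.

    /* Hedged sketch: the reader sleeps in read(2) and the kernel wakes it
     * when the writer puts data into the pipe. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) == -1)
            return 1;

        if (fork() == 0) {              /* child: the writer */
            close(fds[0]);
            sleep(1);                   /* pretend to do some work first */
            write(fds[1], "hello", 5);  /* this write marks the reader runnable */
            return 0;
        }

        close(fds[1]);
        char buf[16];
        ssize_t n = read(fds[0], buf, sizeof(buf));  /* blocks: the process sleeps
                                                        in the kernel until data
                                                        shows up in the pipe */
        if (n > 0)
            printf("woke up with %zd bytes: %.*s\n", n, (int)n, buf);
        return 0;
    }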

What is in simple words blocking IO and non-blocking IO?

How would you explain blocking IO and non-blocking IO to a mere mortal? I've found these concepts are not very clear among many of us programmers.
Blocking I/O means that the program execution is put on hold while the I/O is going on. So the program waits until the I/O is finished and then continues its execution.
In non-blocking I/O the program can continue during I/O operations.
It's a concurrency issue. In the normal case, after an OS kernel receives an I/O op from a user program, that program does not run again until the I/O operation completes. Other programs typically get scheduled in the meantime.
This solves lots of little problems. For example, how does a program know how many bytes were read unless the I/O is complete when read(2) returns? How does it know if it can reuse a write(2) buffer if the operation is still in progress when write(2) returns? Obviously, a more complex interface is needed for truly asynchronous I/O.
Ultimately it comes down to:
I/O happens synchronously with respect to the program, by blocking the program until I/O is finished
I/O is merely scheduled by a system call, and some notification mechanism exists to communicate the real result
There is a compromise where I/O ops simply fail if they can't be completed immediately. This is the more common use of "non-blocking" I/O in practice.
The whole issue is complicated moreover by the effort to schedule multithreaded programs when I/O could conceivably block only one thread, but that's a different question...
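A hedged sketch of that "fail immediately" compromise: the descriptor is switched to non-blocking mode with fcntl(2), and a read(2) with nothing to deliver returns -1 with EAGAIN instead of putting the caller to sleep.

    /* Sketch: make an fd non-blocking, then try a read that never sleeps. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    void read_without_waiting(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        fcntl(fd, F_SETFL, flags | O_NONBLOCK);   /* make the fd non-blocking */

        char buf[512];
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n >= 0)
            printf("got %zd bytes right away\n", n);
        else if (errno == EAGAIN || errno == EWOULDBLOCK)
            printf("nothing available yet, not going to sleep\n");
        else
            perror("read");
    }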
Simply said: non-blocking I/O (asynchronous) allows other operations to be carried out while it does its thing, whereas blocking I/O would block other operations.
