Does "synchronized I/O" always mean "blocking I/O"? - linux

I'm confused about "synchronized/asynchronous I/O" versus "blocking/non-blocking I/O". I assumed that "synchronized I/O" always means some kind of blocking I/O: read/write are blocking operations, so they are "synchronized I/O".
Please kindly help correct me!

"Asynchronous" or "non-blocking" I/O are, indeed, effectively synonymous. However, if we're using Linux terminology, "blocking" and "synchronized" I/O are different.
"Blocking" just tells you that the syscall won't return until the kernel has recorded the data... somewhere. There's no guarantee that this record is persistent in the event of an unexpected power loss or hardware failure; it can simply be a writeahead cache, for example -- so your blocking call can return at a point where other processes running at the time can see the write, but where that write would be lost if a power failure took place.
"Synchronized" in the O_SYNC sense tells you that the syscall won't return until the data is actually persisted to hardware.
Thus: All synchronized I/O is blocking, but not all blocking I/O is synchronized.
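A minimal sketch of the difference, assuming a Linux system; the file name is just an illustrative placeholder:

```c
/* Minimal sketch: blocking vs. synchronized writes on Linux.
 * The file name "demo.dat" is a placeholder for illustration only. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    const char msg[] = "hello\n";

    /* Plain blocking write: returns once the kernel has buffered the
     * data (e.g. in the page cache); it may not have reached the disk yet. */
    int fd = open("demo.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (write(fd, msg, strlen(msg)) < 0) perror("write");
    close(fd);

    /* Synchronized write: with O_SYNC, write() does not return until the
     * data (and required metadata) has been pushed to the storage device. */
    int sfd = open("demo.dat", O_WRONLY | O_APPEND | O_SYNC);
    if (sfd < 0) { perror("open O_SYNC"); return 1; }
    if (write(sfd, msg, strlen(msg)) < 0) perror("write O_SYNC");
    close(sfd);

    return 0;
}
```

Calling fsync(2) after an ordinary write achieves a similar "synchronized" effect on demand, without opening the file with O_SYNC.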

Related

Will non-blocking I/O be put to sleep during copying data from kernel to user?

I ask this question because I am looking at multiplexed I/O in Go, which uses epollwait under the hood.
When a socket is ready, a goroutine is woken up and begins to read the socket in non-blocking mode. If the read system call is still blocked while data is copied from kernel to user space, I assume the kernel thread the goroutine is attached to is put to sleep as well.
I am not sure of that, and I hope someone can correct me if I am wrong.
I fail to quite parse what you've written.
I'll hazard a guess and conjecture that you might be overlooking the fact that the write(2) and read(2) syscalls (and those of their ilk such as send(2) and recv(2)) on sockets put into non-blocking mode are free to consume (and return, respectively) less data than requested.
In other words, a write(2) call on a non-blocking socket told to write 1 megabyte of data will consume only as much data as currently fits into the associated kernel buffer and return immediately, signalling how much data it actually consumed. The next immediate call to write(2) will likely return EWOULDBLOCK.
The same goes for the read(2) call: if you pass it a buffer large enough to hold 1 megabyte of data, and tell it to read that number of bytes, the call will only drain the contents of the kernel buffer and return immediately, signaling how much data it actually copied. The next immediate call to read(2) will likely return EWOULDBLOCK.
So, any attempt to get or put data to the socket succeeds almost immediately: either after the data had been shoveled between the kernel's buffer and the user space or right away—with the EAGAIN return code.
Sure, there's supposedly a possibility for an OS thread to be suspended right in the middle of performing such a syscall, but this does not count as "blocking in a syscall."
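To make this concrete, here is a rough sketch of how one typically writes a buffer to a non-blocking socket, assuming fd has already been put into non-blocking mode (e.g. with fcntl(fd, F_SETFL, O_NONBLOCK)); the helper name is made up for illustration:

```c
/* Sketch: writing a buffer to a non-blocking socket. */
#include <errno.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Returns the number of bytes actually consumed by the kernel buffer;
 * stops early (without blocking) once that buffer is full. */
ssize_t write_some(int fd, const char *buf, size_t len) {
    size_t off = 0;
    while (off < len) {
        ssize_t n = send(fd, buf + off, len - off, 0);
        if (n > 0) {
            off += (size_t)n;          /* a partial write is normal */
        } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            break;                     /* kernel buffer full: stop, poll later */
        } else if (n < 0 && errno == EINTR) {
            continue;                  /* interrupted: just retry */
        } else {
            return -1;                 /* real error */
        }
    }
    return (ssize_t)off;
}
```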
Update to the original answer in response to the following comment of the OP:
<…>
This is what I see in the book "UNIX Network Programming" (Volume 1, 3rd edition), chapter 6.2:
A synchronous I/O operation causes the requesting process to be blocked until that I/O operation completes. Using these definitions, the first four I/O models—blocking, nonblocking, I/O multiplexing, and signal-driven I/O—are all synchronous because the actual I/O operation (recvfrom) blocks the process.
It uses "blocks" to describe nonblocking I/O operation. That makes me confused.
I still don't understand why the book uses "blocks the process" if the process is actually not blocked.
I can only guess that the book's author intended to highlight that the process is indeed blocked from the moment it enters a syscall until the syscall returns. Reads from and writes to a non-blocking socket do block to transfer the data, if available, between the kernel and user space. We colloquially say this does not block because we mean "it does not block waiting and doing nothing for an indeterminate amount of time".
The book's author might be contrasting this with so-called asynchronous I/O (called "overlapped" on Windows™)—where you basically hand the kernel a buffer with/for data and ask it to deal with it entirely in parallel with your code—in the sense that the relevant syscall returns right away and the I/O is carried out in the background (with regard to your user-space code).
To my knowledge, Go does not use the kernel's async I/O facilities on any platform it supports. That said, it is worth watching the developments around Linux and its contemporary io_uring subsystem.
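For the curious, here is a rough sketch of what such truly asynchronous I/O looks like on Linux with io_uring, using the liburing helper library (assuming it is installed; the file name is a placeholder). This illustrates the kernel facility itself, not anything the Go runtime currently does:

```c
/* Sketch of asynchronous I/O on Linux via io_uring (link with -luring). */
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>

int main(void) {
    struct io_uring ring;
    if (io_uring_queue_init(8, &ring, 0) < 0) return 1;

    int fd = open("demo.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char buf[4096];
    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);
    io_uring_submit(&ring);            /* returns right away ... */

    /* ... the read proceeds in the background; do other work here ... */

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);    /* collect the completion when ready */
    printf("read returned %d bytes\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    return 0;
}
```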
Oh, and one more point. The book might (at that point in the narrative, at least) be discussing a simplified "classic" scheme where there are no in-process threads, and the sole unit of concurrency is the process (with a single thread of execution). In this scheme, any syscall obviously blocks the whole process. In contrast, Go works only on kernels which support threads, so in a Go program a syscall never blocks the whole process—only the thread it's called on.
Let me take yet another stab at explaining the problem as—I perceive—the OP stated it.
The problem of serving multiple client requests is not new—one of the more visible first statements of it is "The C10k problem".
To quickly recap it, a single threaded server with blocking operations on the sockets it manages is only realistically able to handle a single client at a time.
To solve it, there exist two straightforward approaches (a minimal sketch of the first one follows the list):
Fork a copy of the server process to handle each incoming client connection.
On an OS which supports threads, fork a new thread inside the same process to handle each incoming client.
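As a rough illustration of the first approach, here is a minimal fork-per-connection echo server in C. The port number and the echo behaviour are purely illustrative, and error handling is trimmed to the essentials; this is a sketch, not production code:

```c
/* Sketch of the classic fork-per-connection server (approach 1). */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

static void serve_client(int fd) {
    char buf[512];
    ssize_t n;
    /* Blocking read/write: only this child sleeps, not the whole server. */
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        write(fd, buf, (size_t)n);             /* trivial echo */
    close(fd);
}

int main(void) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(7777);               /* arbitrary example port */
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 128);

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);     /* blocks waiting for a client */
        if (cfd < 0) continue;
        if (fork() == 0) {                     /* child: handle one client */
            close(lfd);
            serve_client(cfd);
            _exit(0);
        }
        close(cfd);                            /* parent: back to accept() */
        while (waitpid(-1, NULL, WNOHANG) > 0) /* reap finished children */
            ;
    }
}
```

Each child is free to use plain blocking read(2)/write(2) because only that child sleeps; the cost is one whole process per client.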
They have their pros and cons, but they both suck with regard to resource usage, and—more importantly—they do not play well with the fact that most clients perform I/O at a relatively low rate and bandwidth compared with the processing resources available on a typical server.
In other words, when serving a typical TCP/IP exchange with a client, the serving thread most of the time sleeps in the write(2) and read(2) calls on the client socket.
This is what most people mean when talking about "blocking operations" on sockets: if a socket is blocking, an operation on it will block until it can actually be carried out, and the originating thread will be put to sleep for an indeterminate amount of time.
Another important thing to note is that when the socket becomes ready, the amount of work done is typically minuscule compared to the amount of time slept between wakeups.
While the thread sleeps, its resources (such as memory) are effectively wasted, as they cannot be used to do any other work.
Enter "polling". It combats the problem of wasted resources by noticing that the points of readiness of networked sockets are relatively rare and far in between, so it makes sense to have lots of such sockets been served by a single thread: it allows to keep the thread almost as busy as theoretically possible, and also allows to scale out when needed: if a single thread is unable to cope with the data flow, add another thread, and so on.
This approach is definitely cool, but it has a downside: the code which reads and writes data must be rewritten in callback style instead of the original plain sequential style. Writing with callbacks is hard: you usually have to implement intricate buffer management and state machines to deal with it.
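For concreteness, here is a skeletal sketch of the polling approach using Linux epoll. It assumes the sockets have already been put into non-blocking mode and registered with epoll_ctl(2) elsewhere; the per-connection state machines and callbacks alluded to above are left out:

```c
/* Skeleton of the "polling" approach: one thread watches many
 * non-blocking sockets and only wakes up when some become ready. */
#include <sys/epoll.h>
#include <errno.h>
#include <unistd.h>

#define MAX_EVENTS 64

void event_loop(int epfd) {
    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        /* Sleep here until at least one registered socket is ready. */
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        if (n < 0) {
            if (errno == EINTR) continue;
            break;
        }
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (events[i].events & EPOLLIN) {
                /* Drain what is available, then return to the loop; a real
                 * server would feed this into a per-connection state machine. */
                char buf[4096];
                while (read(fd, buf, sizeof(buf)) > 0)
                    ;                  /* ... process the data ... */
            }
            if (events[i].events & (EPOLLHUP | EPOLLERR))
                close(fd);             /* drop broken connections */
        }
    }
}
```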
The Go runtime solves this problem by adding another layer of scheduling for its execution flow units—goroutines: for goroutines, operations on sockets are always blocking, but when a goroutine is about to block on a socket, this is transparently handled by suspending only the goroutine itself—until the requested operation is able to proceed—and using the thread the goroutine was running on to do other work¹.
This gives you the best of both approaches: the programmer may write classic no-brainer sequential, callback-free networking code, yet the threads used to handle networking requests are fully utilized².
As to the original question of blocking, both the goroutine and the thread it runs on are indeed blocked when the data transfer on a socket is happening, but since what happens is data shoveling between a kernel and a user-space buffer, the delay is most of the time small, and is no different to the classic "polling" case.
Note that performing syscalls—including I/O on non-pollable descriptors—in Go (at least up to and including Go 1.14) does block both the calling goroutine and the thread it runs on, but this is handled differently from pollable descriptors: when a special monitoring thread notices a goroutine has spent more than a certain amount of time (20 µs, IIRC) in a syscall, the runtime pulls the so-called "processor" (a runtime entity which runs goroutines on OS threads) from under the goroutine and tries to use it to run another goroutine on another OS thread; if there is a goroutine wanting to run but no free OS thread, the Go runtime creates another one.
Hence "normal" blocking I/O is still blocking in Go in both senses: it blocks both goroutines and OS threads, but the Go scheduler makes sure the program as a whole still able to make progress.
This could arguably be a perfect case for using true asynchronous I/O provided by the kernel, but it's not there yet.
¹ See this classic essay for more info.
² The Go runtime is certainly not the first one to pioneer this idea. For instance, look at the State Threads library (and the more recent libtask) which implement the same approach in plain C; the ST library has superb docs which explain the idea.

Where does blocking I/O come from?

My understanding is that the hardware architecture and the operating systems are designed not to block the CPU. When any kind of blocking operation needs to happen, the operating system registers an interrupt and moves on to something else, making sure the precious time of the CPU is always used effectively.
It makes me wonder why most programming languages were designed with blocking APIs. More importantly, since the operating system works asynchronously when it comes to I/O—registering interrupts and dealing with results later, when they are ready—I'm really puzzled about how our programming language APIs escape this asynchrony. How does the OS provide synchronous system calls for programming languages with blocking APIs?
Where does this synchrony come from? Certainly not from the hardware level. So, is there an infinite loop somewhere I don't know about, spinning and spinning until some interrupt is triggered?
My understanding is that the hardware architecture and the operating systems are designed not to block the CPU.
Any rationally designed operating system would have a system service interface that does what you say. However, there are many non-rational operating systems that do not work in this manner at the process level.
Blocking I/O is simpler to program than non-blocking I/O. Let me give you an example from the VMS operating system (Windoze works the same way under the covers). VMS has system services called SYS$QIO and SYS$QIOW. That is, Queue I/O Request, and Queue I/O Request and Wait. The two system services have identical parameters. One pair of parameters is the address of a completion routine and a parameter to pass to that routine. However, these parameters are rarely used with SYS$QIOW.
If you do a SYS$QIO call, it returns immediately. When the I/O operation completes, the completion routine is called as a software interrupt. You then have to do interrupt programming in your application. We did this all the time. If you want your application to be able to read from 100 input streams simultaneously, this is the way you had to do it. It's just more complicated than doing simple blocking I/O with one device.
If a programming language were to incorporate such a callback system into its I/O statements, it would be mirroring VMS/RSX/Windoze. Ada uses the task concept to implement such systems in an operating-system-independent manner.
In the Eunuchs world, it was traditional to create a separate process for each device. That was simpler until you had to read AND write to each device.
Your observations are correct - the operating system interacts with the underlying hardware asynchronously to perform I/O requests.
The behavior of blocking I/O comes from threads. Typically, the OS provides threads as an abstraction for user-mode programs to use. But sometimes, green/lightweight threads are provided by a user-mode virtual machine like in Go, Erlang, Java (Project Loom), etc. If you aren't familiar with threads as an abstraction, read up on some background theory from any OS textbook.
Each thread has a state consisting of a fixed set of registers, a dynamically growing/shrinking stack (for function arguments, function call registers, and return addresses), and a next instruction pointer. The implementation of blocking I/O is that when a thread calls an I/O function, the underlying platform hosting the thread (Java VM, Linux kernel, etc.) immediately suspends the thread so that it cannot be scheduled for execution, and also submits the I/O request to the platform below. When the platform receives the completion of the I/O request, it puts the result on that thread's stack and puts the thread on the scheduler's execution queue. That's all there is to the magic.
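A toy user-space analogy of that mechanism, using POSIX threads: the "blocking" call parks itself on a condition variable, and an asynchronous completion callback wakes it up again. The kernel does the equivalent with its own scheduler queues; this sketch only mirrors the idea and is not how any particular platform actually implements it:

```c
/* Toy illustration of layering a blocking call on top of an asynchronous
 * completion. The caller sleeps on a condition variable; the completion
 * callback (an imaginary driver/lower layer) makes it runnable again. */
#include <pthread.h>
#include <stdbool.h>

struct io_request {
    pthread_mutex_t lock;     /* initialize with PTHREAD_MUTEX_INITIALIZER */
    pthread_cond_t  done_cv;  /* initialize with PTHREAD_COND_INITIALIZER  */
    bool            done;
    int             result;   /* e.g. bytes transferred, or -errno */
};

/* Called by the "lower layer" when the I/O actually finishes. */
void io_complete(struct io_request *req, int result) {
    pthread_mutex_lock(&req->lock);
    req->result = result;
    req->done = true;
    pthread_cond_signal(&req->done_cv);   /* make the sleeping caller runnable */
    pthread_mutex_unlock(&req->lock);
}

/* What a "blocking read" boils down to: submit, then sleep until completion. */
int blocking_read(struct io_request *req) {
    /* ... submit req to the asynchronous engine here ... */
    pthread_mutex_lock(&req->lock);
    while (!req->done)
        pthread_cond_wait(&req->done_cv, &req->lock);  /* thread is suspended */
    int result = req->result;
    pthread_mutex_unlock(&req->lock);
    return result;
}
```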
Why are threads popular? Well, I/O requests happen in some sort of context. You don't just read a file or write a file as a standalone operation; you read a file, run a specific algorithm to process the result, and issue further I/O requests. A thread is one way to keep track of your progress. Another way is known as "continuation passing style", where every time you perform an I/O operation (A), you pass a callback or function pointer to explicitly specify what needs to happen after I/O completion (B), but the call (A) returns immediately (non-blocking / asynchronous). This way of programming asynchronous I/O is considered hard to reason about and even harder to debug because now you don't have a meaningful call stack because it gets cleared after every I/O operation. This is discussed at length in the great essay "What color is your function?".
Note that the platform has no obligation to provide a threading abstraction to its users. The OS or language VM can very well expose an asynchronous I/O API to the user code. But the vast majority of platforms (with exceptions like Node.js) choose to provide threads because it's much easier for humans to reason about.

Gray zone between blocking and non-blocking I/O?

I am familiar with programming according to the two paradigms, blocking and non-blocking, on the JVM (Java/nio, Scala/Akka).
However, I see a kind of gray zone in between that confuses me.
Look at any non-blocking program of your choice: it is full of blocking statements!
For example, each assignment of a variable is a blocking operation that waits for CPU-registers and memory-reads to succeed.
Moreover, non-blocking programs even contain blocking statements that carry out computations on complex in-memory-collections, without violating the non-blocking paradigm.
In contrast to that, the non-blocking paradigm would clearly be violated if we would call some external web-service in a blocking way to receive its result.
But what is in between these extremes? What about reading/writing a tiny file, a local socket, or making an API call to an embedded data storage engine (such as SQLite, RocksDB, etc.)? Is it OK to do blocking reads/writes against these APIs? They usually give strong timing guarantees in practice (say, << 1 ms as long as the OS is not stalled), so there is almost no practical difference to pure in-memory access. As a precise example: is calling RocksDB's get/put from within an Akka Actor considered inadvisable blocking I/O?
So, my question is whether there are rules of thumb or precise criteria that help me decide whether I may stick to a simple blocking statement in my non-blocking program, or whether I should wrap such a statement in non-blocking boilerplate (framework-dependent, e.g., outsourcing such calls to a separate thread pool, nesting one step deeper in a Future or a monad, etc.).
for example, each assignment of a variable is a blocking operation that waits for CPU-registers and memory-reads to succeed
That's not really what is considered "blocking". Those operations are constant time, and that constant is very low (a few cycles in general) compared to the latency of any IO operations (anywhere between thousands and billions of cycles) - except for page faults due to swapped memory, but if those happen regularly you have a problem anyway.
And if we want to get all nitpicky, individual instructions do not fully block a CPU thread as modern CPUs can reorder instructions and execute ones that have no data dependencies out of order while waiting for memory/caches or other more expensive instructions to finish.
Moreover, non-blocking programs even contain blocking statements that carry out computations on complex in-memory-collections, without violating the non-blocking paradigm.
Those are not considered as blocking the CPU from doing work. They should not even block user interactivity if they are correctly designed to present the results to the user when they are done without blocking the UI.
Is it ok to do blocking reads/writes to these APIs?
That always depends on why you are using non-blocking approaches in the first place. What problem are you trying to solve? Maybe one API warrants a non-blocking approach while the other does not.
For example, most file I/O methods are nominally blocking, but writes without fsync can be very cheap, especially if you're not writing to spinning rust, so it can be overkill to avoid those methods on your compute thread pool. On the other hand, one usually does not want to block a thread in a fixed thread pool while waiting for a multi-second database query.

When is polling better than interrupts?

I was looking at this picture and have two questions regarding it:
1. How much faster should a disk be in order for polling to be preferred over interrupts?
I thought that, because of the ISR and the process switching involved when using interrupts, polling would be better with a fast SSD, for example, where polling takes less time than the interrupt path (ISR + scheduler). Am I mistaken?
And the second question is: if my disk is slower than the SSD in my first question, but still fast, is there any reason to prefer polling?
I was wondering if the fact that I'll have lots of I/O read requests is a good enough reason to prefer polling.
thanks!
You could imagine a case where the overhead of running interrupt handlers (invalidating your caches, setting up the interrupt stack to run on) could be slower than actually doing the read or write, in which case I guess polling would be faster.
However, SSDs are fast compared to disk, but still much slower than memory. SSDs still take anywhere from tens of microseconds to milliseconds to complete each I/O, whereas doing the interrupt setup and teardown uses all in-memory operations and probably takes a maximum of, say, 100-1000 cycles (~100ns to 1us).
The main benefit of using interrupts instead of polling is that the idle cost is much lower: you don't have to keep scheduling your I/O threads to poll for more data while there is none available. It has the added benefit that I/O is handled immediately, so if a user types a key, there won't be a pause before the letter appears on the screen while the I/O thread waits to be scheduled. Combining these issues is a mess: inserting arbitrary stalls into your I/O thread makes polling less resource-intensive at the expense of even slower response times. These may be the primary reasons nobody uses polling in kernel I/O designs.
From a user process' perspective (using software interrupt systems such as Unix signals), either way can make sense since polling usually means a blocking syscall such as read() or select() rather than (say) checking a boolean memory-mapped variable or issuing device instructions like the kernel version of polling might do. Having system calls to do this work for us in the OS can be beneficial for performance because the userland thread doesn't get its cache invalidated any more or less by using signals versus any other syscall. However this is pretty OS dependent, so profiling is your best bet for figuring out which codepath is faster.
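To illustrate the two senses of "polling" from a user process, here is a small sketch using poll(2): a zero timeout gives a true "check and return immediately" poll, while an infinite timeout is the blocking-syscall flavour most people actually mean. The helper names are made up for illustration:

```c
/* Sketch contrasting user-space "polling" with blocking on a descriptor.
 * `fd` is assumed to be any readable descriptor (socket, pipe, terminal). */
#include <poll.h>

/* Non-blocking check: returns immediately, >0 if data is ready, 0 if not. */
int is_ready_now(int fd) {
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    return poll(&pfd, 1, 0) > 0;        /* timeout 0 = just peek and return */
}

/* "Polling" in the colloquial user-space sense: a blocking syscall that
 * sleeps in the kernel until the descriptor becomes readable. */
int wait_until_ready(int fd) {
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    return poll(&pfd, 1, -1);           /* timeout -1 = sleep indefinitely */
}
```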

What are blocking IO and non-blocking IO, in simple words?

How would you explain blocking IO and non-blocking IO to a simple mortal? I've found these concepts are not very clear to many of us programmers.
Blocking I/O means that the program's execution is put on hold while the I/O is going on. So the program waits until the I/O is finished and then continues its execution.
In non-blocking I/O the program can continue during I/O operations.
It's a concurrency issue. In the normal case, after an OS kernel receives an I/O op from a user program, that program does not run again until the I/O operation completes. Other programs typically get scheduled in the meantime.
This solves lots of little problems. For example, how does a program know how many bytes were read unless the I/O is complete when read(2) returns? How does it know whether it can reuse a write(2) buffer if the operation is still in progress when write(2) returns? Obviously, a more complex interface is needed for truly asynchronous I/O.
Ultimately it comes down to:
I/O happens synchronously with respect to the program, by blocking the program until I/O is finished
I/O is merely scheduled by a system call, and some notification mechanism exists to communicate the real result
There is a compromise where I/O ops simply fail if they can't be completed immediately. This is the more common use of "non-blocking" I/O in practice.
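A minimal sketch of that "fail immediately" flavour: switch a descriptor to O_NONBLOCK with fcntl(2) and treat EAGAIN/EWOULDBLOCK as "nothing available right now" rather than as an error. The helper name is made up for illustration:

```c
/* Sketch of the "fail immediately" style of non-blocking I/O. */
#include <errno.h>
#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

ssize_t try_read(int fd, char *buf, size_t len) {
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);   /* make reads fail-fast */

    ssize_t n = read(fd, buf, len);
    /* >0: bytes read, 0: EOF, -1 with errno == EAGAIN/EWOULDBLOCK:
     * nothing available right now (the call did not block),
     * -1 with any other errno: a real error. */
    return n;
}
```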
The whole issue is complicated moreover by the effort to schedule multithreaded programs when I/O could conceivably block only one thread, but that's a different question...
Simply said: non-blocking (asynchronous) I/O allows other operations to be carried out while it does its thing, whereas blocking I/O would block those other operations.
