What is, in simple words, blocking IO and non-blocking IO?

How would you explain blocking IO and non-blocking IO to a simple mortal? I've found these concepts are not very clear to many of us programmers.

Blocking I/O means that the program's execution is put on hold while the I/O is going on. The program waits until the I/O is finished and only then continues its execution.
In non-blocking I/O the program can continue during I/O operations.

It's a concurrency issue. In the normal case, after an OS kernel receives an I/O op from a user program, that program does not run again until the I/O operation completes. Other programs typically get scheduled in the meantime.
This solves lots of little problems. For example, how does a program know how many bytes were read unless the I/O is complete when read(2) returns? How does it know whether it can reuse a write(2) buffer if the operation is still in progress when write(2) returns? Obviously, a more complex interface is needed for truly asynchronous I/O.
Ultimately it comes down to:
I/O happens synchronously with respect to the program, by blocking the program until I/O is finished
I/O is merely scheduled by a system call, and some notification mechanism exists to communicate the real result
There is a compromise where I/O ops simply fail if they can't be completed immediately. This is the more common use of "non-blocking" I/O in practice.
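To make that "fail immediately" flavour concrete, here is a minimal Go sketch (a Unix-like system is assumed): the read end of a pipe is switched into non-blocking mode, and a read with no data available returns EAGAIN instead of putting the program to sleep.

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Create a pipe; nothing will ever be written to its write end.
	var fds [2]int
	if err := syscall.Pipe(fds[:]); err != nil {
		panic(err)
	}
	readFd := fds[0]

	// Switch the read end into non-blocking mode.
	if err := syscall.SetNonblock(readFd, true); err != nil {
		panic(err)
	}

	buf := make([]byte, 64)
	n, err := syscall.Read(readFd, buf)
	// With a blocking descriptor this read would sleep indefinitely;
	// in non-blocking mode it fails at once with EAGAIN/EWOULDBLOCK.
	fmt.Println(n, err) // typically -1 and "resource temporarily unavailable"
}
```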
The whole issue is further complicated by the effort to schedule multithreaded programs in which I/O could conceivably block only one thread, but that's a different question...

Simply said: non-blocking (asynchronous) I/O lets other operations be carried out while it does its thing, whereas blocking I/O blocks other operations.

Related

Will non-blocking I/O be put to sleep during copying data from kernel to user?

I ask this question because I am looking at I/O multiplexing in Go, which uses epoll_wait under the hood.
When a socket is ready, a goroutine will be woken up and begin to read the socket in non-blocking mode. If the read system call will still block while copying data from kernel space to user space, I assume the kernel thread the goroutine is attached to will be put to sleep as well.
I am not sure about that, and hope someone can correct me if I am wrong.
I fail to quite parse what you've written.
I'll make a guess and conjecture that you might be overlooking the fact that the write(2) and read(2) syscalls (and those of their ilk, such as send(2) and recv(2)) on sockets put into non-blocking mode are free to consume (and return, respectively) less data than requested.
In other words, a write(2) call on a non-blocking socket told to write 1 megabyte of data will consume just as much data as currently fits into the associated kernel buffer and return immediately, signalling that it consumed only that much data. The next immediate call to write(2) will likely return EWOULDBLOCK.
The same goes for the read(2) call: if you pass it a buffer large enough to hold 1 megabyte of data, and tell it to read that number of bytes, the call will only drain the contents of the kernel buffer and return immediately, signaling how much data it actually copied. The next immediate call to read(2) will likely return EWOULDBLOCK.
So, any attempt to get data from or put data to the socket succeeds almost immediately: either after the data has been shoveled between the kernel's buffer and user space, or right away with the EAGAIN return code.
Sure, there's supposedly a possibility for an OS thread to be suspended right in the middle of performing such a syscall, but this does not count as "blocking in a syscall."
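A hedged, self-contained Go sketch of that short-write behaviour (Unix-like systems assumed): a non-blocking AF_UNIX socket accepts only as much data as fits into the kernel buffer and then immediately returns EAGAIN, since nobody is draining the other end.

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// A connected pair of sockets; we only ever write to fds[0] and
	// never read from fds[1], so the kernel buffers fill up quickly.
	fds, err := syscall.Socketpair(syscall.AF_UNIX, syscall.SOCK_STREAM, 0)
	if err != nil {
		panic(err)
	}
	if err := syscall.SetNonblock(fds[0], true); err != nil {
		panic(err)
	}

	payload := make([]byte, 1<<20) // pretend we have 1 MiB to send
	sent := 0
	for sent < len(payload) {
		n, err := syscall.Write(fds[0], payload[sent:])
		if n > 0 {
			sent += n // the kernel accepted only part of the buffer
		}
		if err == syscall.EAGAIN || err == syscall.EWOULDBLOCK {
			// Kernel buffer is full. A real program would now wait for
			// writability (epoll/poll/select) instead of giving up.
			fmt.Printf("short write: %d of %d bytes accepted, then EAGAIN\n",
				sent, len(payload))
			return
		}
		if err != nil {
			panic(err)
		}
	}
	fmt.Println("everything fit into the kernel buffer at once")
}
```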
Update to the original answer in response to the following comment of the OP:
<…>
This is what I see in the book
"UNIX Network Programming" (Volume 1, 3rd edition), chapter 6.2:
A synchronous I/O operation causes the requesting process to be blocked until that I/O operation completes. Using these definitions, the first four I/O models—blocking, nonblocking, I/O multiplexing, and signal-driven I/O—are all synchronous because the actual I/O operation (recvfrom) blocks the process.
It uses "blocks" to describe nonblocking I/O operation. That makes me confused.
I still don't understand why the book uses "blocks the process" if the process is actually not blocked.
I can only guess that the book's author intended to highlight that the process is indeed blocked from the moment it enters a syscall until the moment it returns from it. Reads from and writes to a non-blocking socket do block to transfer the data, if available, between the kernel and user space. We colloquially say this does not block because we mean "it does not block, waiting and doing nothing, for an indeterminate amount of time".
The book's author might contrast this to so-called asynchronous I/O (called "overlapped" on Windows™)—where you basically give the kernel a buffer with/for data and ask it to deal with it completely in parallel with your code—in the sense that the relevant syscall returns right away and the I/O is carried out in the background (with regard to your user-space code).
To my knowledge, Go does not use the kernel's async I/O facilities on any platform it supports. You might keep an eye on the developments regarding Linux and its contemporary io_uring subsystem.
Oh, and one more point. The book might (at that point in the narrative, at least) be discussing a simplified "classic" scheme where there are no in-process threads, and the sole unit of concurrency is the process (with a single thread of execution). In this scheme, any syscall obviously blocks the whole process. In contrast, Go works only on kernels which support threads, so in a Go program a syscall never blocks the whole process—only the thread it's called on.
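A tiny Go sketch of that last point: the read below blocks only the goroutine that issued it (and the OS thread carrying it while the syscall is in flight), while the rest of the process keeps running.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	go func() {
		buf := make([]byte, 1)
		// This read blocks the goroutine (and occupies an OS thread while
		// it sits inside the syscall), but not the whole process.
		if _, err := os.Stdin.Read(buf); err == nil {
			fmt.Println("stdin delivered data")
		}
	}()

	// The rest of the program keeps making progress meanwhile.
	for i := 1; i <= 3; i++ {
		fmt.Println("still running, tick", i)
		time.Sleep(300 * time.Millisecond)
	}
}
```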
Let me take yet another stab at explaining the problem as—I perceive—the OP stated it.
The problem of serving multiple client requests is not new—one of the more visible first statements of it is "The C10k problem".
To quickly recap it, a single threaded server with blocking operations on the sockets it manages is only realistically able to handle a single client at a time.
To solve it, there exist two straightforward approaches:
Fork a copy of the server process to handle each incoming client connection.
On an OS which supports threads, fork a new thread inside the same process to handle each incoming client.
They have their pros and cons, but they both suck with regard to resource usage, and—more importantly—they do not play well with the fact that most clients perform I/O at a relatively low rate and bandwidth compared to the processing resources available on a typical server.
In other words, when serving a typical TCP/IP exchange with a client, the serving thread most of the time sleeps in the write(2) and read(2) calls on the client socket.
This is what most people mean when talking about "blocking operations" on sockets: if a socket is blocking, an operation on it will block until it can actually be carried out, and the originating thread will be put to sleep for an indeterminate amount of time.
Another important thing to note is that when the socket becomes ready, the amount of work done is typically minuscule compared to the amount of time slept between the wakeups.
While the thread sleeps, its resources (such as memory) are effectively wasted, as they cannot be used to do any other work.
Enter "polling". It combats the problem of wasted resources by noticing that the points of readiness of networked sockets are relatively rare and far between, so it makes sense to have lots of such sockets served by a single thread: this keeps the thread almost as busy as theoretically possible, and also allows scaling out when needed: if a single thread is unable to cope with the data flow, add another thread, and so on.
This approach is definitely cool, but it has a downside: the code which reads and writes data must be rewritten to use callback style instead of the original plain sequential style. Writing with callbacks is hard: you usually have to implement intricate buffer management and state machines to deal with this.
The Go runtime solves this problem by adding another layer of scheduling for its execution-flow units—goroutines: for goroutines, operations on sockets are always blocking, but when a goroutine is about to block on a socket, this is transparently handled by suspending only the goroutine itself—until the requested operation is able to proceed—and using the thread the goroutine was running on to do other work¹.
This gives you the best of both approaches: the programmer may write classic no-brainer sequential, callback-free networking code, yet the threads used to handle networking requests are fully utilized².
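For illustration, the kind of sequential, callback-free code this buys you (a toy echo server; the address is chosen arbitrarily): each connection is handled by a plain loop in its own goroutine, and the runtime's netpoller does the multiplexing behind the scenes.

```go
package main

import (
	"io"
	"log"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", "localhost:9000")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept() // "blocks" only this goroutine
		if err != nil {
			log.Fatal(err)
		}
		// One cheap goroutine per connection: no callbacks, no hand-rolled
		// state machine, yet a handful of OS threads serve all connections.
		go func(c net.Conn) {
			defer c.Close()
			// Read/Write look blocking to this code; under the hood the
			// runtime parks the goroutine and waits for readiness (epoll
			// on Linux) on its behalf.
			io.Copy(c, c)
		}(conn)
	}
}
```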
As to the original question of blocking: both the goroutine and the thread it runs on are indeed blocked while the data transfer on a socket is happening, but since what happens is data shoveling between a kernel buffer and a user-space buffer, the delay is small most of the time, and is no different from the classic "polling" case.
Note that performing syscalls—including I/O on non-pollable descriptors—in Go (at least up to and including Go 1.14) does block both the calling goroutine and the thread it runs on, but is handled differently from I/O on pollable descriptors: when a special monitoring thread notices a goroutine has spent more than a certain amount of time in a syscall (20 µs, IIRC), the runtime pulls the so-called "processor" (a runtime entity which runs goroutines on OS threads) from under the goroutine and tries to make it run another goroutine on another OS thread; if there is a goroutine wanting to run but no free OS thread, the Go runtime creates another one.
Hence "normal" blocking I/O is still blocking in Go in both senses: it blocks both goroutines and OS threads, but the Go scheduler makes sure the program as a whole is still able to make progress.
This could arguably be a perfect case for using true asynchronous I/O provided by the kernel, but it's not there yet.
¹ See this classic essay for more info.
² The Go runtime is certainly not the first one to pioneer this idea. For instance, look at the State Threads library (and the more recent libtask) which implement the same approach in plain C; the ST library has superb docs which explain the idea.

Where does blocking I/O come from?

My understanding is that the hardware architecture and operating systems are designed not to block the CPU. When any kind of blocking operation needs to happen, the operating system registers an interrupt and moves on to something else, making sure the precious time of the CPU is always used effectively.
It makes me wonder why most programming languages were designed with blocking APIs, but most importantly, since the operating system works in an asynchronous way when it comes to I/O, registering interrupts and dealing with results when they are ready later on, I'm really puzzled about how our programming language APIs escape this asynchrony. How does the OS provide synchronous system calls for programming languages with blocking APIs?
Where does this synchrony come from? Certainly not from the hardware level. So, is there an infinite loop somewhere I don't know about, spinning and spinning until some interrupt is triggered?
My understanding is that the hardware architecture and the operating systems are designed not to block the cpu.
Any rationally designed operating system would have a system service interface that does what you say. However, there are many non-rational operating systems that do not work in this manner at the process level.
Blocking I/O is simpler to program than non-blocking I/O. Let me give you an example from the VMS operating system (Windoze works the same way under the covers). VMS has system services called SYS$QIO and SYS$QIOW, that is, Queue I/O Request, and Queue I/O Request and Wait. The system services have identical parameters. One pair of parameters is the address of a completion routine and a parameter to pass to that routine. However, these parameters are rarely used with SYS$QIOW.
If you do a SYS$QIO call, it returns immediately. When the I/O operation completes, the completion routine is called as a software interrupt. You then have to do interrupt programming in your application. We did this all the time. If you wanted your application to be able to read from 100 input streams simultaneously, this was the way you had to do it. It's just more complicated than doing simple blocking I/O with one device.
If a programming language were to incorporate such a callback system into its I/O statements, it would be mirroring VMS/RSX/Windoze. Ada uses the task concept to implement such systems in an operating-system-independent manner.
In the Eunuchs world, it was traditional to create a separate process for each device. That was simpler until you had to read AND write to each device.
Your observations are correct - the operating system interacts with the underlying hardware asynchronously to perform I/O requests.
The behavior of blocking I/O comes from threads. Typically, the OS provides threads as an abstraction for user-mode programs to use. But sometimes, green/lightweight threads are provided by a user-mode virtual machine like in Go, Erlang, Java (Project Loom), etc. If you aren't familiar with threads as an abstraction, read up on some background theory from any OS textbook.
Each thread has a state consisting of a fixed set of registers, a dynamically growing/shrinking stack (for function arguments, local variables, and return addresses), and a next-instruction pointer. The implementation of blocking I/O is that when a thread calls an I/O function, the underlying platform hosting the thread (Java VM, Linux kernel, etc.) immediately suspends the thread so that it cannot be scheduled for execution, and also submits the I/O request to the platform below. When the platform receives the completion of the I/O request, it puts the result on that thread's stack and puts the thread on the scheduler's execution queue. That's all there is to the magic.
Why are threads popular? Well, I/O requests happen in some sort of context. You don't just read a file or write a file as a standalone operation; you read a file, run a specific algorithm to process the result, and issue further I/O requests. A thread is one way to keep track of your progress. Another way is known as "continuation-passing style": every time you perform an I/O operation (A), you pass a callback or function pointer to explicitly specify what needs to happen after the I/O completes (B), but the call (A) returns immediately (non-blocking / asynchronous). This way of programming asynchronous I/O is considered hard to reason about and even harder to debug, because you no longer have a meaningful call stack: it gets cleared after every I/O operation. This is discussed at length in the great essay "What color is your function?".
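To make the contrast concrete, here is a hedged Go sketch of the two shapes; readFileAsync is a made-up helper (faked with a goroutine) used only to show what the continuation-passing calling code looks like next to the sequential version.

```go
package main

import (
	"fmt"
	"os"
)

// Sequential style: the call stack remembers where we are.
func processSequential(path string) error {
	data, err := os.ReadFile(path) // blocks this goroutine/thread
	if err != nil {
		return err
	}
	fmt.Println("got", len(data), "bytes")
	return nil
}

// Continuation-passing style: the "rest of the computation" is passed
// explicitly as a callback. readFileAsync is hypothetical; here it is
// faked with a goroutine just to show the shape of the calling code.
func readFileAsync(path string, done func(data []byte, err error)) {
	go func() {
		data, err := os.ReadFile(path)
		done(data, err)
	}()
}

func processCPS(path string, finished chan<- struct{}) {
	readFileAsync(path, func(data []byte, err error) {
		// No meaningful call stack here: the context lives in the closure.
		if err != nil {
			fmt.Println("error:", err)
		} else {
			fmt.Println("got", len(data), "bytes")
		}
		close(finished)
	})
}

func main() {
	// The path is chosen only for illustration.
	_ = processSequential("/etc/hostname")

	done := make(chan struct{})
	processCPS("/etc/hostname", done)
	<-done
}
```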
Note that the platform has no obligation to provide a threading abstraction to its users. The OS or language VM can very well expose an asynchronous I/O API to the user code. But the vast majority of platforms (with exceptions like Node.js) choose to provide threads because it's much easier for humans to reason about.

Does "synchronized I/O" always mean "blocking I/O"?

I'm just confused about "synchronized/asynchronized I/O" and "blocking/non-blocking I/O". I suppose that "synchronized I/O" always means some kind of blocking I/O, like read/write: they're blocking operations, so they're "synchronized I/O".
Please kindly help correct me!
"Asynchronous" or "non-blocking" I/O are, indeed, effectively synonymous. However, if we're using Linux terminology, "blocking" and "synchronized" I/O are different.
"Blocking" just tells you that the syscall won't return until the kernel has recorded the data... somewhere. There's no guarantee that this record is persistent in the event of an unexpected power loss or hardware failure; it can simply be a writeahead cache, for example -- so your blocking call can return at a point where other processes running at the time can see the write, but where that write would be lost if a power failure took place.
"Synchronized" in the O_SYNC sense tells you that the syscall won't return until the data is actually persisted to hardware.
Thus: All synchronized I/O is blocking, but not all blocking I/O is synchronized.
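A small Go sketch of that distinction (Linux semantics assumed, error handling on the writes trimmed): both opens produce blocking writes, but only the O_SYNC one is also synchronized, i.e. the write does not return until the data has reached stable storage.

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Blocking but NOT synchronized: returns once the kernel has the
	// data in its page cache; a power loss right after can still lose it.
	f1, err := os.OpenFile("plain.log", os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0o644)
	if err != nil {
		log.Fatal(err)
	}
	f1.Write([]byte("buffered by the kernel\n"))
	f1.Close()

	// Blocking AND synchronized: O_SYNC makes each write(2) wait until
	// the data (and required metadata) has been persisted to the device.
	f2, err := os.OpenFile("durable.log", os.O_WRONLY|os.O_CREATE|os.O_APPEND|os.O_SYNC, 0o644)
	if err != nil {
		log.Fatal(err)
	}
	f2.Write([]byte("persisted before write returns\n"))
	f2.Close()
}
```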

nodejs spawns threads implicitly by delegating the I/O to the kernel. How is this different than a server that makes a thread per request

Following up on ideas from these two previous questions I had:
When a goroutine blocks on I/O how does the scheduler identify that it has stopped blocking?
When doing asynchronous I/O, how does the kernel determine if an I/O operation is completed?
I've been looking into nodejs recently. It's advertised as "single threaded", which is partially true since all your JS does run on one thread, but from what I've read, in the background, node achieves this by delegating the I/O tasks to the kernel so that it doesn't get stuck having to wait for the response.
What I'm having difficulty understanding is how this is any different from the paradigms where you explicitly create a thread per request.
Could someone explain the differences in depth?
This would be true if node created one thread for each I/O request. But, of course, it doesn't do that. It has an I/O engine that understands the best way to do I/O on each platform.
What nodejs hides from you is not some naive implementation where a scheduling entity waits for each request to complete, but a sophisticated implementation that understands the optimal way to do I/O on every platform on which it is implemented.
Updates:
If both approaches need the kernel for I/O aren't they both creating a kernel thread per request?
No. There are lots of ways to use the kernel for I/O that don't require a kernel thread per request. They differ from platform to platform. Windows has IOCP. Linux has epoll. And so on.
If nodejs somehow is using a fixed amount of threads and queueing the I/O operations, isn't that slower than a thread per request?
No, it's typically much faster for a variety of reasons that depend on the specifics of each platform. Here are a few advantages:
You can avoid "thundering herds" when lots of I/O completes at once. Instead, you can wake just the number of threads that can usefully run at the same time.
You can avoid needing lots of context switches to get all the different threads to execute. Instead, each thread can handle completion after completion.
You don't have to put each thread on a wait queue for each I/O operation. Instead, you can use a single wait queue for the group of threads.
Just to give you an idea of how significant it can be, consider the difference between using a thread per I/O and using epoll on Linux. If you use a thread per I/O, that means each I/O operation requires a thread to place itself on a wait queue, that thread to block, that thread to be unblocked, a context switch to occur to that thread, and that thread to remove itself from the wait queue.
By contrast, with epoll, a single thread can service any number of I/O completions without having to be rescheduled or added to or removed from a wait queue for each I/O. Similarly, a thread can issue a number of I/O requests without being descheduled. This difference is massive.
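A hedged, Linux-only Go sketch of the epoll side of that comparison, using the raw wrappers from the syscall package: two pipes stand in for client sockets, both are registered once with one epoll instance, and a single thread then services every readiness event from one epoll_wait loop (a real server would also handle partial reads, EPOLLOUT, edge triggering, and so on).

```go
//go:build linux

package main

import (
	"fmt"
	"syscall"
	"time"
)

func main() {
	// Two pipes stand in for "many client sockets".
	var p1, p2 [2]int
	syscall.Pipe(p1[:])
	syscall.Pipe(p2[:])
	for _, fd := range []int{p1[0], p2[0]} {
		syscall.SetNonblock(fd, true)
	}

	// One epoll instance watches both read ends.
	epfd, err := syscall.EpollCreate1(0)
	if err != nil {
		panic(err)
	}
	for _, fd := range []int{p1[0], p2[0]} {
		ev := syscall.EpollEvent{Events: syscall.EPOLLIN, Fd: int32(fd)}
		if err := syscall.EpollCtl(epfd, syscall.EPOLL_CTL_ADD, fd, &ev); err != nil {
			panic(err)
		}
	}

	// Simulate peers that become ready at different times.
	go func() {
		time.Sleep(100 * time.Millisecond)
		syscall.Write(p1[1], []byte("from peer one"))
		time.Sleep(100 * time.Millisecond)
		syscall.Write(p2[1], []byte("from peer two"))
	}()

	// A single thread services every readiness event; it is never parked
	// on a per-descriptor wait queue for each individual I/O operation.
	events := make([]syscall.EpollEvent, 8)
	buf := make([]byte, 256)
	for handled := 0; handled < 2; {
		n, err := syscall.EpollWait(epfd, events, -1)
		if err == syscall.EINTR {
			continue
		}
		if err != nil {
			panic(err)
		}
		for i := 0; i < n; i++ {
			r, _ := syscall.Read(int(events[i].Fd), buf)
			if r > 0 {
				fmt.Printf("fd %d ready: %q\n", events[i].Fd, buf[:r])
				handled++
			}
		}
	}
}
```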

when is non-blocking I/O used?

I am trying to understand in which cases nodejs will be faster than the alternatives. I completely understand the terms asynchronous I/O and non-blocking I/O but can't think of a use case where they would be useful. Can somebody give me an example?
Node is a prime example of why async I/O is useful.
Node is (as far as the user is concerned) single threaded, so waiting for synchronous I/O would stop the only thread that is executing code. Since there are no guarantees how long I/O will take, that could/would make Node code run extremely slowly.
That's why Node pretty much uses only async I/O: it allows the single thread to quickly offload I/O work to the operating system and continue executing code without interruption until the operating system notifies Node that the I/O operation is done.
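Node's event loop is JavaScript-specific, but the "start the I/O, keep executing, react when the OS says it's done" pattern can be roughly sketched in Go (the URL is just a placeholder); Go uses an extra goroutine where Node uses a completion callback, so treat this as an analogy rather than a faithful model of Node.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Kick off the I/O and carry on; the result arrives on a channel
	// when the operation completes (a rough analogue of Node handing
	// I/O to the OS and getting a completion event later).
	result := make(chan string, 1)
	go func() {
		resp, err := http.Get("https://example.com/")
		if err != nil {
			result <- "request failed: " + err.Error()
			return
		}
		defer resp.Body.Close()
		result <- "request finished: " + resp.Status
	}()

	// Meanwhile, the "main line" of the program keeps executing.
	for i := 0; i < 3; i++ {
		fmt.Println("doing other work", i)
		time.Sleep(200 * time.Millisecond)
	}

	fmt.Println(<-result)
}
```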
NodeJS is basically server-side JavaScript based on a single-threaded concept, so we have to manage all the I/O and CPU work on that one thread.
We know that I/O operations are basic blocking operations for the running thread (for example, an I/O operation may involve getting input from a user, or reading a large file from a data center; these operations may hang up the thread for some time, which can result in many client requests being held up).
To avoid the cases above, NodeJS came up with the concept of single-threaded asynchronous non-blocking I/O (which also avoids the overhead of creating multiple threads, as with multi-threading).
