I'm a bit confused about the whole idea of I/O; I want to know how the CPU reads from the disk (a SATA disk, for example).
When a program using read()/write() is compiled with a reference to a specific file and the CPU encounters that reference, does it read from the disk directly (via memory-mapped I/O ports)? Or does the data go into RAM first and then get written back to disk?
I'd suggest reading:
http://www.makelinux.net/books/ulk3/understandlk-CHP-13-SECT-1
With a supplement of:
http://en.wikipedia.org/wiki/Direct_memory_access
With regards to buffering in RAM: most programming languages and operating systems buffer at least part of I/O operations (reads and writes) in memory. This is usually done asynchronously: a buffer is created, filled, and then processed. For a read, the CPU (working with the disk controller) issues I/O instructions to fetch the data, sets aside a place for it in memory, fills that space, and then presents its contents to the program making the request. For a write, this means queuing write operations and their associated data, then sending them off to the I/O controller and eventually the disk to be executed. Buffering can happen in multiple places: in the CPU's caches, in RAM, (sometimes) on the disk controller, or on the hard disk itself. How much buffering is done, and exactly how the abstract sequence of operations I've mentioned is handled, differs depending on your hardware architecture, OS, and task.
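As a very rough illustration of the read side from userspace (the file name and buffer size below are placeholders; the kernel's page cache and the DMA machinery sit underneath the read() calls):

    /* Sketch: a userspace read loop. The kernel fills its page cache from the
     * disk (typically via DMA) and then copies data into buf. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char buf[4096];                      /* our own userspace buffer */
        int fd = open("data.bin", O_RDONLY); /* "data.bin" is a placeholder */
        if (fd < 0) { perror("open"); return 1; }

        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0) {
            /* process n bytes; each read() may be served from the page cache */
            fwrite(buf, 1, (size_t)n, stdout);
        }
        close(fd);
        return 0;
    }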
Main memory is the only large storage area (millions to billions of bytes) that the processor can access directly,
as "Operating System Concepts" puts it.
So if you want to run a program or manipulate some data, they (program and data) must be in main memory.
Like the title says, I don't really understand the use of this syscall. I was writing a program that writes some data to a file, and the tutorial I was following told me to use the sys_sync syscall. But my problem is: why and when should we use this? Isn't the data already written to the file?
The manual says:
sync - Synchronize cached writes to persistent storage
So it is written to the file cache in memory, not on disk.
You rarely have to use sync unless you are writing really important data and need to make sure that data is on disk before you go on. One example of systems that use sync a lot are databases (such as MySQL or PostgreSQL).
So in other words, the data is theoretically in your file, just not on disk, and therefore if you lose power you could lose it. Especially if you have a lot of RAM and many writes in a row, the writes may stay in the cache for a long while, increasing the risk of data loss.
But how can a file not be on the disk? I understand the concept of a cache, but if I wrote to the disk, why would the data be in a different place?
First, when you write to a file, you send the data to the kernel. You don't send it directly to the disk. Some kernel driver is then responsible for writing the data to disk. In my days on Apple II and Amiga computers, I would actually read/write directly to disk. At least the Amiga had DMA, so you could set up a buffer, tell the disk I/O to do a read or a write, and it would send you an interrupt when done. On the Apple II, you had to write loops in assembly language with precise timings to read/write data on floppy disks... A different era!
Although you could, of course, access the disk directly (with a kernel like Linux, though, you'd have to convince the kernel to hand the device over to you first...).
Cache is primarily used for speed. It is very slow to write to disk (as far as a human is concerned, it looks extremely fast, but compared to how much data the CPU can push to the drive, it's still slow).
So what happens is that the kernel has a task to write data to disk. That task wakes up as soon as data appears in the cache and ends once all the caches are transferred to disk. This task works in parallel. You can have one such task per drive (which is especially useful when you have a system such as RAID 1).
If your application fills up the cache, then a further write will block until some of the cache can be replaced.
and the tutorial I've seen told me to use sys_sync syscall
Well that sounds silly, unless you're doing filesystem write benchmarking or something.
If you have one really critical file that you want to make sure is "durable" wrt. power outages before you do something else (like sending a network packet to acknowledge a complete transfer), use fsync(fd) to sync just that one file's data and metadata.
(In asm, call number SYS_fsync from sys/syscall.h, with the file descriptor as the first register arg.)
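In C rather than raw asm, that pattern looks roughly like this (the file name and payload are made-up placeholders):

    /* Sketch: make one file's contents durable before acknowledging anything. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int write_durably(const char *path, const void *data, size_t len) {
        int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0) return -1;
        if (write(fd, data, len) != (ssize_t)len) { close(fd); return -1; }
        if (fsync(fd) != 0) { close(fd); return -1; }  /* block until it's on disk */
        return close(fd);
    }

    int main(void) {
        const char *msg = "transfer complete\n";
        if (write_durably("journal.dat", msg, strlen(msg)) != 0)
            perror("write_durably");
        /* only now is it reasonable to acknowledge the transfer */
        return 0;
    }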
But my problem is why and when should we use this?
Generally never use the sync system call in programs you're writing.
There are interactive use-cases where you'd normally use the wrapper command of the same name, sync(1). e.g. with removable media, to get the kernel started doing write-back now, so unmount will take less time once you finish typing it. Or for some benchmarking use-cases.
The system shutdown scripts may run sync after unmounting filesystems (and remounting / read-only), before making a reboot(2) system call.
Re: why sync(2) exists
No, your data isn't already on disk right after echo foo > bar.txt.
Most OSes, including Linux, do write-back caching, not write-through, for file writes.
You don't want write() system calls to wait for an actual magnetic disk when there's free RAM, because the traditional way to do I/O is synchronous, so simple single-threaded programs wouldn't be able to do anything else (like reading more data or computing anything) while waiting for write() to return. Blocking for ~10 ms on every write system call would be disastrous; that's as long as a whole scheduler timeslice. (It would still be bad even with SSDs, but of course OSes were designed before SSDs were a thing.) Even just queueing up the DMA would be slow, especially for small file writes that aren't a whole number of aligned sectors, so even letting the disk's own write-back caching do the work wouldn't be good enough.
Therefore, file writes do create "dirty" pages of kernel buffers that haven't yet been sent to the disk. Sometimes we can even avoid the IO entirely, e.g. for tmp files that get deleted before anything triggers write-back. On Linux, dirty_writeback_centisecs defaults to 1500 (15 seconds) before the kernel starts write-back, unless it's running low on free pages. (Heuristics for what "low" means use other tunable values).
If you really want writes to flush to disk immediately and wait for data to be on disk, mount with -o sync. Or for one program, have it use open(O_SYNC) or O_DSYNC (for just the data, not metadata like timestamps).
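A minimal sketch of the per-file variant (file name and message are placeholders; O_SYNC would also cover metadata):

    /* Sketch: per-file synchronous writes with O_DSYNC (data only). */
    #define _POSIX_C_SOURCE 200809L
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("log.txt", O_WRONLY | O_CREAT | O_DSYNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        const char *line = "each write() below waits for the data to reach the disk\n";
        if (write(fd, line, strlen(line)) < 0) perror("write");

        close(fd);
        return 0;
    }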
See Are file reads served from dirtied pages in the page cache?
There are other advantages to write-back, including delayed allocation even at the filesystem level. The FS can wait until it knows how big the file will be before even deciding where to put it, allowing better decisions that reduce fragmentation. E.g. a small file can go into a gap that would have been a bad place to start a potentially-large file. (It just has to reserve space to make sure it can put it somewhere.) XFS was one of the first filesystems to do "lazy" delayed allocation, and ext4 has also had the feature for a while.
https://en.wikipedia.org/wiki/XFS#Delayed_allocation
https://en.wikipedia.org/wiki/Allocate-on-flush
https://lwn.net/Articles/323169/
On Linux, a disk write must first go to memory and is then written to the disk at an appropriate later time. Also, while waiting for the disk to complete a write, the CPU can be freed for use by other processes.
In that case, disk write operations should not affect the computing performance of Linux. Is this correct?
To release the CPU from performing write and read operations on peripherals, Direct Memory Access (DMA) is used. The DMA controller, assuming your Linux system has one, is instructed by the CPU to perform the data transfer, so the CPU only needs to initiate it. The DMA controller does work on a bus that the rest of your system might also want to use, but your CPU should not be affected much by a data transfer.
I am learning computer organization but struggling with the following concept. In non-DMA scenarios, do all disk reads follow this sequence to get into main memory:
Disk storage surface -> Disk registers -> CPU registers -> Main memory
Similarly for writes, is the sequence:
Main memory -> CPU registers -> Disk registers -> Disk storage surface
(I know that in a DMA scenario, the CPU only initiates the transfer, after which the contents of the disk are transferred directly to main memory.)
If yes, before DMA came along, was the above sequence a serious bottleneck, given that the CPU registers' capacity is much smaller than main memory or disk storage? Or is it so fast that a human user won't notice in non-DMA modes?
PS: Please bear with my rudimentary terminology, but I hope I conveyed what I want to ask.
Yes, what you describe is what happened in the bad old days with programmed-I/O instead of DMA.
For example, IDE disk-controller hardware used to be less well standardized, so the Linux drivers defaulted to programmed I/O (i.e. a copy loop using x86 IN instructions, since ATA predated memory-mapped I/O registers being common). For decent performance, you had to manually enable DMA in your boot scripts.
But before relying on that, you had to check that manually enabling DMA didn't lead to lockups or, far worse, data corruption.
re: memory-mapped file: nothing to do with how the data gets from disk into the pagecache (or vice versa). mmap() just means your process's address space includes a shared mapping of the same pages that the OS is using to cache the file's contents.
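A rough sketch of what that looks like in practice (the file name is a placeholder):

    /* Sketch: mapping a file gives you pointers into the same page-cache pages
     * the kernel uses; touching them can still fault data in from disk. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("data.bin", O_RDONLY);   /* placeholder file name */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) != 0 || st.st_size == 0) { close(fd); return 1; }

        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        /* no read() copies: this walks the page cache directly */
        size_t newlines = 0;
        for (off_t i = 0; i < st.st_size; i++)
            if (p[i] == '\n') newlines++;
        printf("%zu lines\n", newlines);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }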
I have a C program that runs only weekly and reads a large number of files only once. Since Linux also caches everything that's read, the files fill up the cache needlessly, and this slows the system down a lot unless it has an SSD.
So how do I open and read from a file without filling up the disk cache?
Note:
By disk caching I mean that when you read a file twice, the second time it's read from RAM, not from disk. I.e. data once read from the disk is left in RAM, so subsequent reads of the same file will not need to reread the data from disk.
I believe passing O_DIRECT to open() should help:
O_DIRECT (Since Linux 2.4.10)
Try to minimize cache effects of the I/O to and from this file. In general this will degrade performance, but it is useful in special situations, such as when applications do their own caching. File I/O is done directly to/from user-space buffers. The O_DIRECT flag on its own makes an effort to transfer data synchronously, but does not give the guarantees of the O_SYNC flag that data and necessary metadata are transferred. To guarantee synchronous I/O, O_SYNC must be used in addition to O_DIRECT.
There are further detailed notes on O_DIRECT towards the bottom of the man page, including a fun quote from Linus.
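A minimal sketch of using it, assuming 4096 bytes matches the device's logical block size (the file name is a placeholder; O_DIRECT requires the buffer, file offset, and length to be suitably aligned, hence posix_memalign):

    /* Sketch: O_DIRECT read with an aligned buffer, bypassing the page cache. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("bigfile.dat", O_RDONLY | O_DIRECT);  /* placeholder name */
        if (fd < 0) { perror("open"); return 1; }

        void *buf;
        if (posix_memalign(&buf, 4096, 4096) != 0) { close(fd); return 1; }

        ssize_t n = read(fd, buf, 4096);   /* does not populate the page cache */
        if (n < 0) perror("read");
        else printf("read %zd bytes directly into our buffer\n", n);

        free(buf);
        close(fd);
        return 0;
    }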
You can use posix_fadvise() with the POSIX_FADV_DONTNEED advice to request that the system free the pages you've already read.
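Roughly like this, assuming a single sequential pass over the file (the file name is a placeholder; for a read-only workload the cached pages are clean, so POSIX_FADV_DONTNEED can drop them straight away):

    /* Sketch: read normally, then tell the kernel we won't need those pages again. */
    #define _POSIX_C_SOURCE 200112L
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("weekly-input.dat", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        char buf[1 << 16];
        off_t done = 0;
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0) {
            /* ... process n bytes ... */
            done += n;
            /* drop what we've already consumed from the page cache */
            posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
        }
        close(fd);
        return 0;
    }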
I am using ext4 on a Linux 2.6 kernel. I have records in byte arrays, which can range from a few hundred bytes to 16 MB. Is there any benefit in an application using write() for every record, as opposed to, say, buffering X MB and then using write() on X MB?
If there is a benefit in buffering, what would be a good value for ext4? This question is for someone who has profiled the behavior of the multiblock allocator in ext4.
My understanding is that the filesystem will buffer in multiples of the page size and attempt to flush them to disk. What happens if the buffer provided to write() is bigger than the filesystem buffer? Is this a crude way to force the filesystem to flush to disk?
The "correct" answer depends on what you really want to do with the data.
write(2) is designed as a single trip into kernel space and provides good control over I/O. However, unless the file is opened with O_SYNC, the data goes into the kernel's cache only, not onto disk. O_SYNC changes that to ensure the file is synchronized to disk. The actual writing to disk is issued from the kernel's cache, and ext4 will try to write in as large chunks as it can to minimize fragmentation, IIRC. In general, write(2) on either a buffered or an O_SYNC file is a good way to control whether the data has gone to the kernel or is still sitting in your application's own buffers.
However, for writing lots of records, you might be interested in writev(2), which writes data from a list of buffers. Similarly to write(2), it's an atomic call (though of course that's only in OS semantics, not actually on disk, unless, again, Direct I/O is used).
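A minimal sketch of the writev(2) approach (the file name and record contents are made up):

    /* Sketch: gather several records into one system call with writev(2). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("records.out", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        char *rec1 = "record one\n", *rec2 = "record two\n", *rec3 = "record three\n";
        struct iovec iov[3] = {
            { rec1, strlen(rec1) },
            { rec2, strlen(rec2) },
            { rec3, strlen(rec3) },
        };

        /* one trip into the kernel for all three buffers */
        if (writev(fd, iov, 3) < 0) perror("writev");

        close(fd);
        return 0;
    }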