In a Xen setup, I/O accesses from guest VMs go through a privileged domain called dom0, which is essentially a modified Linux kernel with calls to and from the Xen hypervisor. For block I/O, Xen uses a split driver model: the front-end of the driver lives in the guest VM and the back-end in dom0. The back-end simply builds a 'bio' structure and invokes submit_bio(), as in traditional Linux block driver code.
My goal is to check whether there is any problem with the data written to disk (lost data, silently corrupted writes, misdirected writes, etc.). To do that I need to read back the data that was written and compare it with an in-memory copy (a common disk feature known as 'read after write'). My question is: is it not possible to invoke __bread() from my back-end driver? The kernel crashes when __bread() is invoked. Can anyone explain why? Also, if that isn't possible, what other ways are there to read a specific block of data from the disk in the driver's bottom half?
Can I intercept and clone the bio structure of the writes, change the operation to a read in the new bio, and invoke submit_bio() again? I tried that, but the sector number in the bio passed to the completion callback is some seemingly random value, not the one I submitted.
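For reference, this is roughly the shape of the read-back I'm trying to issue (a simplified sketch against the older submit_bio(rw, bio)/bi_sector API; newer kernels use bi_iter.bi_sector and a one-argument submit_bio()). I allocate a fresh bio and page so the original write buffer stays intact, and remember the target sector in bi_private because the sector field of the completed bio may already have been advanced by the lower layers:

    /*
     * Sketch only: older kernel API assumed (submit_bio(rw, bio), bi_sector,
     * two-argument bi_end_io).  Field names differ on newer kernels.
     */
    #include <linux/bio.h>
    #include <linux/blkdev.h>
    #include <linux/gfp.h>
    #include <linux/mm.h>
    #include <linux/slab.h>

    struct verify_ctx {
        sector_t sector;            /* sector we asked to read back */
        struct page *page;          /* freshly allocated read buffer */
    };

    static void verify_end_io(struct bio *bio, int err)
    {
        struct verify_ctx *ctx = bio->bi_private;

        if (!err) {
            /* compare page_address(ctx->page) with the cached write copy here */
        }
        __free_page(ctx->page);
        kfree(ctx);
        bio_put(bio);
    }

    /* queue a one-page verification read starting at 'sector' on 'bdev' */
    static int submit_verify_read(struct block_device *bdev, sector_t sector)
    {
        struct verify_ctx *ctx = kzalloc(sizeof(*ctx), GFP_NOIO);
        struct bio *bio;

        if (!ctx)
            return -ENOMEM;
        ctx->sector = sector;
        ctx->page = alloc_page(GFP_NOIO);
        if (!ctx->page) {
            kfree(ctx);
            return -ENOMEM;
        }

        bio = bio_alloc(GFP_NOIO, 1);
        if (!bio) {
            __free_page(ctx->page);
            kfree(ctx);
            return -ENOMEM;
        }
        bio->bi_bdev    = bdev;
        bio->bi_sector  = sector;
        bio->bi_end_io  = verify_end_io;
        bio->bi_private = ctx;
        bio_add_page(bio, ctx->page, PAGE_SIZE, 0);

        submit_bio(READ, bio);      /* verify_end_io runs on completion */
        return 0;
    }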
Thanks.
If this were my task, I'd first try writing a new I/O scheduler. Start by copying the cfq, deadline, noop, or as (anticipatory) scheduler code and modify it to self-submit read commands after accepting write requests. noop would probably be the easiest one to modify to read immediately after each write and propagate errors upwards, though I can't imagine the performance would be very good. If you use one of the other schedulers as a base, it would probably be much harder to signal an error immediately after the write -- a few seconds might elapse before reads are scheduled again -- so it would really only be useful as a diagnostic after the fact, not something that could benefit applications directly.
From what I understand, the disk device has a queue that stores read/write requests from the Linux kernel. What happens when the device doesn't drain the queue fast enough (i.e. it overflows)?
Does this queue extend (logically) into DRAM?
Can some requests be lost?
Does this queue extend (logically) into DRAM?
Where do you think that queue is in the first place? It's in RAM.
The I/O buffering infrastructure of any operating system serves one purpose: to avoid, as far as possible, blocking whatever program is trying to do an I/O operation.
For example, imagine a program that writes data to a file. To do that, it issues a write system call. Inside the operating system, that call goes to the file system driver, which decides which disk sectors get changed.
That change command then goes to the I/O subsystem, which puts it in a queue. If that queue is full, the request can't be queued until space frees up, which means the write call blocks.
Very simple: as long as the device can't keep up, the writing program is stopped inside the write call. That's pretty logical. It's like trying to push mail into a full postbox: until someone takes the mail out at the other end, you can't push in new mail, so the postman has to wait.
The queue doesn't extend further into RAM. There is a disk cache holding dirty pages that the OS really would like to write to disk. Some programs may even block while waiting for their dirty pages to be written, and as programs get blocked, they stop writing further data to disk. Pretty self-limiting, actually.
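As a rough illustration of that self-limiting behaviour (hypothetical file name, arbitrary sizes): early write() calls return almost instantly because they only land in the page cache, and once the kernel's dirty-page limits are reached they start blocking until the device has drained enough data.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        static char buf[1 << 20];               /* 1 MiB per write */
        memset(buf, 0xAB, sizeof(buf));

        int fd = open("/tmp/dirty-test.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return 1;

        for (int i = 0; i < 4096; i++) {        /* try to write 4 GiB */
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            if (write(fd, buf, sizeof(buf)) < 0)
                break;
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                        (t1.tv_nsec - t0.tv_nsec) / 1e6;
            /* early writes: microseconds; once dirty limits are hit: much longer */
            printf("write %4d took %8.2f ms\n", i, ms);
        }
        close(fd);
        return 0;
    }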
With kernel AIO and O_DIRECT|O_SYNC, there is no copying into kernel buffers, and it is possible to get fine-grained notification when data is actually flushed to disk. However, it requires the data to be held in user-space buffers for io_prep_pwrite().
With splice(), it is possible to move data directly to disk from kernel-space buffers (pipes) without ever having to copy it around. However, splice() returns immediately after the data is queued and does not wait for the actual writes to the disk.
The goal is to move data from sockets to disk without copying it around while getting confirmation that it has been flushed out. How to combine both previous approaches?
By combining splice() with O_SYNC, I expect splice() to block, and one would have to use multiple threads to mask the latency. Alternatively, one could use the asynchronous io_prep_fsync()/io_prep_fdsync(), but that waits for all data to be flushed, not for a specific write. Neither is perfect.
What would be required is a combination of splice() with kernel AIO, allowing zero copy and asynchronous confirmation of writes, such that a single event driven thread can move data from sockets to the disk and get confirmations when required, but this doesn't seem to be supported. Is there a good workaround / alternative approach?
To get a confirmation of the writes, you can't use splice().
There's AIO stuff in userspace, but if you were doing it in the kernel it might come down to finding out which bios (block I/O structures) are generated and waiting for those:
Block I/O structure:
http://www.makelinux.net/books/lkd2/ch13lev1sec3
If you want to use AIO, you will need to use io_getevents():
http://man7.org/linux/man-pages/man2/io_getevents.2.html
Here are some examples on how to perform AIO:
http://www.fsl.cs.sunysb.edu/~vass/linux-aio.txt
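A minimal sketch combining O_DIRECT|O_SYNC with io_prep_pwrite()/io_getevents() (hypothetical file name; compile with -laio; the buffer must be aligned because of O_DIRECT):

    #define _GNU_SOURCE                 /* for O_DIRECT */
    #include <fcntl.h>
    #include <libaio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_WRONLY | O_CREAT | O_DIRECT | O_SYNC, 0644);
        if (fd < 0)
            return 1;

        /* O_DIRECT requires aligned buffers and transfer sizes */
        void *buf;
        if (posix_memalign(&buf, 4096, 4096))
            return 1;
        memset(buf, 'x', 4096);

        io_context_t ctx = 0;
        if (io_setup(8, &ctx))          /* allow up to 8 in-flight requests */
            return 1;

        struct iocb cb;
        struct iocb *list[1] = { &cb };
        io_prep_pwrite(&cb, fd, buf, 4096, 0);
        if (io_submit(ctx, 1, list) != 1)
            return 1;

        /* completes only when this particular write is done; with
           O_DIRECT|O_SYNC that means it has reached the device */
        struct io_event ev;
        io_getevents(ctx, 1, 1, &ev, NULL);

        io_destroy(ctx);
        free(buf);
        close(fd);
        return 0;
    }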
If you do it from userspace and use msync(), it's still kind of up in the air whether the data is actually on spinning rust yet.
msync() docs:
http://man7.org/linux/man-pages/man2/msync.2.html
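For instance, a minimal msync() sketch (assumes data.bin already exists and is at least one page long):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDWR);      /* assumed to exist, >= 1 page */
        if (fd < 0)
            return 1;

        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        memcpy(p, "new contents", 12);

        /* MS_SYNC returns only after the dirty pages have been handed to the
           device; whether they are physically on the medium still depends on
           the drive's own write cache. */
        msync(p, 4096, MS_SYNC);

        munmap(p, 4096);
        close(fd);
        return 0;
    }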
You might have to soften your expectations in order to make it more robust, because it can be very expensive to be truly sure that the writes are physically on disk.
The 'highest' typical standard for write assurance in the face of something like power loss is a journal that records every operation that modifies the storage. The journal itself is append-only, and when you play it back you can see whether each entry is complete. The very last journal entry may not be complete, so something may still potentially be lost.
My motivation
I'd love to write a distributed file system using FUSE. I'm still designing the code before I jump in. It'll possibly be written in C or Go; the question is, how do I deal with network I/O in parallel?
My problem
More specifically, I want my file system to write locally and have a thread handle the network overhead asynchronously. In my case it doesn't matter if the data is slightly delayed; I simply want to avoid slow file writes caused by the code having to contact some slow server somewhere.
My understanding
There are two conflicting ideas in my head. One is that the FUSE kernel module uses the ABI of my program to hijack the process and call the specific FUSE functions I implemented (sync or async, whatever); the other is that the program is running and blocking to receive events from the kernel module (which I don't think is the case, but I could be wrong).
Whatever it is, does it mean I can simply start a thread and do network stuff? I'm a bit lost on how that works. Thanks.
You don't need to do any hijacking. The FUSE kernel module registers as a filesystem provider (of type fusefs). It then services read/write/open/etc. calls by dispatching them to the user-mode process. When that process returns, the kernel module gets the return value and returns from the corresponding system call.
If you want the server (i.e. the user-mode process) to be asynchronous and multi-threaded, all you have to do is dispatch the operation (assuming it's a write -- you can't parallelize input this way) to another thread in that process and return immediately to FUSE. That way, your user-mode process can write out to the remote server at its leisure.
You could similarly try to parallelize read, but the issue here is that you won't be able to return to FUSE (and thus release the reading process) until you have at least the beginning of the data read.
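A rough sketch of that write-dispatch idea using libfuse 2's high-level API (the queue, the net_worker thread, and the "push to remote" step are all hypothetical; build with the flags from pkg-config fuse):

    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <errno.h>
    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/types.h>

    /* hypothetical work item: a private copy of the data for the remote server */
    struct net_job {
        char *path;
        char *data;
        size_t size;
        off_t offset;
        struct net_job *next;
    };

    static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t q_cond = PTHREAD_COND_INITIALIZER;
    static struct net_job *q_head, *q_tail;

    static void enqueue(struct net_job *job)
    {
        pthread_mutex_lock(&q_lock);
        job->next = NULL;
        if (q_tail)
            q_tail->next = job;
        else
            q_head = job;
        q_tail = job;
        pthread_cond_signal(&q_cond);
        pthread_mutex_unlock(&q_lock);
    }

    /* background thread: drains the queue and talks to the slow server */
    static void *net_worker(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&q_lock);
            while (!q_head)
                pthread_cond_wait(&q_cond, &q_lock);
            struct net_job *job = q_head;
            q_head = job->next;
            if (!q_head)
                q_tail = NULL;
            pthread_mutex_unlock(&q_lock);

            /* a hypothetical push_to_remote(job) would go here */
            free(job->path);
            free(job->data);
            free(job);
        }
        return NULL;
    }

    /* FUSE write handler: do the fast local write, queue the network copy, return */
    static int myfs_write(const char *path, const char *buf, size_t size,
                          off_t off, struct fuse_file_info *fi)
    {
        (void)fi;
        /* ... local write goes here ... */

        struct net_job *job = malloc(sizeof(*job));
        if (!job)
            return -ENOMEM;
        job->path = strdup(path);
        job->data = malloc(size);
        if (!job->path || !job->data) {
            free(job->path);
            free(job->data);
            free(job);
            return -ENOMEM;
        }
        memcpy(job->data, buf, size);
        job->size = size;
        job->offset = off;
        enqueue(job);               /* returns immediately; net_worker does the rest */

        return (int)size;           /* tell FUSE the write has completed */
    }

    /* Start net_worker with pthread_create() before fuse_main(), and set
       .write = myfs_write in the struct fuse_operations passed to fuse_main(). */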
I'm currently working on an audio recording application that fetches up to 8 audio streams from the network and saves the data to the disk (simplified ;) ).
Right now, each stream is handled by its own thread, and that same thread also does the saving to disk.
That means I have 8 different threads performing writes on the same disk, each into a different file.
Do you think disk I/O performance would improve if all the writing were done by one common thread (that sequentially writes the data into the particular files)?
OS is an embedded Linux, the "disk" is a CF card, the application is written in C.
Thanks for your ideas
Nick
The short answer: Given that you are writing to a Flash disk, I wouldn't expect the number of threads to make much difference one way or another. But if it did make a difference, I would expect multiple threads to be faster than a single thread, not slower.
The longer answer:
I wrote a similar program to the one you describe about 6 years ago -- it ran on an embedded PowerPC Linux card and read/wrote multiple simultaneous audio files to/from a SCSI hard drive. I originally wrote it with a single thread doing I/O, because I thought that would give the best throughput, but it turned out that that was not the case.
In particular, when multiple threads were reading/writing at once, the SCSI layer was aware of all the pending requests from all the different threads, and was able to reorder the I/O requests such that seeking of the drive head was minimized. In the single-thread-IO scenario, on the other hand, the SCSI layer knew only about the single "next" outstanding I/O request and thus could not do that optimization. That meant extra travel for the drive head in many cases, and therefore lower throughput.
Of course, your application is not using SCSI or a rotating drive with heads that need seeking, so that may not be an issue for you -- but there may be other optimizations that the filesystem/hardware layer can do if it is aware of multiple simultaneous I/O requests. The only real way to find out is to try various models and measure the results.
My suggestion would be to decouple your disk I/O from your network I/O by moving your disk I/O into a thread-pool. You can then vary the maximum size of your I/O-thread-pool from 1 to N, and for each size measure the performance of the system. That would give you a clear idea of what works best on your particular hardware, without requiring you to rewrite the code more than once.
If it's embedded Linux, I guess your machine has only one processor/core. In that case threads won't improve I/O performance at all. Of course the Linux block subsystem works well in a concurrent environment, but in your case (if my guess about the number of cores is right) there can't be a situation in which several threads actually do something simultaneously.
If my guess is wrong and you have more than one core, then I'd suggest benchmarking the disk I/O. Write a program that writes a lot of data from different threads and another that does the same from only one thread. The results will show you everything you want to know.
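Something like this hypothetical micro-benchmark would do; run it with 1 and then with 8 threads and compare the wall-clock times (file names and sizes are arbitrary; compile with -pthread):

    #include <fcntl.h>
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define BLOCK  (64 * 1024)
    #define CHUNKS 256

    static void *writer(void *arg)
    {
        char name[64];
        char buf[BLOCK];
        memset(buf, 0x5a, sizeof(buf));
        snprintf(name, sizeof(name), "bench_%ld.dat", (long)(intptr_t)arg);

        int fd = open(name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return NULL;
        for (int i = 0; i < CHUNKS; i++)
            if (write(fd, buf, sizeof(buf)) < 0)
                break;
        fsync(fd);                  /* make sure the data really hits the CF card */
        close(fd);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int n = (argc > 1) ? atoi(argv[1]) : 1;
        pthread_t th[32];

        if (n < 1 || n > 32)
            return 1;
        for (int i = 0; i < n; i++)
            pthread_create(&th[i], NULL, writer, (void *)(intptr_t)i);
        for (int i = 0; i < n; i++)
            pthread_join(th[i], NULL);
        return 0;
    }

Then time it on the target hardware, e.g. 'time ./bench 1' versus 'time ./bench 8'.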
I think there is no big difference between the multithreaded and single-threaded solutions in your case, but with multithreading you can synchronize between the receiving threads, and no single thread can hold up the others if it blocks in some system call.
I did practically the same thing on an embedded system. The problem was high CPU usage whenever the kernel flushed many cached dirty pages to the CF card: the pdflush kernel process took all the CPU time at those moments, and if you are receiving a stream via UDP, packets can be dropped because the CPU is busy when they arrive. I solved that by calling fdatasync() every time a modest amount of data had been received.
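Something along these lines (the wrapper name and the 256 KiB threshold are just illustrative; tune the threshold for your CF card):

    #include <unistd.h>

    #define FLUSH_THRESHOLD (256 * 1024)    /* illustrative; tune for your card */

    /* write() wrapper that flushes periodically so pdflush never has a big
       backlog of dirty pages to push out in one burst */
    static ssize_t write_and_flush(int fd, const void *buf, size_t len)
    {
        static size_t since_flush;
        ssize_t n = write(fd, buf, len);

        if (n > 0) {
            since_flush += (size_t)n;
            if (since_flush >= FLUSH_THRESHOLD) {
                fdatasync(fd);
                since_flush = 0;
            }
        }
        return n;
    }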
I have a Linux process that is being called numerous times, and I need to make this process as fast as possible.
The problem is that I must maintain state between calls (load data from the previous call and store it for the next one), without running another process or daemon.
Can you suggest fast ways to do so? I know I can use files for I/O, and would like to avoid it, for obvious performance reasons. Should (can?) I create a named pipe to read/write from and thereby avoid real disk I/O?
Pipes aren't appropriate for this. Use POSIX shared memory or a POSIX message queue if you are absolutely sure files are too slow - which you should test first.
In the shared memory case your program creates the segment with shm_open() if it doesn't exist or opens it if it does. You mmap() the memory and make whatever changes and exit. You only shm_unlink() when you know your program won't be called anymore and no longer needs the shared memory.
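A minimal sketch of that flow (the segment name "/myprog_state" and the state struct are made up; link with -lrt on older glibc):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    struct state {
        long counter;               /* whatever must survive between invocations */
    };

    int main(void)
    {
        /* creates the segment on the first run, opens it on later runs */
        int fd = shm_open("/myprog_state", O_RDWR | O_CREAT, 0600);
        if (fd < 0)
            return 1;
        if (ftruncate(fd, sizeof(struct state)) < 0)
            return 1;

        struct state *st = mmap(NULL, sizeof(*st), PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);
        close(fd);
        if (st == MAP_FAILED)
            return 1;

        st->counter++;                          /* load previous state, update it */
        printf("call number %ld\n", st->counter);

        munmap(st, sizeof(*st));
        /* no shm_unlink() here: the segment must outlive this process */
        return 0;
    }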
With message queues, just set up the queue. Your program reads the queue, makes whatever changes, writes the queue, and exits. Call mq_unlink() when you no longer need the queue.
Both methods have kernel persistence so you lose the shared memory and the queue on a reboot.
It sounds like you have a process that is continuously executed by something.
Why not create a factory that spawns the worker threads?
The factory could provide the workers with any information needed.
... I can use files for I/O, and would like to avoid it, for obvious performance reasons.
I wonder what these reasons are...
Linux caches files in kernel memory in the page cache. Writes go to the page cache first; in other words, a write() syscall only copies the data from user space into the page cache (it is a bit more complicated when the system is under stress). Some time later pdflush writes the data to disk asynchronously.
A file read() first checks the page cache to see whether the data is already available in memory, to avoid a disk read. This means that if one program writes data to files and another program reads them, the two programs are effectively communicating via kernel memory, as long as the page cache keeps those files.
If you want to avoid disk writes entirely (that is, the state does not need to persist across OS reboots), those files can be put in /dev/shm or /tmp, which are normally the mount points of in-memory filesystems.