A common way to write an image to disk looks like:
dd if=file.img of=/dev/device
After this command, is it necessary to run sync?
The sync(2) man page explains that it only flushes filesystem caches. Since this dd command does not involve any filesystem, I think it is not necessary to run sync. However, the block layer is complex, and when in doubt, most people prefer to run sync.
Does anyone have proof that it is useful or useless?
TL;DR: Run blockdev --flushbufs /dev/device after dd.
I tried to follow the different paths in the kernel. Here is what I understood:
ioctl(block_dev, BLKFLSBUF, 0) calls blkdev_flushbuf(). Given its name, it should flush the caches associated with the device (otherwise I think you can consider it a bug in the device driver). I think it should also be responsible for flushing hardware caches if they exist. Notice that e2fsprogs uses BLKFLSBUF.
fdatasync() (and fsync()) will call blkdev_fsync(). It looks like blkdev_flushbuf(), but it only affects the range of data written by the current process (it uses filemap_write_and_wait_range(), while BLKFLSBUF uses filemap_write_and_wait()).
Closing a block device calls blkdev_close(), which does not flush buffers.
sync() will call sync_fs(). It flushes filesystem caches and calls fsync() on the underlying block device.
The command sync /dev/device will call fsync() on /dev/device. However, I think it is useless here since dd didn't touch any filesystem.
So my conclusion is that calling sync has no (direct) impact on the block device. Passing conv=fdatasync (or conv=fsync) to dd is the only way to guarantee that the data is correctly written to the media.
If you have already run dd but forgot to pass conv=fdatasync, running sync /dev/device is not sufficient. You would have to re-run dd with conv=fdatasync over the whole device. Alternatively, you can issue a BLKFLSBUF ioctl to flush the whole device. Unfortunately, there is no standard command for that.
EDIT
You can issue a BLKFLSBUF ioctl with blockdev --flushbufs /dev/device.
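For reference, a minimal C sketch of the same BLKFLSBUF ioctl that blockdev --flushbufs issues (the /dev/sdX path is just a placeholder):

/* flushbufs.c - issue BLKFLSBUF on a block device, like blockdev --flushbufs */
#include <fcntl.h>
#include <linux/fs.h>     /* BLKFLSBUF */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *dev = argc > 1 ? argv[1] : "/dev/sdX";   /* placeholder device */
    int fd = open(dev, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    if (ioctl(fd, BLKFLSBUF, 0) < 0) { perror("ioctl(BLKFLSBUF)"); close(fd); return 1; }
    close(fd);
    return 0;
}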
To ensure the data is flushed to a USB device before unplugging it, I use the following command:
echo 1 > /sys/block/${device}/device/delete
This way, the data is flushed, and if the device is a hard drive, then the head is parked.
Related
I tried to use fsync to write a file to an SD card as soon as possible. However, fsync does not actually block until the file is physically written to the SD card; it seems to take about 5-6 seconds before the data is actually on the card. Mounting the file system (I tried ext3 and ext4) with commit=1 or the sync option does seem to work: the data is safe after a reboot within 1 second. My question is: is there any way to achieve flushing without resorting to a partition-wide solution? I'm using Linux kernel 2.6.37. Thank you.
If you want to be sure the content is written to the SD card, you should call blockdev with --flushbufs before exiting the program.
If you want to benchmark the writing process, you can call this after every write.
/sbin/blockdev --flushbufs $dev
I want to know under what circumstances a direct I/O transfer will fail.
I have the following three sub-questions for that. As per the "Understanding the Linux Kernel" book:
Linux offers a simple way to bypass the page cache: direct I/O transfers. In each I/O direct transfer, the kernel programs the disk controller to transfer the data directly from/to pages belonging to the User Mode address space of a self-caching application.
-- So to explain a failure, does one need to check whether the application has a self-caching feature or not? I am not sure how that can be done.
2. Furthermore, the book says: "When a self-caching application wishes to directly access a file, it opens the file specifying the O_DIRECT flag. While servicing the open() system call, the dentry_open() function checks whether the direct_IO method is implemented for the address_space object of the file being opened, and returns an error code in the opposite case."
-- Apart from this, is there any other reason that can explain a direct I/O failure?
3. Will this command "dd if=/dev/zero of=myfile bs=1M count=1 oflag=direct" ever fail in Linux (assuming ample disk space is available)?
The underlying filesystem and block device must support the O_DIRECT flag. This command will fail because tmpfs doesn't support O_DIRECT:
dd if=/dev/zero of=/dev/shm/test bs=1M count=1 oflag=direct
The write size must be a multiple of the block size of the underlying device. This command will fail because 123 is not a multiple of 512:
dd if=/dev/zero of=myfile bs=123 count=1 oflag=direct
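To see the same alignment and filesystem requirements from C rather than dd, here is a hedged sketch: it writes one 4096-byte block to a file opened with O_DIRECT, using a buffer from posix_memalign() (4096 is an assumption that covers most devices). An unaligned buffer or length would typically fail with EINVAL, and opening a file on tmpfs this way would fail outright.

/* direct_write.c - minimal O_DIRECT write with a properly aligned buffer */
#define _GNU_SOURCE               /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    void *buf;
    /* O_DIRECT requires the buffer address and the I/O size to be block-aligned. */
    if (posix_memalign(&buf, 4096, 4096) != 0) { fprintf(stderr, "posix_memalign failed\n"); return 1; }
    memset(buf, 0, 4096);

    int fd = open("myfile", O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
    if (fd < 0) { perror("open(O_DIRECT)"); free(buf); return 1; }

    if (write(fd, buf, 4096) != 4096)
        perror("write");          /* an unaligned buffer/size usually gives EINVAL here */

    close(fd);
    free(buf);
    return 0;
}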
There are many reasons why direct I/O can fail.
So to explain a failure, does one need to check whether the application has a self-caching feature or not?
You can't do this externally - you have to either deduce it from the source code or watch how the program uses resources as it runs (or disassemble the binary, I guess). This is more a property of how the program does its work than a "turn this feature on in a call" option. It would be a dangerous assumption to think that all programs using O_DIRECT have self-caching (probabilistically I'd say it's more likely, but you don't know for sure).
There are strict requirements for using O_DIRECT and they are mentioned in the man page of open (see the O_DIRECT section of NOTES).
With modern kernels the area being operated on must be aligned to, and its size must be a multiple of, the disk's block size. Failure to do this correctly may even result in silent fallback to buffered I/O.
Yes, for example trying to use it on a filesystem (such as tmpfs) that doesn't support O_DIRECT. I suppose it could also fail if the path down to the disk returns failures for some reason (e.g. the disk is dying and returns the error much sooner than it would with writeback).
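If you are unsure what alignment the device actually needs, the open man page points at the BLKSSZGET ioctl for the logical block size; a small sketch (the device path is a placeholder):

/* blksize.c - query the logical block size that O_DIRECT I/O must be aligned to */
#include <fcntl.h>
#include <linux/fs.h>     /* BLKSSZGET */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/sdX", O_RDONLY);   /* placeholder device */
    if (fd < 0) { perror("open"); return 1; }

    int lbs = 0;
    if (ioctl(fd, BLKSSZGET, &lbs) < 0) { perror("ioctl(BLKSSZGET)"); close(fd); return 1; }
    printf("logical block size: %d bytes\n", lbs);

    close(fd);
    return 0;
}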
The title pretty much nails it: I'm copying files to a flash drive and then doing some things with those files. I have noticed that after running the dd command, the flash drive is still flashing and not all of the files are on the device.
Does anyone know how to run a simple loop (in a script) to wait for the dd writes to actually finish? I have been googling for about 2-3 hours now trying to figure it out, and it's beyond me whether it's even possible.
Thanks in advance!
Try the sync command:
sync writes any data buffered in memory out to disk. This can include (but is not limited to) modified superblocks, modified inodes, and delayed reads and writes. This must be implemented by the kernel; the sync program does nothing but exercise the sync system call.

The kernel keeps data in memory to avoid doing (relatively slow) disk reads and writes. This improves performance, but if the computer crashes, data may be lost or the file system corrupted as a result. The sync command ensures everything in memory is written to disk.
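If you only care about the filesystem the flash drive is mounted on, rather than every filesystem on the machine, Linux also has the syncfs() call; a minimal sketch, assuming a mount point of /mnt/flashdrive:

/* syncdrive.c - flush only the filesystem containing the given path */
#define _GNU_SOURCE        /* syncfs() is a Linux/glibc extension */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Any file descriptor on the target filesystem will do; the path is an example. */
    int fd = open("/mnt/flashdrive", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    if (syncfs(fd) < 0) { perror("syncfs"); close(fd); return 1; }

    close(fd);
    return 0;
}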
Most likely you are seeing the operating system caching the writes. If you really want to make sure that everything is written to the flash drive so that it is safe to remove, it needs to be unmounted.
I have a client node writing a file to a hard disk that is on another node (I am writing to a parallel fs actually).
What I want to understand is:
When I write() (or pwrite()), when exactly does the write call return?
I see three possibilities:
write returns immediately after queueing the I/O operation on the client side:
In this case, write can return before the data has actually left the client node. (If you are writing to a local hard drive, the write call employs delayed writes, where data is simply queued up for writing. But does this also happen when you are writing to a remote hard disk?) I wrote a test case in which I write a large matrix (1 GByte) to a file. Without fsync, it showed very high bandwidth values, whereas with fsync, the results looked more realistic. So it looks like it could be using delayed writes.
write returns after the data has been transferred to the server buffer:
Now the data is on the server, but it resides in a buffer in its main memory and has not yet been permanently stored on the hard drive. In this case, the I/O time should be dominated by the time it takes to transfer the data over the network.
write returns after data has been actually stored on the hard drive:
I am sure this does not happen by default (unless you write really large files, which cause your RAM to fill up and ultimately be flushed out, and so on...).
Additionally, what I would like to be sure about is:
Can a situation occur where the program terminates without any data actually having left the client node, such that network parameters like latency and bandwidth, as well as the hard drive bandwidth, do not feature in the program's execution time at all? Assume we do not do an fsync or anything similar.
EDIT: I am using the pvfs2 parallel file system
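For what it's worth, a sketch of the kind of test case described above (the file name, chunk size and total size are made up for illustration); run it with and without --fsync to compare the apparent bandwidth:

/* writetest.c - time a large sequential write, optionally followed by fsync() */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define TOTAL (1024L * 1024 * 1024)   /* 1 GByte in total */
#define CHUNK (4L * 1024 * 1024)      /* 4 MiB per write() */

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv)
{
    int do_fsync = (argc > 1 && strcmp(argv[1], "--fsync") == 0);
    char *buf = malloc(CHUNK);
    if (!buf) { perror("malloc"); return 1; }
    memset(buf, 'x', CHUNK);

    int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    double t0 = now();
    for (long done = 0; done < TOTAL; done += CHUNK)
        if (write(fd, buf, CHUNK) != CHUNK) { perror("write"); return 1; }
    if (do_fsync && fsync(fd) < 0) { perror("fsync"); return 1; }
    double t1 = now();

    printf("%.1f MB/s%s\n", TOTAL / (t1 - t0) / 1e6, do_fsync ? " (with fsync)" : "");
    close(fd);
    free(buf);
    return 0;
}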
Option 3 is of course simple and safe. However, a production-quality, POSIX-compatible parallel file system with performance good enough that anyone actually cares to use it will typically use option 1, combined with some more or less involved mechanism to avoid conflicts when, e.g., several clients cache the same file.
As the saying goes, "There are only two hard things in Computer Science: cache invalidation and naming things and off-by-one errors".
If the filesystem is supposed to be POSIX compatible, you need to go and learn POSIX fs semantics, and look up how the fs supports these while still getting good performance (or, alternatively, which parts of POSIX semantics it skips, a la NFS). What makes this, err, interesting is that POSIX fs semantics hark back to the 1970s with little to no thought of how to support network filesystems.
I don't know about pvfs2 specifically, but typically, in order to conform to POSIX and provide decent performance, option 1 can be used together with some kind of cache coherency protocol (which e.g. Lustre does). For fsync(), the data must then actually be transferred to the server and committed to stable storage on the server (disks or a battery-backed write cache) before fsync() returns. And of course, the client has some limit on the number of dirty pages, after which it will block further write()'s to the file until some have been transferred to the server.
You can get any of your three options. It depends on the flags you provide to the open call. It depends on how the filesystem was mounted locally. It also depends on how the remote server is configured.
The following are all taken from Linux. Solaris and others may differ.
Some important open flags are O_SYNC, O_DIRECT, O_DSYNC, O_RSYNC.
Some important mount flags for NFS are ac, noac, cto, nocto, lookupcache, sync, async.
Some important flags for exporting NFS are sync, async, no_wdelay. And of course the mount flags of the filesystem that NFS is exporting are important as well. For example, if you were exporting XFS or EXT4 from Linux and for some reason you used the nobarrier flag, a power loss on the server side would almost certainly result in lost data.
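As a small illustration of the open-flag side of this, here is a hedged sketch assuming an NFS-mounted path: with O_SYNC each write() is meant to return only once the data (and the metadata needed to retrieve it) has been committed, which pushes you toward option 3; without it you typically get option 1 or 2 behaviour, subject to the mount and export flags above.

/* syncwrite.c - open with O_SYNC so each write() blocks until the data is committed */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* The path is an example; on NFS the export/mount options still matter. */
    int fd = open("/mnt/remote/output.dat", O_WRONLY | O_CREAT | O_SYNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char msg[] = "hello\n";
    if (write(fd, msg, sizeof msg - 1) < 0)   /* returns only after a synchronous commit */
        perror("write");

    close(fd);
    return 0;
}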
I'm periodically reading from a file and checking the result to decide the subsequent action. Since this file may be modified by some mechanism that bypasses the block I/O layer in the Linux kernel, I need to ensure that the read operation gets its data from the real underlying device instead of from the kernel's buffers.
I know fsync() can make sure that all I/O write operations have completed, with all data written to the real device, but it does not apply to I/O read operations.
The file has to be kept open.
So could anyone please tell me how I can meet this requirement on a Linux system? Is there an API similar to fsync() that can be called?
Really appreciate your help!
I believe that you want to use the O_DIRECT flag to open().
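A hedged sketch of what that could look like: O_DIRECT reads need a buffer, offset and length aligned to the device's block size (4096 is an assumption here), and the file path is a placeholder.

/* directread.c - read with O_DIRECT so the data comes from the device, not the page cache */
#define _GNU_SOURCE               /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    void *buf;
    if (posix_memalign(&buf, 4096, 4096) != 0) { fprintf(stderr, "posix_memalign failed\n"); return 1; }

    int fd = open("/path/to/file", O_RDONLY | O_DIRECT);   /* placeholder path */
    if (fd < 0) { perror("open(O_DIRECT)"); free(buf); return 1; }

    ssize_t n = read(fd, buf, 4096);   /* offset, buffer and length must be block-aligned */
    if (n < 0)
        perror("read");
    else
        printf("read %zd bytes bypassing the page cache\n", n);

    close(fd);
    free(buf);
    return 0;
}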
I think memory mapping in combination with madvise() and/or posix_fadvise() should satisfy your requirements... Linus contrasts this with O_DIRECT at http://kerneltrap.org/node/7563 ;-).
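If you would rather keep ordinary buffered read() calls than switch to mmap, one variation along the posix_fadvise() line is to ask the kernel to drop the file's cached pages before each read, so the next read has to go back to the device; note that posix_fadvise() is only advisory. A sketch, with a placeholder path and poll period:

/* freshread.c - advise the kernel to drop cached pages, then re-read the file */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/path/to/file", O_RDONLY);   /* placeholder path; kept open as required */
    if (fd < 0) { perror("open"); return 1; }

    char buf[4096];
    for (;;) {
        /* Ask the kernel to discard any cached pages for this file (advisory only)... */
        int r = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
        if (r != 0)
            fprintf(stderr, "posix_fadvise: %s\n", strerror(r));

        /* ...then re-read from the start; the data should now come from the device. */
        if (lseek(fd, 0, SEEK_SET) < 0) { perror("lseek"); return 1; }
        ssize_t n = read(fd, buf, sizeof buf);
        if (n < 0) { perror("read"); return 1; }

        /* ...examine buf and decide the subsequent action here... */
        sleep(1);   /* example poll period */
    }
}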
You are going to be in trouble if another device is writing to the block device at the same time as the kernel.
The kernel assumes that the block device won't be written to by any party other than itself. This is true even if the filesystem is mounted read-only.
Even if you use direct I/O, the kernel may cache filesystem metadata, so a change in the location of the file's blocks may result in incorrect behaviour.
So in short - don't do that.
If you wanted, you could access the block device directly, which might be a more successful scheme, but it still potentially allows harmful race conditions (you cannot guarantee the order of the metadata and data updates made by the other device). These could cause you to end up reading junk from the device (if the metadata were updated before the data). You had better have a mechanism for detecting junk reads in this case.
I am, of course, assuming some very simple, braindead filesystem such as FAT. That might reasonably be implemented in userspace (mtools, for instance, does this).