I've got an ARM virtual machine running on top of KVM/QEMU, with a file mounted as the root filesystem. The VM doesn't have networking, so NFS mounting the root is out of the question. I am testing a particular transport mechanism for IO, so I'm kind of stuck with what I've got.
I want to send files into the guest, so I'd like to mount the file on the host, write things to it, and then unmount it to force a flush. The contents of the filesystem are trivial, and I have a backup, so I have no problem with corruption. Likewise, performance is not an issue.
The problem is, when I do this mount-write-unmount dance, the guest never sees the file: when I run ls in the guest, the file isn't there. I'm guessing this is a result of the guest kernel's filesystem cache: the filesystem metadata is cached in memory, so updates made on the host never show up.
I'm guessing if I disable filesystem caching, then all reads will be forced to disk, causing the filesystem to be hit, and my file to appear. Any tips?
I can think of this:
sync
echo 3 > /proc/sys/vm/drop_caches
And this:
qemu -drive cache=none,file=file.img
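Concretely, the sequence I have in mind looks like this; it's only a sketch, and file.img, /mnt/guestroot and payload.bin are made-up names:

# on the host: loop-mount the image, drop the file in, unmount to flush
mount -o loop file.img /mnt/guestroot
cp payload.bin /mnt/guestroot/root/
umount /mnt/guestroot   # unmounting writes back everything for this filesystem
sync                    # flush any remaining dirty pages on the host

# in the guest: push cached dentries/inodes out so reads go back to the virtual disk
sync
echo 3 > /proc/sys/vm/drop_caches

Even then I'm not sure the guest's filesystem driver will happily re-read metadata that changed underneath a mounted filesystem, which is why I'm asking.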
A remote NAS server provides an NFS share (/myShare) to a Linux client machine.
From the Linux client, I mounted the NFS share (e.g. /mnt/myShare).
My question is: is it possible to convert this /mnt/myShare into a disk device (e.g. /dev/mydevice)?
I would like to use this disk as a physical disk for a container to store its data.
Can device mapper be of help here? Any leads would be appreciated.
--kk
Is it possible to convert this /mnt/myShare into a disk device (e.g. /dev/mydevice)?
The answer is yes and no. Yes, because you can mount everything anywhere, i.e. you can:
mount -t nfs nas:/myShare /dev/mydevice
(provided that the directory /dev/mydevice exists).
NO, because a disk is a file under /dev, which basically exposes a set of sectors (or clusters) - other OS components use that to present a file system, which then is mounted somewhere else.
You, instead, already have a file hierarchy which represents a file system. You can mount that file system wherever you want; 99% of your OS and your programs will not care about the difference.
But your share is not a disk, because it is something (a directory part of a file system) exported by another machine. And this difference cannot be circumvented. I think you can live with this without problems but, if your question is literally correct, then no: an exported share is not a disk.
If you want to use the raw hard drive, then you don't need a filesystem. Perhaps your NAS server can be configured to export its storage as an iSCSI target.
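If it can, the client side with open-iscsi looks roughly like this; the portal address and target IQN below are placeholders:

iscsiadm -m discovery -t sendtargets -p nas.example.com   # ask the NAS which targets it exports
iscsiadm -m node -T iqn.2000-01.com.example:storage -p nas.example.com --login
# after login the LUN appears as a genuine block device, e.g. /dev/sdb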
NFS itself doesn't expose storage as a block device.
But what you can do is the following:
Mount /myShare onto /mnt/myShare.
Create a file the size of all of myShare. For example, if myShare is 3TB in size, do truncate -s 3T /mnt/myShare/loop.img. (at this point, if you wanted a filesystem, you could have done mkfs -t ext4 /mnt/myShare/loop.img).
Set up the file as a loop device: sudo losetup /dev/loop7 /mnt/myShare/loop.img
You now have a 3TB block device on /dev/loop7, which you can see in the output of grep loop7 /proc/partitions.
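From there, if the goal is to give a container its own "disk", one possible continuation (filesystem type, mount point and the docker invocation are illustrative, not prescribed):

mkfs -t ext4 /dev/loop7                  # put a filesystem on the loop device
mount /dev/loop7 /var/lib/mycontainer    # mount it where the container's data should live
# or hand the block device itself to the runtime:
docker run --device /dev/loop7:/dev/xvdb myimage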
On my Linux box, I can access a mount path which is not present in /etc/fstab or /etc/mtab.
I want to disable that mount point. Please help me with the command to show this hidden mount.
Below is the hidden mount on one machine:
/net/bnrdev/bld-views/build
The above path is present on the bnrdev machine as:
/bld-views/build
These are not "hidden" per se, but NFS automounts created on demand by your system (paths under /net typically come from the automount daemon).
You can get rid of this functionality by disabling NFS client services, or just the automount daemon.
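To see where the mount actually comes from, and to switch the automounter off, something along these lines should work; the service names are an assumption for a RHEL/CentOS-style system:

cat /proc/mounts | grep bnrdev   # /proc/mounts lists mounts that /etc/mtab may miss
service autofs stop              # stop the automount daemon on the running system
chkconfig autofs off             # keep it from coming back at boot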
WARNING - this will likely break automounted home directories, which could cause issues for other system users.
Please! for the LUV of all things cute & cuddly, make a copy of the files you modify. Justin Case could have an issue with your changes, right as you're falling asleep.
I have Varnish 5.1.1 on CentOS 6.5 and want to use a fresh SSD for file storage (my 64 GB of RAM fill up quickly, as I have a lot of objects to cache).
As said in the Varnish docs, I have mounted a tmpfs partition for the working directory:
"The shmlog usually is located in /var/lib/varnish and you can feel free to remove any data found in this directory if needed. Varnish suggests make sure this location is mounted on tmpfs by using /etc/fstab to make sure this gets mounted on server reboot / start."
I have a dedicated 256 GB SSD drive for cache storage.
Do I have to mount it as tmpfs with noatime like the working directory?
I did not find any suggestions on how to configure an SSD for Varnish's needs.
No, you do not have to mount anything special if you're just going to use your SSD. tmpfs is specifically a RAM-backed filesystem, and if you're not going to take advantage of the superior speed of RAM over an SSD, then leaving /var/lib/varnish as it is on the default install is good enough.
/var/lib/varnish is used for logging, and Varnish logs a lot of data. Since it uses a circular buffer, size isn't an issue; however, the I/O will wear your disks down.
TL;DR: always mount the work directory as tmpfs.
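Putting both answers together, a sketch of such a setup; device names, sizes and paths below are illustrative:

# /etc/fstab: tmpfs for the working directory, the SSD for object storage
tmpfs      /var/lib/varnish  tmpfs  defaults,noatime,size=1G  0 0
/dev/sdb1  /mnt/ssd          ext4   defaults,noatime          0 0

# point varnishd's file storage backend at the SSD
varnishd -a :80 -f /etc/varnish/default.vcl -s file,/mnt/ssd/varnish_storage.bin,200G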
First off, my intention is to create a portable, bootable USB drive containing a GNU/Linux distribution. Specifically, I want to use Arch Linux with a squashfs read-only root filesystem.
The squashfs image is based on a snapshot of a working VM. The base system with its services like ssh works out of the box as expected. But when trying to launch GNOME via systemd (systemctl start gdm), all I see is a black screen (supposedly the X server started, but gdm fails to load). I have already tried to figure out what's happening, but failed to identify the exact problem.
Home directories are writeable
/tmp is writeable
/var/log is writeable
/var/run & /run are writeable anyway
/var/log/gdm gets created but stays empty.
Which modules may require write access to any other files? Is there any documentation? What would make sense to strace or similar?
My desire is to know the root of the problem and fix it, instead of using workarounds like unionfs. Thanks for any help or hints!
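In case it helps, the kind of tracing I had in mind was something like the following; the gdm binary path is a guess:

strace -f -e trace=file -o /tmp/gdm.trace /usr/sbin/gdm   # log every file-related syscall
grep -E 'EROFS|EACCES' /tmp/gdm.trace                     # look for writes refused by the read-only root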
Although it's not relevant, for those who might wonder why I want to do this, here are some points to consider:
Stability - as you cannot modify system files, you cannot mess up the system (unless you write bogus data directly to the drive, of course)
Storage - as files are compressed, more data fits on the drive
Performance - as I/O on most USB drives is slow, compression gives you higher I/O speed
Portability - no special treatment for read-only storage, you might copy it on a CD or any other read-only technology and it will still work the same way as it would on a writeable disk
Update
I figured out that the problem was actually at /var/lib/gdm. GDM tried to access files in there and (silently) failed doing so, giving me a black screen.
journalctl was the debugging command I was missing in the first place.
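For anyone hitting the same wall, the shape of my fix and the debugging command; the tmpfs approach is just one way to make that directory writable:

mount -t tmpfs tmpfs /var/lib/gdm   # give GDM a writable state directory on the read-only root
chown gdm:gdm /var/lib/gdm          # gdm must own it, or it fails silently again
journalctl -u gdm -b                # show this boot's log for the gdm unit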
I have mounted a Windows shared folder on a CentOS box. When I try to read a huge file using the read system call and the network connection breaks, the read simply hangs and puts my program into an uninterruptible sleep state. This does not sound right. Even if I open the file using O_NONBLOCK, the read still hangs. I was hoping that read would eventually time out, but it does not.
How do you implement a reliable copy operation over the network if the read is simply going to block without returning any error?
I don't think using async mode and a select call is going to help me either.
Is read always a blocking call?
Thanks
Ghanaku
You could try mounting the remote filesystem as cifs instead of smbfs. mount.cifs supports the soft option (which is also the default); it causes an error to be returned in the case of a network or server failure, instead of a hang.
From the man page:
soft: (default) The program accessing a file on the cifs mounted file system will not hang when the server crashes and will return errors to the user application.
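For reference, a minimal mount along those lines; the server, share, mount point and credentials are placeholders:

mount -t cifs //winserver/share /mnt/share -o soft,username=user,password=secret

With soft behaviour, a read against an unreachable server eventually returns an error (typically EIO) instead of leaving the process in uninterruptible sleep, so a copy loop can detect the failure and retry.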