A remote NAS server provides an NFS share (/myShare) to a Linux client machine.
From the Linux client, I mounted the NFS share (e.g. at /mnt/myShare).
My question is: is it possible to convert this /mnt/myShare into a disk device (e.g. /dev/mydevice)?
I would like to present it as a physical disk to a container so the container can store its data there.
Can device mapper be of help here? Any leads would be appreciated.
--kk
Is it possible to convert this /mnt/myShare into a disk device (e.g. /dev/mydevice)?
The answer is yes and no. Yes, because you can mount almost anything anywhere; for example, you can run:
mount -t nfs nas:/myShare /dev/mydevice
(provided that the directory /dev/mydevice exists).
No, because a disk is a device file under /dev that basically exposes a set of sectors (or clusters); other OS components use it to present a file system, which is then mounted somewhere else.
You, instead, already have something that represents a file system. You can mount that file system wherever you want, and 99% of your OS and your programs will not care about the difference.
But your share is not a disk, because it is a directory (part of a file system) exported by another machine, and this difference cannot be circumvented. I think you can live with that without problems, but if your question is meant literally, then no: an exported share is not a disk.
If you want to use the raw hard drive, then you don't need a filesystem. Perhaps your NAS server can be configured to export its storage as an iSCSI target.
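If the NAS does support iSCSI, the client side typically looks something like the following sketch using the open-iscsi tools (the portal address 192.168.1.10 and the target IQN are hypothetical; the server-side setup depends entirely on your NAS):
iscsiadm -m discovery -t sendtargets -p 192.168.1.10          # discover targets offered by the NAS
iscsiadm -m node -T iqn.2024-01.com.example:myshare -p 192.168.1.10 --login   # log in to the target
# a new block device (e.g. /dev/sdX) then appears and can be handed to the container as a real disk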
NFS itself doesn't expose storage as a block device.
But what you can do is the following:
Mount /myShare onto /mnt/myShare.
Create a file the size of all of myShare. For example, if myShare is 3TB in size, do truncate -s 3T /mnt/myShare/loop.img. (at this point, if you wanted a filesystem, you could have done mkfs -t ext4 /mnt/myShare/loop.img).
Set up the file as a loop device: sudo losetup /dev/loop7 /mnt/myShare/loop.img
You now have a 3TB block device on /dev/loop7, which you can see in the output of grep loop7 /proc/partitions.
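A consolidated sketch of those steps (the share path, the 3T size, the loop device number, and the /mnt/mydisk mount point are just examples; losetup -f can pick a free loop device for you):
mount -t nfs nas:/myShare /mnt/myShare        # mount the NFS share
truncate -s 3T /mnt/myShare/loop.img          # create a sparse backing file on the share
losetup /dev/loop7 /mnt/myShare/loop.img      # attach it as a block device
mkfs -t ext4 /dev/loop7                       # optional: put a filesystem on the new block device
mount /dev/loop7 /mnt/mydisk                  # optional: use that filesystem
# when done: umount /mnt/mydisk && losetup -d /dev/loop7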
Related
I have Varnish 5.1.1 on CentOS 6.5 and want to use a fresh SSD for file storage (my 64 GB of RAM fills up quickly, as I have a lot of objects to cache).
As suggested in the Varnish documentation, I have mounted a tmpfs partition for the working directory:
"The shmlog usually is located in /var/lib/varnish and you can feel free to remove any data found in this directory if needed. Varnish suggests make sure this location is mounted on tmpfs by using /etc/fstab to make sure this gets mounted on server reboot / start."
I have a dedicated 256 GB SSD drive for cache storage.
Do I have to mount it as tmpfs with noatime, like the working directory?
I did not find any suggestions on how to configure an SSD for Varnish's needs.
No, you do not have to mount anything special for the SSD you use as cache storage. tmpfs is specific to RAM-backed storage, and if you're not going to take advantage of the superior speed of RAM over SSD, then leaving /var/lib/varnish as it is on a default install is good enough.
/var/lib/varnish is used for logging, and Varnish logs a lot of data. Since it uses a circular buffer, size isn't an issue; however, the I/O will wear your disks down.
TL;DR: always mount the working directory as tmpfs.
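As a sketch (the 512M size is an assumption; adjust it to your setup), the working directory can be pinned to tmpfs across reboots with an /etc/fstab entry like:
tmpfs /var/lib/varnish tmpfs rw,noatime,size=512M 0 0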
I was curious to see whether I could display the contents of the /proc virtual filesystem when it is mounted over the network. So I exported "/" and mounted it on another system over NFS. Then I changed into the proc directory and ran ls, and it displayed nothing. Could someone explain why it is empty?
Please read man 5 exports:
nohide This option is based on the option of the same name provided in
IRIX NFS. Normally, if a server exports two filesystems one of
which is mounted on the other, then the client will have to
mount both filesystems explicitly to get access to them. If it
just mounts the parent, it will see an empty directory at the
place where the other filesystem is mounted. That filesystem is
"hidden".
By default the client doesn't see nested mounts.
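If you do want a nested mount to be visible through the export, a sketch of /etc/exports entries on the server (the paths and client network here are hypothetical) would be:
/      192.168.1.0/24(ro,crossmnt,no_subtree_check)    # parent export that descends into nested mounts
/home  192.168.1.0/24(ro,nohide,no_subtree_check)      # or export the nested filesystem itself with nohide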
I would like to know how to create a root file system for an embedded Linux system that is stored on a hard drive. Would this be the same procedure if it was on a flash card?
No, your boot loader would need to know how to initialize the hard drive. With flash, the boot loader initializes it as an MTD device and can understand the file system on it.
You might be able to make progress with an IDE HD and IDE support in the boot loader.
On a regular computer (e.g., PC) the BIOS takes care of initializing all peripherals, like a primary HD.
Typically, an embedded Linux system does not operate directly on a disk-based filesystem. Instead, it uses a mechanism to load the OS from persistent storage (hard drive, flash card, flash memory, etc.) into volatile memory (RAM). In general these OS files (commonly called the firmware) are a kernel image and an initrd (initial RAM disk) file. The initrd contains the root filesystem's files and any related system files; at boot, the initrd is uncompressed and deployed into a RAM-based filesystem such as tmpfs. Once that is done, the system uses the tmpfs filesystem just like any disk-based filesystem (ext3, btrfs), for example to run the init program or script that performs system initialization. Embedded systems tend to minimize I/O on persistent storage, which brings advantages in reliability, speed, and cost.
You can learn how to accomplish this by studying how any general-purpose Linux distribution creates and modifies an initrd file.
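As a rough sketch (the ./rootfs directory is a hypothetical, already-populated root filesystem tree), a newc-format initrd/initramfs image can be created and inspected like this:
cd rootfs
find . | cpio -o -H newc | gzip > ../initrd.img    # pack the tree into a compressed cpio archive
cd ..
zcat initrd.img | cpio -t | head                   # list the archive contents to verify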
I've got an ARM virtual machine running on top of KVM/QEMU, with a file mounted as the root filesystem. The VM doesn't have networking, so NFS mounting the root is out of the question. I am testing a particular transport mechanism for IO, so I'm kind of stuck with what I've got.
I want to send files into the guest, so I'd like to mount the file on the host, write things to it, and then unmount it to force a flush. The contents of the filesystem are trivial, and I have a backup, so I have no problem with corruption. Likewise, performance is not an issue.
The problem is that when I do this mount-write-unmount cycle, the guest never sees the file. I'm guessing this is a result of the kernel's filesystem cache: when I run ls in the guest, the file isn't there. The filesystem metadata is presumably cached in memory, so the updates to the on-disk filesystem never show up.
I'm guessing that if I disable filesystem caching, all reads will be forced to go to the disk image, the filesystem will actually be hit, and my file will appear. Any tips?
I can think of this, run on the guest to drop its caches so it re-reads from the disk:
sync
echo 3 > /proc/sys/vm/drop_caches
And this, to start QEMU with host-side caching disabled for the drive:
qemu -drive cache=none,file=file.img
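On the host side, a sketch of the mount-write-unmount cycle (assuming the image is a raw filesystem image without a partition table; the image path and mount point are just examples) that forces the changes onto the image before the guest looks again:
mount -o loop file.img /mnt/guestfs     # attach the guest's image on the host
cp somefile /mnt/guestfs/               # write the files you want the guest to see
umount /mnt/guestfs                     # unmount to flush the filesystem
sync                                    # make sure the host has written everything out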
I'm trying to get a usable core dump from code that I am writing. My source is on a NTFS partition that I share between Windows and Linux OSes. I'm doing the development under Linux and have set ulimit -c unlimited in my bash shell. When I execute the code in my project directory on the NTFS partition, and purposely cause a SIGSEGV or SIGABRT, the system writes a core dump file of zero bytes.
If I execute the binary in my home directory (an ext4 partition), the core dump is generated fine. I've had a look at the man page for core, which gives a list of various circumstances in which a core dump file is not produced. However, I don't think it's a permissions issue as all the files and directories on that partition have full rights (chmod 777).
Any help or thoughts appreciated.
Maybe you should check this file: /proc/sys/kernel/core_pattern
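For example (the core.%e.%p pattern here is just an illustration), you can inspect and change it like this:
cat /proc/sys/kernel/core_pattern                     # see where core dumps currently go
echo 'core.%e.%p' > /proc/sys/kernel/core_pattern     # write cores as core.<executable>.<pid> in the cwd (run as root)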
The directory where the application sits is a mount point to another Linux machine. The core file can't be written to such a mounted drive; it must be written to a local drive.
http://www.experts-exchange.com/OS/Linux/Q_23677186.html
You can create a RAM disk and put the core dump there.
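A minimal sketch, assuming a hypothetical tmpfs mount point at /mnt/coredumps and root privileges:
mkdir -p /mnt/coredumps
mount -t tmpfs -o size=512M tmpfs /mnt/coredumps                     # create the RAM disk
echo '/mnt/coredumps/core.%e.%p' > /proc/sys/kernel/core_pattern     # send core dumps there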
Does your NTFS partition have enough free space to hold the core dump?
Is your NTFS partition mounted read/write (not read-only)?