Linux: where exactly is a file saved when there are multiple logical volumes?

I've mostly worked in Windows environments and am still very noobish in everything Linux, so it's very likely I'm missing basic Linux concepts. That being said, I have questions about logical volumes and their interactions with files:
I have to use an Ubuntu machine (which I did not set up). On this machine, there is a physical volume /dev/sda2 which is in a volume group vg0.
That volume group vg0 has 4 logical volumes:
lv1, mounted on /
lv2, mounted on /boot
lv3, mounted on /var
lv4, mounted on /tmp
My questions are as follows:
If I save a file (for example foo.txt) in the /var directory, will it be stored on the lv3 (/var) logical volume?
If the lv3 (/var) logical volume is full and I try to save foo.txt in the /var directory, will it be stored on the lv1 (/) logical volume (after all, /var is in /)?
If the lv1 (/) volume is full and I try to save foo.txt somewhere outside of /var (for example in /home), will it be stored on the lv3 (/var) logical volume?
What could be the point of having all these logical volumes? Would one volume on / not be much simpler?
It's quite obvious, from my questions, that I don't really get the relations between logical volumes, mount points and files. Is there a good tutorial somewhere that I could use to educate myself?
Thanks in advance.

Yes. Because lv3 is mounted on /var, any files put in /var go there.
No. There are no special cases when the device is full - you just get a "device is full" error. Although /var appears to be a child of /, that relationship has been overridden by mounting lv3 on /var.
No, again because there are no special cases for the device being full. The system doesn't care; it just tries to put the file where it belongs.
Yes, it is much simpler to have it all in /. But it can cause problems. For example, /boot is often its own volume so that downloading a bunch of stuff into your home folder can't fill it up and stop your system from booting. There are different schools of thought on how much or how little you should split your file system into separate volumes. It is somewhat a matter of opinion, but those opinions are based on various use cases and problems.
I don't have a great answer other than: use the search engine of your choice! Honestly, when you are starting out it doesn't matter much, as long as you have space to put your stuff. If you are a newbie, it might be good to just put everything in one volume - as long as you keep an eye on it and don't let it fill up.
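As a quick way to see these relationships on the machine itself, something along these lines should work (a sketch; the volume and group names are taken from the question, and lvs/lsblk may need root):

df -h /var      # shows which device backs /var (it typically appears as /dev/mapper/vg0-lv3)
findmnt /var    # shows the source device and options for the /var mount point
lvs vg0         # lists the logical volumes lv1..lv4 in vg0 with their sizes
lsblk           # shows the whole tree: sda2 -> vg0 -> logical volumes -> mount points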

Related

MiniDLNA: not recognizing additional mount points

I have a volume mounted at /media.
I have an additional volume mounted at /media/dvd.
I have mounted things this way as I understand that minidlna can only use a single directory for, say, videos.
Each volume is structured with subdirectories. On the main directory, /media, everything works: the folder structure shows up and newly added files appear automatically.
I cannot get minidlna to recognise the /media/dvd directory. I have tried restarting, force-reloading, and even deleting the files.db file to force a refresh.
I don’t know whether the issue is:
The fact that I have one mount point inside another
The permissions need fixing
The fact that the second volume /media/dvd is HFS+; the volume mounts well enough and I can read & write from the shell (touch test).
Something else … ?
I am running CentOS 6.
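For reference, two things that might be worth ruling out here, assuming the default config location /etc/minidlna.conf: minidlna accepts more than one media_dir line, so the nested volume can also be listed explicitly, and a full rescan can be forced from the command line.

# in /etc/minidlna.conf - multiple media_dir entries are allowed:
media_dir=V,/media
media_dir=V,/media/dvd

# force a full rescan of the media directories:
minidlnad -R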

Disk inexplicably filled

I have two Linux machines which should be near enough identical clones of each other. One of them has 89% usage of /dev/sda1, and the other has 27% usage.
I've tried the rather manual process of running du -h over the root file system and comparing the two, but there are no substantial discernible differences. Is there any other way to find out where the missing 20 GB are?
Thanks!
Problem solved: there was an issue with an unmounted drive which caused it :)
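For reference, one way to see files hidden underneath a mount point is to bind-mount / somewhere else and look at the same path there (a sketch; /mnt/probe is a hypothetical scratch directory):

mkdir -p /mnt/probe
mount --bind / /mnt/probe
du -sh /mnt/probe/*    # shows what actually sits on the root file system,
                       # including files written while the other drive was unmounted
umount /mnt/probe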
ncdu will display the size of each directory from an ncurses interface. Probably what you're looking for.
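A minimal invocation (a sketch; the -x flag keeps ncdu from crossing file system boundaries, which is handy given the unmounted-drive issue above):

ncdu -x /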
Try looking at the information provided by the command
tune2fs -l /dev/sda1
Do you see any difference in the block size or anything else?
You can also try baobab (Disk Usage Analyzer), a GUI tool which displays disk usage in a clever visualization of nested pie charts.

Deployment over GPRS to embedded devices

I've got quite a head scratcher here. We have multiple Raspberry Pis in the field, hundreds of kilometers apart. We need to be able to safe(ish)ly upgrade them remotely, as local access can cost up to a few hundred euros.
The Pis run Raspbian; / is on an SD card mounted read-only to prevent corruption when power is cut (usually once a day). The SD cards are cloned from the same base image, but contain manually installed packages and modified files that may differ between devices. Each Pi also has a USB flash drive as a more corruption-resistant read-write drive, plus a script that formats it on boot in case the drive is corrupted. They call home over a GPRS connection of varying reliability.
The requirements for the system are as follows:
Easy versioning of config files, scripts and binaries (at least /etc, /root and home), preferably with Git
Efficient up-/downgrade from any version to any other over GPRS -> transfer file deltas only
Possibility to automatically roll back a recently applied patch if the connection is no longer working
The root file system cannot be in RW mode while downloading changes; the changes need to be stored locally before being applied to /
The simple approach might be keeping a complete copy of the file system in a remote Git repository, generating a diff file between commits, uploading the patch to the field and applying it. However, at the moment the files on the different Pis are not identical. This means, at least when installing the system, the files would have to be synchronized through something similar to rsync -a.
The procedure should be along the lines of "save the diff between / and the ssh folder to a file on the USB stick, mount / RW, apply the diff from the file, mount / RO". Rsync does the diff-getting and applying simultaneously, so my first question becomes:
1. Does there exist something like rsync that can save the file deltas between local and remote and apply them later? (see the sketch after this list)
Also, I have never made a system like this and the draft above is the "closest to legit" I could come up with. There are a lot of moving parts here and I'm terrified that something I didn't think of beforehand will cause things to go horribly wrong. The rest of my questions are:
Am I way off base here and is there actually a smarter/safe(r) way to do this?
If not, what kind of best practices should I follow, and what kinds of things should I be extremely careful with (so as not to brick the devices)?
How do I handle things like installing new programs? Bypass the package manager and install in /opt?
How do I manage permissions and owners (root + 1 user for the application logic)? Just run everything as root and hope for the best?
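For what it's worth, a sketch of the kind of tool question 1 is asking for: rsync's batch mode looks like a candidate, since --only-write-batch records the file deltas without applying them and --read-batch applies them later (all paths here are hypothetical):

# on the server: record the changes needed to bring the device's tree up to date,
# without modifying the local reference copy
rsync -a --only-write-batch=update.batch new-tree/ device-mirror/

# transfer update.batch over GPRS, then on the device apply the recorded deltas:
rsync -a --read-batch=/mnt/usb/update.batch /target-dir/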
Yes, this is a very broad question. This will not be a direct answer to your questions, but rather a set of guidelines for your research.
One means to prevent file system corruption is to use an overlay file system (e.g., AUFS, UnionFS) where the root file system is mounted read-only and a tmpfs (RAM-based) or flash-based read-write file system is mounted "over" the read-only root. This requires your own init scripts, including use of the pivot_root command. Since nothing critical is mounted RW, the system robustly handles power outages. The gist is that before the pivot_root, the FS looks like:
/ read-only root (typically flash)
/rw tmpfs overlay
/aufs AUFS union overlay of /rw over /
and after the pivot_root:
/ union overlay (was /aufs)
/flash read-only root (was /)
Updates to the /flash file system are done by remounting it read-write, doing the update, and remounting it read-only. For example:
mount -oremount,rw <flash-device> /flash
cp -p new-some-script /flash/etc/some-script
mount -oremount,ro <flash-device> /flash
You may or may not immediately see the change reflected in /etc depending upon what is in the tmpfs overlay.
You may find yourself making heavy use of the chroot command, especially if you decide to use a package manager. A quick sample:
mount -t proc none /flash/proc
mount -t sysfs none /flash/sys
mount -o bind /dev /flash/dev
mount -o bind /dev/pts /flash/dev/pts
mount -o bind /rw /flash/rw
mount -oremount,rw <flash-device> /flash
chroot /flash
# do commands here to install packages, etc
exit # chroot environment
mount -oremount,ro <flash-device> /flash
Learn to use the patch command. There are also binary patch tools (see "How do I create binary patches?", e.g. bsdiff or xdelta).
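A minimal sketch of that diff/patch workflow, reusing the <flash-device> placeholder from above (directory and patch names are hypothetical):

# on the server: generate a patch between two versions of the tree
diff -ruN version-old/etc version-new/etc > etc-update.patch

# on the device: stage the patch on the USB stick, then apply it to the RO root
mount -oremount,rw <flash-device> /flash
patch -p1 -d /flash < /mnt/usb/etc-update.patch
mount -oremount,ro <flash-device> /flash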
For recovery when everything goes wrong, you need hardware support, with watchdog timers and the ability to do a fail-safe boot from an alternate (secondary) root file system.
Expect to spend a significant amount of time and money if you want a bullet-proof product. There are no shortcuts.

ext2 "image" files vs real ext2 devices

I'm tasked with writing a reader program for Windows that is able to read an ext2 partition.
For my testing I'm using a drive I formatted to ext2 and a file I created using mkfs (a file that does mount and work well under Linux).
For some reason, when I read the superblock from the drive (the real one) I get all the right metadata (i.e. block size, inode count, etc.), but doing the exact same thing to the file returns bad results (which make no sense).
Is there a difference between the two?
I open the drive using \\.\X:
and I make the file using mkfs.
There shouldn't be any difference between ext2 on a partition and ext2 stored within a file (and indeed there isn't; I just checked); however, IIRC, the offset of the primary superblock is 2048 instead of 1024 if ext2 is installed on a bare disk (e.g. /dev/sda instead of /dev/sda1). This is to accommodate the MBR and other junk. (I can't find it in the docs from skimming just now, but it sticks out in my mind as something I ran into.) However, it's somewhat unusual to install to a bare drive, so I doubt this is your problem.
I wrote some ext2 utilities a few years ago; after starting to write them by hand, I switched to using Ted Ts'o's e2fsprogs (he's the ext2 filesystem creator), which come with headers, libraries, etc. for doing all this in a more flexible and reliable fashion.
You may also want to check at offset 0x438 into the file/partition for the magic number 0xEF53, and consider it not an ext2/3 filesystem if that's not there, before pulling in the entire superblock, just as a sanity check.
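From a Linux shell that sanity check can be done directly (a sketch; the device name is hypothetical). Offset 0x438 is 1080 in decimal, and since the superblock is little-endian, the two bytes should read 53 ef:

dd if=/dev/sdb1 bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1
# expected output for a valid ext2/3 filesystem: 53 ef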
Here are some docs that will probably be helpful: http://www.nongnu.org/ext2-doc/ext2.html

How to tell whether two NFS mounts are on the same remote filesystem?

My Linux-based system displays statistics for NFS-mounted filesystems, something like this:
Remote Path               Mounted-on   Stats
server1:/some/path/name   /path1       100 GB free
server2:/other/path/name  /path2       100 GB free
                          Total:       200 GB free
That works fine. The problem is when the same filesystem on the NFS server has been mounted twice on my client:
Remote Path                Mounted-on   Stats
server1:/some/path/name    /path1       100 GB free
server1:/some/path/name2   /path2       100 GB free
                           Total:       200 GB free
server1's /some/path/name and /some/path/name2 are actually on the same filesystem, which has 100 GB free, but I erroneously add them up and report 200 GB free.
Is there any way to detect that they're on the same partition?
Approaches that won't work:
"Use statfs()": statfs() returns a struct statfs, which has a "file system ID" field, f_fsid. Unfortunately it's undefined and gets zeroed out over NFS.
"Don't mount the same partion multiple times." This is outside of my control.
"Use a heuristic based on available space." The method has to definitively work. Also, statfs() caches its output so it would be difficult to get this right in the face of large data movement.
If there's no solution, I'll have to generate a config file in every potential mount point on the server side, but it would be a lot nicer if there were some clean way to avoid that.
Thanks!
I wonder if "stat -c %d /mountpoint" does what you want (I cannot test it right now)?
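For example (mount points taken from the question; whether the numbers match for two NFS mounts of the same remote file system depends on the client sharing a superblock between them, so this needs testing):

stat -c %d /path1    # prints the client-side device number for the first mount
stat -c %d /path2    # the same number would suggest the same file system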
You probably want to read the remote system's shared file systems using:
showmount -e server
That will give you the real paths that are being shared. When walking the mounts from the remote system, prune them to the common root and use that to determine whether the mount points come from the same underlying file system. For example, if showmount -e server1 shows that /some/path is the export, then both /some/path/name and /some/path/name2 prune to the same root and are on the same underlying file system.
This doesn't help you in the case that the file systems are separately shared from the same underlying file system.
You could add in a heuristic of checking for the overall file system size and space available, and assuming that if they're the same, and from the same remote server that it's on the same partition mapped to the shortest common path of the mount devices.
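A sketch of that heuristic (mount points taken from the question; this is an assumption-laden comparison, not a definitive test):

df -P /path1 /path2
# if the size, used and available columns match, and both mounts point at the
# same remote server, assume the same underlying file system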
None of these help if you share a loopback-mounted file system that looks completely different in form from the others.
Nor do they help in the case of a server that can be addressed by different names and addresses.
