Linux boot process -- initramfs & root (/)

I have some questions about the Linux boot process. The initramfs is the first-stage root filesystem that gets loaded.
The init process inside the initramfs is responsible for mounting the actual root filesystem from the hard disk onto the / directory.
Now my question is: where is the / directory that init (the init process of the initramfs) uses to mount the actual root partition? Is it in RAM or on the hard disk?
Also, once the actual root partition is mounted, what happens to the initramfs?
If the initramfs is deleted from RAM, what happens to the / folder it created?
Can someone please explain how this magic works?
//Allan

What /sbin/init (of the initramfs) does is load the necessary filesystem modules. It then mounts the targeted real rootfs, switches from the initramfs to the real rootfs, and from that point on / is on the hard disk. / itself was created when you installed the system and formatted the hard drive. Note that reading a filesystem's contents requires the matching module to be loaded first: if your / is an ext3 partition, ext3.ko will be loaded, and so on.
Answer to the second question: after loading the required filesystem modules, it switches from the initramfs's init to the real rootfs's init, the usual boot process starts, and the initramfs is removed from memory. For an initramfs this switch is done with switch_root (pivot_root is what the older initrd mechanism uses).
Answer to the third: the initramfs doesn't create any directory; the bootloader just loads the existing initramfs image into RAM.
So, in short, loading the initramfs or the rootfs isn't about creating directories; it's about loading existing filesystem images. Right after boot, the kernel uses the initramfs to load the modules it needs so that it can read the real filesystem. Hope it helps!
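The hand-over described above can be sketched as a minimal initramfs /init script. The device name /dev/sda1, the ext3 module, and the /mnt/root staging directory are illustrative assumptions; the snippet below only writes the script to a file and syntax-checks it, so it is safe to run anywhere:

```shell
# Minimal initramfs /init sketch (illustrative; device, module and paths
# are assumptions). Written to a temp file and syntax-checked only, so this
# snippet itself touches nothing on your system.
cat > /tmp/init-sketch <<'EOF'
#!/bin/sh
# Load the module needed to read the real root filesystem
modprobe ext3
# Mount the real root at a staging directory inside the initramfs
mkdir -p /mnt/root
mount -o ro /dev/sda1 /mnt/root
# Make /mnt/root the new /, free the initramfs contents from RAM,
# and exec the real init -- this is the switch_root hand-over
exec switch_root /mnt/root /sbin/init
EOF
sh -n /tmp/init-sketch && echo "syntax OK"
```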

With initrd there are two options:
Using pivot_root to rotate the final filesystem into position, or
Emptying the root and mounting the final filesystem over it.

Related

What should I do after the "usebackuproot" mount option works on BTRFS?

After changing a BCache cache device, I was unable to mount my BTRFS filesystem without the "usebackuproot" option; I suspect that there is corruption without any disk failures. I've tried recreating the previous caching setup but it doesn't seem to help.
So the question is, what now? Both btrfs rescue chunk-recover and btrfs check failed, but with the "usebackuproot" option I can mount it r/w, the data seems fine, and btrfs rescue super-recover reports no issues. I'm currently performing a scrub operation but it will take several more hours.
Can I trust the data stored on that filesystem? I made a read-only snapshot shortly before the corruption occurred, but still within the same filesystem; can I trust that? Will btrfs scrub or any other operation truly check whether any of my files are damaged? Should I just add "usebackuproot" to my /etc/fstab (and update-initramfs) and call it a day?
I don't know if this will work for everyone, but I saved my filesystem with the following steps*:
Mount the filesystem with mount -o usebackuproot†
Scrub the mounted filesystem with btrfs scrub
Unmount (or remount as ro) the filesystem and run btrfs check --mode=lowmem‡
*These steps assume that you're unable to mount the filesystem normally and that btrfs check has failed. Otherwise, try that first.
†If this step fails, try running btrfs rescue super-recover, and if that alone doesn't fix it, btrfs rescue chunk-recover.
‡This command will not repair your filesystem if problems are found. Without --mode=lowmem, btrfs check is very memory intensive and will be killed by the kernel if run from a live image. If problems are found, make or use a separate installation to run btrfs check --repair.
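Putting the steps together, a sketch of the full sequence. The device /dev/sdX1 and the /mnt mount point are placeholders, and the snippet only writes the commands to a file and syntax-checks them, so nothing is executed against a real disk:

```shell
# Recovery sequence sketch; /dev/sdX1 and /mnt are placeholder assumptions.
# Only written to a file and syntax-checked here -- do not run blindly.
cat > /tmp/btrfs-rescue <<'EOF'
#!/bin/sh
mount -o usebackuproot /dev/sdX1 /mnt   # step 1
btrfs scrub start -B /mnt               # step 2: scrub in the foreground
umount /mnt                             # step 3: unmount (or remount ro)...
btrfs check --mode=lowmem /dev/sdX1     # ...then run the read-only check
EOF
sh -n /tmp/btrfs-rescue && echo "syntax OK"
```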

How to fix problem with zfs mount after upgrade to 12.0-RELEASE?

So I had to upgrade my system from 11.1 to 12.0, and now the system does not boot. It stops on the error: Trying mount root zfs - Error 2 unknown filesystem.
And I no longer have the old kernel that was known to work.
So how do I fix the mount problem?
I had tried to boot with the old kernel, but after one of the freebsd-update upgrade attempts only the new kernel was left.
Expected: no problems after the upgrade.
Actual: cannot load the system, with Error 2 - unknown filesystem.
P.S.
I found that the /boot/kernel folder does not contain the opensolaris.ko module.
How do I copy this module to the /boot partition of the system from a LiveCD (the file exists on the LiveCD)?
Considering you have a FreeBSD USB stick ready... you can import the pool into the live environment and then mount individual datasets manually.
Assuming "zroot" is your pool name:
# mount -urw /
# zpool import -fR /mnt zroot
# zfs mount zroot/ROOT/default
# zfs mount -a    # in case you want the remaining datasets mounted
# cd /mnt
Now do whatever you want...
You can also roll back to the last working snapshot (if there is one).
In case your system is encrypted, you need to decrypt it first.
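For the missing opensolaris.ko from the question, the copy can be sketched like this. It assumes the pool is named zroot and gets imported at /mnt as above, and that the LiveCD's module matches the installed kernel; the snippet only writes the commands to a file and syntax-checks them:

```shell
# Sketch: copy the module from the live environment into the installed system.
# Pool name and paths are assumptions; syntax-checked only, not executed.
cat > /tmp/copy-module <<'EOF'
#!/bin/sh
zpool import -fR /mnt zroot
zfs mount zroot/ROOT/default
cp /boot/kernel/opensolaris.ko /mnt/boot/kernel/
zpool export zroot
EOF
sh -n /tmp/copy-module && echo "syntax OK"
```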

get execution path without /proc mounted

I have a shared hosting provider who does not mount /proc for security reasons.
I want to execute a binary written in Go which needs the path it was started from. Go determines this by calling readlink on the virtual link /proc/self/exe
(see source https://github.com/golang/go/blob/master/src/os/executable_procfs.go).
But this link can't be found, because /proc is not mounted.
os.Args[0] is not an option, because the file can be called via "./app".
Is there another option to get the execution path? Thanks for any help!

Linux kernel can't mount /dev file system

I'm building a custom Linux image, using an unmodified Linux kernel 2.6.32.65.
The kernel boots just fine until it reaches this:
EXT2-fs warning: mounting unchecked fs, running e2fsck is recommended
VFS: Mounted root (ext2 filesystem) on device 3:1.
Freeing unused kernel memory: 304k freed
init: Unable to mount /dev filesystem: No such device
init: ureadahead main process (983) terminated with status 5
init: console-setup main process (1052) terminated with status 1
I tried the solutions suggested for a similar error (although the error is not exactly the same), but no luck. I tried multiple "reference" .config files. I have been googling for a while but I can't find anything describing the same problem.
I'm running this custom image on the gem5 simulator, with a file system from ubuntu-core and a clean kernel. Earlier in the output the kernel shows this:
hda: max request size: 128KiB
hda: 16514064 sectors (8455 MB), CHS=16383/16/63
hda: hda1
So the kernel is able to see the partitions just fine. I don't think this is caused by something in the file system. Maybe the initrd? Or the kernel itself? How can I fix it?
1.) The problem is not in devfs; the issue seems to be the console setup.
2.) This is an init issue, not a Linux kernel issue.
3.) Try passing /bin/sh instead of init on the kernel command line.
I had the same problem with a custom-built embedded Linux.
Check that you have devtmpfs enabled in your kernel .config:
# core filesystems
CONFIG_PROC_FS=y
CONFIG_SYSFS=y
## devfs
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
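A quick way to verify those options in a kernel .config is to grep for them. A sample fragment is generated below so the snippet is self-contained; point grep at your real .config instead:

```shell
# Create a sample .config fragment (stand-in for your real kernel .config)
cat > /tmp/sample.config <<'EOF'
CONFIG_PROC_FS=y
CONFIG_SYSFS=y
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
EOF
# Count the devtmpfs-related options that are set
grep -c 'CONFIG_DEVTMPFS' /tmp/sample.config   # → 2
```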

Solutions to resize root partition on live mounted system

I'm writing a Chef recipe to automate setting up software RAID 1 on an existing system. The basic procedure is:
Clear partition table on new disk (/dev/sdb)
Add new partitions, and set them to RAID using parted (sdb1 for /boot and sdb2 with LVM for /)
Create a degraded RAID with /dev/sdb using mdadm --create ... missing
pvcreate /dev/md1 && vgextend VolGroup /dev/md1
pvmove /dev/sda2 /dev/md1
vgreduce VolGroup /dev/sda2 && pvremove /dev/sda2
...
...
I'm stuck on no. 5. With 2 disks of the same size I always get an error:
Insufficient free space: 10114 extents needed, but only 10106 available
Unable to allocate mirror extents for pvmove0.
Failed to convert pvmove LV to mirrored
I think it's because when I do the mdadm --create, it adds extra metadata to the disk, so the disk ends up with slightly fewer physical extents.
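That explanation is consistent with the numbers in the error: assuming LVM's default 4 MiB extent size (an assumption about this setup), the 8-extent shortfall works out to a few tens of MiB, roughly what an mdadm superblock plus alignment costs:

```shell
# Shortfall in extents from the error message, converted to MiB
# (4 MiB is LVM's default extent size -- an assumption about this setup)
echo $(( (10114 - 10106) * 4 ))   # → 32
```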
To remedy the issue, one would normally reboot the system off a live distro and:
e2fsck -f /dev/VolGroup/lv_root
lvreduce -L -0.5G --resizefs ...
pvresize --setphysicalvolumesize ...G /dev/sda2
etc etc
reboot
and continue with step no. 5 above.
I can't do that with Chef, as it can't handle rebooting into a live distro and continuing where it left off. I understand that this obviously wouldn't be idempotent.
So my requirement is to be able to lvreduce (somehow) on the live system, without using a live distro CD.
Anyone out there have any ideas on how this can be accomplished?
Maybe?:
Mount a remote filesystem as root and remount the current root elsewhere
Remount the root filesystem as read-only (but I don't know how that's possible, as you can't unmount the live system in the first place)
Or find another way to reboot into a live distro, script the resize, reboot back, and continue the Chef run (not sure if this is even possible)
Ideas?
I'm quite unsure Chef is the right tool for this.
Not a definitive solution, but what I would do in this case:
Create a live system with Chef and a cookbook
Boot into it
Run chef-solo with the recipe doing the work (which should work, as the physical disks are unmounted at first)
The best way would be to write cookbooks that can rebuild the target boxes from scratch; once that's done, you can reinstall the target entirely with the correct partitioning at system install time, and let Chef rebuild your application stack afterwards.
