Solutions to resize root partition on live mounted system - linux

I'm writing a Chef recipe to automate setting up software RAID 1 on an existing system with a second disk. The basic procedure is:
1. Clear the partition table on the new disk (/dev/sdb)
2. Add new partitions and set them to raid using parted (sdb1 for /boot and sdb2 with LVM for /)
3. Create a degraded RAID with /dev/sdb using mdadm --create ... missing
4. pvcreate /dev/md1 && vgextend VolGroup /dev/md1
5. pvmove /dev/sda2 /dev/md1
6. vgreduce VolGroup /dev/sda2 && pvremove /dev/sda2
7. ...
8. ...
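For concreteness, here is a sketch of what steps 1-3 might look like; the partition boundaries and the /dev/md0 array name are assumptions, not necessarily what the recipe uses:

# step 1: wipe the partition table on the new disk
parted -s /dev/sdb mklabel msdos
# step 2: create the partitions and flag them for RAID
parted -s /dev/sdb mkpart primary 1MiB 513MiB    # sdb1 for /boot
parted -s /dev/sdb mkpart primary 513MiB 100%    # sdb2 for LVM /
parted -s /dev/sdb set 1 raid on
parted -s /dev/sdb set 2 raid on
# step 3: build degraded mirrors with the second member missing
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 missing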
I'm stuck on no. 5. With 2 disks of the same size I always get an error:
Insufficient free space: 10114 extents needed, but only 10106 available
Unable to allocate mirror extents for pvmove0.
Failed to convert pvmove LV to mirrored
I think it's because when I do the mdadm --create, it writes RAID metadata to the disk, so the new physical volume ends up with slightly fewer physical extents than the original.
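You can confirm the mismatch by comparing the extent counts of the two physical volumes, e.g.:

pvdisplay /dev/sda2 /dev/md1 | grep -E 'PV Name|Total PE'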
To remedy the issue, one would normally reboot the system off a live distro and:
e2fsck -f /dev/VolGroup/lv_root
lvreduce -L -0.5G --resizefs ...
pvresize --setphysicalvolumesize ...G /dev/sda2
etc etc
reboot
and continue with step no. 5 above.
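Spelled out, that offline sequence is roughly the following; the 0.5G margin is taken from the step above, and the new physical volume size is left as a placeholder because it depends on the actual extent counts:

e2fsck -f /dev/VolGroup/lv_root
lvreduce -L -0.5G --resizefs /dev/VolGroup/lv_root
pvresize --setphysicalvolumesize <new-size>G /dev/sda2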
I can't do that with Chef as it can't handle the rebooting onto a live distro and continuing where it left off. I understand that this obviously wouldn't be idempotent.
So my requirement is to be able to lvreduce (somehow) on the live system without using a live distro CD.
Anyone out there have any ideas on how this can be accomplished?
Maybe?:
Mount a remote filesystem as root and remount current root elsewhere
Remount the root filesystem as read-only (but I don't know how that's possible as you can't unmount the live system in the first place).
Or another solution to somehow reboot into a live distro, script the resize, reboot back and continue the Chef run (not sure if this is even possible).
Ideas?

I'm quite unsure Chef is the right tool for this.
Not a definitive solution, but what I would do in this case:
Create a live system with Chef and a cookbook on it
Boot the target machine from it
Run Chef as chef-solo with the recipe doing the work (which should work, as the physical disks are unmounted at that point)
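A minimal chef-solo invocation from such a live environment could look like this; the paths and file names are assumptions:

# solo.rb points chef-solo at the cookbooks shipped on the live image;
# node.json holds the run list (cookbook name is up to you)
chef-solo -c /root/solo.rb -j /root/node.json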
The best approach would be to write cookbooks that can rebuild the target boxes from scratch; once that's done, you can reinstall the target entirely with the correct partitioning at system install time and let Chef rebuild your application stack after that.

Related

What should I do after the "usebackuproot" mount option works on BTRFS?

After changing a BCache cache device, I was unable to mount my BTRFS filesystem without the "usebackuproot" option; I suspect that there is corruption without any disk failures. I've tried recreating the previous caching setup but it doesn't seem to help.
So the question is, what now? Both btrfs rescue chunk-recover and btrfs check failed, but with the "usebackuproot" option I can mount it r/w, the data seems fine, and btrfs rescue super-recover reports no issues. I'm currently performing a scrub operation but it will take several more hours.
Can I trust the data stored on that filesystem? I made a read-only snapshot shortly before the corruption occurred, but still within the same filesystem; can I trust that? Will btrfs scrub or any other operation truly check whether any of my files are damaged? Should I just add "usebackuproot" to my /etc/fstab (and update-initramfs) and call it a day?
I don't know if this will work for everyone, but I saved my filesystem with the following steps*:
Mount the filesystem with mount -o usebackuproot†
Scrub the mounted filesystem with btrfs scrub
Unmount (or remount as ro) the filesystem and run btrfs check --mode=lowmem‡
*These steps assume that you're unable to mount the filesystem normally and that btrfs check has failed. Otherwise, try that first.
†If this step fails, try running btrfs rescue super-recover, and if that alone doesn't fix it, btrfs rescue chunk-recover.
‡This command will not fix your filesystem if problems are found; without --mode=lowmem the check is very memory intensive and will be killed by the kernel if run from a live image. If problems are found, make or use a separate installation to run btrfs check --repair.
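Put together, the sequence looks roughly like this; the device and mount point are assumptions:

mount -o usebackuproot /dev/sdX2 /mnt
btrfs scrub start -B /mnt        # -B waits in the foreground until the scrub finishes
umount /mnt
btrfs check --mode=lowmem /dev/sdX2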

How to fix problem with zfs mount after upgrade to 12.0-RELEASE?

So I had to upgrade my system from 11.1 to 12.0 and now the system does not boot. It stops on the error: Trying mount root zfs - Error 2 unknown filesystem.
And I no longer have the old kernel that was known to work.
So how do I fix the mount problem?
I had tried to boot with the old kernel, but after one of the freebsd-update upgrade attempts only the new kernel was left.
Expected: no problems after the upgrade.
Actual: the system cannot load, failing with Error 2 - unknown filesystem.
P.S.
I found that the /boot/kernel folder does not contain the opensolaris.ko module.
How can I copy this module from the LiveCD (where the file exists) to the /boot partition on the system?
Considering you have a FreeBSD USB stick ready... you can import the pool into a live environment and then mount individual datasets manually.
Considering "zroot" is your pool name
# mount -urw /
# zpool import -fR /mnt zroot
# zfs mount zroot/ROOT/default
# zfs mount -a        (in case you want all the datasets mounted)
# cd /mnt
Now do whatever you want...
You can also roll back to the last working snapshot (if there is one).
If your system is encrypted, you need to decrypt it first.
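If the missing opensolaris.ko from the P.S. is the actual culprit, you can copy it over from the live environment once the pool is imported as above; the path assumes the root dataset is mounted under /mnt:

# cp /boot/kernel/opensolaris.ko /mnt/boot/kernel/

Then export the pool with zpool export zroot before rebooting into the installed system.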

AWS EC2: Moving /var to EBS

I've been banging my head against a wall for the last 5 hours or so.
I have a brand new CentOS 6 installation with Plesk. Once the machine is booted up, I'm trying to move the /var folder to an attached EBS volume (/dev/xvdj):
#copy original /var to /dev/xvdj
mkdir /mnt/new
mount /dev/xvdj /mnt/new
cd /var
cp -Rax * /mnt/new
cd /
mv var var.old
#mount EBS as new /var
umount /dev/xvdj
mkdir /var
mount /dev/xvdj /var
I know prior to moving /var I'm supposed to boot the instance into runlevel 1 (single user) to prevent anything writing and reading from /var. However, this locks me out from the instance which I learned the hard way.
I tried to manually stop MySQL, the web server and the mail server, but after I move /var I can't bring these services back up; they just report [FAILED] when I attempt to start them. They also don't write anything into /var/log. At first glance the permissions of the directories inside /var look alright, and the symlinks exist too.
Any ideas?
This is a very common requirement for corporate clients; having a separate partition helps a lot when you need to increase the volume size at some point.
Most people get stuck with SSH connection problems after doing the partitioning, which is when they fall back to a more generalized approach.
I have written a blog post about this with a detailed step-by-step procedure for performing such an operation on AWS EBS:
Steps to create separate /var partition on AWS EBS volume
Also, if you choose to do the partitioning using LVM, here is another post with a detailed step-by-step procedure with screenshots:
Create root swap and LVM partition on AWS EBS volume
Hope this helps! :)
The best way to do that is probably offline. Detach your EBS disks from the first instance, attach them to another one, mount them and make the changes, including the fstab on the root EBS. Then detach them, attach them again to the original instance and boot. That's how I would do it.
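For the fstab part, the entry on the root EBS would look something like this; the device name is taken from the question and the filesystem type is an assumption:

# /etc/fstab entry on the root volume
/dev/xvdj    /var    ext4    defaults,nofail    0 2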

What is the /dev/mapper/vg_root-lv_root directory? Why does my Docker rely on this directory?

df lists:
Filesystem                   1K-blocks       Used  Available Use% Mounted on
/dev/mapper/vg_root-lv_root   38929872   36450548     503672  99% /
tmpfs                          4025936          0    4025936   0% /dev/shm
/dev/sda1                        53871      39618     201146  17% /boot
xxx.in:/vol/software/arch*   653053184  408978688  244074496  63% /usr/software
xxxx:/users003/gopir        3435973888 1024638080 2411335808  30% /u/gopir
I am not able to format it. In this case, my /dev/mapper/vg_root-lv_root is full, but I have space on the other mounts. How should I make use of that space? Why does my Docker depend on this space rather than the space I have free? Because of this, I get write errors. I referred to this, but it doesn't help me. And what is this directory?
With the command below, I managed to find which folder this device maps to.
tree -LP 1 /dev/mapper/vg_os-lv_root
After that I was able to move the folder to a different drive (which has more space); in my case it was /var/lib/docker.
You might have to install the tree package using yum install tree if it is not available by default.
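A minimal sketch of relocating /var/lib/docker in that way; the target path /data/docker is an assumption:

# stop Docker so nothing writes to the directory while it is moved
service docker stop
# copy the data to the larger drive, keeping the old copy until verified
cp -a /var/lib/docker /data/docker
mv /var/lib/docker /var/lib/docker.old
ln -s /data/docker /var/lib/docker
service docker start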
Out of the box, Docker has a design flaw: it uses the aufs storage driver by default. The better alternative is to use the devicemapper storage driver with LVM configured.
https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#write-examples
In your case it seems you already have device mapper in place, so I suggest you create an extra loop-lvm configuration and add it to your existing volume group, based on the documentation above.
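For reference, a sketch of selecting the devicemapper driver through the daemon configuration, along the lines of the linked documentation; the thin pool name is an assumption and existing images are not migrated automatically:

# /etc/docker/daemon.json
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.thinpooldev=/dev/mapper/docker-thinpool"
  ]
}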

Linux boot process -- initramfs & root (/)

I have some questions related to the Linux boot process. The initramfs is the first-stage root file system that gets loaded.
The init process inside the initramfs is responsible for mounting the actual root file system from the hard disk onto the / directory.
Now my question is: where is the / directory that init (the init process of the initramfs) creates to mount the actual root partition? Is it in RAM or on the hard disk?
Also, once the actual root partition is mounted, what happens to the initramfs?
If the initramfs is deleted from RAM, what happens to the / folder it created?
Can someone explain how this magic works?
//Allan
What /sbin/init (of the initramfs) does is load the filesystems and the necessary modules. Then it tries to load the targeted real "rootfs" and switches from the initramfs to the real rootfs, so "/" is on the hard disk. "/" was created when you installed the system and formatted the hard drive. Note that this is about reading the filesystem's contents, so loading the required module first is a prerequisite: if "/" is an ext3 partition, ext3.ko will be loaded, and so on.
Answer to the second question: after loading the required filesystem modules, it switches from the initramfs's init to the real rootfs's init, the usual boot process starts, and the initramfs is removed from memory. This switch is done through pivot_root().
Answer to the third: the initramfs doesn't create any new directory; the kernel just loads the existing initramfs.img image into RAM.
So, in short, loading the initramfs or the rootfs isn't about creating directories, it's about loading existing filesystem images. Right after boot, the initramfs is used to load the filesystem modules that are needed so the real filesystem can be read. Hope it helps!
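As a rough illustration, a minimal initramfs /init script does something like the following; the device and filesystem names are assumptions, and busybox's switch_root is shown here performing the hand-off described above via pivot_root():

#!/bin/sh
# mount the pseudo-filesystems the tools below rely on
mount -t proc none /proc
mount -t sysfs none /sys
# load the module needed to read the real root filesystem
modprobe ext3
# mount the real root read-only under a temporary directory
mkdir -p /newroot
mount -o ro /dev/sda2 /newroot
# discard the initramfs contents and hand control to the real init
exec switch_root /newroot /sbin/init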
With initrd there are two options:
Using pivot_root to rotate the final filesystem into position, or
Emptying the root and mounting the final filesystem over it.
More info can be found here.
