Whenever I reboot my FreeBSD system, I have to log on to one of my jails and manually mount a filesystem with zfs mount. The dataset has the jailed ZFS property set. The jail is managed with ezjail, but the jails themselves are all on a UFS partition. The dataset is listed in the ezjail config file for the jail.
The problem is that the dataset is only attached to the jail after the rc script has finished. So the zfs rc script inside the jail doesn't find the dataset and thus cannot mount it.
Once the rc.d/jail script has an understanding of datasets, the dataset could be attached after the jail is created but before its startup commands are executed.
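For what it's worth, a minimal host-side workaround sketch, assuming the jail is named myjail and the dataset is tank/jaildata (both placeholder names) and that the jail already has the needed allow.mount / enforce_statfs settings; run these from the host after the jail has started, e.g. from /etc/rc.local or a post-start hook:

zfs jail myjail tank/jaildata
jexec myjail zfs mount tank/jaildata

If ezjail already attaches the dataset for you, only the second command should be needed.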
I run a Linux VM on VirtualBox and just made a mistake by moving the contents of /usr to a different partition mounted as /u01. My intention was to free up space on the / file system, but I realized I should have used cp instead of mv. It's not possible to run any command now. Is there any way I can recover the system?
Fixed by booting from an ISO image, mounting the root filesystem, and restoring the contents back to the original partition.
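For reference, a rough sketch of that recovery from the rescue ISO, with the device names (/dev/sda1 for the VM's root, /dev/sdb1 for the partition that was mounted as /u01) being assumptions for your setup:

mkdir -p /mnt/root /mnt/u01
mount /dev/sda1 /mnt/root        # the VM's root filesystem
mount /dev/sdb1 /mnt/u01         # the partition the files were moved to
cp -a /mnt/u01/. /mnt/root/usr/  # copy everything back under /usr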
I'm building an LFS (Linux From Scratch) system in a VM, and so far I've managed to get a workable desktop system booting from a known device, /dev/sda1 in my case. I'm now trying to make a live system that boots from an ISO image. Instead of using /dev/sr0 as the root, which I've already established is possible (and, since it's more likely to be used from a USB stick than an actual CD-ROM, is too inflexible), I've set my mind on booting into an initrd root. The idea is to use that as the system's root instead of using it as a temporary root to load the "real" root; since it's already in memory, it saves me the trouble of setting up a tmpfs root, copying all the files, and switching to it.
I had previously been experimenting with a squashfs image, as I had seen that Ubuntu seemed to use that and it has what I needed: a small root that loads faster and uses less memory (xz is SSLLOOWW to extract and gzip is slow to load). At first I was having trouble booting it, so I switched to the cpio-based initrd. After some initial trouble due to missing files in the archive, I did manage to boot it.
I left that aside for the time being (around a month ago) to do other tasks on the system. I lost the original GRUB2 settings and kernel config, so I went about doing it again, but I've been running into a brick wall. I'm hoping someone here might know what I'm missing.
When I boot up I never see any message about the loading of the initrd file; it goes straight into the loading, uncompressing, and booting of the kernel. And this ends in a kernel panic with the message:
VFS: cannot open root device "(null)" or unknown-block (1,0): error -6
Please append the correct "root=" boot option; here are the available partitions:
No partitions are shown, and following that is the "kernel panic" message, just repeating the first line. If I use the "rootfstype=ramfs" boot option I get:
VFS: mounted root (ramfs filesystem) readonly on device 0:15.
devtmpfs: error mounting -2
Essentially, it's mounting an EMPTY ramfs file system as root, so mounting the devtmpfs fails because the /dev entry doesn't exist. But I'm certain I used that boot option before.
Here's my GRUB 2 config:
menuentry = "LFS (inird test)" {
linux /boot/kernel/initrd/linux ro rdinit=/etc/init
initrd /boot/kernel/initrd/root.cpio.gz
}
Yes, the /boot/kernel/initrd/ directory exists, linux is the kernel (the bzImage file produced by compiling the kernel), and root.cpio.gz is my compressed initrd root cpio archive.
Here's my kernel's .config file (sorry can't paste it here).
If any more info is needed, don't hesitate to ask. Thank you.
OK, I managed to solve the problem! Apparently, it wasn't the kernel's configuration, GRUB2, or even the bootup sequence. It was the initrd archive itself. Deep in the bowels of the Linux kernel's configuration lay the answer: the archive must be built in cpio's newc format (the -H newc option). The archive I built manually wasn't, so the kernel was ignoring it and just proceeding with the normal boot procedure.
This came about because I managed to stumble across an older script I used to build them and saw all the options in it for cpio. I checked the much more recent script I hastily put together and double-checked the kernel documentation (as well as the init/do_mounts.c and init/initramfs.c files) and realized what was going on. I tried it with the corrections and the system now happily boots into the initrd with no problem! :D
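For anyone hitting the same thing, the rebuild command ends up looking roughly like this (the source directory is just a placeholder for whatever tree becomes the initrd root):

cd /path/to/initrd-root
find . | cpio -o -H newc | gzip > /boot/kernel/initrd/root.cpio.gz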
How can I replace the contents of the rootfs partition while the device is booted up?
I am using a Pine64 (1GB) with a stripped-down Debian image and am stuck writing a factory reset script that will replace all files in the rootfs partition while the device is running. The resident restore file could be a tar or img file.
I have already tried two approaches.
dd the partition from the image to the corresponding partition on the device:
sudo dd if=pine-debian.img skip=*start of rootfs partition* seek=*start of rootfs partition* of=/dev/mmcblk0
Extract the compressed content into the / directory:
sudo tar -C / -zxvf pine-debian.tar.gz
After both approaches, the system cannot recognize any command, not even ls. Any help would be appreciated: how can I replace the filesystem contents while the device is running?
Ideally, you should have two partitions each with a copy of the rootfs. You can write the partition that is currently not in use with dd, and then update the bootloader configuration to point to the just written partition as the root. swupdate supports such a dual-bank scenario, but it only has native support for U-Boot; if you use a different bootloader, you'll have to add a script to perform the swap.
If you really need to overwrite in-place, directly overwriting the partition is not possible because that filesystem is currently in use. Untarring will also fail because some files are currently in use - in particular libc. You could try to add the --unlink-first option to the untar command, but I'm not sure if that works.
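Untested, but that would look something like this, reusing the tar command from the question:

sudo tar --unlink-first -C / -zxvf pine-debian.tar.gz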
Two other options:
Instead of overwriting the full rootfs, use Debian package upgrades. They have pre- and post-install scripts to make the upgrade safe.
Swap to a (temporary, small) in-RAM root filesystem to perform the upgrade. This root filesystem should just contain busybox and the script that performs the upgrade. You can either kill all processes and then do a pivot_root into the temporary rootfs, or you can use kexec --initrd=... to boot into the in-RAM root filesystem.
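As a rough sketch of the kexec route, with the kernel path, initramfs path, and command line all being placeholders for whatever your recovery image actually needs:

sudo kexec -l /boot/vmlinuz --initrd=/root/upgrade-initrd.cpio.gz --command-line="console=ttyS2,115200 rdinit=/upgrade.sh"
sudo kexec -e    # jumps into the loaded kernel immediately, without a clean shutdown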
I am developing applications on a BeagleBone board with the Angstrom Linux distro.
I want to mount the root file system read-only because it is not robust in a readable/writeable configuration across power-offs.
Can you make suggestions about how to mount the root file system read-only?
What are the steps for mounting the root file system read-only and then turning it back to readable/writable?
With these steps I hope to get a more robust file system.
Regards
You would need to edit the boot arguments that you pass to the kernel to use ro instead of rw for mounting the root file system, for example root=/dev/mmcblk0p1 ro. They are modifiable via the U-Boot environment variables.
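For example, from the U-Boot prompt (the console device and root partition below are typical BeagleBone values, so treat them as assumptions; on some images the arguments come from a uEnv.txt file instead):

setenv bootargs 'console=ttyO0,115200n8 root=/dev/mmcblk0p2 ro rootfstype=ext4'
saveenv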
On a similar Angstrom-based system, I got the same "must specify the filesystem type" message.
After trying a few different things, I was able to remount root as ro using:
busybox mount -o remount,ro /
I have to admit I'm not certain why calling busybox directly worked when the mount command (which is a link to busybox) did not, but I didn't have time to dig further.
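To switch back to read/write later, the same remount works in the other direction:

busybox mount -o remount,rw /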
I have a rootfs boot image that I want to test by mounting it on my local file system. How can I do this?
EDIT: The file was a rootfs.img but it turned out I did not have the correct filesystem support in my custom kernel. pjz's answer works once the fs support is there.
Need more info - what kind of image is it?
Is it a file that's a filesystem? If so, you mount it like:
mount -o loop rootfs.img /mnt/rootfs
If it's a subdirectory of your filesystem that you're exporting via NFS, you can simulate the environment you've created by chrooting into it:
chroot /path/to/nfs/rootdir/
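If you're not sure what kind of image you have, the file utility will usually identify it (using the filename from the question):

file rootfs.img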