How do I replace the contents of the rootfs partition while the device is booted?
I am using a Pine64 (1 GB) with a stripped-down Debian install and am stuck writing a factory-reset script that replaces all files in the rootfs partition while the device is running. The resident restore file could be a tar or an img file.
I have already tried two approaches:
dd the rootfs from the image onto the corresponding partition on the device:
sudo dd if=pine-debian.img skip=*start of rootfs partition* seek=*start of rootfs partition* of=/dev/mmcblk0
Extract the compressed contents over the / directory:
sudo tar -C / -zxvf pine-debian.tar.gz
After both approaches, the system cannot recognize any command, not even ls. Any help would be appreciated: how do I replace the filesystem contents while the device is running?
Ideally, you should have two partitions, each holding a copy of the rootfs. You can write the partition that is not currently in use with dd, and then update the bootloader configuration to point to the freshly written partition as the root. swupdate supports such a dual-bank scenario, but it only has native support for U-Boot; if you use a different bootloader, you'll have to add a script to perform the swap.
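For illustration, here is a minimal sketch of such a swap with U-Boot, using fw_setenv from u-boot-tools (the bank partition /dev/mmcblk0p3 and the rootpart variable name are assumptions, and fw_setenv needs a valid /etc/fw_env.config):
sudo dd if=rootfs.img of=/dev/mmcblk0p3 bs=4M conv=fsync   # write the bank that is not mounted
sudo fw_setenv rootpart /dev/mmcblk0p3                     # point U-Boot at the freshly written bank
sudo reboot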
If you really need to overwrite in place, directly overwriting the partition is not possible because that filesystem is currently in use. Untarring will also fail because some files are currently in use, in particular libc. You could try adding the --unlink-first option to the untar command, but I'm not sure whether that works.
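If you want to experiment with it anyway, the command from the question would become:
sudo tar -C / --unlink-first -zxvf pine-debian.tar.gz   # unlink each file before extracting over it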
Two other options:
Instead of overwriting the full rootfs, use Debian package upgrades. They have pre- and post-install scripts to make the upgrade safe.
Swap to a (small, temporary) in-RAM root filesystem to perform the upgrade. This root filesystem need only contain busybox and the script that performs the upgrade. You can either kill all processes and then pivot_root into the temporary rootfs, or use kexec --initrd=... to boot into the in-RAM root filesystem (see the sketch below).
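A minimal sketch of the kexec variant, where the kernel/initrd paths and the upgrade script name are assumptions:
sudo kexec -l /boot/vmlinuz --initrd=/boot/upgrade-initrd.cpio.gz --command-line="rdinit=/upgrade.sh console=ttyS0"   # load the upgrade environment
sudo kexec -e   # boot into it, bypassing the bootloader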
Related
I run a Linux VM on VirtualBox and just made a mistake: I moved the contents of /usr to a different partition mounted as /u01. My intention was to free up space on the / filesystem, but I realized too late that I should have used cp instead of mv. It's not possible to run any command now. Is there any way I can recover the system?
Fixed by booting from an ISO image, mounting the root filesystem, and restoring the contents back to the original partition.
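For anyone hitting the same problem, the recovery boils down to something like this from the live ISO shell (the device names are assumptions; check yours with lsblk):
sudo mount /dev/sda1 /mnt          # the VM's root partition
sudo mount /dev/sda2 /mnt/u01      # the partition the /usr contents were moved to
sudo cp -a /mnt/u01/. /mnt/usr/    # restore the contents of /usr
sudo umount /mnt/u01 /mnt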
I'm using Yocto on Ubuntu 18.04 with the Warrior branch of meta-tegra, trying to integrate the RAUC open-source project for Linux firmware updates.
I've learned that U-Boot has issues writing to ext4 partitions (to update the U-Boot environment) if the ext4 filesystem it is writing to has the metadata_csum feature enabled. Linux cannot mount the root filesystem once that feature is enabled and U-Boot has written to it at all.
Here are some posts on that:
https://patchwork.ozlabs.org/patch/818337/
http://u-boot.10912.n7.nabble.com/PATCH-1-1-fs-ext4-do-not-write-on-filesystem-with-metadata-csum-feature-td362715.html
I proved that this is the case by mounting the resulting SD card image from Yocto on Ubuntu and running the following command to disable metadata_csum:
sudo tune2fs -O ^metadata_csum /dev/sdb1
tune2fs 1.44.1 (24-Mar-2018)
Disabling checksums could take some time.
Proceed anyway (or wait 5 seconds to proceed) ? (y,N) y
After running that command, U-Boot can read and write its environment at will and Linux can mount the root filesystem.
I am now trying to figure out how to disable the checksums at image creation time with Yocto. Where/how can I add this so that the image is produced with metadata_csum already disabled?
I briefly looked over meta-tegra, and I think it uses the ext4 root filesystem image created through image_types.bbclass. You can add parameters to mkfs.ext4 through EXTRA_IMAGECMD, so it should be possible to create the filesystem with metadata_csum disabled in the first place instead of turning it off afterwards.
Try
EXTRA_IMAGECMD_ext4_append = " -O ^metadata_csum"
in your local.conf. Note that EXTRA_IMAGECMD takes the image type as an override, hence the _ext4 in the variable name.
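To double-check the result without booting, you can inspect the superblock of the generated image; the deploy path below is an assumption, adjust it for your machine and image name:
dumpe2fs -h tmp/deploy/images/your-machine/your-image.ext4 | grep 'Filesystem features'   # metadata_csum should no longer be listed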
I'm building an LFS (Linux From Scratch) system in a VM, and so far I've managed to get a workable desktop system booting from a known device, /dev/sda1 in my case. I'm now trying to make a live system that boots from an ISO image. Instead of using /dev/sr0 as the root, which I've already established is possible (and, since it's more likely to be used from a USB stick than an actual CD-ROM, is too inflexible), I've set my mind on booting into an initrd root. The idea is to use that as the system's root instead of as a temporary root from which to load the "real" root; since it's already in memory, it saves me the trouble of setting up a tmpfs root, copying all the files, and switching to it.
I had previously been experimenting with a squashfs image, as I had seen that Ubuntu seemed to use that, and it has what I needed: a small root that is fast to load and uses less memory (xz is SSLLOOWW to extract and gzip is slow to load). At first I was having trouble booting it, so I switched to the cpio-based initrd. After some initial trouble due to missing files in the archive, I did manage to boot it.
I left that aside for the time being (around a month ago) to do other tasks on the system. I lost the original GRUB2 settings and kernel config, so I went about doing it again, but I've been running into a brick wall. I'm hoping someone here might know what I'm missing.
When I boot up, I never see any message about the loading of the initrd file; it goes straight into loading, uncompressing, and booting the kernel. This ends in a kernel panic with the message:
VFS: cannot open root device "(null)" or unknown-block (1,0): error -6
Please append the correct "root=" boot option; here are the available partitions:
No partitions are shown, and following that is the "kernel panic" message, just repeating the first line. If I use the "rootfstype=ramfs" boot option I get:
VFS: mounted root (ramfs filesystem) readonly on device 0:15.
devtmpfs: error mounting -2
Essentially, it's mounting an EMPTY ramfs filesystem as root, so mounting the devtmpfs fails because the /dev entry doesn't exist. But I'm certain I used that boot option before.
Here's my GRUB 2 config:
menuentry "LFS (initrd test)" {
    linux /boot/kernel/initrd/linux ro rdinit=/etc/init
    initrd /boot/kernel/initrd/root.cpio.gz
}
Yes, the /boot/kernel/initrd/ directory exists, linux is the kernel (the bzImage file produced by compiling the kernel), and root.cpio.gz is my compressed initrd root cpio archive.
Here's my kernel's .config file (sorry, I can't paste it here).
If any more info is needed, don't hesitate to ask. Thank you.
OK, I managed to solve the problem! Apparently, it wasn't the kernel's configuration, GRUB2, or even the boot-up sequence. It was the initrd archive itself. Deep in the bowels of the Linux kernel's documentation lay the answer: the archive must be built in cpio's newc format (the -H newc option). The one I built manually lacked this option, so the kernel was ignoring the archive and just proceeding with the normal boot procedure.
This came about because I stumbled across an older script I used to build them and saw all the options in it for cpio. I checked the much more recent script I had hastily put together, double-checked the kernel documentation (as well as the init/do_mounts.c and init/initramfs.c files), and realized what was going on. I tried it with the corrections and the system now happily boots into the initrd with no problem! :D
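For reference, the working archive is built the way the kernel documentation describes; the source path below is just my layout:
cd /path/to/initrd-root   # the unpacked root of the initrd
find . | cpio -o -H newc | gzip > /boot/kernel/initrd/root.cpio.gz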
I intend to set some capabilities on binaries included in a Yocto image using "setcap". For some reason, the solutions mentioned here did not work for me:
Linux capabilities with yocto
I have checked that by running "getcap" on my binary within the rootfs creation directory:
getcap ${IMAGE_ROOTFS}/usr/bin/mybinary
does not return anything. Nor do I find the capabilities in the final running sdcard image.
Next I tried the approach using IMAGE_PREPROCESS_COMMAND. I wrapped up setcap commands in small shell functions such as:
my_setcap_function() {
    sudo setcap cap_ipc_owner+ep ${IMAGE_ROOTFS}/usr/bin/mybinary
}
and append the function names to IMAGE_PREPROCESS_COMMAND. This works to the extent that running getcap on my binary within the ${IMAGE_ROOTFS} directory now does show the correct caps set. However, I still do not get the capabilities in the final running sdcard image.
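For reference, the hookup in the image recipe looks something like this (my_setcap_function is the function shown above):
IMAGE_PREPROCESS_COMMAND += "my_setcap_function; "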
Also, if I mount the rootfs ext4 image (which is used to create the final sdcard image) on a directory using -o loop, I do not see the capabilities on my binary.
It seems to me that the capabilities somehow get lost when the ext4 image is created using mkfs.ext4.
I had to prefix setcap with sudo because otherwise it complains: "unable to set CAP_SETFCAP effective capability: Operation not permitted". My understanding was that IMAGE_PREPROCESS_COMMAND commands are run under fakeroot, so this sudo should not be required.
So, to summarize my question:
How can I get the capabilities onto the sdcard image made from the ext4 rootfs image?
I want to use a way that does not require using "sudo".
I am using Yocto Krogoth and currently cannot upgrade that.
Did you really test it on the final image, or in the rootfs folder from the Yocto build?
I ran getcap on the files in the rootfs folder, and nothing was set there.
That's because Yocto uses the pseudo library to intercept chown/chmod calls and track them in an SQLite database (it uses LD_PRELOAD for the interception).
So these attributes are not set on the files in the "rootfs" folder; they are only added at image/rootfs creation time.
You can use setcap in a recipe; you just have to add
DEPENDS = "libcap-native"
to your recipe.
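As an illustration, a minimal sketch of a recipe doing this (the _append syntax matches Krogoth-era Yocto; the binary path is an assumption):
DEPENDS = "libcap-native"

do_install_append() {
    # runs under pseudo, so no sudo is needed; the xattr is recorded and
    # replayed when the rootfs image is assembled
    setcap cap_ipc_owner+ep ${D}${bindir}/mybinary
}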
I'm using Buildroot to create a filesystem for a Raspberry Pi. I have uncompressed the filesystem image onto the root partition of my SD card, but I can't boot the operating system. I get the following errors:
Can't open /dev/null no such file or directory
Can't open /dev/ttyS0 no such file or directory
Which line of the configuration tool should I enable or modify in order to boot the system?
EDIT
I've followed the steps provided by Thomas Petazzoni and used a preconfigured version of Buildroot. The system works now, but I still don't know which option in the kernel configuration tool was causing the problem.
You don't have devtmpfs enabled in your kernel.
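The corresponding kernel options to check (via make menuconfig or directly in .config) are:
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y   # optional: the kernel mounts /dev itself at boot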
Also, you should start by using the raspberrypi_defconfig in Buildroot instead of rolling your own. Do:
make distclean
make raspberrypi_defconfig
make
Then follow the instructions in board/raspberrypi/readme.txt to learn how to use the resulting images.
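For completeness, this configuration typically produces a ready-made SD card image, so writing it usually comes down to a single dd (the target device /dev/sdX is an assumption; verify it with lsblk first):
sudo dd if=output/images/sdcard.img of=/dev/sdX bs=4M conv=fsync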