Set-id bits on an ISO

I'm creating an ISO of a Debian system with:
mkisofs -V "Debian ISO" -cache-inodes -J -l -o file.iso debian-system/
The problem is that when I mount the ISO (mount -o loop), ping and sudo don't work because their setuid bits are not set.
I know that the special bits are cleared by the -r flag. This flag generates "rationalized Rock Ridge directory information", which retains the original file permissions but clears any set-id bits.
But if I don't use -r, file permissions will be the same for all files, as specified by the mount options when the ISO is mounted.
Question: how can I add set-id binaries like ping and sudo to a Linux "live" ISO?

You need to use an alternate file system that supports those permissions.
The way a LiveCD/DVD works is that there is a squashfs file which is mounted, with changes made in RAM.
You could "fake" the same by creating a file full of zeros with dd, making a file system on it with mkfs.ext4, mounting it, and copying the files onto it. Then, on your custom disc, mount that file as a loop device (mount -o loop /path/to/file /mnt/point) and symlink the binaries over.

Related

How to change Clonezilla default menu selection items

I am using clonezilla-live-2.6.1-11-amd64.iso
I would like to change the default selections when booting off the live USB to perform full backups of the whole drive. For example:
on screen "Mount Clonezilla image directory" I would like to change the default from local_dev to use samba_server
on screen "Mount Samba server" I would like to change the default from 192.168.1.1 to 192.168.1.2
on screen "Mount Samba server" account change the default administrator to clonezilla
When I enter the items in /syslinux/syslinux.cfg, e.g.
ocs_repository="smb://clonezilla:password@192.168.1.2/zilla/"
the menus still ask me for the default address of 192.168.1.1 and the username administrator,
so it appears I am not understanding the documentation. Does anyone have an example cfg?
I have delved into customizing "Live ISOs", and CloneZilla specifically, so I will give a general idea of how I would attack this.
Looking at my notes, this is all I had. To enable the SSH daemon I would unpack the ISO, edit the following, and repack the ISO using mksquashfs.
Eg:
Preparing to unpack ISO:
sudo apt-get install -y squashfs-tools
Copy the ISO to /tmp and rename it live.iso
mkdir /tmp/mnt
sudo mount -o loop /tmp/live.iso /tmp/mnt
sudo find /tmp/mnt \( -name '*.squashfs' -o -name "*.SQFS" \) -exec unsquashfs -d /tmp/squashfs-root/ {} \;
sudo umount /tmp/mnt
sudo rm /tmp/mnt -R
cd /tmp/squashfs-root
This leaves you with:
/tmp/live.iso
/tmp/squashfs-root/FilesFromSquashedFS
Make changes:
sudo nano /tmp/squashfs-root/etc/ocs/ocs-live.conf
Scroll to the bottom and add:
ocs_daemon="ssh"
Then Repack ISO:
cd /tmp
sudo mksquashfs /tmp/squashfs-root filesystem.squashfs
sudo rm /tmp/squashfs-root -R
This leaves you with:
/tmp/live.iso
/tmp/filesystem.squashfs
Now use an ISO editing program to insert the new filesystem.squashfs into the original ISO, making sure to use the same name the original squashfs file had; sometimes it has a different extension.
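One hedged way to do that injection from the command line is xorriso; the in-ISO path /live/filesystem.squashfs below is an assumption based on the /live/ layout shown further down, so check the original image first:
# Find the exact squashfs path and name inside the original ISO.
xorriso -indev /tmp/live.iso -find / -name '*.squashfs'
# Write a new ISO with the rebuilt squashfs mapped over the old one, replaying the boot setup from the original.
xorriso -indev /tmp/live.iso -outdev /tmp/live-custom.iso \
  -map /tmp/filesystem.squashfs /live/filesystem.squashfs \
  -boot_image any replay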
The above method is quite "General" but I found some LiveOS creators have scripts for booting the OS, making changes and then creating an ISO from the running OS.
For CloneZilla this is what I have found after a quick google.
https://clonezilla.org/advanced/customized-clonezilla-live.php
Simple Version of that Link:
Create a custom script named custom-ocs (a sample script is at /usr/share/drbl/samples/custom-ocs)
Mount /home/partimag/
Copy script to /home/partimag/ and cd to /home/partimag/
Run the following to generate ISO
ocs-iso -g en_US.UTF-8 -k NONE -s -m ./custom-ocs
For other options, please run ocs-iso -h or ocs-live-dev -h to get more info.
Another link (https://clonezilla.org/related-articles/012_Automated_USB_thumb_drive_using_Custom/Automated_USB_thumb_drive_using_Custom.html) describes a method which suggests to me that if you place a script inside the ISO and point to it via an edited syslinux.cfg (which you could edit using either of the above methods), you can auto-run it that way. The link says to boot the USB and select the first menu option, but I would want it fully automated, so that this option is used even if you do nothing.
Here is the edit to syslinux.cfg that he uses:
kernel /live/vmlinuz1
append initrd=/live/initrd1.img boot=live union=aufs noprompt noprompt ocs_live_run="/live/image/live/custom-ocs" ocs_live_extra_param="" ocs_live_keymap="NONE" ocs_live_batch="yes" ocs_lang="en_US.UTF-8" vga=791 ip=frommedia nolocales
Note: ocs_live_run="/live/image/live/custom-ocs" tells me "run this script after booting", but I haven't tested/messed with CloneZilla in a while.
Personal opinion: I love Parted Magic, but some people don't like that it now has some odd licensing and isn't really free (the old 2013 version can still be found, or you can buy it for about $10). It has CloneZilla built in and also an MKISO script for making an ISO out of the booted/edited live OS, but again, I would generally unpack the ISO using squashfs tools, then repack and inject it back into the ISO.
Here are my links to what I've done customizing "Live ISOs". My final project years ago was a Parted Magic live ISO that booted, started a password-protected VNC session plus SSH, and e-mailed me the DHCP IP address. (I had hit-and-miss results with the e-mail portion, but depending on your setup you could use a static IP or check the router for the DHCP address.)
https://www.freesoftwareservers.com/display/FREES/Customize+LiveISO%27s
You can indeed have your Samba share automatically pre-mounted by using ocs_repository= in your vmlinuz kernel boot arguments.
However, it needs to be in the right boot file.
According to the boot parameters documentation, the relevant file is one of:
/syslinux/isolinux.cfg when booting from CD on an MBR machine
/syslinux/syslinux.cfg when booting from a USB flash drive on an MBR machine
/boot/grub/grub.cfg when booting from a uEFI machine
/tftpboot/pxelinux.cfg/default or similar on your PXE server, when booting from PXE on an MBR machine
/tftpboot/grub/grub.cfg or similar on your PXE server, when booting from a uEFI netboot machine
Depending on your Samba server, you might also need to specify the SMB version to be used. From the same documentation page:
To assign the image repository via URI (Uniform Resource Identifier),
use "ocs_repository". URI supported in Clonezilla live:
[dev|smb|smb1|smb1.0|smb2|smb2.0|smb2.1|smb3|smb3.0|smb3.11|smb3.1.1|ssh|nfs|nfs4|http|https|ram]:[//[user:password@]host[:port]][/]path
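Purely as an illustration, reusing the append line quoted earlier in this thread and the share details from the question (keep your other options, and adjust the smb prefix to smb2/smb3 if your server needs it), the parameter goes on the kernel's append line, e.g. in /syslinux/syslinux.cfg:
kernel /live/vmlinuz1
append initrd=/live/initrd1.img boot=live union=aufs ocs_repository="smb://clonezilla:password@192.168.1.2/zilla/"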

Failure of rsync of multi-user directory with sshfs fuse mount

I use rsync for automatic periodic syncing of the home folder (as the root user) on a Linux server that is used by several people. A service that users need is the ability to mount remote directories through sshfs. However, when there is an sshfs mount, rsync fails, giving the following messages:
rsync: readlink_stat("/home/???/???") failed: Permission denied (13)
IO error encountered -- skipping file deletion
...
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.1]
Because of this error the automated sync does not work as expected, in particular due to the skipped file deletion and the non-zero exit code. The sync is only needed for the file system where home is mounted, so the desired behavior is that the sshfs mounts be ignored. The -x / --one-file-system rsync option does not resolve it.
This problem is clearly explained in https://www.agwa.name/blog/post/how_fuse_can_break_rsync_backups . The follow-up article (https://www.agwa.name/blog/post/easily_running_fuse_in_an_isolated_mount_namespace) proposes a solution, though not an acceptable one, because the fuse mounts are then only visible to the process that created the mount.
I am looking for a solution that does not affect sshfs usability and is transparent for the users.
The problem is that FUSE denies stat access to other users, including root. Rsync requires stat access on all source files and directories specified. When an rsync process owned by another user stats a FUSE mount-point, FUSE denies that process access to the mount-point's attributes, causing rsync to throw the "permission denied" error above. Mauricio Villega's solution works by telling rsync to skip the FUSE mount-points listed by the mount command. Here is another version of Villega's solution that specifies a whitelist of filesystem types using the findmnt command. I chose ext3 and ext4, but you may add other types as needed.
#!/bin/bash
# Which paths to rsync (note the lack of a trailing slash tells rsync to preserve the source path name at the destination).
SOURCES=(
  /home
)
# Which filesystem types are supported (the whitelist).
FSTYPES=(
  ext3
  ext4
)
# Rsync each source.
for SOURCE in "${SOURCES[@]}"; do
  # Build the exclusion list (one "--exclude=PATH" per mount-point whose type is not in the whitelist).
  excludedPaths=$(findmnt --invert --list --noheadings --output TARGET --types "$(IFS=','; echo "${FSTYPES[*]}")")
  printf -v exclusionList -- "--exclude=%s " ${excludedPaths}
  # Rsync.
  rsync --archive ${exclusionList} --hard-links --delete --inplace --one-file-system "${SOURCE}" /backup
done
Note that it builds the exclusion list inside the loop to address a fundamental problem with this solution. That problem is due to rsync'ing from a live system where a user could create new FUSE mount-points while rsync is running. The exclusion list needs to be updated frequently enough to include new FUSE mount-points. You may divide the home directory further by each username by modifying the SOURCES array as shown.
SOURCES=(
/home/user1
/home/user2
)
If you are using LVM, an alternative solution is rsync from an LVM snapshot. An LVM snapshot provides a simple (e.g., no FUSE mount-points) and frozen view of the logical volume it is linked to. The downside is that you must reserve space for the LVM snapshot's copy-on-write (COW) activity. It is crucial that you discard the LVM snapshot after you are done with it; otherwise the LVM snapshot will continue to grow in size as modifications are made. Here is a sample script that uses LVM snapshots. Note that it does not need to build an exclusion list for rsync.
# Create and mount LVM snapshot.
lvcreate --extents 100%FREE --snapshot --name snapRoot /dev/vgSystem/lvRoot
mount -o ro /dev/mapper/snapRoot /root/mnt # Note that only root has access to this mount-point.
# Rsync each source.
for SOURCE in "${SOURCES[@]}"; do
rsync --archive --hard-links --delete --inplace --one-file-system /root/mnt/${SOURCE} /backup
done
# Discard LVM snapshot.
umount /root/mnt
lvremove vgSystem/snapRoot
References:
"How FUSE Can Break Rsync Backups"
This error does not appear if the fuse mount-points are excluded in the rsync command. Since it is an automated sync, the mount command can be used to obtain all fuse mount-points. The output of the mount command may differ depending on the system, but on Debian jessie sshfs mounts appear as USER@HOST:MOUNTED_DIR on /path/to/mount/point type fuse.sshfs (rw,...). A simple way to automate the exclusion of fuse mounts in bash+sed is the following:
SOURCE="/home/"
FUSEEXCLUDE=( $( mount |
sed -rn "
/ type fuse/ {
s|^[^ ]+ on ([^ ]+) type fuse.+|\1|;
/^${SOURCE//\//\\\/}.+/ {
s|^${SOURCE//\//\\\/}| --exclude |;
p;
}
}" ) )
rsync $OPTIONS "${FUSEEXCLUDE[@]}" "$SOURCE" "$TARGET"
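For completeness, a hedged example of the two variables the snippet assumes are already defined; the values below are placeholders that mirror the options used in the other answer:
OPTIONS="--archive --hard-links --delete --inplace --one-file-system"
TARGET="/backup"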

How can I replace my home directory without corrupting my system?

After setting up my Raspberry Pi, I made an image to make reverting to older software states easier. Recently I wanted to do that, so I saved the contents of my /home/pi folder, formatted the SD card and wrote the image onto it.
So far everything worked fine. Then I tried to simply delete the complete /home/pi folder and replace it with the previously saved folder from the old image. Now it seems like all files are there, but it doesn't boot correctly.
At some point it just stops booting. I can then use it normally from the terminal, but the desktop is not starting.
So, how can I replace my home directory the right way so I don't make any damage to the system?
edit:
I just tried to do this again.
sudo cp -a /home/pi/fileserver/backup /home/backup
(I mounted a network drive in fileserver. Since the network share is on Windows, I assume all permissions are already gone here.)
cp -a /home/pi/. /home/original
sudo umount /home/pi/fileserver
rm -r /home/pi/
mv /home/backup /home/pi
sudo chmod -R 755 /home/pi (So far everything still works)
sudo reboot
After the reboot it doesn't boot correctly anymore. When I wait long enough I see errors from the X server.
That's quite a questionable approach to archiving the data. First of all, as you mentioned, Windows will remove the permission bits. Running chmod -R 755 afterwards has very bad consequences, because some programs require very specific access bits on certain files in order to work (SSH keys, for example). Not to mention that making everything executable is bad for security.
Considering your scenario, you may either
a) back everything up into tar or zip archives; this way the permissions will stay intact, or
b) make a virtual disk file which is stored on the shared Windows drive and mounted to /home/pi.
How to do scenario A:
cd /home/pi
tar cvpzf backup.tar.gz .
Copy backup.tar.gz to shared drive
to unpack:
cd /home/pi
tar xpvzf backup.tar.gz
Pros:
One-line backup
Takes small amount of space
Cons:
Packing/unpacking takes time
How to do scenario B:
1) Create a new file to hold the virtual drive volume:
cd /mnt/YourNetworkDriveMountPoint
fallocate -l 500M HomePi.img   # or equivalently: dd if=/dev/zero of=HomePi.img bs=1M count=500
mkfs -t ext3 HomePi.img
2) Mount it to home dir
mount -t auto -o loop HomePi.img /home/pi/
500 means the disk will be 500 megabytes in size
This way your whole Pi home directory will be saved as a file on the Windows shared drive, but all the content will be in ext3, so all permissions are preserved.
I suggest, though, that you keep the current image file on the Pi device itself and the old versions on the shared drive, and just copy files over if you need to switch; otherwise, if all images are on the shared drive, read/write performance will be 100% dependent on network speed.
You can then easily make copies of this file and swap them instantly by unmounting the existing image and mounting a new one, as sketched below.
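A minimal sketch of such a swap, assuming a second image named HomePi-old.img on the same share and that the pi user is logged out (see the cons below):
# Detach the current home image and attach the other one.
sudo umount /home/pi
sudo mount -t auto -o loop /mnt/YourNetworkDriveMountPoint/HomePi-old.img /home/pi/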
Pros:
Easy swap between backup versions
Completely transparent process
Cons:
If current image file is on shared drive, performance will be reduced
It will consume considerably more space because all 500 megs will be preallocated.
Pi user must be logged off during image swap for obvious reasons
Now, as for the issue with the desktop not being displayed, you need to check /var/log/Xorg.0.log for detailed messages. Most likely this is caused by messed-up permissions. I would try to rename/remove your current X settings and cache, which are located somewhere in /home/pi/.config/ (depending on what you're using: XFCE, GNOME, etc.), and let the X server recreate them. But again, before doing this please check Xorg.0.log for the exact messages; maybe there's another error. If you need any further help, please comment on this answer.
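A hedged sketch of those two checks; the exact directory names under .config depend on the desktop environment, so treat them as placeholders:
# Look for error lines in the X server log.
grep -i '(EE)' /var/log/Xorg.0.log
# Move the per-user configuration aside so it gets recreated on the next login.
mv /home/pi/.config /home/pi/.config.bak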

Making modifications to the rootfs (PetaLinux on Zynq)

I've installed PetaLinux 2014.4 on my Zynq board, but the NAND flash is not mounted when I boot the board. I'm wondering if it's possible to change rootfs.cpio by extracting the package, making changes to fstab and so on, and then making a cpio archive again. If yes, is it enough to just run petalinux-build after that?
Thanks :)
If you have access to the ramdisk image file, then yes, you can modify its contents. I assume that your image file is compressed using gzip. Furthermore I assume that you use U-Boot and your compressed ramdisk image has a U-Boot preamble.
First you need to strip the U-Boot header:
dd bs=64 skip=1 if=uramdisk.cpio.gz of=ramdisk.cpio.gz
Next, we decompress:
gunzip ramdisk.cpio.gz
Finally we extract the CPIO archive:
mkdir ramdisk && cd ramdisk
cpio -i -F ../ramdisk.cpio
Either you execute the latter command as root or you change the file ownership back to root before archiving again. This is necessary for your init program to start. After your modifications you can create your image file again:
find . | cpio -o -H newc | gzip -9 > ../ramdisk_new.cpio.gz
cd ..
mkimage -A arm -T ramdisk -C gzip -d ramdisk_new.cpio.gz uramdisk.image.gz
Notice that the mkimage tool is part of U-Boot and is located in the tools directory of its sources.
I am not familiar with PetaLinux so I don't know whether this general answer suits your needs and expectations.
Using the cpio packaging tools is OK, but it needs to be done every time you update the rootfs.
You can also use PetaLinux built-in tool to accomplish this. It doesn't need extra steps once you set it up.
Create the app:
petalinux-create -t apps -n fstab_mount_sd --template install --enable
In the created components/apps/fstab_mount_sd directory, modify the Makefile to append content to the current fstab file, or to replace the original fstab with your version of the file.
Here's an example of the fstab_mount_sd Makefile:
install:
$(TARGETINST) -a "/dev/mmcblk0p1 /media/card auto defaults,sync,noauto 0 0" /etc/fstab
$(TARGETINST) -a means append the following text to the destination file.
Note: commands in makefile should start with Tab. Replace the spaces before $(TARGETINST) in previous code block with a Tab.
You can read the help for the $(TARGETINST) command by going to the PetaLinux install directory and running components/rootfs/targetroot-inst.sh.
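Once the app and its Makefile are in place, the rootfs is regenerated by the normal build step that the question already mentions (exact packaging steps afterwards depend on your board flow):
petalinux-build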
More convenient during development is to use a standard distribution.
PetaLinux can be used to create the kernel and U-Boot files.
Then install a Linux distribution of your choice on the SD card and boot it up.
You can use the standard tools, apt for example, to install packages.

How to mount a QNX partition as read-write only for executing particular lines of code?

I.e. the partition of interest is already mounted read-only. It needs to be mounted read-write only while particular lines of the script execute; after that, the partition should return to its previous read-only state.
The question is about the QNX operating system, and there the correct way to remount the partition read-write is with the command below.
mount -uw /
To remount a partition read-write:
mount /mnt/mountpoint -oremount,rw
and to remount it read-only:
mount /mnt/mountpoint -oremount,ro
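Putting that together for the use case in the question, a hedged sketch using the remount commands above (/mnt/mountpoint is a placeholder for the actual partition):
# Temporarily allow writes...
mount /mnt/mountpoint -oremount,rw
# ...run the lines of the script that need write access here...
# ...then drop back to read-only.
mount /mnt/mountpoint -oremount,ro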
You may be interested in the remount option.
For example, these commands are widely used on rooted Androids:
mount -o remount,rw /system
mount -o remount,ro /system
mount(8) - Linux man page
Filesystem Independent Mount Options
remount
Attempt to remount an already-mounted filesystem. This is commonly used to change the mount flags for a filesystem, especially to make a readonly filesystem writeable. It does not change device or mount point.
The remount functionality follows the standard way how the mount command works with options from fstab. It means the mount command doesn't read fstab (or mtab) only when a device and dir are fully specified.
mount -o remount,rw /dev/foo /dir
After this call all old mount options are replaced and arbitrary stuff from fstab is ignored, except the loop= option which is internally generated and maintained by the mount command.
mount -o remount,rw /dir
After this call mount reads fstab (or mtab) and merges these options with options from command line ( -o ).
