How can I replace my home directory without corrupting my system? - linux

After setting up my Raspberry Pi, I made an image to make reverting to older software states easier. Recently I wanted to do that, so I saved the content of my /home/pi folder, formatted the SD card and wrote the image onto it.
So far everything worked fine. Then I tried to simply delete the complete /home/pi folder and replace it with my previously saved folder from the old image. Now it seems like all files are there, but it doesn't boot correctly.
At some point it just stops booting. I can still use the terminal normally, but the desktop does not start.
So, how can I replace my home directory the right way so I don't do any damage to the system?
edit:
I just tried to do this again.
sudo cp -a /home/pi/fileserver/backup /home/backup
(I mounted a network drive at fileserver. Since the share is on Windows, I assume all permissions are already gone here.)
cp -a /home/pi/. /home/original
sudo umount /home/pi/fileserver
rm -r /home/pi/
mv /home/backup /home/pi
sudo chmod -R 755 /home/pi (So far everything still works)
sudo reboot
After the reboot it doesn't boot correctly anymore. When I wait long enough I see errors from the X server.

That's quite a questionable approach to archiving the data. First of all, as you mentioned, Windows will remove the permission bits. Running chmod -R 755 afterwards has very bad consequences, because some programs require very specific access bits on certain files in order to work (SSH keys, for example). Not to mention that making everything executable is bad for security.
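For example, if a recursive chmod 755 has already run, OpenSSH will keep rejecting keys until their modes are tightened again; something along these lines restores sane modes (the file names are just the usual defaults - adjust to the keys you actually have):
chmod 700 /home/pi/.ssh
chmod 600 /home/pi/.ssh/id_rsa /home/pi/.ssh/authorized_keys
chmod 644 /home/pi/.ssh/id_rsa.pub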
Considering your scenario, you may either
a) Back up everything into tar or zip archives - this way permissions will stay intact
b) Make a virtual disk file which will be stored on the shared Windows drive and mounted at /home/pi
How to do scenario A:
cd /home/pi
tar cvpzf backup.tar.gz .
Copy backup.tar.gz to shared drive
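One caveat worth adding (not part of the original steps): if the Windows share from your question is still mounted inside the home directory at /home/pi/fileserver, you probably want to exclude it from the archive, roughly like this:
cd /home/pi
tar cvpzf backup.tar.gz --exclude='./fileserver' .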
to unpack:
cd /home/pi
tar xpvzf backup.tar.gz
Pros:
One-line backup
Takes small amount of space
Cons:
Packing/unpacking takes time
How to do scenario B:
1) Create a new file to hold the virtual drive volume:
cd /mnt/YourNetworkDriveMountPoint
fallocate -l 500M HomePi.img
(or, if fallocate is not available: dd if=/dev/zero of=HomePi.img bs=1M count=500)
mkfs -t ext3 HomePi.img
2) Mount it to home dir
sudo mount -t auto -o loop HomePi.img /home/pi/
500 means the disk will be 500 megabytes in size
This way your whole /home/pi will be saved as a file on the Windows shared drive, but all the content will be in ext3, so all permissions are preserved.
I suggest, though, that you keep the current image file on the Pi device itself and the old versions on the shared drive. Just copy the files over if you need to switch, because otherwise, if all images are on the shared drive, read/write performance will be 100% dependent on network speed.
You can then easily make copies of this file and swap them instantly by unmounting the existing image and mounting a new one.
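A swap would then look roughly like this, done while the pi user is logged out (the second image name is just a placeholder):
sudo umount /home/pi
sudo mount -t auto -o loop /mnt/YourNetworkDriveMountPoint/HomePi-old.img /home/pi/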
Pros:
Easy swap between backup versions
Completely transparent process
Cons:
If current image file is on shared drive, performance will be reduced
It will consume considerably more space because all 500 megs will be preallocated.
Pi user must be logged off during image swap for obvious reasons
Now, as for the desktop not being displayed: you need to check /var/log/Xorg.0.log for detailed messages. Most likely this is caused by messed-up permissions. I would try to rename/remove your current Xorg settings and cache, which are located somewhere in /home/pi/.config/ (depends on what you're using - XFCE, GNOME, etc.), and let the X server recreate them. But again, before doing this, please check Xorg.0.log for the exact messages - maybe there's another error. If you need any further help, please comment on this answer.
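A rough sketch of both suggestions ((EE) is how Xorg flags errors in its log; the .config path assumes the default pi user - adjust for your desktop environment):
grep '(EE)' /var/log/Xorg.0.log
mv /home/pi/.config /home/pi/.config.bak
sudo reboot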

Related

How to find and delete items from a specific drive in Linux

I have 2 hard drives in my Linux server. One SSD and one HDD. My SSD is currently full and because of that I am not able to run several applications.
Can anyone tell me the exact way in which I can access the files stored in the SSD and delete some of them?
If you know how to open a "Terminal", then you can type the command df -h. The first column lists the disks and the last column shows the directories on which these disks are mounted.
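A hypothetical df -h output might look like this (device names and sizes are made up for illustration):
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       220G  219G  1.0G 100% /
/dev/sdb1       1.8T  400G  1.4T  23% /mnt/hdd
Here /dev/sda1 would be the full SSD mounted at / and /dev/sdb1 the HDD.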
You need to mount it first (if you haven't already):
Terminal way:
To list connected disks:
lsblk
sudo mkdir /media/user/disk1
sudo mount /dev/sdXN /media/user/disk1
(replace /dev/sdXN with the correct partition device, e.g. /dev/sdb1, and user with your username)
GUI way:
You can also do it through a file explorer by clicking on the drive, which will mount it for you.
If you don't have any file explorer, you can install Nemo (or any other you want) using this command (assuming you are using Debian/Ubuntu/Mint):
sudo apt update
sudo apt install nemo -y
After mounting it you can open this drive (partition) in your file explorer. You may also delete files from the terminal using the rm command, but I think the GUI would be easier.
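If you prefer to stay in the terminal, something like the following (my example, not from the answer above) shows which top-level directories use the most space on the SSD before you delete anything with rm:
sudo du -xh --max-depth=1 / | sort -h
rm /path/to/some/large-unneeded-file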

How to change Clonezilla default menu selection items

I am using clonezilla-live-2.6.1-11-amd64.iso
I would like to change the default selections when booting off the live USB to perform full backups of the whole drive. For example:
on screen "Mount Clonezilla image directory" I would like to change the default from local_dev to use samba_server
on screen "Mount Samba server" I would like to change the default from 192.168.1.1 to 192.168.1.2
on screen "Mount Samba server" account change the default administrator to clonezilla
When I enter the item in
/syslinux/syslinux.cfg
ocs_repository="smb://clonezilla:password#192.168.1.2/zilla/"
the menus still ask me for the default address of 192.168.1.1 and the username administrator,
so it appears I am not understanding the documentation. Does anyone have an example cfg?
I have delved into customizing "LiveISO's" and CloneZilla specifically so I will give a general idea of how I would attack this.
Looking at my notes, this is all I had. To enable the SSH daemon I would unpack the ISO, edit the following, and repack the ISO using mksquashfs.
Eg:
Preparing to unpack ISO:
sudo apt-get install -y squashfs-tools
Copy the ISO to /tmp and rename it to live.iso
mkdir /tmp/mnt
sudo mount -o loop /tmp/live.iso /tmp/mnt
sudo find /tmp/mnt \( -name '*.squashfs' -o -name "*.SQFS" \) -exec unsquashfs -d /tmp/squashfs-root/ {} \;
sudo umount /tmp/mnt
sudo rm /tmp/mnt -R
cd /tmp/squashfs-root
This leaves you with:
/tmp/live.iso
/tmp/squashfs-root/FilesFromSquashedFS
Make Changes…..
sudo nano /tmp/squashfs-root/etc/ocs/ocs-live.conf
scroll to bottom & add:
ocs_daemon="ssh"
Then Repack ISO:
cd /tmp
sudo mksquashfs /tmp/squashfs-root filesystem.squashfs
sudo rm /tmp/squashfs-root -R
This leaves you with:
/tmp/live.iso
/tmp/filesystem.squashfs
Now use an ISO editing program to insert the filesystem.squashfs into the original ISO, making sure to use the same name as the original squashfs file. Sometimes it's a different extension.
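For example (this is my sketch, not part of the original notes, and it assumes the Clonezilla image keeps its squashfs at /live/filesystem.squashfs), xorriso can splice the new file in while replaying the original boot records:
xorriso -indev /tmp/live.iso -outdev /tmp/live-custom.iso -boot_image any replay -map /tmp/filesystem.squashfs /live/filesystem.squashfs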
The above method is quite "General" but I found some LiveOS creators have scripts for booting the OS, making changes and then creating an ISO from the running OS.
For CloneZilla this is what I have found after a quick google.
https://clonezilla.org/advanced/customized-clonezilla-live.php
Simple Version of that Link:
Create a custom script named custom-ocs (a sample script file is at /usr/share/drbl/samples/custom-ocs)
Mount /home/partimag/
Copy script to /home/partimag/ and cd to /home/partimag/
Run the following to generate ISO
ocs-iso -g en_US.UTF-8 -k NONE -s -m ./custom-ocs
For other options, please run ocs-iso -h or ocs-live-dev -h to get more info.
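Roughly, assuming /home/partimag is already mounted as the image repository, the whole sequence would look something like this:
cp /usr/share/drbl/samples/custom-ocs /home/partimag/custom-ocs
(edit /home/partimag/custom-ocs as needed)
cd /home/partimag
ocs-iso -g en_US.UTF-8 -k NONE -s -m ./custom-ocs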
Another link (https://clonezilla.org/related-articles/012_Automated_USB_thumb_drive_using_Custom/Automated_USB_thumb_drive_using_Custom.html) shows a method which seems to indicate that if you place a script inside the ISO and then point to it via an edited syslinux.cfg (you could edit it using either of the above methods), you can auto-run it that way. The link says to boot the USB and select the first menu option, but I would want it fully automated, so that if you do nothing that option is selected regardless.
Here is the edit to syslinux.cfg that he uses:
kernel /live/vmlinuz1
append initrd=/live/initrd1.img boot=live union=aufs noprompt noprompt ocs_live_run="/live/image/live/custom-ocs" ocs_live_extra_param="" ocs_live_keymap="NONE" ocs_live_batch="yes" ocs_lang="en_US.UTF-8" vga=791 ip=frommedia nolocales
Note: ocs_live_run="/live/image/live/custom-ocs" - to me this means "run this script after booting", but I haven't tested/messed with CloneZilla in a while.
Personal opinion: I love Parted Magic, but some people don't like that it has some weird licensing now and isn't really free; the old 2013 version can still be found, and/or you can buy it for around $10. It has CloneZilla built in and also an MKISO script for making an ISO out of the booted/edited LiveOS, but again, I would generally unpack the ISO using squashfs-tools, then repack and inject it back into the ISO.
Here is a link to what I've done customizing "LiveISO's". My final project years ago was a "Parted Magic" LiveISO that booted, started a password-protected VNC session + SSH, and e-mailed me the DHCP IP address. (I had hit-and-miss results with the e-mail portion, but depending on your setup you could use a static IP or check the router for the DHCP IP address.)
https://www.freesoftwareservers.com/display/FREES/Customize+LiveISO%27s
You can indeed have your Samba share automatically pre-mounted by using ocs_repository= in your vmlinuz kernel boot arguments.
However, it needs to be in the right boot file.
According to the boot parameters documentation, the relevant file is one of:
/syslinux/isolinux.cfg when booting from CD on a MBR machine
/syslinux/syslinux.cfg when booting from USB flash drive on a MBR machine
/boot/grub/grub.cfg when booting from a uEFI machine
/tftpboot/pxelinux.cfg/default or similar on your PXE server, when booting from PXE on a MBR machine
/tftpboot/grub/grub.cfg or similar on your PXE server, when booting from a uEFI netboot machine
Depending on your Samba server, you might also need to specify the SMB version to be used. From the same documentation page:
To assign the image repository via URI (Uniform Resource Identifier),
use "ocs_repository". URI supported in Clonezilla live:
[dev|smb|smb1|smb1.0|smb2|smb2.0|smb2.1|smb3|smb3.0|smb3.11|smb3.1.1|ssh|nfs|nfs4|http|https|ram]:[//[user:password#]host[:port]][/]path
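For the setup in the question, that would mean appending something like the line below to the kernel command line (the append line for syslinux/isolinux, the linux line for GRUB) in whichever of the files above matches how you boot; smb3 is just one of the allowed schemes from the list, and the credentials and path are reused from the question:
ocs_repository="smb3://clonezilla:password#192.168.1.2/zilla/"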

Set-id bits on an ISO

I'm creating an ISO of a Debian system with:
mkisofs -V "Debian ISO" -cache-inodes -J -l -o file.iso debian-system/
The problem is: when I mount the ISO (mount -o loop) ping and sudo don't work because their suid bits have not been set.
I know that special bits are cleared by the -r flag. This flag generates the "rationalized Rock Ridge directory information", which makes it possible to retain the original file permissions, but it also clears any set-id bits.
But if I don't use -r, file permissions will be the same for all files, as specified at runtime when the ISO is mounted.
Question: how do I add set-id files like ping and sudo to a Linux "live" ISO?
You need to use an alternate file system that supports those permissions.
The way a LiveCD/DVD works is that there is a squashfs file that is mounted, with changes made in RAM.
You could "fake" the same by creating a file full of zeros using dd, making a file system on it with mkfs.ext4, mounting it, and copying the files onto it. Then, on your custom disc, mount it as a loop device (mount -o loop /path/to/file /mnt/point) and symlink/etc. the binaries over.
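A minimal sketch of that approach (sizes, file names and the mount point are my examples, not from the answer):
dd if=/dev/zero of=suid-binaries.img bs=1M count=64
mkfs.ext4 -F suid-binaries.img
sudo mount -o loop suid-binaries.img /mnt/point
sudo cp -a /bin/ping /usr/bin/sudo /mnt/point/
sudo umount /mnt/point
On the running live system you would then loop-mount suid-binaries.img again and symlink the binaries over, e.g. ln -sf /mnt/point/sudo /usr/bin/sudo.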

How to make an incremental backup with rsync from ext4 to xfs network drive?

I'm on Ubuntu 14.04.
I am trying to make an incremental backup of some files on my Ubuntu HD (ext4) to a Buffalo network HD (XFS).
My script mounts the Buffalo HD with this command:
sudo mount.cifs //192.168.1.12/Sauvegardes /mnt/Sauvegardes -o username=myusername,password=mypassword
After the disk is mounted, I use rsync to try to make an incremental backup with --link-dest. Each day, when the script is launched, the folder names change according to the current date. Here is an example when the script is launched on 2017-03-09; it should check against the 2017-03-08 backup whether files already exist:
sudo rsync -arR --link-dest="/mnt/Sauvegardes/racine_2017-03-08" --timeout=30 /home/flooder/Sauvegardes/ /mnt/Sauvegardes/racine_2017-03-09/
The problem: rsync doesn't seem to check the --link-dest destination. It copies all the files every day, so the disk will quickly be full and the backup takes very long each day...
Would you have an idea for me?
Should I mount the network drive another way?
Do I have the right rsync command?
I have mounted my network disk with this line instead. It works well now. If the file already exists in --link-dest, only a hard link is created. The second pass is very quick!
sudo mount -t cifs //192.168.1.12/Sauvegardes /mnt/Sauvegardes -o username=myusername,password=mypassword,uid=1000,gid=1000
uid and gid are those of my logged-in user.
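Putting the pieces together, a daily script along these lines would follow from the answer (the date handling and variable names are my illustration, not part of the original):
sudo mount -t cifs //192.168.1.12/Sauvegardes /mnt/Sauvegardes -o username=myusername,password=mypassword,uid=1000,gid=1000
TODAY=$(date +%F)
YESTERDAY=$(date -d yesterday +%F)
sudo rsync -arR --link-dest="/mnt/Sauvegardes/racine_${YESTERDAY}" --timeout=30 /home/flooder/Sauvegardes/ "/mnt/Sauvegardes/racine_${TODAY}/"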

linux symlink - move logs from root to a mounted drive

My app uses log4j and writes the logs to directory A, which is in the root directory. I want to move the logs to a mounted drive without making any changes to the application.
Can I use a soft (symbolic) link to do this? I have created a symlink like this:
ln -s A mounted_drive_directory
But I still see logs written to directory A.
The syntax is ln [OPTION]... [-T] TARGET LINK_NAME, so your argument order is wrong. You'll also have to delete (or move) A before creating the link, or a filename conflict will occur.
You could also use mount point bindings for that, e.g. mount --rbind /mounted/drive/directory /full/path/to/A, but it has to be done on each system boot (or saved in /etc/fstab to be applied automatically at boot).
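The matching /etc/fstab entry would look roughly like this (same placeholder paths as above; use rbind instead of bind if you need the recursive variant):
/mounted/drive/directory  /full/path/to/A  none  bind  0  0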
ln works a little bit differently:
the first argument is the real folder/file, the second is the symlink.
mv /root/A /root/B;
ln -s mounted_drive_directory /root/A;
