I am using clonezilla-live-2.6.1-11-amd64.iso
I would like to change the default selections when booting off the live USB to perform full backups of the whole drive. For example:
on screen "Mount Clonezilla image directory" I would like to change the default from local_dev to use samba_server
on screen "Mount Samba server" I would like to change the default from 192.168.1.1 to 192.168.1.2
on screen "Mount Samba server" account change the default administrator to clonezilla
When I enter the following in /syslinux/syslinux.cfg:
ocs_repository="smb://clonezilla:password@192.168.1.2/zilla/"
the menus still ask me for the default address of 192.168.1.1 and the username administrator,
so it appears I am not understanding the documentation. Does anyone have an example cfg?
I have delved into customizing "LiveISOs", and Clonezilla specifically, so I will give a general idea of how I would attack this.
Looking at my notes, this is all I had. To enable the SSH daemon I would unpack the ISO, edit the following, and repack the ISO using mksquashfs.
Eg:
Preparing to unpack ISO:
sudo apt-get install -y squashfs-tools
Copy the ISO to /tmp and rename it live.iso
mkdir /tmp/mnt
sudo mount -o loop /tmp/live.iso /tmp/mnt
sudo find /tmp/mnt \( -name '*.squashfs' -o -name "*.SQFS" \) -exec unsquashfs -d /tmp/squashfs-root/ {} \;
sudo umount /tmp/mnt
sudo rm /tmp/mnt -R
cd /tmp/squashfs-root
This leaves you with:
/tmp/live.iso
/tmp/squashfs-root/FilesFromSquashedFS
Make changes:
sudo nano /tmp/squashfs-root/etc/ocs/ocs-live.conf
Scroll to the bottom and add:
ocs_daemon="ssh"
Then Repack ISO:
cd /tmp
sudo mksquashfs /tmp/squashfs-root filesystem.squashfs
sudo rm /tmp/squashfs-root -R
This leaves you with:
/tmp/live.iso
/tmp/filesystem.squashfs
Now use an ISO editing program to insert the new filesystem.squashfs into the original ISO, making sure to use the same name that the original ISO's squashfs used. Sometimes it's a different extension.
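For instance, xorriso can swap the file in while keeping the boot records intact. A sketch, assuming the in-ISO path is /live/filesystem.squashfs (that is where Clonezilla live keeps it, but verify against your ISO):
xorriso -indev /tmp/live.iso -outdev /tmp/custom.iso \
  -boot_image any replay \
  -map /tmp/filesystem.squashfs /live/filesystem.squashfs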
The above method is quite general, but I found some live OS creators have scripts for booting the OS, making changes, and then creating an ISO from the running OS.
For Clonezilla, this is what I found after a quick Google search.
https://clonezilla.org/advanced/customized-clonezilla-live.php
Simple version of that link:
Create a custom script named custom-ocs (a sample script file is at /usr/share/drbl/samples/custom-ocs)
Mount /home/partimag/
Copy script to /home/partimag/ and cd to /home/partimag/
Run the following to generate the ISO
ocs-iso -g en_US.UTF-8 -k NONE -s -m ./custom-ocs
For other options, please run ocs-iso -h or ocs-live-dev -h to get more info.
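To give an idea of the shape of such a script, here is a rough sketch for the Samba setup from the question. It is untested; the mount options, share path, image name, and ocs-sr switches are all illustrative, so start from the shipped sample script and check ocs-sr -h:
#!/bin/bash
# Sketch only: mount the Samba share as the image repository,
# then save the whole first disk as an image named backup-img.
mount -t cifs -o username=clonezilla,password=YOURPASS //192.168.1.2/zilla /home/partimag
ocs-sr -q2 -j2 -z1p -p true savedisk backup-img sda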
Another link (https://clonezilla.org/related-articles/012_Automated_USB_thumb_drive_using_Custom/Automated_USB_thumb_drive_using_Custom.html) shows a method which suggests that if you place a script inside the ISO and then point to it via an edited syslinux.cfg (you could edit it using either of the above methods), you can auto-run it that way. The link says to boot the USB and select the first menu option, but I would want it fully automated, so that if you do nothing that option is selected regardless; a sketch of that follows after the note below.
Here is the edit to syslinux.cfg that he uses:
kernel /live/vmlinuz1
append initrd=/live/initrd1.img boot=live union=aufs noprompt ocs_live_run="/live/image/live/custom-ocs" ocs_live_extra_param="" ocs_live_keymap="NONE" ocs_live_batch="yes" ocs_lang="en_US.UTF-8" vga=791 ip=frommedia nolocales
Note: ocs_live_run="/live/image/live/custom-ocs" means, as I read it, "run this script after booting", but I haven't tested/messed with Clonezilla in a while.
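To make that entry the automatic default, syslinux's default and timeout directives should do it. A sketch (the label name is arbitrary and the timeout is in tenths of a second):
default autoclone
timeout 30
label autoclone
  kernel /live/vmlinuz1
  append initrd=/live/initrd1.img boot=live union=aufs noprompt ocs_live_run="/live/image/live/custom-ocs" ocs_live_batch="yes" ocs_lang="en_US.UTF-8" vga=791 ip=frommedia nolocales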
Personal opinion: I love Parted Magic, but some people don't like that it has some weird licensing now and isn't really free; the old 2013 version can be found, and/or you can buy it for like $10. It has Clonezilla built in and also an MKISO script for making an ISO out of the booted/edited live OS, but again, I would generally unpack the ISO using squashfs tools, then repack and inject into the ISO.
Here are my links to what I've done customizing "LiveISOs". My final project years ago was a Parted Magic live ISO that booted, started a password-protected VNC session plus SSH, and e-mailed me the DHCP IP address. (I had hit-and-miss results with the e-mail portion, but depending on your setup you could use a static IP or check the router for the DHCP address.)
https://www.freesoftwareservers.com/display/FREES/Customize+LiveISO%27s
You can indeed have your Samba share automatically pre-mounted by using ocs_repository= in your vmlinuz kernel boot arguments.
However, it needs to be in the right boot file.
According to the boot parameters documentation, the relevant file is one of:
/syslinux/isolinux.cfg when booting from CD on an MBR machine
/syslinux/syslinux.cfg when booting from a USB flash drive on an MBR machine
/boot/grub/grub.cfg when booting from a uEFI machine
/tftpboot/pxelinux.cfg/default or similar on your PXE server, when booting from PXE on an MBR machine
/tftpboot/grub/grub.cfg or similar on your PXE server, when booting from a uEFI netboot machine
Depending on your Samba server, you might also need to specify the SMB version to be used. From the same documentation page:
To assign the image repository via URI (Uniform Resource Identifier),
use "ocs_repository". URI supported in Clonezilla live:
[dev|smb|smb1|smb1.0|smb2|smb2.0|smb2.1|smb3|smb3.0|smb3.11|smb3.1.1|ssh|nfs|nfs4|http|https|ram]:[//[user:password@]host[:port]][/]path
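So for the setup in the question, the append line of the default boot entry in /syslinux/syslinux.cfg would gain something like this (a sketch; adjust the share path, and note the @ separating the credentials from the host):
ocs_repository="smb://clonezilla:password@192.168.1.2/zilla/"
Or, pinning the SMB protocol version:
ocs_repository="smb2://clonezilla:password@192.168.1.2/zilla/"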
Related
I have 2 hard drives in my Linux server: one SSD and one HDD. My SSD is currently full, and because of that I am not able to run several applications.
Can anyone tell me the exact way in which I can access the files stored in the SSD and delete some of them?
If you know how to open a "Terminal", then you can type the command df -h. The first column lists the disks and the last column shows the directories which are mounted on those disks.
You need to mount it first (if you haven't already):
Terminal way:
To list connected disks:
lsblk
sudo mkdir /media/user/disk1
sudo mount /dev/sdXN /media/user/disk1
(replace /dev/sdXN with the correct partition identifier, e.g. /dev/sdb1, and user with your username)
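For example, lsblk output might look like this (illustrative only; your device names and sizes will differ):
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 111.8G  0 disk
└─sda1   8:1    0 111.8G  0 part /
sdb      8:16   0 931.5G  0 disk
└─sdb1   8:17   0 931.5G  0 part
Here the unmounted HDD partition would be /dev/sdb1.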
GUI way:
You can also do it through a file explorer by clicking on the drive; it will mount it for you.
If you don't have any file explorer, you can install Nemo (or any other you want) using this command (assuming you are using Debian/Ubuntu/Mint):
sudo apt update
sudo apt install nemo -y
After mounting it you can open this drive (partition) in your file explorer. You may also delete files from the terminal using the rm command, but I think the GUI would be easier.
I run a mixed Windows and Linux network with various desktops, notebooks, and Raspberry Pis. I am trying to establish an off-site backup between a local Raspberry Pi and a remote Raspberry Pi. Both run DietPi/Raspbian and have an external HDD with NTFS to store the backup data. As the data to be backed up is around 800 GB, I already mirrored the data onto the external HDD initially, so that only new files have to be sent to the remote drive via rsync.
I have now tried various combinations of options, including --ignore-existing, --size-only, -u, -c, and of course combinations of other options like -avz etc.
The problem is: nothing of the above really changes anything; the system tries to upload all the files (although they already exist remotely), or at least a good number of them.
Could you give me a hint how to solve this?
I do this exact thing. Here is my solution to this task.
rsync -re "ssh -p 1234” -K -L --copy-links --append --size-only --delete pi#remote.server.ip:/home/pi/source-directory/* /home/pi/target-directory/
The options I am using are:
-r - recursive
-e - specifies to use ssh/scp as the method to transmit data; note that my ssh command uses a non-standard port 1234, as specified by -p inside the -e flag
-K - keep directory links
-L - copy links
--copy-links - a duplicate flag it would seem...
--append - this will append data onto smaller files in case of a partial copy
--size-only - this skips files that match in size
--delete - CAREFUL - this will delete local files that are not present on the remote device.
This solution will run on a schedule (see the sample cron entry after the note below) and will "sync" the files in the target directory with the files from the source directory. To test it out, you can always run the command with --dry-run, which will not make any changes at all and only show you what would be transferred and/or deleted.
All of this info and additional details can be found in the rsync man page (man rsync).
NOTE: I use SSH keys to allow connection/transfer without having to respond to a password prompt between these devices.
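For the scheduling itself, a crontab entry along these lines would run the job nightly (a sketch; the time, port, and paths are examples):
# Run at 02:30 every night and append the output to a log file
30 2 * * * rsync -re "ssh -p 1234" -K -L --append --size-only --delete pi@remote.server.ip:/home/pi/source-directory/ /home/pi/target-directory/ >> /home/pi/rsync-backup.log 2>&1
Using a trailing slash on the source instead of /* also picks up hidden files at the top level.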
I'm creating an ISO of a Debian system with:
mkisofs -V "Debian ISO" -cache-inodes -J -l -o file.iso debian-system/
The problem is: when I mount the ISO (mount -o loop) ping and sudo don't work because their suid bits have not been set.
I know that special bits are cleared by the -r flag. This flag generates the "rationalized Rock Ridge directory information", which enables retaining the original file permissions but also clears any set-id bits.
But if I don't use -r, file permissions will be the same for all files, as specified at runtime when the ISO is mounted.
Question: how do I add set-id files like ping and sudo to a Linux "live" ISO?
You need to use an alternate file system that supports those permissions.
The way a live CD/DVD works is that there is a squashfs file which is mounted with changes kept in RAM.
You could "fake" the same by creating a file full of zeros using dd, making a file system on it with mkfs.ext4, mounting it, and copying the files onto it. Then on your custom disk, mount it as a loop device (mount -o loop /path/to/file /mnt/point) and symlink etc. the binaries over. A sketch of those steps follows.
After setting up my Raspberry Pi, I made an image to make reverting to older software states easier. Recently I wanted to do that, so I saved the content of my /home/pi folder, formatted the SD card, and wrote the image onto it.
So far everything worked fine. Then I tried to simply delete the complete /home/pi folder and replace it with my previously saved folder from the old image. Now it seems like all files are there, but it doesn't boot correctly.
At some point it just stops booting. I can then use it normally, like the terminal, but the desktop is not starting.
So, how can I replace my home directory the right way so I don't do any damage to the system?
Edit: I just tried to do this again.
sudo cp -a /home/pi/fileserver/backup /home/backup
(I mounted a network drive in fileserver. Since the network share is on Windows, I assume all permissions are already gone here.)
cp -a /home/pi/. /home/original
sudo umount /home/pi/fileserver
rm -r /home/pi/
mv /home/backup /home/pi
sudo chmod -R 755 /home/pi (So far everything still works)
sudo reboot
After the reboot it doesn't boot correctly anymore. When I wait long enough I see errors from the X server.
That's quite a doubtful approach to archiving the data. First of all, as you mentioned, Windows will remove the permission bits. Running chmod -R 755 afterwards has very bad consequences, because some programs require very specific access bits on some files in order to work (SSH keys, for example). Not to mention that making everything executable is bad for security.
Considering your scenario, you may either
a) back everything up into tar or zip archives - this way permissions will stay intact
b) make a virtual disk file which is stored on the shared Windows drive and mounted to /home/pi
How to do scenario A:
cd /home/pi
tar cvpzf backup.tar.gz .
Copy backup.tar.gz to shared drive
to unpack:
cd /home/pi
tar xpvzf backup.tar.gz
Pros:
One-line backup
Takes small amount of space
Cons:
Packing/unpacking takes time
How to do scenario B:
1) Create a new file to hold the virtual drive volume:
cd /mnt/YourNetworkDriveMountPoint
fallocate -l 500M HomePi.img
(or, if fallocate is unavailable: dd if=/dev/zero of=HomePi.img bs=1M count=500)
mkfs -t ext3 HomePi.img
2) Mount it to home dir
mount -t auto -o loop HomePi.img /home/pi/
500 means the disk will be 500 megabytes in size
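If you want it mounted automatically at boot, an /etc/fstab line along these lines would work (a sketch; it assumes the image is kept on local storage, as recommended below):
/opt/images/HomePi.img  /home/pi  ext3  loop,defaults,nofail  0  2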
This way your whole Pi home will be saved as a file on the Windows shared drive, but all the content will be in ext3, so all permissions are preserved.
I suggest, though, that you keep the current version of the image file on the Pi device itself and only the old versions on the shared drive. Just copy files over if you need to switch, because otherwise, if all images are on the shared drive, read/write performance will be 100% dependent on network speed.
You can then easily make copies of this file and swap them instantly by unmounting the existing image and mounting a new one (a sketch of the swap follows after the pros/cons below).
Pros:
Easy swap between backup versions
Completely transparent process
Cons:
If current image file is on shared drive, performance will be reduced
It will consume considerably more space because all 500 megs will be preallocated.
Pi user must be logged off during image swap for obvious reasons
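As a sketch, a swap could look like this (paths and file names are illustrative; make sure the pi user is logged out first):
sudo umount /home/pi
sudo cp /mnt/YourNetworkDriveMountPoint/HomePi-old.img /opt/images/HomePi.img
sudo mount -t auto -o loop /opt/images/HomePi.img /home/pi/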
Now, as for the issue with the desktop not being displayed: you need to check /var/log/Xorg.0.log for detailed messages. Likely this is caused by messed-up permissions. I would try to rename/remove your current Xorg settings and cache, which are located somewhere in /home/pi/.config/ (depends on what you're using: XFCE, Gnome, etc.), and let the X server recreate them. But again, before doing this, please check Xorg.0.log for the exact messages - maybe there's another error. If you need any further help, please comment on this answer.
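For example, to pull just the errors and warnings out of that log (the (EE) and (WW) markers are Xorg's standard tags for errors and warnings):
grep -E "\((EE|WW)\)" /var/log/Xorg.0.log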
I've installed PetaLinux 2014.4 on my Zynq board, but the NAND flash is not mounted when I boot the board. I'm wondering if it's possible to change rootfs.cpio by extracting the package, making changes to fstab, and then making a cpio archive back. If yes, is it enough to just run petalinux-build after that?
Thanks :)
If you have access to the ramdisk image file, then yes, you can modify its contents. I assume that your image file is compressed using gzip. Furthermore I assume that you use U-Boot and your compressed ramdisk image has a U-Boot preamble.
First you need to strip the U-Boot header:
dd bs=64 skip=1 if=uramdisk.cpio.gz of=ramdisk.cpio.gz
Next, we decompress:
gunzip ramdisk.cpio.gz
Finally we extract the CPIO archive:
mkdir ramdisk && cd ramdisk
cpio -i -F ../ramdisk.cpio
Either you execute the latter command as root or you change the file ownership back to root before archiving again. This is necessary for your init program to start. After your modifications you can create your image file again:
find . | cpio -o -H newc | gzip -9 > ../ramdisk_new.cpio.gz
mkimage -A arm -T ramdisk -C gzip -d ramdisk_new.cpio.gz uramdisk.image.gz
Notice that the mkimage tool is part of U-Boot and is located in the respective sources in the tools directory.
I am not familiar with PetaLinux so I don't know whether this general answer suits your needs and expectations.
Using the cpio package tools is OK, but it needs to be done every time you update the rootfs.
You can also use a PetaLinux built-in mechanism to accomplish this. It doesn't need extra steps once you set it up.
Create the app:
petalinux-create -t apps -n fstab_mount_sd --template install --enable
In the created components/apps/fstab_mount_sd directory, modify the Makefile to append contents to the current fstab file, or to replace the original fstab with your version of the file.
Here's an example of the fstab_mount_sd Makefile:
install:
$(TARGETINST) -a "/dev/mmcblk0p1 /media/card auto defaults,sync,noauto 0 0" /etc/fstab
$(TARGETINST) -a means append the following text to the destination file.
Note: commands in a Makefile must start with a Tab. Replace the spaces before $(TARGETINST) in the previous code block with a Tab.
You can read the help for the $(TARGETINST) command by going to the PetaLinux install directory and running components/rootfs/targetroot-inst.sh.
More convenient during development is using a standard distribution.
PetaLinux can be used to create the kernel and U-Boot files.
Then install a Linux distribution of your choice on the SD card and boot it up.
You can use the standard tools, apt for example, to install packages.