I have saved a disk to an image.
I can restore this image to a new drive using the manual approach, where Clonezilla asks all of its questions.
I answer them in Beginner Mode.
I don't change anything or do anything fancy.
Now I am trying to automatically deploy this image using a script:
#
set prefix=/boot/grub/
set default="0"
set timeout="5"
set hidden_timeout_quiet=false
if loadfont $prefix/unicode.pf2; then
set gfxmode=auto
insmod efi_gop
insmod efi_uga
insmod gfxterm
terminal_output gfxterm
fi
if background_image $prefix/logo-jpik.png; then
set color_normal=black/black
set color_highlight=red/black
else
set color_normal=cyan/blue
set color_highlight=white/blue
fi
menuentry "Apply Software Image from Pendrive" {
search --set -f /live/vmlinuz
linux /live/vmlinuz boot=live union=overlay username=user hostname=JPSC config components quiet noswap edd=on nomodeset nodmraid noeject locales=en_US.UTF-8 keyboard-layouts=pt vga=791 ip= nosplash i915.blacklist=yes radeonhd.blacklist=yes nouveau.blacklist=yes vmwgfx.enable_fbdev=1 ocs_live_batch="no" ocs_prerun="ln -s /lib/live/mount/medium/Image/ /home/partimag/" ocs_live_run="ocs-sr -e1 auto -e2 -batch -r -j2 -scr -p poweroff restoredisk Image/ mmcblk0" ocs_live_extra_param=""
echo "Loading Clonezilla..."
initrd /live/initrd.img
}
Using this script, I get the following error:
"Unable to find target hard drive"
Does anybody know why, and what I need to change?
Thank you!
Here is what my image files look like:
I got it:
I needed to edit /boot/grub/grub.cfg, changing this line...
linux /live/vmlinuz boot=live union=overlay username=user hostname=JPSC config components quiet noswap edd=on nomodeset nodmraid noeject locales=en_US.UTF-8 keyboard-layouts=pt vga=791 ip= nosplash i915.blacklist=yes radeonhd.blacklist=yes nouveau.blacklist=yes vmwgfx.enable_fbdev=1 ocs_live_batch="no" ocs_prerun="ln -s /lib/live/mount/medium/Image/ /home/partimag/" ocs_live_run="ocs-sr -e1 auto -e2 -batch -r -j2 -scr -p poweroff restoredisk Image/ mmcblk0" ocs_live_extra_param=""
... to this:
linux /live/vmlinuz boot=live union=overlay username=user hostname=JPSC config components quiet noswap edd=on nomodeset nodmraid noeject locales=en_US.UTF-8 keyboard-layouts=pt vga=791 ip= nosplash i915.blacklist=yes radeonhd.blacklist=yes nouveau.blacklist=yes vmwgfx.enable_fbdev=1 ocs_live_batch="no" ocs_prerun="ln -s /lib/live/mount/medium/Image/ /home/partimag/" ocs_live_run="ocs-sr -e1 auto -e2 -batch -r -j2 -scr -p poweroff restoredisk Image/ nvme0n1" ocs_live_extra_param=""
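The device name has to match what the kernel on the target machine calls its disk. If you are not sure, a quick check (a sketch; run from a shell on the Clonezilla live system) is lsblk, since ocs-sr expects the bare kernel device name as the restoredisk target:
# list the disks the kernel sees, with size and model
lsblk -d -o NAME,SIZE,MODEL
# the NAME column (e.g. mmcblk0 for eMMC, nvme0n1 for NVMe, sda for SATA)
# is what goes at the end of the ocs-sr restoredisk command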
While updating some Docker base images (which previously were based on the image openjdk/openjdk-8-rhel8) to the image ubi8/openjdk-8, I was (or at least I suspect I was) unable to add a user with the useradd command.
The user appears in the /etc/shadow file, but when I try to log in to the container I get this message:
NWRAP_ERROR(4677) - nwrap_files_cache_reload: Unable to open '/home/jboss/passwd' readonly -1:Permission denied
NWRAP_ERROR(4677) - nwrap_files_getpwuid: Error loading passwd file
The Dockerfile, which worked well with the previous image, is:
FROM xxxx.azurecr.io/ubi8/openjdk-8:1.3-9
ARG uid=60000
ARG gid=60000
ARG user=testuser
ARG group=testuser
ARG shell=/bin/bash
ARG home=/home/$user
ARG port=8080
USER root
RUN mkdir -p $home \
&& chown ${uid}:${gid} $home \
&& groupadd -g ${gid} ${group} \
&& useradd --uid ${uid} --gid ${gid} --shell ${shell} --home ${home} $user
I don't know what could cause that problem, and searching for NWRAP_ERROR(4677) gave me no results. Did someone have similar problems? Can you tell me what went wrong and whether there is a different way to add the user with the Dockerfile?
I had a similar issue with this image when I tried to:
[root@3ee1b7206f33 ~]# useradd -m -u 15001 myaccount_azpcontainer
[root@3ee1b7206f33 ~]# groupadd azure_pipelines_sudo
[root@3ee1b7206f33 ~]# usermod -a -G azure_pipelines_sudo myaccount_azpcontainer
usermod: user 'myaccount_azpcontainer' does not exist
If I look in /etc/passwd, the user is present:
[root@3ee1b7206f33 ~]# cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:65534:65534:Kernel Overflow User:/:/sbin/nologin
jboss:x:185:0:JBoss user:/home/jboss:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
systemd-coredump:x:999:997:systemd Core Dumper:/:/sbin/nologin
systemd-resolve:x:193:193:systemd Resolver:/:/sbin/nologin
unbound:x:998:996:Unbound DNS resolver:/etc/unbound:/sbin/nologin
myaccount_azpcontainer:x:15001:15001::/home/myaccount_azpcontainer:/bin/bash
but usermod doesn't find it, and getent passwd doesn't list it either:
[root@3ee1b7206f33 ~]# getent passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:65534:65534:Kernel Overflow User:/:/sbin/nologin
jboss:x:185:0:JBoss user:/home/jboss:/sbin/nologin
I have no problem with other Red Hat images, only with openjdk.
By looking at the image's Dockerfile I found:
USER root
RUN [ "sh", "-x", "/tmp/scripts/jboss.container.user/configure.sh" ]
and the content of that file:
https://github.com/jboss-openshift/cct_module/blob/master/jboss/container/user/configure.sh
groupadd -r jboss -g 185 && useradd -u 185 -r -g root -G jboss -m -d /home/jboss -s /sbin/nologin -c "JBoss user" jboss
cp /etc/passwd /home/jboss/passwd
chmod ug+rwX /home/jboss /home/jboss/passwd
I tested it in my Dockerfile by adding cp /etc/passwd /home/jboss/passwd after the useradd, and it works!
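For reference, this is roughly what the RUN instruction from the Dockerfile above looks like with that extra copy step appended (a sketch only; the /home/jboss/passwd path comes from the image's configure.sh shown above, everything else from the original Dockerfile):
# same RUN as before, plus one step: after useradd, refresh the copy of
# /etc/passwd that nss_wrapper redirects user lookups to
RUN mkdir -p $home \
 && chown ${uid}:${gid} $home \
 && groupadd -g ${gid} ${group} \
 && useradd --uid ${uid} --gid ${gid} --shell ${shell} --home ${home} $user \
 && cp /etc/passwd /home/jboss/passwd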
My colleague explained to me that the problem is the use of nss_wrapper: https://cwrap.org/nss_wrapper.html
Maybe there is another way to manage the useradd, but I have not tested one yet.
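In case it helps to see why the copy matters: nss_wrapper is typically wired up roughly like this (a generic sketch based on the nss_wrapper documentation, not necessarily the exact startup mechanism of this image), which is why user lookups go to the copied file instead of /etc/passwd:
# typical nss_wrapper setup (generic example)
export LD_PRELOAD=libnss_wrapper.so
export NSS_WRAPPER_PASSWD=/home/jboss/passwd
export NSS_WRAPPER_GROUP=/etc/group
# from now on, getpwuid()/getent consult $NSS_WRAPPER_PASSWD, so a user added
# only to /etc/passwd stays invisible until that copy is refreshed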
Thanks,
I have a bootable Clonezilla on USB. It works great with systems that boot using syslinux. Here's the menu entry in syslinux.cfg and a short description of what it does (note that I use & to mark that it is all one line; I broke it up into multiple lines for readability):
label Clone Default Image
#MENU DEFAULT
MENU LABEL Clone from \\bilbo
kernel /live/vmlinuz
append initrd=/live/initrd.img boot=live union=overlay config quiet noswap
& noeject nosplash username=user hostname=yakkety components edd=on nomodeset
& noprompt nolocales keyboard-layouts=sv locales=sv_SE.UTF-8
& ocs_live_run="ocs-sr --batch -g auto -e1 auto -e2 -r -j2
& -p poweroff restoredisk windows-10-base-clone sda" ocs_live_extra_param=""
& ocs_live_batch=no vga=791 ip= nosplash i915.blacklist=yes radeonhd.blacklist=yes
& nouveau.blacklist=yes vmwgfx.blacklist=yes
& ocs_repository="smb://administrator:**********@192.168.10.41/common/utveckling/clone"
& {lots of ocs_preruns to setup dhcp on one of two possible eth-interfaces}
It sets up an Ethernet interface to use DHCP. It then tries to mount a Samba share, as seen in ocs_repository="user:pass@path". After that, it should run the ocs_live_run="cmd" entry, which performs a clone from the Samba location to the main disk of the booted device.
This is my attempt at creating an equivalent grub.cfg entry:
menuentry "Clone from \\\\bilbo" {
search --set -f /live/vmlinuz
linux /live/vmlinuz boot=live union=overlay config quiet noswap
& noeject nosplash username=user hostname=yakkety components edd=on nomodeset
& noprompt nolocales keyboard-layouts=sv locales=sv_SE.UTF-8
& ocs_live_run="ocs-sr --batch -g auto -e1 auto -e2 -r -j2
& -p poweroff restoredisk windows-10-base-clone sda" ocs_live_extra_param=""
& ocs_live_batch=no vga=791 ip= nosplash i915.blacklist=yes radeonhd.blacklist=yes
& nouveau.blacklist=yes vmwgfx.blacklist=yes
& ocs_repository="smb://administrator:**********@192.168.10.41/common/utveckling/clone"
& {same preruns, it seems to work well}
initrd /live/initrd.img
}
For whatever reason, the GRUB one fails. It seems to try to do the same things, but I would wager that something goes wrong when mounting the Samba location (I can mount it manually).
It stops with the error message "/home/partimag/windows-10-base-clone does not exist", which certainly should exist, had it mounted the provided Samba location at /home/partimag/.
Any suggestions?
I was having a very similar issue, and by adding longer 'sleep' calls I noticed that the mount command was printing its help text, so I knew there was a syntax error; yet the same syntax works in the iso/syslinux.cfg and at the prompt after ocs_live_run fails.
I discovered that the ocs_prerun="mount ..." entry won't run without escaping the " characters with \.
Thus:
ocs_prerun5=\"mount -o user=,pass= //host/path/ /home/partimag\"
I hope that if you are still working on this it works for you also.
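Applied to the GRUB entry above, the mount prerun might look something like this (a sketch only, untested; the share path is the one from the question and the password is a placeholder):
ocs_prerun=\"mount -o user=administrator,pass=YOURPASSWORD //192.168.10.41/common/utveckling/clone /home/partimag\"
With the share mounted at /home/partimag, ocs-sr should then be able to find windows-10-base-clone there.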
I want to create/restore an image of a system completely unattended with a Clonezilla Live USB stick. So far the unattended backup/restore works fine: I just plug in the stick, boot up the PC, and after the work is done the PC shuts down.
Now I need a confirmation that the backup/restore was successful.
For this purpose I want to execute a shell script which copies the log files into a specific folder on another partition of the USB stick after the work is done.
I tried to execute the script as a postrun method in the syslinux.cfg, but this always led to an error.
Furthermore I tried it with drbl-ocs, but I'm not sure if I did it right.
Here is the shell script I want to execute:
#!/bin/sh
#############
/opt/drbl/sbin/ocs-sr -q2 -j2 -z1p -i 4096 -p true savedisk img sda
#############
dir=/home/partimag/logs
time=$(date +"%H-%M-%S")   # current time
# create the log directory if it does not exist yet
if [ ! -e "$dir" ]
then
    sudo mkdir -p "$dir"
fi
i=$(ls "$dir" | wc -l)     # running index for this run
# create a new sub-directory named <index>_<time>
sudo mkdir "$dir/${i}_${time}"
# copy all log files into the created directory
sudo cp /var/log/* "$dir/${i}_${time}"
# shut down the machine
sudo shutdown -h now
The first instruction (after the shebang) was my attempt to use drbl-ocs, but I don't really have an idea what it is. I believe it's another interpreter which can handle shell scripts too. Am I right?
And here is the syslinux.cfg I use:
append initrd=/live/initrd.img boot=live username=user config quiet noswap edd=on nomodeset nodmraid noeject locales=en_US.UTF-8 keyboard-layouts=NONE ocs_prerun="mount /dev/sdb2 /mnt/" ocs_prerun1="mount --bind /mnt/ /home/partimag/" ocs_live_run="/lib/live/mount/medium/syslinux/clonezilla.sh" ocs_live_extra_param="" ocs_live_batch="yes" vga=788 ip= nosplash i915.blacklist=yes radeonhd.blacklist=yes nouveau.blacklist=yes vmwgfx.enable_fbdev=1
Please help!
Thanks :)
OK, got it :)
The only thing I had to do was put the interpreter in front of the script in the ocs_live_run entry.
Now the syslinux.cfg looks like this:
append initrd=/live/initrd.img boot=live username=user config quiet noswap edd=on nomodeset nodmraid noeject locales=en_US.UTF-8 keyboard-layouts=NONE ocs_prerun="mount /dev/sdb2 /mnt/" ocs_prerun1="mount --bind /mnt/ /home/partimag/" ocs_live_run="bash /lib/live/mount/medium/syslinux/clonezilla.sh" ocs_live_extra_param="" ocs_live_batch="yes" vga=788 ip= nosplash i915.blacklist=yes radeonhd.blacklist=yes nouveau.blacklist=yes vmwgfx.enable_fbdev=1
Please rate if this was useful for you :)
I am building my own Debian-based Linux with my own kernel and software. One of the last steps of the make process has to be done in a chrooted environment:
Install the custom kernel using dpkg
Create symbolic links to the kernel and initrd.img
Execute ldconfig
Set my custom theme for the splash screen using plymouth
Update the initrd.img
While the installation of the kernel succeeds and the symbolic links are actually created, none of the other commands seem to work. If I boot into the system, the splash screen is set to the default, and the initrd.img can find neither the HDD nor the kernel. So the updating of the initrd.img inside the dpkg installation process seems to fail somehow. The plymouth script to set the theme does not work either.
To fix this, I manually chroot into the system and do the following:
Set my custom theme for the splash screen using plymouth
Execute ldconfig
Update the initrd.img
This works perfectly fine. Next time I boot the system, my splash screen is shown and everything starts properly.
Here is my approach to get this done in my Makefile:
cp $(INTEGRATION_KERNEL_IMAGE) $(ROOTFS)/tmp/kernel.deb
cd $(ROOTFS); /usr/bin/sudo /bin/mount -t proc proc proc/; /usr/bin/sudo /bin/mount -t sysfs sys sys/; /usr/bin/sudo /bin/mount -o bind /dev dev/
/usr/sbin/chroot --userspec=0:0 $(ROOTFS) /usr/bin/env -i PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin HOME=/root bash -c "/usr/bin/dpkg --force-not-root -i /tmp/kernel.deb"
/usr/sbin/chroot --userspec=0:0 $(ROOTFS) /usr/bin/env -i PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin HOME=/root bash -c "/bin/ln -nsf vmlinuz-3.2.54-rt75custom /boot/vmlinuz"
/usr/sbin/chroot --userspec=0:0 $(ROOTFS) /usr/bin/env -i PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin HOME=/root bash -c "/bin/ln -nsf initrd.img-3.2.54-rt75custom /boot/initrd.img"
/usr/sbin/chroot --userspec=0:0 $(ROOTFS) /usr/bin/env -i PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin HOME=/root bash -c "/sbin/ldconfig"
/usr/sbin/chroot --userspec=0:0 $(ROOTFS) /usr/bin/env -i PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin HOME=/root bash -c "/bin/bash /usr/sbin/plymouth-set-default-theme my_theme"
/usr/sbin/chroot --userspec=0:0 $(ROOTFS) /usr/bin/env -i PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin HOME=/root bash -c "/bin/bash /usr/sbin/update-initramfs -u"
/usr/bin/sudo /bin/umount $(ROOTFS)/proc; /usr/bin/sudo /bin/umount $(ROOTFS)/sys; /usr/bin/sudo /bin/umount $(ROOTFS)/dev
The output of make does not show any errors for these steps. Well, it possibly cannot, because make does not know what is going on inside the chrooted environment. But how can I find out what is going wrong?
A possible workaround would be to put everything mentioned above into a shell script and execute that in the chrooted environment. But I would prefer to do everything in the Makefile, and I do not know whether the workaround really works; I have not verified it yet.
Have you tried saving command output in the chroot environment and extracting it later? For example:
/usr/sbin/chroot [...] bash -c "/usr/bin/dpkg [...] >> /root/chroot.log"
or
/usr/sbin/chroot [...] bash -c "/usr/bin/dpkg [...] | tee -a /root/chroot.log"
followed by
cp $(ROOTFS)/root/chroot.log .
In the long run I would suggest avoiding code duplication and Makefile clutter, either by passing everything in a single chroot command or by copying over a script.
You should be able to get rid of most or all of the bash -c and /bin/bash invocations. That should simplify things even more.
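As a sketch of the single-command variant (untested; the paths, kernel version, and theme name are taken from the question, and the proc/sys/dev mounts around it stay as they are), the middle of the recipe could be collapsed into one chroot call whose output is captured:
/usr/sbin/chroot --userspec=0:0 $(ROOTFS) /usr/bin/env -i \
PATH=/bin:/usr/bin:/sbin:/usr/sbin:/tools/bin HOME=/root bash -c "\
set -e; \
dpkg -i /tmp/kernel.deb; \
ln -nsf vmlinuz-3.2.54-rt75custom /boot/vmlinuz; \
ln -nsf initrd.img-3.2.54-rt75custom /boot/initrd.img; \
ldconfig; \
plymouth-set-default-theme my_theme; \
update-initramfs -u" >> chroot.log 2>&1
With set -e the chroot call stops at the first failing step, so the tail of chroot.log points directly at the command that went wrong.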
I'm working with Xen 4.0 on Debian Lenny (5.0).
I wanted to clone a VM, but it seems that I didn't do it well. What I did is the following:
Creating the config file of the new VM and setting it up.
#cd /etc/xen/vms/
#cp original.foo.com.cfg copy.foo.com.cfg
Copying virtual disks
#cd /dev/mapper/
#cp -rv vg--xen-original.foo.com--disk vg--xen-copy.foo.com--disk
#cp -rv vg--xen-original.foo.com--swap vg--xen-copy.foo.com--swap
#chmod g+w vg--xen-copy.foo.com--*
#chown root:disk vg--xen-copy.foo.com--*
Symlinks
#cd /dev/vg-xen/
#ln -s ../mapper/vg--xen-copy.foo.com--disk copy.foo.com-disk
#ln -s ../mapper/vg--xen-copy.foo.com--disk copy.foo.com-disk
Everything is set up, let's create the VM
#xm create /etc/xen/vms/copy.foo.com.cfg
#Using config file "./copy.foo.com.cfg".
#Error: Device 51714 (vbd) could not be connected.
#Device /dev/mapper/vg--copy.foo.com--disk is mounted in a guest domain,
#and so cannot be mounted now.
Could you please help me sort out this issue?
All I wanted was to duplicate original.foo.com.
Thanks
I found the solution.
#lvcreate -L size -n VM_NAME-disk xen-data
#lvcreate -L size -n VM_NAME-swap xen-data
Then a byte-by-byte copy:
#dd if=/dev/mapper/vg--xen-original.foo.com--disk of=/dev/mapper/vg--xen-copy.foo.com--disk
#dd if=/dev/mapper/vg--xen-original.foo.com--swap of=/dev/mapper/vg--xen-copy.foo.com--swap
Et voilà !!!
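For completeness, a sketch of the same approach using the volume group name from the question (vg-xen); the sizes below are placeholders, so check the size of the source LVs first, and make sure both VMs are shut down before copying:
# check the sizes of the source logical volumes
lvs --units g vg-xen
# create destination volumes of at least the same size (20G/2G are placeholders)
lvcreate -L 20G -n copy.foo.com-disk vg-xen
lvcreate -L 2G -n copy.foo.com-swap vg-xen
# copy block for block; a larger block size makes dd noticeably faster
dd if=/dev/vg-xen/original.foo.com-disk of=/dev/vg-xen/copy.foo.com-disk bs=4M
dd if=/dev/vg-xen/original.foo.com-swap of=/dev/vg-xen/copy.foo.com-swap bs=4M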