Xen clone VM can't be created - Linux

I'm working with Xen 4.0 on Debian Lenny (5.0).
I wanted to clone a VM, but it seems I didn't do it right. Here is what I did:
Creating the config file of the new VM and setting it up.
#cd /etc/xen/vms/
#cp original.foo.com.cfg copy.foo.com.cfg
Copying virtual disks
#cd /dev/mapper/
#cp -rv vg--xen-original.foo.com--disk vg--xen-copy.foo.com--disk
#cp -rv vg--xen-original.foo.com--swap vg--xen-copy.foo.com--swap
#chmod g+w vg--xen-copy.foo.com--*
#chown root:disk vg--xen-copy.foo.com--*
Symlinks
#cd /dev/vg-xen/
#ln -s ../mapper/vg--xen-copy.foo.com--disk copy.foo.com-disk
#ln -s ../mapper/vg--xen-copy.foo.com--swap copy.foo.com-swap
Everything is set up, let's create the VM
#xm create /etc/xen/vms/copy.foo.com.cfg
#Using config file "./copy.foo.com.cfg".
#Error: Device 51714 (vbd) could not be connected.
#Device /dev/mapper/vg--copy.foo.com--disk is mounted in a guest domain,
#and so cannot be mounted now.
Could you please help me sort out this issue?
All I wanted was to duplicate original.foo.com.
Thanks.

I found the solution.
#lvcreate -L size -n VM_NAME-disk xen-data
#lvcreate -L size -n VM_NAME-swap xen-data
Then do a byte-by-byte copy:
#dd if=/dev/mapper/vg--xen-original.foo.com--disk of=/dev/mapper/vg--xen-copy.foo.com--disk
#dd if=/dev/mapper/vg--xen-original.foo.com--swap of=/dev/mapper/vg--xen-copy.foo.com--swap
Et voilà !!!
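For anyone following along, here is a consolidated sketch of that clone procedure. The volume group name (vg-xen) and the sizes are assumptions taken from the device names in the question; check lvs on your host for the real names and sizes, and make sure the original domain is shut down while dd runs, otherwise the copy will be inconsistent:
#lvs --units g vg-xen # note the exact names and sizes of the source LVs
#lvcreate -L 10G -n copy.foo.com-disk vg-xen # same size as original.foo.com-disk
#lvcreate -L 1G -n copy.foo.com-swap vg-xen # same size as original.foo.com-swap
#dd if=/dev/vg-xen/original.foo.com-disk of=/dev/vg-xen/copy.foo.com-disk bs=4M
#dd if=/dev/vg-xen/original.foo.com-swap of=/dev/vg-xen/copy.foo.com-swap bs=4M
The bs=4M is only there to speed dd up; the important part is that the target LVs are at least as large as the source LVs.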

Related

How to deploy files to /boot partition with Yocto

I'm trying to deploy some binary files to /boot in a Yocto image for RPi CM3 but it deploys them to the wrong location.
do_install() {
install -d ${D}/boot/overlays
install -m 0664 ${WORKDIR}/*.dtb ${D}/boot/overlays/
install -m 0664 ${WORKDIR}/*.dtbo ${D}/boot/overlays/
}
The files are deployed to /boot in the / partition of the final image, but not to the /boot partition. So they are not available at boot time.
I already googled and studied the kernel recipes (and classes) of the Poky distribution, but I couldn't find the mechanism they use to ensure that the files end up in the boot partition (and not in the /boot directory of the root filesystem).
Any help is appreciated :)
Update #1
In my local.conf I did:
IMAGE_BOOT_FILES_append = " \
overlays/3dlab-nano-player.dtbo \
overlays/adau1977-adc.dtbo \
...
"
And in my rpi3-overlays.bb
do_deploy() {
install -d ${DEPLOYDIR}/${PN}
install -m 0664 ${WORKDIR}/*.dtb ${DEPLOYDIR}/${PN}
install -m 0664 ${WORKDIR}/*.dtbo ${DEPLOYDIR}/${PN}
touch ${DEPLOYDIR}/${PN}/${PN}-${PV}.stamp
}
Using this the image builds, but the files still don't get deployed to the /boot partition.
Using RPI_KERNEL_DEVICETREE_OVERLAYS I get a build error, because the kernel recipe tries to build the dtbo files as if they were dts files.
RPi images are created with the sdimage-raspberrypi.wks WIC file. It contains:
part /boot --source bootimg-partition ...
so it uses the bootimg-partition.py wic plugin to generate the /boot partition. That plugin copies every file listed in the IMAGE_BOOT_FILES variable.
It seems you want to add some devicetree overlays, so you need to modify the machine configuration, more specifically the RPI_KERNEL_DEVICETREE_OVERLAYS variable. The IMAGE_BOOT_FILES variable is set in rpi-base.inc.
If you don't have any custom machine or custom distro defined, you can add it in local.conf:
RPI_KERNEL_DEVICETREE_OVERLAYS_append = " <deploy-path>/<dto-path>"
You can see here how to add files to the deploy directory.
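A sketch of how the IMAGE_BOOT_FILES route could look with the do_deploy task shown in the question (this is an assumption, not a confirmed fix: it presumes the files really end up under rpi3-overlays/ in the deploy directory). IMAGE_BOOT_FILES paths are relative to the deploy directory, and a semicolon renames an entry on the target, so in local.conf something like:
IMAGE_BOOT_FILES_append = " rpi3-overlays/3dlab-nano-player.dtbo;overlays/3dlab-nano-player.dtbo"
The part left of the semicolon is the deployed file, the part on the right is the path inside the /boot partition; each deployed overlay would need its own entry.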
After too many hours of investigation it turned out that deploying files to partitions other than / is not easily possible. I ended up writing a post-processing script that mounts the final image, deploys the additional files, and unmounts it.
# Ensure the first loopback device is free to use
sudo -n losetup -d /dev/loop0 || true
# Create a loopback device for the given image
sudo -n losetup -Pf ../deploy/images/bapi/ba.rootfs.rpi-sdimg
# Mount the loopback device
mkdir -p tmp
sudo -n mount /dev/loop0p1 tmp
# Deploy files
sudo -n cp -n ../../meta-ba-rpi-cm3/recipes-core/rpi3-overlays/files/* tmp/overlays/
sudo -n cp ../../conf/config.txt tmp/config.txt
sudo -n cp ../../conf/cmdline.txt tmp/cmdline.txt
# Unmount the image and free the loopback device
sudo -n umount tmp
sudo -n losetup -d /dev/loop0
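A small variation on the same script (my suggestion, not part of the original): losetup --show prints the loop device it allocated, so /dev/loop0 does not have to be hard-coded:
LOOPDEV=$(sudo -n losetup --show -Pf ../deploy/images/bapi/ba.rootfs.rpi-sdimg)
sudo -n mount "${LOOPDEV}p1" tmp
# ... deploy files as above ...
sudo -n umount tmp
sudo -n losetup -d "$LOOPDEV"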

How to mount an Azure file share to an existing directory on a Linux VM

I have an existing directory on an Ubuntu 16.04 LTS virtual machine at /etc/elasticsearch. I also have created a file share in azure. I am able to mount file share to the VM successfully when the mount point is a new directory. However, when I attempt to mount the file share to /etc/elasticsearch, an existing directory that contains data, the existing directory's data gets overwritten completely by the contents of the file share. This causes me to lose the data that previously existed in /etc/elasticsearch, which I obviously do not want. I want the file share to be added in addition to the existing data in /etc/elasticsearch.
Here is what I tried:
if [ ! -d "/etc/smbcredentials" ]; then
sudo mkdir /etc/smbcredentials
fi
if [ ! -f "/etc/smbcredentials/credentials.cred" ]; then
sudo bash -c 'echo "username=username" >> /etc/smbcredentials/credentials.cred'
sudo bash -c 'echo "password=password" >> /etc/smbcredentials/credentials.cred'
fi
sudo chmod 600 /etc/smbcredentials/credentials.cred
sudo bash -c 'echo "//pathtofileshare/analysis /etc/elasticsearch cifs nofail,vers=3.0,credentials=/etc/smbcredentials/credentials.cred,dir_mode=0777,file_mode=0777,serverino" >> /etc/fstab'
sudo mount -t cifs //pathtofileshare/analysis /etc/elasticsearch -o vers=3.0,credentials=/etc/smbcredentials/credentials.cred,dir_mode=0777,file_mode=0777,serverino
Link to file share documentation
Many thanks in advance for any help
I don't believe this is an issue; it's just how Linux mount works:
http://man7.org/linux/man-pages/man8/mount.8.html
The previous contents (if any) and owner and mode of dir become invisible, and as long as this filesystem remains mounted, the pathname dir refers to the root of the filesystem on device.
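In other words, the data in /etc/elasticsearch is hidden, not destroyed: unmount the share (sudo umount /etc/elasticsearch) and the original files reappear. If you need to reach them while the share stays mounted, a non-recursive bind mount of the parent directory exposes the underlying files. A quick sketch, using /mnt/etc-under as an arbitrary mount point of my choosing:
sudo mkdir -p /mnt/etc-under
sudo mount --bind /etc /mnt/etc-under
ls /mnt/etc-under/elasticsearch # shows the original on-disk contents, not the share
sudo umount /mnt/etc-under
If you actually want both sets of files visible in one directory, you have to copy the local data onto the share (or mount the share somewhere else) rather than mount over the directory.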

How to install wget on LFS system

I am pretty new to Linux and started LFS because I needed it for school. My system is now running fine with an internet connection, but I still don't have a package manager or anything like that. The first binary I would like to have is wget, but I really don't know how to proceed...
Could someone please explain it to me?
I personally used (and would highly recommend) the existing Linux system (the host) to download the wget package and its dependencies before booting the LFS system for the first time. However, seeing that you're already using your LFS system: if you can still log in via the host, use it to download wget just as you downloaded the other sources while building LFS.
For me, I used a Linux Mint host running in VirtualBox to build my LFS. To get wget I just had to re-add the Linux Mint host storage, download wget, and add it to the LFS sources. I then removed the Linux Mint host storage, logged in to my LFS machine, and followed the steps in BLFS.
Note: this is mainly taken from parts of LFS and the wget page of BLFS.
1. Boot into your host OS.
2. Enter the following commands in the command line to get into the chroot (edit depending on your partitions and where you mount LFS):
sudo su -
export LFS=/mnt/lfs
mount -vt ext4 /dev/sda4 $LFS
mount -v --bind /dev $LFS/dev
mount -vt devpts devpts $LFS/dev/pts -o gid=5,mode=620
mount -vt proc proc $LFS/proc
mount -vt sysfs sysfs $LFS/sys
mount -vt tmpfs tmpfs $LFS/run
if [ -h $LFS/dev/shm ]; then
mkdir -pv $LFS/$(readlink $LFS/dev/shm)
fi
chroot "$LFS" /usr/bin/env -i \
HOME=/root TERM="$TERM" PS1='\u:\w\$ ' \
PATH=/bin:/usr/bin:/sbin:/usr/sbin \
/bin/bash --login
3. Download wget from http://ftp.gnu.org/gnu/wget/wget-1.19.1.tar.xz and copy it into /mnt/lfs/sources from your host os.
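For step 3, a quick sketch of the host-side commands (run on the host, outside the chroot; this assumes the host has wget and that the LFS partition is mounted at /mnt/lfs as in step 2):
wget http://ftp.gnu.org/gnu/wget/wget-1.19.1.tar.xz
cp wget-1.19.1.tar.xz /mnt/lfs/sources/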
4. Unpack and cd into it with:
tar -xf wget-1.19.1.tar.xz
cd wget-1.19.1
5. Configure and install wget with:
./configure --prefix=/usr \
--sysconfdir=/etc \
--with-ssl=openssl &&
make
make install
6. Delete the wget-1.19.1 folder if you want, and you're done!
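To confirm the build worked, a quick check from inside the LFS system (assuming networking is already configured there):
wget --version
wget -O /dev/null http://ftp.gnu.org/gnu/wget/wget-1.19.1.tar.xz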

Xen: create a new virtual machine using the command line

I want to create a CentOS virtual machine on Xen with the virt-install command.
I am using a kickstart file, which I put at http://192.168.1.8/centos/kickstart.cfg,
and I put the CentOS 6.5 install tree at http://192.168.1.8/centos/os/.
I use:
[root@CentOS ~]# dd if=/dev/zero of=/var/lib/xen/images/vserver.img bs=1M count=4000
[root@CentOS ~]# virt-install -p -n vserver -r 512 -f /var/lib/xen/images/vserver.img -l http://192.168.1.8/centos/os -x ks=http://192.168.1.8/centos/kickstart.cfg -w bridge:xenbr0 --vcpus=1
The result:
Starting install...
ERROR Could not find an installable distribution at 'http://192.168.1.8/centos/os'
The location must be the root directory of an install tree.
Domain installation does not appear to have been successful.
If it was, you can restart your domain by running:
virsh --connect xen:/// start vserver
otherwise, please restart your installation.
It needs the .treeinfo file:
vi /var/www/html/centos/os/.treeinfo
and add these lines:
[general]
family = CentOS
timestamp = 1341518023.56
variant =
totaldiscs = 1
version = 6.5
discnum = 1
packagedir =
arch = i386
[images-i386]
initrd = images/pxeboot/initrd.img
[images-xen]
initrd = isolinux/initrd.img
kernel = isolinux/vmlinuz
[stage2]
mainimage = images/install.img
Don't copy it directly from the browser; type it in by hand or paste it into a plain text file first, in order to remove any special characters.
Finally, give Apache ownership of the files:
chown -R apache:apache /var/www/html/*
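Before re-running the install, it can help to confirm that the web server actually serves the hidden file (a quick check from the host, assuming curl is installed):
curl http://192.168.1.8/centos/os/.treeinfo
You should get the [general] section back; a 404 here usually means the file name or the Apache configuration is wrong.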
In the virt-install command, pass the CentOS URL as a quoted string, keep the ks= argument after -x, and use --bridge=xenbr0 instead of -w bridge:xenbr0:
[root@CentOS ~]# virt-install -p -n vserver -r 512 -f /var/lib/xen/images/vserver.img -l "http://192.168.1.8/centos/os" -x ks=http://192.168.1.8/centos/kickstart.cfg --bridge=xenbr0 --vcpus=1
P.S.: if you are using the 64-bit version, change the arch field to x86_64.

Using Optware packages and startup scripts on dd-wrt router

I'm trying to run a mumble server (umurmur) on my dd-wrt router (Buffalo WZR-HP-AG300H). I flashed one of the recent community versions of dd-wrt on the device (SVN Rev.: 23320), it has an Atheros CPU inside.
After that I mounted a USB pendrive into the filesystem using these guides (Guide 1, Guide 2) and created writable directories. Here is my startup script, saved to NVRAM (via the web GUI).
EDIT: the USB pendrive should be partitioned before using it with DD-WRT.
#!/bin/sh
sleep 5
insmod mbcache
insmod jbd
insmod ext3
mkdir '/mnt/part1'
mkdir '/mnt/part2'
mount -t ext3 -o noatime /dev/sda5 /mnt/part1 # /dev/sda5 -> partition on USB pendrive
mount -t ext3 -o noatime /dev/sda7 /mnt/part2 # /dev/sda7 -> partition on USB pendrive
swapon /dev/sda6 # /dev/sda6 -> partition on USB pendrive
sleep 2
if [ -f /mnt/part1/optware.enable ];then
#mount -o bind /mnt/part2 /mnt/part1/root
mount -o bind /mnt/part1 /jffs
mount -o bind /mnt/part1/etc /etc
mount -o bind /mnt/part1/opt /opt
mount -o bind /mnt/part1/root /tmp/root
else
exit
fi
if [ -d /opt/usr ]; then
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/lib:/usr/lib:/opt/lib:/opt/usr/lib:/jffs/usr/lib:/jffs/usr/local/lib
export PATH=$PATH:/bin:/sbin:/usr/bin:/usr/sbin:/jffs/bin:/opt/bin:/opt/sbin:/opt/usr/bin:/opt/usr/sbin
export IPKG_INSTROOT=/opt
else
exit
fi
The script works well and I can use opkg to install packages. I can also run umurmur manually, but I'm struggling to make umurmur autostart. I noticed that the umurmur startup script placed in /opt/etc/init.d/ requires arguments like start and stop, but it seems those scripts are called without any arguments.
Another way described here did not work either.
Does anyone have a working solution for problems like these? Please help!
Optware runs on Broadcom routers only. Yours has an Atheros chipset.
Taken from this page: Link
It's unclear if the page you referred to has changed, and indeed my setup is fairly different from yours, but to get scripts working on startup I did the following (a consolidated sketch follows these steps):
mkdir -p /jffs/etc/config
Copy the script into the /jffs/etc/config directory, renaming it to end with .startup.
chmod 755 /jffs/etc/config/scriptname.startup
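Since the startup hook calls scripts without arguments (as noted in the question), a thin wrapper works better than copying the init script itself. A sketch, where S90umurmur is a hypothetical name for umurmur's init script (check what opkg actually installed under /opt/etc/init.d/):
mkdir -p /jffs/etc/config
echo '#!/bin/sh' > /jffs/etc/config/umurmur.startup
echo '/opt/etc/init.d/S90umurmur start' >> /jffs/etc/config/umurmur.startup # hypothetical script name
chmod 755 /jffs/etc/config/umurmur.startup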
