How to access original hard disk files in a live CD session started using qemu? - linux

I have a 32-bit Ubuntu OS. On it I downloaded a 64-bit Lubuntu ISO, then ran this qemu command:
qemu-system-x86_64 -boot d -cdrom image.iso -m 512
After choosing the live CD option I can access a terminal.
What path do I use to access the files on my original hard disk?
I don't see anything under /media/, and no device like /dev/sda shows up under / in the live CD session.

Warning: this can irrecoverably destroy your data! Concurrent write access to a disk is dangerous.
It is better to transfer files via NFS or SSH.
That said, it can be done this way (where /dev/sdX is /dev/sdb or similar):
qemu-system-x86_64 -boot d -cdrom image.iso -m 512 -hda /dev/sdX
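If you only need to read files, a somewhat safer sketch is to attach the disk read-only via -drive (the device name is a placeholder; substitute your actual disk):

```shell
qemu-system-x86_64 -boot d -cdrom image.iso -m 512 \
    -drive file=/dev/sdX,format=raw,readonly=on
```

Alternatively, adding -snapshot to the original command keeps the disk untouched by redirecting all writes to a temporary file.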


qemu: CPU model 'host' requires KVM or HVF, but kvm-ok is fine

Problem
I'm trying to run a qcow2 image with the following command:
:~$ sudo ~/Downloads/qemu-7.1.0/bin/debug/native/x86_64-softmmu/qemu-system-x86_64
-L -enable-kvm -cpu host -s -kernel bzImage -m 2048
-hda rootfs.qcow2-append "root=/dev/sda rw
nokaslr" -net nic,model=virtio -net user,hostfwd=tcp::5555-:22
Error Message:
qemu-system-x86_64: CPU model 'host' requires KVM or HVF
But KVM should be fine:
:~$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
What I did:
I'd like to use QEMU version 7.1.0, and installed it following the wiki, building from the tar archive:
# Switch to the QEMU root directory.
cd qemu
# Prepare a native debug build.
mkdir -p bin/debug/native
cd bin/debug/native
# Configure QEMU and start the build.
../../../configure --enable-debug
make
# Return to the QEMU root directory.
cd ../../..
The simple test from the wiki works fine.
bin/debug/native/x86_64-softmmu/qemu-system-x86_64 -L pc-bios
The "-L" option needs an argument (a path to the BIOS and other binary files), but you haven't given it one. QEMU's command line parser therefore thinks that you are asking it to look in a directory named "-enable-kvm", and that you haven't given '-enable-kvm' as an option at all. So it's running in TCG, where '-cpu host' is not valid.
You need to fix your command line: either specify the -L option correctly, or if you don't need it then just drop it.
You are also missing a space before '-append'.
If you got this command line from a tutorial, re-check it carefully and make sure you've got it exactly right, including any placeholders you need to fill in, and that all the spaces and punctuation match.
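Putting those fixes together, the intended command line would look something like this (the pc-bios path is an assumption based on the build tree; adjust it to wherever your firmware files actually live, or drop -L entirely if a system-wide install provides them):

```shell
sudo ~/Downloads/qemu-7.1.0/bin/debug/native/x86_64-softmmu/qemu-system-x86_64 \
    -L ~/Downloads/qemu-7.1.0/pc-bios \
    -enable-kvm -cpu host -s \
    -kernel bzImage -m 2048 \
    -hda rootfs.qcow2 \
    -append "root=/dev/sda rw nokaslr" \
    -net nic,model=virtio -net user,hostfwd=tcp::5555-:22
```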

How to boot FreeBSD image under Qemu

I have a FreeBSD image that contains /boot/loader* and /boot/kernel and more. It boots fine under an EC2 instance but I would like to boot it with Qemu. I have tried various methods, but they have not worked. See below.
qemu-system-x86_64 -kernel kernel -nographic -append 'console=ttyS0' disk.img
qemu-system-x86_64 -kernel loader -nographic -append disk.img
This boots on an Ubuntu 20.04 amd64 host with QEMU 4.2.1:
wget https://download.freebsd.org/ftp/releases/VM-IMAGES/12.1-RELEASE/amd64/Latest/FreeBSD-12.1-RELEASE-amd64.qcow2.xz
unxz FreeBSD-12.1-RELEASE-amd64.qcow2.xz
sudo apt install qemu-system-x86
qemu-system-x86_64 -drive file=FreeBSD-12.1-RELEASE-amd64.qcow2,format=qcow2 -enable-kvm
The username is root with an empty password.
The download page for that image is: https://www.freebsd.org/where.html
Unfortunately, trying to add:
-serial mon:stdio -nographic
to get rid of the GUI only shows the bootloader messages on the terminal, not the rest of the boot. https://lists.freebsd.org/pipermail/freebsd-hackers/2005-March/011051.html mentions how to fix that by modifying the image, which is annoying, but worked. In the GUI boot, I did:
echo 'console="comconsole"' > /boot/loader.conf
and then the next nographic boot worked fully on my terminal.
You can quit QEMU -nographic with Ctrl-A X as shown at: https://superuser.com/questions/1087859/how-to-quit-the-qemu-monitor-when-not-using-a-gui/1211516#1211516
The next issue is that the disk is full, so I had to learn how to increase its size. From interactive df -Th inspection, the image appears to contain a single raw UFS partition. I tried:
qemu-img resize FreeBSD-12.1-RELEASE-amd64.qcow2 +1G
but that is not enough, presumably because the partition itself was not resized to fill the disk. This can likely be achieved with gpart as shown at https://www.freebsd.org/doc/handbook/disks-growing.html but I don't have the patience right now.
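For reference, the grow-inside-the-guest steps from that handbook page look roughly like this (the device name ada0 and the partition index are assumptions; run gpart show first to confirm yours):

```shell
# Inside the FreeBSD guest, after enlarging the virtual disk:
gpart recover ada0        # repair the GPT backup header after the resize
gpart show ada0           # identify the index of the freebsd-ufs partition
gpart resize -i 3 ada0    # grow that partition (index 3 is a guess)
growfs /                  # grow the UFS filesystem into the new space
```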
The FreeBSD wiki has some recipes for running FreeBSD inside QEMU:
https://wiki.freebsd.org/QemuRecipes

How to install wget on LFS system

I am a complete newbie to Linux and started LFS because I needed it for school. My system is now running perfectly with an internet connection, but I still don't have a package manager or anything like it. The first binary I would like to have is wget, but I really don't know how to go about it.
Could someone please explain?
I personally used (and would highly recommend) using the existing Linux system (the host) to download the wget package and its dependencies before booting your LFS system for the first time. However, since you're already using your LFS system: if you can still log in using the host, use it to download wget just as you did for the sources when building the LFS system.
In my case, I used a Linux Mint host running in VirtualBox to build my LFS. To get wget I just had to re-attach the Linux Mint host storage, download wget, and add it to the LFS sources. I then removed the Linux Mint host storage, logged in to my LFS machine, and followed the steps in BLFS.
Note: this is mainly taken from parts of the LFS book and the wget page of BLFS.
1. Boot into your host OS.
2. Enter the following commands to get into the chroot (adjust for your partitions and where you mount LFS):
sudo su -
export LFS=/mnt/lfs
mount -vt ext4 /dev/sda4 $LFS
mount -v --bind /dev $LFS/dev
mount -vt devpts devpts $LFS/dev/pts -o gid=5,mode=620
mount -vt proc proc $LFS/proc
mount -vt sysfs sysfs $LFS/sys
mount -vt tmpfs tmpfs $LFS/run
if [ -h $LFS/dev/shm ]; then
mkdir -pv $LFS/$(readlink $LFS/dev/shm)
fi
chroot "$LFS" /usr/bin/env -i \
HOME=/root TERM="$TERM" PS1='\u:\w\$ ' \
PATH=/bin:/usr/bin:/sbin:/usr/sbin \
/bin/bash --login
3. Download wget from http://ftp.gnu.org/gnu/wget/wget-1.19.1.tar.xz and copy it into /mnt/lfs/sources from your host OS.
4. Unpack and cd into it with:
tar -xf wget-1.19.1.tar.xz
cd wget-1.19.1
5. Configure and install wget with:
./configure --prefix=/usr \
--sysconfdir=/etc \
--with-ssl=openssl &&
make
make install
6. Delete the wget-1.19.1 folder if you want, and you're done!

Vagrant unable to mount in Linux guest with VirtualBox Guest Additions on Windows 7

I'm trying to get a Linux VM running using VirtualBox, VirtualBox Guest Additions, and Vagrant, and to mount a folder on my Windows 7 machine. I've tried the suggestions in this question, but still get the same error.
I'm running the following versions:
Virtual Box: 4.3.18 r96516
Virtual Box Guest Additions: 4.3.18
Vagrant: 1.6.5
Vagrant Plug-ins:
vagrant-login: 1.0.1
vagrant-share: 1.1.2
vagrant-vbguest: 0.10.0
When I run vagrant reload I get the following error:
Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:
mount -t vboxsf -o uid=`id -u vagrant`,gid=`getent group vagrant | cut -d: -f3`,nolock,vers=3,udp,noatime core /tbm
mount -t vboxsf -o uid=`id -u vagrant`,gid=`id -g vagrant`,nolock,vers=3,udp,noatime core /tbm
The error output from the last command was:
stdin: is not a tty
unknown mount option `noatime'
valid options:
rw mount read write (default)
ro mount read only
uid =<arg> default file owner user id
gid =<arg> default file owner group id
ttl =<arg> time to live for dentry
iocharset =<arg> i/o charset (default utf8)
convertcp =<arg> convert share name from given charset to utf8
dmode =<arg> mode of all directories
fmode =<arg> mode of all regular files
umask =<arg> umask of directories and regular files
dmask =<arg> umask of directories
fmask =<arg> umask of regular files
I've tried un-installing, installing, and updating the vagrant-vbguest plugin:
vagrant plugin install vagrant-vbguest
I've tried running the following command after running vagrant ssh, but still get the same error message:
sudo ln -s /opt/VBoxGuestAdditions-4.3.18/lib/VBoxGuestAdditions /usr/lib/VBoxGuestAdditions
I'm not super familiar with mount options, but I tried executing your command in a similar VM I'm running and got the same error regarding the noatime option.
I read through the documentation (man 8 mount), which states in the FILESYSTEM INDEPENDENT MOUNT OPTIONS section that: Some of these options are only useful when they appear in the /etc/fstab file.
I suspect this is your problem. I edited my /etc/fstab file to add the noatime option to one of my mounts, /dev/mapper/precise64-root / ext4 noatime,errors=remount-ro 0 1, and then ran the following:
sudo mount -oremount /
vagrant#precise64:~$ mount
/dev/mapper/precise64-root on / type ext4 (rw,noatime,errors=remount-ro)
...
I edited the file again to remove the option and:
vagrant#precise64:~$ sudo mount -oremount /
vagrant#precise64:~$ mount
/dev/mapper/precise64-root on / type ext4 (rw,errors=remount-ro)
...
I don't know if you're providing these mount commands yourself or if they come from a plugin, but it seems like (at least in your environment) the option works fine, yet can't be specified on the command line.
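One quick way to confirm this in the guest is to retry the mount by hand with the NFS-style options (nolock, vers=3, udp) and noatime stripped out, since none of them appear in vboxsf's list of valid options above:

```shell
sudo mount -t vboxsf \
    -o uid=`id -u vagrant`,gid=`id -g vagrant` \
    core /tbm
```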

qemu on Raspberry Pi Arch Linux latest sd image

I am trying to set up an Arch image and use qemu in order to cross-compile some stuff before I load the image onto the Pi. I thought the easiest way to do it would be to qemu the latest starter image, prepare it with whatever I needed, and then dd it onto the Pi when I was done.
I downloaded the Arch image from http://downloads.raspberrypi.org/arch_latest, and wanted to run it under Qemu similar to http://xecdesign.com/qemu-emulating-raspberry-pi-the-easy-way/.
I tried many variations on the qemu command line they gave
qemu-system-arm -kernel kernel-qemu -cpu arm1176 -m 256 -M versatilepb -no-reboot -serial stdio -append "root=/dev/sda2 panic=1" -hda 2013-05-25-wheezy-raspbian.img
substituting the archlinux-hf-2013-07-22.img. But this eventually led to "Kernel panic - not syncing: No init found. Try passing init= option to kernel"
I'm sure this means the kernel-qemu I downloaded won't work with the Arch image, but I'm not sure the right way to fix the issue.
Edit:
Even the latest Raspbian image kernel panics when I use the command line above with it. Which I guess shouldn't have surprised me, since it's most likely an old kernel.
So I guess my real question is, how can I use whatever kernel is included in the image, rather than having to build my own kernel?
For archlinux-hf-2013-07-22.img:
Three partitions are made in this image. You can check with:
fdisk -l archlinux-hf-2013-07-22.img
The root filesystem is on the 5th partition, so pass "root=/dev/sda5 panic=1" instead and it will boot fine.
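That is, the command from the question with only the root= parameter and the image name changed:

```shell
qemu-system-arm -kernel kernel-qemu -cpu arm1176 -m 256 -M versatilepb \
    -no-reboot -serial stdio \
    -append "root=/dev/sda5 panic=1" \
    -hda archlinux-hf-2013-07-22.img
```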
For 2013-05-25-wheezy-raspbian.img:
You can use the same kernel for both images. Here you have to comment out the entry in /etc/ld.so.preload, which preloads a shared library that breaks login and causes the kernel panic.
Note: pass "root=/dev/sda2 panic=1" for this image.
You can comment it out as follows:
sudo kpartx -av 2013-05-25-wheezy-raspbian.img
mkdir tmp
sudo mount /dev/mapper/loop0p2 tmp/
cd tmp/etc
sudo vi ld.so.preload
and change the line
/usr/lib/arm-linux-gnueabihf/libcofi_rpi.so
to
#/usr/lib/arm-linux-gnueabihf/libcofi_rpi.so
Then clean up:
cd ../..
sudo umount tmp/
sudo kpartx -d 2013-05-25-wheezy-raspbian.img
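If you'd rather not open vi, the edit can be done with a one-line sed instead. This sketch recreates the file locally to show the effect; on the real mounted image you would run only the sed line, with sudo, against tmp/etc/ld.so.preload:

```shell
# Recreate ld.so.preload as it appears in the Raspbian image (for illustration).
mkdir -p tmp/etc
printf '/usr/lib/arm-linux-gnueabihf/libcofi_rpi.so\n' > tmp/etc/ld.so.preload
# Comment out the preloaded library so login no longer pulls it in.
sed -i 's|^/usr|#/usr|' tmp/etc/ld.so.preload
cat tmp/etc/ld.so.preload
```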
Then run qemu:
qemu-system-arm -kernel kernel-qemu -cpu arm1176 -m 256 -M versatilepb -no-reboot -serial stdio -append "root=/dev/sda2 panic=1" -hda 2013-05-25-wheezy-raspbian.img
and it will boot without any trouble.
