Virtual parallel port connector in QEMU - Linux

I have been reading Linux Device Drivers, 3rd edition, and have been running Linux 2.6 in QEMU. However, I am at the point where it requires real hardware. I attempted to emulate a parallel port connector in QEMU with no luck. The host has no parallel port connector.
qemu-system-x86_64 -parallel file:outputfile --enable-kvm -smp 2 -initrd initramfs.igz -kernel arch/x86/boot/bzImage -m 1024 -append "console=ttyS0 loglevel=8" -nographic
I then wrote a module to access the parallel port via request_region(0x378,1,"parallelport");. However, whenever I attempt to write to the ioport region, none of the output is seen in the file "outputfile" in the working directory where QEMU was started. I have also enabled the CONFIG_PARPORT option in the kernel and added some of its associated CONFIGs (e.g. CONFIG_PARPORT_PC), which seemed only to add the Linux built-in driver or register an IRQ handler. With this, I was still unable to write to the parallel port "outputfile" on the host machine.
However, I can read and write to the ioport region (meaning whatever I write with outb can then be read back with inb). Yet nothing is written to the parallel port outfile, which leads me to believe something is still wrong.
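The write/read-back sequence described above boils down to something like this minimal module (a simplified sketch, not my exact code; identifiers are illustrative):

```c
#include <linux/module.h>
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/io.h>

#define LPT_BASE 0x378  /* legacy parallel port data register */

static int __init lpt_init(void)
{
	/* claim the data register so no other driver grabs it */
	if (!request_region(LPT_BASE, 1, "parallelport"))
		return -EBUSY;

	outb('A', LPT_BASE);                          /* write a byte to the data port */
	pr_info("lpt: read back 0x%02x\n", inb(LPT_BASE));
	return 0;
}

static void __exit lpt_exit(void)
{
	release_region(LPT_BASE, 1);
}

module_init(lpt_init);
module_exit(lpt_exit);
MODULE_LICENSE("GPL");
```

The read-back succeeds because QEMU's isa-parallel device latches the data register; whether the byte also reaches the chardev backend is a separate question.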
I was wondering if I could get some suggestions as to what I might be doing wrong, or how to start going about this? The only research I found covered the different parallel port options for QEMU (use host devices, over UDP, etc.), which I tried with no luck. In addition, when using the outputfile option, or any option that does not require a parallel port on the host machine, is it possible to simulate interrupts?
Update: After playing around a little more, it seems the operations of writing to and reading from the ioport are working (they still don't appear in the outputfile, though). This is because when I use -parallel none, I can obtain the ioport and write to it, but reading from it results in unprintable characters (not the ones I input). This doesn't happen by default (specifying no -parallel option, or when using -parallel file:outputfile).
Update 2: From the QEMU -device documentation I found that -parallel is a legacy option. So I changed my command to the following:
qemu-system-x86_64 -chardev file,path=./outputfile,id=parallel0 -device isa-parallel,chardev=parallel0,iobase=0x378,irq=7,index=0 --enable-kvm --smp 2 -initrd initramfs.igz -kernel arch/x86/boot/bzImage -m 1024 -append "console=ttyS0 loglevel=8" -nographic
and then loaded the driver and created a character device node /dev/parallel0. However, this still didn't work.

Related

QEMU booting for kernel development not working when using tcp 2222:22 to copy things

I am learning how to implement my own system call in the Linux kernel by following:
Syscall Guide
Custom Kernel Guide
for getting QEMU set up.
In the end, it tells us to run the following command to redirect port 2222 on the host OS to the QEMU VM's port 22, which will let me copy files between QEMU and my Linux host:
qemu-system-x86_64 -m 64M -hda ../debian_squeeze_amd64_standard.qcow2 - append "root=/dev/sda1 console=tty0 console=ttyS0,115200n8" -kernel arch/ x86_64/boot/bzImage -nographic -net nic,vlan=1 -net user,vlan=1 -redir tcp: 2222::22
But I get the following error on my terminal when I run the command:
qemu-system-x86_64: -: invalid option
Help me out, I am a beginner. Thanks
In this part of your command line: "- append" -- you have an extra space between the "-" and the "append". QEMU command line options are generally of the form "-something". If you put a space in the middle then QEMU won't recognize what you've given it.
If you're following a tutorial and a command it gives you doesn't work then it's often a good idea to check it carefully for minor typos, or to copy-and-paste the command from the tutorial and try that.

How are requests to /dev/(u)random etc. handled in Docker?

For documentation purposes on our project, I am looking for the following information:
We are using Docker to deploy various applications which require entropy for SSL/TLS and other purposes. These applications may use /dev/random, /dev/urandom, getrandom(2), etc. I would like to know how these requests are handled in Docker containers, as opposed to one virtual machine running all services (and accessing one shared entropy source).
So far I have (cursorily) looked into libcontainer and runC. Unfortunately I have not found any answers to my question, although I do have a gut feeling that these requests are passed through to the equivalent call on the host.
Can you lead me to any documentation supporting this claim, or did I get it wrong and these requests are actually handled differently?
A Docker container is "chroot on steroids". In any case, the kernel is shared between all Docker containers and the host system, so all kernel calls go through the same kernel.
So we can do on our host (in any folder, as root):
mknod -m 444 urandom_host c 1 9
and in some linux chroot:
wget -O - <alpine chroot> | tar -x -C <some folder>
chroot <some folder>
mknod -m 444 urandom_in_chroot c 1 9
and we can do
docker run -ti --rm alpine sh -l
mknod -m 444 urandom_in_docker c 1 9
Then all open(2) and read(2) calls by any program on urandom_in_docker, urandom_in_chroot, and urandom_host go to the same kernel urandom module, bound to the special character file with major number 1 and minor number 9, which according to this list is the random number generator.
As for a virtual machine, the kernel is different (if there is any kernel at all). So all calls to any block/character special files are handled by a different kernel (possibly also using a different, virtualized architecture and a different instruction set). From the host, the virtual machine is visible as a single process (implementation dependent), which may or may not call the host's /dev/urandom when the virtualized system/program reads /dev/urandom. In virtualization anything can happen, and that depends on the particular implementation.
So, requests to /dev/urandom in Docker are handled the same way as on the host machine. As for how urandom is handled in the kernel, maybe here is a good start.
If you require more entropy, be sure to install and use haveged.

Enabling virtio_blk_device for qemu

I am using qemu 2.2.0 to emulate x86 Linux guest on x86 Linux host.
I want to use the existing dataplane mechanism in QEMU (implemented using virtqueues & IOThreads) for achieving parallel R/W operations in my device.
It requires enabling virtio-blk-device & verifying concurrency in the existing framework before implementing the same for my device.
I use the following command to enable the virtio block device & boot QEMU:
./qemu-system-x86_64_exe -m 2048 -usbdevice mouse -usbdevice keyboard -usbdevice tablet -enable-kvm -drive if=none,id=drive1,file=debian_wheezy_i386_desktop.raw -object iothread,id=iothread2 -device virtio-blk-device,id=drv0,drive=drive1,iothread=iothread2 -smp 8
This command fails with the error:
No 'virtio-bus' found for device 'virtio-blk-device'.
However, querying this device using "./qemu-system-x86_64_exe -device help" displays the following info for virtio-blk-device:
name virtio-blk-device, bus virtio-bus.
Is there something amiss in my command-line options?
I hit the same problem and couldn't find information on virtio-blk-device.
I switched to virtio-blk-pci instead.
virtio-blk-device is a VirtIO device that relies solely on memory-mapped I/O (MMIO) rather than the PCI bus. This does not work with QEMU's default machine type pc-i440fx-X.Y, or at least not out of the box.
You can use the machine type microvm (https://qemu.readthedocs.io/en/latest/system/i386/microvm.html), a minimalistic machine type without PCI and ACPI. To do so, add -machine microvm to your QEMU command line. Then virtio-blk-device and also virtio-net-device should work out of the box. You won't be able to use any devices that rely on PCI, however.
As already suggested, switching to the PCI variants is probably the best option in most use cases. As for the error message that no 'virtio-bus' is found, maybe your QEMU build defaults to some very unusual machine type. Try setting it explicitly with -machine pc.
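A PCI-based equivalent of the original command might look like this (an untested sketch; the image file name and iothread id are carried over from the question):

```shell
# virtio-blk-pci attaches to the default PCI bus of the pc machine type,
# so no special machine type is required
./qemu-system-x86_64_exe -machine pc -m 2048 -enable-kvm -smp 8 \
    -drive if=none,id=drive1,file=debian_wheezy_i386_desktop.raw \
    -object iothread,id=iothread2 \
    -device virtio-blk-pci,id=drv0,drive=drive1,iothread=iothread2
```

The iothread= property works the same on virtio-blk-pci as on virtio-blk-device, so the dataplane/IOThread behavior you want to verify is unchanged.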

Linux vanilla kernel on QEMU and networking with eth0

I have downloaded and compiled the vanilla Linux kernel (3.7.1).
I used BusyBox for the ramdisk, then booted it using QEMU.
This is my QEMU command line
qemu-system-i386 -kernel bzImage -initrd ramdisk.img -append "root=/dev/ram rw console=ttyS0 rdinit=/bin/ash" -nographic -net nic -net user
everything goes well.
However, I can't use networking in the vanilla kernel with BusyBox.
'ifup eth0' tells me:
/ # ifup eth0
ip: SIOCGIFFLAGS: No such device
I searched the Internet but couldn't find any clue...
Some advice would be nice.
Thank you in advance.
Most probably there is no driver loaded (in your example it should be e1000), or the device has another name.
In /sys/class/net/ you should find a listing of all available net devices.
If there is none (besides lo), the driver is not loaded.
In the QEMU monitor, type "info pci" and it will show you the PCI address of your Ethernet card. It should look like this:
...
Bus 0, device 3, function 0:
Ethernet controller: PCI device 8086:100e
...
This device corresponds to /sys/devices/pci0000:00/0000:00:03.0/.
The files "vendor" and "device" must contain "0x8086" and "0x100e", which is the PCI ID from above and from which the kernel determines the driver to load.
Try to load it manually with "modprobe e1000" or insmod. Once loaded, there must be a symlink named "driver" in that directory. If not, "dmesg" should give you the reason why.
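Put together, the checks might look like this (the PCI address is taken from the "info pci" output above; run these inside the guest):

```shell
# confirm the device identity the kernel sees
cat /sys/devices/pci0000:00/0000:00:03.0/vendor   # expect 0x8086
cat /sys/devices/pci0000:00/0000:00:03.0/device   # expect 0x100e

# load the driver and check that it bound to the device
modprobe e1000
ls -l /sys/devices/pci0000:00/0000:00:03.0/driver  # symlink appears once bound
ls /sys/class/net                                  # eth0 should now be listed
```

If modprobe fails, the module was probably not built; check that CONFIG_E1000 is enabled in your kernel config.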

kgdb is starting far away from init.c start_kernel()

Why does kgdb always stop at kernel/kgdb.c:1749, the line "kgdb: waiting for connection from remote gdb", partway through the Linux kernel's boot?
I want to start from the beginning.
My environment is:
PC: Ubuntu 10.10
kernel: 2.6.34.1
filesystem made with BusyBox
virtual machine: QEMU
Following tips from web searches, I have built my Linux. I can use it smoothly, but when I try to remote-gdb it, the kernel always starts from:
kernel/kgdb.c:1749 "kgdb: waiting for connection from remote gdb"
which is much too far away from the function start_kernel which I want to meet.
I am using the following:
qemu -kernel /usr/src/work/bzImage -append "root=/dev/sda kgdboc=ttyS0,115200 kgdbwait" -boot c -hda /usr/src/work/busybox.img -k en-us -serial tcp::4321,server
gdb /usr/src/work/vmlinux
(gdb) target remote localhost:4321
Then I added -S so it can start from the beginning. But when I gdb it, there is still something wrong.
When I enter the command next, it doesn't go to the next line but jumps somewhere else. For example, I set a breakpoint at start_kernel() in init.c; after next, it ends up in another file.
If "kgdb: waiting for connection from remote gdb" isn't early enough for you, you're going to have to try something other than kgdb. Think about this: kgdb is a service provided by the kernel. You can't debug the kernel "from the beginning" because the kernel has to perform enough initialization to provide the kgdb service.
Fortunately, there's another option for you. According to this source, if you start qemu with the flags -s -S, qemu will start the system and wait for you to attach a debugger to localhost:1234 before it even loads the kernel. Is that early enough?
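A typical early-attach session, adapting the command line from the question (paths are the asker's):

```shell
# -s: expose a gdbstub on localhost:1234; -S: freeze the CPU at startup,
# before the first kernel instruction runs
qemu -s -S -kernel /usr/src/work/bzImage \
     -append "root=/dev/sda" \
     -boot c -hda /usr/src/work/busybox.img

# in another terminal:
gdb /usr/src/work/vmlinux
(gdb) target remote localhost:1234
(gdb) break start_kernel
(gdb) continue
```

Because QEMU itself provides the debug stub, no kgdboc/kgdbwait kernel parameters are needed for this approach.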
