UEFI Memory location - emulation

I am attempting to write an x86_64 PC emulator, and I was wondering at what memory location the UEFI firmware is mapped. I know that a legacy BIOS is usually mapped at 0xF0000-0xFFFFF and 0xF0000000-0xFFFFFFFF. Is UEFI mapped to the same locations?

Yes, the UEFI firmware is mapped to the same locations as a legacy BIOS. Otherwise, why would cs:ip point to 0xFFFFFFF0 in its initial state?
Check out OvmfPkg in EDK II. It is an open-source UEFI firmware for virtual machines, and you can load it into well-known emulators such as Bochs and QEMU.
You can also use VMware's EFI firmware, but since it is proprietary, you may want to read VMware's license before proceeding.

UEFI is not loaded at a specific memory location. Something needs to be placed where the processor starts fetching instructions, and from there the SEC and PEI stages prepare for DXE to be decompressed to a dynamically chosen location, loading individual PE/COFF images as it goes.
The way to find out which memory regions are reserved is to call the GetMemoryMap boot service at runtime.
You may find the documentation of the existing EDK2 virtual machine port helpful.
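For illustration, calling it from an EDK II-style UEFI application looks roughly like the sketch below (error handling is trimmed; the first call with a zero-sized buffer just learns the required size):

    #include <Uefi.h>
    #include <Library/UefiBootServicesTableLib.h>
    #include <Library/MemoryAllocationLib.h>
    #include <Library/DebugLib.h>

    VOID
    DumpMemoryMap (
      VOID
      )
    {
      UINTN                  MapSize = 0;
      EFI_MEMORY_DESCRIPTOR  *Map    = NULL;
      UINTN                  MapKey;
      UINTN                  DescSize;
      UINT32                 DescVersion;
      EFI_STATUS             Status;

      // First call with MapSize == 0 reports how big the buffer must be.
      Status = gBS->GetMemoryMap (&MapSize, Map, &MapKey, &DescSize, &DescVersion);
      ASSERT (Status == EFI_BUFFER_TOO_SMALL);

      // Allocating the buffer can itself grow the map, so pad a little.
      MapSize += 2 * DescSize;
      Map = AllocatePool (MapSize);
      if (Map == NULL) {
        return;
      }

      Status = gBS->GetMemoryMap (&MapSize, Map, &MapKey, &DescSize, &DescVersion);
      if (!EFI_ERROR (Status)) {
        EFI_MEMORY_DESCRIPTOR  *Desc = Map;
        while ((UINT8 *)Desc < (UINT8 *)Map + MapSize) {
          DEBUG ((DEBUG_INFO, "Type %u Start 0x%lx Pages 0x%lx\n",
                  Desc->Type, Desc->PhysicalStart, Desc->NumberOfPages));
          Desc = (EFI_MEMORY_DESCRIPTOR *)((UINT8 *)Desc + DescSize);
        }
      }

      FreePool (Map);
    }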

Related

UEFI Secure Boot Linux Permissions

I've got a CentOS setup with UEFI Secure Boot turned on. I'm working on debugging a set of BIOS tools that were developed quite some time ago. However, when Secure Boot is turned on, it appears user applications are no longer allowed to obtain higher privilege levels.
Essentially I have a kernel module that maps a virtual-to-physical memory buffer which is used to communicate with the BIOS. The buffer is set up and a software SMI is triggered so the BIOS can retrieve it, process it, and put the results back into the buffer.
I'm trying to figure out how this can still be done with Secure Boot enabled. The kernel module can still set up the buffer, but it appears that I'm unable to call into the character device set up by the kernel module.
Any ideas?
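For context, the buffer-plus-software-SMI handoff described above usually boils down to something like this kernel-module sketch (0xB2 is the conventional APM/SMI command port; the command value and the way the firmware learns the buffer address are hypothetical, firmware-specific details):

    #include <linux/module.h>
    #include <linux/slab.h>
    #include <linux/io.h>

    #define SMI_CMD_PORT 0xB2   /* conventional APM/SMI command port; chipset-specific */
    #define SMI_CMD      0x42   /* hypothetical command value the BIOS tools expect */
    #define BUF_SIZE     4096

    static void *smi_buf;

    static int __init smi_demo_init(void)
    {
        phys_addr_t phys;

        /* kmalloc returns physically contiguous memory the SMI handler can read. */
        smi_buf = kmalloc(BUF_SIZE, GFP_KERNEL);
        if (!smi_buf)
            return -ENOMEM;

        phys = virt_to_phys(smi_buf);
        pr_info("smi_demo: buffer at physical address %pa\n", &phys);

        /* How the firmware learns 'phys' is platform-specific (often a fixed
         * register or mailbox) and omitted here.  Trigger the software SMI: */
        outb(SMI_CMD, SMI_CMD_PORT);
        return 0;
    }

    static void __exit smi_demo_exit(void)
    {
        kfree(smi_buf);
    }

    module_init(smi_demo_init);
    module_exit(smi_demo_exit);
    MODULE_LICENSE("GPL");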

Is it possible to simulate Linux on USB devices using VMware?

I have successfully installed Red Hat Linux and run it from the hard drive using VMware. Things work quite smoothly if I put all the node VMs on my physical machine.
For management purposes, I want to use USB devices to store the ISOs and plug one in whenever more nodes are needed. I would like to run VMware on my physical machine.
Can I just build one virtual machine on one USB device, so I can plug in a node when needed?
I mean, if I put machine A on USB 1 and machine B on USB 2, can I build a network using my physical machine as the server?
(1) If so, are there problems I should pay attention to?
(2) If not, is there an alternative solution for my management purpose? (I do not want to put the VMs on partitions of my physical machine for now.) Can I use multiple external hard drives instead?
Actually, I want to set up a master-slave Hadoop 2.x deployment using virtual machines. Are there any good references for this purpose?
I should explain that I am not too lazy to try this myself; however, it would be rather expensive to do so without knowing anything about the feasibility of this approach.
Thanks for your time.
I'm not an expert on VMware, but I know that this is common to almost any virtualization system. You can install a system:
on a physical device (a hard disk, a hard disk partition)
or on a file
The physical-device approach normally gives better performance, since there is only one driver between the OS and the device, while the file approach makes it simpler to add another VM.
Now for your questions:
Can I just build one virtual machine on one USB device? Yes, you can always do it with a file, and depending on the host OS, directly on the physical device.
... can I build a network using my physical machine as server? Yes, VMware will allow the VMs to communicate with each other, with the host, and/or with the external world, depending on how you configure the network interfaces of your VMs.
If so, are there problems I should pay attention to?
USB devices can be plugged and unplugged. If you inadvertently unplug one while the OS is active, bad things could happen. That's why I advise using files on the hard disk to host your VMs.
Memory sticks (this does not apply to USB hard disks) support a limited number of writes and generally perform poorly on writes. Never put a temp filesystem or swap there; use a memory filesystem for that instead, as is done for live systems running from read-only CD or DVD.
Every VM uses memory from the host system. That is often the first limit on the number of simultaneous VMs on a personal system.

Xen E820 memory map for domU kernels

How does Xen handle the E820 memory map for domU kernels? In my specific problem I am trying to map non-volatile RAM into domU kernels. The dom0 memory map shows the following relevant entries:
100000000-17fffffff : System RAM (4GB to 6GB)
180000000-37fffffff : reserved (6GB to 14GB)
The second line corresponds to the NVRAM, a region from 6GB to 14GB in the dom0 kernel. How can I map this NVRAM region into a domU kernel, which does not map this region at all?
Ultimately I want the NVRAM region to be available in other domU VMs, so any solutions or advice would be very helpful.
P.S.: If I attempt to write to this region from the domU kernel, will Xen intercept the write? It is just a write to a memory region, which should not be a problem, but it might appear as a hardware access.
Guest domains in Xen have two different models for x86:
1. Hardware Virtual Machine (HVM): takes advantage of the Intel VT or AMD SVM extensions to enable full virtualization on the x86 platform.
2. Paravirtualized (PV): this mode modifies the operating system's source code to work around the x86 virtualization problems and also gives the system a performance boost.
These two models handle the E820 memory map differently. The E820 memory map basically gives an OS the physical address space it can operate on, along with the locations of I/O devices. In PV mode, I/O devices are made available through Xenstore. The domain builder only provides a console device to the PV guest during boot; all other I/O devices have to be mapped by the guest. A guest in this mode starts execution in protected mode instead of real mode on x86. The domain builder maps the start_info page into the guest domain's physical address space. This start_info page contains most of the information needed to initialize a kernel, such as the number of available pages, the number of CPUs, console information, Xenstore details, etc. The E820 memory map in this context essentially just describes the number of available memory pages, because no BIOS is emulated and I/O device information is provided separately through Xenstore.
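To make that concrete, here is a deliberately simplified sketch (not Xen's real ABI; the authoritative layout is in Xen's public xen.h header) of the kind of information a PV guest gets from start_info instead of an E820 table:

    #include <stdio.h>

    /* Deliberately simplified stand-in for Xen's start_info page; the real
     * structure is defined in Xen's public headers (xen/include/public/xen.h). */
    struct start_info_sketch {
        unsigned long nr_pages;       /* pages of pseudo-physical RAM for this domU */
        unsigned long store_mfn;      /* Xenstore shared ring page */
        unsigned int  store_evtchn;   /* Xenstore event channel */
        unsigned long console_mfn;    /* console shared ring page */
        unsigned int  console_evtchn; /* console event channel */
    };

    int main(void)
    {
        /* A PV guest's "memory map" is essentially just nr_pages: one usable
         * range of pseudo-physical pages.  Devices come via Xenstore, not E820. */
        struct start_info_sketch si = { .nr_pages = 262144 /* 1 GiB in 4K pages */ };

        printf("usable pseudo-physical RAM: 0x0 - 0x%lx\n", si.nr_pages * 4096UL);
        return 0;
    }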
On the other hand, for an HVM guest the BIOS and other devices have to be emulated by Xen. This mode has to support unmodified operating systems, so the previous method cannot be used. BIOS emulation is done via code borrowed from Bochs, while devices are emulated using QEMU code. Here the OS is provided with an E820 memory map built by the domain builder. The HVM domain builder typically passes the memory layout information to the Bochs emulator, which then performs the required task.
To get hold of the NVRAM pages you will have to build a separate MMU (memory manager) for the NVRAM. It should track all the NVRAM pages and allocate/free them on demand, just like the RAM pages. It is a lot of work.
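To give a flavour of what that tracker would do, here is a minimal sketch of an on-demand NVRAM page allocator (the 6 GB base and 8 GB size come from the map above; everything else is hypothetical):

    #include <stdint.h>

    #define NVRAM_BASE   0x180000000ULL           /* 6 GB, from the dom0 E820 map */
    #define NVRAM_SIZE   (8ULL << 30)             /* 8 GB reserved region (6-14 GB) */
    #define PAGE_SIZE_4K 4096ULL
    #define NVRAM_PAGES  (NVRAM_SIZE / PAGE_SIZE_4K)

    static uint8_t nvram_bitmap[NVRAM_PAGES / 8]; /* one bit per 4 KiB NVRAM page */

    /* Allocate one free NVRAM page and return its machine address, or 0 if full. */
    uint64_t nvram_alloc_page(void)
    {
        for (uint64_t i = 0; i < NVRAM_PAGES; i++) {
            if (!(nvram_bitmap[i / 8] & (1 << (i % 8)))) {
                nvram_bitmap[i / 8] |= 1 << (i % 8);
                return NVRAM_BASE + i * PAGE_SIZE_4K;
            }
        }
        return 0;
    }

    /* Return a previously allocated page to the pool. */
    void nvram_free_page(uint64_t maddr)
    {
        uint64_t i = (maddr - NVRAM_BASE) / PAGE_SIZE_4K;
        nvram_bitmap[i / 8] &= ~(1 << (i % 8));
    }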

How can the Linux kernel be forced to enumerate the PCI-e bus?

Linux kernel 2.6
I've got an FPGA, loaded over GPIO, connected to a development board running Linux.
The FPGA will transmit and receive data over the PCI Express bus. However, the bus is enumerated at boot, and as such no link is discovered (because the FPGA is not loaded at boot time).
How can I force re-enumeration of the PCI-e bus in Linux?
Is there a simple command or will I have to make kernel changes?
I need the capability to hot-plug PCIe devices.
As root, try the following command:
echo "1" > /sys/bus/pci/rescan
See this link for more information: http://www.kernel.org/doc/Documentation/ABI/testing/sysfs-bus-pci
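If you want to trigger the rescan from your FPGA-loading program rather than from a shell, the same sysfs node can be written from code; a minimal userspace sketch:

    #include <fcntl.h>
    #include <unistd.h>

    /* Trigger a PCI bus rescan, equivalent to: echo 1 > /sys/bus/pci/rescan */
    int pci_rescan(void)
    {
        int fd = open("/sys/bus/pci/rescan", O_WRONLY);
        if (fd < 0)
            return -1;

        ssize_t n = write(fd, "1", 1);
        close(fd);
        return (n == 1) ? 0 : -1;
    }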
I wonder what platform you are on. A workaround (a hack, really) that works on x86 systems is to have the BIOS statically configure a PCI device at whatever bus/device/function the FPGA normally lands on; the OS will then enumerate the device and reserve the PCI space for it (even though the device isn't really there). In your device driver you will then have to do some extra things, such as setting up the BARs and interrupt lines manually after the FPGA has been programmed. Of course this requires modifying the BIOS: if you are working with a BIOS vendor you can contract them to make this change for you; if you are not, it will be much harder... Also keep in mind that I was working with VxWorks on x86, and we had AMI make a custom BIOS for our boards.
If you don't have a BIOS, then consider programming the FPGA in the bootloader. There you already have the ability to read from disk, and adding GPIO capability probably isn't too difficult (assuming you are using JTAG and GPIOs?); in fact, depending on which bootloader you use, it might already be able to drive GPIO.
The problem with modifying the kernel to do this is that you have to find the sweet spot where you can read the bitfile before PCI enumeration. If, for example, the disk drivers are initialized after PCI, then you would have to make some fairly radical changes to the kernel just to read the bitfile prior to PCI enumeration, which might cause other annoying problems.
One other option, which you may have already discovered and which is really only acceptable during development: power up the system, program the FPGA board, then do a reset without a power cycle (for example, sudo reboot now). The FPGA should keep its configuration, and Linux should enumerate it.
After you turn on your computer, the BIOS enumerates the PCI bus and attempts to satisfy all I/O-space and memory-mapped I/O (MMIO) requests. It sets up the BARs initially; when the operating system loads, the OS can change these BARs as it sees fit while the PCI bus driver enumerates the bus again. It is even possible for the superuser to run setpci to change the BARs after the BIOS has configured them and the OS has loaded (which may cause drivers to fail, among other bad things, if done improperly).
I have had to do this in cases where the card in question was not assigned any resources by the BIOS, because the region requested required a 64-bit address and the BIOS only handled 32-bit address assignments. I was able to go in after the fact and change these addresses (originally assigned by the BIOS) to whatever addresses I saw fit, insert the kernel module, and my driver would map and use the newly assigned addresses for the card without knowing the difference.
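For context, doing that kind of fix-up from inside a kernel driver is just ordinary PCI config-space access; a minimal sketch (BAR0 and new_base are hypothetical, and a real driver would also size and validate the region):

    #include <linux/pci.h>

    /* Re-program BAR0 after the firmware/OS assignment has been fixed up.
     * new_base is a hypothetical address chosen to fit the device's request. */
    void reassign_bar0(struct pci_dev *pdev, u32 new_base)
    {
        u32 bar0;

        pci_read_config_dword(pdev, PCI_BASE_ADDRESS_0, &bar0);
        pr_info("BAR0 was 0x%08x\n", bar0);

        pci_write_config_dword(pdev, PCI_BASE_ADDRESS_0, new_base);
        pci_read_config_dword(pdev, PCI_BASE_ADDRESS_0, &bar0);
        pr_info("BAR0 now 0x%08x\n", bar0);
    }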
The problem with hot-plugging PCI Express cards is that the power to the slot itself cannot be turned on or off without specific hotplug controllers on the motherboard/backplane. Without those controllers to turn the slot's power off, shorts between the tiny pins can occur when the card is physically inserted or removed while power is still present. Hotplug events, however, can be initiated by either end (the host or the endpoint device). That does not seem to be the case here, but if your FPGA already had a link established with the root complex, a possible solution to your problem would be to generate hotplug interrupts to cause a bus rescan in the OS.
There is a major problem, though: if your card does not actually obtain a link to the root complex, it won't be able to generate any hotplug events, which seems to be your situation. After booting, the FPGA should toggle the PRESENT line on the PCIe bus to tell the OS there is a card ready to be enumerated. Once it is detected, the OS should attempt to establish a link to the card and assign memory regions to the device. After the OS enumerates the card, you'll be able to load drivers against it and see it in lspci. You stated you're using kernel 2.6, which does support hotplugging and dynamic resource allocation, so this method should work as long as your FPGA is also able to toggle the PCIe PRESENT line.

OS reload on a remote Linux machine

If we need to reload the OS on a remote machine, how can network boot be enabled on the client machine without making any changes in the BIOS?
I am trying to develop a control panel that includes this feature, i.e. a fully automated OS reload, and I am thinking of using PXE boot. But enabling and disabling boot-from-network is a problem. Any workaround, please?
Hacker approach: use the bootloader to load gPXE from the hard disk.
You'll need a version that fits your NIC: images for many hardware types and boot methods can be generated at ROM-o-matic. Use the NIC's PCI ID to programmatically select the version that fits best. People may also have add-on network cards, e.g. for gigabit LAN.
This way you don't need to fiddle with mainboard- and network-card-specific ways to turn PXE on.
First, since your question is not programming related, I suggest you pose it again on the sister site serverfault.com. You might get more/better answers there.
Second, I do not think you will be able to remotely activate PXE on arbitrary machines. Maybe this works when you have Intel's AMT (Active Management Technology) on those machines, but then you already have BIOS access. Nevertheless, you could activate PXE boot on all machines and, from your PXE server, selectively offer boot images only to the machines you want to reinstall. All other machines would then just boot the installed OS. The FAI (Fully Automatic Installation) system uses that approach, but it is Linux-only, AFAIK.
I agree with Dubu that reliably enabling/disabling PXE boot in the BIOS across heterogeneous target hardware is not readily achievable. The better approach is to always configure all your target machines with PXE ahead of the local disk in their boot order. You can PXE boot into something like PXELINUX and have the default choice be a local disk boot. Then you can selectively target particular machines to PXE boot into a network-loaded OS (for OS reinstallation purposes) by configuring symlinks named after the target machine's MAC address inside the PXELINUX TFTP root.
