To check if Intel's VT-x or AMD's AMD-V is enabled in the BIOS/UEFI, I use:
if systool -m kvm_amd -v &> /dev/null || systool -m kvm_intel -v &> /dev/null ; then
    echo "AMD-V / VT-x is enabled in the BIOS/UEFI."
else
    echo "AMD-V / VT-x is not enabled in the BIOS/UEFI."
fi
I couldn't find a way to check if Intel's VT-d or AMD's IOMMU is enabled in the BIOS/UEFI.
I need a way to detect whether it is enabled without having the IOMMU kernel parameters set (iommu=1, amd_iommu=on, intel_iommu=on).
One idea I had was to use rdmsr, but I'm not sure that would work. Instead of systool, I initially wanted to use sudo rdmsr 0x3A, but it didn't work for me. It always reports:
rdmsr: CPU 0 cannot read MSR 0x0000003a
rdmsr is part of msr-tools, by the way, and it requires the msr kernel module to be loaded first (sudo modprobe msr).
Allegedly, sudo rdmsr 0x3A should have returned 3 or 5 to indicate that VT-x/AMD-V is enabled...
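For reference, here is a sketch of what the MSR check could look like on a machine where rdmsr works (an assumption on my part: MSR 0x3A is Intel's IA32_FEATURE_CONTROL, where bit 0 is the lock bit, bit 1 enables VMX inside SMX, and bit 2 enables VMX outside SMX; this checks CPU 0 only):
sudo modprobe msr
# rdmsr prints the register value in hex, without a 0x prefix.
val=$(sudo rdmsr -p 0 0x3A 2>/dev/null)
if [ -n "$val" ] && (( 0x$val & 0x4 )); then
    echo "VT-x is enabled in the BIOS/UEFI."
else
    echo "VT-x is disabled, or MSR 0x3A could not be read."
fi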
If VT-d is enabled, Linux will configure DMA Remapping at boot time. The easiest way to check is to look in dmesg for DMAR entries. If you don't see errors, then VT-d is enabled.
For example:
[root@localhost ~]# dmesg | grep DMAR
[ 0.000000] ACPI: DMAR 0x00000000BBECB000 0000A8 (v01 LENOVO TP-R0D 00000930 PTEC 00000002)
[ 0.001000] DMAR: Host address width 39
[ 0.001000] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[ 0.001000] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[ 0.001000] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[ 0.001000] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[ 0.001000] DMAR: RMRR base: 0x000000bbdd8000 end: 0x000000bbdf7fff
[ 0.001000] DMAR: RMRR base: 0x000000bd000000 end: 0x000000bf7fffff
[ 0.001000] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
[ 0.001000] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[ 0.001000] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[ 0.002000] DMAR-IR: Enabled IRQ remapping in x2apic mode
Another example with x2apic opt out:
[root@localhost ~]# dmesg | grep DMAR
[ 0.000000] ACPI: DMAR 0000000079a20300 000C4 (v01 SUPERM SMCI--MB 00000001 INTL 20091013)
[ 0.106389] DMAR: Host address width 46
[ 0.106392] DMAR: DRHD base: 0x000000fbffc000 flags: 0x1
[ 0.106400] DMAR: dmar0: reg_base_addr fbffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
[ 0.106402] DMAR: RMRR base: 0x0000007bb24000 end: 0x0000007bb32fff
[ 0.106404] DMAR: ATSR flags: 0x0
[ 0.106407] DMAR: RHSA base: 0x000000fbffc000 proximity domain: 0x0
[ 0.106409] DMAR-IR: IOAPIC id 8 under DRHD base 0xfbffc000 IOMMU 0
[ 0.106411] DMAR-IR: HPET id 0 under DRHD base 0xfbffc000
[ 0.106413] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit.
[ 0.106414] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting.
[ 0.106591] DMAR-IR: Enabled IRQ remapping in xapic mode
Either way, you're looking for that last line, DMAR-IR: Enabled IRQ remapping in <whichever> mode.
On a system with VT-d disabled, you will either see an error message, or nothing at all.
[root@localhost ~]# dmesg | grep DMAR
[root@localhost ~]#
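A scripted version of this check might look like the following sketch (it assumes the boot messages are still in the kernel ring buffer):
if dmesg | grep -q 'DMAR-IR: Enabled IRQ remapping'; then
    echo "VT-d is enabled (IRQ remapping is active)."
else
    echo "VT-d appears to be disabled, or the DMAR messages have rotated out of the buffer."
fi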
I just found another way that seems to work even if the IOMMU kernel parameters have not been set (compgen -G succeeds only if the glob matches at least one path, i.e. at least one IOMMU group exists):
if compgen -G "/sys/kernel/iommu_groups/*/devices/*" > /dev/null; then
    echo "AMD's IOMMU / Intel's VT-d is enabled in the BIOS/UEFI."
else
    echo "AMD's IOMMU / Intel's VT-d is not enabled in the BIOS/UEFI."
fi
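To see what that glob actually matches, you can also list the device symlinks directly; each entry is a device the kernel has assigned to an IOMMU group:
find /sys/kernel/iommu_groups/ -type l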
Building on Jo-Erlend Schinstad's answer:
Install cpu-checker
$ sudo apt-get update
$ sudo apt-get install cpu-checker
Then check:
$ kvm-ok
If virtualization is enabled, you should see something like:
INFO: /dev/kvm exists
KVM acceleration can be used
Otherwise, you might see something like:
INFO: /dev/kvm does not exist
HINT: sudo modprobe kvm_intel
INFO: Your CPU supports KVM extensions
INFO: KVM (vmx) is disabled by your BIOS
HINT: Enter your BIOS setup and enable Virtualization Technology (VT),
and then hard poweroff/poweron your system
KVM acceleration can NOT be used
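For scripting, kvm-ok's exit status can be used instead of parsing its output (a sketch, assuming a zero exit status means acceleration is usable):
if sudo kvm-ok >/dev/null 2>&1; then
    echo "KVM acceleration can be used."
else
    echo "KVM acceleration can NOT be used."
fi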
After starting the server, the Fibre Channel HBA cannot initialize its firmware.
[root@compute8 ~]# dmesg | grep qla2xxx
[ 2.919912] qla2xxx [0000:00:00.0]-0005: : QLogic Fibre Channel HBA Driver: 10.02.06.200-k.
[ 2.922762] Warning: Deprecated Hardware is detected: qla2xxx:2031:1077 @ 0000:3b:00.0 will not be maintained in a future major release and may be disabled
[ 2.928626] qla2xxx [0000:00:00.0]-011c: : MSI-X vector count: 32.
[ 2.931519] qla2xxx [0000:00:00.0]-001d: : Found an ISP2031 irq 83 iobase 0x0000000036a45c68.
[ 3.057038] qla2xxx [0000:3b:00.0]-0075:8: ZIO mode 6 enabled; timer delay (200 us).
[ 3.060203] qla2xxx [0000:3b:00.0]-ffff:8: FC4 priority set to NVMe
[ 4.971121] qla2xxx [0000:3b:00.0]-00d2:8: Init Firmware **** FAILED ****.
[ 4.974362] qla2xxx [0000:3b:00.0]-00d6:8: Failed to initialize adapter - Adapter flags 2.
[ 4.983547] Warning: Deprecated Hardware is detected: qla2xxx:2031:1077 @ 0000:3b:00.1 will not be maintained in a future major release and may be disabled
[ 4.989622] qla2xxx [0000:00:00.0]-011c: : MSI-X vector count: 32.
[ 4.992605] qla2xxx [0000:00:00.0]-001d: : Found an ISP2031 irq 83 iobase 0x000000009c44281d.
[ 5.056118] qla2xxx [0000:3b:00.1]-0075:8: ZIO mode 6 enabled; timer delay (200 us).
[ 5.059061] qla2xxx [0000:3b:00.1]-ffff:8: FC4 priority set to NVMe
[ 6.770143] qla2xxx [0000:3b:00.1]-00d2:8: Init Firmware **** FAILED ****.
[ 6.773060] qla2xxx [0000:3b:00.1]-00d6:8: Failed to initialize adapter - Adapter flags 2.
[root@compute8 ~]# ls /lib/firmware/ | grep '_fw.bin'
ql2100_fw.bin
ql2200_fw.bin
ql2300_fw.bin
ql2322_fw.bin
ql2400_fw.bin
ql2500_fw.bin
[root@compute8 ~]# lspci -vvv | grep -i -e fib -e hba
Capabilities: [a8] SATA HBA v1.0 BAR4 Offset=00000004
Capabilities: [a8] SATA HBA v1.0 BAR4 Offset=00000004
0000:3b:00.0 Fibre Channel: QLogic Corp. ISP8324-based 16Gb Fibre Channel to PCI Express Adapter (rev 02)
Product Name: QLE2672 QLogic 2-port 16Gb Fibre Channel Adapter
0000:3b:00.1 Fibre Channel: QLogic Corp. ISP8324-based 16Gb Fibre Channel to PCI Express Adapter (rev 02)
Product Name: QLE2672 QLogic 2-port 16Gb Fibre Channel Adapter
[root@compute8 ~]# lspci -v -s 3b:00.0
0000:3b:00.0 Fibre Channel: QLogic Corp. ISP8324-based 16Gb Fibre Channel to PCI Express Adapter (rev 02)
Subsystem: QLogic Corp. Device fb02
Physical Slot: 7-1
Flags: fast devsel, IRQ 83, NUMA node 0
Memory at 382ffff0a000 (64-bit, prefetchable) [size=8K]
Memory at 382ffff04000 (64-bit, prefetchable) [size=16K]
Memory at 382fffe00000 (64-bit, prefetchable) [size=1M]
Expansion ROM at e0640000 [disabled] [size=256K]
Capabilities: [44] Power Management version 3
Capabilities: [4c] Express Endpoint, MSI 00
Capabilities: [88] Vital Product Data
Capabilities: [90] MSI-X: Enable- Count=32 Masked-
Capabilities: [100] Advanced Error Reporting
Capabilities: [154] Alternative Routing-ID Interpretation (ARI)
Capabilities: [1b0] Secondary PCI Express
Kernel modules: qla2xxx
[root@compute8 ~]# lsmod | grep qla2xxx
qla2xxx 974848 0
nvme_fc 53248 1 qla2xxx
scsi_transport_fc 81920 1 qla2xxx
[root@compute8 ~]# modinfo -a qla2xxx
QLogic Corporation
[root@compute8 ~]# modinfo -d qla2xxx
QLogic Fibre Channel HBA Driver
[root@compute8 ~]# modinfo -l qla2xxx
GPL
[root@compute8 ~]# modinfo qla2xxx | grep version
version: 10.02.06.200-k
[root@compute8 ~]# modinfo -k `uname -r` -n qla2xxx
/lib/modules/4.18.0-372.26.1.el8_6.x86_64/kernel/drivers/scsi/qla2xxx/qla2xxx.ko.xz
[root@compute8 ~]# modprobe --show-depends qla2xxx
insmod /lib/modules/4.18.0-372.26.1.el8_6.x86_64/kernel/drivers/scsi/scsi_transport_fc.ko.xz
insmod /lib/modules/4.18.0-372.26.1.el8_6.x86_64/kernel/block/t10-pi.ko.xz
insmod /lib/modules/4.18.0-372.26.1.el8_6.x86_64/kernel/drivers/nvme/host/nvme-core.ko.xz
insmod /lib/modules/4.18.0-372.26.1.el8_6.x86_64/kernel/drivers/nvme/host/nvme-fabrics.ko.xz
insmod /lib/modules/4.18.0-372.26.1.el8_6.x86_64/kernel/drivers/nvme/host/nvme-fc.ko.xz
insmod /lib/modules/4.18.0-372.26.1.el8_6.x86_64/kernel/drivers/scsi/qla2xxx/qla2xxx.ko.xz
I installed the recommended utility, QConvergeConsole CLI version 2.4.0, to see information about the card and its ports. After selecting menu item 1, there is no information on the card.
QConvergeConsole
CLI - Version 2.4.0 (Build 20)
Main Menu
1: Adapter Information
2: Adapter Configuration
3: Adapter Updates
4: Adapter Diagnostics
5: Monitoring
6: Refresh
7: Help
8: Exit
Please Enter Selection: 1
QConvergeConsole
CLI - Version 2.4.0 (Build 20)
Adapter Type Selection
(p or 0: Previous Menu; m or 98: Main Menu; ex or 99: Quit)
Please Enter Selection:
I found the following table:
ISP 21XX — ql2100_fw.bin
ISP 22XX — ql2200_fw.bin
ISP 2300 — ql2300_fw.bin
ISP 2322 — ql2322_fw.bin
ISP 24XX — ql2400_fw.bin
ISP 25XX — ql2500_fw.bin
ISP 2031 — ql2600_fw.bin
ISP 27XX — ql2700_fw.bin
As I understand it, for my card with the ISP 2031 chip I need ql2600_fw.bin. I looked through all the archives for QLE2672 cards in the support section of the site https://www.marvell.com/support/downloads.html and found nothing.
I tried RHEL 8.4, AlmaLinux 8.4/8.6, and CentOS 7.9/8.3; the result is the same.
A solution to my problem:
Download kernel-firmware-qlogic and unpack it. Copy the firmware file into /lib/firmware and reload the qla2xxx module; the ql2xfwloadbin=2 option tells the driver to load its firmware via the request_firmware() interface (i.e. from /lib/firmware) rather than from the adapter's flash.
# wget https://rpmfind.net/linux/opensuse/tumbleweed/repo/oss/noarch/kernel-firmware-qlogic-20220902-1.1.noarch.rpm
# rpm2cpio kernel-firmware-qlogic-20220902-1.1.noarch.rpm | cpio -idmv
# unxz ./usr/lib/firmware/ql2600_fw.bin.xz
# cp ./usr/lib/firmware/ql2600_fw.bin /lib/firmware/
# modprobe -r qla2xxx
# vi /etc/modprobe.d/qla2xxx.conf
options qla2xxx ql2xfwloadbin=2
# modprobe qla2xxx
# dmesg | grep qla2xxx
[ 3651.498294] qla2xxx [0000:3b:00.1]-0075:18: ZIO mode 6 enabled; timer delay (200 us).
[ 3651.498826] qla2xxx [0000:3b:00.1]-ffff:18: FC4 priority set to NVMe
[ 3652.628291] qla2xxx [0000:3b:00.1]-500a:18: LOOP UP detected (16 Gbps).
[ 3652.797555] scsi host18: qla2xxx
[ 3652.845337] qla2xxx [0000:3b:00.1]-00fb:18: QLogic QLE2672 - QLE2672 QLogic 2-port 16Gb Fibre Channel Adapter.
[ 3652.845959] qla2xxx [0000:3b:00.1]-00fc:18: ISP2031: PCIe (8.0GT/s x8) @ 0000:3b:00.1 hdma+ host#=18 fw=8.04.00 (d0d5).
Check:
# qaucli
QConvergeConsole
FC Adapter Information
1: FC Adapter Information
2: FC Port Information
3: FC VPD Information
4: FC Target/LUN Information
5: FC Flash Information
(p or 0: Previous Menu; m or 98: Main Menu; ex or 99: Quit)
Please Enter Selection: 1
QConvergeConsole
CLI - Version 2.4.0 (Build 20)
Adapter Information
1: HBA Model: QLE2672 SN: BFE1501K49172
Port 1 WWPN: 21-00-00-0E-1E-C2-65-E8 Online
Port 2 WWPN: 21-00-00-0E-1E-C2-65-E9 Online
I have an issue with my PocketBeagle and I need help. I am trying to set up can0 with an MCP2551 transceiver. I load BB-CAN0-00A0.dtbo and BB-CAN1-00A0.dtbo, but I have no CAN device in /sys/class/net, and the devices can0 and can1 don't exist.
This is my version.sh log:
git:/opt/scripts/:[73593ebe3b7d3cc381eeb502d45ccb33a6ec5e78]
eeprom:[A335PBGL00A21748EPB00201]
model:[TI_AM335x_PocketBeagle]
dogtag:[BeagleBoard.org Debian Image 2018-08-30]
bootloader:[microSD]:[/dev/mmcblk0]:[U-Boot 2018.03-00002-gac9cce7c6a]:[location: dd MBR]
kernel:[4.14.67-ti-r73]
nodejs:[v6.14.4]
uboot_overlay_options:[enable_uboot_overlays=1]
uboot_overlay_options:[uboot_overlay_addr4=/lib/firmware/BB-CAN0-00A0.dtbo]
uboot_overlay_options:[uboot_overlay_addr5=/lib/firmware/BB-CAN1-00A0.dtbo]
uboot_overlay_options:[uboot_overlay_pru=/lib/firmware/AM335X-PRU-RPROC-4-14-TI-00A0.dtbo]
uboot_overlay_options:[enable_uboot_cape_universal=1]
pkg check: to individually upgrade run: [sudo apt install --only-upgrade ]
pkg:[bb-cape-overlays]:[4.4.20180803.0-0rcnee0~stretch+20180804]
pkg:[bb-wl18xx-firmware]:[1.20180517-0rcnee0~stretch+20180517]
pkg:[kmod]:[23-2rcnee1~stretch+20171005]
pkg:[librobotcontrol]:[1.0.2-git20180829.0-0rcnee0~stretch+20180830]
pkg:[firmware-ti-connectivity]:[20170823-1rcnee1~stretch+20180328]
groups:[debian : debian adm kmem dialout cdrom floppy audio dip video plugdev users systemd-journal i2c bluetooth netdev cloud9ide gpio pwm eqep admin spi tisdk weston-launch xenomai]
cmdline:[console=ttyO0,115200n8 bone_capemgr.uboot_capemgr_enabled=1 root=/dev/mmcblk0p1 ro rootfstype=ext4 rootwait coherent_pool=1M net.ifnames=0 quiet]
dmesg | grep pinctrl-single
[ 1.148050] pinctrl-single 44e10800.pinmux: 142 pins at pa f9e10800 size 568
[ 1.241720] pinctrl-single 44e10800.pinmux: pin PIN95 already requested by ocp:P1_28_pinmux; cannot claim for 481cc000.can
[ 1.253051] pinctrl-single 44e10800.pinmux: pin-95 (481cc000.can) status -22
[ 1.260180] pinctrl-single 44e10800.pinmux: could not request pin 95 (PIN95) from group pinmux_dcan0_pins on device pinctrl-single
[ 1.280383] pinctrl-single 44e10800.pinmux: pin PIN97 already requested by ocp:P2_09_pinmux; cannot claim for 481d0000.can
[ 1.291556] pinctrl-single 44e10800.pinmux: pin-97 (481d0000.can) status -22
[ 1.298668] pinctrl-single 44e10800.pinmux: could not request pin 97 (PIN97) from group pinmux_dcan1_pins on device pinctrl-single
dmesg | grep gpio-of-helper
[ 1.156291] gpio-of-helper ocp:cape-universal: ready
END
And my /boot/uEnv.txt:
uname_r=4.14.67-ti-r73
#uuid=
#dtb=
###U-Boot Overlays###
###Documentation: http://elinux.org/Beagleboard:BeagleBoneBlack_Debian#U-Boot_Overlays
###Master Enable
enable_uboot_overlays=1
###
###Overide capes with eeprom
#uboot_overlay_addr0=/lib/firmware/.dtbo
#uboot_overlay_addr1=/lib/firmware/.dtbo
#uboot_overlay_addr2=/lib/firmware/.dtbo
#uboot_overlay_addr3=/lib/firmware/.dtbo
###
###Additional custom capes
uboot_overlay_addr4=/lib/firmware/BB-CAN0-00A0.dtbo
uboot_overlay_addr5=/lib/firmware/BB-CAN1-00A0.dtbo
#uboot_overlay_addr6=/lib/firmware/.dtbo
#uboot_overlay_addr7=/lib/firmware/.dtbo
###
###Custom Cape
#dtb_overlay=/lib/firmware/.dtbo
###
###Disable auto loading of virtual capes (emmc/video/wireless/adc)
#disable_uboot_overlay_emmc=1
#disable_uboot_overlay_video=1
#disable_uboot_overlay_audio=1
#disable_uboot_overlay_wireless=1
#disable_uboot_overlay_adc=1
###
###PRUSS OPTIONS
###pru_rproc (4.4.x-ti kernel)
#uboot_overlay_pru=/lib/firmware/AM335X-PRU-RPROC-4-4-TI-00A0.dtbo
###pru_rproc (4.14.x-ti kernel)
uboot_overlay_pru=/lib/firmware/AM335X-PRU-RPROC-4-14-TI-00A0.dtbo
###pru_uio (4.4.x-ti, 4.14.x-ti & mainline/bone kernel)
#uboot_overlay_pru=/lib/firmware/AM335X-PRU-UIO-00A0.dtbo
###
###Cape Universal Enable
enable_uboot_cape_universal=1
###
###Debug: disable uboot autoload of Cape
#disable_uboot_overlay_addr0=1
#disable_uboot_overlay_addr1=1
#disable_uboot_overlay_addr2=1
#disable_uboot_overlay_addr3=1
###
###U-Boot fdt tweaks... (60000 = 384KB)
#uboot_fdt_buffer=0x60000
###U-Boot Overlays###
cmdline=coherent_pool=1M net.ifnames=0 quiet
#In the event of edid real failures, uncomment this next line:
#cmdline=coherent_pool=1M net.ifnames=0 quiet video=HDMI-A-1:1024x768#60e
#Use an overlayfs on top of a read-only root filesystem:
#cmdline=coherent_pool=1M net.ifnames=0 quiet overlayroot=tmpfs
##enable Generic eMMC Flasher:
##make sure, these tools are installed: dosfstools rsync
#cmdline=init=/opt/scripts/tools/eMMC/init-eMMC-flasher-v3.sh
If you have any suggestions, please share them; I don't understand what's wrong with my configuration.
Thanks
cape-universal can conflict with overlays that you're trying to load, as in this case: the dmesg output shows the CAN pins already claimed by the cape-universal pinmux helpers (ocp:P1_28_pinmux and ocp:P2_09_pinmux).
A possible route is to disable cape-universal. Another option might be to set things up at run time; for a static hardware setup that won't change, though, the former is much preferable.
Please note that the default images for the BBB keep evolving, so this is not a universal answer.
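For example, a possible change to the /boot/uEnv.txt shown above (a sketch, untested on this exact image) is to comment out the cape-universal master switch and reboot, so the CAN overlays can claim their pins themselves:
###Cape Universal Enable
#enable_uboot_cape_universal=1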
Good day,
I am currently working on a project where PCIe SSDs are constantly being swapped out and tested through benchmark programs such as VDBench and Iometer. The problem I face right now, which occurs only on the Linux side (I got it working fine on Windows), is that if the drives were not on at initial boot-up, they never appear under GParted or Disks. Here's what I have done:
Cold boot, with the PCIe add-in-card SSD off. It is then powered on through a pass-through card that is logically controlled to make sure power and shorts are not an issue.
I turn the device on, then run:
sudo sh -c "echo 1 > /sys/bus/pci/rescan"
Running
lspci -tv
shows the device in the tree with no issues. When I check under Disks, however, it is not there.
I have tried a bunch of different commands, none of which seem to help. I tried
partprobe
which did not do anything, and:
sudo sh -c "echo 1 > /sys/bus/pci/devices/0000:82:00.0/remove"
Followed up another rescan:
sudo sh -c "echo 1 > /sys/bus/pci/rescan"
As well as:
sudo sh -c "echo 1 > /sys/bus/pci/devices/0000:82:00.0/enable"
Still nothing. I also ran:
dmesg
which shows, amongst other things:
[ 68.128778] pci 0000:82:00.0: [8086:0953] type 00 class 0x010802
[ 68.128797] pci 0000:82:00.0: reg 0x10: [mem 0x00000000-0x00003fff 64bit]
[ 68.128820] pci 0000:82:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
[ 68.133398] pci 0000:84:00.0: [1c58:0003] type 00 class 0x010802
..............................
[ 68.141751] nvme 0000:82:00.0: enabling device (0100 -> 0102)
..............................
I do see a lot of failures in dmesg for other addresses, such as:
[ 1264.718446] pcieport 0000:03:04.0: BAR 14: no space for [mem size 0x00400000]
[ 1264.718448] pcieport 0000:03:04.0: BAR 14: failed to assign [mem size 0x00400000]
[ 1264.718451] pcieport 0000:03:04.0: BAR 13: no space for [io size 0x1000]
[ 1264.718453] pcieport 0000:03:04.0: BAR 13: failed to assign [io size 0x1000]
I have a feeling those are unrelated to what I am doing, although I'd be happy for someone to prove me wrong.
So, after all of these attempts, does anyone know if there is a way (or if it is even possible) to scan for this PCIe add-in NVMe SSD and use it without rebooting? I have also looked at some of the threads for other HDDs that reference a rescan for SATA-based drives, but this is not SATA, so referencing those won't help either.
Thanks in advance.
I ran into the same issue benchmarking NVMe PCIe passthrough with QEMU / Proxmox.
First take note of the driver in use:
lspci -nnk -s '0000:82:00.0'
It should say
Kernel driver in use: vfio-pci
Now unbind the driver, then reprobe:
echo '0000:82:00.0' > /sys/bus/pci/drivers/vfio-pci/unbind
echo '0000:82:00.0' > /sys/bus/pci/drivers_probe
Check the driver again with:
lspci -nnk -s '0000:82:00.0'
Kernel driver in use: nvme
lsblk should now show the drive. Found the procedure here
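Put together, a sketch of the whole sequence (assuming the 0000:82:00.0 address and vfio-pci binding shown above; tee performs the privileged writes so no root shell is needed):
dev='0000:82:00.0'
# Detach the device from vfio-pci...
echo "$dev" | sudo tee /sys/bus/pci/drivers/vfio-pci/unbind
# ...then ask the kernel to re-run driver matching, which should bind nvme.
echo "$dev" | sudo tee /sys/bus/pci/drivers_probe
lsblk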
I tried doing that to save the time spent rebooting. The PCI device driver at the time was dodgy at best about successfully rescanning and getting all its ducks in a row. The device was an FPGA presenting a proprietary interface for a device driver I was developing. That was with kernel 2.6.30-something, tried around March 2014. My (substandard, but acceptable) solution was to reboot the system.
Attempting to run the following command with root privileges on kernel 2.6.35 results in an error:
% echo 0000:00:03.0 > /sys/bus/pci/drivers/foo/bind
-bash: echo: write error: No such device
UPDATE
The device does exist in /sys/bus/pci/devices/. The output of lspci is as follows:
% lspci -v -s 0000:00:03.0
00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 02)
Subsystem: Intel Corporation PRO/1000 MT Desktop Adapter
Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 10
Memory at f0000000 (32-bit, non-prefetchable) [size=128K]
I/O ports at d010 [size=8]
Capabilities: <access denied>
Kernel driver in use: e1000
I think I resolved the issue. It appears that the device must first be unbound from its current driver.
It also appears that the shell performs the redirection (echo .. > /sys/bus/..) with the user's permissions: sudo elevates only the command itself, i.e. echo, not the redirection that follows it. Therefore it has to be executed this way:
% sudo sh -c "echo 0000:00:03.0 > /sys/bus/pci/drivers/foo/unbind"
% sudo sh -c "echo 0000:00:03.0 > /sys/bus/pci/drivers/foo_new/bind"
I want to be able to do so from both Windows and Linux. I know that there are ways involving sysinfo and rules of thumb related to hardware identifiers.
I want to know if there is a more fundamental method, like looking at a memory address / issuing an interrupt etc.
BTW, I am trying to do this on Intel hardware, and the virtualization software I use is VMware Workstation and Microsoft Hyper-V.
Here is one more useful command:
$ lscpu | grep -E 'Hypervisor vendor|Virtualization type'
Hypervisor vendor: KVM
Virtualization type: full
Example output of other commands:
$ sudo virt-what
kvm
$ dmesg | grep -i virtual
[ 0.000000] Booting paravirtualized kernel on KVM
[ 0.029160] CPU0: Intel QEMU Virtual CPU version 1.0 stepping 03
$ sudo dmidecode | egrep -i 'manufacturer|product|vendor|domU'
Vendor: Bochs
Manufacturer: Bochs
Product Name: Bochs
Manufacturer: Bochs
Manufacturer: Bochs
Manufacturer: Bochs
Manufacturer: Bochs
Manufacturer: Bochs
Manufacturer: Bochs
Manufacturer: Bochs
Manufacturer: Bochs
Manufacturer: Bochs
At least one of these should work to detect if you are running under VMware (or some other common virtual environment) on Linux:
Check for virtual devices detected by kernel when system boots.
dmesg | grep -i virtual
Another way to detect virtualized hardware devices, if dmesg doesn't say anything useful.
dmidecode | egrep -i 'manufacturer|product|vendor|domU'
You can also check for virtual disks:
cat /proc/ide/hd*/model
Virtuozzo can usually be detected by looking for /proc/vz or /dev/vzfs.
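A rough combined check (a sketch: paths and strings vary by distribution and hypervisor, so treat any hit as a hint rather than proof):
# Any of these producing output suggests a virtual machine.
dmesg | grep -i virtual
sudo dmidecode | grep -Ei 'manufacturer|product|vendor|domU'
cat /proc/ide/hd*/model 2>/dev/null
[ -d /proc/vz ] && echo "Virtuozzo detected"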
Most software checks the hypervisor CPUID leaves:
Leaf 0x40000000, Hypervisor CPUID information
EAX: The maximum input value for hypervisor CPUID info (0x40000010).
EBX, ECX, EDX: Hypervisor vendor ID signature. E.g. "KVMKVMKVM"
Leaf 0x40000010, Timing information.
EAX: (Virtual) TSC frequency in kHz.
EBX: (Virtual) Bus (local apic timer) frequency in kHz.
ECX, EDX: RESERVED
Of course, you are still relying on the hypervisor to give you this information. It may very well decide not to report leaf 0x40000000 at all, in turn leading the guest to believe that it's actually running on real hardware.
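On Linux you can inspect this leaf without writing any C, via the kernel's cpuid character device (a sketch; it assumes the cpuid module, GNU dd, and xxd are available: the file offset selects the input EAX leaf, and each 16-byte read returns EAX, EBX, ECX, EDX):
sudo modprobe cpuid
# Read the 16 bytes at offset 0x40000000: EAX, EBX, ECX, EDX for that leaf.
sudo dd if=/dev/cpu/0/cpuid bs=16 count=1 skip=$((0x40000000)) iflag=skip_bytes 2>/dev/null | xxd
# Bytes 4-15 hold the vendor ID signature, e.g. "KVMKVMKVM" under KVM.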