I want to be able to detect whether I am running inside a virtual machine, from both Windows and from Linux. I know that there are ways of getting system information and applying rules of thumb based on hardware identifiers.
I want to know if there is a more fundamental method, like reading a specific memory address or issuing an interrupt, etc.
BTW, I am trying to do this on Intel hardware, and the virtualization software I use is VMware Workstation and Microsoft Hyper-V.
Here is one more useful command:
$ lscpu | grep -E 'Hypervisor vendor|Virtualization type'
Hypervisor vendor: KVM
Virtualization type: full
Example output of other commands:
$ sudo virt-what
kvm
$ dmesg | grep -i virtual
[ 0.000000] Booting paravirtualized kernel on KVM
[ 0.029160] CPU0: Intel QEMU Virtual CPU version 1.0 stepping 03
$ sudo dmidecode | egrep -i 'manufacturer|product|vendor|domU'
Vendor: Bochs
Manufacturer: Bochs
Product Name: Bochs
Manufacturer: Bochs
Manufacturer: Bochs
Manufacturer: Bochs
Manufacturer: Bochs
Manufacturer: Bochs
Manufacturer: Bochs
Manufacturer: Bochs
Manufacturer: Bochs
Manufacturer: Bochs
At least one of these should work to detect if you are running under VMware (or some other common virtual environment) on Linux:
Check for virtual devices detected by the kernel when the system boots:
dmesg | grep -i virtual
Another way to detect virtualized hardware devices, if dmesg doesn't say anything useful:
dmidecode | egrep -i 'manufacturer|product|vendor|domU'
You can also check for virtual disks:
cat /proc/ide/hd*/model
Virtuozzo can usually be detected by looking for /proc/vz or /dev/vzfs.
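For convenience, here is a rough sketch that scripts that last check (only the two paths mentioned above are tested, nothing else):
if [ -d /proc/vz ] || [ -e /dev/vzfs ]; then
    # /proc/vz or /dev/vzfs present: likely a Virtuozzo/OpenVZ environment
    echo "Virtuozzo/OpenVZ detected"
fi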
Most software checks the hypervisor CPUID leaves:
Leaf 0x40000000, Hypervisor CPUID information
EAX: The maximum input value for hypervisor CPUID info (0x40000010).
EBX, ECX, EDX: Hypervisor vendor ID signature. E.g. "KVMKVMKVM"
Leaf 0x40000010, Timing information.
EAX: (Virtual) TSC frequency in kHz.
EBX: (Virtual) Bus (local apic timer) frequency in kHz.
ECX, EDX: RESERVED
Of course, you are still relying on the hypervisor to give you this information. It may very well decide not to report leaf 0x40000000 at all, in turn leading the guest to believe that it is actually running on real hardware.
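If you want to poke at that leaf from a Linux shell without writing any C, one rough sketch uses the cpuid character device (see man 4 cpuid); it assumes root access, the cpuid kernel module, and that the device supports seeking:
sudo modprobe cpuid
# one 16-byte record (EAX, EBX, ECX, EDX) per leaf; the lower 32 bits of the
# file offset select the leaf, so seek to byte offset 0x40000000
sudo dd if=/dev/cpu/0/cpuid bs=16 skip=$(( 0x40000000 / 16 )) count=1 2>/dev/null | tail -c 12
# the last 12 bytes (EBX, ECX, EDX) spell out the vendor signature,
# e.g. "KVMKVMKVM" or "VMwareVMware"; garbage usually means bare metal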
To check if Intel's VT-X or AMD's AMD-V is enabled in the BIOS/UEFI, I use:
if systool -m kvm_amd -v &> /dev/null || systool -m kvm_intel -v &> /dev/null ; then
    echo "AMD-V / VT-X is enabled in the BIOS/UEFI."
else
    echo "AMD-V / VT-X is not enabled in the BIOS/UEFI."
fi
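If systool (from sysfsutils) isn't installed, a rough equivalent is to check whether the kvm_intel / kvm_amd module is actually loaded, since it typically fails to load when the feature is disabled in firmware (note the assumption: the module may also simply not have been loaded yet):
if lsmod | grep -qE '^kvm_(intel|amd)[[:space:]]'; then
    echo "AMD-V / VT-X is enabled in the BIOS/UEFI."
else
    echo "AMD-V / VT-X is not enabled (or the kvm module just isn't loaded)."
fi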
I couldn't find a way to check if Intel's VT-D or AMD's IOMMU are enabled in the BIOS/UEFI.
I need a way to detect if it is enabled or not without having the iommu kernel parameters set (iommu=1, amd_iommu=on, intel_iommu=on).
One idea I had was to use rdmsr, but I'm not sure if that would work. Instead of systool I initially wanted to use sudo rdmsr 0x3A, but it didn't work for me. It always reports:
rdmsr: CPU 0 cannot read MSR 0x0000003a
rdmsr is part of msr-tools, by the way, and it requires the msr kernel module to be loaded first (sudo modprobe msr).
Allegedly sudo rdmsr 0x3A should have returned 3 or 5 to indicate that VT-X/AMD-V is enabled...
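For reference, on machines where that MSR read does work, such a check might look like this (a sketch only: MSR 0x3A is Intel's IA32_FEATURE_CONTROL, where bit 0 is the lock bit and bit 2 enables VMX outside SMX, hence the values 3 or 5 mentioned above):
sudo modprobe msr
val=$(sudo rdmsr 0x3A)   # prints the register value in hex, without a 0x prefix
case "$val" in
    3|5|7) echo "VT-X appears to be enabled and locked by the firmware" ;;
    *)     echo "VT-X not enabled (IA32_FEATURE_CONTROL = $val)" ;;
esac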
If VT-d is enabled, Linux will configure DMA Remapping at boot time. The easiest way to find this is to look in dmesg for DMAR entries. If you don't see errors, then VT-d is enabled.
For example:
[root@localhost ~]# dmesg | grep DMAR
[ 0.000000] ACPI: DMAR 0x00000000BBECB000 0000A8 (v01 LENOVO TP-R0D 00000930 PTEC 00000002)
[ 0.001000] DMAR: Host address width 39
[ 0.001000] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[ 0.001000] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
[ 0.001000] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[ 0.001000] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[ 0.001000] DMAR: RMRR base: 0x000000bbdd8000 end: 0x000000bbdf7fff
[ 0.001000] DMAR: RMRR base: 0x000000bd000000 end: 0x000000bf7fffff
[ 0.001000] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
[ 0.001000] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[ 0.001000] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[ 0.002000] DMAR-IR: Enabled IRQ remapping in x2apic mode
Another example with x2apic opt out:
[root@localhost ~]# dmesg | grep DMAR
[ 0.000000] ACPI: DMAR 0000000079a20300 000C4 (v01 SUPERM SMCI--MB 00000001 INTL 20091013)
[ 0.106389] DMAR: Host address width 46
[ 0.106392] DMAR: DRHD base: 0x000000fbffc000 flags: 0x1
[ 0.106400] DMAR: dmar0: reg_base_addr fbffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de
[ 0.106402] DMAR: RMRR base: 0x0000007bb24000 end: 0x0000007bb32fff
[ 0.106404] DMAR: ATSR flags: 0x0
[ 0.106407] DMAR: RHSA base: 0x000000fbffc000 proximity domain: 0x0
[ 0.106409] DMAR-IR: IOAPIC id 8 under DRHD base 0xfbffc000 IOMMU 0
[ 0.106411] DMAR-IR: HPET id 0 under DRHD base 0xfbffc000
[ 0.106413] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit.
[ 0.106414] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting.
[ 0.106591] DMAR-IR: Enabled IRQ remapping in xapic mode
Either way, you're looking for that last line, DMAR-IR: Enabled IRQ remapping in <whichever> mode.
On a system with VT-d disabled, you will either see an error message, or nothing at all.
[root@localhost ~]# dmesg | grep DMAR
[root@localhost ~]#
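To turn that into a quick check, a rough script built on the messages above (it just greps for the line quoted earlier, so it will misreport if the boot messages have already rotated out of the dmesg buffer) is:
if dmesg | grep -q 'DMAR-IR: Enabled IRQ remapping'; then
    echo "VT-d is enabled (IRQ remapping active)"
else
    echo "No DMAR IRQ remapping messages found; VT-d is likely disabled"
fi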
I just found another way that seems to work even if the iommu kernel parameters have not been set:
if compgen -G "/sys/kernel/iommu_groups/*/devices/*" > /dev/null; then
    echo "AMD's IOMMU / Intel's VT-D is enabled in the BIOS/UEFI."
else
    echo "AMD's IOMMU / Intel's VT-D is not enabled in the BIOS/UEFI."
fi
Building on Jo-Erlend Schinstad's answer:
Install cpu-checker
$ sudo apt-get update
$ sudo apt-get install cpu-checker
Then check:
$ kvm-ok
If virtualization is supported and enabled, you should see something like:
INFO: /dev/kvm exists
KVM acceleration can be used
Otherwise, you might see something like:
INFO: /dev/kvm does not exist
HINT: sudo modprobe kvm_intel
INFO: Your CPU supports KVM extensions
INFO: KVM (vmx) is disabled by your BIOS
HINT: Enter your BIOS setup and enable Virtualization Technology (VT),
and then hard poweroff/poweron your system
KVM acceleration can NOT be used
Good day,
I am currently working on a project where PCIe SSDs are constantly being swapped out and tested through benchmark programs such as VDBench and Iometer. The problem I face right now, which is only on the Linux side (I got it working fine on Windows), is that if the drives were not on at initial boot-up, they never appear under GParted or Disks. Here's what I have done:
Cold boot, with the PCIe add-in-card SSD powered off. It is then powered on through a pass-through card that is logically controlled to make sure power and shorts are not an issue.
I turn the device on, then run:
sudo sh -c "echo 1 > /sys/bus/pci/rescan"
After performing a
lspci -tv
the device shows up in the tree with no issues. When I check under Disks, however, it is not there.
I have tried a bunch of different commands, with none of them seeming to help me. I have tried
partprobe
which did not do anything, and:
sudo sh -c "echo 1 > /sys/bus/pci/devices/0000:82:00.0/remove"
Followed by another rescan:
sudo sh -c "echo 1 > /sys/bus/pci/rescan"
As well as:
sudo sh -c "echo 1 > /sys/bus/pci/devices/0000:82:00.0/enable"
Still nothing. Also ran:
dmesg
Which shows, amongst other things:
[ 68.128778] pci 0000:82:00.0: [8086:0953] type 00 class 0x010802
[ 68.128797] pci 0000:82:00.0: reg 0x10: [mem 0x00000000-0x00003fff 64bit]
[ 68.128820] pci 0000:82:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
[ 68.133398] pci 0000:84:00.0: [1c58:0003] type 00 class 0x010802
..............................
[ 68.141751] nvme 0000:82:00.0: enabling device (0100 -> 0102)
..............................
I do see a lot of fails in dmesg for other addresses, such as:
[ 1264.718446] pcieport 0000:03:04.0: BAR 14: no space for [mem size 0x00400000]
[ 1264.718448] pcieport 0000:03:04.0: BAR 14: failed to assign [mem size 0x00400000]
[ 1264.718451] pcieport 0000:03:04.0: BAR 13: no space for [io size 0x1000]
[ 1264.718453] pcieport 0000:03:04.0: BAR 13: failed to assign [io size 0x1000]
I have a feeling that those are unrelated to what I am doing, although I'd be happy for someone to prove me wrong.
So, after all of these attempts, does anyone know if there is a way (or if it is even possible) to scan for this PCIe add-in NVMe SSD and be able to use it without rebooting? I also took a look at some of the threads for other HDDs that reference a rescan for SATA-based drives, but this is NOT that, so referencing those won't help either.
Thanks in advance.
I ran into the same issue benchmarking NVMe PCIe passthrough with QEMU / Proxmox.
First take note of the driver in use:
lspci -nnk -s '0000:82:00.0'
It should say
Kernel driver in use: vfio-pci
Now unbind the driver, then reprobe:
echo '0000:82:00.0' > /sys/bus/pci/drivers/vfio-pci/unbind
echo '0000:82:00.0' > /sys/bus/pci/drivers_probe
Check the driver again with:
lspci -nnk -s '0000:82:00.0'
Kernel driver in use: nvme
lsblk should now show the drive. Found the procedure here
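Note that the bare echo redirections above only work from a root shell; run as a regular user, an equivalent (same device address as in the example) is:
echo '0000:82:00.0' | sudo tee /sys/bus/pci/drivers/vfio-pci/unbind
echo '0000:82:00.0' | sudo tee /sys/bus/pci/drivers_probe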
I tried doing that to save the time spent rebooting. The PCI device driver at the time was dodgy at best about successfully rescanning and getting all its ducks in a row. The device was an FPGA presenting a proprietary interface to a device driver I was developing. That was with kernel 2.6.30-something, tried around March 2014. My (substandard, but acceptable) solution was to reboot the system.
Attempting to run the following command with root privileges on kernel 2.6.35 results in an error:
% echo 0000:00:03.0 > /sys/bus/pci/drivers/foo/bind
-bash: echo: write error: No such device
UPDATE
The device does exist in /sys/bus/pci/devices/; the output of lspci is as follows:
% lspci -v -s 0000:00:03.0
00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 02)
Subsystem: Intel Corporation PRO/1000 MT Desktop Adapter
Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 10
Memory at f0000000 (32-bit, non-prefetchable) [size=128K]
I/O ports at d010 [size=8]
Capabilities: <access denied>
Kernel driver in use: e1000
I think I resolved the issue. It appears that the device first has to be unbound from its current driver.
It also appears that the shell performs the redirection (echo .. > /sys/bus/..) with the user's permissions, and sudo elevates only the command itself, i.e. echo, not the redirection that follows it, so it has to be executed this way:
% sudo sh -c "echo 0000:00:03.0 > /sys/bus/pci/drivers/foo/unbind"
% sudo sh -c "echo 0000:00:03.0 > /sys/bus/pci/drivers/foo_new/bind"
What command should I use to find the processor / chip architecture on Linux?
linux-x86-32
linux-x86-64
linux-ppc-64
To display kernel architecture: uname -p
To display extended CPU details: cat /proc/cpuinfo
In the terminal, type
lscpu
which returns output like this:
Architecture: i686
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 23
Stepping: 6
CPU MHz: 2670.000
BogoMIPS: 5320.13
L1d cache: 32K
L1i cache: 32K
L2 cache: 3072K
To only get the architecture:
lscpu | grep Architecture
Which in this case is
Architecture: i686
See man lscpu for more.
I'm surprised no one suggested uname -m. On my laptop, this gives armv7l, while uname -a gives me a monstrous two lines of text.
See man uname:
echo `uname -s`-`uname -p`
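If you specifically need one of the identifiers listed in the question (linux-x86-32, linux-x86-64, linux-ppc-64), a small mapping over uname -m could look like this; the patterns are my assumption, so adjust them to whatever naming your toolchain expects:
case "$(uname -m)" in
    i?86)           echo linux-x86-32 ;;
    x86_64)         echo linux-x86-64 ;;
    ppc64|ppc64le)  echo linux-ppc-64 ;;
    *)              echo "linux-$(uname -m)" ;;
esac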
A concise command producing information about the current machine is hostnamectl. Example output:
Static hostname: xxxx
Icon name: computer-laptop
Chassis: laptop
Boot ID: b3a1f952c514411c8c4xxxxxxxxxxxx
Operating System: Ubuntu 14.04.3 LTS
Kernel: Linux 3.19.0-43-generic
Architecture: x86_64
It gives you the most basic information about your machine. Other commands like uname, lsb_release, or lscpu return more specific information.
How do I get the details of a RAID configuration in Linux?
mdadm -D /dev/mdXX will give you the details of the RAID configuration.
cat /proc/mdstat will give details about the RAID algorithm, level, chunk size, etc.
This is true if the RAID is software-based.
In the case of hardware RAID, you could type this command:
lspci -vv | grep -i raid
01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2208 [Thunderbolt] (rev 01)
Kernel driver in use: megaraid_sas
Kernel modules: megaraid_sas
If you're talking about a running array:
cat /proc/mdstat
If you're talking about the mdadm config file, it's usually in /etc or /etc/mdadm depending on the distribution you're running on. The following command should find it in any event:
find /etc -name '*mdadm*'
ETA: Also, I would strongly recommend that you carefully study the mdadm man page so that you are very familiar with that utility. Knowing that utility well will save your bacon at some point.
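As an aside, a common way to (re)generate the ARRAY lines that belong in that config file is:
sudo mdadm --detail --scan
which prints one line per running array in mdadm.conf format.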
mdadm --detail /dev/md0
(or whatever /dev/mdXXX you are using)