I'm trying to get hold of CPU architecture information under Linux.
I understand the information is available via the sysfs filesystem.
I have CentOS 5 running in a Xen VM. The sysfs filesystem is mounted. However, the /sys/devices/system/cpu/cpu0/ directory is almost empty. The only entry is a single file, "online", with a value of "1".
What gives? Where's all my CPU information?
The actual CPU information is still in /proc/cpuinfo.
The sysfs files are used to control things like scheduling and frequency settings, not to get information about the CPUs themselves.
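For example, two quick reads from /proc/cpuinfo (a sketch for x86; other architectures name these fields differently):
# CPU model, one line per logical CPU; sort -u collapses duplicates
grep 'model name' /proc/cpuinfo | sort -u
# count of logical CPUs
grep -c '^processor' /proc/cpuinfo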
Okay, I've just had a chat with a sysadmin at work.
Looking at some machines, it appears this information simply isn't exposed inside VMs. The VM sees a generic virtual CPU rather than the underlying physical CPU model, and the cache information simply isn't published.
It is published (and it's nice to finally see it!) on real machines with reasonably modern kernels.
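For instance, on a physical box with a recent kernel you should see per-level cache directories (a sketch; the index numbering depends on the CPU):
ls /sys/devices/system/cpu/cpu0/cache/
# each indexN directory describes one cache; level, type and size are separate files
cat /sys/devices/system/cpu/cpu0/cache/index0/level
cat /sys/devices/system/cpu/cpu0/cache/index0/type
cat /sys/devices/system/cpu/cpu0/cache/index0/size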
I am trying to create an installer that checks to see if the current hardware meets minimum system requirements.
To do this I need the processor, total physical RAM, and operating system version, and in particular SSD and TPM information.
I have searched the forums, but I haven't found a function that will give me this information, especially the SSD and TPM details.
Does anyone have any idea how I might accomplish this using an NSIS script?
We are writing highly concurrent software in C++ for a few hosts, each equipped with a single ST9500620NS as the system drive and an Intel P3700 NVMe Gen3 PCIe SSD card for data. Trying to understand the system better for tuning our software, I dug around the system (two E5-2620 v2 @ 2.10GHz CPUs, 32GB RAM, running CentOS 7.0) and was surprised to spot the following:
[root@sc2u0n0 ~]# cat /sys/block/nvme0n1/queue/scheduler
none
This contradicts everything I have learned about selecting the correct Linux I/O scheduler, such as the official documentation on kernel.org.
I understand that NVMe is a new kid on the block, so for now I won't touch the existing scheduler setting. But the "none" set by the installer really strikes me as odd. If anyone has hints as to where I can find more info, or can share your findings, I would be grateful. I have spent many hours googling without finding anything concrete so far.
The answer given by Sanne in the comments is correct:
"The reason is that NVMe bypasses the scheduler. You're not using the "noop" implementation: you're not using a scheduler."
noop is not the same as none: noop still performs block merging (unless you disable it with nomerges).
If you use an NVMe device, or if you enable scsi_mod.use_blk_mq=Y at compile time or boot time, then you bypass the traditional request queue and its associated schedulers.
Schedulers for blk-mq might be developed in the future.
"none" (aka "noop") is the correct scheduler to use for this device.
I/O schedulers are primarily useful for slower storage devices with limited queueing (e.g., single mechanical hard drives): the purpose of an I/O scheduler is to reorder I/O requests so that more important ones get serviced earlier. For a device with a very large internal queue and very fast service (like a PCIe SSD!), an I/O scheduler won't do you any good; you're better off just submitting all requests to the device immediately.
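You can see the difference side by side. A sketch using the device names from the question (the bracketed entry is the active scheduler; the outputs in the comments are illustrative):
# traditional single-queue SATA device: several schedulers, one active
cat /sys/block/sda/queue/scheduler
# noop [deadline] cfq
# blk-mq NVMe device: no request-queue scheduler at all
cat /sys/block/nvme0n1/queue/scheduler
# none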
The machine on which I develop has more memory than the one on which the code will eventually run, and I don't have access to the machine on which it will actually run. This is a 64-bit application, and I intend to use the large address space but cap physical allocation. I don't want to limit virtual memory, only physical memory. Is there a way to set limits on a Linux machine so that it mimics a system with low RAM? I think ulimit does not differentiate between reserved address space and actual allocation. If there is a way to do it without rebooting with different kernel parameters or pulling out extra RAM, that would be great; maybe some /proc tricks.
See https://unix.stackexchange.com/questions/44985/limit-memory-usage-for-a-single-linux-process which suggests using "timeout" from here: https://github.com/pshved/timeout .
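The approaches in that linked question boil down to putting the process in a memory cgroup, which caps resident memory without touching address-space limits. A minimal sketch, assuming a cgroup-v1 memory controller mounted at /sys/fs/cgroup/memory and a made-up group name "lowram" (run as root):
# create a group capped at 256MB of physical memory
mkdir /sys/fs/cgroup/memory/lowram
echo $((256*1024*1024)) > /sys/fs/cgroup/memory/lowram/memory.limit_in_bytes
# move the current shell into the group, then start the program from it
echo $$ > /sys/fs/cgroup/memory/lowram/tasks
./your_app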
If you can change the kernel's boot command line and want to restrict available memory, use the
mem=
boot parameter.
For more information check:
https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html
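For example, on a GRUB 2 system (a sketch for CentOS-style layouts; Debian-style systems use update-grub instead), append the parameter to the default command line and regenerate the config:
# /etc/default/grub -- add mem=512M to whatever is already there
GRUB_CMDLINE_LINUX="... mem=512M"
# then rebuild the config and reboot
grub2-mkconfig -o /boot/grub2/grub.cfg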
I have access to a machine to which I can ssh. How can I determine whether my OS is running in a fully-virtualized environment (where the VMM does binary translation), a para-virtualized one, or a non-virtualized one? I have some idea of how to go about it (some operations, like accessing a memory page or disk, will take longer in a virtualized environment) but don't know how to proceed.
It does depend on the VMM you are running on top of. If it's a Xen or Microsoft VM, I believe CPUID with an EAX value of 0x40000000 will give you a non-zero value in EAX. Not sure whether that works on VMware, VirtualBox or KVM, but I expect it will.
Measuring access time is unlikely to always tell you the truth, since those timings can vary quite a lot on a non-VM system as well, and there is no real reason you'd see a huge difference with an efficient implementation. And of course, you don't know whether your VM is running with a real hard-disk controller passed through via PCI, or whether your NFS-mounted disks are connected via a real network card passed through to the VM, or whether they are accessed through a virtual network card.
A good VMM shouldn't show you much difference as long as the application is behaving itself.
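A few userland checks that usually work on Linux guests (the hypervisor bit is set by most, but not all, VMMs, so treat a negative result with suspicion):
# bit 31 of CPUID leaf 1 ECX shows up as the "hypervisor" flag
grep -qw hypervisor /proc/cpuinfo && echo "hypervisor detected"
# tools that query the CPUID 0x40000000 vendor leaf plus other heuristics
systemd-detect-virt
virt-what    # usually needs root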
I have an Ubuntu host with several qemu-kvm guests installed on it, also running Ubuntu.
I'm using libvirt to change the guests' memory allocation, but I always encounter a constant difference between the requested memory allocation and the actual allocation I see in the Total field of the top command inside the guests.
The difference is the same in all the guests, and consistent.
In one machine I installed it is 134MB (allocated is less than requested); in another it is 348MB.
I can live with it, I just don't know the reason. Has anyone else encountered this kind of problem? Maybe solved it?
Thanks
This constant difference is likely the space reserved by the kernel. Note that this amount will increase (at least on Linux) as more physical memory is available in the system; the variation you're seeing is probably due to KVM giving each particular guest more or less memory to work with.
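You can see the kernel's cut inside a guest; a sketch (the dmesg line shows available vs. total memory, the difference being what the kernel reserved at boot):
# MemTotal is what's left after the kernel reserves its own memory
grep MemTotal /proc/meminfo
# the boot log spells out available vs. reserved
dmesg | grep -i 'Memory:'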
If you're interested, here is a quick article on memory ballooning, as implemented by VMware ESX Server.