KVM balloon driver results in different total memory than requested - Linux

I have an Ubuntu host with several qemu-kvm guests installed on it, also running Ubuntu.
I'm using libvirt to change the guests' memory allocation, but I always encounter a constant difference between the requested memory allocation and the actual allocation I see in the Total field of the top command inside the guests.
The difference is the same for all the guests on a given host, and it is consistent.
On one machine I installed it is 134MB (allocated is less than requested); on another it is 348MB.
I can live with it; I just don't know the reason. Has anyone encountered this kind of problem, and maybe solved it?
Thanks

This constant difference is most likely the space the kernel reserves for its own code and data structures. Note that this reserved amount increases (at least on Linux) as more physical memory is available in the system, which is why the gap differs between your two machines: KVM is giving each guest a different amount of memory to work with.
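To verify this, you can compare what the guest kernel manages against what it reserved at boot (a quick check from inside a guest, assuming the boot log is still in the dmesg buffer):
grep MemTotal /proc/meminfo
dmesg | grep -i "Memory:"
The "Memory: ... available" boot line shows how much RAM the kernel set aside for itself, which should roughly account for the gap between the requested and the reported total.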
If you're interested, here is a quick article on memory ballooning, as implemented by VMware ESX Server.

Related

Debugging Memory Leak

I'm trying to figure out where my memory leak is coming from, since lately I'm experiencing a big performance drop just from opening a new tab in my browser (Firefox 51).
Just to be sure, I've disabled all non-Microsoft startup services in msconfig; even after a reboot it still gets stuck like this.
According to the vendor's updates this machine would be up to date on drivers, though I do occasionally install Intel chipset and onboard graphics drivers myself (stable versions only) that are a few years newer than the vendor's.
MS Resource Monitor
MS Task Manager Performance Monitor
In the Task Manager Performance monitor you can see I'm barely using any CPU or I/O, ruling out any form of I/O-wait issues due to swapping.
Looking at the Resource Monitor, physical RAM in use is about 6.3GB while Cached is only 1.6GB, leaving roughly 4GB of RAM whose usage is unaccounted for.
So I did an offline MemTest (oh yes, the gorgeous old blue BIOS screen) and all checks passed; luckily it's only 8GB of RAM, so the downtime is manageable ;)
Any ideas or other handy tools I can use to find the culprit?
Already fixed it; it seems my pagefile was storing too much cached memory for some reason. I'll look into why it stores so much myself.

Memory increase in CentOS and VirtualBox

I am using CentOS in VirtualBox.
My test web server frequently goes down, so I think the reason is low memory and I want to upgrade it.
The original memory was 1024MB, and in the VM's system configuration I upgraded it to 2048MB.
What commands do I need to run for CentOS to pick up the change?
I think only upgrading the memory in VirtualBox is useless; I assume I must run some command in CentOS or change some file, but I don't know how.
I think that should work. What makes you think it's not?
If you run
cat /proc/meminfo
before and after, does it reflect the value you set?
You could also add a swap file if you haven't already.
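For reference, here is a minimal sketch of adding a 1GB swap file (run as root; the path and size are only examples):
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
free -m
The last command should show the new swap space; add an entry to /etc/fstab if you want it to persist across reboots.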

Restricting available memory for testing on Linux

The machine on which I develop has more memory than the one on which the code will eventually run, and I don't have access to the machine it will actually run on. This is a 64-bit application: I intend to use the full address space but cap physical allocation. I don't want to limit virtual memory, only physical memory. Is there a way to set limits on a Linux machine such that it mimics a system with low RAM? I think ulimit does not differentiate between reserved address space and actual allocation. If there is a way to do it without rebooting with different kernel parameters or pulling out extra RAM, that would be great. Maybe some /proc tricks.
See https://unix.stackexchange.com/questions/44985/limit-memory-usage-for-a-single-linux-process which suggests using "timeout" from here: https://github.com/pshved/timeout .
If you can change the kernel's boot command line and want to restrict available memory, use the
mem=
boot parameter.
For more information check:
https://www.kernel.org/doc/html/latest/admin-guide/kernel-parameters.html
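For example, on a GRUB-based system you could cap the kernel at 512MB like this (a sketch; the file location and the regeneration command vary by distribution):
# in /etc/default/grub, append mem=512M to the kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet mem=512M"
# then regenerate the config and reboot:
sudo update-grub
Note this does require a reboot, unlike the "timeout" approach mentioned above.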

sysfs CPU information missing

I'm trying to get hold of CPU architecture information under Linux.
I understand the information is available via the sysfs filesystem.
I have CentOS 5 running in a Xen VM. The sysfs filesystem is mounted. However, the /sys/devices/system/cpu/cpu0/ directory is almost empty. The only entry is a single file, "online", with a value of "1".
What gives? Where's all my CPU information?
The actual CPU information is still in /proc/cpuinfo.
The sysfs files are used to control things like scheduling and frequency settings, not to get information on the CPUs themselves.
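For example, to read the model string and the logical core count from the standard /proc/cpuinfo fields:
grep "model name" /proc/cpuinfo
grep -c ^processor /proc/cpuinfo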
Okay, I've just had a chat with a sysadmin at work.
Looking at some machines, it appears this information is simply not exposed to VMs. The VMs think they have a generic virtual CPU, rather than a CPU of the type of the real underlying hardware, and the cache information is simply not published.
It is published (and it's nice to finally see it!) on real machines with reasonably modern kernels.

Linux version of Windows "nonpaged pool" - does such a thing exist?

I have been working with a Windows application which reads from the 'nonpaged pool' to increase performance. In this case the nonpaged pool is the area of memory where the network drivers write data as they grab it off the wire.
How does Linux handle memory for network drivers (or other drivers) that require high-speed exclusive access to RAM, and does the question 'how do I read directly from the nonpaged pool?' even make sense when applied to Linux?
Many thanks
Some networks like InfiniBand support RDMA, which requires being able to prevent paging for some of the pages in a process. See the mlock(), mlockall(), munlock(), and munlockall() functions.
Other than that, I don't think there is a concept of a "nonpaged pool" per se. Generally, kernel memory is AFAIK not pageable, but all user memory is, except what is locked with mlock() or the like.
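If you experiment with mlock(), note that non-root processes can only lock a limited amount of memory. A quick sketch for checking and raising that limit (the username is just a placeholder):
ulimit -l
# to raise it persistently, add lines like these to /etc/security/limits.conf:
# someuser soft memlock 65536
# someuser hard memlock 65536
The values are in kilobytes; RDMA-capable software often documents exactly this kind of change.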
