Simple question: Is it dangerous to activate KSM on a running hypervisor (Debian 8 with a 3.16 kernel)?
Or is it recommended to shut down all virtual machines (KVM/qemu) first, then activate KSM and then start the virtual machines again?
We expect a memory saving of approx. 50% (we have a similar system where KSM is already active, and there we effectively save almost 50% because the VM appliances are always very similar).
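For reference, KSM is controlled through sysfs; a minimal sketch of activating it and watching the merge counters (standard kernel interface; QEMU normally marks guest RAM as mergeable, so already-running guests should be picked up without a restart, though that is worth verifying on your build):

    # As root: start the ksmd scanner (0 = stop, 1 = run, 2 = stop and unmerge)
    echo 1 > /sys/kernel/mm/ksm/run

    # Watch the savings: pages_sharing pages are backed by pages_shared pages
    grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing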
I have a Linux (Red Hat 7.6) VM and I need to give it more RAM.
Current size: Standard A1_v2 (2 GB RAM)
New size: Standard A4_v2 (8 GB RAM)
If I do the resize through the Azure portal, are there any considerations? Is there any Linux configuration that I will lose?
Your VM will be rebooted to perform the resize. Nothing at the OS level changes (well, unless you have some state in memory, which would not be preserved across a reboot). Basically, if your VM (and/or the applications inside it) can handle a reboot, nothing will break.
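If you prefer to script the resize instead of using the portal, a minimal sketch with the Azure CLI (resource group and VM names are placeholders):

    # Resizing triggers the reboot mentioned above
    az vm resize --resource-group my-rg --name my-vm --size Standard_A4_v2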
I am using VirtualBox 5.1 on a host with 48 CPUs and 250 GB of RAM.
The virtual machine that I am importing (the guest) initially had 2 CPUs and 4 GB of RAM.
Inside this machine I am running a Java process that starts a dynamic number of threads to perform some tasks.
I ran it with the configurations below:

- My laptop (2 CPUs / 4 GB RAM): ~11 seconds
- Same program in the VM on the server (15 CPUs / 32 GB RAM): ~45 seconds
- Same program in the VM on the server (20 CPUs / 32 GB RAM): ~100+ seconds
- Same program in the VM on the server (10 CPUs / 32 GB RAM): ~5+ seconds
At first I thought there was a problem with how I was managing the threads from Java, but after many tests I figured out that there was a relation between the number of CPUs the virtual machine has and its performance. The maximum was 10; above that, the overall performance of the machine slowed down (CPU starvation?).
The virtual machine runs Oracle Enterprise Linux 6.7 and the host runs Oracle Enterprise Linux 6.9.
I couldn't find any hard limit on the number of CPUs in the VirtualBox documentation.
Is there a setting that needs to be set to enable/take advantage of more than 10 CPUs in a VirtualBox instance?
Some time has passed since I posted this question; just for the archive, I will share my findings, hoping they can save time for others.
It turns out that the performance issues were due to the way VirtualBox works, especially the relationship between the guest OS and the hypervisor.
In the end, the virtual machine (the guest OS) is a single process on the host, and when you change the number of CPUs in the virtual machine's settings, VirtualBox simply changes the number of threads that this process uses to emulate those CPUs.
Having said that, when I assigned 10+ CPUs to the VM I ended up with:
- a single process with 10+ threads
- an emulated OS running hundreds of processes
- my Java code, which was creating another bunch of threads
All of that together saturated the host's virtual machine process, which I think was due to the way the host OS handled context switching between all those processes.
On my server the hard limit was 7 virtual CPUs; adding more than that slowed down the Java software.
Running the Java software outside of the VM showed no performance issues; it worked out of the box with 60+ isolated threads.
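If you want to verify this on your own host, you can count the native threads backing the VM process; a small sketch (the process name is an assumption here: it may be VirtualBoxVM or VBoxHeadless depending on how the VM was started):

    # Thread count (NLWP) of the first matching VirtualBox VM process
    ps -o nlwp= -p "$(pgrep -f VBoxHeadless | head -n1)"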
We have almost the same setup as yours (VirtualBox running on a 48-core machine across 2 NUMA nodes).
I initially set the number of cores to the maximum supported by VirtualBox (e.g. 32), but quickly realized that whenever the VM was under load, one of the two NUMA nodes was always idling while the other stayed at medium load.
Long story short, a process can only be assigned to a single NUMA node, and VirtualBox runs one user process with several threads... which means we are limited to using 24 cores (and even fewer in practice, considering that each node is a 12-core CPU with hyperthreading).
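You can see the placement on the host with the numactl tools; a sketch, with the node numbers, process name and VM name all being placeholders/assumptions:

    # Show the host's NUMA topology
    numactl --hardware

    # Per-NUMA-node memory usage of the VM process (VBoxHeadless is an assumption)
    numastat -p "$(pgrep -f VBoxHeadless | head -n1)"

    # Optionally start the VM pinned to one node (CPU and memory)
    numactl --cpunodebind=0 --membind=0 VBoxHeadless --startvm "myvm" &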
Google Compute Engine rents Linux VMs of all sizes, from 1 core to 64 cores, at various prices. There are "preemptible" instances at about 1/4 the price of guaranteed instances, but preemptible instances can be terminated at any time (with an ACPI G2 Soft Off warning and ~30 seconds until hard cutoff). Although you can provide startup and shutdown scripts, the usual approach leads to unnecessary overhead: you end up writing additional software to allow calculations to be interrupted and to manage partial results. The suspend-to-disk/restore-from-disk scheme seen on laptops and desktops would be a much simpler way to store and resume calculations, and therefore preferable.
If I start a preemptible Linux VM on GCE, is it generally possible to suspend the state of the VM to disk (aka hibernate) and later start a new preemptible VM from that disk? My idea is:
1. Start a new preemptible Linux VM.
2. When the OS receives the preemption notice (the ACPI G2 Soft Off signal), trigger suspend-to-disk, i.e. hibernate the Linux OS.
3. Start a new preemptible Linux VM from the suspended image, i.e. restore the former VM and continue the computation.
How would I configure Linux to suspend/restore in this way?
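A minimal sketch of the shutdown-script half, under two assumptions that stock GCE images do not satisfy out of the box: the guest has a swap area at least as large as its RAM, and resume= points at it on the kernel command line:

    #!/bin/bash
    # hibernate.sh -- hypothetical shutdown script; GCE runs it when the
    # preemption notice arrives, ~30 seconds before the hard cutoff.
    systemctl hibernate

You would attach it at creation time, e.g. gcloud compute instances create my-vm --preemptible --metadata-from-file shutdown-script=hibernate.sh (names are placeholders). Whether the hibernate image is fully written within the ~30-second window is something you would have to test for your memory size.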
While creating a KVM virtual machine in Proxmox from the GUI, there are a couple of confusing options on the hard disk and CPU tabs.
For example, on the hard disk tab, what do "No backup", "Discard" and "Iothread" signify?
And similarly, on the CPU tab, what do "Sockets", "Cores" and "Enable numa" mean?
I did not have any luck with Google, and the results I got were conflicting.
No backup instructs Proxmox not to perform any backups of that VM.
Discard allows the guest to use fstrim or the discard mount option to release unused space back to the underlying storage. This is only usable with the virtio_scsi driver.
Iothread sets the AIO mode to threads (instead of native). For more information, check this presentation.
Sockets is the number of CPUs that the guest will see as installed/available.
Cores is the number of cores that the guest will be able to use for each CPU.
If your server has 2 sockets, each with a 6-core CPU, you could put 2 in the Sockets field and 6 in the Cores field, and the guest will be able to use 100% of both CPUs. If you instead put 1 in Sockets and 3 in Cores, the guest will be able to use only 50% of one CPU, i.e. only 25% of the CPU power available on the server.
Enable NUMA will allow the guest to make use of the NUMA architecture on multi-socket servers. Check www.admin-magazine.com/Archive/2014/20/Best-practices-for-KVM-on-NUMA-servers
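The same values can also be set from the command line; a minimal sketch, assuming VM ID 100 (a placeholder):

    # 2 sockets x 6 cores = 12 vCPUs, with NUMA exposed to the guest
    qm set 100 --sockets 2 --cores 6 --numa 1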
I have a Linux guest VM with multiple vCPUs, spawned using libvirt/qemu-kvm.
Sometimes, because of host kernel issues, I suspect that not all of the cores in the VM are being utilized correctly. I am looking for a programmatic way to check whether the guest VM is getting its allocated number of vCPUs and whether the guest kernel scheduler is able to use both cores.
Host kernel version: Ubuntu 12.04.4 (3.11.0-20)
Guest kernel version: 2.6.27+
You can grab the system load with cat /proc/loadavg. If you want the CPU %, see http://www.cyberciti.biz/tips/how-do-i-find-out-linux-cpu-utilization.html
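For a more direct check, a sketch (the domain name is a placeholder; mpstat comes from the sysstat package):

    # On the host: vCPU count, pinning and per-vCPU state for the domain
    virsh vcpuinfo guest-domain

    # Inside the guest: does the kernel see both cores...
    nproc

    # ...and do both actually receive load? Watch the per-CPU %idle columns.
    mpstat -P ALL 1 5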