Docker containers eating up all the RAM on Windows 10 - node.js

I have 7 Docker containers running, which in total take around 3 GB of RAM according to docker stats.
I'm running Docker with Hyper-V on Windows; I'm not using WSL / WSL 2 because the problem I'm about to describe becomes even worse.
The problem is that Hyper-V is demanding around 6.5-7 GB of RAM instead of 3 GB.
Why, if my containers barely take 3 GB? As a result of the high RAM usage I'm always hovering around 15 GB of total RAM usage on Windows 10, when I only have 16 GB. If something unusual happens, like rapidly switching branches inside the container, the VM eventually runs out of its assigned 8 GB of memory and starts swapping to my SSD, which makes my PC crash.
With WSL the problem is even worse, since it straight up starts swapping even if I assign it a memory limit of 8 GB in .wslconfig; Hyper-V is at least somewhat usable.
I'm running a Node server, a React app, and a couple of proxies/databases. I'm developing on it with VS Code.
Any idea how to fix this absurd amount of RAM usage?
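For reference, the 8 GB limit I mention for the WSL 2 attempt is set in %UserProfile%\.wslconfig, roughly like this (the values are just what I tried; swap can also be capped or disabled there):

    [wsl2]
    # cap the WSL 2 VM at 8 GB of RAM
    memory=8GB
    # optionally disable the swap file entirely
    swap=0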

Related

Apache Tomcat 9 on Windows 10

VMware ESXi 6.5 and later (VM version 13)
2x CPU (Xeon E5-2620 v3)
16,384 MB memory
Guest OS: Windows 10 Pro 1809 (build 17763.55)
Performance of the VM is very sluggish, even through the VMware console connection. Looking at the Resource Monitor, the tomcat9.exe process is the main hog of CPU time. This process has between 150-180 threads running and average CPU utilisation of around 75% with overall CPU hovering around 90-100%.
I have been reading that Tomcat should be able to run on minimal resources so there must be something else going on here. Unfortunately I know very little about Tomcat so am at a loss of what to look for. I have rebooted the VM and have nothing running on it (apart from the Resource Monitor).
Surely Tomcat should not be monopolising the CPU like this?
It also seems like a Java process is high on the CPU utilisation list. Conversely, we have another instance using Tomcat 8 on Windows 7 which is not taxing the CPU at all.
In this specific case, increasing the amount of memory available to the Java Virtual Machine (JVM) solved the problem.
Refer to this article on how to increase Java memory in Windows.
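In case it saves someone a search: for Tomcat installed as a Windows service, the heap sizes can be raised in the tomcat9w.exe GUI (Java tab, Initial/Maximum memory pool); for a ZIP install, a setenv.bat next to catalina.bat does the same thing. The sizes below are only illustrative:

    rem %CATALINA_BASE%\bin\setenv.bat -- read automatically by catalina.bat on startup
    set "CATALINA_OPTS=-Xms512m -Xmx1024m"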

Docker CPU/Mem allocation in Mac/Win

As far as I understand, at the moment Docker for Mac requires that I decide upfront how much memory and how many CPU cores to statically allocate to the virtualized Linux it runs on.
So that means that even when Docker is idle, my other programs will run on (N-3) CPU cores and (M-3)GB of memory. Right?
This is very suboptimal!
On Linux it's ideal because a container is just another process, so system memory is used and released as containers start and stop.
Is my mental model correct?
Will one day Docker for Mac or Windows dynamically allocate CPU and Memory resources?
The primary issue here is that, for the moment, Docker can only run Linux containers on Linux. That means on OS X or Windows, Docker is running in a Linux VM, and its ability to allocate resources is limited by the facilities provided by the virtualization software in use.
Of course, Docker can run natively on Windows, as long as you want to run Windows containers, and in that situation it more closely matches the Linux "a container is just a process" model.
It is possible that this will change in the future, but that's how things stand right now.
So that means that even when Docker is idle, my other programs will run on (N-3) CPU cores and (M-3)GB of memory. Right?
I suspect that's true for memory. I believe that if the Docker VM is idle it isn't actually using much in the way of CPU resources (that is, you are not dedicating CPUs to the VM; rather, you are setting maximum limits on how many resources the VM can consume).
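One thing that may help the mental model: per-container limits behave the same everywhere, they are just nested inside the VM's fixed allocation on Mac/Windows. For example (image and values arbitrary):

    # cap this one container at 512 MB of RAM and 1.5 CPUs;
    # on Mac/Windows the limit applies inside the Linux VM,
    # whose overall size is what you set in Docker's preferences
    docker run -d --memory=512m --cpus=1.5 nginx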

Is there a maximum number of CPUs that a VirtualBox VM can bear?

I am using VirtualBox 5.1 running on a host with 48 CPUs and 250 GB of RAM.
The virtual machine that I am importing (the guest) initially had 2 CPUs and 4 GB of RAM.
Inside this machine I am running a Java process that starts a dynamic number of threads to perform some tasks.
I ran it with the configurations below:
The whole process on my laptop (2 CPUs / 4 GB RAM): ~11 seconds
Same program in the virtual machine on the server (15 CPUs and 32 GB of RAM): ~45 seconds
Same program in the virtual machine on the server (20 CPUs and 32 GB of RAM): ~100+ seconds
Same program in the virtual machine on the server (10 CPUs and 32 GB of RAM): ~5+ seconds
At first I thought there was a problem with how I was managing the threads from Java, but after many tests I figured out that there was a relationship between the number of CPUs the virtual machine has and its performance: the maximum was 10, and beyond that the overall performance of the machine slows down (CPU starvation?).
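To make the comparison concrete, the kind of test I ran looks roughly like this (a simplified sketch rather than my actual code; the busy-work loop and the default thread count are placeholders):

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of the benchmark: start N worker threads doing a CPU-bound task
    // and measure the wall-clock time until all of them finish.
    public class ThreadScalingTest {
        public static void main(String[] args) throws InterruptedException {
            int workers = Integer.parseInt(args.length > 0 ? args[0] : "60");
            long start = System.nanoTime();

            List<Thread> threads = new ArrayList<>();
            for (int i = 0; i < workers; i++) {
                Thread t = new Thread(() -> {
                    double acc = 0;
                    for (long n = 0; n < 50_000_000L; n++) { // placeholder busy work
                        acc += Math.sqrt(n);
                    }
                    if (acc < 0) System.out.println(acc);    // keep the JIT from dropping the loop
                });
                t.start();
                threads.add(t);
            }
            for (Thread t : threads) {
                t.join();
            }

            System.out.printf("%d threads finished in %.1f s%n",
                    workers, (System.nanoTime() - start) / 1e9);
        }
    }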
The virtual machine runs Oracle Enterprise Linux 6.7 and the host runs Oracle Enterprise Linux 6.9.
I couldn't find any hard limit in the VirtualBox documentation regarding the number of CPUs.
Is there a setting that needs to be set to enable/take advantage of more than 10 CPUs in a VirtualBox instance?
Some time has passed since I posted this question; just for the archive, I will share my findings hoping they can save time for others.
It turns out that the performance issues were due to the way VirtualBox works, especially the relationship between the OS and the hypervisor.
The virtual machine (the guest OS) is, in the end, a single process on the host, and when you modify the number of CPUs in the virtual machine settings, what that changes is the number of threads the process uses to emulate the extra CPUs (at least in VirtualBox).
Having said that, when I assigned 10+ CPUs to the VM I ended up with:
a single process with 10+ threads
an emulated OS running hundreds of processes
my Java code which was creating another bunch of threads
All of that together meant that the setup was saturating the host's virtual machine process, which I think was due to the way the host OS was handling context switching between processes.
On my server, the hard limit was 7 virtual CPUs; if I added more than that, it would slow down the performance of the Java software.
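For completeness, this is the setting I'm talking about (the VM name here is a placeholder; the GUI equivalent is Settings > System > Processor):

    # cap the guest at 7 vCPUs (the VM must be powered off first)
    VBoxManage modifyvm "MyVM" --cpus 7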
Running the Java software outside of the VM didn't show any performance issues; it worked out of the box with 60+ isolated threads.
We have almost the same setup as yours (VirtualBox running on a 48-core machine across 2 NUMA nodes).
I initially set the number of cores to the maximum supported in VirtualBox (e.g., 32), but quickly realized that when the VM was under load, one of the two NUMA nodes was always idling while the other stayed at medium load.
Long story short, a process can only be assigned to a single NUMA node, and VirtualBox runs one user process with several threads, which means that we are limited to using 24 cores (and even fewer in practice, considering that this is a 12-core CPU with hyperthreading).
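If you want to check how your host's cores are split across NUMA nodes before picking a vCPU count, something like this on the Linux host shows it (numactl may need to be installed):

    # show how many NUMA nodes there are and which CPUs belong to each
    numactl --hardware
    # or, more compactly
    lscpu | grep -i numa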

What are some ways to go about running a Vagrant VM in RAM?

I have a development environment running in a Vagrant VM (VirtualBox). Considering I have 11 GB of spare RAM, I thought I could run the VM completely in RAM.
Would anyone know the approach to this, and would I gain much performance from it?
If you have that much memory available, you'll most probably have the image cached in the host OS's file cache, so you don't need to worry about that.
I've tried putting the image files onto a ramdisk on my MacBook and didn't see any improvement over a 5-minute run (most of which was apt-get installing stuff).
Traditionally, VirtualBox has opened disk image files as normal files, which results in them being cached by the host operating system like any other file. The main advantage of this is speed: when the guest OS writes to disk and the host OS cache uses delayed writing, the write operation can be reported as completed to the guest OS quickly while the host OS can perform the operation asynchronously. Also, when you start a VM a second time and have enough memory available for the OS to use for caching, large parts of the virtual disk may be in system memory, and the VM can access the data much faster. (VirtualBox manual, 5.7 Host I/O Caching)
Also, the benefits you'll see greatly depend on the process you run there; if it's dominated by CPU / network, tinkering with the storage won't help you a lot.
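For what it's worth, the ramdisk experiment on the MacBook was done roughly like this (the size and volume name are placeholders; ram:// takes the size in 512-byte sectors, so 8 GB is 16777216):

    # create and mount an 8 GB ramdisk
    diskutil erasevolume HFS+ "vmramdisk" $(hdiutil attach -nomount ram://16777216)
    # then point the VM's disk image (or the whole VMs folder) at /Volumes/vmramdisk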

Android emulator memory usage keeps increasing

I am using Ubuntu 12.10 64-bit and the latest Eclipse.
One minute after I launch the emulator, the memory usage keeps increasing, all the way to 1000 MB. Can anyone tell me why?
I limited the memory of the emulator to 512 MB, but it didn't help.
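For reference, that limit was set roughly like this (the AVD name is a placeholder; the same value can also live in the AVD's config.ini as hw.ramSize):

    # start the AVD with a 512 MB RAM cap from the command line
    emulator -avd MyAVD -memory 512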
Sounds like a memory leak; you should report it to the emulator's bug list. Leaks happen when a resource is not closed or released during a process's runtime.
