Apache Tomcat 9 on Windows 10

VMware ESXi 6.5 and later (VM version 13)
2x CPU (Xeon E5-2620 v3)
16,384 MB memory
Guest OS: Windows 10 Pro 1809 (build 17763.55)
Performance of the VM is very sluggish, even through the VMware console connection. Looking at the Resource Monitor, the tomcat9.exe process is the main hog of CPU time. This process has between 150 and 180 threads running and an average CPU utilisation of around 75%, with overall CPU hovering around 90-100%.
I have been reading that Tomcat should be able to run on minimal resources, so there must be something else going on here. Unfortunately I know very little about Tomcat, so I am at a loss as to what to look for. I have rebooted the VM and have nothing running on it (apart from the Resource Monitor).
Surely Tomcat should not be monopolising the CPU like this?
It also seems like a Java process is high on the CPU utilisation list. By contrast, we have another instance running Tomcat 8 on Windows 7 that is not taxing the CPU at all.

In this specific case, increasing the amount of memory available to the Java Virtual Machine (JVM) solved the problem.
Refer to the article "How to Increase Java Memory in Windows" for the details.
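For a Tomcat 9 Windows installation this is normally done either in the Tomcat9w.exe service configuration tool (Java tab, initial/maximum memory pool) or, for a ZIP install, via a setenv.bat next to catalina.bat. A minimal sketch, with illustrative values rather than the ones used here:

rem %CATALINA_HOME%\bin\setenv.bat (picked up automatically by catalina.bat)
set "CATALINA_OPTS=-Xms1024m -Xmx4096m"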

Related

Docker containers eating up all the RAM on Windows 10

I have 7 Docker containers running, which in total take around 3 GB of RAM according to docker stats.
I'm running Docker with Hyper-V on Windows; I'm not using WSL / WSL 2 because the problem I'm about to describe becomes even worse there.
The problem is that Hyper-V is demanding around 6.5-7 GB of RAM instead of 3 GB.
Why, if my containers barely take 3 GB? As a result of the high RAM usage I'm always hovering around 15 GB of total RAM usage on Windows 10, when I only have 16 GB of RAM. If something unusual happens at a fast pace, like changing branches in the container, the container eventually runs out of its assigned 8 GB of memory and starts swapping to my SSD, which makes my PC crash.
With WSL the problem is even worse, since it straight up starts swapping even if I assign it a memory limit of 8 GB in .wslconfig; Hyper-V is at least somewhat usable.
I'm running a Node server, a React app, and a couple of proxies/databases. I'm developing on it with VS Code.
Any idea how to fix this absurd amount of RAM usage?
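For reference, the .wslconfig limit mentioned in the question is a per-user file on the Windows side; for WSL 2 it looks roughly like this (the swap line is optional and only illustrates one way to avoid the swapping described above):

# %UserProfile%\.wslconfig
[wsl2]
memory=8GB
swap=0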

Single Windows process cannot use all of the 80 available threads of my setup, why?

I have both Linux and Windows 10 installed on a dual Xeon 6138 computer with 64GB of RAM.
I cannot access the computer immediately (because of the lockdown), but I strongly believe the Windows version is Windows 10 Enterprise. The system was last updated in late 2018 and not since.
Xeon 6138 specs are available here (basically, each CPU has 20 cores, for a total of 40 HT threads per CPU and 80 in my dual-socket setup):
https://ark.intel.com/content/www/fr/fr/ark/products/120476/intel-xeon-gold-6138-processor-27-5m-cache-2-00-ghz.html
When I run a CPU-intensive program in Linux on this setup, all 80 threads of my system are used (see attached image 1).
However, when I run the same program on Windows 10, compiled with VC++ 2017, the process can only saturate 40 of the 80 threads available on my system.
Why, and how can I have all 80 threads used? (I know there is the concept of processor groups on Windows, but most of the programs I use are simply not processor-group aware, and I just know that I can't change that).
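For context, Windows places the threads of a group-unaware process into a single processor group of at most 64 logical processors, which is consistent with the 40-of-80 ceiling described above; a group-aware program would have to set per-thread group affinity (e.g. via SetThreadGroupAffinity). If the work can be split across processes instead of threads, a rough workaround is to launch one instance per NUMA node from cmd.exe, where myprogram.exe is a placeholder:

start /NODE 0 myprogram.exe
start /NODE 1 myprogram.exe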

Is there a maximum number of CPUs that a VirtualBox VM can bear?

I am using VirtualBox 5.1 running on a host with 48 CPUs and 250 GB of RAM.
The virtual machine that I am importing (the guest) initially had 2 CPUs and 4 GB of RAM.
Inside this machine I am running a Java process that starts a dynamic number of threads to perform some tasks.
I ran it with the configurations below:
The whole process on my laptop (2 CPUs / 4 GB RAM): ~11 seconds
Same program in the virtual machine on the server (15 CPUs / 32 GB RAM): ~45 seconds
Same program in the virtual machine on the server (20 CPUs / 32 GB RAM): ~100+ seconds
Same program in the virtual machine on the server (10 CPUs / 32 GB RAM): ~5+ seconds
At first I thought there was a problem with how I was managing the threads from Java, but after many tests I figured out that there was a relationship between the number of CPUs the virtual machine has and its performance. The maximum was 10; beyond that, the overall performance of the machine slows down (CPU starvation?).
The virtual machine runs Oracle Enterprise Linux 6.7 and the host runs Oracle Enterprise Linux 6.9
I couldn't find any hard limit in the VirtualBox documentation regarding the number of CPUs.
Is there a setting that needs to be set to enable/take advantage of more than 10 CPUs in a VirtualBox instance?
Some time has passed since I posted this question; just for the archive, I
will share my findings, hoping they can save time for others.
It turns out that the performance issues were due to the way VirtualBox works, especially the relationship between the OS and the hypervisor.
The virtual machine (the guest OS) is, in the end, a single process on the host, and when you modify the number of CPUs in the virtual machine settings, all that changes is the number of threads that this process uses to emulate the additional CPUs (at least in VirtualBox).
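For reference, that CPU-count setting is the same knob exposed on the command line, e.g. (with a placeholder VM name):

VBoxManage modifyvm "MyVM" --cpus 10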
Having said that, when I assigned 10+ CPUs to the VM I ended up with:
a single process with 10+ threads
an emulated OS running hundreds of processes
my Java code which was creating another bunch of threads
All of that together saturated the host's virtual machine process, which I think was due to the way the host OS was handling process context switching.
On my server, the hard limit was 7 virtual CPUs; if I added more than that, the performance of the Java software would slow down.
Running the Java software outside of the VM didn't show any performance issue; it worked out of the box with 60+ isolated threads.
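One Java-side mitigation for that kind of oversubscription is to cap the dynamic thread count at what the guest actually exposes. This is not the poster's code, just a minimal sketch; the class name and the dummy workload are illustrative:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BoundedWorkers {
    public static void main(String[] args) throws InterruptedException {
        // Size the pool from the CPUs the guest reports instead of spawning
        // an unbounded number of threads and oversubscribing the vCPUs.
        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < 1000; i++) {
            final int task = i;
            pool.submit(() -> {
                long acc = 0;                            // placeholder CPU-bound work
                for (long j = 0; j < 5_000_000L; j++) acc += j ^ task;
                return acc;
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
    }
}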
We have almost the same setup as yours (VirtualBox running on a 48-core machine across 2 NUMA nodes).
I initially set the number of cores to the maximum supported in VirtualBox (e.g. 32), but quickly realized that one of the two NUMA nodes was always idling while the other stayed at medium load when the VM was under load.
Long story short, a process can only be assigned to a single NUMA node, and VirtualBox runs one user process with several threads... which means that we are limited to using 24 cores (and even fewer in practice, considering that this is a 12-core CPU with hyperthreading).
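On a Linux host like the one in the original question, the topology can be checked and the whole VM process kept on a single node with standard numactl invocations; the VM name is a placeholder and a headless start is assumed:

numactl --hardware
numactl --cpunodebind=0 --membind=0 VBoxHeadless --startvm "MyVM"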

Tomcat not starting after applying the latest Red Hat kernel patch

After applying RHSA-2013:0911:R6-32 (Important: Red Hat Enterprise Linux 6 kernel update), Tomcat refuses to start with a
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
error in the catalina.out log.
In our particular environment, we are using 32-bit RHEL on machines with 2 GB of RAM. The new kernel is 2.6.32-358.11.1.el6.i686.
The config is pretty much the default; only -XX:MaxPermSize=1024M is configured (I know, it's high). If I decrease that value below 800M, Tomcat starts.
If I boot with the previous kernel (2.6.32-358.6.2.el6.i686), Tomcat starts.
It looks like the new kernel changed some memory allocation behaviour... Is anyone else seeing memory issues?
I had the same issue on CentOS 32-bit using this kernel, as well as the most recent one, kernel-firmware-2.6.32-358.14.1.el6. http://bugs.centos.org/view.php?id=6529 suggests using sysctl vm.unmap_area_factor=1 to influence how memory is allocated. However, it didn't do the trick for me. I'll migrate to a 64-bit installation now.
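For reference, the two knobs discussed in this thread look roughly like this; the setenv.sh location and the 512m value are illustrative, not what the poster used:

# kernel-side workaround from the CentOS bug report (did not help the answerer above)
sysctl -w vm.unmap_area_factor=1

# JVM-side workaround: keep MaxPermSize low enough that heap + permgen still fit
# in the 32-bit address space, e.g. in $CATALINA_HOME/bin/setenv.sh
export JAVA_OPTS="$JAVA_OPTS -XX:MaxPermSize=512m"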

VMware Workstation 7 C/C++ Compile Workload Performance

Can anyone point me to VMware Workstation benchmarks for a compile workload?
I've been looking for a while and I can't find any. It's a bit weird - this is supposedly a developer-oriented product. A full compile of our project usually takes about 4 minutes.
I am currently using VMware Workstation for development. The guest OS is Linux and the host is Windows. I don't use many of the VMware Workstation features like snapshots - I have my code repository for that, and I can re-create my dev environment within 10 minutes tops. I just prefer Windows font rendering, so I SSH (PuTTY) into my VM and develop from the console.
I am wondering how much compile performance I am sacrificing versus native. If there is a considerable difference (30% or more), perhaps it is more practical to have a dedicated/native dev box.
For background, in 2005/2006 or so I worked on a very large project based on Linux and using Tuxedo and Informix.
We virtualized the environment for each developer with VMWare and also had 2 separate groups of machines for Q/A and staging.
Builds were done on the machines for which they were targeted for "consistency."
Unless we asked make to run more jobs than we had CPUs (make -j 4 on a 2-CPU machine), the virtual machines' build times were within 5 to 10% of the real machines'.
As I recall, our makefiles reported build times of approximately 18 to 20 minutes on a real machine and 20 to 24 on a virtual machine.
The virtual machines also bogged down under heavy network or disk IO.
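In other words, keeping the make job count in line with the vCPUs the guest actually sees avoids the oversubscription mentioned above; assuming GNU make and coreutils in the Linux guest, something like:

make -j"$(nproc)"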
