Tomcat not starting after applying the latest Red Hat kernel security patch

After applying RHSA-2013:0911:R6-32 (Important: Red Hat Enterprise Linux 6 kernel update), Tomcat refuses to start, logging this error in catalina.out:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
In our particular environment, we are running 32-bit RHEL on machines with 2 GB of RAM. The new kernel is 2.6.32-358.11.1.el6.i686.
The configuration is pretty much the default; only -XX:MaxPermSize=1024M is set (I know, it's high). If I decrease that value below 800M, Tomcat starts.
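For reference, that flag is set through Tomcat's optional setenv.sh, which catalina.sh sources at startup if it exists (the path and the 768M value below are just examples):
# $CATALINA_HOME/bin/setenv.sh
export CATALINA_OPTS="-XX:MaxPermSize=768M"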
If I boot with the previous kernel (2.6.32-358.6.2.el6.i686), Tomcat starts.
It looks like the new kernel changed some memory allocation behaviour... Is anyone else seeing memory issues?

I had the same issue on 32-bit CentOS using this kernel, as well as the most recent one, kernel-firmware-2.6.32-358.14.1.el6. http://bugs.centos.org/view.php?id=6529 suggests setting sysctl vm.unmap_area_factor=1 to influence how memory is allocated (see the commands below). However, it didn't do the trick for me; I'll migrate to a 64-bit installation now.
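For anyone who wants to try that suggestion, applying the tunable looks like this (a sketch; vm.unmap_area_factor is the RHEL 6 specific knob named in the bug report):
# Apply immediately (lost on reboot)
sysctl -w vm.unmap_area_factor=1
# Persist across reboots
echo "vm.unmap_area_factor = 1" >> /etc/sysctl.conf
sysctl -p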

Related

BSOD occurring in ERAM (Open Source RAMDisk driver)

When I use it (its source code is available at https://github.com/Zero3K/ERAM) in a Windows 10 64-bit virtual machine (with driver signature enforcement disabled), configured to use 100 MB for a RAM disk and set as a fixed disk via its .cpl, I get a FAT_FILE_SYSTEM blue screen that does not happen in a Windows 7 64-bit virtual machine. Maybe someone could look at its source code to see what is causing this and offer a fix.

Apache Tomcat 9 on Windows 10

VMware ESXi 6.5 and later (VM version 13)
2x CPU (Xeon E5-2620 v3)
16,384 MB memory
Guest OS: Windows 10 Pro 1809 (build 17763.55)
Performance of the VM is very sluggish, even through the VMware console connection. Looking at Resource Monitor, the tomcat9.exe process is the main hog of CPU time. This process has between 150 and 180 threads running and an average CPU utilisation of around 75%, with overall CPU hovering around 90-100%.
I have been reading that Tomcat should be able to run on minimal resources, so there must be something else going on here. Unfortunately I know very little about Tomcat, so I am at a loss as to what to look for. I have rebooted the VM and have nothing else running on it (apart from Resource Monitor).
Surely Tomcat should not be monopolising the CPU like this?
It also seems like a Java process is high on the CPU utilisation list. Conversely, we have another instance running Tomcat 8 on Windows 7 which is not taxing the CPU at all.
In this specific case, increasing the amount of memory available to the Java Virtual Machine (JVM) solved the problem.
Refer to this article: How to Increase Java Memory in Windows.
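If Tomcat is installed as a Windows service, its heap settings can be updated with the procrun service executable that ships with it (a sketch; the service name Tomcat9 and the sizes in MB are assumptions):
rem Raise initial and maximum heap for the Tomcat9 service, then restart it
tomcat9.exe //US//Tomcat9 --JvmMs=512 --JvmMx=1024
net stop Tomcat9
net start Tomcat9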

Memory increase in CentOS and VirtualBox

I am using CentOS in VirtualBox.
My test web server frequently goes down, so I think the reason is low memory and I want to upgrade it.
The original memory was 1024 MB, and in the VM settings I upgraded it to 2048 MB.
What commands do I need to run so CentOS picks up the change?
I think upgrading the memory in VirtualBox alone is useless; I must run some command in CentOS or change some file, but I don't know how.
That should work on its own. What makes you think it isn't?
If you run
cat /proc/meminfo
before and after, does it reflect the value you set?
You could also add a swap file if you haven't already.
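For the record, checking the change and adding swap would look roughly like this (the 1 GB size and the /swapfile path are examples):
# MemTotal should reflect the new 2048 MB after the VM restarts
grep MemTotal /proc/meminfo
# Create, secure, format, and enable a 1 GB swap file
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# Persist it by adding this line to /etc/fstab:
# /swapfile swap swap defaults 0 0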

KVM balloon driver results in different total memory than requested

I have an Ubuntu host with several qemu-kvm guests installed on it, also running Ubuntu.
I'm using libvirt to change the guests' memory allocation, but I always encounter a constant difference between the requested memory allocation and the actual allocation I read from the Total field of the top command inside the guests.
The difference is the same for all the guests, and consistent.
On one machine I installed it is 134 MB (allocated is less than requested); on another it is 348 MB.
I can live with it, I just don't know the reason. Has anyone encountered this kind of problem? Maybe solved it?
Thanks
This constant difference is most likely the space reserved by the kernel. Note that this amount grows (at least on Linux) as more physical memory is available in the system. The change you're seeing is probably due to KVM giving that particular guest more or less memory to work with than it had before.
If you're interested, here is a quick article on memory ballooning as implemented by VMware ESX Server.
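For completeness, this is roughly how the balloon target is inspected and changed through libvirt (the domain name myguest is an example):
# What libvirt thinks the guest has
virsh dominfo myguest
# Set the balloon target of a running guest (size is in KiB by default)
virsh setmem myguest 2097152 --live
# Inside the guest, compare against what the kernel reports
grep MemTotal /proc/meminfo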

2 GB barrier for an x64 app running on an x64 CPU (Xeon 7650) with an x64 OS (Red Hat 5.6) - why + what to check

I'm running the x64 version of a simulation app on a very nice IBM System x server (four 8-core CPUs). The OS is Linux: Red Hat 5.6 with an x64 kernel.
The app crashes exactly when it needs more than 2 GB of memory (as is evident from its own log files).
My question really is how to debug this issue - what relevant environment settings should I look at? Is 'ulimit' (or sysctl.conf) relevant to this issue? What additional info can I post in order for you to help me?
This would be an application problem. Although the application is compiled as a 64-bit binary, it still uses signed 32-bit integers for some things instead of proper pointers or the appropriate *_t types; a signed 32-bit integer tops out at 2^31 bytes (2 GiB), which is why the crash happens at exactly that boundary.
If you compile the application yourself, look for any "unsigned" or "truncated" warnings in the compilation output, and fix them.
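On a reasonably recent gcc, warnings like that can be surfaced explicitly (a sketch; the filename is an example, and -Wsign-conversion may not exist on older compilers):
# Flag implicit truncations and signed/unsigned conversions
gcc -Wall -Wextra -Wconversion -Wsign-conversion -c simulation.c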
The shmmax value defines the maximum size of a single System V shared memory segment an application can allocate; check the current value with this command:
cat /proc/sys/kernel/shmmax
If you need to increase it, you can use:
echo 4096000000 > /proc/sys/kernel/shmmax
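To make that survive a reboot, add the equivalent entry to /etc/sysctl.conf and reload it (same example value as above):
echo "kernel.shmmax = 4096000000" >> /etc/sysctl.conf
sysctl -p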
Bye
