MATLAB starts to consume more and more resources, and eventually this error occurs:
Exception in thread "Explorer NavigationContext request queue" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.lang.String.concat(Unknown Source)
at com.matlab...
at com.matlab... etc
Exception in thread "Explorer NavigationContext request queue"
I'm running MATLAB R2015a.
I'm on Ubuntu 14.04 LTS - i5-2430M CPU @ 2.4 GHz x 4, 500 GB SSD, 64-bit.
I get this error without running a single command: after launching, MATLAB spits this error out after about 30 seconds of slowly consuming more and more resources. I can't figure this out, and MathWorks hasn't responded to any of my posts.
I was getting the same error consistently, and I solved it by increasing the Java heap memory that MATLAB requests and allocates from the operating system. Maybe this can help you:
Go to Home -> Preferences -> General -> Java Heap Memory and move the slider (the memory size that can be used by MATLAB).
I hope this also works for you!
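If you prefer to skip the GUI, MATLAB can also pick up JVM flags from a java.opts file; a minimal sketch, assuming the file sits in the folder MATLAB is started from (the exact lookup path varies by version and install):
# write a java.opts file raising the Java heap ceiling to 1 GB
echo "-Xmx1024m" > java.opts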
Posted as Q&A after finding a solution.
Working on a simulation code base on Linux, allocating memory succeeds, but the process is later killed by an external signal. Adding a signal handler does not prevent this, so it is presumably a SIGKILL, which cannot be caught. Since the process is killed from outside, a debugger cannot provide a backtrace.
Judging by the signs, and the high memory usage preceding the kill, it is probably related to the OOM killer. Outright disabling memory overcommit (and with it, in effect, the OOM killer) with
sudo sh -c "echo 2 > /proc/sys/vm/overcommit_memory"
resulted in many programs crashing.
What can be done to find the source of the issue, e.g. to get a backtrace indicating where too much memory is being used?
I observed this issue on openSUSE 15.2 while debugging a crash in a Fortran program. From the tester's description it was clearly an out-of-memory issue, but on my system I would simply see
>>> ./run-simulation
[1] Killed
on the terminal, with no traceback being emitted.
On my system, the source of the issue turned out to be that the virtual memory limit was set to "unlimited", as shown by
>>> ulimit -a
Setting a maximum for the virtual memory, e.g.
>>> ulimit -v 24''000''000 # in kB -> 24 GB on a 32 GB RAM system
resolved the issue by making the simulation program return an error code from ALLOCATE, or crash with a backtrace* for unhandled allocation failures (e.g. from temporaries created in the expression cmplx(transpose(some_larger_than_expected_matrix))).
* Assuming that the executable was compiled with support for backtraces (compiler-dependent), run through a debugger, ...
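For reference, a minimal shell sketch of the workaround above; run-simulation is the example binary from the question, and the limit only applies to the current shell session and its children:
ulimit -v 24000000   # cap virtual memory at ~24 GB (ulimit -v counts in kB)
./run-simulation     # oversized ALLOCATEs now fail inside the program (nonzero
                     # stat, or a runtime backtrace) instead of an external SIGKILL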
I am currently running Node.js under PM2.
Recently, I started checking the "custom metrics" shown by the pm2 monit command.
It shows information such as heap size, used heap size, and active requests.
I don't know how the heap size is determined. I checked PM2 running on different servers:
one was set to 95 MiB and the other to 55 MiB, and accordingly the used heap size was different.
Also, is heap usage closer to 100% better?
While searching Stack Overflow for related information, I found the following question:
What does Heap Usage mean in PM2
Also, what does "active requests" mean? It is continuously zero.
Thank you!
[Edit]
env : ubuntu18.04 [ ec2 - t3.micro ]
node version : v10.15
[Additional]
server memory : 1GB [ 40~50% used ]
cpu : vCPU (2) [ 1~2% used ]
The heap is the RAM used by the program you're asking PM2 to manage and monitor. Heap space, in JavaScript and similar language runtimes, is allocated when your program creates objects and released upon garbage collection. Your runtime asks your OS for more heap space whenever it needs it, that is, whenever active allocations exceed the free space. So your heap size will probably grow as your program starts up. That's normal.
Most programs allocate and release lots of objects as they do their work, so you should not try to optimize the percentage usage of your heap. When your program is running at a steady state, that is, after it has started up, you'll find the utilization creeping up until garbage collection happens, and then dropping back. For example, a Node.js/Express web server allocates req and res objects for each incoming request, uses them, then drops them so the garbage collector can reclaim their RAM.
If your allocated heap size keeps growing, over minutes or hours, you probably have a memory leak. That is a programming bug: a problem you should do your best to solve. You should look up how that works for your application language. Other than that, don't worry too much about heap usage.
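As a stopgap while you hunt for a leak, PM2 itself can restart the process when its memory passes a threshold; a sketch, where app.js and the 300M limit are placeholder values:
pm2 start app.js --max-memory-restart 300M   # restart once memory use exceeds ~300 MB
pm2 monit                                    # watch heap size, heap usage and loop delay live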
Active requests count work being done via various asynchronous resources like file writers and TCP connections. Unless your program is very busy, it stays near zero.
Keep an eye on loop delay if your program does heavy computation. If it creeps up, some computation function is hogging the JavaScript event loop.
chrome invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=300
I'm getting the above error while testing with a headless Chrome browser and Selenium.
This error message...
chrome invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=300
...implies that the ChromeDriver-controlled browsing context, i.e. the Chrome browser, triggered the OOM Killer due to an out-of-memory condition.
Out of Memory
Out of Memory error messages can appear when you attempt to start new programs, or try to use programs that are already running, even though you still have plenty of physical and pagefile memory available.
OOM Killer
The OOM Killer, or Out Of Memory Killer, is a process that the Linux kernel employs when the system is critically low on memory. The situation arises because the kernel over-allocates memory to its processes. When a process starts, it requests a block of memory from the kernel; this initial request is usually large, and the process typically will not use all of it immediately, or indeed ever. Aware of this tendency, the kernel over-allocates the system memory: on a system with, for example, 2 GB of RAM, the kernel may allocate 2.5 GB to processes. This maximizes the use of system memory by ensuring that the memory allocated to processes is actively used.
Now, if enough processes begin to use all of their requested memory blocks, there is not enough physical memory to back them all, meaning the running processes require more memory than is physically available. This is exactly when the Linux kernel invokes the OOM Killer, which reviews all running processes and kills one or more of them to free up system memory and keep the system running.
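Both halves of this mechanism can be observed from a shell; a short sketch (the exact log wording varies by kernel version):
cat /proc/sys/vm/overcommit_memory   # 0 = heuristic overcommit (default), 1 = always, 2 = strict
dmesg | grep -i oom-killer           # kernel log lines from OOM Killer invocations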
Chrome the first victim of the OOM Killer
Surprisingly, it seems the Chrome browser client is the first victim of the OOM killer. Since the Linux OOM Killer kills the process with the highest score = (RSS + oom_score_adj), Chrome tabs are killed first because they have an oom_score_adj of 300 (kLowestRendererOomScore = 300 in chrome_constants.cc), as follows:
#if defined(OS_LINUX)
const int kLowestRendererOomScore = 300;
const int kHighestRendererOomScore = 1000;
#endif
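You can check what the kernel actually sees for a running renderer; a sketch, with <pid> standing in for a Chrome renderer's process ID:
cat /proc/<pid>/oom_score_adj                 # 300 for a renderer, matching the constant above
cat /proc/<pid>/oom_score                     # the effective badness score the OOM Killer ranks by
echo 0 | sudo tee /proc/<pid>/oom_score_adj   # lower the adjustment (needs root)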
Details
This is a known issue and can be easily reproduced. It has been discussed at length in oom_score_adj too high - chrome is always the first victim of the oom killer. The goal there was to adjust the OOM scoring in Chrome OS so that the most-recently-opened tab isn't killed, as the OOM killer appears to prefer recent processes by default. But that adjustment does not carry over to other Linux distros, so you get the undesirable behavior where Chrome processes are killed ahead of other processes that probably should have been killed instead.
Solution
Some details in terms of the error stack trace would have helped us suggest some changes regarding:
total-vm usage
Physical memory
Swap memory
You can find a couple of relevant discussions in:
Understanding the Linux oom-killer's logs
what does anon-rss and total-vm mean
determine vm size of process killed by oom-killer
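In the meantime, those numbers can be pulled straight from the kernel log after a kill; total-vm, anon-rss and file-rss all appear on the "Killed process" line:
dmesg | grep -i "killed process"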
However, while there was a code review to address this issue, the discussion still seems to be in status Assigned with Priority: 2 in:
Linux: Adjust /proc/pid/oom_adj to sacrifice plugin and renderer processes to the OOM killer
tl;dr
java.lang.OutOfMemoryError: unable to create new native thread error using ChromeDriver and Chrome through Selenium in Spring boot
Outro
Chromium OS - Design Documents - Out of memory handling
Despite 32 GB of RAM, this Chromium OOM still happens in its latest release!
Because this issue can fully freeze Xorg, the SysRq key combinations can help to recover a console terminal.
ALT + SysRq + K to kill Chromium.
Consider adding sysrq_always_enabled to the kernel boot command line.
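A sketch of enabling SysRq, either at runtime or persistently via the boot command line:
echo 1 | sudo tee /proc/sys/kernel/sysrq   # enable all SysRq functions for this boot
# persistent alternative: append sysrq_always_enabled=1 to the kernel command
# line, e.g. in GRUB_CMDLINE_LINUX_DEFAULT, then update GRUB and reboot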
I'm having memory issues in my production environment webapp.
I have Tomcat running on an AWS EC2 t2.medium instance (2-core 64-bit CPU + 4 GB RAM).
This is some info from javamelody:
OS: OS Linux, 3.13.0-87-generic , amd64/64 (2 cores)
Java: Java(TM) SE Runtime Environment, 1.8.0_91-b14
JVM: Java HotSpot(TM) 64-Bit Server VM, 25.91-b14, mixed mode
Tomcat "http-nio-8080": Busy threads = 4 / 200 ++++++++++++
Bytes received = 8.051
Bytes sent = 206.411.053
Request count = 3.204
Error count = 70
Sum of processing times (ms) = 540.398
Max processing time (ms) = 12.319
Memory: Non heap memory = 130 Mb (Perm Gen, Code Cache),
Buffered memory = 0 Mb,
Loaded classes = 12,258,
Garbage collection time = 1,394 ms,
Process cpu time = 108,100 ms,
Committed virtual memory = 5,743 Mb,
Free physical memory = 142 Mb,
Total physical memory = 3,952 Mb,
Free swap space = 0 Mb,
Total swap space = 0 Mb
Free disk space: 27.457 Mb
And my application runs into:
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 12288 bytes for committing reserved memory.
I had applied the following config options, but it seems to be failing again.
export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms1024m -Xmx3072m -XX:PermSize=128m -XX:MaxPermSize=256m"
Is this config OK for my Linux configuration?
For further information: my database and file system run on another t2.medium instance (Windows, 2-core CPU + 4 GB RAM).
Thanks, and sorry for my English.
EDITED:
The problem is still going on. The weirdest thing is that the logs showed no big process running, and it happened in the early morning (so few people were connected to the application).
In the past I ran the application in a Windows environment and none of this happened. I thought a Linux instance would be better, but it is driving me crazy.
The log:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000f2d80000, 43515904, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 43515904 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/ubuntu/hs_err_pid20233.log
Now my config is this at setenv.sh:
export CATALINA_OPTS="-Dfile.encoding=Cp1252 -Xms2048m -Xmx2048m -server"
And I don't know if it makes any sense, but the hs_err file has this line:
Memory: 4k page, physical 4046856k(102712k free), swap 0k(0k free)
Is this config OK for your Linux configuration? Well, we don't know what else is running on the machine and how much memory other processes use, so the answer is "it depends". However, here's how you can figure out the correct setting yourself:
If you want to know right away whether the server has enough memory, set -Xmx and -Xms to the same value. That way, you'd run into out-of-memory conditions right when you start the server, not at some random time in the future. Maybe your OS can only allocate 2.8 GB instead of 3. (The same goes for the permsize parameters in case you're still running on Java 7; otherwise remove them.)
You might also want to add -XX:+AlwaysPreTouch to your parameters, so that you can be sure the memory has actually been allocated right from the beginning.
And lastly, don't set JAVA_OPTS (it is used for every JVM start, including the shutdown command); use CATALINA_OPTS instead (it is only used for starting up Tomcat).
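Putting those three points together, a setenv.sh sketch; the 2 GB heap is an example figure for a 4 GB instance, not a recommendation:
# $CATALINA_BASE/bin/setenv.sh
export CATALINA_OPTS="-Dfile.encoding=UTF-8 -server -Xms2048m -Xmx2048m -XX:+AlwaysPreTouch"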
I have a memory- and CPU-intensive application that runs several hundred concurrent threads doing text processing. There is a lot going on in the background: processing files, logging to disk, and so on. The application is compiled for the x64 platform, under XE2.
Occasionally it crashes, and I've been trying to debug the issue for a few days, but without success. Here is the bug report: http://pastebin.com/raw.php?i=sSUXCznT
I tried running it under the debugger, and it reports an Out of Memory exception after a while. At the point of the crash it was using 670 MB of RAM, and the machine has 32 GB in total.
I was thinking it may be fragmentation, but if I'm reading the bug report correctly, it says largest free block: 8185.75 GB, which indicates that fragmentation isn't the issue here.
The application isn't leaking memory anywhere (at least as far as I know); I have ReportMemoryLeaksOnShutdown enabled and it works fine.
Since I don't have any other ideas why it would crash with an Out of Memory exception, I would like some hints so I can get on the right path to fixing this.
Try setting a breakpoint in System.pas, in procedure Error(errorCode: TRuntimeError);. Your application should stop there when the out-of-memory condition happens. When it does, skip the ErrorAt call (via Debug -> "Set next statement" in the context menu). That silently ignores the exception so you can debug the call stack more easily. Leave the enclosing functions with F7 until you have a useful stack trace.