We have a Tomcat 7 instance which deploys 2 web apps. These webapps have a lot of dependencies, and the current memory space seems to be insufficient.
We have configured the needed environment variables in a startup file. In particular, we set JAVA_OPTS=-Xmx8192.
I talked to my colleagues about the missing config parameters, because in other configurations I have seen -Xms, MaxPermSize, etc.
Which parameters are missing in order to avoid the PermGen exception, and what is their role?
Thanks in advance
-Xms256m -Xmx1024m -XX:+DisableExplicitGC -Dcom.sun.management.jmxremote
-XX:PermSize=256m -XX:MaxPermSize=512m
Add the above lines to your VM arguments; I am sure it will work for you.
If you use the Eclipse IDE, you can change the VM arguments through it:
double-click on the server > open Launch Configuration > Arguments > VM Arguments
and add the above two lines.
The amount of memory given to a Java process is specified at startup. To make things more complex, the memory is divided into separate areas, the heap and permgen being the most familiar sub-areas.
While you specify the maximum heap size allowed for this particular process via -Xmx, the corresponding parameter for permgen is -XX:MaxPermSize. 90% of Java apps seem to require between 64 and 512 MB of permgen to work properly. In order to find your limits, experiment a bit.
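For Tomcat 7, a common place to set these flags is a bin/setenv.sh file, which catalina.sh picks up automatically if it exists. The sizes below are starting points to experiment from, not recommendations. Also note the m suffix: a bare -Xmx8192, as in the question, is read as 8192 bytes, not megabytes.

# bin/setenv.sh -- sourced automatically by catalina.sh if present
# -Xms/-Xmx size the heap; -XX:MaxPermSize caps the permgen area
export JAVA_OPTS="-Xms256m -Xmx1024m -XX:PermSize=256m -XX:MaxPermSize=512m"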
I am working on identifying repeated OutOfMemory issues in a Tomcat 8.5.38 server (CentOS 7.6, OpenJDK 1.8, 4 CPUs) running a Spring MVC application.
This issue is new for this app. (Edit: The issue started after Tomcat was upgraded from 8.5.35 to 8.5.38.) I saved a memory heap dump by adding the -XX:+HeapDumpOnOutOfMemoryError JVM setting. In the heap dump I see that there are 2 instances of ParallelWebAppClassLoader. This app uses a large HashMap (about 200 MB) of lookup values as a cache. Each class loader holds a separate reference to this HashMap. I am trying to find out why there are 2 ParallelWebAppClassLoader instances in this JVM. The server.xml does not specify the use of ParallelWebAppClassLoader.
Also, is it correct to expect the ParallelWebAppClassLoader to maintain a copy of the HashMap?
If these are 2 copies of the same object, how can this duplicate space be optimized when ParallelWebAppClassLoader is used?
The issue was because of the upgrade of Tomcat from 8.5.35 to 8.5.38. After this upgrade the JVM has two class loaders, and thus occupies twice the memory. A quick fix is to increase the RAM or to roll back the Tomcat version to 8.5.35.
If there is a setting to control the number of class loaders, please post an answer. I will upvote it.
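For anyone who wants to confirm the duplication without opening a full heap dump, a class histogram of the running JVM is often enough. This is a sketch assuming the JDK's jmap is on the PATH and <tomcat-pid> stands in for your actual Tomcat process id:

# count live webapp classloader instances in the running JVM
jmap -histo:live <tomcat-pid> | grep ParallelWebAppClassLoader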
Very recently I installed JDK 9 and Apache Cassandra from the official site. But now when I start Cassandra in the foreground, I get this message:
apache-cassandra-3.11.1/bin$ ./cassandra -f
[0.000s][warning][gc] -Xloggc is deprecated. Will use -Xlog:gc:/home/mmatak/monero/apache-cassandra-3.11.1/logs/gc.log instead.
intx ThreadPriorityPolicy=42 is outside the allowed range [ 0 ... 1 ]
Improperly specified VM option 'ThreadPriorityPolicy=42'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
So far I haven't found any solution for this. Is it possible that Java 9 and Cassandra are not yet compatible? The problem is mentioned here as well: #CASSANDRA-13107
But I am not sure how to just "remove the flag". Where is it possible to override or remove this flag?
I had exactly the same issue:
Can't start Cassandra (Single-Node Cluster on CentOS7)
If it is an option for you, using Java 8 instead of 9 is the simplest way to solve the issue.
Setting the following env variables solved the problem on macOS:
export JAVA8_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home
@Martin Matak Just comment out that line in the conf/jvm.options file:
########################
# GENERAL JVM SETTINGS #
########################
# allows lowering thread priority without being root on linux - probably
# not necessary on Windows but doesn't harm anything.
# see http://tech.stolsvik.com/2010/01/linux-java-thread-priorities-workaround.html
#-XX:ThreadPriorityPolicy=42
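If you prefer to do this from the shell, a one-liner along these lines should comment the flag out; the path is assumed to be relative to your Cassandra install directory:

# comment out the offending flag (GNU sed; on macOS use sed -i '' instead)
sed -i 's/^-XX:ThreadPriorityPolicy=42/#&/' conf/jvm.options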
Some background on -XX:ThreadPriorityPolicy.
These were the values, as documented in the source code.
0 : Normal.
VM chooses priorities that are appropriate for normal
applications. On Solaris NORM_PRIORITY and above are mapped
to normal native priority. Java priorities below
NORM_PRIORITY map to lower native priority values. On
Windows applications are allowed to use higher native
priorities. However, with ThreadPriorityPolicy=0, VM will
not use the highest possible native priority,
THREAD_PRIORITY_TIME_CRITICAL, as it may interfere with
system threads. On Linux thread priorities are ignored
because the OS does not support static priority in
SCHED_OTHER scheduling class which is the only choice for
non-root, non-realtime applications.
1 : Aggressive.
Java thread priorities map over to the entire range of
native thread priorities. Higher Java thread priorities map
to higher native thread priorities. This policy should be
used with care, as sometimes it can cause performance
degradation in the application and/or the entire system. On
Linux this policy requires root privilege.
In other words: The default Normal setting causes thread priorities to be ignored on Linux.
Now someone found a bug in the code, which disabled the "is root?" check for values other than 1, but would still try to set the thread priority for every value other than 0.
Unless running as root, it would only be possible to lower the thread priority. So although not perfect, this was quite an improvement, compared to not being able to control the priorities at all.
Starting with Java 9, command line arguments like this one started to get checked, and this hack stopped working.
FWIW, on Java 11/Linux, I can set the parameter to 1 without being root, and setting thread priorities does have an effect. So something has changed in the meantime, and at least with recent JVMs this hack does not seem necessary any more.
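If you want to check this on your own machine, one way is to start a JVM with the flag and inspect the native nice values of its threads; a rough sketch assuming Linux, procps, and a runnable yourapp.jar (a placeholder):

# start a JVM with the aggressive policy (no root needed on recent JDKs)
java -XX:ThreadPriorityPolicy=1 -jar yourapp.jar &
# list per-thread nice values; changed Java priorities should show up here
ps -eLo pid,tid,ni,comm | grep java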
Solution to your Question
Reason for this exception
Multiple JDK versions are installed; probably JDK 9 or JDK 10 is causing this exception.
1. Set the PATH to point to the JDK 8 version only (see the sketch after this list).
2. Currently Cassandra 3.11 is designed to run on JDK 8 only, not on anything greater.
3. Change the Cassandra conf file (/opt/apache-cassandra-3.11.2/conf/cassandra-env.sh).
4. If you want to use a higher JDK version, update the system path variables based on your OS.
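A minimal sketch of step 1 on Linux, assuming an OpenJDK 8 package in the usual location (adjust the path to your install):

# point JAVA_HOME and PATH at JDK 8 (example path; yours may differ)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"
java -version   # should now report 1.8.x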
There's a jvm.options file in your conf directory which sets it:
https://github.com/apache/cassandra/blob/12d4e2f189fb228250edc876963d0c74b5ab0d4f/conf/jvm.options#L96
Following on from Jay's answer, if you're on macOS and installed via Homebrew, the file is located at local/etc/cassandra/jvm.options.
We are using Nashorn to run JavaScript from Java (JDK 1.8u66). After some profiling, we are seeing that a large amount of data is occupied by jdk.nashorn.internal.scripts.JO4P0 objects. Does anyone have any idea why?
jdk.nashorn.internal.scripts.JO4P0 and other similar instances are used to represent script objects from your scripts. But you need to provide more info for further investigation. I suggest you write to the nashorn-dev OpenJDK mailing list with the profiler output and more info about your application (a link to the project, if open source, would be useful).
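If you want to quantify the growth before posting, a class histogram is a quick first step; this sketch assumes the JDK's jmap is available and <pid> stands in for your Java process id:

# count Nashorn-generated script-object instances by class
jmap -histo <pid> | grep jdk.nashorn.internal.scripts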
Does anyone know what's the deal with this IDE?
I have been running it for a while, lately it has become very slow and unresponsive at times.
Gobbles up CPU even when just editing a bunch of js files.
Possibilities:
1. My code base is getting bigger...
2. I have several listeners which compile coffeescript and sass files in the background when these change.
In any case, I am very surprised (for the worse) that this is so slow. I would expect better from the developer of an IDE.
Anyone had this kind of problem before?
Thanks!
There are a couple of performance tweaks you can apply to WebStorm to see if they improve your situation. When my colleagues and I found that WebStorm was slowing down, these tweaks solved all our problems.
First things first, ensure your project is configured to use WebStorm's resources efficiently by excluding particular directories from the project. This ensures the contained files are not indexed in memory and do not degrade performance when performing functions such as searching for files or for text within files. Some examples of good candidates to exclude are the node_modules directory and compiled-code directories.
If there are still performance issues, try the following:
If you are on Windows, by default you would be using the 32-bit version. Navigate to the WebStorm directory (within Program Files) and you'll see webstorm64.exe, which will run WebStorm in 64-bit mode. (You might need to install a proper 64-bit JDK yourself then.)
The default VM options for IntelliJ IDEA may not be optimal when your project contains more than 10,000 classes, and developers often try to change the default options to minimize IntelliJ IDEA's hang time.
You can try bumping up the JVM memory limits for WebStorm. Open the VM options file at IDE_HOME\bin\<product>[bits][.exe].vmoptions. Initially, try doubling the Xms and Xmx memory values.
Please note that very big Xmx and Xms values are not so good. In that case, the garbage collector has to work with a big part of memory at a time, causing considerable hang-ups.
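As an illustration, doubling the shipped 64-bit defaults would look roughly like this in the .vmoptions file (sizes are illustrative, not a recommendation):

# webstorm64.exe.vmoptions -- defaults doubled as a first experiment
-Xms256m
-Xmx1500m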
For more info on configuring JVM memory options you can refer to:
Configuring IntelliJ IDEA VM options - http://blog.jetbrains.com/idea/2006/04/configuring-intellij-idea-vm-options/
Configuring JVM options and platform properties - https://intellij-support.jetbrains.com/entries/23395793-Configuring-JVM-options-and-platform-properties
You can now do it from the UI.
These are my before and after values. No problems with the garbage collector; I just multiplied all values by 4. Machine: 20 GB RAM, 4 GHz i7 CPU, and an SSD disk. With the defaults it started to lag; now there is no lag again.
Pasting as text for quick copy:
# custom WebStorm VM options
# Default:
# -Xms128m
# -Xmx750m
# -XX:ReservedCodeCacheSize=240m
# -XX:+UseCompressedOops
-Xms512m
-Xmx3000m
-XX:ReservedCodeCacheSize=960m
-XX:+UseCompressedOops
I was dealing with a similar situation. The CPU used to spike like crazy, and the IDE used to lag. Go to the WebStorm preferences and try disabling plugins that you do not need.
For instance, if your project uses SASS, what's the point of having LESS plugin running? Likewise, if your project uses Git, you don't need to have CVS or Perforce Integration.
CPU still spikes when WebStorm is indexing my project files, but I usually just wait it out.
Stopping my TypeScript file watching significantly helped (both in the IDE settings menu and in tsconfig.json). I assume that once the project gets big enough, any changes force a large recompile. It's not ideal but it's something that worked for me and may work for others as well.
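For the tsconfig.json side, the switch I know of is the top-level compileOnSave flag, which some IDE integrations respect; whether your WebStorm version honours it is an assumption worth verifying:

{
  // ask IDE/compiler integrations not to recompile on every save
  "compileOnSave": false
}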
The first time, the app starts correctly. Then I delete the webapp/*.war file and paste a new version of the *.war. Jetty starts deploying the new war, but the error java.lang.OutOfMemoryError: PermGen space occurs. How can I configure Jetty to fix this error / redeploy correctly?
This solution doesn't help me.
Jetty version: jetty-7.4.3.v20110701
There is probably no way to configure the problem away. Each JVM has one PermGen memory area that is used for class loading and static data. Whenever your application gets undeployed, its classloader should be discarded, along with all the classes it loaded. When this fails because other references to the classloader still exist, garbage collecting the classloader and your application's classes will also fail.
A blog entry and its follow-up explain a possible source of the problem. Whenever the application container's code uses a class that holds a reference to one of your classes, garbage collection of your classes is prevented. The example from the mentioned blog entry is the java.util.logging.Level constructor:
protected Level(String name, int value) {
    this.name = name;
    this.value = value;
    synchronized (Level.class) {
        known.add(this); // "known" is a static list: this reference is never released
    }
}
Note that known is a static member of java.util.logging.Level. The constructor stores a reference to every created instance. So as soon as the Level class is loaded or instantiated from outside your application's code, garbage collection can't remove your classes any more.
To solve the problem you could avoid all classes that are in use outside your own code, or ensure that no references are held to your classes from outside your code. Both problems can occur within any class delivered with Java, and are thus not feasible to fix within your application. You cannot prevent the problem by altering only your own code!
Your options are basically:
Increase the memory limits so that the error strikes less often
Analyze your code as detailed in the linked blog posts and avoid using the classes that store references to your objects
If a PermGen out-of-memory error occurs, you need to restart the JVM, in your case restart Jetty. You may increase the PermGen space with the JVM options in your linked solution so this happens later (by later I mean: after more redeploys). But it will happen every once in a while, and you can do next to nothing to avoid it. The answer you linked explained well what the PermGen space is and why it overflows.
Use:
-XX:PermSize=64M -XX:MaxPermSize=128M
or, if that was not enough yet
-XX:PermSize=256M -XX:MaxPermSize=512M
Also, be sure to increase the amount of space available to the VM in general if you use these commands, for example:
-Xms128M -Xmx256M
For Jetty 7.6.6 or later, this may help: http://www.eclipse.org/jetty/documentation/current/preventing-memory-leaks.html
We used the AppContextLeakPreventer and it helped with the OOM errors due to PermGen space.
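For reference, registering it in jetty.xml looks roughly like this; the class name is from Jetty's util package, but verify it against your Jetty version:

<!-- inside the Configure element of jetty.xml: pre-load the AppContext
     from the server classloader so it cannot pin a webapp classloader -->
<Call name="addBean">
  <Arg>
    <New class="org.eclipse.jetty.util.preventers.AppContextLeakPreventer"/>
  </Arg>
</Call>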
I had this same problem with HotSpot, but with JRockit, which doesn't have a Permanent Generation, the problem went away. It's free now, so you might want to try it: https://blogs.oracle.com/henrik/entry/jrockit_is_now_free_and
This looks very much like a Permanent Generation leak. Whenever your application leaves some classes hanging around after it is undeployed, you get this problem. You can try the latest version of Plumbr; maybe it will find the left-over classes.
For Readers of the Future (relative to when this question was asked):
In JDK 8 the PermGen space is gone (it is not there anymore). Instead there is now Metaspace, which is taken from the native memory of the machine.
If you had problems with PermGen overflow, then you might want to have a look at this explanation and these comments on the removal process.
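If you still want an upper bound in the Metaspace world, the analogous flags look like this; by default Metaspace is limited only by available native memory (sizes here are illustrative):

# JDK 8+: Metaspace replaces PermGen
-XX:MetaspaceSize=128m        # initial high-water mark that triggers a GC
-XX:MaxMetaspaceSize=512m     # hard cap (unbounded if not set)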