Out of memory issue with WebLogic Server 11g - Linux

I am using WebLogic 10.3.6 with JRockit installed, on a 64-bit Linux system. I have an ADF application deployed on it, and only a couple of users use the application. But the server machine where WLS is installed keeps going down every week with an out-of-memory error, so we have to restart it every week. While looking into this I found that WebLogic can be made more stable by adjusting the heap size and other memory arguments.
Example: -Xms256m -Xmx512m, MaxPermSize of 128m
My questions are:
What are these arguments?
How are these arguments related to one another?
How do I determine the right values for these arguments?
What other causes can there be for an out-of-memory issue?
Thanks,
Rakesh

Xms and Xmx are the minimum and the maximum heap size (essentially where the objects are stored) the Java program can use.
In your case the Java program is the WebLogic server on which your application is deployed. By default the Xms and Xmx values set by WebLogic server are 256m and 512m.
It looks like your application needs more than 512 MB of heap memory, so you need to increase the maximum heap size (Xmx) to avoid frequent OutOfMemoryErrors.
The new value of Xmx can be 1024m or more. You (or the performance team, if there is one) have to do rigorous performance, scalability and reliability testing with your application at different Xmx values to determine what is best for the application.
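One low-overhead way to gather data for that decision is to run the load tests with JRockit's verbose memory logging switched on and watch how close the heap gets to its ceiling. A minimal sketch, with the caveat that the flag spelling should be verified against your JRockit release's documentation; the sizes are placeholders, not recommendations:

# Hedged example: extra flag added to the server's memory arguments (see the
# script-level changes below) for the duration of a load test. JRockit's
# -Xverbose:memory prints heap usage and GC activity to the server's stdout log.
-Xms256m -Xmx1024m -Xverbose:memory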
Setting the memory arguments (i.e. Xms, Xmx) can be done at the script level (if you are using the startWebLogic.sh/startManagedWebLogic.sh scripts to start the servers).
Script Level Changes:
Open setDomainEnv.sh, search for 'IF USER_MEM_ARGS the environment variable is set' and on the next line insert USER_MEM_ARGS="-Xms256m -Xmx1024m", as shown in the sketch below.
You can even vary this setting from server to server by using the SERVER_NAME variable, which holds the name of the server being started. For example, to apply the setting only to non-admin servers, insert [ "${SERVER_NAME}" != "AdminServer" ] && USER_MEM_ARGS="-Xms256m -Xmx1024m"
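For illustration, here is roughly what that edit might look like; the exact wording of the surrounding comment varies between WebLogic versions, so treat the context lines as assumptions:

# setDomainEnv.sh (excerpt) -- placement relative to the existing
# 'IF USER_MEM_ARGS the environment variable is set' comment is an assumption
# Option A: apply to every server started from this domain
USER_MEM_ARGS="-Xms256m -Xmx1024m"
export USER_MEM_ARGS

# Option B: apply only to managed servers, leave the AdminServer on defaults
[ "${SERVER_NAME}" != "AdminServer" ] && USER_MEM_ARGS="-Xms256m -Xmx1024m"
export USER_MEM_ARGS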
Console Changes (Only if you use Admin Console to start the managed servers):
Log in to Admin Console -> Environment -> Servers -> <your server> -> Configuration -> Server Start -> Arguments (text area).
Enter -Xms256m -Xmx1024m and save.
Oh, btw JRockit does not have any concept of PermSize.

Get a basic knowledge of the JVM parameters.
Simply setting a couple of JVM memory parameters to higher values won't help; it will only move the error into the future. You have to analyze the application to find the real problem. JRockit comes with a very good memory analysis tool, Mission Control. Watch the demo; it will help you find out which part of your application causes the OutOfMemoryError.
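If you cannot attach Mission Control directly, JRockit's command-line jrcmd utility can give a first impression of what is filling the heap. A rough sketch only; the available diagnostic command names differ between JRockit releases, so check the output of jrcmd <pid> help before relying on them, and note that 12345 below is a placeholder PID:

# List running JRockit processes and their PIDs
jrcmd

# Hedged examples of diagnostic commands (verify they exist in your release):
jrcmd 12345 print_memusage        # overall memory layout of the process
jrcmd 12345 print_object_summary  # heap histogram: which classes hold the memory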

Related

Issue in pm2 - It stops responding

I am facing an issue with my application servers. Assume there are two nodes behind the load balancer, and suddenly one of them becomes unhealthy.
When I logged in to that instance there were no logs coming from pm2.
Then I checked its CPU, and it was very high.
So please guide me on how I can fix this issue, or on any way to debug it.
Check out flame graphs to see where your Node app is CPU bound.
You can also use the new debugging system in Node 6.3 (--inspect) to debug with the full power of Chrome DevTools.
PM2 has some limited protection for runaway issues like this via the max-memory-restart option. Typically, high CPU will also correlate with high memory usage and this option can be used to restart your app when it begins consuming large amounts of memory (which in your case may or may not be the correct moment but it should help).
--max-memory-restart <memory> specify max memory amount used to autorestart (in octet or use syntax like 100M)
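As a concrete (and hedged) illustration of the suggestions above, assuming the entry point is called app.js and the process name my-app, neither of which comes from the original question:

# Start the app under pm2 and restart it automatically if it grows past ~300 MB
pm2 start app.js --name my-app --max-memory-restart 300M

# For live debugging on Node >= 6.3, run the app with the inspector enabled
# and attach Chrome DevTools to the printed debugger URL
node --inspect app.js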

Issue with heap memory peaking

I've been having trouble with my webapp. My heap memory peaks up to nearly the max size over about 30 minutes and then it crashes my system.
I have googled and tried nearly everything. I have been monitoring my heap memory using Java VisualVM, JConsole and Oracle Java Mission Control (I know it's outdated).
So, what I have tried until now:
1. Monitored heap memory to see if there is a specific thread running at a specific time that peaks the memory. (This is not the case, as it doesn't peak at specific times.)
2. Increased my heap memory size.
3. Followed the instructions from:
http://karunsubramanian.com/websphere/top-4-java-heap-related-issues-and-how-to-fix-them/
So my questions are:
Is there any tool that can help me see whether I have a memory leak, and where it comes from?
Has anyone experienced the same issue?
Any pointers on how to manage this kind of problem?
Btw, I am quite new to this area, so please be kind.
Tomcat 7 on Windows Server 2012
Java 7
If you need more information please comment.
You need to configure the JVM to create a heap dump when an OutOfMemoryError occurs:
-XX:+HeapDumpOnOutOfMemoryError
Then analyze the heap dump to find out which classes are using the memory.
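A minimal sketch of how that could be wired into the Tomcat 7 / Java 7 setup described above; the setenv file and the dump path are assumptions, and on Windows Server 2012 the same options would go into bin\setenv.bat via set CATALINA_OPTS=... instead:

# bin/setenv.sh (create it next to catalina.sh if it does not exist)
# Write a .hprof heap dump whenever the JVM runs out of heap; point the
# dump path at a disk with enough free space to hold a full heap dump.
export CATALINA_OPTS="-Xmx1024m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/dumps"

# Afterwards, open the .hprof file in Eclipse MAT or VisualVM, or take a quick
# class histogram from the live process with:
jmap -histo:live <pid>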

Jboss-6.1 Application running very slow

My application is running on JBoss 6.1, and after a few days it runs very slowly. This is the situation I am facing every week. To deal with it I kill Java, clear the temp and work folders and restart JBoss. Is there any other way to clean up the memory / manage the application? Kindly give me suggestions for both the Linux and Windows platforms.
Kindly help, anyone.
Thanks & Regards,
Sharath
Based on the RAM size of your system, you can increase the following parameters in run.conf (for Linux) or run.conf.bat (for Windows):
Xms, Xmx, MaxPermSize.
-Xms512M -Xmx1024M -XX:MaxPermSize=128M
The flag Xmx specifies the maximum memory allocation pool for a Java Virtual Machine (JVM), while Xms specifies the initial memory allocation pool.
MaxPermSize sets the size of the Permanent Generation.
The Permanent Generation is where class files are kept. These are the result of compiled classes and JSP pages. If this space is full, it triggers a Full Garbage Collection. If the Full Garbage Collection cannot clean out old unreferenced classes and there is no room left to expand the Permanent Space, an OutOfMemoryError (OOME) is thrown and the JVM will crash.
Hope you are aware of these three flags.
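For illustration, the JAVA_OPTS line in run.conf might be adjusted roughly like this; the concrete sizes are placeholders that should be tuned against the machine's actual RAM, and run.conf.bat on Windows takes the same flags with batch's set "JAVA_OPTS=..." syntax:

# run.conf (Linux) -- keep whatever other flags your existing JAVA_OPTS carries
JAVA_OPTS="-Xms512m -Xmx1024m -XX:MaxPermSize=256m $JAVA_OPTS"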

Openshift app: "OutOfMemory: Java heap space"

I am getting the error SEVERE: java.lang.OutOfMemoryError: Java heap space for my OpenShift application. As the error suggests, I need to increase the Java heap space. I have tried ssh'ing into my OpenShift server and executing set JAVA_OPTS=-Xms1024M -Xmx1024M, but the error is still present.
I am deploying a .war file on a tomcat7 server.
What should I be doing instead to fix this problem?
What size gear are you using? If it is the small gear on the free account, you only get 512 MB of memory to begin with. If you need more memory you will need to upgrade to a larger gear; see http://www.openshift.com/pricing for how much memory the larger gear sizes have.
First, you need to check the heap size of the existing Java process.
Second, you need to try to increase it using the JAVA_OPTS option or otherwise. After making the change, check the process to validate that it did indeed increase the heap size.
Finally, if bumping the heap does not help, then as suggested you will need to do some level of profiling or use some other technique to troubleshoot.
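A hedged sketch of what those first two steps can look like on the gear. Note that in a bash session, set JAVA_OPTS=... only sets positional parameters and does not export an environment variable, which may be why the original change had no effect; the setenv.sh location is an assumption based on a standard Tomcat 7 layout, and the sizes are placeholders that must fit inside the gear's memory quota:

# 1. Check what heap the running Tomcat JVM actually got
jinfo -flag MaxHeapSize <pid>      # configured maximum heap, in bytes
jmap -heap <pid>                   # current usage vs. configured limits (JDK 7)

# 2. Make the setting persistent instead of typing it in the shell
# (bin/setenv.sh next to catalina.sh; use export, not set)
export JAVA_OPTS="-Xms512m -Xmx1024m"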

Maven build using/allocating huge amount of memory

I have a decent sized GWT (Google Web Toolkit) project that is built using Apache Maven. The build process involves generating 8 rpms and 2 wars.
I'm trying to build the project on a remote virtual server running CentOS 5.2 as the guest OS. Since the guest OS can't use swap space, I have to allocate a huge amount of memory to the box for it to build; otherwise I get a Java "could not allocate memory" error (error=12). The build fails if there is under 7 GB free. I suspect that most of this 7 GB is never used, but is allocated for some reason.
At the end of the build the output reads: [INFO] Final Memory: 178M/553M
I have MAVEN_OPTS set to -Xms256m -Xmx1024M
I'm not sure how to make the maven build use less memory. Any suggestions are much appreciated.
Note that forking plugins like the Maven GWT plugin (and Maven Surefire) use memory that is "outside" the total reported by the Maven execution. I would recommend correlating OS-level process sizes with the output of "jps -lv" to find out which fork is stealing all your memory.
If, for instance, for some reason a forked process does not terminate, things will get very crowded, very quickly.
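A small sketch of that correlation step (the ps options assume a Linux procps ps, which matches the CentOS guest described above):

# Every running JVM with its main class and the JVM flags it was started with
jps -lv

# Match those PIDs against resident set sizes reported by the OS,
# largest consumers first
ps -C java -o pid,rss,args --sort=-rss

If a forked GWT compiler or Surefire JVM turns out to be the culprit, its heap can usually be capped through the plugin's JVM-argument configuration (extraJvmArgs for the gwt-maven-plugin, argLine for Surefire), though the exact parameter names should be checked against the plugin versions in your POM.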
That memory indicates it only ever needed a max of 553M, so the setting in MAVEN_OPTS is already above what you need. Are you saying you want to use less than that, or are you currently getting an error?
