My application is running on JBoss 6.1, and after a few days it runs very slowly. This is a situation I face every week. To work around it, I kill the Java process, clear the temp and work folders, and restart JBoss. Is there any other way to clean up the memory or manage the application? Kindly give me suggestions for both Linux and Windows platforms.
Kindly help, anyone.
Thanks & Regards,
Sharath
Based on the RAM size of your system, you can increase the following parameters in run.conf (for Linux) or run.conf.bat (for Windows): Xms, Xmx, MaxPermSize. For example:
-Xms512M -Xmx1024M -XX:MaxPermSize=128M
The flag -Xmx specifies the maximum memory allocation pool for the Java Virtual Machine (JVM), while -Xms specifies the initial memory allocation pool.
-XX:MaxPermSize sets the size of the Permanent Generation.
The Permanent Generation is where class files are kept. These are the result of compiled classes and JSP pages. If this space is full, it triggers a Full Garbage Collection. If the Full Garbage Collection cannot clean out old unreferenced classes and there is no room left to expand the Permanent Space, an OutOfMemoryError (OOME) is thrown and the JVM will crash.
Hope you are aware of these three flags.
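For reference, a minimal sketch of where these flags can go in a default JBoss install; the values are just examples and should be tuned to your available RAM:
# JBOSS_HOME/bin/run.conf (Linux): append the flags to JAVA_OPTS
JAVA_OPTS="$JAVA_OPTS -Xms512m -Xmx1024m -XX:MaxPermSize=128m"
# JBOSS_HOME/bin/run.conf.bat (Windows) takes the equivalent line:
#   set "JAVA_OPTS=%JAVA_OPTS% -Xms512m -Xmx1024m -XX:MaxPermSize=128m"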
I've been having trouble with my webapp. My heap memory peaks to nearly its maximum size over about 30 minutes, and then it crashes my system.
I have googled and tried nearly everything. I have been monitoring my heap memory using Java VisualVM, JConsole, and Oracle Java Mission Control (I know it's outdated).
So what I have tried until now:
1. Monitored heap memory to see if there is a specific thread running at a specific time that peaks the memory. (This is not the case, as it doesn't peak at specific times.)
2. Increased my heap memory size.
3. Followed the instructions from:
http://karunsubramanian.com/websphere/top-4-java-heap-related-issues-and-how-to-fix-them/
So my questions are:
1. Is there any tool that can help me see whether I have a memory leak, and where it comes from?
2. Has anyone experienced the same issue?
3. Any pointers on how to manage this kind of problem?
Btw I am quite new in this area so please be kind.
Tomcat 7 on Windows Server 2012
Java 7
If you need more information please comment.
You need to configure the JVM to create a heap dump when an OutOfMemoryError occurs:
-XX:+HeapDumpOnOutOfMemoryError
Then analyze the heap dump to find out which classes are using the memory.
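A minimal sketch of wiring this up for Tomcat, assuming you add a setenv script (the heap dump path is an example):
# CATALINA_HOME/bin/setenv.sh (create it if it doesn't exist; use setenv.bat on Windows)
export CATALINA_OPTS="-Xmx1024m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps"
# on the next OutOfMemoryError a .hprof file lands in /var/dumps; open it with
# Eclipse Memory Analyzer (MAT) or Java VisualVM to see which classes retain the memory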
I am getting the error SEVERE: java.lang.OutOfMemoryError: Java heap space for my OpenShift application. As the error suggests, I need to increase the Java heap space. I have tried SSHing into my OpenShift server and executing set JAVA_OPTS=-Xms1024M -Xmx1024M, but the error is still present.
I am deploying a .war file on a tomcat7 server.
What should I be doing instead to fix this problem?
What size gear are you using? If it is the small gear on the free account, you only get 512MB of memory to begin with. If you need more memory you will need to upgrade to a larger gear; see http://www.openshift.com/pricing for how much memory the larger gear sizes have.
First, you need to check the heap size of the existing Java process.
Second, you need to try to increase it using the JAVA_OPTS variable or otherwise. After making the change, check the process again to validate that it did increase the heap size (see the sketch after these steps).
Finally, if bumping the heap does not help, then as suggested you will need to perform some level of profiling or use some other technique to troubleshoot.
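As a sketch of the first two steps, assuming the JDK tools are on the PATH (the PID is an example):
jps -lv                 # lists Java processes together with the JVM flags they started with
jmap -heap 12345        # prints the heap configuration and current usage of that process (JDK 7)
export JAVA_OPTS="-Xms1024m -Xmx1024m"   # must be set where Tomcat's startup script can see it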
I am using WebLogic 10.3.6 with JRockit installed, on a 64-bit Linux system. I have an ADF application deployed on it. There are only a couple of users using the application, but the server machine where WLS is installed keeps going down every week with an out-of-memory error, so we have to restart it every week. When I was looking into this I found that WebLogic can be made more stable by adjusting the heap size and other memory arguments.
Example: -Xms256m -Xmx512m, with MaxPermSize as 128m
My questions are:
1. What are these arguments?
2. How are these arguments related to one another?
3. How do I determine the values for these arguments?
4. What can be other causes of the out-of-memory issue?
Thanks,
Rakesh
Xms and Xmx are the minimum and maximum heap size (essentially, where the objects are stored) that the Java program can use.
In your case the Java program is the WebLogic server on which your application is deployed. By default, the Xms and Xmx values set by WebLogic server are 256m and 512m.
It looks like your application needs more than 512MB of heap memory, so you need to increase the maximum heap size (Xmx) to avoid the frequent OutOfMemoryError.
The new value of Xmx can be 1024m or more. You (or the performance team, if there is one) have to do rigorous performance, scalability, and reliability testing with your application under different Xmx values to determine what is best for the application.
Setting the memory arguments (i.e. Xms, Xmx) can be done at the script level (if you are using the startWebLogic.sh/startManagedWebLogic.sh scripts to start the servers).
Script Level Changes:
Open setDomainEnv.sh, search for 'IF USER_MEM_ARGS the environment variable is set', and on the next line insert USER_MEM_ARGS="-Xms256m -Xmx1024m"
You can even vary this setting from server to server by using the SERVER_NAME variable, which holds the name of the server being started. For example, to apply this setting only to non-admin servers, insert [ "${SERVER_NAME}" != "AdminServer" ] && USER_MEM_ARGS="-Xms256m -Xmx1024m" (see the sketch below).
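Put together, the edit could look like this sketch; the exact surrounding lines differ between WebLogic versions:
# setDomainEnv.sh, just below the 'IF USER_MEM_ARGS the environment variable is set' comment
USER_MEM_ARGS="-Xms256m -Xmx1024m"
# optional per-server variant: every server except the AdminServer gets the larger heap
[ "${SERVER_NAME}" != "AdminServer" ] && USER_MEM_ARGS="-Xms256m -Xmx1024m"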
Console Changes (Only if you use Admin Console to start the managed servers):
Log in to AdminConsole -> Environments -> Servers -> <server name> -> Configuration -> Server Start -> Arguments (text area).
Enter -Xms256m -Xmx1024m and save.
Oh, by the way: JRockit does not have any concept of PermSize, so the MaxPermSize argument does not apply to it.
Get basic knowledge of the JVM parameters.
Simply setting a couple of JVM memory parameters to higher values won't help; it will only move the error into the future. You have to analyze the application to find out the real problem. JRockit comes with a very good memory analysis tool, Mission Control. Watch the demo, which will help you find out which part of your application causes the out-of-memory error.
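As a starting point, a sketch of the JRockit command-line tools that feed that analysis, assuming a JRockit install with its bin directory on the PATH (the PID is an example):
jrcmd                        # with no arguments, lists the running JRockit JVMs and their PIDs
jrcmd 12345 print_memusage   # prints a breakdown of that process's memory usage
jrmc                         # launches JRockit Mission Control for interactive analysis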
I have a decent sized GWT (Google Web Toolkit) project that is built using Apache Maven. The build process involves generating 8 rpms and 2 wars.
I'm trying to build the project on a remote virtual server running CentOS 5.2 as a guest OS. Since the guest OS can't use swap space, I am having to allocate a huge amount of memory to the box for it to build; otherwise I get a Java "could not allocate memory" error (error=12). The build fails if there is under 7GB free. I suspect that most of this 7GB is never used, but is allocated for some reason.
At the end of the build the output reads: [INFO] Final Memory: 178M/553M
I have MAVEN_OPTS set to -Xms256m -Xmx1024M
I'm not sure how to make the maven build use less memory. Any suggestions are much appreciated.
Note that forking plugins like the Maven GWT plugin (and Maven Surefire) use memory that is "outside" the total reported by the Maven execution. I would recommend correlating OS-level process sizes with the output from "jps -lv" to find out which fork is stealing all your memory.
If, for instance, the forked process does not terminate for some reason, it would get very crowded very quickly.
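A rough sketch of that correlation on Linux, assuming the JDK tools are on the PATH (the PID is an example taken from the jps output):
jps -lv                           # every JVM plus its flags; forked GWT/Surefire JVMs show up here
ps -o pid,rss,vsz,args -p 12345   # compare a suspect fork's resident/virtual size with the Maven JVM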
That figure indicates it only ever needed a maximum of 553M, so the setting in MAVEN_OPTS is already above what you need. Are you saying you want to use less than that, or are you currently getting an error?
I am working on an MFC app that seems to be automagically committing ~160MB of virtual memory. The app typically runs at 10-14MB of memory usage, so this level of committed memory seems excessive. Additionally, there is nowhere in the code where VirtualAlloc is called...
COM & ATL are also being used.
The memory shows as committed the instant the process launches, before a breakpoint in __tmainCRTStartup can be reached.
How can this memory be reserved/committed?
Thanks in advance!
The only likely cause is a DLL you use. I've used MFC 7.0 and 9.0 for many projects and can tell you that they don't commit this much memory.
It turned out there was some "legacy" code using a static array of custom objects that allocated around 1000 extra elements; changing this to use a std::vector alleviated the issue completely...