I have Openfire installed on my Debian server.
I want to know whether the Openfire Java memory should be less than the server memory.
For example, I have a server with 256 MB of RAM. Can I give Openfire more Java memory than 256 MB, or should it be less than 256?
Please help
Thanks,
Pankaj
"Java memory" should definitely be lower, preferably quite a bit lower than available RAM, otherwise your server will start swapping and server performance go down a lot.
A number of things to consider to determine the "right" settings for Java heap space:
what's running on the server? If OpenFire is the only thing running, it can obviously be allowed to reserve more RAM
how much RAM does OpenFire really need? If you give a Java process lots of heap memory, it'll fill it before initiating a garbage collection. If you decrease heap size, it'll just have to collect garbage more often.
It may take some time to find the "ideal" settings, but simply allowing the server to take up more memory is often not useful; a sketch of where to set the limits on Debian is below.
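As a hedged illustration for the Debian Openfire package, the JVM arguments are usually set in /etc/default/openfire (the file name, variable, and values below are assumptions to verify against your own init script):
# cap Openfire's heap well below physical RAM
DAEMON_OPTS="-Xms32m -Xmx128m"
On a 256 MB machine this leaves roughly half the RAM for the OS, Debian's own services, and the JVM's non-heap overhead.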
I had a lot of users uploading files, and I found that the memory was not released after the uploads finished. So I stopped the Liferay Tomcat, and with no other applications running, the memory usage was still high. So what is consuming the memory? My guess is that the Linux server has cached the documents. Can I get some ideas or suggestions from you? I want to release the memory.
Once Java has allocated memory from the OS, it'll not free it up again. This is not a feature of Liferay, but of the underlying JVM.
You can allocate less memory to Liferay (or the appserver) to begin with, but you must be sure to at least allocate enough for uploads to be processed (AFAIK the documents aren't necessarily all held in memory at the same time). You can also configure the cache sizes so that Liferay won't need to allocate more memory from the OS, at the price of more cache misses. I'm aware of several installations that rather accepted the (minor) impact of cache misses than increase the overall memory requirements.
However, as memory is so cheap these days, many opt not to optimize this particular aspect. If you can't upgrade your hardware, though, it might be called for.
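As a concrete illustration of allocating less to begin with: for a Tomcat-bundled Liferay you would typically cap the heap in $CATALINA_HOME/bin/setenv.sh (the numbers are placeholders; size them to your actual upload and cache needs):
# JVM never requests more heap than -Xmx from the OS, whatever Liferay's caches do
CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx1024m"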
I'm running Node.js on a server with only 512 MB of RAM. The problem is that when I run a script, it gets killed because it runs out of memory.
By default the Node.js memory limit is 512 MB, so I think using --max-old-space-size is useless.
Here is the relevant content of /var/log/syslog:
Oct 7 09:24:42 ubuntu-user kernel: [72604.230204] Out of memory: Kill process 6422 (node) score 774 or sacrifice child
Oct 7 09:24:42 ubuntu-user kernel: [72604.230351] Killed process 6422 (node) total-vm:1575132kB, anon-rss:396268kB, file-rss:0kB
Is there a way to get rid of the out-of-memory kills without upgrading the memory (like using persistent storage as additional RAM)?
Update:
It's a scraper which uses the Node modules request and cheerio. When it runs, it opens hundreds or thousands of web pages (but not in parallel).
If you're giving Node access to every last megabyte of the available 512 and it's still not enough, then there are two ways forward:
Reduce the memory requirements of your program. This may or may not be possible. If you want help with this, you should post another question detailing your functionality and memory usage.
Get more memory for your server. 512 MB is not much, especially if you're running other services (such as databases or message queues) which require in-memory storage.
There is a third possibility of using swap space (disk storage that acts as a memory backup), but this will have a strong impact on performance. If you still want it, Google how to set this up for your operating system (a common recipe is sketched below); there are a lot of articles on the topic. This is OS configuration, not Node's.
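As a rough sketch, a common recipe on Ubuntu looks like this (the 1 GB size is illustrative; on some filesystems you may need dd instead of fallocate):
# create and enable a 1 GB swap file
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make it survive reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab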
Old question, but maybe this answer will help people. Using --max-old-space-size is not useless.
Before Node.js 12, the default heap size depended on the OS architecture (32- or 64-bit). According to the documentation, on 64-bit machines the old generation alone would default to 1400 MB, far beyond your 512 MB.
Since Node.js 12, the default heap size takes the system RAM into account; however, Node.js's heap isn't the only thing in memory, especially if your server isn't dedicated to it. Setting --max-old-space-size lets you put a limit on the old-generation heap, and as your application approaches it, the garbage collector will be triggered and will try to free memory.
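For example, on the 512 MB server above you could cap the old generation well below the physical RAM (the 400 and app.js are illustrative placeholders, not recommendations):
node --max-old-space-size=400 app.js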
I've written a post about how I observed this: https://loadteststories.com/nodejs-kubernetes-an-oom-serial-killer-story/
I've been having trouble with my webapp. My heap memory peaks up to nearly its maximum size over about 30 minutes, and then it crashes my system.
I have googled and tried nearly everything. I have been monitoring my heap memory using Java VisualVM, jconsole and Oracle Java Mission Control(I know it's outdated).
So, what I have tried until now:
1. Monitored heap memory to see if there is a specific thread running at a specific time that peaks the memory. (This is not the case, as the memory doesn't peak at specific times.)
2. Increased my heap memory size.
3. Followed the instructions from http://karunsubramanian.com/websphere/top-4-java-heap-related-issues-and-how-to-fix-them/
So my questions are:
Is there any tool that can help me see if I have a memory leak and from where?
Has anyone experienced the same issue?
Any pointers on how to manage this kind of problem?
Btw I am quite new in this area so please be kind.
Tomcat 7 on Windows Server 2012
Java 7
If you need more information please comment.
You need to configure the JVM to create a heap dump when an OutOfMemoryError occurs.
-XX:+HeapDumpOnOutOfMemoryError
Then analyze the heap dump to find out which classes are using the memory.
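A minimal sketch of the full workflow (the dump path and <pid> are placeholders to fill in for your system):
# write dumps to a known, writable location
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/dumps
# or take a dump of the running JVM on demand
jmap -dump:live,format=b,file=heap.hprof <pid>
Open the resulting .hprof file in a heap analyzer such as Eclipse Memory Analyzer (MAT); its dominator tree and "Leak Suspects" report point at the classes retaining the most memory.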
Running Linux Ubuntu 14.04 on a DigitalOcean server which gives me 512 MB RAM. Surprisingly, when trying to run Activator for a Play app, I came to realize that almost all the memory was used. Using the 'htop' command I get this output. Which process should I kill (I am using 2 SSH connections, one to monitor and the other one to do stuff)?
I could also assign swap memory, but that would affect performance. I thought 512 MB should be more than enough to run a Play server. I mean, seriously, we put a man on the moon with really much less.
Linux makes as much use of memory as it can, but that doesn't mean that it's not available for your applications. It will use memory to cache certain things (such as files) and memory for buffers.
In your screenshot you'll see the memory usage bar is made of different coloured sections:
Green is memory in use
Blue is buffer
Yellow is cache
So generally any applications you run that require more memory will allocate it out of the memory used to cache data.
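A quick way to check this from the shell (this is the classic free output on Ubuntu 14.04; newer versions show an "available" column instead):
free -m
The "-/+ buffers/cache" row shows how much memory applications are actually using once buffers and cache are discounted; that figure, not the top-line "used", is what matters for your Play app.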
Having swap space is generally a good idea - it won't affect performance unless the kernel starts swapping heavily, but that's generally better than the alternative which is your applications will crash with an out-of-memory error.
I am getting the error SEVERE: java.lang.OutOfMemoryError: Java heap space for my OpenShift application. As the error suggests, I need to increase the Java heap space. I have tried SSHing into my OpenShift server and executing set JAVA_OPTS=-Xms1024M -Xmx1024M, but the error is still present.
I am deploying a .war file on a tomcat7 server.
What should I be doing instead to fix this problem?
What size gear are you using? If it is the small gear on the free account, you only get 512 MB of memory to begin with. If you need more memory you will need to upgrade to a larger gear; see http://www.openshift.com/pricing for how much memory the larger gear sizes have.
First, you need to check the heap size of the existing Java process.
Second, you need to try to increase it using the JAVA_OPTS option or otherwise. After making the change, check the process to validate that it indeed increased the heap size; a sketch of both steps is below.
Finally, if bumping the heap does not help, then as suggested you will need to perform some level of profiling or use some other technique to troubleshoot.
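A minimal sketch of those first two steps for a standalone Tomcat 7 (the values are illustrative; on OpenShift the gear's cartridge may manage these settings, so check its documentation first):
# 1. find the Tomcat JVM and inspect its current -Xms/-Xmx arguments
jps -lvm | grep Bootstrap
# 2. raise the heap in $CATALINA_HOME/bin/setenv.sh (create the file if it is missing)
export JAVA_OPTS="$JAVA_OPTS -Xms512m -Xmx1024m"
# restart Tomcat, then repeat step 1 to confirm the new settings took effect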