Does your In Memory Client support memory limits as a percentage of the total memory available to the application? .NET 4.0 supported memory limits in web.config. .NET Core doesn't support limits on total memory available: you have to set the size of each cached entry for it to enforce limits by total units, and getting the size of complex objects in bytes is expensive.
If you're referring to ServiceStack's default Memory Cache Client, it doesn't; you can only set a DateTime expiresAt or a TimeSpan expiresIn to specify how long before cached content is evicted.
I created an Azure Function App (.NET 6 Isolated) on the Consumption plan that converts documents from one format to another, such as PDFs to PNGs. Processing time for some documents can be long, depending on factors such as document size. I am aware that the Consumption plan has a memory limit of 1.5 GB per function app. The app has two function endpoints, and I would like to set a hard limit on memory usage per request so that it does not exceed 512 MB. Is this possible?
The MemoryFailPoint class does not guarantee that a block of code will execute within a specific amount of memory, though. It only checks that a certain amount of memory is available before executing the code.
Setting a memory limit per function app was available in Azure Functions before 2016. Since then there have been a few changes in the serverless design, especially in how Azure Functions utilizes dependent resources.
Microsoft disabled the memory setting in the Consumption plan based on feedback from many Azure users: under the current model, the Consumption hosting plan decides resource utilization, including memory and CPU, based on your functions' usage.
Refer to this MS article for more information on the memory settings for function apps.
How can I get the current memory usage using Node.js?
I have a backend application. When the operating system's RAM usage is greater than 7 GB, I want to decline user requests.
The main tools built into Node.js, without resorting to external programs, are these:
process.memoryUsage()
process.memoryUsage.rss()
You probably want the second one, because the resident set size is closest to the total OS RAM allocated to the process.
In my Node.js application:
1. Is it correct that RSS and external memory increase significantly, but heap memory increases slowly?
2. Is it correct that external memory increases more than heapTotal?
I tried using the Chrome inspector, but as soon as I take a heap snapshot the application crashes. Also, I don't think analyzing the heap memory will reveal the problem.
see the graph here
Any suggestions?
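A quick way to see why external can outgrow heapTotal: Buffers are allocated outside the V8 heap, so they show up in external (and RSS) while heapUsed barely moves. A minimal sketch:

```javascript
// Allocate 50 MB of Buffer data and compare memoryUsage() before/after.
// Buffers live outside the V8 heap, so "external" grows, not "heapUsed".
const before = process.memoryUsage();

const bufs = [];
for (let i = 0; i < 50; i++) {
  bufs.push(Buffer.alloc(1024 * 1024)); // 1 MB each
}

const after = process.memoryUsage();
const mb = (n) => (n / 1048576).toFixed(1);
console.log('heapUsed delta (MB):', mb(after.heapUsed - before.heapUsed));
console.log('external delta (MB):', mb(after.external - before.external));
console.log('rss delta (MB):', mb(after.rss - before.rss));
```

If your application handles file uploads or streams, this pattern (small heap deltas, large external/RSS deltas) usually points at Buffer/stream usage rather than a JavaScript-object leak, which is why heap snapshots may not show the problem.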
I had a lot of users upload files, and I found that the memory was not released after the uploads. So I stopped the Liferay Tomcat instance; there are no other applications running, yet memory usage is still high. So what is consuming the memory? My guess is that the Linux server cached the documents. Can I get some ideas or suggestions from you? I want to release the memory.
Once Java has allocated memory from the OS, it will not free it up again. This is not a behavior of Liferay, but of the underlying JVM.
You can allocate less memory to Liferay (or the app server) to begin with, but be sure to allocate at least enough for uploads to be processed (AFAIK the documents aren't necessarily all held in memory at the same time). You can also configure the cache sizes so that Liferay won't need to allocate more memory from the OS, at the price of more cache misses. I'm aware of several installations that accepted the (minor) impact of cache misses rather than increasing the overall memory requirements.
However, as memory is so cheap these days, many opt not to optimize this particular aspect. If you can't upgrade your hardware, it might be called for, though.
I'm running Node.js on a server with only 512 MB of RAM. The problem is that when I run a script, it gets killed for running out of memory.
By default the Node.js memory limit is 512 MB, so I think using --max-old-space-size is useless.
Here is the relevant content of /var/log/syslog:
Oct 7 09:24:42 ubuntu-user kernel: [72604.230204] Out of memory: Kill process 6422 (node) score 774 or sacrifice child
Oct 7 09:24:42 ubuntu-user kernel: [72604.230351] Killed process 6422 (node) total-vm:1575132kB, anon-rss:396268kB, file-rss:0kB
Is there a way to get rid of out of memory without upgrading the memory? (like using persistent storage as additional RAM)
Update:
It's a scraper that uses the Node modules request and cheerio. When it runs, it opens hundreds or thousands of webpages (though not in parallel).
If you're giving Node access to every last megabyte of the available 512 and it's still not enough, then there are two ways forward:
Reduce the memory requirements of your program. This may or may not be possible. If you want help with this, you should post another question detailing your functionality and memory usage.
Get more memory for your server. 512 MB is not much, especially if you're running other services (such as databases or message queues) that require in-memory storage.
There is a third possibility: using swap space (disk storage that acts as a memory backup), but this will have a strong impact on performance. If you still want it, look up how to set this up for your operating system; there are a lot of articles on the topic. This is OS configuration, not Node's.
Old question, but maybe this answer will help people. Using --max-old-space-size is not useless.
Before Node.js 12, the heap size limit depended on the architecture (32- or 64-bit). Following the documentation, on 64-bit machines that limit (for the old generation alone) would be 1400 MB, far above your 512 MB.
Since Node.js 12, the heap size limit takes the system RAM into account; however, Node.js' heap isn't the only thing in memory, especially if your server isn't dedicated to it. Setting --max-old-space-size lets you put a limit on the old-generation heap, and when your application gets close to it, the garbage collector is triggered and tries to free memory.
I've written a post about how I observed this: https://loadteststories.com/nodejs-kubernetes-an-oom-serial-killer-story/