I develop a Node.js application with MongoDB on Windows. To check memory, I usually use the Windows Task Manager, but I don't think it is a good option.
How can I check the exact memory usage of MongoDB queries? I can aggregate all the needed data in one query, but maybe two projections would be a better option.
You can use a simple one-liner with Object.entries() and convert the values to a readable format:
Object.entries(process.memoryUsage()).forEach(([key, value]) => console.log(`${key}: ${(value / 1024 / 1024).toFixed(4)} MB`))
output:
rss: 70.7695 MB
heapTotal: 85.4063 MB
heapUsed: 55.2614 MB
external: 0.0794 MB
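To compare two query strategies, such as one aggregation versus two separate projections as in the question, you can sample heapUsed before and after each operation and log the delta. A minimal sketch (runQuery is a hypothetical placeholder for your own MongoDB call, and the delta is approximate since GC may run between the samples):

// Measure the approximate heap delta around an async operation.
// `runQuery` is a hypothetical placeholder for your own MongoDB call.
async function measureQuery(label, runQuery) {
  const before = process.memoryUsage().heapUsed;
  const result = await runQuery();
  const after = process.memoryUsage().heapUsed;
  console.log(`${label}: ${((after - before) / 1024 / 1024).toFixed(4)} MB`);
  return result;
}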
You can use the built-in process.memoryUsage() method. The usage is reported in bytes.
Here is a simple example:
const usage = process.memoryUsage();
// Iterate over every reported field instead of hard-coding four keys
// (newer Node versions report additional fields such as arrayBuffers).
for (const [key, value] of Object.entries(usage)) {
  console.log(`${key}: ${value} bytes`);
}
example log:
rss: 634034 bytes
heapTotal: 340239 bytes
heapUsed: 129323 bytes
external: 10232 bytes
I want to determine how large an array will be in memory when my function executes. Determining the size of the array is easy, but I am not seeing a correlation between the size of my array and the Max Memory Used that gets recorded at the end of a Lambda execution.
There is no apparent correlation after inspecting process.memoryUsage() before and after setting the array, as well as the Max Memory Used reported by the Lambda. I can't find a good resource that indicates how/what Lambda actually uses to determine the memory used. Any help would be appreciated.
This question made me curious myself, so I decided to run some tests to see how memory allocation works inside an AWS Lambda container.
Test 1: Create array with 100,000 elements in memory
Memory size: 128MB
exports.handler = async (event) => {
  const arr = [];
  for (let i = 0; i < 100000; i++) {
    arr.push(i);
  }
  console.log(process.memoryUsage());
  return 'done';
};
Result: 56 MB
2019-04-30T01:00:59.577Z cd473d5b-986c-436e-8b36-b114410c84cf { rss: 35299328,
heapTotal: 11853824,
heapUsed: 7590320,
external: 8224 }
REPORT RequestId: 2a7548f9-5d2f-4060-8f9e-deb228730d8c Duration: 155.74 ms Billed Duration: 200 ms Memory Size: 128 MB Max Memory Used: 56 MB
Test 2: Create array with 1,000,000 elements in memory
Memory size: 128MB
exports.handler = async (event) => {
  const arr = [];
  for (let i = 0; i < 1000000; i++) {
    arr.push(i);
  }
  console.log(process.memoryUsage());
  return 'done';
};
Result: 99 MB
2019-04-30T01:03:44.582Z 547a9de8-35f7-48e2-a53f-ab669b188f9a { rss: 80093184,
heapTotal: 55263232,
heapUsed: 52951088,
external: 8224 }
REPORT RequestId: 547a9de8-35f7-48e2-a53f-ab669b188f9a Duration: 801.68 ms Billed Duration: 900 ms Memory Size: 128 MB Max Memory Used: 99 MB
Test 3: Create array with 10,000,000 elements in memory
Memory size: 128MB
exports.handler = async (event) => {
  const arr = [];
  for (let i = 0; i < 10000000; i++) {
    arr.push(i);
  }
  console.log(process.memoryUsage());
  return 'done';
};
Result: 128 MB
REPORT RequestId: f1df4f39-e0fc-4b44-8f90-c3c0e3d9c12d Duration: 3001.33 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 128 MB
2019-04-30T00:54:32.970Z f1df4f39-e0fc-4b44-8f90-c3c0e3d9c12d Task timed out after 3.00 seconds
I think we can pretty confidently say that the memory used by the Lambda container does go up based on the size of an array in memory; in our third test we ended up maxing out our memory and timing out. My assumption here is that the process that controls the execution of the Lambda also monitors how much memory that execution acquires, likely via cat /proc/meminfo as trognanders suggests.
Okay, so I used the following code and increased the number of array elements to get a correlation. Three tests were done on each maximum value of the array. Lambda was set to 1024 MB. Each array element is 10 characters/bytes long.
const util = require('util');
const exec = util.promisify(require('child_process').exec);

// Reads the container's memory info and returns the "Active" value in MB.
async function GetContainerUsage() {
  const { stdout, stderr } = await exec('cat /proc/meminfo');
  // console.log(stdout);
  let memInfoSplits = stdout.split(/[\n: ]/).filter(val => val.trim());
  // console.log(memInfoSplits[19]); // This returns the "Active" value which seems to be used
  return Math.round(memInfoSplits[19] / 1024);
}

// Returns process.memoryUsage() with all values rounded to MB.
function GetMemoryUsage() {
  const used = process.memoryUsage();
  for (let key in used)
    used[key] = Math.round(used[key] / 1024 / 1024);
  return used;
}

exports.handler = async (event, context) => {
  let max = event.ArrTotal;
  let arr = [];
  for (let i = 0; i < max; i++) {
    arr.push("1234567890"); // 10 bytes
  }
  let csvLine = [];
  let jsMemUsed = GetMemoryUsage();
  let containerMemUsed = await GetContainerUsage();
  csvLine.push(event.ArrTotal);
  csvLine.push(jsMemUsed.rss);
  csvLine.push(jsMemUsed.heapTotal);
  csvLine.push(jsMemUsed.heapUsed);
  csvLine.push(jsMemUsed.external);
  csvLine.push(containerMemUsed);
  console.log(csvLine.join(','));
  return true;
};
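For a quick sanity check outside of Lambda, the handler can also be invoked directly with a test event. A sketch, assuming the code above is saved as index.js (note that GetContainerUsage needs a Linux environment for /proc/meminfo):

// Local invocation sketch; the filename index.js is an assumption.
const { handler } = require('./index');
handler({ ArrTotal: 1000000 }, {}).then(() => console.log('finished'));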
This produced the following CSV values (the last column, Lambda reported usage, comes from the Max Memory Used in the REPORT line):
Array Count, JS rss, JS heapTotal, JS heapUsed, external, System Active, Lambda reported usage
1,30,7,5,0,53,54
1,31,7,5,0,53,55
1,30,8,5,0,53,55
1000,30,8,5,0,53,55
1000,30,8,5,0,53,55
1000,30,8,5,0,53,55
10000,30,8,5,0,53,55
10000,31,8,6,0,54,56
10000,33,7,5,0,54,57
100000,32,12,7,0,56,57
100000,34,11,8,0,57,59
100000,36,12,10,0,59,61
1000000,64,42,39,0,88,89
1000000,60,36,34,0,84,89
1000000,60,36,34,0,84,89
10000000,271,248,244,0,294,297
10000000,271,248,244,0,295,297
10000000,271,250,244,0,295,297
Graphed, these values show the relationship clearly. At 10 million elements the array is assumed to be 10M × 10 bytes = 100 MB, so there must be some overhead I am missing somewhere, as about 200 MB is used elsewhere. But at least there is a clear linear correlation, which I can now work with.
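One hedged guess at part of that overhead (these are assumptions about V8 internals, not measurements from the tests above): each array element occupies a pointer-sized slot on top of the string data itself, and a dynamically grown array can transiently reserve extra capacity:

// Back-of-envelope sketch; slot size and growth factor are assumptions.
const elements = 10000000;
const slotBytes = 8;                                  // one tagged pointer per element (64-bit V8)
const slotsMB = (elements * slotBytes) / 1024 / 1024; // ~76 MB just for the array slots
const withGrowthMB = slotsMB * 2;                     // up to ~153 MB transiently while growing
console.log({ slotsMB, withGrowthMB });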
Capacity specification for FaaS vs PaaS
The whole idea of computing with Lambda functions (FaaS) is to spend the least effort on capacity planning. Now, given that it's not possible for the cloud provider to default every choice, memory settings and timeouts are some of the settings AWS uses to configure a function. Apparently, if you test it out, you may see that the memory setting does not just determine the memory but also the CPU compute capacity. This is as quoted by AWS:
Lambda allocates CPU power linearly in proportion to the amount of memory configured. At 1,792 MB, a function has the equivalent of 1 full vCPU (one vCPU-second of credits per second)
Ref https://docs.aws.amazon.com/lambda/latest/dg/resource-model.html
Hence, it is not enough to consider just the runtime memory footprint, but also the CPU speed with which the function executes and finishes.
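As a quick worked example of that proportionality (my own arithmetic from the quoted 1,792 MB = 1 vCPU ratio):

// Approximate vCPU share implied by a memory setting, per the quoted ratio.
const vcpuShare = memoryMB => memoryMB / 1792;
console.log(vcpuShare(128).toFixed(2));  // ~0.07 vCPU at 128 MB
console.log(vcpuShare(1792).toFixed(2)); // 1.00 vCPU at 1,792 MB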
AWS does not call out what capacity or CPU/memory/server type/IOPS they use in these containers, nor do they show that usage in any CloudWatch metrics the way an EC2 instance does.
Hence we need to choose the memory setting based on testing.
Each Lambda (Node.js) will have its own memory footprint and a dedicated set of node module dependencies. Hence, each one needs to be load and performance tested to tune the memory and timeout settings, and cannot be planned upfront.
General research observation
With any standard Node.js based Lambda function that has logging and just does hello world, deployed without a VPC:
- 128 MB may show an execution time of, say, 150+ ms and a billed duration of 200 ms at 128 MB
- 256 MB may show an execution time of, say, 80+ ms and a billed duration of 100 ms at 256 MB
A lower memory setting does not necessarily mean lower cost, so fine tuning based on load and performance tests is the best way to determine the right memory setting.
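The hello-world numbers above illustrate why: billing is roughly memory times billed duration (GB-seconds), so doubling the memory halved the latency at the same cost. My own arithmetic from the figures quoted above:

// GB-seconds for the two example configurations quoted above.
const gbSeconds = (memoryMB, billedMs) => (memoryMB / 1024) * (billedMs / 1000);
console.log(gbSeconds(128, 200)); // 0.025 GB-s at 128 MB / 200 ms
console.log(gbSeconds(256, 100)); // 0.025 GB-s at 256 MB / 100 ms: same cost, half the latency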
Attributes like timeout are purely based on how long the function takes to complete its activity, which can be much higher for batch job operations (say 10m) than for a web service that expects a quick response (say 10s). Timing out early instead of waiting on long-pending dependencies is important to avoid high billing in the case of high-throughput APIs. In the case of an API, a slow timeout can cause alternate containers (functions) to spin up to scale for new requests, which can also impact the number of IPs allocated within the subnet that hosts the function (if the function runs within a VPC).
Lambda limits on ENIs and IPs, or the maximum Lambda concurrency within an account/region, are important factors to consider while planning for capacity.
Ref https://docs.aws.amazon.com/lambda/latest/dg/limits.html
It is quite surprising to see the heapUsed and external counters decrease while heapTotal shows a spike.
***Memory Log - Before soak summarization 2
"rss":217214976,"heapTotal":189153280,"heapUsed":163918648,"external":1092977
Spike in rss: 4096
Spike in heapTotal: 0
Spike in heapUsed: 22240
Spike in external: 0
***Memory Log - Before summarizing log summary for type SOAK
"rss":220295168,"heapTotal":192294912,"heapUsed":157634440,"external":318075
Spike in rss: 3080192
Spike in heapTotal: 3141632
Spike in heapUsed: -6284208
Spike in external: -774902
Any ideas why heapTotal is drastically increasing despite heapUsed and external going drastically down? I really thought that heapTotal = heapUsed + external.
I am using the following code to track memory:
const fs = require('fs');
const pathUtils = require('path');
const _ = require('lodash');

const DIR_MODE = 0o755; // assumed mode for created directories

var prevStats;

function logMemory(path, comment) {
  if (!fs.existsSync(path)) {
    fs.mkdirSync(path, DIR_MODE);
  }
  path = pathUtils.posix.join(path, "memoryLeak.txt");
  if (comment) comment = comment.replace(/(.+)/, ' - $1');
  var memStats = process.memoryUsage();
  fs.appendFileSync(path, `\r\n\r\n***Memory Log ${comment}\r\n` + JSON.stringify(memStats));
  if (prevStats) {
    _.forEach(
      Object.keys(memStats),
      key => {
        if (memStats.hasOwnProperty(key)) {
          fs.appendFileSync(path, `\r\nSpike in ${key}: ${memStats[key] - prevStats[key]}`);
        }
      }
    );
  }
  prevStats = memStats;
}
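For reference, a typical usage sketch calls the helper at the checkpoints being compared; the directory and labels here are placeholders:

// Usage sketch: log memory at named checkpoints (directory/labels are placeholders).
logMemory('./memlogs', 'Before soak summarization');
// ... perform the summarization work ...
logMemory('./memlogs', 'After soak summarization');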
Quoted from What do the return values of node.js process.memoryUsage() stand for?: RSS is the resident set size, the portion of the process's memory held in RAM (how much memory this process holds in RAM, in bytes). The file size of 'text.txt' used in this example is 370 KB (378,880 bytes).
var http = require('http');
var fs = require('fs');
var express = require('express');
var app = express();

console.log("On app bootstrap = ", process.memoryUsage());

app.get('/test', function(req, res) {
  fs.readFile(__dirname + '/text.txt', function(err, file) {
    console.log("When File is available = ", process.memoryUsage());
    res.end(file);
  });
  setTimeout(function() {
    console.log("After sending = ", process.memoryUsage());
  }, 5000);
});

app.listen(8081);
So on app bootstrap: { rss: 22069248, heapTotal: 15551232, heapUsed: 9169152 }
After I made 10 requests to '/test', the situation is:
When File is available = { rss: 33087488, heapTotal: 18635008, heapUsed: 6553552 }
After sending = { rss: 33447936, heapTotal: 18635008, heapUsed: 6566856 }
So from app bootstrap to the 10th request, rss increased by 11,378,688 bytes, which is roughly 30 times the size of the text.txt file.
I know that this code buffers the entire text.txt file into memory for every request before writing the result back to clients, but I expected that after the request finished, the memory occupied for 'text.txt' would be released. But that is not the case?
Second, how do I set the maximum amount of RAM the Node process can consume?
In JS, the garbage collector does not run immediately after the execution of your code, so the memory is not freed immediately after execution. You can run GC manually after working with large objects if you care about memory consumption. More information can be found here.
setTimeout(function() {
  global.gc(); // note: global.gc() is only available when Node is started with --expose-gc
  console.log("After sending = ", process.memoryUsage());
}, 5000);
To look at your memory allocation, you can run your server with v8-profiler and take a heap snapshot. More info here.
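On newer Node versions (11.13+) you can also write a heap snapshot without an external module, using the built-in v8 module; the resulting .heapsnapshot file can be loaded into Chrome DevTools. A minimal sketch:

// Write a heap snapshot with the built-in v8 module (Node 11.13+).
const v8 = require('v8');
const file = v8.writeHeapSnapshot(); // returns the generated .heapsnapshot filename
console.log(`Heap snapshot written to ${file}`);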
Try running your example again and give the process some time to run garbage collection. Keep an eye on your process's memory usage with a system monitor; it should clear after some time. If it doesn't go down, the process still cannot exceed the limits mentioned below.
According to the Node documentation, the memory limit is 512 MB on 32-bit and 1 GB on 64-bit systems. It can be increased if necessary (for example via the --max-old-space-size flag, which sets the heap limit in MB).
Running this:
setInterval(function() {
  console.log(process.memoryUsage());
}, 1000);
shows memory usage constantly growing:
{ rss: 9076736, heapTotal: 6131200, heapUsed: 2052352 }
... some time later
{ rss: 10960896, heapTotal: 6131200, heapUsed: 2939096 }
... some time later
{ rss: 11177984, heapTotal: 6131200, heapUsed: 3141576 }
Why does this happen?
The program isn't empty: it's running a timer, checking memory usage, and writing to the console once a second. That causes objects to be created on the heap, and until garbage collection runs, your heap usage will keep increasing.
If you let it run, you should eventually see heapUsed go back down each time garbage collection is triggered.
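To see that effect, you could allocate some short-lived garbage on each tick. A sketch: heapUsed should climb and then drop back each time GC runs, producing a sawtooth rather than unbounded growth:

// Allocate short-lived garbage every second; heapUsed climbs until
// garbage collection runs, then drops back (a sawtooth pattern).
setInterval(function() {
  const garbage = new Array(10000).fill('x'); // unreferenced after this tick
  console.log(`heapUsed: ${process.memoryUsage().heapUsed} bytes`);
}, 1000);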