Node JS memory management on ARM single board - node.js

I am using sharp image processing module to resize the image and render it on UI.
app.get('/api/preview-small/:filename', (req, res) => {
  let filename = req.params.filename;
  sharp('files/' + filename)
    .resize(200, 200, {
      fit: sharp.fit.inside,
      withoutEnlargement: true
    })
    .toFormat('jpeg')
    .toBuffer()
    .then(function(outputBuffer) {
      res.writeHead(200, { "Content-Type": "image/jpeg" });
      res.write(outputBuffer);
      res.end();
    })
    .catch(function(err) {
      res.writeHead(500);
      res.end(err.message);
    });
});
I am running the above code on a Rock64 single-board computer with 1 GB of RAM. When I run the Linux htop command and monitor memory utilization, I can see memory usage climbing from about 10% to 60% with every call to the Node.js app, and it never comes down.
[Screenshot: htop showing CPU and memory usage]
The application itself runs without issues; my only concern is that memory usage does not come down even when the app is idle, and I am not sure whether this will eventually crash the application if it runs continuously.
Or, if I move a similar code snippet to the cloud, will it keep occupying memory even when it is not handling requests?
Is anyone using the sharp module facing a similar issue, or is this a known issue with Node.js? Is there a way to flush/clear the memory, or will Node garbage-collect it on its own?
Any help is appreciated. Thanks

sharp has some memory debugging stuff built in:
http://sharp.dimens.io/en/stable/api-utility/#cache
You can control the libvips cache, and get stats about resource usage.
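For example, a minimal sketch of those utility calls (names as documented in the sharp utility API linked above):
const sharp = require('sharp');

// Shrink or disable the libvips operation cache so memory is released sooner.
sharp.cache(false); // or e.g. sharp.cache({ memory: 50, files: 0, items: 100 })

// Fewer libvips threads means a smaller peak working set on a 1 GB board.
sharp.concurrency(1);

// Inspect resource usage at runtime.
console.log(sharp.cache());    // current cache statistics
console.log(sharp.counters()); // { queue, process } task counters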
The node version has a very strong effect on memory behaviour. This has been discussed a lot on the sharp issue tracker, see for example:
https://github.com/lovell/sharp/issues/429
Or perhaps:
https://github.com/lovell/sharp/issues/778

Related

Why is bcryptjs slower on AWS Lambda than in local docker?

I have a Lambda function written in NodeJS. I noticed it takes several seconds to complete. I added logs and found that bcrypt is quite slow.
Packages:
"dependencies": {
"bcryptjs": "^2.4.3",
Source code:
const bcrypt = require('bcryptjs');

console.log("User was found"); // the following part takes more than 1 second!
if (bcrypt.compareSync(password, user.password)) {
  console.log("Password verified");
}
This is a log from AWS LogWatch:
2020-01-13T20:25:30.951 User was found
2020-01-13T20:25:32.670 Password verified
and
2020-01-13T20:31:20.192 User was found
2020-01-13T20:31:21.550 Password verified
So it takes 1.7 seconds. I ran the same code in docker on my machine
2020-01-13T20:09:48.109 User was found
2020-01-13T20:09:48.211 Password verified
Locally it takes just 120 ms. AWS uses NodeJS 10.x; the local Docker image is probably 8.x. I do not know how to tell Docker to pick up the changes in packaged.yaml.
Is this a NodeJS regression, or some issue with the AWS configuration?
Encryption performance is typically CPU bound. AWS Lambda CPU is proportional to RAM, so you should choose the largest (3008 MB) and re-test.
When I run this inside a Lambda function handler on a 3008 MB RAM Lambda in us-east-1, the compareSync call consistently takes 90-100ms. With a 128 MB Lambda, it takes a little over 1s.
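If you want to see the cost in your own logs, here is a minimal sketch that times the comparison (the password and hash below are illustrative, not from the question):
const bcrypt = require('bcryptjs');

// Illustrative credentials only; cost factor 10 matches the bcryptjs default.
const password = 'correct horse battery staple';
const hash = bcrypt.hashSync(password, 10);

console.time('compareSync');
const match = bcrypt.compareSync(password, hash);
console.timeEnd('compareSync'); // roughly 100 ms on a 3008 MB Lambda, over 1 s on 128 MB
console.log('match:', match);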
On a related note, it's helpful to understand that choosing the lowest (128 MB) RAM option, simply because it is cheaper per GB-s, is not always the best thing to do. While the highest RAM option (with proportionally higher CPU and network) is certainly more expensive per GB-s, it also completes Lambda functions a lot quicker. So, for example, you might be able to complete your task in 1/10th of the time at only 1.75x the cost. In a lot of situations, that can be very valuable.
There is a project that can help you tune price/performance for your Lambdas: alexcasalboni/aws-lambda-power-tuning
I had the same issue. It is because you are using the bcryptjs library; try bcrypt instead, which is much faster. bcryptjs is implemented in plain JavaScript, which is why it is slow, whereas bcrypt uses C++ bindings.
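If your stored hashes are standard bcrypt hashes, the swap is usually a one-line change, since the native module exposes the same synchronous API. A rough sketch:
// npm install bcrypt  (native C++ bindings instead of pure JavaScript)
const bcrypt = require('bcrypt');

// Stand-ins for the question's `password` and `user.password`.
const password = 'hunter2';
const storedHash = bcrypt.hashSync(password, 10);

// compareSync has the same signature as in bcryptjs, so it is a drop-in replacement.
if (bcrypt.compareSync(password, storedHash)) {
  console.log('Password verified');
}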

APP Engine Google Cloud Storage - Error 500 when downloading a file

I'm getting an error 500 when I download a JSON file (approx. 2 MB) using the nodejs-storage library. The file gets downloaded without any problem, but once I render the view and pass the file contents as a parameter, the app crashes with "The server encountered an error and could not complete your request."
file.download(function(err, contents) {
  var messages = JSON.parse(contents);
  res.render('_myview.ejs', {
    "messages": messages
  });
});
I am using the App Engine Standard Environment and have this further error detail:
Exceeded soft memory limit of 256 MB with 282 MB after servicing 11 requests total. Consider setting a larger instance class in app.yaml
Can someone give me a hint? Thank you in advance.
500 error messages are quite hard to troubleshoot because of all the possible things that can go wrong with App Engine instances. A good way to start debugging this type of error on App Engine is to go to Stackdriver Logging, query for the 500 error messages, click the expander arrow, and check the specific error code. In the specific case of the Exceeded soft memory limit... error message in the App Engine standard environment, my suggestion would be to choose an instance class better suited to your application's load.
Assuming you are using automatic scaling, you could try an F2 instance class (which has higher memory and CPU limits than the default F1) and start from there. Adding or modifying the instance_class element of your app.yaml file to instance_class: F2 is enough to make that change, and you can later switch to whichever instance class best suits your application's load, as sketched below.
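A rough sketch of the relevant app.yaml fragment (the runtime line is only an assumption for a Node.js standard-environment app):
runtime: nodejs10
instance_class: F2  # default for automatic scaling is F1; F2 has higher memory and CPU limits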
Notice that increasing the instance class directly affects your billing and you can use the Google Cloud Platform Pricing Calculator to get an estimate of the costs associated to using a different instance class for your App Engine application.

Firebase Functions - ERROR, but no Event Message in Console

I have written a function on Firebase that downloads an image (base64) from Firebase Storage and sends it as the response to the user:
const functions = require('firebase-functions');
import os from 'os';
import path from 'path';
const storage = require('firebase-admin').storage().bucket();

export default functions.https.onRequest((req, res) => {
  const name = req.query.name;
  let destination = path.join(os.tmpdir(), 'image-randomNumber');
  return storage.file('postPictures/' + name).download({
    destination
  }).then(() => {
    res.set({
      'Content-Type': 'image/jpeg'
    });
    return res.status(200).sendFile(destination);
  });
});
My client calls that function multiple times, one after another (in series), to load a range of images for display, about 20 of them, averaging 4 KB each.
After 10 or so pictures have been loaded (amount varies), all other pictures fail. The reason is that my function does not respond correctly, and the firebase console shows me that my function threw an error:
The above image shows that:
A request to the function (called "PostPictureView") succeeds.
Afterwards, three requests to the controller fail.
In the end, after executing a new request to the "UserLogin" function, that one fails as well.
The response given to the client is the default "Error: Could not handle request". After waiting a few seconds, all requests get handled again as they are supposed to be.
My best guesses:
The project is on free tier, maybe google is throttling something? (I did not hit any limits afaik)
Is there a limit of messages the google firebase console can handle?
Could the tmpdir of the Functions app be running low on space? I never delete the temporary files so far, but I would expect that either Google deletes them automatically or warns me in some other way that space is running low.
Does someone know an alternative way to receive the error messages, or has experienced similar issues? (As Firebase Functions is still in Beta, it could also be an error from google)
Btw: Downloading the image from the client (Android app, react-native) directly is not possible, because I will use the function to check for access permissions later. The problem is reproducible for me.
In Cloud Functions, the /tmp directory is backed by memory. So, every file you download there is effectively taking up memory on the server instance that ran the function.
Cloud Functions may reuse server instances for repeated calls to the same function. This means your function is downloading another file (to that same instance) with each invocation. Since the names of the files are different each time, you are accumulating files in /tmp that each occupy memory.
At some point, this server instance is going to run out of memory with all these files in /tmp. This is bad.
It's a best practice to always clean up files after you're done with them. Better yet, if you can stream the file content from Cloud Storage to the client, you'll use even less memory (and be billed even less for the memory-hours you use).
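As a rough sketch of both options inside the question's handler (fs and the streaming call are standard Node and Cloud Storage APIs; everything else reuses the question's variables):
const fs = require('fs');

// Option 1: delete the temp file once the response has been sent, freeing memory-backed /tmp.
return storage.file('postPictures/' + name).download({ destination })
  .then(() => {
    res.set({ 'Content-Type': 'image/jpeg' });
    return res.status(200).sendFile(destination, () => fs.unlink(destination, () => {}));
  });

// Option 2: avoid /tmp entirely and stream straight from Cloud Storage to the client.
res.set({ 'Content-Type': 'image/jpeg' });
storage.file('postPictures/' + name)
  .createReadStream()
  .on('error', err => res.status(500).end(err.message))
  .pipe(res);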
After some more research, I've found the solution: the Firebase Console does not seem to show all error information.
For detailed information about your functions, and errors that might be omitted in the Firebase Console, check out the Google Cloud Functions console.
There I saw that memory usage (as suggested by @Doug Stevenson) never went above 80 MB (limit of 256 MB) and never shut the server down. Moreover, there is a DNS resolution limit for the free tier, which my application hit.
The documentation points to a limit on DNS resolutions: 40,000 per 100 seconds. In my case this limit was never hit - Firebase counts roughly 8,000 executions in total - but it seems there is a lower, undocumented limit for the free tier. After upgrading my account (I started the trial that GCP offers, so I am not actually paying anything) and linking the project to the billing account, everything works perfectly.

Make node dump performance map continuously or on non-clean exit

I'm attempting to create a flame graph for a Node app that's causing some issues, and while I am able to profile it using Xcode and get its CPU trace, the Node perf map isn't dumping to, for example, /tmp/perf-30001.map, when I exit it uncleanly (unfortunately, the issue I'm running into isn't allowing me to exit the Node app cleanly). I'm running the app with the --perf-basic-prof flag.
Is there any way to get Node to dump the memory map either continuously or on any kind of exit?
The map file is written continuously; be sure to use at least Node 0.12 and to disable kptr_restrict with sudo sysctl kernel/kptr_restrict=0.
And if you want a memory dump at exit that you can later open in the V8 debugger:
var heapdump = require('heapdump');

process.on('exit', function() {
  heapdump.writeSnapshot(Date.now() + '.heapsnapshot');
});
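If the process cannot exit cleanly, heapdump can also be triggered while the app is running: on non-Windows platforms the module writes a snapshot to the current working directory when the process receives SIGUSR2, for example:
var heapdump = require('heapdump');

// Either run `kill -USR2 <pid>` from a shell, or trigger it from inside the app:
process.kill(process.pid, 'SIGUSR2'); // heapdump writes a *.heapsnapshot file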

NodeJS application with memory leak, where is it?

I have a NodeJS application that subscribes to messages on a Redis server. It collects the messages for a period of 5 seconds and then pushes them out to the connected clients; the code looks something like this:
io.sockets.on('connection', function (socket) {
  nClients++;
  console.log("Number of clients connected " + nClients);
  socket.on('disconnect', function () {
    nClients--;
    console.log("Number of clients remaining " + nClients);
  });
});
Receiving messages to send out to the clients
cli_sub.on("message", function(channel, message) {
  oo = JSON.parse(message);
  ablv_last_message[oo[0]["base"] + "_" + oo[0]["alt"]] = message;
});

setInterval(function() {
  Object.keys(ablv_last_message).forEach(function(key) {
    io.sockets.emit('ablv', ablv_last_message[key]);
  });
  ablv_last_message = [];
}, 5000);
SOLUTION FOUND (at least I think so): Node didn't crash because it reached some internal memory limit; it looks as if it crashed because my VPS ran out of memory. It was a 2 GB VPS running one or two other processes too. After upgrading it to 4 GB, Node runs smoothly. Yes, it always sits around 1.6 to 2.0 GB, but I believe it's the GC doing its work here.
It is better to try some tools for finding leaks in Node.js.
Tools for Finding Leaks
Jimb Esser’s node-mtrace, which uses the GCC mtrace utility to profile heap usage.
Dave Pacheco’s node-heap-dump takes a snapshot of the V8 heap and serializes the whole thing out in a huge JSON file. It includes tools to traverse and investigate the resulting snapshot in JavaScript.
Danny Coates’s v8-profiler and node-inspector provide Node bindings for the V8 profiler and a Node debugging interface using the WebKit Web Inspector.
Felix Gnass’s fork of the same that un-disables the retainers graph.
Felix Geisendörfer’s Node Memory Leak Tutorial is a short and sweet explanation of how to use the v8-profiler and node-debugger, and is presently the state of the art for most Node.js memory-leak debugging.
Joyent’s SmartOS platform, which furnishes an arsenal of tools for debugging Node.js memory leaks.
From Tracking Down Memory Leaks in Node.js – A Node.JS Holiday Season.
And another blog
It looks to me that you keep adding keys to the global ablv_last_message object and never clean it.
You may use Object.getOwnPropertyNames rather than Object.keys.
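If that is the leak, a minimal sketch of a fix is to drop each key right after it has been broadcast (based on the question's setInterval block):
setInterval(function() {
  Object.keys(ablv_last_message).forEach(function(key) {
    io.sockets.emit('ablv', ablv_last_message[key]);
    delete ablv_last_message[key]; // release the reference so V8 can reclaim the message
  });
}, 5000);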
