Async profiling a Node.js server to review the code?

We encountered a performance problem on our Node.js server, which handles 100k IPs every day.
Now we want to review the code and find the bottleneck.
@jfriend00: from what we can see now, the problem seems to be DB access and file access, but we don't know which logic causes this access.
We are still looking for good ways to do async profiling of a Node.js server.
Here's what we tried:
Nodetime
This works for us to some extent: it can give the execution time of the code down to specific lines. However, we can't locate the problem, because the server works asynchronously and no stack or caller information can be determined.
Async-profiling
This works with async code and is said to be the first of its kind.
The problem is that we've had to integrate its JS code with our server-side code:
var AsyncProfile = require('async-profile');

AsyncProfile.profile(function () {
    ///// OUR SERVER-SIDE CODE RESIDES HERE
    setTimeout(function () {
        // doAsyncStuff
    }, 0);
});
We can only record the profile of a single server execution for one request. Can we use this code with things like forever? We have no idea.
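If the server uses Express, one idea (a sketch we have not verified; it assumes the AsyncProfile API shown above works per call) is to wrap each request in its own profile via a middleware, so every request gets a separate profile instead of one profile per process:

app.use(function (req, res, next) {
    // Profile everything (including async continuations) triggered by this request.
    AsyncProfile.profile(function () {
        next();
    });
});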
dtrace
This is too general for us to locate the problem in Node.js code.
Do you have any ideas on profiling Node.js server code? Any hints or suggestions are appreciated. Thanks.

Related

DynamoDB NodeJS DocClient scan fails silently and intermittently?

Two developers have spent an entire day trying to determine what's going on here. I realize fully that it's vague, but I'm putting it out there as we've spent all day on it.
We have an app that has been using MongoDB, and we are transitioning to DynamoDB. We are trying to connect to a remote DynamoDB database. The lead developer was able to get the problem code to work on his machine, in a remote location. Two other developers took his code and tried to run it within the firm. The following scan (below) fails silently 95% of the time; no error and no timeout is returned.
The lead developer was offsite, so we thought of proxies, connections, etc. We've tested it with Mac, with Windows (the lead uses Windows), as well as with a mobile hot spot with both Mac and Windows, to see if it was the firm's network connection. Nothing worked.
The following code is where it goes wrong:
// (This runs inside a `new Promise((resolve, reject) => { ... })` wrapper,
// which is presumably where `resolve` comes from.)
this.docClient
    .scan(params)
    .promise()
    .then(data => {
        console.log('data received', data);
        resolve(data.Items);
    })
    .catch(err => {
        console.log('error from scan', err);
    });
The call to scan never reaches then or catch.
Very sporadically -- once out of 25 tries -- it works.
The params passed to the call are good.
The credentials are up to date.
The credentials are loaded from a shared credentials file as per https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/loading-node-credentials-shared.html
When the following code is put right before the scan, the credentials are what they should be, but it makes no difference.
const credentials = new AWS.SharedIniFileCredentials({profile: 'default'});
AWS.config.credentials = credentials;
I know it's not much to go on, but any help would be appreciated. If not a possible solution, then perhaps things to investigate. We are at a complete loss.
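One diagnostic worth trying (a sketch; the region and timeout values are assumptions): the AWS SDK v2 accepts socket timeouts via httpOptions, which should turn a silently hanging connection into an explicit error in the catch handler:

const AWS = require('aws-sdk');

// Assumed region and timeout values; adjust to match the real setup.
const docClient = new AWS.DynamoDB.DocumentClient({
    region: 'us-east-1',
    httpOptions: {
        connectTimeout: 5000, // ms allowed to establish the connection
        timeout: 5000         // ms of socket inactivity before an error
    },
    maxRetries: 2
});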

Calling server-side code from client on Derby.js

I'm new to using Derby.js, and have scaffolded out a project using the yeoman generator-derby package. I thought everything was going fine, but I can't figure out what I'm doing wrong here.
A breakdown:
I have an 'app/dbWp.js' controller that exports several functions, and requires the modules 'mysql', 'async', and 'needle'
In my app/index.js, I import this file and use it like so:
app.proto.submitWp = function () {
    dbWp.createUser(this.model);
};
I call this function from the view/index.jade like so:
button.btn.btn-primary(type="button", on-click="submitWp()")
In the browser, I get numerous console.error messages complaining about the 'fs' module not being defined. After much googling, I discovered that this is due to Browserify ignoring the 'fs' module, which subsequently causes problems with the 'mysql' and 'needle' modules. But that implies this code is being executed in the browser?
So my question is: why is this trying to call the function on the client side? Obviously if it executes on the server side, as I thought it was going to, there wouldn't be a problem requiring these modules.
How can I execute this function on the server? I had this working fine with Express + socket.io before, but wanted to change frameworks and give Derby.js a shot.
I'm clearly misunderstanding something about how Derby.js is supposed to work; any help would be appreciated. Thanks!
I know this is like 4 months later, but being new to DerbyJs too, I thought I could try and help.
I personally have the equivalent working with standard HTML:
<button on-click="editContact(#contact.id)">Edit Contact</button>
This indeed runs code on the server. Can you try writing your code in standard HTML, or, perhaps better yet, see if you can do a console.log in the server method to check whether it's even getting there?
Perhaps the best approach would be to call an empty function on the server with a console.log and check both the browser console and the server console.
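Something like this (pingTest is a hypothetical name, just for the diagnostic). In app/index.js:

app.proto.pingTest = function () {
    // If this shows up in the *browser* console, the controller code is
    // running on the client (bundled by Browserify), not on the server.
    console.log('pingTest called');
};

And in the view:

button.btn(type="button", on-click="pingTest()") Ping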

Express 4 / Node JS - Gracefully managing uncaughtException

I try my very best to ensure that there are no errors in my code, but occasionally there is an uncaught exception that comes along and kills my app.
I'd rather it didn't kill the app, but instead logged the error to a file somewhere and tried to resume the app where it left off, or restarted quietly and showed a nice message to all users of the application that something has gone wrong and to give it a moment while it sorts itself out.
In the event of the app not running, it'd be good if users could be redirected somewhere that says "The app isn't running, get in touch to let me know" or something like that.
I could use process.on('uncaughtException') ... - but is this the right thing to do?
Thank you very much for taking the time to read this, and I appreciate your help and thoughts on this matter.
You can't actually resume after a crash, at least not without code written specifically for that purpose, such as persisting and restoring application state.
Otherwise, use clusters to restart the app:
var cluster = require('cluster');

// ... your code ...

process.on('uncaughtException', function (err) {
    // ... do with `err` as you please (e.g. log it to a file) ...
    cluster.fork(); // start another instance of the app (must be called from the master process)
});
When it forks, how does it affect the users - do they experience any latency while it's switching?
Clusters are usually used to keep more than a single copy of your node app running at all times, so that while one of the workers respawns, the others are still active, preventing any latency.
if (cluster.isMaster) {
    require('os').cpus().forEach(function () { cluster.fork(); }); // one worker per core
    cluster.on('exit', function () { cluster.fork(); });           // replace dead workers
}
Is there anything I should look out for? Say there was an error connecting to the database and I hadn't put in a handler to deal with it, so the app kept on crashing: would it just keep trying to fork and hog all the system resources?
I've actually not thought about that concern before now. Sounds like a good concern.
Usually the errors are user-instigated, so they're not expected to cause such an issue.
Unrecoverable errors such as a database connection failure should probably be handled before the code actually goes into creating the forks:
mongoose.connection.on('open', function () {
    // create forks here
});
mongoose.connection.on('error', function () {
    // don't start the app if the database isn't working
});
Or maybe such errors should be identified and forks shouldn't be created. But you'll probably have to know in advance which errors those could be, so you can handle them.
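A sketch of that idea (the ./app entry point and the 10-deaths-in-10-seconds threshold are arbitrary assumptions): throttle respawns so an unrecoverable error doesn't fork workers forever:

var cluster = require('cluster');

if (cluster.isMaster) {
    var recentDeaths = 0;
    require('os').cpus().forEach(function () { cluster.fork(); });

    cluster.on('exit', function () {
        recentDeaths++;
        setTimeout(function () { recentDeaths--; }, 10000);
        if (recentDeaths < 10) {
            cluster.fork(); // normal case: replace the dead worker
        } else {
            // Workers are dying too fast; likely an unrecoverable error
            // such as the database being down, so stop respawning.
            process.exit(1);
        }
    });
} else {
    require('./app'); // hypothetical entry point for the actual server code
}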

Using node ddp-client to insert into a meteor collection from Node

I'm trying to stream some syslog data into Meteor collections via Node.js. It's working fine, but the Meteor client polling cycle of ~10 sec is too long a cycle for my tastes; I'd like it to be ~1 second.
Client-side collection inserts via console are fast and all clients update instantly, as it's using DDP. But a direct MongoDB insert from the server side is subject to the polling cycle of the client(s).
So it appears that for now I'm relegated to using DDP to insert updates from my node daemon.
In the ddp-client package example, I'm able to see messages I've subscribed to, but I don't see how to actually send new messages into the Meteor collection via DDP and node.js, thereby updating all of the clients at once...
Any examples or guidance? I'd greatly appreciate it - as a newcomer to node and Meteor, I'm quickly hitting my limits.
OK, I got it working after looking closely at some code and realizing I was totally overthinking things. The protocol is actually pretty straightforward, RPC-ish stuff.
I'm happy to report that it absolutely worked around the server-side insert delay (manual Mongo inserts were taking several seconds to poll/update the clients).
If you go through DDP, you get all the real-time(ish) goodness that you've come to know and love with Meteor :)
For posterity and to hopefully help drive some other folks to interesting use cases, here's the setup.
Use Case
I am spooling some custom syslog data into a node.js daemon. This daemon then parses and inserts the data into Mongo. The idea was to come up with a real-timey browser based reporting project for my first Meteor experiment.
All of it worked well, but because I was inserting into Mongo outside of Meteor proper, the clients had to poll every ~10 seconds. In another SO post, @TimDog suggested I look at DDP for this, and his suggestion looks to have worked perfectly.
I've tested it on my system, and I can now instantly update all Meteor clients via a node.js async application.
Setup
The basic idea here is to use the DDP "call" method. It takes a list of parameters. On the Meteor server side, you export a Meteor method to consume these and do your MongoDB inserts. It's actually really simple:
Step 1: npm install ddp
Step 2: Go to your Meteor server code and do something like this, inside of Meteor.methods:
Meteor.methods({
    'push': function (k, v) { // k, v will be passed in from the DDP client.
        console.log('got a push request');
        var d = {};
        d[k] = parseInt(v, 10);
        // Now, simply use your Collection object to insert.
        Counts.insert(d, function (err, result) {
            if (!err) {
                return result;
            } else {
                return err;
            }
        });
    }
});
Now all we need to do is call this remote method from our node.js server, using the client library. Here's an example call, which is essentially a direct copy from the example.js calls, tweaked a bit to hook our new 'push' method that we've just exported:
ddpclient.call('push', ['hits', '1111'], function (err, result) {
    console.log('called function, result: ' + result);
});
Running this code inserts via the Meteor server, which in turn instantly updates the clients that are connected to us :)
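For completeness, the ddpclient above has to be created and connected first. A minimal sketch based on the ddp package's example (the host and port are assumptions for a local Meteor dev server):

var DDPClient = require('ddp');

var ddpclient = new DDPClient({
    host: 'localhost', // assumes a local `meteor` dev server
    port: 3000
});

ddpclient.connect(function (err) {
    if (err) throw err;
    // Connected; it's now safe to ddpclient.call('push', ...) as above.
});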
I'm sure my code above isn't perfect, so please chime in with suggestions. I'm pretty new to this whole ecosystem, so there's a lot of opportunity to learn here. But I do hope that this helps save some folks a bit of time. Now, back to focusing on making my templates shine with all this real-time data :)
According to this screencast, it's possible to simply call the Meteor methods declared by the collection. In your case the code would look like this:
ddpclient.call('/counts/insert', [{hits: 1111}], function (err, result) {
    console.log('called function, result: ' + result);
});

Node.js API choking with concurrent connections

This is the first time I've used Node.js and Mongo, so please excuse any ignorance. I come from a PHP background. It was my understanding that Node.js scaled well because of the event-driven nature of it. As such, I built my API in node and have been testing it on a localhost. Today, I deployed it to my cloud server and everything works great, except...
As the requests start to pile up, they start to take a long time to fulfill. With just 2 clients connecting to the API, already I'm seeing 30sec+ page load times when both clients are trying to make several requests at once (which does sometimes happen).
Most of the work done by the API is either (a) reading/writing to MongoDB, which resides on a second server in the cloud, or (b) making requests to other APIs, websites, etc. and returning the results. Neither of these operations should be blocking, but I can imagine the problem being a bottleneck either on the MongoDB server (a) or in the external APIs (b).
Of course, I will have multiple application servers in the end, but I would expect each one to handle more than a couple concurrent clients without choking.
Some considerations:
1) I have some console.logs that I left in my node code, and I have an SSH client open to monitor the cloud server. I suspect that this could cause a slowdown (see the sketch after this list)
2) I use express, mongoose, Q, request, and a handful of other modules
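A cheap way to test consideration 1 (a sketch; the NODE_ENV check assumes the usual convention) is to silence console.log in production and re-run the load test:

if (process.env.NODE_ENV === 'production') {
    // No-op console.log: synchronous writes to a slow TTY/SSH session can stall the process.
    console.log = function () {};
}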
Thanks for taking the time to help a node newb ;)
Edit: added some pics of performance graphs after some responses below...
EDIT: here's a typical callback. It is called by the Express router, and it uses the Q module and OAuth to make a POST API call to Facebook:
post: function (req, links, images, callback) {
    // removed some code that calculates the target (string) and params (obj) variables
    // the this.request function is a simple wrapper around the oauth.getProtectedResource function
    Q.ncall(this.request, this, target, 'POST', params)
        .then(function (res) {
            callback(null, res);
        })
        .fail(callback).end();
},
EDIT: some "upsert" code:
upsert: function (query, callback) {
    var id = this.data._id,
        upsertData = this.data.toObject();
    query = query || {'_id': id};
    delete upsertData._id;
    this.model.update(query, upsertData, {'upsert': true}, function (err, res, out) {
        if (err) {
            if (callback) callback(new Errors.Database({'message': 'the data could not be upserted', 'error': err, 'search': query}));
            return;
        }
        if (callback) callback(null);
    });
},
Admittedly, my knowledge of Q/promises is weak. But, I think I have consistently implemented them in a way that does not block...
Your question has provided half of the relevant data: the technology stack. However, when debugging performance issues, you also need the other half of the data: performance metrics.
You're running some "cloud servers", but it's not clear what these servers are actually doing. Are they spiked on CPU? on Memory? on IO?
There are lots of potential issues. Are you running Express in production mode? Are you taking up too much IO on your MongoDB server? Are you legitimately downloading too much data? Did you get caught in an infinite Node.JS loop? (it happens)
I would like to provide better advice, but without knowing the status of the servers involved it's really impossible to start picking at any specific underlying technology. You may be a "Node newb", but basic server monitoring is pretty standard across programming languages.
Thank you for the extra details. I will reiterate the most important part of my comments above: where are these servers blocked?
CPU? (clearly not from your graph)
Memory? (doesn't seem likely here)
IO? (where are the IO graphs, what is your DB server doing?)
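One cheap probe to run while gathering those graphs (a sketch, not a definitive tool): measure event-loop lag. If the logged lag climbs under load, something is blocking the single Node.js thread even though CPU and memory look fine:

var last = Date.now();
setInterval(function () {
    var lag = Date.now() - last - 1000; // how late this 1-second timer fired
    if (lag > 100) console.log('event loop lag: ' + lag + 'ms');
    last = Date.now();
}, 1000);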