How to destroy a Firebase ref in Node.js

If I do this in node:
console.log('1');
console.log('2');
outputs:
1
2
And the process ends.
If I change it to this:
console.log('1');
var Firebase = require('firebase');
var ref = new Firebase('https://<some-base>.firebaseio.com/');
console.log('2');
outputs:
1
2
and the process continues.
I believe this is because ref is keeping the process alive. I know I can use process.exit, but I would prefer not to. I actually don't want the process to exit anyway; I just want to make sure I don't have a memory leak where my Firebase ref lasts forever. Is there any way to destroy a Firebase reference once I'm done with it?

[Engineer at Firebase] Currently, instantiating the Firebase client with new Firebase(...) will create a long-lived persistent connection that keeps the Node.js process alive.
This is admittedly not ideal for a bunch of use cases, and we have some work to do here to ensure that the process exits cleanly and automatically when there are no outstanding Firebase listeners or pending writes to the server, but it's been medium / low priority. I'd expect a "fix" to be released by Q2 '15, hopefully Q1.
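In the meantime, one possible workaround is to drop the connection explicitly once you're done. A minimal sketch, assuming a legacy (2.x) client where the static Firebase.goOffline() method is available; whether the process then exits cleanly may still depend on the client version:
var Firebase = require('firebase');
var ref = new Firebase('https://<some-base>.firebaseio.com/');

ref.once('value', function (snapshot) {
  console.log(snapshot.val());
  // Tear down the persistent connection so the event loop can drain.
  Firebase.goOffline();
});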

One workaround I found when using tape was to call test.onFinish(() => process.exit()); at the end. It's not ideal but it seems to get the job done running it both directly and with a test runner.
Example:
const test = require('tape');

test('Some test', (t) => {
  // test code
  t.end();
});

test('Another test', (t) => {
  // test code
  t.end();
});

// Force the process to exit once all tests have finished.
test.onFinish(() => process.exit());

Related

Querying DB2 every 15 seconds causing memory leak in NodeJS

I have an application which checks for new entries in DB2 every 15 seconds on the iSeries using IBM's idb-connector. I have async functions which return the result of the query to socket.io which emits an event with the data included to the front end. I've narrowed down the memory leak to the async functions. I've read multiple articles on common memory leak causes and how to diagnose them.
MDN: memory management
Rising Stack: garbage collection explained
Marmelab: Finding And Fixing Node.js Memory Leaks: A Practical Guide
But I'm still not seeing where the problem is. Also, I'm unable to get permission to install node-gyp on the system, which means most memory-management tools are off limits, as memwatch, heapdump, and the like need node-gyp to install. Here's an example of the functions' basic structure:
const { dbconn, dbstmt } = require('idb-connector'); // require idb-connector

async function queryDB() {
  const sSql = `SELECT * FROM LIBNAME.TABLE LIMIT 500`;
  // create new promise
  let promise = new Promise(function (resolve, reject) {
    // create new connection
    const connection = new dbconn();
    connection.conn("*LOCAL");
    const statement = new dbstmt(connection);
    statement.exec(sSql, (rows, err) => {
      if (err) {
        throw err;
      }
      let ticks = rows;
      statement.close();
      connection.disconn();
      connection.close();
      resolve(ticks.length); // resolve promise with varying data
    });
  });
  let result = await promise; // await promise
  return result;
}
async function getNewData() {
  const data = await queryDB(); // get new data
  io.emit('newData', data); // push to front end
  setTimeout(getNewData, 2000); // check again in 2 seconds
}
Any ideas on where the leak is? Am I using async/await incorrectly? Or am I creating/destroying DB connections improperly? Any help figuring out why this code is leaky would be much appreciated!
Edit: Forgot to mention that I have limited control over the backend processes, as they are handled by another team. I'm only retrieving the data they populate the DB with and adding it to a web page.
Edit 2: I think I've narrowed it down to the DB connections not being cleaned up properly. But, as far as I can tell, I've followed the instructions suggested in their GitHub repo.
I don't know the answer to your specific question, but instead of issuing a query every 15 seconds, I might go about this in a different way. Reason being that I don't generally like fishing expeditions when the environment can tell me an event occurred.
So in that vein, you might want to try a database trigger that loads the key of the row into a data queue on add (or even on change or delete if necessary). Then you can just make an async call to wait for a record on the data queue. This is more real-time, and the event handler is only called when a record shows up. The handler can get the specific record from the database since you know its key. Data queues are much faster than database IO, and place little overhead on the trigger.
I see a couple of potential advantages with this method:
You aren't issuing dozens of queries that may or may not return data.
The event would fire the instant a record is added to the table, rather than 15 seconds later.
You don't have to code for the possibility of one or more new records; it will always be exactly one, the one named in the data queue entry.
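To make the shape of that concrete, here is a minimal sketch of the consuming side. The waitForDataQueueEntry and getRowByKey helpers are hypothetical stand-ins for whatever data-queue and row-fetch APIs your IBM i toolkit exposes; the point is the event-driven flow (block on the queue, then fetch the one row you were told about):
async function listenForNewRows() {
  while (true) {
    // Hypothetical: resolves with the key the database trigger
    // placed on the data queue; no polling of the table itself.
    const key = await waitForDataQueueEntry();
    const row = await getRowByKey(key); // hypothetical single-row fetch by key
    io.emit('newData', row); // push to the front end, as in the question
  }
}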
Yes, you have to close the connection.
Don't make data a const. You don't need the promise; statement.exec is async by default and handles it via return result;.
Keep the setTimeout(getNewData, 2000); // check again in 2 seconds
line outside getNewData, otherwise it becomes an infinite recursive loop.
Sample code:
const { dbconn, dbstmt } = require('idb-connector');

const sql = 'SELECT * FROM QIWS.QCUSTCDT';
const connection = new dbconn(); // Create a connection object.
connection.conn('*LOCAL'); // Connect to a database.
const statement = new dbstmt(connection); // Create a statement object on the connection.
statement.exec(sql, (result, error) => {
  if (error) {
    throw error;
  }
  console.log(`Result Set: ${JSON.stringify(result)}`);
  statement.close(); // Clean up the statement object.
  connection.disconn(); // Disconnect from the database.
  connection.close(); // Clean up the connection object.
  return result;
});
async function getNewData() {
  const data = await queryDB(); // get new data
  io.emit('newData', data); // push to front end
  setTimeout(getNewData, 2000); // check again in 2 seconds
}
change to
async function getNewData() {
  const data = await queryDB(); // get new data
  io.emit('newData', data); // push to front end
}
setTimeout(getNewData, 2000); // check again in 2 seconds
The first thing to notice is a possible open database connection in case of an error:
if (err) {
  throw err;
}
Throwing inside the exec callback never rejects the promise, and it skips statement.close(), connection.disconn(), and connection.close(), so the connection stays open.
Also, in case of success, connection.disconn(); and connection.close(); return boolean values that indicate whether the operation succeeded (according to the documentation).
Piling up connection objects inside a third-party library is always a possible scenario.
I would check those.
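A minimal sketch of what a safer error path might look like, reusing the queryDB structure from the question (same idb-connector calls as above); the key changes are rejecting the promise instead of throwing, and cleaning up in every path:
let promise = new Promise(function (resolve, reject) {
  const connection = new dbconn();
  connection.conn("*LOCAL");
  const statement = new dbstmt(connection);
  statement.exec(sSql, (rows, err) => {
    // Clean up in every path so connection objects can't pile up.
    statement.close();
    connection.disconn();
    connection.close();
    if (err) {
      reject(err); // a throw here would never reject the promise
      return;
    }
    resolve(rows.length);
  });
});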
This was confirmed to be a memory leak in the idb-connector library that I was using. Link to the GitHub issue Here. Basically, there was a C++ array that never had its memory deallocated. A new version was released, and the commit can be viewed Here.

Multiple AWS Instances and Node events

I have an implementation in Node where an API, when called, does some processing and waits for an event from another function before returning the response. This works fine when run locally and when running in a single instance in AWS, but when multiple instances are involved there are some issues, which I'm assuming is because the API is called on one instance and the event is emitted on another instance. Is there any way to keep the listeners and emitters the same across all instances?
Update:
After some research I found that using an application load balancer with some routing logic can help with this issue. I am marking the answer below as correct because, while it did not help me with AWS autoscaling, it did help me find an alternate solution to my problem.
As far as I understand, you think that an event emitted in one process is being handled in a different process, but that will never be the case as far as I know: each process has its own memory, and events are associated only with the process that emitted them.
I have added sample code that demonstrates what I mean. Maybe if you post the code you are referring to, we could check what went wrong. (For genuinely cross-instance events, see the pub/sub sketch after the sample.)
const cluster = require("cluster");
const EventEmitter = require("events");

if (cluster.isMaster) {
  cluster.fork();
  const myEE = new EventEmitter();
  myEE.on("foo", arg =>
    console.log("emitted from ", arg, "received in master")
  );
  setTimeout(() => {
    myEE.emit("foo", "master");
  }, 1000);
} else {
  const myEE = new EventEmitter();
  myEE.on("foo", arg => console.log("emitted from", arg, "received in worker"));
  setTimeout(() => {
    myEE.emit("foo", "client");
  }, 2000);
}
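If you do need an event raised on one instance to reach listeners on every instance, the usual approach is an external broker rather than an in-process EventEmitter. A minimal sketch with Redis pub/sub (assuming the redis v4 client and a Redis endpoint reachable by all instances; the channel name is made up):
const { createClient } = require("redis");

async function main() {
  // Each instance subscribes; any instance may publish.
  const publisher = createClient({ url: process.env.REDIS_URL });
  const subscriber = publisher.duplicate(); // separate connection for subscriber mode
  await publisher.connect();
  await subscriber.connect();

  await subscriber.subscribe("app-events", (message) => {
    console.log("received on this instance:", message);
  });

  // Delivered to subscribers on every instance, not just this one.
  await publisher.publish("app-events", JSON.stringify({ type: "foo" }));
}

main().catch(console.error);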

Cron job failed without a reason

I am in a situation where I have a cron task on Google App Engine (using the flex environment) that just dies after some time, but I have no trace of WHY (I checked the GA logs: nothing; I tried try/catch with explicit logging: no error).
I have explicitly verified that if I create a cron task that runs for 8 minutes (but doesn't do much, just sleeps and updates the database every second), it will run successfully. This is just to prove that cron jobs can run for at least 8 minutes, if not more, and that I have set up the Express & Node.js combo correctly.
This is all fine, but it seems that my other cron job dies within 2-3 minutes, so quite fast. It is hitting some kind of limit, but I have no idea how to control for it, or even what limit it is, so all I can do is speculate.
I will tell you more about my cron task. It is basically rapidly querying a MongoDB database, where every query is quite fast. I've tried the same code locally, and there are no problems.
My speculation is that I am somehow creating too many MongoDB requests at once, and potentially running out of something?
Here's pseudocode (just to describe what kind of scale we're talking about; the numbers and flow are exactly the same):
async function q1() {
  return await mongoExecute(async (db) => {
    const [l1, l2] = await Promise.all([
      db.collection('Obj1').count({ uid1: c1, u2action: 'L' }),
      db.collection('Obj1').count({ uid2: c2, u1action: 'L' }),
    ]);
    return l1 + l2;
  });
}
for (let i = 0; i < 8000; i++) {
  const allImportantInformation = await Promise.all([
    q1(),
    q2(),
    q3(),
    // ...
    q10()
  ]);
  await mongoDb.saveToServer(document);
}
It gets to somewhere around i = 1600 before the cron job just dies without any explanation. The GA cron job panel clearly says the job has failed.
Here is my mongoExecute as well (a separate module that caches the db object, which is hopefully the correct practice to ensure that MongoDB pooling works correctly):
import { MongoClient, Db } from 'mongodb';

let db = null;
let promiseInProgress = null;

export async function mongoExecute<T> (executor: (instance: Db) => T): Promise<T | null> {
  if (!db) {
    if (!promiseInProgress) {
      promiseInProgress = new Promise(async (resolve, reject) => {
        const tempDb = await MongoClient.connect(process.env.MONGODB_URL);
        resolve(tempDb);
      });
    }
    db = await promiseInProgress;
  }
  try {
    const value = await executor(db);
    return value;
  } catch (error) {
    console.log(error);
    return null;
  }
}
What would be the solution? My idea is basically to ensure fewer requests are made at once (make the promises sequential, and potentially add a sleep between each cycle of the for loop).
What I don't understand is that it works fine up until some specific point (and quite a big one; the exact count varies, sometimes it is 800, sometimes 1200, etc.).
Is there any "running out of TCP connections" scenario happening? Theoretically we shouldn't run out of anything, because we don't have much open at any given point.
It seems to work if I put a 200 ms wait between each cycle, and I suspect I can figure out a solution (all the items don't have to be updated in the same cron execution), but it is a bit annoying, and I would like to know what's going on.
Is the garbage collector not catching up fast enough? Why exactly is GA silently failing my cron task?
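For reference, a minimal sketch of the throttling idea described above: sequential cycles with a pause (the 200 ms figure and the q1..q10 names come from the question; the sleep helper is just a one-liner):
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

for (let i = 0; i < 8000; i++) {
  // Run each batch to completion before starting the next one.
  const allImportantInformation = await Promise.all([q1(), q2(), q3(), /* ... */ q10()]);
  await mongoDb.saveToServer(document);
  await sleep(200); // give connections and GC a chance to catch up
}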
I discovered what the bug is, and fixed it accordingly.
Let me rephrase: I have no idea what the bug was, and having no errors at any point was discouraging; however, I managed to fix (lucky guess) whatever was happening by updating my Node.js MongoDB driver to the latest version (from 2.xx to 3.1.10).
No sleeps are needed in my code anymore.

Watches never trigger in FoundationDB

I am playing around with the watches functionality and struggling to get it to work.
The problem is that the watch never fires; it simply does not react to changes that I make to the key in other transactions.
val key = new Tuple().add("watch-test").pack()
val watchExecuted = db.runAsync(tr => {
  tr.set(key, new Tuple().add(1).pack())
  tr.watch(key)
})
Thread.sleep(5000) // ensure that the watch is applied
db.run(tr => {
  tr.set(key, new Tuple().add(2).pack())
})
watchExecuted.get() // never finishes
Does anybody have any idea why watches do not react to changes as they are supposed to?
I think what's going on here is that your first transaction is never completing. It's maybe not obvious from the documentation, but runAsync won't complete until the CompletableFuture returned by your function is ready. Because you are returning the watch future and not changing the value until after the transaction, it never becomes ready, so the transaction never ends.
If you replaced runAsync with run, I think it would work:
val watchExecuted = db.run(tr => {
  tr.set(key, new Tuple().add(1).pack())
  tr.watch(key)
})
If you wanted to use runAsync, then you would need to return your watch future wrapped in another object.
EDIT: or rather, if you want to use runAsync, you could return a CompletableFuture<CompletableFuture<Void>>:
val watchExecuted = db.runAsync(tr => {
  tr.set(key, new Tuple().add(1).pack())
  CompletableFuture.completedFuture(tr.watch(key))
})
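With the nested-future version, consuming the watch then takes two steps; a brief usage sketch (standard CompletableFuture semantics): the outer get() waits for the transaction to commit and yields the watch future, and the inner get() blocks until the key actually changes:
watchExecuted.get().get() // commit first, then wait for the key to change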

How to forcibly keep a Node.js process from terminating?

TL;DR
What is the best way to forcibly keep a Node.js process running, i.e., keep its event loop from running empty and hence keeping the process from terminating? The best solution I could come up with was this:
const SOME_HUGE_INTERVAL = 1 << 30;
setInterval(() => {}, SOME_HUGE_INTERVAL);
Which will keep an interval running without causing too much disturbance if you keep the interval period long enough.
Is there a better way to do it?
Long version of the question
I have a Node.js script using Edge.js to register a callback function so that it can be called from inside a DLL in .NET. This function will be called 1 time per second, sending a simple sequence number that should be printed to the console.
The Edge.js part is fine, everything is working. My only problem is that my Node.js process executes its script and after that it runs out of events to process. With its event loop empty, it just terminates, ignoring the fact that it should've kept running to be able to receive callbacks from the DLL.
My Node.js script:
var edge = require('edge');

var foo = edge.func({
  assemblyFile: 'cs.dll',
  typeName: 'cs.MyClass',
  methodName: 'Foo'
});

// The callback function that will be called from C# code:
function callback(sequence) {
  console.info('Sequence:', sequence);
}

// Register for a callback:
foo({ callback: callback }, true);

// My hack to keep the process alive:
setInterval(function () {}, 60000);
My C# code (the DLL):
public class MyClass
{
    Func<object, Task<object>> Callback;

    void Bar()
    {
        int sequence = 1;
        while (true)
        {
            Callback(sequence++);
            Thread.Sleep(1000);
        }
    }

    public async Task<object> Foo(dynamic input)
    {
        // Receives the callback function that will be used:
        Callback = (Func<object, Task<object>>)input.callback;

        // Starts a new thread that will call back periodically:
        (new Thread(Bar)).Start();

        return new object { };
    }
}
The only solution I could come up with was to register a timer with a long interval to call an empty function just to keep the scheduler busy and avoid getting the event loop empty so that the process keeps running forever.
Is there any way to do this better than I did? I.e., keep the process running without having to use this kind of "hack"?
The simplest, least intrusive solution
I honestly think my approach is the least intrusive one:
setInterval(() => {}, 1 << 30);
This will set a harmless interval that will fire approximately once every 12 days, effectively doing nothing, but keeping the process running.
Originally, my solution used Number.POSITIVE_INFINITY as the period, so the timer would actually never fire, but this behavior was recently changed by the API and now it doesn't accept anything greater than 2147483647 (i.e., 2 ** 31 - 1). See docs here and here.
Comments on other solutions
For reference, here are the other two answers given so far:
Joe's (deleted since then, but perfectly valid):
require('net').createServer().listen();
Will create a "bogus listener", as he called it. A minor downside is that we'd allocate a port just for that.
Jacob's:
process.stdin.resume();
Or the equivalent:
process.stdin.on("data", () => {});
Puts stdin into "old" mode, a deprecated feature that is still present in Node.js for compatibility with scripts written prior to Node.js v0.10 (reference).
I'd advise against it. Not only is it deprecated, it also unnecessarily messes with stdin.
Use "old" Streams mode to listen for a standard input that will never come:
// Start reading from stdin so we don't exit.
process.stdin.resume();
Here is an IIFE based on the accepted answer:
(function keepProcessRunning() {
  setTimeout(keepProcessRunning, 1 << 30);
})();
and here is a conditional exit:
let flag = true;
(function keepProcessRunning() {
  setTimeout(() => flag && keepProcessRunning(), 1000);
})();
You could use a setTimeout(function () {}, 2147483647); call to keep your script alive without overhead. Note that the delay is capped at 2147483647 ms (about 24.8 days); anything larger overflows and fires almost immediately.
Spin up a nice REPL; node would do the same if it didn't receive an exit code anyway:
import("repl").then(repl =>
  repl.start({ prompt: "\x1b[31m" + process.versions.node + ": \x1b[0m" }));
I'll throw another hack into the mix. Here's how to do it with a Promise:
new Promise(() => {});
Throw that at the bottom of your .js file. Be aware, though, that a forever-pending promise does not by itself hold the Node.js event loop open, so this may not actually keep the process running.
