Intern Promise Timeouts

I am writing some functional tests with Intern and came across the following section of text...
"The test will also fail if the promise is not fulfilled within the timeout of the test (the default is 30 seconds; set this.timeout to change the value)."
at...
https://github.com/theintern/intern/wiki/Writing-Tests-with-Intern#asynchronous-testing
How do I set the promise timeout for functional tests? I have tried calling timeout() directly on the promise but it isn't a valid method.
I have already set the various WD timeouts (page load timeout, implicit wait, etc.), but I am still having issues with promises timing out.

Setting the timeout in my tests via the suggested APIs just didn't work.
It's far from ideal, but I ended up modifying Test.js directly and hard-coding the timeout I wanted.
I did notice when looking through the source that there was a comment on the timeout code saying something like // TODO timeouts not working correctly yet

It seems to be working okay on the latest version:
define([
    'intern!object',
    'intern/chai!assert',
    'require',
    'intern/dojo/node!leadfoot/helpers/pollUntil'
], function (registerSuite, assert, require, pollUntil) {
    registerSuite(function () {
        return {
            name: 'index',

            setup: function () {
            },

            'Test timeout': function () {
                this.timeout = 90000;
                return this.remote.sleep(45000);
            }
        };
    });
});

You can also add defaultTimeout: 90000 to your configuration file (tests/intern.js in the default tutorial codebase) to globally set the timeout. This works well for me.
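For reference, a minimal sketch of what that could look like in an Intern 3-style AMD config file (everything else omitted; merge the option into your existing tests/intern.js rather than replacing it):

define({
    // ...your existing suites, environments, tunnel options, etc. ...

    // Global test timeout in milliseconds:
    defaultTimeout: 90000
});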

A timeout for a test is either set by passing the timeout as the first argument to this.async, or by setting this.timeout (it is a property, not a method).
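To illustrate both forms, here is a rough sketch of a single test body (the 60000 ms value and the setTimeout stand-in for real async work are arbitrary; dfd.callback is Intern's helper for resolving an async test):

'async example': function () {
    // Either set the property...
    this.timeout = 60000;

    // ...or pass the timeout as the first argument to this.async:
    var dfd = this.async(60000);

    // Resolve the test when the wrapped callback fires:
    setTimeout(dfd.callback(function () {
        assert.ok(true, 'async work finished in time');
    }), 500);
}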

For anyone who found their way here while using Intern 4 and async/await for functional testing: timeout and executeAsync just wouldn't work for me, but the pattern below did. Basically, I execute some setup logic and then use the sleep method with a longer interval than the setTimeout. Keep in mind that the JavaScript run inside execute is scoped to that call, so anything you want to reference later should be cached on the window object. Hopefully this saves someone else some time and frustration.
test(`timer test`, async ({remote}) => {
    await remote.execute(`
        // Run setup logic; this is scoped to the execute call, so anything
        // you want to reference later should live on window.
        window.val = "foo";
        window.setTimeout(function () {
            window.val = "bar";
        }, 50);
        return true;
    `);

    await remote.sleep(51); // the remote will idle for 51 ms

    let data = await remote.execute(`
        // Now the setTimeout has run and the reference can be read back.
        return window.val;
    `);

    assert.strictEqual(
        data,
        'bar',
        `remote.sleep(51) should wait until the setTimeout has run, converting window.val from "foo" to "bar"`
    );
});

Related

Issues performing network I/O in a NodeJS worker thread

I have a script that will download thousands of files from a server, perform some CPU-intensive calculations on those files, and then upload the results somewhere. As an added level of complexity, I want to limit the number of concurrent connections to the server where I'm downloading the files.
To get the CPU-intensive calculations off the main thread, I leveraged workerpool by josdejong. I also figured I could take advantage of the fact that only a limited number of threads will be spun up at any given time to limit the number of concurrent connections to my server, so I tried putting the network I/O in the worker process like so (TypeScript):
import Axios from "axios";
import workerpool from "workerpool";
import { IncomingMessage } from "http";

const pool = workerpool.pool({
    minWorkers: "max",
});

async function processData(file: string) {
    console.log("Downloading " + file);
    const csv = await Axios.request<IncomingMessage>({
        method: "GET",
        url: file,
        responseType: "stream"
    });
    console.log(csv);
    // TODO: Will process the file here
}

export default async function (files: string[]) {
    const promiseArray: workerpool.Promise<Promise<void>>[] = [];
    // Only processing the first file for now during testing
    files.slice(0, 1).forEach((file) => {
        promiseArray.push(pool.exec(processData, [file]));
    });
    await Promise.allSettled(promiseArray);
    await pool.terminate();
}
When I compile and run this code I see the message "Downloading test.txt", but after that I don't see the following log statement (console.log(csv)).
I've tried various modifications to this code, including removing the responseType, removing await and just inspecting the Promise that Axios returns, making the function non-async, etc. No matter what, it always seems to crash on the Axios.request line.
Are worker threads not able to open HTTP connections or something? Or am I just making a silly mistake?
If it is not getting to this line of code:
console.log(csv);
Then, either the Axios.request() is never fulfilling its promise or that promise is rejecting. You have no error handling at all in any of these functions so if it was rejecting, you wouldn't know and wouldn't be logging the problem. As a starter, I would suggest you instrument your code so you can log any rejections:
async function processData(file: string) {
    try {
        console.log("Downloading " + file);
        const csv = await Axios.request<IncomingMessage>({
            method: "GET",
            url: file,
            responseType: "stream"
        });
        console.log(csv);
    } catch (e) {
        console.log(e); // log an error
        throw e;        // propagate rejection/error
    }
}
As a general point of code design, you should be catching and logging any possible promise rejection at some level. You don't have to catch them all at the lowest calling level as they will propagate up through returned promises, but you do need to catch any possible rejection somewhere and, for your own development sanity, you will want to log it so you can see when it happens and what the error is.
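As a hedged sketch of that idea applied to the wrapper from the question (pool and processData reused from the code above), note that Promise.allSettled itself never rejects, so the settled results have to be inspected explicitly to surface failures:

export default async function (files: string[]) {
    const promiseArray = files.slice(0, 1).map((file) => pool.exec(processData, [file]));

    // allSettled never rejects; check each result and log the real error:
    const results = await Promise.allSettled(promiseArray);
    results.forEach((result, i) => {
        if (result.status === "rejected") {
            console.error("Failed to process", files[i], result.reason);
        }
    });

    await pool.terminate();
}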
You can't execute TypeScript in a worker thread. The pool.exec method accepts either a static JavaScript function or a path to a JavaScript file with the same function.
Here is a quote from the workerpool readme:
Note that both function and arguments must be static and stringifiable, as they need to be sent to the worker in a serialized form. In case of large functions or function arguments, the overhead of sending the data to the worker can be significant.
If you're trying to make this work with TypeScript, possible ways to resolve this are:
write a worker function in TypeScript, compile it to a separate bundle with any bundler, and then pass the path to the compiled file to the pool (a rough sketch follows after this list). I managed to make this work; the only thing I'm not satisfied with is that with this solution you can't use nodemon (if you use it)
use a JS wrapper that compiles the TS source code and executes it using ts-node, then pass the path to that wrapper to the pool. This solution won't work with bundlers.
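For the first option, a rough sketch of how the dedicated-worker pattern can look with workerpool (the ./dist/worker.js path and the build step are assumptions; this follows the worker-script API from the workerpool readme, where the script path goes to workerpool.pool() and pool.exec() is called with the method name):

// worker.ts — compiled by tsc or your bundler to ./dist/worker.js
import workerpool from "workerpool";
import Axios from "axios";

async function processData(file: string) {
    const csv = await Axios.request({ method: "GET", url: file, responseType: "stream" });
    // ...process the stream here...
    return file;
}

// Register the function so the main thread can call it by name:
workerpool.worker({ processData });

// main.ts
import workerpool from "workerpool";

const pool = workerpool.pool("./dist/worker.js", { minWorkers: "max" });

export default async function (files: string[]) {
    const tasks = files.map((file) => pool.exec("processData", [file]));
    await Promise.allSettled(tasks);
    await pool.terminate();
}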

Node.JS: Make module runnable through require or from command line

I have a script setupDB.js that runs asynchronously and is intended to be called from command line. Recently, I added test cases to my project, some of which require a database to be set up (and thus the execution of aforementioned script).
Now, I would like to know when the script has finished doing its thing. At the moment I'm simply waiting for a few seconds after requiring setupDB.js before I start my tests, which is obviously a bad idea.
The problem with simply exporting a function with a callback parameter is that it is important that the script can be run without any overhead, meaning no command line arguments, no additional function calls etc., since it is part of a bigger build process.
Do you have any suggestions for a better approach?
I was also looking for this recently, and came across a somewhat-related question, "Node.JS: Detect if called through require or directly by command line", which has an answer that helped me build something like the following just a few minutes ago. The export is only run if the file is used as a module, and the CLI library is only required if it is run as a script.
function doSomething (opts) {
}

/*
 * Based on
 * https://stackoverflow.com/a/46962952/7665043
 */
function isScript () {
    return require.main && require.main.filename === /\((.*):\d+:\d+\)$/.exec((new Error()).stack.split('\n')[ 2 ])[ 1 ]
}

if (isScript()) {
    const cli = require('some CLI library')
    const opts = cli.parseCLISomehow()
    doSomething(opts)
} else {
    module.exports = {
        doSomething
    }
}
There may be some reason that this is not a good idea, but I am not an expert.
I have now handled it this way: I export a function that does the setup. At the beginning I check whether the script has been called from the command line, and if so, I simply call the function. At the same time, I can also call it directly from another module and pass a callback.
if (require.main === module) {
    // Called from the command line
    runSetup(function (err, res) {
        // do callback handling
    });
}

function runSetup(callback) {
    // do the setup
}

exports.runSetup = runSetup;
The make-runnable npm module can help with this.
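If I remember its README correctly, usage is just a require at the very end of the module, after which the exported function can be invoked by name from the command line; treat the exact CLI syntax below as an assumption to verify against the package docs:

// setupDB.js (sketch; a promise-returning setup plays nicely with CLI runners)
function runSetup() {
    return new Promise(function (resolve) {
        // do the setup, then:
        resolve('setup finished');
    });
}
exports.runSetup = runSetup;

// must be the last line of the file:
require('make-runnable');

Then node setupDB.js runSetup should run the setup from the command line, while require('./setupDB').runSetup() still returns a promise that test code can wait on.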

Express Node Request For Loop Issue [duplicate]

With node.js I want to http.get a number of remote urls in a way that only 10 (or n) runs at a time.
I also want to retry a request if an exception occurs locally (m times), but when the status code returns an error (5XX, 4XX, etc.) the request counts as valid.
This is really hard for me to wrap my head around.
Problems:
Cannot try-catch http.get as it is async.
Need a way to retry a request on failure.
I need some kind of semaphore that keeps track of the currently active request count.
When all requests finished I want to get the list of all request urls and response status codes in a list which I want to sort/group/manipulate, so I need to wait for all requests to finish.
It seems like promises are recommended for every async problem, but I end up nesting too many of them and it quickly becomes indecipherable.
There are lots of ways to approach the 10 requests running at a time.
Async Library - Use the async library with the .parallelLimit() method where you can specify the number of requests you want running at one time.
Bluebird Promise Library - Use the Bluebird promise library and the request library to wrap your http.get() into something that can return a promise and then use Promise.map() with a concurrency option set to 10.
Manually coded - Code your requests manually to start up 10 and then each time one completes, start another one.
In all cases, you will have to manually write some retry code and, as with all retry code, you will have to very carefully decide which types of errors you retry, how soon you retry them, how much you back off between retry attempts and when you eventually give up (all things you have not specified).
Other related answers:
How to make millions of parallel http requests from nodejs app?
Million requests, 10 at a time - manually coded example
My preferred method is with Bluebird and promises. Including retry and result collection in order, that could look something like this:
const request = require('request');
const Promise = require('bluebird');
const get = Promise.promisify(request.get);

let remoteUrls = [ /* ... */ ]; // large array of URLs

const maxRetryCnt = 3;
const retryDelay = 500;

Promise.map(remoteUrls, function (url) {
    let retryCnt = 0;
    function run() {
        return get(url).then(function (result) {
            // do whatever you want with the result here
            return result;
        }).catch(function (err) {
            // decide what your retry strategy is here
            // catch all errors here so other URLs continue to execute
            if (/* err is of retry type */ retryCnt < maxRetryCnt) {
                ++retryCnt;
                // try again after a short delay
                // chain onto previous promise so Promise.map() is still
                // respecting our concurrency value
                return Promise.delay(retryDelay).then(run);
            }
            // make value be null if no retries succeeded
            return null;
        });
    }
    return run();
}, {concurrency: 10}).then(function (allResults) {
    // everything done here and allResults contains results with null for err URLs
});
The simple way is to use the async library; it has a .parallelLimit method that does exactly what you need.
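As a rough, non-authoritative sketch of that approach (the URL list and the concurrency value of 10 are placeholders, and retries are left out), each URL becomes a task and parallelLimit caps how many run at once:

const async = require('async');
const https = require('https');

const urls = [ /* ...thousands of URLs... */ ];

// One task per URL; each task reports { url, status } to its callback.
const tasks = urls.map((url) => (cb) => {
    https.get(url, (res) => {
        res.resume(); // drain the body; only the status code matters here
        cb(null, { url, status: res.statusCode });
    }).on('error', (err) => cb(err));
});

async.parallelLimit(tasks, 10, (err, results) => {
    if (err) return console.error(err);
    console.log(results); // results come back in the same order as urls
});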

How to destroy firebase ref in node

If I do this in node:
console.log('1');
console.log('2');
outputs:
1
2
And the process ends.
If I change it to this:
console.log('1');
var Firebase = require('firebase');
var ref = new Firebase('https://<some-base>.firebaseio.com/');
console.log('2');
outputs:
1
2
and the process continues.
I believe that this is because ref is keeping the process alive. I know that I can use process.exit but I would prefer to not do that. I actually don't want the process to exit anyway, I just want to make sure that I don't have a memory leak issue where my firebase ref lasts forever. Is there any way to destroy a firebase reference once I'm done with it?
[Engineer at Firebase] Currently, instantiating the Firebase client with new Firebase(...) will create a long-lived persistent connection that keeps the Node.js process alive.
This is admittedly not ideal for a bunch of use cases, and we have some work to do here to ensure that the process exits cleanly and automatically when there are no outstanding Firebase listeners or pending writes to the server, but it's been medium / low priority. I'd expect a "fix" to be released by Q2 '15, hopefully Q1.
One workaround I found when using tape was to call test.onFinish(() => process.exit()); at the end. It's not ideal, but it seems to get the job done both when running the file directly and with a test runner.
Example:
const test = require('tape');

test('Some test', (t) => {
    // test code
});

test('Another test', (t) => {
    // test code
});

test.onFinish(() => process.exit());
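For reference, the legacy client does expose ref.off() for detaching listeners and Firebase.goOffline() for dropping the persistent connection; whether that alone lets the Node.js process exit cleanly depends on the client version, so treat this sketch as an assumption to verify:

var Firebase = require('firebase');
var ref = new Firebase('https://<some-base>.firebaseio.com/');

// ...use the ref...

ref.off();            // detach any listeners registered on this ref
Firebase.goOffline(); // close the connection to the Firebase servers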

How to forcibly keep a Node.js process from terminating?

TL;DR
What is the best way to forcibly keep a Node.js process running, i.e., keep its event loop from running empty and hence keeping the process from terminating? The best solution I could come up with was this:
const SOME_HUGE_INTERVAL = 1 << 30;
setInterval(() => {}, SOME_HUGE_INTERVAL);
This will keep an interval running without causing too much disturbance, as long as the interval period is long enough.
Is there a better way to do it?
Long version of the question
I have a Node.js script using Edge.js to register a callback function so that it can be called from inside a DLL in .NET. This function will be called 1 time per second, sending a simple sequence number that should be printed to the console.
The Edge.js part is fine, everything is working. My only problem is that my Node.js process executes its script and after that it runs out of events to process. With its event loop empty, it just terminates, ignoring the fact that it should've kept running to be able to receive callbacks from the DLL.
My Node.js script:
var edge = require('edge');

var foo = edge.func({
    assemblyFile: 'cs.dll',
    typeName: 'cs.MyClass',
    methodName: 'Foo'
});

// The callback function that will be called from C# code:
function callback(sequence) {
    console.info('Sequence:', sequence);
}

// Register for a callback:
foo({ callback: callback }, true);

// My hack to keep the process alive:
setInterval(function () {}, 60000);
My C# code (the DLL):
public class MyClass
{
    Func<object, Task<object>> Callback;

    void Bar()
    {
        int sequence = 1;
        while (true)
        {
            Callback(sequence++);
            Thread.Sleep(1000);
        }
    }

    public async Task<object> Foo(dynamic input)
    {
        // Receives the callback function that will be used:
        Callback = (Func<object, Task<object>>)input.callback;

        // Starts a new thread that will call back periodically:
        (new Thread(Bar)).Start();

        return new object { };
    }
}
The only solution I could come up with was to register a timer with a long interval to call an empty function just to keep the scheduler busy and avoid getting the event loop empty so that the process keeps running forever.
Is there any way to do this better than I did? I.e., keep the process running without having to use this kind of "hack"?
The simplest, least intrusive solution
I honestly think my approach is the least intrusive one:
setInterval(() => {}, 1 << 30);
This will set a harmless interval that will fire approximately once every 12 days, effectively doing nothing, but keeping the process running.
Originally, my solution used Number.POSITIVE_INFINITY as the period, so the timer would actually never fire, but this behavior was later changed by the API and now it doesn't accept anything greater than 2147483647 (i.e., 2 ** 31 - 1); see the Node.js timers documentation.
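One small refinement, if you go this route: keep the handle returned by setInterval so the keep-alive can be removed later and the process allowed to exit naturally (a minimal sketch):

// Keep the process alive:
const keepAlive = setInterval(() => {}, 1 << 30);

// ...later, once no more callbacks are expected:
clearInterval(keepAlive); // the event loop can now drain and the process exits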
Comments on other solutions
For reference, here are the other two answers given so far:
Joe's (deleted since then, but perfectly valid):
require('net').createServer().listen();
Will create a "bogus listener", as he called it. A minor downside is that we'd allocate a port just for that.
Jacob's:
process.stdin.resume();
Or the equivalent:
process.stdin.on("data", () => {});
Puts stdin into "old" mode, a deprecated feature that is still present in Node.js for compatibility with scripts written prior to Node.js v0.10.
I'd advise against it. Not only is it deprecated, it also unnecessarily messes with stdin.
Use "old" Streams mode to listen for a standard input that will never come:
// Start reading from stdin so we don't exit.
process.stdin.resume();
Here is an IIFE based on the accepted answer:
(function keepProcessRunning() {
    setTimeout(keepProcessRunning, 1 << 30);
})();
and here is a conditional exit:
let flag = true;
(function keepProcessRunning() {
    setTimeout(() => flag && keepProcessRunning(), 1000);
})();
You could use a setTimeout(function () {}, 2147483647); call (the maximum accepted delay) to keep your script alive without much overhead.
Spin up a nice REPL; node would do the same if it had nothing else to run anyway:
import("repl").then((repl) =>
    repl.start({ prompt: "\x1b[31m" + process.versions.node + ": \x1b[0m" }));
I'll throw another hack into the mix. Here's how to do it with Promise:
new Promise(_ => null);
Throw that at the bottom of your .js file and it should run forever.
