How to suppress output of a function in Node/TypeScript

I have a TypeScript/Node application in which I call a third-party function from a package that writes a string to the console. I want to know if it is possible to suppress any output (to the console/terminal) that this function produces. I know that adding console.log = () => {} at the top of the file can do the job, but the linting rules of my group project (which I cannot change) state that calls to console.log are not allowed. Does anyone know of a more effective way of disabling console output without actually referencing console?
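One option that avoids referencing console at all is to intercept process.stdout.write around the call. This is just a sketch: noisyThirdPartyCall is a hypothetical stand-in for the package function, and the wrapper only catches output written synchronously while it runs:

// A minimal sketch: temporarily replace process.stdout.write so nothing
// reaches the terminal while the third-party function executes.
function withSuppressedOutput(fn) {
  const originalWrite = process.stdout.write;
  process.stdout.write = (chunk, encoding, callback) => {
    // Swallow the chunk but still invoke any callback so callers don't hang.
    const cb = typeof encoding === 'function' ? encoding : callback;
    if (cb) cb();
    return true;
  };
  try {
    return fn();
  } finally {
    process.stdout.write = originalWrite; // always restore, even if fn throws
  }
}

const result = withSuppressedOutput(() => noisyThirdPartyCall()); // hypothetical call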

Related

Text in Bash terminal getting overwritten! Using JS, Node.js (npms are: inquirer, console.table, and mysql)

Short 10sec video of what is happening: https://drive.google.com/file/d/1YZccegry36sZIPxTawGjaQ4Sexw5zGpZ/view
I have a CLI app that asks a user for a selection, then returns a response from a MySQL database. The CLI app is run in Node.js and prompts questions with Inquirer.
However, after returning the table of information, the next prompt overwrites the table data, making it mostly unreadable. It should appear on its own lines beneath the rest of the data, not overlap it. The functions that gather and return the data are asynchronous (they have to be in order to loop), but I have tried it with just a short list of ordinary synchronous functions for testing purposes, and the same problem exists. I have also tried it with and without console.table, and the prompt still overwrites the response, whether it is printed as a console table or as an object list.
I have enabled checkwinsize in Bash with
shopt -s checkwinsize
and the problem still persists.
Is it Bash? Is it a problem with Inquirer?
I was having this issue as well. In the .then method of my prompt I was using a switch statement to determine what to log depending on the user's selection. Then at the bottom I had an if statement checking whether they had selected 'Finish'; if they hadn't, I called the prompt function again, which I named questionUser.
That's where I was having the issue: calling questionUser again was overwriting my console output.
What I did was wrap the questionUser call in a setTimeout, like this:
if (data.option !== 'Finish') {
  setTimeout(() => {
    questionUser();
  }, 1000);
}
This worked for me and kept my console output from being erased.
Hopefully this helps anyone else who runs into this issue; I couldn't find an answer to this specific question anywhere.
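For context, a minimal sketch of that kind of prompt loop (the question text, choices, and printResult helper below are placeholders, not the original poster's code):

const inquirer = require('inquirer');

// Hypothetical stand-in for the code that queries MySQL and prints a table.
const printResult = () => console.table([{ id: 1, name: 'example row' }]);

function questionUser() {
  inquirer
    .prompt([
      { type: 'list', name: 'option', message: 'Choose an option', choices: ['View data', 'Finish'] }
    ])
    .then((data) => {
      switch (data.option) {
        case 'View data':
          printResult();
          break;
        default:
          break;
      }
      if (data.option !== 'Finish') {
        // Delay the next prompt so it does not redraw over the table above.
        setTimeout(() => {
          questionUser();
        }, 1000);
      }
    });
}

questionUser();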

How to trigger `exports.handler` in the Atom editor

This may be extremely silly.
I was using AWS Lambda functions for a while, and they usually start with exports.handler = (event, context, callback). AWS already has a test button where you can load in JSON, and it tests the function by providing that JSON as the input to exports.handler; then, from within the handler, formatting is done on the inputs, there are a bunch of console.log() calls that are printed, and so on.
I recently moved to the Atom editor and moved all my code over from Lambda. I am using Atom Runner to run my JS code; however, I realised that when I run it, all I get is: Exited with code=0 in 0.745 seconds. Basically it isn't running at all.
How do I trigger exports.handler in the Atom editor? Do I have to store the JSON in a new file and call it in some way?
Lambda wraps your code and knows to call that .handler() function when a request comes in. This is Lambda's contract with its users, but it is not a universal thing. Right now Atom Runner is just reading all of your code in, but no functions are being called.
If you run node index.js (replace index with your filename) on the command line, it will do the same thing Atom Runner is doing.
You need a top-level function call; for example, adding exports.handler() at the very bottom of your file should work. If you want event, context, and callback to be defined, you have to pass them yourself when you make that call (by reading in your JSON file or whatever you want).
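For example, a small harness file along these lines should work (the index.js and event.json names, and the empty context object, are assumptions, not details from the question):

// run-local.js -- hypothetical harness for invoking the Lambda handler locally.
const fs = require('fs');
const { handler } = require('./index'); // the file that assigns exports.handler

// Load a test event the same way the Lambda console's test button would.
const event = JSON.parse(fs.readFileSync('./event.json', 'utf8'));
const context = {}; // Lambda normally supplies this; an empty object is often enough locally

handler(event, context, (err, result) => {
  if (err) {
    console.error('Handler failed:', err);
  } else {
    console.log('Handler returned:', result);
  }
});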

Find out what file is requiring another file in Node

The title says pretty much what I need to do.
I have a module in node_modules which prints something to the standard output (and I don't want this to happen), but I can't find where I'm requiring this file.
I may be misunderstanding how modules are included, as I thought that they must be required in order to be executed.
There are multiple ways for code to write to output. If the module is just using console.log(), you can swap in console.trace. Before your require() statements:
console.log = console.trace;
Then, you'll have the full trace output every time there's a log.
Using this console.log mod:
let old = console.log;
console.log = function () {
  // Append the caller's location (taken from a fresh Error's stack) to the logged arguments.
  return old.apply(this, [].slice.apply(arguments).concat([(new Error()).stack.split(/\n/)[2].trim()]));
};
If you try:
console.log('I am trackable!')
you will get as output:
I am trackable! at test (/path/solution.js:5:9)
Happy hunting!

How to statically analyse that a file is fit for importing?

I have a CLI program that can be executed with a list of files that describe instructions, e.g.
node ./my-program.js ./instruction-1.js ./instruction-2.js ./instruction-3.js
This is how I am importing and validating that the target file is an instruction file:
const requireInstruction = (instructionFilePath) => {
  const instruction = require(instructionFilePath);
  if (!instruction.getInstruction) {
    throw new Error('Not instruction file.');
  }
  return instruction;
};
The problem with this approach is that it executes the file regardless of whether it matches the expected signature, i.e. if the file contains a side effect such as connecting to a database:
const mysql = require('mysql');
mysql.createConnection(..);
module.exports = mysql;
the Not instruction file. error will fire and I will ignore the file, but the side effect will keep running in the background.
How do I safely validate the target file's signature?
Worst case scenario, is there a conventional way to completely sandbox the require logic and kill the process if the file is determined to be unsafe?
Move the check logic into a dedicated js file. Make it call process.exit(0) when everything is fine and process.exit(1) when it is wrong.
In your current program, instead of loading the file via require, use child_process.exec to invoke your new file, passing it the parameter it needs to know which file to test.
Back in your main program, bind the close event to find out whether the return code was 0 or 1.
If you need more information than 0 or 1, have the new js file that loads the instruction print some JSON.stringified data to stdout (via console.log), then retrieve it and JSON.parse it in the callback of the child_process.exec call.
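A rough sketch of that arrangement, assuming a checker script named check-instruction.js (both file names below are placeholders):

// check-instruction.js -- hypothetical child script: loads exactly one candidate file.
const path = require('path');
const candidate = path.resolve(process.argv[2]);

try {
  const instruction = require(candidate);
  // Exit 0 only if the file exposes the expected signature.
  process.exit(instruction && instruction.getInstruction ? 0 : 1);
} catch (err) {
  process.exit(1);
}

// my-program.js -- parent: validate through a child process instead of require().
const { exec } = require('child_process');

const validateInstruction = (instructionFilePath, done) => {
  const child = exec(`node ./check-instruction.js ${instructionFilePath}`);
  child.on('close', (code) => done(code === 0)); // 0 means the signature matched
};

validateInstruction('./instruction-1.js', (isValid) => {
  console.log(isValid ? 'instruction file' : 'not an instruction file');
});

Because the candidate runs in a throwaway process that exits explicitly, any side effect it starts (such as a database connection) dies with that process.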
Alternatively, have you looked into AST processing?
http://jointjs.com/demos/javascript-ast
It could help you identify pieces of code that are not embedded within an exported function.
(Note: I discussed this question with the author on IRC. There may be some context in my answer that isn't in the original question.)
Given that your scenario is purely about preventing against accidental inclusion of non-instruction files, rather than about preventing malicious behaviour, static analysis using something like Esprima will probably be sufficient.
One approach would be to require that every instruction file exports some kind of object with a name property, containing the name of the instruction file. As there's not really anything to put in there besides a string literal, you can be fairly certain that if you can't locate a name property through static analysis, the file is not an instruction file - even in a language like JavaScript that isn't fully statically analyzable.
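A minimal sketch of that kind of check with Esprima (the helper name is made up, and the shape it looks for, a name property holding a string literal, is just the convention suggested above):

const fs = require('fs');
const esprima = require('esprima');

// Parse the file without executing it and look for a `name: '<string>'` property
// anywhere in the AST (e.g. in module.exports = { name: 'instruction-1', ... }).
const looksLikeInstructionFile = (filePath) => {
  const source = fs.readFileSync(filePath, 'utf8');
  let found = false;
  const walk = (node) => {
    if (!node || typeof node.type !== 'string' || found) return;
    if (node.type === 'Property' && node.key &&
        (node.key.name === 'name' || node.key.value === 'name') &&
        node.value.type === 'Literal') {
      found = true;
      return;
    }
    for (const key of Object.keys(node)) {
      const child = node[key];
      if (Array.isArray(child)) child.forEach(walk);
      else if (child && typeof child === 'object') walk(child);
    }
  };
  walk(esprima.parseScript(source));
  return found;
};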
For any readers of this thread that are trying to protect from malicious actors, rather than accidents - for example, when accepting untrusted code from users: you cannot sandbox or 'validate' JavaScript with Node.js alone (not with the vm module either), and the above solution will not work for you. You will need system-level containerization or virtualization to run this kind of code safely. There are no other options.

Muting stdout and stderr during Mocha tests

I'll preface this by admitting that I'm probably doing something I shouldn't be doing. But since I'm already this deep, I might as well understand why things are happening this way.
I am using Mocha to test some Node.js code. This code uses the Winston logging library, which directly calls process.stdout.write() and process.stderr.write() (source). It works well; I have no complaints about that behavior.
However, when I unit-test this code, the output of the Mocha test runner is occasionally interspersed with lines of log output, which is ugly in some reporters (dot, bdd) and downright invalid in others (xunit). I wanted to block this output without modifying or subclassing Winston, and I wanted to avoid modifying the application itself if I could avoid it.
What I arrived at was a set of utility functions that can temporarily replace the Node builtins with a no-op function, and vice versa:
var stdout_write = process.stdout._write,
    stderr_write = process.stderr._write;

function mute() {
  process.stderr._write = process.stdout._write = function (chunk, encoding, callback) {
    callback();
  };
}

function unmute() {
  process.stdout._write = stdout_write;
  process.stderr._write = stderr_write;
}
Inside the various test specs, I called mute() directly before any call or assertion that produced unwanted output, and unmute() directly after. It felt a little hacky, but it worked -- not a single byte of unwanted output appeared on the console when running the tests.
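For illustration, that usage pattern inside a spec might look something like this (the spec names and the loggingThing module are hypothetical; mute and unmute are the helpers above):

var assert = require('assert');

describe('a module that logs via Winston', function () {
  it('runs without polluting the reporter output', function () {
    mute(); // swap in the no-op _write before the noisy call
    var result = loggingThing.doWork(); // hypothetical call that writes to stdout/stderr
    unmute(); // restore the real _write immediately afterwards
    assert.strictEqual(result, 42);
  });
});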
Now it gets weird!
For the first time, I tried redirecting the output to a file:
mocha spec_file.js > output.txt
The unwanted output came back! Every piece of output that was sent to stdout appears in the file. Adding 2>&1, I get stderr in the file too. Nothing appears on the console in either case, though.
Why would the test code behave so differently between the two invocations? My gut guess is that Mocha is doing some sort of test to determine whether or not it's writing to a TTY, but I couldn't spot an obvious place where it changes the behavior of its writes.
Also the broader question, is there any correct way to mute stdout/stderr during tests, without wrapping all potentially-logging app code in a conditional that checks for the test environment?
See https://www.npmjs.org/package/mute
it('should shut the heck up', function (done) {
  var unmute = mute();
  app.options.defaults = true;
  app.run(function () {
    unmute();
    helpers.assertFiles([
      ['package.json', /"name": "temp-directory"/],
      ['README.md', /# TEMP.Directory/]
    ]);
    done();
  });
});
I discovered a likely cause for this behavior. It does indeed have to do with whether or not stdout/stderr is a TTY.
When the script runs in a console, these are both TTYs, and process.stdout and process.stderr appear to be instances of tty.WriteStream and not, as I originally assumed, a stream.Writable. As far as my interactions went, the two classes really weren't that different -- both had public write() methods which called internal _write() methods, and both shared the same method signatures.
When piped to a file, things got a little different. process.stdout and process.stderr were instances of a different class that wasn't immediately familiar. Best I can figure, it's a fs.SyncWriteStream, but that's a stab in the dark. Anyway, this class doesn't have a _write() method, so trying to override it was pointless.
The solution was to move one level higher and do my muting with write() instead of _write(). It does the same thing, and it does it consistently regardless of where the output is going.
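Concretely, that means saving and replacing the public write() methods instead (a sketch of the fix described above, in the same shape as the original helpers):

var stdout_write = process.stdout.write,
    stderr_write = process.stderr.write;

function mute() {
  // Overriding write() behaves the same whether stdout/stderr is a TTY or a file.
  process.stderr.write = process.stdout.write = function (chunk, encoding, callback) {
    var cb = typeof encoding === 'function' ? encoding : callback;
    if (cb) cb(); // keep callers from waiting on a callback that never fires
    return true;
  };
}

function unmute() {
  process.stdout.write = stdout_write;
  process.stderr.write = stderr_write;
}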
