nodejs bunyan order of elements

I'm using bunyan, and this is an example of what I'm writing to my log.
Is there a way to change the order of the fields printed? From this:
{"name":"appName","hostname":"ip","pid":5817,"level":30,"msg":"message","time":"2015-10-15T19:04:01.596Z","v":0}
To this:
{"time":"2015-10-15T19:04:01.596Z","msg":"message","name":"appName","hostname":"ip","pid":5817,"level":30,"v":0}

Use the bunyan CLI to get a more human-readable log.
One option is to pipe your app's output through bunyan when you start it (assuming you are running this from your project's root directory):
$ node app.js | ./node_modules/.bin/bunyan
For a shorter output format, pass -o short:
$ node app.js | ./node_modules/.bin/bunyan -o short
Search around; there is a lot of power in the bunyan CLI.
https://github.com/trentm/node-bunyan#cli-usage
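The CLI only changes the rendering, though. If you need the raw JSON itself with a different key order (say, for a downstream consumer rather than for human reading), one workaround is to pipe the log through a small script that re-serializes each record. A minimal sketch, assuming one JSON record per line on stdin (reorder-keys.js is a hypothetical name):
// reorder-keys.js: re-emit each JSON log line with a chosen key order.
// Usage: node app.js | node reorder-keys.js
const readline = require('readline');
const order = ['time', 'msg', 'name', 'hostname', 'pid', 'level', 'v'];
const rl = readline.createInterface({ input: process.stdin });
rl.on('line', (line) => {
  let rec;
  try {
    rec = JSON.parse(line);
  } catch (e) {
    console.log(line); // pass non-JSON lines through untouched
    return;
  }
  const out = {};
  for (const key of order) {
    if (key in rec) out[key] = rec[key];
  }
  for (const key of Object.keys(rec)) {
    if (!(key in out)) out[key] = rec[key]; // keep any fields not in the list
  }
  console.log(JSON.stringify(out));
});
This relies on JSON.stringify preserving insertion order for string keys, which JavaScript engines guarantee for non-numeric keys.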

Related

Node.JS reading data from console command

I remember using something before in node.js that would allow me to run a command like
node appname.js text goes here
and then read the "text goes here" part with something like
console.log(console.text)
I can't remember what it is, and can't find it in any searches. Was this a real thing, or just me dreaming?
You can use process.argv to read the input from the command line.
If you run the command below in a terminal:
node appname.js text goes here
you can print the command-line arguments with:
console.log(process.argv)
The output will be:
[ 'node',
  '/home/user/path/to/appname.js',
  'text',
  'goes',
  'here' ]
If you don't want the first two entries, you can use:
console.log(process.argv.slice(2))
which outputs:
[ 'text',
  'goes',
  'here' ]
See the Node.js documentation on process.argv for more info.
Hope this helps you out!
Well, there are lots of ways/packages around for reading arguments.
The Node.js process object is the base of it all, so check its documentation.
And as I said, there are lots of packages for parsing arguments.
yargs is one of them, and minimist is also a popular one as far as I know (a small minimist sketch follows at the end of this answer).
If you don't want to use a package, it basically starts like this:
// inside your node file
const args = process.argv.slice(2);
console.log(args);
// we slice from index 2 because
// process.argv[0] is the path to the node binary
// process.argv[1] is your js file's full path
// and most of the time we don't need those :)
So I hope these work for you ☺
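For completeness, here is a minimal sketch of the minimist approach mentioned above; it assumes npm install minimist has been run:
// parse flags and positional arguments with minimist
const minimist = require('minimist');
const argv = minimist(process.argv.slice(2));
console.log(argv);
// $ node appname.js text goes here --verbose
// { _: [ 'text', 'goes', 'here' ], verbose: true }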

Using unbuffered pipe as "dummy" file output

I've been dealing with a weird issue that I can't find a way to solve.
My situation is as follows.
I have a Python application, "app1", that requires a file for outputting the results of its execution.
I have a secondary application, "app2", a binary that reads its input from stdin.
I want to pipe what "app1" generates directly into "app2" for processing, which in an ideal situation would look like this:
app1 | app2
But, as I said, there are some restrictions, like the fact that app1 requires a file as its output.
The first solution I found for "fooling" app1 into outputting to stdout is to use mkfifo to create a named pipe, so I can feed it into app2's stdin. Like this:
pipe='/tmp/output_pipe'
mkfifo "$pipe"
python app1 -o "$pipe" &
app2 < "$pipe"
The problem is that, during the execution, app1 will eventually generate more output than app2 can handle as input, and due to the size restrictions on the pipe's buffer, the pipe will fill up and everything stops working.
Then I used this other approach:
python app1 -o /dev/stdout | app2
But the situation is the same: stdout has buffer size restrictions too.
Does anyone have an idea of how to solve this specific scenario?
TL;DR: I need a "dummy" file that will act as stdout but without the standard size restrictions of the pipes.
There are several utils designed to handle similar situations:
buffer: python app1 -o /dev/stdout | buffer | app2
stdbuf: python app1 -o /dev/stdout | stdbuf app2
unbuffer: python app1 -o /dev/stdout | unbuffer app2
mbuffer (buffer with more options): python app1 -o /dev/stdout | mbuffer | app2
bash process substitution: python app1 -o >(app2)
These utils have various options, some of which may be required here (that depends on what app1 and app2 are doing). Some options set the size of the buffer, add delays, or show diagnostic info.
Pixelbeat.org has some diagrams to help visualize how buffering works (or fails to).
You have a few options:
Use a file. Instead of reading from stdin, have the consumer read from a file, and implement the file-following behavior of tail -f.
Write a pipe buffer program. This option is kind of silly, but it works if you cannot change either of the other two. I wrote one in Perl a while ago (sorry, I can't share it), but basically it uses non-blocking IO to read from one pipe and write to another, holding all the data in memory. It is probably good to log a complaint if memory use gets too high. A Node sketch of the same idea follows after this list.
Modify the reader or the writer to use non-blocking IO and buffer the output or input.
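A minimal sketch of option 2 in Node, under the assumption that an unbounded in-memory queue is acceptable (a real version should cap the queue and complain when it grows too large):
// buffer.js: read stdin as fast as it arrives, queue the chunks in
// memory, and write them out as fast as the consumer will accept them.
// Usage: python app1 -o /dev/stdout | node buffer.js | app2
const chunks = [];
let ended = false;
let writing = false;

process.stdin.on('data', (chunk) => { chunks.push(chunk); pump(); });
process.stdin.on('end', () => { ended = true; pump(); });

function pump() {
  if (writing) return;
  writing = true;
  while (chunks.length > 0) {
    // write() returns false when stdout's internal buffer is full
    if (!process.stdout.write(chunks.shift())) {
      process.stdout.once('drain', () => { writing = false; pump(); });
      return;
    }
  }
  writing = false;
  if (ended) process.stdout.end();
}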
Well, my bad.
It was not a buffering problem, as some people suggested here.
It was a CPU cap problem. Both applications were consuming 100% of the CPU and RAM when running, and that's why the application crashed.

process input line by line with node CLI providing just an eval expression/callback to process each single line

I am looking for a simple equivalent of perl -ne 'some expression': a way to use the node CLI, possibly with --eval '<some expression, func/arrow>' plus --require some-line-by-line-enabler, to process input line by line. Is there any module that makes this possible, or what would be an approach to writing one?
I've also found e.g. https://github.com/j-/require-cli and wonder if this may be the right way to go. I tried it with a very basic module that forwards to readline.on('line', callback), but consuming stdin does not work out of the box.
You can eval scripts with Node from the command line:
node -e "console.log('Hello!')"
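To get closer to perl -ne, one approach is to combine -e with the built-in readline module reading stdin. A sketch, where the uppercasing stands in for whatever per-line expression you want and input.txt is a placeholder:
node -e "require('readline').createInterface({ input: process.stdin }).on('line', (l) => console.log(l.toUpperCase()))" < input.txt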

Preload script into node interactive mode

Is it possible to run node.exe, pipe text into it, and then continue the interactive session?
I want to create a shortcut bat (or bash) file for editing my database.
Usually this is what I'm doing:
$ node
>var db=require('mydb')
>db.open('myserver')
>// Now I can start accessing the db
>db.query...
I want to do something like this:
$ node -i preDefinedDb.js
> db.query(.... // I don't want to define the DB each time I run node.exe
I tried something like this:
echo console.log(a) | node.exe
This is the result:
3
And then the program finishes. I want the node REPL to continue after piping something into it.
In other words: I want to be able to use my DB from the node REPL without defining it each time.
Launch the REPL from your own js file, and you can give it whatever context you want:
const repl = require('repl');
var db = require('mydb');
db.open('myserver');
repl.start('> ').context.db = db; // expose db inside the REPL
Now you just have to run this file (node myREPL.js) and you can REPL as usual.
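Depending on your Node version, combining the -i and -e flags may also work without a separate file: the script is evaluated first, and the REPL then starts with its variables in scope. Treat this as an assumption to verify on your setup:
$ node -i -e "var db = require('mydb'); db.open('myserver');"
> db.query(...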

How to cut the log file?

I'm using pm2 to create a log file, and it is very big (about 1.2GB, and still increasing).
How can I cut a big log file into multiple smaller log files?
Does pm2 support any way to cut the log file automatically?
In general, you do not have to worry about whether pm2 supports rotating log files, because on a Linux-based system you can do that with the logrotate utility.
More details can be found at the following:
https://www.digitalocean.com/community/tutorials/how-to-manage-log-files-with-logrotate-on-ubuntu-12-10
http://www.z-car.com/blog/programming/how-to-rotate-logs-using-pm2-process-manager-for-node-js
https://github.com/Unitech/pm2/issues/114
As an example (note this reads the whole file into memory, so it is only practical for small files):
const fs = require('fs');
const file = fs.readFileSync('logfile.log');
if (file.length > 1024) { // 1KB threshold
  // keep only the last 1KB of the log
  fs.writeFileSync('logfile.log', file.slice(-1024));
}
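If you go the logrotate route instead, a minimal config sketch might look like the following; the pm2 log path, file name, and thresholds are assumptions to adjust for your setup:
# /etc/logrotate.d/myapp (hypothetical)
/home/user/.pm2/logs/*.log {
    size 10M        # rotate once a file exceeds 10MB
    rotate 5        # keep at most 5 rotated files
    compress
    missingok
    notifempty
    copytruncate    # truncate in place so pm2 keeps writing to the same file
}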
