Log Rotation in Node.js?

In my web analytics app, I am logging the data to a plain text file. I want to rotate the log on a daily basis because it's logging too much data. Currently I am using bunyan to rotate the logs.
Problem I am facing
It rotates the file correctly, but the rotated log files are named log.0, log.1, etc. I want the file names to be log.05-08-2013, log.04-08-2013, and so on.
I can't edit the source of the bunyan package because we install the modules via NPM using package.json.
So my question is: is there any other log rotation library for Node.js that meets my requirement?

Winston does support log rotation using a date in the file name. Take a look at this pull request, which adds the feature and was merged four months ago. Unfortunately the documentation isn't listed on the site yet, but there is another pull request pending to fix that. Based on that documentation, and the tests for the log rotation feature, you should be able to enable it by simply adding a new transport, something like the following:
winston.add(winston.transports.DailyRotateFile, {
  filename: './logs/my.log',
  datePattern: '.dd-MM-yyyy'
});

If, in addition to saving logs by date, you also want logrotate-style cleanup (e.g. removing logs that are older than a week), you can add the following code:
var fs = require('fs');
var path = require("path");
var CronJob = require('cron').CronJob;
var _ = require("lodash");
var logger = require("./logger");

var job = new CronJob('00 00 00 * * *', function() {
  // Runs every day at 00:00:00
  // (six-field pattern: second minute hour day-of-month month day-of-week).
  fs.readdir(path.join("/var", "log", "ironbeast"), function(err, files) {
    if (err) {
      logger.error("error reading log files");
    } else {
      var currentTime = new Date();
      var weekInMs = 7 * 24 * 60 * 60 * 1000; // one week in milliseconds
      _(files).forEach(function(file) {
        var fileDate = file.split(".")[2]; // get the date from the file name
        if (fileDate) {
          fileDate = fileDate.replace(/-/g, "/");
          var fileTime = new Date(fileDate);
          if ((currentTime - fileTime) > weekInMs) {
            console.log("delete file", file);
            fs.unlink(path.join("/var", "log", "ironbeast", file),
              function(err) {
                if (err) {
                  logger.error(err);
                } else {
                  logger.info("deleted log file: " + file);
                }
              });
          }
        }
      });
    }
  });
}, function() {
  // This function is executed when the job stops.
  console.log("finished logrotate");
},
true,            /* Start the job right now */
'Asia/Jerusalem' /* Time zone of this job */
);
where my logger file is:
var path = require("path");
var winston = require('winston');

var logger = new winston.Logger({
  transports: [
    new winston.transports.DailyRotateFile({
      name: 'file#info',
      level: 'info',
      filename: path.join("/var", "log", "MY-APP-LOGS", "main.log"),
      datePattern: '.MM-dd-yyyy'
    }),
    new winston.transports.DailyRotateFile({
      name: 'file#error',
      level: 'error',
      filename: path.join("/var", "log", "MY-APP-LOGS", "error.log"),
      datePattern: '.MM-dd-yyyy',
      handleExceptions: true
    })
  ]
});

module.exports = logger;

There's also the logrotator module for log rotation, which you can use regardless of the logging mechanism.
You can specify the format option to customize the date format (or any other naming scheme, for that matter):
var logrotate = require('logrotator');

// use the global rotator
var rotator = logrotate.rotator;
// or create a new instance
// var rotator = logrotate.create();

// check file rotation every 5 minutes, and rotate the file if its size exceeds 10 mb.
// keep only 3 rotated files and compress (gzip) them.
rotator.register('/var/log/myfile.log', {
  schedule: '5m',
  size: '10m',
  compress: true,
  count: 3,
  format: function(index) {
    // note: getMonth() is zero-based, so add 1 for the calendar month
    var d = new Date();
    return d.getDate() + "-" + (d.getMonth() + 1) + "-" + d.getFullYear();
  }
});
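If you want the suffix to match the zero-padded dd-MM-yyyy style from the question (e.g. log.05-08-2013), a small formatting helper can be returned from the format callback. A sketch (the function name here is made up for illustration; only the standard Date methods are used):

```javascript
// Hypothetical helper for logrotator's `format` callback: formats a Date
// as zero-padded dd-MM-yyyy (e.g. 05-08-2013).
function formatLogDate(d) {
  function pad(n) { return n < 10 ? '0' + n : String(n); }
  // getMonth() is zero-based, so add 1 to get the calendar month
  return pad(d.getDate()) + '-' + pad(d.getMonth() + 1) + '-' + d.getFullYear();
}

console.log(formatLogDate(new Date(2013, 7, 5))); // → 05-08-2013 (month index 7 is August)
```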

mongodb
winston itself does not support log rotation. My bad.
mongodb has a log rotation use case. You can then export the logs to file names matching your requirement.
winston also has a mongodb transport, but judging from its API I don't think it supports log rotation out of the box.
This may be overkill, though.
forking bunyan
You can fork bunyan and point your package.json at your fork's Git URL.
This is the easiest solution if you're fine with freezing bunyan's feature set or maintaining your own code.
As it is an open source project, you can even add your feature to it and submit a pull request to help improve bunyan.
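For instance, npm can install a dependency directly from a Git URL, so pointing package.json at a fork could look like this (the URL and branch below are placeholders, not a real fork):

```json
{
  "dependencies": {
    "bunyan": "git+https://github.com/your-username/node-bunyan.git#your-branch"
  }
}
```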

Related

Speech to Text: Piping microphone stream to Watson STT with NodeJS

I am currently trying to send a microphone stream to the Watson STT service, but for some reason the Watson service is not receiving the stream (I'm guessing), so I get the error "Error: No speech detected for 30s".
Note that I have streamed a .wav file to Watson, and I have also tested piping micInputStream to my local files, so I know both are at least set up correctly. I am fairly new to NodeJS/JavaScript, so I'm hoping the error might be obvious.
const fs = require('fs');
const mic = require('mic');
var SpeechToTextV1 = require('watson-developer-cloud/speech-to-text/v1');

var speechToText = new SpeechToTextV1({
  iam_apikey: '{key_here}',
  url: 'https://stream.watsonplatform.net/speech-to-text/api'
});

var params = {
  content_type: 'audio/l16; rate=44100; channels=2',
  interim_results: true
};

const micParams = {
  rate: 44100,
  channels: 2,
  debug: false,
  exitOnSilence: 6
};

const micInstance = mic(micParams);
const micInputStream = micInstance.getAudioStream();
micInstance.start();
console.log('Watson is listening, you may speak now.');

// Create the stream.
var recognizeStream = speechToText.recognizeUsingWebSocket(params);

// Pipe in the audio.
var textStream = micInputStream.pipe(recognizeStream).setEncoding('utf8');

textStream.on('data', user_speech_text => console.log('Watson hears:', user_speech_text));
textStream.on('error', e => console.log(`error: ${e}`));
textStream.on('close', e => console.log(`close: ${e}`));
Conclusion: In the end, I am not entirely sure what was wrong with the code. I'm guessing it had something to do with the mic package. I ended up scrapping that package and using "node-audiorecorder" instead for my audio stream: https://www.npmjs.com/package/node-audiorecorder
Note: This module requires you to install SoX and it must be available in your $PATH. http://sox.sourceforge.net/
Updated Code: For anyone wondering what my final code looks like, here you go. Also, a big shoutout to NikolayShmyrev for trying to help me with my code!
Sorry for the heavy comments, but for new projects I like to make sure I know what every line is doing.
// Import modules.
var AudioRecorder = require('node-audiorecorder');
var fs = require('fs');
var SpeechToTextV1 = require('watson-developer-cloud/speech-to-text/v1');

/******************************************************************************
 * Configuring STT
 ******************************************************************************/
var speechToText = new SpeechToTextV1({
  iam_apikey: '{your watson key here}',
  url: 'https://stream.watsonplatform.net/speech-to-text/api'
});

var recognizeStream = speechToText.recognizeUsingWebSocket({
  content_type: 'audio/wav',
  interim_results: true
});

/******************************************************************************
 * Configuring the Recording
 ******************************************************************************/
// Options is an optional parameter for the constructor call.
// If an option is not given, the default value, as seen below, will be used.
const options = {
  program: 'rec',             // Which program to use, either `arecord`, `rec`, or `sox`.
  device: null,               // Recording device to use.
  bits: 16,                   // Sample size. (only for `rec` and `sox`)
  channels: 2,                // Channel count.
  encoding: 'signed-integer', // Encoding type. (only for `rec` and `sox`)
  rate: 48000,                // Sample rate.
  type: 'wav',                // Format type.
  // The following options are only available when using `rec` or `sox`.
  silence: 6,                 // Duration of silence in seconds before it stops recording.
  keepSilence: true           // Keep the silence in the recording.
};
const logger = console;

/******************************************************************************
 * Create Streams
 ******************************************************************************/
// Create an instance.
let audioRecorder = new AudioRecorder(options, logger);

// Create a timeout so recording stops after 10 seconds (feel free to remove this).
setTimeout(function() {
  audioRecorder.stop();
}, 10000);

// This stream saves the audio locally as well (strongly encouraged for testing).
const fileStream = fs.createWriteStream("test.wav", { encoding: 'binary' });

// Start streaming to Watson STT. Remove .pipe(process.stdout) if you don't
// want the transcription printed to the console.
audioRecorder.start().stream().pipe(recognizeStream).pipe(process.stdout);

// Create another stream to save the audio locally.
audioRecorder.stream().pipe(fileStream);

// Finally, pipe the transcription to a file.
recognizeStream.pipe(fs.createWriteStream('./transcription.txt'));

Winston daily rotate remove .GZIP files

Currently I'm zipping the files daily with winston-daily-rotate. What I want to do now is remove the zip files after a week. Is there a way to accomplish this with winston-daily-rotate, or do I have to write it myself?
The code I'm using:
const transport = new (winston.transports.DailyRotateFile)({
  "name": "basic-log",
  "filename": `${logDir}/%DATE%-log`,
  "datePattern": "YYYY-MM-DD",
  "zippedArchive": true,
  "colorize": false,
  "maxFiles": '2d'
});

transport.on('rotate', function(oldFilename, newFilename) {
  // do something fun
  console.log(new Date(), oldFilename, newFilename);
});

const logger = new (winston.Logger)({
  transports: [
    transport
  ]
});
Thanks in advance.
Currently (winston-daily-rotate-file v3.3.3), zipped files are not deleted.
Open bug: https://github.com/winstonjs/winston-daily-rotate-file/issues/125
In winston-daily-rotate-file you can set maxFiles: '7d' which will delete the files that are older than a week.
From winston-daily-rotate-file:
maxFiles: Maximum number of logs to keep. If not set, no logs will be removed. This can be a number of files or number of days. If using days, add 'd' as the suffix. (default: null)
read more about it here: https://www.npmjs.com/package/winston-daily-rotate-file#usage
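As a sketch, the only change to the question's transport config would be the maxFiles value (the other options are copied from the question; note that zipped archives are affected by the open bug mentioned above):

```javascript
// Options for winston.transports.DailyRotateFile, based on the question's
// config. maxFiles: '7d' asks the transport to delete rotated files older
// than seven days; a plain number would instead mean "keep N files".
const rotateOptions = {
  name: 'basic-log',
  filename: 'logs/%DATE%-log',
  datePattern: 'YYYY-MM-DD',
  zippedArchive: true,
  maxFiles: '7d'
};
```

These options would then be passed to `new (winston.transports.DailyRotateFile)(rotateOptions)` exactly as in the question.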

node.js - use archiver where output is buffer

I want to zip a few readable streams into a writable stream.
The purpose is to do it all in memory and not create an actual zip file on disk.
For that I'm using archiver:
let bufferOutput = Buffer.alloc(5000);

let archive = archiver('zip', {
  zlib: { level: 9 } // Sets the compression level.
});

archive.pipe(bufferOutput);
archive.append(someReadableStream, { name: 'test.txt' });
archive.finalize();
I get an error on the line archive.pipe(bufferOutput);.
This is the error: "dest.on is not a function"
What am I doing wrong?
Thanks
UPDATE:
I'm running the following code for testing, and the ZIP file is not created properly. What am I missing?
const fs = require('fs'),
    archiver = require('archiver'),
    streamBuffers = require('stream-buffers');

let outputStreamBuffer = new streamBuffers.WritableStreamBuffer({
  initialSize: (1000 * 1024),    // start at 1000 kilobytes.
  incrementAmount: (1000 * 1024) // grow by 1000 kilobytes each time buffer overflows.
});

let archive = archiver('zip', {
  zlib: { level: 9 } // Sets the compression level.
});

archive.pipe(outputStreamBuffer);
archive.append("this is a test", { name: "test.txt" });
archive.finalize();

outputStreamBuffer.end();
fs.writeFile('output.zip', outputStreamBuffer.getContents(), function() {
  console.log('done!');
});
In your updated example, I think you are trying to get the contents before they have been written.
Hook into the finish event and get the contents there:
outputStreamBuffer.on('finish', () => {
  // Do something with the contents here
  outputStreamBuffer.getContents();
});
A Buffer is not a stream; you need something like https://www.npmjs.com/package/stream-buffers
As for why you are seeing garbage: what you are seeing is the zipped data, which looks like garbage.
To verify that the zipping has worked, you probably want to unzip it again and check that the output matches the input.
Adding an event listener on the archiver works for me:
archive.on('finish', function() {
  outputStreamBuffer.end();
  // write your file
});

How to ensure default data in NeDB?

I'm trying to use NeDB as storage for my data in a node-webkit application. I have a single collection named config.db:
var Datastore = require('nedb')
  , path = require('path')
  , db = new Datastore({ filename: path.join(require('nw.gui').App.dataPath, 'config.db') });
When the user opens the node-webkit application for the first time, config.db should have default data like:
{
  color: "red",
  font: 'bold'
  ...
}
Does NeDB have an option for providing default data if there is none yet? Or what is the best way to save it if config.db is empty (i.e. the user opened the node-webkit application for the first time)?
As far as I know, NeDB does not have an option to create initial data.
I think the easiest way to achieve this is to simply query whether there is data: if counting the documents returns 0, the initial data obviously has not yet been saved, so you should do it now.
If you include this check in the startup code of your application, it will automatically initialize the data on the first run and simply do nothing afterwards.
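A minimal sketch of that count-then-insert check (the function name and callback shape here are made up for illustration; NeDB's count and insert methods are real):

```javascript
// Insert `defaults` only when the datastore is still empty.
// `db` is expected to expose NeDB-style count(query, cb) and insert(doc, cb).
function ensureDefaults(db, defaults, callback) {
  db.count({}, function (err, n) {
    if (err) return callback(err);
    if (n > 0) return callback(null, false); // data already present; do nothing
    db.insert(defaults, function (err) {
      callback(err, !err); // true when the defaults were just written
    });
  });
}
```

Calling `ensureDefaults(db, { color: 'red', font: 'bold' }, cb)` from the startup code would then seed the datastore only on the first run.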
I came across this question while looking for a similar solution. I thought I'd share what I ended up with (this is a module):
var fs = require("fs");

module.exports = function (app) {
  var customizationService = app.service("customization");

  fs.readFile("./db/customization", "utf8", function (err, data) {
    if (err) {
      return console.log(err);
    }
    if (data) {
      // Sweet, carry on
    } else {
      var customOptions = {
        SiteTitle: "VendoMarket",
        SiteTagline: "The freshest eCommerce platform around"
      };
      // Save data to the locations service
      customizationService.create(customOptions);
    }
  });
};
And then in my app.js file:
//--------------------------------------
// Initialize
//--------------------------------------
var vendoInit = require("./src/init");
vendoInit(app);
(My app.js file is at the base of my project, src is a folder next to it)

node.js retrieve http csv file and load into mongoose

I'm very new to coding in general, so I apologize ahead of time if this question is rather obvious. Here's what I'm looking to do, followed by the code I've used so far.
I'm trying to get gzipped CSV rank data from a website and store it in a database for a clan website that I'm developing. Once I get this figured out, I'll need to grab the data once every 5 minutes. I've been able to grab the CSV data, although it's currently stored in a text file and I need to store it in MongoDB instead.
Here's my code:
var DB = require('../modules/db-settings.js');
var http = require('http');
var zlib = require('zlib');
var fs = require('fs');
var mongoose = require('mongoose');

var db = mongoose.createConnection(DB.host, DB.database, DB.port, {user: DB.user, pass: DB.password});

var request = http.get({
  host: 'www.earthempires.com',
  path: '/ranks_feed?apicode=myapicode',
  port: 80,
  headers: { 'accept-encoding': 'gzip' }
});

request.on('response', function(response) {
  var output = fs.createWriteStream('./output');
  switch (response.headers['content-encoding']) {
    // or, just use zlib.createUnzip() to handle both cases
    case 'gzip':
      response.pipe(zlib.createGunzip()).pipe(output);
      break;
    default:
      response.pipe(output);
      break;
  }
});
});
db.on('error', console.error.bind(console, 'connection error:'));
db.once('open', function callback() {
  var rankSchema = new mongoose.Schema({
    serverid: Number,
    resetid: Number,
    rank: Number,
    countryNumber: Number,
    name: String,
    land: Number,
    networth: Number,
    tag: String,
    gov: String,
    gdi: Boolean,
    protection: Boolean,
    vacation: Boolean,
    alive: Boolean,
    deleted: Boolean
  });
});
Here's an example of what the CSV will look like (first 5 lines of the file):
9,386,1,451,Super Kancheong Style,22586,318793803,LaF,D,1,0,0,1,0
9,386,2,119,Storm of Swords,25365,293053897,LaF,D,1,0,0,1,0
9,386,3,33,eug gave it to mak gangnam style,43501,212637806,LaF,H,1,0,0,1,0
9,386,4,128,Justpickupgirlsdotcom,22628,201606479,LaF,H,1,0,0,1,0
9,386,5,300,One and Done,22100,196130870,LaF,H,1,0,0,1,0
Hope it's not too late to help, but here's what I'd do:
1. Request the CSV formatted data and store it in memory or a file.
2. Parse the CSV data to convert each row into an object.
3. For each object, use Model.create() to create your new entry.
First, you need to create a model from your Schema:
var Rank = db.model('Rank', rankSchema);
Then you can parse your block of CSV text (whether you read it from a file or parse it directly from the response is up to you). I created my own bogus data variable since I don't have access to the API, but as long as your data is a newline-delimited block of CSV text, this should work:
/* Data is just a block of CSV formatted text. This can be read from a file
   or retrieved right in the response. */
var data = '' +
  '9,386,1,451,Super Kancheong Style,22586,318793803,LaF,D,1,0,0,1,0\n' +
  '9,386,2,119,Storm of Swords,25365,293053897,LaF,D,1,0,0,1,0\n' +
  '9,386,3,33,eug gave it to mak gangnam style,43501,212637806,LaF,H,1,0,0,1,0\n' +
  '9,386,4,128,Justpickupgirlsdotcom,22628,201606479,LaF,H,1,0,0,1,0\n' +
  '9,386,5,300,One and Done,22100,196130870,LaF,H,1,0,0,1,0\n';

data = data.split('\n');

data.forEach(function(line) {
  line = line.split(',');
  if (line.length != 14)
    return;

  /* Create an object representation of our CSV data. */
  var new_rank = {
    serverid: line[0],
    resetid: line[1],
    rank: line[2],
    countryNumber: line[3],
    name: line[4],
    land: line[5],
    networth: line[6],
    tag: line[7],
    gov: line[8],
    gdi: line[9],
    protection: line[10],
    vacation: line[11],
    alive: line[12],
    deleted: line[13]
  };

  /* Store the new entry in MongoDB. */
  Rank.create(new_rank, function(err, rank) {
    console.log('Created new rank!', rank);
  });
});
You could put this in a script and run it every 5 minutes using a cron job. On my Mac, I'd edit my cron file with crontab -e and set up a job with a line like this:
*/5 * * * * /path/to/node /path/to/script.js > /dev/null
