log4js - node package - not writing to file - node.js

I am a novice to Node.js and wanted my node app to log to a file as well as the console.
Here is my code structure:
fpa_log4j.json:
{
  "appenders": {
    "fpa_file": {
      "type": "file",
      "filename": "fpa.log",
      "maxLogSize": 10485760,
      "backups": 10,
      "compress": true
    },
    "screen": {
      "type": "stdout",
      "layout": {
        "type": "coloured"
      }
    }
  },
  "categories": {
    "default": {
      "appenders": [
        "fpa_file",
        "screen"
      ],
      "level": "trace"
    }
  }
}
config.js
var LOG4J_CFG = 'config/fpa_log4j.json';
var fs = require('fs');
var log4js = require('log4js');
var path = require('path');
LOG4J_CFG = path.resolve(LOG4J_CFG);
log4js.configure(JSON.parse(fs.readFileSync(LOG4J_CFG, 'utf8')));
....
....
module.exports = {
  log4js,
  ....
  ....
}
app.js
var cfg = require('./config');
var log = cfg.log4js.getLogger("appMain");
log.info('FPA program STARTED!');
OUTPUT
[2017-12-04T03:20:17.791] [INFO] appMain - FPA program STARTED!
However the log file seems to be empty:
dir fpa.log
Volume in drive C is XXXXXXXXX
Volume Serial Number is XXXXXXXXXXXX
Directory of c:\fpaMain
12/04/2017 12:13 AM 0 fpa.log
1 File(s) 0 bytes
0 Dir(s) 12,242,776,064 bytes free
To be frank, I was able to make this work a few days back. But then something changed (alas, I am not able to recollect what!), and logging to the file stopped.

Ok. So I figured it out finally. Sharing it here for the sake of others. Here it comes:
There were a few more lines in app.js (removed above for clarity), and the app was in fact crashing immediately after the log.info() statement. I was expecting the statement to appear in the log file just as it appeared on the console. What I failed to realize is that, unlike many other languages, Node.js does its I/O asynchronously, in a non-blocking way (which, in fact, was one of the core reasons for my love towards it :)).
So Node.js fired the log statement and went on processing the next statements without waiting for the file write to finish. Writing to the console is much faster than writing to the file system (for obvious reasons!), so the message showed up on the screen right away. But the app crashed before the file appender could flush to disk, so nothing ended up in the file.
Lesson learned:
Always remember that Node.js I/O is asynchronous (non-blocking).
For any IO-related issue like the above, ensure that the program runs to completion, or at least stays alive long enough before it crashes,
thus providing enough time for the IO to complete.
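If you run into the same situation, one way to make sure buffered lines actually reach the file is to flush log4js before the process dies. A minimal sketch (assuming log4js v2+, where log4js.shutdown() takes a callback, and that the logger has already been configured as in config.js above):
var log4js = require('log4js');
var log = log4js.getLogger('appMain');

// Log otherwise-fatal errors and exit only after the file appender
// has finished writing.
process.on('uncaughtException', function (err) {
  log.fatal('App crashed:', err);
  // log4js.shutdown() waits for all appenders to flush before calling back.
  log4js.shutdown(function () {
    process.exit(1);
  });
});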

Related

Curl command is not processing changes I make to file

I am trying to test parsing of a zip file in node.js using curl from the command line. Originally, I had a route that looks like this:
app.post('/processZip', (req, res) => {
  const zip = req.file
  console.log(req)
  extractCSVFilesFromZip(zip, '/tmp/connections', '/tmp/messages')
  const connectionsOutputPath = '/tmp/connections'
  const messagesOutputPath = '/tmp/messages'
  console.log(`Size of Parsed Connections File: ${connectionsOutputPath.size}`)
  console.log(`Size of Parsed Messages File: ${messagesOutputPath.size}`)
  res.send('success!')
})
which calls a function that looks like this:
const extractCSVFilesFromZip = (zipFilePath, connectionsCSVOutputPath, messagesCSVOutputPath) => {
  console.log(zipFilePath)
  fs.createReadStream(zipFilePath)
    .pipe(unzip.Parse())
    .on('entry', entry => {
      const [
        fileName,
        size
      ] = [
        entry.path,
        entry.size
      ]
      if (fileName === 'Connections.csv') {
        console.log(`Size of Connections File to Parse: ${size}`)
        entry.pipe(fs.createWriteStream(connectionsCSVOutputPath))
      } else if (fileName === 'Messages.csv') {
        console.log(`Size of Messages File to Parse: ${size}`)
        entry.pipe(fs.createWriteStream(messagesCSVOutputPath))
      } else {
        entry.autodrain()
      }
    })
}
I am using this curl command to test the request:
curl -F file=#../../../Downloads/Basic_LinkedInDataExport_09-14-2018.zip http://localhost:5000/processZip/
Originally, it gave me an error pointing to the first instance of createReadStream in the function, so I commented out all the code and just tried to console.log(zipFilePath) to see what is being sent. But I still get the same error. In fact, I can comment out, remove, or change any of the code in either the route or the file, but it makes no difference. I still get the same error. It's as if curl is still sending the request to a cached version of the files and not processing the changes I am making. But if I examine the files from the command line with sudo nano, I can see the updated versions. What could be causing this issue? I have saved the files and restarted the server each time. Could it be that I need to wait longer than usual for the changes to be processed because it is a larger codebase than I am used to working in, or is something else to blame? For what it is worth, the servers are being run by forever. Thanks in advance for any help!
Okay, I figured it out: there was a ghost process still running on port 5000. killall -9 node did the trick!

Is there an alternative to requiring common node modules besides globals?

Getting tired of typing
const async = require('async');
const _ = require('lodash');
at the head of almost every JS file.
One could use globals, good for ease of use, bad for unit tests.
Is there an alternative that I'm missing? If I can do a require('common') to load the utilities I want and use them in the current file, that would be best.
Well then do that - create a common.js file and put all the stuff in there, then simply require whatever you need in a single statement using the destructuring assignment.
Example
common.js
module.exports = {
  fs: require('fs'),
  http: require('http')
  // ... whatever else you want
};
main.js
const { fs, http } = require('./common.js');
Note
This was just an example to show you how to achieve your desired behaviour. But I would not recommend using it, as it obscures what you're actually loading and brings in an unnecessary dependency just to save a few statements.
Wow, there's this amazing thing called keyboard snippets which completely saves one from typing those redundant characters over and over, without the need for compromising the integrity of the code.
VSCode
"debug require": {
"prefix": "rede",
"body": [
"const debug = require('debug')('$1');$0"
]
},
"lodash require": {
"prefix": "relo",
"body": [
"const _ = require('lodash');$0"
]
},
"async require": {
"prefix": "reas",
"body": [
"const async = require('async');$0"
]
},

advantage of creating node server in background

Until today, in all my Node/Express projects, I have created the HTTP server in some file
var server = http.createServer(app);
http.globalAgent.maxSockets = maxSockets;
server.listen(port, function() {
  logger.info('App starting on : ' + port);
});
and called that file directly to start the app. Recently I have been noticing some boilerplates that use the approach of calling a starter file which, based on arguments, spawns a child process, be it for building the app or starting it.
in package.json
"start": "babel-node tools/run.js"
in run.js
// Launch `node build/server.js` on a background thread
function spawnServer() {
  return cp.spawn(
    'node',
    [
      // Pre-load application dependencies to improve "hot reload" restart time
      ...Object.keys(pkg.dependencies).reduce(
        (requires, val) => requires.concat(['--require', val]),
        [],
      ),
      // If the parent Node.js process is running in debug (inspect) mode,
      // launch a debugger for Express.js app on the next port
      ...process.execArgv.map(arg => {
        if (arg.startsWith('--inspect')) {
          const match = arg.match(/^--inspect=(\S+:|)(\d+)$/);
          if (match) debugPort = Number(match[2]) + 1;
          return `--inspect=${match ? match[1] : '0.0.0.0:'}${debugPort}`;
        }
        return arg;
      }),
      '--no-lazy',
      // Enable "hot reload", it only works when debugger is off
      ...(isDebug
        ? ['./server.js']
        : [
            '--eval',
            'process.stdin.on("data", data => { if (data.toString() === "load") require("./server.js"); });',
          ]),
    ],
    { cwd: './build', stdio: ['pipe', 'inherit', 'inherit'], timeout: 3000 },
  );
}
e.g. https://github.com/kriasoft/nodejs-api-starter/
How is this advantageous?
In my experience this is not a widespread practice. Based on the comments, it appears they're doing it in order to configure command-line options based on the environment and so on. To be honest, this seems a bit counterproductive to me.
What I've seen far more often is to start node from the command line with just npm start. package.json has several options for defining what that will do, but most often I would just have it call something simple like node server.js or similar. If you have multiple options for starting that you want to offer (for example, turning on the debug flags and such), then just add more scripts and call npm run <scriptname> to make it happen.
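For example, a minimal sketch of that kind of scripts section (the script names and server.js entry point here are just illustrative, and the NODE_ENV prefix assumes a Unix-like shell):
"scripts": {
  "start": "node server.js",
  "start:debug": "node --inspect server.js",
  "start:prod": "NODE_ENV=production node server.js"
}
Then npm start runs the default, and npm run start:debug or npm run start:prod picks one of the variants.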
Any other special sauce would be baked into the process manager of choice (systemd is my preference, but others like pm2 exist and work well), and between that and environment variables you can do all or most of what's in the above script without all the indirection. I feel like the example you posted would really up the 'wtf-factor' if I were starting to maintain the app and didn't know that it was doing things like that for me under the hood.

Run a gulp task on multiple sets of files

I have a gulp task that I would like to run on multiple sets of files. My problem is pretty much similar to what is described here except that I define my sets of files in an extra config.
What I've come up with so far looks like the following:
config.json
{
  "files": {
    "mainScript": [
      "mainFileA.js",
      "mainFileB.js"
    ],
    "extraAdminScript": [
      "extraFileA.js",
      "extraFileB.js"
    ]
  }
}
gulpfile.js
var config = require('./config.json');
...
gulp.task('scripts', function() {
  var features = [],
      dest = (argv.production ? config.basePath.compile : config.basePath.build) + '/scripts/';
  for (var feature in config.files) {
    if (config.files.hasOwnProperty(feature)) {
      features.push(gulp.src(config.files[feature])
        .pipe(plumper({
          errorHandler: onError
        }))
        .pipe(jshint(config.jshintOptions))
        .pipe(jshint.reporter('jshint-stylish'))
        .pipe(sourcemaps.init())
        .pipe(concat(feature + '.js'))
        .pipe(gulpif(argv.production, uglify()))
        .pipe(sourcemaps.write('.'))
        .pipe(gulp.dest(dest))
      );
    }
  }
  return mergeStream(features);
});
My problem is that this doesn't seem to work. The streams are not combined, or at least nothing really happens. A while ago others ran into a similar problem (see here), but even though it should have been fixed, it's not working for me.
By the way I've also tested merging the streams in this way:
return es.merge(features)
return es.merge.apply(null, features)
And if I just run the task on a single set of files it works fine.
Motivation
The reason why I want to do this is that at some point concatenating and minifying ALL scripts into one final file doesn't make sense when the sheer number of files is too large. Also, sometimes there is no need to load everything at once. For example, the scripts related to an admin interface don't need to be loaded by every visitor.

In grunt, if I'm watching multiple files and two or more change, how can I only run tasks on the changed files?

I've got an initConfig with this code in it:
grunt.initConfig({
  pkg: grunt.file.readJSON('package.json'),
  watch: {
    options: {
      spawn: false
    },
    coffee: {
      files: [
        'src/**/*.coffee'
      ],
      tasks: ['coffee', 'coffeelint', 'concat', 'qunit']
    },
    ...
  coffee: {
    glob_to_multiple: {
      expand: true,
      flatten: false,
      cwd: '.',
      src: ['src/**/*.coffee'],
      ext: '.js'
    }
  },
  ...
grunt.event.on('watch', function (action, filepath) {
  if (grunt.file.isMatch("**/*.coffee", filepath)) {
    grunt.config(['coffee', 'glob_to_multiple', 'src'], filepath);
  }
});
This is supposed to compile only the .coffee files that have changed, and it works pretty well. But I just noticed that if I modify multiple files at once, it will output this:
Waiting...src\test\resources\app\js\FILE1.coffee
src\main\resources\app\js\FILE2.coffee
OK
>> File "src\test\resources\app\js\FILE1.coffee" changed.
>> File "src\main\resources\app\js\FILE2.coffee" changed.
Running "coffee:glob_to_multiple" (coffee) task
File src/main/resources/app/js/FILE2.js created.
...
As you can see, I've changed two files, but it's only running the tasks on "FILE2.js". How can I avoid this? I want it to run coffee:glob_to_multiple on FILE1 and FILE2, not just one of them.
NOTE: I'm pretty sure the documentation explains how to do this:
If you save multiple files simultaneously you may opt for a more robust method:
var changedFiles = Object.create(null);
var onChange = grunt.util._.debounce(function() {
  grunt.config(['jshint', 'all'], Object.keys(changedFiles));
  changedFiles = Object.create(null);
}, 200);
grunt.event.on('watch', function(action, filepath) {
  changedFiles[filepath] = action;
  onChange();
});
Following that documentation, I made this change to my code:
var changedFiles = Object.create(null);
var onChange = grunt.util._.debounce(function() {
  grunt.config(['coffee', 'glob_to_multiple', 'src'], Object.keys(changedFiles));
  changedFiles = Object.create(null);
}, 200);
grunt.event.on('watch', function(action, filepath) {
  if (grunt.file.isMatch("**/*.coffee", filepath)) {
    changedFiles[filepath] = action;
    onChange();
  }
});
And things worked exactly the way I want. But I'm not sure how this works. Could someone explain it to me?
It's a pretty sophisticated solution using Lo-Dash debounce ;-) (in a sec...)
Know that when you used your older code of:
grunt.config(['coffee', 'glob_to_multiple', 'src'], filepath);
Grunt is instructed to run the coffee task with the new file. The problem with this is that it's a synchronous process, so when another file is changed (this usually happens within milliseconds) Grunt Watch won't let you run another process until the debounceDelay has passed.
The default debounceDelay is 500 ms, but this can be changed via the watch task's options (read more about options.debounceDelay in the grunt-contrib-watch docs).
Basically, when you save multiple files, as you saw, only one of the changed files gets processed. To get around this, a great utility for delaying (debouncing) a function call is grunt.util._.debounce (a Lo-Dash utility).
The function's parameters are:
_.debounce(func, wait, options)
So it takes in the function, how many ms to wait, and some options (that we don't need here).
When you call the debounced function, it delays the actual execution by the wait time, and that way, when you save multiple files at once, all the calls add up to a single function call after that 200 ms period.
That way, the most useful line here, besides the debounce util, is the following:
changedFiles[filepath] = action;
which adds each file to the (initially empty) changedFiles object. Notice that after the debounced function has run, we reset the changedFiles object so that the next call will contain only freshly changed files.
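To see how the debouncing collapses the events, here is a small standalone sketch (using plain lodash instead of grunt.util._, with made-up file names):
var _ = require('lodash');

var changedFiles = Object.create(null);

// Runs once, 200 ms after the *last* onChange() call.
var flush = _.debounce(function () {
  console.log('changed:', Object.keys(changedFiles));
  changedFiles = Object.create(null);
}, 200);

function onChange(filepath, action) {
  changedFiles[filepath] = action;
  flush();
}

// Two near-simultaneous saves end up in a single call:
onChange('src/FILE1.coffee', 'changed');
onChange('src/FILE2.coffee', 'changed');
// => changed: [ 'src/FILE1.coffee', 'src/FILE2.coffee' ]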
Amazing solution indeed ;-)
