I am using the Bunyan module for NodeJS logging. When I try using the rotating-file type, it makes my app crash every time and outputs this error:
Error: ENOENT, rename 'logs/info.log.3'
However, it never happens at the same time so I can't find any logic...
This is how I instantiate my logger:
var log = Bunyan.createLogger(config.log.config);
log.info('App started, ' + process.env.NODE_ENV);
And here is my config.json (the low period is just for testing purposes):
{
  "name": "app",
  "streams": [
    {
      "type": "rotating-file",
      "period": "5000ms",
      "count": 12,
      "level": "info",
      "path": "logs/info.log"
    },
    {
      "type": "rotating-file",
      "period": "5000ms",
      "count": 12,
      "level": "error",
      "path": "logs/error.log"
    },
    {
      "type": "rotating-file",
      "period": "5000ms",
      "count": 12,
      "level": "trace",
      "path": "logs/trace.log"
    }
  ]
}
Can anyone advise how to fix my issue? Thanks in advance.
What I have just done (last night actually) to get around this problem of a master + workers contending over a Bunyan rotating-file is to have the workers write "raw" log records to a stream-like object I created called a WorkerStream. The write method of the WorkerStream simply calls process.send to use IPC to deliver the log record to the master. The master uses a different logger config that points to a rotating-file. The master uses the code shown below to listen for log records from its workers and write them to the log file. So far it appears to be working perfectly.
cluster.on('online', function (worker) {
    // New worker has come online.
    worker.on('message', function (msg) {
        // Watch for log records from this worker and write them
        // to the real rotating log file.
        if (msg.level) {
            log._emit(msg);
        }
    });
});
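For reference, here is a minimal sketch of what the worker side might look like; I'm reconstructing the WorkerStream from the description above, so the details and the 'app' logger name are placeholders rather than the exact code:
var bunyan = require('bunyan');

// Minimal sketch: Bunyan calls write() with a plain record object (because the
// stream is registered with type 'raw'), and we forward the record to the
// master over IPC.
function WorkerStream() {}

WorkerStream.prototype.write = function (rec) {
    process.send(rec); // deliver the log record to the master
};

var log = bunyan.createLogger({
    name: 'app', // placeholder name
    streams: [{
        type: 'raw',
        stream: new WorkerStream(),
        level: 'info'
    }]
});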
ln is your friend.
Existing logging libraries have rotation problems with the cluster module. Why doesn't ln have this issue?
Both bunyan and log4js rename the log file on rotation. The disaster happens on file renaming in a cluster environment because the same file gets renamed twice.
bunyan suggests using the process id as part of the filename to tackle this issue. However, this will generate too many files.
log4js provides a multiprocess appender and lets the master log everything. However, this can become a bottleneck.
To solve this, I just use fs.createWriteStream(name, {"flags": "a"}) to create a formatted log file at the beginning, instead of calling fs.rename at the end. I tested this approach with millisecond rotation in a cluster environment and no disasters occurred.
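As a rough illustration of that idea (this is not ln's actual source, and the file-naming scheme below is just an example), each period's file is opened under its final, date-stamped name in append mode, so nothing ever needs to be renamed and multiple cluster workers can safely open the same file:
const fs = require('fs');

// Open the current period's log under its final name with the append flag.
// Processes appending to the same file never need to rename anything.
function openCurrentLogStream(basename) {
    const stamp = new Date().toISOString().slice(0, 10); // e.g. 2019-03-11 (example format)
    return fs.createWriteStream(basename + '.' + stamp + '.log', { flags: 'a' });
}

let out = openCurrentLogStream('logs/info');
out.write(JSON.stringify({ level: 30, msg: 'hello' }) + '\n');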
I have experienced the same issue without using clustering. I believe the problem is caused by old files sitting in the log directory. While the main logger can open and append to existing files, the file rotation logic uses rename, which fails when it steps on an existing file (e.g. an existing info.log.3).
I'm still digging into the source to figure out what needs to be changed to recover from the rolling error.
One additional thought as I review the source. If you have multiple Bunyan log instances that use the same log file (in my case, a common error.log), the rename calls could be happening nearly concurrently from the OS level (asynchronous and separate calls from a Node.js perspective, but concurrently from the OS perspective).
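For example, a setup like the following (illustrative, not taken from the question) creates two independent rotators against the same file; each schedules its own rename of logs/error.log, and whichever fires second hits ENOENT:
const bunyan = require('bunyan');

// Two separate loggers, each with its own rotating-file stream on the SAME path.
const logA = bunyan.createLogger({
    name: 'moduleA',
    streams: [{ type: 'rotating-file', path: 'logs/error.log', level: 'error', period: '1d', count: 12 }]
});

const logB = bunyan.createLogger({
    name: 'moduleB',
    streams: [{ type: 'rotating-file', path: 'logs/error.log', level: 'error', period: '1d', count: 12 }]
});
// Both streams will try to rename logs/error.log at rotation time.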
It's sadly not possible to use multiple rotating file streams against the same file.
If you're in the same process, you must use a single logger object - make sure you're not creating multiple of them.
If you're working across processes, you must log to different files. Unfortunately there's nothing yet that has the IPC in place to allow different rotators to coordinate amongst themselves.
I have a plugin rotating file stream that detects if you try to create two rotators against the same file in a single process and throws an error.
It can't help in the case of multiple processes, though.
bunyan-rotating-file-stream
From my experience, it happens sometimes when the logs directory (or whatever you named it) does not exist.
If you are running into this error in an automation pipeline, for example, you may be git-ignoring all the files in logs and committing the directory empty; since Git does not track empty directories, it is not created when the repository is cloned by the pipeline.
Simply make sure that logs is created by placing a .gitkeep file inside it (or any other trick).
This may be the case of many of you who come across this question.
Related
I'm making a Discord bot that queues two people together for a game. It does this by keeping each user's Discord Id, queue status, and opponent in a JSON file, which looks like this for each user:
{
  "discordId": "296062947329966080",
  "dateAdded": "2019-03-11T02:34:01.303Z",
  "queueStatus": "notQueuing",
  "opponent": null
},
When one person queues up with a command, it sets their "queueStatus" to queuing, and when another person whose status is queuing is found, it sets each one's opponent to the other and tells both users that they are opponents. The problem is that the JSON file randomly gets corrupted while being changed, and something like this ends up at the bottom:
"dateAdded": "2019-03-11T02:34:01.303Z",
"queueStatus": "notQueuing",
"opponent": null
}
]
}537"
}
]
}
My only idea is that two people doing this at the same time causes two writes to the file at once, which corrupts it, and that fs.writeFileSync would fix it. But if I use fs.writeFileSync, the entire rest of the Discord bot pauses and stops working until the write is done, which isn't a very practical solution.
The data being stored in the JSON file should be migrated to MongoDB or another DB. CRUD operations on a single static file from multiple jobs/sources are not a scalable solution. Migrating this data storage to a database will resolve these pauses and stoppages.
Check out this video on YouTube by freecodecamp.org
However, if the JSON file is required or still preferred, I would recommend using an EventEmitter to create a single queue that serializes reads and writes.
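As a rough sketch of that idea (the file name, data shape, and helper names here are assumptions, not the bot's actual code), every change goes through one queue so only a single read-modify-write of the JSON file is in flight at a time, without blocking the rest of the bot:
const fs = require('fs');
const { EventEmitter } = require('events');

class FileQueue extends EventEmitter {
    constructor(path) {
        super();
        this.path = path;
        this.busy = false;
        this.jobs = [];
        this.on('job', () => this._drain());
    }

    // Queue an update; `mutate` receives the parsed JSON and returns the new value.
    update(mutate) {
        this.jobs.push(mutate);
        this.emit('job');
    }

    _drain() {
        if (this.busy || this.jobs.length === 0) return;
        this.busy = true;
        const mutate = this.jobs.shift();
        fs.readFile(this.path, 'utf8', (err, text) => {
            const data = err ? [] : JSON.parse(text);
            fs.writeFile(this.path, JSON.stringify(mutate(data), null, 2), () => {
                this.busy = false;
                this._drain(); // handle the next queued update, if any
            });
        });
    }
}

// Usage (hypothetical file and structure): mark a user as queuing without
// risking two concurrent writes to the same file.
const queue = new FileQueue('users.json');
queue.update(users => {
    users.find(u => u.discordId === '296062947329966080').queueStatus = 'queuing';
    return users;
});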
I am developing an express project which will have multiple modules/services in it. The folder structure looks mostly like this:
-- app.js
-- payment_service
   -- routes.js
   -- index.js
   -- models
      -- model_1.js
      -- model_2.js
APIs in index.js are the only exposed APIs and they work as a gateway for all requests coming for this module/service.
Most of the services can throw operational errors under many circumstances, and manual intervention may be needed to fix things. So I need to:
Log errors with proper context so that a person or script can do the needful.
Figure out the reason for the failure.
There will be dedicated teams owning each service, so I should be able to differentiate between the error logs of each service so that they can be aggregated and forwarded to the concerned person.
I decided to go with the ELK stack so that I can generate reports by script.
The main problem that I am facing is that I can't maintain correlation between logs. For example, if a request comes in and travels through five functions, and each function logs something, I can't relate those logs.
One way is to create a child logger for each request and pass it to all the functions, but passing a logger instance to every function seems like extra overhead.
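For illustration, this is roughly what I mean by the child-logger option (the middleware and names below are just examples, not my actual code):
const express = require('express');
const bunyan = require('bunyan');
const { randomUUID } = require('crypto');

const app = express();
const baseLog = bunyan.createLogger({ name: 'payment_service' });

// Attach a child logger carrying a correlation id to every incoming request.
app.use((req, res, next) => {
    req.log = baseLog.child({ reqId: randomUUID() });
    next();
});

// Every downstream function has to receive the logger (or the request) so that
// all of its log lines carry the same reqId and can be correlated later in ELK.
function chargeCard(order, log) {
    log.info({ orderId: order.id }, 'charging card');
}

app.post('/payments', (req, res) => {
    req.log.info('payment request received');
    chargeCard({ id: 42 }, req.log);
    res.sendStatus(202);
});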
Another option is to use something like verror and do the logging only at the entry point of the service/module so that the whole context can be contained in the log. This approach looks OK for logging errors, but it can't help with info and debug logs, which help me a lot during development and testing.
For the sake of differentiating between error logs, I am going to create
A dedicated logger for each service with log level error.
An application wide generic logger for info and debug purpose.
Is this the correct approach?
What will be the best way so that I can achieve all the requirements in simplest way?
I'd recommend you use a logger and you don't need anything too complex. For example:
npm install 12factor-log
Then create a file in your root folder near app.js (or in a /lib folder, which is where I'd place libraries):
logger.js
const Log = require('12factor-log');

module.exports = (params) => {
  return new Log(params);
};
Then in your modules, import your logger and pass in the module name when you instantiate it so you can track where statements come from...
model_1.js
var log = require('./logger')({name: 'model_1'});

// ...

log.info("Something happened here");

// ...

try {
  // ...
} catch (error) {
  const message = `Error doing x, y, z with value ${val}`;
  log.error(message);
  throw new Error(message);
}
Then handle the error gracefully at your controller -> view layer for a user-friendly experience.
Your logs would print something like this:
{"ts":"2018-04-27T16:37:24.914Z","msg":"Something happened here","name":"model_1","type":"info","level":3,"hostname":"localhost","pid":18}
As for correlating logs: as you can see in the output above, each entry includes the hostname of the machine it's running on, as well as the module name and severity level. You can import this JSON into Logstash and load it into Elasticsearch, which will store the JSON for easy searching and indexing.
See: https://www.elastic.co/guide/en/logstash/current/plugins-filters-json.html
Logging is complex and many people have worked on it. I would suggest not doing so yourself.
So, not following my own advice, I created my own logging package:
https://www.npmjs.com/package/woveon-logger
npm install woveon-logger
This prints file and line numbers of errors and messages, has logging levels and aspect-oriented logging, and can dump a stack trace in one call. It even has color coding options. If you get stuck and need some feature in logging, let me know.
let log1 = new Logger('log1', {level: 'info', debug: true, showname: true});
let log2 = new Logger('log2', {level: 'verbose', debug: true, showname: true});
...
log1.info('Here is a log message, that is on line 23.');
log1.verbose('Does not show');
log2.verbose('Shows because log2 is verbose logging');
log2.setAspect('IO', true);
log2.aspect('IO', 'Showing aspect IO logging, for logs for IO related operations');
[2018-06-10T10:43:20.692Z] [INFO--] [log1 ] [path/to/myfile:23] Here is a log message, that is on line 23.
[2018-06-10T10:43:20.792Z] [VERBOS] [log2 ] [path/to/myfile:25] Shows because log2 is verbose logging
[2018-06-10T10:43:20.892Z] [IO----] [log2 ] [path/to/myfile:27] Showing aspect IO logging, for logs for IO related operations
Also, some other features like:
log1.throwError('Logs this as both a line of logging, and throws the error with the same message');
log1.printStack('Prints this label next to the stack trace.');
Hope it helps!
You can use the grackle_tracking library: https://www.getgrackle.com/analytics_and_tracking
It logs errors & traffic to your db.
I am working on a very old Node.js application which creates a new child process using forever-monitor. The logs of this child process are handled by forever-monitor only. This is what the configuration looks like:
var child = new (forever.Monitor)(__dirname + '/../lib/childprocess.js', {
    max: 3,
    silent: true,
    options: [program.port],
    'errFile': __dirname + '/../childprocess_error.log',
    'outFile': __dirname + '/../childprocess_output.log'
});
Everything is working fine in this setup. The new requirement is to rotate these logs every 12 hours. That is, every 12 hours a new file should be created containing all the content of childprocess_output.log, and it should be stored in some other directory. The new log file will obviously have a timestamp appended to its name (e.g. childprocess_output_1239484034.log).
And the original file childprocess_output.log should be reset, that is, all its content should be deleted so it starts logging afresh.
I am trying to work out which npm library I should use for this purpose. I googled a bit and found a few npm libraries that match my requirement, but their download counts were really small, so I doubt their reliability.
Which library do Node.js developers use for log rotation?
Also, my last resort would be to use the Linux tool Logrotate if I can't find an appropriate library in Node. I am avoiding Logrotate because I want my application to handle the scenario and not depend on the instance configuration.
You can use the built-in fs (file system) module, with methods like statSync and renameSync wrapped in try/catch blocks.
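A rough sketch of that approach (the paths, file names, and interval below are only examples, not a drop-in solution):
const fs = require('fs');
const path = require('path');

const LOG = path.join(__dirname, 'childprocess_output.log');
const ARCHIVE_DIR = path.join(__dirname, 'archive');

function rotate() {
    try {
        fs.statSync(LOG); // throws if there is nothing to rotate yet
        const archived = path.join(ARCHIVE_DIR, 'childprocess_output_' + Date.now() + '.log');
        fs.renameSync(LOG, archived);
        // Note: forever-monitor keeps its handle on the renamed file until the
        // child restarts, so a true "reset" of the live file may need
        // copyFileSync + truncateSync instead of a rename.
    } catch (err) {
        console.error('log rotation skipped:', err.message);
    }
}

setInterval(rotate, 12 * 60 * 60 * 1000); // every 12 hours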
I have a basic service check in a Puppet manifest that I want running most of the time, with just:
service { $service_name:
  ensure => "running",
  enable => "true",
}
The thing is, there are periods of maintenance during which I would like to ensure Puppet doesn't come along and try to start it back up.
I was thinking of creating a file "no_service_start" in a specified path and doing a 'creates' check, like you can with a guard for exec, but it doesn't look like that's available for the service type.
My next thought was to have the actual service init script do the check for this file itself and just die early if that guard file exists.
While this works in that it prevents the service from starting, it shows up as a big red error in Puppet (as expected). Given that the service not starting is the desired outcome when that file is in place, I'd rather not have an error message present and have to spend time figuring out whether it's "legit" or not.
Is there a more "puppet" way this should be implemented though?
Define a fact for when maintenance is happening.
Then put the service definition in an if block based off that fact.
if !$maintenance {
  service { $service_name:
    ensure => "running",
    enable => "true",
  }
}
Then, when Puppet compiles the catalog, if maintenance is true the service will not be managed and will stay in whatever state it currently happens to be in.
I don't really like this answer, but a way to work around Puppet spitting out errors when bailing because of a guard file is to have the init script that's doing that check bail with an exit code of 0.
How about putting the check outside? You could do something similar to this: https://stackoverflow.com/a/20552751/1097483, except with your service check inside the if block.
As xiankai said, you can do this on the puppetmaster. If you have a script that returns running or stopped as a string, depending on the current time or anything, you can write something like:
service { $service_name:
  ensure => generate('/usr/local/bin/maintenanceScript.sh'),
}
I am running play on multiple machines in our datacenter. We loadbalance the hell out of everything. On each play node/VM I'm using Apache and an init.d/play script to start and stop the play service.
The problem is that our play websites are hosted on shared network storage. This makes deployment really nice, you deploy to one place and the website is updated on all 100 machines. Each machine has a mapped folder "/z/www/PlayApp1" where the play app lives.
The issue is that when the service starts or stops, the server.pid file is written to that network location where the app's files live.
The problem is that as I bring up 100 nodes, the 100th node will overwrite the PID file with its pid, and then that pid file only holds the correct process ID for 1 of the 100 nodes.
So how do I get Play to store the pid file locally and not with the app files on the network share? I need each server's PID file to reflect that machine's actual process.
We are using CentOS (Linux)
Thanks in advance
Josh
According to https://github.com/playframework/play/pull/43 it looks like there is a --pid_file command line option; it might only work with paths under the application root so you might have to make directories for each distinct host (which could possibly be symlinks)
I have 0 experience with Play so hopefully this is helpful information.
I don't even think it should run a second copy, based on the current source code. The main function is:
public static void main(String[] args) throws Exception {
    File root = new File(System.getProperty("application.path"));
    if (System.getProperty("precompiled", "false").equals("true")) {
        Play.usePrecompiled = true;
    }
    if (System.getProperty("writepid", "false").equals("true")) {
        writePID(root);
    }
    :
    blah blah blah
}
and writePID is:
private static void writePID(File root) {
    String pid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0];
    File pidfile = new File(root, PID_FILE);
    if (pidfile.exists()) {
        throw new RuntimeException("The " + PID_FILE + " already exists. Is the server already running?");
    }
    IO.write(pid.getBytes(), pidfile);
}
meaning it should throw an exception when you try to run multiple copies using the same application.path.
So either you're not using the version I'm looking at or you're discussing something else.
It seems to me it would be a simple matter to change that one line above:
File root = new File(System.getProperty("application.path"));
to use a different property for the PID file storage, one that's not on the shared drive.
Although you'd need to be careful: root is also passed to Play.init, so you should investigate the impact of changing it.
This is, after all, one of the great advantages of open source software, inasmuch as you can fix the "bugs" yourself.
For what it's worth, I'm not a big fan of the method you've chosen for deployment. Yes, it simplifies deployment but upgrading your servers is an all-or-nothing thing which will cause you grief if you accidentally install some dodgy software.
I much prefer staged deployments so I can shut down non-performing nodes as needed.
Change your init script to write the pid to /tmp or somewhere else machine-local.
If that is hard, a symlink might work.