I am using Groovy and Log4J.
I am not a Log4J expert but after searching many sites for answers I thought I had a configuration that should work in the “Config.groovy” file.
Here’s the result:
I get console logging.
However, the log files named “project.log” and “StackTrace.log” are empty.
I also get another file created, named “StackTrace.log.1” (2 KB in size), that contains an exception message (a non-critical error) written after I run the application.
Questions:
Why am I not getting logging messages in the “project.log” and “StackTrace.log” files?
Why is a file named “StackTrace.log.1” getting created and written to instead of the stack trace messages getting logged to the “StackTrace.log” file?
Any help or clues as to what I'm doing wrong will be greatly appreciated.
Here is my “Config.groovy” file (log4j portion):
// log4j configuration
log4j = {
    // Set default level for all, unless overridden below.
    root { debug 'stdout', 'file' }

    // Set level for all application artifacts
    info "grails.app"

    error "org.hibernate.SQL", "org.hibernate.type"
    error 'org.codehaus.groovy.grails.web.servlet',        // controllers
          'org.codehaus.groovy.grails.web.pages',          // GSP
          'org.codehaus.groovy.grails.web.sitemesh',       // layouts
          'org.codehaus.groovy.grails.web.mapping.filter', // URL mapping
          'org.codehaus.groovy.grails.web.mapping',        // URL mapping
          'org.codehaus.groovy.grails.commons',            // core / classloading
          'org.codehaus.groovy.grails.plugins',            // plugins
          'org.codehaus.groovy.grails.orm.hibernate',      // hibernate integration
          'org.springframework',
          'org.hibernate',
          'net.sf.ehcache.hibernate'
    warn 'org.mortbay.log'

    appenders {
        rollingFile name: 'file', file: 'project.log', maxFileSize: 1024, append: true
        rollingFile name: 'stacktrace', file: 'StackTrace.log', maxFileSize: 1024, append: true
    }
}
Yes, it is possible that StackTrace.log.1 was created because the maxFileSize of 1024 was reached and the rolling file appender then rolled the log over. Note that a bare maxFileSize value is interpreted in bytes, so 1024 rolls the file after only 1 KB (log4j also accepts size strings such as '10MB'). On rollover the current StackTrace.log is renamed to StackTrace.log.1 and a fresh, empty StackTrace.log is started, which matches what you are seeing.
I would also begin by removing all the class names listed there, so that the debug level defined in the root closure is applied to all loggers, and work from there.
Azure Application Insights can collect log messages from a Java application via the log4j framework, as shown here:
https://learn.microsoft.com/en-us/azure/azure-monitor/app/java-trace-logs
Is there anything similar for a Node.js application, without using the Azure Node SDK to log messages? I am looking to log messages to App Insights using log4js with only configuration changes.
If you just need to log messages, you can use the log package for Node.js.
Installation:
npm i log
It is a universal logging utility: configurable, environment- and presentation-agnostic, with log levels and namespacing (debug-style) support.
Usage
Writing logs
// Default logger writes at 'info' level.
// Note: declared with `let`, since it is reassigned to namespaced loggers below.
let log = require("log");
// Log 'info' level message:
log("some info message %s", "injected string");
// Get namespaced logger (debug lib style)
log = log.get("my-lib");
// Log 'info' level message in context of 'my-lib' namespace:
log("some info message in 'my-lib' namespace context");
// Namespaces can be nested
log = log.get("func");
// Log 'info' level message in context of 'my-lib:func' namespace:
log("some info message in 'my-lib:func' namespace context");
// Log 'error' level message in context of 'my-lib:func' namespace:
log.error("some error message");
// log output can be dynamically enabled/disabled during runtime
const { restore } = log.error.disable();
log.error("error message not really logged");
// Restore previous log visibility state
restore();
log.error("error message to be logged");
Available log levels
Mirror of applicable syslog levels (in severity order):
debug - debugging information (hidden by default)
info - a purely informational message (hidden by default)
notice - condition normal, but significant
warning (also aliased as warn) - condition warning
error - error condition; used to notify of errors that are accompanied by a recovery mechanism (hence reported as a log and not as an uncaught exception)
Note: critical, alert, and emergency are not exposed, as they do not seem to serve a use case in the context of JS applications; such errors should instead surface as typical exceptions.
Output message formatting
log doesn't force any specific argument handling. Still, it is recommended to assume a printf-like message format, as all currently available writers are set up to support it. Placeholder support mirrors the one implemented in Node.js's format util.
Excerpt from Node.js documentation:
The first argument is a string containing zero or more placeholder tokens. Each placeholder token is replaced with the converted value from the corresponding argument. Supported placeholders are:
%s - String.
%d - Number (integer or floating point value).
%i - Integer.
%f - Floating point value.
%j - JSON. Replaced with the string '[Circular]' if the argument contains circular references.
%o - Object. A string representation of an object with generic JavaScript object formatting. Similar to util.inspect() with options { showHidden: true, depth: 4, showProxy: true }. This will show the full object including non-enumerable symbols and properties.
%O - Object. A string representation of an object with generic JavaScript object formatting. Similar to util.inspect() without options. This will show the full object not including non-enumerable symbols and properties.
%% - single percent sign ('%'). This does not consume an argument.
Note to log writer configuration developers: for cross-env compatibility it is advised to base the implementation on sprintf-kit.
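For example, a small sketch of the printf-style placeholders with this package:

const log = require("log");
// %s formats a string, %d a number, %o an inspected object; %% escapes a literal percent
log("user %s retried %d times (%d%% of budget): %o", "ada", 3, 30, { op: "sync" });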
Enabling log writing
log on its own doesn't write anything to the console or anywhere else; it just emits events to be consumed by preloaded log writers.
To have logs written, the pre-chosen log writer needs to be initialized in the main (starting) module of a process.
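For example, with log-node the initialization might look like this minimal sketch:

// index.js (main module): initialize the writer before anything logs
require("log-node")();
const log = require("log");
log.notice("visible with the default LOG_LEVEL of notice");
log.debug("only visible when e.g. LOG_LEVEL=debug is set");

Run it with e.g. LOG_LEVEL=debug node index.js to expose the lower levels.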
List of available log writers
log-node - For typical Node.js processes
log-aws-lambda - For AWS lambda environment
Note: if some writer is missing, propose a PR
Logs Visibility
Default visibility depends on the environment (see the chosen log writer for more information) and in most cases is set up through the following environment variables:
LOG_LEVEL
(defaults to notice) Lowest log level from which (upwards) all logs will be exposed.
LOG_DEBUG
An optional list of namespaces to expose at levels below the LOG_LEVEL threshold.
The list is comma separated, e.g. foo,-foo:bar (expose all of foo but not foo:bar).
It follows the convention configured within debug. To ease eventual migration from debug, configuration falls back to the DEBUG env var if LOG_DEBUG is not present.
Timestamps logging
Writers are recommended to expose timestamps alongside each log when the following env var is set:
LOG_TIME
rel (default) - Logs time elapsed since logger initialization
abs - Logs absolute time in ISO 8601 format
Tests
$ npm test
My app (locally) raises an ActiveStorage::IntegrityError whenever it tries to attach a file. How can I get past this error?
I have only one has_one_attached and I don't know how that error gets in the way.
# model
has_one_attached :it_file
Tempfile.open do |temp_file|
  # ...
  it_file.attach(io: temp_file, filename: 'filename.csv', content_type: 'text/csv')
end
# storage.yml
local:
  service: Disk
  root: <%= Rails.root.join("storage") %>
EDIT: it may be related to deleting the storage/ directory (it started happening after I deleted it), or to the fact that it happens inside a job (the full error was Error performing ActivityJob (Job ID: .. ) from Async( .. ) in .. ms: ActiveStorage::IntegrityError (ActiveStorage::IntegrityError)).
Also, attaching does not add files to the storage/ folder, but it does generate subfolders under it when I try to attach.
As mentioned in the comments, one reason this can happen is that the file object's read position is at the end of the file, which was the problem in this case: nothing is left to read when the attachment is uploaded, so the integrity check fails. It could be fixed here with temp_file.rewind before attaching.
Very weird. After updating to Rails 6.0 I had to recalculate some checksums. Yes, I use dokku/docker. It was fine before the update.
# Disk service is in use for ActiveStorage
class ProjectImage < ApplicationRecord
  has_one_attached :attachment
end

# update all checksums
ProjectImage.all.each do |image|
  blob = image.attachment.blob
  blob.update_column(:checksum, Digest::MD5.base64digest(File.read(blob.service.path_for(blob.key))))
end
This was happening to me for a reason not mentioned above.
In my case I was defining a single test var with one dummy file, but attaching it to two different records.
let(:file) { File.open(Rails.root.join('spec', 'fixtures', 'files', 'en.yml')) }
let(:data) { [file, file] }
The function in question received a list of ids and data and attached the files to the records. This is a simplified version of the code:
record_0.file.attach(
  io: data[0],
  filename: 'en.yml',
  content_type: 'application/x-yaml'
)
record_1.file.attach(
  io: data[1],
  filename: 'en.yml',
  content_type: 'application/x-yaml'
)
Once I defined two test vars, one for each record, each opening the same file, it worked.
let(:file_0) { File.open(Rails.root.join('spec', 'fixtures', 'files', 'en.yml')) }
let(:file_1) { File.open(Rails.root.join('spec', 'fixtures', 'files', 'en.yml')) }
let(:data) { [file_0, file_1] }
Background
In my case I faced this error when I was upgrading the Rails config defaults.
I was activating settings from config/initializers/new_framework_defaults_6_1.rb, which is generated by the rails app:update task.
Cause
I activated this setting
Rails.application.config.active_storage.track_variants = true
which collided with our existing mechanism for handling variant generation. We read variant sizes/types from account settings, so it's complicated.
Technical cause
As mentioned above, this is caused by a mismatch between the checksum of the file and the checksum stored in the blob record in the database.
# activestorage-6.1.7.1/lib/active_storage/downloader.rb:37
def verify_integrity_of(file, checksum:)
  unless Digest::MD5.file(file).base64digest == checksum
    raise ActiveStorage::IntegrityError
  end
end
Solution
I commented the setting back out.
To make sure it is always inactive, I moved it to config/initializers/active_storage.rb like below:
# Track Active Storage variants in the database.
# *Note*: This is a Rails 6.1 feature, and we have our own way of handling variants,
# which depends on the thumbnail_sizes setting values and varies from account to
# account, so we are disabling it for now.
Rails.application.config.active_storage.track_variants = false
Summary
You may want to use this feature eventually, so look for workarounds.
In your case the cause may be something else, so dig deeper.
Disabling this feature solved my issue.
I am developing an Express project which will have multiple modules/services in it. The folder structure looks mostly like this:
-- app.js
-- payment_service
   -- routes.js
   -- index.js
   -- models
      -- model_1.js
      -- model_2.js
The APIs in index.js are the only exposed APIs, and they act as a gateway for all requests coming to this module/service.
Most of the services can throw operational errors under many circumstances, so manual intervention may be needed to fix things. So I need to:
Log errors properly, with enough context that a person or script can do the needful.
Figure out the reason for the failure.
There will be dedicated teams owning each service, so I should be able to differentiate between the error logs of each service so they can be aggregated and forwarded to the concerned person.
I decided to go with the ELK stack so that I can generate reports by script.
The main problem I am facing is that I can't maintain correlation between logs. For example, if a request travels through five functions and each function logs something, I can't relate those logs to one another.
One way is to create a child logger for each request and pass it to all the functions, but passing a logger instance to every function seems like extra overhead.
Another option is to use something like verror and log only at the entry point of the service/module, so that the whole context is contained in the log. That approach looks OK for logging errors, but it can't help with info and debug logs, which help me a lot in the development and testing phase.
For the sake of differentiating between error logs, I am going to create:
A dedicated logger for each service with log level error.
An application-wide generic logger for info and debug purposes.
Is this the correct approach?
What would be the best way to achieve all of these requirements in the simplest way? (A rough sketch of my two-logger plan follows.)
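To make the plan concrete, here is a rough, dependency-free sketch of what I mean (all names are made up):

// Application-wide generic logger for info and debug.
const appLog = {
  info: (msg) => console.log(`[app] info: ${msg}`),
  debug: (msg) => console.log(`[app] debug: ${msg}`)
};

// Dedicated error logger per service, so each team's errors
// can be filtered, aggregated and forwarded separately.
function errorLoggerFor(service) {
  return { error: (msg) => console.error(`[${service}] error: ${msg}`) };
}

const paymentLog = errorLoggerFor('payment_service');
appLog.debug('looking up payment record');
paymentLog.error('payment gateway timed out');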
I'd recommend you use a logger and you don't need anything too complex. For example:
npm install 12factor-log
Then create a file in your root folder near app.js (or in a /lib folder is where I'd place libraries)
logger.js
const Log = require('12factor-log');

module.exports = (params) => {
  return new Log(params);
};
Then in your modules, import your logger and pass in the module name when you instantiate it so you can track where statements come from...
model_1.js
var log = require('./logger')({name: 'model_1'});

// ...
log.info("Something happened here");
// ...

try {
  // ...
} catch (error) {
  const message = `Error doing x, y, z with value ${val}`;
  log.error(message);
  throw new Error(message);
}
Then handle errors gracefully at your controller -> view layer for a user-friendly experience.
Your logs would print something like this:
{"ts":"2018-04-27T16:37:24.914Z","msg":"Something happened here","name":"model_1","type":"info","level":3,"hostname":"localhost","pid":18}
As far as correlation of logs: as you can see in the output above, it includes the hostname of the machine it is running on, as well as the module name and severity level. You can import this JSON into Logstash and load it into Elasticsearch, which stores the JSON for easy search and indexing.
See: https://www.elastic.co/guide/en/logstash/current/plugins-filters-json.html
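To correlate all the lines that belong to one request (the child-logger idea from the question), one lightweight option is a middleware that stamps a per-request id onto a wrapper logger. This is only a sketch; requestLogger and the x-request-id fallback are illustrative, not part of any library mentioned here:

const crypto = require('crypto');

// Wrap a base logger so that every line carries the same request id.
function requestLogger(baseLog) {
  return (req, res, next) => {
    const id = req.headers['x-request-id'] || crypto.randomBytes(8).toString('hex');
    req.log = {
      info: (msg) => baseLog.info(`[${id}] ${msg}`),
      error: (msg) => baseLog.error(`[${id}] ${msg}`)
    };
    next();
  };
}

module.exports = requestLogger;

// usage: app.use(requestLogger(require('./logger')({name: 'http'})));
// Downstream handlers call req.log.info(...) and all their lines share the id.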
Logging is complex and many people have worked on it. I would suggest not doing so yourself.
So, not following my own advice, I created my own logging package:
https://www.npmjs.com/package/woveon-logger
npm install woveon-logger
This prints file and line numbers of errors and messages, has logging levels and aspect-oriented logging, and can dump a stack trace in one call. It even has color coding options. If you get stuck and need some feature in logging, let me know.
let log1 = new Logger('log1', {level: 'info', debug: true, showname: true});
let log2 = new Logger('log2', {level: 'verbose', debug: true, showname: true});
// ...
log1.info('Here is a log message, that is on line 23.');
log1.verbose('Does not show');
log2.verbose('Shows because log2 is verbose logging');
log2.setAspect('IO', true);
log2.aspect('IO', 'Showing aspect IO logging, for logs for IO related operations');
[2018-06-10T10:43:20.692Z] [INFO--] [log1 ] [path/to/myfile:23] Here is a log message, that is on line 23.
[2018-06-10T10:43:20.792Z] [VERBOS] [log2 ] [path/to/myfile:25] Shows because log2 is verbose logging
[2018-06-10T10:43:20.892Z] [IO----] [log2 ] [path/to/myfile:27] Showing aspect IO logging, for logs for IO related operations
Also, some other features like:
log1.throwError('Logs this as both a line of logging, and throws the error with the same message');
log1.printStack('Prints this label next to the stack trace.');
Hope it helps!
You can use the grackle_tracking library (https://www.getgrackle.com/analytics_and_tracking).
It logs errors and traffic to your database.
I am looking for a logging solution for my node.js app that would allow me to set the logging level via file/folder selectors.
For example, I would like to set the logging level for all files in /app/schema to 'info', and for all the rest to 'error'.
Example configuration:
{
  "*": "error",
  "/app/schema": "info" // <-- A regex expression would be great too.
}
I constantly comment/uncomment/remove logging statements when I need to examine something. I would rather do that via a configuration change and leave the logging statements in the files intact. A global debugging level just creates way too much noise and volume (which matters when storing logs).
Is there something like this? Apache log4j is similar: you can set the logging level at the package level.
If I get your question right, I'd suggest Bunyan: it lets you configure as many log streams and log levels as you like. For example, from the docs:
var log = bunyan.createLogger({
  name: 'myapp',
  streams: [
    {
      level: 'info',
      stream: process.stdout // log INFO and above to stdout
    },
    {
      level: 'error',
      path: '/var/tmp/myapp-error.log' // log ERROR and above to a file
    }
  ]
});
You'll have to set up the logic for each path with each log-level.
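Bunyan has no built-in per-path level selector, but you can wrap createLogger in a small factory that picks the level from a config map like the one in the question. A minimal sketch (the config shape and the loggerFor name are illustrative, not Bunyan features):

const bunyan = require('bunyan');
const path = require('path');

const levels = {
  '*': 'error',
  '/app/schema': 'info'
};

function loggerFor(filename) {
  // Normalize the caller's absolute path to a project-relative one.
  const rel = '/' + path.relative(process.cwd(), filename).split(path.sep).join('/');
  // Pick the longest matching prefix, falling back to '*'.
  const match = Object.keys(levels)
    .filter((p) => p !== '*' && rel.startsWith(p))
    .sort((a, b) => b.length - a.length)[0];
  return bunyan.createLogger({ name: 'myapp', module: rel, level: levels[match] || levels['*'] });
}

module.exports = loggerFor;

// In /app/schema/model.js: const log = require('../logger')(__filename); // gets 'info'
// Anywhere else:           const log = require('../logger')(__filename); // gets 'error'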
I am using the Bunyan module for Node.js logging. When I try to use the rotating-file type, it makes my app crash every time and outputs this error:
Error: ENOENT, rename 'logs/info.log.3'
However, it never happens at the same time, so I can't find any logic to it...
This is how I instantiate my logger:
var log = Bunyan.createLogger(config.log.config);
log.info('App started, ' + process.env.NODE_ENV);
And here is my config.json:
{
  "name": "app",
  "streams": [
    {
      "type": "rotating-file",
      "period": "5000ms", // Low period is for testing purposes
      "count": 12,
      "level": "info",
      "path": "logs/info.log"
    },
    {
      "type": "rotating-file",
      "period": "5000ms",
      "count": 12,
      "level": "error",
      "path": "logs/error.log"
    },
    {
      "type": "rotating-file",
      "period": "5000ms",
      "count": 12,
      "level": "trace",
      "path": "logs/trace.log"
    }
  ]
}
Can anyone advise how to fix my issue? Thanks in advance.
What I have just done (last night, actually) to get around this problem of a master and workers contending over a Bunyan rotating-file is to have the workers write "raw" log records to a stream-like object I created, called a WorkerStream. The write method of the WorkerStream simply calls process.send to deliver the log record to the master over IPC. The master uses a different logger config that points to a rotating-file, and it uses the code shown below to listen for log records from its workers and write them to the log file. So far it appears to be working perfectly.
cluster.on('online', function (worker) {
  // New worker has come online.
  worker.on('message', function (msg) {
    /* Watch for log records from this worker and write them
       to the real rotating log file.
    */
    if (msg.level) {
      log._emit(msg);
    }
  });
});
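For reference, the worker side of this setup might look roughly like the following sketch (WorkerStream is the name used above; type: 'raw' is Bunyan's stream type that passes log records as objects rather than strings):

var bunyan = require('bunyan');

// Stream-like object: Bunyan calls write() with the raw record object,
// which we forward to the master over the cluster IPC channel.
function WorkerStream() {}
WorkerStream.prototype.write = function (rec) {
  process.send(rec);
};

var log = bunyan.createLogger({
  name: 'app',
  streams: [{ type: 'raw', level: 'info', stream: new WorkerStream() }]
});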
ln is your friend.
Existing logging libraries have a rotation problem with the cluster module. Why doesn't ln have this issue?
Both bunyan and log4js rename the log file on rotation. The disaster happens on file renaming in a cluster environment because two processes rename the same file.
bunyan suggests using the process id as part of the filename to tackle this issue; however, this generates too many files.
log4js provides a multiprocess appender that lets the master log everything; however, that is bound to become a bottleneck.
To solve this, ln just uses fs.createWriteStream(name, {"flags": "a"}) to create a formatted log file at the beginning instead of calling fs.rename at the end. I tested this approach with millisecond rotation under a cluster environment and no disasters occurred.
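A rough sketch of that idea (illustrative only, not ln's actual implementation): open a pre-named file per period in append mode and switch files on rotation, so no rename ever happens and cluster workers cannot collide:

const fs = require('fs');

// Open the current period's file in append mode; assumes logs/ exists.
function openLog() {
  const day = new Date().toISOString().slice(0, 10); // e.g. 2015-06-01
  return fs.createWriteStream(`logs/app.${day}.log`, { flags: 'a' });
}

let out = openLog();
out.write('log line\n');

// Rotate daily by simply opening the next file; nothing is renamed.
setInterval(() => {
  out.end();
  out = openLog();
}, 24 * 60 * 60 * 1000);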
I have experienced the same issue without using clustering. I believe the problem is caused by old files sitting in the log directory: while the main logger can open and append to existing files, the file rotation logic uses rename, which fails when it steps on an existing file (e.g. an existing info.log.3).
I'm still digging into the source to figure out what needs to be changed to recover from the rolling error.
One additional thought as I review the source: if you have multiple Bunyan log instances that use the same log file (in my case, a common error.log), the rename calls could happen nearly concurrently at the OS level (asynchronous and separate calls from a Node.js perspective, but concurrent from the OS's perspective).
It's sadly not possible to use multiple rotating file streams against the same file.
If you're in the same process, you must use a single logger object; make sure you're not creating multiple of them.
If you're working across processes, you must log to different files. Unfortunately, nothing yet has the IPC in place to allow different rotators to coordinate amongst themselves.
I have a plugin rotating file stream, bunyan-rotating-file-stream, that detects if you try to create two rotators against the same file in a single process and throws an error. It can't help in the case of multiple processes, though.
From my experience, this sometimes happens when the logs directory (or whatever you named it) does not exist.
If you are running into this error in an automation pipeline, for example, you may be ignoring all the files in logs/ and committing it empty, so the directory is not created when the repository is cloned by the pipeline.
Simply make sure logs/ is created, e.g. by placing a .gitkeep file inside it (or any other trick).
This may be the case for many of you who come across this question.