I have a C# program that logs with log4net using both a file appender and an SMTP appender.
I always want to keep the log file, but at the end of the program I would like to decide in my C# code whether or not to send the email.
The usual level or text filters do not help here, since while the program runs I log a lot of things that may or may not be interesting, and I only know which at the end of the process.
Is it possible to "block" the SMTP appender in code?
You need to set the Threshold property on the appender to Off to disable logging, and to your required level otherwise, something like this:
using System.Linq;
using log4net.Appender;
using log4net.Core;

private static void SetEmailLogging(bool enabled)
{
    // Assuming there is at most one SmtpAppender configured.
    var appender = log4net.LogManager.GetRepository()
        .GetAppenders()
        .OfType<SmtpAppender>()
        .SingleOrDefault();

    if (appender == null)
    {
        return;
    }

    // Set the level you want.
    appender.Threshold = enabled ? Level.All : Level.Off;
    appender.ActivateOptions();
}
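A minimal usage sketch (the flag and the point where you compute it are placeholders for your own logic). Keep in mind that the SmtpAppender buffers events, so depending on its buffer and lossy configuration, events buffered before you flip the threshold may still be sent when log4net shuts down:

bool runNeedsAttention = false; // hypothetical flag set by your own logic during the run
SetEmailLogging(runNeedsAttention);
log4net.LogManager.Shutdown(); // flushes and closes all appenders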
I am trying to track the occurrence of specified Security events. I want a message to be displayed to the user whenever these events are logged in the Windows Security log. It was recommended that I use a permanent WMI event consumer/watcher to accomplish this but I have never used this before and don't understand how to implement it based on the documentation.
If anyone can explain how I can do this for, as an example, Event 1102, it would be much appreciated.
You can use ORMi to create a watcher and get any new event:
WMIWatcher watcher = new WMIWatcher("root\\CimV2",
    "SELECT * FROM __InstanceCreationEvent WHERE TargetInstance ISA 'Win32_NTLogEvent' AND TargetInstance.LogFile='Application'");
watcher.WMIEventArrived += Watcher_WMIEventArrived;

private static void Watcher_WMIEventArrived(object sender, WMIEventArgs e)
{
    // Handle the incoming events here.
}
Be sure to check that the query returns the events you are expecting.
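For your specific case (event 1102 in the Security log), a sketch of the same approach with the WQL filter narrowed down might look like the following. The query is an assumption based on the standard Win32_NTLogEvent properties (LogFile and EventCode), and reading the Security log normally requires the process to run with administrative privileges:

// Hypothetical adaptation for Security event 1102 ("the audit log was cleared"),
// using the same ORMi WMIWatcher API as above.
WMIWatcher securityWatcher = new WMIWatcher("root\\CimV2",
    "SELECT * FROM __InstanceCreationEvent " +
    "WHERE TargetInstance ISA 'Win32_NTLogEvent' " +
    "AND TargetInstance.LogFile='Security' " +
    "AND TargetInstance.EventCode=1102");

securityWatcher.WMIEventArrived += (sender, e) =>
{
    // Notify the user here, e.g. show a message box or write to your own log.
};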
I am developing an Express project that will have multiple modules/services in it. The folder structure looks mostly like this:
-- app.js
-- payment_service
   -- routes.js
   -- index.js
   -- models
      -- model_1.js
      -- model_2.js
The APIs in index.js are the only exposed APIs, and they act as a gateway for all requests coming to this module/service.
Most of the services can throw operational errors under many circumstances, so manual intervention may be needed to fix things. So I need to:
Log errors with proper context so that a person or script can take the necessary action.
Figure out the reason for the failure.
There will be dedicated teams owning each service, so I should be able to differentiate between the error logs of each service so that they can be aggregated and forwarded to the person concerned.
I decided to go with the ELK stack so that I can generate reports by script.
The main problem I am facing is that I can't maintain correlation between logs. For example, if a request comes in and travels through five functions, and each function logs something, I can't relate those logs.
One way is to create a child logger for each request and pass it to all the functions, but passing a logger instance to every function seems like extra overhead.
Another option is to use something like verror and do the logging only at the entry point of the service/module, so that the whole context can be contained in the log. This approach looks fine for logging errors, but it doesn't help with info and debug logs, which help me a lot during development and testing.
To differentiate between the error logs, I am going to create:
A dedicated logger for each service with log level error.
An application-wide generic logger for info and debug purposes.
Is this the correct approach?
What would be the best way to achieve all the requirements in the simplest way?
I'd recommend you use a logger and you don't need anything too complex. For example:
npm install 12factor-log
Then create a file in your root folder next to app.js (or in a /lib folder, which is where I'd place libraries):
logger.js
const Log = require('12factor-log');

module.exports = (params) => {
    return new Log(params);
};
Then in your modules, import your logger and pass in the module name when you instantiate it so you can track where statements come from...
model_1.js
var log = require('./logger')({name: 'model_1'});
// ...
log.info("Something happened here");
// ...
try {
    // ...
} catch (error) {
    const message = `Error doing x, y, z with value ${val}`;
    log.error(message);
    throw new Error(message);
}
Then handle the error gracefully at your controller -> view layer for a user-friendly experience.
Your logs would print something like this:
{"ts":"2018-04-27T16:37:24.914Z","msg":"Something happened here","name":"model_1","type":"info","level":3,"hostname":"localhost","pid":18}
As for correlating logs: the output above includes the hostname of the machine the code is running on, as well as the module name and the severity level. You can ship this JSON to Logstash and load it into Elasticsearch, which stores the JSON for easy search and indexing.
See: https://www.elastic.co/guide/en/logstash/current/plugins-filters-json.html
Logging is complex and many people have worked on it. I would suggest not building your own.
So, not following my own advice, I created my own logging package:
https://www.npmjs.com/package/woveon-logger
npm install woveon-logger
This prints file and line numbers of errors and messages, has logging levels and aspect-oriented logging, and can dump a stack trace in one call. It even has color coding options. If you get stuck and need some feature in logging, let me know.
let log1 = new Logger('log1', {level: 'info', debug: true, showname: true});
let log2 = new Logger('log2', {level: 'verbose', debug: true, showname: true});
...
log1.info('Here is a log message, that is on line 23.');
log1.verbose('Does not show');
log2.verbose('Shows because log2 is verbose logging');
log2.setAspect('IO', true);
log2.aspect('IO', 'Showing aspect IO logging, for logs for IO related operations');
[2018-06-10T10:43:20.692Z] [INFO--] [log1 ] [path/to/myfile:23] Here is a log message, that is on line 23.
[2018-06-10T10:43:20.792Z] [VERBOS] [log2 ] [path/to/myfile:25] Shows because log2 is verbose logging
[2018-06-10T10:43:20.892Z] [IO----] [log2 ] [path/to/myfile:27] Showing aspect IO logging, for logs for IO related operations
Also, some other features like:
log1.throwError('Logs this as both a line of logging, and throws the error with the same message');
log1.printStack('Prints this label next to the stack trace.');
Hope it helps!
You can use the grackle_tracking library: https://www.getgrackle.com/analytics_and_tracking
It logs errors and traffic to your database.
I'm currently writing a console app that kicks off a number of stored procedures in our (SQL Server) database. The app is primarily responsible for executing the procedures, logging events to a number of places, and then doing some arbitrary work afterwards. We have a nice Data NuGet package that integrates with OrmLite / ServiceStack, so I'm trying to use OrmLite as our ORM here as well.
The app itself just takes inputs that include the name of the sproc, and I'm executing them based off that (string) name. The sprocs themselves just move data; the app doesn't need to know the database model (and can't; the models can change).
Since these sprocs do quite a bit of work, the sprocs themselves output logging via PRINT statements. It's my goal to include these PRINTed log messages in the logging of the console app.
Is it possible to capture PRINT messages from a DbConnection command? I can't find any way via the built-in commands to capture this; only errors. Do I have to use ExecuteReader() to get a hold of the DataReader and read them that way?
Any help is appreciated. Thanks!
Enable Debug Logging
If you configure ServiceStack with a debug-enabled logger, it will log the generated SQL + params to the configured logger.
So you could use the StringBuilderLogFactory to capture the debug logging into a string.
CaptureSqlFilter
OrmLite does have a feature where you can capture the SQL output of a command using the CaptureSqlFilter:
using (var captured = new CaptureSqlFilter())
using (var db = OpenDbConnection())
{
    db.Where<Person>(new { Age = 27 });
    captured.SqlStatements[0].PrintDump();
}
But this doesn't execute the statement, it only captures it.
Custom Exec Filter
You could potentially use a Custom Exec Filter to execute the command and call a custom logging function, something like:
public class CaptureOrmLiteExecFilter : OrmLiteExecFilter
{
    public override T Exec<T>(IDbConnection dbConn, Func<IDbCommand, T> filter)
    {
        var holdProvider = OrmLiteConfig.DialectProvider;
        var dbCmd = CreateCommand(dbConn);
        try
        {
            return filter(dbCmd);
        }
        finally
        {
            MyLog(dbCmd);
            DisposeCommand(dbCmd);
            OrmLiteConfig.DialectProvider = holdProvider;
        }
    }
}
//Configure OrmLite to use above Exec filter
OrmLiteConfig.ExecFilter = new CaptureOrmLiteExecFilter();
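MyLog above is just a placeholder. A minimal sketch of what it might do, assuming all you want is to forward the command text and parameter values to whatever logging the console app already uses (Console.WriteLine stands in for your real logger here):

// Hypothetical placeholder for MyLog: writes the executed command text and
// its parameter values out via the app's logging (Console.WriteLine as a stand-in).
private static void MyLog(IDbCommand dbCmd)
{
    Console.WriteLine(dbCmd.CommandText);

    foreach (IDbDataParameter p in dbCmd.Parameters)
    {
        Console.WriteLine("  {0} = {1}", p.ParameterName, p.Value);
    }
}

Note that this logs the SQL that was executed; it does not capture the output of PRINT statements inside the procedures.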
I have a document library set up to receive emails. The incoming emails contain a single picture and a CSV file, which I use for some processing.
The EmailReceived override works perfectly, but of course by overriding it I lose the nice SharePoint functionality that saves the incoming email as configured in the settings.
It was my understanding that I could call MyBase.EmailReceived in my event receiver so that the underlying functionality would still run. However, this is not working, and no record of the incoming email is retained.
For now I am explicitly creating an audit trail, but I would like to rely on SharePoint's existing functionality, as I believe it will be more robust.
What am I doing wrong with the MyBase.EmailReceived call? Or what can I do instead if this doesn't work?
Thanks in advance.
When writing your own EmailReceived event receiver you will lose the default functionality.
What you will have to do is implement this default functionality yourself. Let me give you a simple example. The following example saves all mail attachments to the list if they are *.csv files. You can do the same with the emailMessage and save it to the list as well. As you can see, it is as easy as calling Files.Add to add a file to a document library.
public override void EmailReceived(SPList list, SPEmailMessage emailMessage, string receiverData)
{
    SPFolder folder = list.RootFolder;

    // Save attachments to the list.
    foreach (SPEmailAttachment attachment in emailMessage.Attachments)
    {
        if (attachment.FileName.EndsWith(".csv"))
        {
            var attachmentFileName = attachment.FileName;
            folder.Files.Add(folder.Url + "/" + attachmentFileName, attachment.ContentStream, true);
        }
    }

    list.Update();
}
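If you also want to keep the picture that arrives with each email, the same pattern works. Here is a sketch that accepts a few common image extensions as well; the extension list is just an assumption about your incoming mail, and it only uses the members already shown above:

// Hypothetical variation: keep both the CSV and the picture from each email.
foreach (SPEmailAttachment attachment in emailMessage.Attachments)
{
    string name = attachment.FileName.ToLowerInvariant();

    if (name.EndsWith(".csv") || name.EndsWith(".jpg") || name.EndsWith(".jpeg") || name.EndsWith(".png"))
    {
        folder.Files.Add(folder.Url + "/" + attachment.FileName, attachment.ContentStream, true);
    }
}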
I am using Groovy and Log4J.
I am not a Log4J expert but after searching many sites for answers I thought I had a configuration that should work in the “Config.groovy” file.
Here’s the result:
I get console logging.
However, the log files named “project.log” and “StackTrace.log” are empty.
I also get another file created named “StackTrace.log.1” (2KB size) that contains an exception message (a non-critical error) posted after I run the application.
Questions:
Why am I not getting logging messages in the “project.log” and “StackTrace.log” files?
Why is a file named “StackTrace.log.1” getting created and written to instead of the stack trace messages getting logged to the “StackTrace.log” file?
Any help or clues as to what I'm doing wrong will be greatly appreciated.
Here is my “Config.groovy” file (log4j portion):
// log4j configuration
log4j = {
    // Set default level for all, unless overridden below.
    root { debug 'stdout', 'file' }

    // Set level for all application artifacts
    info "grails.app"

    error "org.hibernate.SQL", "org.hibernate.type"

    error 'org.codehaus.groovy.grails.web.servlet',        // controllers
          'org.codehaus.groovy.grails.web.pages',          // GSP
          'org.codehaus.groovy.grails.web.sitemesh',       // layouts
          'org.codehaus.groovy.grails.web.mapping.filter', // URL mapping
          'org.codehaus.groovy.grails.web.mapping',        // URL mapping
          'org.codehaus.groovy.grails.commons',            // core / classloading
          'org.codehaus.groovy.grails.plugins',            // plugins
          'org.codehaus.groovy.grails.orm.hibernate',      // hibernate integration
          'org.springframework',
          'org.hibernate',
          'net.sf.ehcache.hibernate'

    warn 'org.mortbay.log'

    appenders {
        rollingFile name: 'file', file: 'project.log', maxFileSize: 1024, append: true
        rollingFile name: 'stacktrace', file: "StackTrace.log", maxFileSize: 1024, append: true
    }
}
Is it possible that StackTrace.log.1 was created because the maxFileSize of 1024 bytes was reached and the rolling file rolled over?
I would also begin by removing all the class names listed there, so that the debug level defined in the root closure is applied to all loggers, and work from there.