I have five different log4j property files in my application, one for each event. I do not want the application to load them at startup using the DOMConfigurator or PropertyConfigurator. Instead, I want to load all of these properties into a Map with the event name as the key and the Properties or Logger as the value, so that when I invoke the getLogger method with an event name, the appropriate logger object for that event is returned.
The implementation in this post helps me to some extent: log4j log file names?
There, logger objects are created dynamically based on the job, but I want to make use of a static log4j file for each event, load it, and hand back the corresponding logger.
I also checked the response in this post: multiple log4j instance configuration
But as the event names and the lists of appenders for each event will be huge in my application, I am opting for one log4j file per event for better maintainability.
I have a log4j property file defined for each event. Pass the event name, load the event-specific log4j property file into the context with the PropertyConfigurator, and cache the resulting Logger. The getLogger method then returns that Logger:
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

private static final Map<String, Logger> m_loggers = new HashMap<String, Logger>();

private static synchronized Logger getEventLogger(String eventName) {
    Logger logger = m_loggers.get(eventName);
    if (logger == null) {
        // Load the event-specific property file, e.g. "payment.properties"
        try (InputStream in = new FileInputStream(eventName + ".properties")) {
            Properties props = new Properties();
            props.load(in);
            PropertyConfigurator.configure(props);
            logger = Logger.getLogger(eventName);
            m_loggers.put(eventName, logger);
        } catch (IOException e) {
            // The configuration file for this event is missing or unreadable
            e.printStackTrace();
        }
    }
    return logger;
}
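For example, with a hypothetical event named payment whose appenders are defined in payment.properties, the lookup would be:

// "payment" is a hypothetical event name; its configuration is expected
// in a payment.properties file on the working directory.
Logger paymentLogger = getEventLogger("payment");
paymentLogger.info("Payment event received");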
In my Spring Boot project, I have two JMS listeners listening to one queue. All messages received from the queue have to be processed in the same way and persisted or updated in the database (Oracle). Currently, I have a synchronized method in a class that does the parsing of the messages. As expected, all threads read messages simultaneously, but parsing is done one by one because the method (parseMessage()) is synchronized. What I want is to parse the messages simultaneously and do the database operations as well.
How can I solve this?
I don't want to create two different classes with the same code and use @Qualifier to call a different class from each listener, as the code for parsing the message is the same.
The ideal solution, I think, is to do the database operations in a new synchronized method in a new class, but parse the messages in a multi-threaded way, so that at any time only one thread can persist/update, and a thread that is not waiting to persist/update continues parsing on its own thread (see the sketch after the code below).
Please correct me if I am wrong or if you find a better solution. Let me know if any other info is needed.
JMS Controller Class
@RestController
@EnableJms
public class JMSController {

    @Autowired
    private IParseMapXml iParseMapXml;

    @JmsListener(destination = "${app.jms_destinaltion}")
    public void receiveMessage1(String recvMsg) {
        try {
            // Escape bare ampersands so the XML parser accepts the payload
            InputSource is = new InputSource(new StringReader(recvMsg.replaceAll("&", "&amp;")));
            Document doc = new SAXReader().read(is);
            iParseMapXml.parseMessage(doc);
        } catch (Exception e) {
            // TODO: handle/log parse failures instead of swallowing them
        }
    }

    @JmsListener(destination = "${app.jms_destinaltion}")
    public void receiveMessage2(String recvMsg) {
        try {
            InputSource is = new InputSource(new StringReader(recvMsg.replaceAll("&", "&amp;")));
            Document doc = new SAXReader().read(is);
            iParseMapXml.parseMessage(doc);
        } catch (Exception e) {
            // Same handling as receiveMessage1
        }
    }
}
Parse XML Interface
public interface IParseMapXml {
    public void parseMessage(Document doc);
}
Parsing Implementation
public class ParsingMessageClass implements IParseMapXml {

    @Override
    @Transactional
    public synchronized void parseMessage(Document doc) {
        // ... process data/message ...
        // ... do DB operations ...
    }
}
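A minimal sketch of that split, assuming hypothetical PersistService and ParsedMessage types: parsing stays unsynchronized so both listener threads can run it concurrently, and only the persistence step is serialized.

public class ParsingMessageClass implements IParseMapXml {

    @Autowired
    private PersistService persistService; // hypothetical DB-writer bean

    @Override
    public void parseMessage(Document doc) {
        // No "synchronized" here: both listeners may parse at the same time
        ParsedMessage parsed = doParse(doc); // hypothetical parsing helper
        persistService.save(parsed);
    }
}

public class PersistService {

    // Only one thread at a time performs the insert/update
    @Transactional
    public synchronized void save(ParsedMessage parsed) {
        // ... DB operations ...
    }
}

One caveat: with Spring's proxy-based @Transactional, the commit happens after the synchronized method returns, so the lock does not cover the commit itself.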
I am trying to attach a transaction listener to my jOOQ DSL configuration, but none of its methods ever gets called.
DSL context configuration
@Bean
public Configuration dslConfiguration(DataSourceConnectionProvider connectionProvider) {
    return new DefaultConfiguration()
        .set(SQLDialect.POSTGRES)
        .set(connectionProvider)
        .set(new DefaultTransactionProvider(connectionProvider)) // <-- probably unnecessary
        .set(new TxListener()) // <-- here
        .set(executeListener());
}
Transaction manager
@Bean
public PlatformTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
    return new JpaTransactionManager(entityManagerFactory);
}
(Just checked that my calls are done within a transaction: I updated some data, threw an exception, and the data remained unchanged.)
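A likely explanation: jOOQ's TransactionListener is driven by jOOQ's own TransactionProvider, so it only fires for transactions that jOOQ itself starts. With a JpaTransactionManager, transactions opened via Spring's @Transactional never go through jOOQ's provider. As a sketch, the listener would fire around a transaction started through jOOQ's transaction API:

DSLContext dsl = DSL.using(dslConfiguration(connectionProvider));

// Started through jOOQ's transaction API, so the configured
// TxListener callbacks run around begin/commit/rollback.
dsl.transaction(trx -> {
    DSL.using(trx).execute("update my_table set x = 1"); // hypothetical statement
});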
I'm running Seq on an Azure instance (Windows Server 2012 R2 Datacenter) and logging with Serilog from a console application running on my local workstation. I have three sinks configured - File, Console and Seq. I'm also running on dnxcore50 (just in case you were thinking my setup wasn't dodgy enough). All my events show up in the console and the file 100% of the time. Seq only captures events on about 1 in 5 runs; that is, it will either capture all the events for a run or none of them. I am using the free single-user license to evaluate Seq, and I haven't found anything to suggest there are any limitations that would cause this behavior.
I've set up SelfLog.Out on the loggers, but that logs nothing at all, other than the test line I added to make sure the self-logging could at least write to the specified file.
public abstract class SerilogBaseLogger<T> : SerilogTarget
{
    protected SerilogBaseLogger(SerilogOptions options, string loggerType)
    {
        LoggerConfiguration = new LoggerConfiguration();
        Options = options;
        LevelSwitch.MinimumLevel = options.LogLevel.ToSerilogLevel();

        var file = File.CreateText(Path.Combine(options.LogFilePath, string.Format("SelfLog - {0}.txt", loggerType)));
        file.AutoFlush = true;
        SelfLog.Out = file;
        SelfLog.WriteLine("Testing self log.");
    }

    // ... snip ....
}
public class SerilogSeqTarget<T> : SerilogBaseLogger<T>
{
    public string Server => Options.Seq.Server;

    public SerilogSeqTarget(SerilogOptions options)
        : base(options, string.Format("SerilogSeqTarget[{0}]", typeof(T)))
    {
        LoggerConfiguration
            .WriteTo
            .Seq(Server);

        InitializeLogger();
    }

    public override string ToString()
    {
        return string.Format("SerilogSeqTarget<{0}>", typeof(T).Name);
    }
}
public class SerilogLoggerFactory<TType> : LoggerFactory<SerilogTarget>
{
    // .... snip ....

    public override IMyLoggerInterface GetLogger()
    {
        var myLogger = new SerilogDefaultLogger()
            .AddTarget(SerilogTargetFactory<TType>.GetFileTarget(Options))
            .AddTarget(SerilogTargetFactory<TType>.GetConsoleTarget(Options))
            .AddTarget(SerilogTargetFactory<TType>.GetSeqTarget(Options));

        myLogger.Info("Logger initialized with {@options} and targets: {targets}", Options, ((SerilogDefaultLogger)myLogger).Targets);
        return myLogger;
    }
}
public class SerilogTargetFactory<TType>
{
    // ... snip ...

    public static SerilogTarget GetSeqTarget(SerilogOptions options)
    {
        return !string.IsNullOrEmpty(options.Seq.Server)
            ? new SerilogSeqTarget<TType>(options)
            : null;
    }
}
Any suggestions? Is this just a side-effect of being on the bleeding edge, working with pre-release everything (although in that case I'd expect things to fail somewhat consistently)?
When targeting dnxcore50/CoreCLR, Serilog can't hook into AppDomain shutdown to guarantee that any buffered messages are always flushed. (AppDomain doesn't exist in that profile. :-)
There are a couple of options:
Dispose the loggers (especially the Seq one) on shutdown:
(logger as IDisposable)?.Dispose();
Use the static Log class and call its CloseAndFlush() method on shutdown:
Log.CloseAndFlush();
The latter - using the static Log class instead of the various individual ILogger instances - is probably the quickest and easiest to get going with, but it has exactly the same effect as disposing the loggers so either approach should do it.
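For example, in a console application the static pipeline can be flushed on exit, sketched below (the Seq server URL is a placeholder):

using Serilog;

public static class Program
{
    public static void Main()
    {
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Seq("http://my-seq-server:5341") // placeholder URL
            .CreateLogger();

        try
        {
            Log.Information("Application starting");
            // ... do work ...
        }
        finally
        {
            // Disposes the pipeline, sending any batches still buffered by the Seq sink
            Log.CloseAndFlush();
        }
    }
}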
Rather than rolling a log after a date/time or at a specified maximum size, I want to be able to call a method like "ResetLog" that copies my "log.txt" to "log.txt.1" and then clears log.txt.
I've tried to implement that by doing something like this with a FileAppender, rather than a RollingFileAppender:
var appenders = log4net.LogManager.GetRepository().GetAppenders();
foreach (var appender in appenders) {
    var fa = appender as log4net.Appender.FileAppender;
    if (fa != null) {
        string logfile = fa.File;
        fa.Close();
        string new_file_path = CreateNextLogFile(logfile);
        fa.File = new_file_path;
        fa.ActivateOptions();
    }
}
The file is closed and CreateNextLogFile() renames it. I then create a new log file and set the FileAppender to use it. However, I thought that ActivateOptions would go ahead and reconfigure the FileAppender with the desired settings. I've looked over the log4net documentation and don't see any other public methods that allow me to reopen the FileAppender after closing it. Can anyone recommend a way to implement the rollover? It would be nice if the RollingFileAppender had something like this, but I didn't see anything useful in its documentation, either.
If we look at the RollingFileAppender, we can see that the rollover mechanism consists of closing the file, renaming existing files (optionally) and opening it again:
// log4net.Appender.RollingFileAppender
protected void RollOverSize()
{
    base.CloseFile();
    // debug info removed
    this.RollOverRenameFiles(this.File);
    if (!this.m_staticLogFileName && this.m_countDirection >= 0)
    {
        this.m_curSizeRollBackups++;
    }
    this.SafeOpenFile(this.m_baseFileName, false);
}
Unfortunately the CloseFile/SafeOpenFile methods are protected, which means you cannot access them from the outside (not easily). So your best bet is to write an appender inheriting from RollingFileAppender and override the virtual AdjustFileBeforeAppend method, which is called before any logging event is appended.
There you can decide under what conditions, if any, a roll must occur. One idea is to create a static event that your custom rolling appender subscribes to: when the event is triggered, the appender makes a note of it (rollBeforeNextAppend = true;). As soon as the next entry is logged, the appender rolls.
public class CustomRollingAppender : RollingFileAppender
{
    public CustomRollingAppender()
    {
        MyStaticCommandCenter.RollEvent += Roll;
    }

    public void Roll()
    {
        rollBeforeNextAppend = true;
    }

    public bool rollBeforeNextAppend { get; set; }

    // AdjustFileBeforeAppend is protected virtual in RollingFileAppender,
    // so the override must be protected rather than public.
    protected override void AdjustFileBeforeAppend()
    {
        if (rollBeforeNextAppend) {
            CloseFile();
            RollOverRenameFiles(File);
            SafeOpenFile(File, false);
            rollBeforeNextAppend = false;
        }
    }
}
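The MyStaticCommandCenter referenced above is not part of log4net; one possible shape for it is a plain static event holder:

using System;

public static class MyStaticCommandCenter
{
    public static event Action RollEvent;

    // Call this from your "ResetLog" entry point; every subscribed
    // appender flags itself and rolls on the next appended message.
    public static void RequestRoll()
    {
        RollEvent?.Invoke();
    }
}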
I am new to ServiceStack and Elmah logging.
Can anybody suggest how to integrate Elmah in ServiceStack applications?
If you have an existing logging solution, you can use the ServiceStack.Logging.Elmah project. It is available via NuGet.
Exceptions, errors and fatal calls will be logged to Elmah in addition to the originally intended logger. For all other log types, only the original logger is used.
So if you are already using Log4Net, you can just configure Elmah like this:
LogManager.LogFactory = new ElmahLogFactory(new Log4NetFactory());
If you don't want to wrap it over an existing logger, you can just research adding Elmah to any ASP.NET website. There is no reason it wouldn't work just because you are using ServiceStack.
using ServiceStack.Logging;
using ServiceStack.Logging.Elmah;
using ServiceStack.Logging.NLogger;

public class AppHost : AppHostBase
{
    public AppHost()
        : base("description", typeof(MyService).Assembly)
    {
        LogManager.LogFactory = new ElmahLogFactory(new NLogFactory());
    }

    public override void Configure(Container container)
    {
        this.ServiceExceptionHandler += (request, exception) =>
        {
            // log your exceptions here
            HttpContext context = HttpContext.Current;
            ErrorLog.GetDefault(context).Log(new Error(exception, context));

            // call the default exception handler or prepare your own custom response
            return DtoUtils.HandleException(this, request, exception);
        };

        // rest of your config
    }
}
Now your ServiceStack errors appear in Elmah (assuming you've set up web.config, etc.).
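The web.config part is the standard Elmah wiring; under the IIS integrated pipeline it looks roughly like this (module and handler names are the conventional ones):

<configuration>
  <system.webServer>
    <modules>
      <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" preCondition="managedHandler" />
    </modules>
    <handlers>
      <add name="Elmah" path="elmah.axd" verb="POST,GET,HEAD" type="Elmah.ErrorLogPageFactory, Elmah" preCondition="integratedMode" />
    </handlers>
  </system.webServer>
</configuration>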
Actually, kampsj's answer is better than Gavin's, as Gavin's causes double-logging to Elmah by calling the explicit Elmah logger and then the default ServiceStack error handling, which itself already does the logging.
So really all you need is this (below assuming you want to wrap NLog with Elmah):
public class YourAppHost : AppHostBase
{
    // Tell ServiceStack the name and where to find your web services
    public YourAppHost()
        : base("YourAppName", typeof(YourService).Assembly)
    {
        LogManager.LogFactory = new ElmahLogFactory(new NLogFactory());
    }

    // ...just normal stuff...
}
You could also use just this instead:
ElmahLogFactory factory = new ElmahLogFactory();
...but you should probably wrap another type of logger for non-error logging, like Debug and Warn.
See the section on configuring Elmah and the Logging.Elmah UseCase for a working example of ServiceStack and Elmah configured together.
The ElmahLogFactory can be configured in your Global.asax before initializing the ServiceStack AppHost, e.g:
public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        var debugMessagesLog = new ConsoleLogFactory();
        LogManager.LogFactory = new ElmahLogFactory(debugMessagesLog, this);

        new AppHost().Init();
    }
}