I use the logback MDC to record logs from my application's different modules. For example:
// 1. define a logger
org.slf4j.Logger mdclog = org.slf4j.LoggerFactory.getLogger("MY_LOGGER_NAME");
// record trade log
org.slf4j.MDC.put("MY_MDC_KEY", "trade_log");
mdclog.info("This is trade log");
// record goods log
org.slf4j.MDC.put("MY_MDC_KEY", "goods_log");
mdclog.info("This is goods log");
The MDC config in logback.xml:
<appender name="log_classify" class="ch.qos.logback.classic.sift.SiftingAppender">
<discriminator>
<Key>login</Key>
<DefaultValue>OTHER</DefaultValue>
</discriminator>
<sift>
<appender name="${MY_MDC_KEY}" class="ch.qos.logback.core.rolling.RollingFileAppender">
<prudent>false</prudent>
<file>${LOG_PATH}/${MY_MDC_KEY}.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>${LOG_PATH}/${MY_MDC_KEY}_%d{yyyy-MM-dd}.log.zip</fileNamePattern>
</rollingPolicy>
<encoder>
<pattern>${FILE_LOG_PATTERN}</pattern>
</encoder>
</appender>
</sift>
</appender>
This generates two log files.
I want to record the trade and goods SQL logs in different log files as well,
so I wonder whether jOOQ supports this?
I found a way to resolve my question.
I attached my appender to the org.jooq.tools.LoggerListener logger:
<Logger name="org.jooq.tools.LoggerListener" level="DEBUG">
<appender-ref ref="log_classify" />
</Logger>
The jOOQ SQL execution log is then written to the different files once my MDC is initialised.
Actually, I don't know if this is a good idea.
jOOQ doesn't integrate this deeply with loggers out of the box, but you can place your org.slf4j.MDC.put calls in an ExecuteListener prior to running a SQL query, for example in the ExecuteListener::renderEnd event, once the SQL statement has been generated, and regex-match the query to make a decision (see the sketch below). Alternatively, using a VisitListener, you can make this decision even earlier, by trying to match specific tables that may be present in queries.
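For illustration, a minimal sketch of the ExecuteListener approach; the trade/goods regexes are assumptions about your table names, so adjust them to your schema:

import org.jooq.ExecuteContext;
import org.jooq.impl.DefaultExecuteListener;
import org.slf4j.MDC;

public class MdcExecuteListener extends DefaultExecuteListener {

    @Override
    public void renderEnd(ExecuteContext ctx) {
        // Called once the SQL statement has been rendered
        String sql = ctx.sql();

        if (sql != null && sql.matches("(?is).*\\btrade\\b.*"))
            MDC.put("MY_MDC_KEY", "trade_log");
        else if (sql != null && sql.matches("(?is).*\\bgoods\\b.*"))
            MDC.put("MY_MDC_KEY", "goods_log");
        else
            MDC.remove("MY_MDC_KEY");
    }
}

Register it with your jOOQ Configuration, e.g. via new org.jooq.impl.DefaultExecuteListenerProvider(new MdcExecuteListener()).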
But quite probably, a much better place to initialise your MDC context would be on the service layer, because your service will probably also know if you're about to run "trade" or "goods" queries.
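A sketch of that service-layer idea (the bookTrade method and Trade type are hypothetical); remember to clean the value up so it doesn't leak to unrelated work on a reused pool thread:

public void bookTrade(Trade trade) {
    org.slf4j.MDC.put("MY_MDC_KEY", "trade_log");
    try {
        // ... run the "trade" queries through jOOQ here ...
    }
    finally {
        // Always clean up: the thread may be reused by a thread pool
        org.slf4j.MDC.remove("MY_MDC_KEY");
    }
}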
Basically the title itself kind of explains what I'm trying to achieve, but in greater detail:
Let's say one has an XML setup for the layout similar to the following:
layout="<log level='${level:lowerCase=True}' time='${longdate:universalTime=true}' myCustomProperty1='${event-properties:item=myCustomProperty1}' myCustomProperty2='${event-properties:item=myCustomProperty2}'>${newline}
...."
Now, when myCustomProperty1 is set to, let's say, 'blah1', but myCustomProperty2 is not added to the eventInfo.Properties collection, the resulting entry looks like the following:
<log level='blah' time='blah' myCustomProperty1='blah1' myCustomProperty2=''>
...
The question is: what can be done (preferably in the config file) to exclude the myCustomProperty2 attribute from the finally rendered result, so the output looks like the following:
<log level='blah' time='blah' myCustomProperty1='blah1'>
...
Here is the gotcha: the same logger is used by multiple threads, so I can't simply alter the target's layout configuration at runtime, since that may negatively affect the rest of the threads.
Thank you in advance for your suggestions.
-K
You could try using ${when}:
<variables>
    <variable name="var_myCustomProperty1" value="${when:when=length('${event-properties:item=myCustomProperty1}')>0:Inner= myCustomProperty1='${event-properties:item=myCustomProperty1}'}"/>
    <variable name="var_myCustomProperty2" value="${when:when=length('${event-properties:item=myCustomProperty2}')>0:Inner= myCustomProperty2='${event-properties:item=myCustomProperty2}'}"/>
</variables>
<targets>
<target name="test" type="Console" layout="<log level='${level:lowerCase=True}' time='${longdate:universalTime=true}'${var_myCustomProperty1}${var_myCustomProperty2} />" />
</targets>
NLog 4.6 will include the XmlLayout, which might make things easier:
https://github.com/NLog/NLog/pull/2670
Alternatively, you can use the JsonLayout if XML output is not a requirement (renderEmptyObject="false").
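A sketch of what that JsonLayout alternative might look like (the Console target is just for illustration); JSON attributes whose rendered value is empty are left out by default:

<target name="test" type="Console">
    <layout type="JsonLayout" renderEmptyObject="false">
        <attribute name="level" layout="${level:lowerCase=true}" />
        <attribute name="time" layout="${longdate:universalTime=true}" />
        <attribute name="myCustomProperty1" layout="${event-properties:item=myCustomProperty1}" />
        <attribute name="myCustomProperty2" layout="${event-properties:item=myCustomProperty2}" />
    </layout>
</target>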
I have a spring-integration flow that starts with a file inbound-channel-adapter activated by a transactional poller (transactions are handled by Atomikos).
The text in the file is processed and the message goes down through the flow until it gets sent to one of the JMS queues (JMS outbound-channel-adapter).
In the middle, there are some database writes within a nested transaction.
The system is meant to run 24/7.
It happens that the single message flow progressively slows down, and when I investigated, I found that the stage responsible for the increasing delay is the read from the filesystem.
Below is the first portion of the integration flow:
<logging-channel-adapter id="logger" level="INFO"/>
<transaction-synchronization-factory id="sync-factory">
<after-commit expression="payload.delete()" channel="after-commit"/>
</transaction-synchronization-factory>
<service-activator input-channel="after-commit" output-channel="nullChannel" ref="tx-after-commit-service"/>
<!-- typeb inbound from filesystem -->
<file:inbound-channel-adapter id="typeb-file-inbound-adapter"
auto-startup="${fs.typeb.enabled:true}"
channel="typeb-inbound"
directory="${fs.typeb.directory.in}"
filename-pattern="${fs.typeb.filter.filenamePattern:*}"
prevent-duplicates="${fs.typeb.preventDuplicates:false}" >
<poller id="poller"
fixed-delay="${fs.typeb.polling.millis:1000}"
max-messages-per-poll="${fs.typeb.polling.numMessages:-1}">
<transactional synchronization-factory="sync-factory"/>
</poller>
</file:inbound-channel-adapter>
<channel id="typeb-inbound">
<interceptors>
<wire-tap channel="logger"/>
</interceptors>
</channel>
I read something about issues related to the prevent-duplicates option, which stores a list of seen files, but that is not the case here because I turned it off.
I don't think it is related to the filter (filename-pattern) either, because the expression I use in my config (*.RCV) is cheap to apply and the input folder never contains many files (fewer than 100) at the same time.
Still, there is something that gradually makes the read from filesystem slower and slower over time, from a few millis to over 3 seconds within a few days of up-time.
Any hints?
You should remove or move files after they have been processed; otherwise the whole directory has to be rescanned on every poll.
In newer versions, you can use a WatchServiceDirectoryScanner, which is more efficient.
But it's still best practice to clean up old files.
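For example, a minimal sketch of plugging in the watch-service scanner (assuming Spring Integration 4.x, where org.springframework.integration.file.WatchServiceDirectoryScanner takes the directory as a constructor argument; newer 5.x versions expose a use-watch-service attribute on the adapter instead):

<bean id="watchServiceScanner"
      class="org.springframework.integration.file.WatchServiceDirectoryScanner">
    <constructor-arg value="${fs.typeb.directory.in}"/>
</bean>

<!-- replace full directory rescans on every poll with watch-service events -->
<file:inbound-channel-adapter id="typeb-file-inbound-adapter"
                              channel="typeb-inbound"
                              directory="${fs.typeb.directory.in}"
                              scanner="watchServiceScanner">
    <poller fixed-delay="${fs.typeb.polling.millis:1000}">
        <transactional synchronization-factory="sync-factory"/>
    </poller>
</file:inbound-channel-adapter>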
Finally, I found the solution.
The issue was related to the specific version of Spring I was using (4.3.4), which is affected by a bug I had not yet discovered.
The problem is in DefaultConversionService and its use of the converterCache (see https://jira.spring.io/browse/SPR-14929 for details).
Upgrading to a more recent version resolved it.
I've got the EventLog target set up like so:
<target xsi:type="EventLog"
name="EventLog"
layout="${longdate:universalTime=true}|${level:uppercase=true}|${logger}|${message}"
source="MyApp"
log="Application" />
Now, obviously not all my events will have the same ID, so I want to set the event ID on a per-message basis rather than setting a static ID in the config. I believe this should work:
var logger = LogManager.GetCurrentClassLogger();
var logEvent = new LogEventInfo(LogLevel.Warn, logger.Name, "Test message");
logEvent.Properties.Add("EventID", 4444);
logger.Log(logEvent);
...but my events always have the event ID set to 0. Does anyone know how to get this working?
I figured it out: you have to use a layout in the eventId property of the target (the eventId attribute below is the addition):
<target xsi:type="EventLog"
        name="EventLog"
        layout="${longdate:universalTime=true}|${level:uppercase=true}|${logger}|${message}"
        source="MyApp"
        eventId="${event-properties:EventID:whenEmpty=0}"
        log="Application" />
I've also created a logging facade called Timber for both NLog and log4net, which makes logging messages with different event IDs very simple.
On the GitHub repo there's an example config for the EventLog target that includes an eventId. The eventId uses a Layout that renders the event ID:
https://github.com/NLog/NLog/wiki/Eventlog-target
I'd like to do this (from the log4net documentation) with NLog:
This example shows how to deliver only significant events. A LevelEvaluator is specified with a threshold of WARN. This means that an email will be sent for each WARN or higher level message that is logged. Each email will also contain up to 512 (BufferSize) previous messages of any level to provide context. Messages not sent will be discarded.
Is it possible?
I only found this on CodeProject.
But it uses a wrapper target that flushes based on the number of messages, not on the log level.
Thanks
Tobi
There is a QueuedTargetWrapper (a target that buffers log events and sends them in batches to the wrapped target)
that seems to address the requirement. I haven't tried it yet.
There is a related discussion "The Benefits of Trace Level Logging in Production Without the Drawback of Enormous Files"
A simple solution that writes the last 200 log events when an error occurs:
<target name="String" xsi:type="AutoFlushWrapper" condition="level >= LogLevel.Error" flushOnConditionOnly="true">
    <target xsi:type="BufferingWrapper"
            bufferSize="200"
            overflowAction="Discard">
        <target xsi:type="wrappedTargetType" ...target properties... />
    </target>
</target>
See also: https://github.com/nlog/NLog/wiki/BufferingWrapper-target
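Applied to the email scenario from the question, a sketch might look like this (the SMTP details are placeholders; WARN with a 512-message buffer mirrors the log4net example):

<target name="bufferedMail" xsi:type="AutoFlushWrapper"
        condition="level >= LogLevel.Warn" flushOnConditionOnly="true">
    <target xsi:type="BufferingWrapper" bufferSize="512" overflowAction="Discard">
        <target xsi:type="Mail"
                smtpServer="smtp.example.com"
                from="app@example.com"
                to="ops@example.com"
                subject="Significant event logged" />
    </target>
</target>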
I have four appenders, namely:
LogFileAppender // writes general logs to a file
LogDatabaseAppender // writes general logs to the DB via an Oracle stored proc
ExceptionFileAppender // writes exception logs to a file
ExceptionDatabaseAppender // writes exception logs to the DB via an Oracle stored proc
I want to have an app config file where I can set which appender to use.
Moreover, I have methods as follows:
Method_WriteLogOnly ---> should use appender 1 or 2
Method_WriteExceptionLogs ---> should use appender 3 or 4
The problem is that I don't know how to choose the appender when I am using the same log4net.config.xml file for both methods.
What is the best practice: setting the appender programmatically, or through another configuration place? For example, if I have an app.config or web.config file, I could write a key-value pair there for choosing the appender.
I think you should not decide in code which appender to use: you should decide what is logged, but the people who run your application should decide how it is logged.
While I can understand that you want a separate file for exceptions, I wonder a bit why you want to use two database appenders. If you need to write to different tables, you can easily do this inside your stored procedure. This has several advantages: configuration will be easier, and you will only have one database connection...
Assuming that it would be good enough for you to say that "exceptions == messages with level ERROR", you can easily create two appenders and use filters to make sure only messages with level ERROR end up in the "exception" log file, as sketched below.
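A sketch of that two-appender filter setup (file names and layout are placeholders):

<appender name="LogFileAppender" type="log4net.Appender.RollingFileAppender">
    <file value="logs/general.log" />
    <appendToFile value="true" />
    <!-- only levels up to WARN end up in the general log -->
    <filter type="log4net.Filter.LevelRangeFilter">
        <levelMin value="DEBUG" />
        <levelMax value="WARN" />
    </filter>
    <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date %-5level %logger - %message%newline" />
    </layout>
</appender>

<appender name="ExceptionFileAppender" type="log4net.Appender.RollingFileAppender">
    <file value="logs/exceptions.log" />
    <appendToFile value="true" />
    <!-- only ERROR and FATAL end up in the exception log -->
    <filter type="log4net.Filter.LevelRangeFilter">
        <levelMin value="ERROR" />
    </filter>
    <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date %-5level %logger - %message%newline" />
    </layout>
</appender>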