log4j2 (2.0.1) Async Appender errorRef usage - logstash

We use log4j2 for message logging in our applications. Currently, our log4j2 configurations use an Async Appender which then references a Socket Appender (protocol="tcp") to write logs to a remote Logstash Server:
<Socket name="logstash" host="logging" port="4560" protocol="tcp" >
<LogStashJSONLayout>
<PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
<KeyValuePair key="tomcat.host" value="${env:HOSTNAME}"/>
<KeyValuePair key="tomcat.port" value="${env:CONNECTOR_PORT}"/>
<KeyValuePair key="tomcat.service" value="${web:contextPath}"/>
</LogStashJSONLayout>
</Socket>
<Async name="async">
<AppenderRef ref="logstash"/>
</Async>
What we would now like to do is modify our log4j2 configuration to include a fallback RollingFile Appender for cases where the Logstash Server is not available. To accomplish this, we are thinking of modifying the Async Appender by:
Setting blocking="false".
Setting ignoreExceptions="false".
Setting an errorRef to point to our fallback RollingFile Appender.
Is this a sensible way of accomplishing this? And, if so, what would the Async Appender's XML look like? We tried the following:
<RollingFile name="fallbackFile"
fileName="${sys:catalina.base}/logs/${web:contextPath}-fallback.log"
filePattern="${sys:catalina.base}/logs/${web:contextPath}-%d{dd-MMM-yyyy}-%i.log"
append="true">
<PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
<Policies>
<SizeBasedTriggeringPolicy size="1 GB" />
</Policies>
</RollingFile>
<Socket name="logstash" host="logging" port="4560" protocol="tcp" >
<LogStashJSONLayout>
<PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
<KeyValuePair key="tomcat.host" value="${env:HOSTNAME}"/>
<KeyValuePair key="tomcat.port" value="${env:CONNECTOR_PORT}"/>
<KeyValuePair key="tomcat.service" value="${web:contextPath}"/>
</LogStashJSONLayout>
</Socket>
<Async name="async" blocking="false" ignoreExceptions="false" errorRef="fallbackFile">
<AppenderRef ref="logstash"/>
</Async>
But we get an error when deploying the application in an environment where the Logstash Server node is purposely unavailable:
2015-04-24 09:55:43,688 ERROR Unable to invoke factory method in class class org.apache.logging.log4j.core.appender.SocketAppender for element Socket.
2015-04-24 09:55:43,702 ERROR Null object returned for Socket in Appenders.
2015-04-24 09:55:43,707 ERROR No appender named logstash was configured
Apr 24, 2015 9:55:43 AM org.apache.catalina.core.StandardContext startInternal
SEVERE: Error during ServletContainerInitializer processing
javax.servlet.ServletException: Failed to instantiate WebApplicationInitializer class
    at org.springframework.web.SpringServletContainerInitializer.onStartup(SpringServletContainerInitializer.java:160)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5481)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:649)
    at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:1081)
    at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:553)
    at org.apache.catalina.startup.HostConfig.check(HostConfig.java:1668)
    at sun.reflect.GeneratedMethodAccessor532.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.tomcat.util.modeler.BaseModelMBean.invoke(BaseModelMBean.java:301)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(Unknown Source)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(Unknown Source)
    at org.apache.catalina.manager.ManagerServlet.check(ManagerServlet.java:1480)
    at org.apache.catalina.manager.ManagerServlet.deploy(ManagerServlet.java:709)
    at org.apache.catalina.manager.ManagerServlet.doPut(ManagerServlet.java:450)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:649)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.catalina.filters.SetCharacterEncodingFilter.doFilter(SetCharacterEncodingFilter.java:108)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:612)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
    at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:421)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1070)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:611)
    at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:314)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.ExceptionInInitializerError
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
    at java.lang.reflect.Constructor.newInstance(Unknown Source)
    at java.lang.Class.newInstance(Unknown Source)
    at org.springframework.web.SpringServletContainerInitializer.onStartup(SpringServletContainerInitializer.java:157)
    ... 42 more
Caused by: java.lang.NullPointerException
    at org.apache.logging.log4j.core.appender.AsyncAppender.start(AsyncAppender.java:108)
    at org.apache.logging.log4j.core.config.AbstractConfiguration.start(AbstractConfiguration.java:157)
    at org.apache.logging.log4j.core.LoggerContext.setConfiguration(LoggerContext.java:364)
    at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:422)
    at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:146)
    at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:75)
    at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:37)
    at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:468)
    at org.apache.logging.log4j.LogManager.getLogger(LogManager.java:403)
    at com.company.service.config.WebInitialiser.(WebInitialiser.java:21)
What are we doing wrong? Should we perhaps be using a Failover Appender instead?
--------- UPDATE ---------
After further testing, I can say that the above setup works in cases where the Socket Appender's host is available but the port is NOT (e.g. service unavailable). But it does NOT work in cases where the host itself is NOT available (e.g. unknown host).
Now, in order to have the fallbackFile Appender work in both cases (service unavailable and unknown host), I have wrapped our Socket Appender in a Failover Appender:
<RollingFile name="fallbackFile"
fileName="${sys:catalina.base}/logs/${web:contextPath}-fallback.log"
filePattern="${sys:catalina.base}/logs/${web:contextPath}-%d{dd-MMM-yyyy}-%i.log"
append="true">
<PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
<Policies>
<SizeBasedTriggeringPolicy size="1 GB" />
</Policies>
</RollingFile>
<Socket name="logstash" host="logging" port="4560" protocol="tcp" >
<LogStashJSONLayout>
<PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
<KeyValuePair key="tomcat.host" value="${env:HOSTNAME}"/>
<KeyValuePair key="tomcat.port" value="${env:CONNECTOR_PORT}"/>
<KeyValuePair key="tomcat.service" value="${web:contextPath}"/>
</LogStashJSONLayout>
</Socket>
<Failover name="failover" primary="logstash">
<Failovers>
<AppenderRef ref="fallbackFile"/>
</Failovers>
</Failover>
<Async name="async" bufferSize="10" blocking="false" ignoreExceptions="false" errorRef="fallbackFile">
<AppenderRef ref="failover"/>
</Async>
This configuration seems to be giving us the behaviour we wanted.
Any cleaner/neater solutions or even comments on this configuration would be most welcome.
Cheers,
PM

logstash-gelf is much more resilient than log4j2's SocketAppender. An AsyncAppender can protect the application up to a certain degree, but with TCP you can still run into issues such as the service being unavailable or slow, or connection/reconnection attempts eating up time.
GELF works over UDP, so if the service is down or not reachable, it does not affect your application's performance in any way. The only thing that might happen is that you lose some log events, but you've got the file fallback for that case.
A full example of logstash-gelf's config looks like:
<Configuration>
<Appenders>
<Gelf name="gelf" host="udp:localhost" port="12201" version="1.1" extractStackTrace="true"
filterStackTrace="true" mdcProfiling="true" includeFullMdc="true" maximumMessageSize="8192"
originHost="%host{fqdn}">
<Field name="timestamp" pattern="%d{dd MMM yyyy HH:mm:ss,SSS}" />
<Field name="level" pattern="%level" />
<Field name="simpleClassName" pattern="%C{1}" />
<Field name="className" pattern="%C" />
<Field name="server" pattern="%host" />
<Field name="server.fqdn" pattern="%host{fqdn}" />
<!-- This is a static field -->
<Field name="fieldName2" literal="fieldValue2" />
<!-- This is a field using MDC -->
<Field name="mdcField2" mdc="mdcField2" />
<DynamicMdcFields regex="mdc.*" />
<DynamicMdcFields regex="(mdc|MDC)fields" />
</Gelf>
</Appenders>
<Loggers>
<Root level="INFO">
<AppenderRef ref="gelf" />
</Root>
</Loggers>
</Configuration>
Full documentation is available here: http://logging.paluch.biz/
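One way to keep the file fallback mentioned above alongside GELF is simply to reference both appenders from the root logger. A minimal sketch reusing the fallbackFile appender from the question (this combination is my suggestion, not part of the original setup):
<Loggers>
    <Root level="INFO">
        <!-- ship log events to Logstash over GELF/UDP -->
        <AppenderRef ref="gelf"/>
        <!-- keep a local copy so nothing is lost if UDP packets are dropped -->
        <AppenderRef ref="fallbackFile"/>
    </Root>
</Loggers>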

The updated solution is 99% good.
As specified in the log4j2 Async Appender documentation, ignoreExceptions must be set to false on the Socket Appender to make it work with the Failover Appender:
You must set this to false when wrapping this Appender in a FailoverAppender.
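Applied to the configuration from the update above, the only change is the ignoreExceptions="false" attribute on the Socket Appender. A minimal sketch (the LogStashJSONLayout is omitted here; it stays exactly as in the question):
<Socket name="logstash" host="logging" port="4560" protocol="tcp" ignoreExceptions="false">
    <!-- LogStashJSONLayout as in the question -->
</Socket>
<Failover name="failover" primary="logstash">
    <Failovers>
        <AppenderRef ref="fallbackFile"/>
    </Failovers>
</Failover>
<Async name="async" bufferSize="10" blocking="false" ignoreExceptions="false" errorRef="fallbackFile">
    <AppenderRef ref="failover"/>
</Async>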

Related

How to overwrite the SAME file every hour in log4j2

I have a very simple requirement to write DEBUG logs to a file, but I shouldn't keep that file for more than an hour for audit purposes. So, is there a way to overwrite the same file (file_debug) every hour automatically?
I've seen that log4j2 has a Delete action, as below, but I don't want to use it due to security issues. Also, append="false" won't work for me, as I need to retain events for an hour rather than overwrite the file immediately.
<DefaultRolloverStrategy>
<Delete basePath="${baseDir}" maxDepth="2">
<IfFileName glob="*/app-*.log.gz" />
<IfLastModified age="P60D" />
</Delete>
</DefaultRolloverStrategy>
My current log4j2.xml
<?xml version="1.0" encoding="utf-8"?>
<Configuration>
<Appenders>
<RollingFile name="file" fileName="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}log4j_poc.log" filePattern="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}log4j_poc_%i.log">
<PatternLayout pattern="%-5p %d [%t] [processor: %X{processorPath}; event: %X{correlationId}] %c: %m%n"/>
<SizeBasedTriggeringPolicy size="10 MB"/>
<DefaultRolloverStrategy max="10"/>
</RollingFile>
<RollingFile name="file_debug" fileName="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}log4j_poc_debug.log" filePattern="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}log4j_poc_debug.log">
<Filters>
<ThresholdFilter level="TRACE"/>
<ThresholdFilter level="INFO" onMatch="DENY" onMismatch="NEUTRAL"/>
</Filters>
<PatternLayout pattern="%-5p %d [%t] [processor: %X{processorPath}; event: %X{correlationId}] %c: %m%n"/>
<Policies>
<SizeBasedTriggeringPolicy size="10 MB"/>
<CronTriggeringPolicy schedule="0 0 * * * ?"/>
</Policies>
<DefaultRolloverStrategy max="0"/>
</RollingFile>
</Appenders>
<Loggers>
<!-- Http Logger shows wire traffic on DEBUG. -->
<AsyncLogger name="org.mule.service.http.impl.service.HttpMessageLogger" level="DEBUG" />
<AsyncLogger name="org.mule.service.http" level="WARN"/>
<AsyncLogger name="org.mule.extension.http" level="WARN"/>
<AsyncLogger name="org.mule.extension.db" level="DEBUG"/>
<AsyncLogger name="org.mule.db.commons" level="DEBUG"/>
<!-- Mule logger -->
<AsyncLogger name="org.mule.runtime.core.internal.processor.LoggerMessageProcessor" level="INFO"/>
<AsyncRoot level="INFO">
<AppenderRef ref="file" level="INFO"/>
<AppenderRef ref="file_debug" level="TRACE"/>
</AsyncRoot>
</Loggers>
</Configuration>
Please suggest whether there is any option to overwrite the same file ("file_debug") every hour.

log4j2 AsyncAppender caused a memory leak

log4j2 version: 2.7
My project is a highly concurrent system and I use the log4j2 AsyncAppender to save logs, but every night at 00:00:00, when the rollover strategy triggers, events get blocked in the rollover process.
The appender creates lots of threads and causes a memory leak. Here is my config:
<?xml version="1.0" encoding="UTF-8"?>
<configuration status="info">
<Properties>
<Property name="logDir">/data/logs/q-mix</Property>
<Property name="rollingSuffix">log.gz</Property>
<Property name="CONSOLE_LOG_PATTERN">%clr{%d{yyyy-MM-dd HH:mm:ss.SSS}}{faint} %clr{%5p} %clr{%5.5T}{magenta} %clr{---}{faint} %clr{[%20.20t]}{faint} %clr{%-40.40c{1.}}{cyan} %clr{:}{faint} %m%n</Property>
<Property name="FILE_LOG_PATTERN">%d{yyyy-MM-dd HH:mm:ss.SSS} %5p %5.5T --- [%20.20t] %-40.40c{1.} : %m%n</Property>
</Properties>
<appenders>
<console name="Console" target="SYSTEM_OUT">
<PatternLayout pattern="${CONSOLE_LOG_PATTERN}" />
</console>
<RollingFile name="RollingFileInfo" fileName="${logDir}/info.log"
filePattern="${logDir}/logs/%d{yyyy-MM-dd}-%i-info.${rollingSuffix}">
<Filters>
<ThresholdFilter level="INFO"/>
</Filters>
<PatternLayout pattern="${FILE_LOG_PATTERN}"/>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="2048 MB"/>
</Policies>
<DefaultRolloverStrategy max="80">
<Delete basePath="${logDir}/logs/" maxDepth="1">
<IfFileName glob="*.${rollingSuffix}">
<IfAny>
<IfAccumulatedFileSize exceeds="50 GB" />
<IfLastModified age="15d" />
</IfAny>
</IfFileName>
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
<RollingFile name="RollingFileWarn" fileName="${logDir}/warn.log"
filePattern="${logDir}/logs/%d{yyyy-MM-dd}-%i-warn.${rollingSuffix}">
<Filters>
<ThresholdFilter level="WARN"/>
</Filters>
<PatternLayout pattern="${FILE_LOG_PATTERN}"/>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="1024 MB"/>
</Policies>
<DefaultRolloverStrategy max="40"/>
</RollingFile>
<RollingFile name="RollingFileError" fileName="${logDir}/error.log"
filePattern="${logDir}/logs/%d{yyyy-MM-dd}-%i-error.${rollingSuffix}">
<ThresholdFilter level="ERROR"/>
<PatternLayout pattern="${FILE_LOG_PATTERN}"/>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="1024 MB"/>
</Policies>
<DefaultRolloverStrategy max="40"/>
</RollingFile>
<RollingFile name="RollingFileAlarm" fileName="${logDir}/alarm.log"
filePattern="${logDir}/logs/%d{yyyy-MM-dd}-%i-alarm.${rollingSuffix}">
<PatternLayout pattern="${FILE_LOG_PATTERN}"/>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="1024 MB"/>
</Policies>
<DefaultRolloverStrategy max="40"/>
</RollingFile>
<Async name="async" bufferSize="1024000">
<!-- <appender-ref ref="Console"/>-->
<appender-ref ref="RollingFileInfo"/>
<!-- <appender-ref ref="RollingFileWarn"/>-->
<appender-ref ref="RollingFileError"/>
</Async>
</appenders>
<loggers>
<logger name="com.upex.exchange.robot.util.AlarmUtils" level="error">
<appender-ref ref="RollingFileAlarm"/>
</logger>
<root level="info">
<appender-ref ref="async"/>
</root>
</loggers>
</configuration>
Here is the stacktrace:
AsyncAppender-async
at sun.misc.Unsafe.park(ZJ)V (Native Method)
at java.util.concurrent.locks.LockSupport.park(Ljava/lang/Object;)V (LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt()Z (AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(I)V (AbstractQueuedSynchronizer.java:997)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(I)V (AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.Semaphore.acquire()V (Semaphore.java:312)
at org.apache.logging.log4j.core.appender.rolling.RollingFileManager.rollover(Lorg/apache/logging/log4j/core/appender/rolling/RolloverStrategy;)Z (RollingFileManager.java:247)
at org.apache.logging.log4j.core.appender.rolling.RollingFileManager.rollover()V (RollingFileManager.java:192)
at org.apache.logging.log4j.core.appender.rolling.RollingFileManager.checkRollover(Lorg/apache/logging/log4j/core/LogEvent;)V (RollingFileManager.java:175)
at org.apache.logging.log4j.core.appender.RollingFileAppender.append(Lorg/apache/logging/log4j/core/LogEvent;)V (RollingFileAppender.java:280)
at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(Lorg/apache/logging/log4j/core/LogEvent;)V (AppenderControl.java:156)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(Lorg/apache/logging/log4j/core/LogEvent;)V (AppenderControl.java:129)
at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(Lorg/apache/logging/log4j/core/LogEvent;)V (AppenderControl.java:120)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender(Lorg/apache/logging/log4j/core/LogEvent;)V (AppenderControl.java:84)
at org.apache.logging.log4j.core.appender.AsyncAppender$AsyncThread.callAppenders(Lorg/apache/logging/log4j/core/LogEvent;)Z (AsyncAppender.java:451)
at org.apache.logging.log4j.core.appender.AsyncAppender$AsyncThread.run()V (AsyncAppender.java:404)
This is the Eclipse Memory Analyzer leak report:
One instance of "java.util.concurrent.ArrayBlockingQueue" loaded by "<system class loader>" occupies 897,341,688 (95.23%) bytes.
The instance is referenced by org.apache.logging.log4j.core.appender.AsyncAppender$AsyncThread # 0xc0700150 AsyncAppender-async ,
loaded by "org.springframework.boot.loader.LaunchedURLClassLoader # 0xc0400000".
The memory is accumulated in one instance of "java.lang.Object[]" loaded by "<system class loader>".
The stacktrace of this Thread is available. See stacktrace.
Keywords
java.util.concurrent.ArrayBlockingQueue
java.lang.Object[]
org.springframework.boot.loader.LaunchedURLClassLoader # 0xc0400000
How can I change my configuration to resolve this problem? I really need some help. Thanks a lot.
Sorry for my English grammar.
Following log4j's suggestion, I upgraded log4j2 to version 2.14:
LMAX Disruptor technology. Asynchronous Loggers internally use the Disruptor, a lock-free inter-thread communication library, instead of queues, resulting in higher throughput and lower latency.
The new config XML looks like this:
<appenders>
<RollingRandomAccessFile name="RollingFileWarn" fileName="${logDir}/warn.log"
filePattern="${logDir}/logs/%d{yyyy-MM-dd}-%i-warn.${rollingSuffix}" immediateFlush="false">
<Filters>
<ThresholdFilter level="WARN"/>
</Filters>
<PatternLayout pattern="${FILE_LOG_PATTERN}"/>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="1024 MB"/>
</Policies>
<DefaultRolloverStrategy max="40"/>
</RollingRandomAccessFile>
</appenders>
<loggers>
<logger name="com.upex.exchange.robot.util.AlarmUtils" level="error">
<appender-ref ref="RollingFileAlarm"/>
</logger>
<AsyncRoot level="info" includeLocation="true">
<appender-ref ref="RollingFileInfo"/>
<appender-ref ref="RollingFileError"/>
</AsyncRoot>
</loggers>
I also added a log4j2.component.properties file to the classpath with the following content:
log4j2.AsyncQueueFullPolicy=Discard
AsyncLoggerConfig.SynchronizeEnqueueWhenQueueFull=true
AsyncLoggerConfig.RingBufferSize=131072
This new approach may solve the memory leak problem (not confirmed): when the TimeBasedTriggeringPolicy fired, the CPU load increased to five times the usual level, so it seems things still block while the policy is triggering. However, this configuration does limit the queue length and the queue-full policy. I will continue to observe.
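As a side note that goes beyond the original post: with the Discard policy, Log4j also honours a discard threshold, so a sketch of the same properties file with that made explicit could look like the following (log4j2.DiscardThreshold is the documented companion setting as I recall it; double-check the property names against your Log4j version):
# log4j2.component.properties (sketch)
# When the async queue is full, events at or below the threshold level are silently dropped
log4j2.AsyncQueueFullPolicy=Discard
# The threshold level for dropping (INFO is the documented default)
log4j2.DiscardThreshold=INFO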

log4j2 does not roll application logs written without log4j

My log4j2.xml does not delete old logs.
My application writes logs to ${sys:LOG_PATH}/onixs/fix/ without log4j (sys:LOG_PATH is an environment variable).
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
<Appenders>
<Console name="Console" target="SYSTEM_OUT">
<PatternLayout
pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
</Console>
<RollingFile name="onixs"
fileName="${sys:LOG_PATH}/onixs/engine/engine_log.txt"
filePattern="${sys:LOG_PATH}/onixs/archive/engine/engine_log.%d{yyyy-MM-dd-HH-mm-ss}.txt"
append="true"
immediateFlush="false">
<PatternLayout>
<Pattern>%d{HH:mm:ss:SSS}|%-5.5level|%-20.20thread|%-30.30logger{30}|%msg%n
</Pattern>
</PatternLayout>
<Policies>
<OnStartupTriggeringPolicy/>
</Policies>
<DefaultRolloverStrategy fileIndex="nomax">
<Delete basePath="${sys:LOG_PATH}/onixs/archive/">
<IfFileName glob="engine_log.*.txt"/>
<IfLastModified age="1d"/>
</Delete>
<Delete basePath="${sys:LOG_PATH}/onixs/fix/">
<IfAny>
<IfFileName glob="*.R*.summary"/>
<IfFileName glob="*.state"/>
</IfAny>
<IfLastModified age="1d"/>
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
</Appenders>
<Loggers>
<logger name="biz.onixs" level="info" additivity="false">
<AppenderRef ref="onixs"/>
</logger>
<Root level="debug">
<AppenderRef ref="Console"/>
</Root>
</Loggers>
</Configuration>
I expect my log4j to roll logs in ${sys:LOG_PATH}/onixs/fix/ every day (IfLastModified age="1d"). But this isn't happening. Could you help me understand why?
Your file pattern says you want the log to roll over every second; however, your policy indicates you only want to roll over when the application starts. The IfLastModified age="1d" condition indicates that you only want to keep the previous day's files in the archive folder; it has nothing to do with how frequently rollover will occur.
If you want the file to roll over while the application is running, then you need a triggering policy that does that. One of SizeBasedTriggeringPolicy, TimeBasedTriggeringPolicy or CronTriggeringPolicy will do the trick.
I suggest you review the configuration and RollingFileAppender sections of the manual one more time.
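For illustration, a minimal sketch of the Policies block from the question with a time-based trigger added; the CronTriggeringPolicy schedule below is an assumption (fire once a day at midnight), so that rollover, and with it the Delete actions, also happens while the application keeps running:
<Policies>
    <OnStartupTriggeringPolicy/>
    <!-- assumed addition: also roll over once a day at midnight -->
    <CronTriggeringPolicy schedule="0 0 0 * * ?"/>
</Policies>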

Unable to locate appender

We are moving from log4j 1.x to log4j2.
We changed the properties file to an XML file to support log4j2.
Below is the XML file which we are using:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="debug">
<Appenders>
<RollingFile name="syslog" fileName="/var/log/stor/gui/gui.log" filePattern="/var/log/stor/gui/gui-%d{MM-dd-yyyy}-%i.log" append="true">
<PatternLayout>
<pattern>%d %p %m%n</pattern>
</PatternLayout>
<Filters>
<!-- Now deny warn, error and fatal messages -->
<ThresholdFilter level="warn" onMatch="DENY" onMismatch="NEUTRAL"/>
<ThresholdFilter level="error" onMatch="DENY" onMismatch="NEUTRAL"/>
<ThresholdFilter level="fatal" onMatch="DENY" onMismatch="NEUTRAL"/>
<!-- This filter accepts info, warn, error, fatal and denies debug/trace -->
<ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
</Filters>
<Policies>
<OnStartupTriggeringPolicy />
<TimeBasedTriggeringPolicy />
<SizeBasedTriggeringPolicy size="16 MB"/>
</Policies>
<DefaultRolloverStrategy max="1"/>
</RollingFile>
</Appenders>
<appenders>
<RollingFile name="requestUrilog" fileName="/var/log/stor/gui/requestUrilog.log" filePattern="/var/log/stor/gui/requestUrilog-%d{MM-dd-yyyy}-%i.log" append="true">
<PatternLayout>
<pattern>%d %p %m%n</pattern>
</PatternLayout>
<Filters>
<!-- Now deny warn, error and fatal messages -->
<ThresholdFilter level="warn" onMatch="DENY" onMismatch="NEUTRAL"/>
<ThresholdFilter level="error" onMatch="DENY" onMismatch="NEUTRAL"/>
<ThresholdFilter level="fatal" onMatch="DENY" onMismatch="NEUTRAL"/>
<!-- This filter accepts info, warn, error, fatal and denies debug/trace -->
<ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
</Filters>
<Policies>
<OnStartupTriggeringPolicy />
<TimeBasedTriggeringPolicy />
<SizeBasedTriggeringPolicy size="8 MB"/>
</Policies>
<DefaultRolloverStrategy max="4"/>
</RollingFile>
</appenders>
<appenders>
<RollingFile name="userlog" fileName="/var/log/stor/gui/userlog.log" filePattern="/var/log/stor/gui/userlog-%d{MM-dd-yyyy}-%i.log" append="true">
<PatternLayout>
<pattern>%d %p %m%n</pattern>
</PatternLayout>
<Filters>
<!-- Now deny warn, error and fatal messages -->
<ThresholdFilter level="warn" onMatch="DENY" onMismatch="NEUTRAL"/>
<ThresholdFilter level="error" onMatch="DENY" onMismatch="NEUTRAL"/>
<ThresholdFilter level="fatal" onMatch="DENY" onMismatch="NEUTRAL"/>
<!-- This filter accepts info, warn, error, fatal and denies debug/trace -->
<ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
</Filters>
<Policies>
<OnStartupTriggeringPolicy />
<TimeBasedTriggeringPolicy />
<SizeBasedTriggeringPolicy size="8 MB"/>
</Policies>
<DefaultRolloverStrategy max="4"/>
</RollingFile>
</appenders>
<Loggers>
<Logger name="com.tms.gui.sys" additivity="false" level="info">
<AppenderRef ref="syslog"/>
</Logger>
<Logger name="com.tms.gui.requestUri" additivity="false" level="info">
<AppenderRef ref="requestUrilog"/>
</Logger>
<Logger name="com.tms.gui.user" additivity="false" level="info">
<AppenderRef ref="userlog"/>
</Logger>
<Root level="info">
<AppenderRef ref="syslog"/>
</Root>
</Loggers>
</Configuration>
Below is the debug info which we are getting for the error:
DEBUG Found factory method [createLoggers]: public static org.apache.logging.log4j.core.config.Loggers org.apache.logging.log4j.core.config.LoggersPlugin.createLoggers(org.apache.logging.log4j.core.config.LoggerConfig[]).
2016-01-14 12:40:14,757 localhost-startStop-1 DEBUG Calling createLoggers on class org.apache.logging.log4j.core.config.LoggersPlugin for element Loggers with params(={com.tms.gui.sys, com.tms.gui.requestUri, com.tms.gui.user, root})
2016-01-14 12:40:14,758 localhost-startStop-1 DEBUG Built Plugin[name=loggers] OK from factory method.
2016-01-14 12:40:14,758 localhost-startStop-1 ERROR Unable to locate appender requestUrilog for logger com.tms.gui.requestUri
2016-01-14 12:40:14,758 localhost-startStop-1 ERROR Unable to locate appender syslog for logger
2016-01-14 12:40:14,758 localhost-startStop-1 ERROR Unable to locate appender syslog for logger com.tms.gui.sys
2016-01-14 12:40:14,759 localhost-startStop-1 DEBUG Configuration XmlConfiguration[location=/opt/stor/gui/www/WEB-INF/classes/log4j2.xml] initialized
Please guide us on what we are doing wrong.
The ThresholdFilter matches when the level of the log event is the same as, or more specific than, the level specified on the filter. So the filter that checks for level "warn" and does a DENY on a match will also deny error and fatal, making the next two filters unnecessary.
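In other words, the four filters from the question collapse into something like this minimal sketch with the same net effect (only INFO events are accepted):
<Filters>
    <!-- this single filter already denies warn, error and fatal -->
    <ThresholdFilter level="warn" onMatch="DENY" onMismatch="NEUTRAL"/>
    <!-- info is accepted; debug and trace fall through to DENY -->
    <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
</Filters>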
Can you provide the debug info for creating the appenders? It is possible that the appenders can't be found because problems were encountered creating them, possibly due to permission problems when creating a file in the /var/log directory.

Why isn't my Log4J code working with the FlumeAppender?

I'm getting the following error with Log4J:
2015-07-07 18:24:00,974 ERROR Error processing element Flume: CLASS_NOT_FOUND
2015-07-07 18:24:01,009 ERROR Appender AuditLogger cannot be located. Route ignored
The following is my Log4J2 XML file:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
<Appenders>
<Console name="Console" target="SYSTEM_OUT">
<PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
</Console>
<File name="MyFile" fileName="OutputLogFile.log" immediateFlush="false" append="false">
<PatternLayout pattern="%d{yyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
</File>
<Flume name="AuditLogger" compress="true">
<Agent host="192.168.10.101" port="8800"/>
<Agent host="192.168.10.102" port="8800"/>
<RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
</Flume>
<Routing name="Routing">
<Routes pattern="$${sd:type}">
<Route>
<RollingFile name="Rolling-${sd:type}" fileName="${sd:type}.log"
filePattern="${sd:type}.%i.log.gz">
<PatternLayout>
<pattern>%d %p %c{1.} [%t] %m%n</pattern>
</PatternLayout>
<SizeBasedTriggeringPolicy size="100" />
</RollingFile>
</Route>
<Route ref="AuditLogger" key="Audit"/>
</Routes>
</Routing>
</Appenders>
<Loggers>
<Root level="all">
<Appender-Ref ref="Console"/>
<Appender-Ref ref="MyFile"/> <!-- added_in now -->
</Root>
</Loggers>
</Configuration>
Then, I tried to add in the FlumeAppender like this:
import org.apache.logging.log4j.flume;
But it didn't work after that... How do I set up the FlumeAppender?
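For what it's worth, CLASS_NOT_FOUND for the Flume element usually means the Flume appender plugin isn't on the classpath, and an import statement in your code won't change that. A sketch of the Maven dependency that provides it (the version shown is an assumption; use the one matching your log4j-core):
<!-- the FlumeAppender ships in a separate log4j2 module -->
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-flume-ng</artifactId>
    <version>2.3</version> <!-- assumed; align with your log4j-core version -->
</dependency>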
<RollingFile name="MESSAGING_FILE" fileName="log/messaging.log"
filePattern="log/MM_messaging.log.%i">
<PatternLayout pattern="%m%n"/>
<Policies>
<SizeBasedTriggeringPolicy size="1MB"/>
</Policies>
<DefaultRolloverStrategy max="3"/>
<Filters>
<ThresholdFilter level="INFO" onMatch="ACCEPT" onMismatch="DENY"/>
</Filters>
</RollingFile>
Here is an example that creates a new file every time the file size reaches 1 MB; you can set the size using standard units such as 10 KB, 345 MB, etc.
The %i in the file pattern must be included for size-based rollovers. The max attribute determines the number of files that will be kept; over time you will lose the oldest logs.
Lastly, the ThresholdFilter above logs only messages at level INFO and more severe (INFO, WARN, ERROR, ...). To match INFO exactly, set onMatch to NEUTRAL and then add another ThresholdFilter like this:
<Filters>
<ThresholdFilter level="INFO" onMatch="NEUTRAL" onMismatch="DENY"/>
<ThresholdFilter level="EROR" onMatch="DENY" onMismatch="ACCEPT"/>
</Filters>
They get processed in order. NEUTRAL just means move on to the next filter; ACCEPT means the event will be logged immediately. If the message passes through all the filters with NEUTRAL status, it is accepted by default, just as if there were no filters.
As far as separating logs by output goes, the most straightforward way is to define a Logger for the package (class path) you are interested in and add an AppenderRef for the appender you want that output to go to. Another simple way is to use a Marker and a MarkerFilter. Lastly, I believe there is also the RoutingAppender, which is probably more efficient but more complicated, and I don't have experience using it.
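A minimal sketch of that first approach (the package name here is a placeholder; MESSAGING_FILE is the appender from the example above and Console is the one from your configuration):
<Loggers>
    <!-- everything logged under this package goes only to MESSAGING_FILE -->
    <Logger name="com.example.messaging" level="info" additivity="false">
        <AppenderRef ref="MESSAGING_FILE"/>
    </Logger>
    <Root level="info">
        <AppenderRef ref="Console"/>
    </Root>
</Loggers>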
Spend some time on the log4j2 developer website; they have a relatively good API reference. It will help you with most everything you want to do in a new implementation.
