Log4j2 - LogManager.getLogger("name") not finding custom loggers

All of the following calls return the same logger: the root logger for the class this code is in. I would expect them all to be different, with the first three calls returning my three custom loggers.
Note that the files specified by the appenders are created, but no logs are ever written to them.
Logger _l = (Logger) LogManager.getLogger("Global");
_l = (Logger) LogManager.getLogger("fakeswitch");
_l = (Logger) LogManager.getLogger("fakeswitch_two");
_l = (Logger) LogManager.getLogger();
I create custom loggers using the following:
ComponentBuilder triggeringPolicy = configurationBuilder.newComponent("Policies")
        .addComponent(configurationBuilder.newComponent("TimeBasedTriggeringPolicy")
                .addAttribute("interval", "1"));
AppenderComponentBuilder log4jFileAppenderBuilder = configurationBuilder
        .newAppender(pName + "_SmdrDailyRollingFileAppender", "RollingFile");
log4jFileAppenderBuilder.addAttribute("filename", pLogFilename);
log4jFileAppenderBuilder.addAttribute("filePattern", pLogFilenamePattern);
log4jFileAppenderBuilder.addComponent(triggeringPolicy);
// Configure the PatternLayout
LayoutComponentBuilder layoutComponentBuilder = configurationBuilder.newLayout("PatternLayout")
        .addAttribute("pattern", DEBUG_PATTERN_LAYOUT_STRING);
log4jFileAppenderBuilder.add(layoutComponentBuilder);
// Add it back into the configuration
configurationBuilder.add(log4jFileAppenderBuilder);
// https://logging.apache.org/log4j/2.x/manual/customconfig.html
LoggerComponentBuilder logger = configurationBuilder.newLogger(pName, Level.DEBUG);
logger.add(configurationBuilder.newAppenderRef(pName + "_SmdrDailyRollingFileAppender"));
logger.addAttribute("additivity", false);
configurationBuilder.add(logger);
// Actually use it
LoggerContext _loggerContext = Configurator.initialize(configurationBuilder.build());
The equivalent XML from writeXmlConfiguration is:
<?xml version="1.0" ?>
<Configuration>
  <Appenders>
    <RollingFile name="Global_SmdrDailyRollingFileAppender" filename="ps/debug/SMDR_DEBUG.txt"
                 filePattern="ps/debug/SMDR_DEBUG_%d{yyyyMMdd}.txt.gz">
      <Policies>
        <TimeBasedTriggeringPolicy interval="1"/>
      </Policies>
      <PatternLayout pattern="%d{MM.DD.yy-HH:mm:ss} %m%n"/>
    </RollingFile>
    <RollingFile name="fakeswitch_SmdrDailyRollingFileAppender" filename="ps/debug/SMDR_DEBUG_fakeswitch.txt"
                 filePattern="ps/debug/SMDR_DEBUG_fakeswitch_%d{yyyyMMdd}.txt.gz">
      <Policies>
        <TimeBasedTriggeringPolicy interval="1"/>
      </Policies>
      <PatternLayout pattern="%d{MM.DD.yy-HH:mm:ss} %m%n"/>
    </RollingFile>
    <RollingFile name="fakeswitch_two_SmdrDailyRollingFileAppender"
                 filename="ps/debug/SMDR_DEBUG_fakeswitch_two.txt"
                 filePattern="ps/debug/SMDR_DEBUG_fakeswitch_two_%d{yyyyMMdd}.txt.gz">
      <Policies>
        <TimeBasedTriggeringPolicy interval="1"/>
      </Policies>
      <PatternLayout pattern="%d{MM.DD.yy-HH:mm:ss} %m%n"/>
    </RollingFile>
  </Appenders>
  <Loggers>
    <Logger name="Global" level="DEBUG" additivity="false">
      <AppenderRef ref="Global_SmdrDailyRollingFileAppender"/>
    </Logger>
    <Logger name="fakeswitch" level="DEBUG" additivity="false">
      <AppenderRef ref="fakeswitch_SmdrDailyRollingFileAppender"/>
    </Logger>
    <Logger name="fakeswitch_two" level="DEBUG" additivity="false">
      <AppenderRef ref="fakeswitch_two_SmdrDailyRollingFileAppender"/>
    </Logger>
  </Loggers>
</Configuration>

This was answered for me by Piotr P Karwasz.
This only works if the LoggerContext has not been initialized yet. Since
every call to a LogManager method initializes a LoggerContext, it is
almost certainly too late to use Configurator.initialize.
Use Configuration.reconfigure instead, which works in all cases.
Piotr
My code change was from:
LoggerContext _loggerContext = Configurator.initialize(configurationBuilder.build());
to:
Configurator.reconfigure(configurationBuilder.build());
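For context, a minimal sketch of the corrected flow (the builder setup is the same as above; the "Global" logger name comes from the original example):
// Build the configuration as before, then apply it to the already-initialized
// context. Unlike Configurator.initialize, this works even after LogManager
// has created a LoggerContext.
Configurator.reconfigure(configurationBuilder.build());
// The named loggers now resolve to the configured ones instead of the root logger.
Logger global = (Logger) LogManager.getLogger("Global");
global.debug("This now lands in ps/debug/SMDR_DEBUG.txt");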

Related

log4j2 RollingFileAppender old file gets removed after 7 rollovers

I use the following log4j RollingFile appender in my webapp.
<Appenders>
  <RollingFile name="logFile"
               fileName="${env:SYSTEM_LOGS}/${env:LOG_FILE_NAME}.log" immediateFlush="true"
               filePattern="${env:SYSTEM_LOGS}/${env:LOG_FILE_NAME}.log.%d{yyyy_MM_dd.HH_mm_ss}.%i">
    <PatternLayout pattern="%d{yyyyMMdd-HHmmss.SSS}|%X{username}|%-5p|%t| %-100m (%c{1})%n"/>
    <Policies>
      <OnStartupTriggeringPolicy/>
    </Policies>
  </RollingFile>
</Appenders>
With filePattern="${env:SYSTEM_LOGS}/${env:LOG_FILE_NAME}.log.%d{yyyy_MM_dd.HH_mm_ss}.%i", when the log is rolled over, the old file gets renamed to a filename with an index number (specified with %i), so all old files should get renamed and preserved.
I roll over the log programmatically with the following code.
org.apache.logging.log4j.Logger logManagerLogger = LogManager.getLogger();
Map<String, org.apache.logging.log4j.core.Appender> appenders =
        ((org.apache.logging.log4j.core.Logger) logManagerLogger).getAppenders();
appenders.forEach((appenderName, appender) -> {
    if (appender instanceof RollingFileAppender) {
        LOGGER.info("Switching log for appender " + appenderName);
        ((RollingFileAppender) appender).getManager().rollover();
    }
});
But after 7 rollovers, the existing file gets removed (not renamed according to the specified filePattern) and the log continues in a new file.
What could be the issue here?
Set a DefaultRolloverStrategy (the default max is 7). Your config would then be:
<Appenders>
  <RollingFile name="logFile"
               fileName="${env:SYSTEM_LOGS}/${env:LOG_FILE_NAME}.log" immediateFlush="true"
               filePattern="${env:SYSTEM_LOGS}/${env:LOG_FILE_NAME}.log.%d{yyyy_MM_dd.HH_mm_ss}.%i">
    <PatternLayout pattern="%d{yyyyMMdd-HHmmss.SSS}|%X{username}|%-5p|%t| %-100m (%c{1})%n"/>
    <Policies>
      <OnStartupTriggeringPolicy/>
    </Policies>
    <DefaultRolloverStrategy max="100"/>
  </RollingFile>
</Appenders>
Now it will keep up to 100 rolled-over log files.
If you want an unlimited number of rollover files: according to the Log4j2 documentation, from release 2.8 this can be done by setting the fileIndex attribute to nomax. For example:
<DefaultRolloverStrategy fileIndex="nomax" />
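If you build the configuration programmatically with the ConfigurationBuilder API (as in the first question above), the equivalent is a component added to the appender builder. A sketch, where log4jFileAppenderBuilder is the RollingFile appender builder from that example:
// max="100" keeps up to 100 indexed files; fileIndex="nomax" removes the cap (2.8+).
log4jFileAppenderBuilder.addComponent(
        configurationBuilder.newComponent("DefaultRolloverStrategy")
                .addAttribute("max", "100"));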

Mule - Object To XML with JAXB

I'm using Mule 3.8 to get some JSON data, which I turn into Java objects and then into XML files. Everything works until my File endpoint, where it all ends in disaster:
Message : Could not find a transformer to transform
"SimpleDataType{type=java.io.ByteArrayOutputStream, mimeType='text/xml',
encoding='null'}" to "SimpleDataType{type=java.io.InputStream,
mimeType='*/*', encoding='null'}".
Payload : <?xml version="1.0" encoding="UTF-8" standalone="yes"?><Header ....></Header>
Payload Type : java.io.ByteArrayOutputStream
...and all I get is dozens of empty .xml files.
I guess I somehow have to transform my payload into something the file component can actually take and turn into a file, or is this something I have to do manually in a Java component?
Regards
EDIT - forgot the config
<flow name="Product">
<file:inbound-endpoint path="C:\temp\fileIn" responseTimeout="10000" doc:name="File"/>
<logger message="#[payload != null]" level="INFO" doc:name="Logger"/>
<json:json-to-object-transformer returnClass="java.util.List" encoding="UTF-8" doc:name="JSON to ObjectList"/>
<collection-splitter doc:name="Collection Splitter"/>
<logger message="#[payload]" level="INFO" doc:name="Logger"/>
<custom-transformer returnClass="se.131.Product.Header" encoding="UTF-8" class="se.131.Tranformer.Map2Product" doc:name="MapToProduct" mimeType="application/xml"/>
<mulexml:jaxb-object-to-xml-transformer name="myMarshaller" jaxbContext-ref="JAXB_Context" doc:name="JAXB Object to XML" encoding="UTF-8" mimeType="application/xml"/>
<logger message="#[payload]" level="INFO" doc:name="Logger"/>
<file:outbound-endpoint path="C:\temp\fileOut" responseTimeout="10000" doc:name="File" outputPattern="Product-#[function:dateStamp].xml" mimeType="text/xml"/>
<catch-exception-strategy doc:name="Catch Exception Strategy">
<logger message="Oh no!!" level="INFO" doc:name="Logger"/>
</catch-exception-strategy>
</flow>
Try placing an <object-to-string-transformer> just before the File outbound endpoint and check.
The File endpoint expects the payload in String format to create the file.
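Applied to the posted flow, that would look roughly like this (a sketch; only the transformer line is new):
<logger message="#[payload]" level="INFO" doc:name="Logger"/>
<object-to-string-transformer doc:name="Object to String"/>
<file:outbound-endpoint path="C:\temp\fileOut" responseTimeout="10000" doc:name="File" outputPattern="Product-#[function:dateStamp].xml" mimeType="text/xml"/>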

connection not getting established between hazelcast client and hazelcast server

The client and server configs for Hazelcast v3.6 are copied below. I can run the server (listening on 127.0.0.1:5701).
I get the following error on the hazelcast client side:
[warn] c.h.c.c.n.ClientConnection - Connection [/127.0.0.1:5701] lost. Reason: java.lang.NullPointerException[null]
[warn] c.h.c.s.i.ClusterListenerSupport - Unable to get alive cluster connection, try in 2986 ms later, attempt 1 of 2.
hazelcast-client.xml
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast-client xsi:schemaLocation="http://www.hazelcast.com/schema/client-config hazelcast-client-config-3.6.xsd"
                  xmlns="http://www.hazelcast.com/schema/client-config"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <group>
    <name>dev</name>
    <password>dev-pass</password>
  </group>
  <properties>
    <property name="hazelcast.client.shuffle.member.list">true</property>
    <property name="hazelcast.client.heartbeat.timeout">60000</property>
    <property name="hazelcast.client.heartbeat.interval">5000</property>
    <property name="hazelcast.client.event.thread.count">5</property>
    <property name="hazelcast.client.event.queue.capacity">1000000</property>
    <property name="hazelcast.client.invocation.timeout.seconds">120</property>
  </properties>
  <network>
    <cluster-members>
      <address>127.0.0.1:5701</address>
      <!-- <address>0.0.0.0</address> -->
    </cluster-members>
    <smart-routing>true</smart-routing>
    <redo-operation>true</redo-operation>
    <connection-timeout>60000</connection-timeout>
    <connection-attempt-period>3000</connection-attempt-period>
    <connection-attempt-limit>2</connection-attempt-limit>
    <socket-options>
      <tcp-no-delay>false</tcp-no-delay>
      <keep-alive>true</keep-alive>
      <reuse-address>true</reuse-address>
      <linger-seconds>3</linger-seconds>
      <timeout>-1</timeout>
      <buffer-size>32</buffer-size>
    </socket-options>
    <socket-interceptor enabled="false">
      <class-name>com.hazelcast.examples.MySocketInterceptor</class-name>
      <properties>
        <property name="foo">bar</property>
      </properties>
    </socket-interceptor>
    <ssl enabled="false">
      <factory-class-name>com.hazelcast.examples.MySslFactory</factory-class-name>
    </ssl>
    <aws enabled="false" connection-timeout-seconds="11">
      <inside-aws>true</inside-aws>
      <access-key>TEST_ACCESS_KEY</access-key>
      <secret-key>TEST_SECRET_KEY</secret-key>
      <region>us-east-1</region>
      <host-header>ec2.amazonaws.com</host-header>
      <security-group-name>hazelcast-sg</security-group-name>
      <tag-key>type</tag-key>
      <tag-value>hz-nodes</tag-value>
    </aws>
  </network>
  <executor-pool-size>40</executor-pool-size> <!-- reduce the pool size after profiling -->
  <security>
    <credentials>com.hazelcast.security.UsernamePasswordCredentials</credentials>
  </security>
  <listeners>
    <!--<listener>com.hazelcast.examples.MembershipListener</listener>
    <listener>com.hazelcast.examples.InstanceListener</listener>
    <listener>com.hazelcast.examples.MigrationListener</listener>
    -->
  </listeners>
  <!-- change to kryo -->
  <!-- <serialization>
    <portable-version>3</portable-version>
    <use-native-byte-order>true</use-native-byte-order>
    <byte-order>BIG_ENDIAN</byte-order>
    <enable-compression>false</enable-compression>
    <enable-shared-object>true</enable-shared-object>
    <allow-unsafe>false</allow-unsafe>
    <data-serializable-factories>
      <data-serializable-factory factory-id="1">com.hazelcast.examples.DataSerializableFactory</data-serializable-factory>
    </data-serializable-factories>
    <portable-factories>
      <portable-factory factory-id="2">com.hazelcast.examples.PortableFactory</portable-factory>
    </portable-factories>
    <serializers>
      <global-serializer>com.hazelcast.examples.GlobalSerializerFactory</global-serializer>
      <serializer type-class="com.hazelcast.examples.DummyType"
                  class-name="com.hazelcast.examples.SerializerFactory"/>
    </serializers>
    <check-class-def-errors>true</check-class-def-errors>
  </serialization>
  -->
  <native-memory enabled="false" allocator-type="POOLED">
    <size unit="MEGABYTES" value="128"/>
    <min-block-size>1</min-block-size>
    <page-size>1</page-size>
    <metadata-space-percentage>40.5</metadata-space-percentage>
  </native-memory>
  <!--
  <proxy-factories>
    <proxy-factory class-name="com.hazelcast.examples.ProxyXYZ1" service="sampleService1"/>
    <proxy-factory class-name="com.hazelcast.examples.ProxyXYZ2" service="sampleService1"/>
    <proxy-factory class-name="com.hazelcast.examples.ProxyXYZ3" service="sampleService3"/>
  </proxy-factories>
  -->
  <load-balancer type="random"/>
  <!--
  Beware that near-cache eviction configuration is different for NATIVE in-memory format.
  Proper eviction configuration example for NATIVE in-memory format:
  `<eviction max-size-policy="USED_NATIVE_MEMORY_SIZE" eviction-policy="LFU" size="60"/>`
  -->
  <!-- <near-cache name="default">
    <max-size>2000</max-size>
    <time-to-live-seconds>90</time-to-live-seconds>
    <max-idle-seconds>100</max-idle-seconds>
    <eviction-policy>LFU</eviction-policy>
    <invalidate-on-change>true</invalidate-on-change>
    <in-memory-format>OBJECT</in-memory-format>
    <local-update-policy>INVALIDATE</local-update-policy>
  </near-cache>
  -->
  <!--
  <query-caches>
    <query-cache name="query-cache-name" mapName="map-name">
      <predicate type="class-name">com.hazelcast.examples.ExamplePredicate</predicate>
      <entry-listeners>
        <entry-listener include-value="true" local="false">com.hazelcast.examples.EntryListener</entry-listener>
      </entry-listeners>
      <include-value>true</include-value>
      <batch-size>1</batch-size>
      <buffer-size>16</buffer-size>
      <delay-seconds>0</delay-seconds>
      <in-memory-format>BINARY</in-memory-format>
      <coalesce>false</coalesce>
      <populate>true</populate>
      <eviction eviction-policy="LRU" max-size-policy="ENTRY_COUNT" size="10000"/>
      <indexes>
        <index ordered="false">name</index>
      </indexes>
    </query-cache>
  </query-caches>
  -->
</hazelcast-client>
Hazelcast server
Here are the console messages on the server, followed by the server config file:
Console messages
INFO: [127.0.0.1]:5701 [dev] [3.6] Established socket connection between /127.0.1.1:5701 and /127.0.0.1:47301
Mar 10, 2016 12:01:48 PM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [127.0.0.1]:5701 [dev] [3.6] Connection [/127.0.0.1:47301] lost. Reason: java.io.EOFException[Remote socket closed!]
hazelcast.xml (server)
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.6.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <group>
    <name>dev</name>
    <password>dev-pass</password>
  </group>
  <network>
    <port auto-increment="true" port-count="100">5701</port>
    <outbound-ports>
      <ports>0-5900</ports>
    </outbound-ports>
    <join>
      <multicast enabled="false">
        <!--<multicast-group>224.2.2.3</multicast-group>
        <multicast-port>54327</multicast-port>-->
      </multicast>
      <tcp-ip enabled="true">
        <member>127.0.0.1</member>
      </tcp-ip>
    </join>
    <interfaces enabled="true">
      <interface>127.0.0.1</interface>
    </interfaces>
    <ssl enabled="false"/>
    <socket-interceptor enabled="false"/>
    <symmetric-encryption enabled="false">
      <algorithm>PBEWithMD5AndDES</algorithm>
      <!-- salt value to use when generating the secret key -->
      <salt>thesalt</salt>
      <!-- pass phrase to use when generating the secret key -->
      <password>thepass</password>
      <!-- iteration count to use when generating the secret key -->
      <iteration-count>19</iteration-count>
    </symmetric-encryption>
  </network>
  <partition-group enabled="false"/>
  <executor-service name="default">
    <pool-size>16</pool-size>
    <!-- Queue capacity. 0 means Integer.MAX_VALUE. -->
    <queue-capacity>0</queue-capacity>
  </executor-service>
  <map name="userMap">
    <async-backup-count>1</async-backup-count>
    <near-cache>
      <max-size>5000</max-size>
      <invalidate-on-change>true</invalidate-on-change>
    </near-cache>
    <map-store enabled="false">
      <class-name></class-name>
      <write-delay-seconds>0</write-delay-seconds>
    </map-store>
  </map>
</hazelcast>
Client code
ClientConfig clientConfig = new XmlClientConfigBuilder().build(); //the xml file is being loaded
HazelcastInstance hazelcastClient = HazelcastClient.newHazelcastClient(clientConfig);
I do not have a firewall running on my computer. Any thoughts on what I may have misconfigured?
Update:
I am able to connect when I specify the IP address programmatically, so I am assuming the issue is either with my client config or how I am reading it:
ClientConfig clientConfig = new ClientConfig();
clientConfig.getNetworkConfig().addAddress("127.0.0.1");
HazelcastInstance hcastClient = HazelcastClient.newHazelcastClient(clientConfig);
The issue was caused by the following block in the client config XML file:
<security>
<credentials>com.hazelcast.security.UsernamePasswordCredentials</credentials>
</security>
Once this was commented out, the client was able to connect to the server. I will update once I gather more information regarding its usage.
This setting only seems to be available in the Enterprise edition, not the Community one; ideally, it should have either let the connection establish or generated a meaningful error message.
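For reference, a pared-down hazelcast-client.xml along the lines of what ended up working (a sketch based on the config above, with the security block removed and other settings left at defaults):
<hazelcast-client xsi:schemaLocation="http://www.hazelcast.com/schema/client-config hazelcast-client-config-3.6.xsd"
                  xmlns="http://www.hazelcast.com/schema/client-config"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <group>
    <name>dev</name>
    <password>dev-pass</password>
  </group>
  <network>
    <cluster-members>
      <address>127.0.0.1:5701</address>
    </cluster-members>
  </network>
  <!-- <security> omitted: credentials-based security needs the Enterprise edition -->
</hazelcast-client>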

Mule - JMS (ActiveMQ) Reconnection

This is my Mule flow 1:
HTTP > Payload String > Logger > JMS /normalqueue
The first flow has error handling:
File (write a file per handled message)
Flow 2:
JMS /normalqueue > Logger
Recovery flow (invoked with a Groovy script):
File (read file) > File to String > Flow reference (back to the first flow)
This is the XML from Mule:
<http:listener-config name="HTTP_Listener_Configuration" host="0.0.0.0" port="8081" doc:name="HTTP Listener Configuration"/>
<jms:activemq-connector name="Active_MQ" username="admin" password="admin" brokerURL="tcp://192.168.198.131:61616" validateConnections="true" doc:name="Active MQ" persistentDelivery="true">
  <reconnect blocking="false" frequency="6000"/>
</jms:activemq-connector>
<file:connector name="File" writeToDirectory="C:\errors" autoDelete="true" streaming="true" validateConnections="true" doc:name="File"/>
<flow name="lab-file-catchFlow">
  <http:listener config-ref="HTTP_Listener_Configuration" path="/" doc:name="HTTP"/>
  <set-payload value="#[message.payloadAs(java.lang.String)]" doc:name="Set Payload"/>
  <logger message="Started message: #[message.payloadAs(java.lang.String)]" level="INFO" doc:name="Logger"/>
  <jms:outbound-endpoint queue="activemq" connector-ref="Active_MQ" doc:name="JMS">
    <jms:transaction action="ALWAYS_BEGIN"/>
  </jms:outbound-endpoint>
  <catch-exception-strategy doc:name="Catch Exception Strategy">
    <file:outbound-endpoint path="C:\errors" connector-ref="File" responseTimeout="10000" doc:name="File"/>
  </catch-exception-strategy>
</flow>
<flow name="flow-recovery" initialState="stopped" processingStrategy="synchronous">
  <file:inbound-endpoint path="C:\errors" connector-ref="File" responseTimeout="10000" doc:name="File"/>
  <file:file-to-string-transformer doc:name="File to String"/>
  <logger message=" Recovery message: #[message.payloadAs(java.lang.String)]" level="ERROR" doc:name="Logger"/>
  <flow-ref name="lab-file-catchFlow" doc:name="Flow Reference"/>
</flow>
<flow name="lab-file-catchFlow2" processingStrategy="synchronous">
  <jms:inbound-endpoint queue="activemq" connector-ref="Active_MQ" doc:name="JMS"/>
  <logger message="#[message.payloadAs(java.lang.String)]" level="INFO" doc:name="Logger"/>
</flow>
<flow name="lab-file-catchFlow1">
  <http:listener config-ref="HTTP_Listener_Configuration" path="/modify" doc:name="HTTP"/>
  <scripting:component doc:name="Groovy">
    <scripting:script engine="Groovy"><![CDATA[
      if (muleContext.registry.lookupFlowConstruct('flow-recovery').isStopped()) {
        muleContext.registry.lookupFlowConstruct('flow-recovery').start();
        return 'Started';
      } else {
        muleContext.registry.lookupFlowConstruct('flow-recovery').stop();
        return 'Stopped';
      }
    ]]></scripting:script>
  </scripting:component>
  <set-payload value="#[message.payloadAs(java.lang.String)]" doc:name="Set Payload"/>
  <logger message="#[message.payloadAs(java.lang.String)]" level="INFO" doc:name="Logger"/>
</flow>
When I stop the ActiveMQ service, a file is stored per message by the error handling and I receive the typical error:
Cannot process event as "Active_MQ" is stopped
Then I run the ActiveMQ service again and start the recovery flow with a Groovy script. That flow recovers all the messages, converts them to strings, and returns them to the first flow to requeue.
The problem is that Mule doesn't detect when the service is running again; I need to restart the Mule project for it to reconnect.
Is there any way to auto-detect with Mule when ActiveMQ is running again?
With <reconnect-forever/>, Mule will keep retrying the connection to ActiveMQ:
<jms:activemq-connector name="Active_MQ" username="admin" password="admin" brokerURL="tcp://192.168.198.131:61616" validateConnections="true" doc:name="Active MQ" persistentDelivery="true">
  <reconnect-forever/>
</jms:activemq-connector>
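If you want to keep the retry interval from the original <reconnect> element, reconnect-forever also accepts a frequency attribute (a small variation, reusing the 6000 ms period from the original config):
<reconnect-forever frequency="6000"/>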

how to configure log4j in a web application using jboss 7.1.1?

The steps to configure log4j are:
Step 1.
Create the file: jboss-deployment-structure.xml
<jboss-deployment-structure>
  <deployment>
    <exclusions>
      <module name="org.apache.log4j" slot="main"/>
      <module name="org.apache.commons.logging"/>
    </exclusions>
  </deployment>
</jboss-deployment-structure>
Step 2.
Create the servlet: Log4JInitServlet.java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class Log4JInitServlet extends HttpServlet {

    private static final long serialVersionUID = -3677208571865966932L;
    private static final Log log = LogFactory.getLog(Log4JInitServlet.class);

    public Log4JInitServlet() {
    }

    protected void doGet(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException {
        PrintWriter out = response.getWriter();
        out.write("<h1>LogTester Application Version Guide Erasmo Marciano 1.0</h1>");
        out.write("<p>Loading this page generates multiple log events for the it.deinformatica.marciano.logtest category.</p>");
        out.write("<p>Press F5 to reload this web page.</p>");
        out.write("<p>You will find the log levels: debug|fatal|error|trace|info|warn</p>");
        out.close();
        for (int i = 1; i <= 20; i++) {
            log.debug("This is DEBUG message. Event number " + i);
            log.fatal("This is FATAL message. Event number " + i);
            log.info("This is INFO message. Event number " + i);
            log.error("This is ERROR message. Event number " + i);
            log.trace("This is TRACE message. Event number " + i);
            log.warn("This is WARN message. Event number " + i);
        }
    }

    protected void doPost(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException {
        // not used
    }
}
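For the servlet to be reachable, it also needs a mapping. A minimal web.xml entry (a sketch; the url-pattern is illustrative, and the class is in the default package as posted):
<servlet>
  <servlet-name>Log4JInitServlet</servlet-name>
  <servlet-class>Log4JInitServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>Log4JInitServlet</servlet-name>
  <url-pattern>/logtest</url-pattern>
</servlet-mapping>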
Step 3.
Create the file: log4j.properties
### set log levels - for more verbose logging change 'info' to 'debug' ###
log4j.rootLogger=info, stdout
### direct log messages to stdout ###
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
What happens is that it only shows INFO messages and no DEBUG ones. What am I doing wrong, or what should I do to display DEBUG messages with log4j?
Please answer if anyone has had a similar problem and solved it.
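Note that the properties file above sets log4j.rootLogger=info, stdout; as its own comment says, changing 'info' to 'debug' is needed for DEBUG output, since events below the root level are filtered before they ever reach an appender:
log4j.rootLogger=debug, stdout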
I also faced this problem with JBoss EAP 6 and resolved it. My working code is as follows:
1. WEB-INF/jboss-deployment-structure.xml file
<?xml version="1.0" encoding="UTF-8"?>
<jboss-deployment-structure>
  <deployment>
    <exclusions>
      <!-- first exclude -->
      <module name="javaee.api"/>
      <module name="org.apache.log4j"/>
      <module name="org.slf4j"/>
    </exclusions>
    <dependencies>
      <!-- then include filtered -->
      <module name="org.apache.log4j"/>
    </dependencies>
    <exclude-subsystems>
      <subsystem name="jpa"/>
    </exclude-subsystems>
  </deployment>
</jboss-deployment-structure>
2. resources/log4j.properties file
# Root logger option
log4j.rootLogger=INFO, stdout, INF, DBG, ERR
#---------------------------------------------
# Redirect log messages to a log file
#---------------------------------------------
# Output to the JBoss log directory
logs.dir=${jboss.home}/standalone/log/
logs.fmt.dly=.yyyy-MM-dd
logs.fmt.date=yyyy-MM-dd HH:mm:ss
# Direct log messages to stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
# DEBUG Logs
log4j.appender.DBG.Threshold=DEBUG
log4j.appender.DBG.filter=org.apache.log4j.varia.LevelRangeFilter
#log4j.appender.DBG.filter.LevelMin=DEBUG
log4j.appender.DBG.filter.LevelMax=DEBUG
log4j.appender.DBG.filter.AcceptOnMatch=True
log4j.appender.DBG=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DBG.File=${jboss.server.log.dir}/app-debug-log.log
log4j.appender.DBG.DatePattern=${logs.fmt.dly}
log4j.appender.DBG.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.DBG.layout.ConversionPattern=%d{${logs.fmt.date}} %-5p [%c{1}:%L] - %m%n
# INFO Logs
log4j.appender.INF=org.apache.log4j.DailyRollingFileAppender
log4j.appender.INF.File=${jboss.server.log.dir}/app-info-log.log
log4j.appender.INF.DatePattern=${logs.fmt.dly}
log4j.appender.INF.Threshold=INFO
#log4j.appender.INF.filter.LevelMin=INFO
log4j.appender.INF.filter.LevelMax=INFO
log4j.appender.INF.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.INF.layout.ConversionPattern=%d{${logs.fmt.date}} %-5p [%c{1}:%L] - %m%n
# ERROR Logs
log4j.appender.ERR=org.apache.log4j.DailyRollingFileAppender
log4j.appender.ERR.File=${jboss.server.log.dir}/app-err-log.log
log4j.appender.ERR.DatePattern=${logs.fmt.dly}
log4j.appender.ERR.Threshold=ERROR
#log4j.appender.DBG.filter.LevelMin=ERROR
#log4j.appender.DBG.filter.LevelMax=ERROR
log4j.appender.ERR.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.ERR.layout.ConversionPattern=%d{${logs.fmt.date}} %-5p [%c{1}:%L] - %m%n
Try to also exclude JBoss logging, and slf4j if you use it.
Remember the xmlns in your XML, and put the file in the WEB-INF folder of your webapp:
<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.1">
  <deployment>
    <exclusions>
      <module name="org.apache.log4j"/>
      <module name="org.slf4j"/>
      <module name="org.apache.commons.logging"/>
      <module name="org.log4j"/>
      <module name="org.jboss.logging"/>
    </exclusions>
  </deployment>
</jboss-deployment-structure>
