Jooq does not find log4j2 Logger - log4j

I'm using the jOOQ library to generate and execute SQL queries. For debugging purposes I want to see exactly which queries are executed, without inspecting the changing database every time. As far as I understand, jOOQ logs plenty of information about queries and their effects when the log level is set to DEBUG. I use log4j2 for logging, and it is set to DEBUG using the following configuration:
<?xml version="1.0" encoding="UTF-8"?>
<configuration status="DEBUG" packages="ch.fhnw.ima.doggait">
    <appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
        </Console>
    </appenders>
    <loggers>
        <root level="DEBUG">
            <appender-ref ref="Console"/>
        </root>
    </loggers>
</configuration>
Unfortunately, it looks like jOOQ uses another logger, which is set to INFO.
This is an example output:
2015-09-29 10:15:04,064 DEBUG Shutdown.. 2015-09-29 10:15:04,065 DEBUG
LoggerCo.. 2015-09-29 10:15:04,111 DEBUG Using d... 10:15:04.116 [Test
worker] DEBUG ch.fh.. Sep 29, 2015 10:15:04 AM
org.jooq.tools.JooqLogger info
INFORMATION:
######################################....
You can see that the first few lines are logged at DEBUG level by log4j2, but jOOQ uses its JooqLogger at level INFO.
I load the libraries using Gradle:
compile group: 'com.google.inject', name: 'guice-parent', version: '3.0'
compile group: 'com.google.inject.extensions', name: 'guice-persist', version: '3.0'
compile group: 'org.hibernate', name: 'hibernate-entitymanager', version: '4.3.8.Final'
compile group: 'com.h2database', name: 'h2', version: '1.4.185'
compile group: 'org.apache.commons', name: 'commons-math3', version: '3.2'
compile group: 'org.apache.logging.log4j', name: 'log4j-core', version: '2.0.2'
compile group: 'org.apache.logging.log4j', name: 'log4j-api', version: '2.0.2'
compile group: 'org.hamcrest', name: 'hamcrest-all', version: '1.3'
compile group: 'org.apache.pdfbox', name: 'pdfbox', version: '1.8.10'
compile group: 'postgresql', name:'postgresql', version:'9.0-801.jdbc4'
compile group: 'org.jooq', name:'jooq', version:'3.6.3'
compile group: 'org.jooq', name:'jooq-meta', version:'3.6.3'
compile group: 'org.jooq', name:'jooq-codegen', version:'3.6.3'
testCompile 'junit:junit:4.11'
Does somebody know why jOOQ does not use log4j2 as its default logger, or how I can get it to log at DEBUG level?

jOOQ is still using log4j 1.2 as a default, internally. You should use that version (for now), or slf4j instead.
There are a couple of feature requests to improve these dependencies in jOOQ:
#1564 Replace JooqLogger by using slf4j, which already abstracts logger frameworks pretty well
#3280 Expose a new Log SPI, which can be implemented by popular loggers
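In the meantime, if the goal is only to see which SQL statements are executed (as described in the question), jOOQ's ExecuteListener SPI can print them independently of whichever backend JooqLogger binds to. This is a workaround rather than a logging fix; a minimal sketch, assuming jOOQ 3.6 as in the question and an existing JDBC connection (the class name and dialect are illustrative):
import java.sql.Connection;

import org.jooq.DSLContext;
import org.jooq.ExecuteContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.jooq.impl.DefaultConfiguration;
import org.jooq.impl.DefaultExecuteListener;
import org.jooq.impl.DefaultExecuteListenerProvider;

// Prints every statement jOOQ is about to execute, regardless of the logging backend.
public class SqlLoggingListener extends DefaultExecuteListener {

    @Override
    public void executeStart(ExecuteContext ctx) {
        // ctx.sql() is the rendered SQL of the statement about to be executed
        System.out.println("Executing: " + ctx.sql());
    }

    // Convenience factory: a DSLContext wired with this listener
    public static DSLContext createContext(Connection connection) {
        return DSL.using(new DefaultConfiguration()
                .set(connection)
                .set(SQLDialect.POSTGRES)
                .set(new DefaultExecuteListenerProvider(new SqlLoggingListener())));
    }
}
Queries executed through the returned DSLContext are then echoed to standard out, which may be enough for the debugging described above until the logging setup is sorted out.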

I know it's a little late, but I ran into the same problem and fixed it with the help of Apache itself.
You can include this adapter in your classpath; it bridges log4j 1.2 API calls to log4j2:
log4j-1.2-api-2.6.1.jar
But you must not have log4j 1.2 jars or config files on your classpath. In short, do everything with log4j2 and just include the jar above.
For further reading: http://logging.apache.org/log4j/2.x/faq.html#which_jars
This SO question helped me: Configuring log4j2 and log4j using a single log4j2 xml file


Apache Spark + Parquet not Respecting Configuration to use “Partitioned” Staging S3A Committer

I am writing partitioned data (Parquet files) to AWS S3 using Apache Spark (3.0) from my local machine, without having Hadoop installed on the machine. I was getting a FileNotFoundException while writing to S3 when I had a lot of files to write across around 50 partitions (partitionBy = date).
Then I came across the new S3A committers, so I tried to configure the "partitioned" committer instead. But I can still see that Spark uses ParquetOutputCommitter instead of PartitionedStagingCommitter when the file format is "parquet", and I still get a FileNotFoundException when I have a lot of data to write.
My Configuration:
sparkSession.conf().set("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", 2);
sparkSession.conf().set("spark.hadoop.fs.s3a.committer.name", "partitioned");
sparkSession.conf().set("spark.hadoop.fs.s3a.committer.magic.enabled", false);
sparkSession.conf().set("spark.hadoop.fs.s3a.committer.staging.conflict-mode", "append");
sparkSession.conf().set("spark.hadoop.fs.s3a.committer.staging.unique-filenames", true);
sparkSession.conf().set("spark.hadoop.fs.s3a.committer.staging.abort.pending.uploads", true);
sparkSession.conf().set("spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a", "org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory");
sparkSession.conf().set("spark.sql.sources.commitProtocolClass", "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol");
sparkSession.conf().set("spark.sql.parquet.output.committer.class", "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter");
sparkSession.conf().set("spark.hadoop.fs.s3a.committer.staging.tmp.path", "tmp/staging");
What am I doing incorrectly? Could someone please help?
Note: I have created a Spark JIRA for this, but no help so far: SPARK-31072
==============================================================
I tried the answer from @Rajadayalan, but it is still using FileOutputCommitter. I tried downgrading the Spark version to 2.4.5 without any luck.
20/04/06 12:44:52 INFO ParquetFileFormat: Using user defined output committer for Parquet: org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
20/04/06 12:44:52 WARN AbstractS3ACommitterFactory: **Using standard FileOutputCommitter to commit work**. This is slow and potentially unsafe.
20/04/06 12:44:52 INFO FileOutputCommitter: File Output Committer Algorithm version is 2
20/04/06 12:44:52 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20/04/06 12:44:52 INFO AbstractS3ACommitterFactory: Using Commmitter FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl{jobId=job_20200406124452_0000}; taskId=attempt_20200406124452_0000_m_000000_0, status=''}; org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter#61deb03f}; outputPath=s3a://******/observation, workPath=s3a://******/observation/_temporary/0/_temporary/attempt_20200406124452_0000_m_000000_0, algorithmVersion=2, skipCleanup=false, ignoreCleanupFailures=false} for s3a://********/observation
20/04/06 12:44:53 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
20/04/06 12:44:54 INFO CodeGenerator: Code generated in 81.077046 ms
20/04/06 12:44:54 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
20/04/06 12:44:54 INFO CodeGenerator: Code generated in 31.993775 ms
20/04/06 12:44:54 INFO CodeGenerator: Code generated in 9.967359 ms
Note: I don't have Spark installed locally, so I added spark-hadoop-cloud_2.11 as a compile-time dependency.
My build.gradle looks as follows:
compile group: 'org.apache.spark', name: 'spark-hadoop-cloud_2.11', version: '2.4.2.3.1.3.0-79'
compile group: 'org.apache.spark', name: 'spark-sql_2.11', version: '2.4.5'
// https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-databind
compile group: 'com.fasterxml.jackson.core', name: 'jackson-databind', version: '2.10.0'
// https://mvnrepository.com/artifact/org.apache.parquet/parquet-column
compile group: 'org.apache.parquet', name: 'parquet-column', version: '1.10.1'
// https://mvnrepository.com/artifact/org.apache.parquet/parquet-hadoop
compile group: 'org.apache.parquet', name: 'parquet-hadoop', version: '1.10.1'
compile group: 'org.apache.parquet', name: 'parquet-avro', version: '1.10.1'
// https://mvnrepository.com/artifact/org.apache.spark/spark-sketch
compile group: 'org.apache.spark', name: 'spark-sketch_2.11', version: '2.4.5'
// https://mvnrepository.com/artifact/org.apache.spark/spark-core
compile group: 'org.apache.spark', name: 'spark-core_2.11', version: '2.4.5'
// https://mvnrepository.com/artifact/org.apache.spark/spark-catalyst
compile group: 'org.apache.spark', name: 'spark-catalyst_2.11', version: '2.4.5'
// https://mvnrepository.com/artifact/org.apache.spark/spark-tags
compile group: 'org.apache.spark', name: 'spark-tags_2.11', version: '2.4.5'
compile group: 'org.apache.spark', name: 'spark-avro_2.11', version: '2.4.5'
// https://mvnrepository.com/artifact/org.apache.spark/spark-hive
compile group: 'org.apache.spark', name: 'spark-hive_2.11', version: '2.4.5'
// https://mvnrepository.com/artifact/org.apache.xbean/xbean-asm6-shaded
compile group: 'org.apache.xbean', name: 'xbean-asm7-shaded', version: '4.15'
compile group: 'org.apache.hadoop', name: 'hadoop-common', version: '3.2.1'
// compile group: 'org.apache.hadoop', name: 'hadoop-s3guard', version: '3.2.1'
compile group: 'org.apache.hadoop', name: 'hadoop-aws', version: '3.2.1'
compile group: 'org.apache.hadoop', name: 'hadoop-client', version: '3.2.1'
compile group: 'com.amazonaws', name: 'aws-java-sdk-bundle', version: '1.11.271'
I had the same issue. The solution from How To Get Local Spark on AWS to Write to S3 worked to load the PartitionedStagingCommitter. You also have to download the spark-hadoop-cloud jar, as mentioned in that solution.
I also use Spark 3.0, and this version of the jar worked: https://repo.hortonworks.com/content/repositories/releases/org/apache/spark/spark-hadoop-cloud_2.11/2.4.2.3.1.3.0-79/
Settings in my spark-defaults.conf
spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version 2
spark.hadoop.fs.s3a.committer.name partitioned
spark.hadoop.fs.s3a.committer.magic.enabled false
spark.hadoop.fs.s3a.committer.staging.conflict-mode append
spark.hadoop.fs.s3a.committer.staging.unique-filenames true
spark.hadoop.fs.s3a.committer.staging.abort.pending.uploads true
spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a  org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
spark.sql.sources.commitProtocolClass  org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
spark.sql.parquet.output.committer.class  org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
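If shipping a spark-defaults.conf is not convenient, roughly the same effect can be attempted by supplying these properties on the builder, so that they are already in place when the session (and its Hadoop configuration) is created, rather than calling sparkSession.conf().set() afterwards; this is presumably why the spark-defaults.conf route behaves differently from runtime conf().set(). A minimal sketch using the Java API from the question; the app name and master are placeholders, and only a few of the properties above are repeated here:
import org.apache.spark.sql.SparkSession;

public class CommitterSessionSketch {
    public static void main(String[] args) {
        // Committer-related properties are supplied before the session exists,
        // mirroring the spark-defaults.conf entries listed above.
        SparkSession spark = SparkSession.builder()
                .appName("s3a-partitioned-committer")
                .master("local[*]")
                .config("spark.hadoop.fs.s3a.committer.name", "partitioned")
                .config("spark.hadoop.fs.s3a.committer.staging.conflict-mode", "append")
                .config("spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a",
                        "org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory")
                .config("spark.sql.sources.commitProtocolClass",
                        "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
                .config("spark.sql.parquet.output.committer.class",
                        "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
                .getOrCreate();

        // ... write the partitioned Parquet data as in the question ...

        spark.stop();
    }
}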
I got this working with a small change from what @Rajadayalan suggested. Apart from sparkSession.conf().set() as in the initial question, I have added the option() params on the DataFrameWriter while writing the Parquet files:
df.distinct()
  .withColumn("date", date_format(col(EFFECTIVE_PERIOD_START), "yyyy-MM-dd"))
  .repartition(col("date"))
  .write()
  .format(fileFormat)
  .partitionBy("date")
  .mode(SaveMode.Append)
  .option("fs.s3a.committer.name", "partitioned")
  .option("fs.s3a.committer.staging.conflict-mode", "append")
  .option("spark.sql.sources.commitProtocolClass", "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
  .option("spark.sql.parquet.output.committer.class", "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
  .option("compression", compressionCodecName.name().toLowerCase())
  .save(DOWNLOADS_NON_COMPACT_PATH);
This makes the difference, and the following log output shows that it is using the PartitionedStagingCommitter.
I could also see that the _SUCCESS file in S3 is a JSON file instead of an empty touch file (_SUCCESS).
20/04/06 14:27:26 INFO ParquetFileFormat: Using user defined output committer for Parquet: org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
20/04/06 14:27:26 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
20/04/06 14:27:26 INFO FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
20/04/06 14:27:26 INFO AbstractS3ACommitterFactory: Using committer partitioned to output data to s3a://************/observation
20/04/06 14:27:26 INFO AbstractS3ACommitterFactory: Using Commmitter PartitionedStagingCommitter{StagingCommitter{AbstractS3ACommitter{role=Task committer attempt_20200406142726_0000_m_000000_0, name=partitioned, outputPath=s3a://*********/observation, workPath=file:/tmp/hadoop-**********/s3a/local-1586197641397/_temporary/0/_temporary/attempt_20200406142726_0000_m_000000_0}, conflictResolution=APPEND, wrappedCommitter=FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl{jobId=job_20200406142726_0000}; taskId=attempt_20200406142726_0000_m_000000_0, status=''}; org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter#4494e88a}; outputPath=file:/Users/**********/Downloads/SparkParquetSample/tmp/staging/**********/local-1586197641397/staging-uploads, workPath=null, algorithmVersion=1, skipCleanup=false, ignoreCleanupFailures=false}}} for s3a://parquet-uuid-test/device-metric-observation6
20/04/06 14:27:27 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
20/04/06 14:27:27 INFO CodeGenerator: Code generated in 52.744811 ms
20/04/06 14:27:27 INFO HashAggregateExec: spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but current version of codegened fast hashmap does not support this aggregate.
20/04/06 14:27:27 INFO CodeGenerator: Code generated in 48.78277 ms

Enable debug symbols for log4j from gradle

My log4j statements are showing ?s for line numbers.
What I've tried
Other answers around the web say this happens when you haven't compiled debug information into your classes. In Ant, for example, this would be done with <javac debug="true" .... Searching for a way to do this in Gradle yields only
http://gradle.org/docs/2.3-rc-2/dsl/org.gradle.api.tasks.compile.CompileOptions.html
debug - Tells whether to include debugging information in the generated class files. Defaults to true
I verified this by putting println compileGroovy.options.debug in my build.gradle and it prints true. I also found http://forums.gradle.org/gradle/topics/compile-groovy-with-debugging-information, which recommended using tasks.withType(GroovyCompile) { options.debug = true }, which also did not work. Finally, I found compileGroovy.options.debugOptions.debugLevel, which can be source, lines, or vars. This is null by default, but setting it to source,lines did not give me line information.
Reproducing
gradle init --type groovy-library
Edit src/main/groovy/Library.groovy to have the additional lines:
import groovy.util.logging.Log4j

@Log4j
class Library {
    boolean someLibraryMethod() {
        log.info "XXXXXXXXXXXXX"
        true
    }
}
Add compile 'log4j:log4j:1.2.17' to build.gradle.
Add the file src/main/resources/log4j.xml
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j='http://jakarta.apache.org/log4j/'>
    <appender name="console" class="org.apache.log4j.ConsoleAppender">
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d %c:%L - %m%n" />
        </layout>
    </appender>
    <root>
        <level value="INFO" />
        <appender-ref ref="console" />
    </root>
</log4j:configuration>
And finally run gradle clean test -i and observe that you see
2015-03-30 08:47:31,996 Library:? - XXXXXXXXXXXXX
Note the ? in the message.
Version information
$ gradle -version
------------------------------------------------------------
Gradle 2.3
------------------------------------------------------------
Build time: 2015-02-16 05:09:33 UTC
Build number: none
Revision: 586be72bf6e3df1ee7676d1f2a3afd9157341274
Groovy: 2.3.9
Ant: Apache Ant(TM) version 1.9.3 compiled on December 23 2013
JVM: 1.8.0_40 (Oracle Corporation 25.40-b25)
OS: Windows 7 6.1 amd64
$ groovy -version
Groovy Version: 2.4.3 JVM: 1.8.0_40 Vendor: Oracle Corporation OS: Windows 7
$ java -version
java version "1.8.0_40"
Java(TM) SE Runtime Environment (build 1.8.0_40-b26)
Java HotSpot(TM) 64-Bit Server VM (build 25.40-b25, mixed mode)
My question is, of course, how do I get line numbers to show instead of question marks?
Groovy does include the line numbers in the class files. The problem is that Groovy invokes methods slightly differently than Java, which confuses the Log4j code that determines the line number.
In particular, if you change the line
log.info "XXXXXXXXXXXXX"
to
log.error "XXXXXXXXXXXXX", new Exception('test')
you can see that some reflection methods are called when Groovy invokes methods. The topmost method in the stack trace I generated was sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method), which, being a native method, doesn't have a line number. This is what Log4j is looking at. Unfortunately, there's no easy way to filter the stack trace Log4j uses or to tell Log4j to look further down the stack.
One possible solution is to turn on static compilation. If you add @groovy.transform.CompileStatic to the Library class or method, you'll get the correct line number.
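For intuition about why the dynamic call breaks the lookup: log4j 1.x derives location info by creating a throwable and scanning its stack trace for the first frame following the logger classes. A rough, illustrative Java sketch of that idea (not log4j's actual code):
public class LocationInfoSketch {
    public static void main(String[] args) {
        StackTraceElement[] stack = new Throwable().getStackTrace();
        for (StackTraceElement frame : stack) {
            // log4j picks the first frame after its own Category/Logger classes.
            // Under Groovy's dynamic dispatch that frame can be a reflection or
            // native frame, whose line number is reported as -1 and printed as "?".
            System.out.println(frame.getClassName() + ":" + frame.getLineNumber());
        }
    }
}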

akka, spray, and log4j2 not logging right

I am scaffolding a new application with Akka, Spray, and Log4j2. I have most of the major systems working in "hello world" mode, but I'm having a hard time getting the logging to work the way I want, and I can't tell whether this is a log4j2 issue, an Akka/Spray issue, or a mistake on my part.
First, my Akka config file
akka {
    loggers = ["akka.event.slf4j.Slf4jLogger"]
    loglevel = DEBUG
    log-config-on-start = on
    actor {
        debug {
            receive = on
            autoreceive = on
            lifecycle = on
            fsm = on
            event-stream = on
            unhandled = on
            router-misconfiguration = on
        }
    }
}
Now, if I comment out the loggers line and run the app in IntelliJ, it's perfect in the console. I get everything I want: very verbose and helpful logs, the config settings, the transactions coming in and going out. I'm a very happy person. The problem occurs when I try to pipe it through Log4j2.
The reason I want to do this is that, in the not-too-distant future, I'm going to be pumping all logs to a Cassandra cluster for MapReduce work.
So I uncomment the slf4j logger line in the config file above, add slf4j, log4j2, and log4j2's log4j-slf4j-impl to the classpath, and log4j2 activates: it reads the config file, creates the log file, logs the initialization of the appenders and the debug info to the console, and that's it. Nothing else comes out. Not one blessed thing. Nada.
Here's my log4j2.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<configuration status="DEBUG">
    <appenders>
        <Console name="CONSOLE" target="SYSTEM_OUT">
            <PatternLayout pattern="%-5p %c{2} - %m%n" />
        </Console>
        <RollingRandomAccessFile name="logFileOutput" fileName="logs/wisdomOutput.log"
                                 filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz">
            <PatternLayout>
                <pattern>%d %p %C [%t] %m%n</pattern>
            </PatternLayout>
            <Policies>
                <TimeBasedTriggeringPolicy />
                <SizeBasedTriggeringPolicy size="50 MB"/>
            </Policies>
            <DefaultRolloverStrategy max="20"/>
        </RollingRandomAccessFile>
    </appenders>
    <loggers>
        <!-- if you want to dump different packages to different files, here's where you do it
        <logger name="com.onuspride" level="DEBUG" additivity="false">
            <appender-ref ref="logFileOutput"/>
        </logger>-->
        <root>
            <appender-ref ref="CONSOLE"/>
            <appender-ref ref="logFileOutput"/>
        </root>
    </loggers>
</configuration>
I don't know if I'm dealing with a bug in log4j2, or if my configuration file is wrong, or what.
Here is my console output when my logging is activated
/usr/lib/jvm/java-7-oracle/bin/java -Didea.launcher.port=7533 -Didea.launcher.bin.path=/home/willie/apps/intellij13/bin -Dfile.encoding=UTF-8 -classpath /usr/lib/jvm/java-7-oracle/jre/lib/jfr.jar:/usr/lib/jvm/java-7-oracle/jre/lib/javaws.jar:/usr/lib/jvm/java-7-oracle/jre/lib/jsse.jar:/usr/lib/jvm/java-7-oracle/jre/lib/jfxrt.jar:/usr/lib/jvm/java-7-oracle/jre/lib/deploy.jar:/usr/lib/jvm/java-7-oracle/jre/lib/rt.jar:/usr/lib/jvm/java-7-oracle/jre/lib/jce.jar:/usr/lib/jvm/java-7-oracle/jre/lib/resources.jar:/usr/lib/jvm/java-7-oracle/jre/lib/plugin.jar:/usr/lib/jvm/java-7-oracle/jre/lib/management-agent.jar:/usr/lib/jvm/java-7-oracle/jre/lib/charsets.jar:/usr/lib/jvm/java-7-oracle/jre/lib/ext/zipfs.jar:/usr/lib/jvm/java-7-oracle/jre/lib/ext/dnsns.jar:/usr/lib/jvm/java-7-oracle/jre/lib/ext/sunec.jar:/usr/lib/jvm/java-7-oracle/jre/lib/ext/sunpkcs11.jar:/usr/lib/jvm/java-7-oracle/jre/lib/ext/localedata.jar:/usr/lib/jvm/java-7-oracle/jre/lib/ext/sunjce_provider.jar:/home/willie/code/onus/collective/build/production/wisdom:/home/willie/.gradle/caches/modules-2/files-2.1/org.scala-lang/scala-reflect/2.10.0/10ec1256a6e875e886fec050edb0669166912d0d/scala-reflect-2.10.0.jar:/home/willie/.gradle/caches/modules-2/files-2.1/io.spray/spray-routing/1.2.0/955690e1d6b0d92f06da4b53a185a0d59615e1c6/spray-routing-1.2.0.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.json4s/json4s-ast_2.10/3.2.6/e52c46c51f75a7536c3ffedefdbdc331ffaecbdd/json4s-ast_2.10-3.2.6.jar:/home/willie/.gradle/caches/modules-2/files-2.1/io.spray/spray-http/1.2.0/ba24112e069e61224255f6c3cf85727767351fcc/spray-http-1.2.0.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.json4s/json4s-core_2.10/3.2.6/c89c64b04fec958fc57a271bc741dfde759d8ffc/json4s-core_2.10-3.2.6.jar:/home/willie/.gradle/caches/modules-2/files-2.1/com.thoughtworks.paranamer/paranamer/2.6/52c3c8d8876440d714e23036eb87bcc4244d9aa5/paranamer-2.6.jar:/home/willie/.gradle/caches/modules-2/files-2.1/io.spray/spray-util/1.2.0/efea073f15b79a3f62f1f16b22cfcad9a2119439/spray-util-1.2.0.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.json4s/json4s-native_2.10/3.2.6/3728a03373e3eac2354ea0cc855fef71fe97d82e/json4s-native_2.10-3.2.6.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.scala-lang/scala-compiler/2.10.0/fec8066cd2b4f8dc7ff7ba7a8e0a792939d9f9a/scala-compiler-2.10.0.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.jvnet.mimepull/mimepull/1.9.3/c55096ff89a27e22c2e081371d0570ac19cc6788/mimepull-1.9.3.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.parboiled/parboiled-scala_2.10/1.1.6/bc93b215151f7dc42653f3eb0243b3130a7e1e11/parboiled-scala_2.10-1.1.6.jar:/home/willie/.gradle/caches/modules-2/files-2.1/com.chuusai/shapeless_2.10/1.2.4/2f41ccc19eb4d38ae5ef907ce7469003e0253dc5/shapeless_2.10-1.2.4.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.parboiled/parboiled-core/1.1.6/11bd0c34fc6ac3c3cbf440ab8180cc6422c044e9/parboiled-core-1.1.6.jar:/home/willie/.gradle/caches/modules-2/files-2.1/io.spray/spray-httpx/1.2.0/328e8d28cd3fcd7a7aea6467048a17577d427272/spray-httpx-1.2.0.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.scala-lang/scalap/2.10.0/ab42ae21d1fd7311b367fe3d7f33343f2e4bff6b/scalap-2.10.0.jar:/home/willie/code/onus/collective/build/production/inclusive:/home/willie/.gradle/caches/modules-2/files-2.1/org.apache.commons/commons-lang3/3.1/905075e6c80f206bbe6cf1e809d2caa69f420c76/commons-lang3-3.1.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.springframework.security/spring-security-web/3.1.4.RELEASE/56d52794df838d0
bc3ccbc149a62ea019946f82a/spring-security-web-3.1.4.RELEASE.jar:/home/willie/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-annotations/2.3.0/f5e853a20b60758922453d56f9ae1e64af5cb3da/jackson-annotations-2.3.0.jar:/home/willie/.gradle/caches/modules-2/files-2.1/commons-logging/commons-logging/1.1.1/5043bfebc3db072ed80fbd362e7caf00e885d8ae/commons-logging-1.1.1.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.springframework/spring-jdbc/3.0.7.RELEASE/c53d9bf4e6a6a6cf49c7d7630e612042364295a1/spring-jdbc-3.0.7.RELEASE.jar:/home/willie/.m2/repository/org/springframework/social/spring-social-security/1.1.0.M4/spring-social-security-1.1.0.M4.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.springframework/spring-webmvc/3.2.3.RELEASE/6d6d1b53c53dbefd0d8c15f2d5c194ced1f8d53c/spring-webmvc-3.2.3.RELEASE.jar:/home/willie/.m2/repository/org/springframework/social/spring-social-web/1.1.0.M4/spring-social-web-1.1.0.M4.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.springframework/spring-core/3.2.3.RELEASE/accdd65db57e79e49f2af037bb76f5a55a580f00/spring-core-3.2.3.RELEASE.jar:/home/willie/.gradle/caches/modules-2/files-2.1/aopalliance/aopalliance/1.0/235ba8b489512805ac13a8f9ea77a1ca5ebe3e8/aopalliance-1.0.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.springframework.security/spring-security-core/3.1.4.RELEASE/41af4a10d7e31722bd8ef2c101f60afb6c24e04b/spring-security-core-3.1.4.RELEASE.jar:/home/willie/.gradle/caches/modules-2/files-2.1/javax.inject/javax.inject/1/6975da39a7040257bd51d21a231b76c915872d38/javax.inject-1.jar:/home/willie/.m2/repository/org/springframework/social/spring-social-core/1.1.0.M4/spring-social-core-1.1.0.M4.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.springframework/spring-beans/3.2.3.RELEASE/a48eb92ef51e0aed7d23dcaa54225b8f24808db4/spring-beans-3.2.3.RELEASE.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.springframework/spring-expression/3.2.3.RELEASE/51e4c94a79892c0a2d15c5efda8c5b336ebf5c37/spring-expression-3.2.3.RELEASE.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.springframework/spring-web/3.2.3.RELEASE/833c5e9a60f82209a3f669a5026dfa8de57c1df3/spring-web-3.2.3.RELEASE.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.springframework/spring-context/3.2.3.RELEASE/12ad5920647556704f79cc579d4e133430741c5/spring-context-3.2.3.RELEASE.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.springframework/spring-aop/3.2.3.RELEASE/a70bec95e45a29cad126d250699d0ed0fedfe28a/spring-aop-3.2.3.RELEASE.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.springframework/spring-tx/3.0.7.RELEASE/c340bf8606f6bf235bc1277d25315df1abe51c31/spring-tx-3.0.7.RELEASE.jar:/home/willie/.gradle/caches/modules-2/files-2.1/io.spray/spray-can/1.2.0/4a3ec8def45e7f567739fe15a6537f57316612bf/spray-can-1.2.0.jar:/home/willie/.gradle/caches/modules-2/files-2.1/io.spray/spray-io/1.2.0/35c2866c4c7ac7f9b4ee2c06df18075a5bb8c55b/spray-io-1.2.0.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.apache.logging.log4j/log4j-api/2.0-beta9/1dd66e68cccd907880229f9e2de1314bd13ff785/log4j-api-2.0-beta9.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.apache.logging.log4j/log4j-core/2.0-beta9/678861ba1b2e1fccb594bb0ca03114bb05da9695/log4j-core-2.0-beta9.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.scala-lang/scala-library/2.10.3/21d99cee3d3e478255ef9fcc90b571fb2ab074fc/scala-library-2.10.3.jar:/home/willie/.gradle/caches/modules-2/files-2.1/com.typesafe/config/1.0.2/a0bca82c39f23f75e3afccd6e12840eeabaea123/co
nfig-1.0.2.jar:/home/willie/.gradle/caches/modules-2/files-2.1/com.typesafe.akka/akka-actor_2.10/2.2.3/3a74ea5a7ec2a23fa22077c4debebf96f015dc73/akka-actor_2.10-2.2.3.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-api/1.7.5/6b262da268f8ad9eff941b25503a9198f0a0ac93/slf4j-api-1.7.5.jar:/home/willie/.gradle/caches/modules-2/files-2.1/com.typesafe.akka/akka-slf4j_2.10/2.2.3/616c79a2d66ebd136bf0a4e5902bcd1e74dd4cb1/akka-slf4j_2.10-2.2.3.jar:/home/willie/.gradle/caches/modules-2/files-2.1/org.apache.logging.log4j/log4j-slf4j-impl/2.0-beta9/e970fc3c10144a521d691db4ac38027cc1935b74/log4j-slf4j-impl-2.0-beta9.jar:/home/willie/.gradle/caches/modules-2/files-2.1/com.lmax/disruptor/3.2.0/ac62995678dd4b906e85b26354aa2ebfda130c32/disruptor-3.2.0.jar:/home/willie/apps/intellij13/lib/idea_rt.jar com.intellij.rt.execution.application.AppMain com.onuspride.wisdom.Boot
2014-01-10 17:42:36,673 DEBUG Generated plugins in 0.000064883 seconds
2014-01-10 17:42:36,719 DEBUG Calling createLayout on class org.apache.logging.log4j.core.layout.PatternLayout for element PatternLayout with params(pattern="%-5p %c{2} - %m%n", Configuration(/home/willie/code/onus/collective/build/production/wisdom/log4j2.xml), null, charset="null", alwaysWriteExceptions="null")
2014-01-10 17:42:36,721 DEBUG Generated plugins in 0.000061321 seconds
2014-01-10 17:42:36,727 DEBUG Calling createAppender on class org.apache.logging.log4j.core.appender.ConsoleAppender for element Console with params(PatternLayout(%-5p %c{2} - %m%n), null, target="SYSTEM_OUT", name="CONSOLE", follow="null", ignoreExceptions="null")
2014-01-10 17:42:36,729 DEBUG Calling createLayout on class org.apache.logging.log4j.core.layout.PatternLayout for element PatternLayout with params(pattern="%d %p %C [%t] %m%n", Configuration(/home/willie/code/onus/collective/build/production/wisdom/log4j2.xml), null, charset="null", alwaysWriteExceptions="null")
2014-01-10 17:42:36,732 DEBUG Calling createPolicy on class org.apache.logging.log4j.core.appender.rolling.TimeBasedTriggeringPolicy for element TimeBasedTriggeringPolicy with params(interval="null", modulate="null")
2014-01-10 17:42:36,735 DEBUG Calling createPolicy on class org.apache.logging.log4j.core.appender.rolling.SizeBasedTriggeringPolicy for element SizeBasedTriggeringPolicy with params(size="50 MB")
2014-01-10 17:42:36,743 DEBUG Calling createPolicy on class org.apache.logging.log4j.core.appender.rolling.CompositeTriggeringPolicy for element Policies with params(Policies={TimeBasedTriggeringPolicy, SizeBasedTriggeringPolicy(size=52428800)})
2014-01-10 17:42:36,748 DEBUG Calling createStrategy on class org.apache.logging.log4j.core.appender.rolling.DefaultRolloverStrategy for element DefaultRolloverStrategy with params(max="20", min="null", fileIndex="null", compressionLevel="null", Configuration(/home/willie/code/onus/collective/build/production/wisdom/log4j2.xml))
2014-01-10 17:42:36,754 DEBUG Calling createAppender on class org.apache.logging.log4j.core.appender.RollingRandomAccessFileAppender for element RollingRandomAccessFile with params(fileName="logs/wisdomOutput.log", filePattern="logs/${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz", append="null", name="logFileOutput", immediateFlush="null", Policies(CompositeTriggeringPolicy{TimeBasedTriggeringPolicy, SizeBasedTriggeringPolicy(size=52428800)}), DefaultRolloverStrategy(DefaultRolloverStrategy(min=1, max=20)), PatternLayout(%d %p %C [%t] %m%n), null, ignoreExceptions="null", advertise="null", advertiseURI="null", Configuration(/home/willie/code/onus/collective/build/production/wisdom/log4j2.xml))
2014-01-10 17:42:36,763 DEBUG Starting RollingRandomAccessFileManager logs/wisdomOutput.log
2014-01-10 17:42:36,767 DEBUG Generated plugins in 0.000057060 seconds
2014-01-10 17:42:36,772 DEBUG Calling createAppenders on class org.apache.logging.log4j.core.config.plugins.AppendersPlugin for element appenders with params(Appenders={CONSOLE, logFileOutput})
2014-01-10 17:42:36,773 DEBUG Generated plugins in 0.000052311 seconds
2014-01-10 17:42:36,777 DEBUG Calling createAppenderRef on class org.apache.logging.log4j.core.config.AppenderRef for element appender-ref with params(ref="CONSOLE", level="null", null)
2014-01-10 17:42:36,778 DEBUG Calling createAppenderRef on class org.apache.logging.log4j.core.config.AppenderRef for element appender-ref with params(ref="logFileOutput", level="null", null)
2014-01-10 17:42:36,783 DEBUG Calling createLogger on class org.apache.logging.log4j.core.config.LoggerConfig$RootLogger for element root with params(additivity="null", level="null", includeLocation="null", AppenderRef={CONSOLE, logFileOutput}, Properties={}, Configuration(/home/willie/code/onus/collective/build/production/wisdom/log4j2.xml), null)
2014-01-10 17:42:36,787 DEBUG Calling createLoggers on class org.apache.logging.log4j.core.config.plugins.LoggersPlugin for element loggers with params(Loggers={root})
2014-01-10 17:42:36,799 DEBUG Reconfiguration completed
2014-01-10 17:42:36,808 DEBUG Using default SystemClock for timestamps
2014-01-10 17:42:36,815 DEBUG property AsyncLogger.WaitStrategy=null
2014-01-10 17:42:36,815 DEBUG disruptor event handler uses SleepingWaitStrategy
2014-01-10 17:42:37,045 DEBUG No AsyncLogger.ExceptionHandler specified
2014-01-10 17:42:37,058 DEBUG Starting AsyncLogger disruptor with ringbuffer size 262144...
I guess my problem is that I can't figure out how to channel Akka's output through log4j2 to be formatted and then written to a file. For the time being I really don't want to be any more complex than that.
What is wrong with my setup? Why isn't this working?
Figured it out. Here's my config:
<?xml version="1.0" encoding="UTF-8"?>
<configuration status="debug">
    <appenders>
        <!--<Console name="CONSOLE" target="SYSTEM_OUT">
            <PatternLayout pattern="%-5p %c{2} - %m%n" />
        </Console>-->
        <RollingRandomAccessFile name="logFileOutput" fileName="logs/wisdomOutput.log"
                                 filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz">
            <PatternLayout>
                <pattern>%d %p %C [%t] %m%n</pattern>
            </PatternLayout>
            <Policies>
                <TimeBasedTriggeringPolicy />
                <SizeBasedTriggeringPolicy size="50 MB"/>
            </Policies>
            <DefaultRolloverStrategy max="20"/>
        </RollingRandomAccessFile>
    </appenders>
    <loggers>
        <!-- if you want to dump different packages to different files, here's where you do it
        <logger name="com.onuspride" level="DEBUG" additivity="false">
            <appender-ref ref="logFileOutput"/>
        </logger>-->
        <root level="debug" includeLocation="true">
            <!--<appender-ref ref="CONSOLE"/>-->
            <appender-ref ref="logFileOutput"/>
        </root>
    </loggers>
</configuration>
All looks good, I think you just need to set the log level of the root logger (looks like you already found that).
One small thing: in the RollingFile appender PatternLayout you are using %C.
I recommend you use %c (lowercase) instead. %C walks the stack trace to get location info and will give much worse performance. (The log4j2 manual has more details.)

Recursive call to appender error using Log4j2 with Flume

I am using Log4j2 with the Flume appender with the following configuration:
<?xml version="1.0" encoding="UTF-8"?>
<configuration status="debug" name="ALI" packages="">
    <appenders>
        <Flume name="eventLogger" suppressExceptions="false" compress="false">
            <Agent host="localhost" port="4141"/>
            <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="com.weather.ali"/>
        </Flume>
        <RollingFile name="VTECs" fileName="logs/vtecs.log"
                     filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz">
            <PatternLayout>
                <pattern>[%d{ISO8601}] %m%n</pattern>
            </PatternLayout>
            <Policies>
                <TimeBasedTriggeringPolicy />
                <SizeBasedTriggeringPolicy size="250 MB"/>
            </Policies>
        </RollingFile>
    </appenders>
    <loggers>
        <root level="debug">
            <appender-ref ref="eventLogger"/>
        </root>
        <logger name="VTEC" level="debug">
            <appender-ref ref="VTECs"/>
        </logger>
    </loggers>
</configuration>
This works as expected on a couple of nodes, but the remaining nodes produce the following error:
ERROR Recursive call to appender eventLogger
Here is the full output of the Log4j bootstrap:
2013-05-07 09:21:59,033 DEBUG Calling createAgent on class org.apache.logging.log4j.flume.appender.Agent for element Agent with params(host="localhost", port="4141")
2013-05-07 09:21:59,040 DEBUG Calling createLayout on class org.apache.logging.log4j.core.layout.RFC5424Layout for element RFC5424Layout with params(facility="null", id="null", enterpriseNumber="18060", includeMDC="true", mdcId="null", mdcPrefix="null", eventPrefix="null", newLine="null", newLineEscape="null", appName="com.weather.ali", messageId="null", mdcExcludes="null", mdcIncludes="null", mdcRequired="null", charset="null", exceptionPattern="null", Configuration(MyApp))
2013-05-07 09:22:02,148 DEBUG Generated plugins in 3.106963125 seconds
2013-05-07 09:22:02,156 DEBUG Calling createAppender on class org.apache.logging.log4j.flume.appender.FlumeAppender for element Flume with params(agents={host=localhost port=4141}, properties={}, embedded="null", type="null", dataDir="null", connectTimeout="null", requestTimeout="null", agentRetries="null", maxDelay="null", name="eventLogger", suppressExceptions="false", mdcExcludes="null", mdcIncludes="null", mdcRequired="null", mdcPrefix="null", eventPrefix="null", compress="false", batchSize="null", null, RFC5424Layout(facility=LOCAL0 appName=com.weather.ali defaultId=Audit enterpriseNumber=18060 newLine=false includeMDC=true messageId=null), null)
2013-05-07 09:22:02,159 DEBUG Starting FlumeAvroManager FlumeAvro[localhost:4141]
2013-05-07 09:22:02,552 DEBUG Calling createAppenders on class org.apache.logging.log4j.core.config.plugins.AppendersPlugin for element appenders with params(appenders={eventLogger})
2013-05-07 09:22:06,774 DEBUG Generated plugins in 4.221008626 seconds
2013-05-07 09:22:06,776 DEBUG Calling createAppenderRef on class org.apache.logging.log4j.core.config.AppenderRef for element appender-ref with params(ref="eventLogger", level="null", null)
2013-05-07 09:22:06,778 DEBUG Calling createLogger on class org.apache.logging.log4j.core.config.LoggerConfig$RootLogger for element root with params(additivity="null", level="debug", includeLocation="null", appender-ref={org.apache.logging.log4j.core.config.AppenderRef#55ed34f1}, properties={}, Configuration(MyApp), null)
2013-05-07 09:22:06,779 DEBUG Calling createLoggers on class org.apache.logging.log4j.core.config.plugins.LoggersPlugin for element loggers with params(loggers={root})
2013-05-07 09:22:06,779 DEBUG Shutting down OutputStreamManager SYSTEM_OUT
2013-05-07 09:22:06,779 DEBUG Reconfiguration completed
2013-05-07 09:22:07,237 ERROR Recursive call to appender eventLogger
So two questions:
Why would this work on some nodes and not on others?
I have looked at the code where this error is generated, and it appears to get called whenever the appender code gets called more than once. I see no reason why this should happen. Can someone explain this to me?
This turned out (not surprisingly) to be a Flume configuration issue on the affected nodes. The Flume agents on those nodes were not starting properly, and this resulted in the fairly cryptic recursion error.
FYI, for others chasing similar log4j recursion warnings:
https://issues.apache.org/jira/browse/LOG4J2-2738

JAX-WS - Out of memory?

I call a simple endpoint method from the client:
MyStatus getStatus(Car car) throws NoSuchCarException;
What I get is:
java.lang.OutOfMemoryError: PermGen space
at com.sun.xml.internal.ws.client.Stub.process(Stub.java:317)
at com.sun.xml.internal.ws.client.sei.SEIStub.doProcess(SEIStub.java:146)
at com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:98)
at com.sun.xml.internal.ws.client.sei.SyncMethodHandler.invoke(SyncMethodHandler.java:78)
at com.sun.xml.internal.ws.client.sei.SEIStub.invoke(SEIStub.java:129)
at $Proxy124.getStatus(Unknown Source)
at de.Babs.showState(Babs.java:622)
What is the problem?
How can I activate XML logging of the JAX-WS communication?
The problem might be that there are too many applications deployed in your container or application server. java.lang.OutOfMemoryError: PermGen space is usually a limitation of the environment (the permanent generation filling up with loaded classes) rather than a plain programming mistake. For example, it can mean there is no longer enough room for additional classes to be loaded, which is common after repeated redeployments. Make sure you restart your server after each deployment or redeployment, or raise the limit with a JVM option such as -XX:MaxPermSize=256m.
To enable logging, just put the needed log4j (or slf4j, or whatever framework you are using) jars into the server's global libraries, then create a log4j.properties file like this (if you're using log4j):
log4j.rootLogger=INFO,stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d [%t] %-5p %X{file} %c{1} - %m%n
####################### YourService ######################
log4j.logger.com.yourservice.servicename=DEBUG,servicename
log4j.additivity.com.yourservice.servicename=false
log4j.logger.servicename=DEBUG,servicename
log4j.additivity.servicename=false
log4j.appender.servicename=org.apache.log4j.DailyRollingFileAppender
log4j.appender.servicename.File=C:\\Servers\\glassfish-3.1.2\\gf-app-logs\\servicename.log
log4j.appender.servicename.DatePattern='.'yyyy-MM-dd
log4j.appender.servicename.layout=org.apache.log4j.PatternLayout
log4j.appender.servicename.layout.ConversionPattern=%d [%t] %-5p %X{file} %c{1} %m%n
log4j.appender.servicename.ImmediateFlush=true
########################################################
I was using GlassFish 3, and all the logging appeared in this file: C:\\Servers\\glassfish-3.1.2\\gf-app-logs\\servicename.log. For example, in GlassFish 3 this log4j.properties file needs to be added to: ${base.dir}\glassfish-3.1.2\glassfish3\glassfish\domains\domain1\config
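Regarding the second part of the question (seeing the SOAP XML that goes over the wire): the JAX-WS reference implementation has message-dump system properties that print requests and responses to standard out. A hedged sketch, assuming the JDK-bundled RI that appears in the stack trace; the exact property prefix differs between the internal and the standalone RI:
public class SoapDumpSketch {
    public static void main(String[] args) {
        // Enable HTTP/SOAP message dumping before the first web service call.
        // JDK-internal JAX-WS RI (com.sun.xml.internal.ws.*):
        System.setProperty(
                "com.sun.xml.internal.ws.transport.http.client.HttpTransportPipe.dump", "true");
        // Standalone JAX-WS RI / Metro (com.sun.xml.ws.*):
        System.setProperty(
                "com.sun.xml.ws.transport.http.client.HttpTransportPipe.dump", "true");

        // ... obtain the port and call getStatus(car) as before; the request and
        // response XML should then be printed to the console.
    }
}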
