WildFly & log4j

standalone.xml has these system properties:
<system-properties>
    <property name="eclipselink.archive.factory" value="org.jipijapa.eclipselink.JBossArchiveFactoryImpl"/>
    <property name="-Dlog4j.configuration" value="file:///d:Work/Options/log4j.xml"/>
</system-properties>
But the logs show a warning:
log4j: WARN No appenders could be found for logger (org.jboss.logging).
log4j: WARN Please initialize the log4j system properly.
log4j: WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Connected to server
What am I doing wrong? =(
I have my own log4j 1.2.17 in modules.
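One likely culprit, given that warning: WildFly registers each <property> name verbatim as a JVM system property, while log4j 1.x looks up the bare key log4j.configuration; the -D prefix is command-line syntax only. A minimal check, with a class name of my own choosing:

// Prints what log4j 1.x will actually see. With the standalone.xml above,
// the first line prints null because only "-Dlog4j.configuration" was set.
public class ConfigPropertyCheck {
    public static void main(String[] args) {
        System.out.println(System.getProperty("log4j.configuration"));
        System.out.println(System.getProperty("-Dlog4j.configuration"));
    }
}

If that is the case, renaming the property to log4j.configuration in <system-properties> should let log4j find the file.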

Related

Why doesn't org.apache.log4j.RollingFileAppender create the archive files?

This is Log4j 1.2.17. I have file:/C:/workspaces/gitlab/QMT/bin/log4j.properties on the classpath. Its contents are:
log4j.rootLogger=DEBUG, stdout, queuesLog
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Threshold=INFO
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %C{1}:%M:%L - %m%n
log4j.appender.queuesLog=org.apache.log4j.RollingFileAppender
log4j.appender.queuesLog.Threshold=DEBUG
log4j.appender.queuesLog.MaxFileSize=1MB
log4j.appender.queuesLog.MaxBackupIndex=10
log4j.appender.queuesLog.layout=org.apache.log4j.PatternLayout
log4j.appender.queuesLog.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %C{1}:%M:%L - %m%n
log4j.appender.queuesLog.File=C:/QEU/logs/queues.log
So when I saw that all of the archives (e.g. queues.log.1) are missing, I concluded that Log4j wasn't obeying this configuration file. I checked several things, whose details I don't believe are necessary for this question. Those checks told me that Log4j should be obeying the above configuration file. Finally, I learned about -Dlog4j.debug.
log4j: Trying to find [log4j.xml] using context classloader sun.misc.Launcher$AppClassLoader@659e0bfd.
log4j: Trying to find [log4j.xml] using sun.misc.Launcher$AppClassLoader@659e0bfd class loader.
log4j: Trying to find [log4j.xml] using ClassLoader.getSystemResource().
log4j: Trying to find [log4j.properties] using context classloader sun.misc.Launcher$AppClassLoader@659e0bfd.
log4j: Using URL [file:/C:/workspaces/gitlab/QMT/bin/log4j.properties] for automatic log4j configuration.
log4j: Reading configuration from URL file:/C:/workspaces/gitlab/QMT/bin/log4j.properties
log4j: Parsing for [root] with value=[DEBUG, stdout, queuesLog].
log4j: Level token is [DEBUG].
log4j: Category root set to DEBUG
log4j: Parsing appender named "stdout".
log4j: Parsing layout options for "stdout".
log4j: Setting property [conversionPattern] to [%d{yyyy-MM-dd HH:mm:ss} %-5p %C{1}:%M:%L - %m%n].
log4j: End of parsing for "stdout".
log4j: Setting property [threshold] to [INFO].
log4j: Parsed "stdout" options.
log4j: Parsing appender named "queuesLog".
log4j: Parsing layout options for "queuesLog".
log4j: Setting property [conversionPattern] to [%d{yyyy-MM-dd HH:mm:ss} %-5p %C{1}:%M:%L - %m%n].
log4j: End of parsing for "queuesLog".
log4j: Setting property [file] to [C:/QEU/logs/queues.log].
log4j: Setting property [maxBackupIndex] to [10].
log4j: Setting property [maxFileSize] to [1MB].
log4j: Setting property [threshold] to [DEBUG].
log4j: setFile called: C:/QEU/logs/queues.log, true
log4j: setFile ended
log4j: Parsed "queuesLog" options.
log4j: Finished configuring.
log4j: rolling over count=1025467879
log4j: maxBackupIndex=10
log4j: Renaming file C:\QEU\logs\queues.log to C:\QEU\logs\queues.log.1
log4j: setFile called: C:/QEU/logs/queues.log, true
log4j: setFile ended
This confirms that file:/C:/workspaces/gitlab/QMT/bin/log4j.properties is in use. Furthermore,
log4j: Renaming file C:\QEU\logs\queues.log to C:\QEU\logs\queues.log.1
is explicit, but the archive files aren't there!
queues.log grows until the operating system doesn't like it anymore, which happens at 4 GiB.
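A detail worth knowing here: log4j 1.2 rolls archives with java.io.File.renameTo, which signals failure only through a boolean return value, never an exception. And if I read the 1.2.17 source correctly, a failed rename makes rollOver reopen the live file with append=true, which would match the setFile called: C:/QEU/logs/queues.log, true line right after the rename above. A standalone sketch to test the rename outside log4j (class name is mine):

import java.io.File;

// Tries the exact rename log4j attempts during rollover. A false result
// with no exception reproduces the "missing archives" symptom.
public class RenameCheck {
    public static void main(String[] args) {
        File live = new File("C:/QEU/logs/queues.log");
        File archive = new File("C:/QEU/logs/queues.log.1");
        System.out.println("renameTo succeeded: " + live.renameTo(archive));
    }
}

If the rename fails, another process locking queues.log (an antivirus scanner, a tail utility) is a common suspect on Windows.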

In a Java application, if all 3 log4j APIs exist on the classpath, how do I choose which one takes effect?

In a Cloudera Spark-on-YARN installation, I encounter an application that has multiple log4j APIs in its classpath:
log4j:1.2
log4j-1.2-api:2.x
log4j-core:2.x
When the application is launched, I see the following error:
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
It cannot be fixed because the platform vendor uses the log4j:1.2 configuration exclusively: it ships a managed log4j.properties file (not a log4j2.properties) that is on the classpath, used by many components, and cannot be tampered with.
How do I tell the classloader or the log4j implementation not to look for a log4j2 configuration file when there isn't one?
UPDATE 1: I've tried the following Spark arguments to customise the driver JVM. Effects are as follows:
--conf spark.driver.extraJavaOptions="-XX:+UseG1GC -Dlog4j1.compatibility=true -Dlog4j.configuration=./conf_5/log4j.properties -Dlog4j.debug"
result is:
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console. Set system property 'org.apache.logging.log4j.simplelog.StatusLogger.level' to TRACE to show Log4j2 internal initialization logging.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
log4j: setFile called: /tmp/spark-b190d9df-64bf-4ff6-b72d-2cb8de1d888a/__driver_logs__/driver.log, true
log4j: setFile ended
log4j: Finalizing appender named [_DriverLogAppender].
--conf spark.driver.extraJavaOptions="-XX:+UseG1GC -Dlog4j1.compatibility=true -Dlog4j.configurationFile=./conf_5/log4j.properties -Dlog4j.debug"
result is:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
log4j: setFile called: /tmp/spark-817bc6eb-bcc0-4cee-ba32-dbd4d89048e5/__driver_logs__/driver.log, true
log4j: setFile ended
log4j: Finalizing appender named [_DriverLogAppender].
So neither of them works properly. There seems to be no way either to force the right implementation or to make it read the log4j 1.x configuration file.
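One diagnostic that may clarify the situation: both log4j:1.2 and log4j-1.2-api provide org.apache.log4j.Logger, so whichever jar wins the classpath race decides whether 1.x API calls go to the real log4j 1.2 or are bridged into log4j-core. A minimal sketch (class name is mine) to print the winner:

// Reports which jar supplied the log4j 1.x API on this JVM.
public class WhichLog4j {
    public static void main(String[] args) throws ClassNotFoundException {
        Class<?> logger = Class.forName("org.apache.log4j.Logger");
        System.out.println(logger.getProtectionDomain().getCodeSource().getLocation());
    }
}

If the answer is log4j-1.2-api, the StatusLogger error is expected: the bridge hands everything to log4j-core, which then goes looking for a log4j2 configuration regardless of the legacy properties file.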

k8s spark executor not able to parse log4j rootLogger Level

In my k8s Spark application, I would like to change the log4j log level in the executor pods.
In the log4j properties file I have set the rootLogger to WARN, but in the executor pods I can see it is still parsed as INFO.
log4j.properties:
log4j.rootLogger=WARN,console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %-5p [%t] %c{2}(%L): %m%n
spark-submit:
spark-submit \
--master k8s://.. \
--deploy-mode client \
--conf spark.driver.extraJavaOptions='-Dlog4j.debug=true -Dlog4j.configuration=file:///opt/log4j.properties'\
--conf spark.executor.extraJavaOptions='-Dlog4j.debug=true -Dlog4j.configuration=file:///opt/log4j.properties'\
--class org.log4jTestRunner \
logger-test-1.0.jar
Driver logs:
log4j: Using URL [file:/opt/log4j.properties] for automatic log4j configuration.
log4j: Reading configuration from URL file:/opt/log4j.properties
log4j: Parsing for [root] with value=[WARN,console].
log4j: Level token is [WARN].
log4j: Category root set to WARN
log4j: Parsing appender named "console".
log4j: Parsing layout options for "console".
log4j: Setting property [conversionPattern] to [%d %-5p [%t] %c{2}(%L): %m%n].
log4j: End of parsing for "console".
log4j: Setting property [target] to [System.err].
log4j: Parsed "console" options.
Executor logs:
log4j: Using URL [file:/opt/log4j.properties] for automatic log4j configuration.
log4j: Reading configuration from URL file:/opt/log4j.properties
log4j: Parsing for [root] with value=[INFO,console].
log4j: Level token is [INFO].
log4j: Category root set to INFO
log4j: Parsing appender named ""console"".
log4j: Parsing layout options for ""console"".
log4j: Setting property [conversionPattern] to [%d %-5p [%t] %c{2}(%L): %m%n].
log4j: End of parsing for ""console"".
log4j: Setting property [target] to [System.err].
log4j: Parsed ""console"" options.
I can see that the driver parses it correctly and the log4j log level is respected.
I am not sure why the executors behave differently.
I am using k8s with Spark 3.x.
Thank you in advance.
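One clue already visible in the output above: the executor read a file at the same URL but with different contents ([INFO,console], and appender names wrapped in literal quotes), so the /opt/log4j.properties inside the executor image may not be the file you edited. A small helper (names are mine) you can invoke from inside a task, so it runs on the executor JVM:

import org.apache.log4j.Logger;

// Call from a map/foreach function so it executes on an executor;
// the result appears on that executor's stderr.
public class ExecutorLevelCheck {
    public static void report() {
        Logger root = Logger.getRootLogger();
        System.err.println("effective root level on this JVM: " + root.getLevel());
    }
}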

Failed to find affinity server node with data storage configuration for starting cache

I'm trying to store a Spark in-memory table into Ignite. When I try to do that, I get this error message:
Failed to find affinity server node with data storage configuration for starting cache [cacheName=SQL_PUBLIC_JSON_TBL, aliveSrvNodes=[]].
I'm running it on an HDP cluster set up on an EC2 machine. When I do the same on the cluster machine here, it works perfectly, but not on the EC2 machine.
Thanks in advance.
UPDATE:
I'm using Spark shell. Here's the code.
val df = sqlContext.read.json("~/responses")
val s = df.select("response.id","response.name")
s.write
  .format(IgniteDataFrameSettings.FORMAT_IGNITE)
  .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE, "~/apache-ignite-fabric-2.6.0-bin/examples/config/spark/example-shared-rdd.xml")
  .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS, "id")
  .option(IgniteDataFrameSettings.OPTION_TABLE, "json_table")
  .save()
Here is the XML config file that I use for my single Ignite server:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="cacheConfiguration">
            <!-- SharedRDD cache example configuration (Atomic mode). -->
            <bean class="org.apache.ignite.configuration.CacheConfiguration">
                <!-- Set a cache name. -->
                <property name="name" value="sharedRDD"/>
                <!-- Set a cache mode. -->
                <property name="cacheMode" value="PARTITIONED"/>
                <!-- Index Integer pairs used in the example. -->
                <property name="indexedTypes">
                    <list>
                        <value>java.lang.Integer</value>
                        <value>java.lang.Integer</value>
                    </list>
                </property>
                <!-- Set atomicity mode. -->
                <property name="atomicityMode" value="ATOMIC"/>
                <!-- Configure a number of backups. -->
                <property name="backups" value="1"/>
            </bean>
        </property>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
                        <property name="addresses">
                            <list>
                                <!-- In distributed environment, replace with actual host IP address. -->
                                <value>127.0.0.1:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>
Here's the full log.
18/10/30 12:32:54 WARN GridDiagnostic: Initial heap size is 252MB (should be no less than 512MB, use -Xms512m -Xmx512m).
18/10/30 12:32:54 WARN TcpCommunicationSpi: Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
18/10/30 12:32:55 WARN NoopCheckpointSpi: Checkpoints are disabled (to enable configure any GridCheckpointSpi implementation)
18/10/30 12:32:55 WARN GridCollisionManager: Collision resolution is disabled (all jobs will be activated upon arrival).
18/10/30 12:32:57 WARN TcpDiscoverySpi: Failed to read message due to ClassNotFoundException (make sure same versions of all classes are available on all nodes) [rmtNodeId=3085dfa9-58ba-4ac0-a7f8-f78e2901a699, err=o.a.i.i.processors.hadoop.HadoopAttributes]
18/10/30 12:32:57 WARN IgniteAuthenticationProcessor: Cannot find the server coordinator node. Possible a client is started with forceServerMode=true. Security warning: user authentication will be disabled on the client.
18/10/30 12:32:58 ERROR ClusterCachesInfo: Failed to find affinity server node with data storage configuration for starting cache [cacheName=SQL_PUBLIC_JSON_TBL6, aliveSrvNodes=[]]
18/10/30 12:32:58 WARN CacheAffinitySharedManager: No server nodes found for cache client: SQL_PUBLIC_JSON_TBL
The log contains the following record:
12:32:57 WARN TcpDiscoverySpi: Failed to read message due to ClassNotFoundException (make sure same versions of all classes are available on all nodes) [rmtNodeId=3085dfa9-58ba-4ac0-a7f8-f78e2901a699, err=o.a.i.i.processors.hadoop.HadoopAttributes]
It says that a discovery message cannot be deserialized because the HadoopAttributes class is not on the classpath. This can lead to connectivity problems and affect the ability of nodes to see each other.
Make sure that all nodes have the ignite-hadoop module on their classpath, or get rid of this dependency.
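A quick way to verify that on each machine, given that the warning names o.a.i.i.processors.hadoop.HadoopAttributes (that is, org.apache.ignite.internal.processors.hadoop.HadoopAttributes), is a presence check like this sketch (class name is mine):

// Run on every node: reports whether the ignite-hadoop classes resolve.
public class HadoopClasspathCheck {
    public static void main(String[] args) {
        try {
            Class.forName("org.apache.ignite.internal.processors.hadoop.HadoopAttributes");
            System.out.println("ignite-hadoop classes present");
        } catch (ClassNotFoundException e) {
            System.out.println("ignite-hadoop classes missing on this node");
        }
    }
}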

KafkaLog4jAppender not pushing application logs to Kafka topic

I am pretty new to using Kafka.
For a particular requirement, I have to push my log4j logs directly to a Kafka topic.
I have a standalone Kafka installation running on CentOS, and I have verified it with the Kafka producer and consumer clients. I am also using the bundled ZooKeeper instance.
I have also created a standalone Java app with log4j logging enabled, and I have edited the log4j.properties file as below:
log4j.rootCategory=INFO
log4j.appender.file=org.apache.log4j.DailyRollingFileAppender
log4j.appender.file.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.file.File=/home/edureka/Desktop/Anurag/logMe
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd'T'HH:mm:ss.SSS'Z'}{UTC} %p %C %m%n
log4j.logger.com=INFO,file,KAFKA
#Kafka Appender
log4j.appender.KAFKA=kafka.producer.KafkaLog4jAppender
log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
log4j.appender.KAFKA.layout.ConversionPattern=%d{yyyy-MM-dd'T'HH:mm:ss.SSS'Z'}{UTC} %p %C %m%n
log4j.appender.KAFKA.ProducerType=async
log4j.appender.KAFKA.BrokerList=localhost:2181
log4j.appender.KAFKA.Topic=test
log4j.appender.KAFKA.Serializer=kafka.test.AppenderStringSerializer
Now when I run the application, all the logs go into the local log file, but the consumer still shows no entries arriving.
The topic I am using is test in either scenario.
Also, no error log is being generated; the detailed logs from the log4j library are as below:
log4j: Trying to find [log4j.xml] using context classloader sun.misc.Launcher$AppClassLoader@a1d92a.
log4j: Trying to find [log4j.xml] using sun.misc.Launcher$AppClassLoader@a1d92a class loader.
log4j: Trying to find [log4j.xml] using ClassLoader.getSystemResource().
log4j: Trying to find [log4j.properties] using context classloader sun.misc.Launcher$AppClassLoader@a1d92a.
log4j: Using URL [file:/home/edureka/workspace/TestKafkaLog4J/bin/log4j.properties] for automatic log4j configuration.
log4j: Reading configuration from URL file:/home/edureka/workspace/TestKafkaLog4J/bin/log4j.properties
log4j: Parsing for [root] with value=[DEBUG, stdout, file].
log4j: Level token is [DEBUG].
log4j: Category root set to DEBUG
log4j: Parsing appender named "stdout".
log4j: Parsing layout options for "stdout".
log4j: Setting property [conversionPattern] to [%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n].
log4j: End of parsing for "stdout".
log4j: Setting property [target] to [System.out].
log4j: Parsed "stdout" options.
log4j: Parsing appender named "file".
log4j: Parsing layout options for "file".
log4j: Setting property [conversionPattern] to [%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n].
log4j: End of parsing for "file".
log4j: Setting property [file] to [/home/edureka/Desktop/Anurag/logMe].
log4j: Setting property [maxBackupIndex] to [10].
log4j: Setting property [maxFileSize] to [5MB].
log4j: setFile called: /home/edureka/Desktop/Anurag/logMe, true
log4j: setFile ended
log4j: Parsed "file" options.
log4j: Finished configuring.
2015-05-11 19:44:40 DEBUG TestMe:19 - This is debug : anurag
2015-05-11 19:44:40 INFO TestMe:23 - This is info : anurag
2015-05-11 19:44:40 WARN TestMe:26 - This is warn : anurag
2015-05-11 19:44:40 ERROR TestMe:27 - This is error : anurag
2015-05-11 19:44:40 FATAL TestMe:28 - This is fatal : anurag
2015-05-11 19:44:40 INFO TestMe:29 - message from log4j appender
Any help will be really great.
Thanks,
AJ
In your output, I don't see the KAFKA appender being created, so it's no wonder nothing is logged to Kafka. I'm guessing the reason is that you only log from a class named TestMe (probably in the default package), while the KAFKA appender is only added to the logger named "com".
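A minimal sketch of the fix that answer implies, with hypothetical package and class names: put the logging class under a com.* package so its logger inherits the KAFKA appender from the logger named "com".

package com.example.kafkalogging; // hypothetical; any package under "com" works

import org.apache.log4j.Logger;

public class TestMe {
    // Logger "com.example.kafkalogging.TestMe" inherits the file and
    // KAFKA appenders configured on the logger named "com".
    private static final Logger log = Logger.getLogger(TestMe.class);

    public static void main(String[] args) {
        log.info("message that should reach both the file and the Kafka topic");
    }
}

Separately, it's worth double-checking BrokerList: localhost:2181 is the ZooKeeper port, and a broker list normally points at a Kafka broker (9092 by default).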
