I have an existing distributed application that uses log4j to write logs on each local server. I want to preserve that functionality and also send some of those logs to a central repository. I've seen examples of using log4j's SocketAppender to send logs to a remote server, but I haven't seen an example of multiple servers sending to the same remote server, with that server writing each sender's logs to a separate file. Is there an example of this?
As an alternative, I'm curious about using the JDBCAppender with a database as the centralized log repository, but I have the same question about how to differentiate where the messages came from when viewing query results. Is there a log4j properties setting that identifies the sender and can be interpreted on the listening server?
For your first question - it depends on the remote server to which your client programs send logs. If you have developed your own program for receiving logs on the remote server, there are two approaches to creating a separate log file for each client's logs (a sketch of the first approach follows this list):
Make the server program listen on a single port and, after receiving some logs, check the client IP and create a log file per client IP.
Make the server program listen on a different port for each client; once a client is connected to its specific port, receive the data and dump it into that client's log file. This approach seems easy but is not recommended.
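Here is a minimal sketch of the first approach for log4j 1.x SocketAppender clients (the port, directory, and layout pattern are assumptions): the server accepts connections, deserializes the LoggingEvent objects that SocketAppender streams, and appends each client's events to a file named after its IP address.
import java.io.BufferedWriter;
import java.io.EOFException;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import org.apache.log4j.PatternLayout;
import org.apache.log4j.spi.LoggingEvent;

public class PerClientLogServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(4560)) {  // port is an assumption
            while (true) {
                Socket client = server.accept();
                new Thread(() -> handle(client)).start();     // one thread per client
            }
        }
    }

    private static void handle(Socket client) {
        String ip = client.getInetAddress().getHostAddress();
        PatternLayout layout = new PatternLayout("[%d] [%t] %-5p %c %m%n");
        try (ObjectInputStream in = new ObjectInputStream(client.getInputStream());
             BufferedWriter out = Files.newBufferedWriter(
                     Paths.get("/var/log/central", ip + ".log"),  // one file per client IP
                     StandardOpenOption.CREATE, StandardOpenOption.APPEND)) {
            while (true) {
                // SocketAppender sends serialized LoggingEvent objects
                LoggingEvent event = (LoggingEvent) in.readObject();
                out.write(layout.format(event));
                out.flush();
            }
        } catch (EOFException clientDisconnected) {
            // normal end of stream
        } catch (IOException | ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}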
If your server runs Linux, I would recommend using rsyslog for centralized log collection. In rsyslog, you can configure each client separately and dump the logs into separate log files.
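For example, a sketch of an rsyslog configuration on the central server (the port and directory are assumptions) that receives logs over TCP and writes one file per sending host:
# load the TCP input module and listen on port 514
module(load="imtcp")
input(type="imtcp" port="514")
# one log file per sending host
template(name="PerHostFile" type="string" string="/var/log/remote/%HOSTNAME%.log")
action(type="omfile" dynaFile="PerHostFile")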
For the second question - you can use the Nested Diagnostic Context (NDC) feature of log4j to write the hostname to the database. See this example. That example uses USER_ID as an extra column written to the database; you can use such an extra column for the hostname in the same way. At the start of your client program, before writing any log statement, put the value into the NDC with the code below:
NDC.push(InetAddress.getLocalHost().getHostName());
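With that in place, a log4j 1.x JDBCAppender can write the NDC value via the %x conversion pattern. A sketch in properties format (the table, columns, and connection details are assumptions, mirroring the USER_ID example):
log4j.rootLogger=INFO, DB
log4j.appender.DB=org.apache.log4j.jdbc.JDBCAppender
log4j.appender.DB.URL=jdbc:mysql://dbhost:3306/logdb
log4j.appender.DB.driver=com.mysql.jdbc.Driver
log4j.appender.DB.user=loguser
log4j.appender.DB.password=logpass
# %x expands to the NDC value pushed above, i.e. the client hostname
log4j.appender.DB.sql=INSERT INTO LOGS (HOST_NAME, LOG_DATE, LOGGER, LOG_LEVEL, MESSAGE) VALUES ('%x', '%d{yyyy-MM-dd HH:mm:ss}', '%c', '%p', '%m')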
You have to make some changes in the configuration file. If multiple applications want to write their log files to a centralized remote location, the changes below will help:
You need to make the change in each individual application's Log4j config file.
<?xml version="1.0" encoding="UTF-8"?>
<Configuration monitorInterval="60">
<Properties>
<Property name="server-log-path">REMOTE_SERVER_PATH</Property>
</Properties>
<Appenders>
<File name="Login-App-File-Appender" fileName="${server-log-path}/file_name.log" >
<PatternLayout>
<pattern>
[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n
</pattern>
</PatternLayout>
</File>
<File name="CheckOut-App-File-Appender" fileName="${server-log-path}/file_name.log" >
<PatternLayout>
<pattern>
[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n
</pattern>
</PatternLayout>
</File>
</Appenders>
<Loggers>
<Logger name="com.microService.LoginService" level="info" additivity="false">
<AppenderRef ref="Login-App-File-Appender"/>
</Logger>
<Logger name="com.microService.CheckOutService" level="info" additivity="false">
<AppenderRef ref="CheckOut-App-File-Appender"/>
</Logger>
<Root>
<AppenderRef ref="Login-App-File-Appender"/>
</Root>
</Loggers>
</Configuration>
We are using Spring Boot to send metrics to Application Insights, using applicationinsights-logging-log4j2.
Below are the appenders we are using in log4j2-spring.xml:
<Appenders>
<Console name="Console" target="SYSTEM_OUT">
<PatternLayout pattern="%d{MM-dd-yyyy'T'HH:mm:ss.SSS,UTC} %correlationId [%thread] %-5level %logger{36}- %msg%n"/>
</Console>
<ApplicationInsightsAppender name="aiAppender">
</ApplicationInsightsAppender>
</Appenders>
<Loggers>
<Root level="INFO">
<AppenderRef ref="Console" />
<AppenderRef ref="aiAppender" />
</Root>
</Loggers>
We are seeing the logs in the App Insights search screen; however, I have a few questions.
Is there a way to define custom info in logging, like a correlationId (a GUID used to track a flow uniquely), and send it to AI just like we append it in the console logs?
Is there anything like a pattern we can define for AI?
Is there any use for the console appender and logging to the console if we are logging to AI?
You can create a class that extends OncePerRequestFilter and, in that class, generate an id using a UUID generator and store it in a variable, say requestId.
Then call MDC.put("requestId", requestId).
A class extending OncePerRequestFilter is executed on every HTTP request, so you won't need to call it explicitly, and the MDC value will appear as a custom property on your Application Insights log entries.
This is just a workaround for correlationId, though it gives us the same capability of aggregating logs: whatever requestId is generated, you can retrieve it and then use it in Application Insights to see the logs for that request. A minimal sketch follows.
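A minimal sketch of such a filter (the class and key names are assumptions; it clears the MDC entry afterwards so the id doesn't leak to the next request handled by the same thread):
import java.io.IOException;
import java.util.UUID;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class RequestIdFilter extends OncePerRequestFilter {
    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain filterChain)
            throws ServletException, IOException {
        String requestId = UUID.randomUUID().toString();
        MDC.put("requestId", requestId);  // picked up as a custom property by the AI appender
        try {
            filterChain.doFilter(request, response);
        } finally {
            MDC.remove("requestId");
        }
    }
}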
I believe the console appender is still helpful, because in AI we see logs only after at least 4 to 5 minutes, so for real-time debugging console logs are useful. You can also configure what type of logs you want to see in the console and what you want to send to AI.
We are using the latest version of the Java SAP Cloud SDK.
We have some code which uses the ODataQueryBuilder API and the VDM API as well. We want to log the HTTP requests being sent by these APIs - the whole HTTP request: headers, body, everything. Please note that our application is running on SAP Cloud Platform's Cloud Foundry PaaS offering, and using cf set-logging-level doesn't seem to work.
I've been using this Java argument when debugging my requests, but only locally:
-Dorg.slf4j.simpleLogger.log.org.apache.http.wire=debug
If you can pass it within the CF environment, I think you should start seeing all the payloads. I'll research a bit more to provide better guidance if this doesn't work for you.
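One way to pass it, assuming the standard Cloud Foundry Java buildpack (which reads the JAVA_OPTS environment variable), is via manifest.yml; the application name is a placeholder:
applications:
- name: my-app
  env:
    JAVA_OPTS: '-Dorg.slf4j.simpleLogger.log.org.apache.http.wire=debug'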
For applications deployed on SCP CF, there are different setups for which we recommend different logging practices. The goal is to configure individual log levels for specific packages of your application and its third-party dependencies, e.g. the SAP Cloud SDK, the SAP Service SDK, or the Apache HTTP components.
TomEE-based application:
Edit the manifest.yml to include the following env entry:
SET_LOGGING_LEVEL: '{ROOT: INFO, com.sap.cloud.sdk: INFO, org.apache.http.wire: DEBUG}'
Feel free to customize.
Spring Boot-based application:
We assume the Logback framework is used.
Edit/Create the file: application/src/main/resources/logback-spring.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<springProfile name="!cloud">
<include resource="org/springframework/boot/logging/logback/base.xml"/>
<root level="INFO"/>
<logger name="org.springframework.web" level="INFO"/>
</springProfile>
<springProfile name="cloud">
<appender name="STDOUT-JSON" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="com.sap.hcp.cf.logback.encoder.JsonEncoder"/>
</appender>
<logger name="org.springframework.web" level="INFO"/>
<logger name="com.sap.cloud.sdk" level="INFO"/>
<logger name="org.apache.http.wire" level="DEBUG"/>
<root level="INFO">
<appender-ref ref="STDOUT-JSON"/>
</root>
</springProfile>
</configuration>
Feel free to customize.
Notice the different profile settings. Make sure the cloud profile is active for deployed applications. Edit the manifest.yml to include the following env entry:
SPRING_PROFILES_ACTIVE: 'cloud'
I am trying to store Log4j 2 logs in Bluemix's Cloudant DB.
Could you help me out or point me to any document regarding the Log4j 2 configuration I need to make?
Thank you.
Take a look at the Log4j 2 Docs - Appenders. The NoSQLAppender writes log events to a NoSQL database using an internal lightweight provider interface. Provider implementations currently exist for MongoDB and Apache CouchDB, and you can write a custom provider.
You specify which NoSQL provider to use by specifying the appropriate configuration element within the <NoSql> element. The types currently supported are <MongoDb> and <CouchDb>. To create your own custom provider, read the JavaDoc for the NoSQLProvider, NoSQLConnection, and NoSQLObject classes and the documentation about creating Log4j plugins.
Considering that Cloudant is built upon CouchDB, you should be able to adapt the CouchDB provider for your purpose. The following is an example of an appender configuration for CouchDB:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="error">
<Appenders>
<NoSql name="databaseAppender">
<CouchDb databaseName="applicationDb" protocol="https" server="couch.example.org" username="loggingUser" password="abc123" />
</NoSql>
</Appenders>
<Loggers>
<Root level="warn">
<AppenderRef ref="databaseAppender"/>
</Root>
</Loggers>
</Configuration>
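Since Cloudant exposes the CouchDB API, pointing the <CouchDb> element at your Cloudant host may be all that is needed. A sketch, where the account name, database, and credentials are placeholders:
<NoSql name="cloudantAppender">
  <CouchDb databaseName="applicationDb" protocol="https" port="443"
           server="myaccount.cloudant.com" username="myaccount" password="secret" />
</NoSql>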
I'm using the automatic log rolling and compression facilitated by the TimeBasedRollingPolicy provided in Log4J Extras (see config below).
It is normal for the application doing this logging to be constantly stopping and starting, and I've noticed that the automatic compression does not occur if the application is stopped during a rollover-triggering event (an hourly rollover in this case). I find this strange, as the rolling itself (without compression) still occurs and seems to work fine.
Is it not possible to have log compression work for an application that does not run continuously?
Does anyone know how to get this working with Log4J?
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration debug="true">
<appender name="ROLL" class="org.apache.log4j.rolling.RollingFileAppender">
<rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
<param name="FileNamePattern" value="/var/batchproc/logs/log4j_roll_compress_%d{yyyy-MM-dd-kk}.log.gz"/>
</rollingPolicy>
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern" value="[%d] [%t] %-5p %c %m%n"/>
</layout>
</appender>
<root>
<appender-ref ref="ROLL"/>
</root>
</log4j:configuration>
The rollover process is only triggered by logging two messages that fall in different time units (hours in the example) while the app is running. Past time units aren't scanned for on app startup.
One thing you can do is use a separate "active" file name where all log messages go before they're rolled/zipped. If you do that, any existing active log file will be appended to until another hour has gone by, and then rolled into a gzipped, timestamped file. Unfortunately, this file's timestamp isn't checked on startup (at least in apache-log4j-extras 1.1), so the old hour's logs and the new hour's logs will end up together in the rolled file. But at least it'll be zipped!
<appender name="ROLL" class="org.apache.log4j.rolling.RollingFileAppender">
<param name="File" value="/var/batchproc/logs/log4j_roll_active.log"/>
...rest of example config here...
</appender>
Is there an appender in log4net that can allow a winform client to read a log4net log on another server without using a share? My application is hosted as a web service. I'm looking for an HTTP appender or something similar.
There is a GitHub project called PostLog that is an HttpAppender for log4net.
I think you could use the RemotingAppender, something like this:
<appender name="RemotingAppender" type="log4net.Appender.RemotingAppender" >
<sink value="http://localhost:8080/LoggingSink" />
<lossy value="false" />
<bufferSize value="95" />
<onlyFixPartialEventData value="true" />
</appender>
According to the docs:
This Appender is designed to deliver events to a remote sink. That is any object that implements the RemotingAppender.IRemoteLoggingSink interface. It delivers the events using .NET remoting. The object to deliver events to is specified by setting the appender's Sink property.
There is also a UdpAppender, and there is an open-source client that can receive these messages:
http://log2console.codeplex.com/
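If you try the UDP route, the configuration is small. A sketch, where the address and port are assumptions, using the log4j-style XML layout that viewers such as Log2Console typically understand:
<appender name="UdpAppender" type="log4net.Appender.UdpAppender">
  <remoteAddress value="192.168.1.10" />
  <remotePort value="8080" />
  <layout type="log4net.Layout.XmlLayoutSchemaLog4j">
    <locationInfo value="true" />
  </layout>
</appender>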