Log statements are repeating in log files - log4j

I have to log to multiple files, so I have created two appenders. One is for a basic log that records a small amount of information. The second appender is dynamic: depending on one parameter, the log file name will be different. Both scenarios are working fine.
Now I have just found that log statements are getting duplicated. The first time a message is written once, the second time two lines, the third time three, and so on. My program runs every 20 seconds. If I close the program and run it again it does not repeat, but if it keeps running every 20 seconds the log entries start to repeat.
I create the logger and add the appender in code; everything is done programmatically, not through a configuration file. Below is one of them.
static Logger loggerCustom = Logger.getLogger("CustomLog");
PatternLayout plt = new PatternLayout();
plt.setConversionPattern("%-7p %d [%t] %c %x - %m%n");
// FileAppender(Layout, String) throws IOException, so this needs a try/catch or a throws clause
FileAppender fh = new FileAppender(plt, "logs\\" + strDate + "\\CustomLog.log");
loggerCustom.addAppender(fh);
loggerCustom.setAdditivity(false);

Dear All, the above issue has been resolved by adding the line below before attaching the appender:
loggerCustom.removeAllAppenders()
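In code, the fix looks roughly like this (a minimal sketch for log4j 1.x, reusing the loggerCustom and fh names from the snippet above):

// Clear whatever appenders are still attached from the previous 20-second run;
// otherwise each run adds one more FileAppender and every message is written
// once per attached appender, which is why the duplicates grow over time.
loggerCustom.removeAllAppenders();
loggerCustom.addAppender(fh);
loggerCustom.setAdditivity(false);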

Related

tailLines and sinceTime in the logging API do not work simultaneously

I am using a container engine, and my pods are hosted there.
I am trying to fetch logs using the log API:
http://localhost:8000/api/v1/namespaces/app-test/pods/designer-0/log?tailLines=100&sinceTime=2017-09-17T10:47:58Z
If I use the two query params separately, each works and shows the proper result, but if I use them simultaneously only 100 lines are returned and the sinceTime param seems to be ignored.
My scenario is that I need the logs from a specific time, in chunks of 100 lines at a time.
I am not sure whether this is a bug or simply not implemented.
I found this in the API reference manual:
https://kubernetes.io/docs/api-reference/v1.6/
tailLines - If set, the number of lines from the end of the logs to
show. If not specified, logs are shown from the creation of the
container or sinceSeconds or sinceTime
So that means if you specify tailLines, it starts from the end. I don't see any other option explicitly mentioned besides limitBytes, but you will have to play around with it, since it does not guarantee a number of lines.
tailLines=X tells the server to start that many lines from the end
sinceTime tells the server to start from the specified time
the options are mutually exclusive
Thanks all,
I later realized that it is not ignoring sinceTime; the intended functionality of tailLines is to return lines from the end.
So if I specify sinceTime = 10 PM yesterday, it returns the records from that time onwards, and if tailLines is also specified, it returns the most recent lines from that chunk.
So it was working as expected. I need to play with limitBytes to get the logs in chunks from that time, instead of the full logs.
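As a sketch of what that chunking could look like (this is not a documented pagination mechanism, just one way to combine the existing query params; the limitBytes value is arbitrary): request the window with sinceTime plus limitBytes and timestamps=true, then use the timestamp of the last line you received as the sinceTime of the next request.

http://localhost:8000/api/v1/namespaces/app-test/pods/designer-0/log?sinceTime=2017-09-17T10:47:58Z&limitBytes=16384&timestamps=true
http://localhost:8000/api/v1/namespaces/app-test/pods/designer-0/log?sinceTime=<timestamp of the last line returned>&limitBytes=16384&timestamps=true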

Is it possible to use the ${shortdate} in the internalLogFile?

<nlog internalLogFile="C:\Logs\${shortdate}_nlog.log">
  <targets>
    <target name="logfile" xsi:type="File"
            fileName="C:/logs/${shortdate}_dev.log" />
  </targets>
</nlog>
I'm getting the expected dated log file, but the internal log file is literally named ${shortdate}_nlog.log
Short answer: No.
Longer answer: The internal logger file name is just a string. It is read in during initialization, and the XmlLoggingConfiguration class only ensures that the directory exists, whereas (for example) the FileTarget uses a Layout for fileName, which converts the provided value using layout renderers.
https://github.com/NLog/NLog/issues/581#issuecomment-74923718
My understanding from reading their comments is that the internal logging is meant to be simple, stable, and used sparingly. Typically you only turn it on when trying to figure out what's going wrong with your setup.
You can still dynamically name your internal log file based on the date and time if you want. However, it won't have the rollover behavior a target file would; it will essentially just get a different date whenever you initialize your logger.
DateTime dt = DateTime.Now;
// verbatim string literal (@); the date is fixed at the moment this line runs
NLog.Common.InternalLogger.LogFile = @"C:\CustomLogs\NLog_Internal\internal_NLogs" + dt.ToString("yyyy-MM-dd") + ".txt";

PowerShell multithreading log4net interleaved log issue

I have a PowerShell script that uses log4net to manage its logs. The logs are written to log files and to an MS SQL database. The script uses multi-threading with runspaces.
The issue is that multiple threads are handling several objects and logging lots of data about different objects at the same time, and I need to regroup the logs by object. An example will help me explain myself better! ^^
log file line 1 OBJECT 1.ACTION 1
log file line 2 OBJECT 1.ACTION 2
log file line 3 OBJECT 1.ACTION 3
log file line 4 object2.action1
log file line 5 object3.action1
log file line 6 OBJECT 1.ACTION 4
log file line 7 object2.action2
log file line 8 OBJECT 1.ACTION 5
…
To manage this interleaved logging issue, I planned to log in memory, for example in a table, and at the end of the processing of each object block the other threads using a mutex and write all the logs with a foreach loop.
Main {
    Treat object {
        Action1 -> $Logs += $log1
        Action2 -> $Logs += $log2
        ...
    }
    # $mutex is a shared System.Threading.Mutex
    $mutex.WaitOne()
    foreach ($log in $Logs) {
        Write to log file
        Write to SQL DB
    }
    $mutex.ReleaseMutex()
}
I would like to know if there is any better solution to manage this interleaved logging issue with multiple runspaces, please.
Can log4net perhaps manage this natively, e.g. hold all logs in memory and only "commit" the writes when I issue a command? Or is there some other solution that avoids using a mutex?
One way would be to use one logger per group, and if you don't know the number of groups in advance you could just create the loggers dynamically.
If you prefer to work with one logger, the usual best practice is to log things as they happen and do the grouping afterwards, for example when the logs are displayed.
That is trivial to do in SQL by adding a column for the grouping criterion, and for a text file you could use the Unix sort command.

Log4net hourly rollover but with datePattern of seconds

I'm trying to write a log that creates a new file every hour, which can be done simply by setting the datePattern to an hourly pattern, but I need the datePattern (or at least the filename) to consist of yyyyMMddHHmmss while still rolling over every hour.
Obviously, when I set the pattern to yyyyMMddHHmmss I get the filename I want, but the rollover happens every second.
I've searched all over but couldn't find any answer.
Thanks for the assistance.
Log4net does not support this. You could copy the source code of the rolling file appender and implement the feature yourself. As far as I can tell, you cannot derive from the class and override the behavior, since the date pattern is used in private methods.

Log4J print empty line to logfile

In Java you can use System.out.println() to print a blank line, but how does that work with log4j when writing to a logfile?
There I also want to have one or more blank lines inside the logfile.
I have already read the following:
log4j how to append blank line
But it is not really helpful, because with
logger.debug("\n");
logger.debug("");
you do not print just the message; the other information, such as the time and so on (the layout of the logger), is still written to the logfile. But I just want a totally empty line.
Can anyone help me, please?
Log4j is not designed to write blank lines.
You cannot find this option on the logger, because the logger is independent of its appenders, which write to a file, the console, or something else.
I think you need to create a custom FileAppender that checks the log message before writing. If your message equals "\n", you append a blank line without the layout by writing to the file on your own and skipping the normal logging with the layout. Then you can use logger.debug("\n").
Also, it is bad practice to add blank lines. It is similar to splitting a log message over several lines: you want each log message on a single line, so it is easy to parse with log viewer tools like Chainsaw or OtrosLogViewer. One exception is stack traces.
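A minimal sketch of that idea, assuming log4j 1.x (the class name BlankLineFileAppender is made up here; subAppend, qw, and immediateFlush come from WriterAppender, which FileAppender extends):

import org.apache.log4j.FileAppender;
import org.apache.log4j.Layout;
import org.apache.log4j.spi.LoggingEvent;

public class BlankLineFileAppender extends FileAppender {
    @Override
    protected void subAppend(LoggingEvent event) {
        if ("\n".equals(event.getRenderedMessage())) {
            // bypass the layout and emit a bare line separator
            qw.write(Layout.LINE_SEP);
            if (immediateFlush) {
                qw.flush();
            }
        } else {
            super.subAppend(event); // normal formatted logging
        }
    }
}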
Create your own appender with an empty layout and a logger that uses that appender. Then write the blank line using that logger. Make certain to set the additivity attribute of that logger to false so that the root logger doesn't also write the message using the formatting of its appenders.
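Wired up in code, that could look roughly like this (a sketch for log4j 1.x; the logger name "blank", the file blank.log, and the pattern %m%n are just example choices, where %m%n plus an empty message yields nothing but a line separator):

import java.io.IOException;
import org.apache.log4j.FileAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class BlankLineDemo {
    public static void main(String[] args) throws IOException {
        Logger blank = Logger.getLogger("blank");
        blank.setAdditivity(false); // keep the root logger's appenders out of it
        // %m%n prints only the message plus a newline, so an empty message gives a blank line
        blank.addAppender(new FileAppender(new PatternLayout("%m%n"), "blank.log", true));
        blank.info(""); // writes a single empty line to blank.log
    }
}

Strictly speaking the layout here is %m%n rather than literally empty, since the appender still has to emit the line separator itself.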
