PowerShell multithreading log4net interleaved log issue

I have a PowerShell script that uses log4net to manage its logs. The logs are written to log files and to an MS SQL database. The script uses multithreading with runspaces.
The issue is that multiple threads manage several objects and log lots of data about different objects at the same time, and I need to regroup the logs by object. An example will help me explain myself better! ^^
log file line 1 OBJECT 1.ACTION 1
log file line 2 OBJECT 1.ACTION 2
log file line 3 OBJECT 1.ACTION 3
log file line 4 object2.action1
log file line 5 object3.action1
log file line 6 OBJECT 1.ACTION 4
log file line 7 object2.action2
log file line 8 OBJECT 1.ACTION 5
…
To manage this interleaved logging issue, I planned to log in memory, for example in a list, and at the end of processing each object, block the other threads using a mutex and write all the logs with a foreach loop.
# Main: inside each runspace, buffer the logs for one object
$Logs = [System.Collections.Generic.List[string]]::new()

# Treat object
$Logs.Add('log1')   # Action1
$Logs.Add('log2')   # Action2
# ...

# Flush under a named mutex so only one runspace writes at a time
$mutex = [System.Threading.Mutex]::new($false, 'Global\LogFlush')
[void]$mutex.WaitOne()
try {
    foreach ($log in $Logs) {
        # write to the log file and to the SQL DB here
    }
}
finally {
    $mutex.ReleaseMutex()
}
I would like to know if there is any better solution for managing this interleaved logging issue with multiple runspaces, please.
Can log4net perhaps natively manage this, i.e. store all the logs in memory and “commit” the writes only when I issue a command? Or is there some other solution that avoids the mutex?

One way would be to use one logger per group, and if you don't know the number of groups in advance you could just create the loggers dynamically.
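For example, here is a minimal sketch of dynamic logger creation from PowerShell, assuming log4net.dll is available and configured; the path and logger names are illustrative:

Add-Type -Path 'C:\libs\log4net.dll'           # illustrative path to log4net.dll
[log4net.Config.XmlConfigurator]::Configure()  # or Configure([System.IO.FileInfo]'log4net.config')

foreach ($object in $objects) {
    # GetLogger returns the same instance for a given name, so this is cheap
    $logger = [log4net.LogManager]::GetLogger("Object.$($object.Name)")
    $logger.Info('ACTION 1')
    $logger.Info('ACTION 2')
}

Each per-object logger can then be routed to its own appender, or filtered, in the log4net configuration.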
If you prefer to work with one logger, the best practice is usually to log things as they happen and do the grouping afterwards, for example when the logs are displayed.
That is trivial to do in SQL by adding a column for the grouping criterion; for a text file you could use the Unix sort command.
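For instance, with a grouping column in place, regrouping at read time is a single query (the table and column names here are made up):

SELECT object_name, log_time, message
FROM   logs
ORDER  BY object_name, log_time;

For the text-file case, something like sort -s -k1,1 logfile gives the same stable grouping by the first field.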

Related

Azure adds a timestamp at the beginning of logs

I have a problem retrieving the logs from my Docker containers with Azure Log Analytics. All logs are retrieved correctly, but Azure adds a date at the beginning of each line of the log, which means that an entry is created for each line and I can't analyze my logs correctly because they are split up...
For example, in this image the black rectangle shows the added date (added by Azure, I think) and the red rectangle shows the date that appears in my own logs:
Also, even when there is no date on a line of my logs, a date is still added to every line, even the empty ones.
The problem is that Azure cuts my log file line by line, adding a date to each line, when I would like it to delimit entries using the dates already present in my log files.
Do you have any solutions?
One solution I can think of is that, when you query the logs, you can use the replace() function to strip the redundant date (replace it with an empty string, etc.). You need to write the proper regular expression for your purpose.
A dummy query like the one below:
ContainerLog
| extend new_logEntry=replace(@'xxx', @'xxx', LogEntry)
Currently, Azure Monitor for containers doesn't support multi-line logging, but there are workarounds available: you can configure all the services to write in JSON format, and then Docker/Moby will write each entry as a single line.
https://learn.microsoft.com/fr-fr/azure/azure-monitor/insights/container-insights-faq#how-do-i-enable-multi-line-logging
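To illustrate why that works (this payload is invented): in JSON, the newlines inside a multi-line message are escaped, so a whole stack trace stays on one physical line and gets exactly one timestamp:

{"level":"ERROR","time":"2020-01-01T00:00:00Z","message":"Boom\n  at Foo.bar(Foo.java:10)\n  at Main.main(Main.java:5)"}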

Logstash Read a Property File

I am looking for a way of reading a property file in a Logstash config file so that I can do some data transformation based on the property file's values. For example, I could skip processing type 1 events and send them to index A, and process type 2 events and send them to index B.
If I understand your question correctly, note that logstash will read all the files in your config directory. You can put different processing blocks in different config files, which makes for a nice separation of code. Be sure that each block is wrapped in a conditional so that they don't all run for all events.
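A minimal sketch of that conditional wrapping, with the type names and index names made up:

filter {
  if [type] == "type2" {
    mutate { add_tag => ["processed"] }   # stand-in for the real transformation
  }
}

output {
  if [type] == "type1" {
    elasticsearch { index => "index-a" }  # type 1: passed through untouched
  }
  if [type] == "type2" {
    elasticsearch { index => "index-b" }
  }
}

Each such block can live in its own file in the config directory; Logstash concatenates them all.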

tailLines and sinceTime in the logging API both don't work simultaneously

I am using Container Engine, and my pods are hosted there.
I am trying to fetch logs using the log API:
http://localhost:8000/api/v1/namespaces/app-test/pods/designer-0/log?tailLines=100&sinceTime=2017-09-17T10:47:58Z
If I use both query params separately, it works and shows the proper result, but if I use them simultaneously, only the last 100 lines are returned and the sinceTime param gets ignored.
My scenario is that I need the logs from a specific time onwards, in chunks of 100 lines at a time.
I am not sure whether it is a bug or whether it is just not implemented.
I found this from the api reference manual
https://kubernetes.io/docs/api-reference/v1.6/
tailLines - If set, the number of lines from the end of the logs to
show. If not specified, logs are shown from the creation of the
container or sinceSeconds or sinceTime
So that means if you specify tailLines, it starts from the end. I don't see any option explicitly mentioned other than limitBytes, but you will have to play around with it, as it does not guarantee a number of lines.
tailLines=X tells the server to start that many lines from the end
sinceTime tells the server to start from the specified time
the options are mutually exclusive
Thanks all,
I later realized that it is not ignoring sinceTime; tailLines' intended functionality is to return lines from the end.
So if I specify sinceTime = 10 PM yesterday, it will return the records from that time, and if tailLines is also specified, it will return the most recent lines from that chunk.
So it was working as expected. I need to play with limitBytes to get the logs in chunks from that time, instead of the full logs.
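A rough sketch of that chunked approach using kubectl instead of the raw API (the pod and namespace names are taken from the question; the chunk size is arbitrary); it pages forward by reading the timestamp that --timestamps prefixes to each line:

$since = '2017-09-17T10:47:58Z'

while ($true) {
    $chunk = kubectl logs designer-0 -n app-test --timestamps --since-time=$since --limit-bytes=65536
    if (-not $chunk) { break }          # nothing newer than $since

    $chunk | ForEach-Object { $_ }      # process the chunk here

    # advance the cursor to the timestamp of the last line we saw
    $since = (@($chunk)[-1] -split ' ', 2)[0]
}

Note this can skip or repeat lines that share the exact same timestamp, so it is only a starting point.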

Log4J print empty line to logfile

In Java you can use System.out.println() to print a blank line, but how does that work with log4j in a logfile?
There I also want to have one or more blank lines inside the logfile.
I already read the following:
log4j how to append blank line
But this is not really helpful, because with
logger.debug("\n");
logger.debug("");
you do not print only the message; the other information, like the time and so on (the layout of the logger), is still written to the logfile. But I just want a totally empty line.
Can anyone help me, please?
Log4j is not designed to write blank lines.
You cannot find this option in the logger, because the logger is independent of its appenders, which write to a file, the console, or something else.
I think you need to create a custom FileAppender that checks the log message before writing. If your message.equals("\n"), you append a blank line without the layout, by accessing the file on your own and skipping the normal logging with the layout; then you can use Logger.debug("\n").
Also, it is bad practice to add blank lines. It is similar to splitting your log message over several lines. You want each log message on one line so that it is easy to parse for log-viewer tools like Chainsaw or OtrosLogViewer. One exception is stack traces.
Create your own appender with an empty layout and a logger that uses that appender, then write the blank line using that logger. Make certain to set the additivity attribute of the logger to false, so that the root logger doesn't also write the message using the formatting of its appenders.
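A minimal sketch of that setup in log4j 1.x properties format; the logger name, appender name and file path are all made up, and the %m%n pattern emits only the message, so logging an empty string yields a bare newline:

# dedicated blank-line logger, writing to the same file as the normal appender
log4j.logger.blankline=DEBUG, blank
log4j.additivity.blankline=false

log4j.appender.blank=org.apache.log4j.FileAppender
log4j.appender.blank.File=app.log
log4j.appender.blank.layout=org.apache.log4j.PatternLayout
log4j.appender.blank.layout.ConversionPattern=%m%n

In code, Logger.getLogger("blankline").debug("") then writes just the newline.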

Perl program structure for parsing

I've got a question about program architecture.
Say you've got 100 different log files with different formats, and you need to parse them and put that info into an SQL database.
My view of it is like this:
use a general config file like:
program1->name1("apache",/var/log/apache.log) (modulename,path to logfile1)
program2->name2("exim",/var/log/exim.log) (modulename,path to logfile2)
....
sqldb->configuration
use something like a module (one file per program), e.g. type1.module (regexp, log structure (some variables), SQL (tables and functions))
fork or thread processes (I don't know which is better on Linux now) for the different programs.
So the question is: is my view of this correct? Should I use one module per program (web/MTA/iptables), or is there some better way? I think some regexps would be the same, like date/time/IP/URL. What to do with that? Or what have I missed?
example: MTA exim4 mainlog
2011-04-28 13:16:24 1QFOGm-0005nQ-Ig <= exim@mydomain.org.ua H=localhost (exim.mydomain.org.ua) [127.0.0.1]:51127 I=[127.0.0.1]:465 P=esmtpsa X=TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32 CV=no A=plain_server:spam S=763 id=1303985784.4db93e788cb5c@mydomain.org.ua T="test" from <exim@exim.mydomain.org.ua> for test@domain.ua
Everything highlighted is already parsed and will be put into the sqldb.incoming table. For now I have a structure in Perl that holds every parsed variable, like $exim->{timestamp} or $exim->{host}->{ip}.
My program will do something like tail -f /file and parse it line by line.
Flexibility: let's say I want to add support for the Apache server (just timestamp, user IP and file downloaded). All I need to know is which logfile to parse, what the regexp should be and what the SQL structure should be. So I'm planning to have this as a module: just fork or thread the main process with parameters (logfile, filetype). Maybe later I would add some options for what not to parse (maybe some log level is low and you just don't see much there).
I would do it like this:
Create a config file that is formatted like this: appname:logpath:logformatname
Create a collection of Perl classes that inherit from a base parser class.
Write a script which loads the config file and then loops over its contents, passing each iteration to its appropriate handler object.
If you want an example of steps 1 and 2, we have one in our project. See MT::FileMgr and MT::FileMgr::* here.
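A rough sketch of steps 1 to 3, with every package name, file name and field invented for illustration:

#!/usr/bin/perl
use strict;
use warnings;

# map logformatname -> hypothetical handler classes inheriting from LogParser::Base
my %parser_for = (
    apache => 'LogParser::Apache',
    exim   => 'LogParser::Exim',
);

# step 3: read the appname:logpath:logformatname config and dispatch
open my $cfg, '<', 'parsers.conf' or die "parsers.conf: $!";
while (my $line = <$cfg>) {
    chomp $line;
    my ($app, $path, $format) = split /:/, $line, 3;

    my $class = $parser_for{$format} or die "no parser for '$format'";
    eval "require $class" or die $@;

    # each handler carries its own regexps and SQL target
    $class->new(logpath => $path)->run;
}
close $cfg;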
The log-monitoring tool wots could do a lot of the heavy lifting for you here. It runs as a daemon, watching as many log files as you want, running any combination of Perl regexes over them and executing something when matches are found.
I would be inclined to modify wots itself (which its licence freely allows) to support a database write method; have a look at its existing handle_* methods.
Most of the hard work has already been done for you, and you can tackle the interesting bits.
I think File::Tail is a nice fit.
You can make an array of File::Tail objects and poll them with select, like this:
use File::Tail;

# @logfile_paths is your list of log files to watch (illustrative name)
my @files = map { File::Tail->new(name => $_) } @logfile_paths;
my $timeout = 10;

while (1) {
    my ($nfound, $timeleft, @pending) =
        File::Tail::select(undef, undef, undef, $timeout, @files);
    unless ($nfound) {
        # timeout - do something else here, if you need to
    } else {
        foreach (@pending) {
            # here you can handle log messages depending on filename
            print $_->{"input"} . " (" . localtime(time) . ") " . $_->read;
        }
    }
}
(adapted from the File::Tail documentation)
