Is there a log4j appender for Moogsoft?

Our current approach is to:
Send all events to Splunk (through Splunk's own log4j-appender).
Define Splunk alerts, which trigger Moogsoft.
Obviously, this increases latency and relies on Splunk more than necessary, which makes me wonder whether someone has already developed a Moogsoft appender for log4j.
A simple search hasn't brought anything up -- hence this question.

I haven't done this, but log4j has a SocketAppender
https://howtodoinjava.com/log4j/log4j-socketappender-and-socket-server-example/
that might fit with Moogsoft's Socket LAM:
https://docs.moogsoft.com/en/configure-the-socket-lam.html
Alternatively:
https://github.com/logstash/log4j-jsonevent-layout
adds a JSON layout to log4j, whose output could then be received with a REST LAM.
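One wrinkle: the classic Log4j 1 SocketAppender ships serialized Java LoggingEvent objects, which a Socket LAM would not parse, whereas Log4j 2's Socket appender can apply a text layout. A minimal, untested sketch of a log4j2.xml along those lines (the Socket LAM's host and port are placeholders):
<Configuration>
  <Appenders>
    <!-- JsonLayout (needs jackson-databind on the classpath) so the LAM
         receives parseable text rather than serialized Java objects -->
    <Socket name="MoogSocket" host="moog-lam-host" port="8411" protocol="TCP">
      <JsonLayout compact="true" eventEol="true" properties="true"/>
    </Socket>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="MoogSocket"/>
    </Root>
  </Loggers>
</Configuration>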

I don't know of anyone that has put together an actual appender, but I don't think you'd need one. An HTTP appender with a JSON layout sending to a Moogsoft REST adapter should be able to do the job, and might be a lot easier to set up than handling raw bytes off a socket.
I haven't done it so I'm not sure how much work it would be to set up. I suspect there's some work involved on either the log4j side to get the layout to look like Moogsoft wants it, or on the Moogsoft side to normalize what it gets sent.
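For what it's worth, Log4j 2 ships an Http appender, so a sketch of that route could look like the following (untested; the REST LAM's URL and port are assumptions, and the JSON field names would likely still need mapping on the Moogsoft side):
<Configuration>
  <Appenders>
    <!-- POSTs each event as a JSON document to the configured endpoint -->
    <Http name="MoogRest" url="http://moog-server:8888/events">
      <JsonLayout compact="true" properties="true"/>
    </Http>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="MoogRest"/>
    </Root>
  </Loggers>
</Configuration>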

Related

Avoid collapsing several messages into one with node.js winston + logstash

In node.js, having set up winston and logstash...
I observe in the logstash user interface (Kibana) that oftentimes several logging messages are tucked into one row as if they were a single message. Any quick shot at which component is causing this and how it can be avoided?
Although message grouping could be nice in general, the messages are collapsed quite arbitrarily, and it is detrimental: the structure of such a group differs from that of a regular message, which really doesn't help when mining the data.
Hopefully the transport sends over chunks to save on communication overhead, but I would very much like each message emitted by my code to winston to remain a single message and not get grouped with other ones.
(I am currently using winston-logstash for funneling from winston to logstash.)
It seems the logstash split filter avoids this. (Its documentation is poor, by the way.)
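For reference, a minimal sketch of that filter in the logstash pipeline config (assuming the collapsed messages arrive newline-joined in the default message field):
filter {
  split {
    # re-emit each line of a newline-joined "message" as its own event;
    # "field" defaults to "message" and "terminator" defaults to "\n"
    field => "message"
  }
}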

Reporting progress on a million call process

I have a console/desktop application that crawls a lot of data (think a million calls) from various webservices. At any given time I have about 10 threads performing these calls and aggregating the data into a MySql database. All seeds are also stored in a database.
What would be the best way to report its progress? By progress I mean:
How many calls already executed
How many failed
What's the average call duration
How much is left
I thought about logging all of them somehow and tailing the log to get the data. Another idea was to offer some kind of output to an always-open TCP endpoint, where some form of UI could read the data and display some aggregation. Both ways look too rough and too complicated.
Any other ideas?
The "best way" depends on your requirements. If you use a logging framework like NLog, you can plug in a variety of logging targets like files, databases, the console or TCP endpoints.
You can also use a viewer like Harvester as a logging target.
When logging multi-threaded applications I sometimes have an additional thread that writes a summary of progress to the logger once every so often (e.g. every 15 seconds).
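As a rough illustration of that reporter-thread pattern (the question doesn't name a language, so this is a Java sketch; the class and counters are invented for the example, and the printf could just as well go to your logger):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class ProgressReporter {
    // counters the ~10 worker threads bump after every webservice call
    private final AtomicLong completed = new AtomicLong();
    private final AtomicLong failed = new AtomicLong();
    private final AtomicLong totalMillis = new AtomicLong();
    private final long totalCalls;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public ProgressReporter(long totalCalls) { this.totalCalls = totalCalls; }

    public void recordSuccess(long elapsedMillis) {
        completed.incrementAndGet();
        totalMillis.addAndGet(elapsedMillis);
    }

    public void recordFailure() { failed.incrementAndGet(); }

    // the one extra thread: writes a summary every 15 seconds
    public void start() {
        scheduler.scheduleAtFixedRate(() -> {
            long done = completed.get();
            long avg = done == 0 ? 0 : totalMillis.get() / done;
            System.out.printf("progress: %d/%d done, %d failed, avg %d ms/call%n",
                    done, totalCalls, failed.get(), avg);
        }, 15, 15, TimeUnit.SECONDS);
    }

    public void stop() { scheduler.shutdown(); }
}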
Since it is a console application, just use WriteLine and have the application spit the important stuff out to the console.
I did something similar in an application I created to export PDFs from a SQL Server database back into PDF format.
You can do it many different ways. If you are counting records and their size, you can run a tally of sorts and have it show the total every so many records.
I also wrote out to a text file so that I could keep track of all the PDFs and which case numbers they went to, and things like that. That information is in the answer I gave to the question linked above.
You could also write the statistics out to a text file every so often.
The logger that Eric J. mentions is probably going to be a little easier to implement and would be a nice tool for your toolbox.
These options are just as valid, depending on your specific needs.

Nlog Async and Log Sequence

In my nlog configuration, I've set
<targets async="true">
with the understanding that all logging now happens asynchronously to my application workflow. (and I have noticed a performance improvement, especially on the Email target).
This has me thinking about log sequence though. I understand that with async, one has no guarantee of the order in which the OS will execute the async work. So if, in my web app, multiple requests come in to the same method, each logging their occurrence to NLog, does this really mean that the sequence in which the events appear in my log target will not necessarily be the sequence in which the log method was called by the various requests?
If so, is this just a consequence of async that one has to live with? Or is there something I can do to make my logs reflect the correct sequence?
Unfortunately this is something you have to live with. If it is important to maintain the sequence you'll have to run it synchronously.
But if it is possible for you to manually maintain a sequence number in the log message, it could be a solution.
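One way to do that without touching the message text is NLog's ${sequenceid} layout renderer, which stamps each event with the order in which it was created, so an async-flushed log can be re-sorted afterwards. A sketch, assuming your NLog version provides that renderer:
<targets async="true">
  <!-- ${sequenceid} records the creation order of each event -->
  <target name="file" xsi:type="File" fileName="app.log"
          layout="${sequenceid}|${longdate}|${level}|${message}" />
</targets>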
I know this is old and I'm just ramping up on NLog, but if you see the performance increase mainly for the email target, you may want to enable async for the email target only.
Activating <targets async="true"> will not make NLog reorder the LogEvent sequence. It just activates an internal queue, which gives better handling of bursts and enables batch writing.
If a single thread writes 1000 LogEvents, they will NOT become out of order because of async handling.
If 10 threads each write 1000 LogEvents, their logging will mix together, but the LogEvents of each individual thread will stay in the CORRECT order.
But be aware that <targets async="true"> uses overflowAction=Discard by default. See also: https://github.com/nlog/NLog/wiki/AsyncWrapper-target#async-attribute-will-discard-by-default
For more details about performance, see also: https://github.com/NLog/NLog/wiki/performance
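If discarding is unacceptable, you can drop the async="true" shorthand and configure the AsyncWrapper target explicitly to change the overflow behaviour; a sketch (target names and file name are placeholders):
<targets>
  <!-- Block (or Grow) instead of the default Discard when the queue fills -->
  <target name="asyncFile" xsi:type="AsyncWrapper"
          overflowAction="Block" queueLimit="10000">
    <target name="file" xsi:type="File" fileName="app.log" />
  </target>
</targets>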

Usage of log4J levels

What is the best practice for using log4j levels while coding?
I mean, when do we use INFO logging, when DEBUG logging, when ERROR logging, etc.?
In general, I follow these guidelines:
DEBUG: low-level stuff - cache hits, cache misses, opening a db connection
INFO: events that have business meaning - creating a customer, charging a credit card...
WARN: could be a problem but doesn't stop your app - email address not found / invalid
ERROR: unexpected problem - couldn't open a db connection, etc.
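As a quick illustration of those guidelines in code (a hypothetical service using the Log4j 2 API; all names are invented for the example):
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class BillingService {
    private static final Logger log = LogManager.getLogger(BillingService.class);

    public void charge(String customerId, int amountCents) {
        log.debug("loaded customer {} from cache", customerId);               // DEBUG: low-level detail
        if (amountCents <= 0) {
            log.warn("invalid amount {} for {}, skipping charge",
                    amountCents, customerId);                                 // WARN: problem, app continues
            return;
        }
        log.info("charging customer {} ({} cents)", customerId, amountCents); // INFO: business event
        try {
            // ... call out to the payment gateway here ...
        } catch (RuntimeException e) {
            log.error("charge failed for customer {}", customerId, e);        // ERROR: unexpected problem
            throw e;
        }
    }
}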
My baseline is always that INFO level is equivalent to System.out, and ERROR is equivalent to System.err.
DEBUG - Here you put all your tracing information, specifically the information that you don't want to see when your "comfort level" is System.out.
INFO - use this one for general messages, progress messages, for application related messages, but not for tracing.
WARN - provide alerts that something is wrong, perhaps unexpected, or that a workaround is used, but the application can still continue (socket timeout/retries, invalid user input, etc.).
ERROR - alerts about problems that prevent your app from continuing normally, e.g. database is down, missing bootstrap configuration.
A common mistake when writing libraries is to use ERROR level to indicate problems with the calling application (the code that uses the library) instead of indicating real errors within the library itself. See for example this hibernate bug -> http://opensource.atlassian.com/projects/hibernate/browse/HHH-3731
This is really annoying because messages from ERROR level are really difficult to suppress, so use them only to indicate problems with your own code.
All - I don't really use this one, it is practically the same as DEBUG or TRACE.
The best way to learn is by example. Read the source of some open source projects - Tomcat, say, or whatever is in your application area - and see how they do it.
Even though this question is pretty old, this is really an important point that every developer should know, so I highly recommend you check the official Apache log4j page.
I have also found a useful image that describes this perfectly, taken from supportweb.cs.bham.ac.uk/documentation/tutorials/docsystem/build/tutorials/log4j/log4j.html
TRACE:
The lowest level of logging; provides the most detailed information.
DEBUG:
Log statements here are meant to help developers - the detailed state of your application.
INFO:
General business information - progress and state of your application.
WARN:
Warnings about unexpected events. These are not serious enough to abort your application.
ERROR:
Serious problems in your application.
Also having the right level of logging turned on in different environments is equally crucial.
Here are some guidelines I use:
TRACE: verbose logging for very low level debugging, things I would not normally need to see in a log unless there is some very obscure or unusual issue.
DEBUG: logging intended for developers' eyes only - contents of variables, results of comparisons, and other bits of information that help debug business logic.
INFO: high level information such as task X is now complete or some rule is satisfied and here is what I'm going to do next because of that rule.
WARN: there might be a problem but it's not severe enough to do any real harm to the flow of the business logic. For example maybe some variable is sometimes going to be null but we don't necessarily need it or we can work around it somehow. At the same time we still want to know about it just in case we're told later we need to find examples of it or investigate more carefully why it happens.
ERROR: A serious problem that definitely needs to be investigated further, but not serious enough to stop the application from accomplishing the task at hand.
FATAL: A very serious unexpected problem that we can't work around or recover from and may even prevent the application from doing something useful.
ERROR logging should always be on.
INFO + DEBUG should be on when tracking down problems/bugs.
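In log4j that per-environment tuning is just configuration; a minimal log4j2.xml sketch (package name invented) showing a default level for the environment, plus one package temporarily raised to DEBUG while chasing a bug:
<Configuration>
  <Appenders>
    <Console name="stdout"/>
  </Appenders>
  <Loggers>
    <!-- everyday default for this environment -->
    <Root level="info">
      <AppenderRef ref="stdout"/>
    </Root>
    <!-- temporarily raised while tracking down a problem -->
    <Logger name="com.example.billing" level="debug"/>
  </Loggers>
</Configuration>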
To what others mentioned, I'd add the TRACE and FATAL levels: the former is good for very verbose logging, the latter indicates a total showstopper. There are general guidelines on how to use the levels, as mentioned by others above. However, what matters most is how YOU will use them and how your USERS will interpret them. You need levels to focus on problems, so decide what counts as a problem in your case. Your users will hardly ever need trace or debug statements, but they will definitely want to nail down problems and report them to you.

Can two log4j fileappenders write to the same file?

Forget for a second the question of why on earth you would do such a thing - if, for whatever reason, two FileAppenders are configured with the same file, will this setup work?
Log4j's FileAppender does not allow two JVMs to write to the same file. If you try, you'll get a corrupt log file. However, logback, log4j's successor, in prudent mode allows two appenders, even in different JVMs, to write to the same file.
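For reference, prudent mode in logback is a single flag on the appender; a sketch of the logback.xml snippet (file name is a placeholder):
<appender name="FILE" class="ch.qos.logback.core.FileAppender">
  <file>shared.log</file>
  <!-- take an exclusive file lock around each write so multiple JVMs
       can append safely, at a significant cost in throughput -->
  <prudent>true</prudent>
  <encoder>
    <pattern>%d %-5level [%thread] %logger - %msg%n</pattern>
  </encoder>
</appender>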
It doesn't directly answer your question, but log4*net*'s FileAppender has a LockingModel attribute that you can set to only lock when the file is actually in use. So if you had two FileAppenders working in the same thread with MinimalLock set, it would probably work perfectly fine. On different threads, you might hit deadlock once in a while.
The FileAppender supports pluggable file locking models via the LockingModel property. The default behavior, implemented by FileAppender.ExclusiveLock is to obtain an exclusive write lock on the file until this appender is closed. The alternative model, FileAppender.MinimalLock, only holds a write lock while the appender is writing a logging event.
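In log4net configuration that looks roughly like this (file name is a placeholder):
<appender name="FileAppender" type="log4net.Appender.FileAppender">
  <file value="shared.log" />
  <!-- hold the write lock only while an event is actually being written -->
  <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date %-5level %logger - %message%newline" />
  </layout>
</appender>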
A cursory web search didn't turn up any useful results about implementing MinimalLock in log4j.
From Log4j FAQ a3.3
How do I get multiple process to log to the same file?
You may have each process log to a SocketAppender. The receiving SocketServer (or SimpleSocketServer) can receive all the events and send them to a single log file.
As to what that actually means, I will be investigating myself.
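In Log4j 1.x terms, the setup sketched in the FAQ would look something like this (ports and file names are placeholders; note the SocketAppender needs no layout because it sends serialized events, and the server side applies the layout):
# client.properties -- every process ships its events to the server
log4j.rootLogger=INFO, socket
log4j.appender.socket=org.apache.log4j.net.SocketAppender
log4j.appender.socket.RemoteHost=localhost
log4j.appender.socket.Port=4712
log4j.appender.socket.ReconnectionDelay=10000

# server.properties -- the one process that owns the file
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.File=combined.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d %-5p [%t] %c - %m%n
The receiving side is then started with: java -cp log4j-1.2.17.jar org.apache.log4j.net.SimpleSocketServer 4712 server.properties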
I also found the following workaround on another SO question:
Code + Example
