Forget for a second the question of why on earth you would do such a thing: if, for whatever reason, two FileAppenders are configured with the same file, will this setup work?
Log4j's FileAppender has no support for two JVMs writing to the same file. If you try, you'll get a corrupt log file. However, logback, log4j's successor, has a prudent mode that allows appenders, even ones running in different JVMs, to write safely to the same file.
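For reference, prudent mode is enabled per appender in the logback configuration; a minimal sketch (the file name and pattern are just placeholders):

    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
      <file>logs/shared.log</file>
      <!-- prudent mode serializes writes via an OS-level file lock,
           so several JVMs can safely share the file -->
      <prudent>true</prudent>
      <encoder>
        <pattern>%d %-5level [%thread] %logger - %msg%n</pattern>
      </encoder>
    </appender>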
It doesn't directly answer your question, but log4net's FileAppender has a LockingModel property that you can set so the file is locked only while it is actually in use. So if you had two FileAppenders working in the same thread with MinimalLock set, it would probably work fine. On different threads, you might occasionally run into locking conflicts.
The FileAppender supports pluggable file locking models via the LockingModel property. The default behavior, implemented by FileAppender.ExclusiveLock, is to obtain an exclusive write lock on the file until the appender is closed. The alternative model, FileAppender.MinimalLock, only holds a write lock while the appender is writing a logging event.
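In log4net's XML configuration that looks roughly like this (a sketch; the file name and layout are placeholders):

    <appender name="SharedFileAppender" type="log4net.Appender.FileAppender">
      <file value="shared.log" />
      <appendToFile value="true" />
      <!-- MinimalLock acquires the write lock only for the duration of each write -->
      <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date %-5level %logger - %message%newline" />
      </layout>
    </appender>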
A cursory web search didn't turn up any useful results about implementing MinimalLock in log4j.
From the Log4j FAQ, a3.3:
How do I get multiple processes to log to the same file?
You may have each process log to a SocketAppender. The receiving SocketServer (or SimpleSocketServer) can receive all the events and send them to a single log file.
As to what that actually means, I will be investigating myself.
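For anyone else digging into this, the setup appears to be: run a socket server in one process and point every other process at it. A rough sketch with log4j 1.x (the port number and file names are just examples):

    # Server process: collects events from all clients and writes the single file
    # (server-log4j.properties configures the actual FileAppender)
    java -cp log4j-1.2.17.jar org.apache.log4j.net.SimpleSocketServer 4712 server-log4j.properties

    # Client log4j.properties: ship events to the server instead of writing the file directly
    log4j.rootLogger=INFO, socket
    log4j.appender.socket=org.apache.log4j.net.SocketAppender
    log4j.appender.socket.RemoteHost=localhost
    log4j.appender.socket.Port=4712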
I also found the following workaround on another SO question:
Code + Example
Our current approach is to:
Send all events to Splunk (through Splunk's own log4j-appender).
Define Splunk alerts, which trigger Moogsoft.
Obviously, this increases latency and relies on Splunk more than necessary, which makes me wonder whether someone has already developed a Moogsoft appender for log4j.
A simple search hasn't brought anything up -- hence this question.
I haven't done this, but log4j has a SocketAppender:
https://howtodoinjava.com/log4j/log4j-socketappender-and-socket-server-example/
That might fit with Moogsoft's Socket LAM:
https://docs.moogsoft.com/en/configure-the-socket-lam.html
Alternatively:
https://github.com/logstash/log4j-jsonevent-layout
This gives log4j a JSON layout, whose output could then be received with a REST LAM.
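Per that project's README, you attach the layout to whichever appender you use; a sketch in log4j.properties (the appender choice and file name are just examples):

    log4j.rootCategory=INFO, RollingLog
    log4j.appender.RollingLog=org.apache.log4j.DailyRollingFileAppender
    log4j.appender.RollingLog.File=api.log
    log4j.appender.RollingLog.layout=net.logstash.log4j.JSONEventLayoutV1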
I don't know of anyone that has put together an actual appender, but I don't think you'd need one. An HTTP appender with a JSON layout sending to a Moogsoft REST adapter should be able to do the job, and might be a lot easier to set up than handling raw bytes off a socket.
I haven't done it so I'm not sure how much work it would be to set up. I suspect there's some work involved on either the log4j side to get the layout to look like Moogsoft wants it, or on the Moogsoft side to normalize what it receives.
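If you're on a recent log4j 2, that combination might look something like this (the endpoint URL is hypothetical, and the payload would still need to match what Moogsoft's REST LAM expects):

    <Appenders>
      <Http name="MoogsoftHttp" url="https://moog.example.com/events">
        <!-- compact + eventEol emits one JSON object per event;
             JsonLayout requires jackson-databind on the classpath -->
        <JsonLayout compact="true" eventEol="true" properties="true"/>
      </Http>
    </Appenders>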
I have a test suite harness which is used to run test scripts (classes defined therein, actually). As it iterates through the tests, it manipulates the Python logger so that the log messages are all output to different files, each associated with its own test class. This works fine for tests run sequentially, where I can control the log handlers in the root logger, which lets all log messages (from whatever libraries the test classes may use) end up in the proper test log file.
But what I am really trying to figure out is how to run such tests in parallel (via threading or multiprocessing) such that each thread gets its own log file for all of its messages.
I believe that I still need to manipulate the root logger, because that is the only place where both the tests and the libraries they use converge to do their logging.
I was thinking that I could add a handler for each thread containing a log filter that only accepts records from a particular thread, and that would get me close (I haven't tried this yet, but it seems possible in theory). That might even be the full solution, except for one thing: I cannot tell test writers not to use threads themselves in their tests. If they do, this solution fails again. I'm fine with test-internal threads all logging to the one file, but those new threads would fail to log to the file their parent thread is logging to, because the filter doesn't know anything about them.
And I could be mistaken, but it seems that threading.Thread objects cannot determine their own parent thread, which precludes a better filter that accepts messages generated in a thread or any of its child/descendant threads.
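To make the idea concrete, here is a minimal sketch of the per-thread filter described above (the names are made up, and it deliberately does not solve the child-thread problem):

    import logging
    import threading

    class ThreadLogFilter(logging.Filter):
        """Accept only records emitted from the thread with the given name."""
        def __init__(self, thread_name):
            super().__init__()
            self.thread_name = thread_name

        def filter(self, record):
            # record.threadName is set by the logging machinery for every record
            return record.threadName == self.thread_name

    def run_test(test_name):
        # One handler per test thread, writing to that test's own file
        handler = logging.FileHandler(f"{test_name}.log")
        handler.addFilter(ThreadLogFilter(threading.current_thread().name))
        root = logging.getLogger()
        root.addHandler(handler)
        try:
            logging.getLogger(test_name).info("running %s", test_name)
        finally:
            root.removeHandler(handler)
            handler.close()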
Any suggestions about how to approach this would be great.
Thanks,
Bruce
Hi,
Say I have a file called something.txt. I would like to find the most recent program to modify it, specifically the full path to that program (e.g. /usr/bin/nano). I only need to worry about files modified while my program is running, so I can add an event listener at program startup and find out afterwards which program modified the file.
Thanks!
auditd on Linux can record which process modified a file.
See the following URI: xmodulo.com/how-to-monitor-file-access-on-linux.html
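A rough sketch with the audit userland tools (the path and key name are placeholders):

    # Watch the file for writes and attribute changes, tagged with a searchable key
    auditctl -w /path/to/something.txt -p wa -k something-watch

    # Later, look up matching events; the SYSCALL records include an exe= field
    # with the full path of the program that performed the write
    ausearch -k something-watch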
Something like this generally isn't going to be possible for arbitrary processes. If these aren't arbitrary processes, you could use some sort of network bus (e.g. redis) to publish "write" messages. Otherwise, your only other option would be to implement your own filesystem using FUSE. Even with FUSE, though, you may not always have access to the pid, depending on who/what is writing to the file and the security setup of your OS.
In my nlog configuration, I've set
<targets async="true">
with the understanding that all logging now happens asynchronously to my application workflow. (and I have noticed a performance improvement, especially on the Email target).
This has me thinking about log sequence though. I understand that with async, one has no guarantee of the order in which the OS will execute the async work. So if, in my web app, multiple requests come in to the same method, each logging their occurrence to NLog, does this really mean that the sequence in which the events appear in my log target will not necessarily be the sequence in which the log method was called by the various requests?
If so, is this just a consequence of async that one has to live with? Or is there something I can do to make my logs reflect the correct sequence?
Unfortunately this is something you have to live with. If it is important to maintain the sequence, you'll have to log synchronously.
But if it is possible for you to maintain a sequence number in the log message itself, that could be a solution.
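For example, NLog ships a ${counter} layout renderer that stamps an incrementing value on each rendered event, which would let you reconstruct the order afterwards; a sketch (target name and file are placeholders, and you'd want to verify the counter is rendered on the calling thread rather than in the background writer):

    <targets async="true">
      <target name="file" xsi:type="File" fileName="app.log"
              layout="${counter}|${longdate}|${level}|${message}" />
    </targets>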
I know this is old and I'm just ramping up on NLog, but if the performance increase comes mainly from the email client, you may want to apply async to just the Email target.
Activating <targets async="true"> will not make NLog reorder the LogEvent sequence. It just activates an internal queue that provides better handling of bursts and enables batch writing.
If a single thread writes 1000 LogEvents, they will NOT become out-of-order because of the async handling.
If 10 threads each write 1000 LogEvents, their logging will mix together, but the LogEvents of each individual thread will be in the CORRECT order.
But be aware that <targets async="true"> uses overflowAction=Discard as the default. See also: https://github.com/nlog/NLog/wiki/AsyncWrapper-target#async-attribute-will-discard-by-default
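If discarding is not acceptable, the async="true" shorthand can be replaced with an explicit AsyncWrapper target, where the overflow behavior is configurable; a sketch (names and limits are placeholders):

    <targets>
      <!-- overflowAction can be Discard, Grow, or Block -->
      <target name="asyncFile" xsi:type="AsyncWrapper" overflowAction="Block" queueLimit="10000">
        <target name="file" xsi:type="File" fileName="app.log" />
      </target>
    </targets>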
For more details about performance, see: https://github.com/NLog/NLog/wiki/performance
Is there a way to remove a Logger once it has been added? Say via:
LogManager.GetLogger("loggerName")
In my research, it would appear it is not possible. The closest I have found is the ability to call the hierarchy's clear method. This will indeed clear out the existing loggers, but I would like to remove them selectively; not to mention that clearing everything likely isn't the safest thing to be doing.
For some background, I am logging one file per task where there could be potentially thousands of concurrent tasks and hundreds of thousands of tasks per application lifetime. One approach is to create a Logger for each task and then remove it once the task has completed. Without the ability to selectively remove a Logger though, memory will get chewed up by the retired instances.
Of course, there are alternative designs that would work. The problem could be addressed by adding/removing Appenders and Filters as necessary to a given Logger. Also, a pool of Loggers and Appenders could be created and then configured per task.
It obviously isn't a show stopper if there is no way to remove a specific logger once it is added. I'm just curious if there is a way to delete a Logger that I may have missed?
Wrapping this up since it has been open a long time. I haven't found a way to remove a logger by itself, and nothing I've seen in the documentation suggests it is possible either. The solution that worked for me was to share a pool of loggers that would only grow to the maximum number of concurrent tasks. Each task could take a logger from the pool, with an appender added specific to that task. On task completion, the appender is removed and the logger returned to the pool.
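For illustration, a minimal sketch of that pool against the log4j 1.x API (class, logger, and file names are made up):

    import java.io.IOException;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.function.Consumer;

    import org.apache.log4j.FileAppender;
    import org.apache.log4j.Logger;
    import org.apache.log4j.PatternLayout;

    public class TaskLoggerPool {
        private final BlockingQueue<Logger> pool;

        public TaskLoggerPool(int maxConcurrentTasks) {
            pool = new ArrayBlockingQueue<>(maxConcurrentTasks);
            for (int i = 0; i < maxConcurrentTasks; i++) {
                Logger logger = Logger.getLogger("task-pool." + i);
                logger.setAdditivity(false); // keep task output out of the root appenders
                pool.add(logger);
            }
        }

        // Borrow a pooled logger, attach a per-task file appender, run the task,
        // then detach the appender and return the logger to the pool.
        public void runTask(String taskId, Consumer<Logger> task)
                throws InterruptedException, IOException {
            Logger logger = pool.take();
            FileAppender appender =
                    new FileAppender(new PatternLayout("%d %-5p - %m%n"), taskId + ".log");
            logger.addAppender(appender);
            try {
                task.accept(logger); // the task logs through the borrowed logger
            } finally {
                logger.removeAppender(appender);
                appender.close();
                pool.put(logger);
            }
        }
    }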
No, it isn't possible. This is specifically mentioned in their FAQ these days:
Logger instances seem to be create only. Why isn't there a method to remove logger instances?
It is quite nontrivial to define the semantics of a "removed" logger which is still referenced by the user.
Source: Ability to programmatically remove a Logger