The Log4net FAQ says that log4net is thread-safe:
Question: Is log4net thread-safe?
Answer: Yes, log4net is thread-safe.
However, the manual for the AdoNetAppender class says the following:
Instance members are not guaranteed to be thread-safe.
My guess is that log4net is not thread-safe. I'm using this class to log to a database and manually flushing the buffered data from time to time, and I see that some records are duplicated.
Looking at the code, it appears to me that the base class BufferingAppenderSkeleton locks the object for Flush(), but SendFromBuffer(), which is eventually called, can also be invoked from other places without a lock (for instance from Append()). Therefore, it appears that the class is not thread-safe.
So should I conclude that the class itself is not thread-safe, but the way it is used within log4net makes it so?
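For reference, a simplified sketch of what that periodic manual flush might look like (the repository/appender lookup here is an illustrative assumption, not necessarily how the real code locates the appender):

using log4net;
using log4net.Appender;

// Periodic manual flush (simplified): find every buffering appender in the
// repository and ask it to send whatever it has buffered to the database.
foreach (IAppender appender in LogManager.GetRepository().GetAppenders())
{
    if (appender is BufferingAppenderSkeleton buffering)
    {
        buffering.Flush();
    }
}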
I'm running a profiler against a running Java service (Spring Boot framework) that contains multiple Groovy files, all with the @CompileStatic annotation.
Now one of the most time-consuming methods is an internal Groovy method (getSAMMethodImpl()). I've been unsuccessful in tracking down what this method actually does under the covers.
What exactly does this method do, and is there any way to prevent it from being called?
This method gets executed when a CachedClass for a class with a single abstract method (aka SAM) is created. Cached classes are Groovy's mechanism for dealing with reflection more efficiently - instead of re-introspecting classes from scratch at runtime, it remembers, for example, modifications applied through metaprogramming (such as adding new methods to classes), so it can retrieve all class information very quickly. Of course, this comes with some overhead.
For instance, when the meta class registry is initialized (once), it registers about 1180 methods, and about 190 of them cause CachedSAMClass.getSAMMethodImpl(Class<?> c) to be executed. This happens because ClassInfo.isSAM(Class<?> c), which checks whether a given class is a single abstract method class, calls this method. And if you take a look at ClassInfo.createCachedClass(Class klazz, ClassInfo classInfo), you will see that isSAM() always gets called as the last check.
In most cases, creating a registry of cached classes shouldn't be a problem - it happens once for each class. Most of them get registered when you simply access the metaClass property of any class, or when you create a first closure. When it comes to performance, many different factors matter. For instance, Spring Boot uses hot swapping to reload classes at runtime; in this case the Groovy meta class registry gets recreated and all cached classes have to be recreated as well. The same thing may happen when you run a Spring Boot application with the spring-boot-devtools dependency added - it uses an additional class loader called RestartClassLoader, which requires an additional meta class registry to be initialized. In fact, the meta class registry gets initialized once per class loader, so however many class loaders you have, that is how many times it will be initialized. RestartClassLoader also causes cached classes to be recreated when it restarts.
And last but not least - if you want to measure performance correctly, try doing it on a production server instead of a local dev environment. If you can attach a debugger to the running process on the server and put a breakpoint in CachedSAMClass.getSAMMethodImpl(Class<?> c) at line 169, you can see how many times, and for which classes, this method gets executed. If it gets executed multiple times for the same class, that may suggest your application is restarting its class loader and Groovy has to rebuild the meta class registry. That shouldn't happen - a production application, once started, should not make changes to its class loader without a purpose. It is acceptable in a local dev environment - devtools and hot swapping will force the meta class registry to be recreated any time the class loader gets refreshed.
Context
This is a unit test scenario.
The methods of the class under test can be called concurrently from different threads, so instead of guarding the logger implementation instance itself with locks, I've chosen to use thread-bound singleton loggers. The methods under test always create their thread-bound loggers via the service locator pattern (please do not hijack the question over whether this is an antipattern or not).
Ninject is configured as follows in the Arrange part of the test:
kernel.Bind<ILogger>().To<MyLogger>().InThreadScope();
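For context, a minimal sketch of how the class under test might obtain its thread-bound logger (the ServiceLocator wrapper and the logging method are placeholder names, not code from the question):

using Ninject;

// Hypothetical static service locator over the test's kernel.
public static class ServiceLocator
{
    public static IKernel Kernel { get; set; }
    public static T Get<T>() => Kernel.Get<T>();
}

public class ClassUnderTest
{
    public void DoWork()
    {
        // Because ILogger is bound with InThreadScope(), each thread that
        // resolves ILogger here receives its own MyLogger instance.
        ILogger logger = ServiceLocator.Get<ILogger>();
        logger.Info("working"); // assumes ILogger exposes an Info(string) method
    }
}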
Question
During the Act part of the test, one or more threads are created internally by the instance under test.
In the Assert part of the test, I would like to access the logger or loggers that were created and used by those threads in the class under test, and examine them for assertion purposes.
How can I accomplish this (access the loggers that were created)?
Ninject does not offer a specific API for this; however, you can make use of OnActivation.
Either add it to your existing binding, or use Rebind in the unit test, as follows:
kernel.Rebind<ILogger>().To<MyLogger>().OnActivation(createdInstance => ...do something...);
Replace the "...do something..." with an Action<ILogger> that adds the instance to a (concurrency-safe?) list or similar.
Also see Intercept creation of instances in Ninject for further information.
I've been searching, but not finding, so I thought I'd ask.
When using a log4net appender that buffers, do I need to call some kind of flush on application exit or does log4net take care of that itself?
You can check the source code at http://svn.apache.org/repos/asf/logging/log4net/trunk/src/log4net/Appender/
But basically, as I understand it, if your program closes down correctly, then the appenders should be flushed.
AdoNetAppender inherits from BufferingAppenderSkeleton, which in turn inherits from AppenderSkeleton, so the finalizer on the AppenderSkeleton class will call Close() on your AdoNetAppender, which calls base.Close(); the base class here is BufferingAppenderSkeleton, and that method calls Flush().
Of course, there are times when your finalizer does not run (See Are .net finalizers always executed?)
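Given that, one option (a sketch; not something log4net requires, but it avoids relying on finalizers) is to shut log4net down explicitly when the application exits, which closes the appenders and flushes their buffers:

using System;
using log4net;

// Register this early in application startup (or simply call
// LogManager.Shutdown() at the end of Main): shutting the logging system
// down closes every appender, and closing a buffering appender sends
// whatever is still sitting in its buffer.
AppDomain.CurrentDomain.ProcessExit += (sender, e) => LogManager.Shutdown();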
Is there a way to remove a Logger once it has been added? Say via:
LogManager.GetLogger("loggerName")
In my research, it would appear it is not possible. The closest I have found is the ability to call the hierarchy's Clear method. This will indeed clear out the existing loggers, but I would like to selectively remove them, not to mention this likely isn't the safest thing to be doing.
For some background, I am logging to one file per task, where there could potentially be thousands of concurrent tasks and hundreds of thousands of tasks per application lifetime. One approach is to create a Logger for each task and then remove it once the task has completed. Without the ability to selectively remove a Logger, though, memory will get chewed up by the retired instances.
Of course, there are alternative designs that would work. The problem could be addressed by adding/removing Appenders and Filters as necessary to a given Logger. Also, a pool of Loggers and Appenders could be created and then configured per task.
It obviously isn't a show stopper if there is no way to remove a specific logger once it is added. I'm just curious if there is a way to delete a Logger that I may have missed?
Wrapping this up since it has been open a long time. I haven't found a way to remove a logger by itself, and nothing I've seen in the documentation suggests it is possible either. The solution that worked for me was to share a pool of loggers that would only grow to the maximum number of concurrent tasks. Each task takes a logger from the pool, with an appender added specific to that task; on task completion, the appender is removed and the logger is returned to the pool.
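A rough sketch of the attach/detach part of that approach (the pooled logger name, file path, task id, and layout are illustrative assumptions, not code from the original setup):

using log4net;
using log4net.Appender;
using log4net.Layout;
using log4net.Repository.Hierarchy;

// Take a logger from the pool and attach a task-specific file appender.
ILog log = LogManager.GetLogger("PooledLogger-1");
var logger = (Logger)log.Logger; // the default hierarchy Logger supports appender management

var layout = new PatternLayout("%date %message%newline");
layout.ActivateOptions();

var appender = new FileAppender
{
    Name = "Task-42",
    File = @"logs\task-42.log",
    AppendToFile = true,
    Layout = layout
};
appender.ActivateOptions();
logger.AddAppender(appender);

// ...the task logs through this logger while it runs...
log.Info("task started");

// On task completion, detach and close the appender before returning the
// logger to the pool.
logger.RemoveAppender(appender);
appender.Close();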
No, it isn't possible. This is specifically mentioned in their FAQ these days:
Logger instances seem to be create only. Why isn't there a method to remove logger instances?
It is quite nontrivial to define the semantics of a "removed" logger which is still referenced by the user.
Source:
Ability to programmatically remove a Logger
I have a custom logging class for my Python script with a flush() method which print()s the contents of a list.
I would like to call flush() from the special __del__() method in case the program ends without the log being flushed. However, a note in the documentation states:
[...] when __del__() is invoked in response to a module being deleted (e.g., when execution of the program is done), other globals referenced by the __del__() method may already have been deleted or in the process of being torn down (e.g. the import machinery shutting down).
Would anyone recommend a different way of doing this, and if so, why?
You might want to look into making this logger a context manager. That still will not flush in the case of abnormal termination, but few things will. But __del__ might not be called on objects even in normal termination.
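If the context manager route does fit your usage, a minimal sketch might look like this (the class body is a placeholder for your own logging class, not a drop-in implementation):

class Logger:
    """Hypothetical stand-in for the custom logging class in the question."""

    def __init__(self):
        self._lines = []

    def log(self, line):
        self._lines.append(line)

    def flush(self):
        print("\n".join(self._lines))
        self._lines.clear()

    # Context-manager protocol: __exit__ runs when the with block ends,
    # whether it ends normally or via an exception (though not on abnormal
    # termination such as a crash or os._exit()).
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.flush()
        return False  # do not suppress exceptions

# Usage:
# with Logger() as log:
#     log.log("something happened")
# # flush() has already run by the time the with block exits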
Loggers might be one of the things that don't fit well with the with statement, since they are quite global, so it's not certain that a context manager is a good fit.