log4net: programmatically set appender

I want log4net to write debug messages to one file and all other messages to another, and I want to set this all up programmatically. I can see how to specify the lower limit of an appender, but not the upper limit (i.e., how to prevent the debug appender from writing messages above the DEBUG level).
Is there a way to do this?

You can do this with:

Hierarchy hierarchy = (Hierarchy)LogManager.GetRepository();
hierarchy.Root.AddAppender(appender);

where appender is of type IAppender.
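To get the upper limit as well, a LevelRangeFilter can be attached to the debug appender before it is added to the root; a sketch (file name and pattern layout are illustrative):

```csharp
using log4net;
using log4net.Appender;
using log4net.Core;
using log4net.Filter;
using log4net.Layout;
using log4net.Repository.Hierarchy;

var layout = new PatternLayout("%date %-5level %logger - %message%newline");
layout.ActivateOptions();

// A debug-only appender: LevelRangeFilter gives you both bounds.
var debugAppender = new FileAppender { File = "debug.log", Layout = layout };
debugAppender.AddFilter(new LevelRangeFilter
{
    LevelMin = Level.Debug,
    LevelMax = Level.Debug   // nothing above DEBUG gets through
});
debugAppender.ActivateOptions();

var hierarchy = (Hierarchy)LogManager.GetRepository();
hierarchy.Root.AddAppender(debugAppender);
hierarchy.Configured = true;
```

A second appender without the filter, or with LevelMin set to Level.Info, would then receive everything else.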


What defines the set of appenders that a Log4j version 1 logger "contains"?

I'm trying to read some Log4j v1 code in order to update it to Log4j v2, and I've found something I can't resolve from the documentation.
The issue is the Logger.getAllAppenders() method. The documentation says that the method "Get[s] the appenders contained in this category as an Enumeration." There is no definition of what it means for an appender to be "contained in" a category there, however, and I can't find a definition anywhere else in the documentation. Thus, I'm unable to predict exactly what the method will return.
As precisely as possible, what defines the set of appenders that are "contained in" a category like a logger? In particular, does a child logger contain appenders assigned to its parent? Does a parent logger contain appenders assigned to its child?
Answers that explain the basis of your knowledge, like links to documentation that I missed, are especially appreciated.
For end-of-life software like Log4j 1.x, the best documentation is the source code (which will never change).
Category.getAllAppenders() returns the list of appenders directly linked to the given Logger (cf. source code) and does not include the appenders of parent loggers. Hence it is a subset of the appenders that will be used by that logger.
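For example, with a log4j 1.x configuration like this (logger and appender names are illustrative):

```properties
# log4j 1.x properties
log4j.rootLogger=INFO, rootFile
log4j.logger.com.example.child=DEBUG, childFile

log4j.appender.rootFile=org.apache.log4j.FileAppender
log4j.appender.rootFile.File=root.log
log4j.appender.rootFile.layout=org.apache.log4j.SimpleLayout

log4j.appender.childFile=org.apache.log4j.FileAppender
log4j.appender.childFile.File=child.log
log4j.appender.childFile.layout=org.apache.log4j.SimpleLayout
```

Here Logger.getLogger("com.example.child").getAllAppenders() returns only childFile, even though additivity means events logged on that logger are also delivered to rootFile.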

Custom metrics using grok with logstash

I'm trying to integrate some code into an existing ELK stack, and we're limited to using filebeats + logstash. I'd like to have a way to configure a grok filter that will allow different developers to log messages in a pre-defined format such that they can capture custom metrics, and eventually build kibana dashboards.
For example, one team might log the following messages:
metric_some.metric=2
metric_some.metric=5
metric_some.metric=3
And another team might log the following messages from another app:
metric_another.unrelated.value=17.2
metric_another.unrelated.value=14.2
Is there a way to configure a single grok filter that will capture everything after metric_ as a new field, along with the value? Everything I've read here seems to indicate that you need to know the field name ahead of time, but my goal is to be able to start logging new metrics without having to add or modify grok filters.
Note: I realize Metricsbeat is probably a better solution here, but as we're integrating with an existing ELK cluster which we do not control, that's not an option for me.
As your messages seem to be key-value pairs, you can use the kv filter instead of grok.
When using grok you need to name the destination field; with kv, the name of the destination field is the same as the key.
The following configuration should work for your case.
filter { kv { prefix => "metric_" } }
For the event metric_another.unrelated.value=17.2 your output will be something like { "another.unrelated.value": "17.2" }
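The transformation the answer describes (strip the metric_ marker, split once on = into field name and value) can be illustrated with a small Java sketch; this mimics the described behaviour, it is not Logstash code, and the class and method names are my own:

```java
public class KvSketch {
    // Strip the "metric_" marker, then split once on '=' into
    // field name and value, as in the answer's example output.
    static String[] parse(String message) {
        String prefix = "metric_";
        if (!message.startsWith(prefix)) {
            return null; // not a metric event
        }
        return message.substring(prefix.length()).split("=", 2);
    }

    public static void main(String[] args) {
        String[] kv = parse("metric_another.unrelated.value=17.2");
        System.out.println(kv[0] + " -> " + kv[1]); // another.unrelated.value -> 17.2
    }
}
```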

How to specify options to Custom Formatter in Accumulo

I am planning to use the same formatter for different Accumulo tables, with one configurable option.
Is it possible to provide options to a custom formatter in Accumulo? I tried using OptionDescriber, but it seems that OptionDescriber only gets invoked during the setiter command.
Or, at least, is there any way to get the current table properties (i.e., which table the custom formatter is set on)? For example, if the formatter is set on TABLE_A, the formatter code should be able to load all of TABLE_A's properties during initialization; that way I could set the required properties on the table using "config" and the custom formatter could access them.
There is currently no way to set options directly on the formatter in the shell. If your custom formatter needs to accept options, you'll have to have it read them from outside Accumulo, for example from Java system properties set in its environment, or from a configuration file stored locally on your system.
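The system-property route can be sketched like this (the property name my.formatter.option is hypothetical; you would set it with -D on the JVM that runs the shell):

```java
public class FormatterConfig {
    // Hypothetical property name; pick your own namespace.
    static final String OPTION_KEY = "my.formatter.option";

    // Read the option from a JVM system property, falling back to a default.
    static String readOption(String fallback) {
        return System.getProperty(OPTION_KEY, fallback);
    }

    public static void main(String[] args) {
        // Simulates launching the JVM with -Dmy.formatter.option=hex
        System.setProperty(OPTION_KEY, "hex");
        System.out.println(readOption("plain"));
    }
}
```

Your formatter's initialization would call something like readOption() instead of expecting options from the shell.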

Log4Net: Enumerating GlobalContext properties?

I'm trying to utilize the Loggly appender utility for log4net.
I've found that their code enumerates the ThreadContext properties and appends them to the payload sent over the wire to the Loggly service. Good idea! However, the same feature is not applied to the GlobalContext properties. Figuring this was a miss on their part, I tried my hand at enumerating the GlobalContext properties and adding these to the payload as well.
However, this has proven to be a problem. There doesn't appear to be any way to access the keys and associated values the way the ThreadContext properties are accessed.
How can the GlobalContext properties be enumerated?
The only way I see is to retrieve the properties class for the global context (GlobalContext.Properties, which returns a GlobalContextProperties instance) and get the ReadOnlyPropertiesDictionary returned by its internal GetReadOnlyProperties() method through reflection. Once you have the ReadOnlyPropertiesDictionary, you can iterate over its keys and values.
From what I can see, ThreadContext uses more or less the same mechanism, so you could take the ThreadContext enumeration as an example and port it to the GlobalContext.
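A sketch of that reflection call; this pokes at log4net internals, so member names may change between versions:

```csharp
using System.Reflection;
using log4net;
using log4net.Util;

// GlobalContext.Properties returns a GlobalContextProperties instance.
var globalProps = GlobalContext.Properties;

// Fetch the internal snapshot method through reflection.
var getReadOnly = globalProps.GetType().GetMethod(
    "GetReadOnlyProperties",
    BindingFlags.Instance | BindingFlags.NonPublic);
var snapshot = (ReadOnlyPropertiesDictionary)getReadOnly.Invoke(globalProps, null);

foreach (string key in snapshot.GetKeys())
{
    object value = snapshot[key];
    // append key/value to the outgoing payload here
}
```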

JAXB marshaling: how to include exceptions info into xml output file?

I have a very basic application that uses the JAXB marshaller to validate input against an XSD schema. I register a validation event handler to obtain information about the exceptions. What I would like to achieve is to include this information in the XML output I receive as a result of marshalling. I've added an exception-collection section to my XSD, and I can now instantiate the corresponding exception object once an exception is encountered. The question is: how do I attach this object to the rest of my JAXB-generated object structure, considering that the marshalling process has already started? Is it even possible? Or should I try to modify the XML result after the marshalling is done? Any advice would be highly appreciated.
Thanks!
There are a couple of ways to do this:
Option #1 - Add an "exceptions" Property to Your Root Object
Ensure that the exceptions property is marshalled last; this can be configured using propOrder on the @XmlType annotation.
Create a validation handler that holds onto the root object.
When the validation handler encounters an exception, add that exception to the exceptions property on the root object.
Option #2 - Use an XMLStreamWriter
Create an XMLStreamWriter
Write out a root element
Set the validation handler on the marshaller, ensure that it will store the exceptions encountered.
Marshal the root object to the XMLStreamWriter.
Marshal the individual exceptions encountered to the XMLStreamWriter.
Write out the close for the root element.
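The Option #2 steps can be sketched with plain StAX; the JAXB marshal call is left as a comment because it needs your generated classes, and the element names are illustrative:

```java
import java.io.StringWriter;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;

public class ExceptionReport {
    static String build() throws Exception {
        StringWriter out = new StringWriter();
        XMLStreamWriter w =
            XMLOutputFactory.newFactory().createXMLStreamWriter(out);
        w.writeStartElement("report");        // 1. write the root element by hand
        // marshaller.marshal(rootObject, w); // 2. JAXB writes the payload here
        w.writeStartElement("exceptions");    // 3. append collected validation events
        w.writeStartElement("exception");
        w.writeCharacters("sample validation message");
        w.writeEndElement();
        w.writeEndElement();
        w.writeEndElement();                  // 4. close the hand-written root
        w.flush();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(build());
    }
}
```

When marshalling the payload into the same writer, set the Marshaller's JAXB_FRAGMENT property to true so it does not emit a second XML declaration.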
Short answer: no. JAXB is intended to take an object graph and produce XML; it's not intended to do this.
Longer answer: you could inject the exception representation into the graph after JAXB is done the first time.
Even longer answer: there are a number of plugin and customization technologies for JAXB, and it's possible that you could use one of them. However, it's very hard to conceptualize this at the abstract level of your question.
