Use custom parameters in JSON Layout [Log4j 2] - log4j

I'm confused about the meaning of property substitution, lookups and layout parameters in Log4j 2. The documentation mentions that the JSON layout supports custom fields. However, it doesn't seem to support conversion patterns like %d{ISO8601}, %m, %l and the like. It does, however, support Lookups.
Thus, when I define this in the XML:
<JsonLayout complete="false" compact="false">
  <KeyValuePair key="#timestamp" value="%d{ISO8601}" />
  <KeyValuePair key="message" value="%message" />
  <KeyValuePair key="process.thread.name" value="%tn" />
</JsonLayout>
As output I simply get the literal strings %d{ISO8601}, %message, ... instead of the formatted values.
What I'm trying to achieve is a JSON layout in which I can include parameters similar to Pattern Layout, where I simply write <pattern>%d %p %C{1.} [%t] %m%n</pattern> to get what I want. Or, alternatively, should I use the Pattern Layout and stitch together a string in JSON format, making use of the Pattern Layout's JSON encoding %enc{%m}{JSON}?

The GelfLayout currently supports a messagePattern attribute that formats just the message field in the JSON using the pattern layout. I have planned to add this to JsonLayout as well but have not done it yet. There is also a new JsonTemplateLayout, in the final stages of being merged into Log4j 2, that will support this. You could either work from the current pull request to get the layout, or wait for the Log4j 2.14.0 release, when both options will likely be available.
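In the meantime, the Pattern Layout workaround mentioned in the question does work: build the JSON string yourself and use %enc{...}{JSON} to escape each value. A sketch, reusing the field names from the question (note this emits one JSON object per line, not a complete JSON document):

```xml
<PatternLayout>
  <Pattern>{"#timestamp":"%d{ISO8601}","message":"%enc{%m}{JSON}","process.thread.name":"%enc{%tn}{JSON}"}%n</Pattern>
</PatternLayout>
```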

Related

Is it possible to have a java object in log4j Conversion pattern?

It's a multi-tenancy application and it generates lots of logs.
I want to see the tenant information in each individual log statement.
I have the tenant information in my thread context.
How can I configure log4j to add the tenant information to log statements by default?
I saw that the conversion pattern controls the layout of log4j messages, e.g. %d [%t] %-5p %c - %m%n.
It didn't help; I was not able to print the thread context with it.
Say CurrentThread.getTenantName() gives me the current tenant; how could I add it to log4j?
In log4j, patterns are parsed by PatternParser.
You can write your own parser by subclassing it and handling a custom conversion character such as %i, where "i" would denote the tenant id in your case.
Please refer to the blog below for creating a custom conversion character and parser:
http://fw-geekycoder.blogspot.in/2010/07/creating-log4j-custom-patternlayout.html
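A minimal sketch of that approach for log4j 1.x (the class name TenantPatternLayout is hypothetical, and CurrentThread.getTenantName() is the accessor from the question):

```java
import org.apache.log4j.PatternLayout;
import org.apache.log4j.helpers.PatternConverter;
import org.apache.log4j.helpers.PatternParser;
import org.apache.log4j.spi.LoggingEvent;

public class TenantPatternLayout extends PatternLayout {
    public TenantPatternLayout() { }
    public TenantPatternLayout(String pattern) { super(pattern); }

    @Override
    protected PatternParser createPatternParser(String pattern) {
        return new PatternParser(pattern) {
            @Override
            protected void finalizeConverter(char c) {
                if (c == 'i') {
                    // custom %i conversion character -> tenant id
                    addConverter(new PatternConverter() {
                        @Override
                        protected String convert(LoggingEvent event) {
                            return CurrentThread.getTenantName();
                        }
                    });
                } else {
                    super.finalizeConverter(c);
                }
            }
        };
    }
}
```

It would then be wired in with a pattern such as %d [%t] %i %-5p %c - %m%n, using <layout class="TenantPatternLayout"> in log4j.xml.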

Why int-ftp:outbound-gateway payload is not List<java.io.File>?

According to http://docs.spring.io/spring-integration/reference/html/ftp.html#ftp-outbound-gateway, the mget payload is a List of files:
mget retrieves multiple remote files based on a pattern and supports the following option:
...
The message payload resulting from an mget operation is a List<File> object - a List of File objects, each representing a retrieved file.
I have the following configuration
<int-ftp:outbound-gateway
    session-factory="ftpSesionFactory"
    request-channel="request-channel"
    reply-channel="reply-channel"
    auto-create-directory="true"
    local-directory="${local-directory}"
    command="mget"
    command-options="-stream"
    expression="payload">
  <int-ftp:request-handler-advice-chain>
    <int:retry-advice />
  </int-ftp:request-handler-advice-chain>
</int-ftp:outbound-gateway>
<int-file:splitter input-channel="reply-channel" output-channel="logger"/>
But the payload is a List<FTPFile> and the splitter doesn't work. Is this a bug? How can I obtain the downloaded List<java.io.File> in the payload (as the documentation says)?
The workaround is using another component to read the file from the local directory, described at how to get file with int-ftp:outbound-gateway and remove from server if exists?.
I'm using spring-integration 4.2.5 and commons-net-2.0.
What makes you believe it's a List<FTPFile>?
This test shows it's a List<java.io.File>.
The ls command returns either a list of String or FTPFile, depending on the -1 option.
Finally, -stream is not supported on mget, only get.
Also, you don't want a file splitter there - that reads each file - you need a regular <int:splitter/> to split the List<File> into separate files; then the file splitter will read the file lines.
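Putting those points together, a sketch of the corrected configuration (bean and channel names are taken from the question; the intermediate "files" channel is made up here):

```xml
<!-- note: no command-options="-stream"; -stream is only supported on get -->
<int-ftp:outbound-gateway
    session-factory="ftpSesionFactory"
    request-channel="request-channel"
    reply-channel="reply-channel"
    auto-create-directory="true"
    local-directory="${local-directory}"
    command="mget"
    expression="payload">
  <int-ftp:request-handler-advice-chain>
    <int:retry-advice />
  </int-ftp:request-handler-advice-chain>
</int-ftp:outbound-gateway>

<!-- first split the List<File> payload into individual File messages... -->
<int:splitter input-channel="reply-channel" output-channel="files"/>
<!-- ...then let the file splitter read each file's lines -->
<int-file:splitter input-channel="files" output-channel="logger"/>
```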

SlowCheetah transform ignores multiple conditions

I have a WCF configuration file that I am trying to transform with SlowCheetah. For development use, we want to include the MEX endpoints, but when we release the product, these endpoints should be removed on all services except one. The server for which it should be left has the following endpoint:
<endpoint address="MEX"
binding="mexHttpBinding"
contract="IMetadataExchange" />
The ones that should be removed are as follows:
<endpoint address="net.tcp://computername:8001/WCFAttachmentService/MEX"
binding="netTcpBinding"
bindingConfiguration="UnsecureNetTcpBinding"
name="WCFAttachmentServiceMexEndpoint"
contract="IMetadataExchange" />
The transform I am using is:
<service>
<endpoint xdt:Locator="Condition(contains(@address, 'MEX') and not(contains(@binding, 'mexHttpBinding')))" xdt:Transform="RemoveAll" />
</service>
However, when I run this, ALL MEX endpoints are removed from the config file including the one that I wish to keep. How do I make this work properly?
The Locator Condition expression that selects the nodes seems to be correct. If you had only the two endpoints you posted in your example, this expression would select only the second endpoint.
According to the documentation, the Transform attribute RemoveAll should "remove the selected element or elements." Based on the information you posted it's not working as expected, since the first element was not selected by the condition and yet was removed anyway. Based on this StackOverflow answer it seems to me that the issue is with Condition. I'm not sure if that's a bug (it's poorly documented), but you could try some alternative solutions:
1) Using XPath instead of Condition. The effective XPath expression that is applied to your configuration file as a result of the Condition expression is:
/services/service/endpoint[contains(@address, 'MEX') and not(contains(@binding, 'mexHttpBinding'))]
You should also obtain the same result using the XPath attribute instead of Condition:
<endpoint xdt:Locator="XPath(/services/service/endpoint[contains(@address, 'MEX')
and not(contains(@binding, 'mexHttpBinding'))])" xdt:Transform="RemoveAll" />
2) Using Match and testing an attribute such as binding. This is a simpler test, and would be IMO the preferred way to perform the match. You could select the nodes you want to remove by the binding attribute
<endpoint binding="netTcpBinding" xdt:Locator="Match(binding)" xdt:Transform="RemoveAll" />
3) Using XPath instead of Match, in case you have many different bindings and want to eliminate only those which are not mexHttpBinding:
<endpoint xdt:Locator="XPath(/services/service/endpoint[not(@binding='mexHttpBinding')])" xdt:Transform="RemoveAll" />
4) Finally, you could try using several separate statements with Condition() or Match() to individually select the <endpoint> elements you wish to remove, and use xdt:Transform="Remove" instead of RemoveAll.
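As an illustration of option 4, the unwanted endpoint from the question could be selected individually by its name attribute (using the name value posted above) and removed with a plain Remove:

```xml
<service>
  <endpoint name="WCFAttachmentServiceMexEndpoint"
            xdt:Locator="Match(name)"
            xdt:Transform="Remove" />
</service>
```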

log4j pattern %X and what property to assign to it

I am trying to use a log viewer (doesn't matter which one) to parse my log files.
My log4j pattern is this:
%p [%t] (%C{1}:%M():%L) %d{dd/MM/yyyy-HH:mm:ss,SSS} S:%X{serviceType} N:%X{requestID}- %m%n
The log viewers (at least the open-source ones) need you to supply a pattern so they are able to read the file.
For example, for the log4j pattern %p [%t] (%C{1}:%M():%L) %d{dd/MM/yyyy-HH:mm:ss,SSS} - %m%n
the log viewer pattern would be:
pattern=LEVEL [THREAD] (CLASS:METHOD():LINE) TIMESTAMP - MESSAGE
That example works well, but I have not been able to parse the %X property in any way. I have seen there are property types NDC and PROP(key), but I seem to either misuse them or they are not related to %X.
So the question is how to write the pattern so it will read the %X parameter.
Thanks.
OK, I think I see the problem.
Your application uses the log4j MDC, since it uses %X in the pattern layout. Your log viewer seems to support only NDC.
The log4j pattern layout conversion for NDC is %x (lowercase).
If you have control over the application, you could change MDC to NDC and modify the log4j.xml to use %x instead of %X. That may be a big task if the app is huge...
Another solution would be to find a log viewer that supports MDC (%X).
I tried to look around for PROP(key), but there is not much documentation on it ;-(
Good luck
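For example, the MDC-to-NDC change would look something like this in the application code (a sketch; NDC holds a plain stack of strings rather than key/value pairs, so the S:/N: prefixes have to be folded into the pushed value):

```java
// Before: key/value pairs in the MDC, printed with %X{serviceType} / %X{requestID}
MDC.put("serviceType", serviceType);
MDC.put("requestID", requestID);

// After: one pushed NDC value, printed with %x (lowercase) in the layout
NDC.push("S:" + serviceType + " N:" + requestID);
try {
    log.info("handling request");
} finally {
    NDC.pop();
}
```

The layout then becomes %p [%t] (%C{1}:%M():%L) %d{dd/MM/yyyy-HH:mm:ss,SSS} %x - %m%n, and the viewer side would presumably be pattern=LEVEL [THREAD] (CLASS:METHOD():LINE) TIMESTAMP NDC - MESSAGE.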

How to use an analyzer in compass-lucene search

How do I add a Compass analyzer while indexing and searching data in Compass? I am using schema-based configuration for Compass. I want to use StandardAnalyzer with no stop words, because I want to index data as-is, without dropping search terms like AND, OR, IN. The default analyzer will strip AND, OR, IN from the data I give it for indexing.
Also, how do I configure a snowball analyzer, either through code or through XML? If someone could post an example.
Below is the example. You can also find more details here
<comp:searchEngine useCompoundFile="false" cacheInvalidationInterval="-1">
  <comp:allProperty enable="false" />
  <!--
    By default, Compass uses StandardAnalyzer for indexing and searching. StandardAnalyzer
    uses certain stop words (stop words are not indexed and hence not searchable) which are
    valid search terms in the data-source world, e.g. 'in' for Indiana state, 'or' for Oregon, etc.
    So we need to provide our own analyzer.
  -->
  <comp:analyzer name="default" type="CustomAnalyzer"
      analyzerClass="com.ICStandardAnalyzer" />
  <comp:analyzer name="search" type="CustomAnalyzer"
      analyzerClass="com.ICStandardAnalyzer" />
  <!--
    Disable the optimizer, as we will optimize the index in a separate batch job.
    Also, the merge factor is set to 1000 so that merging doesn't happen at commit time.
    Merging is a time-consuming process and will be done by the batched optimizer.
  -->
  <comp:optimizer schedule="false" mergeFactor="1000"/>
</comp:searchEngine>
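For completeness, a sketch of what the com.ICStandardAnalyzer class referenced above could look like (this assumes the Lucene version bundled with Compass, whose StandardAnalyzer constructor accepts a stop-word array):

```java
package com;

import org.apache.lucene.analysis.standard.StandardAnalyzer;

// Same behaviour as StandardAnalyzer, but with an empty stop-word list so that
// terms like AND, OR and IN are indexed instead of being dropped.
public class ICStandardAnalyzer extends StandardAnalyzer {
    public ICStandardAnalyzer() {
        super(new String[0]);
    }
}
```

For the snowball part of the question, Compass's schema also has a built-in Snowball analyzer type (something like <comp:analyzer name="default" type="Snowball" snowballType="English" />), though check the schema reference for the exact attribute names.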
