When I use this in nlog config:
<attribute name="Exception" encode="false" layout="${exception:format=#}"/>
The JSON string includes "\r\n". How can I replace it with "\n"?
Many thanks!
Gunnar
Have you tried to use this:
${replace:searchFor=\\r\\n:replaceWith=\\n:inner=${exception:format=#}}
See also: https://github.com/nlog/NLog/wiki/Replace-Layout-Renderer
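Plugged into the attribute from your config, that would look roughly like this (a sketch; depending on your NLog version you may also need to add regex=true so the \\r\\n escapes are treated as a regular expression rather than as literal text):
<attribute name="Exception" encode="false" layout="${replace:searchFor=\\r\\n:replaceWith=\\n:inner=${exception:format=#}}" />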
I am having trouble extracting "EXTRACT_THIS_PLEASE" from a similar XML file using xmllint --xpath. From some Googling, I understand sed and awk should not be used. I also see that other XML parsers are usually recommended, but this is the only one I seem to have on my RHEL system. I have tried various things and understand that the issue has to do with whitespace.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<model-response-list xmlns="http://www.website.com/thing/link/linktothing/linklink" total-models="1" throttle="1" error="EndOfResults">
<model-responses>
<model mh="0x12345678">
<attribute id="0x12345">EXTRACT_THIS_PLEASE</attribute>
</model>
</model-responses>
</model-response-list>
EDIT: kjhughes and j_b, you guys are both wizards. Thank you so much. Could I also extract 0x12345678 from ""? I am looking to do this 5000+ times and ultimately have a list of devices in rows or columns like this:
"0x12345678
EXTRACT_THIS_PLEASE
0x99999999
EXTRACT_THIS_PLEASE
0x11111111
NOTHING
0x33333333
EXTRACT_THIS_PLEASE
0x22222222
NOTHING"
This xmllint command line,
xmllint --xpath "//*[@id='0x12345']/text()" file.xml
will select
EXTRACT_THIS_PLEASE
as requested.
See also Daniel Haley's answer showing how to use XML namespaces in xmllint.
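For reference, a rough sketch of that namespace-aware approach using the xmllint shell (the x prefix is arbitrary; the URI is the default namespace declared in your file, and setns/xpath are xmllint shell commands):
echo 'setns x=http://www.website.com/thing/link/linktothing/linklink
xpath //x:attribute[@id="0x12345"]/text()' | xmllint --shell file.xml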
Another option to extract the contents of the <attribute> element:
xmllint --xpath "//*[name()='attribute']/text()" x.xml
Output:
EXTRACT_THIS_PLEASE
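For the follow-up in the edit: the mh value can be pulled out the same way. A rough sketch (string() keeps xmllint from printing the attribute node as mh="..."):
xmllint --xpath "string(//*[name()='model']/@mh)" file.xml
xmllint --xpath "//*[name()='attribute']/text()" file.xml
Running those two commands per file in a loop over your 5000+ files and printing the results would give the two-lines-per-device list you describe.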
I'm confused about the meaning of property substitution, lookups, and layout parameters in Log4j 2. The documentation mentions that the JSON layout supports custom fields. However, it doesn't seem to support conversion patterns like %d{ISO8601}, %m, %l and the like. It does, however, support lookups.
Thus when I define this in the XML:
<JsonLayout complete="false" compact="false">
<KeyValuePair key="#timestamp" value="%d{ISO8601}" />
<KeyValuePair key="message" value="%message" />
<KeyValuePair key="process.thread.name" value="%tn" />
</JsonLayout>
As output I simply get the strings %d{ISO8601}, %message... instead of the values.
What I'm trying to achieve is a JSON layout where I can include parameters similar to Pattern Layout where I simply write <pattern>%d %p %C{1.} [%t] %m%n</pattern> to get what I want. Or, alternatively, should I use the Pattern layout and stitch together a string in JSON Format, making use of the Pattern Layout's JSON encoding %enc{%m}{JSON}?
The GelfLayout currently supports a messagePattern attribute that will format just the message field in the JSON using the PatternLayout. I have planned to add this to the JsonLayout as well but have not done it yet. There is a new JsonTemplateLayout, in the final stages of being merged into Log4j 2, that will also support this. You could either work from the current pull request to get the layout or wait for the Log4j 2.14.0 release, when likely both options will be available.
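For illustration, the GelfLayout approach mentioned above looks roughly like this (a sketch; host is a placeholder and the pattern itself is just an example to adapt):
<GelfLayout host="myhost" messagePattern="%d{ISO8601} %p %C{1.} [%t] %m%n"/>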
I want to modify a file with Groovy using:
<from uri="file:/data/inbox?delete=true" />
<transform>
<groovy>
body = body[1..3]
</groovy>
</transform>
<to uri="file:/data/outbox"/>
I get an error:
groovy.lang.MissingMethodException: No signature of method:
org.apache.camel.component.file.GenericFile.getAt() is applicable for
argument types: (groovy.lang.IntRange) values: [1..3]
What am I doing wrong?
Yes, the input is file based, and you are attempting to use a Groovy operation that works on a list to grab a range of elements (1..3). You cannot do that on the file object directly. If you want to grab only the first few lines of the file, you need to convert the message body first (to a String or a list of lines), or use the Splitter EIP to split the file line by line and group the lines into a list, which you can then run the Groovy script on. A minimal sketch of the first approach is shown below.
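A minimal sketch of that first approach, assuming the file content is plain text and you really want the lines at indexes 1..3 (convertBodyTo and the readLines call are the only additions to your route; adjust the range as needed):
<from uri="file:/data/inbox?delete=true" />
<convertBodyTo type="java.lang.String" />
<transform>
  <groovy>
    // body is now a String; keep a range of lines and rejoin them
    body.readLines()[1..3].join('\n')
  </groovy>
</transform>
<to uri="file:/data/outbox"/>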
I have a WCF configuration file that I am trying to transform with SlowCheetah. For development use, we want to include the MEX endpoints, but when we release the product, these endpoints should be removed on all services except one. The server for which it should be left has the following endpoint:
<endpoint address="MEX"
binding="mexHttpBinding"
contract="IMetadataExchange" />
The ones that should be removed are as follows:
<endpoint address="net.tcp://computername:8001/WCFAttachmentService/MEX"
binding="netTcpBinding"
bindingConfiguration="UnsecureNetTcpBinding"
name="WCFAttachmentServiceMexEndpoint"
contract="IMetadataExchange" />
The transform I am using is:
<service>
<endpoint xdt:Locator="Condition(contains(#address, 'MEX') and not(contains(#binding, 'mexHttpBinding')))" xdt:Transform="RemoveAll" />
</service>
However, when I run this, ALL MEX endpoints are removed from the config file including the one that I wish to keep. How do I make this work properly?
The Locator Condition expression that selects the nodes seems to be correct. If you had only the two endpoints you posted in your example, this expression would select the second endpoint.
According to the documentation, the Transform attribute RemoveAll should "remove the selected element or elements." Based on the information you posted, it's not working as expected, since the first element was not selected but was removed anyway. Based on this StackOverflow answer, it seems to me that the issue is with Condition. I'm not sure if that's a bug (it's poorly documented), but you could try some alternative solutions:
1) Using XPath instead of Condition. The effective XPath expression that is applied to your configuration file as a result of the Condition expression is:
/services/service/endpoint[contains(@address, 'MEX') and not(contains(@binding, 'mexHttpBinding'))]
You should also obtain the same result using the XPath attribute instead of Condition:
<endpoint xdt:Locator="XPath(/services/service/endpoint[contains(#address, 'MEX')
and not(contains(#binding, 'mexHttpBinding'))])" xdt:Transform="RemoveAll" />
2) Using Match and testing an attribute such as binding. This is a simpler test and would, IMO, be the preferred way to perform the match. You could select the nodes you want to remove by the binding attribute:
<endpoint binding="netTcpBinding" xdt:Locator="Match(binding)" xdt:Transform="RemoveAll" />
3) Using XPath instead of Match, in case you have many different bindings and want to eliminate only those that are not mexHttpBinding:
<endpoint xdt:Locator="XPath(/services/service/endpoint[not(#binding='mexHttpBinding'))" xdt:Transform="RemoveAll" />
4) Finally, you could try using several separate statements with Condition() or Match() to individually select the <endpoint> elements you wish to remove, and use xdt:Transform="Remove" instead of RemoveAll.
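A sketch of that last option, keyed off the name attribute of the endpoint you posted (this assumes each endpoint you want to drop has a unique name):
<service>
  <endpoint name="WCFAttachmentServiceMexEndpoint" xdt:Locator="Match(name)" xdt:Transform="Remove" />
</service>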
I've got a question about Ant and string splitting.
In an INI file I have a section "[app_version]" with one element: "VERSION = 3.48".
My goal is to split "3.48" into 3 and 48.
I've managed to read the INI file successfully with this code:
<target name="get_new_version_number">
<property file="${basedir}/Ini File/Config.ini" prefix="config.">
</property>
<property name="version_actuelle" value="${config.VERSION}" />
<echo message="version de l'application: ${version_actuelle}"/>
But how can I split "3.48", which is my value, into 3 and 48? I need to do this so I can increment 48 each time I execute the script.
Thanks in advance for your consideration.
Regards,
Simon
Thanks for your answer.
I've tried your solution, but it doesn't work for me because I get as results 3.48.1, 3.48.1.2, 3.48.1.2.3, and so on.
I really need to increment "48", so I have to split my value 3.48 with a split function or something else.
But, again, thanks very much for your time.
Regards
The simplest solution would be to read the major number from the INI file and then use the buildnumber task to manage the incrementing number:
<buildnumber/>
<echo message="${majorNum}.${build.number}"/>
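If the major number should come from the INI file rather than a hard-coded property, a rough sketch (majorNum is a name introduced here for illustration; it assumes the config. prefix from your target and that VERSION always has the major.minor form):
<property file="${basedir}/Ini File/Config.ini" prefix="config."/>
<!-- strip everything from the first '.' onward, leaving the major number -->
<loadresource property="majorNum">
  <propertyresource name="config.VERSION"/>
  <filterchain>
    <tokenfilter>
      <replaceregex pattern="\..*" replace=""/>
    </tokenfilter>
  </filterchain>
</loadresource>
<buildnumber/>
<echo message="${majorNum}.${build.number}"/>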
The Ant addon Flaka provides a split function, e.g.:
<project name="demo" xmlns:fl="antlib:it.haefelinger.flaka">
<property name="yourvalue" value="3.48"/>
<fl:echo>#{split('${yourvalue}', '\.')[0]}${line.separator}#{split('${yourvalue}', '\.')[1]}</fl:echo>
</project>
If you have further requirements (you mentioned a "need to increment"), you have to give more details. It's no problem to wrap it in a for loop with Flaka.