SlowCheetah transform ignores multiple conditions - xsd

I have a WCF configuration file that I am trying to transform with SlowCheetah. For development use, we want to include the MEX endpoints, but when we release the product, these endpoints should be removed from all services except one. The service whose endpoint should be kept has the following endpoint:
<endpoint address="MEX"
binding="mexHttpBinding"
contract="IMetadataExchange" />
The ones that should be removed are as follows:
<endpoint address="net.tcp://computername:8001/WCFAttachmentService/MEX"
binding="netTcpBinding"
bindingConfiguration="UnsecureNetTcpBinding"
name="WCFAttachmentServiceMexEndpoint"
contract="IMetadataExchange" />
The transform I am using is:
<service>
<endpoint xdt:Locator="Condition(contains(#address, 'MEX') and not(contains(#binding, 'mexHttpBinding')))" xdt:Transform="RemoveAll" />
</service>
However, when I run this, ALL MEX endpoints are removed from the config file including the one that I wish to keep. How do I make this work properly?

The Locator Condition expression that selects the nodes seems to be correct. If you had only the two endpoints you posted in your example, this expression would select only the second one.
According to the documentation, the Transform attribute RemoveAll should "remove the selected element or elements." Based on the information you posted, it's not working as expected, since the first element was not selected yet was removed anyway. Based on this StackOverflow answer, it seems to me that the issue is with Condition. I'm not sure whether that's a bug (it's poorly documented), but you could try some alternative solutions:
1) Using XPath instead of Condition. The effective XPath expression that is applied to your configuration file as a result of the Condition expression is:
/services/service/endpoint[contains(@address, 'MEX') and not(contains(@binding, 'mexHttpBinding'))]
You should also obtain the same result using the XPath attribute instead of Condition:
<endpoint xdt:Locator="XPath(/services/service/endpoint[contains(#address, 'MEX')
and not(contains(#binding, 'mexHttpBinding'))])" xdt:Transform="RemoveAll" />
2) Using Match and testing an attribute such as binding. This is a simpler test, and would IMO be the preferred way to perform the match. You could select the nodes you want to remove by the binding attribute:
<endpoint binding="netTcpBinding" xdt:Locator="Match(binding)" xdt:Transform="RemoveAll" />
3) Using XPath instead of Match, in case you have many different bindings and want to eliminate only those that are not mexHttpBinding:
<endpoint xdt:Locator="XPath(/services/service/endpoint[not(@binding='mexHttpBinding')])" xdt:Transform="RemoveAll" />
4) Finally, you could try using several separate statements with Condition() or Match() to individually select the <endpoint> elements you wish to remove, and use xdt:Transform="Remove" instead of RemoveAll, as sketched below.
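For example, a minimal sketch of option 4, assuming the endpoints you want to remove keep the name attribute shown in your question (each Remove statement deletes only the first matching element):
<service>
  <endpoint name="WCFAttachmentServiceMexEndpoint"
            xdt:Locator="Match(name)" xdt:Transform="Remove" />
</service>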

Related

Use custom parameters in JSON Layout [Log4j 2]

I'm confused about the meaning of property substitution, lookups, and layout parameters in Log4j 2. The documentation mentions that the JSON layout supports custom fields. However, it doesn't seem to support conversion patterns like %d{ISO8601}, %m, %l, and the like. It does, however, support Lookups.
Thus, when I define in the XML:
<JsonLayout complete="false" compact="false">
<KeyValuePair key="#timestamp" value="%d{ISO8601}" />
<KeyValuePair key="message" value="%message" />
<KeyValuePair key="process.thread.name" value="%tn" />
</JsonLayout>
As output I simply get the literal strings %d{ISO8601}, %message, and so on, instead of the formatted values.
What I'm trying to achieve is a JSON layout where I can include parameters similar to Pattern Layout where I simply write <pattern>%d %p %C{1.} [%t] %m%n</pattern> to get what I want. Or, alternatively, should I use the Pattern layout and stitch together a string in JSON Format, making use of the Pattern Layout's JSON encoding %enc{%m}{JSON}?
The GelfLayout currently supports a messagePattern attribute that will format just the message field in the JSON using the PatternLayout. I have planned to add this to JsonLayout as well but have not done it yet. There is also a new JsonTemplateLayout in the final stages of being merged into Log4j 2 that will support this. You could either work from the current pull request to get the layout, or wait for the Log4j 2.14.0 release, when both options will likely be available.
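As a hedged sketch of the GelfLayout route (the appender, host value, and pattern here are illustrative, not taken from your config):
<Appenders>
  <Console name="Console" target="SYSTEM_OUT">
    <!-- messagePattern formats only the message field, using PatternLayout syntax -->
    <GelfLayout host="myhost" includeStacktrace="true"
                messagePattern="%d %p %C{1.} [%t] %m%n" />
  </Console>
</Appenders>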

Azure App Configuration labelFilter: is it possible to 'prefer' a certain label without excluding other labels?

In Azure App Configuration you can store a key with multiple values, differentiated by labels.
When building the config, it is possible to filter which keys are read from the store by using labelFilter="SomeLabel".
In my case I have 50 keys in the store without any label (No Label), and 4 keys that have two values each: one value for the label SomeLabel and another for (No Label).
I want to retrieve all 54 keys. For the 4 keys that have multiple values, I want the value with the label SomeLabel.
If I use labelFilter="SomeLabel", I only get the 4 keys with that label; the 50 keys without any label are filtered out.
Is it possible to achieve my desired functionality?
<configBuilders>
<builders>
<add name="SomeAzureAppConfigStore" labelFilter="SomeLabel" mode="Greedy" prefix="My.App:" stripPrefix="true" connectionString="${MyConnectionString}" useAzureKeyVault="true" type="Microsoft.Configuration.ConfigurationBuilders.AzureAppConfigurationBuilder, Microsoft.Configuration.ConfigurationBuilders.AzureAppConfiguration, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxx" />
</builders>
</configBuilders>
You can resolve this by defining multiple config builders. The first builder gets all keys (although the documentation suggests an empty labelFilter should return only keys without labels). The second builder then overrides any previously loaded key/values with the environment-specific values. Note that the order in the configBuilders attribute also determines the order in which the overrides occur.
<configBuilders>
<builders>
<add name="SomeAzureAppConfigStoreNoLabel"
labelFilter=""
mode="Greedy" prefix="My.App:" stripPrefix="true" connectionString="${MyConnectionString}" useAzureKeyVault="true" type="Microsoft.Configuration.ConfigurationBuilders.AzureAppConfigurationBuilder, Microsoft.Configuration.ConfigurationBuilders.AzureAppConfiguration, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxx" />
<add name="SomeAzureAppConfigStoreSomeLabel"
labelFilter="SomeLabel"
mode="Greedy" prefix="My.App:" stripPrefix="true" connectionString="${MyConnectionString}" useAzureKeyVault="true" type="Microsoft.Configuration.ConfigurationBuilders.AzureAppConfigurationBuilder, Microsoft.Configuration.ConfigurationBuilders.AzureAppConfiguration, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxx" />
</builders>
</configBuilders>
<appSettings configBuilders="SomeAzureAppConfigStoreNoLabel,SomeAzureAppConfigStoreSomeLabel">
An alternative solution is using multiple labels in a single filter. If you set "%00" as one of the labels, it is treated as the empty (null) label. The builder will then load both sets of keys and, depending on the order in which the labels are listed, the 4 SomeLabel values will be used instead of the unlabeled versions. See the sketch below.
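A minimal sketch of that idea, assuming the builder passes the filter through to the App Configuration service, whose REST API accepts comma-separated label filters with %00 denoting the empty label (verify this behavior against your version of the builder):
<add name="SomeAzureAppConfigStore"
     labelFilter="%00,SomeLabel"
     mode="Greedy" prefix="My.App:" stripPrefix="true" connectionString="${MyConnectionString}" useAzureKeyVault="true" type="Microsoft.Configuration.ConfigurationBuilders.AzureAppConfigurationBuilder, Microsoft.Configuration.ConfigurationBuilders.AzureAppConfiguration, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxx" />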

In MSBuild, how to split a string on endlines?

Other questions (MSBUILD Splitting text file into lines) mention implementation-specific alternatives, but none seem to directly address how to split a simple string property into an item group on line endings.
How can you do this? Attempts that didn't work:
<ItemGroup>
  <SplitLines Include="$(SourceString.Split('\r\n'))" />
</ItemGroup>
(splits on 'r' or 'n')
<ItemGroup>
  <SplitLines Include="$(SourceString.Split('%0A%0D'))" />
</ItemGroup>
(doesn't split at all)
In case you're curious: SourceString is the output of an Exec command that needs splitting, so ReadLinesFromFile isn't an option. It can't go through an intermediate file either, because file systems are slow and this needs to be usable by build processes that are sensitive to file operations.
Using property functions is the way to go, and you can search for solutions using e.g. 'C# split string lines' in your search engine of choice, then translate the answer. This turns up this SO question, and the Regex.Split method is the easiest to implement:
<ItemGroup>
<SplitLines Include="$([System.Text.RegularExpressions.Regex]::Split(`$(SourceString)`, `\r\n|\r|\n`))" />
</ItemGroup>
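For example, a self-contained target showing the split in context (the SourceString value is illustrative; %0D and %0A are MSBuild's escape sequences for CR and LF):
<Target Name="SplitExample">
  <PropertyGroup>
    <SourceString>first%0D%0Asecond%0Athird</SourceString>
  </PropertyGroup>
  <ItemGroup>
    <SplitLines Include="$([System.Text.RegularExpressions.Regex]::Split(`$(SourceString)`, `\r\n|\r|\n`))" />
  </ItemGroup>
  <!-- Prints three messages: first, second, third -->
  <Message Importance="high" Text="Line: %(SplitLines.Identity)" />
</Target>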

How to add top level element using Linq to XML

Assuming I have an XDocument called xd, with the following XML already created:
<Alert>
<Source>
<DetectTime>12:03:2010 12:22:21</DetectTime>
</Source>
</Alert>
How would I be able to add another Alert element, such that the xml becomes:
<Alert>
<Source>
<DetectTime>12:03:2010 12:22:21</DetectTime>
</Source>
</Alert>
<Alert>
</Alert>
Adding additional child elements seems to be fairly easy, but adding another top-level element throws an exception.
Your desired XML structure is invalid; you need a root element in order to add another "Alert" node. The following code shows how to add it when a root node exists:
var xdoc = XDocument.Parse(@"<root>
<Alert>
<Source>
<DetectTime>12:03:2010 12:22:21</DetectTime>
</Source>
</Alert>
</root>");
xdoc.Root.Add(new XElement("Alert"));
Console.WriteLine(xdoc);
The above code produces <Alert /> since no child nodes were added to it (this will change once you add some). If you want the explicit closing tag as you have shown, you can use xdoc.Root.Add(new XElement("Alert", String.Empty)); instead.
To verify that your desired output has an invalid structure you can try parsing it using XDocument.Parse similar to what I've shown above.
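If your document genuinely has no wrapping root yet, a minimal sketch of one way to add one (the container name Alerts is illustrative) is to build a new document around the existing root; an XElement that already has a parent is cloned, not moved, when attached elsewhere:
using System;
using System.Xml.Linq;

class Demo
{
    static void Main()
    {
        var xd = XDocument.Parse(@"<Alert>
  <Source>
    <DetectTime>12:03:2010 12:22:21</DetectTime>
  </Source>
</Alert>");

        // xd.Root already belongs to xd, so it is copied into the
        // new container element rather than detached from xd.
        var wrapped = new XDocument(
            new XElement("Alerts", xd.Root, new XElement("Alert")));
        Console.WriteLine(wrapped);
    }
}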

How to use an analyzer in compass-lucene search

How do I add a Compass analyzer for indexing and searching data in Compass? I am using schema-based configuration for Compass. I want to use StandardAnalyzer with no stop words, because I want to index data as it is, without ignoring search terms like AND, OR, and IN. The default analyzer drops AND, OR, and IN from the data I give it for indexing.
Also, how do I configure a Snowball analyzer, either through code or through XML? An example would be appreciated.
Below is an example. You can also find more details here
<comp:searchEngine useCompoundFile="false" cacheInvalidationInterval="-1">
<comp:allProperty enable="false" />
<!--
By default, Compass uses StandardAnalyzer for indexing and searching. StandardAnalyzer
uses certain stop words (stop words are not indexed and hence not searchable) which are
valid search terms in the data-source world, e.g. 'in' for Indiana state, 'or' for Oregon, etc.
So we need to provide our own analyzer.
-->
<comp:analyzer name="default" type="CustomAnalyzer"
analyzerClass="com.ICStandardAnalyzer" />
<comp:analyzer name="search" type="CustomAnalyzer"
analyzerClass="com.ICStandardAnalyzer" />
<!--
Disable the optimizer, as we will optimize the index in a separate batch job.
Also, the merge factor is set to 1000 so that merging doesn't happen at commit time.
Merging is a time-consuming process and will be done by the batched optimizer.
-->
<comp:optimizer schedule="false" mergeFactor="1000"/>
</comp:searchEngine>
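For the Snowball part of the question, a hedged sketch of the schema-based configuration follows; type="Snowball", snowballType, and the nested stop-word element are my reading of the Compass schema, so verify the attribute names against your Compass version:
<comp:searchEngine>
  <!-- Snowball stemming analyzer; replace="true" replaces the built-in
       stop-word list with the (empty) list below, so no terms are dropped -->
  <comp:analyzer name="default" type="Snowball" snowballType="English">
    <comp:stopWords replace="true" />
  </comp:analyzer>
</comp:searchEngine>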
