cc.net: set subject email publisher from trigger name - cruisecontrol.net

Folks!
I have a project in CC.NET, and this project may be started in 3 ways:
Forced (when a user clicks the "Force" button in the web dashboard)
By project trigger
By scheduler trigger
After the build, the server sends mail to stakeholders.
Now I want to add the trigger name to the mail subject, e.g.
force_Project name ...build result
I have tried using variables:
<projectTrigger project="Someproject">
<triggerStatus>Success</triggerStatus>
<variable name="Trigger" value="commit" />
</projectTrigger>
and
<subjectSettings>
<subject buildResult="Broken" value="{Trigger} is broken" />
<subject buildResult="StillBroken" value="{Trigger} is still broken" />
</subjectSettings>
but this approach didn't produce the desired result.
What approach could help me?

You can use the fileLabeller of ccnet.
<tasks>
build tasks...
<fileLabeller>
<labelFilePath>c:\buildstuff\mybuild-label.txt</labelFilePath>
<allowDuplicateSubsequentLabels>false</allowDuplicateSubsequentLabels>
</fileLabeller>
</tasks>
Set up your build task to write the contents of mybuild-label.txt. I write something like this:
[repo1 rev: 99, repo2 rev: 9999]
This will become part of the subject.
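For instance, a rough sketch of a build task that writes the label file (the echo text and path are only illustrative; any task that produces the file before the fileLabeller runs will do):
<exec>
<executable>cmd.exe</executable>
<!-- writes the label text that the fileLabeller will pick up -->
<buildArgs>/c echo [repo1 rev: 99, repo2 rev: 9999] > c:\buildstuff\mybuild-label.txt</buildArgs>
</exec>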

Nlog / event-properties: how to hint NLog to ignore/skip empty/null properties from the final log entry

Basically the title itself explains what I'm trying to achieve, but in greater detail:
Let's say one has an XML setup similar to the following for the layout:
layout="<log level='${level:lowerCase=True}' time='${longdate:universalTime=true}' myCustomProperty1='${event-properties:item=myCustomProperty1}' myCustomProperty2='${event-properties:item=myCustomProperty2}'>${newline}
...."
Now, when myCustomProperty1 is set to, let's say, 'blah1' but myCustomProperty2 is not added to the eventInfo.Properties collection, the resulting entry looks like the following:
<log level='blah' time='blah' myCustomProperty1='blah1' myCustomProperty2=''>
...
The question is: what can be done (preferably in the config file) to exclude the myCustomProperty2 attribute from the finally rendered result, so the output looks as follows:
<log level='blah' time='blah' myCustomProperty1='blah1'>
...
Here is the gotcha: the same logger is used by multiple threads, so I can't simply alter the target's layout configuration at runtime since it may negatively affect the rest of the threads.
Thank you in advance for your suggestions.
-K
You could try using ${when}:
<variables>
<variable name="var_myCustomProperty1" value="${when:when=length('${event-properties:item=myCustomProperty1}')>0:Inner= myCustomProperty1="${event-properties:item=myCustomProperty1}"}"/>
<variable name="var_myCustomProperty2" value="${when:when=length('${event-properties:item=myCustomProperty2}')>0:Inner= myCustomProperty2="${event-properties:item=myCustomProperty2}"}"/>
</variables>
<targets>
<target name="test" type="Console" layout="<log level='${level:lowerCase=True}' time='${longdate:universalTime=true}'${var_myCustomProperty1}${var_myCustomProperty2} />" />
</targets>
NLog 4.6 will include the XmlLayout, which might make things easier:
https://github.com/NLog/NLog/pull/2670
Alternatively, you can use the JsonLayout if XML output is not a requirement (renderEmptyObject="false").
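A rough sketch of that (the target name is just an example; as far as I know, JsonLayout leaves out attributes that render to an empty value by default):
<target name="jsonTest" type="Console">
<layout type="JsonLayout" renderEmptyObject="false">
<attribute name="level" layout="${level:lowerCase=True}" />
<attribute name="time" layout="${longdate:universalTime=true}" />
<attribute name="myCustomProperty1" layout="${event-properties:item=myCustomProperty1}" />
<attribute name="myCustomProperty2" layout="${event-properties:item=myCustomProperty2}" />
</layout>
</target>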

Spring File Inbound Channel Adapter slows down

I have a spring-integration flow that starts with a file inbound-channel-adapter activated by a transactional poller (tx is handled by atomikos).
The text in the file is processed and the message goes down through the flow until it gets sent to one of the JMS queues (JMS outbound-channel-adapter).
In the middle, there are some database writes within a nested transaction.
The system is meant to run 24/7.
It happens that the single message flow progressively slows down, and when I investigated, I found that the stage responsible for the increasing delay is the read from the filesystem.
Below is the first portion of the integration flow:
<logging-channel-adapter id="logger" level="INFO"/>
<transaction-synchronization-factory id="sync-factory">
<after-commit expression="payload.delete()" channel="after-commit"/>
</transaction-synchronization-factory>
<service-activator input-channel="after-commit" output-channel="nullChannel" ref="tx-after-commit-service"/>
<!-- typeb inbound from filesystem -->
<file:inbound-channel-adapter id="typeb-file-inbound-adapter"
auto-startup="${fs.typeb.enabled:true}"
channel="typeb-inbound"
directory="${fs.typeb.directory.in}"
filename-pattern="${fs.typeb.filter.filenamePattern:*}"
prevent-duplicates="${fs.typeb.preventDuplicates:false}" >
<poller id="poller"
fixed-delay="${fs.typeb.polling.millis:1000}"
max-messages-per-poll="${fs.typeb.polling.numMessages:-1}">
<transactional synchronization-factory="sync-factory"/>
</poller>
</file:inbound-channel-adapter>
<channel id="typeb-inbound">
<interceptors>
<wire-tap channel="logger"/>
</interceptors>
</channel>
I read something about issues related to the prevent-duplicates option, which stores a list of seen files, but that is not the case here because I turned it off.
I don't think it is related to the filter (filename-pattern), because the expression I use in my config (*.RCV) is cheap to apply and the input folder does not contain a lot of files (fewer than 100) at any one time.
Still, something gradually makes the read from the filesystem slower and slower over time, from a few milliseconds to over 3 seconds within a few days of uptime.
Any hints?
You should remove or move files after they have been processed; otherwise the whole directory has to be rescanned.
In newer versions, you can use a WatchServiceDirectoryScanner, which is more efficient.
But it's still best practice to clean up old files.
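As a rough sketch, assuming Spring Integration 4.3 or later, the watch service can be switched on directly on the adapter:
<file:inbound-channel-adapter id="typeb-file-inbound-adapter"
channel="typeb-inbound"
directory="${fs.typeb.directory.in}"
filename-pattern="${fs.typeb.filter.filenamePattern:*}"
use-watch-service="true"
watch-events="CREATE">
<poller id="poller" fixed-delay="${fs.typeb.polling.millis:1000}">
<transactional synchronization-factory="sync-factory"/>
</poller>
</file:inbound-channel-adapter>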
Finally, I found the solution.
The issue was related to the specific version of Spring I was using (4.3.4), which is affected by a bug I had not yet discovered.
The problem is related to DefaultConversionService and its use of converterCache (see https://jira.spring.io/browse/SPR-14929 for more details).
Upgrading to a more recent version resolved it.
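For anyone hitting the same thing, a hedged sketch of the dependency bump (the version below is only a placeholder; use whichever release is listed as the fix version on SPR-14929):
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
<!-- placeholder: pick the release that contains the SPR-14929 fix -->
<version>4.3.x.RELEASE</version>
</dependency>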

aggregator release strategy depend on another service activator running

I understand how aggregating based on size works, but I also want to make the release strategy depend on whether another step in the pipeline is still running. The idea is that I move files to a certain dir, "source", aggregate enough files, move them from "source" to "stage", and then process the staged files. While this process is running, I don't want to put more files in "stage", but I do want to continue adding files to the "source" folder (that part is handled by a dispatcher channel connected to a file inbound adapter before the aggregator).
<int:aggregator id="filesBuffered"
input-channel="sourceFilesProcessed"
output-channel="stagedFiles"
release-strategy-expression="size() == 10"
correlation-strategy-expression="'mes-group'"
expire-groups-upon-completion="true"
/>
<int:channel id="stagedFiles" />
<int:service-activator input-channel="stagedFiles"
output-channel="readyForMes"
ref="moveToStage"
method="move" />
So, as you can see, I don't want to release the aggregated messages if an existing instance of the moveToStage service activator is running.
I thought about making the stagedFiles channel a queue channel, but that doesn't seem right, because I want the files to be passed to moveToStage as a Collection, not one file at a time, which I assume is what would happen with stagedFiles as a queue channel. Instead, I want to reach a threshold, e.g. 10 files, and pass those to stagedFiles so that moveToStage can process them; until that step is done, I want the aggregator to keep aggregating files and only release them all once it is finished.
Thanks
I suggest you have a flag as an AtomicBoolean bean, use it from your moveToStage#move, and check its state from:
release-strategy-expression="size() >= 10 and @stagingFlag.get()"
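A minimal sketch of the flag bean (the bean id stagingFlag just matches the expression above; how exactly moveToStage#move toggles it is up to you):
<!-- shared flag: true while it is OK to release a new batch to staging -->
<bean id="stagingFlag" class="java.util.concurrent.atomic.AtomicBoolean">
<constructor-arg value="true"/>
</bean>
Inside moveToStage#move you would then call stagingFlag.set(false) when the move starts and stagingFlag.set(true) when it finishes, so the aggregator only releases while no move is in progress.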

WSO2 - Using get-property() function in Property/Xquery Mediators

Our current service has 7 operations. When writing an outbound XQuery "local entry" in WSO2, we're trying to retrieve the name of the current operation being executed (how can this be so difficult?).
After reading what I could find in WSO2's documentation, it appears that we need to set up both a Property and an XQuery mediator. Supposedly the Property mediator would pull the value using something like get-property('OperationName'), and then this would be referenced and passed through the XQuery mediator.
The other idea was that we needed to define it as a variable in the "Local Registry entry definitions" and then it would be available in all parts of the sequence.
I've tried for 2 days but haven't quite got it.
Please tell me what I'm missing...
Did you try the following XQuery sample [1]? I modified the query mediator to get the operation name as follows:
<variable xmlns:ax21="http://services.samples/xsd" xmlns:m0="http://services.samples" name="code" expression="get-property('OperationName')" type="STRING" />
This worked fine. I could see getQuote in the response message.
[1] http://wso2.org/project/esb/java/4.0.2/docs/samples/advanced_mediation_samples.html#Sample390
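If you prefer the two-step approach from your question, a rough sketch would be to capture the value with a Property mediator first and then reference it from the XQuery mediator (the property name OpName is just an example):
<property name="OpName" expression="get-property('OperationName')" scope="default" type="STRING"/>
<variable name="code" expression="get-property('OpName')" type="STRING"/>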

Sending emails with the error log through CruiseControl

How do you configure CruiseControl to send out emails that contain the error log whenever a build fails? I've gotten it to send out emails to users when the build fails, but they do not include the actual error that caused the build to fail. I know that if I only configure it to send emails to the users that have made modifications, the error log is included in those emails. This is a sample of what I have:
<publishers>
<rss/>
<xmllogger/>
<email from="abc@abc.com" mailhost="abc.abc.com" includeDetails="TRUE">
<users>
<user name="Joe" group="devs" address="joe@abc.com"/>
<user name="Jim" group="devs" address="jim@abc.com"/>
</users>
<groups>
<group name="devs" notification="Failed"/>
</groups>
</email>
</publishers>
You can check if \cruisecontrol.net\server\xsl\compile.xsl is the same as \cruisecontrol.net\webdashboard\xsl\compile.xsl.
Compile.xsl is the default file used to print the error messages from your error log. The one in \webdashboard\ is used for the web dashboard (as the name implies) and the one under \server\ is used for emails.
You can also check in ccnet.exe.config whether or not \cruisecontrol.net\server\xsl\compile.xsl is used for emails.
Mine, for example, points to compile.xsl under \server:
<!-- Specifies the stylesheets that are used to transform the build results when using the EmailPublisher -->
<xslFiles>
<file name="xsl\header.xsl" />
<file name="xsl\compile.xsl" />
<file name="xsl\unittests.xsl" />
<file name="xsl\fit.xsl" />
<file name="xsl\modifications.xsl" />
<file name="xsl\fxcop-summary.xsl" />
</xslFiles>
Your email publisher will take the buildlog.xml and transform it against whatever XSLs are configured in either your console or service config, depending on which you use. There should be no difference in the content of the email, though, no matter who you have it configured to be sent to and when, as long as you have the merge before the email publisher and the email in the publishers section. I don't see how it could be different. Are you sure the same failure produces different emails? My guess would be that you are failing somewhere badly and the build log is not being generated in one case.
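For example, a rough sketch of that ordering (the merge file pattern is only a placeholder for wherever your compiler or test output XML ends up):
<publishers>
<merge>
<files>
<file>C:\Builds\MyProject\Artifacts\*-results.xml</file>
</files>
</merge>
<xmllogger/>
<email from="abc@abc.com" mailhost="abc.abc.com" includeDetails="TRUE">
<!-- users and groups as in your existing config -->
</email>
</publishers>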
The build log is getting generated. I can see the error. It's just not included in the email.
