Order of Forcebuild publisher in CruiseControl - cruisecontrol.net

Is there a way to order the execution of forcebuild publishers, so that each one waits for the prior forced build to finish before the next one executes?
<publishers>
  <forcebuild>
    <project>Project A</project>
  </forcebuild>
  <forcebuild>
    <project>Project B</project>
  </forcebuild>
  <forcebuild>
    <project>Project C</project>
  </forcebuild>
</publishers>

Try putting the projects in the same queue and setting their priorities to order them correctly. I've never used queues in this exact situation, but it should have the desired effect.
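A minimal sketch of what that could look like in ccnet.config, assuming the queue name BuildQueue is arbitrary; the queue and queuePriority attributes on the project element put all three projects into one serial queue, and lower non-zero priority values are picked first:

<project name="Project A" queue="BuildQueue" queuePriority="1">
  <!-- triggers, tasks, publishers... -->
</project>
<project name="Project B" queue="BuildQueue" queuePriority="2">
  <!-- triggers, tasks, publishers... -->
</project>
<project name="Project C" queue="BuildQueue" queuePriority="3">
  <!-- triggers, tasks, publishers... -->
</project>

Since a named queue processes one integration at a time, the forced build of Project B will not start until Project A has finished.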

Related

Trigger Oozie workflow from input-event even if missing

I have a classic coordinator with an input-event on an HDFS path.
<datasets>
  <dataset name="rawData" frequency="${coord:days(1)}" initial-instance="${startDate}" timezone="UTC">
    <uri-template>
      ${rawData}/${YEAR}${MONTH}${DAY}
    </uri-template>
    <done-flag>_SUCCESS</done-flag>
  </dataset>
</datasets>
<input-events>
  <data-in name="inputRawData" dataset="rawData">
    <instance>${coord:current(0)}</instance>
  </data-in>
</input-events>
It's working well.
Now I'm looking to force the workflow to run EVEN if the input-event is not satisfied after a certain amount of waiting time.
Pseudocode:
wait for inputEvent
if inputEvent available:
    run
else if waitTime > 10 min:
    run anyway
My application will read the last available data from specific Hive tables.
Thanks.
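For reference, the length of time a coordinator action waits for its input-events is governed by the timeout element of the coordinator's controls block (the value is in minutes; -1 means wait forever). A plain timeout moves the action to the TIMEDOUT state rather than running it, so a "run anyway" fallback would still have to be handled by the workflow itself. A minimal sketch, placed directly under coordinator-app:

<controls>
  <!-- give up waiting for the input-event after 10 minutes;
       the action is then marked TIMEDOUT, it does not run -->
  <timeout>10</timeout>
</controls>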

Spring File Inbound Channel Adapter slows down

I have a spring-integration flow that starts with a file inbound-channel-adapter activated by a transactional poller (tx is handled by atomikos).
The text in the file is processed and the message goes down through the flow until it gets sent to one of the JMS queues (JMS outbound-channel-adapter).
In the middle, there are some database writes within a nested transaction.
The system is meant to run 24/7.
The single message flow progressively slows down, and when I investigated I found that the stage responsible for the increasing delay is the read from the filesystem.
Below is the first portion of the integration flow:
<logging-channel-adapter id="logger" level="INFO"/>

<transaction-synchronization-factory id="sync-factory">
  <after-commit expression="payload.delete()" channel="after-commit"/>
</transaction-synchronization-factory>

<service-activator input-channel="after-commit" output-channel="nullChannel" ref="tx-after-commit-service"/>

<!-- typeb inbound from filesystem -->
<file:inbound-channel-adapter id="typeb-file-inbound-adapter"
    auto-startup="${fs.typeb.enabled:true}"
    channel="typeb-inbound"
    directory="${fs.typeb.directory.in}"
    filename-pattern="${fs.typeb.filter.filenamePattern:*}"
    prevent-duplicates="${fs.typeb.preventDuplicates:false}">
  <poller id="poller"
      fixed-delay="${fs.typeb.polling.millis:1000}"
      max-messages-per-poll="${fs.typeb.polling.numMessages:-1}">
    <transactional synchronization-factory="sync-factory"/>
  </poller>
</file:inbound-channel-adapter>

<channel id="typeb-inbound">
  <interceptors>
    <wire-tap channel="logger"/>
  </interceptors>
</channel>
I read about issues related to the prevent-duplicates option, which stores a list of seen files, but that is not the case here because I turned it off.
I don't think it is related to the filter (filename-pattern) either: the expression I use in my config (*.RCV) is cheap to apply, and the input folder does not hold many files (fewer than 100) at any one time.
Still, something gradually makes the read from the filesystem slower and slower, from a few millis to over 3 seconds within a few days of up-time.
Any hints?
You should remove or move files after they have been processed; otherwise the whole directory has to be rescanned on every poll.
In newer versions, you can use a WatchServiceDirectoryScanner which is more efficient.
But it's still best practice to clean up old files.
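A rough sketch of wiring in that scanner on Spring Integration 4.2+, assuming the same directory property as above; note that when a custom scanner is supplied, the adapter's filename-pattern and prevent-duplicates attributes must be dropped and the filter configured on the scanner instead:

<bean id="watchScanner"
      class="org.springframework.integration.file.WatchServiceDirectoryScanner">
  <constructor-arg value="${fs.typeb.directory.in}"/>
  <property name="filter">
    <bean class="org.springframework.integration.file.filters.SimplePatternFileListFilter">
      <constructor-arg value="*.RCV"/>
    </bean>
  </property>
</bean>

<file:inbound-channel-adapter id="typeb-file-inbound-adapter"
    channel="typeb-inbound"
    directory="${fs.typeb.directory.in}"
    scanner="watchScanner">
  <poller fixed-delay="${fs.typeb.polling.millis:1000}">
    <transactional synchronization-factory="sync-factory"/>
  </poller>
</file:inbound-channel-adapter>

The watch service registers for filesystem events instead of listing the directory on every poll, so the cost no longer grows with the number of files left behind.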
Finally, I found the solution.
The issue was related to the specific version of Spring I was using (4.3.4), which is affected by a bug I had not discovered yet.
The problem lies in DefaultConversionService and its use of the converterCache (see https://jira.spring.io/browse/SPR-14929 for details).
Upgrading to a more recent version resolved it.

inbound-channel-adapter - How to update row field on failure?

I have an integration that starts with a standard database query, and it updates the state in the database to indicate that the integration has worked fine. It works.
But if the data cannot be processed and an exception is raised, the state is not updated, so I would like to update my database row with a 'KO' state so the same row won't fail over and over.
Is there a way to provide a second query to execute when the integration fails?
It seems to me a very standard way of doing things, but I couldn't find a simple way to do it. I could catch the exception at every step of the integration and update the database, but that creates coupling, so there should be another solution.
I have done a lot of Google searches but could not find anything, although I'm pretty sure the answer is out there.
Just in case, here is my XML configuration for the database query (nothing fancy):
<int-jdbc:inbound-channel-adapter auto-startup="true" data-source="datasource"
    query="select * FROM MyTable where STATE='ToProcess'"
    channel="stuffTransformerChannel"
    update="UPDATE MyTable SET STATE='OK' where id in (:id)"
    row-mapper="myRowMapper" max-rows-per-poll="1">
  <int:poller fixed-rate="1000">
    <int:transactional />
  </int:poller>
</int-jdbc:inbound-channel-adapter>
I'm using spring-integration version 4.0.0.RELEASE
Since you are within a transaction, it is normal behavior that a rollback occurs and your DB returns to a clean state.
It is a classical pattern in that case to deal with the data at the application level rather than with some built-in tool. That's why we don't provide any on-error-update option: it couldn't be a one-size-fits-all solution.
Since you are going to update the row anyway, you should do it on the rollback event, within a new transaction. However, it should happen on the same thread, to prevent the next polling task from fetching the same row.
For this purpose we provide the transaction-synchronization-factory feature:
<int-jdbc:inbound-channel-adapter max-rows-per-poll="1">
  <int:poller fixed-rate="1000" max-messages-per-poll="1">
    <int:transactional synchronization-factory="syncFactory"/>
  </int:poller>
</int-jdbc:inbound-channel-adapter>

<int:transaction-synchronization-factory id="syncFactory">
  <int:after-rollback channel="stuffErrorChannel"/>
</int:transaction-synchronization-factory>

<int-jdbc:outbound-channel-adapter
    query="UPDATE MyTable SET STATE='KO' where id in (:payload[id])"
    channel="stuffErrorChannel">
  <int-jdbc:request-handler-advice-chain>
    <tx:advice id="requiresNewTx">
      <tx:attributes>
        <tx:method name="handle*Message" propagation="REQUIRES_NEW"/>
      </tx:attributes>
    </tx:advice>
  </int-jdbc:request-handler-advice-chain>
</int-jdbc:outbound-channel-adapter>
Hope this is clear.

aggregator release strategy depending on another service activator still running

I understand how aggregating based on size works, but I also want to make the release strategy depend on another step in the pipeline still running. The idea is that I move files to a certain dir "source", aggregate enough files, move them from "source" to "stage", and then process the staged files. While this process is running I don't want to put more files into stage, but I do want to keep adding files to the source folder (that part is handled by a dispatcher channel connected to a file inbound adapter before the aggregator).
<int:aggregator id="filesBuffered"
    input-channel="sourceFilesProcessed"
    output-channel="stagedFiles"
    release-strategy-expression="size() == 10"
    correlation-strategy-expression="'mes-group'"
    expire-groups-upon-completion="true"
    />

<int:channel id="stagedFiles" />

<int:service-activator input-channel="stagedFiles"
    output-channel="readyForMes"
    ref="moveToStage"
    method="move" />
So, as you can see, I don't want to release the aggregated messages while an existing instance of the moveToStage service activator is running.
I thought about making the stagedFiles channel a queue channel, but that doesn't seem right, because I want the files to be passed to moveToStage as a Collection, not one file at a time, which is what I assume a queue channel would deliver. Instead I want to reach a threshold, e.g. 10 files, pass those to stagedFiles so that moveToStage can process them, and in the meantime let the aggregator keep aggregating files and release them all only once that step is done.
Thanks
I suggest you have a flag as an AtomicBoolean bean, flip it from your moveToStage#move method, and check its state from:
release-strategy-expression="size() >= 10 and @stagingFlag.get()"
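A minimal sketch of that wiring, assuming the bean name stagingFlag and that moveToStage#move flips the flag itself (false on entry, back to true in a finally block, so a staging failure cannot block the aggregator forever):

<!-- true = staging is idle, so the aggregator may release -->
<bean id="stagingFlag" class="java.util.concurrent.atomic.AtomicBoolean">
  <constructor-arg value="true"/>
</bean>

<int:aggregator id="filesBuffered"
    input-channel="sourceFilesProcessed"
    output-channel="stagedFiles"
    release-strategy-expression="size() >= 10 and @stagingFlag.get()"
    correlation-strategy-expression="'mes-group'"
    expire-groups-upon-completion="true"/>

Keep in mind that the release strategy is only re-evaluated when a new message arrives at the aggregator; since files keep flowing in from the source folder, the next arrival after staging finishes will trigger the release.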

cc.net: set email publisher subject from trigger name

Folks!
I have a project in cc.net, and this project may be started in 3 ways:
Forced (when a user clicks the "force" button in the web dashboard)
By a project trigger
By a scheduler trigger
After the build, the server sends mail to the stakeholders.
Now I want to add the trigger name to the mail subject, e.g.
force_Project name ... build result
I have tried to use variables:
<projectTrigger project="Someproject">
  <triggerStatus>Success</triggerStatus>
  <variable name="Trigger" value="commit" />
</projectTrigger>
and
<subjectSettings>
  <subject buildResult="Broken" value="{Trigger} is broken" />
  <subject buildResult="StillBroken" value="{Trigger} is still broken" />
</subjectSettings>
but this does not produce the desired result.
What approach could help me?
You can use the fileLabeller of ccnet.
<tasks>
  <!-- build tasks... -->
  <fileLabeller>
    <labelFilePath>c:\buildstuff\mybuild-label.txt</labelFilePath>
    <allowDuplicateSubsequentLabels>false</allowDuplicateSubsequentLabels>
  </fileLabeller>
</tasks>
Set up your build task to write the contents of mybuild-label.txt. I write something like this:
[repo1 rev: 99, repo2 rev: 9999]
This will become part of the subject.
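A hypothetical example of such a task, assuming a Windows build agent; the echoed "[force]" text is a placeholder for whatever your build can determine about how it was triggered:

<exec>
  <executable>cmd.exe</executable>
  <buildArgs>/c echo [force] &gt; c:\buildstuff\mybuild-label.txt</buildArgs>
</exec>

The fileLabeller then reads the file's contents as the build label, and the email publisher includes the label in the subject.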
