CC.NET alternative trigger for waiting until another step has completed (was using intervalMultiActivityTrigger - triggerActivity) - cruisecontrol.net

I am using an old version of CC.NET on a continuous build server with the intervalTrigger and intervalMultiActivityTrigger. These trigger types do not exist in recent versions of CC.NET, and I am having difficulty finding a workaround in the documentation or on Stack Overflow.
I have a continuous build server with 4 projects/steps.
1. Get latest code (checks every 5 minutes and continues if there is new code checked in)
2. Build database (triggers when #1 is complete)
3. Build code (triggers when #2 is complete)
4. Run unit tests (triggers when #3 is complete)
Step 1 uses an intervalMultiActivityTrigger to check whether any of the other 3 projects are not "Sleeping", so as not to start a second build until the rest of the steps have completed.
<trigger type="intervalMultiActivityTrigger" seconds="300" project="04-Do_UnitTests" projectTwo="03-Build_Code" projectThree="02-Build_Database" triggerActivity="Sleeping"/>
What are some alternatives that provide the same functionality in the latest versions of CC.NET (checking whether another project's activity is "Sleeping")?

A queue is what you are looking for. A queue can be set to contain all of your projects, and a project's build (even a forced one) will only start once it is at the head of the queue.
By default each project is in a queue with the same name as the project. The following puts each project into the same queue, named queue1, with default settings.
<project>
<queue>queue1</queue>
</project>
The queue can be configured further by defining it outside of the project scope with additional properties.
Your case probably does not need this, but the information is here: http://cruisecontrolnet.org/projects/ccnet/wiki/Queue_Configuration
Queues have existed since version 1.3, so as long as you are using that or a later version you should be fine.
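Applied to the four projects in the question, a minimal sketch might look like the following (the name of the first project is assumed; the other three are taken from the trigger above, and triggers, source control and tasks are omitted):
<project name="01-Get_Latest_Code">
  <queue>queue1</queue>
  <!-- intervalTrigger, source control and tasks as before -->
</project>
<project name="02-Build_Database">
  <queue>queue1</queue>
</project>
<project name="03-Build_Code">
  <queue>queue1</queue>
</project>
<project name="04-Do_UnitTests">
  <queue>queue1</queue>
</project>
With all four projects in queue1, a new "get latest" cycle cannot start while any downstream step is still building, which is what the intervalMultiActivityTrigger was being used for.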

Related

How to restore NuGet package in Azure Pipeline?

I am new to Azure DevOps and trying to create my first Azure pipeline. I have an ASP.NET MVC project and there are a few NuGet packages that need to be restored before the MSBuild step.
Unfortunately, the NuGet restore is failing with the following error:
The pipeline is not valid. Job Job_1: Step 'NuGetCommand' references
task 'NuGetCommand' at version '2.194.0' contains an execution handler
that relies on NodeJS version '6' which is restricted by your
administrator.
NodeJS 6 came disabled out of the box, so we are not going to enable it.
My Questions:
Is there an alternative to NuGet restore that does not use NodeJS?
Is there a way to update the NodeJS6 to a higher version?
Update 23-Nov-2021:
I have found a workaround for the time being: a custom PowerShell script that restores NuGet packages and builds the Visual Studio project.
$msBuildExe = 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\Bin\MSBuild.exe'
Write-Host "Restoring NuGet packages" -foregroundcolor green
& "$($msBuildExe)" "$($path)" /p:Configuration=Release /p:platform=x86 /t:restore
Note: $path here is the path to my .csproj file
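For completeness, the build itself can follow the same pattern as a second MSBuild call after the restore; this is a sketch rather than the exact script from the pipeline:
Write-Host "Building project" -foregroundcolor green
# Build after the restore above, using the same configuration and platform.
& "$($msBuildExe)" "$($path)" /p:Configuration=Release /p:platform=x86 /t:Build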
Apparently, other people are hitting the same issue, and it is just a matter of time before the task is updated by the open-source community.
Here are some similar issues being faced in other tasks as well:
https://github.com/microsoft/azure-pipelines-tasks/issues/15526
https://github.com/microsoft/azure-pipelines-tasks/issues/15511
https://github.com/microsoft/azure-pipelines-tasks/issues/15516
https://github.com/microsoft/azure-pipelines-tasks/issues/15525
It's Azure DevOps' NuGetCommand task that uses NodeJS, not NuGet itself. Therefore, the workaround is to find a way to restore without using the NuGetCommand task.
Idea 1: Use the DotNetCoreCLI task instead. However, this probably won't work for you, since you said your project is ASP.NET MVC rather than ASP.NET Core. It also appears to need NodeJS to run.
Idea 2: Use MSBuild restore. You can test whether this works on your local machine by clearing your global packages folder (or temporarily configuring NuGet to use a different path) and then running msbuild -t:restore My.sln from a Developer PowerShell for Visual Studio prompt. If your project uses packages.config rather than PackageReference, you'll also need to pass -p:RestorePackagesConfig=true (although maybe this is currently broken). I'm not an expert on Azure Pipelines tasks, so I don't know what it means that this task defines both PowerShell and Node execution entry points, but maybe it means it will work even if your CI agent doesn't allow NodeJS.
Idea 3: Don't use any of the built-in tasks; just use - script: or - task: PowerShell@2. Even that is questionable, since the powershell task also defines a Node execution entry point. I'm guessing it will work, but I don't have access to a CI agent where NodeJS is forbidden, so I couldn't test even if I wanted to. Anyway, if this works, then you can run MSBuild yourself (but it might also be your responsibility to find msbuild.exe if it's not on the path), or you can download nuget.exe yourself and execute it in your script. The point is, if you can get Azure Pipelines' script task working, you can run any script and do everything you need yourself.
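To make Ideas 2 and 3 concrete, a step along these lines could be used. This is only a sketch: it assumes msbuild.exe is on the agent's PATH (otherwise locate it first, as in the update above), and MySolution.sln is a placeholder for your solution file.
steps:
- task: PowerShell@2
  displayName: 'Restore NuGet packages with MSBuild (no NuGetCommand task)'
  inputs:
    targetType: 'inline'
    script: |
      # Assumes msbuild.exe is on the PATH; otherwise resolve it first
      # (for example with a hard-coded path, as in the update above).
      msbuild MySolution.sln -t:restore -p:Configuration=Release
      # Projects using packages.config may also need:
      # msbuild MySolution.sln -t:restore -p:RestorePackagesConfig=true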
Idea 4: Use Microsoft-hosted agents. Microsoft documents all the software pre-installed on those machines, which includes NodeJS. The downside is that once you exceed the free quota it costs money, and I've worked for companies where it's easier to get money to buy hardware once off (and pretend that maintaining the server is free, even though it reduces team productivity) than to pay for a monthly service. So I'll totally understand if this is not an option for you.
Idea 5: Talk to whoever maintains your CI agents and convince them to allow and install NodeJS. It's clearly a fundamental part of Azure Pipelines. The tasks are open source on GitHub, and you can see that pretty much all of them use NodeJS to orchestrate whatever work they do. Frankly, I thought the agent software itself was a NodeJS application, so I'm surprised it runs without NodeJS.

Azure Artifacts Feed is much slower than maven central

I'm working on a project in Azure DevOps and, as recommended in the doc, I created an Artifacts feed with Maven Central as an upstream source to store all my dependencies (I don't really need to publish artifacts for now).
So I configured my local Maven to fetch all the dependencies from my feed instead of Maven Central, and it all works fine, except that it's very slow compared to Maven Central.
When I start from an empty .m2 on my local machine, it takes 1 min 15 secs to build my project when downloading the dependencies from Maven Central, but over 8 minutes when downloading them from the feed (which already contains all the dependencies).
I could live with that, since the download of everything happens only on the first build.
But the issue is that it's also slower when building my project from Azure Pipelines, which I really didn't expect since it's a connection from Azure to Azure and within the same organization. In this case, it takes at least twice as long when using the feed rather than Maven Central. And this will be true every time, since Azure Pipelines gives you a fresh VM each time you build (I'm using a hosted agent), so there is no dependency caching in this case.
It's really annoying since my project is just a HelloWorld so far, so it will only get worse over time.
Using a repository manager/feed is the best practice according to both Maven and Azure, but at this point I'm really thinking of going for the bad practice of getting everything from Maven Central instead of my feed, at least in my pipeline, to improve the performance.
Am I the only one having this issue? What are your thoughts about this?
Finally, after diving into the documentation for Azure Pipelines recently, I found out there is a way to cache the Maven repository between runs, which partially solves my issue since the full download of the dependencies will happen only once.
Here is the doc in question for those who are interested.
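For reference, the caching approach boils down to something like the following sketch, based on the Cache task; the cache key layout, the pom.xml location and the Maven goals are placeholders to adapt:
variables:
  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository

steps:
- task: Cache@2
  displayName: 'Cache the local Maven repository between runs'
  inputs:
    key: 'maven | "$(Agent.OS)" | **/pom.xml'
    restoreKeys: |
      maven | "$(Agent.OS)"
      maven
    path: $(MAVEN_CACHE_FOLDER)
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    # Point Maven at the cached folder instead of the default ~/.m2/repository
    options: '-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'
    goals: 'package'
With this in place, only a cache miss (for example after a change to pom.xml) triggers a full download from the feed.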

In GitLab is it possible to configure a Scheduled Pipeline that runs on all branches periodically?

I am using GitLab for Git version control and GitLab CI / CD for my automated builds. Usually, the builds are triggered by Git repository activity but I also have a weekly build to ensure that projects not under active development continue to work. When there is only a "master" branch on a project, it is easy to ensure a weekly build is run on the latest code. When there are multiple branches in a project, I would like to repeat the pipeline work for each of them in turn.
What I would like to be able to do is schedule a build (weekly, fortnightly or monthly) that runs on all current branches visible in Git. Is that possible within GitLab's Continuous Delivery system?
The motivation behind doing this is to ensure that external activity, such as tool and library updates, does not introduce an issue without it being promptly visible. Assuming there is reasonable automated testing, coverage, and comprehensive builds for the target platforms, a monthly build with the latest tools should highlight any problem promptly. This is better than an invisible mountain of problems accumulating while a project is shelved for a few years (or months). Sometimes all that is required is occasional maintenance.
There are only a handful of feature branches and release lines on the projects currently. I would not expect that number to grow significantly. There is time enough over a weekend to run the required pipelines dozens if not hundreds of times at present.
Ideally, I would like something straightforward to set up. I cannot see anything in the admin GUI that would allow this at present. I did look at the API and I can see there is some scope there to script the addition and removal. Perhaps some script that is run once a month to create new Scheduled pipelines based on git branches is the only way. A pre-made solution on those lines would be perfectly acceptable. If nothing exists I might start work on something like that in time.
I am currently running GitLab Community Edition 11.2.3 06cbee3 (GitLab CE 11.2.3). If there is an Enterprise Edition-only answer, that is fine and will add to the justification for purchasing the EE version. I would pick a CE solution over an EE one, though.
You cannot set a schedule for all branches at once; you have to configure one schedule per branch yourself.
"Perhaps some script that is run once a month to create new Scheduled pipelines based on git branches is the only way."
I would go that way.
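As a rough sketch of that approach, a single scheduled pipeline can fan out to every branch through the pipeline triggers API. The job below is only an illustration: it assumes curl is available in the job image, that TRIGGER_TOKEN is a CI/CD variable holding a pipeline trigger token, and that gitlab.example.com is replaced with your own host.
# One job, run only from a pipeline schedule, that triggers a normal pipeline
# for every branch. Existing build jobs can add "except: [schedules]" so the
# scheduled run only performs this fan-out.
build-all-branches:
  only:
    - schedules
  script:
    - |
      for ref in $(git ls-remote --heads origin | awk '{print $2}' | sed 's|refs/heads/||'); do
        curl --request POST \
          --form "token=$TRIGGER_TOKEN" \
          --form "ref=$ref" \
          "https://gitlab.example.com/api/v4/projects/$CI_PROJECT_ID/trigger/pipeline"
      done
This keeps a single schedule to maintain while still exercising every branch, at the cost of storing a trigger token as a CI variable.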

Will CruiseControl.NET queue triggered builds while a forced build is already running?

What happens to a CruiseControl.NET project if a trigger fires to build the project while it's already building due to someone "Force" building it earlier?
Will the trigger's build request get queued?
We use CCNET 1.5 and 1.6.
Nothing; while a build is running, triggers are not executed. However, if you have configured a source control block (for example for SVN or Perforce), modifications made during the forced build will still be detected and will cause another build afterwards.
You can see this more easily if you add <queueStatusServerPlugin /> to the serverPlugins section in dashboard.config.
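For example, the relevant part of dashboard.config might end up looking roughly like this (only the serverPlugins section is shown as a sketch; your existing plugins stay as they are):
<plugins>
  <!-- farmPlugins, projectPlugins and buildPlugins stay as they are -->
  <serverPlugins>
    <!-- existing server plugins ... -->
    <queueStatusServerPlugin />
  </serverPlugins>
</plugins>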

Does CC.NET detect modifications when a build script performs a check-in

I've been doing some research into finally automating our Development builds and still have one nagging question that I'm hoping the StackOverflow community can solve for me.
My understanding is that an IntervalTrigger, when set up properly, will check VSS every X seconds for changes and, if it finds a modified file, will run my tasks. One of my tasks would be to check out the AssemblyInfo files and update the version numbers. After these files are updated, they would be checked back into VSS.
Thinking about this solution, it doesn't make much sense, because in my mind I'm forcing the check for changed files to come up true every time the trigger fires. Am I missing something here? Is there a way of doing this without triggering an automatic build on the AssemblyInfo check-in?
You can use a Filtered Source Control Block to exclude certain files from the trigger.
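For reference, a minimal sketch of such a block might look like the following; the inner VSS settings and the exact pattern syntax are placeholders that would need to be adapted to your existing configuration:
<sourcecontrol type="filtered">
  <sourceControlProvider type="vss">
    <!-- your existing VSS settings go here -->
  </sourceControlProvider>
  <exclusionFilters>
    <pathFilter>
      <!-- commits that only touch AssemblyInfo files no longer count as modifications -->
      <pattern>**/AssemblyInfo.cs</pattern>
    </pathFilter>
  </exclusionFilters>
</sourcecontrol>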
I just posted a bunch about my default build process here which may be of some interest to you: SVN Website Development and Deployment Solution
The way I usually configure my projects with CC.NET is to have two project blocks per solution. One uses an interval trigger and does nothing more than get the latest from my repository, build the solution, and run unit tests. The other uses a schedule trigger and does everything the first one does, but actually publishes a build. This includes changing version numbers, publishing files, etc. This might work in your case, since the change in version would cause the interval project to trigger, but only once.
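As a rough illustration of that layout (project names, times and everything outside the trigger blocks are placeholders):
<project name="MyApp-CI">
  <triggers>
    <!-- continuous project: poll source control and build on every change -->
    <intervalTrigger seconds="60" />
  </triggers>
  <!-- get latest, build the solution, run unit tests ... -->
</project>
<project name="MyApp-Publish">
  <triggers>
    <!-- publishing project: one forced build per night -->
    <scheduleTrigger time="23:00" buildCondition="ForceBuild" />
  </triggers>
  <!-- update version numbers, build, publish files ... -->
</project>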
Checking the automatically generated AssemblyInfo files into the version control system is a bad idea; don't do it. You'll get a lot of noise (50% of all commits!) in your history. Also, it does not give you any new information, since you can always derive it from the VCS. Having your build script autogenerate those files is good practice, but don't push those changes back!
