I'm setting up a Cruise Control configuration for one project. I have an <msbuild> task under <tasks>. It seems that I have the option of putting my file deployment under either <tasks> or <publishers>.
Logically I would think it should reside under <publishers> but none of the examples I have seen online work this way.
Should deployment happen within <tasks> or <publishers>?
It depends. Since CC.NET 1.5, tasks and publishers are essentially the same, so you can put any task in the publishers section. The main difference is that if a publisher fails, your project does not fail (at least it is not shown as failed in CCTray).
For "simple" deployment (for example, copying a dll to a server) I did it under the publishers because this deployment task does not impact the success of the build and it's not that much important if the deploy fails.
If the deployment is a important part of the build (website deployment for example), then I put it in the tasks section to be sure to be notified when it fails.
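For the simple publishers-section case, here is a minimal sketch (the xcopy arguments, file names, and share path are made up for illustration; <xmllogger /> is the standard logging publisher):

<publishers>
  <!-- Simple deployment: if this exec fails, the build is still reported as successful -->
  <exec>
    <executable>xcopy</executable>
    <buildArgs>/Y build\MyLib.dll \\server\deploy\</buildArgs>
  </exec>
  <xmllogger />
</publishers>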
The deployment task should be in the tasks section.
Since deployment works with the final build package, the build must succeed before deploying.
The publishers section is executed whatever the build result is. If you want to deploy only when all tasks succeed, make the deployment the last task in the tasks section.
That way, if any task fails, the deployment will not occur.
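Concretely, a minimal sketch of that ordering (the project name, paths, and deploy script are illustrative assumptions, not from the question):

<project name="MyProject">
  <tasks>
    <msbuild>
      <executable>C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe</executable>
      <projectFile>src\MySolution.sln</projectFile>
      <targets>Build</targets>
    </msbuild>
    <!-- Deployment is last: it only runs if every task above succeeded -->
    <exec>
      <executable>scripts\deploy.bat</executable>
    </exec>
  </tasks>
  <publishers>
    <xmllogger />
  </publishers>
</project>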
EDIT: from ccnet documentation:
The publishers section is run after the build completes (whether it passes or fails). This is where you aggregate and publish the build results.
and
Historical Note: Publishers and Tasks were different objects in earlier versions of ccnet. Now they are interchangeable, and can appear either in the <prebuild> section, the <tasks> section, or the <publishers> section of the ccnet.config file depending on whether they should be run before, during or after the build.
Reference: http://confluence.public.thoughtworks.org/display/CCNET/Task+And+Publisher+Blocks
I have a number of solutions, each of which have a mixture of applications and libraries. Generally speaking, the applications get built and deployed, and the libraries get published as NuGet packages to our internal packages feed. I'll call these "apps" and "nugets."
In my Classic Pipelines, I would have one build for the apps, and one for the nugets. With path filters, I would indicate folders that contain the nuget material, and only trigger the nuget build if those folders had changes. Likewise, the app build would have path filters to detect if any app code had changed. As a result, depending on what was changed in a branch, the app build might run, the nuget build might run, or both might run.
Now I'm trying to convert these to YAML. It seems we can only have one pipeline set up for CI, so I've combined the stages/jobs/steps for nugets and apps into this single pipeline. However, I can't seem to figure out a good way to only trigger the nuget tasks if the nuget path filters are satisfied and only the app tasks if the app path filters are satisfied.
I am hoping someone knows a way to do something similar to one of the following (or anything else that would solve the issue):
Have two different CI pipelines with their own sets of triggers and branch/path filters such that one or both might run on a given branch change
Set some variables based on which paths have changes so that I could later trigger the appropriate tasks using conditions
Make a pipeline always trigger, but only do tasks if a path filter is satisfied (so that the nuget build could always run, but not necessarily do anything, and then the app build could be triggered by the nuget build completing, and itself only do stuff if path filters are satisfied).
It seems we can only have one pipeline set up for CI
My issue was that this was an erroneous conclusion. It appeared to me that, out of the box, a Pipeline is created for a repo with a YAML file in it, and that you can change which file the Pipeline uses, but you can't add a list of files for it to use. I did not realize I could create an additional Pipeline in the UI, and then associate it to a different YAML file in the repo.
Basically, my inexperience with this topic is showing. To future people who might find this, note that you can create as many Pipelines in the UI as you want, and associate each one with a different YAML file.
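For instance, each YAML file can then carry its own path filters, so each Pipeline only triggers for its own folders; a minimal sketch (the file and folder names are assumptions):

nuget-pipeline.yml, used by the first Pipeline:

trigger:
  paths:
    include:
      - src/libs

steps:
- script: echo pack and push NuGet packages here

app-pipeline.yml, used by a second Pipeline created in the UI:

trigger:
  paths:
    include:
      - src/apps

steps:
- script: echo build and deploy apps here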
We have a self-hosted build agent on an on-prem server.
We typically have a large codebase, and in the past followed this mechanism with TFS2013 build agents:
Daily check-ins were built to c:\work\tfs\ (taking about 5 minutes)
Each night a batch file would run the same build in those folders, using the same sources (they were already 'latest' from the CI build), build the installers, copy files to a network location, and send an email to the team detailing the build successes/failures. (Taking about 40 minutes.)
The key thing there is that the nightly build had no need to get the latest sources again, and the disk space required wouldn't grow much, only by the installer sizes.
To replicate this with Azure Devops, I created two pipelines.
One pipeline that did the CI using MSBuild tasks in the classic editor - works great
Another pipeline in the classic editor that runs our existing powershell script, scheduled at 9pm - works great
However, even though my agent doesn't support parallel builds, what's happening is that:
The CI pipeline's folder is c:\work\1\
The Nightly build folder is c:\work\2\
This doubles the amount of disk space we need (10 GB to 20 GB)
They are the same code files, just built differently.
I have struggled to find a way to say to the agent "please use the same sources folder for all pipelines"
What setting is this? Otherwise we have to pay our service provider for extra GB of storage.
Or do I need to change my classic pipelines into YAML and somehow conditionally branch the build so it knows it's being scheduled and does something different?
Or maybe, stop using a Pipeline for the scheduled build, and use task scheduler in Windows as before?
(I did try looking for the same question - I'm sure I can't be the only one).
There is "workingDirectory" directive available for running scripts in pipeline. This link has details of this - https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/command-line?view=azure-devops&tabs=yaml
The numbers '1', '2', ... '6' in the work folder names c:\work\1\, c:\work\2\, ... c:\work\6\ on your build agent each stand for a particular pipeline.
Agent.BuildDirectory: The local path on the agent where all folders for a given build pipeline are created. This variable has the same value as Pipeline.Workspace. For example: /home/vsts/work/1
If you have two pipelines, there will also be two corresponding work folders. This is expected behavior: pipelines cannot be configured to share the same build folder. It is by design.
If you need to use less disk space to save cost, I'm afraid that dropping the Pipeline for the scheduled build and using the Windows task scheduler as before is the better way.
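That said, if you do convert to YAML as you suggested, one pipeline can serve both purposes from a single work folder; a hedged sketch (the cron time, branch name, and script name are assumptions):

# CI trigger plus a 9pm schedule (cron times are in UTC)
trigger:
- main

schedules:
- cron: "0 21 * * *"
  displayName: Nightly build
  branches:
    include:
    - main
  always: true

steps:
- script: echo normal CI build steps here
# Runs only when the pipeline was started by the schedule
- powershell: .\nightly-installers.ps1
  condition: eq(variables['Build.Reason'], 'Schedule')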
I can't seem to find the most obvious CI feature that one would ever need from such a tool: running a project's pipeline after another project's pipeline has finished. You can do it with trigger, but only for downstream triggering, which is the opposite of what you want when you have a project that is a core dependency of 20 other projects which all need to be rebuilt.
What you need in this case is to be able to define something like:
Project A: nothing particular, just a normal pipeline
Project B, that "depends" on project A:
.gitlab-ci.yml
from_upstream:
  stage: pre
  trigger:
    project: ProjectA
What it does is trigger a ProjectB build whenever a ProjectA pipeline has [successfully] finished.
Instead, you must declare all the dozens of downstreams in ProjectA in a similar fashion, which is silly and counter-productive, especially when ProjectA is a core library that gets constantly reused everywhere.
So, can someone please explain why GitLab CI is missing an obvious feature (which isn't available even in EE) that has been in Bamboo and Hudson/Jenkins for decades? And how do I do what I need with GitLab CI?
UPDATE:
It seems the notion of upstream/downstream is really confusing for some people, so just to clarify: upstream Project A is and must always be decoupled from downstream Project B, because separation of concerns is a thing and upstream maintainers couldn't and shouldn't have any knowledge of how their project is used downstream.
So, the desired functionality (which, again, has existed for decades in Bamboo and Jenkins) is that downstream pipelines declare passive triggers on upstream pipelines, not the other way around with active triggers as currently implemented in GitLab CI.
There's documentation and example apps for multi-project pipelines here: https://about.gitlab.com/blog/2018/10/31/use-multiproject-pipelines-with-gitlab-cicd/ and here: https://gitlab.com/gitlab-examples/multi-project-pipelines
With the example projects, the project called "simple-maven-dep" triggers the pipeline for the other app ("simple-maven-app") that depends on it.
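With the trigger keyword available in newer GitLab versions, the upstream project's .gitlab-ci.yml declares a bridge job roughly like this (a sketch; the project path is inferred from the example names, and the example repos may differ in detail):

# In simple-maven-dep's .gitlab-ci.yml: kick off the dependent app's pipeline
trigger_dependent_app:
  stage: deploy
  trigger:
    project: gitlab-examples/simple-maven-app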
You can trigger a pipeline in your project whenever a pipeline finishes for a new tag in a different project. This feature was introduced in version 12.8. It is limited to new tags and requires a premium subscription.
https://gitlab.com/gitlab-org/gitlab/-/issues/9045
I have a VSTS project and I'm setting up CI/CD at the moment. All fine, but I seem to have 2 options for the publishing step:
Option 1: it's a task as part of the CI Build, e.g. see build step 3 here:
https://medium.com/@flu.lund/setting-up-a-ci-pipeline-for-deploying-your-angular-application-to-azure-using-visual-studio-team-f686c8f190cf
Option 2: The build phase produces artifacts, and as part of a separate release phase these artifacts are published, see here:
https://learn.microsoft.com/en-us/vsts/build-release/actions/ci-cd-part-1?view=vsts
Both options seem well supported in the MS documentation, is one of these options better than the other? Or is it a case of pros & cons for each and it depends on circumstances, etc?
Thanks!
You should definitely use "Option 2". Your build should not make changes in your environments whatsoever, that is strictly what a "release" is. That link you have under "Option 1" is the wrong way to do it, a build should be just that, compiling code and making artifacts, not actually deploying code.
When you mesh builds and releases together, you make it very difficult to debug build issues. Since your code is always being released, you really have to disable the "deploy" step to get any idea of what was built before you deployed.
Also, the nice thing about creating an artifact is you have a deployable package, and if in the future you need to rollback to a previous working version, you have that ready to go. Using the "build only" strategy, you'd have to revert your code or make unnecessary backups to achieve this.
I think you'll find any new Microsoft documentation pointing you toward this approach, and VSTS is completely set up like this. Even the "Configure Continuous Delivery in Azure..." feature in Visual Studio 2017 will create a build and a release.
Almost all build tasks are the same as release tasks, so you can deploy the app right after building the project in the build process.
That said, there are many differences between release and build; for example, releases support multiple environments and deployment group phases.
So which way is better depends on your detailed requirements; for example, if your build > deploy > other process is simple, you can do it all in the build.
Regarding the Publish Artifact task: it is used to publish files to the VSTS server or to another place (e.g. a shared folder), and those files can then be consumed in a release as an artifact (click Add artifact > Build in the release definition). You can also download them for troubleshooting: for example, if you are using the Hosted Agent, which you can't access directly, but want to get some files (e.g. build results), you can add a Publish Artifact task to publish them to the VSTS server and then download them (open the build result > Artifacts).
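For reference, a minimal sketch of that task in a YAML build (the staging directory is the conventional default, not something from the question):

steps:
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'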
In CruiseControl.NET, I have two projects set up, one for building and one for deploying build packages.
Our build is largely based around MSBuild, and as it runs the dashboard constantly updates with the latest output from the build. This means that even though a full build may take 15 minutes, you can see exactly where it is, and that it's making progress.
The deploy is run using another tool (VisualBuild, though I see the same basic behaviour with other tools like PowerShell). This is another long-running task, but in this case the dashboard is not updated with its output as it progresses. Since a deploy may take a long time, it's hard to tell whether things are progressing or have stalled. The output is logged to the CruiseControl.NET log, and will display on the dashboard once things are done, but not while the deploy is in progress.
Is there a way to get output from other arbitrary long-running tasks updated on the dashboard in something resembling real time? What makes MSBuild special in this regard?
CruiseControl.Net, since version 1.4, includes support for build listener files: a mechanism that allows tracking the execution of long-running tasks by reading from a log file. While this mechanism is generic and can be used with any tool, CruiseControl.Net itself ships only with build listeners for MSBuild and NAnt (which means that for those two tools progress is reported automatically, without any extra configuration).
For an external tool such as VisualBuild, called via an <exec> task, you would have to plug in your own logger that creates a simple progress file:
<data>
<Item Time="2007-10-14 08:43:12" Data="Starting Build timetester" />
<Item Time="2007-10-14 08:43:16" Data="Starting Target build" />
<Item Time="2007-10-14 08:43:16" Data="Sleeping for 5000 milliseconds." />
</data>
in the location pointed to by the CCNetListenerFile environment variable.
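As a rough illustration, a wrapper script run by the <exec> task could maintain such a file itself. A hedged PowerShell sketch (the helper name and messages are made up; messages must be XML-safe):

# CruiseControl.NET supplies the listener file path via this environment variable
$listener = $env:CCNetListenerFile

function Write-CCNetProgress([string]$message) {
    # Rewrite the whole file with the latest progress item;
    # the dashboard re-reads it while the task is running
    $time = Get-Date -Format 'yyyy-MM-dd HH:mm:ss'
    @"
<data>
  <Item Time="$time" Data="$message" />
</data>
"@ | Set-Content -Path $listener
}

Write-CCNetProgress 'Starting VisualBuild deploy'
# ... invoke the real deployment here ...
Write-CCNetProgress 'Deploy finished'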