In GitLab is it possible to configure a Scheduled Pipeline that runs on all branches periodically?

I am using GitLab for Git version control and GitLab CI/CD for my automated builds. Usually, the builds are triggered by Git repository activity, but I also have a weekly build to ensure that projects not under active development continue to work. When there is only a "master" branch on a project, it is easy to ensure a weekly build is run on the latest code. When there are multiple branches in a project, I would like to repeat the pipeline work for each of them in turn.
What I would like to be able to do is schedule a build (weekly, fortnightly or monthly) that runs on all current branches visible in Git. Is that possible within GitLab's Continuous Delivery system?
The motivation behind doing this is to ensure that external activity, such as tool and library updates, does not introduce an issue without it being promptly visible. Assuming there is reasonable automated testing, coverage, and comprehensive builds for the target platforms, a monthly build with the latest tools should highlight any problem promptly. This is better than an invisible mountain of problems accumulating while a project is shelved for a few years (or months). Sometimes all that is required is occasional maintenance.
There are only a handful of feature branches and release lines on the projects currently. I would not expect that number to grow significantly. There is time enough over a weekend to run the required pipelines dozens if not hundreds of times at present.
Ideally, I would like something straightforward to set up. I cannot see anything in the admin GUI that would allow this at present. I did look at the API and I can see there is some scope there to script the addition and removal. Perhaps some script that is run once a month to create new Scheduled pipelines based on git branches is the only way. A pre-made solution on those lines would be perfectly acceptable. If nothing exists I might start work on something like that in time.
I am currently running GitLab Community Edition 11.2.3 (06cbee3). If there is an Enterprise Edition-only answer, that is fine and will add to the justification for purchasing the EE version. I would prefer a CE answer over an EE one, though.

You cannot set a schedule for all branches at once; you have to configure one schedule per branch yourself.
Perhaps some script that is run once a month to create new Scheduled
pipelines based on git branches is the only way.
That is the way I would go.
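For what it's worth, a minimal sketch of such a script against the Pipeline Schedules API follows. The instance URL, project ID, token, and cron expression are placeholders; it assumes jq is installed and ignores pagination (fine for under 100 branches):

    #!/bin/sh
    # Create one monthly pipeline schedule per branch via the GitLab v4 API.
    GITLAB="https://gitlab.example.com/api/v4"   # placeholder instance URL
    PROJECT_ID=42                                # placeholder project ID
    TOKEN="your-private-token"                   # token with api scope

    curl -s --header "PRIVATE-TOKEN: $TOKEN" \
        "$GITLAB/projects/$PROJECT_ID/repository/branches?per_page=100" \
      | jq -r '.[].name' \
      | while read -r branch; do
          # One schedule per branch: 04:00 on the 1st of every month.
          curl -s --request POST --header "PRIVATE-TOKEN: $TOKEN" \
            --form "description=Monthly build of $branch" \
            --form "ref=$branch" \
            --form "cron=0 4 1 * *" \
            "$GITLAB/projects/$PROJECT_ID/pipeline_schedules"
        done

Removing schedules for deleted branches would be the same pattern with GET /projects/:id/pipeline_schedules and DELETE /projects/:id/pipeline_schedules/:schedule_id.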

Related

Self-hosted Azure agent - how to configure pipelines to share the same build folder

We have a self-hosted build agent on an on-prem server.
We typically have a large codebase, and in the past followed this mechanism with TFS2013 build agents:
Daily check-ins were built to c:\work\tfs\ (taking about 5 minutes)
Each night a batch file would run that did the same build to those folders, using the same sources (they were already 'latest' from the CI build), and build the installers. Copy files to a network location, and send an email to the team detailing the build success/failures. (Taking about 40 minutes)
The key thing there is that for the nightly build there would be no need to get the latest sources, and the disk space required wouldn't grow much. Just by the installer sizes.
To replicate this with Azure Devops, I created two pipelines.
One pipeline that did the CI using MSBuild tasks in the classic editor- works great
Another pipeline in the classic editor that runs our existing powershell script, scheduled at 9pm - works great
However, even though my agent doesn't support parallel builds, what's happening is this:
The CI pipeline's folder is c:\work\1\
The Nightly build folder is c:\work\2\
This doubles the amount of disk space we need (10 GB to 20 GB)
They are the same code files, just built differently.
I have struggled to find a way to say to the agent "please use the same sources folder for all pipelines"
What setting controls this? Otherwise we have to pay our service provider for extra storage.
Or do I need to change my classic pipelines into Yaml and somehow conditionally branch the build so it knows it's being scheduled and do something different?
Or maybe, stop using a Pipeline for the scheduled build, and use task scheduler in Windows as before?
(I did try looking for the same question - I'm sure I can't be the only one).
There is "workingDirectory" directive available for running scripts in pipeline. This link has details of this - https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/command-line?view=azure-devops&tabs=yaml
The numbers '1', '2', ... '6' in the work folders c:\work\1\, c:\work\2\, ... c:\work\6\ on your build agent each stand for a particular pipeline.
Agent.BuildDirectory
The local path on the agent where all folders for a given build
pipeline are created. This variable has the same value as
Pipeline.Workspace. For example: /home/vsts/work/1
If you have two pipelines, there will also be two corresponding work folders. This is expected behavior: pipelines cannot be configured to share the same build folder. It is by design.
If you need to use less disk space to save cost, I am afraid that dropping the Pipeline for the scheduled build and using the Windows Task Scheduler as before is the better way.

GitLab CI: How to fail on new compiler warnings

We are trying to get an old legacy code base under control while simultaneously developing new features. Currently the code compiles with a hell of a lot of compiler warnings and warnings from static code analyzers. For that reason it is not uncommon that code introducing new warnings reaches production simply because the new warning got lost in the shuffle.
Currently we are using Jenkins for nightly builds and make the build fail on new warnings. However, by the time Jenkins detects the new warnings, the code was already merged hours earlier. So we would like not only to shorten the feedback cycle but also to ensure that we only merge changes that do not introduce new warnings.
As far as I know, it is possible to trigger a Jenkins build on a push to GitLab, but Jenkins can only compare the warning count to the previous build of the same job, whereas we would need to compare against a build of a different branch.
Can GitLab CI or a combination of GitLab EE and Jenkins somehow be configured to detect if a merge request introduces new warnings?
Yes, that is possible, but it's rather an open-ended question whose answer will depend a lot on how long a build takes and how you will compare the outcomes.
You don't have to run the checks only on the branch you have checked out. You may set up two jobs in parallel that run the tests on the current branch and on the develop branch, pass the results as artifacts to a third job, and compare them there.
You may want to store the state of a build on your develop branch and download the artifact to your current job and compare it against the local results. You could also store them in a database, on a file server or wherever else it's comfortable.
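A rough .gitlab-ci.yml sketch of the two-jobs-plus-compare idea; the make invocation and the naive line-count comparison are stand-ins for your real build and diffing logic:

    stages: [build, compare]

    warnings-current:
      stage: build
      script:
        - make 2> warnings-current.txt || true    # collect compiler warnings from this branch
      artifacts:
        paths: [warnings-current.txt]

    warnings-develop:
      stage: build
      script:
        - git fetch origin develop
        - git checkout origin/develop             # build the baseline branch with the same steps
        - make 2> warnings-develop.txt || true
      artifacts:
        paths: [warnings-develop.txt]

    compare-warnings:
      stage: compare
      script:                                     # artifacts from the build stage are downloaded automatically
        - test "$(wc -l < warnings-current.txt)" -le "$(wc -l < warnings-develop.txt)"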
Finally you may try an external code quality tool like SonarQube which has greater insight into what's new and what's old.
In the meantime, tools have been developed that allow a workflow which, while not perfect, comes quite close.
Jenkins has the Warnings Next Generation Plugin which can compare the warnings found in one Jenkins job to the warnings found in another Jenkins job. So we set up a job to compile our develop branch each time a new commit is pushed to it. We then use the results as baseline. Another job that gets triggered for each merge request in GitLab then uses this baseline to determine the new warnings introduced by the merge request.
This works reasonably well.
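For reference, a rough sketch of the relevant step in the merge-request job; the tool, log pattern, and baseline job name are assumptions about your setup:

    // Parse warnings from the build log and fail if any are new
    // relative to the baseline job that builds develop.
    recordIssues(
        tools: [gcc(pattern: 'build.log')],
        referenceJobName: 'develop-baseline',
        qualityGates: [[threshold: 1, type: 'NEW', unstable: false]]
    )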

Octopus Deploy and Multiple Branches/Release Candidates

We have currently released our code to Production, and as a result have cut and branched to ensure we can support our current release, whilst still supporting hot-fixes without breaking the current release from any on-going development.
Here is our current structure:
Project
    /Development
    /RC1
Until recently using Octopus we have had the following process:
Dev->Staging/Dev Test->UAT
Which works fine as we didn't have an actual release.
My question is how can Octopus support our new way of working?
Do we create a new/cloned project in Octopus named RC1 and have CI from our RC1 branch into that, then add/remove these as appropriate once RCs are no longer required?
Or is there another method that we've clearly missed out on?
It seems that most organisations that are striving for continuous something end up with a CI server and continuous deployment up to some manual sign off environment and then require continuous delivery to production. This generally leads to a branching strategy in order to isolate the release candidate to allow hot fixing.
I think a question like this raises more points for discussion, before trying to provide a one size fits all answer IMHO.
The kind of things that spring to mind are:
Do you have "source code" dependencies or binary ones for any shared components?
What level of integration / automated regression testing do you have?
Is your deployment orchestrated by TFS, or driven by a user in Octopus?
Is there a database as part of the application that needs consideration?
How is your application version numbering controlled?
What is your release cycle?
In the past where I've encountered this scenario, I would look towards a code promotion branching strategy, which provides you with one branch to maintain in production - this has worked well where continuous deployment to production is not an option. You can find more branching strategies discussed on the ALM Rangers page on CodePlex.
Developers / testers can continually push code / features / bug fixes through staging / UAT. At the point of release, the Dev branch is merged to the Release branch, which causes a release build and creates a NuGet package. This should still be released to Octopus in exactly the same way, only it's a brand-new release and not a promotion of a previous release. You would need to ensure that there is no clash in version numbering, so one strategy might be to have a difference in the major number - this would depend on your current setup. This does, however, take an opinionated view that the deployment is orchestrated by the build server rather than by Octopus Deploy. Primarily, TeamCity / TFS calls out to the Octopus API, rather than a user choosing the build number in Octopus (we have been known to make mistakes):
octo.exe create-release --version GENERATED_BY_BUILD_SERVER
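Spelled out with placeholder values, that call looks something like this (the project name, server URL, and API key are of course your own):

    octo.exe create-release --project MyApp --version %BUILD_NUMBER% --server https://octopus.example.com --apiKey API-XXXXXXXX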
To me, the biggest question I ask clients is "What's the constraint that means you can't continuously deploy to production?" Address that constraint (see the theory of constraints) and you remove the need to work around an issue that needn't be there in the first place (not always that straightforward, I know).
I would strongly advise that you don't clone projects in Octopus for different environments, as it's counterintuitive. At the end of the day you're just telling Octopus to go and get this NuGet package version for this app and deploy it to this environment, please. If you want to get the package from a different NuGet feed for release, then you could always make use of the custom binding on the NuGet field in Octopus and drive that by a scoped variable depending on the environment you're deploying to.
Step 1 - Setup two feeds
Step 2 - Scope some variables for those feeds
Step 3 - Consume the feed using a custom expression
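Concretely, the three steps might look like this (the feed URLs and the variable name are illustrative, not your actual values):

    Step 1 - feeds:    https://nuget.example.com/ci  and  https://nuget.example.com/release
    Step 2 - variable: PackageFeed = https://nuget.example.com/ci       (scope: Dev, UAT)
                       PackageFeed = https://nuget.example.com/release  (scope: Production)
    Step 3 - binding:  set the step's NuGet feed field to the custom expression #{PackageFeed}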
I hope this helps
This is unfortunately something Octopus doesn't directly have - true support for branching (yet). It's on their roadmap for 3.1 under better branching support. They have been talking about this problem for some time now.
One idea that you mentioned would be to clone your project for each branch. You can do this under the "Settings" tab (on the right-hand side) in the project that you want to clone. This will allow you to duplicate your project and simply rename it after one of your branches - so one is your PreRelease or Release Candidate project and the other is your mainline Dev (I would keep the project name the same). I'm assuming you have everything in the same project group.
Alternatively, you could just change the NuSpec files in your projects in the different branches so that you can clearly see what's being deployed on the overview project page or on the dashboard. For your RC branch, you could just add the suffix -release within the NuSpec, which is legal (the Semantic Versioning rules cover prereleases in rule #9). This way, you can use the same project but have different packages to deploy. If your targeted servers are the same, then this may be the "lighter" or simpler approach compared to cloning.
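For example, the .nuspec in the RC branch might carry a prerelease version along these lines (the identifiers are illustrative):

    <!-- RC branch: the -release suffix marks the package as a prerelease -->
    <metadata>
      <id>MyApp</id>
      <version>1.4.0-release</version>
    </metadata>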
I blogged about how we do this here:
http://www.alexjamesbrown.com/blog/development/working-branch-deployments-tfs-octopus/
It's a bit of a hack, but in summary:
Create a branch in TFS
Create a branch-specific build definition
Create a branch-specific drop location for Octopack
Create a branch-specific Octopus deployment project (by cloning your 'main' deployment project)
Edit the newly cloned deployment and re-point the NuGet feed location to your branch-specific output location, created in step 3

Using mercurial branches for automatic deployment of website

I am currently thinking of a way to nicely structure my web project with mercurial. I was thinking of having two branches default (for development & testing) and release (the finished code which gets published). I would develop and test on the default branch until I have a stable application running. Then I would merge into the release branch. When I push the code to my central repository (on the server where my web application lives) I would want the code to be automatically published.
Is this the right way to go, and if so, can this automatic publishing of the release branch be achieved with hooks?
Have you considered the git-flow branching model? I would recommend it and also hgflow by yujiewu. The latter is an implementation of the git-flow idea for mercurial.
Instead of a "release" branch, you should name it 'stable' as several projects do.
Do you use continuous integration yet? Maybe you should. In Jenkins, you could create a post-build step to publish the release if everything went well. It's better than a changegroup hook.
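That said, if you do want the hook route, a minimal sketch in the central repository's .hg/hgrc, assuming that repository's working directory is the published webroot:

    [hooks]
    # After every push, update the working copy to the tip of the release branch.
    # --clean discards any local edits in the webroot, so treat it as read-only.
    changegroup = hg update --clean release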

Does CC.NET detect modification when a build script performs a checkin

I've been doing some research into finally automating our Development builds and still have one nagging question that I'm hoping the StackOverflow community can solve for me.
My understanding is that an IntervalTrigger, when set up properly, will check VSS every X seconds for changes and, if it finds a modified file, will run my tasks. One of my tasks would be to check out the AssemblyInfo files and update the version numbers. After these files are updated they would be checked back into VSS.
Thinking about this solution, it doesn't make much sense because, in my mind, I'm forcing the check for changed files to return true every time the trigger fires. Am I missing something here? Is there a way of doing this without triggering an automatic build on the AssemblyInfo check-in?
You can use a Filtered Source Control Block to exclude certain files from the trigger.
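A sketch of such a block in ccnet.config; the VSS project paths are assumptions:

    <!-- Poll VSS for changes, but ignore the generated AssemblyInfo files -->
    <sourcecontrol type="filtered">
      <sourceControlProvider type="vss" autoGetSource="true">
        <project>$/MyProject</project>
      </sourceControlProvider>
      <exclusionFilters>
        <pathFilter>
          <pattern>$/MyProject/**/AssemblyInfo.cs</pattern>
        </pathFilter>
      </exclusionFilters>
    </sourcecontrol>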
I just posted a bunch about my default build process here which may be of some interest to you: SVN Website Development and Deployment Solution
The way I usually configure my projects with CC.NET is to have two project blocks per solution. One configured as an interval trigger that does nothing more than get the latest from my repository, build the solution, and run unit tests. The other is a schedule trigger that does all the things the other one does, but actually publishes a build. This includes changing version numbers, publishing files, etc. This might work in your case, since the change in version would cause the interval project to trigger, but only once.
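In ccnet.config terms, that is two project blocks along these lines (names and timings are illustrative):

    <project name="MyApp-CI">
      <triggers>
        <intervalTrigger seconds="60" />   <!-- poll the repository on each interval -->
      </triggers>
      <!-- tasks: get latest, build the solution, run unit tests -->
    </project>

    <project name="MyApp-Nightly">
      <triggers>
        <scheduleTrigger time="23:00" buildCondition="ForceBuild" />   <!-- publishing build -->
      </triggers>
      <!-- tasks: update version numbers, build, publish files -->
    </project>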
Checking the automatically generated AssemblyInfo into the version control system is a bad idea; don't do it. You'll get a lot of noise (50% of all commits!) in your history. Also, it does not give you any new information - you can always pull this from the VCS. Having your build script autogenerate those files is a good practice, but don't push those changes back!
