I can't seem to find the most obvious CI feature one could ever need from such a tool: run a project's pipeline after another project's pipeline has finished. You can do it with the trigger keyword, but only for downstream triggering, which is the opposite of what you want when you have a project that is a core dependency of 20 other projects which all need to be rebuilt.
What you need in this case is to be able to define something like:
Project A: nothing particular, just a normal pipeline
Project B, that "depends" on project A:
.gitlab-ci.yml:

from_upstream:
  stage: pre
  trigger:
    project: ProjectA
What it would do is trigger a ProjectB build whenever a ProjectA pipeline has [successfully] finished.
Instead, you must declare triggers for all of the dozens of downstream projects in ProjectA in a similar fashion, which is silly and counter-productive, especially when ProjectA is a core library that gets constantly reused everywhere.
So, can someone please explain why GitLab CI is missing an obvious feature (which isn't available even in EE) that has been in Bamboo and Hudson/Jenkins for decades? And how do I do what I need with GitLab CI?
UPDATE:
It seems the notion of upstream/downstream is really confusing for some people, so just to clarify: the upstream Project A is, and must always be, decoupled from the downstream Project B, because separation of concerns is a thing and upstream maintainers cannot and should not have any knowledge of how their project is used downstream.
So the desired functionality (which, again, has existed for decades in Bamboo and Jenkins) is that downstream pipelines declare passive triggers on upstream pipelines, not the other way around with active triggers as it's currently implemented in GitLab CI.
There's documentation and example apps for multi-project pipelines here: https://about.gitlab.com/blog/2018/10/31/use-multiproject-pipelines-with-gitlab-cicd/ and here: https://gitlab.com/gitlab-examples/multi-project-pipelines
With the example projects, the project called "simple-maven-dep" triggers the pipeline for the other app ("simple-maven-app") that depends on it.
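As a hedged sketch (the job name, project path, and branch here are illustrative, and the example repositories may use the pipeline trigger API rather than this keyword), a job in the upstream project's .gitlab-ci.yml can use the multi-project trigger keyword to start the dependent project's pipeline once the upstream stages before it have passed:

trigger_dependent_app:
  stage: deploy
  trigger:
    project: gitlab-examples/simple-maven-app
    branch: master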
You can trigger a pipeline in your project whenever a pipeline finishes for a new tag in a different project. This feature was introduced in GitLab 12.8; it is limited to new tags and requires a Premium subscription.
https://gitlab.com/gitlab-org/gitlab/-/issues/9045
I have a number of solutions, each of which has a mixture of applications and libraries. Generally speaking, the applications get built and deployed, and the libraries get published as NuGet packages to our internal package feed. I'll call these "apps" and "nugets."
In my Classic Pipelines, I would have one build for the apps, and one for the nugets. With path filters, I would indicate folders that contain the nuget material, and only trigger the nuget build if those folders had changes. Likewise, the app build would have path filters to detect if any app code had changed. As a result, depending on what was changed in a branch, the app build might run, the nuget build might run, or both might run.
Now I'm trying to convert these to YAML. It seems we can only have one pipeline set up for CI, so I've combined the stages/jobs/steps for nugets and apps into this single pipeline. However, I can't seem to figure out a good way to only trigger the nuget tasks if the nuget path filters are satisfied and only the app tasks if the app path filters are satisfied.
I am hoping someone knows a way to do something similar to one of the following (or anything else that would solve the issue):
Have two different CI pipelines with their own sets of triggers and branch/path filters such that one or both might run on a given branch change
Set some variables based on which paths have changes so that I could later trigger the appropriate tasks using conditions
Make a pipeline always trigger, but only do tasks if a path filter is satisfied (so that the nuget build could always run, but not necessarily do anything, and then the app build could be triggered by the nuget build completing, and itself only do anything if path filters are satisfied).
It seems we can only have one pipeline set up for CI
My issue was that this was an erroneous conclusion. It appeared to me that, out of the box, a Pipeline is created for a repo with a YAML file in it, and that you can change which file the Pipeline uses, but you can't add a list of files for it to use. I did not realize I could create an additional Pipeline in the UI and then associate it with a different YAML file in the repo.
Basically, my inexperience with this topic is showing. To future people who might find this, note that you can create as many Pipelines in the UI as you want and associate each one with a different YAML file.
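For illustration, a minimal sketch of one such YAML file with its own branch/path filters (the file name, folder paths, and build step are placeholders for your own layout):

# nugets-pipeline.yml - runs only when the library folders change
trigger:
  branches:
    include:
    - '*'
  paths:
    include:
    - src/Libraries
pool:
  vmImage: ubuntu-latest
steps:
- script: dotnet pack --configuration Release
  displayName: Pack NuGet packages

A second file (e.g. apps-pipeline.yml) would carry its own paths filter for the app folders, and each file is then attached to its own Pipeline in the UI as described above.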
We are building a microservices-based architecture and we have 50-odd CI and 50-odd CD pipelines. Is there a way to script the CI/CD Build and Release definitions? We want this to be a repeatable process and do not want to leave it to our DevOps engineer(s), as it is prone to errors. Please note that I am not talking about ARM (which is already being used by us). Is there a way to do the above?
For builds, you can use YAML builds, which are currently in preview.
For releases, there's nothing equivalent yet.
You could always use the REST APIs to extract the build and release definitions as JSON, source control them, and then create a continuous delivery pipeline to update them when the definitions in source control change.
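To illustrate the YAML-builds option mentioned above, a minimal build definition kept in source control might look something like this (the exact schema has evolved since the preview, and the build/test steps are only placeholders):

# azure-pipelines.yml
trigger:
- master
pool:
  vmImage: ubuntu-latest
steps:
- script: dotnet build --configuration Release
  displayName: Build
- script: dotnet test --configuration Release
  displayName: Test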
I’m trying to set up GitLab CI/CD for an old client-side project that makes use of Grunt (https://github.com/yeoman/generator-angular).
Up to now the deployment worked like this:
run '$ grunt build' locally, which builds the project and creates files in a 'dist' folder in the root of the project
commit changes
changes pulled onto production server
After creating the .gitlab-ci.yml and making a commit, the GitLab CI/CD job passes, but the files in the 'dist' folder in the repository are not updated. If I define an artifact, I will get the changed files in the download. However, I would prefer the files in the 'dist' folder in the repository to be updated so we can carry on with the same workflow, which suits us. Is this achievable?
I don't think committing to your repo from inside a pipeline is a good idea. The version control history wouldn't be as clear, and since many people have pipelines triggered automatically whenever their repo is pushed to, it could set off a loop of pipelines.
Instead, you might reorganize your environment to use Docker; there are numerous reasons for using Docker in professional and development environments. To name just a few: it would let you save the freshly built project into a registry and reuse it whenever needed, with exactly the version you require and with the desired /dist inside, so that you can easily run it in multiple places, scale it, manage it, etc.
If you changed to Docker, you wouldn't actually have to do a thing to keep the dist persistent; just push the image to the registry after the build is done.
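As a rough sketch of that approach (assuming a Dockerfile at the repository root and a runner that supports Docker-in-Docker), the job would build the image and push it to the project's registry using GitLab's predefined CI variables:

build_image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    # Log in to the project's container registry using predefined CI variables
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # Build an image that already contains the freshly built /dist
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    # Push it so it can be pulled, run, and scaled anywhere
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"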
But to actually answer your question:
There is a feature request that has been open for a very long time for the same problem you asked about: here. Currently, there is no safe and professional way to do it, as GitLab members state. You can, however, push changes back, as one of the GitLab members (Kamil Trzciński) suggested:
git push http://gitlab.com/group/project.git HEAD:my-branch
Just put it in the script section of your .gitlab-ci.yml file.
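Putting that together, a hedged sketch of such a job might look like this (PUSH_TOKEN is a hypothetical CI variable holding a token with write access to the repository, and the build commands depend on your project; the [skip ci] marker prevents the pipeline loop mentioned above):

update_dist:
  stage: deploy
  script:
    # Build commands depend on your project; grunt build matches the workflow described above
    - npm install
    - grunt build
    - git config user.email "ci@example.com"
    - git config user.name "GitLab CI"
    - git add dist
    - git commit -m "Rebuild dist [skip ci]" || echo "Nothing to commit"
    # PUSH_TOKEN is a hypothetical project/personal access token stored as a CI variable
    - git push "https://ci:${PUSH_TOKEN}@gitlab.com/group/project.git" HEAD:my-branch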
There are more hacky methods presented there, but be sure to acknowledge the risks that come with them (pipelines become more error prone, and if configured the wrong way they might, for example, publish confidential information or trigger an infinite loop of pipelines, to name a few).
I hope you found this useful.
We have recently released our code to Production and, as a result, have cut and branched to ensure we can support our current release and hot-fixes, without the current release being broken by any ongoing development.
Here is our current structure:
Project
  /Development
  /RC1
Until recently using Octopus we have had the following process:
Dev->Staging/Dev Test->UAT
Which works fine as we didn't have an actual release.
My question is how can Octopus support our new way of working?
Do we create a new/cloned project in Octopus named RC1 and have CI from our RC1 branch into that? Then add/remove as appropriate as RCs are no longer required?
Or is there another method that we've clearly missed out on?
It seems that most organisations that are striving for continuous something end up with a CI server and continuous deployment up to some manual sign off environment and then require continuous delivery to production. This generally leads to a branching strategy in order to isolate the release candidate to allow hot fixing.
I think a question like this raises more points for discussion, before trying to provide a one-size-fits-all answer, IMHO.
The kind of things that spring to mind are:
Do you have "source code" dependencies or binary ones for any shared components.
What level of integration / automated regression testing do you have.
Is your deployment orchestrated by TFS, or driven by a user in Octopus.
Is there a database as part of the application that needs consideration.
How is your application version numbering controlled.
What is your release cycle.
In the past where I've encountered this scenario, I would look towards a code promotion branching strategy which provides you with one branch to maintain in production - This has worked well where continuous deployment to production is not an option. You can find more branching strategies discussed on the ALM Rangers page on CodePlex
Developers / testers can continually push code / features / bug fixes through staging / UAT. At the point of release, the Dev branch is merged to the Release branch, which causes a release build and creates a NuGet package. This should still be released to Octopus in exactly the same way, only it's a brand new release and not a promotion of a previous release. You would need to ensure that there is no clash in version numbering, so a strategy might be to have a difference in the major number; this would depend on your current setup. This does, however, take an opinionated view that the deployment is orchestrated by the build server rather than Octopus Deploy: primarily, TeamCity / TFS calls out to the Octopus API, rather than a user choosing the build number in Octopus (we have been known to make mistakes).
octo.exe create-release --version GENERATED_BY_BUILD_SERVER
To me, the biggest question I ask clients is "What's the constraint that means you can't continuously deploy to production?" Address that constraint (see the theory of constraints) and you remove the need to work around an issue that needn't be there in the first place (not always that straightforward, I know).
I would strongly advise that you don't clone projects in Octopus for different environments, as it's counter-intuitive. At the end of the day, you're just telling Octopus to go and get this NuGet package version for this app and deploy it to this environment, please. If you want to get the package from a different NuGet feed for release, then you could always make use of the custom binding on the NuGet field in Octopus and drive that with a scoped variable depending on the environment you're deploying to.
Step 1 - Setup two feeds
Step 2 - Scope some variables for those feeds
Step 3 - Consume the feed using a custom expression
I hope this helps
This is unfortunately something Octopus doesn't directly have - true support for branching (yet). It's on their roadmap for 3.1 under better branching support. They have been talking about this problem for some time now.
One idea that you mentioned would be to clone your project for each branch. You can do this under the "Settings" tab (on the right-hand side) in the project that you want to clone. This will allow you to duplicate your project and simply rename it to one of your branches - so one is your PreRelease or Release Candidate project and the other is your mainline Dev (I would keep the same name for that project). I'm assuming you have everything in the same project group.
Alternatively, you could just change the NuSpec files in your projects in different branches so that you can clearly see what's being deployed on the overview project page or on the dashboard. So for your RC branch, you could just add the suffix -release within the NuSpec in your RC branch, which is legal (the Semantic Versioning rules cover pre-releases in rule #9). This way, you can use the same project but have different packages to deploy. If the targeted servers are the same, then this may be the "lighter" or simpler approach compared to cloning.
I blogged about how we do this here:
http://www.alexjamesbrown.com/blog/development/working-branch-deployments-tfs-octopus/
It's a bit of a hack, but in summary:
1. Create a branch in TFS
2. Create a branch-specific build definition
3. Create a branch-specific drop location for OctoPack
4. Create a branch-specific Octopus deployment project (by cloning your 'main' deployment project)
5. Edit the newly cloned deployment, re-pointing the NuGet feed location to your branch-specific output location created in step 3
I am currently thinking of a way to nicely structure my web project with Mercurial. I was thinking of having two branches: default (for development & testing) and release (the finished code which gets published). I would develop and test on the default branch until I have a stable application running. Then I would merge into the release branch. When I push the code to my central repository (on the server where my web application lives), I would want the code to be automatically published.
Is this the right way to go and if yes can this automatic publishing of the release branch be achieved with hooks?
Have you considered the git-flow branching model? I would recommend it, and also hgflow by yujiewu. The latter is an implementation of the git-flow idea for Mercurial.
Instead of a "release" branch, you should name it 'stable', as several projects do.
Do you use continuous integration yet? Maybe you should. In Jenkins, you could create a post-build step to publish the release if everything went well. It's better than a changegroup hook.