I have the following image configuration in my .gitlab-ci.yml:
```yaml
default:
  image: registry.gitlab.com/gitlab-org/terraform-images/stable:latest
  # image: curlimages/curl:latest
...
```
This works fine when I am deploying on https://gitlab.com/; however, when the same code and CI configuration are run on a self-hosted GitLab instance, the job fails with the following error in CI/CD:
This job is stuck because the project doesn't have any runners online assigned to it.
Go to project CI settings
My question: while using https://gitlab.com/ I never specifically assigned any runners to my project through the settings, but now it seems I am supposed to do that.
Why is that?
And if it is necessary, how can I do it?
When using gitlab.com, the GitLab instance has shared runners configured that are available to run all untagged CI jobs.
On your own self-hosted GitLab you must either configure your own shared runners for your instance or register runners to your projects/groups.
You cannot use the gitlab.com shared runners on a self-hosted gitlab instance.
From the documentation on the scope of runners:
Shared Runners
Shared runners are available to every project in a GitLab instance.
Use shared runners when you have multiple jobs with similar requirements. Rather than having multiple runners idling for many projects, you can have a few runners that handle multiple projects.
If you are using a self-managed instance of GitLab:
Your administrator can install and register shared runners by going to your project’s Settings > CI/CD, expanding the Runners section, and clicking Show runner installation instructions. These instructions are also available in the documentation.
The administrator can also configure a maximum number of shared runner pipeline minutes for each group.
If you are using GitLab.com:
You can select from a list of shared runners that GitLab maintains.
The shared runners consume the pipelines minutes included with your account.
I suppose you technically would be able to use public GitLab runners for your self-hosted instance if you create an account on gitlab.com and set up CI/CD for external repos pointing to your self-hosted instance -- but your minutes would be a separate entitlement from your self-hosted license, among other serious limitations.
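For example, once a runner is registered to your project or group on the self-hosted instance, you can pin jobs to it with tags in .gitlab-ci.yml. A minimal sketch, assuming a runner registered with the (illustrative) tag self-hosted:

```yaml
# Minimal sketch: route the job to a runner registered with the tag "self-hosted".
# The tag name is an assumption for illustration.
deploy:
  image: registry.gitlab.com/gitlab-org/terraform-images/stable:latest
  tags:
    - self-hosted   # only runners carrying this tag will pick up this job
  script:
    - terraform --version
```

Alternatively, a runner configured to run untagged jobs will pick up the job configuration from the question as-is.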
I have set up a self-hosted GitLab instance with one runner registered as a "Linux runner manager", which runs GitLab pipeline jobs on ECS Fargate using a Docker image as the task; after a job completes on GitLab, that container is destroyed.
I want to achieve a complete autoscaling setup for GitLab runners, where Linux, macOS, and Windows runner managers build pipelines for these three platforms using Docker containers inside an ECS Fargate cluster.
I have achieved this setup for Linux by following the official GitLab documentation.
Here is a rough diagram of what I have in mind.
In the currently achieved setup I have two instances: one GitLab master node and one Linux manager instance, as explained in the diagram above.
I have two files on the runner manager:
one is config.toml, which holds the GitLab connection and token details,
and the other is fargate.toml, which holds the ECS Fargate connection settings.
Lastly, I have an ECS cluster using the Fargate architecture.
When a job is triggered in a GitLab pipeline, it invokes a container in ECS and creates a new task; once the job is completed, the task is destroyed. This way the ECS cluster has no tasks running unless a triggered pipeline is active on GitLab.
The container image I am currently using for building jobs on ECS is available here.
I have gone through the official ECS Fargate documentation as well; it seems that only Windows is supported.
I am wondering whether what I am looking for is even achievable in this case, i.e. are there any official Docker files for GitLab runners on Windows and macOS that would support my current architecture, or is some customization of the files on the runner manager's end required?
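For reference, the job-routing side of such a multi-platform setup would presumably look like the sketch below in .gitlab-ci.yml, with each runner manager registered under its own platform tag (the tag names are assumptions for illustration, not part of my current setup):

```yaml
# Hypothetical sketch: route per-platform jobs to separate runner managers via tags.
# The tag names are assumptions; each would match a registered runner manager.
build-linux:
  tags: [fargate-linux]      # picked up by the existing Linux runner manager
  script:
    - ./build.sh

build-windows:
  tags: [fargate-windows]    # would require a Windows-capable runner manager
  script:
    - .\build.ps1
```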
We are currently facing a conundrum with our multi-tenant project, which contains various configuration files for each of our tenants and their associated environments. Our CI/CD pipeline is split into two parts:
An upstream pipeline, which analyses the new commit to master to determine which tenants/environments have changed. It triggers the downstream pipeline with the correct environment variables via the API.
A downstream pipeline, which executes scripts to deploy changes to the tenants' environments based on the environment variables passed through.
This works well; however, we have a GitLab Runner per environment to access each customer's environment. We use this to avoid hard-coding multiple credentials within our scripts or CI environment variables.
Is there a way we can trigger this downstream pipeline on a specific GitLab Runner? Our GitLab Runners are tagged per environment, so we can use the passed environment variables to determine which runner a job should run on.
I've had a look around GitLab CI, specific runners, and shared runners (which ours currently are), but this doesn't seem to be supported.
The tags: keyword supports variable expansion, so it is possible to pass variables in the API call when creating your downstream pipelines in order to control which runners are used.
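A minimal sketch of the downstream side, assuming the upstream API trigger passes a variable named TENANT_ENV whose value matches a runner tag (the variable name and tag values are illustrative):

```yaml
# Downstream .gitlab-ci.yml sketch: TENANT_ENV is supplied by the upstream
# pipeline's API trigger call (the variable name is an assumption).
deploy:
  tags:
    - $TENANT_ENV        # expands to e.g. "customer-a", matching that environment's runner
  script:
    - ./deploy.sh "$TENANT_ENV"
```

The upstream API call would then pass something along the lines of variables[TENANT_ENV]=customer-a when hitting the pipeline trigger endpoint.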
I have some questions regarding Gitlab Server & Gitlab Runner.
Must GitLab Runner be installed to perform CI/CD deployments?
Must a GitLab server be installed for GitLab Runner to work?
Must GitLab Runner be installed on the same server as GitLab, given that GitLab Runner requires a GitLab server?
GitLab Runner is a software component that executes GitLab CI jobs. Runners can operate on a standalone server (or desktop/laptop), in Docker, or in Kubernetes, and they have minimal requirements. A runner must connect to the GitLab server it is registered with in order to accept jobs, so a GitLab runner is dependent on a GitLab server. It is also perfectly sensible to have different runners executing jobs in the same pipeline.
A server may have many runners in many different network locations. About the only place you SHOULD NOT deploy a GitLab runner is on the GitLab server itself. GitLab.com provides runners, some for free and some for pay, but they are generally deployed on infrastructure separate from the GitLab.com servers, because this is just good design.
GitLab runners together with GitLab CI jobs and pipelines are a good way to implement CI/CD deployments. They are not the only way, however. They are the way supported by GitLab and, all things considered, I'd say they are a very good choice. Plenty of other CI/CD tools exist, though, and different repositories on the same server can make different choices about how to implement their CI/CD pipelines.
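As an illustration of that separation, a common way to run a runner apart from the GitLab server is as its own container. A minimal Docker Compose sketch (the paths and image tag are conventional defaults, not requirements):

```yaml
# docker-compose.yml sketch: run GitLab Runner on a host separate from the
# GitLab server. The runner still has to be registered against the server's
# URL with a registration token before it will accept jobs.
services:
  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    restart: always
    volumes:
      - ./config:/etc/gitlab-runner                 # holds the runner's config.toml
      - /var/run/docker.sock:/var/run/docker.sock   # lets the runner start job containers
```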
Each developer on our team forks the production repository, then opens MRs from their fork to merge into the master branch of the production one.
We have a fixed number of test environments which are shared by all developers. Eventually we hope to move to ephemeral environments for each MR but at the moment this is not possible.
We utilize the Environments / Deployments functionality of Gitlab: https://docs.gitlab.com/ee/ci/environments/ and the production repository is the central location where we see what is deployed to each environment.
The problem is that when a developer opens an MR, they frequently choose to manually deploy to one of our shared test environments. But since the pipeline for their branch runs in their fork, it records the environment/deployment information there rather than in the production repository. This makes it impossible for us to know who deployed what to each test environment, as that information is recorded in random developer forks rather than in the centralized location.
We only deploy to production hosts from the production repository (this is disabled in developer forks), so that information is centralized and up to date. But it is a headache for developers to determine who has deployed something to one of the shared test environments, and they frequently overwrite each other's changes by accident.
Is there any way for us to allow test deploys from developer branches, but centralize the information about what is deployed in each environment in the production repository?
My team has an Azure App Services Web App where we house three major components:
Our main Node.js server and API, which is at the root
A secondary API, which is in a virtual directory
Our front-end web app (also served from a Node.js server), which is in another virtual directory
Each of those three components is maintained in its own git repo in VSTS. Additionally, the Web App has three slots: dev, ppe, and prod.
We are trying to move our build processes out of Azure and into VSTS. What we'd like to be able to do is the following:
When there's a new commit to master in any of the three repos, create a dev build and deploy it directly to the appropriate virtual directory in the dev slot.
When a component is ready to be released - whether that means a new commit in a special RELEASE branch or manually triggering a release process - create a production build, deploy it to ppe and, on user approval, swap the ppe and prod slots.
The complication here is that, when any component is deployed to ppe, we also need to deploy the latest released versions of all three components to ppe, since Azure does not have the ability to swap virtual directories independently.
What I currently have is the following:
A build process for each of the three repositories, which is triggered on commits to master or RELEASE. It creates both a development build and a production build and publishes them.
A dev release process that is triggered on any new builds of master in any of the three repositories. It takes the latest dev build from master from all three repos and deploys them to their appropriate virtual directories in dev.
A production release process that is triggered on any new build of RELEASE in any of the three repositories. It takes the latest production build from RELEASE from all three repos, deploys them to the appropriate virtual directories in ppe and, on user approval, swaps ppe and prod.
This works, but it seems pretty clunky, has a lot of wasted work, and it doesn't feel like we're exactly taking advantage of the power of the VSTS build/release pipeline. Is there a better or more accepted way of doing this?
There is a "filters based on the artifacts" feature in the environment triggers of a release, so you can use the build tag(s) to trigger the corresponding environments.
Regarding the build, you can set the build tag(s) according to the current branch by calling logging commands through a PowerShell task (##vso[build.addbuildtag]build tag).
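For instance, a sketch of such a tagging step, expressed as Azure Pipelines YAML for brevity (using the build's source branch name as the tag is an illustrative choice; the same logging command works from a classic PowerShell task):

```yaml
# Build step sketch: tag the build with its source branch so release
# environment triggers can filter on that tag.
steps:
  - powershell: |
      Write-Host "##vso[build.addbuildtag]$(Build.SourceBranchName)"
    displayName: Tag build with branch name
```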
Regarding the ppe scenario, I recommend creating a new CI build definition just for the ppe-related branch that builds all components (getting the other repos' source code by calling the git clone command through a Command Line task or another task, e.g. PowerShell), then publishes the results; after that, deploy them to the corresponding slots and virtual directories in the release.