I have some questions regarding GitLab Server & GitLab Runner.
Must GitLab Runner be installed to perform CI/CD deployments?
Must GitLab Server be installed for GitLab Runner?
Must GitLab Runner be installed on the same server as GitLab, if GitLab Runner requires a GitLab server?
GitLab Runner is a software component that can execute GitLab CI jobs. Runners can operate on a standalone server (or desktop/laptop), in Docker, or in Kubernetes, and have minimal requirements. A runner must connect to the GitLab server it is registered with in order to accept jobs, so a GitLab runner is dependent on a GitLab server. It's actually sensible to have different runners executing jobs in the same pipeline.
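For example, jobs in the same pipeline can be routed to different runners using tags. A minimal sketch, where the tag names and commands are hypothetical:

build:
  tags: [docker]        # picked up by a runner registered with the 'docker' tag
  script:
    - make build        # hypothetical build command

deploy:
  tags: [prod-shell]    # picked up by a different runner, e.g. one close to the target host
  script:
    - ./deploy.sh       # hypothetical deploy script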
A server may have many runners in many different network locations. About the only place you SHOULD NOT deploy a GitLab runner is on your GitLab server itself. GitLab.com provides runners, some free and some paid, and they are deployed on separate infrastructure, because this is just good design.
GitLab Runners and GitLab CI jobs and pipelines are a good way to implement CI/CD deployments. They are not the only way, however. They are the way supported by GitLab, and all things considered, I'd say they are a very good choice. Lots of other CI/CD tools exist, though, and different repositories on the same server can make different choices about how to implement their CI/CD pipelines.
Related
We are currently facing a conundrum with our multi-tenant project, which contains various configuration files for each of our tenants and their associated environments. Our CI/CD pipeline is split into two parts:
1. An upstream pipeline which analyses the new commit to master to determine which tenants/environments have changed. This triggers the downstream pipeline with the correct environment variables via the API.
2. A downstream pipeline which executes scripts to deploy changes to the tenants' environments based on the environment variables passed through.
This works well; however, we have a GitLab Runner per environment to access each customer's environment. We use this to avoid hard-coding multiple credentials within our scripts or CI environment variables.
Is there a way we can trigger this downstream pipeline on a specific GitLab Runner? Our GitLab Runners are tagged per environment, so that we can use the passed environment variables to detect which runner the job should be run on.
I've had a look around GitLab CI, specific runners, and shared runners (which ours currently are), but this doesn't seem to be supported.
The tags: keyword supports variable expansion. So it is possible to pass variables in the API call when creating your downstream pipelines in order to control which runners are used.
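As a minimal sketch, assuming the upstream pipeline passes variables[DEPLOY_ENVIRONMENT] in the pipeline trigger API call (the variable name and deploy script are hypothetical), the downstream job can use that variable as its tag:

deploy:
  tags:
    - "$DEPLOY_ENVIRONMENT"   # expands to the value passed via the trigger API
  script:
    - ./deploy.sh             # hypothetical deploy script

Each runner is registered with its environment's tag, so the expanded tag routes the job to the matching runner.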
I have the following image configuration in my gitlab-ci.yml:
default:
  image: registry.gitlab.com/gitlab-org/terraform-images/stable:latest
  #image: curlimages/curl:latest
...
This works fine when I am deploying on https://gitlab.com/; however, when I try to deploy my code with the above CI configuration on my self-hosted GitLab instance, I get the following error in my CI/CD:
This job is stuck because the project doesn't have any runners online assigned to it.
Go to project CI settings
My question is: while using https://gitlab.com/ I never specifically assigned any runners to my project through the settings, but now it seems I am supposed to do that.
Why is that?
And if it is necessary, how can I do it?
When using gitlab.com, the GitLab instance has shared runners configured that are available to run all untagged CI jobs.
On your own self-hosted GitLab instance, you must either configure your own shared runners for the instance or register runners to your projects/groups.
You cannot use the gitlab.com shared runners on a self-hosted GitLab instance.
From scope of runners:
Shared Runners
Shared runners are available to every project in a GitLab instance.
Use shared runners when you have multiple jobs with similar requirements. Rather than having multiple runners idling for many projects, you can have a few runners that handle multiple projects.
If you are using a self-managed instance of GitLab:
Your administrator can install and register shared runners by going to your project’s Settings > CI/CD, expanding the Runners section, and clicking Show runner installation instructions. These instructions are also available in the documentation.
The administrator can also configure a maximum number of shared runner pipeline minutes for each group.
If you are using GitLab.com:
You can select from a list of shared runners that GitLab maintains.
The shared runners consume the pipeline minutes included with your account.
I suppose you technically would be able to use public GitLab runners for your self-hosted instance if you create an account on gitlab.com and set up CI/CD for external repos pointing to your self-hosted instance, but your minutes would be a separate entitlement from your self-hosted license, among other serious limitations.
I've been trying to find a way to run integration tests on a remote server during an Azure pipeline process. In my situation we have the pipeline running in Azure and deploying to a local server. I am wondering if there is a way to also deploy integration tests to the same server and run them and report back to Azure in the same process?
You can use a self-hosted agent to run your pipeline. Since Microsoft-hosted Azure agents cannot communicate with your localDB, you can set up a self-hosted agent on your local machine. Your localDB is accessible to the self-hosted agent.
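Once the agent is registered in an agent pool, pointing a YAML pipeline at it is a one-line change. A minimal sketch, assuming a hypothetical pool named OnPremPool:

pool:
  name: 'OnPremPool'   # hypothetical agent pool containing your self-hosted agent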
In order to run integration tests in your release pipeline, you can include your test projects or test assembly DLL files in the artifacts published by your build pipeline, so that your integration test projects are accessible to the test tasks in your release pipeline.
To include your test files in the artifacts, you can add a second Publish Build Artifacts task in your build pipeline and set Path to publish to the location of your test files.
Run the tests in your release pipeline by adding the VsTest task or another test task. The release pipeline will download your artifacts to the folder $(System.DefaultWorkingDirectory).
The Visual Studio Test task and the .NET Core CLI task automatically publish test results to the pipeline, while tasks such as Ant, Maven, Gulp, Grunt, and Xcode provide publishing results as an option within the task. Besides that, you can use the Publish Test Results task.
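A minimal YAML sketch of both halves, where the folder names and DLL pattern are hypothetical. In the build pipeline, publish the test assemblies as a second artifact:

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)/tests'   # hypothetical test output folder
    ArtifactName: 'integration-tests'

Then run them with the Visual Studio Test task, which publishes the results automatically:

- task: VSTest@2
  inputs:
    testAssemblyVer2: '**/*IntegrationTests*.dll'   # hypothetical naming pattern
    searchFolder: '$(System.DefaultWorkingDirectory)'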
Here are some articles you can refer to:
Integration tests in ASP.NET Core
Running UAT and Integration Tests During a VSTS Build
Run automated tests from test plans
Good question. This comes up if your integration infrastructure is behind a corporate firewall, for example.
One solution is to use a self-hosted agent on that very integration infrastructure.
Another straightforward approach is to scp your integration tests to your integration infrastructure, run them via ssh, and scp the test results back. There are pipeline tasks for both scp and ssh.
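A minimal sketch of that approach, where the service connection name integration-host, the paths, and the test script are hypothetical:

steps:
  - task: CopyFilesOverSSH@0          # scp the test bundle to the integration host
    inputs:
      sshEndpoint: 'integration-host'
      sourceFolder: '$(Build.ArtifactStagingDirectory)/tests'
      targetFolder: '/opt/app/tests'
  - task: SSH@0                       # run the tests remotely
    inputs:
      sshEndpoint: 'integration-host'
      runOptions: 'commands'
      commands: 'cd /opt/app/tests && ./run-tests.sh'

The results can then be copied back with an scp command in a script step and published with a Publish Test Results task.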
Note that the direction of communication is reversed between these alternatives: a self-hosted agent calls out to the pipeline, while the scp/ssh approach has the pipeline calling into your infrastructure. Your corporate security team may prefer one over the other.
Each developer on our team forks the production repository, then opens MRs from their fork to merge into the master branch of the production one.
We have a fixed number of test environments which are shared by all developers. Eventually we hope to move to ephemeral environments for each MR but at the moment this is not possible.
We utilize the Environments / Deployments functionality of GitLab (https://docs.gitlab.com/ee/ci/environments/), and the production repository is the central location where we see what is deployed to each environment.
The problem is that when a developer opens an MR, they frequently choose to manually deploy to one of our shared test environments. But since the pipeline for their branch runs in their fork, it records the Environment / Deployment information there rather than in the production repository. This makes it impossible for us to know who deployed what to each test environment, as that information is recorded in random developer forks rather than in the centralized location.
We only deploy to production hosts from the production repository (deploys are disabled in developer forks), so that information is centralized and up to date. But it is a headache for developers to determine who has deployed something to one of the shared test environments, and they frequently overwrite each other's changes by accident.
Is there any way for us to allow test deploys from developer branches, but centralize the information about what is deployed in each environment in the production repository?
I'm trying out pipelines in GitLab Community Edition.
From what I understand, in GitLab the code and pipelines live in the same Git repository.
In my scenario the pipelines are the responsibility of the DevOps team and the code is the responsibility of the development team.
How, in GitLab, is it possible to prevent developers from changing the pipeline?
I understand it's possible to add the DevOps team as maintainers to review merge requests, but this will create a dependency on the DevOps team for every change.
thanks
GitLab is not really designed for the scenario you describe. The general idea is that developers look after the CI configuration themselves.
You could try using the include feature to store the bulk of the CI configuration in a separate repository.
In the application repository you would have a .gitlab-ci.yml file that pulls the CI configuration in from another repository using include:project:
include:
  - project: 'my-group/my-ciproject'
    ref: master
    file: '/ci/.gitlab-ci-myappproject.yml'
Then in the my-group/my-ciproject repository you would have a file .gitlab-ci-myappproject.yml that contains the GitLab CI jobs configuration.
build:
  script:
    - dobuild
Only the DevOps team would have access to the my-group/my-ciproject repository, so developers can't edit the CI config (although they could still mess with the .gitlab-ci.yml file in the app repository).
Alternatively you could protect the master branch and have all changes approved before merging to master. Then developers would not be able to make changes to the CI without an approval.