GitLab Runner: How to keep the VM or Docker container from being deleted at the end of the pipeline?

I am using GitLab Runner with the VM executor (so that each of my pipelines runs in a VM).
The thing is that GitLab deletes my VM at the end of the pipeline, but I don't want it to.
I want to be able to configure the runner not to delete it, so that I can access my environment after the pipeline has run.
I think I saw something about a setting in the runner's .toml configuration, but I can't find it anymore...
Thanks!

Related

Skip a GitLab CI/CD pipeline execution when the runner is offline

I have a CI/CD pipeline that I use to execute a set of Terraform scripts that manage the IPSets for multiple environments. The runners are deployed into the Kubernetes cluster of each environment in AWS.
The pipeline is set up so that the runner dedicated to each AWS environment, running in its respective EKS Kubernetes cluster, runs terraform plan and apply on the folder that contains that environment's Terraform scripts.
The CI/CD pipeline contains a terraform plan job followed by a terraform apply job.
My problem is that when one or more environments are scaled down (i.e. the runner deployment is scaled to 0), the pipeline gets stuck and eventually fails because the jobs for those runners remain pending.
Is there a way to structure the CI/CD pipeline so that when a runner is stuck for X amount of time, those executions stop or fail (so that the terraform apply job is not triggered) while executions for the other environments continue?
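A sketch of the per-environment structure described above, assuming per-environment folders and runner tags (the job names, tag names, and paths are illustrative, not taken from the original pipeline). Connecting each environment's apply job to its own plan job with needs makes each environment an independent branch of the pipeline, so a pending job for a scaled-down environment does not hold back the others:

stages:
  - plan
  - apply

plan:dev:
  stage: plan
  tags: [dev-eks]                 # only the dev cluster's runner picks this up
  script:
    - cd environments/dev
    - terraform init
    - terraform plan -out=plan.tfplan
  artifacts:
    paths:
      - environments/dev/plan.tfplan

apply:dev:
  stage: apply
  tags: [dev-eks]
  needs: ["plan:dev"]             # depends only on dev's plan, not on other environments' jobs
  script:
    - cd environments/dev
    - terraform init
    - terraform apply plan.tfplan

This does not by itself fail a job that stays pending; GitLab eventually drops jobs that no runner ever picks up, but the exact timeout depends on the instance configuration.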

Azure DevOps - Automated Pipeline Creation

I'm new to Azure DevOps, and I was wondering if there is a way to automatically detect a .yml build file and create a pipeline without having to interact with the site.
I have tried creating a file called azure-pipelines.yml in the root of the repo, with no luck.
Is there any way to automatically create pipelines, like how Jenkins detects a Jenkinsfile?
No, this is not possible out of the box, because a YAML file is not always a pipeline definition. You may try to figure out whether it truly is; however, you need to listen for repo changes, and in fact you can do this via another pipeline ;) for instance like this:
check if the commit added a new YAML file
verify that the file is a pipeline definition
create a pipeline using the Azure CLI, for instance (sketched below)
However, this would be quite a lot of work, and you would then need to create such a detection pipeline in every repo where you want this detection enabled.
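A minimal sketch of such a detection pipeline, assuming the azure-devops CLI extension, a personal access token stored in a secret variable (here called DETECTOR_PAT, a hypothetical name), and a main branch; the naive grep check is also an assumption:

trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

steps:
  - checkout: self
    fetchDepth: 2   # we need the previous commit to diff against

  - bash: |
      set -euo pipefail
      # YAML files added by the triggering commit
      added=$(git diff --diff-filter=A --name-only HEAD~1 HEAD -- '*.yml' '*.yaml' || true)
      if [ -z "$added" ]; then echo "No new YAML files"; exit 0; fi

      az extension add --name azure-devops
      for f in $added; do
        # very rough check that the file looks like a pipeline definition
        grep -qE '^(trigger|stages|jobs|steps):' "$f" || continue
        name="auto-${f//\//-}"
        az pipelines create \
          --organization "$SYSTEM_COLLECTIONURI" \
          --project "$SYSTEM_TEAMPROJECT" \
          --name "$name" \
          --repository "$BUILD_REPOSITORY_NAME" \
          --repository-type tfsgit \
          --branch main \
          --yml-path "$f"
      done
    displayName: Create pipelines for newly added YAML files
    env:
      AZURE_DEVOPS_EXT_PAT: $(DETECTOR_PAT)   # hypothetical secret variable holding a PAT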

How to pass environment variables at runtime to a Docker container in Azure

I have my connection string inside a .env file, which I don't commit into the git repo, and I have my app up and running on Azure.
The way my app works is: when I push my code to GitHub, Azure Container Registry builds the image from the committed code, and then Azure App Service pulls it and runs a container for my app.
So my question is, how do I pass that connection string to the Docker container? I could put the .env file into the git repo, but I don't think I should put it up there.
Thank you Rimaz Mohommed. Posting the comment-section discussion as an answer to help other community users.
A couple of approaches:
Add the variables directly as App Service app settings and they will be available as part of your app's environment.
Reference: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-rm-web-app-deployment?view=azure-devops
You can pass/set your environment variables by configuring environment variables for the App Service.
You can run an Azure CLI command for this as part of your DevOps pipeline (https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-cli?view=azure-devops); a sketch follows below.
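A minimal sketch of that second approach using the AzureCLI@2 task; the service connection, resource group, app name, and the secret pipeline variable ConnectionString are placeholders, not values from the original discussion:

steps:
  - task: AzureCLI@2
    displayName: Set the connection string as an App Service app setting
    inputs:
      azureSubscription: 'my-azure-service-connection'   # hypothetical service connection
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        # App settings are injected into the App Service container as environment variables
        az webapp config appsettings set \
          --resource-group my-resource-group \
          --name my-webapp \
          --settings CONNECTION_STRING="$(ConnectionString)"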

Gitlab-CI: How do I run a job on a specific server?

I set up Gitlab and Gitlab-CI on a k8s cluster in AWS. I have jobs that use a lot of resources. I want to run these jobs on specific instances in AWS. How can this be done?
Kubernetes configuration
You need to add a node selector, which enables you to assign pods to specific nodes:
kubectl label nodes <node-name> gitlab=true
GitLab Runner configuration
Specify a tag associated with the runner. In your case, also uncheck the Run untagged jobs option.
Specify a node selector using the node_selector keyword:
[runners.kubernetes.node_selector]
gitlab = "true"
See a more complete example of config.toml on the GitLab website.
GitLab CI configuration
Reference your runner's tag in your .gitlab-ci.yml:
job:
  tags:
    - big_server
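Since the jobs in question are resource-heavy, it may also help to request more CPU and memory for them. The Kubernetes executor supports per-job resource overwrites via CI/CD variables, provided the runner's config.toml allows the overwrites (settings such as cpu_request_overwrite_max_allowed); the tag, values, and script below are assumptions for illustration:

heavy_job:
  tags:
    - big_server
  variables:
    KUBERNETES_CPU_REQUEST: "4"
    KUBERNETES_MEMORY_REQUEST: "8Gi"
  script:
    - ./run-heavy-task.sh   # hypothetical resource-intensive command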

How to Integrate GitLab-Ci w/ Azure Kubernetes + Kubectl + ACR for Deployments?

Our previous GitLab-based CI/CD used an authenticated curl request to a specific REST API endpoint to trigger the redeployment of an updated container to our service. If you use something similar for your Kubernetes-based deployment, this question is for you.
More Background
We run a production site / app (Ghost blog based) on an Azure AKS cluster. Right now we manually push our updated containers to a private ACR (Azure Container Registry) and then update from the command line with kubectl.
That being said, we previously used Docker Cloud for our orchestration and had fully integrated the re-deployment of our production / staging services using GitLab CI.
That GitLab-Ci integration is the goal, and the 'Why' behind this question.
My Question
Since we previously used Docker Cloud (doh, should have gone with Kubernetes from the start), how should we handle the fact that GitLab CI was able to make use of secrets created by the Docker Cloud CLI and then authenticate with the Docker Cloud API to trigger actions on our nodes (i.e. re-deploy with new containers, etc.)?
While I believe we can build a container (to be used by our GitLab CI runner) that contains kubectl and the Azure CLI, I know that Kubernetes also has a REST API similar to Docker Cloud's, documented here (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster); specifically, the section about connecting WITHOUT kubectl appears to be relevant (as does the piece about the HTTP REST API).
My question to anyone who is connecting to Azure (or potentially another managed Kubernetes service):
How does your CI/CD server authenticate with your Kubernetes service provider's management server, and how do you currently trigger an update / redeployment of an updated container / service?
If you have used the Kubernetes HTTP REST API to re-deploy a service, your thoughts are particularly valuable!
Kubernetes Resources I am Reviewing
How should I manage deployments with kubernetes
Kubernetes Deployments
Will update as I work through the process.
Creating the integration
I had the same problem of how to integrate GitLab CI/CD with my Azure AKS Kubernetes cluster. I created this question because I was getting an error when I tried to add my Kubernetes cluster info into GitLab.
How to integrate them:
Inside GitLab, go to the "Operations" > "Kubernetes" menu.
Click the "Add Kubernetes cluster" button at the top of the page.
You will have to fill in some form fields. To get the values for these fields, connect to your Azure account from the CLI (you need the Azure CLI installed on your PC) using the az login command, and then run this other command to get the Kubernetes cluster credentials: az aks get-credentials --resource-group <resource-group-name> --name <kubernetes-cluster-name>
The previous command will create a ~/.kube/config file. Open this file; the values for the fields that you have to fill in the GitLab "Add Kubernetes cluster" form are all inside this .kube/config file.
These are the fields:
Kubernetes cluster name: it's the name of your cluster on Azure; it's in the .kube/config file too.
API URL: it's the URL in the server field of the .kube/config file.
CA Certificate: it's the certificate-authority-data field of the .kube/config file, but you will have to base64-decode it.
After you decode it, it should look something like this:
-----BEGIN CERTIFICATE-----
...
some base64 strings here
...
-----END CERTIFICATE-----
Token: it's the string in the token field of the .kube/config file (it might also need to be base64-decoded). You need to use a token belonging to an account with cluster-admin privileges, so GitLab can use it to authenticate and install things on the cluster. The easiest way to achieve this is to create a new account for GitLab: write a YAML file with the service account definition (an example can be seen here under Create a gitlab service account in the default namespace; a sketch is also shown after these steps) and apply it to your cluster with kubectl apply -f serviceaccount.yml.
Project namespace (optional, unique): I leave it empty; I don't know yet what this namespace is used for.
Click "Save" and it's done. Your GitLab project should now be connected to your Kubernetes cluster.
Deploy
In your deploy job (in the pipeline), you'll need some environment variables to access your cluster using the kubectl command. Here is a list of all the variables available:
https://docs.gitlab.com/ee/user/project/clusters/index.html#deployment-variables
To have these variables injected into your deploy job, there are some conditions:
You must have correctly added the Kubernetes cluster to your GitLab project, via the "Operations" > "Kubernetes" menu and the steps I described above.
Your job must be a "deployment job". In GitLab CI, to be considered a deployment job, your job definition (in your .gitlab-ci.yml) must have an environment key (take a look at line 31 in this example), and the environment name must match the name you used in the "Operations" > "Environments" menu.
Here is an example of a .gitlab-ci.yml with three stages (a sketch along these lines is shown after this list):
Build: it builds a Docker image and pushes it to the GitLab private registry.
Test: it doesn't do anything yet; it just runs exit 0, to be changed later.
Deploy: it downloads a stable version of kubectl, copies the .kube/config file so it can run kubectl commands against the cluster, and executes kubectl cluster-info to make sure it is working. In my project I haven't finished writing the deploy script to actually execute a deployment, but this kubectl cluster-info command runs fine.
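A minimal sketch of a .gitlab-ci.yml along the lines described, reconstructed from the description rather than copied from the original project; the images, the pinned kubectl version, and the environment name are assumptions (the deploy job relies on the KUBECONFIG that GitLab's Kubernetes integration injects into deployment jobs):

stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

test:
  stage: test
  script:
    - exit 0   # placeholder, to be replaced with real tests

deploy:
  stage: deploy
  image: alpine:latest            # assumption: any image where kubectl can be installed
  environment:
    name: production              # must match the environment name configured in GitLab
  script:
    - apk add --no-cache curl
    # download a pinned stable kubectl release (version chosen here as an assumption)
    - curl -LO https://dl.k8s.io/release/v1.27.0/bin/linux/amd64/kubectl
    - chmod +x kubectl && mv kubectl /usr/local/bin/
    # KUBECONFIG is provided by the GitLab Kubernetes integration for deployment jobs
    - kubectl cluster-info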
Tip: to take a look at all the environment variables and their values (Jenkins has a page with this view; GitLab CI doesn't), you can execute the env command in the script of your deploy stage. It helps a lot when debugging a job.
I logged into our GitLab CI backend today and saw a 'Kubernetes' button, along with an offer to save $500 at GCP.
GitLab Kubernetes
The URL for your repo's Kubernetes page in GitLab is:
https://gitlab.com/^your-repo^/clusters
As I work through the integration process I will update this answer (other answers are also welcome!).
Official GitLab Kubernetes Integration Docs
https://docs.gitlab.com/ee/user/project/clusters/index.html
