How would you go about installing and executing Helm in GitLab CI/CD pipelines?

I am working to create a scheduler on GitLab to execute a pipeline that deploys multiple applications to OpenShift using Helm. I have the pods ready and the scheduler set up, but I am unable to run helm commands. The pipeline fails with the following error.
++ echo '$ helm install chart-name helm/charts/ --namespace dev # collapsed multi-line command'
$ helm install chart-name helm/charts/ --namespace dev # collapsed multi-line command
++ helm install chart-name helm/charts/ --namespace dev
bash: line 188: helm: command not found
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: exit status 1
This is my code in the .gitlab-ci.yml. I attempted to add a Helm image, expecting that with the image I would be able to call the helm commands, but it didn't seem to work.
onboard-dev:
  stage: release
  tags:
    - my-tag
  image:
    name: alpine/helm
    entrypoint: [""]
  script:
    - |
      PATH=$PATH:$(pwd)/bin
      oc login --token=$TOKEN --insecure-skip-tls-verify=true --server=$MY_SERVER
      oc project dev
    - |
      helm install chart-name helm/charts --namespace dev
      helm upgrade --install chart-name helm/charts
What is the best way to go about achieving this? Thanks in advance!
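One pattern worth trying (a sketch, not from the original thread; the pinned Helm version and download URL follow Helm's standard release naming, and the repo-local bin/ for oc is inferred from the original PATH line) is to verify helm exists at job start and fetch a pinned binary if it does not:

onboard-dev:
  stage: release
  tags:
    - my-tag
  image:
    name: alpine/helm
    entrypoint: [""]
  before_script:
    # Keep the repo-local bin/ on PATH (the original script implies oc lives there).
    - export PATH="$PATH:$(pwd)/bin"
    # Fallback: fetch a pinned helm binary if the command is missing.
    - command -v helm || (wget -qO- https://get.helm.sh/helm-v3.11.3-linux-amd64.tar.gz | tar -xz && mv linux-amd64/helm /usr/local/bin/helm)
  script:
    - oc login --token=$TOKEN --insecure-skip-tls-verify=true --server=$MY_SERVER
    - oc project dev
    - helm upgrade --install chart-name helm/charts --namespace dev

One common explanation for helm: command not found despite image: alpine/helm is that the runner matched by my-tag uses the shell executor, which silently ignores the image: keyword; checking the runner's executor type is worth doing before anything else.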

Related

How to set up one GitLab agent for all projects in a GitLab group to deploy projects separately to the Kubernetes cluster

I have applied GitLab agents separately to my Kubernetes cluster for each and every project inside the GitLab group, using the helm command and a separate namespace for each project. As an example, there are 2 projects inside my GitLab group:
1. mygroup/project1
2. mygroup/project2
And I used the helm command like this.
For project 1:
helm upgrade --install gitlab-runner gitlab/gitlab-agent --namespace gitlab-agent-project-1 --create-namespace --set image.tag=v15.1.0 --set config.token=XXXXXXXX --set config.kasAddress=wss://kas.gitlab.com
For project 2:
helm upgrade --install gitlab-runner gitlab/gitlab-agent --namespace gitlab-agent-project-2 --create-namespace --set image.tag=v15.1.0 --set config.token=XXXXXXXX --set config.kasAddress=wss://kas.gitlab.com
The only difference between these two is the namespace.
So I am asking: is this the best and correct way of doing this? Can't we use one GitLab agent for all projects inside the GitLab group and use it for CI/CD Kubernetes deployments separately?
I ask because each separate agent spins up its own pod: if I have 100 projects, I have to provide 100 pod IP addresses for those agents.
Yes, you can use one GitLab agent for all projects inside a GitLab group. Currently, I'm implementing this with a three-project setup, demonstrated below.
Inside a GitLab agent project, you define a .gitlab/agents/{agent-name}/config.yaml file.
Inside the config.yaml file, you grant ci_access to the projects inside your GitLab group:
gitops:
  # Manifest projects are watched by the agent. Whenever a project changes,
  # GitLab deploys the changes using the agent.
  manifest_projects:
    - id: medai/vinlab/vinlab-testing/test-k8s-cicd/test-gitlab-agent
      default_namespace: gitlab-agent

ci_access:
  projects:
    - id: medai/vinlab/vinlab-testing/test-k8s-cicd/sample-go-service
    - id: medai/vinlab/vinlab-testing/test-k8s-cicd/api-test
From the project that needs access to the GitLab agent, you need to run kubectl config use-context to select the agent's context; then you can perform whatever action you want. For example, this code is from the .gitlab-ci.yml file of one project:
deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  script:
    - kubectl config get-contexts
    - kubectl config use-context medai/vinlab/vinlab-testing/test-k8s-cicd/test-gitlab-agent:dev-agent-1
    - kubectl apply -f functional-tester.yaml --namespace vinlab-testing
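For completeness, with this layout the shared agent itself is installed only once for the whole group (a sketch modeled on the question's own command; the release name, namespace, and token are placeholders):

helm upgrade --install gitlab-agent gitlab/gitlab-agent \
  --namespace gitlab-agent \
  --create-namespace \
  --set config.token=XXXXXXXX \
  --set config.kasAddress=wss://kas.gitlab.com

Every project listed under ci_access then shares this single agent pod instead of running its own.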

Installing nginx-ingress using Helm returns "Error: rendered manifests contain a resource that already exists"

I have a GitLab pipeline to deploy a Kubernetes cluster using Terraform on Azure.
The first time I used the pipeline everything went fine. Once I finished doing my tests I ran the destroy phase and everything was destroyed.
Yesterday I reran the pipeline to create the cluster; all the stages went well except the last one, which installs nginx-ingress using Helm.
install_nginx_ingress:
  stage: install_dependencies
  image: alpine/helm:3.1.1
  script:
    - helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    - helm repo update
    - >
      helm install nginx-ingress ingress-nginx/ingress-nginx
      --namespace default
      --set controller.replicaCount=2
  dependencies:
    - apply
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $PHASE == "DEPLOY"
When this stage is executed, this is what I have in the GitLab console:
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
"ingress-nginx" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ingress-nginx" chart repository
Update Complete. ⎈ Happy Helming!⎈
$ helm install nginx-ingress ingress-nginx/ingress-nginx --namespace default --set controller.replicaCount=2
Error: rendered manifests contain a resource that already exists.
Unable to continue with install: could not get information about the resource: poddisruptionbudgets.policy "nginx-ingress-ingress-nginx-controller" is forbidden: User "system:serviceaccount:gitlab-managed-apps:default" cannot get resource "poddisruptionbudgets" in API group "policy" in the namespace "default"
Cleaning up project directory and file based variables
ERROR: Job failed: command terminated with exit code 1
What is happening!?
Check this error line; it explains the issue.
Unable to continue with install: could not get information about the resource: poddisruptionbudgets.policy "nginx-ingress-ingress-nginx-controller" is forbidden: User "system:serviceaccount:gitlab-managed-apps:default" cannot get resource "poddisruptionbudgets" in API group "policy" in the namespace "default"
The service account running the install (system:serviceaccount:gitlab-managed-apps:default) does not have RBAC permission to perform the get operation on the poddisruptionbudgets resource.
It looks like the kubernetes/ingress-nginx chart defines a PodDisruptionBudget, but the ClusterRole bound to that service account does not include any permission for the poddisruptionbudgets resource.
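A minimal sketch of the missing grant (resource names here are hypothetical; the real fix is to extend whichever ClusterRole is bound to that service account):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pdb-reader          # hypothetical name
rules:
  - apiGroups: ["policy"]
    resources: ["poddisruptionbudgets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pdb-reader-gitlab   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pdb-reader
subjects:
  - kind: ServiceAccount
    name: default
    namespace: gitlab-managed-apps

Once the service account can get poddisruptionbudgets, Helm's pre-install lookup of the existing PDB succeeds and the install can proceed.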

GitLab CI Timeout with Kaniko and EKS

I am trying to follow the GitLab example code for using kaniko as outlined here. The only thing I have changed is that I am using the v1.7.0-debug tag instead of simply debug.
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:v1.7.0-debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}"
My build job is stalling out at the following line:
Running with gitlab-runner 14.4.0 (4b9e985a)
on gitlab-runner-gitlab-runner-84d476ff5c-mkt4s HMty8QBu
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image gcr.io/kaniko-project/executor:v1.7.0-debug ...
Using attach strategy to execute scripts...
Preparing environment
00:03
Waiting for pod gitlab-runner/runner-hmty8qbu-project-31186441-concurrent-0bbt8x to be running, status is Pending
Running on runner-hmty8qbu-project-31186441-concurrent-0bbt8x via gitlab-runner-gitlab-runner-84d476ff5c-mkt4s...
Getting source from Git repository
00:01
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/...
Created fresh repository.
Checking out 4d05d22b as ci...
Skipping Git submodules setup
Executing "step_script" stage of the job script
It just stops at Executing "step_script" and never moves on. I've researched all over and read through as much documentation as I can find but am unable to troubleshoot this issue.
Setup
Amazon EKS version 1.21
GitLab Runner Helm Chart version 0.34.0
kaniko executor image v1.7.0-debug
This ended up being an issue with how the Kubernetes runner itself was configured inside the runner configuration TOML. The default container image we were using for our runners required a modification to the PATH environment variable, so we were using the environment configuration setting to do this, as outlined here. That PATH value did not include the busybox shell shipped in the kaniko debug image. We have since moved the PATH change inside our Docker image, where it should have been in the first place, and things are working as expected.
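For reference, the relevant knob lives under [[runners]] in config.toml; a sketch of the corrected setting (the exact PATH value is an assumption):

[[runners]]
  name = "gitlab-runner"
  executor = "kubernetes"
  # This override applies to every image the runner launches. The kaniko
  # debug image keeps its shell in /busybox, so a PATH that omits it leaves
  # the job with no usable shell and "step_script" never starts.
  environment = ["PATH=/usr/local/bin:/usr/bin:/bin:/busybox"]

Setting PATH inside the Docker image itself, as the answer concludes, avoids this class of problem entirely.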

GitLab Container to GKE (Kubernetes) deployment

Hello, I have a problem with GitLab CI/CD. I'm trying to deploy a container to Kubernetes on GKE, but I'm getting this error:
This job failed because the necessary resources were not successfully created.
I created a service account with kube-admin rights and created the cluster via the GitLab GUI, so it's fully integrated. But when I run the job it still doesn't work.
By the way, I use kubectl get pods in the .gitlab-ci.yml file just to test if Kubernetes is responding.
stages:
  - build
  - deploy

docker-build:
  # Use the official docker image.
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  # Default branch leaves tag empty (= latest tag)
  # All other branches are tagged with the escaped branch name (commit ref slug)
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE${tag}" .
    - docker push "$CI_REGISTRY_IMAGE${tag}"

deploy-prod:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl get pods
  environment:
    name: production
    kubernetes:
      namespace: test1
Any ideas?
Thank you
The namespace should be removed; GitLab creates its own namespace for every project.
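Applied to the job above, that means dropping the kubernetes: namespace: keys; a sketch of the corrected job, otherwise unchanged:

deploy-prod:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl get pods
  environment:
    name: production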

Helm installs charts but doesn't see them

I've installed minikube and Helm on my machine's VM and tried to deploy Jenkins:
MacBook-Pro% helm install stable/jenkins
NAME: quelling-dachshund
Error: getting deployed release "quelling-dachshund": release: "quelling-dachshund" not found
Seems like an error, but kubectl can see the deployment after this error, first in Init:0/1 and then Running. Any ideas why it flops on the install part?
btw:
$ helm list --all
Error: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
$ helm list
Error: Get https://192.168.64.4:8443/api/v1/namespaces/kube-system/pods?labelSelector=app%3Dhelm%2Cname%3Dtiller: net/http: TLS handshake timeout
Any idea how to resolve the error?
Update:
It all comes down to a minikube error which I don't understand. By the way, this is just after a fresh minikube start:
$ kubectl create serviceaccount --namespace kube-system tiller --insecure-skip-tls-verify=true
serviceaccount/tiller created
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller --insecure-skip-tls-verify=true
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
$ helm init --service-account tiller --upgrade
$HELM_HOME has been configured at /Users/rwalas/.helm.
Error: error installing: the server could not find the requested resource
$ rm -rf ~/.helm
$ helm init --service-account tiller --upgrade
Creating /Users/rwalas/.helm
Creating /Users/rwalas/.helm/repository
Creating /Users/rwalas/.helm/repository/cache
Creating /Users/rwalas/.helm/repository/local
Creating /Users/rwalas/.helm/plugins
Creating /Users/rwalas/.helm/starters
Creating /Users/rwalas/.helm/cache/archive
Creating /Users/rwalas/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /Users/<user>/.helm.
Error: error installing: the server could not find the requested resource
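One likely explanation, although the thread never confirms it: this matches the well-known Helm 2 incompatibility with Kubernetes 1.16+, where helm init renders the Tiller Deployment with the removed extensions/v1beta1 API, yielding exactly this "the server could not find the requested resource" error. A commonly circulated workaround patches the manifest before applying it:

helm init --service-account tiller --output yaml \
  | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' \
  | sed 's@  replicas: 1@  replicas: 1\n  selector: {matchLabels: {app: helm, name: tiller}}@' \
  | kubectl apply -f -

Upgrading to Helm 3, which removes Tiller entirely, sidesteps the issue.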
