How to set up one GitLab agent for all projects in a GitLab group to deploy projects separately to the Kubernetes cluster - gitlab

I have installed a separate GitLab agent in my Kubernetes cluster for each project inside the GitLab group, using the helm command and a separate namespace for each project. As an example...
There are 2 projects inside my GitLab group:
1. mygroup/project1
2. mygroup/project2
And I used the helm command like this...
For project 1:
helm upgrade --install gitlab-runner gitlab/gitlab-agent --namespace gitlab-agent-project-1 --create-namespace --set image.tag=v15.1.0 --set config.token=XXXXXXXX --set config.kasAddress=wss://kas.gitlab.com
For project 2:
helm upgrade --install gitlab-runner gitlab/gitlab-agent --namespace gitlab-agent-project-2 --create-namespace --set image.tag=v15.1.0 --set config.token=XXXXXXXX --set config.kasAddress=wss://kas.gitlab.com
The only difference between these two is the namespace.
So I am asking: is this the best and correct way of doing this? Can't we use one GitLab agent for all projects inside the GitLab group and still use it for separate CI/CD Kubernetes deployments per project?
Because a pod is started for every agent I install; if I have 100 projects, I end up running (and providing IP addresses for) 100 agent pods.

Yes, you can use one GitLab agent for all projects inside a GitLab group. Currently, I'm implementing this.
A three-project demonstration:
Inside the GitLab agent project, you define a .gitlab/agents/{agent-name}/config.yaml file.
Inside the config.yaml file, you grant ci_access to the projects inside your GitLab group:
gitops:
  # Manifest projects are watched by the agent. Whenever a project changes,
  # GitLab deploys the changes using the agent.
  manifest_projects:
    - id: medai/vinlab/vinlab-testing/test-k8s-cicd/test-gitlab-agent
      default_namespace: gitlab-agent
ci_access:
  projects:
    - id: medai/vinlab/vinlab-testing/test-k8s-cicd/sample-go-service
    - id: medai/vinlab/vinlab-testing/test-k8s-cicd/api-test
From each project that needs access to the GitLab agent, you run kubectl config use-context to select the agent's context, and then you can perform whatever action you need. For example, this snippet comes from the .gitlab-ci.yml file of one project:
deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  script:
    - kubectl config get-contexts
    - kubectl config use-context medai/vinlab/vinlab-testing/test-k8s-cicd/test-gitlab-agent:dev-agent-1
    - kubectl apply -f functional-tester.yaml --namespace vinlab-testing
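With this setup you install the agent only once. A minimal sketch, based on the helm command from the question (the release name, namespace, and token are illustrative; the gitlab/gitlab-agent chart comes from GitLab's public Helm repository):

# Add GitLab's public chart repository once
helm repo add gitlab https://charts.gitlab.io
helm repo update

# Install a single shared agent; every project listed under ci_access
# in config.yaml can then use this one agent's context from its CI jobs.
helm upgrade --install gitlab-agent gitlab/gitlab-agent \
  --namespace gitlab-agent \
  --create-namespace \
  --set config.token=XXXXXXXX \
  --set config.kasAddress=wss://kas.gitlab.com

Depending on your GitLab version, ci_access also accepts a groups: list, so you can authorize the whole group at once instead of listing every project individually (check the agent configuration docs for your version).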

Related

How would you go about installing and executing Helm on Gitlab CI/CD pipelines?

I am working to create a scheduler on GitLab to execute a pipeline that deploys multiple applications to OpenShift using helm. I have the pods ready and the scheduler set up, but I am unable to run helm commands. The pipeline fails with the following error.
++ echo '$ helm install chart-name helm/charts/ --namespace dev # collapsed multi-line command'
$ helm install chart-name helm/charts/ --namespace dev # collapsed multi-line command
++ helm install chart-name helm/charts/ --namespace dev
bash: line 188: helm: command not found
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: exit status 1
This is my code in the .gitlab-ci.yaml. I attempted to add a helm image, but it didn't seem to work; I was expecting that, with the image present, I would be able to call the helm commands.
onboard-dev:
  stage: release
  tags:
    - my-tag
  image:
    name: alpine/helm
    entrypoint: [""]
  script:
    - |
      PATH=$PATH:$(pwd)/bin
      oc login --token=$TOKEN --insecure-skip-tls-verify=true --server=$MY_SERVER
      oc project dev
    - |
      helm install chart-name helm/charts --namespace dev
      helm upgrade --install chart-name helm/charts
What is the best way to go about achieving this? Thanks in advance!

Installing Argo Rollouts on Azure Kubernetes cluster

I'm using Argo CD along with Argo Rollouts on my local cluster. Setting it up on a local cluster is straightforward: download the binaries, add them to the path, and execute kubectl argo rollouts version.
However, I'm trying to install it on a new Azure Kubernetes cluster and am unable to. As per the installation steps, the binaries need to be downloaded and added to the PATH, but it fails at sudo mv ./kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts - which is understandable, but how do I overcome that?
I've not come across any other way to install Argo Rollouts. There are documents available on installing Argo CD but not Argo Rollouts.
We use Kustomize to generate our manifests for Argo Rollouts. We also have Argo CD manage Argo Rollouts as a separate application.
> cat kustomization.yml
resources:
  - https://raw.githubusercontent.com/argoproj/argo-rollouts/v1.2.1/manifests/install.yaml
  - https://raw.githubusercontent.com/argoproj/argo-rollouts/v1.2.1/manifests/dashboard-install.yaml
  - https://raw.githubusercontent.com/argoproj/argo-rollouts/v1.2.1/manifests/notifications-install.yaml
images:
  - name: quay.io/argoproj/argo-rollouts
    newTag: v1.2.1
  - name: quay.io/argoproj/kubectl-argo-rollouts
    newTag: v1.2.1
namespace: argo-rollouts
If you want to install manually (i.e., without Argo CD managing it), navigate to the kustomization directory and run kustomize build . | kubectl apply -f -
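For the Argo CD-managed route mentioned above, a minimal sketch of an Application pointing at the directory that contains the kustomization.yml; the repoURL, path, and namespace values are placeholders:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argo-rollouts
  namespace: argocd            # namespace where Argo CD itself runs (placeholder)
spec:
  project: default
  source:
    repoURL: https://example.com/your-org/gitops-repo.git   # placeholder repo
    targetRevision: main
    path: argo-rollouts        # directory holding the kustomization.yml above
  destination:
    server: https://kubernetes.default.svc
    namespace: argo-rollouts
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true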

Installing nginx-ingress using Helm returns "Error: rendered manifests contain a resource that already exists"

I have a GitLab pipeline to deploy a Kubernetes cluster using Terraform on Azure.
The first time I used the pipeline everything went fine. Once I finished doing my tests I ran the destroy phase and everything was destroyed.
Yesterday I reran the pipeline to create the cluster; all the stages went well except the last one, which installs nginx-ingress using helm.
install_nginx_ingress:
  stage: install_dependencies
  image: alpine/helm:3.1.1
  script:
    - helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    - helm repo update
    - >
      helm install nginx-ingress ingress-nginx/ingress-nginx
      --namespace default
      --set controller.replicaCount=2
  dependencies:
    - apply
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $PHASE == "DEPLOY"
When this stage is executed, this is what I have in the GitLab console:
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
"ingress-nginx" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ingress-nginx" chart repository
Update Complete. ⎈ Happy Helming!⎈
$ helm install nginx-ingress ingress-nginx/ingress-nginx --namespace default --set controller.replicaCount=2
Error: rendered manifests contain a resource that already exists.
Unable to continue with install: could not get information about the resource: poddisruptionbudgets.policy "nginx-ingress-ingress-nginx-controller" is forbidden: User "system:serviceaccount:gitlab-managed-apps:default" cannot get resource "poddisruptionbudgets" in API group "policy" in the namespace "default"
Cleaning up project directory and file based variables
ERROR: Job failed: command terminated with exit code 1
What is happening!?
Check this error line; it explains the issue.
Unable to continue with install: could not get information about the resource: poddisruptionbudgets.policy "nginx-ingress-ingress-nginx-controller" is forbidden: User "system:serviceaccount:gitlab-managed-apps:default" cannot get resource "poddisruptionbudgets" in API group "policy" in the namespace "default"
The service account running the install (system:serviceaccount:gitlab-managed-apps:default) does not have RBAC permission for the get operation on the poddisruptionbudgets resource.
It looks like the kubernetes/ingress-nginx chart defines a PodDisruptionBudget, but the ClusterRole bound to that service account does not include any permission for the poddisruptionbudgets resource.
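One way to unblock the install is to grant the missing permissions to the service account named in the error. A minimal sketch (resource names are illustrative, and whether this service account should have these rights is a policy decision for your cluster):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: helm-pdb-access
  namespace: default
rules:
  # Verbs Helm needs to manage the chart's PodDisruptionBudget
  - apiGroups: ["policy"]
    resources: ["poddisruptionbudgets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: helm-pdb-access
  namespace: default
subjects:
  # The service account from the error message
  - kind: ServiceAccount
    name: default
    namespace: gitlab-managed-apps
roleRef:
  kind: Role
  name: helm-pdb-access
  apiGroup: rbac.authorization.k8s.io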

Azure DevOps Build Agents in Kubernetes

We are planning to run our Azure DevOps build agents in Kubernetes pods, but going through the internet, I couldn't find any recommended approach to follow.
Details:
Azure Devops Server
AKS- 1.19.11
Looking for
AKS kubernetes cluster where ADO can trigger its pipeline with the dependencies.
The pods should scale according to the load coming from ADO.
Is there any default MS provided image available currently for the build agents?
The image should be lightweight, with the build agent and the Zulu JDK on Debian, as we are running Java-based apps.
Any suggestions highly appreciated
This article provides instructions for running your Azure Pipelines agent in Docker. You can set up a self-hosted agent in Azure Pipelines to run inside a Windows Server Core container (for Windows hosts) or an Ubuntu container (for Linux hosts) with Docker.
The image should be lightweight, with the build agent and the Zulu JDK on Debian, as we are running Java-based apps.
Add tools and customize the container
Once you have created a basic build agent, you can extend the Dockerfile to include additional tools and their dependencies, or build your own container by using this one as a base layer. Just make sure that the following are left untouched:
The start.sh script is called by the Dockerfile.
The start.sh script is the last command in the Dockerfile.
Ensure that derivative containers don't remove any of the dependencies stated by the Dockerfile.
Note: Docker was replaced with containerd in Kubernetes 1.19, and Docker-in-Docker became unavailable. A few use cases for running Docker inside a Docker container:
One potential use case for docker in docker is for the CI pipeline, where you need to build and push docker images to a container registry after a successful code build.
Building Docker images with a VM is pretty straightforward. However, when you plan to use Jenkins Docker-based dynamic agents for your CI/CD pipelines, docker in docker comes as a must-have functionality.
Sandboxed environments.
For experimental purposes on your local development workstation.
If your use case requires running Docker inside a container, then you must use Kubernetes version <= 1.18.x (currently not supported on Azure) as shown here, or run the agent in an alternative Docker environment as shown here.
Otherwise, if you are deploying the self-hosted agent on AKS, the azdevops-deployment Deployment at step 4 here must be changed to:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azdevops-deployment
  labels:
    app: azdevops-agent
spec:
  replicas: 1 # here is the configuration for the actual agent always running
  selector:
    matchLabels:
      app: azdevops-agent
  template:
    metadata:
      labels:
        app: azdevops-agent
    spec:
      containers:
        - name: azdevops-agent
          image: <acr-server>/dockeragent:latest
          env:
            - name: AZP_URL
              valueFrom:
                secretKeyRef:
                  name: azdevops
                  key: AZP_URL
            - name: AZP_TOKEN
              valueFrom:
                secretKeyRef:
                  name: azdevops
                  key: AZP_TOKEN
            - name: AZP_POOL
              valueFrom:
                secretKeyRef:
                  name: azdevops
                  key: AZP_POOL
The pods should scale according to the load coming from ADO.
You can use cluster-autoscaler and horizontal pod autoscaler. When combined, the horizontal pod autoscaler is focused on running the number of pods required to meet application demand. The cluster autoscaler is focused on running the number of nodes required to support the scheduled pods. [Reference]
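As a minimal sketch, a CPU-based HorizontalPodAutoscaler targeting the azdevops-deployment above (the replica bounds and threshold are placeholders; build agents are often better scaled on pipeline queue length, for example with KEDA's Azure Pipelines scaler, but this shows the basic HPA shape):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: azdevops-agent
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: azdevops-deployment   # the Deployment defined above
  minReplicas: 1
  maxReplicas: 10                # placeholder upper bound
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 # placeholder target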

Azure App service slot and swap deployment using circleci config.yml

Azure App service slot deployment using circleci config.yml
I need to add a step to deploy to the production slot or the staging slot, then modify the config to swap the deployment.
Description: when I run this config file, it deploys to the production slot of the Azure App Service by default, but I want to deploy to the stage slot first and then do a swap.
The file below is working fine but needs some configuration changes so that I can deploy to the stage slot and then swap it to the production slot.
Using CircleCI config.yml; below is my config.yml:
version: 2.1
jobs:
  build:
    docker:
      - image: circleci/node:10.16.3
    steps:
      ## Fetch all release tags
      - checkout
      - run:
          name: Install Node.js dependencies with Npm
          command: npm install
      - run:
          name: Test
          command: CI=true npm run coverage
  dev-deploy:
    machine: true
    steps:
      - checkout
      - run:
          name: create / update infrastructure
          command: |
            docker login -u $REGISTRY_UN -p $REGISTRY_PW $REGISTRY_SERVER
            docker run --rm -it -e TF_VAR_repo_branch=$CIRCLE_BRANCH -e vaultkey=$VAULT_KEY -v `pwd`:/dp/config dockerimage/dpdeployer:beta-1.0 .dp.yaml
workflows:
  version: 2
  build_and_test_publish:
    jobs:
      - build
      # - hold: # <<< A job that will require manual approval in the CircleCI web application.
      #     type: approval # <<< This key-value pair will set your workflow to a status of "On Hold"
      #     requires: # We only run the "hold" job when test2 has succeeded
      #       - build
      - dev-deploy:
          requires:
            - build
          filters:
            branches:
              only: feature/appservice
Hmmm, this may be a good link to review: Deploy to Azure from CircleCI
But, I think it comes down to how you want to deploy your code to Azure App Service. There are a lot of different ways to do so. Checking your config, you are using Docker already. This link, https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-custom-docker-image , talks about the steps for deploying your container as an Azure App Service.
The gist of it seems to be that you need to configure your WebApp to pull from a Docker registry per Azure App Service slot.
Then after a successful build, have circleci push/tag the docker image to that registry. Then Azure App Service will start up the new version of the app.
For jumping between Azure App service slots, you could have your circleci config push to different docker registry image tags. This would require setting up each Azure App Service slot with a slightly different config. For example ...
# Dev
az webapp config container set --name <app-name> --resource-group <rg> --docker-custom-image-name <registry-name>/mydockerimage:$VERSION_FOR_DEV ...
# Staging
az webapp config container set --name <app-name> --resource-group <rg> --docker-custom-image-name <registry-name>/mydockerimage:$VERSION_FOR_STAGE ...
In your CircleCI config, set up your pipeline with dev, stage, and production jobs. The dev and stage jobs would do the docker pushes or tagging for you, and the production job does the swap as the final step. Something like this...
prod-deploy:
  steps:
    - run:
        name: swap staging and production slots
        command: az webapp deployment slot swap -g MyResourceGroup -n MyUniqueApp --slot staging --target-slot production
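To wire this together, a hedged sketch of the workflow section (the job names and the approval gate mirror the commented-out hold job from the question's config; the ordering is illustrative):

workflows:
  version: 2
  build_and_deploy:
    jobs:
      - build
      - dev-deploy:          # deploys/tags the image for the stage slot
          requires:
            - build
      - hold:                # manual approval before swapping into production
          type: approval
          requires:
            - dev-deploy
      - prod-deploy:         # runs the az webapp deployment slot swap above
          requires:
            - hold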
Also see: https://learn.microsoft.com/en-us/cli/azure/webapp/deployment/slot?view=azure-cli-latest#az-webapp-deployment-slot-swap
Hopefully this helps, and I did not misunderstand your question. 🤞
Yes, it worked!!! Thanks.
Although, as per our current deployment structure, we are using a deploy script, handling the swapping from there, and then deploying the application through CircleCI.
