Trigger build in OpenShift on change in GitLab container registry

I am having trouble figuring out how OpenShift can trigger a deployment based on a change in the GitLab container registry. I have configured the option in OpenShift called "Deploy an existing Image from an Image Stream or Image registry". This works and I can see my pod up and running. When I updated the Docker image and pushed it to the GitLab container registry, I did not see any new deployment in OpenShift. Please advise.
Thanks!!

OpenShift can automatically trigger redeployment on an image update, but only if the image changes in an ImageStreamTag resource on the same cluster.
The image.openshift.io/triggers annotation can be configured in the deployment to reference an ImageStreamTag on the local cluster.
In the following example, the Deployment is triggered by an update of the content-mirror:latest ImageStreamTag in the ci namespace.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    image.openshift.io/triggers: '[{"from":{"kind":"ImageStreamTag","name":"content-mirror:latest","namespace":"ci"},"fieldPath":"spec.template.spec.containers[?(@.name==\"mirror\")].image"}]'
  name: base-4-10
  namespace: app
spec:
  ...
To get the same behaviour with images hosted in the GitLab container registry, you can either:
Push directly to the cluster registry
Create an ImageStream resource in your cluster and push your image to the cluster's internal registry instead of the GitLab container registry. Configure your Deployment trigger to react to changes to the tag you push the image to.
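A minimal sketch of this first option, assuming a default exposed internal registry route and placeholder project/image names (adjust to your cluster):
oc create imagestream myapp -n app
# log in to the internal registry with your OpenShift token
docker login -u $(oc whoami) -p $(oc whoami -t) default-route-openshift-image-registry.apps.example.com
# push to the ImageStream instead of the GitLab registry
docker tag myapp:latest default-route-openshift-image-registry.apps.example.com/app/myapp:latest
docker push default-route-openshift-image-registry.apps.example.com/app/myapp:latest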
Periodically import the image from GitLab into an ImageStream
ImageStreams can be configured to periodically import external images; by default the scheduled import happens every 15 minutes.
Create a new ImageStream, for example app-images.
Tag your GitLab image into the ImageStream, for example: oc tag registry.gitlab.com/<group>/<project>/app:latest app-images:app --scheduled
Configure your Deployment's image.openshift.io/triggers annotation to react to the app-images:app ImageStreamTag.
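For reference, a rough sketch of wiring the trigger to the imported tag; the namespace (app) and container name (app) are placeholders:
oc create imagestream app-images -n app
oc tag registry.gitlab.com/<group>/<project>/app:latest app-images:app --scheduled -n app
The Deployment annotation would then look roughly like:
image.openshift.io/triggers: '[{"from":{"kind":"ImageStreamTag","name":"app-images:app","namespace":"app"},"fieldPath":"spec.template.spec.containers[?(@.name==\"app\")].image"}]'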

Related

Azure App Service keeps pulling docker image from Docker Hub

I have an Azure App Service that hosts a Docker image from our Azure Container Registry.
The full process is as follows:
Run Pipeline
Run Release pipeline
Azure App Service pulls the latest release from Azure Container Registry
But what happens is that after each release, for some reason, the App Service tries to pull the image from Docker Hub instead of pulling from Azure Container Registry.
Can somebody help me understand where the issue is here?
For your issue, I can guess the problem: you probably set the image with only its tag, for example nginx:latest. If you push the image to ACR and need to pull it from ACR, you must set the image with the full registry path, for example myacr.azurecr.io/nginx:latest. In addition, you also need to configure the credentials for your ACR.
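A rough sketch of pointing the App Service at the ACR image with the Azure CLI; the resource names are placeholders and the flag names can differ between Azure CLI versions:
az webapp config container set \
  --name my-webapp \
  --resource-group my-rg \
  --docker-custom-image-name myacr.azurecr.io/nginx:latest \
  --docker-registry-server-url https://myacr.azurecr.io \
  --docker-registry-server-user <acr-username> \
  --docker-registry-server-password <acr-password>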

How to update deployment image on Azure Kubernetes Service with same image tag via an Azure pipeline?

I am trying to update my deployment with the latest image content on Azure Kubernetes Service every time some code is committed to GitHub. I have a stage in my build pipeline that builds and pushes the image to Docker Hub, which is working perfectly fine. However, in my release pipeline the image is used as an artifact and is deployed to Azure Kubernetes Service, but the image on AKS is not updating to the image pushed to Docker Hub with the latest code.
Right now, each time a commit happens, I have to manually update the image on AKS via the command
kubectl set image deployment/demo-microservice demo-microservice=customerandcontact:contact
My Yaml File
Can anyone tell me the error/changes (if any) in my YAML file needed to automatically update the image on AKS?
When you release a new image to the container registry under the same tag, it does not mean anything to Kubernetes. If you run kubectl apply -f ... and the image name and tag remain the same, it still won't do anything, as there is no configuration change. There are two options:
Give a new tag on each build, change :contact to the new tag in the YAML, and run kubectl apply (a rough sketch follows below).
For a dev environment only (do not do it in stage or prod), keep the same tag (usually :latest is used) and, after a new image is pushed to the registry, run kubectl delete pod demo-microservice. Since you've set the image pull policy to Always, this will cause Kubernetes to pull a new image from the registry and redeploy the pod.
The second approach is a workaround just for testing.
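A rough sketch of the first option in an Azure pipeline, assuming $(Build.BuildId) as the unique tag and a placeholder Docker Hub repository name:
docker build -t mydockerhubuser/customerandcontact:$(Build.BuildId) .
docker push mydockerhubuser/customerandcontact:$(Build.BuildId)
kubectl set image deployment/demo-microservice demo-microservice=mydockerhubuser/customerandcontact:$(Build.BuildId)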
When you specify your image with a specific image tag, Kubernetes defaults the container's imagePullPolicy to IfNotPresent, which means the image won't be pulled again and the previously pulled image will be used.
Kubernetes defaults the policy to Always only if the tag is not present (which is effectively the same as latest) or if the tag is set to latest explicitly.
Check what the actual imagePullPolicy is on your Deployment template for the container in question:
kubectl get pod demo-microservice -o yaml | grep imagePullPolicy -A 1
Try patching the deployment:
kubectl patch deployment demo-microservice -p \
  '{"spec": {"template": {"spec": {"containers": [{"name": "demo-microservice", "image": "repo/image:tag", "imagePullPolicy": "Always"}]}}}}'
Make sure that imagePullPolicy for the container in question is set to Always.
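For reference, a minimal container spec with the pull policy forced to Always; the names mirror the question and are otherwise placeholders:
containers:
- name: demo-microservice
  image: customerandcontact:contact
  imagePullPolicy: Always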

Kubernetes: Failed to pull image from private container registry

I'm using Azure for my continuous deployment. My secret name is "cisecret", created using:
kubectl create secret docker-registry cisecret --docker-username=XXXXX --docker-password=XXXXXXX --docker-email=SomeOne@outlook.com --docker-server=XXXXXXXXXX.azurecr.io
In my Visual Studio Online Release Task
kubectl run
Under Secrets section
Type of secret: dockerRegistry
Container Registry type: Azure Container Registry
Secret name: cisecret
My release succeeds, but when I proxy into Kubernetes I get:
Failed to pull image xxxxxxx unauthorized: authentication required.
Could this possibly be due to your container image name? I had an issue where I wasn't properly prepending the ACR domain in front of the image name in my Kubernetes YAML, which meant I wasn't pointing at the container registry image, and therefore my secret (which was working) appeared to be broken.
Can you post your YAML? Maybe there is something simple amiss since it seems you are on the right track from the secrets perspective.
I need to grant AKS access to ACR.
Please refer to the link here
How to pass image pull secret while using 'kubectl run' command?
This should help: you need to override the kubectl run command so that it includes "imagePullSecrets": "cisecret".
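A rough sketch of that override using kubectl run's --overrides flag; the image path is a placeholder:
kubectl run myapp --image=myacr.azurecr.io/myapp:latest \
  --overrides='{"apiVersion": "v1", "spec": {"imagePullSecrets": [{"name": "cisecret"}]}}'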
Alternatively, add the following to the pod spec in your YAML file:
imagePullSecrets:
- name: acr-auth
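Putting the two suggestions together, a minimal pod spec might look roughly like this (registry and image names are placeholders; the secret is the cisecret from the question):
spec:
  containers:
  - name: myapp
    image: myacr.azurecr.io/myapp:latest
  imagePullSecrets:
  - name: cisecret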

How to Integrate GitLab-Ci w/ Azure Kubernetes + Kubectl + ACR for Deployments?

Our previous GitLab-based CI/CD used an authenticated curl request to a specific REST API endpoint to trigger the redeployment of an updated container to our service. If you use something similar for your Kubernetes-based deployment, this question is for you.
More Background
We run a production site / app (Ghost blog based) on an Azure AKS Cluster. Right now we manually push our updated containers to a private ACR (Azure Container Registry) and then update from the command line with Kubectl.
That being said we previously used Docker Cloud for our orchestration and fully integrated re-deploying our production / staging services using GitLab-Ci.
That GitLab-Ci integration is the goal, and the 'Why' behind this question.
My Question
Since we previously used Docker Cloud (doh, should have gone K8s from the start), how should we handle the fact that GitLab-Ci was able to make use of secrets created by the Docker Cloud CLI and then authenticate with the Docker Cloud API to trigger actions on our nodes (i.e. re-deploy with new containers etc.)?
While I believe we can build a container (to be used by our GitLab-Ci runner) that contains kubectl and the Azure CLI, I know that Kubernetes also has a REST API similar to Docker Cloud's, which can be found here (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster); specifically, the section about connecting WITHOUT kubectl appears to be relevant (as does the piece about the HTTP REST API).
My Question to anyone who is connecting to an Azure (or potentially other managed Kubernetes service):
How does your Ci/CD server authenticate with your Kubernetes service provider's Management Server, and then how do you currently trigger an update / redeployment of an updated container / service?
If you have used the Kubernetes HTTP REST API to re-deploy a service, your thoughts are particularly valuable!
Kubernetes Resources I am Reviewing
How should I manage deployments with kubernetes
Kubernetes Deployments
Will update as I work through the process.
Creating the integration
I had the same problem of how to integrate GitLab CI/CD with my Azure AKS Kubernetes cluster. I created this question because I was getting an error when I tried to add my Kubernetes cluster info into GitLab.
How to integrate them:
Inside GitLab, go to "Operations" > "Kubernetes" menu.
Click on the "Add Kubernetes cluster" button on the top of the page
You will have to fill in some form fields. To get the content that you have to put into these fields, connect to your Azure account from the CLI (you need the Azure CLI installed on your PC) using the az login command, and then execute this other command to get the Kubernetes cluster credentials: az aks get-credentials --resource-group <resource-group-name> --name <kubernetes-cluster-name>
The previous command will create a ~/.kube/config file. Open this file; the content of the fields that you have to fill in on the GitLab "Add Kubernetes cluster" form is all inside this .kube/config file.
These are the fields:
Kubernetes cluster name: It's the name of your cluster on Azure, it's in the .kube/config file too.
API URL: It's the URL in the field server of the .kube/config file.
CA Certificate: It's the field certificate-authority-data of the .kube/config file, but you will have to base64 decode it.
After you decode it, it must be something like this:
-----BEGIN CERTIFICATE-----
...
some base64 strings here
...
-----END CERTIFICATE-----
Token: It's the string of hexadecimal chars in the field token of the .kube/config file (it might also need to be base64-decoded?). You need to use a token belonging to an account with cluster-admin privileges, so GitLab can use it for authenticating and installing things on the cluster. The easiest way to achieve this is by creating a new account for GitLab: create a YAML file with the service account definition (an example can be seen here under Create a gitlab service account in the default namespace, and a minimal sketch is shown after this list) and apply it to your cluster with kubectl apply -f serviceaccount.yml.
Project namespace (optional, unique): I left it empty; I don't know yet what or where this namespace can be used.
Click in "Save" and it's done. Your GitLab project must be connected to your Kubernetes cluster now.
Deploy
In your deploy job (in the pipeline), you'll need some environment variables to access your cluster using the kubectl command. Here is a list of all the variables available:
https://docs.gitlab.com/ee/user/project/clusters/index.html#deployment-variables
To have these variables injected into your deploy job, there are some conditions:
You must have correctly added the Kubernetes cluster to your GitLab project, via the "Operations" > "Kubernetes" menu and the steps described above.
Your job must be a "deployment job". In GitLab CI, to be considered a deployment job, your job definition (in your .gitlab-ci.yml) must have an environment key (take a look at line 31 in this example), and the environment name must match the name you used in the "Operations" > "Environments" menu.
Here is an example of a .gitlab-ci.yml with three stages:
Build: builds a Docker image and pushes it to the GitLab private registry
Test: doesn't do anything yet, just runs exit 0 so it can be changed later
Deploy: downloads a stable version of kubectl, copies the .kube/config file to be able to run kubectl commands against the cluster, and executes kubectl cluster-info to make sure it is working. In my project I didn't finish writing the deploy script to really execute a deploy, but the kubectl cluster-info command executes fine.
Tip: to take a look at all the environment variables and their values (Jenkins has a page with this view, GitLab CI doesn't), you can execute the env command in the script of your deploy stage. It helps a lot when debugging a job.
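Since the original example file isn't reproduced here, a rough sketch of what such a .gitlab-ci.yml might look like (image versions, the kubectl download URL, and the environment name are assumptions; the deploy stage relies on the KUBECONFIG variable that the GitLab Kubernetes integration injects into deployment jobs):
stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"

test:
  stage: test
  script:
    - exit 0

deploy:
  stage: deploy
  image: alpine:latest
  environment:
    name: production            # must match a name under "Operations" > "Environments"
  script:
    - apk add --no-cache curl
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl
    - chmod +x kubectl && mv kubectl /usr/local/bin/
    - kubectl cluster-info      # KUBECONFIG is provided by the GitLab Kubernetes integration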
I logged into our GitLab-Ci backend today and saw a 'Kubernetes' button — along with an offer to save $500 at GCP.
GitLab Kubernetes
URL to hit your repo's Kubernetes GitLab page is:
https://gitlab.com/^your-repo^/clusters
As I work through the integration process I will update this answer (but other input is also welcome!).
Official GitLab Kubernetes Integration Docs
https://docs.gitlab.com/ee/user/project/clusters/index.html

Updating kubernetes-dashboard image in an Azure acs-k8s cluster is not getting reflected

Action:
Tried updating the kubernetes-dashboard in k8s hosted in Azure ACS with the image gcrio.azureedge.net/google_containers/kubernetes-dashboard-amd64 from version v1.6.3 to v1.7.1 (latest).
Problem:
The image version, when edited either with kubectl or the UI, is not getting reflected/updated.
Question:
Is there any way to update the image version ?
It will be getting updated but then reverted by the k8s addon manager. If you ssh into a master node, the templates for the addon services live in /etc/kubernetes/addons.
To upgrade the image you can edit /etc/kubernetes/addons/kubernetes-dashboard-deployment.yaml and change the image inside the Deployment spec.
Your change should be picked up in a few seconds.
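A rough sketch of that edit from the master node; the admin username and the exact image tags are assumptions, adjust them to your cluster:
ssh azureuser@<master-node-ip>
# bump the dashboard image tag in the addon manifest; the addon manager re-applies it automatically
sudo sed -i 's|kubernetes-dashboard-amd64:v1.6.3|kubernetes-dashboard-amd64:v1.7.1|' \
  /etc/kubernetes/addons/kubernetes-dashboard-deployment.yaml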
