I have switched from the traditional way of pushing Helm charts to the OCI format.
But on the Kubernetes cluster the charts are not getting installed, and I am getting the below error:
chart pull error: chart pull error: failed to download chart for remote reference: temp.azurecr.io/helm/v1/repo/abc:1.0.1: not found
chart pull error: chart pull error: failed to download chart for remote reference:
My new repository is :
oci://abc.azurecr.io/charts
instead of
https://abc.azurecr.io/helm/v1
Also, in the Kubernetes cluster the Helm path is not getting updated to charts; I am not sure why it is still pointing to the
/helm/v1 path.
I tried to reproduce the same issue in my environment and got the below output
I have enabled the experimental OCI support for Helm (required for Helm versions before 3.8):
export HELM_EXPERIMENTAL_OCI=1
helm version
I have created the container registry
I have created a sample chart and saved it into a local archive:
cd ..
helm package .
I have provided the registry credentials and also created the service principal using this link.
I have logged into the registry using below command
helm registry login Registry_name.azurecr.io
We have to provide the username and password of the registry.
I ran the helm push command to push the chart to the registry in OCI format:
helm push hello-world-0.1.0.tgz oci://registry_name.azurecr.io/helm
I have used the below command to show the properties of the repository
az acr repository show \
--name repository_name \
--repository helm/hello-world
To check the data stored in the repository
az acr manifest list-metadata \
--registry myregistry890 \
--name helm/hello-world
I have pulled the chart using the below command:
helm pull oci://repository_name.azurecr.io/helm/hello-world --version 0.1.0
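For completeness, the pushed chart can also be installed straight from its OCI reference, without adding a classic repo at all; a minimal sketch, reusing the registry path from above ("myrelease" is a placeholder name):

```shell
# Install the chart directly from its OCI reference (registry name as above,
# release name "myrelease" is a placeholder for this example).
helm install myrelease oci://registry_name.azurecr.io/helm/hello-world --version 0.1.0

# Or render the templates locally first to verify the chart contents:
helm template myrelease oci://registry_name.azurecr.io/helm/hello-world --version 0.1.0
```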
Related
I'm trying to install an nginx ingress controller into an Azure Kubernetes Service cluster using helm. I'm following this Microsoft guide. It's failing when I use helm to try to install the ingress controller, because it needs to pull a "kube-webhook-certgen" image from a local Azure Container Registry (which I created and linked to the cluster), but the kubernetes pod that's initially scheduled in the cluster fails to pull the image and shows the following error when I use kubectl describe pod [pod_name]:
failed to resolve reference "letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized]
This section describes using helm to create an ingress controller.
The guide describes creating an Azure Container Registry and linking it to a Kubernetes cluster, which I've done successfully using:
az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acr-name>
I then import the required 3rd party repositories successfully into my 'local' Azure Container Registry as detailed in the guide. I checked that the cluster has access to the Azure Container Registry using:
az aks check-acr --name MyAKSCluster --resource-group myResourceGroup --acr letsencryptdemoacr.azurecr.io
I also used the Azure Portal to check permissions on the Azure Container Registry and the specific repository that has the issue. It shows that both the cluster and the repository have the ACR_PULL permission.
When I run the helm script to create the ingress controller, it fails at the point where it's trying to create a kubernetes pod named nginx-ingress-ingress-nginx-admission-create in the ingress-basic namespace that I created. When I use kubectl describe pod [pod_name_here], it shows the following error, which prevents creation of the ingress controller from continuing:
Failed to pull image "letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen:v1.5.1@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068": [rpc error: code = NotFound desc = failed to pull and unpack image "letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068": failed to resolve reference "letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068": letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068: not found, rpc error: code = Unknown desc = failed to pull and unpack image "letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068": failed to resolve reference "letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized]
This is the helm script that I run in a linux terminal:
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-basic \
  --set controller.replicaCount=1 \
  --set controller.nodeSelector."kubernetes\.io/os"=linux \
  --set controller.image.registry=$ACR_URL \
  --set controller.image.image=$CONTROLLER_IMAGE \
  --set controller.image.tag=$CONTROLLER_TAG \
  --set controller.image.digest="" \
  --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
  --set controller.admissionWebhooks.patch.image.registry=$ACR_URL \
  --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \
  --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
  --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
  --set defaultBackend.image.registry=$ACR_URL \
  --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \
  --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG \
  --set controller.service.loadBalancerIP=$STATIC_IP \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL
I'm using the following relevant environment variables:
$ACR_URL=letsencryptdemoacr.azurecr.io
$PATCH_IMAGE=jettech/kube-webhook-certgen
$PATCH_TAG=v1.5.1
How do I fix the authorization?
It seems like the issue is caused by the new ingress-nginx/ingress-nginx helm chart release. I have fixed it by using version 3.36.0 instead of the latest (4.0.1).
helm upgrade -i nginx-ingress ingress-nginx/ingress-nginx \
--version 3.36.0 \
...
Azure support identified and provided a solution to this and essentially confirmed that the documentation in the Microsoft tutorial is at best now outdated against the current Helm release for the ingress controller.
The full error message I was getting was similar to the following, which indicates that the first error encountered is actually that the image is NotFound. The message about Unauthorized is actually misleading. The issue appears to be that the install references 'digests' for a couple of the images required by the install (basically the digest is a unique identifier for the image). The install appears to have been using digests of the docker images from the original location, and not the digest of my copy of the images that I imported into the Azure Container Registry. This obviously then doesn't work, as the digests of the images the install is trying to pull don't match the digests of the images that are imported to my Azure Container Registry.
Failed to pull image 'letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen:v1.5.1@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068': [rpc error: code = NotFound desc = failed to pull and unpack image 'letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068': failed to resolve reference 'letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068': letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068: not found, rpc error: code = Unknown desc = failed to pull and unpack image 'letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068': failed to resolve reference 'letsencryptdemoacr.azurecr.io/jettech/kube-webhook-certgen@sha256:f3b6b39a6062328c095337b4cadcefd1612348fdd5190b1dcbcb9b9e90bd8068': failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized]
The generated digest for the images that I'd imported into my local Azure Container Registry needed to be specified as additional arguments to the helm install:
--set controller.image.digest="sha256:e9fb216ace49dfa4a5983b183067e97496e7a8b307d2093f4278cd550c303899" \
--set controller.admissionWebhooks.patch.image.digest="sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" \
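To find the digest of your imported copy (rather than the upstream one), you can query the registry; a sketch reusing the az acr manifest list-metadata command from earlier in this thread, with the registry and repository names used in this answer:

```shell
# Read the digest of the imported image so it can be passed to --set ...digest=...
az acr manifest list-metadata \
  --registry letsencryptdemoacr \
  --name jettech/kube-webhook-certgen \
  --query "[].digest" -o tsv
```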
I then had a second issue where the ingress controller pod was in CrashLoopBackOff. I fixed this by re-importing a different version of the ingress controller image than the one referenced in the tutorial, as follows:
Set the environment variable used to identify the tag to pull for the ingress controller image:
CONTROLLER_TAG=v1.0.0
Delete the ingress repository from the Azure Container Registry (I did this manually via the portal), then re-import it using the following (the values of the other variables should be as specified in the Microsoft tutorial):
az acr import --name $REGISTRY_NAME --source $CONTROLLER_REGISTRY/$CONTROLLER_IMAGE:$CONTROLLER_TAG --image $CONTROLLER_IMAGE:$CONTROLLER_TAG
Make sure you set all the digests to empty:
--set controller.image.digest=""
--set controller.admissionWebhooks.patch.image.digest=""
--set defaultBackend.image.digest=""
Basically, this will pull the image <your-registry>.azurecr.io/ingress-nginx/controller:<version> without the @sha256:<digest> suffix.
The other problem: if you use the latest chart version, the deployment will crash into CrashLoopBackOff status. Checking the live log of the pod, you will see a problem with flags, e.g. Unknown flag --controller-class. To resolve this, specify the --version flag in the helm install command to use version 3.36.0. All deployment problems should then be resolved.
Faced the same issue on AWS, and using an older version of the helm chart helped.
I used version 3.36.0 and it worked fine.
I'm using the Community Edition of GitLab. Now, with 14.1, I also want to use the Helm registry. Everything works fine: I can push my helm package to the Helm package registry, download the tgz, and so on, but when I use
helm repo add --username "<my-username>" --password "<access-token>" helm-chart https://gitlab.somedomain.tld/api/v4/projects/265/packages/helm/api/stable/charts
It says
Error: looks like "https://gitlab.somedomain.tld/api/v4/projects/265/packages/helm/api/stable/charts" is not a valid chart repository or cannot be reached: failed to fetch https://gitlab.somedomain.tld/api/v4/projects/265/packages/helm/api/stable/charts/index.yaml : 404 Not Found
I already checked the project id (265), and it's correct; I also push my chart with
curl --request POST --user gitlab-ci-token:$CI_JOB_TOKEN --form "chart=@$PACKAGE_FILE" "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/helm/api/stable/charts"
So I wonder what's going wrong here. I can push to the helm registry but cannot retrieve/find the index.yaml. Any idea?
Try to find the index.yaml file here: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/helm/stable/index.yaml
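To check which path actually serves the index, you can request it directly; a sketch using the same hostname, project id, and credentials as the helm repo add command in the question:

```shell
# A 200 response with YAML content confirms the repository index path;
# --fail makes curl exit non-zero on the 404 seen in the question.
curl --fail --user "<my-username>:<access-token>" \
  "https://gitlab.somedomain.tld/api/v4/projects/265/packages/helm/stable/index.yaml"
```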
Issue itself
Got an Azure Container Registry as both image and chart storage. Assume it is myacr.azurecr.io with 8 different charts pushed. As far as I read, Azure ACR is capable of storing charts and is compatible with Helm 3 (version 3.5.2).
The following steps to reproduce are simple.
helm repo add myacr https://myacr.azurecr.io/helm/v1/repo --username myusername --password admin123 - repo added. OK.
helm chart save ./my-chart/ myacr.azurecr.io/helm/my-chart:1.0.0 - chart saved. OK
helm push ./my-chart/ myacr.azurecr.io/helm/my-chart:1.0.0 - pushed. Available in Azure portal. OK.
helm repo update - what could go wrong here? As expected. OK
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ingress-nginx" chart repository
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "myacr" chart repository
Update Complete. ⎈Happy Helming!⎈
helm search repo -l - I see everything from ingress-nginx and jetstack but nothing from myacr in the list.
Yet if I do pull and export, everything works fine - the chart is in place.
What I tried
renaming the repo to helm/{app} according to some theories on the web - fail
reconfiguring the chart with full descriptions etc. according to ingress-nginx - fail
executing helm search repo -l --devel to see all possible chart versions - no luck
"Switching it off and on again" - removing and adding the repo again in different combinations - fail
explicit slang language on every attempt - warms up a bit but doesn't solve the issue
The questions are
Is Azure ACR fully compatible with Helm 3?
Is there any specific workaround to make it compatible with Helm 3?
Does search functionality have any requirements on chart structure or version?
Is Azure ACR fully compatible with Helm 3?
Yes, it's fully compatible with Helm 3.
Is there any specific workaround to make it compatible with Helm 3?
Nothing needs to be done, because the answer to the first question is yes.
Does search functionality have any requirements on chart structure or version?
You first need to add the repo to your local helm with the command az acr helm repo add --name myacr or helm repo add myacr https://myacr.azurecr.io/helm/v1/repo --username xxxxx --password xxxxxx. After that, running helm search repo -l lists the charts from myacr alongside the other repositories.
I have set up an Azure Kubernetes Service cluster and manually deployed multiple Helm charts successfully.
I now want to set up a CD pipeline using GitHub Actions and Helm to deploy (that is, install and upgrade) a Helm chart whenever the Action is triggered.
Up until now I have only found Actions that use kubectl for deployment, which I don't want to use, because the manifests contain some secrets that I don't want to check into version control. Hence the decision for Helm, as it can fill in these secrets with values provided as environment variables when running the helm install command:
# without Helm
...
clientId: secretValue
# with Helm
...
clientId: {{ .Values.clientId }}
The "secret" would be provided like this: helm install --set clientId=secretValue.
Now the question is how can I achieve this using GitHub Actions? Are there any "ready-to-use" solutions available that I just haven't found or do I have to approach this in a completely different way?
Seems like I made things more complicated than necessary.
I ended up writing a simple GitHub Action based on the alpine/helm Docker image and was able to successfully set up the CD pipeline into AKS.
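The deploy step of such an Action essentially boils down to two commands; a minimal sketch, assuming placeholder AKS names and that the workflow exposes the secret to the step as the CLIENT_ID environment variable (release name and chart path are also placeholders):

```shell
# Fetch cluster credentials for kubectl/helm (resource group and
# cluster name are placeholders for this example).
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Install or upgrade the release, injecting the secret at deploy time so it
# never lives in version control; in the workflow file, CLIENT_ID would be
# populated from `${{ secrets.CLIENT_ID }}` in the step's env section.
helm upgrade --install myrelease ./chart --set clientId="$CLIENT_ID"
```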
I'm trying to change the tag of a Docker image using a Docker task on an Azure DevOps pipeline, without success.
Consider the following Docker image hosted on an Azure container registry:
My task is configured as follows:
$(DockerImageName) value is agents/standard-linux-docker2:310851
I'm trying to change the Docker image tag (e.g. to latest), but so far I haven't been able to make it work. I've also tried setting the arguments, without success.
Task fails with the following error message:
Error response from daemon: No such image: agents/standard-linux-docker2:310851
/usr/bin/docker failed with return code: 1
What am I missing here?
Try using an Azure CLI task and run the following command:
az acr import --name xxxxxacr --source xxxxxacr.azurecr.io/xxx/xxx-api:stage --image xxxxyyyyyyy/yyyyyyyy-api:prod --force
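If you'd rather stay with the Docker CLI than az acr import, the same retagging can be sketched as a pull/tag/push sequence (registry and repository names from the question are kept as placeholders; this assumes a prior az acr login --name xxxxxacr or docker login):

```shell
# Retag an existing ACR image with plain docker commands: the "No such image"
# error above happens when the image isn't present locally, so pull it first.
docker pull xxxxxacr.azurecr.io/agents/standard-linux-docker2:310851
docker tag  xxxxxacr.azurecr.io/agents/standard-linux-docker2:310851 \
            xxxxxacr.azurecr.io/agents/standard-linux-docker2:latest
docker push xxxxxacr.azurecr.io/agents/standard-linux-docker2:latest
```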