Issue itself
I have an Azure Container Registry used as both image and chart storage. Assume it is myacr.azurecr.io, with 8 different charts pushed. As far as I have read, ACR is capable of storing charts and is compatible with Helm 3 (version 3.5.2).
The following steps to reproduce are simple.
helm repo add myacr https://myacr.azurecr.io/helm/v1/repo --username myusername --password admin123 - repo added. OK.
helm chart save ./my-chart/ myacr.azurecr.io/helm/my-chart:1.0.0 - chart saved. OK
helm push ./my-chart/ myacr.azurecr.io/helm/my-chart:1.0.0 - pushed. Available in Azure portal. OK.
helm repo update - what could go wrong here? As expected. OK
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ingress-nginx" chart repository
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "myacr" chart repository
Update Complete. ⎈Happy Helming!⎈
helm search repo -l - I see everything from ingress-nginx and jetstack, but nothing from myacr in the list.
Yet if I do a pull and export, everything works fine - the chart is in place.
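For reference, the OCI chart reference used in the save/push steps above is just registry/repository:tag; a minimal sketch with the values from those steps (nothing here is Azure-specific):

```shell
# Decomposition of the chart reference from the save/push steps above
REGISTRY="myacr.azurecr.io"     # the ACR login server
REPOSITORY="helm/my-chart"      # repository path inside the registry
TAG="1.0.0"                     # chart version used as the tag

echo "${REGISTRY}/${REPOSITORY}:${TAG}"
```

Note that the https://myacr.azurecr.io/helm/v1/repo URL added in the first step is a separate, index-based endpoint rather than part of this reference.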
What I tried
renaming the repo to helm/{app} according to some theories on the web - fail
reconfiguring the chart with full descriptions etc. to match ingress-nginx - fail
executing helm search repo -l --devel to see all possible chart versions - no luck
"switching it off and on again" - removing and re-adding the repo in different combinations - fail
explicit slang language on every attempt - warms things up a bit but doesn't solve the issue
The questions are
Is Azure ACR fully compatible with Helm 3?
Is there any specific workaround to make it compatible with Helm 3?
Does search functionality have any requirements to chart structure or version?
Is Azure ACR fully compatible with Helm 3?
Yes, it's fully compatible with Helm 3.
Is there any specific workaround to make it compatible with Helm 3?
Nothing needs to be done, since the answer to the first question is yes.
Does search functionality have any requirements to chart structure or version?
You first need to add the repo to your local Helm with the command az acr helm repo add --name myacr or helm repo add myacr https://myacr.azurecr.io/helm/v1/repo --username xxxxx --password xxxxxx; after that, helm search repo -l lists the charts from that repo.
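As an illustration (the chart name is taken from the question; the description column shown is the helm create default, an assumption), the listing might look roughly like:

```
NAME            CHART VERSION   APP VERSION   DESCRIPTION
myacr/my-chart  1.0.0                         A Helm chart for Kubernetes
```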
Related
I have switched from the traditional way of pushing Helm charts to the OCI format.
But on the Kubernetes cluster the charts are not getting installed, and I am getting the below error:
chart pull error: chart pull error: failed to download chart for remote reference: temp.azurecr.io/helm/v1/repo/abc:1.0.1: not found
My new repository is:
oci://abc.azurecr.io/charts
instead of
https://abc.azurecr.io/helm/v1
Also, in the Kubernetes cluster the Helm path is not getting updated to charts; I am not sure why it is still pointing to the /helm/v1 path.
I tried to reproduce the same issue in my environment and got the below output
I have enabled Helm's experimental OCI support:
export HELM_EXPERIMENTAL_OCI=1
helm version
I have created the container registry
I have created a sample chart and saved it into a local archive:
cd ..
helm package .
I have set up registry credentials and also created a service principal using this link.
I have logged into the registry using the below command:
helm registry login Registry_name.azurecr.io
We have to provide the username and password of the registry.
I ran the helm push command to push the chart to the registry in OCI format:
helm push hello-world-0.1.0.tgz oci://registry_name.azurecr.io/helm
I have used the below command to show the properties of the repository (--name takes the registry name):
az acr repository show \
--name registry_name \
--repository helm/hello-world
To check the data stored in the repository
az acr manifest list-metadata \
--registry myregistry890 \
--name helm/hello-world
I have pulled the chart using the below command:
helm pull oci://repository_name.azurecr.io/helm/hello-world --version 0.1.0
I'm using the Community Edition of GitLab. Now, with 14.1, I also want to use the Helm registry. Everything works fine: I can push my Helm package to the package registry, download the tgz, and so on. But when I use
helm repo add --username "<my-username>" --password "<access-token>" helm-chart https://gitlab.somedomain.tld/api/v4/projects/265/packages/helm/api/stable/charts
It says
Error: looks like "https://gitlab.somedomain.tld/api/v4/projects/265/packages/helm/api/stable/charts" is not a valid chart repository or cannot be reached: failed to fetch https://gitlab.somedomain.tld/api/v4/projects/265/packages/helm/api/stable/charts/index.yaml : 404 Not Found
I already checked the project id (265), but it's correct, and I push my chart with:
curl --request POST --user gitlab-ci-token:$CI_JOB_TOKEN --form "chart=@$PACKAGE_FILE" "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/helm/api/stable/charts"
So I wonder what's going wrong here. I can push to the helm registry but cannot retrieve/find the index.yaml. Any idea?
Try to find the index.yaml file here: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/helm/stable/index.yaml
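The difference between the two endpoints is easy to miss; a small sketch using the project ID and domain from the question (the api/&lt;channel&gt;/charts path is only the upload endpoint):

```shell
# GitLab package-registry Helm endpoints for project 265 (from the question)
BASE="https://gitlab.somedomain.tld/api/v4/projects/265/packages/helm"
CHANNEL="stable"

PUSH_URL="${BASE}/api/${CHANNEL}/charts"   # POST a chart here (upload only)
REPO_URL="${BASE}/${CHANNEL}"              # pass this to `helm repo add`

echo "${REPO_URL}/index.yaml"              # where the index actually lives
```

In other words, the URL used with curl to upload and the URL used with helm repo add are not the same path.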
I have set up an Azure Kubernetes Service and manually deployed multiple Helm charts successfully.
I now want to set up a CD pipeline using GitHub Actions and Helm to deploy (that is, install and upgrade) a Helm chart whenever the Action is triggered.
Up until now I have only found Actions that use kubectl for deployment, which I don't want to use because the manifests contain some secrets that I don't want to check into version control. Hence the decision for Helm, as it can fill these secrets with values provided as environment variables when running the helm install command:
# without Helm
...
clientId: secretValue
# with Helm
...
clientId: {{ .Values.clientId }}
The "secret" would be provided like this: helm install --set clientId=secretValue.
Now the question is how can I achieve this using GitHub Actions? Are there any "ready-to-use" solutions available that I just haven't found or do I have to approach this in a completely different way?
It seems I made things more complicated than I needed to.
I ended up writing a simple GitHub Action based on the alpine/helm Docker image and was able to successfully set up the CD pipeline into AKS.
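A minimal workflow sketch of that kind of pipeline (the action versions, secret names, release name, and chart path are assumptions, not from the original answer):

```yaml
# .github/workflows/deploy.yml -- hypothetical names throughout
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - uses: azure/aks-set-context@v3
        with:
          resource-group: my-rg      # assumed resource group name
          cluster-name: my-aks       # assumed cluster name
      - uses: azure/setup-helm@v4
      - name: Deploy chart
        run: |
          helm upgrade --install my-release ./chart \
            --set clientId=${{ secrets.CLIENT_ID }}
```

The secret value never appears in the repository; it is injected at deploy time via --set, exactly as in the helm install example above.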
I've deployed Spark Operator to GKE using the Helm Chart to a custom namespace:
helm install --name sparkoperator incubator/sparkoperator --namespace custom-ns --set sparkJobNamespace=custom-ns
and confirmed the operator running in the cluster with helm status sparkoperator.
However when I'm trying to run the Spark Pi example kubectl apply -f examples/spark-pi.yaml I'm getting the following error:
the path "examples/spark-pi.yaml" does not exist
There are a few things that I probably still don't get:
Where is actually examples/spark-pi.yaml located after deploying the operator?
What else should I check and what other steps should I take to make the example work?
Please find the spark-pi.yaml file here.
You should copy it to your filesystem, customize it if needed, and provide a valid path to it with kubectl apply -f path/to/spark-pi.yaml.
kubectl apply needs a YAML file either locally on the system where you are running the kubectl command, or an http/https endpoint hosting the file.
In our company I use Azure ML and I have the following issue. I specify a conda_requirements.yaml file with the PyTorch estimator class, like so (... are placeholders so that I do not have to type everything out):
from azureml.train.dnn import PyTorch
est = PyTorch(source_directory='.', script_params=..., compute_target=..., entry_script=..., conda_dependencies_file_path='conda_requirements.yaml', environment_variables=..., framework_version='1.1')
The conda_requirements.yaml (shortened version of the pip part) looks like this:
dependencies:
- conda=4.5.11
- conda-package-handling=1.3.10
- python=3.6.2
- cython=0.29.10
- scikit-learn==0.21.2
- anaconda::cloudpickle==1.2.1
- anaconda::cffi==1.12.3
- anaconda::mxnet=1.1.0
- anaconda::psutil==5.6.3
- anaconda::pip=19.1.1
- anaconda::six==1.12.0
- anaconda::mkl==2019.4
- conda-forge::openmpi=3.1.2
- conda-forge::pycparser==2.19
- tensorboard==1.13.1
- tensorflow==1.13.1
- pip:
  - torch==1.1.0
  - torchvision==0.2.1
This successfully builds on Azure. Now in order to reuse the resulting docker image in that case, I use the custom_docker_image parameter to pass to the
from azureml.train.estimator import Estimator
est = Estimator(source_directory='.', script_params=..., compute_target=..., entry_script=..., custom_docker_image='<container registry name>.azurecr.io/azureml/azureml_c3a4f...', environment_variables=...)
But now Azure somehow seems to rebuild the image, and when I run the experiment it cannot install torch. So it seems to install only the conda dependencies and not the pip dependencies; but actually, I do not want Azure to rebuild the image at all. Can I solve this somehow?
I attempted to build a docker image from my Dockerfile and push it to the registry. I can do az login, and according to https://learn.microsoft.com/en-us/azure/container-registry/container-registry-authentication I should then also be able to do an acr login and push. This does not work.
Even using the credentials from
az acr credential show --name <container registry name>
and then doing a
docker login <container registry name>.azurecr.io -u <username from credentials above> -p <password from credentials above>
does not work.
The error message is "authentication required", even though I ran az login successfully. I would also be happy if someone could explain that to me, in addition to how to reuse docker images when using Azure ML.
Thank you!
AzureML should actually cache your docker image once it has been created. The service hashes the base docker info and the contents of the conda.yaml file and uses that as the hash key; unless you change any of that information, the image should come from the ACR.
As for the custom docker usage, did you set the parameter user_managed=True? Otherwise, AzureML will consider your docker to be a base image on top of which it will create the conda environment per your yaml file.
There is an example of how to use a custom docker image in this notebook:
https://github.com/Azure/MachineLearningNotebooks/blob/4170a394edd36413edebdbab347afb0d833c94ee/how-to-use-azureml/training-with-deep-learning/how-to-use-estimator/how-to-use-estimator.ipynb