How to upgrade Helm Chart - azure

I have AKS clusters with nginx-ingress installed via a Helm chart. Those clusters are reporting a critical alert because the beta.kubernetes.io/os nodeSelector is configured with the value linuxcode. I tried to fix this with the helm upgrade command, but I don't know the Helm chart location.
Is there any way I can update it without reinstalling the Nginx-ingress controller?
This is what I get by running helm get values nginx-ingress:
USER-SUPPLIED VALUES:
controller:
  nodeSelector:
    beta.kubernetes.io/os: linux
  replicaCount: 2
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    loadBalancerIP: 11.1.0.220
defaultBackend:
  nodeSelector:
    beta.kubernetes.io/os: linuxcode
I tried this helm upgrade command:
helm upgrade nginx-ingress nginx-ingress-1.41.3 -n dev -f values.yml
but it fails with this error message:
**Error: failed to download "nginx-ingress-1.41.3" (hint: running `helm repo update` may help)**
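For anyone hitting the same download error: `nginx-ingress-1.41.3` on its own is not something Helm can resolve without a chart repository configured, so the chart source has to be (re)added first. A minimal sketch, assuming the release originally came from the old stable nginx-ingress chart and that only the broken nodeSelector value needs to change:
# re-add a repo that serves the nginx-ingress chart (which repo the release
# originally came from is an assumption here)
helm repo add stable https://charts.helm.sh/stable
helm repo update
# keep all existing release values and only override the broken value
helm upgrade nginx-ingress stable/nginx-ingress --version 1.41.3 -n dev \
  --reuse-values \
  --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux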

Related

Getting the Error: could not find the tiller while checking the helm version

I am trying to install Helm in Kubernetes, and the installation itself succeeded.
However, when I check the Helm version it shows the error below:
helm version
Client: &version.Version{SemVer:"v2.X.X", GitCommit:"XXXXXXXXXXXXXXXXX", GitTreeState:"clean"}
Error: could not find the tiller
When I execute the init command, it says Tiller is already installed in the cluster:
helm init --history-max 200 --service-account tiller
$HELM_HOME has been configured at /home/user/.helm
warning: Tiller is already installed in the cluster
When I check the events for the pod I can see the error below:
Type     Reason        Age                  From                   Message
Warning  FailedCreate  11m (x25 over 132m)  replicaset-controller  Error creating: pod "tiller-deploy-xxxxx" is forbidden: error looking up service account tiller: serviceaccount "tiller" not found
How can I resolve this issue? Any ideas?
I tried to reproduce the same issue in my environment and got the results below.
When I checked the Helm version I got the same error.
When I ran the init command it showed the same warning that Tiller already exists:
helm init --history-max 200 --service-account tiller
I was getting this error because the tiller service account did not exist.
To resolve this issue I created a YAML file for the Tiller RBAC as shown below. I took this manifest from an SO answer and adjusted it to my requirements:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"tiller"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"cluster-admin"},"subjects":[{"kind":"ServiceAccount","name":"tiller","namespace":"kube-system"}]}
  creationTimestamp: "XXXXXXX"
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
I applied this manifest with the command below:
kubectl apply -f filename.yaml
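Note that the ClusterRoleBinding above only references a tiller ServiceAccount; the account itself has to exist as well (that is what the FailedCreate event complains about). If you prefer YAML over kubectl create serviceaccount, a minimal sketch of that manifest would be:
# sketch of the ServiceAccount the binding refers to
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system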
Then I deleted the Tiller replica set so a new one would be created:
kubectl -n kube-system delete replicaset replica-name
After deleting the replica set, a new one is automatically created:
kubectl -n kube-system get replicaset
After that, when I check the Helm version, it returns both the client and server versions without the error.
Are you sure the tiller service account has been created?
Try creating the service account and giving it the required permissions:
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
After that, initialize Helm again and see if the error goes away:
helm init --history-max 200 --service-account tiller
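To verify that Tiller actually picked up the service account afterwards, something like the following should work (illustrative commands; the label selector assumes the default Tiller deployment labels):
# confirm the deployment now references the tiller service account
kubectl -n kube-system get deploy tiller-deploy -o jsonpath='{.spec.template.spec.serviceAccountName}'
# confirm the Tiller pod is running, then check the client/server versions
kubectl -n kube-system get pods -l app=helm,name=tiller
helm version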

Installing nginx-ingress using Helm returns "Error: rendered manifests contain a resource that already exists"

I have a GitLab pipeline to deploy a Kubernetes cluster using Terraform on Azure.
The first time I used the pipeline everything went fine. Once I finished doing my tests I ran the destroy phase and everything was destroyed.
Yesterday I reran the pipeline to create the cluster. All the stages went well except the last one, which installs nginx-ingress using Helm.
install_nginx_ingress:
  stage: install_dependencies
  image: alpine/helm:3.1.1
  script:
    - helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    - helm repo update
    - >
      helm install nginx-ingress ingress-nginx/ingress-nginx
      --namespace default
      --set controller.replicaCount=2
  dependencies:
    - apply
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $PHASE == "DEPLOY"
When this stage is executed, this is what I have in the GitLab console:
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
"ingress-nginx" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ingress-nginx" chart repository
Update Complete. ⎈ Happy Helming!⎈
$ helm install nginx-ingress ingress-nginx/ingress-nginx --namespace default --set controller.replicaCount=2
Error: rendered manifests contain a resource that already exists.
Unable to continue with install: could not get information about the resource: poddisruptionbudgets.policy "nginx-ingress-ingress-nginx-controller" is forbidden: User "system:serviceaccount:gitlab-managed-apps:default" cannot get resource "poddisruptionbudgets" in API group "policy" in the namespace "default"
Cleaning up project directory and file based variables
ERROR: Job failed: command terminated with exit code 1
What is happening?
Check this error line; it explains the issue:
Unable to continue with install: could not get information about the resource: poddisruptionbudgets.policy "nginx-ingress-ingress-nginx-controller" is forbidden: User "system:serviceaccount:gitlab-managed-apps:default" cannot get resource "poddisruptionbudgets" in API group "policy" in the namespace "default"
The service account running the install (system:serviceaccount:gitlab-managed-apps:default) does not have RBAC permission for the get operation on the poddisruptionbudgets resource.
It looks like the kubernetes/ingress-nginx chart has a PodDisruptionBudget defined, but the ClusterRole bound to that service account does not include any permission for the poddisruptionbudgets resource.
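If you manage that RBAC yourself, a minimal sketch of the missing permission could look like this (the namespace and service account names mirror the error message; the Role/RoleBinding names are made up for illustration):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pdb-reader            # illustrative name
  namespace: default
rules:
  - apiGroups: ["policy"]
    resources: ["poddisruptionbudgets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-pdb-reader     # illustrative name
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pdb-reader
subjects:
  - kind: ServiceAccount
    name: default
    namespace: gitlab-managed-apps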

Not able to install the nginx-ingress on azure kubernetes cluster

I am trying to install the ingress on a new Azure Kubernetes cluster, but it gives the following error:
helm install germanyingress ingress-nginx --namespace test --set controller.replicaCount=2 --set controller.scope.enabled=true --set controller.service.loadBalancerIP="*******" --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"="true"
WARNING: "kubernetes-charts.storage.googleapis.com" is deprecated for "stable" and will be deleted Nov. 13, 2020.
WARNING: You should switch to "https://charts.helm.sh/stable"
Error: failed to download "ingress-nginx" (hint: running `helm repo update` may help)
I already tried many ways but no luck.
The warning message is very clear: you're using a Helm repo that is deprecated.
Remove the deprecated repo (it is registered as "stable", as the warning shows):
helm repo remove stable
Add the Kubernetes one
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
or the one from Nginx
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
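With one of those repos added, a retry of the install might look like this (a sketch using the ingress-nginx repo; the flags are copied from the original attempt and the IP stays a placeholder):
helm install germanyingress ingress-nginx/ingress-nginx --namespace test \
  --set controller.replicaCount=2 \
  --set controller.scope.enabled=true \
  --set controller.service.loadBalancerIP="*******" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"="true"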

How to annotate pods of an Ingress Controller using Helm in Kubernetes (AKS)?

I'm trying to automatically annotate pods (edit: ingress controller pods) to set a custom log parser in Scalyr when running Helm-chart-packaged containers on Azure AKS. I can annotate the service automatically, but I fail to annotate the pods automatically. Doing it manually with kubectl:
kubectl annotate pod nginx-ingress-ingress-nginx-controller-yyy-xxx --overwrite log.config.scalyr.com/attributes.parser=<my_scalyr_parser_name>
works fine, but when my pods terminate I'll lose the annotations and Scalyr might miss some logs. Or are ingress-nginx pods IDDQD (immortal)? So I'm trying to automate this somehow.
I have tried adding it to values.yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    log.config.scalyr.com/attributes.parser: "<my_scalyr_parser_name>"
but it just lands in metadata annotations in ingress.yaml
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "myservice.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
and this results in annotations on the service. However, I need the annotations on the pods for Scalyr to use my custom parser, not on the service.
Another approach would be to do it by hand when installing nginx-ingress:
helm install nginx-ingress ingress-nginx/ingress-nginx --set controller.replicaCount=3 \
  --set-string controller.pod.annotations.'log\.config\.scalyr\.com/attributes\.parser'="<my_scalyr_parser_name>" \
  --set-string controller.service.annotations.'service\.beta\.kubernetes\.io/azure-load-balancer-internal'="true"
There, when I set controller.service.annotations I get the annotation on the service, but controller.pod.annotations is ignored (and I did find controller.pod.annotations in the NGINX documentation).
So what else could I do?
You should be able to do it in values.yaml, in a similar way to what you tried for the ingress:
controller:
  podAnnotations:
    log.config.scalyr.com/attributes.parser: "<my_scalyr_parser_name>"
For some reason the key is controller.podAnnotations, NOT controller.pod.annotations.
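The same value can also be set on the command line if you prefer not to touch values.yaml (a sketch; the parser name stays a placeholder):
helm upgrade --install nginx-ingress ingress-nginx/ingress-nginx \
  --set controller.replicaCount=3 \
  --set-string controller.podAnnotations."log\.config\.scalyr\.com/attributes\.parser"="<my_scalyr_parser_name>"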

Setting up a Helm chart to deploy a Node.js service pushed to Azure Container Registry (ACR)

I wrote a Node.js service and built it with Docker, then I pushed it to Azure Container Registry.
I used Helm to deploy it to AKS, pulling the image from ACR, but the service does not run.
Please give me some advice.
Here are my Helm chart values. I think I have to set the type and port of the service.
replicaCount: 1
image:
  repository: tungthtestcontainer.azurecr.io/demonode
  tag: latest
  pullPolicy: IfNotPresent
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
service:
  name: http
  type: NodePort
  port: 8082
  internalPort: 8082
ingress:
  enabled: false
  annotations: {}
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
To figure out what is happening in this situation, it doesn't matter whether you deploy with Helm or apply YAML directly with kubectl apply, or whether it's Azure or another provider; I recommend you follow these steps:
Check the status of the Helm release. You can see it at any time with helm status <release-name>; verify that the pods are correctly created and the services are OK.
Check the deployment with kubectl describe deployment <deployment-name>
Check the pod with kubectl describe pod <pod-name>
Check the pod logs with kubectl logs -f <pod-name>
With that, you should be able to find the source of the problem.
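Since the question suspects the service type and port, it is also worth confirming that the Service actually targets the port the Node.js app listens on. Something along these lines (release, service, and deployment names are placeholders):
# inspect the rendered Service and its endpoints
kubectl get svc,endpoints -l app.kubernetes.io/instance=<release-name>
kubectl describe svc <service-name>
# bypass the Service and talk to the pod directly on the container port
kubectl port-forward deploy/<deployment-name> 8082:8082
# then: curl http://localhost:8082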
