I've installed minikube and Helm on my system (minikube runs in a VM) and tried to deploy Jenkins:
MacBook-Pro% helm install stable/jenkins
NAME: quelling-dachshund
Error: getting deployed release "quelling-dachshund": release: "quelling-dachshund" not found
It looks like an error, but kubectl can see the deployment after this; the pod is first in Init:0/1 and then Running. Any ideas why it fails on the install step?
By the way:
$ helm list --all
Error: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
$ helm list
Error: Get https://192.168.64.4:8443/api/v1/namespaces/kube-system/pods?labelSelector=app%3Dhelm%2Cname%3Dtiller: net/http: TLS handshake timeout
Any idea how to resolve the error?
**Update**
It all comes down to a minikube error which I don't understand. By the way, this is just after a fresh minikube start:
$ kubectl create serviceaccount --namespace kube-system tiller --insecure-skip-tls-verify=true
serviceaccount/tiller created
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller --insecure-skip-tls-verify=true
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
$ helm init --service-account tiller --upgrade
$HELM_HOME has been configured at /Users/rwalas/.helm.
Error: error installing: the server could not find the requested resource
$ rm -rf ~/.helm
$ helm init --service-account tiller --upgrade
Creating /Users/rwalas/.helm
Creating /Users/rwalas/.helm/repository
Creating /Users/rwalas/.helm/repository/cache
Creating /Users/rwalas/.helm/repository/local
Creating /Users/rwalas/.helm/plugins
Creating /Users/rwalas/.helm/starters
Creating /Users/rwalas/.helm/cache/archive
Creating /Users/rwalas/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /Users/<user>/.helm.
Error: error installing: the server could not find the requested resource
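A couple of checks that can narrow this down (a sketch, not output from this cluster): the "error installing: the server could not find the requested resource" message is typical of an older Helm 2 client whose Tiller manifest uses the extensions/v1beta1 Deployment API, which Kubernetes 1.16 removed; the answers further down cover the fix for that case. The label selector below is the one shown in the TLS-timeout error above.
# Compare client and server versions (older Helm 2 clients generate an
# extensions/v1beta1 Deployment for Tiller, which Kubernetes 1.16 dropped).
kubectl version --short
helm version --client
# Confirm the minikube VM and the Tiller pod (if any) are actually up.
minikube status
kubectl -n kube-system get pods -l app=helm,name=tiller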
Related
I am working to create a scheduler on GitLab to execute a pipeline that deploys multiple applications to OpenShift using Helm. I have the pods ready and the scheduler set up, but I am unable to run helm commands. The pipeline fails with the following error.
++ echo '$ helm install chart-name helm/charts/ --namespace dev # collapsed multi-line command'
$ helm install chart-name helm/charts/ --namespace dev # collapsed multi-line command
++ helm install chart-name helm/charts/ --namespace dev
bash: line 188: helm: command not found
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: exit status 1
This is my code in the **gitlab.ci.yaml**. I attempted to add a Helm image, but it didn't seem to work; I was expecting that with the image in place I would be able to call the helm commands.
onboard-dev:
  stage: release
  tags:
    - my-tag
  image:
    name: alpine/helm
    entrypoint: [""]
  script:
    - |
      PATH=$PATH:$(pwd)/bin
      oc login --token=$TOKEN --insecure-skip-tls-verify=true --server=$MY_SERVER
      oc project dev
    - |
      helm install chart-name helm/charts --namespace dev
      helm upgrade --install chart-name helm/charts
What is the best way to go about achieving this? Thanks in advance!
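One thing worth checking: `helm: command not found` usually means the script is not actually running inside the `alpine/helm` image (for example, when the runner selected by `my-tag` uses the shell executor and ignores `image:`). A workaround sketch, not tested against this pipeline, is to download a Helm binary into the `./bin` directory the job already adds to `PATH`; the Helm version and the availability of `wget`/`tar` on the runner are assumptions.
# Put these lines at the start of the job's script; v3.8.2 is only an example version.
mkdir -p bin
wget -qO- https://get.helm.sh/helm-v3.8.2-linux-amd64.tar.gz | tar -xz
mv linux-amd64/helm bin/helm
export PATH="$PATH:$(pwd)/bin"
helm version   # should now print a client version instead of "command not found"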
I am trying to install Helm in Kubernetes, and I have installed it successfully.
When I check the helm version, it shows the error below:
helm version
Client: &version.Version{SemVer:"v2.XXX", GitCommit:"XXXXXXXXXXXXXXXXX", GitTreeState:"clean"}
Error: could not find tiller
When I execute the init command, it says Tiller is already installed in the cluster:
helm init --history-max 200 --service-account tiller
$HELM_HOME has been configured at /home/user/.helm
warning: Tiller is already installed in the cluster
When I check the events for the pod, I see the error below:
Type     Reason        Age                  From                   Message
Warning  FailedCreate  11m (x25 over 132m)  replicaset-controller  Error creating: pod "tiller-deploy-xxxxx" is forbidden: error looking up service account: tiller not found
How can I resolve this issue? Any ideas?
I tried to reproduce the same issue in my environment and got the results below.
When I checked the helm version, I got the same error.
When I ran the init command, it showed the same message that Tiller is already installed:
helm init --history-max 200 --service-account tiller
I was getting this error because the tiller service account did not exist.
To resolve this issue, I created a YAML file with a ClusterRoleBinding for the tiller service account, as shown below.
I took this script from the SO link and changed it as per my requirements:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
I applied this manifest using the command below:
kubectl apply -f filename.yaml
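A quick check (a sketch) that the binding exists and, more importantly, that the tiller service account it refers to actually exists, since the event above says it could not be found:
kubectl get clusterrolebinding tiller
kubectl -n kube-system get serviceaccount tiller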
Then I deleted the tiller replica set so that a new one would be created:
kubectl -n kube-system delete replicaset replica-name
After deleting the replica set, the deployment automatically recreates a new one:
kubectl -n kube-system get replicaset
When I check the helm version now, it reports both the client and server versions.
Are you sure the tiller service account is created?
Try creating the service account and giving it the required permissions:
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
After that, initialize Helm again and see if the error goes away:
helm init --history-max 200 --service-account tiller
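If it helps, a few checks along these lines (a sketch; the label selector is the one Tiller's deployment uses) confirm the pieces are in place before retrying:
kubectl -n kube-system get serviceaccount tiller
kubectl -n kube-system get pods -l app=helm,name=tiller
helm version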
I have a GitLab pipeline to deploy a Kubernetes cluster using Terraform on Azure.
The first time I used the pipeline everything went fine. Once I finished doing my tests I ran the destroy phase and everything was destroyed.
Yesterday I reran the pipeline to create the cluster; all the stages went well except the last one, which installs nginx-ingress using Helm.
install_nginx_ingress:
  stage: install_dependencies
  image: alpine/helm:3.1.1
  script:
    - helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    - helm repo update
    - >
      helm install nginx-ingress ingress-nginx/ingress-nginx
      --namespace default
      --set controller.replicaCount=2
  dependencies:
    - apply
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH && $PHASE == "DEPLOY"
When this stage is executed, this is what I have in the GitLab console:
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
"ingress-nginx" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ingress-nginx" chart repository
Update Complete. ⎈ Happy Helming!⎈
$ helm install nginx-ingress ingress-nginx/ingress-nginx --namespace default --set controller.replicaCount=2
Error: rendered manifests contain a resource that already exists.
Unable to continue with install: could not get information about the resource: poddisruptionbudgets.policy "nginx-ingress-ingress-nginx-controller" is forbidden: User "system:serviceaccount:gitlab-managed-apps:default" cannot get resource "poddisruptionbudgets" in API group "policy" in the namespace "default"
Cleaning up project directory and file based variables
ERROR: Job failed: command terminated with exit code 1
What is happening!?
Check this error line; it explains the issue:
Unable to continue with install: could not get information about the resource: poddisruptionbudgets.policy "nginx-ingress-ingress-nginx-controller" is forbidden: User "system:serviceaccount:gitlab-managed-apps:default" cannot get resource "poddisruptionbudgets" in API group "policy" in the namespace "default"
The service account Helm runs as in this pipeline (system:serviceaccount:gitlab-managed-apps:default) does not have RBAC permission for the get operation on the poddisruptionbudgets resource.
It looks like the kubernetes/ingress-nginx chart has a PodDisruptionBudget defined, but the role bound to that service account does not include any permission for the poddisruptionbudgets resource.
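If granting the permission is the route you take, a minimal sketch of the RBAC objects is below. The names pdb-reader and pdb-reader-gitlab are made up for illustration; the subject is the service account named in the error message, and whether that account should instead get a broader role is a separate decision.
# Grants get/list/watch on poddisruptionbudgets to gitlab-managed-apps/default.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pdb-reader
rules:
  - apiGroups: ["policy"]
    resources: ["poddisruptionbudgets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pdb-reader-gitlab
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pdb-reader
subjects:
  - kind: ServiceAccount
    name: default
    namespace: gitlab-managed-apps
EOF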
I am trying to install the ingress on a new Azure Kubernetes cluster, but it gives the following error:
helm install germanyingress ingress-nginx --namespace test --set controller.replicaCount=2 --set controller.scope.enabled=true --set controller.service.loadBalancerIP="*******" --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"="true"
WARNING: "kubernetes-charts.storage.googleapis.com" is deprecated for "stable" and will be deleted Nov. 13, 2020.
WARNING: You should switch to "https://charts.helm.sh/stable"
Error: failed to download "ingress-nginx" (hint: running `helm repo update` may help)
I already tried many ways but no luck.
The warning message is very clear: you're using a Helm repo that is deprecated.
Remove the deprecated repo (the warning refers to the repo registered as stable):
helm repo remove stable
Then add the Kubernetes one:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
or the one from Nginx
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
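Whichever repo you add, the chart then has to be referenced through its repo name. A sketch using the ingress-nginx repo above with the flags from the original command (the masked load balancer IP is left as-is):
helm install germanyingress ingress-nginx/ingress-nginx \
  --namespace test \
  --set controller.replicaCount=2 \
  --set controller.scope.enabled=true \
  --set controller.service.loadBalancerIP="*******" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"="true"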
I have minikube and kubectl installed:
$ minikube version
minikube version: v1.4.0
commit: 7969c25a98a018b94ea87d949350f3271e9d64b6
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
I have then followed the instructions from https://helm.sh/docs/using_helm/:
I have downloaded https://get.helm.sh/helm-v2.13.1-linux-amd64.tar.gz
I have run
$ tar -xzvf Downloads/helm-v2.13.1-linux-amd64.tar.gz linux-amd64/
linux-amd64/LICENSE
linux-amd64/tiller
linux-amd64/helm
linux-amd64/README.md
But now, if I check my helm version, I get this:
$ helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Error: could not find tiller
I have tried running helm init, but get the following:
$ helm init
$HELM_HOME has been configured at /home/SERILOCAL/<my-username>/.helm.
Error: error installing: the server could not find the requested resource
How can I get helm to initialise correctly?
The current Helm version does not work with Kubernetes version 1.16.0.
You can downgrade Kubernetes to version 1.15.3:
minikube start --kubernetes-version 1.15.3
helm init
Or use my solution to fix it on version 1.16.0.
You have to create the tiller ServiceAccount and a ClusterRoleBinding.
You can do that with these commands:
kubectl --namespace kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
And then create Tiller:
helm init --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -
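Something along these lines can then confirm Tiller actually came up (a sketch; tiller-deploy is the deployment name helm init creates):
kubectl -n kube-system rollout status deployment/tiller-deploy
kubectl -n kube-system get pods -l app=helm,name=tiller
# Both the client and server versions should now be reported.
helm version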
I ran into the same problem; @shawndodo showed me this: https://github.com/helm/helm/issues/6374#issuecomment-533427268
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -
You can try this one.
(Posted on this question)