My metrics-server suddenly stopped working, and I got the following information:
$ kubectl get apiservices |egrep metrics
v1beta1.metrics.k8s.io kube-system/metrics-server False (MissingEndpoints)
I tried the steps below, but it is still not working:
$ git clone https://github.com/kubernetes-incubator/metrics-server.git
$ cd metrics-server
$ kubectl apply -f deploy/1.8+/
Please advise, thanks.
I solved this issue as follows:
Download metrics-server:
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.5.0/components.yaml
Remove metrics server:
kubectl delete -f components.yaml
Edit the downloaded file and add the --kubelet-insecure-tls flag:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
Create the service once again:
kubectl apply -f components.yaml
In this case the solution was to upgrade the Kubernetes version on the nodes and reapply the metrics server.
Also, upgrading to the latest (0.4.1) version of metrics-server will probably fix similar issues (such as False (MissingEndpoints)):
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.1/components.yaml;
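In either case, a few quick commands to confirm the metrics-server is healthy again (assuming it runs in the kube-system namespace with the k8s-app=metrics-server label, as in the output above):
kubectl get apiservices v1beta1.metrics.k8s.io
kubectl -n kube-system get pods -l k8s-app=metrics-server
kubectl top nodes
The API service should show True under AVAILABLE once the endpoints exist.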
I am working to create a scheduler on GitLab to execute a pipeline that deploys multiple applications to OpenShift using Helm. I have the pods ready and the scheduler set up, but I am unable to run helm commands. The pipeline fails with the following error:
++ echo '$ helm install chart-name helm/charts/ --namespace dev # collapsed multi-line command'
$ helm install chart-name helm/charts/ --namespace dev # collapsed multi-line command
++ helm install chart-name helm/charts/ --namespace dev
bash: line 188: helm: command not found
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: exit status 1
This is my code in the **gitlab.ci.yaml**. I attempted to add a helm image, but it didn't seem to work; I was expecting that with the image I would be able to call the helm commands.
onboard-dev:
  stage: release
  tags:
    - my-tag
  image:
    name: alpine/helm
    entrypoint: [""]
  script:
    - |
      PATH=$PATH:$(pwd)/bin
      oc login --token=$TOKEN --insecure-skip-tls-verify=true --server=$MY_SERVER
      oc project dev
    - |
      helm install chart-name helm/charts --namespace dev
      helm upgrade --install chart-name helm/charts
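For reference, a hypothetical debug step (not part of the original pipeline) that would show whether the helm binary is visible to the job's shell in the alpine/helm image:
  script:
    - |
      echo $PATH
      which helm || echo "helm not found"
      helm version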
What is the best way to go about achieving this? Thanks in advance!
We have been tasked with setting up a container-based Jenkins deployment, and there is strong pressure to do this in AKS. Our Jenkins needs to be able to build other containers. Normally I'd handle this with a docker-in-docker approach by mounting /var/run/docker.sock & /usr/bin/docker into my running container.
I do not know if this is possible in AKS or not. Some forum posts on GitHub suggest that host-mounting is possible but broken in the latest AKS release. My limited experimentation with a Helm chart was met with this error:
Error: release jenkins4 failed: Deployment.apps "jenkins" is invalid:
[spec.template.spec.initContainers[0].volumeMounts[0].name: Required
value, spec.template.spec.initContainers[0].volumeMounts[0].name: Not
found: ""]
The change I made was to update the volumeMounts: section of jenkins-master-deployment.yaml and include the following:
- type: HostPath
  hostPath: /var/run/docker.sock
  mountPath: /var/run/docker.sock
Is what I'm trying to do even possible based on AKS security settings, or did I just mess up my chart?
If it's not possible to mount the docker socket into a container in AKS, that's fine, I just need a definitive answer.
Thanks,
Well, we did this a while back for VSTS (cloud TFS, now called Azure DevOps) build agents, so it should be possible. The way we did it was also by mounting docker.sock.
The relevant part for us was:
    # ... container spec ...
    volumeMounts:
    - mountPath: /var/run/docker.sock
      name: docker-volume
  volumes:
  - name: docker-volume
    hostPath:
      path: /var/run/docker.sock
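For context, a minimal sketch of how that fragment fits into a complete Deployment (the jenkins names and image here are placeholders, not the actual chart output):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts   # placeholder image
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-volume
      volumes:
      - name: docker-volume
        hostPath:
          path: /var/run/docker.sock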
I achieved the requirement using the following manifests.
Our k8s manifest file carries this securityContext under the container spec.
securityContext:
  privileged: true
In our Dockerfile we installed Docker inside Docker like this:
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install curl wget -y
RUN apt-get install \
ca-certificates \
curl \
gnupg \
lsb-release -y
RUN mkdir -p /etc/apt/keyrings
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
RUN echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
RUN apt-get update
RUN apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y
# last two lines of Dockerfile
COPY ./agent_startup.sh .
RUN chmod +x /agent_startup.sh
CMD ["/usr/sbin/init"]
CMD ["./agent_startup.sh"]
Content of the agent_startup.sh file:
#!/bin/bash
echo "DOCKER STARTS HERE"
service --status-all
service docker start
service docker start
docker version
docker ps
echo "DOCKER ENDS HERE"
sleep 100000
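For completeness, a rough sketch of how the image referenced in the sample manifest below could be built and pushed (registry and repository names taken from that manifest; adjust to your own ACR):
docker build -t myecr-repo.azurecr.io/buildagent .
az acr login --name myecr-repo
docker push myecr-repo.azurecr.io/buildagent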
Sample k8s file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: build-agent
  labels:
    app: build-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: build-agent
  template:
    metadata:
      labels:
        app: build-agent
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: build-agent
        image: myecr-repo.azurecr.io/buildagent
        securityContext:
          privileged: true
When the Dockerized agent pool was up, the Docker daemon was running inside the Docker container.
My Kubectl version
PS D:\Temp\temp> kubectl.exe version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.22.6
WARNING: version difference between client (1.25) and server (1.22) exceeds the supported minor version skew of +/-1
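A sketch of how that pod shell can be opened, assuming the build-agent Deployment from the sample manifest above:
kubectl exec -it deploy/build-agent -- /bin/bash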
pod shell output:
root@**********-bcd967987-52wrv:/# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
**Disclaimer:** Our Kubernetes cluster version is 1.22 and the base image is Ubuntu 18.04; this was tested only to check that Docker-inside-Docker runs, and the agent is not registered with Azure DevOps. You can modify the startup script according to your needs.
I have uploaded my image to ACR. When I try to deploy it using a deployment.yaml with kubectl commands, kubectl get pods shows ErrImageNeverPull for the pods.
Also, I am not using minikube. Is it necessary to use minikube for this?
I am a beginner in azure/kubernetes.
I've also used imagePullPolicy: Never in the YAML file. It doesn't work even without that setting, and shows ImagePullBackOff instead.
As Payal Jindal mentioned in the comment:
It worked fine. There was a problem with my docker installation.
The problem is now resolved. The way forward is to set the image pull policy to IfNotPresent or Always:
spec:
  containers:
  - imagePullPolicy: Always
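For example, a fuller (hypothetical) container entry with the policy set explicitly, where the name and image are placeholders for your ACR image:
spec:
  containers:
  - name: my-app                              # placeholder name
    image: myregistry.azurecr.io/my-app:v1    # placeholder ACR image
    imagePullPolicy: Always                   # or IfNotPresent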
I have configured a Kubernetes cluster on Microsoft Azure and installed a Grafana helm chart on it.
In a directory on my local computer, I have a custom Grafana plugin that I developed in the past and I would like to install it in Grafana running on the Cloud.
Is there a way to do that?
You can use an initContainer like this:
initContainers:
  - name: local-plugins-downloader
    image: busybox
    command:
      - /bin/sh
      - -c
      - |
        #!/bin/sh
        set -euo pipefail
        mkdir -p /var/lib/grafana/plugins
        cd /var/lib/grafana/plugins
        for url in http://192.168.95.169/grafana-piechart-panel.zip; do
          wget --no-check-certificate $url -O temp.zip
          unzip temp.zip
          rm temp.zip
        done
    volumeMounts:
      - name: storage
        mountPath: /var/lib/grafana
You need to have an emptyDir volume called storage in the pod; this is the default if you use the Helm chart.
Then it needs to be mounted into the Grafana container. You also need to make sure that the Grafana plugin directory is /var/lib/grafana/plugins.
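For reference, a minimal sketch of those two pieces in the pod spec (the volume is named storage to match the chart default; the container name and image are assumptions):
  containers:
    - name: grafana
      image: grafana/grafana
      volumeMounts:
        - name: storage
          mountPath: /var/lib/grafana
  volumes:
    - name: storage
      emptyDir: {}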
I have minikube and kubectl installed:
$ minikube version
minikube version: v1.4.0
commit: 7969c25a98a018b94ea87d949350f3271e9d64b6
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
I have then followed the instructions from https://helm.sh/docs/using_helm/:
I have downloaded https://get.helm.sh/helm-v2.13.1-linux-amd64.tar.gz
I have run
$ tar -xzvf Downloads/helm-v2.13.1-linux-amd64.tar.gz linux-amd64/
linux-amd64/LICENSE
linux-amd64/tiller
linux-amd64/helm
linux-amd64/README.md
But now, if I check my helm version, I get this:
$ helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Error: could not find tiller
I have tried running helm init, but get the following:
$ helm init
$HELM_HOME has been configured at /home/SERILOCAL/<my-username>/.helm.
Error: error installing: the server could not find the requested resource
How can I get helm to initialise correctly?
The current Helm version (v2.13.1) does not work with Kubernetes 1.16.0.
You can downgrade Kubernetes to version 1.15.3:
minikube start --kubernetes-version 1.15.3
helm init
Or use my solution to fix it on version 1.16.0.
You have to create a tiller ServiceAccount and a ClusterRoleBinding.
You can simply do that by using those commands:
kubectl --namespace kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
And then simply create Tiller:
helm init --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -
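Once that is applied, something like this should confirm that Tiller is up (the labels come from the override above):
kubectl -n kube-system get pods -l app=helm,name=tiller
helm version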
I ran into the same problem; @shawndodo showed me this: https://github.com/helm/helm/issues/6374#issuecomment-533427268
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's#apiVersion: extensions/v1beta1#apiVersion: apps/v1#' | kubectl apply -f -
You can try this one.
(Posted on this question)