Node.js app in Kubernetes cluster doesn't stay running - CrashLoopBackOff - node.js

I have a small Node.js application for testing with Kubernetes, but it seems the application does not keep running.
I put all the code I developed for this test on GitHub.
I run kubectl create -f deploy.yaml.
It works at first, but...
[webapp@srvapih ex-node]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
api-7b89bd4755-4lc6k 1/1 Running 0 5s
api-7b89bd4755-7x964 0/1 ContainerCreating 0 5s
api-7b89bd4755-dv299 1/1 Running 0 5s
api-7b89bd4755-w6tzj 0/1 ContainerCreating 0 5s
api-7b89bd4755-xnm8l 0/1 ContainerCreating 0 5s
[webapp@srvapih ex-node]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
api-7b89bd4755-4lc6k 0/1 CrashLoopBackOff 1 11s
api-7b89bd4755-7x964 0/1 CrashLoopBackOff 1 11s
api-7b89bd4755-dv299 0/1 CrashLoopBackOff 1 11s
api-7b89bd4755-w6tzj 0/1 CrashLoopBackOff 1 11s
api-7b89bd4755-xnm8l 0/1 CrashLoopBackOff 1 11s
Events from kubectl describe pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 6m48s (x5 over 8m14s) kubelet, srvweb05.beirario.intranet Container image "node:8-alpine" already present on machine
Normal Created 6m48s (x5 over 8m14s) kubelet, srvweb05.beirario.intranet Created container
Normal Started 6m48s (x5 over 8m12s) kubelet, srvweb05.beirario.intranet Started container
Normal Scheduled 6m9s default-scheduler Successfully assigned default/api-7b89bd4755-4lc6k to srvweb05.beirario.intranet
Warning BackOff 3m2s (x28 over 8m8s) kubelet, srvweb05.beirario.intranet Back-off restarting failed container

All I can say here: you are providing a command that finishes: command: ["/bin/sh","-c", "node", "servidor.js"]. With /bin/sh -c, only "node" is taken as the command string and "servidor.js" merely becomes the shell's $0, so Node starts without a script and exits immediately.
Instead, provide a command that keeps the process running so it never completes on its own (see the sketch after the describe output below).
Describing your pod shows that the container completed successfully with exit code 0:
Containers:
ex-node:
Container ID: docker://836ffd771b3514fd13ae3e6b8818a7f35807db55cf8f756e962131823a476675
Image: node:8-alpine
Image ID: docker-pullable://node@sha256:8e9987a6d91d783c56980f1bd4b23b4c05f9f6076d513d6350fef8fe09ed01fd
Port: 3000/TCP
Host Port: 0/TCP
Command:
/bin/sh
-c
node
servidor.js
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 08 Mar 2019 14:29:54 +0000
Finished: Fri, 08 Mar 2019 14:29:54 +0000
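So the fix is to make the command actually run the script and stay in the foreground. A minimal sketch of the container spec, assuming servidor.js sits in the image's working directory (the real deploy.yaml is only on GitHub and not shown here):
containers:
  - name: ex-node
    image: node:8-alpine
    # Pass the whole command line as a single -c string...
    command: ["/bin/sh", "-c", "node servidor.js"]
    # ...or drop the shell entirely:
    # command: ["node", "servidor.js"]
    ports:
      - containerPort: 3000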

You may be using the "process.stdout.write" method in your code; this will cause the k8s session to be lost. Do not print anything to stdout!

Try pm2: https://pm2.io/docs/runtime/integration/docker/. It runs your Node.js app under a process manager so the container keeps running.
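If you go the pm2 route, a hedged Dockerfile sketch based on the pm2 Docker integration docs (pm2-runtime keeps the process in the foreground, which is what the container needs; servidor.js is taken from the question):
FROM node:8-alpine
WORKDIR /app
RUN npm install pm2 -g
COPY . .
# RUN npm install   # if the app has its own dependencies
CMD ["pm2-runtime", "servidor.js"]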

Related

kubernetes networking: pod cannot reach nodes

I have a Kubernetes cluster with 3 masters and 7 workers. I use Calico as the CNI. When I deploy Calico, the calico-kube-controllers-xxx pod fails because it cannot reach 10.96.0.1:443.
2020-06-23 13:05:28.737 [INFO][1] main.go 88: Loaded configuration from environment config=&config.Config{LogLevel:"info", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", DatastoreType:"kubernetes"}
W0623 13:05:28.740128 1 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2020-06-23 13:05:28.742 [INFO][1] main.go 109: Ensuring Calico datastore is initialized
2020-06-23 13:05:38.742 [ERROR][1] client.go 261: Error getting cluster information config ClusterInformation="default" error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
2020-06-23 13:05:38.742 [FATAL][1] main.go 114: Failed to initialize Calico datastore error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
this is the situation in the kube-system namespace:
kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-77d6cbc65f-6bmjg 0/1 CrashLoopBackOff 56 4h33m
calico-node-94pkr 1/1 Running 0 36m
calico-node-d8vc4 1/1 Running 0 36m
calico-node-fgpd4 1/1 Running 0 37m
calico-node-jqgkp 1/1 Running 0 37m
calico-node-m9lds 1/1 Running 0 37m
calico-node-n5qmb 1/1 Running 0 37m
calico-node-t46jb 1/1 Running 0 36m
calico-node-w6xch 1/1 Running 0 38m
calico-node-xpz8k 1/1 Running 0 37m
calico-node-zbw4x 1/1 Running 0 36m
coredns-5644d7b6d9-ms7gv 0/1 Running 0 4h33m
coredns-5644d7b6d9-thwlz 0/1 Running 0 4h33m
kube-apiserver-k8s01 1/1 Running 7 34d
kube-apiserver-k8s02 1/1 Running 9 34d
kube-apiserver-k8s03 1/1 Running 7 34d
kube-controller-manager-k8s01 1/1 Running 7 34d
kube-controller-manager-k8s02 1/1 Running 9 34d
kube-controller-manager-k8s03 1/1 Running 8 34d
kube-proxy-9dppr 1/1 Running 3 4d
kube-proxy-9hhm9 1/1 Running 3 4d
kube-proxy-9svfk 1/1 Running 1 4d
kube-proxy-jctxm 1/1 Running 3 4d
kube-proxy-lsg7m 1/1 Running 3 4d
kube-proxy-m257r 1/1 Running 1 4d
kube-proxy-qtbbz 1/1 Running 2 4d
kube-proxy-v958j 1/1 Running 2 4d
kube-proxy-x97qx 1/1 Running 2 4d
kube-proxy-xjkjl 1/1 Running 3 4d
kube-scheduler-k8s01 1/1 Running 7 34d
kube-scheduler-k8s02 1/1 Running 9 34d
kube-scheduler-k8s03 1/1 Running 8 34d
Besides, CoreDNS also cannot reach the internal Kubernetes service.
Within a node, if I run wget -S 10.96.0.1:443, I receive a response.
wget -S 10.96.0.1:443
--2020-06-23 13:12:12-- http://10.96.0.1:443/
Connecting to 10.96.0.1:443... connected.
HTTP request sent, awaiting response...
HTTP/1.0 400 Bad Request
2020-06-23 13:12:12 ERROR 400: Bad Request.
But if I run wget -S 10.96.0.1:443 in a pod, I get a timeout error.
Also, I cannot ping nodes from pods.
The cluster pod CIDR is 192.168.0.0/16.
I resolved it by recreating the cluster with a different pod CIDR.
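For reference, a hedged sketch of what recreating with a different pod CIDR can look like with kubeadm, picking a range that does not overlap the node or service networks (the exact networks of this cluster are not shown):
# Re-initialise the control plane with a non-overlapping pod CIDR
kubeadm init --pod-network-cidr=10.244.0.0/16
# and set CALICO_IPV4POOL_CIDR in the Calico manifest to the same range
# before applying it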

NodeJs api container crashing in kubernetes

As part of the CI/CD pipeline I deploy my web API to Kubernetes; the most recent branch I'm working on keeps crashing.
I have made sure the app runs locally for all the configurations, and the CI/CD pipeline on the master branch succeeds. I assume some change I introduced is making the app fail, but I can't see any problem in the logs.
This is my Dockerfile:
FROM node:12
WORKDIR /usr/src/app
ARG NODE_ENV
ENV NODE_ENV $NODE_ENV
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
ENV PORT 5000
EXPOSE $PORT
CMD [ "npm", "start" ]
This is what I get when I run kubectl describe on the corresponding pod:
Controlled By: ReplicaSet/review-refactor-e-0jmik1-7f75c45779
Containers:
auto-deploy-app:
Container ID: docker://8d6035b8ee0938262ea50e2f74d3ab627761fdf5b1811460b24f94a74f880810
Image: registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints:5e986c65d41743d9d6e6ede441a1cae316b3e751
Image ID: docker-pullable://registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints@sha256:de1e4478867f54a76f1c82374dcebb1d40b3eb0cde24caf936a21a4d16471312
Port: 5000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 27 Jul 2019 19:18:07 +0100
Finished: Sat, 27 Jul 2019 19:18:49 +0100
Ready: False
Restart Count: 7
Liveness: http-get http://:5000/ delay=15s timeout=15s period=10s #success=1 #failure=3
Readiness: http-get http://:5000/ delay=5s timeout=3s period=10s #success=1 #failure=3
Environment Variables from:
review-refactor-e-0jmik1-secret Secret Optional: false
Environment:
DATABASE_URL: postgres://:@review-refactor-e-0jmik1-postgres:5432/
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mvvfv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-mvvfv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mvvfv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m52s default-scheduler Successfully assigned metadata-service-13359548/review-refactor-e-0jmik1-7f75c45779-jfw22 to gke-qa2-default-pool-4dc045be-g8d9
Normal Pulling 9m51s kubelet, gke-qa2-default-pool-4dc045be-g8d9 pulling image "registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints:5e986c65d41743d9d6e6ede441a1cae316b3e751"
Normal Pulled 9m45s kubelet, gke-qa2-default-pool-4dc045be-g8d9 Successfully pulled image "registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints:5e986c65d41743d9d6e6ede441a1cae316b3e751"
Warning Unhealthy 8m58s kubelet, gke-qa2-default-pool-4dc045be-g8d9 Readiness probe failed: Get http://10.48.1.34:5000/: dial tcp 10.48.1.34:5000: connect: connection refused
Warning Unhealthy 8m28s (x6 over 9m28s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Readiness probe failed: HTTP probe failed with statuscode: 404
Normal Started 8m23s (x3 over 9m42s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Started container
Warning Unhealthy 8m23s (x6 over 9m23s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Liveness probe failed: HTTP probe failed with statuscode: 404
Normal Killing 8m23s (x2 over 9m3s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Killing container with id docker://auto-deploy-app:Container failed liveness probe.. Container will be killed and recreated.
Normal Pulled 8m23s (x2 over 9m3s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Container image "registry.gitlab.com/hidden-fox/metadata-service/refactor-endpoints:5e986c65d41743d9d6e6ede441a1cae316b3e751" already present on machine
Normal Created 8m23s (x3 over 9m43s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Created container
Warning BackOff 4m42s (x7 over 5m43s) kubelet, gke-qa2-default-pool-4dc045be-g8d9 Back-off restarting failed container
I expect the app to get deployed to Kubernetes, but instead I see a CrashLoopBackOff error.
I also don't see any application-specific errors in the logs.
I figured it out: I had to add an endpoint mapped to the root URL. As part of the CD, the liveness and readiness probes hit / and, when there is no response, the container is killed and the job fails (see the sketch below).
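A minimal sketch of the kind of endpoint that fixes this, assuming an Express app (the actual application code is not shown in the question):
// hypothetical server.js
const express = require('express');
const app = express();

// The probes in the describe output hit http://:5000/, so answer the root URL
// with a 200 instead of a 404.
app.get('/', (req, res) => res.sendStatus(200));

app.listen(process.env.PORT || 5000);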

test image from azure container registry

I created a simple Docker image from a "Hello World" Java application.
This is my Dockerfile:
FROM java:8
COPY . /var/www/java
WORKDIR /var/www/java
RUN javac HelloWorld.java
CMD ["java", "HelloWorld"]
I pushed the image (java-app) to Azure Container Registry.
$ az acr repository list --name AContainerRegistry --output table
Result
----------------
java-app
I want to deploy it
amhg$ kubectl run dockerproject --image=acontainerregistry.azurecr.io/java-app:v1
deployment.apps "dockerproject" created
amhg$ kubectl expose deployments dockerproject --port=80 --type=LoadBalancer
service "dockerproject" exposed
When I check the pods, the pod has crashed:
amhg$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dockerproject-b6799d879-pt5rx 0/1 CrashLoopBackOff 8 19m
Is there a way to "test"/run the image from the central registry? Why does it crash?
Here is the describe pod output:
amhg$ kubectl describe pod dockerproject-64fbf7649-spc7h
Name: dockerproject-64fbf7649-spc7h
Namespace: default
Node: aks-nodepool1-39744669-0/10.240.0.4
Start Time: Thu, 19 Apr 2018 11:53:58 +0200
Labels: pod-template-hash=209693205
run=dockerproject
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"dockerproject-64fbf7649","uid":"946610e4-43b7-11e8-9537-0a58ac1...
Status: Running
IP: 10.244.0.38
Controlled By: ReplicaSet/dockerproject-64fbf7649
Containers:
dockerproject:
Container ID: docker://1f2a7a6870a37e4d6b53fc834b0d4d3b681e9faaacc3772177a918e66357404e
Image: acontainerregistry.azurecr.io/java-app:v1
Image ID: docker-pullable://acontainerregistry.azurecr.io/java-app@sha256:eaf6fe53a59de287ad76a18de2c7f05580b1f25153624161aadcc7b8ef47b0c4
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 19 Apr 2018 12:35:22 +0200
Finished: Thu, 19 Apr 2018 12:35:23 +0200
Ready: False
Restart Count: 13
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-vkpjm (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-vkpjm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vkpjm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 43m default-scheduler Successfully assigned dockerproject2-64fbf7649-spc7h to aks-nodepool1-39744669-0
Normal SuccessfulMountVolume 43m kubelet, aks-nodepool1-39744669-0 MountVolume.SetUp succeeded for volume "default-token-vkpjm"
Normal Pulled 43m (x4 over 43m) kubelet, aks-nodepool1-39744669-0 Container image "acontainerregistry.azurecr.io/java-app:v1" already present on machine
Normal Created 43m (x4 over 43m) kubelet, aks-nodepool1-39744669-0 Created container
Normal Started 43m (x4 over 43m) kubelet, aks-nodepool1-39744669-0 Started container
Warning FailedSync 8m (x161 over 43m) kubelet, aks-nodepool1-39744669-0 Error syncing pod
Warning BackOff 3m (x184 over 43m) kubelet, aks-nodepool1-39744669-0 Back-off restarting failed container
When you run an application in a Pod, Kubernetes expects that it will keep working as a daemon until you stop it somehow.
In your details about the pod I see this:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 19 Apr 2018 12:35:22 +0200
Finished: Thu, 19 Apr 2018 12:35:23 +0200
It means that your application exited with code 0 (which means "all is ok") right after start. So, the image was successfully downloaded (registry is OK) and run, but the application exited.
That's why Kubernetes tries to restart the pod all the time.
The only thing I can suggest is to find the reason why the application stops and fix it.
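In this case the reason is visible in the Dockerfile: java HelloWorld prints its message and returns, so the container completes immediately. A hedged way to confirm the behaviour outside Kubernetes is to run the image straight from the registry, using the names from the question:
az acr login --name AContainerRegistry
docker run --rm acontainerregistry.azurecr.io/java-app:v1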

Performance metrics in Kubernetes Dashboard missing in Azure Kubernetes deployment

Update 2: I was able to get the statistics by using Grafana and InfluxDB. However, I find this overkill: I want to see the current status of my cluster, not so much the historical trends. Based on the linked image, it should be possible using the pre-deployed Heapster and the Kubernetes Dashboard.
Update 1:
With the command below, I do see resource information. I guess the remaining part of the question is why it is not showing up (or how I should configure it to show up) in the kubernetes dashboard, as shown in this image: https://docs.giantswarm.io/img/dashboard-ui.png
$ kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-agentpool0-41204139-0 36m 1% 682Mi 9%
k8s-agentpool0-41204139-1 33m 1% 732Mi 10%
k8s-agentpool0-41204139-10 36m 1% 690Mi 10%
[truncated]
I am trying to monitor performance in my Azure Kubernetes deployment. I noticed it has Heapster running by default. I did not launch this one, but do want to leverage it if it is there. My question is: how can I access it, or is there something wrong with it? Here are the details I can think of, let me know if you need more.
$ kubectl cluster-info
Kubernetes master is running at https://[hidden].uksouth.cloudapp.azure.com
Heapster is running at https://[hidden].uksouth.cloudapp.azure.com/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://[hidden].uksouth.cloudapp.azure.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://[hidden].uksouth.cloudapp.azure.com/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
tiller-deploy is running at https://[hidden].uksouth.cloudapp.azure.com/api/v1/namespaces/kube-system/services/tiller-deploy:tiller/proxy
I set up a proxy:
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
Point my browser to
localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/#!/workload?namespace=default
I see the Kubernetes Dashboard, but I notice that the performance graphs displayed at https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/ are missing. I also do not see the admin section.
I then point my browser to localhost:8001/api/v1/namespaces/kube-system/services/heapster/proxy and get
404 page not found
Inspecting the pods:
kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
heapster-2205950583-43w4b 2/2 Running 0 1d
kube-addon-manager-k8s-master-41204139-0 1/1 Running 0 1d
kube-apiserver-k8s-master-41204139-0 1/1 Running 0 1d
kube-controller-manager-k8s-master-41204139-0 1/1 Running 0 1d
kube-dns-v20-2000462293-1j20h 3/3 Running 0 16h
kube-dns-v20-2000462293-hqwfn 3/3 Running 0 16h
kube-proxy-0kwkf 1/1 Running 0 1d
kube-proxy-13bh5 1/1 Running 0 1d
[truncated]
kube-proxy-zfbb1 1/1 Running 0 1d
kube-scheduler-k8s-master-41204139-0 1/1 Running 0 1d
kubernetes-dashboard-732940207-w7pt2 1/1 Running 0 1d
tiller-deploy-3007245560-4tk78 1/1 Running 0 1d
Checking the log:
$ kubectl logs heapster-2205950583-43w4b heapster --namespace=kube-system
I0309 06:11:21.241752 19 heapster.go:72] /heapster --source=kubernetes.summary_api:""
I0309 06:11:21.241813 19 heapster.go:73] Heapster version v1.4.2
I0309 06:11:21.242310 19 configs.go:61] Using Kubernetes client with master "https://10.0.0.1:443" and version v1
I0309 06:11:21.242331 19 configs.go:62] Using kubelet port 10255
I0309 06:11:21.243557 19 heapster.go:196] Starting with Metric Sink
I0309 06:11:21.344547 19 heapster.go:106] Starting heapster on port 8082
E0309 14:14:05.000293 19 summary.go:389] Node k8s-agentpool0-41204139-32 is not ready
E0309 14:14:05.000331 19 summary.go:389] Node k8s-agentpool0-41204139-56 is not ready
[truncated the other agent pool messages saying not ready]
E0309 14:24:05.000645 19 summary.go:389] Node k8s-master-41204139-0 is not ready
$ kubectl describe pod heapster-2205950583-43w4b --namespace=kube-system
Name: heapster-2205950583-43w4b
Namespace: kube-system
Node: k8s-agentpool0-41204139-54/10.240.0.11
Start Time: Fri, 09 Mar 2018 07:11:15 +0100
Labels: k8s-app=heapster
pod-template-hash=2205950583
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"heapster-2205950583","uid":"ac75e772-2360-11e8-9e1c-00224807...
scheduler.alpha.kubernetes.io/critical-pod=
Status: Running
IP: 10.244.58.2
Controlled By: ReplicaSet/heapster-2205950583
Containers:
heapster:
Container ID: docker://a9205e7ab9070a1d1bdee4a1b93eb47339972ad979c4d35e7d6b59ac15a91817
Image: k8s-gcrio.azureedge.net/heapster-amd64:v1.4.2
Image ID: docker-pullable://k8s-gcrio.azureedge.net/heapster-amd64@sha256:f58ded16b56884eeb73b1ba256bcc489714570bacdeca43d4ba3b91ef9897b20
Port: <none>
Command:
/heapster
--source=kubernetes.summary_api:""
State: Running
Started: Fri, 09 Mar 2018 07:11:20 +0100
Ready: True
Restart Count: 0
Limits:
cpu: 121m
memory: 464Mi
Requests:
cpu: 121m
memory: 464Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from heapster-token-txk8b (ro)
heapster-nanny:
Container ID: docker://68e021532a482f32abec844d6f9ea00a4a8232b8d1004b7df4199d2c7d3a3b4c
Image: k8s-gcrio.azureedge.net/addon-resizer:1.7
Image ID: docker-pullable://k8s-gcrio.azureedge.net/addon-resizer@sha256:dcec9a5c2e20b8df19f3e9eeb87d9054a9e94e71479b935d5cfdbede9ce15895
Port: <none>
Command:
/pod_nanny
--cpu=80m
--extra-cpu=0.5m
--memory=140Mi
--extra-memory=4Mi
--threshold=5
--deployment=heapster
--container=heapster
--poll-period=300000
--estimator=exponential
State: Running
Started: Fri, 09 Mar 2018 07:11:18 +0100
Ready: True
Restart Count: 0
Limits:
cpu: 50m
memory: 90Mi
Requests:
cpu: 50m
memory: 90Mi
Environment:
MY_POD_NAME: heapster-2205950583-43w4b (v1:metadata.name)
MY_POD_NAMESPACE: kube-system (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from heapster-token-txk8b (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
heapster-token-txk8b:
Type: Secret (a volume populated by a Secret)
SecretName: heapster-token-txk8b
Optional: false
QoS Class: Guaranteed
Node-Selectors: beta.kubernetes.io/os=linux
Tolerations: <none>
Events: <none>
I have seen in the past that if you restart the dashboard pod it starts working. Can you try that real fast and let me know?
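For example, deleting the dashboard pod listed above lets its controller recreate it (pod name taken from the kubectl get pods output earlier):
kubectl delete pod kubernetes-dashboard-732940207-w7pt2 --namespace=kube-system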

kube-dns stays in ContainerCreating status

I have 5 machines running Ubuntu 16.04.1 LTS. I want to set them up as a Kubernetes cluster. I'm trying to follow this getting started guide, which uses kubeadm.
It all worked fine until step 3/4, installing a pod network. I looked at their add-ons page for a pod network and chose the flannel overlay network. I copied the YAML file to the machine and executed:
root@up01:/home/up# kubectl apply -f flannel.yml
Which resulted in:
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
So I thought it went OK, but when I display all the pods:
root@up01:/etc/kubernetes/manifests# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system dummy-2088944543-d5f50 1/1 Running 0 50m
kube-system etcd-up01 1/1 Running 0 48m
kube-system kube-apiserver-up01 1/1 Running 0 50m
kube-system kube-controller-manager-up01 1/1 Running 0 49m
kube-system kube-discovery-1769846148-jvx53 1/1 Running 0 50m
kube-system kube-dns-2924299975-prlgf 0/4 ContainerCreating 0 49m
kube-system kube-flannel-ds-jb1df 2/2 Running 0 32m
kube-system kube-proxy-rtcht 1/1 Running 0 49m
kube-system kube-scheduler-up01 1/1 Running 0 49m
The problem is that kube-dns stays in the ContainerCreating state. I don't know what to do.
It is very likely that you missed this critical piece of information from the guide:
If you want to use flannel as the pod network, specify
--pod-network-cidr 10.244.0.0/16 if you’re using the daemonset manifest below.
If you omit this kube-dns will never leave the ContainerCreating STATUS.
Your kubeadm init command should be:
# kubeadm init --pod-network-cidr 10.244.0.0/16
and not
# kubeadm init
Did you try restarting NetworkManager? It worked for me. It also worked when I disabled IPv6.
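Hedged versions of those two suggestions on Ubuntu 16.04 (whether NetworkManager is actually in use depends on the installation):
sudo systemctl restart NetworkManager
# disable IPv6 at runtime
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1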
