Spark submit on Kubernetes cloud engine using only one node, one cpu requested - apache-spark

I have set up a cluster with 4 nodes, each having 2 CPUs, so 8 in total.
My code ends up running on only one CPU: no matter the settings, the requested CPU count in the pod description is always 1 and the execution time stays the same. I tried the Spark examples and the same thing applies.
The spark-submit script I use:
./bin/spark-submit \
--master k8s://https://34.64.87.144 \
--deploy-mode cluster \
--name spark-counter \
--class DataCounter \
--driver-java-options "-Dlog4j.configuration=file:////opt/spark/data/log4j.properties" \
--conf spark.executor.cores=4 \
--conf spark.kubernetes.executor.request.cores=3.6 \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.kubernetes.driver.pod.name=spark-counter \
--conf spark.kubernetes.container.image=asia.gcr.io/profound-media-298808/spark-base:latest \
local:///opt/spark/data/spark_counter-1.0.jar /opt/spark/data/input1
and the description of the driver pod:
spark-role=driver
Annotations: <none>
Status: Succeeded
IP: 10.32.4.37
IPs:
IP: 10.32.4.37
Containers:
spark-kubernetes-driver:
Container ID: containerd://bd31b8112159145169ab1b6397af8bc2f10cee5429b11c8025f2359ab5194882
Image: asia.gcr.io/profound-media-298808/spark-base:latest
Image ID: asia.gcr.io/profound-media-298808/spark-base@sha256:6aaf817da5606a39bf2aeea769c4ec2d62c7986d06109cb4a38f4f7157702ff1
Ports: 7078/TCP, 7079/TCP, 4040/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Args:
driver
--properties-file
/opt/spark/conf/spark.properties
--class
DataCounter
spark-internal
/opt/spark/data/input1
State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 18 Dec 2020 07:13:45 +0000
Finished: Fri, 18 Dec 2020 07:14:28 +0000
Ready: False
Restart Count: 0
Limits:
memory: 1408Mi
Requests:
cpu: 1
memory: 1408Mi
Environment:
SPARK_DRIVER_BIND_ADDRESS: (v1:status.podIP)
SPARK_LOCAL_DIRS: /var/data/spark-f4832458-7450-4d96-b7ed-c672d5ec0eda
SPARK_CONF_DIR: /opt/spark/conf
Mounts:
/opt/spark/conf from spark-conf-volume (rw)
/var/data/spark-f4832458-7450-4d96-b7ed-c672d5ec0eda from spark-local-dir-1 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from spark-token-5zk2c (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
spark-local-dir-1:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
spark-conf-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: spark-counter-1608275621712-driver-conf-map
Optional: false
spark-token-5zk2c:
Type: Secret (a volume populated by a Secret)
SecretName: spark-token-5zk2c
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m44s default-scheduler Successfully assigned default/spark-counter to gke-cluster-1-default-pool-f51b7df5-g048
Warning FailedMount 7m43s (x2 over 7m43s) kubelet MountVolume.SetUp failed for volume "spark-conf-volume" : configmap "spark-counter-1608275621712-driver-conf-map" not found
Normal Pulled 7m42s kubelet Container image "asia.gcr.io/profound-media-298808/spark-base:latest" already present on machine
Normal Created 7m42s kubelet Created container spark-kubernetes-driver
Normal Started 7m42s kubelet Started container spark-kubernetes-driver
I have no additional configuration files set up; I used the Spark image builder script as a base to build the image, only adding my own jar and data to it. Should I have something more?
How do I set up my cluster to utilize all nodes?
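Note that the describe output above is the driver pod (its label is spark-role=driver); the driver requests 1 CPU by default regardless of spark.executor.cores, while the executors run in separate pods. A minimal sketch of how one might check the executors (the settings shown are illustrative, not tuned values):
# The executors run in their own pods; verify that they are created and see
# which nodes they land on (the spark-role label is set by Spark on Kubernetes):
kubectl get pods -l spark-role=executor -o wide
# To spread work across all four nodes, request an explicit number of
# executors in the spark-submit above, for example:
--conf spark.executor.instances=4 \
--conf spark.executor.cores=1 \
--conf spark.kubernetes.executor.request.cores=1 \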

Related

Kubernetes nfs-subdir-external-provisioner stuck in ContainerCreating / Unable to attach or mount volumes: unmounted volumes=[nfs-client-root]

I'm trying to install an nfs-client-provisioner and run a MongoDB with it.
Unfortunately, the nfs-client-provisioner hangs in ContainerCreating and says "Warning FailedMount 3m35s (x13 over 37m) kubelet Unable to attach or mount volumes: unmounted volumes=[nfs-client-root], unattached volumes=[nfs-client-root kube-api-access-lr9tl]: timed out waiting for the condition".
The NFS server is configured on the same VPS machine (Debian 10).
I am able to mount and write files on the NFS server from a second VPS with Debian 10.
The cluster is set up with k0s.
I get the error with both the Helm chart and the manual installation.
Any help is appreciated.
For some more info, see below:
Helm version:
version.BuildInfo{Version:"v3.8.2", GitCommit:"6e3701edea09e5d55a8ca2aae03a68917630e91b", GitTreeState:"clean", GoVersion:"go1.17.5"}
Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.6", GitCommit:"ad3338546da947756e8a88aa6822e9c11e7eac22", GitTreeState:"clean", BuildDate:"2022-04-14T08:49:13Z", GoVersion:"go1.17.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5+k0s", GitCommit:"5ab78974affb1a76f1e5687aaa8b02aeac4380b8", GitTreeState:"clean", BuildDate:"2022-03-24T22:59:27Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
k0s version: v1.23.5+k0s.0
worker added with:
token=$(k0s token create --role=worker)
docker run -d --name k0s-worker1 --hostname k0s-worker1 --privileged -v /var/lib/k0s docker.io/k0sproject/k0s:latest k0s worker $token
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k0s-worker9 Ready <none> 42m v1.23.5+k0s
v2202204173709187201 Ready control-plane 43m v1.23.5+k0s
kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-storage k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 40m
/etc/exports
/data/nfs-storage *(rw,sync,no_root_squash,no_subtree_check,insecure)
Output with
sudo k0s kubectl describe pod nfs-client-provisioner-6889579fdb-t7j74
Name: nfs-client-provisioner-6889579fdb-t7j74
Namespace: default
Priority: 0
Node: k0s-worker9/172.17.0.2
Start Time: Tue, 26 Apr 2022 08:45:49 +0200
Labels: app=nfs-client-provisioner
pod-template-hash=6889579fdb
Annotations: kubernetes.io/psp: 00-k0s-privileged
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/nfs-client-provisioner-6889579fdb
Containers:
nfs-client-provisioner:
Container ID:
Image: gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.1
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
PROVISIONER_NAME: k8s-sigs.io/nfs-subdir-external-provisioner
NFS_SERVER: 47.122.181.39
NFS_PATH: /data/nfs-storage
Mounts:
/persistentvolumes from nfs-client-root (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lr9tl (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
nfs-client-root:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 47.122.181.39
Path: /data/nfs-storage
ReadOnly: false
kube-api-access-lr9tl:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned default/nfs-client-provisioner-6889579fdb-t7j74 to k0s-worker9
Warning FailedMount 2m42s (x6 over 16m) kubelet Unable to attach or mount volumes: unmounted volumes=[nfs-client-root], unattached volumes=[nfs-client-root kube-api-access-lr9tl]: timed out waiting for the condition
Warning FailedMount 24s (x2 over 7m14s) kubelet Unable to attach or mount volumes: unmounted volumes=[nfs-client-root], unattached volumes=[kube-api-access-lr9tl nfs-client-root]: timed out waiting for the condition
command using helm:
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=47.122.181.39 \
--set nfs.path=/data/nfs-storage
without helm:
from https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/tree/v4.0.2/deploy
kubectl create -f rbac.yaml
kubectl create -f class.yaml
kubectl create -f deployment.yaml
class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
archiveOnDelete: "false"
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: default
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.1
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: k8s-sigs.io/nfs-subdir-external-provisioner
- name: NFS_SERVER
value: 47.122.181.39
- name: NFS_PATH
value: /data/nfs-storage
volumes:
- name: nfs-client-root
nfs:
server: 47.122.181.39
path: /data/nfs-storage
K0s cluster setup:
sudo curl -sSLf https://get.k0s.sh | sudo sh
sudo k0s install controller --enable-worker
sudo k0s start
sudo cp /var/lib/k0s/pki/admin.conf ~/admin.conf
export KUBECONFIG=~/admin.conf
token=$(k0s token create --role=worker)
docker run -d --name k0s-worker9 --hostname k0s-worker9 --privileged -v /var/lib/k0s docker.io/k0sproject/k0s:latest k0s worker $token
Have you tried:
nfs.mountOptions = {
nfsvers = 4
}
I also used this provisioner and it only works with NFSv4. Also check whether the mount point on your NFS server is the root (/) or not. If your provisioner is configured and ready, try recreating the PVC.
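For reference, a sketch of how those mount options could be passed through the Helm chart, plus a plain NFSv4 mount test from the machine running the worker (server address and path reuse the ones above):
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=47.122.181.39 \
--set nfs.path=/data/nfs-storage \
--set 'nfs.mountOptions={nfsvers=4}'
# Quick manual check that an NFSv4 mount of that export works at all:
sudo mount -t nfs4 47.122.181.39:/data/nfs-storage /mnt && ls /mnt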

nodeSelector constraint ignored on AKS on mixed node pools (windows/linux)?

I tried installing a basic NGINX ingress using Helm by running the following command:
helm install nginx-ingress --namespace ingress-basic ingress-nginx/ingress-nginx \
--set controller.service.loadBalancerIP='52.232.109.226' \
--set controller.nodeSelector."beta\.kubernetes\.io/os"='linux' \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"='linux' \
--set controller.replicaCount=1 \
--set rbac.create=true
Shortly after installing I noticed the pod was scheduled onto a Windows node instead of a Linux node:
wesley@Azure:~$ kubectl get pods -n ingress-basic -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ingress-ingress-nginx-admission-create-jcp6x 0/1 ContainerCreating 0 18s <none> akswin000002 <none> <none>
wesley@Azure:~$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
aks-agentpool-59412422-vmss000000 Ready agent 5h32m v1.17.11 10.240.0.4 <none> Ubuntu 16.04.7 LTS 4.15.0-1096-azure docker://19.3.12
aks-linuxpool-59412422-vmss000000 Ready agent 5h32m v1.17.11 10.240.0.128 <none> Ubuntu 16.04.7 LTS 4.15.0-1096-azure docker://19.3.12
akswin000000 Ready agent 5h28m v1.17.11 10.240.0.35 <none> Windows Server 2019 Datacenter 10.0.17763.1397 docker://19.3.11
akswin000001 Ready agent 5h28m v1.17.11 10.240.0.66 <none> Windows Server 2019 Datacenter 10.0.17763.1397 docker://19.3.11
akswin000002 Ready agent 5h28m v1.17.11 10.240.0.97 <none> Windows Server 2019 Datacenter 10.0.17763.1397 docker://19.3.11
Running a describe on the nginx pod revealed that the Node-Selectors field remains set to <none>.
Name: nginx-ingress-ingress-nginx-admission-create-jcp6x
Namespace: ingress-basic
Priority: 0
PriorityClassName: <none>
Node: akswin000002/10.240.0.97
Start Time: Fri, 16 Oct 2020 20:09:36 +0000
Labels: app.kubernetes.io/component=admission-webhook
app.kubernetes.io/instance=nginx-ingress
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/version=0.40.2
controller-uid=d03091cd-8138-4923-a369-afeca669099c
helm.sh/chart=ingress-nginx-3.7.1
job-name=nginx-ingress-ingress-nginx-admission-create
Annotations: <none>
**Status: Pending**
IP:
Controlled By: Job/nginx-ingress-ingress-nginx-admission-create
Containers:
create:
Container ID:
Image: docker.io/jettech/kube-webhook-certgen:v1.3.0
Image ID:
Port: <none>
Host Port: <none>
Args:
create
--host=nginx-ingress-ingress-nginx-controller-admission,nginx-ingress-ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
--namespace=$(POD_NAMESPACE)
--secret-name=nginx-ingress-ingress-nginx-admission
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
POD_NAMESPACE: ingress-basic (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-ingress-nginx-admission-token-8x6ct (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
nginx-ingress-ingress-nginx-admission-token-8x6ct:
Type: Secret (a volume populated by a Secret)
SecretName: nginx-ingress-ingress-nginx-admission-token-8x6ct
Optional: false
QoS Class: BestEffort
**Node-Selectors: <none>**
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SandboxChanged 5m (x5401 over 2h) kubelet, akswin000002 Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 31s (x5543 over 2h) kubelet, akswin000002 (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "nginx-ingress-ingress-nginx-admission-create-jcp6x": Error response from daemon: container a734c23d20338d7fed800752c19f5e94688fd38fe82c9e90fc14533bae90c6bc encountered an error during hcsshim::System::CreateProcess: failure in a Windows system call: The user name or password is incorrect. (0x52e) extra info: {"CommandLine":"cmd /S /C pauseloop.exe","User":"2000","WorkingDirectory":"C:\\","Environment":{"PATH":"c:\\Windows\\System32;c:\\Windows"},"CreateStdInPipe":true,"CreateStdOutPipe":true,"CreateStdErrPipe":true,"ConsoleSize":[0,0]}
I expected the pod to be scheduled onto a Linux node instead. Does anyone have a clue why this is happening? I saw no taints or anything, and this is a newly spun-up cluster. The only workaround for now seems to be to scale the Windows nodes back to 0, install the NGINX ingress, and then scale the Windows nodes up again.
Kubernetes version: 1.17.11
It works with the command below; I added admissionWebhooks.patch.nodeSelector.
https://learn.microsoft.com/en-us/azure/aks/ingress-basic#create-an-ingress-controller
helm install nginx-ingress ingress-nginx/ingress-nginx \
--namespace ingress-basic \
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
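To confirm the fix, it helps to check where each pod actually landed and whether the nodeSelector is now set on the admission job's pod; note that on newer clusters the beta.kubernetes.io/os label is superseded by kubernetes.io/os. A quick sketch (the pod name is the one from the output above and will differ after a reinstall):
kubectl get pods -n ingress-basic -o wide
kubectl get pod -n ingress-basic nginx-ingress-ingress-nginx-admission-create-jcp6x -o jsonpath='{.spec.nodeSelector}'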

Airflow Kubernetes executor, tasks in queued state and pod says invalid image and uses local executor

I have installed Airflow in Docker and am using the Kubernetes executor, but I am unable to trigger DAGs. DAG runs stay in the queued state. The KubernetesExecutor creates a pod, but the pod reports an invalid image, and if I describe the pod it uses the local executor instead of the Kubernetes executor. Please help.
Attaching the log for reference:
kubectl describe pod tablescreationschematablescreation-ecabd38a66664a33b6645a72ef056edc
Name: swedschematablescreationschematablescreation-ecabd38a66664a33b6645a72ef056edc
Namespace: default
Priority: 0
Node: 10.73.96.181
Start Time: Mon, 11 May 2020 18:22:15 +0530
Labels: airflow-worker=5888feda-6aee-49c8-a94b-39cbe5758062
airflow_version=1.10.10
dag_id=Swed-schema-tables-creation
execution_date=2020-05-11T12_52_09.829627_plus_00_00
kubernetes_executor=True
task_id=Schema_Tables_Creation
try_number=1
Annotations: <none>
Status: Pending
IP: 172.17.0.46
IPs:
IP: 172.17.0.46
Containers:
base:
Container ID:
Image: :
Image ID:
Port: <none>
Host Port: <none>
Command:
airflow
run
Swed-schema-tables-creation
Schema_Tables_Creation
2020-05-11T12:52:09.829627+00:00
--local
--pool
default_pool
-sd
/root/airflow/dags/User_Creation_dag.py
State: Waiting
Reason: InvalidImageName
Ready: False
Restart Count: 0
Environment:
AIRFLOW__CORE__EXECUTOR: LocalExecutor
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql://airflowkube:airflowkube@10.73.96.181:5434/airflowkube
Mounts:
/root/airflow/dags from airflow-dags (ro)
/root/airflow/logs from airflow-logs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-64cxg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
airflow-dags:
Type: HostPath (bare host directory volume)
Path: /data/Naveen/Airflow/dags
HostPathType:
airflow-logs:
Type: HostPath (bare host directory volume)
Path: /data/Naveen/Airflow/Logs
HostPathType:
default-token-64cxg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-64cxg
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/swedschematablescreationschematablescreation-ecabd38a66664a33b6645a72ef056edc to evblfnclnullnull1538
Warning Failed 2m15s (x12 over 4m28s) kubelet, evblfnclnullnull1538 Error: InvalidImageName
Warning InspectFailed 2m (x13 over 4m28s) kubelet, evblfnclnullnull1538 Failed to apply default image tag ":": couldn't parse image reference ":": invalid reference format
It looks like there is no image defined for the workers. You can set this in airflow.cfg: be sure to set both worker_container_repository and worker_container_tag. These are null by default, which results in the behaviour you're experiencing.
A working example:
AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY: "apache/airflow"
AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG: "1.10.10-python3.6"
AIRFLOW__KUBERNETES__RUN_AS_USER: "50000"
The airflow docs have some more detailed info on the available properties: https://airflow.apache.org/docs/stable/configurations-ref.html#kubernetes
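If you prefer configuring this in airflow.cfg instead of environment variables, the same settings live under the [kubernetes] section; a sketch for Airflow 1.10.x, mirroring the values above:
[kubernetes]
worker_container_repository = apache/airflow
worker_container_tag = 1.10.10-python3.6
run_as_user = 50000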

Azure Container Registry image pulls are very slow with image size ~150 MBs

When deploying an image to an AKS instance, the image pull from the ACR (Premium SKU) is very slow, even for "small" images around 150 MB in size.
Both the AKS resource and the ACR resource are in the Canada East region.
Here is an example:
root@076fff2831b2:/tmp# kubectl describe pod application-service-59bcf96874-pvrmb
Name: application-service-59bcf96874-pvrmb
Namespace: default
Priority: 0
Node: aks-41067869-1/10.255.13.163
Start Time: Tue, 11 Feb 2020 18:15:53 -0500
Labels: app.kubernetes.io/instance=application-service
app.kubernetes.io/name=application-service
pod-template-hash=59bcf96874
Annotations: <none>
Status: Running
IP: 10.255.13.175
IPs: <none>
Controlled By: ReplicaSet/application-service-59bcf96874
Containers:
application-service:
Container ID: docker://0e86526a293d9055d482a09f043f0be68c594244fe4216f8fb190bc2caf6b65b
Image: myacr01.azurecr.io/microservices/application-service:0.0.6
Image ID: docker-pullable://myacr01.azurecr.io/microservices/application-service@sha256:cfbb3ffa7adc52da9cc0b8d7f78376076ea712025b59df8e406c559d369f4085
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 11 Feb 2020 18:35:00 -0500
Finished: Tue, 11 Feb 2020 18:35:00 -0500
Ready: False
Restart Count: 5
Liveness: http-get https://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get https://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
PORT: 3000
undefined: undefined
Mounts:
/kvmnt from application-service-kv-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from application-service-token-9jk8j (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
application-service-kv-volume:
Type: FlexVolume (a generic volume resource that is provisioned/attached using an exec based plugin)
Driver: azure/kv
FSType:
SecretRef: &LocalObjectReference{Name:kvcreds,}
ReadOnly: false
Options: map[keyvaultname:testIt2 keyvaultobjectnames:APPLICATION-SVC-SQLDB-CS;INGESTION-CONSUMER-EHB-CS;INGESTION-PRODUCER-EHB-CS keyvaultobjecttypes:secret;secret;secret tenantid:REMOVED usepodidentity:false usevmmanagedidentity:false]
application-service-token-9jk8j:
Type: Secret (a volume populated by a Secret)
SecretName: application-service-token-9jk8j
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 20m default-scheduler Successfully assigned default/application-service-59bcf96874-pvrmb to aks-41067869-1
Normal Pulling 20m kubelet, aks-41067869-1 Pulling image "myacr01.azurecr.io/microservices/application-service:0.0.6"
Normal Pulled 4m39s kubelet, aks-41067869-1 Successfully pulled image "myacr01.azurecr.io/microservices/application-service:0.0.6"
Normal Started 3m36s (x4 over 4m33s) kubelet, aks-41067869-1 Started container application-service
Warning BackOff 3m4s (x11 over 4m30s) kubelet, aks-41067869-1 Back-off restarting failed container
Normal Pulled 2m52s (x4 over 4m32s) kubelet, aks-41067869-1 Container image "myacr01.azurecr.io/microservices/application-service:0.0.6" already present on machine
Normal Created 2m51s (x5 over 4m33s) kubelet, aks-41067869-1 Created container application-service
Some details were modified/removed for privacy reasons.
However, the thing to note is the ~15 minutes needed to go from "Pulling" to "Pulled" for an image from ACR.
This issue is occurring daily. The Azure Insights blade of the AKS instance shows a maximum of 26% node CPU and 14.32% node memory utilization over the last 7 days.
How can we go about troubleshooting this further to determine the possible causes of the delays?
Any help is greatly appreciated.
Thanks!
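One way to narrow this down is to separate registry/network latency from node-side behaviour. A rough sketch (the registry and image names reuse the ones above):
# Check registry health and connectivity from a machine with the Azure CLI:
az acr check-health --name myacr01
# Time a manual pull on one of the AKS nodes (via SSH) to see whether the
# delay is in the pull itself or in the kubelet reporting:
time docker pull myacr01.azurecr.io/microservices/application-service:0.0.6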

Test an image from Azure Container Registry

I created a simple Docker image from a "Hello World" Java application.
This is my Dockerfile:
FROM java:8
COPY . /var/www/java
WORKDIR /var/www/java
RUN javac HelloWorld.java
CMD ["java", "HelloWorld"]
I pushed the image (java-app) to Azure Container Registry.
$ az acr repository list --name AContainerRegistry --output table
Result
----------------
java-app
I want to deploy it
amhg$ kubectl run dockerproject --image=acontainerregistry.azurecr.io/java-app:v1
deployment.apps "dockerproject" created
amhg$ kubectl expose deployments dockerproject --port=80 --type=LoadBalancer
service "dockerproject" exposed
and when I look at the pods, the pod has crashed:
amhg$ kubectl get pods
NAME READY STATUS RESTARTS AGE
dockerproject-b6799d879-pt5rx 0/1 CrashLoopBackOff 8 19m
Is there a way to "test"/run the image from the central registry? Why does it crash?
Here is the output of describe pod:
amhg$ kubectl describe pod dockerproject-64fbf7649-spc7h
Name: dockerproject-64fbf7649-spc7h
Namespace: default
Node: aks-nodepool1-39744669-0/10.240.0.4
Start Time: Thu, 19 Apr 2018 11:53:58 +0200
Labels: pod-template-hash=209693205
run=dockerproject
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"dockerproject-64fbf7649","uid":"946610e4-43b7-11e8-9537-0a58ac1...
Status: Running
IP: 10.244.0.38
Controlled By: ReplicaSet/dockerproject-64fbf7649
Containers:
dockerproject:
Container ID: docker://1f2a7a6870a37e4d6b53fc834b0d4d3b681e9faaacc3772177a918e66357404e
Image: acontainerregistry.azurecr.io/java-app:v1
Image ID: docker-pullable://acontainerregistry.azurecr.io/java-app@sha256:eaf6fe53a59de287ad76a18de2c7f05580b1f25153624161aadcc7b8ef47b0c4
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 19 Apr 2018 12:35:22 +0200
Finished: Thu, 19 Apr 2018 12:35:23 +0200
Ready: False
Restart Count: 13
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-vkpjm (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-vkpjm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vkpjm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 43m default-scheduler Successfully assigned dockerproject2-64fbf7649-spc7h to aks-nodepool1-39744669-0
Normal SuccessfulMountVolume 43m kubelet, aks-nodepool1-39744669-0 MountVolume.SetUp succeeded for volume "default-token-vkpjm"
Normal Pulled 43m (x4 over 43m) kubelet, aks-nodepool1-39744669-0 Container image "acontainerregistry.azurecr.io/java-app:v1" already present on machine
Normal Created 43m (x4 over 43m) kubelet, aks-nodepool1-39744669-0 Created container
Normal Started 43m (x4 over 43m) kubelet, aks-nodepool1-39744669-0 Started container
Warning FailedSync 8m (x161 over 43m) kubelet, aks-nodepool1-39744669-0 Error syncing pod
Warning BackOff 3m (x184 over 43m) kubelet, aks-nodepool1-39744669-0 Back-off restarting failed container
When you run an application in a Pod, Kubernetes expects it to keep running as a daemon until you stop it somehow.
In your details about the pod I see this:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 19 Apr 2018 12:35:22 +0200
Finished: Thu, 19 Apr 2018 12:35:23 +0200
It means that your application exited with code 0 (which means "all is OK") right after starting. So the image was successfully downloaded (the registry is OK) and run, but the application exited, which is why Kubernetes keeps trying to restart the pod.
The only thing I can suggest is to find the reason why the application stops and fix it.
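As for "testing" the image from the registry: you can pull and run it locally, which should reproduce the same behaviour outside Kubernetes. A sketch, assuming the Azure CLI and Docker are installed:
# Log in to the registry and run the image locally:
az acr login --name acontainerregistry
docker run --rm acontainerregistry.azurecr.io/java-app:v1
# The program prints its output and exits with code 0, which is exactly what
# the describe output above reports (Reason: Completed, Exit Code: 0).
# Inside the cluster, the container's output can be read with:
kubectl logs dockerproject-64fbf7649-spc7h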
