Relation between Kubernetes and cron (Linux)

I am operating a Kubernetes cluster.
There are many terminating pods, and a lot of crond daemons are running on the VM.
Both /var/log/messages and /var/log/crond are empty.
I don't know why so many crond daemons are being spawned.
500 crond daemons are executing:
ps -ef | grep crond | wc -l
648
and the load average is 16.
I want to know the relation between crond and pod termination on Kubernetes. How could I determine that?
I checked /etc/rsyslog.conf - it's normal.

By default, cron emails the program output to the user who owns the particular crontab, so you can check whether any emails have been delivered under the default path /var/spool/mail.
When a long-running or never-ending script is started from cron, it can leave multiple cron processes in the process list, so it might be useful to get a tree view of the crond parent/child processes:
pstree -ap | grep crond
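For example, to see which crontabs exist and what each crond child is actually running (a rough sketch; the crontab paths below are common defaults and may differ on your distribution):
# list the per-user and system crontabs cron is picking up
$ ls /var/spool/cron /etc/cron.d
# show PID, parent PID, elapsed time and full command line of every crond process
$ ps -eo pid,ppid,etime,cmd | grep '[c]rond'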
I assume that you have high CPU utilization on your VM, which can degrade overall performance and affect the Kubernetes engine. Although Kubernetes provides a comprehensive mechanism for managing compute resources, it only distributes the resources allocated to a specific Node among the Pods consuming CPU and RAM on that Node.
To check general resource utilization on a particular Node, you can use this command:
kubectl describe node <node-name>
To check a Pod's termination reason, you can use a similar command:
kubectl describe pod <pod_name>
However, if you need to dig deeper into troubleshooting your Kubernetes cluster, I would recommend looking at the official guide.
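For a quick first pass, the checks above might look like this (the node and pod names are placeholders):
# node capacity and how much of it is already requested by pods
$ kubectl describe node <node-name> | grep -A 10 'Allocated resources'
# last state and termination reason of a problematic pod
$ kubectl describe pod <pod_name> | grep -A 10 'Last State'
# recent cluster events, which usually include kill/evict reasons
$ kubectl get events --sort-by=.metadata.creationTimestamp | tail -n 20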

Related

kube-api server high cpu

I want to know how I can check why the kube-apiserver on one of my ctrl nodes consumes more CPU than the others.
I have a cluster with 3 ctrl nodes and 4 worker nodes.
I have an nginx load balancer with the least_conn algorithm to distribute the requests to the ctrl nodes.
Monitoring the resources with the top command, I observe that the kube-apiserver process on the first ctrl node always shows CPU usage above 100%, while on the other ctrl nodes the kube-apiserver uses less than 20%.
I want to know why?
And how can I see that same representation of consumption (pods, containers, nodes) in Grafana?
After finding out what is happening in your cluster using kubectl top node and kubectl top pod, you can diagnose further with kubectl logs $pod -c $container on the pod.
At this point it is up to the container to provide information on what it is doing, so ideally you would collect metrics from the pods to get quick insight into what is happening in your cluster, using e.g. Grafana. You can also have a look at the resources assigned to your pod using kubectl get pod $pod -o jsonpath='{.spec.containers[*].resources}'.
In your case, the log messages of the Kubernetes apiserver should give you a hint. Probably something (another container or pod, maybe) is clogging up your API server.
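If you want a first look from the command line, something like the following can help (a sketch; it assumes metrics-server is installed and that the control plane runs as static pods named kube-apiserver-<node> in kube-system):
# compare apiserver CPU across the ctrl nodes
$ kubectl top pod -n kube-system | grep kube-apiserver
# inspect what the busy apiserver is doing
$ kubectl -n kube-system logs kube-apiserver-<ctrl-node-1> --tail=200
# raw request metrics can show which clients and resources dominate
$ kubectl get --raw /metrics | grep '^apiserver_request_total' | head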

How can I OOM kill a pod manually in Kubernetes

I'm trying to manually OOM-kill pods for testing purposes; does anyone know how I can achieve this?
You can run stress-ng in the pod. With this tool you can also stress CPU and I/O at the same time if you need to.
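For example, something along these lines (a sketch; it assumes stress-ng is available in the image, and the pod name and the 512M figure are placeholders: pick a size above the container's memory limit so the cgroup OOM killer is triggered):
# allocate and hold more memory than the pod's limit for two minutes
$ kubectl exec -it <pod_name> -- stress-ng --vm 1 --vm-bytes 512M --vm-keep --timeout 120s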

Slurm jobs queued but not running

I'm trying to install slurm on Virtualbox running Ubuntu. We're using it to run long-running jobs via a web interface and we use slurm to queue and run the jobs. I'm using VirtualBox to create a sandbox for development.
I've set slurm up, but when I queue a job and run squeue I get:
$ squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
2 debug test.sh pchandle PD 0:00 1 (Nodes required for job are DOWN, DRAINED or reserved for jobs in higher priority partitions)
When I run it on my actual hardware, the jobs run successfully.
The output of sinfo is:
$ sinfo
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
debug* up infinite 0 n/a
Yes, it says nodes are 0, but the output is the same on my actual hardware, and jobs run fine. Any suggestions on why it's saying 0 nodes?
Is this an issue with my setup, or is it simply not possible to run Slurm on VirtualBox due to hardware limitations? I'm running 4 CPUs. The only obvious difference I can see is that threads per core is only 1 (there are 2 on my local hardware).
Is there any way to debug why the nodes aren't running jobs? Or why there are no nodes available?
It turned out to be a configuration error.
In the config file /etc/slurm-llnl/slurm.conf, I'd left the configuration NodeName as the default NodeName=localhost[0-31]. Since I am running on a single host it should have been set to NodeName=localhost for a single node on the same machine.
Slurm Single Instance had a description of what the values should be set to, which helped me find the answer.
Install Slurm on a stand alone Ubuntu had the instructions I originally followed.
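For reference, a minimal single-node sketch of the relevant lines (the CPUs/ThreadsPerCore values are assumptions for a 4-CPU VirtualBox guest; adjust them to your VM):
# /etc/slurm-llnl/slurm.conf
NodeName=localhost CPUs=4 ThreadsPerCore=1 State=UNKNOWN
PartitionName=debug Nodes=localhost Default=YES MaxTime=INFINITE State=UP
# after editing, restart the daemons and confirm the node shows up
$ sudo systemctl restart slurmctld slurmd
$ sinfo
$ scontrol show node localhost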

Kubernetes Pods not using CPU more than 1m

My cluster is in AKS with 5 Nodes of size Standard_D4s_v3 and with K8s version 1.14.8.
As soon as a pod is started/restarted it shows Running (kubectl get pods), and until the pods reach the Running state the CPU usage shows 150m or as much as they require.
But when I top it (kubectl top po) after a pod has moved to the Running state, that pod shows only 1m CPU usage, while memory usage is where it should be, and the service is down as well.
kubectl logs -f <pod_name> returns nothing, but I can get a shell in the pods (kubectl exec -it ....).
It's totally normal behaviour: when you create a pod it needs more CPU resources to start up, and once it's created it doesn't need that many resources anymore.
You can always set CPU/memory requests and limits; more about it, with examples of how to do it, here.
Pod CPU/Memory requests define a set amount of CPU and memory that the pod needs on a regular basis.
When the Kubernetes scheduler tries to place a pod on a node, the pod requests are used to determine which node has sufficient resources available for scheduling.
Not setting a pod request will default it to the limit defined.
It is very important to monitor the performance of your application to adjust these requests. If insufficient requests are made, your application may receive degraded performance due to over scheduling a node. If requests are overestimated, your application may have increased difficulty getting scheduled.
Pod CPU/Memory limits are the maximum amount of CPU and memory that a pod can use. These limits help define which pods should be killed in the event of node instability due to insufficient resources. Without proper limits set pods will be killed until resource pressure is lifted.
Pod limits help define when a pod has lost control of resource consumption. When a limit is exceeded, the pod is prioritized for killing to maintain node health and minimize impact to pods sharing the node.
Not setting a pod limit defaults it to the highest available value on a given node.
Don't set a pod limit higher than your nodes can support. Each AKS node reserves a set amount of CPU and memory for the core Kubernetes components. Your application may try to consume too many resources on the node for other pods to successfully run.
Again, it is very important to monitor the performance of your application at different times during the day or week. Determine when the peak demand is, and align the pod limits to the resources required to meet the application's max needs.
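As an illustration, you can set requests and limits on an existing deployment from the command line (a sketch; the deployment and container names and the values are placeholders to be tuned against your own monitoring):
# request a baseline of 100m CPU / 256Mi RAM, and cap the container at 500m / 512Mi
$ kubectl set resources deployment <app-name> -c=<container-name> --requests=cpu=100m,memory=256Mi --limits=cpu=500m,memory=512Mi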

Possible OOM in GCP container – how to debug?

I have celery running in a Docker container on GCP with Kubernetes. Its workers have recently started to get kill -9'd; this looks like it has something to do with the OOMKiller. There are no OOM events in kubectl get events, which is to be expected if these events only appear when a pod exceeds its resources.limits.memory value.
So my theory is that the celery processes are being killed by Linux's own OOMKiller. This doesn't make sense though: if so much memory is consumed that the OOMKiller enters the stage, how is it possible that this pod was scheduled in the first place? (I'm assuming that Kubernetes does not allow scheduling of new pods if the sum of resources.limits.memory exceeds the amount of memory available to the system.)
However, I am not aware of any other plausible reason for these SIGKILLs than OOMKiller.
An example of celery error (there is one for every worker):
[2017-08-12 07:00:12,124: ERROR/MainProcess] Process 'ForkPoolWorker-7' pid:16 exited with 'signal 9 (SIGKILL)'
[2017-08-12 07:00:12,208: ERROR/MainProcess] Task handler raised error: WorkerLostError('Worker exited prematurely: signal 9 (SIGKILL).',)
Containers can be OOMKilled for two reasons.
If they exceed the memory limits set for them. Limits are specified on a per-container basis, and if the container uses more memory than the limit it will be OOMKilled. From the process's point of view this is the same as if the system ran out of memory.
If the system runs out of memory. There are two kinds of resource specifications in Kubernetes: requests and limits. Limits specify the maximum amount of memory the container can use before being OOMKilled. Requests are used to schedule Pods and default to the limits if not specified. Requests must be less than or equal to container limits. That means that containers could be overcommitted on nodes and OOMKilled if multiple containers are using more memory than their respective requests at the same time.
For instance, if both process A and process B have request of 1GB and limit of 2GB, they can both be scheduled on a node that has 2GB of memory because requests are what is used for scheduling. Having requests less than the limit generally means that the container can burst up to 2GB but will usually use less than 1GB. Now, if both burst above 1GB at the same time the system can run out of memory and one container will get OOMKilled while still being below the limit set on the container.
You can debug whether the container is being OOMKilled by examining the containerStatuses field on the Pod.
$ kubectl get pod X -o json | jq '.status.containerStatuses'
If the pod was OOMKilled it will usually say something to that effect in the lastState field. In your case it looks like it may have been an OOM error based on issues filed against celery (like this one).
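For a quick check (the pod name is a placeholder, and dmesg has to be run on the node itself, e.g. via SSH):
# per-container last termination state; an OOM kill shows up as reason OOMKilled
$ kubectl get pod <pod_name> -o json | jq '.status.containerStatuses[].lastState'
# if the node itself ran out of memory, the kernel log on the node records the kill
$ dmesg | grep -i 'killed process'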
