How to get Kubernetes pod metrics periodically and append them to a file - Linux

I am running a load test against a Kubernetes pod and I want to sample its CPU and memory usage every 5 minutes.
Currently I am doing this manually by running the Linux top command against the Kubernetes pod.
Is there any way, given a Kubernetes pod, to fetch its CPU/memory usage every X minutes and append it to a file?

You can install the metrics server and then write a small bash script that, in a loop, calls some combination of kubectl top pods --all-namespaces and appends the output to a file (a minimal sketch is below). Another option, if you want something with more meat, is to run Prometheus and/or kube-state-metrics and set it up to scrape metrics from all pods in the system.
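For example, a minimal sketch of such a script, assuming metrics-server is already installed; the pod name, namespace, output file, and 5-minute interval are placeholders to adapt to your setup:
#!/usr/bin/env bash
# Sample CPU/memory of one pod every 5 minutes and append to a file.
# POD, NAMESPACE, and OUTFILE are placeholders -- adjust to your environment.
POD="my-pod"
NAMESPACE="default"
OUTFILE="pod-usage.log"
while true; do
  # --no-headers keeps the output compact; prepend a timestamp for later analysis
  echo "$(date -Is) $(kubectl top pod "$POD" -n "$NAMESPACE" --no-headers)" >> "$OUTFILE"
  sleep 300
done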

Related

kube-api server high CPU

I want to know how I can check why Kubernetes on one of my ctrl nodes consumes more CPU than on the others.
I have a cluster with 3 ctrl nodes and 4 worker nodes.
I have an nginx load balancer with the least_conn algorithm to distribute the requests to the ctrl nodes.
Monitoring resources with the top command, I observe that on the first ctrl node the kube-apiserver process always shows CPU usage above 100%, unlike the other ctrl nodes where kube-apiserver uses less than 20%.
I want to know why.
And how can I see that same representation of consumption, whether per pod, container, or node, in Grafana?
After finding out what is happening in your cluster using kubectl top node and kubectl top pod, you can further diagnose what is happening with kubectl logs $pod -c $container on the pod.
At this point, it is up to the container to provide information on what it is doing, so ideally you would collect metrics in the pods to get a quick insight into what is happening on your cluster, using e.g. Grafana. You can also have a look at the resources assigned to your pod using kubectl get pod $pod -o jsonpath='{.spec.containers[*].resources}'.
In your case, the log messages of the Kubernetes API server should give you a hint. Probably something (another container/pod, maybe) is clogging up your API server.
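As a concrete starting point, a sketch of the kind of checks described above, assuming the API server runs as a static pod in kube-system and that the pod name below is a placeholder for your busy control node:
# Compare CPU across nodes and their apiserver pods
kubectl top node
kubectl -n kube-system top pod | grep kube-apiserver
# Inspect the logs of the busy apiserver instance (pod name is a placeholder)
kubectl -n kube-system logs kube-apiserver-ctrl-1 --tail=200
# The apiserver's own metrics show which requests it is serving most
kubectl get --raw /metrics | grep '^apiserver_request_total' | sort -t' ' -k2 -rn | head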

Spark on Kubernetes: Is it possible to keep the crashed pods when a job fails?

I have the strange problem that a Spark job run on Kubernetes fails with a lot of "Missing an output location for shuffle X" errors in jobs where there is a lot of shuffling going on. Increasing executor memory does not help. However, the same job run on just a single node of the Kubernetes cluster in local[*] mode runs fine, so I suspect it has to do with Kubernetes or the underlying Docker.
When an executor dies, the pods are deleted immediately so I cannot track down why it failed. Is there an option that keeps failed pods around so I can view their logs?
You can view the logs of the previously terminated pod like this:
kubectl logs -p <terminated pod name>
Also, you can use the spec.ttlSecondsAfterFinished field of a Job, as mentioned here.
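A minimal sketch of a Job using that field, with placeholder names, image, and command, so the Job and its pods stick around for an hour after they finish or fail:
apiVersion: batch/v1
kind: Job
metadata:
  name: spark-submit-job              # placeholder name
spec:
  ttlSecondsAfterFinished: 3600       # keep the Job and its pods for 1 hour after completion/failure
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: spark-submit
        image: my-spark-image:3.0.1                          # placeholder image
        command: ["/opt/spark/bin/spark-submit", "--help"]   # placeholder command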
Executors are deleted by default on any failure, and you cannot do anything about that unless you customize the Spark on K8s code or use some advanced K8s tooling.
What you can do (and it is most probably the easiest approach to start with) is configure some external log collector, e.g. Grafana Loki, which can be deployed with one click to any K8s cluster, or some ELK stack components. These will help you persist logs even after the pods are deleted.
There is a deleteOnTermination setting in the Spark application yaml. See the spark-on-kubernetes README.md.
deleteOnTermination - (Optional)
DeleteOnTermination specifies whether executor pods should be deleted in case of failure or normal termination. Maps to spark.kubernetes.executor.deleteOnTermination, which is available since Spark 3.0.
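If you submit through the spark-on-k8s operator, a sketch of where that field sits in the SparkApplication manifest; all names, image, and versions here are placeholders, not taken from the question:
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: shuffle-heavy-job                  # placeholder
spec:
  type: Scala
  mode: cluster
  image: my-registry/spark:3.0.1           # placeholder image
  mainClass: org.example.MyJob             # placeholder
  mainApplicationFile: local:///opt/spark/jars/my-job.jar   # placeholder
  sparkVersion: "3.0.1"
  driver:
    cores: 1
    memory: 2g
  executor:
    instances: 4
    cores: 2
    memory: 4g
    deleteOnTermination: false             # keep failed/finished executor pods so their logs remain viewable
When submitting with plain spark-submit instead, the same behaviour comes from --conf spark.kubernetes.executor.deleteOnTermination=false (Spark 3.0+).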

Kubernetes pods not using more than 1m CPU

My cluster is in AKS with 5 Nodes of size Standard_D4s_v3 and with K8s version 1.14.8.
As soon as a pod is started/restarted it shows Running (kubectl get pods), and until the pods are in the Running state the CPU usage shows 150m or as much as they require.
But when I check it (kubectl top po) after a pod has moved to the Running state, that specific pod shows only 1m of CPU usage, while memory usage is where it should be, and the service is down as well.
kubectl logs -f <pod_name> returns nothing, but I can exec into the pods (kubectl exec -it ....).
It's totally normal behavior: when you create a pod it needs more CPU resources to start up; once it's created it doesn't need that many resources anymore.
You can always use CPU/memory requests and limits; more about it, with examples of how to do it, here (a minimal manifest sketch also follows the quoted guidance below).
Pod CPU/Memory requests define a set amount of CPU and memory that the pod needs on a regular basis.
When the Kubernetes scheduler tries to place a pod on a node, the pod requests are used to determine which node has sufficient resources available for scheduling.
Not setting a pod request will default it to the limit defined.
It is very important to monitor the performance of your application to adjust these requests. If insufficient requests are made, your application may receive degraded performance due to over scheduling a node. If requests are overestimated, your application may have increased difficulty getting scheduled.
Pod CPU/memory limits are the maximum amount of CPU and memory that a pod can use. These limits help define which pods should be killed in the event of node instability due to insufficient resources. Without proper limits set, pods will be killed until resource pressure is lifted.
Pod limits help define when a pod has lost control of resource consumption. When a limit is exceeded, the pod is prioritized for killing to maintain node health and minimize impact to pods sharing the node.
Not setting a pod limit defaults it to the highest available value on a given node.
Don't set a pod limit higher than your nodes can support. Each AKS node reserves a set amount of CPU and memory for the core Kubernetes components. Your application may try to consume too many resources on the node for other pods to successfully run.
Again, it is very important to monitor the performance of your application at different times during the day or week. Determine when the peak demand is, and align the pod limits to the resources required to meet the application's max needs.
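As a concrete illustration, a minimal sketch of requests and limits on a container; the name, image, and values are placeholders, not tuned for the workload in the question:
apiVersion: v1
kind: Pod
metadata:
  name: sample-app                  # placeholder
spec:
  containers:
  - name: app
    image: nginx:1.21               # placeholder image
    resources:
      requests:
        cpu: 250m                   # what the scheduler reserves when placing the pod
        memory: 256Mi
      limits:
        cpu: 500m                   # CPU usage above this is throttled
        memory: 512Mi               # exceeding this gets the container OOM-killed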

Kubernetes: shut down node between job runs

I run a Kubernetes CronJob every week, on a Kubernetes cluster that has a single node in it. It runs on Google Compute Engine.
I would like to shut down the node completely between two job runs, for billing purposes (we pay the price as if the machine were used for the whole week, but it is actually useful for only a few hours).
Is it possible to boot the node, run the job, then shut down the node?
The cluster autoscaler can help with this:
https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler
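If the cluster is a GKE cluster, a sketch of how that might look, assuming a dedicated node pool for the CronJob that is allowed to scale to zero (cluster, zone, and pool names are placeholders; scaling to zero typically requires that system pods can run elsewhere):
# Enable the autoscaler on the node pool that runs the CronJob
gcloud container clusters update my-cluster \
  --zone europe-west1-b \
  --node-pool cron-pool \
  --enable-autoscaling --min-nodes 0 --max-nodes 1
# The autoscaler adds a node when the CronJob's pod is pending and removes it again once the node sits idle.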

I/O monitoring on Kubernetes / CoreOS nodes

I have a Kubernetes cluster, provisioned with kops, running on CoreOS workers. From time to time I see significant load spikes that correlate with I/O spikes reported in Prometheus from the node_disk_io_time_ms metric. The thing is, I seem to be unable to use any metric to pinpoint where this I/O workload actually originates. Metrics like container_fs_* seem to be useless, as I always get zero values for actual containers and only get data for the whole node.
Any hints on how I can approach the issue of locating what is to blame for I/O load on a kube cluster / CoreOS node are very welcome.
If you are using the nginx ingress you can configure it with
enable-vts-status: "true"
This will give you a bunch of Prometheus metrics for each pod that is behind an ingress. The metric names start with nginx_upstream_.
In case it is the cronjob creating the spikes, install the node-exporter DaemonSet and check the container_fs_ metrics.
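A sketch of where that setting goes, assuming a typical nginx ingress controller installation whose ConfigMap is named nginx-configuration in the ingress-nginx namespace (both names depend on how the controller was deployed):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration        # placeholder; must match the ConfigMap the controller reads
  namespace: ingress-nginx         # placeholder namespace
data:
  enable-vts-status: "true"        # exposes per-upstream (per-pod) nginx_upstream_* metrics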
