Take this scenario:
I want to delete every running pod automatically from the command line, without having to type kubectl delete pod <pod_name> -n <namespace> for each pod.
You can use awk to filter pod names by the STATUS column. The following command lists the names of all Running pods in the $NAMESPACE namespace:
kubectl get pod -n $NAMESPACE | awk '$3=="Running"{print $1}'
Example:
for pod in $(kubectl get pod -n $NAMESPACE | awk '$3=="Running"{print $1}'); do
  kubectl delete pod -n $NAMESPACE "$pod"
done
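If you prefer a one-liner over the loop, the same awk filter can be piped into xargs (a compact equivalent of the loop above; -r is a GNU xargs extension that skips the delete when nothing matches):
kubectl get pod -n $NAMESPACE | awk '$3=="Running"{print $1}' | xargs -r kubectl delete pod -n $NAMESPACE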
OR
You may use jsonpath:
NAMESPACE=mynamespace
for pod in $(kubectl get pod -n $NAMESPACE -o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}{"\n"}'); do
  kubectl delete pod -n $NAMESPACE "$pod"
done
NOTE: The code above deletes every Running pod in the namespace given by $NAMESPACE.
Example:
kubectl get pod -n mynamespace
NAME        READY   STATUS      RESTARTS   AGE
foo-mh6j7   0/1     Completed   0          5d3h
nginx       1/1     Running     2          7d10h
mongo       2/2     Running     12         57d
busybox     1/1     Running     187        61d
A jsonpath query to print all pods in Running state:
kubectl get pod -n mynamespace -o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}{"\n"}'
nginx mongo busybox
Although you have not asked about the ready state, the following query prints each pod name together with the readiness of its containers, which you can then filter with grep (e.g. grep -v false):
kubectl get pod -n mynamespace -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.containerStatuses[*].ready}{"\n"}{end}'
foo-mh6j7 false
nginx true
mongo true true
busybox true
Similarly, ready pods (READY count equal to the total) can be matched via grep on the READY column:
kubectl get pod -n $NAMESPACE | grep -P '\s+([1-9]+)\/\1\s+'
NOTE: Neither of these solutions prevents pods from being respawned if they are managed by a ReplicaSet, Deployment, StatefulSet, etc. They will be deleted and then recreated by their controller.
You can filter and delete running pods in a single command using a field selector:
kubectl delete pods -n <NAMESPACE> --field-selector=status.phase=Running
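If you want to preview what would be deleted first, the same field selector works with get:
kubectl get pods -n <NAMESPACE> --field-selector=status.phase=Running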
Here is a shell script I made to achieve the task:
i=0
for pod in $(kubectl get pods | grep 'Running'); do
  # every 5th word of the output is a pod name (columns: NAME READY STATUS RESTARTS AGE)
  if [ $((i % 5)) -eq 0 ]; then
    kubectl delete pod "$pod"
  fi
  i=$((i + 1))
done
I found that when looping over the words of kubectl get pods | grep 'Running', every 5th word is a pod name (each line has five columns: NAME, READY, STATUS, RESTARTS, AGE).
So I basically wrote the script to take every 5th word from the loop and run whatever command I want on it.
Still, this feels like a naive approach. Feel free to share a better one.
To directly answer the question in the title, for people who found this question but actually want to delete every pod, not only the running ones, simply do:
kubectl delete pods -n <NAMESPACE> --all
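If you actually want to clear pods in every namespace, a simple loop over namespaces works as a sketch - be careful, this also hits system namespaces like kube-system:
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  kubectl delete pods --all -n "$ns"
done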
Related
I am trying to copy some files from my local Terraform directory into a preexisting configuration path in my Datadog resources.
When I try the below in my datadog-values.yaml, I do not see any of my configuration files copied into the location. I also cannot see any logs, even in debug mode, telling me whether it failed or the path was incorrect.
See datadog helm-charts
# agents.volumes -- Specify additional volumes to mount in the dd-agent container
volumes:
  - hostPath:
      path: ./configs
    name: openmetrics_config

# agents.volumeMounts -- Specify additional volumes to mount in all containers of the agent pod
volumeMounts:
  - name: openmetrics_config
    mountPath: /etc/datadog-agent/conf.d/openmetrics.d
    readOnly: true
What I've tried
I can manually copy the configuration files into the directory with a shell script like the one below. But of course if the Datadog pod names change on restart, I have to update it manually.
kubectl -n datadog -c trace-agent cp ./configs/bookie_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/broker_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/proxy_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/zookeeper_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/bookie_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/broker_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/proxy_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/zookeeper_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/bookie_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/broker_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/proxy_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/zookeeper_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl rollout restart deployment datadog-cluster-agent -n datadog
The volumes you use here don't work the way you expect. That ./configs hostPath refers to the node's filesystem, not your local directory; Kubernetes has no idea about your local machine.
But fear not.
There are a few ways of doing this, depending on your needs:
Terraformed config
Terraformed mount
Terraformed copy config action
Terraformed config
Having the config file terraformed means:
the config is updated in k8s whenever the file changes - we want terraform to track those changes
the config is uploaded before the service using it starts (it is a configuration file after all, so it configures something, I assume)
DISCLAIMER - the service won't restart after a config change (that's achievable, but it's another topic)
To achieve this, create a config map for every config:
resource "kubernetes_config_map" "config" {
metadata {
name = "some_name"
namespace = "some_namespace"
}
data = {
"config.conf" = file(var.path_to_config)
}
}
and then use it in your volumeMounts. I assume that you're working with the helm provider, so this should probably be something like:
set {
  name  = "agents.volumeMounts"
  value = [{
    "mountPath": "/where/to/mount"
    "name": kubernetes_config_map.config.metadata.0.name
  }]
}
In the example above I used a single config and a single volume for simplicity, but for_each should be enough to handle multiple.
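If you want to confirm the ConfigMap actually landed in the cluster after terraform apply, a quick kubectl check is enough (using the placeholder names from the example above):
kubectl get configmap some-name -n some-namespace -o yaml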
Terraformed mount
Another variant is when you don't want terraform to track the configurations. In that case what you want to do is:
Create a single storage (it can be mounted storage from your kube provider, or a volume created dynamically in terraform - choose your poison)
Mount this storage as a kubernetes volume (kubernetes_persistent_volume_v1 in terraform)
Set set {...} like in the previous section.
Terraformed copy config action
The last and my least favorite option is to call a copy action from terraform. It's a last resort... Provisioners.
Even the terraform docs say it's bad, yet it has one advantage: it's super easy to use. You can simply call your shell command here - it could be scp, rsync, or even (but please don't do it) kubectl cp.
To not encourage this solution any further, I'll just leave the doc of null_resource, which uses provisioner "remote-exec" (you can also use "local-exec"), here.
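If you do end up there, the local-exec command can at least discover the agent pods by label instead of hardcoding names like datadog-sdbh5 from the question. A rough sketch, assuming the chart labels the agent pods with app=datadog (check kubectl get pods -n datadog --show-labels first):
for pod in $(kubectl get pods -n datadog -l app=datadog -o jsonpath='{.items[*].metadata.name}'); do
  for conf in bookie_conf.yaml broker_conf.yaml proxy_conf.yaml zookeeper_conf.yaml; do
    kubectl -n datadog -c trace-agent cp ./configs/$conf $pod:/etc/datadog-agent/conf.d/openmetrics.d
  done
done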
I am writing a .gitlab-ci.yml file for my application, and in it I have to run kubectl to get all pods. After running the command I got the names of the pods I need, but now I have to run kubectl cp to copy a file into all three pods, and I don't know how to do this.
If anyone knows the way to do this, please reply.
Thanks
job:
  script:
    - |
      for pod in $(kubectl get po -n your-namespace -o jsonpath='{.items[*].metadata.name}')
      do
        kubectl cp $(pwd)/some-file.txt your-namespace/$pod:/
      done
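Since you mentioned exactly three pods, it may be safer to select them by label rather than copying into every pod in the namespace; a sketch for the script block, assuming your pods carry a label such as app=my-app (adjust to whatever your deployment actually sets):
for pod in $(kubectl get po -n your-namespace -l app=my-app -o jsonpath='{.items[*].metadata.name}')
do
  kubectl cp $(pwd)/some-file.txt your-namespace/$pod:/
done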
deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  script:
    - kubectl config get-contexts
    - kubectl config use-context path/to/agent/repository:agent-name
    - kubectl get pods
I have 2 nodes that I'm running development pods on. I'd like to be able to echo only the node that a pod is running on based on the name.
I can use kubectl get pod -o=custom-columns=NAME:.metadata.name,NODE:spec.nodeName -n my-namespace to pull back the names and nodes of all pods in that namespace, but I'd like to filter just the node name for specific pods. Using grep on the pod name works, but I'm not sure if it's possible to show only the node when filtering on a single pod name.
Option-1: Using custom-columns
kubectl get pod mypod -o custom-columns=":.spec.nodeName" --no-headers
Option-2: Using jsonpath
kubectl get pod mypod -o jsonpath='{.spec.nodeName}'
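If you only know part of the pod name, you can keep the custom-columns output from your question, grep it, and print just the node column (here mypod stands for whatever name fragment you are matching):
kubectl get pod -n my-namespace -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName --no-headers | grep mypod | awk '{print $2}'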
Can anyone help me understand whether it's possible to deploy an empty pod on a node in k8s for basic network debugging? PS: I should be able to exec into this pod after it's deployed.
You can just use kubectl run (on older kubectl versions with the --generator=run-pod/v1 flag; newer versions create a pod by default and the flag has been removed).
# Create an idle pod
$ kubectl run --generator=run-pod/v1 idle-pod -i --tty --image ubuntu -- bash
root@idle-pod:/# # Debug whatever you want inside the idle container
root@idle-pod:/# exit
$
# Exec into the idle pod
$ kubectl exec -i --tty idle-pod -- bash
root@idle-pod:/# # Debug whatever you want inside the idle container
root@idle-pod:/# exit
$
# Delete the idle pod
$ kubectl delete pod idle-pod
pod "idle-pod" deleted
$
Just deploy a pod with the container you need and a command which does nothing.
Save that spec to a yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: empty
spec:
  containers:
    - name: empty
      image: alpine
      command: ["cat"]
      tty: true   # keep cat blocked on the tty so the container stays up
Then apply that yaml with kubectl apply -f $filename.
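Once it is running, you can exec into it for your debugging (alpine ships with sh rather than bash):
kubectl exec -it empty -- sh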
I don't have access to the openebs namespace or the maya-apiserver. Can I run mayactl on my nodes to get the same information? If yes, how does mayactl know which PVCs/PVs I have access to? How does it protect other volumes from accidental deletion via mayactl volume delete?
You can do it from the maya-apiserver pod. Exec into it with the below command on the master node:
kubectl exec -it <pod name> -n openebs -- bash
Once you are inside the pod, you can run the required mayactl command.
Alternatively, you can run the command directly using the below format:
kubectl exec -it <pod name> -n openebs -- <required mayactl command>
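If you don't want to look up the pod name by hand, you can resolve it with a label selector first; this is a sketch assuming the apiserver pod carries the label name=maya-apiserver (verify with kubectl get pods -n openebs --show-labels):
MAYA_POD=$(kubectl get pods -n openebs -l name=maya-apiserver -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$MAYA_POD" -n openebs -- <required mayactl command>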