How to filter the running nodes - Linux

I want to filter the list of running nodes. I tried the command below, but it only shows the running status; I need it filtered with the node name as well. Any help?
[root@techsl]# kubectl get nodes -o jsonpath='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' | tr ';' "\n" | grep "Ready=True"

Something like this is easier:
kubectl get nodes | grep -v NotReady | awk '{print $1}' | tail -n2
server1
server3
kubectl get nodes
NAME      STATUS     ROLES    AGE    VERSION
server1   Ready      master   106d   v1.14.9
server2   NotReady   <none>   106d   v1.14.9
server3   Ready      <none>   106d   v1.14.9

kubectl get nodes -o jsonpath="{range .items[*]}{@.metadata.name}: {range @.status.conditions[4]}{@.type}; {end}{end}"
kubernetes-1-17-master: Ready; kubernetes-1-17-worker: Ready;
Or, following your original approach:
kubectl get nodes -o jsonpath="{range .items[*]}{@.metadata.name}:{range @.status.conditions[4]}{@.type}={@.status}; {end}{end}" | grep "Ready=True"
kubernetes-1-17-master:Ready=True; kubernetes-1-17-worker:Ready=True;
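If you only need the node names, a plain awk filter over the default output also works. This is a minimal sketch: it assumes the default column layout where STATUS is the second field, and it matches the exact status Ready, so a cordoned node reporting Ready,SchedulingDisabled would be skipped:
kubectl get nodes --no-headers | awk '$2 == "Ready" {print $1}'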

Related

"bash" not found when running xargs in kubernetes

I am trying to use/understand xargs for printing various details for some pods I have running in Kubernetes across different namespaces. For example, this command gives:
$ kubectl get pods -A | grep Error | awk '{print $2 " -n=" $1}'
my-pod-kf8xch6y-qc6ms-k6ww2 -n=my-ns
my-pod-kf8xlg64-g0ss7-mdv1x -n=my-ns
my-pod-kldslg64-polf7-msdw3 -n=another-ns
which is correct/expected.
When I add xargs to the above command I get:
$ kubectl get pods -A | grep Error | awk '{print $2 " -n=" $1}' | xargs kubectl $1 get pod $0 -oyaml | grep phase
Error from server (NotFound): pods "bash" not found
phase: Failed
phase: Failed
which is actually the expected output, but I don't understand this line:
Error from server (NotFound): pods "bash" not found
Why is bash passed to xargs?
The unquoted $0 and $1 in your command are expanded by your interactive shell before xargs ever runs, and $0 typically expands to bash, which is why kubectl goes looking for a pod named bash. I would suggest avoiding the xargs complications altogether and folding as much as possible into a single awk script:
kubectl get pods -A | awk '/Error/{system("kubectl get pod "$2" -n="$1" -oyaml")}'|grep phase
Or get even more detailed results:
kubectl get pods -A | awk '/Error/{system("kubectl get pod "$2" -n="$1" -oyaml")}' | grep -E "(^ name|^ phase)"
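If you would rather keep xargs, one sketch that avoids the $0/$1 expansion problem is to let xargs hand each pod/namespace pair to a small sh -c script (assuming the same two columns, pod name then namespace):
kubectl get pods -A | grep Error | awk '{print $2, $1}' | xargs -n2 sh -c 'kubectl get pod "$1" -n "$2" -oyaml' sh | grep phase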

How to get and use a docker container id from part of its name in a terminal pipe request?

I am trying to combine the following commands:
docker ps | grep track
that will give me
6b86b28a27b0 dev/jobservice/worker-jobtracking:3.5.0-SNAPSHOT "/tini -- /startup/s…" 25 seconds ago Up 2 seconds (health: starting)
jobservice_jobTrackingWorker_1
So then, I grab the id and use it in the next request as:
docker logs 6b8 | grep -A 3 'info'
So far, the easiest way I could find was to run those commands separately, but I wonder if there is a simple way to do it in one go.
I think the main issue here is that I am trying to find the name of the container based on only part of its name.
So, to sum up, I would like to find and store the id of a container based on its name, then use it to explore its logs.
Thanks!
Perhaps there are cleaner ways to do it, but this works.
To get the ID of a partially matching container name:
$ docker ps --format "{{.ID}} {{.Names}}" | grep "partial" | cut -d " " -f1
Then you can use it in another bash command:
$ docker logs $(docker ps --format "{{.ID}} {{.Names}}" | grep "partial" | cut -d " " -f1)
Or wrap it in a function:
$ function dlog() { docker logs $(docker ps --format "{{.ID}} {{.Names}}" | grep "$1" | cut -d " " -f1); }
Which can then be used as:
$ dlog partial
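As a side note, docker ps can also do the partial-name match itself via --filter, which drops the grep/cut step (a sketch; --filter name= does substring matching on container names):
$ docker logs $(docker ps -q --filter "name=partial" | head -1)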
In a nutshell, the pure bash approach to achieve what you want:
With sudo:
sudo docker ps | grep -i track - | awk '{print $1}' | head -1 | xargs sudo docker logs
Without sudo:
docker ps | grep -i track - | awk '{print $1}' | head -1 | xargs docker logs
Now let's break it down...
Let's see what containers I have running in my laptop for the Elixir programming language:
command:
sudo docker ps | grep -i elixir -
output:
0a19c6e305a2 exadra37/phoenix-dev:1.5.3_elixir-1.10.3_erlang-23.0.2_git "iex -S mix phx.serv…" 7 days ago Up 7 days 127.0.0.1:2000-2001->2000-2001/tcp Projects_todo-tasks_app
65ef527065a8 exadra37/st3-3211-elixir:latest "bash" 7 days ago Up 7 days SUBL3_1600981599
232d8cfe04d5 exadra37/phoenix-dev:1.5.3_elixir-1.10.3_erlang-23.0.2_git "mix phx.server" 8 days ago Up 8 days 127.0.0.1:4000-4001->4000-4001/tcp Staging_todo-tasks_app
Now let's find their ids:
command:
sudo docker ps | grep -i elixir - | awk '{print $1}'
output:
0a19c6e305a2
65ef527065a8
232d8cfe04d5
Let's get the first container ID:
command:
sudo docker ps | grep -i elixir - | awk '{print $1}' | head -1
NOTE: replace head -1 with head -2 | tail -1 (or sed -n 2p) to get only the second line of the output...
output:
0a19c6e305a2
Let's see the logs for the first container in the list
command:
sudo docker ps | grep -i elixir - | awk '{print $1}' | head -1 | xargs sudo docker logs
NOTE: replace head -1 with tail -1 to get the logs for the last container in the list.
output:
[info] | module=WebIt.Live.Calendar.Socket function=mount/1 line=14 | Mount Calendar for date: 2020-09-30 23:29:38.229174Z
[debug] | module=Tzdata.ReleaseUpdater function=poll_for_update/0 line=40 | Tzdata polling for update.
[debug] | module=Tzdata.ReleaseUpdater function=poll_for_update/0 line=44 | Tzdata polling shows the loaded tz database is up to date.
Combining the different replies, I used:
function dlog() { docker ps | grep -i track - | awk '{print $1}' | head -1 | xargs docker logs | grep -i -A 4 "$1"; }
to get the best of both worlds: a function that lets me type 4 letters instead of 2 commands, with case-insensitive matching.
I can then use dlog keyword to get my logs.
I hardcoded track and -A 4 since I usually run that exact query, but I could have passed them as arguments for more modularity (my goal here was really simplicity).
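For reference, a sketch of a fully parameterized variant, with the container filter as the first argument, the keyword as the second, and an optional number of context lines as the third:
function dlog() { docker ps | grep -i "$1" - | awk '{print $1}' | head -1 | xargs docker logs | grep -i -A "${3:-4}" "$2"; }
It would then be called as, for example, dlog track info.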
Thanks for your help!

Redirect logs of pods in my k8s to a file with pod name

I was trying to redirect logs of pods in a k8s into a file of their name.
kubectl get pods | awk '{print $1}' | tail -2 | xargs -I {} kubectl logs {} > {}
This is the result.
demo@demo1:~/log$ ls
{}
What I need is, given this pod list:
demo@demo1:~/log$ kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          3d23h
pod2   1/1     Running   0          3d23h
The expected result is
demo@demo1:~/log$ ls
pod1 pod2
where files pod1 and pod2 contain the logs of their respective pods.
kubectl get pods | awk '{print $1}' | tail -n +2 | xargs -I{} sh -c 'kubectl logs $1 > $1' -- {}
Courtesy of this answer.
for i in $(kubectl get po -oname | awk -F'/' '{print $2}'); do kubectl logs $i > $i; done
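The literal {} file from the first attempt appears because the > {} redirection is processed by the calling shell, not by xargs, so the {} in it is never substituted and all output goes to a single file literally named {}; wrapping the redirection in sh -c, as above, defers it until after substitution. A sketch of the same idea that also gives each file a .log suffix (the suffix is my own addition, not part of the question):
kubectl get pods --no-headers | awk '{print $1}' | xargs -I{} sh -c 'kubectl logs "$1" > "$1.log"' sh {}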

Grep the PIDs of simulator and kill the same using kubectl

I need help with a command where I am trying to grep the PIDs of the ecm simulator and kill them using kubectl:
kubectl exec eric-service-0 -n cicd --kubeconfig /root/admin.conf -- bash -c "ps -ef | grep ecm | grep node | awk '{print $2}' "
Output of the above command:
root 9857 0 0 07:11 ? 00:00:00 bash -c /tmp/simulator/node-v8.11.3-linux-x64/bin/node /tmp/simulator/ecm_mod.js> /tmp/simulatorEcmResponse.txt
root 9863 9857 0 07:11 ? 00:00:00 /tmp/simulator/node-v8.11.3-linux-x64/bin/node /tmp/simulator/ecm_mod.js
Expected output is:
9857
9863
Then further I need to kill the PIDs:
kubectl exec eric-service-0 -n cicd --kubeconfig /root/admin.conf -- bash -c "ps -ef | grep ecm | grep node | awk '{print $2}' | xargs kill -9"
When I execute the same command inside the service pod it works, but it gives issues when I run it via kubectl from outside.
Could anyone please let me know what I am doing wrong here?
NOTE: There are 2 PIDs which need to be killed, as shown in the output below:
eric-service-0:/ # ps -ef | grep ecm | grep node
root 9857 0 0 07:11 ? 00:00:00 bash -c /tmp/simulator/node-v8.11.3-linux-x64/bin/node /tmp/simulator/ecm_mod.js> /tmp/simulatorEcmResponse.txt
root 9863 9857 0 07:11 ? 00:00:00 /tmp/simulator/node-v8.11.3-linux-x64/bin/node /tmp/simulator/ecm_mod.js
EDIT:
Output of the command as asked by @Cyrus below:
Posting this as a Community Wiki answer for better visibility. The solution was provided in the comments by @Cyrus.
In short, the OP wanted to kill/interrupt some processes by PID, from the cluster level, inside a specific pod/container running the ecm simulator. The original command returned whole lines instead of PIDs because the $2 inside the double-quoted bash -c string is expanded by the local shell before kubectl ever runs, turning the remote awk program into '{print }'.
To do it, the commands below were used:
exec - execute a command in a container
-- bash - run bash inside container
ps -ef - list all process on the system
grep - search for a specific pattern
awk - pattern scanning and processing language.
xargs - build and execute command lines from standard input
kill - send a signal to a process
In the ps manual you can find some information about its flags:
To see every process on the system using standard syntax:
ps -e
ps -ef
ps -eF
ps -ely
However, each set of flags still gives a slightly different output, for example:
-e
PID TTY TIME CMD
-ef
UID PID PPID C STIME TTY TIME CMD
Cyrus advised using the following command:
kubectl exec eric-service-0 -n cicd --kubeconfig /root/admin.conf -- bash -c "pgrep -f 'node.*ecm'"
bash -c - If the -c option is present, then commands are read from the first non-option argument command_string.
He also explained in a comment:
pgrep looks through the currently running processes and lists the process IDs which match the selection criteria to stdout. From man pgrep. node.*ecm is a regex.
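Putting the pieces together, a sketch of the kill step built on that pgrep suggestion. The bracketed [n] is a small regex trick so that pgrep does not match its own bash -c wrapper, and nothing in the double-quoted string contains a $ for the local shell to expand:
kubectl exec eric-service-0 -n cicd --kubeconfig /root/admin.conf -- bash -c "pgrep -f '[n]ode.*ecm' | xargs kill -9"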

Issue in removing docker images remotely on CentOS

I've installed a Kubernetes cluster using Rancher on 5 different CentOS nodes (let's say node1, node2, ..., node5). For our CI runs, we need to clean up stale docker images before each run. I created a script that runs on node1, and password-less ssh is enabled from node1 to the rest of the nodes. The relevant section of the script looks something like this:
#!/bin/bash
helm ls --short --all | xargs -L1 helm delete --purge
echo "Deleting old data and docker images from Rancher host node."
rm -rf /var/lib/hadoop/* /opt/ci/*
docker images | grep localhost | awk '{print $3}' | xargs docker rmi -f
hosts=(node2 node3 node4 node5)
for host in ${hosts[*]}
do
echo "Deleting old data and docker images from ${host}"
ssh root@${host} docker images | grep localhost | awk '{print $3}' | xargs docker rmi -f
ssh root@${host} rm -rf /var/lib/hadoop/* /opt/ci/*
done
echo "All deletions are complete! Proceeding with installation."
sleep 2m
The problem is that while the docker rmi command inside the for loop runs against the other 4 nodes, I get the error Error: No such image: <image-id> for each of the images. But if I execute the same command directly on that node, it succeeds. I'm not sure what the issue is here. Any help is appreciated.
The problem is that only the first command in the ssh pipeline is executed remotely:
ssh root@${host} docker images | grep localhost | awk '{print $3}' | xargs docker rmi -f
The shell parses it as
ssh ssh-arguments | grep grep-arguments | awk awk-arguments | xargs xargs-arguments
so only docker images is executed remotely. The output of the remote docker images is then transferred to the local machine, where it is filtered by grep and awk, and docker rmi is executed on the local machine.
It is necessary to add quotes so the shell passes the entire pipeline as a single ssh argument (the $3 also needs escaping, so it is expanded by awk on the remote side rather than by the local shell):
ssh root@${host} "docker images | grep localhost | awk '{print \$3}' | xargs docker rmi -f"
