I am trying to use/understand xargs for printing various details for some pods I have running in kubernetes across different namespaces. E.g. this command gives:
$ kubectl get pods -A | grep Error | awk '{print $2 " -n=" $1}'
my-pod-kf8xch6y-qc6ms-k6ww2 -n=my-ns
my-pod-kf8xlg64-g0ss7-mdv1x -n=my-ns
my-pod-kldslg64-polf7-msdw3 -n=another-ns
which is correct/expected.
When I add xargs to the above command I get:
$ kubectl get pods -A | grep Error | awk '{print $2 " -n=" $1}' | xargs kubectl $1 get pod $0 -oyaml | grep phase
Error from server (NotFound): pods "bash" not found
phase: Failed
phase: Failed
Which is actually the expected output but I don't understand
Error from server (NotFound): pods "bash" not found
why is bash passed to xargs?
My suggestion is to avoid the xargs complications and combine as much as possible into a single awk script:
kubectl get pods -A | awk '/Error/{system("kubectl get pod "$2" -n="$1" -oyaml")}'|grep phase
Or get even more detailed results:
kubectl get pods -A | awk '/Error/{system("kubectl get pod "$2" -n="$1" -oyaml")}'|grep -E "(^ name|^ phase)"
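As for the "bash" in the error: $0 and $1 in the original pipeline are expanded by the interactive shell before xargs ever runs, and in an interactive bash session $0 is the shell's name (bash) while $1 is empty, so kubectl literally asks for a pod named "bash". If you do want to keep xargs, here is a rough sketch that lets xargs hand each pod/namespace pair to a shell of its own (just an illustration, not part of the answer above):
kubectl get pods -A | grep Error | awk '{print $2 " " $1}' | xargs -n2 sh -c 'kubectl get pod "$1" -n "$2" -oyaml | grep phase' sh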
I am trying to combine the following commands:
docker ps | grep track
that will give me
6b86b28a27b0 dev/jobservice/worker-jobtracking:3.5.0-SNAPSHOT "/tini -- /startup/s…" 25 seconds ago Up 2 seconds (health: starting)
jobservice_jobTrackingWorker_1
So then, I grab the id and use it in the next request as:
docker logs 6b8 | grep -A 3 'info'
So far, the easiest way I could find was to send those commands separately, but I wonder if there is a simpler way to do it.
I think that the main issue here is that I am trying to find the container based on part of its name.
So, to sum up, I would like to find and store the ID of a container based on its name and then use it to explore its logs.
Thanks!
Perhaps there are cleaner ways to do it, but this works.
To get the ID of a partially matching container name:
$ docker ps --format "{{.ID}} {{.Names}}" | grep "partial" | cut -d " " -f1
Then you can use it in another bash command:
$ docker logs $(docker ps --format "{{.ID}} {{.Names}}" | grep "partial" | cut -d " " -f1)
Or wrap it in a function:
$ function dlog() { docker logs $(docker ps --format "{{.ID}} {{.Names}}" | grep "$1" | cut -d " " -f1); }
Which can then be used as:
$ dlog partial
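If it is of any use, docker ps can also do the partial-name match itself via its --filter flag, which avoids the grep/cut entirely (a small variation on the same idea; "partial" is again just a placeholder):
$ docker logs $(docker ps -q --filter "name=partial" | head -n1)
The name filter does a substring match on container names, -q prints only the IDs, and head -n1 keeps it to a single container in case several match.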
In a nutshell, the pure bash approach to achieve what you want:
With sudo:
sudo docker ps | grep -i track - | awk '{print $1}' | head -1 | xargs sudo docker logs
Without sudo:
docker ps | grep -i track - | awk '{print $1}' | head -1 | xargs docker logs
Now let's break it down...
Let's see what containers I have running on my laptop for the Elixir programming language:
command:
sudo docker ps | grep -i elixir -
output:
0a19c6e305a2 exadra37/phoenix-dev:1.5.3_elixir-1.10.3_erlang-23.0.2_git "iex -S mix phx.serv…" 7 days ago Up 7 days 127.0.0.1:2000-2001->2000-2001/tcp Projects_todo-tasks_app
65ef527065a8 exadra37/st3-3211-elixir:latest "bash" 7 days ago Up 7 days SUBL3_1600981599
232d8cfe04d5 exadra37/phoenix-dev:1.5.3_elixir-1.10.3_erlang-23.0.2_git "mix phx.server" 8 days ago Up 8 days 127.0.0.1:4000-4001->4000-4001/tcp Staging_todo-tasks_app
Now let's find their ids:
command:
sudo docker ps | grep -i elixir - | awk '{print $1}'
output:
0a19c6e305a2
65ef527065a8
232d8cfe04d5
Let's get the first container ID:
command:
sudo docker ps | grep -i elixir - | awk '{print $1}' | head -1
NOTE: replace head -1 with head -2 | tail -1 (or sed -n 2p) to get the second line of the output...
output:
0a19c6e305a2
Let's see the logs for the first container in the list
command:
sudo docker ps | grep -i elixir - | awk '{print $1}' | head -1 | xargs sudo docker logs
NOTE: replace head -1 with tail -1 to get the logs for the last container in the list.
output:
[info] | module=WebIt.Live.Calendar.Socket function=mount/1 line=14 | Mount Calendar for date: 2020-09-30 23:29:38.229174Z
[debug] | module=Tzdata.ReleaseUpdater function=poll_for_update/0 line=40 | Tzdata polling for update.
[debug] | module=Tzdata.ReleaseUpdater function=poll_for_update/0 line=44 | Tzdata polling shows the loaded tz database is up to date.
Combining the different replies, I used:
function dlog() { docker ps | grep -i track - | awk '{print $1}' | head -1 | xargs docker logs | grep -i -A 4 "$1";}
to get the best of both worlds. So I have a function that lets me type 4 letters instead of 2 commands, with case-insensitive matching.
I can then use dlog keyword to get my logs.
I hardcoded track and -A 4 as I usually use that query, but I could have passed them as arguments to add modularity (my goal here was really simplicity).
Thanks for your help!
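For what it's worth, a parameterized version of the dlog above could look like this (just a sketch: the container keyword and the search keyword become arguments, and the grep context size defaults to 4):
function dlog() { docker ps | grep -i "$1" | awk '{print $1}' | head -1 | xargs docker logs | grep -i -A "${3:-4}" "$2"; }
It would then be called as dlog track info.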
I was trying to redirect the logs of pods in a k8s cluster into files named after each pod.
kubectl get pods | awk '{print $1}' | tail -2 | xargs -I {} kubectl logs {} > {}
This is the result.
demo@demo1:~/log$ ls
{}
What I need is: if these are the pod details
demo@demo1:~/log$ kubectl get pods
NAME READY STATUS RESTARTS AGE
pod1 1/1 Running 0 3d23h
pod2 1/1 Running 0 3d23h
The expected result is
demo@demo1:~/log$ ls
pod1 pod2
The files pod1 and pod2 should contain the logs of the respective pods.
The > {} in your command is handled by your own shell before xargs ever runs, so everything ends up in a single file literally named {}. Run the redirection inside a shell that xargs starts per pod instead:
kubectl get pods | awk '{print $1}' | tail -n +2 | xargs -I{} sh -c 'kubectl logs $1 > $1' -- {}
Courtesy of this answer:
for i in $(kubectl get po -oname | awk -F'/' '{print $2}'); do kubectl logs $i > $i; done
I want to filter the list of running nodes. I tried the command below, but it only shows the Ready status; I need the node name with it as well. Any help?
[root@techsl]# kubectl get nodes -o jsonpath='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'| tr ';' "\n" | grep "Ready=True"
Something like this is easier:
kubectl get nodes | grep -v NotReady | awk '{print $1}' | tail -n2
server1
server3
kubectl get nodes
NAME STATUS ROLES AGE VERSION
server1 Ready master 106d v1.14.9
server2 NotReady <none> 106d v1.14.9
server3 Ready <none> 106d v1.14.9
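Note that tail -n2 happens to work here because exactly two nodes are Ready; to simply drop the header line no matter how many nodes match, tail -n +2 is the safer variant:
kubectl get nodes | grep -v NotReady | awk '{print $1}' | tail -n +2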
kubectl get nodes -o jsonpath="{range .items[*]}{@.metadata.name}: {range @.status.conditions[4]}{@.type}; {end}{end}";
kubernetes-1-17-master: Ready; kubernetes-1-17-worker: Ready;
Or, the way you were doing it:
kubectl get nodes -o jsonpath="{range .items[*]}{@.metadata.name}:{range @.status.conditions[4]}{@.type}={@.status}; {end}{end}" | grep "Ready=True"
kubernetes-1-17-master:Ready=True; kubernetes-1-17-worker:Ready=True;
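If it helps, a jsonpath filter expression can also select the Ready condition by its type, so you don't have to rely on the hard-coded conditions[4] index; a sketch along the same lines:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}:{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}' | grep ":True"
This prints one name:status pair per node (e.g. server1:True), and the grep keeps only the Ready ones.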
In one of my tools I need the PID of a specific process in the system. I try to do this with the following command:
parasit@host:~/# ps -ef | grep beam.smp |grep -v grep |awk '{ print $2 }' |head -n1
11982
It works fine, but when I try to use the same command in a script, in the vast majority of cases I get the PID of grep instead of the target process (beam.smp in this case), despite the `grep -v grep`.
parasit@host:~/# cat getPid.sh
#!/bin/bash
PROC=$1
#GET PID
CMD="ps -ef | grep $PROC |grep -v grep |awk '{ print \$2 }' |head -n1"
P=`eval $CMD`
parasit@host:~/# bash -x ./getPid.sh beam.smp
+ PROC=beam.smp
+ CMD='ps -ef |grep beam.smp |grep -v grep |awk '\''{ print $2 }'\'' |head -n1'
++ eval ps -ef '|grep' beam.smp '|grep' -v grep '|awk' ''\''{' print '$2' '}'\''' '|head' -n1
+++ head -n1
+++ awk '{ print $2 }'
+++ grep -v grep
+++ grep beam.smp
+++ ps -ef
+ P=2189
Interestingly, it is not deterministic. I know it sounds strange, but sometimes it works OK and sometimes it doesn't; I have no idea what it depends on.
How is this possible? Is there any better method to get rid of "grep" from the results?
BR
Parasit
pidof -s is made for that (-s: single ID is returned):
pidof -s "beam.smp"
However, pidof also returns defunct (zombie, dead) processes. So here's a way to get the PID of the first alive-and-running process of a specified command:
# function in bash
function _get_first_pid() {
ps -o pid=,comm= -C "$1" | \
sed -n '/'"$1"' *$/{s:^ *\([0-9]*\).*$:\1:;p;q}'
}
# example
_get_first_pid "beam.smp"
-o pid=,comm=: list only the PID and COMMAND columns, i.e. only what we need to check; if all columns are listed, it is more difficult to process later on
-C "$1": only processes of the command specified with -C, i.e. only find the processes of that specific command, not everything
sed: print only the PID of the first line that does not have "defunct" or anything else after the base command name
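Plugged back into the original script, getPid.sh could then shrink to something like this (a sketch that simply wraps the function above; call it as bash getPid.sh beam.smp):
#!/bin/bash
# getPid.sh - print the PID of the first alive process of the given command name
_get_first_pid() {
    ps -o pid=,comm= -C "$1" | sed -n '/'"$1"' *$/{s:^ *\([0-9]*\).*$:\1:;p;q}'
}
PROC=$1
P=$(_get_first_pid "$PROC")
echo "$P"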
I am looking for a way to log and graphically display CPU and RAM usage of Linux processes over time. Since I couldn't find a simple tool to do so (I tried zabbix and munin but the installation failed), I started writing a shell script for it.
The script parses the output of the top command through awk and logs it into a CSV file. It figures out the PIDs of the processes through the ps command, then uses top and awk to log CPU and memory usage.
Here is what the script looks like:
#!/bin/sh
#A script to log the cpu and memory usage of linux processes namely - redis, logstash, elasticsearch and kibana
REDIS_PID=$(ps -ef | grep redis | grep -v grep | awk '{print $2}')
LOGSTASH_PID=$(ps -ef | grep logstash | grep -v grep | awk '{print $2}')
ELASTICSEARCH_PID=$(ps -ef | grep elasticsearch | grep -v grep | awk '{print $2}')
KIBANA_PID=$(ps -ef | grep kibana | grep -v grep | awk '{print $2}')
LOG_FILE=/var/log/user/usage.log
echo $LOG_FILE
top -b | awk -v redis="$REDIS_PID" -v logstash="$LOGSTASH_PID" '/redis|logstash/ {print $1","$9","$10","$12}'
How do I:
1. Print the resource usage for multiple processes? Specifying multiple variables in the awk pattern is not working; it prints the usage for the first PID only (redis in the above script).
2. Print the current timestamp when printing the resource details (through date +"%T")?
3. Print the process name along with the resource usage (Redis, Logstash, Elasticsearch or Kibana in the above case)?
4. Redirect the output of the above commands to a log file? I tried > $LOG_FILE but it didn't work.
Thoughts/Inputs?
Thanks in advance.
To figure out PIDs you can simplify your script greatly using pgrep:
REDIS_PID=$(pgrep -f redis)
LOGSTASH_PID=$(pgrep -f logstash)
ELASTICSEARCH_PID=$(pgrep -f elasticsearch)
KIBANA_PID=$(pgrep -f kibana)
EDIT: Sorry, I had to leave for some work and couldn't provide the full answer earlier.
In order to capture top's output, use the following script:
while :; do
    top -n 1 -b | awk -v redis="$REDIS_PID" -v logstash="$LOGSTASH_PID" \
        '$1 == redis || $1 == logstash {print $1","$9","$10","$12}' >> $LOG_FILE
    sleep 3
done
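For the remaining points (timestamp, process name, all four processes, and writing to the log file), here is a sketch that builds on the same approach; the PID lookups reuse the pgrep suggestion above (with head -n1 added in case pgrep matches more than one process), and the label mapping plus the date call are my additions rather than part of the original answer:
#!/bin/sh
# Log CPU and memory usage of redis, logstash, elasticsearch and kibana to a CSV file.
REDIS_PID=$(pgrep -f redis | head -n1)
LOGSTASH_PID=$(pgrep -f logstash | head -n1)
ELASTICSEARCH_PID=$(pgrep -f elasticsearch | head -n1)
KIBANA_PID=$(pgrep -f kibana | head -n1)
LOG_FILE=/var/log/user/usage.log

while :; do
    TS=$(date +"%T")
    top -b -n 1 | awk -v ts="$TS" \
        -v redis="$REDIS_PID" -v logstash="$LOGSTASH_PID" \
        -v elastic="$ELASTICSEARCH_PID" -v kibana="$KIBANA_PID" '
        $1 == ""       { next }                                  # skip blank lines in top output
        $1 == redis    { print ts ",redis,"         $9 "," $10 }
        $1 == logstash { print ts ",logstash,"      $9 "," $10 }
        $1 == elastic  { print ts ",elasticsearch," $9 "," $10 }
        $1 == kibana   { print ts ",kibana,"        $9 "," $10 }
    ' >> "$LOG_FILE"
    sleep 3
done
Each line logs the time, process name, %CPU and %MEM, so the CSV can be graphed later.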