I have thousands of "INFO reaped unknown pid XXXXX" messages in my docker logs when I run docker logs -f. Is there any way to suppress these messages with a regex?
Example format: [DATE] [TIME] INFO reaped unknown pid XXXX
You could use grep -v -e '<REGEXP>'
Example:
docker logs <CONTAINER_ID> -f 2>&1 | grep -v -e "2019.*INFO"
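If matching on the year is too broad, the pattern can be anchored on the message text itself (a sketch, based on the example format above; adjust to your exact log lines):
docker logs <CONTAINER_ID> -f 2>&1 | grep -vE 'INFO reaped unknown pid [0-9]+'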
Related
I am trying to store the process ID in PID and then get the CPU and memory usage for all the process IDs that the grep command has listed, but I am facing an error. Any help would be appreciated.
#!/bin/bash
PID=`ps -eaf | grep firefox | grep -v grep | awk '{print $2}'`
usage= `ps -p $PID -o %cpu,%mem`
error:
error: process ID list syntax error
Usage:
ps [options]
Try 'ps --help <simple|list|output|threads|misc|all>'
or 'ps --help <s|l|o|t|m|a>'
for additional help text.
For more details see ps(1).
To do it one PID at a time:
#!/bin/bash
for PID in `ps -eaf | grep firefox | grep -v grep | awk '{print $2}'` ; do
    usage=`ps -p "$PID" -o %cpu,%mem`
done
This typically happens when firefox is not running: the PID does not exist and you get the following behaviour:
Prompt> ps -p -o %cpu,%mem    # as firefox does not run, $PID is empty
error: process ID list syntax error
Usage:
ps [options]
Try 'ps --help <simple|list|output|threads|misc|all>'
or 'ps --help <s|l|o|t|m|a>'
for additional help text.
For more details see ps(1).
The suggested command to get a comma-separated list of PIDs is pgrep -d, -f firefox.
Or collect the PID list into a variable:
pid_list=$(pgrep -d, -f firefox)
Now use the ps command to extract information about $pid_list:
ps -p $pid_list -o %cpu,%mem
Or in one line:
ps -p $(pgrep -d, -f firefox) -o %cpu,%mem
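Putting it together with a guard for the case where firefox is not running (a minimal sketch):
#!/bin/bash
pid_list=$(pgrep -d, -f firefox)
if [ -n "$pid_list" ]; then
    # pgrep found at least one firefox process
    ps -p "$pid_list" -o %cpu,%mem
else
    echo "firefox is not running" >&2
fi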
So I am trying to get the combined output of all the container logs, each prefixed with its container name, into a log file:
docker logs --tail 3 <container name> >> logst.log
docker logs takes a single container, so you would have to run it for each container. I guess I could do something along the lines of:
docker ps --format='{{.Names}}' | xargs -P0 -d '\n' -n1 sh -c 'docker logs "$1" | sed "s/^/$1: /"' _
docker ps --format='{{.Names}}' - print the container names
xargs - build command lines from the input
-d '\n' - treat each input line as one argument
-P0 - execute in parallel, running as many jobs as needed
remove this option if you don't intend to use docker logs --follow
it may cause problems; consider adding stdbuf -oL and sed -u to unbuffer the streams, as in the sketch after this list
-n1 - pass one argument to the underlying process
sh -c 'script' _ - execute the script for each line, with the line passed as the first positional argument
docker logs "$1" - get the logs
sed "s/^/$1: /" - prepend the container name to each log line
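For following logs, the unbuffered variant suggested above might look like this (a sketch; sed -u is GNU sed, and stdbuf -oL sed would be an alternative way to line-buffer the prefixing step):
docker ps --format='{{.Names}}' | xargs -P0 -d '\n' -n1 sh -c 'docker logs --follow "$1" | sed -u "s/^/$1: /"' _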
But a far better, industrial-grade solution would be to forward the docker logs to journald (queried with journalctl) or another logging solution and use that utility to aggregate and filter the logs.
Got it.
for i in $(docker ps -a -q); do { docker ps -a --format "{{.ID}}\t{{.Names}}" | grep "$i"; docker logs --timestamps --tail 1 "$i"; } >> logs.log; done
logs.log is a generic file name.
I need help with a command where I am trying to grep the PIDs of the ecm simulator and kill them using kubectl:
kubectl exec eric-service-0 -n cicd --kubeconfig /root/admin.conf -- bash -c "ps -ef | grep ecm | grep node | awk '{print $2}' "
Output of the above command:
root 9857 0 0 07:11 ? 00:00:00 bash -c /tmp/simulator/node-v8.11.3-linux-x64/bin/node /tmp/simulator/ecm_mod.js> /tmp/simulatorEcmResponse.txt
root 9863 9857 0 07:11 ? 00:00:00 /tmp/simulator/node-v8.11.3-linux-x64/bin/node /tmp/simulator/ecm_mod.js
Expected output is:
9857
9863
Then further I need to kill the PIDs:
kubectl exec eric-service-0 -n cicd --kubeconfig /root/admin.conf -- bash -c "ps -ef | grep ecm | grep node | awk '{print $2}' | xargs kill -9"
When I execute the same pipeline within the service pod it works, but it gives issues when I run it via kubectl from outside.
Could anyone please let me know what I am doing wrong here?
NOTE: There are 2 PIDs which need to be killed, as shown in the output below:
eric-service-0:/ # ps -ef | grep ecm | grep node
root 9857 0 0 07:11 ? 00:00:00 bash -c /tmp/simulator/node-v8.11.3-linux-x64/bin/node /tmp/simulator/ecm_mod.js> /tmp/simulatorEcmResponse.txt
root 9863 9857 0 07:11 ? 00:00:00 /tmp/simulator/node-v8.11.3-linux-x64/bin/node /tmp/simulator/ecm_mod.js
EDIT:
Output of the command as asked by @Cyrus below:
Posting this as a Community Wiki answer for better visibility. The solution was provided in the comments by @Cyrus.
In short, the OP wanted to kill/interrupt some processes using their PIDs, and wanted to do it from the cluster level on the specific pod/container that runs the ecm simulator. The original command fails because the string passed to bash -c is double-quoted, so the local shell expands $2 in awk '{print $2}' before kubectl runs; awk therefore receives '{print }' and prints the whole line instead of just the PID.
To do it, commands below were used:
exec - execute a command in a container
-- bash - run bash inside container
ps -ef - list all process on the system
grep - search for a specific pattern
awk - pattern scanning and processing language.
xargs - build and execute command lines from standard input
kill - send a signal to a process
In the ps manual you can find some information about the ps flags:
To see every process on the system using standard syntax:
ps -e
ps -ef
ps -eF
ps -ely
however each set of flags gives different output columns, as below:
-e
PID TTY TIME CMD
-ef
UID PID PPID C STIME TTY TIME CMD
Cyrus advised to use following command:
kubectl exec eric-service-0 -n cicd --kubeconfig /root/admin.conf -- bash -c "pgrep -f 'node.*ecm'"
bash -c - If the -c option is present, then commands are read from the first non-option argument command_string.
It was also explained in a comment:
pgrep looks through the currently running processes and lists the process IDs which match the selection criteria to stdout. From man pgrep. node.*ecm is a regex.
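To go one step further and kill the matched processes, the same pgrep output could be fed to kill (a sketch using the same pod, namespace and kubeconfig as above; note that on some systems the bash -c wrapper's own command line can also match the pattern, so it is worth running the plain pgrep first to verify the PID list):
kubectl exec eric-service-0 -n cicd --kubeconfig /root/admin.conf -- bash -c "pgrep -f 'node.*ecm' | xargs kill -9"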
I am using grep to remove a lot of log noise generated e.g. by NewRelic. I do so using the following command:
heroku logs --force-colors -t -a myApp -s app | grep --color=never web.1
Unfortunately the useful coloring of the logs gets lost somewhere, and the output is uncolored.
The --force-colors flag should force the heroku logs command to output colors even when piping the output elsewhere. The --color=never flag is supposed to tell grep not to apply its own coloring scheme.
I have tried all possible combinations with absence or presence of these two color flags, to no avail. Does anybody have a suggestion on how to solve this issue?
I have found a solution here:
script -q /dev/null heroku logs --force-colors -t -a myApp -s app | grep --color=never web.1
The color flags are not even necessary, so this works as well:
script -q /dev/null heroku logs -t -a myApp -s app | grep web.1
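Note that the positional-command form of script shown above is the BSD/macOS syntax; on Linux (util-linux) the command is usually passed with -c, so an equivalent might be (a sketch):
script -qc "heroku logs -t -a myApp -s app" /dev/null | grep --color=never web.1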
I am looping through folders with Java applications and getting the config file for each.
app1/config.yml
app2/config.yml
etc.
I then pull the port from this config file by using:
port= cat app1/config.yml | grep 90 | cut -d: -f2
I want to use the port to kill the application. I did find this code that does half of what I want it to do:
kill $(sudo lsof -t -i:4990)
I want to use the variable stored in port to execute the kill command, but I can't get it to work. What is the correct way to use the command? I have tried multiple ways:
kill $(sudo lsof -t -i:$port)
kill $(sudo lsof -t -i:port)
kill $(sudo lsof -t -i:"$port")
kill $(sudo lsof -t -i:'$port')
But none of these work; I keep getting errors.
Any help would be appreciated
You're not setting port correctly: you left out the $(...) around the command.
port=$(cat app1/config.yml | grep 90 | cut -d: -f2)
kill $(sudo lsof -t -i:$port)
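To run this for every application folder as described in the question, a loop might look like this (a minimal sketch; it assumes each config.yml contains a line like "port: 4990", which is a guess at your file format, so adjust the grep pattern accordingly):
#!/bin/bash
for cfg in app*/config.yml; do
    # pull the value after "port:" and strip whitespace (assumed key name)
    port=$(grep 'port:' "$cfg" | cut -d: -f2 | tr -d ' ')
    [ -n "$port" ] || continue
    # only kill if something is actually listening on that port
    pids=$(sudo lsof -t -i:"$port")
    if [ -n "$pids" ]; then
        kill $pids
    fi
done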