How to get the PID of a remotely executed command? - linux

If I do the following in Bash, then I get the PID of the remotely started mbuffer, and even though mbuffer is still running, I get the terminal back, which is what I want.
read -r pid < <(ssh 10.10.10.47 'nohup /opt/omni/bin/mbuffer -4 -s 128k -m 2G -v 0 -q -I 8023 >/tmp/mtest </dev/null 2>/tmp/mtest.err & echo $!')
echo $pid
Now I would like to do the same in Perl, so I try
use Capture::Tiny 'capture';
my ($stdout, $stderr, $exit) = capture {
system("read -r pid < <(ssh 10.10.10.47 'nohup /opt/omni/bin/mbuffer -4 -s 128k -m 2G -v 0 -q -I 8023 >/tmp/mtest </dev/null 2>/tmp/mtest.err & echo $!'); echo \$pid");
};
print "stdout $stdout\n";
print "stderr $stderr\n";
print "exit $exit\n";
Here I would have expected that $stdout would have given me the PID from the last echo command, but I got nothing.
Question
How do I get the PID of the remotely executed mbuffer in Perl, so that the Perl script doesn't wait for mbuffer to exit before continuing?

The problem seems to be twofold: inside Perl's double-quoted string, `$!` is interpolated as Perl's own error variable before the shell ever sees it, and `system()` hands the command to `/bin/sh`, which may not support the bash-only process substitution `<(...)`.
Creating a local helper script (run explicitly under bash) solved the problem.
#!/usr/bin/bash
# Redirection of stdin and stderr to files (preventing them from holding
# handles that connect, eventually, to the terminal).
read -r pid < <(ssh $1 "/usr/gnu/bin/nohup /opt/omni/bin/mbuffer -4 -s 128k -m 2G -v 0 -q -I 8023 >/tmp/mtest$2 </dev/null 2>/tmp/mtest.err & echo \$!")
echo $pid
and in Perl
my ($stdout, $stderr, $exit) = capture {
system("/comp/mbuffer-zfs-listen.sh 10.10.10.47 11");
};
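The failure can be reproduced without ssh or Perl at all: the `read -r pid < <(...)` idiom requires bash, and the `$!` must reach that shell unexpanded. A minimal local sketch of the same pattern, with `sleep` standing in for mbuffer and no remote host involved:

```shell
#!/bin/sh
# Run the background-and-echo-PID pattern under an explicit bash,
# with sleep standing in for the remote mbuffer. The single quotes
# keep $! from being expanded by the outer shell (or by Perl).
pid=$(bash -c 'nohup sleep 300 >/dev/null 2>&1 </dev/null & echo $!')

echo "background pid: $pid"   # control returns immediately
kill "$pid"                   # clean up the stand-in process
```

In the Perl version the same approach works with a single-quoted command string: `Capture::Tiny` already collects stdout, so the echoed PID lands in `$stdout` directly and the extra `read`/`echo $pid` round trip is unnecessary.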

Related

How to stop supervisor process with piped command

I want to send server logs to the telegram bot. Here's my supervisor config:
[program:telegram-log-nginx]
process_name=%(program_name)s_%(process_num)02d
command=bash -c 'tail -f /var/log/nginx/error.log | /usr/share/telegram_log.sh nginx'
autostart=true
autorestart=true
numprocs=1
When I stop supervisor
supervisorctl stop telegram-log-nginx:*
the process is still running:
ps aux | grep telegram
www-data 32151 0.0 0.0 21608 3804 ? S 20:53 0:00 /bin/bash /usr/share/telegram_log.sh nginx
Is there a proper way to stop all processes?
telegram_log.sh
#!/bin/bash
CHATID="chat"
KEY="key"
SERVICE=$1
TIME="10"
URL="https://api.telegram.org/bot$KEY/sendMessage"
while IFS= read -r line; do
read -r -d '' TEXT <<- EOM
Service: $SERVICE
$line
EOM
curl -s --max-time $TIME -d "chat_id=$CHATID&disable_web_page_preview=1&text=$TEXT" $URL >/dev/null
done
After stopping, the process tree shows tail and telegram_log.sh re-parented away from supervisord:
├─supervisord,1101 /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf
│ ├─php,643187 /var/www/web/artisan queue:work
│ ├─php,643188 /var/www/web/artisan queue:work
│ ├─php,643189 /var/www/web/artisan queue:work
├─systemd,640839 --user
│ └─(sd-pam),640841
├─systemd-journal,406
├─systemd-logind,1102
├─systemd-resolve,807
├─systemd-timesyn,684
│ └─{systemd-timesyn},689
├─systemd-udevd,440
├─tail,643203 -f /var/log/nginx/error.log
├─telegram_log.sh,643204 /usr/share/telegram_log.sh nginx
Assuming that you have a new enough version of bash that process substitutions update $!, you can have your parent script store the PIDs of both its direct children and signal them explicitly during shutdown:
#!/usr/bin/env bash
# make our stdin come directly from tail -f; record its PID
exec < <(exec tail -f /var/log/nginx/error.log); tail_pid=$!
# start telegram_log.sh in the background inheriting our stdin; record its PID
/usr/share/telegram_log.sh nginx & telegram_script_pid=$!
# close our stdin to ensure that we don't keep the tail alive -- only
# telegram_log.sh should have a handle on it
exec </dev/null
# define a cleanup function that shuts down both subprocesses
cleanup() { kill "$tail_pid" "$telegram_script_pid"; }
# tell the shell to call the cleanup function when receiving a SIGTERM, or exiting
trap cleanup TERM EXIT
# wait until telegram_log.sh exits and exit with the same status
wait "$telegram_script_pid"
This means your config file might become something more like:
command=bash -c 'exec < <(exec tail -f /var/log/nginx/error.log); tail_pid=$!; /usr/share/telegram_log.sh nginx & telegram_script_pid=$!; exec </dev/null; cleanup() { kill "$tail_pid" "$telegram_script_pid"; }; trap cleanup TERM EXIT; wait "$telegram_script_pid"'
@CharlesDuffy has provided the answer
bash -c 'tail -f /var/log/nginx/error.log | /usr/share/telegram_log.sh nginx'
should be
bash -c 'exec < <(exec tail -f /var/log/nginx/error.log); exec /usr/share/telegram_log.sh nginx'
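What makes this version respond to `supervisorctl stop` is that each `exec` replaces the shell in place: supervisord's SIGTERM goes straight to the process it spawned, rather than to a `bash -c` wrapper whose children outlive it. That PID-preserving behaviour of `exec` can be checked in isolation (a toy sketch, unrelated to the telegram script itself):

```shell
# exec replaces the current shell without forking, so the PID printed
# before and after the exec is the same process ID.
bash -c 'echo "before exec: $$"; exec sh -c "echo after exec: \$\$"'
```

Both lines print the same number: the inner `sh` inherits the outer bash's PID because no new process was created.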

Don't kill the processes created by the ps pipeline itself - linux

Please give me some advice.
I am trying to kill processes remotely (ssh to each hostname): find certain processes and kill them, with one condition: do not kill the java, sshd, and gnome processes.
Here is example (I just do echo except kill):
#!/bin/sh -x
HOSTFILE=$1
vars=`cat $HOSTFILE`
for i in $vars; do
ssh "$i" /bin/bash <<'EOF'
echo $(hostname)
ps aux | grep -e '^sys_ctl'| grep -v "java" | grep -v "sshd" | \
grep -v "gnome" | awk '{print $2$11}'| for i in `xargs echo`; do echo $i; done;
EOF
done
The result is:
host1:
21707/bin/bash
21717ps
21718grep
21722awk
21723/bin/bash
21724xargs
host2:
15241/bin/bash
15251ps
15252grep
15256awk
15257/bin/bash
15258xargs
89740-bash
98467sleep
98469sleep
98471sleep
98472sleep
98474sleep
98475sleep
From this output I want to kill only the sleep processes, not grep, awk, bash, xargs, or ps.
Can you suggest something elegant?
why not just : kill $(pgrep -f sleep)
or : pkill -f sleep
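`pgrep`/`pkill` sidestep the original problem entirely: they match against the process table itself, so the pipeline's own `grep`, `awk`, `bash`, and `xargs` never appear in the result. One caveat: `-f` matches anywhere in the full command line and can catch unrelated processes, while `-x` requires an exact process-name match. A small local demonstration:

```shell
# Background a throwaway sleep, list it by exact name, then kill it.
# pgrep never reports itself or any pipeline helper processes.
sleep 300 &
pgrep -x sleep     # only genuine sleep processes are printed
pkill -x sleep     # SIGTERM exactly those processes
```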

How to assign the PID of an ssh remote command to a bash variable while capturing its output

Introduction
My question is very similar to this one, except that I'd like the output from the command to be redirected to a local file instead of a remote one.
The questioner was asking for a way to retrieve the process ID with a command similar to this one, where the mbuffer command wouldn't cause hanging:
read -r pid < <(ssh 10.10.10.46 "mbuffer -4 -v 0 -q -I 8023 > /tmp/mtest & echo $!"); echo $pid
The answerer responded with the following command to resolve the problem
read -r pid \
< <(ssh 10.10.10.46 'nohup mbuffer >/tmp/mtest </dev/null 2>/tmp/mtest.err & echo $!')
Which is really helpful but still places files on the remote machine, not the local one.
My Attempts
The following is my attempt to capture a log of the output of $command:
read -r PID < <(ssh $remote 'nohup $command >&2 & echo $!' 2> $log)
Which sets PID to the process ID properly but doesn't produce a log.
Question
How can I capture a log on my local machine of the stdout of my $command while still assigning PID to the process ID of $command?
Another approach:
{ read -r pid;
# Do whatever you want with $pid of the process on remote machine
cat > my_local_system_log_file
} < <(ssh 10.10.10.46 "mkfifo /tmp/mtest; mbuffer -4 -v 0 -q -I 8023 &> /tmp/mtest & echo \$!; cat /tmp/mtest")
Basically, the first line is PID & further lines are logs from the process.
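The protocol is simply "first line is the PID, every later line is log output". That framing can be sketched locally, with placeholder log lines and no ssh or fifo involved (`/tmp/my_local.log` is an example path):

```shell
# The producer emits its PID first, then its log lines; the consumer
# peels off the first line with read and streams the rest to a file.
producer() { echo $$; echo "log line 1"; echo "log line 2"; }

producer | {
    read -r pid                  # line 1: the producer's PID
    echo "producer pid: $pid"
    cat > /tmp/my_local.log      # everything else: the log
}
```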

Working around sudo in shell script child process

The reason I am asking is that I'm running two persistent programs simultaneously, and in the child process a program is running that requires sudo rights.
#!/bin/bash
echo "Name the file:"
read filename
while [[ 1 -lt 2 ]]
do
if [ -f /home/max/dump/$filename.eth ]; then
echo "File already exist."
read filename
else
break
fi
done
#Now calling a new terminal for dumping
gnome-terminal --title="tcpdump" -e "sh /home/max/dump/dump.sh $filename.eth"
ping -c 1 0 > /dev/null   # waiting for tcpdump to create the file
#Packet analysis program is being executed
Script dump.sh
#!/bin/bash
filename=$1
echo password | sudo tcpdump -i 2 -s 60000 -w /home/max/dump/$filename -U host 192.168.3.2
#sudo still asks me for my password even though the password is piped into stdin
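One probable cause: `sudo` reads the password from the controlling terminal, not from stdin, so the pipe is simply ignored. `sudo -S` switches it to reading stdin; a sudoers rule is the cleaner fix. A sketch of a revised dump.sh under those assumptions (the interface `2`, paths, and the capture filter are taken from the question, not verified):

```shell
#!/bin/bash
# Sketch: sudo -S reads the password from stdin instead of the
# terminal; the capture filter must stay on the same line as tcpdump.
filename=$1
echo password | sudo -S tcpdump -i 2 -s 60000 -w /home/max/dump/$filename -U host 192.168.3.2

# Cleaner alternative (no password in the script at all): add a
# NOPASSWD sudoers rule via visudo, e.g.
#   max ALL=(root) NOPASSWD: /usr/sbin/tcpdump
# and then call plain "sudo tcpdump ...".
```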

Close gnome-terminal with specific title from another script/shell command

I need to close a specific gnome-terminal window having a unique name from any other bash/shell script.
Eg:
$] gnome-terminal --title "myWindow123" -x "watch ls /tmp"
...
...
gnome-terminal opened in the name "myWindow123"
All I need is to kill that terminal from my script. Is there expect-style scripting support in bash as well?
As a contestant for the ugliest hack of the day:
sh$ TERMPID=$(ps -ef |
grep gnome-terminal | grep myWindow123 |
head -1 | awk '{ print $2 }')
sh$ kill $TERMPID
A probably better alternative would be to record the PID of the terminal at launch time, and then kill by that pid:
sh$ gnome-terminal --title "myWindow123" -x "watch ls /tmp" &
sh$ echo $! > /path/to/my.term.pid
...
...
# Later, in a terminal far, far away
sh$ kill `cat /path/to/my.term.pid`
In the script that starts the terminal:
#!/bin/bash
gnome-terminal --title "myWindow123" --disable-factory -x watch ls /tmp &
echo ${!} > /var/tmp/myWindow123.pid
In the script that shall slay the terminal:
#!/bin/bash
if [ -f /var/tmp/myWindow123.pid ]; then
kill $(cat /var/tmp/myWindow123.pid && rm /var/tmp/myWindow123.pid)
fi
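Stripped of gnome-terminal, the record-then-kill pattern in these two scripts works for any long-running process; a self-contained sketch with `sleep` as the placeholder job:

```shell
# Script 1: start the job and record its PID in a well-known file.
sleep 300 &
echo $! > /var/tmp/myjob.pid

# Script 2 (possibly run much later, from another shell): read the
# PID back, kill the job, and remove the stale PID file.
if [ -f /var/tmp/myjob.pid ]; then
    kill "$(cat /var/tmp/myjob.pid)" && rm /var/tmp/myjob.pid
fi
```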
It's a bit of an ugly hack, but you can create a wrapper script that takes a nonce as an argument, and then kill that.
cat > ~/wrapper.sh <<'EOF'
#!/bin/sh
#Throw away the nonce, and then run the command given
shift
"$@"
EOF
chmod +x ~/wrapper.sh
#Make a random string, so we can kill it later
nonce=`tr -dc '0-9A-Za-z' < /dev/urandom | head -c 10`
gnome-terminal -- ~/wrapper.sh "$nonce" watch ls /tmp
#...
#...
#...
#Kill any command with our nonce as one of its arguments
pkill -f "$nonce"
