How to stop a supervisor process with a piped command - Linux

I want to send server logs to a Telegram bot. Here's my supervisor config:
[program:telegram-log-nginx]
process_name=%(program_name)s_%(process_num)02d
command=bash -c 'tail -f /var/log/nginx/error.log | /usr/share/telegram_log.sh nginx'
autostart=true
autorestart=true
numprocs=1
When I stop the program with
supervisorctl stop telegram-log-nginx:*
the process is still running:
ps aux | grep telegram
www-data 32151 0.0 0.0 21608 3804 ? S 20:53 0:00 /bin/bash /usr/share/telegram_log.sh nginx
Is there a proper way to stop all processes?
telegram_log.sh
#!/bin/bash
CHATID="chat"
KEY="key"
SERVICE=$1
TIME="10"
URL="https://api.telegram.org/bot$KEY/sendMessage"
# Forward each line read on stdin to the Telegram sendMessage API
while IFS= read -r line; do
# Build a two-line message: the service name, then the log line itself
read -r -d '' TEXT <<- EOM
Service: $SERVICE
$line
EOM
curl -s --max-time $TIME -d "chat_id=$CHATID&disable_web_page_preview=1&text=$TEXT" "$URL" >/dev/null
done
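The script just reads log lines from stdin, so it can be tested on its own, outside supervisor, for example (with CHATID and KEY filled in):
echo "upstream timed out" | /usr/share/telegram_log.sh nginx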
pstree output after the stop shows that tail and telegram_log.sh are no longer children of supervisord:
├─supervisord,1101 /usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf
│ ├─php,643187 /var/www/web/artisan queue:work
│ ├─php,643188 /var/www/web/artisan queue:work
│ ├─php,643189 /var/www/web/artisan queue:work
├─systemd,640839 --user
│ └─(sd-pam),640841
├─systemd-journal,406
├─systemd-logind,1102
├─systemd-resolve,807
├─systemd-timesyn,684
│ └─{systemd-timesyn},689
├─systemd-udevd,440
├─tail,643203 -f /var/log/nginx/error.log
├─telegram_log.sh,643204 /usr/share/telegram_log.sh nginx

Assuming that you have a new enough version of bash that process substitutions update $!, you can have your parent script store the PIDs of both its direct children and signal them explicitly during shutdown:
#!/usr/bin/env bash
# make our stdin come directly from tail -f; record its PID
exec < <(exec tail -f /var/log/nginx/error.log); tail_pid=$!
# start telegram_log.sh in the background inheriting our stdin; record its PID
/usr/share/telegram_log.sh nginx & telegram_script_pid=$!
# close our stdin to ensure that we don't keep the tail alive -- only
# telegram_log.sh should have a handle on it
exec </dev/null
# define a cleanup function that shuts down both subprocesses
cleanup() { kill "$tail_pid" "$telegram_script_pid"; }
# tell the shell to call the cleanup function when receiving a SIGTERM, or exiting
trap cleanup TERM EXIT
# wait until telegram_log.sh exits and exit with the same status
wait "$telegram_script_pid"
This means your config file might become something more like:
command=bash -c 'exec < <(exec tail -f /var/log/nginx/error.log); tail_pid=$!; /usr/share/telegram_log.sh nginx & telegram_script_pid=$!; exec </dev/null; cleanup() { kill "$tail_pid" "$telegram_script_pid"; }; trap cleanup TERM EXIT; wait "$telegram_script_pid"'

@CharlesDuffy has provided the answer:
bash -c 'tail -f /var/log/nginx/error.log | /usr/share/telegram_log.sh nginx'
should be
bash -c 'exec < <(exec tail -f /var/log/nginx/error.log); exec /usr/share/telegram_log.sh nginx'
With the nested execs no intermediate shell survives: the bash that supervisord starts becomes telegram_log.sh itself, so supervisorctl stop signals the script directly, and tail exits on SIGPIPE the next time it writes to the closed pipe.
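After restarting the program with the new command, stopping it should leave nothing behind, which can be checked with pgrep (a quick sketch; -a prints the full command line):
supervisorctl stop telegram-log-nginx:*
pgrep -af 'telegram_log|tail -f /var/log/nginx'   # no output expected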

Related

Why is a defunct process generated when calling exec in a shell script?
Because some extra configuration and shared libraries have to be set up and preloaded before starting snmpd, I use a shell script like the one below. The problem is that a zombie process is generated every time the script starts.
As far as I know, exec replaces the original shell process 26452, so why is a child process 26453 created, and why does it become a zombie?
$ ps -ef | grep snmpd
root 26452 12652 0 10:24 pts/4 00:00:00 snmpd udp:161,udp6:161 -f -Ln -I -system_mib ifTable -c /opt/snmp/config/snmpd.conf
root 26453 26452 0 10:24 pts/4 00:00:00 [snmpd_wapper.sh] <defunct>
How can I avoid the zombie process?
cat /home/xpeng/snmpd_wapper.sh
#!/bin/bash
( sleep 2;/opt/snmp/bin/snmpusm -v 3 -u myuser -l authNoPriv -a MD5 -A xpeng localhost create top myuser >/dev/null 2>&1; \
/opt/snmp/bin/snmpvacm -v 3 -u myuser -l authNoPriv -a MD5 -A xpeng localhost createSec2Group 3 top RWGroup >/dev/null 2>&1; \
/opt/snmp/bin/snmpvacm -v 3 -u myuser -l authNoPriv -a MD5 -A xpeng localhost createView all .1 80 >/dev/null 2>&1; \
/opt/snmp/bin/snmpvacm -v 3 -u myuser -l authNoPriv -a MD5 -A xpeng localhost createAccess RWGroup 3 1 1 all all none >/dev/null 2>&1 ) &
LIBRT=/usr/lib64
if [ "$(. /etc/os-release; echo $NAME)" = "Ubuntu" ]; then
LIBRT=/usr/lib/x86_64-linux-gnu
fi
echo $$ > /tmp/snmpd.pid
export LD_PRELOAD=$LD_PRELOAD:$LIBRT/librt.so:/opt/xpeng/lib/libxpengsnmp.so
exec -a "snmpd" /opt/snmp/sbin/snmpd udp:161,udp6:161 -f -Ln -I -system_mib,ifTable -c /opt/snmp/config/snmpd.conf
It's a parent process' responsibility to wait for any child processes. The child process will be a zombie from the time it dies until the parent waits for it.
You started a child process, but then you used exec to replace the parent process. The new program doesn't know that it has children, so it doesn't wait. The child therefore becomes a zombie until the parent process dies.
Here's an MCVE:
#!/bin/sh
sleep 1 & # This process will become a zombie
exec sleep 30 # Because this executable won't `wait`
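Run the MCVE in the background and look at the process table during the 30-second window to watch the zombie appear (assuming the snippet is saved as mcve.sh; the [d] keeps grep from matching itself):
./mcve.sh &
sleep 2
ps -ef | grep '[d]efunct'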
You can instead do a double fork:
#!/bin/sh
( # Start a child shell
sleep 1 & # Start a grandchild process
) # Child shell dies, grandchild is given to `init`
exec sleep 30 # This process now has no direct children
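Applied to the wrapper script from the question, the same trick means wrapping the backgrounded configuration block in one more subshell, so that by the time exec runs, the script has no direct children left. A sketch, with the snmpusm/snmpvacm calls abbreviated:
#!/bin/bash
( # child shell: exits as soon as it has backgrounded the grandchild
( sleep 2; /opt/snmp/bin/snmpusm -v 3 -u myuser -l authNoPriv -a MD5 -A xpeng localhost create top myuser >/dev/null 2>&1 ) & # grandchild, reparented to init
)
# ... pid file and LD_PRELOAD setup unchanged ...
exec -a "snmpd" /opt/snmp/sbin/snmpd udp:161,udp6:161 -f -Ln -I -system_mib,ifTable -c /opt/snmp/config/snmpd.conf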

How to use multiple exec commands in an upstart script?

Here is what I tried in order to run multiple exec commands, but I am getting output for email and not for SMS. Is there a way to run both exec commands?
description "starts a kafka consumer for email and sms "
respawn
start on runlevel [2345]
stop on runlevel [!2345]
env FOUNDATION_HOME=/opt/home/configs
env VIRTUAL_ENV=/opt/home/virtualenvs/analytics
# run as non privileged user
setuid xxx
setgid xxx
console log
chdir /opt/xxx
exec stdbuf -oL /opt/xxx/virtualenvs/analytics/bin/python -m yukon.pipelinerunnerexternal /opt/xxx/configs/datastream.pheme_sms > /tmp/sms.out 2>&1
exec stdbuf -oL /opt/xxx/virtualenvs/analytics/bin/python -m yukon.pipelinerunnerexternal /opt/xxx/configs/datastream.pheme_email > /tmp/email.out 2>&1
post-start script
PID=`status kafka_upstart | egrep -oi '([0-9]+)$' | head -n1`
echo $PID > /var/tmp/kafka_upstart.pid
end script
post-stop script
rm -f /var/tmp/kafka_upstart.pid
end script
You can try concatenating them with && (assuming they're not blocking indefinitely):
exec stdbuf -oL /opt/xxx/virtualenvs/analytics/bin/python -m yukon.pipelinerunnerexternal /opt/xxx/configs/datastream.pheme_sms > /tmp/sms.out 2>&1 && stdbuf -oL /opt/xxx/virtualenvs/analytics/bin/python -m yukon.pipelinerunnerexternal /opt/xxx/configs/datastream.pheme_email > /tmp/email.out 2>&1
Or put the commands in a separate script, kafkalaunch.sh, then run the script:
exec kafkalaunch.sh
That is more elegant, in my opinion.
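A minimal kafkalaunch.sh along those lines could start both consumers in the background and then wait on them, so the wrapper stays alive for upstart to track (a sketch reusing the paths from the question):
#!/bin/bash
# Start both pipeline runners in the background, each with its own log.
stdbuf -oL /opt/xxx/virtualenvs/analytics/bin/python -m yukon.pipelinerunnerexternal /opt/xxx/configs/datastream.pheme_sms > /tmp/sms.out 2>&1 &
stdbuf -oL /opt/xxx/virtualenvs/analytics/bin/python -m yukon.pipelinerunnerexternal /opt/xxx/configs/datastream.pheme_email > /tmp/email.out 2>&1 &
# Block until both children exit so upstart keeps supervising this process.
wait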

Start and stop openconnect using Bash

I am trying to achieve the following:
./vpnconnect.sh start should establish a VPN connection to a server.
./vpnconnect.sh stop should terminate the VPN connection.
Here is the attempted shell script, which doesn't work as expected.
It fails with these errors:
~$ ./vpnconnect.sh stop
Stopping VPN connection:
./vpnconnect.sh: 22: ./vpnconnect.sh: root: not found
./vpnconnect.sh: 26: ./vpnconnect.sh: 14128: not found
The script:
#!/bin/sh
#
#
#
#
PIDOCN=""
VAR2=""
# Start the VPN
start() {
echo "Starting VPN Connection"
eval $(echo 'TestVpn&!' | sudo openconnect -q -b --no-cert-check 127.0.0.1 -u myUser --passwd-on-stdin)
success $"VPN Connection established"
}
# Stop the VPN
stop() {
echo "Stopping VPN connection:"
VAR2=eval $(sudo ps -aef | grep openconnect)
echo $VAR2
eval $(sudo kill -9 $VAR2)
PIDOCN=eval $(pidof openconnect)
echo $PIDOCN
eval $(sudo kill -9 $PIDOCN)
}
### main logic ###
case "$1" in
start)
start
;;
stop)
stop
;;
status)
status openconnect
;;
restart|reload|condrestart)
stop
start
;;
*)
echo $"Usage: $0 {start|stop|restart|reload|status}"
exit 1
esac
exit 0
The error messages:
./vpnconnect.sh: 22: ./vpnconnect.sh: root: not found
./vpnconnect.sh: 26: ./vpnconnect.sh: 14128: not found
Come from these lines:
VAR2=eval $(sudo ps -aef | grep openconnect)
PIDOCN=eval $(pidof openconnect)
These lines are nonsense. The shell takes the output of the $(...) subshells and tries to execute it as a command, with the VAR2 and PIDOCN variables set to "eval" in its environment. This is definitely not what you wanted.
Probably you're looking for something more like this:
stop() {
echo "Stopping VPN connection:"
sudo ps -aef | grep openconnect
sudo kill -9 $(pidof openconnect)
}
The issue is with this line:
VAR2=eval $(sudo ps -aef | grep openconnect)
Here the shell treats VAR2=eval as a temporary environment assignment and then tries to execute the output of the sudo ps -aef | grep openconnect pipeline as a command. That is why you are getting the errors you are seeing.
Rewrite it as:
VAR2=$(sudo ps -aef | grep openconnect)
This simply assigns the output of the sudo command pipeline to the VAR2 variable. However, you can't use VAR2 as an argument to kill, because it contains other tokens (such as the username) along with the PID.
In other places where you are doing eval $(command), all you need is command.
You could use pkill openconnect to kill any existing openconnect processes instead of finding out the PID and issuing a kill against it. pgrep and pkill are quite handy for start/stop/restart scripts like yours.
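With pkill, the stop function shrinks to a couple of lines; a sketch (-x matches the exact process name, so unrelated commands are not caught):
stop() {
echo "Stopping VPN connection:"
sudo pkill -x openconnect
}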

Daemon won't kill children that are reading from a named pipe

I've written this bash daemon that keeps an eye on a named pipe, logs everything it sees to a file named $LOG_FILE_BASENAME.$DATE, and also creates a filtered version of it in $ACTIONABLE_LOG_FILE:
while true
do
DATE=`date +%Y%m%d`
cat $NAMED_PIPE | tee -a "$LOG_FILE_BASENAME.$DATE" | grep -P -v "$EXCEPTIONS" >> "$ACTIONABLE_LOG_FILE"
done
pkill -P $$ # Here is where it should kill its children
exit 0
When the daemon is running, this is how the process table looks:
/bin/sh the_daemon.sh
\_ cat the_fifo_queue
\_ tee -a log_file.20150807
\_ grep -P -v "regexp" > filtered_log_file
The problem is that when I kill the daemon (SIGTERM), the cat, tee, and grep processes that were spawned by the daemon are not collected by the parent. Instead, they become orphans and keep waiting for input on the named pipe.
Once the FIFO receives some input, then they process that input as instructed and die.
How can I make the daemon kill its children before dying? Why aren't they dying with pkill -P $$?
You want to set up a signal handler for your script which kills all members of its process group (its children) in case the script itself gets signalled:
#!/bin/bash
function handle_sigterm()
{
pkill -P $$
exit 0
}
trap handle_sigterm SIGTERM
while true
do
DATE=`date +%Y%m%d`
cat $NAMED_PIPE | tee -a "$LOG_FILE_BASENAME.$DATE" | grep -P -v "$EXCEPTIONS" >> "$ACTIONABLE_LOG_FILE"
done
handle_sigterm
exit 0
Update: as per pilcrow's comment, bash runs a trap handler only after the current foreground command has finished, so the pipeline should be backgrounded and waited on (the builtin wait is interruptible by a trapped signal). Replace
cat $NAMED_PIPE | tee -a "$LOG_FILE_BASENAME.$DATE" | grep -P -v "$EXCEPTIONS" >> "$ACTIONABLE_LOG_FILE"
by
cat $NAMED_PIPE | tee -a "$LOG_FILE_BASENAME.$DATE" | grep -P -v "$EXCEPTIONS" >> "$ACTIONABLE_LOG_FILE" &
wait $!
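Putting the trap and the backgrounded pipeline together, the whole daemon might look like this (a sketch):
#!/bin/bash
function handle_sigterm()
{
pkill -P $$ # signal our direct children: cat, tee and grep
exit 0
}
trap handle_sigterm SIGTERM
while true
do
DATE=$(date +%Y%m%d)
cat "$NAMED_PIPE" | tee -a "$LOG_FILE_BASENAME.$DATE" | grep -P -v "$EXCEPTIONS" >> "$ACTIONABLE_LOG_FILE" &
wait $!
done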

How to get PID from remote executed command?

If I do the following in Bash, then I get the PID of the remotely started mbuffer, and even though mbuffer is still running, I get the terminal back, which is what I want.
read -r pid < <(ssh 10.10.10.47 'nohup /opt/omni/bin/mbuffer -4 -s 128k -m 2G -v 0 -q -I 8023 >/tmp/mtest </dev/null 2>/tmp/mtest.err & echo $!')
echo $pid
Now I would like to do the same in Perl, so I try
use Capture::Tiny 'capture';
my ($stdout, $stderr, $exit) = capture {
system("read -r pid < <(ssh 10.10.10.47 'nohup /opt/omni/bin/mbuffer -4 -s 128k -m 2G -v 0 -q -I 8023 >/tmp/mtest </dev/null 2>/tmp/mtest.err & echo $!'); echo \$pid");
};
print "stdout $stdout\n";
print "stderr $stderr\n";
print "exit $exit\n";
Here I would have expected that $stdout would have given me the PID from the last echo command, but I got nothing.
Question
How do I get the PID of the remotely executed mbuffer in Perl, and so the Perl script isn't waiting for mbuffer to exit before continuing?
The problem seems to be that it is not possible to execute two commands in one system() call; or maybe it is, but then it is not possible to get the output of the last command.
Creating a local helper script solved the problem. It sidesteps two pitfalls: system() with a single string runs the command under /bin/sh, which may not support the < <(...) process substitution, and Perl interpolates $! inside a double-quoted string before the shell ever sees it.
#!/usr/bin/bash
# Redirection of stdin and stderr to files (preventing them from holding
# handles that connect, eventually, to the terminal).
read -r pid < <(ssh $1 "/usr/gnu/bin/nohup /opt/omni/bin/mbuffer -4 -s 128k -m 2G -v 0 -q -I 8023 >/tmp/mtest$2 </dev/null 2>/tmp/mtest.err & echo \$!")
echo $pid
and in Perl
my ($stdout, $stderr, $exit) = capture {
system("/comp/mbuffer-zfs-listen.sh 10.10.10.47 11");
};
