How to make sure child processes have correct user ID? - linux

As an étude, I wrote a quick-and-dirty bash script to set a radio alarm a few months ago. It sets a cron/at-job to start streaming an internet radio station at a specified time (I use mplayer), and records the job id in a file for easy undoing. However, as it stands, the logic to turn off a running alarm is simply to kill off the most recent couple of mplayer instances. This is potentially a problem if you're watching a video at the time the alarm goes off, or running a batch job converting audio or video files...
So I thought I'd create a designated virtual user for running this script and, instead of killing the most recent mplayer instances, kill all and only those invoked by this user. I thus created a user radiowecker and invoke the script with sudo -u radiowecker /var/lib/radiowecker/wecker $1. However, this doesn't seem to do the job: while the at-job does show up as radiowecker's, the mplayer instances it spawns are filed under my UID. How do I ensure the child processes are also filed as radiowecker's?
if [ $2 ]; then
    stream="$2"
else
    stream=http://mp3stream1.apasf.apa.at:8000
fi

if [ $1 ]; then
    # with argument, set new radio alarm using 'at' and log the at-job-id
    remove_log_when_playing="rm ~/.local/bin/weckerlogs/${*} "
    play_radio="mplayer $stream -cache 1024"
    and="&&"
    show_running="touch ~/.local/bin/alarm_running"
    printf "$remove_log_when_playing && $show_running && $play_radio" | at "${*}" \
        && echo $(atq | sort -nr | { read first last ; echo $first ; }) >> ~/.local/bin/weckerlogs/"${*}"
else
    if [[ $(pgrep mplayer) && -e ~/.local/bin/alarm_running ]]; then
        rm ~/.local/bin/alarm_running
        # turn off running mplayer, assumed to be called from an earlier alarm
        for i in 0 1; do
            for id in $(pgrep mplayer); do
                WECKER=$id
            done
            kill $WECKER
        done
    else
        # turning off an alarm in the future has its own tool
        echo "No active mplayer instances found."
        echo "To turn off a future alarm, instead"
        echo "use 'wecker-aus' <time>!"
        echo "Currently set alarms:"
        ls ~/.local/bin/weckerlogs/
    fi
fi
The 'wecker-aus' tool to turn off a future alarm:
#!/bin/bash
log=~/.local/bin/weckerlogs/"${*}"
atrm $(cat "$log")
rm "$log"
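For reference, once the players really do run under the dedicated user (which is exactly what the question is about), they can be inspected and killed by user without touching any other mplayer instance; a small sketch:

# show owner, PID and command line of every running mplayer
ps -C mplayer -o user=,pid=,cmd=
# kill only the instances owned by the alarm user
pkill -u radiowecker -x mplayer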

Related

script that will monitor changes in at least 2 files/directories and execute a third script only when both are modified

I have two sensors that each create an entry in a text file when triggered. Now I need something to monitor these two files (I can also put them in two separate directories if that helps in any way) and trigger a third script only when changes occur to both of the aforementioned files/directories. I need real-time (or near real-time) notification after the events. I have found tools like inotifywait, fswatch, entr and some others, but all of these trigger on any single change.
At the moment I'm trying this but it does not work properly:
#!/bin/bash
while inotifywait -e modify /home/user/triggerdir/ ;
do
    if [ "inotifywait -e modify /home/user/triggerdir2/" ];
    then
        echo Alert | mail -n -s "test-notify SCRIPT HUZAAAA" user@gmail.com
    else
        # Don't do anything unless we've found one of those
        :
    fi
done
I have looked for similar issues/solutions on the web, the closest would be this but it has no working answer.
Since you're having trouble with that, you might consider a simplistic approach.
Rather than a loop, I'd put your script in the crontab. Run it every day, every hour, every minute, whatever you need. If you need it more often than that, you could loop, but make sure you sleep at least a second to be nice to the CPU.
If a minute or more between event and notification is OK, this should be all you need:
#!/bin/bash
key=/some/safe/path/.hidden_filename
[[ -e "$key" ]] || touch "$key"    # make sure it exists
if [[ file1 -nt "$key" && file2 -nt "$key" ]]; then
    mail -n -s "test-notify SCRIPT HUZAAAA" user@gmail.com <<< "Alert!"
    touch "$key"
fi
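The matching crontab entry could then be as simple as this (the script path is a placeholder):

* * * * * /home/user/check_both_modified.sh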
I have hacked something together which does work as I need it to, though the coding is terrible (it probably shouldn't even be called that).
Three scripts are involved:
script 1:
#!/bin/bash
count=0
while :
do
    { inotifywait -e modify /home/user/triggerdir/ && let count="$count + 1"; } || exit 1
    if [ "$count" -eq "2" ]; then
        echo Alert | mail -n -s "Huzzah" user@gmail.com
        /home/user/trigger2.sh &&
        killall trigger.sh inotifywait
    fi
done
script 2:
#!/bin/bash
count=0
while :
do
    { inotifywait -e modify /home/user/triggerdir/ && let count="$count + 1"; } || exit 1
    if [ "$count" -eq "2" ]; then
        echo Alert | mail -n -s "Huzzah" user@gmail.com
        /home/user/trigger.sh &&
        killall trigger2.sh # Do something.
        # count=-250
    fi
done
As the two scripts spawn bash/inotifywait processes, I run a cronjob once every 24 hours to kill those, using this script 3:
#!/bin/bash
killall trigger2.sh trigger.sh inotifywait bash
Any help to improve this is welcome, thanks :)
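For what it's worth, a single inotifywait in monitor mode could replace the two mutually-killing scripts. The following is only an untested sketch that reuses the paths and mail command from above:

#!/bin/bash
# Watch both directories with one inotifywait and fire only once
# both of them have reported at least one modification.
seen1=0
seen2=0
inotifywait -m -e modify /home/user/triggerdir/ /home/user/triggerdir2/ |
while read -r dir events file; do
    case "$dir" in
        /home/user/triggerdir/)  seen1=1 ;;
        /home/user/triggerdir2/) seen2=1 ;;
    esac
    if (( seen1 && seen2 )); then
        echo Alert | mail -n -s "test-notify SCRIPT HUZAAAA" user@gmail.com
        seen1=0
        seen2=0
    fi
done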

fork wget's with ability to control specific downloads

I'm writing a bash script, a.k.a. a download manager.
The point of interest is to make these simple lines more advanced:
for link in ${links}; do
    wget -q --show-progress ${link}
done
How can I fork all downloads and give the script user a friendly way to kill one specific download after they have all started?
Does wget -bqc run in parallel or not?
Is there anything I can use instead of --show-progress to give the script user a way to see the current status of a specific download?
So the idea is:
# declare associative array [download_url]=pid_of_download
declare -A downloads
# declare array of killed downloads
declare -a killedDownloads=()

function download() {
    local url=$1
    # killed downloads can be reloaded, but not successfully downloaded URLs
    if [ ${downloads[${url}]} ] && [[ ! ${killedDownloads[*]} =~ ${downloads[${url}]} ]]; then
        echo "Already download-[ing/ed] !"
    else
        wget -q ${url} &
        downloads[${url}]=$!
    fi
}
$! contains the process ID of the most recently executed background command, which here is the PID of the wget we just started.
When the user wants to kill some download, we print the whole downloads array. While printing, we can also show a status obtained as a substring from
jobs -l | grep ${downloads[$i]}
...and use killedDownloads to know which URLs were actually killed rather than finished.
if [[ ${killedDownloads[*]} =~ ${downloads[$i]} ]]
# set status var to killed
The kill itself:
function stop() {
    local pid=$1
    if ps -p ${pid} > /dev/null; then
        kill -9 ${pid}
        wait ${pid} 2>/dev/null
        echo "Killed downloading of ${pid}"
        killedDownloads+=(${pid})    # append the PID to the array
    else
        echo "No active download with id = ${pid} exists"
    fi
}
And of course you'll need to put this inside some interactive loop, add checks for invalid urls, etc.
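Such a loop could look roughly like this (a sketch only; the prompt wording is made up, and it relies on the download and stop functions above):

while read -r -p "Enter a URL, 'list', or 'kill <pid>': " cmd arg; do
    case "$cmd" in
        list) jobs -l ;;                 # show all downloads and their states
        kill) stop "$arg" ;;             # stop one specific download by PID
        http*) download "$cmd" ;;        # start a new background download
        *) echo "Unknown command" ;;
    esac
done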

clear my script logs every 10 seconds

I have a script named run.sh.
This is my script code:
#!/usr/bin/env bash
install() {
    sudo apt-get update
    sudo apt-get upgrade
}
if [ "$1" = "install" ]; then
    install
else
    if [ ! -f ./tg/tgcli ]; then
        echo "tg not found"
        echo "Run $0 install"
        exit 1
    fi
    #sudo service redis-server restart
    #./tg/tgcli -s ./bot/bot.lua -l 1 -E $@
    ./tg/tgcli -s ./bot/bot.lua $@
fi
When I run this script it gives me output like this every second:
[09:54] 2014 Hello
[09:55] 2014 Hi
[09:57] 2014 How Are you ?
and many more lines like these (thousands per hour!), and my server gets slow within 5 hours.
I checked the print commands in bot.lua, but there is no way to remove the printing.
Can you add some code to clear my script's logs every 10 seconds?
Thanks a lot.
My script's output isn't saved anywhere; it is only shown in the terminal.
I want something like the clear command on the Linux terminal that clears my script's logs every 10 or 5 minutes.
After 5 days of the script running I can (sometimes can't) log in to my server; the server gets very slow and I must wait 3 to 5 minutes to log in, and amazingly, after I log in, the server gets fast again!
I also forgot to say that I use byobu/screen to run my scripts, and I think screen is what slows my server down.
I don't think that something as simple as this would cause your server to slow down, but you can add a check to your script to calculate the size or line count of your log file every time it runs.
This function assumes you are redirecting your output to a log file. Set the variables to whatever makes the most sense.
log_check() {
    line_count=$(wc -l $log_file | awk '{print $1}')
    size_check=$(du -ax $log_file | awk '{print $1}')
    max_file_size="1500"
    max_file_length="1000"
    # use -ge for numeric comparison; >= is not valid inside [[ ]]
    if [[ $line_count -ge $max_file_length || $size_check -ge $max_file_size ]]; then
        echo "" > $log_file
    fi
}
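For example (the log path is a placeholder), the script could redirect its own output to the log and then call the function:

log_file=/home/user/run.sh.log    # placeholder path
exec >> "$log_file" 2>&1          # send everything this script prints to the log
log_check                         # empty the log if it has grown too large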
I would also recommend using [[ ]] over [ ] since this is a bash script; as long as you don't need it to be POSIX compliant and only plan on using it with bash, [[ ]] is generally the better choice.
EDIT:
Since you are logging output to the terminal and not to a file, you can literally use the clear command in your script.
Try this out to see how it works:
for i in {1..20}; do
    echo $i
    if (( i == 10 )); then
        clear
    fi
done
I'm assuming your code has a loop somewhere; if not, it will be a bit more complex to clear the terminal session. I'm not really sure what part of your code is actually printing anything to stdout; I'm guessing it's this piece here:
./tg/tgcli -s ./bot/bot.lua $@
You could try something like this, which will background your initial process and then run clear every 60 seconds to clear the terminal window. Is there any reason you're not writing the output to a log file? That alone could solve some of your issues as well.
#!/bin/bash
./tg/tgcli -s ./bot/bot.lua $@ &
pid="$!"
check_pid() {
    ps -ef | grep "$pid" | grep -v 'grep' &>/dev/null
}
cnt=1
until ! check_pid; do
    if (( cnt == 6 )); then
        clear
        cnt=1
    fi
    sleep 10
    ((cnt++))
done
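If writing to a file turns out to be acceptable after all, a variant of the same idea could truncate a log instead of clearing the screen (the log path and the 10-minute interval are assumptions):

#!/bin/bash
log=/home/user/tgbot.log                 # placeholder log path
./tg/tgcli -s ./bot/bot.lua "$@" >> "$log" 2>&1 &
pid="$!"
while kill -0 "$pid" 2>/dev/null; do     # while the bot is still running
    sleep 600                            # every 10 minutes
    : > "$log"                           # empty the log file
done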

Linux Single Instance Kill if running too long

I am using the following to keep a single instance of a script running on my server. I have a cronjob to run this every minute.
(The script is taken from: How do I daemonize an arbitrary script in unix?)
#!/bin/bash
if [[ $# < 1 ]]; then
    echo "Name of pid file not given."
    exit
fi
# Get the pid file's name.
PIDFILE=$1
shift
if [[ $# < 1 ]]; then
    echo "No command given."
    exit
fi
echo "Checking pid in file $PIDFILE."
# Check to see if process running.
PID=$(cat $PIDFILE 2>/dev/null)
if [[ $? = 0 ]]; then
    ps -p $PID >/dev/null 2>&1
    if [[ $? = 0 ]]; then
        echo "Command $1 already running."
        exit
    fi
fi
# Write our pid to file.
echo $$ >$PIDFILE
# Get command.
COMMAND=$1
shift
# Run command
$COMMAND "$*"
Now I found out that my script had hung for some reason and therefore it was stuck. I'd like a way to check if the $PIDFILE is "old" and if so, kill the process. I know that's possible (check the timestamp on the file) but I don't know the syntax or if this is even a good idea. Also, when this script is running, the CPU should be pretty heavily used. If it hangs (rare but it happened at least once so far), the CPU usage drops to 0%. It would be nice if I could check that the process is really hung/not active, but I don't know if there's an easy way to do that (and I don't want to have many false positives where it gets killed but it's running fine).
To answer the question in your title, which seems quite different from your problem, use timeout.
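For example, to let a command run for at most an hour and kill it if it is still going after that (the command is a placeholder; timeout exits with status 124 when it had to kill the job):

timeout 1h /path/to/heavy_job.sh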
As for your problem, I don't see where it could hang, unless you gave it a fifo as the pid file. To run and respawn, you can just run this script once, on startup:
#!/bin/bash
while /bin/true; do
    "$@"
    wait
done
Which brings up another bug in the code you got from the other question: "$*" will pass all the arguments to the script as a single argument; without the quotes it'll split arguments on whitespace. "$@" will pass them individually and handle whitespace properly.
Call with /path/to/script command [argument]....
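As for the "old pid file" check asked about in the question, one sketch is to let find do the age test (the 60-minute threshold and the plain kill are assumptions, adjust to taste):

# if the pid file is older than 60 minutes and a process is still recorded
# in it, assume the job hung, kill it and clean up
if [[ -n $(find "$PIDFILE" -mmin +60 2>/dev/null) ]]; then
    PID=$(cat "$PIDFILE" 2>/dev/null)
    [[ -n "$PID" ]] && kill "$PID" 2>/dev/null
    rm -f "$PIDFILE"
fi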

Using named pipes with bash - Problem with data loss

I did some searching online and found simple 'tutorials' for using named pipes. However, when I do anything with background jobs I seem to lose a lot of data.
[[Edit: I found a much simpler solution; see my reply below. So the question I put forward is now academic, in case one might want a job server.]]
Using Ubuntu 10.04 with Linux 2.6.32-25-generic #45-Ubuntu SMP Sat Oct 16 19:52:42 UTC 2010 x86_64 GNU/Linux
GNU bash, version 4.1.5(1)-release (x86_64-pc-linux-gnu).
My bash function is:
function jqs
{
    pipe=/tmp/__job_control_manager__
    trap "rm -f $pipe; exit" EXIT SIGKILL

    if [[ ! -p "$pipe" ]]; then
        mkfifo "$pipe"
    fi

    while true
    do
        if read txt <"$pipe"
        then
            echo "$(date +'%Y'): new text is [[$txt]]"
            if [[ "$txt" == 'quit' ]]
            then
                break
            fi
        fi
    done
}
I run this in the background:
> jqs&
[1] 5336
And now I feed it:
for i in 1 2 3 4 5 6 7 8
do
    (echo aaa$i > /tmp/__job_control_manager__ && echo success$i &)
done
The output is inconsistent.
I frequently don't get all the success echoes.
I get at most as many new-text echoes as success echoes, sometimes fewer.
If I remove the '&' from the 'feed', it seems to work, but I am blocked until the output is read. Hence me wanting to let sub-processes get blocked, but not the main process.
The aim being to write a simple job control script so I can run say 10 jobs in parallel at most and queue the rest for later processing, but reliably know that they do run.
Full job manager below:
function jq_manage
{
    export __gn__="$1"
    pipe=/tmp/__job_control_manager_"$__gn__"__
    trap "rm -f $pipe" EXIT
    trap "break" SIGKILL

    if [[ ! -p "$pipe" ]]; then
        mkfifo "$pipe"
    fi

    while true
    do
        date
        jobs
        if (($(jobs | egrep "Running.*echo '%#_Group_#%_$__gn__'" | wc -l) < $__jN__))
        then
            echo "Waiting for new job"
            if read new_job <"$pipe"
            then
                echo "new job is [[$new_job]]"
                if [[ "$new_job" == 'quit' ]]
                then
                    break
                fi
                echo "In group $__gn__, starting job $new_job"
                eval "(echo '%#_Group_#%_$__gn__' > /dev/null; $new_job) &"
            fi
        else
            sleep 3
        fi
    done
}
function jq
{
    # __gn__ = first parameter to this function, the job group name (the pool within which to allocate __jN__ jobs)
    # __jN__ = second parameter to this function, the maximum number of jobs to run concurrently
    export __gn__="$1"
    shift
    export __jN__="$1"
    shift

    export __jq__=$(jobs | egrep "Running.*echo '%#_GroupQueue_#%_$__gn__'" | wc -l)
    if (($__jq__ < 1))
    then
        eval "(echo '%#_GroupQueue_#%_$__gn__' > /dev/null; jq_manage $__gn__) &"
    fi

    pipe=/tmp/__job_control_manager_"$__gn__"__
    echo $@ >$pipe
}
Calling
jq <name> <max processes> <command>
jq abc 2 sleep 20
will start one process.
That part works fine. Start a second one, fine.
Starting them one by one by hand seems to work fine.
But starting 10 in a loop seems to lose data, as in the simpler example above.
Any hints as to what I can do to solve this apparent loss of IPC data would be greatly appreciated.
Regards,
Alain.
Your problem is the if statement below:
while true
do
if read txt <"$pipe"
....
done
What is happening is that your job queue server is opening and closing the pipe each time around the loop. This means that some of the clients are getting a "broken pipe" error when they try to write to the pipe - that is, the reader of the pipe goes away after the writer opens it.
To fix this, change the loop in your server to open the pipe once for the entire loop:
while true
do
if read txt
....
done < "$pipe"
Done this way, the pipe is opened once and kept open.
You will need to be careful of what you run inside the loop, as all processing inside the loop will have stdin attached to the named pipe. You will want to make sure you redirect stdin of all your processes inside the loop from somewhere else, otherwise they may consume the data from the pipe.
Edit: With the problem now being that you are getting EOF on your reads when the last client closes the pipe, you can use jilles' method of duping the file descriptors, or you can just make sure you are a client too and keep the write side of the pipe open:
while true
do
if read txt
....
done < "$pipe" 3> "$pipe"
This will hold the write side of the pipe open on fd 3. The same caveat applies with this file descriptor as with stdin. You will need to close it so any child processes don't inherit it. It probably matters less than with stdin, but it would be cleaner.
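For example, a job started from inside the loop could be detached from both (the job name is just a placeholder):

some_background_job </dev/null 3>&- &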
As said in other answers you need to keep the fifo open at all times to avoid losing data.
However, once all writers have left after the fifo has been open (so there was a writer), reads return immediately (and poll() returns POLLHUP). The only way to clear this state is to reopen the fifo.
POSIX does not provide a solution to this but at least Linux and FreeBSD do: if reads start failing, open the fifo again while keeping the original descriptor open. This works because in Linux and FreeBSD the "hangup" state is local to a particular open file description, while in POSIX it is global to the fifo.
This can be done in a shell script like this:
while :; do
exec 3<tmp/testfifo
exec 4<&-
while read x; do
echo "input: $x"
done <&3
exec 4<&3
exec 3<&-
done
Just for those that might be interested, [[re-edited]] following comments by camh and jilles, here are two new versions of the test server script.
Both versions now work exactly as hoped.
camh's version for pipe management:
function jqs # Job queue manager
{
    pipe=/tmp/__job_control_manager__
    trap "rm -f $pipe; exit" EXIT TERM

    if [[ ! -p "$pipe" ]]; then
        mkfifo "$pipe"
    fi

    while true
    do
        if read -u 3 txt
        then
            echo "$(date +'%Y'): new text is [[$txt]]"
            if [[ "$txt" == 'quit' ]]
            then
                break
            else
                sleep 1
                # process $txt - remember that if this is to be a spawned job, we should close fd 3 and 4 beforehand
            fi
        fi
    done 3< "$pipe" 4> "$pipe" # 4 is just to keep the pipe opened so any real client does not end up causing read to return EOF
}
jilles' version for pipe management:
function jqs # Job queue manager
{
    pipe=/tmp/__job_control_manager__
    trap "rm -f $pipe; exit" EXIT TERM

    if [[ ! -p "$pipe" ]]; then
        mkfifo "$pipe"
    fi

    exec 3< "$pipe"
    exec 4<&-

    while true
    do
        if read -u 3 txt
        then
            echo "$(date +'%Y'): new text is [[$txt]]"
            if [[ "$txt" == 'quit' ]]
            then
                break
            else
                sleep 1
                # process $txt - remember that if this is to be a spawned job, we should close fd 3 and 4 beforehand
            fi
        else
            # Close the pipe and reconnect it so that the next read does not end up returning EOF
            exec 4<&3
            exec 3<&-
            exec 3< "$pipe"
            exec 4<&-
        fi
    done
}
Thanks to all for your help.
Like camh and Dennis Williamson say, don't break the pipe.
Now I have smaller examples, directly on the command line:
Server:
(
    for i in {0,1,2,3,4}{0,1,2,3,4,5,6,7,8,9};
    do
        if read s;
        then
            echo ">>$i--$s//";
        else
            echo "<<$i";
        fi;
    done < tst-fifo
)&
Client:
(
    for i in {%a,#b}{1,2}{0,1};
    do
        echo "Test-$i" > tst-fifo;
    done
)&
The key line can be replaced with:
(echo "Test-$i" > tst-fifo&);
All client data sent to the pipe gets read, though with option two of the client one may need to start the server a couple of times before all data is read.
But although the read waits for data in the pipe to start with, once data has been pushed, it reads the empty string forever.
Any way to stop this?
Thanks for any insights again.
On the one hand the problem is worse than I thought:
Now there seems to be a case in my more complex example (jq_manage) where the same data is being read over and over again from the pipe (even though no new data is being written to it).
On the other hand, I found a simple solution (edited following Dennis' comment):
function jqn # compute the number of jobs running in that group
{
    __jqty__=$(jobs | egrep "Running.*echo '%#_Group_#%_$__groupn__'" | wc -l)
}

function jq
{
    __groupn__="$1"; shift # job group name (the pool within which to allocate $__jmax__ jobs)
    __jmax__="$1"; shift   # maximum number of jobs to run concurrently

    jqn
    while (($__jqty__ >= $__jmax__))
    do
        sleep 1
        jqn
    done

    eval "(echo '%#_Group_#%_$__groupn__' > /dev/null; $@) &"
}
Works like a charm.
No socket or pipe involved.
Simple.
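Usage is the same as before, e.g.:

jq abc 2 sleep 20
jq abc 2 sleep 20
jq abc 2 sleep 20   # this third call waits until one of the first two finishes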
run say 10 jobs in parallel at most and queue the rest for later processing, but reliably know that they do run
You can do this with GNU Parallel. You will not need this scripting.
http://www.gnu.org/software/parallel/man.html#options
You can set max-procs ("Number of jobslots. Run up to N jobs in parallel."). There is also an option to set the number of CPU cores you want to use. You can save the list of executed jobs to a log file, but that is a beta feature.
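For instance, with one command per line in a file (the file names are placeholders):

parallel --jobs 10 --joblog /tmp/jobs.log < joblist.txt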
