Concurrency in Linux

I am trying to write a small shell script, make it sleep for some amount of time, say 20 seconds, and then run it. Now if I open another terminal and try to run the same script, it shouldn't run, because the process is already running elsewhere. How do I do it?
I know I should make the script sleep, capture its PID, and write a condition that if this PID is already running somewhere, the script must not run anywhere else. But how do I do it? Please give an example.
echo "this is a process"
sleep 60
testfilepid=`ps ax | grep test1.sh | grep -v grep | tr -s " " | cut -f1 -d " " | tail -1`
echo $testfilepid
if [[ $testfilepid = "" ]]
then
sh test1.sh
else
echo "this process is already running"
fi
This is what I tried. When I execute this in two windows, both windows give me the output "this is a process".

You could use pgrep to check whether your script/process is already running, and negate the result. This is a very basic example that could give you an idea:
if ! pgrep -f sleep >/dev/null; then echo "will sleep" && sleep 3; fi
Notice the !; pgrep -f sleep searches for a process by matching against the full argument list (you could customize the pattern to your needs). So if nothing matches your pattern, your script will be called.
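For a guard that is more robust than pattern-matching process names, a lock file is the usual tool. Below is a minimal sketch using flock(1) from util-linux; the lock path /tmp/test1.lock is just an illustrative choice:
#!/bin/bash
# test1.sh - refuse to start if another instance holds the lock
exec 200>/tmp/test1.lock              # open (and create) the lock file on fd 200
if ! flock -n 200; then               # -n: fail immediately instead of waiting
echo "this process is already running"
exit 1
fi
echo "this is a process"
sleep 60                              # the lock is released when the script exits
Because the kernel drops the lock when the file descriptor closes, a crashed script cannot leave a stale lock behind, which is the classic weakness of PID-file schemes.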

Related

Script to check if vim is open or another script is running?

I'm making a background script that requires a user to input a certain string (a function) to continue. The script runs fine, but will interrupt anything else that is open in vim or any script that is running. Is there a way I can test in my script if the command line is waiting for input to avoid interrupting something?
I'm running the script enclosed in parentheses to hide the job-completion message, so I'm using (. nightFall &)
Here is the script so far:
#!/bin/bash
# nightFall
clear
text=""
echo "Night begins to fall... Now might be a good time to rest."
while [[ "$text" != "rest" ]]
do
read -p "" text
done
Thank you in advance!
If you launch nightFall from the shell you are monitoring, you can use ps with the parent PID to see how many processes have been launched by that shell as well:
# bg.sh
for k in `seq 1 15`; do
N=$(ps -ef | grep -sw $PPID | grep -v $$ | wc -l)
(( N -= 2 ))
[ "$N" -eq 0 ] && echo "At prompt"
[ "$N" -ne 0 ] && echo "Child processes: $N"
sleep 1
done
Note that I subtract 2 from N: one for the shell process itself and one for the bg.sh script. The remainder is the number of other child processes the shell has.
Launch the above script from a shell in background:
bash bg.sh &
Then start any command (for example "sleep 15") and it will detect if you are at the prompt or in a command.
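A variant of the same idea: procps ps can list a shell's children directly with --ppid, which avoids grepping the full process table. A sketch, to be adjusted to taste:
# count the parent shell's children by PID, excluding this script itself
N=$(ps -o pid= --ppid $PPID | grep -cv "^ *$$\$")
[ "$N" -eq 0 ] && echo "At prompt" || echo "Child processes: $N"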

Shutdown computer when all instances of a given program have finished

I use the following script to check whether wget has finished downloading. To check for this, I look for its PID, and when it is not found, the computer shuts down. This works fine for a single instance of wget; however, I'd like the script to look for all already-running wget processes.
#!/bin/bash
while kill -0 $(pidof wget) 2> /dev/null; do
for i in '-' '/' '|' '\'
do
echo -ne "\b$i"
sleep 0.1
done
done
poweroff
EDIT: It would be great if the script would first check that at least one instance of wget is running, and only then wait for wget to finish and shut down the computer.
In addition to the other answers, you can satisfy your check for at least one wget PID by initially reading the result of pidof wget into an array, for example:
pids=($(pidof wget))
if ((${#pids[@]} > 0)); then
# do your loop
fi
This also brings up a way to routinely monitor the remaining pids as each wget operation completes, for example,
edit
npids=${#pids[@]} ## save original number of pids
while ((${#pids[@]} > 0)); do ## while pids remain
for ((i = 0; i < npids; i++)); do ## loop, checking remaining pids
kill -0 ${pids[i]} 2>/dev/null || unset 'pids[i]' ## drop pids that have exited
done
## do your sleep and spin
done
poweroff
There are probably many more ways to do it. This is just one that came to mind.
I don't think kill is the right idea; maybe something along these lines:
while [ 1 ]
do
live_wgets=0
for pid in `ps -ef | grep '[w]get' | awk '{print $2}'` ; # [w]get keeps grep from matching itself
do
live_wgets=$((live_wgets+1))
done
if test $live_wgets -eq 0; then # shutdown
sudo poweroff; # or whatever suits you
fi
sleep 5; # wait for some time
done
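If pgrep is available, the counting loop collapses into a single test; a sketch, where -x matches the exact process name:
while pgrep -x wget >/dev/null; do # true while at least one wget is alive
sleep 5
done
sudo poweroff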
You can adapt your script in the following way:
#!/bin/bash
spin[0]="-"
spin[1]="\\"
spin[2]="|"
spin[3]="/"
DOWNLOAD=`ps -ef | grep wget | grep -v grep`
while [ -n "$DOWNLOAD" ]; do
for i in "${spin[@]}"
do
DOWNLOAD=`ps -ef | grep wget | grep -v grep`
echo -ne "\b$i"
sleep 0.1
done
done
sudo poweroff
However, I would recommend using cron instead of an active-waiting approach, or even using wait:
How to wait in bash for several subprocesses to finish and return exit code !=0 when any subprocess ends with code !=0?
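If the wget processes are started from the same script, wait removes the polling entirely. A minimal sketch (the URL variables are placeholders):
#!/bin/bash
wget "$url1" & # start each download in the background
wget "$url2" &
wait # returns once every background job has exited
sudo poweroff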

Bash script optimization for waiting for a particular string in log files

I am using a bash script that calls multiple processes which have to start up in a particular order, and certain actions have to be completed (they then print out certain messages to the logs) before the next one can be started. The bash script has the following code which works really well for most cases:
tail -Fn +1 "$log_file" | while read line; do
if echo "$line" | grep -qEi "$search_text"; then
echo "[INFO] $process_name process started up successfully"
pkill -9 -P $$ tail
return 0
elif echo "$line" | grep -qEi '^error\b'; then
echo "[INFO] ERROR or Exception is thrown listed below. $process_name process startup aborted"
echo " ($line) "
echo "[INFO] Please check $process_name process log file=$log_file for problems"
pkill -9 -P $$ tail
return 1
fi
done
However, when we set the processes to print logging in DEBUG mode, they print so much output that this script cannot keep up, and it takes about 15 minutes after the process completes for the bash script to catch up. Is there a way of optimizing this, like changing 'while read line' to 'while read 100 lines', or something like that?
How about not forking up to two grep processes per log line?
tail -Fn +1 "$log_file" | grep -Ei "$search_text|^error\b" | while read line; do
So one long-running grep process does the pre-filtering, if you will.
Edit: as noted in the comments, it is safer to add --line-buffered to the grep invocation.
Some tips relevant to this script:
Checking that the service is doing its job is a much better test of daemon startup than looking at the log output.
You can use grep ... <<<"$line" instead of echo "$line" | grep ..., saving a fork per line.
You can use tail -f | grep -q ... to stop as soon as a matching line appears, avoiding the while loop entirely (see the sketch after this list).
If you can avoid -i on grep, it may be significantly faster at processing the input.
Thou shalt not kill -9.
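Putting the grep -q tip into practice, here is a minimal sketch, assuming the same $log_file, $search_text and $process_name variables as above; note that tail itself may linger until its next write attempt, when it receives SIGPIPE:
# grep -q exits on the first matching line, ending the wait
if tail -Fn +1 "$log_file" | grep -qEi "$search_text"; then
echo "[INFO] $process_name process started up successfully"
fi
Handling the ^error branch as well would need a loop again, but for the success path this avoids the per-line forks entirely.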

How to get watch to run a bash script with quotes

I'm trying to have a lightweight memory profiler for the MATLAB jobs that run on my machine. There is either one MATLAB job instance or none, but its process ID changes frequently (since it is actually called by another script).
So here is the bash script that I put together to log memory usage:
#!/bin/bash
pid=`ps aux | grep '[M]ATLAB' | awk '{print $2}'`
if [[ -n $pid ]]
then
\grep VmSize /proc/$pid/status
else
echo "no pid"
fi
When I run this script in bash like this:
./script.sh
it works fine, giving me the following result:
VmSize: 1289004 kB
which is exactly what I want.
Now, I want to run this periodically. So I run it with watch, like this:
watch ./script.sh
But in this case I only receive:
no pid
Please note that I know the MATLAB job is still running, because I can see it with the same PID in top, and besides, I know each MATLAB job takes several hours to finish.
I'm pretty sure that something is wrong with the quotes I have when setting pid. I just can't figure out how to fix it. Anyone knows what I'm doing wrong?
PS.
In the man page of watch, it says that commands are executed by sh -c. I did run my script like sh -c ./script and it works just fine, but watch doesn't.
Why don't you use a loop with the sleep command instead?
For example:
#!/bin/bash
while [ "1" ]
do
pid=`ps aux | grep '[M]ATLAB' | awk '{print $2}'` # re-read the PID each pass, since it changes
if [[ -n $pid ]]
then
\grep VmSize /proc/$pid/status
else
echo "no pid"
fi
sleep 10
done
Here the script sleeps (waits) for 10 seconds. You can set the interval you need by changing the sleep command; for example, to make the script sleep for an hour, use sleep 1h.
To exit the script, press Ctrl-C.
This
pid=`ps aux | grep '[M]ATLAB' | awk '{print $2}'`
could be changed to:
pid=$(pidof MATLAB)
I have no idea why it's not working in watch but you could use a cron job and make the script log to a file like so:
#!/bin/bash
pid=$(pidof MATLAB) # Just to follow previously given advice :)
if [[ -n $pid ]]
then
echo "$(date): $(\grep VmSize /proc/$pid/status)" >> logfile
else
echo "$(date): no pid" >> logfile
fi
You'd of course have to create logfile with touch.
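For completeness, a hypothetical crontab entry (installed via crontab -e) that runs such a script every minute; the script path is a placeholder:
* * * * * /home/user/matlab-memlog.sh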
You might try just running the ps command in watch. I have had issues in the past with watch chopping lines when they get too long.
That can be fixed by making the terminal you are running the command from wider, or by changing the COLUMNS variable like this (you may need to adjust the 160 to your liking):
export COLUMNS=160;
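Relatedly, if ps itself is truncating lines to the terminal width, you can ask it for unlimited width by giving the w flag twice, which makes the pipeline independent of terminal size:
pid=`ps auxww | grep '[M]ATLAB' | awk '{print $2}'` # ww = never truncate output lines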

Bash: How do I make sub-processes of a script be terminated, when the script is terminated?

The question applies to a script such as the following:
Script
#!/bin/sh
SRC="/tmp/my-server-logs"
echo "STARTING GREP JOBS..."
for f in `find ${SRC} -name '*log*2011*' | sort --reverse`
do
(
OUT=`nice grep -ci -E "${1}" "${f}"`
if [ "${OUT}" != "0" ]
then
printf '%7s : %s\n' "${OUT}" "${f}"
else
printf '%7s %s\n' "(none)" "${f}"
fi
) &
done
echo "WAITING..."
wait
echo "FINISHED!"
Current behavior
Pressing Ctrl+C in console terminates the script but not the already running grep processes.
Write a trap for Ctrl+C, and in the trap kill all of the subprocesses. Put this before your wait command.
function handle_sigint()
{
for proc in `jobs -p`
do
kill $proc
done
}
trap handle_sigint SIGINT
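The same idea fits on a single line if you prefer; kill without an explicit signal sends SIGTERM to each job listed by jobs -p (a sketch):
trap 'kill $(jobs -p) 2>/dev/null' SIGINT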
A simple alternative is using a cat pipe. The following worked for me:
echo "-" > test.text;
for x in 1 2 3; do
( sleep $x; echo $x | tee --append test.text; ) &
done | cat
If I press Ctrl-C before the last number is printed to stdout, the remaining subshells are killed as well. It also works if the text-generating command is something long-running such as "find /", i.e. it is not only the connection to stdout through cat that is killed, but actually the child process itself.
For large scripts that make extensive use of subprocesses, the easiest way to ensure the intended Ctrl-C behaviour is to wrap the whole script in such a subshell, e.g.
#!/usr/bin/bash
(
...
) | cat
I am not sure, though, whether this has exactly the same effect as Andrew's answer (i.e. I'm not sure what signal is sent to the subprocesses). Also, I only tested this with Cygwin, not with a native Linux shell.
