Multi-threaded BASH programming - generalized method? - multithreading

Ok, I was running POV-Ray on all the demos, but POV's still single-threaded and wouldn't utilize more than one core. So, I started thinking about a solution in BASH.
I wrote a general function that takes a list of commands and runs them in the designated number of sub-shells. This actually works but I don't like the way it handles accessing the next command in a thread-safe multi-process way:
It takes, as an argument, a file with commands (1 per line),
To get the "next" command, each process ("thread") will:
Wait until it can create a lock file, with: ln $CMDFILE $LOCKFILE
Read the first command from the file,
Modify $CMDFILE by removing that line,
Remove the $LOCKFILE.
Is there a cleaner way to do this? I couldn't get the sub-shells to read a single line from a FIFO correctly.
Incidentally, the point of this is to enhance what I can do on a BASH command line, and not to find non-bash solutions. I tend to perform a lot of complicated tasks from the command line and want another tool in the toolbox.
Meanwhile, here's the function that handles getting the next line from the file. As you can see, it modifies an on-disk file each time it reads/removes a line. That's what seems hackish, but I'm not coming up with anything better, since FIFOs didn't work without setvbuf() in bash.
#
# Get/remove the first line from FILE, using LOCK as a semaphore (with
# short sleep for collisions). Returns the text on standard output,
# returns zero on success, non-zero when file is empty.
#
parallel__nextLine()
{
    local line rest file=$1 lock=$2

    # Wait for lock...
    until ln "${file}" "${lock}" 2>/dev/null
    do
        sleep 1
        [ -s "${file}" ] || return $?
    done

    # Open, read one "line", save "rest" back to the file:
    exec 3<"$file"
    read line <&3 ; rest=$(cat <&3)
    exec 3<&-

    # After the last line, make sure the file is empty:
    ( [ -z "$rest" ] || echo "$rest" ) > "${file}"

    # Remove lock and 'return' the line read:
    rm -f "${lock}"
    [ -n "$line" ] && echo "$line"
}
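For context, a worker loop sitting on top of this function might look like the sketch below; parallel__runCmds and its parameter names are mine, not part of the original post, and each command line is run with eval in its own sub-shell.
parallel__runCmds()
{
    local cmdfile=$1 lock=$2 nprocs=${3:-4} i line
    for (( i = 0; i < nprocs; i++ )); do
        (
            # Each sub-shell keeps pulling lines until the file is empty.
            while line=$(parallel__nextLine "$cmdfile" "$lock"); do
                eval "$line"
            done
        ) &
    done
    wait
}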

#adjust these as required
args_per_proc=1 #1 is fine for long running tasks
procs_in_parallel=4
xargs -n$args_per_proc -P$procs_in_parallel povray < list
Note that the nproc command, coming soon to coreutils, will automatically determine
the number of available processing units, which can then be passed to -P.
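For example, once nproc is available, the invocation above could size itself to the machine; this assumes, as above, that list holds one POV-Ray argument per line:
# One argument per invocation, one worker per available core.
xargs -n1 -P"$(nproc)" povray < list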

If you need real thread safety, I would recommend migrating to a better scripting system.
With Python, for example, you can create real threads with safe synchronization using semaphores/queues.

Sorry to bump this after so long, but I pieced together a fairly good solution for this, IMO.
It doesn't work perfectly, but it will limit the script to a certain number of child tasks running, and then wait for all the rest at the end.
#!/bin/bash
pids=()
thread() {
    local this
    while [ ${#} -gt 6 ]; do
        this=${1}
        wait "$this"
        shift
    done
    pids=($1 $2 $3 $4 $5 $6)
}
for i in 1 2 3 4 5 6 7 8 9 10
do
    sleep 5 &
    pids=( ${pids[@]-} $(echo $!) )
    thread ${pids[@]}
done
for pid in ${pids[@]}
do
    wait "$pid"
done
It seems to work great for what I'm doing (handling parallel uploading of a bunch of files at once) and keeps it from breaking my server, while still making sure all the files get uploaded before the script finishes.
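For comparison, here is a rough alternative sketch (not part of the answer above) that throttles on the count of running jobs instead of tracking PIDs by hand; upload and max_jobs are placeholder names.
#!/bin/bash
max_jobs=6
for f in file1 file2 file3 file4 file5 file6 file7 file8; do
    # Block while the number of running children is at the limit.
    while [ "$(jobs -pr | wc -l)" -ge "$max_jobs" ]; do
        sleep 1
    done
    upload "$f" &      # placeholder for the real per-file command
done
wait                   # make sure every job finishes before the script exits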

I believe you're actually forking processes here, not threading. I would recommend looking for threading support in a different scripting language like Perl, Python, or Ruby.

Related

How best to implement atomic update on a file inside a bash script [duplicate]

I have a script with multiple functions running in parallel that check a file and update it frequently. I don't want two functions to update the file at the same time and create an issue, so what is the best way to do an atomic update? I have the following so far.
counter(){
    a=$1
    while true; do
        if [ ! -e /tmp/counter.lock ]; then
            touch /tmp/counter.lock
            curr_count=`cat /tmp/count.txt`
            n_count=`echo "${curr_count} + $a" | bc`
            echo ${n_count} > /tmp/count.txt
            rm -fv /tmp/counter.lock
            break
        fi
        sleep 1
    done
}
I am not sure how to convert my function to use flock, since it uses a file descriptor, and I think it might create issues if I call the function multiple times.
flock works by letting anyone open the lock file, but blocking if someone else locks it first. In your code, a second process could test for the existence of the lock after you see it doesn't exist but before you actually create it.
counter () {
    a=$1
    {
        flock -x 200                 # exclusive lock, so updates are serialized
        read current_count < /tmp/count.txt
        ...
        echo "$new_count" > /tmp/count.txt
    } 200> /tmp/counter.lock
}
Here, two processes can open /tmp/counter.lock for writing. In one process, flock will get the lock and exit immediately. In the other, flock will block until the first process releases the lock by closing its file descriptor once the command block completes.
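Putting that together with the question's arithmetic, the whole counter might look like this sketch (exclusive lock, same file names as in the question):
counter() {
    local a=$1 curr_count n_count
    {
        flock -x 200                              # serialize all updaters
        curr_count=$(cat /tmp/count.txt)
        n_count=$(echo "${curr_count} + ${a}" | bc)
        echo "${n_count}" > /tmp/count.txt
    } 200> /tmp/counter.lock
}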

Start programs synchronized in bash

What I simply want to do is wait for a lock to be released.
I have, for example, 4 identical scripts (because I have 4 cores), each working on a part of a project. Each script looks like this:
#!/bin/bash
./prerenderscript $1
scriptsync step1 4
./renderscript $1
scriptsync step2 4
./postprod $1
When I run the main script that calls the four scripts, I want each script to work individually, but at certain points I want the scripts to wait for each other, because the next part needs all the data from the previous part.
So far I have used logic like counting files, or having each process create a file whose existence the others test for.
I also got the idea to use a makefile and to have
prerender%: source
./prerender $@
renderscript%: prerender1 prerender2 prerender3 prerender4
./renderscript $@
postprod: renderscript1 renderscript2 renderscript3 renderscript4
./postprod $@
But the process is actually simplified here; the real script is more complex, and for each step each thread needs to keep its variables.
Is there any way to get the scripts in sync, in place of the placeholder command scriptsync?
To achieve this in Bash, one way is to use inter-process communication to make a task wait for the previous one to finish. Here is an example.
#!/bin/bash
# $1 is received to allow for an example command, not required for the mechanism suggested
task_a()
{
    # Do some work
    sleep $1 # This is just a dummy command as an example
    echo "Task A/$1 completed" >&2
    # Send status to stdout (which becomes task B's stdin), telling the next task to proceed
    echo "OK"
}

task_b()
{
    IFS= read status ; [[ $status = OK ]] || return 1
    # Do some work
    sleep $1 # This is just a dummy command as an example
    echo "Task B/$1 completed" >&2
}

task_a 2 | task_b 2 &
task_a 1 | task_b 1 &
wait
You will notice that the read could be anywhere in task B, so you could do some work, then wait (read) for the other task, then continue. You could have many signals sent by task A to task B, and several corresponding read statements.
As shown in the example, you can launch several pipelines in parallel.
One limit of this approach is that a pipeline establishes a communication channel between one writer and one reader. If a task needs to wait for signals from several tasks, you would need FIFOs to allow the task with dependencies to read from multiple sources.
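A sketch of that FIFO variant (not part of the answer above; the barrier path and producer function are placeholder names): one consumer waits for an OK from each of several producers over a single named pipe.
#!/bin/bash
barrier=/tmp/barrier.$$
mkfifo "$barrier"
exec 3<> "$barrier"        # read-write open: never blocks, never sees early EOF
rm -f "$barrier"           # fd 3 keeps the pipe alive

producer() {
    sleep "$1"             # stand-in for real work
    echo OK >&3            # short pipe writes are atomic, so lines don't interleave
}

producer 1 & producer 2 & producer 3 &

# Proceed only after one OK per producer has arrived.
for _ in 1 2 3; do
    IFS= read -r status <&3
    [[ $status = OK ]] || exit 1
done
echo "All producers finished; continuing." >&2
wait
exec 3<&-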

Need an in-depth explanation of how to use flock in Linux shell scripting

I am working on a tiny Raspberry Pi cluster (4 Pis). I have 3 Raspberry Pi nodes that will be leaving a message in a message.txt file on the head Pi. The head Pi will be in a loop, checking the message.txt file to see if it has any lines. When it does, I want to lock the file and then extract the info I need. The problem I am having is that I need to run multiple commands under the lock. The only ways I have found that allow multiple commands look like this:
(
    flock -s 200
    # ... commands executed under lock ...
) 200>/var/lock/mylockfile
The problem with this approach is that it uses a subshell. The problem with that is that I have "job" files labeled job_1, job_2, etc. that I want to track with a counter. If I place the increment of the counter inside the subshell, it is only in scope within the subshell. If I pull the increment out, there is a chance that another Pi will add an entry before I increment the counter and lock the file.
I have heard that there is a way to lock the file, run multiple commands and flow control, and then unlock it, all using flock, but I have not seen any good examples.
Here is my current code.
# Now go into loop to send out jobs as pis ask for more work
while [ $jobsLeftCount -gt 0 ]
do
    echo "launchJobs.sh: About to check msg file"
    msgLines=$(wc -l < $msgLocation)
    if [ $msgLines ]; then
        #FIND WAY TO LOCK FILE AND DO THAT HERE
        echo "launchJobs.sh: Messages found. Locking message file to read contents"
        (
            flock -e 350
            echo "Message Received"
            while read line; do
                #rename file to be sent to node "job"
                mv $jobLocation$jobName$jobsLeftCount /home/pi/algo2/Jobs/job
                #transfer new job to each script that left a message
                scp /home/pi/algo2/Jobs/job pi@192.168.0.$line:/home/pi/algo2/Jobs/
                jobsLeftCount=$jobsLeftCount-1;
                echo $line
            done < $msgLocation
            #clear msg file
            >$msgLocation
            #UNLOCK MESG FILE HERE
        ) 350>>$msgLocation
        echo "Head node has $jobsLeftCount remaining"
    fi
    #jobsLeftCount=$jobsLeftCount-1;
    #echo "here is $jobsLeftCount file"
done
If the sub-shell environment is not acceptable, use braces in place of parentheses to group the commands:
{
    flock -s 200
    # ... commands executed under lock ...
} 200>/var/lock/mylockfile
This runs the commands executed under lock in a new I/O context, but does not start a sub-shell. Within the braces, all the commands executed will have file descriptor 200 open to the locked lock file.
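Applied to the loop in the question, the critical section could then be written with braces, so the jobsLeftCount updated inside it is still visible afterwards. A sketch, keeping the question's variable names and lock file descriptor:
{
    flock -e 350
    echo "Message Received"
    while read -r line; do
        # rename the file to be sent to the node as "job"
        mv "$jobLocation$jobName$jobsLeftCount" /home/pi/algo2/Jobs/job
        # transfer the new job to each node that left a message
        scp /home/pi/algo2/Jobs/job "pi@192.168.0.$line:/home/pi/algo2/Jobs/"
        jobsLeftCount=$((jobsLeftCount - 1))
        echo "$line"
    done < "$msgLocation"
    # clear the message file while still holding the lock
    > "$msgLocation"
} 350>> "$msgLocation"
echo "Head node has $jobsLeftCount remaining"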

Re-run bash script if another instance was invoked

I have a bash script that may be invoked multiple times simultaneously. To protect the state information (saved in a /tmp file) that the script accesses, I am using file locking like this:
do_something()
{
    ...
}

# Check if there are any other instances of the script; if so, exit
exec 8>$LOCK
if ! flock -n -x 8; then
    exit 1
fi

# script does something...
do_something
Now any other instance that was invoked while this script was running exits. I want the script to run only one extra time if there were n simultaneous invocations, not n times, something like this:
do_something()
{
    ...
}

# Check if there are any other instances of the script; if so, exit
exec 8>$LOCK
if ! flock -n -x 8; then
    exit 1
fi

# script does something...
do_something

# check if another instance was invoked while we ran; if so, run do_something again
if [ condition ]; then
    do_something
fi
How can I go about doing this? Touching a file inside the flock before quitting and having that file as the condition for the second if doesn't seem to work.
Have one flag (a request file) to signal that something needs doing, and always set it. Have a separate flag (the lock file) that is unset by the execution part.
REQUEST_FILE=/tmp/please_do_something
LOCK_FILE=/tmp/doing_something

# request running
touch $REQUEST_FILE

# lock and run
if ln -s /proc/$$ $LOCK_FILE 2>/dev/null ; then
    while [ -e $REQUEST_FILE ]; do
        do_something
        rm $REQUEST_FILE
    done
    rm $LOCK_FILE
fi
If you want to ensure that "do_something" is run exactly once for each time the whole script is run, then you need to create some kind of a queue. The overall structure is similar.
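One possible queue, sketched below (not part of the answer; the paths are placeholders): every invocation drops a uniquely named request file, and whichever instance holds the lock drains the directory, running do_something once per request. The same small race window as above remains between the last check and removing the lock.
QUEUE_DIR=/tmp/do_something.queue
LOCK_FILE=/tmp/doing_something
mkdir -p "$QUEUE_DIR"

# enqueue a request: one uniquely named file per invocation
mktemp "$QUEUE_DIR/request.XXXXXX" >/dev/null

# lock and drain the queue
if ln -s /proc/$$ "$LOCK_FILE" 2>/dev/null; then
    while reqs=("$QUEUE_DIR"/request.*); [ -e "${reqs[0]}" ]; do
        for req in "${reqs[@]}"; do
            do_something          # assumed to be defined elsewhere
            rm -f "$req"
        done
    done
    rm "$LOCK_FILE"
fi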
They're not everyone's favourite, but I've always been a fan of symbolic links for making lock files, since creating them is atomic. For example:
lockfile=/var/run/`basename $0`.lock
if ! ln -s "pid=$$ when=`date '+%s'` status=$something" "$lockfile"; then
    echo "Can't set lock." >&2
    exit 1
fi
By encoding useful information directly into the link target, you eliminate the race condition introduced by writing to files.
That said, the link that Dennis posted provides much more useful information that you should probably try to understand before writing much more of your script. My example above is sort of related to BashFAQ/045 which suggests doing a similar thing with mkdir.
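For completeness, the mkdir variant from BashFAQ/045 looks roughly like this (my sketch; the lock path is a placeholder): mkdir either creates the directory atomically or fails, and the directory can hold the same metadata the symlink target carried.
lockdir=/var/run/`basename $0`.lock.d
if mkdir "$lockdir" 2>/dev/null; then
    echo "pid=$$ when=`date '+%s'` status=$something" > "$lockdir/info"
    trap 'rm -rf "$lockdir"' EXIT
else
    echo "Can't set lock." >&2
    exit 1
fi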
If I understand your question correctly, then what you want to do might be achieved (slightly unreliably) by using two lock files. If setting the first lock fails, we try the second lock. If setting the second lock fails, we exit. A race still exists if the first lock is deleted after we check it but before we check the second, existing lock. If that level of error is acceptable to you, that's great.
This is untested; but it looks reasonable to me.
#!/usr/local/bin/bash
lockbase="/tmp/test.lock"

setlock() {
    if ln -s "pid=$$" "$lockbase".$1 2>/dev/null; then
        trap "rm \"$lockbase\".$1" 0 1 2 5 15
    else
        return 1
    fi
}

if setlock 1 || setlock 2; then
    echo "I'm in!"
    do_something_amazing
else
    echo "No lock - aborting."
fi
Please see Process Management.

Trace of executed programs called by a Bash script

A script is misbehaving. I need to know who calls that script, and who calls the calling script, and so on, only by modifying the misbehaving script.
This is similar to a stack-trace, but I am not interested in a call stack of function calls within a single bash script.
Instead, I need the chain of executed programs/scripts that is initiated by my script.
A simple script I wrote some days ago...
#! /bin/bash
# FILE      : sctrace.sh
# LICENSE   : GPL v2.0 (only)
# PURPOSE   : print the recursive callers' list for a script
#             (sort of a process backtrace)
# USAGE     : [in a script] source sctrace.sh
#
# TESTED ON :
# - Linux, x86 32-bit, Bash 3.2.39(1)-release
# REFERENCES:
# [1]: http://tldp.org/LDP/abs/html/internalvariables.html#PROCCID
# [2]: http://linux.die.net/man/5/proc
# [3]: http://linux.about.com/library/cmd/blcmdl1_tac.htm

TRACE=""
CP=$$  # PID of the script itself [1]

while true  # safe because "all starts with init..."
do
    CMDLINE=$(cat /proc/$CP/cmdline)
    PP=$(grep PPid /proc/$CP/status | awk '{ print $2; }')  # [2]
    TRACE="$TRACE [$CP]:$CMDLINE\n"
    if [ "$CP" == "1" ]; then  # we reach 'init' [PID 1] => backtrace end
        break
    fi
    CP=$PP
done

echo "Backtrace of '$0'"
echo -en "$TRACE" | tac | grep -n ":"  # using tac to "print in reverse" [3]
... and a simple test.
I hope you like it.
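(The original test isn't reproduced here; a minimal test of my own would look something like this, with sctrace.sh in the current directory.)
cat > callee.sh <<'EOF'
#!/bin/bash
source ./sctrace.sh
EOF
cat > caller.sh <<'EOF'
#!/bin/bash
./callee.sh
EOF
chmod +x caller.sh callee.sh
./caller.sh     # prints a numbered backtrace from init down to callee.sh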
You can use the Bash Debugger: http://bashdb.sourceforge.net/
Or, as mentioned in the previous comments, use the caller bash built-in. See: http://wiki.bash-hackers.org/commands/builtin/caller
i=0; while caller $i ;do ((i++)) ;done
Or as a bash function:
dump_stack(){
    local i=0
    local line_no
    local function_name
    local file_name
    while caller $i; do ((i++)); done | while read line_no function_name file_name; do
        echo -e "\t$file_name:$line_no\t$function_name"
    done >&2
}
Another way to do it is to change PS4 and enable xtrace:
PS4='+$(date "+%F %T") ${FUNCNAME[0]}() $BASH_SOURCE:${BASH_LINENO[0]}+ '
set -o xtrace # Comment this line to disable tracing.
~$ help caller
caller: caller [EXPR]
Returns the context of the current subroutine call.
Without EXPR, returns "$line $filename". With EXPR,
returns "$line $subroutine $filename"; this extra information
can be used to provide a stack trace.
The value of EXPR indicates how many call frames to go back before the
current one; the top frame is frame 0.
Since you say you can edit the script itself, simply put a:
ps -ef >/tmp/bash_stack_trace.$$
in it, where the problem is occurring.
This will create a number of files in your tmp directory that show the entire process list at the time it happened.
You can then work out which process called which other process by examining this output. This can either be done manually, or automated with something like awk, since the output is regular - you just use those PID and PPID columns to work out the relationships between all the processes you're interested in.
You'll need to keep an eye on the files, since you'll get one per process so they may have to be managed. Since this is something that should only be done during debugging, most of the time that line will be commented out (preceded by #), so the files won't be created.
To clean them up, you can simply do:
rm /tmp/bash_stack_trace.*
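To automate the PID/PPID walk described above, something like this sketch would work on one of the saved snapshots (the snapshot name is a placeholder; with ps -ef, the PID is column 2 and the PPID column 3):
snapshot=/tmp/bash_stack_trace.12345    # placeholder: pick one of the saved files
pid=${snapshot##*.}                     # the script's PID is the file suffix
while [ -n "$pid" ] && [ "$pid" -gt 1 ]; do
    awk -v p="$pid" '$2 == p' "$snapshot"                      # print this process's line
    pid=$(awk -v p="$pid" '$2 == p { print $3 }' "$snapshot")  # climb to its parent
done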
UPDATE:
The code below should work. I now have a newer answer with a newer version of the code that allows a message to be inserted into the stack trace.
IIRC I just couldn't find this answer to update it as well at the time. But I have since decided that code is better kept in git, so the latest version of the above should be in this gist.
original code-corrected answer below:
There was another answer about this somewhere, but here is a function for getting a stack trace in the sense used, for example, in the Java programming language. You call the function and it puts the stack trace into the variable $STACK. It shows the code points that led to get_stack being called. This is mostly useful for complicated execution where a single shell sources multiple script snippets and nesting is involved.
function get_stack () {
    STACK=""
    # to avoid noise we start with 1 to skip the get_stack frame itself
    local i
    local stack_size=${#FUNCNAME[@]}
    for (( i=1; i<$stack_size; i++ )); do
        local func="${FUNCNAME[$i]}"
        [ x$func = x ] && func=MAIN
        local linen="${BASH_LINENO[(( i - 1 ))]}"
        local src="${BASH_SOURCE[$i]}"
        [ x"$src" = x ] && src=non_file_source
        STACK+=$'\n'"   "$func" "$src" "$linen
    done
}
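A quick usage example (the function names other than get_stack are mine):
baz() { get_stack; echo "stack trace:${STACK}" >&2; }
bar() { baz; }
foo() { bar; }
foo    # prints the baz <- bar <- foo <- main chain with source files and line numbers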
Adding pstree -p -u `whoami` >> output in your script will probably get you the information you need.
The simplest script which returns a stack trace with all callers:
i=0; while caller $i ;do ((i++)) ;done
You could try something like
strace -f -e execve script.sh
