Allow sh to be run from anywhere - linux

I have been monitoring the performance of my Linux server with ioping (had some performance degradation last year). For this purpose I created a simple script:
echo $(date) | tee -a ../sb-output.log | tee -a ../iotest.txt
./ioping -c 10 . 2>&1 | tee -a ../sb-output.log | grep "requests completed in\|ioping" | grep -v "ioping statistics" | sed "s/^/IOPing I\/O\: /" | tee -a ../iotest.txt
./ioping -RD . 2>&1 | tee -a ../sb-output.log | grep "requests completed in\|ioping" | grep -v "ioping statistics" | sed "s/^/IOPing seek rate\: /" | tee -a ../iotest.txt
etc
The script calls ioping in the folder /home/bench/ioping-0.6. Then it saves the output in readable form in /home/bench/iotest.txt. It also adds the date so I can compare points in time.
Unfortunately I am not an experienced programmer, and this version of the script only works if you first enter the right directory (/home/bench/ioping-0.6).
I would like to call this script from anywhere. For example by calling
sh /home/bench/ioping.sh
Googling this and reading about path variables was a bit over my head. I kept ending up with different versions of
line 3: ./ioping: No such file or directory
Any thoughts on how to upgrade my script so that it works anywhere?

The trick is the shell's $0 variable. This is set to the path of the script.
#!/bin/sh
set -x
# Quote the expansions so paths containing spaces still work.
cd "$(dirname "$0")"
pwd
# Same idea via parameter expansion (requires $0 to contain a slash).
cd "${0%/*}"
pwd
If dirname isn't available for some reason, like some limited busybox distributions, you can try using shell parameter expansion tricks like the second one in my example.
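Applied to the script in the question, a minimal sketch (assuming ioping.sh sits in /home/bench/ioping-0.6, right next to the ioping binary) could start like this:
#!/bin/sh
# Hop to this script's own directory first, so ./ioping and the ../ log
# paths resolve identically no matter where the script is invoked from.
cd "$(dirname "$0")" || exit 1
echo "$(date)" | tee -a ../sb-output.log | tee -a ../iotest.txt
./ioping -c 10 . 2>&1 | tee -a ../sb-output.log | grep "requests completed in\|ioping" | grep -v "ioping statistics" | sed "s/^/IOPing I\/O\: /" | tee -a ../iotest.txt
After that first cd, calling the script by its full path works from any directory.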

Isn't it obvious? ioping is not in . so you can't use ./ioping.
The easiest solution is to set PATH to include the directory where ioping is. Perhaps more robust: figure out the path to $0 and use that path as the location for ioping (assuming your script sits next to ioping).
If ioping itself depends on being run in a certain directory, you might have to make your script cd to the ioping directory before running it.
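For instance, a minimal sketch of the PATH variant (directory names taken from the question):
#!/bin/sh
# Put the ioping directory on PATH so a bare "ioping" is found from anywhere.
PATH="/home/bench/ioping-0.6:$PATH"
ioping -c 10 . 2>&1 | tee -a /home/bench/iotest.txt
Note that the relative log paths (../sb-output.log, ../iotest.txt) then have to become absolute too, since the script no longer controls which directory it is run from.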

Related

bash -- execute command on file change; doubling issue + how to skip loop until command completes

I'm a bash noob, and I am trying to set up a sort of "hot reload" functionality for a project I'm working on using inotifywait. Ubuntu 20.04 if that matters.
Here is what I hoped would have worked:
inotifywait -m -r ../.. -e modify,create,delete |
while read line; do
custom_command
done
I'm having two problems:
Issue #1 is that custom_command takes some time to work, and so if I make more changes to the directory in the meantime, the loop appears to "queue up" further runs of custom_command, where really I just want it to keep the most recent one and drop the others.
Issue #2 is that I'm getting some sort of "double output." So for example if I bash auto-exec.sh and auto-exec.sh looks like this:
inotifywait -m -r . -q -e modify,create,delete
Then each time a change registers, I get this as output (not a mistake that it's doubled -- I get two identical lines each time there is a modification):
./ MODIFY auto-exec-testfile.txt
./ MODIFY auto-exec-testfile.txt
I should note I've tried making changes both with Visual Studio Code and gedit, with the same results.
If I modify the bash file like so:
inotifywait -m -r . -q -e modify,create,delete |
while read line; do
echo "$line"
echo "..."
done
I get the following output each time there is a change:
./ MODIFY auto-exec-testfile.txt
...
./ MODIFY auto-exec-testfile.txt
...
If I modify bash_test.sh to the following:
inotifywait -m -r . -q -e modify,create,delete |
while read line; do
echo "help me..."
done
Then I get the following each time a change is made:
help me...
help me...
What happened to the ./ MODIFY ... line?? Presumably there's something I don't understand about bash, stdout, or similar/related concepts here?
And finally, if I change the .sh file to the following:
inotifywait -m -r . -q -q -e modify,create,delete |
while read _; do
echo "help me..."
done
Then I get no output at all. This one I think I understand, because the -q -q means that inotifywait is in "super silent" mode, so there is no log and therefore nothing to trigger the while.
What I'd love to do is just trigger the code once when something changes, and drop all but the most recent execution. I'm not sure doing this using a while is entirely necessary, but I tried inotifywait -m -r . -q -q -e modify,create,delete | echo "help me...", and the script printed "help me..." once at startup, then exited on modification.
Assistance very much appreciated.
EDIT - 2021-Mar-23
I removed -m and create from the inotifywait line, and it appears to work as expected, except that it doesn't stay "up" in monitor mode. So this at least only gives me one entry from inotifywait:
inotifywait -r .. -q -e modify,delete |
while read line1; do
echo ${line1}
done
Related:
inotifywait - pause monitoring while executing command
https://unix.stackexchange.com/questions/140679/using-inotify-to-monitor-a-directory-but-not-working-100
inotifywait not performing the while loop in bash script

while inotifywait -e close_write,delete .; do
pkill custom_command    # kill any run still in progress; only the latest change matters
custom_command &        # start a fresh run for the most recent change
done

ssh tail with nested ls and head cannot access

I am trying to execute the following command:
$ ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
ls: cannot access /var/log/alert_ARCDB.log: No such file or directory
tail: cannot follow `-' by name
Notice the error returned. When I log in over ssh separately and then execute
tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )
see the below:
# ls -t /var/log/alert_ARCDB.log | head -n1
/var/log/alert_ARCDB.log
Why is that happening and how can I fix it? I am trying to do this in one line because I don't want to create a script file.
Thanks a lot
Shell parameter expansion happens before command execution.
Here's a simple example. If I type...
ls "$HOME"
...the shell replaces $HOME with the path to my home directory first, then runs something like ls /home/larsks. The ls command has no idea that the command line originally had $HOME.
If we look at your command...
$ ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
...we see that you're in exactly the same situation. The $(ls -t ...) expression is expanded before ssh is executed. In other words, that command is running on your local system.
You can inhibit the shell expansion on your local system by using single quotes. For example, running:
echo '$HOME'
Will produce:
$HOME
So you can run:
ssh root@10.10.10.50 'tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )'
But there's another problem here. If /var/log/alert_ARCDB.log is a file, your command makes no sense: calling ls -t on a single file buys you nothing.
If alert_ARCDB.log is a directory, you have a different problem. The result of ls /some/directory is a list of filenames without any directory prefix. If I run something like:
ls -t /tmp
I will get output like
file1
file2
If I do this:
tail $(ls -t /tmp | head -1)
I end up with a command that looks like:
tail file1
And that will fail, because there is no file1 in my current directory.
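Note that globbing with an absolute path sidesteps that particular problem, because the expanded names carry their directory prefix (a sketch, still using /tmp as the example directory):
tail "$(ls -t /tmp/* | head -1)"    # ls now prints /tmp/file1, not file1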
One approach would be to pipe the commands you want to perform to ssh. One simple way to achieve that is to first create a function that will echo the commands you want executed:
remote_commands()
{
echo 'cd /var/log/alert_ARCDB.log'
echo 'tail -F -n 1 "$(ls -t | head -n1 )"'
}
The cd will allow you to use the relative path listed by ls. The single quotes make sure that everything will be sent as-is to the remote shell, with no local expansion occurring.
Then you can do
ssh root@10.10.10.50 bash < <(remote_commands)
This assumes alert_ARCDB.log is a directory (or else I am not sure why you would want to add head -n1 after that).

Find the current shell of the user using a shell script [duplicate]

How can I determine the current shell I am working on?
Would the output of the ps command alone be sufficient?
How can this be done in different flavors of Unix?
There are three approaches to finding the name of the current shell's executable:
Please note that all three approaches can be fooled if the executable of the shell is /bin/sh, but it's really a renamed bash, for example (which frequently happens).
Thus your second question of whether ps output will do is answered with "not always".
echo $0 - will print the program name... which in the case of the shell is the actual shell.
ps -ef | grep $$ | grep -v grep - this will look for the current process ID in the list of running processes. Since the current process is the shell, it will be included.
This is not 100% reliable, as you might have other processes whose ps listing includes the same number as shell's process ID, especially if that ID is a small number (for example, if the shell's PID is "5", you may find processes called "java5" or "perl5" in the same grep output!). This is the second problem with the "ps" approach, on top of not being able to rely on the shell name.
echo $SHELL - The path to the current shell is stored as the SHELL variable for any shell. The caveat for this one is that if you launch a shell explicitly as a subprocess (for example, it's not your login shell), you will get your login shell's value instead. If that's a possibility, use the ps or $0 approach.
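A quick way to try all three side by side (a sketch; the output will vary with your system and shell):
echo "\$0 says:     $0"
echo "\$SHELL says: $SHELL"
ps -ef | grep $$ | grep -v grep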
If, however, the executable doesn't match your actual shell (e.g. /bin/sh is actually bash or ksh), you need heuristics. Here are some environmental variables specific to various shells:
$version is set on tcsh
$BASH is set on bash
$shell (lowercase) is set to actual shell name in csh or tcsh
$ZSH_NAME is set on zsh
ksh has $PS3 and $PS4 set, whereas the normal Bourne shell (sh) only has $PS1 and $PS2 set. This generally seems like the hardest to distinguish - the only difference in the entire set of environment variables between sh and ksh we have installed on Solaris boxen is $ERRNO, $FCEDIT, $LINENO, $PPID, $PS3, $PS4, $RANDOM, $SECONDS, and $TMOUT.
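A best-effort sketch that chains some of these heuristics (Bourne-family shells only, since csh or tcsh won't parse this syntax at all; the ksh test relies on the $PS3 observation above):
if   [ -n "${BASH:-}" ];     then echo "looks like bash ($BASH)"
elif [ -n "${ZSH_NAME:-}" ]; then echo "looks like zsh"
elif [ -n "${PS3:-}" ];      then echo "possibly ksh (\$PS3 is set)"
else                              echo "plain sh, or something pretending to be"
fi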
ps -p $$
should work anywhere that the solutions involving ps -ef and grep do (on any Unix variant which supports POSIX options for ps) and will not suffer from the false positives introduced by grepping for a sequence of digits which may appear elsewhere.
Try
ps -p $$ -oargs=
or
ps -p $$ -ocomm=
If you just want to ensure the user is invoking a script with Bash:
if [ -z "$BASH" ]; then echo "Please run this script $0 with bash"; exit; fi
or, to re-run the script under Bash automatically:
if [ -z "$BASH" ]; then exec bash $0 ; exit; fi
You can try:
ps | grep `echo $$` | awk '{ print $4 }'
Or:
echo $SHELL
$SHELL need not always show the current shell. It only reflects the default shell to be invoked.
To test the above, say bash is the default shell, try echo $SHELL, and then in the same terminal, get into some other shell (KornShell (ksh) for example) and try echo $SHELL again. You will see the result as bash in both cases.
To get the name of the current shell, Use cat /proc/$$/cmdline. And the path to the shell executable by readlink /proc/$$/exe.
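For example (Linux-specific, since both rely on /proc; the cmdline file is NUL-separated, hence the tr):
tr '\0' ' ' < /proc/$$/cmdline; echo    # name as invoked, e.g. "-bash"
readlink /proc/$$/exe                   # resolved binary, e.g. /usr/bin/bash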
There are many ways to find out the shell and its corresponding version. Here are few which worked for me.
Straightforward
$> echo $0 (Gives you the program name. In my case the output was -bash.)
$> $SHELL (This takes you into the shell and in the prompt you get the shell name and version. In my case bash3.2$.)
$> echo $SHELL (This will give you executable path. In my case /bin/bash.)
$> $SHELL --version (This will give complete info about the shell software with license type)
Hackish approach
$> ******* (Type a set of random characters and in the output you will get the shell name. In my case -bash: chapter2-a-sample-isomorphic-app: command not found)
ps is the most reliable method. The SHELL environment variable is not guaranteed to be set and even if it is, it can be easily spoofed.
I have a simple trick to find the current shell. Just type a random string (which is not a command). It will fail and return a "not found" error, but at start of the line it will say which shell it is:
ksh: aaaaa: not found [No such file or directory]
bash: aaaaa: command not found
I have tried many different approaches and the best one for me is:
ps -p $$
It also works under Cygwin and, unlike PID grepping, cannot produce false positives. With some cleaning, it outputs just an executable name (under Cygwin with path):
ps -p $$ | tail -1 | awk '{print $NF}'
You can create a function so you don't have to memorize it:
# Print currently active shell
shell () {
ps -p $$ | tail -1 | awk '{print $NF}'
}
...and then just execute shell.
It was tested under Debian and Cygwin.
The following will always give the actual shell used - it gets the name of the actual executable and not the shell name (i.e. ksh93 instead of ksh, etc.). For /bin/sh, it will show the actual shell used, i.e. dash.
ls -l /proc/$$/exe | sed 's%.*/%%'
I know that there are many who say the ls output should never be processed, but what is the probability you'll have a shell you are using that is named with special characters or placed in a directory named with special characters? If this is still the case, there are plenty of other examples of doing it differently.
As pointed out by Toby Speight, this would be a more proper and cleaner way of achieving the same:
basename $(readlink /proc/$$/exe)
My variant on printing the parent process:
ps -p $$ | awk '$1 == PP {print $4}' PP=$$
Don't run unnecessary applications when AWK can do it for you.
Provided that your /bin/sh supports the POSIX standard and your system has the lsof command installed - a possible alternative to lsof could in this case be pid2path - you can also use (or adapt) the following script that prints full paths:
#!/bin/sh
# cat /usr/local/bin/cursh
set -eu
pid="$$"
set -- sh bash zsh ksh ash dash csh tcsh pdksh mksh fish psh rc scsh bournesh wish Wish login
unset echo env sed ps lsof awk getconf
# getconf _POSIX_VERSION # reliable test for availability of POSIX system?
PATH="`PATH=/usr/bin:/bin:/usr/sbin:/sbin getconf PATH`"
[ $? -ne 0 ] && { echo "'getconf PATH' failed"; exit 1; }
export PATH
cmd="lsof"
env -i PATH="${PATH}" type "$cmd" 1>/dev/null 2>&1 || { echo "$cmd not found"; exit 1; }
awkstr="`echo "$@" | sed 's/\([^ ]\{1,\}\)/|\/\1/g; s/ /$/g' | sed 's/^|//; s/$/$/'`"
ppid="`env -i PATH="${PATH}" ps -p $pid -o ppid=`"
[ "${ppid}"X = ""X ] && { echo "no ppid found"; exit 1; }
lsofstr="`lsof -p $ppid`" ||
{ printf "%s\n" "lsof failed" "try: sudo lsof -p \`ps -p \$\$ -o ppid=\`"; exit 1; }
printf "%s\n" "${lsofstr}" |
LC_ALL=C awk -v var="${awkstr}" '$NF ~ var {print $NF}'
My solution:
ps -o command | grep -v -e "\<ps\>" -e grep -e tail | tail -1
This should be portable across different platforms and shells. It uses ps like other solutions, but it doesn't rely on sed or awk and filters out junk from piping and ps itself so that the shell should always be the last entry. This way we don't need to rely on non-portable PID variables or picking out the right lines and columns.
I've tested on Debian and macOS with Bash, Z shell (zsh), and fish (which doesn't work with most of these solutions without changing the expression specifically for fish, because it uses a different PID variable).
If you just want to check that you are running (a particular version of) Bash, the best way to do so is to use the $BASH_VERSINFO array variable. As a (read-only) array variable it cannot be set in the environment,
so you can be sure it is coming (if at all) from the current shell.
However, since Bash has a different behavior when invoked as sh, you do also need to check that the $BASH environment variable ends with /bash.
In a script I wrote that uses function names with - (not underscore), and depends on associative arrays (added in Bash 4), I have the following sanity check (with helpful user error message):
case `eval 'echo $BASH#${BASH_VERSINFO[0]}' 2>/dev/null` in
*/bash#[456789])
# Claims bash version 4+, check for func-names and associative arrays
if ! eval "declare -A _ARRAY && func-name() { :; }" 2>/dev/null; then
echo >&2 "bash $BASH_VERSION is not supported (not really bash?)"
exit 1
fi
;;
*/bash#[123])
echo >&2 "bash $BASH_VERSION is not supported (version 4+ required)"
exit 1
;;
*)
echo >&2 "This script requires BASH (version 4+) - not regular sh"
echo >&2 "Re-run as \"bash $CMD\" for proper operation"
exit 1
;;
esac
You could omit the somewhat paranoid functional check for features in the first case, and just assume that future Bash versions would be compatible.
None of the answers worked with fish shell (it doesn't have the variables $$ or $0).
This works for me (tested on sh, bash, fish, ksh, csh, true, tcsh, and zsh; openSUSE 13.2):
ps | tail -n 4 | sed -E '2,$d;s/.* (.*)/\1/'
This command outputs a string like bash. Here I'm only using ps, tail, and sed (without GNU extensions; try adding --posix to check). They are all standard POSIX commands. I'm sure tail can be removed, but my sed fu is not strong enough to do this.
It seems to me, that this solution is not very portable as it doesn't work on OS X. :(
echo $$ # Gives the current shell's process ID
ps -ef | grep $$ | awk '{print $8}' # Use the PID to see what the process is.
From How do you know what your current shell is?.
This is not a very clean solution, but it does what you want.
# MUST BE SOURCED..
getshell() {
local shell="`ps -p $$ | tail -1 | awk '{print $4}'`"
shells_array=(
# It is important that the shells are listed in descending order of their name length.
pdksh
bash dash mksh
zsh ksh
sh
)
local suited=false
for i in ${shells_array[*]}; do
if ! [ -z `printf $shell | grep $i` ] && ! $suited; then
shell=$i
suited=true
fi
done
echo $shell
}
getshell
Now you can use $(getshell) --version.
This works, though, only on KornShell-like shells (ksh).
Do the following to know whether your shell is using Dash/Bash.
ls -la /bin/sh
if the result is /bin/sh -> /bin/bash ==> Then your shell is using Bash.
if the result is /bin/sh ->/bin/dash ==> Then your shell is using Dash.
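A less fragile variant of the same check, assuming /bin/sh really is a symlink as on Debian-family systems, is to ask readlink directly:
readlink -f /bin/sh    # prints e.g. /usr/bin/dash or /bin/bash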
If you want to change from Bash to Dash or vice-versa, use the below code:
ln -s /bin/bash /bin/sh (change shell to Bash)
Note: If the above command results in an error saying /bin/sh already exists, remove /bin/sh and try again.
I like Nahuel Fouilleul's solution particularly, but I had to run the following variant of it on Ubuntu 18.04 (Bionic Beaver) with the built-in Bash shell:
bash -c 'shellPID=$$; ps -ocomm= -q $shellPID'
Without the temporary variable shellPID, e.g. the following:
bash -c 'ps -ocomm= -q $$'
would just output ps for me. (Presumably bash execs a lone -c command in the same process as an optimization, so by the time ps runs, the PID that $$ expanded to belongs to ps itself.) Maybe you aren't all using non-interactive mode, and that makes a difference.
Get it with the $SHELL environment variable. A simple sed could remove the path:
echo $SHELL | sed -E 's/^.*\/([a-zA-Z]+)$/\1/'
Output:
bash
It was tested on macOS, Ubuntu, and CentOS.
On Mac OS X (and FreeBSD):
ps -p $$ -axco command | sed -n '$p'
Grepping PID from the output of "ps" is not needed, because you can read the respective command line for any PID from the /proc directory structure:
echo $(cat /proc/$$/cmdline)
However, that might not be any better than just simply:
echo $0
About running an actually different shell than the name indicates, one idea is to request the version from the shell using the name you got previously:
<some_shell> --version
sh seems to fail with exit code 2 while others give something useful (but I am not able to verify all since I don't have them):
$ sh --version
sh: 0: Illegal option --
echo $?
2
One way is:
ps -p $$ -o exe=
which is IMO better than using -o args or -o comm as suggested in another answer (those may report a symbolic link name, e.g. when /bin/sh points to some specific shell such as Dash or Bash).
The above returns the path of the executable, but beware that due to /usr-merge, one might need to check for multiple paths (e.g., /bin/bash and /usr/bin/bash).
Also note that the above is not fully POSIX-compatible (POSIX ps doesn't have exe).
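One way to stay indifferent to the /usr-merge paths is to match only the trailing component of what ps reports; a sketch (only for systems where -o exe is supported):
exe=$(ps -p $$ -o exe= 2>/dev/null)
case "$exe" in
    */bash) echo "bash ($exe)" ;;
    */dash) echo "dash ($exe)" ;;
    */zsh)  echo "zsh ($exe)" ;;
    *)      echo "something else: $exe" ;;
esac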
Kindly use the below command:
ps -p $$ | tail -1 | awk '{print $4}'
This one works well on Red Hat Linux (RHEL), macOS, BSD and some AIXes:
ps -T $$ | awk 'NR==2{print $NF}'
alternatively, the following one should also work if pstree is available,
pstree | egrep $$ | awk 'NR==2{print $NF}'
You can use echo $SHELL|sed "s/\/bin\///g"
And I came up with this:
sed 's/.*SHELL=//; s/[[:upper:]].*//' /proc/$$/environ

Bash grep command finding the same file 5 times

I'm building a little bash script to run another bash script that's found in multiple directories. Here's the code:
cd /home/mainuser/CaseStudies/
grep -R -o --include="Auto.sh" [\w] | wc -l
When I execute just that part, it finds the same file 5 times in each folder. So instead of getting 49 results, I get 245. I've written a recursive bash script before and I used it as a template for this problem:
grep -R -o --include=*.class [\w] | wc -l
This code has always worked perfectly, without any duplication. I've tried running the first code with and without the " ", I've tried -r as well. I've read through the bash documentation and I can't seem to find a way to prevent, or even why I'm getting, this duplication. Any thoughts on how to get around this?
As a separate but related question: I'd like to launch Auto.sh inside each directory so that the output of Auto.sh is dumped into that directory, without having to place Auto.sh in each folder. That would probably be much more efficient than what I'm currently doing, and it would also probably fix my current duplication problem.
This is the code for Auto.sh:
#!/bin/bash
index=1
cd /home/mainuser/CaseStudies/
grep -R -o --include=*.class [\w] | wc -l
grep -R -o --include=*.class [\w] |awk '{print $3}' > out.txt
while read LINE; do
echo 'Path '$LINE > 'Outputs/ClassOut'$index'.txt'
javap -c $LINE >> 'Outputs/ClassOut'$index'.txt'
index=$((index+1))
done <out.txt
Preferably I would like to make it dump only the javap outputs for the application it's currently looking at. Since those .class files could be in any number of sub-directories, I'm not sure how to make them all dump in the top folder without executing a modified Auto.sh in the top directory of each application.
OK, so to fix the multiple matches:
grep -R -o --include="Auto.sh" [\w] | wc -l
Should be:
grep -R -l --include=Auto.sh '\w' | wc -l
The reason this was happening was that grep was looking for instances of the letter w in Auto.sh, which occurred 5 times in the file.
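The difference is easy to reproduce in isolation, with the Auto.sh from the question (where w occurs 5 times):
grep -o 'w' Auto.sh | wc -l    # one output line per match: 5
grep -l 'w' Auto.sh | wc -l    # one output line per matching file: 1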
However, the overall fix that doesn't require having to place Auto.sh in every directory, is something like this:
MAIN_DIR=/home/mainuser/CaseStudies/
cd $MAIN_DIR
ls -d */ > DirectoryList.txt
while read LINE; do
cd $LINE
mkdir ProjectOutputs
bash /home/mainuser/Auto.sh
cd $MAIN_DIR
done <DirectoryList.txt
That calls this Auto.sh code:
index=1
grep -R -o --include=*.class '\w' | wc -l
grep -R -o --include=*.class '\w' | awk '{print $3}' > ProjectOutputs.txt
while read LINE; do
echo 'Path '$LINE > 'ProjectOutputs/ClassOut'$index'.txt'
javap -c $LINE >> 'ProjectOutputs/ClassOut'$index'.txt'
index=$((index+1))
done <ProjectOutputs.txt
Thanks again for everyone's help!

Run ssh in shell script in parallel and set remote variables

I'm writing a script to read from an input file, which contains ~1000 lines of host info. The script sshes to each host, cds to the remote host's log directory, and cats the latest daily log file. Then I redirect the cat'ed log locally to do some pattern matching and statistics.
The simplified structure of my program is a while loop that looks like this:
while read host
do
ssh -n name@$host "cd TO LOG DIR AND cat THE LATEST LOGFILE" | matchPattern
done << EOA
$(awk -F, '{print $7}' $FILEIN)
EOA
where matchPattern is a function to match pattern and do statistics.
Right now I got 2 questions for this:
1) How do I find the latest daily log file remotely? The latest log file name matches xxxx2012-05-02.log and is the newest created. Is it possible to do ls remotely and find the file matching the xxxx2012-05-02.log file name? (I can do this locally but get jammed when appending it to the ssh command.) Another way I could come up with is to do
cat `ls -t | head -1` or
cat $(ls -t | head -1)
However, if I append this to ssh, it will list my local newest created file name. Can we set this to a remote variable so that cat will find the correct file?
2) As there are nearly 1000 hosts, I'm wondering whether I can do this in parallel (say, 20 ssh sessions at a time, starting the next 20 after the first 20 finish). Appending & to each ssh does not seem sufficient to accomplish this.
Any ideas would be greatly appreciated!
Follow up:
Hi everyone, I finally found a crappy way to solve the first problem by doing this:
ssh -n name@$host "cd $logDir; cat *$logName" | matchPattern
where $logName is "today's date".log (2012-05-02.log). The catch is that I can only use local variables within the double quotes. Since my log file ends with 2012-05-02.log, and no other file ends with this suffix, I just blindly cat *2012-05-02.log on the remote machine and it cats the desired file for me.
For your first question,
ssh -n name@$host 'cat $(ls -t /path/to/log/dir/*.log | head -n 1)'
should work. Note single quotes around the remote command.
For your second question, wrap all the ssh | matchPattern | analyse stuff into its own function, then iterate over it by
outstanding=0
while read host
do
sshMatchPatternStuff &
outstanding=$((outstanding + 1))
if [ $outstanding -ge 20 ] ; then
wait   # a bare wait blocks until the whole batch of background jobs has finished
outstanding=0
fi
done << EOA
$(awk -F, '{print $7}' $FILEIN)
EOA
wait   # collect any jobs left over from the final partial batch
(I assume you're using bash.)
It may be better to separate the ssh | matchPattern | analyse stuff into its own script, and then use a parallel variant of xargs to call it.
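A sketch of that xargs variant, with the per-host work moved into a hypothetical helper script fetch_and_match.sh that takes one host argument (-P keeps a rolling window of 20 jobs rather than strict batches; it is a GNU/BSD extension):
awk -F, '{print $7}' "$FILEIN" | xargs -n 1 -P 20 ./fetch_and_match.sh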
For your second question, take a look at pdsh, the parallel distributed shell:
http://sourceforge.net/projects/pdsh/
If you have GNU Parallel http://www.gnu.org/software/parallel/ installed you can do this:
parallel -j0 --nonall --slf <(awk -F, '{print $7}' servers.txt) 'cd logdir; cat `ls -t | head -1` | grep pattern'
This way you get the matching done on the remote server. If you prefer to transfer the full log file and do the matching locally, simply move the grep outside:
parallel -j0 --nonall --slf <(awk -F, '{print $7}' servers.txt) 'cd logdir; cat `ls -t | head -1`' | grep pattern
You can install GNU Parallel simply by:
wget http://git.savannah.gnu.org/cgit/parallel.git/plain/src/parallel
chmod 755 parallel
cp parallel sem
Watch the intro videos for GNU Parallel to learn more:
https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
