Run all shell scripts in a folder - Linux

I have many .sh scripts in a single folder and would like to run them one after another. A single script can be executed as:
bash wget-some_long_number.sh -H
Assume my directory is /dat/dat1/files
How can I run bash wget-some_long_number.sh -H one after another?
I understand something along these lines should work:
for i in *.sh;...do ....; done

Use this:
for f in *.sh; do
  bash "$f"
done
If you want to stop the whole execution when a script fails:
for f in *.sh; do
  bash "$f" || break # execute successfully or break
  # Or more explicitly: if this execution fails, then stop the `for`:
  # if ! bash "$f"; then break; fi
done
If you want to run, e.g., x1.sh, x2.sh, ..., x10.sh:
for i in $(seq 1 10); do
  bash "x$i.sh"
done
To stop on the first failure while preserving the exit code of the failed script (responding to @VespaQQ):
#!/bin/bash
set -e
for f in *.sh; do
  bash "$f"
done

There is a much simpler way: you can use the run-parts command, which will execute all the scripts in the folder:
run-parts /path/to/folder
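One caveat worth hedging: Debian-style run-parts ignores file names containing a dot by default, so *.sh scripts may be silently skipped; where the --regex option is available you can relax that, and --test previews what would actually run:
run-parts --test --regex '\.sh$' /dat/dat1/files   # list what would be executed
run-parts --regex '\.sh$' /dat/dat1/files          # actually execute them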

I ran into this problem where I couldn't use loops, and run-parts works with cron.
Answer:
foo () {
  bash "$1" -H   # run one script; -H is passed to the script, as in the question
  #echo $1
  #cat $1
}
cd /dat/dat1/files # change directory
export -f foo # export foo so the subshells started by parallel can see it
parallel foo ::: *.sh # roughly equivalent to putting a & between each script
This uses GNU parallel: it executes every script in the directory, with the added benefit that they run concurrently, so the whole batch finishes much faster. And it isn't limited to script execution; you can put any command in the function and it will work.
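If launching every script at once is too aggressive, GNU parallel's -j option caps the number of concurrent jobs (a small sketch; 4 is an arbitrary choice):
parallel -j4 foo ::: *.sh   # at most four scripts running at a time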

Related

How to run shell script commands in an sh file in parallel?

I'm trying to take backup of tables in my database server.
I have around 200 tables. I have a shell script that contains commands to take backups of each table like:
backup.sh
psql -u username ..... table1 ... file1;
psql -u username ..... table2 ... file2;
psql -u username ..... table3 ... file3;
I can run the script and create backups on my machine. But as there are 200 tables, it runs the commands sequentially and takes a lot of time.
I want to run the backup commands in parallel. I have seen articles wherein they suggested using && after each command, or using nohup or wait.
But I don't want to edit the script and include around 200 such commands.
Is there any way to run this list of shell commands in parallel, something like Node.js does? Is it possible, or am I looking at it wrong?
Sample command in the script:
psql --host=somehost --port=5490 --username=user --dbname=db -c '\copy dbo.tablename TO "/home/username/Desktop/PostgresFiles/tablename.csv" with DELIMITER ","';
You can leverage xargs to run commands in parallel AND control the number of concurrent jobs. Running 200 backup jobs might overwhelm your database and result in less than optimal performance.
Assuming you have backup.sh with one backup command per line:
xargs -P5 -I{} bash -c "{}" < backup.sh
Where -P5 controls the number of concurrent jobs. Note that xargs applies its own quote processing to each input line, so the commands in backup.sh should be adjusted to survive it: use single quotes where possible and escape the double quotes that remain. For the sample command above, that means changing "\copy ..." to '\copy ...' and escaping the quotes around the file name and the delimiter:
psql --host=somehost --port=5490 --username=user --dbname=db -c '\copy dbo.tablename TO \"/home/username/Desktop/PostgresFiles/tablename.csv\" with DELIMITER \",\"';
A simpler alternative is to use a helper script, backup-table.sh, which takes the table (and, if you like, the output file) as parameters, and drive it with:
xargs -P5 -I{} backup-table.sh "{}" < tables.txt
with all the complex quoting kept inside backup-table.sh.
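A hedged sketch of what that helper could look like, assuming it takes just the table name and derives the output file from it (the paths and connection details below mirror the question and are otherwise placeholders):
#!/bin/bash
# backup-table.sh: dump one table to a CSV named after it
table=$1
psql --host=somehost --port=5490 --username=user --dbname=db \
     -c "\copy dbo.${table} TO '/home/username/Desktop/PostgresFiles/${table}.csv' with DELIMITER ','"
With tables.txt holding one table name per line, the xargs line above runs five of these at a time.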
Or, with GNU parallel, you can generate the table list straight from the database and run a shell function per table:
doit() {
  table=$1
  psql --host=somehost --port=5490 --username=user --dbname=db -c '\copy dbo.'$table' TO "/home/username/Desktop/PostgresFiles/'$table'.csv" with DELIMITER ","';
}
export -f doit
sql --listtables -n postgresql://user:pass@host:5490/db | parallel -j0 doit
Is there any logic in the script other than individual commands (e.g. ifs or processing of output)?
If it's just a file with a list of commands, you could write a wrapper for the script (or a loop from the CLI), e.g.:
$ cat help.txt
echo 1
echo 2
echo 3
$ while read -r i;do bash -c "$i" &done < help.txt
[1] 18772
[2] 18773
[3] 18774
1
2
3
[1] Done bash -c "$i"
[2]- Done bash -c "$i"
[3]+ Done bash -c "$i"
$ while read -r i;do bash -c "$i" &done < help.txt
[1] 18820
[2] 18821
[3] 18822
2
3
1
[1] Done bash -c "$i"
[2]- Done bash -c "$i"
[3]+ Done bash -c "$i"
Each line of help.txt contains a command, and I run a loop where I take each command and run it in a subshell. (This is a simple example where I just background each job; you could get more complex using something like xargs -P or parallel, but this is a starting point.)
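If the caller needs to block until all the backgrounded lines have finished, a wait after the loop does exactly that (same idea as above, just one more line):
while read -r i; do bash -c "$i" & done < help.txt
wait   # returns once every backgrounded command has exited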

What kind of command is "sudo", "su", or "torify"

I know what they do. I was just wondering what kind of commands they are, and how you can make one using shell scripting.
For example, command like:
ignoreError ls /Home/
ignoreError mkdir /Home/
ignoreError cat
ignoreError randomcommand
Hope you get the idea
The way to do it in a shell script is with the "$@" construct.
"$@" expands to a quoted list of all of the arguments you passed to your shell script. $1 would be the command you want your shell script to run, and $2, $3, etc. are the arguments to that command.
The only example I have is from Cygwin. Cygwin does not have sudo, but I have this script that emulates it:
#!/usr/bin/bash
cygstart --action=runas "$@"
So when I run a command like
$ sudo ls -l
my sudo script does whatever it needs to do (cygstart --action=runas) and calls the ls command with the -l argument.
Try this script:
#!/bin/sh
"$#"
Call it, for example, run, make it runnable chmod u+x run, and try it:
$ run ls -l #or ./run ls -l
...
output of ls
...
The idea is that the script takes the parameters specified on the command line and uses them as a (sub)command. Modify the script this way:
#!/bin/sh
echo "Trying to run $*"
"$@"
and you will see.
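As a concrete sketch of the ignoreError wrapper the question asks for (the name comes from the question; treating "ignore error" as "swallow the exit status" is my assumption):
#!/bin/sh
# ignoreError: run the given command with its arguments and always report success
"$@" || true
Make it executable (chmod u+x ignoreError) and call it as ./ignoreError mkdir /Home/ and so on.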

launch process in background and modify it from bash script

I'm creating a bash script that will run a process in the background, which creates a socket file. The socket file then needs to be chmod'd. The problem I'm having is that the socket file isn't being created before trying to chmod the file.
Example source:
#!/bin/bash
# first create folder that will hold socket file
mkdir /tmp/myproc
# now run process in background that generates the socket file
node ../main.js &
# finally chmod the thing
chmod /tmp/myproc/*.sock
How do I delay the execution of the chmod until after the socket file has been created?
The easiest way I know to do this is to busywait for the file to appear. Conveniently, ls returns non-zero when the file it is asked to list doesn't exist; so just loop on ls until it returns 0, and when it does you know you have at least one *.sock file to chmod.
#!/bin/sh
echo -n "Waiting for socket to open.."
( while [ ! $(ls /tmp/myproc/*.sock) ]; do
    echo -n "."
    sleep 2
  done ) 2> /dev/null
echo ". Found"
If this is something you need to do more than once wrap it in a function, but otherwise as is should do what you need.
EDIT:
As pointed out in the comments, using ls like this is inferior to a -e test, so the rewritten script below is to be preferred. (I have also corrected the shebang, as echo -n is not supported by every platform's /bin/sh.)
#!/bin/bash
echo -n "Waiting for socket to open.."
while [ ! -e /tmp/myproc/*.sock ]; do
  echo -n "."
  sleep 2
done
echo ". Found"
Test to see if the file exists before proceeding:
while [[ ! -e filename ]]
do
  sleep 1
done
If you set your umask (try umask 0), you may not have to chmod at all. If you still don't get the right permissions, check whether node has options to change that.
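A minimal sketch of that umask idea, assuming the goal is simply for the socket to come up with permissive permissions instead of being chmod'd afterwards (whether it applies depends on how main.js creates the socket):
#!/bin/bash
mkdir -p /tmp/myproc
umask 0            # files/sockets created by the child start out world-accessible
node ../main.js &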

Bash script runs normally when executed manually but not with crontab

Hello, I have a script like this one:
#!/usr/bin/bash
ARSIP=/apps/bea/scripts/arsip
CURDIR=/apps/bea/scripts
OUTDIR=/apps/bea/scripts/out
DIRLOG=/apps/bea/jboss-6.0.0/server/default/log
LISTFILE=$CURDIR/tmp/file.$$
DATE=`perl -e 'use POSIX; print strftime "%Y-%m-%d", localtime time-86400;'`
JAVACMD=/apps/bea/jdk1.6.0_26/bin/sparcv9/java
HR=00
for (( c=0; c<24; c++ ))
do
  echo $DATE $HR
  $JAVACMD -jar LatencyCounter.jar LatencyCounter.xml $DATE $HR
  sleep 1
  cd $OUTDIR
  mv btw_120-180.txt btw_120-180-$DATE-$HR.txt
  mv btw_180-360.txt btw_180-360-$DATE-$HR.txt
  mv btw_60-120.txt btw_60-120-$DATE-$HR.txt
  mv failed_to_deliver.txt failed_to_deliver-$DATE-$HR.txt
  mv gt_360.txt gt_360-$DATE-$HR.txt
  mv out.log out-$DATE-$HR.log
  cd -
  let HR=10#$HR+1
  HR=$(printf %02d $HR);
done
cd $OUTDIR
tar -cf latency-$DATE.tar btw*-$DATE-*.txt gt*$DATE*.txt out-$DATE-*.log
sleep 300
gzip latency-$DATE.tar
sleep 300
/apps/bea/scripts/summaryLatency.sh
sleep 300
rm -f btw* failed* gt* out*
#mv latency-$DATE.tar.gz ../$ARSIP
cd -
It basically executes jar files in the same directory as this script, then tars the results, gzips the archive, executes another bash script, and finally deletes all of the previously collected files. The problem is that I need this script to run daily, and I use crontab to do that. It keeps producing an empty tar file, but if I execute the script manually it works well. I also have four other scripts running in crontab and they work fine; I still can't figure out the main reason for this behaviour.
Thank you
I'll take a stab: your script is run by /bin/sh instead of /bin/bash.
Try explicitly running it with bash in the cron entry, like this:
* * * * * /bin/bash /your/script
I'm guessing that when you execute $JAVACMD -jar LatencyCounter.jar LatencyCounter.xml $DATE $HR, you're not in the directory containing LatencyCounter.jar. You might want to cd $CURDIR before you enter the for loop.
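A minimal sketch of that suggestion (cron normally starts jobs in the user's home directory, so relative references like LatencyCounter.jar resolve to the wrong place unless the script changes directory itself):
# near the top of the script, before the for loop
cd "$CURDIR" || exit 1   # /apps/bea/scripts, where LatencyCounter.jar lives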

My shell script stops after exec

I'm writing a shell script that looks like this:
for i in $ACTIONS_DIR/*
do
  if [ -x $i ]; then
    exec $i nap
  fi
done
Now, what I'm trying to achieve is to list every file in $ACTIONS_DIR to be able to execute it. Each file under $ACTIONS_DIR is another shell script.
Now, the problem here is that after using exec the script stops and doesn't go to the next file in line. Any ideas why might this be?
exec replaces the shell process. Remove it if you only want to call the command as a subprocess instead.
exec transfers control of the PID over to the program you're exec'ing. This is mainly used in scripts whose sole purpose is to set up options to that program. Once the exec is hit, nothing below it in the script is executed.
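To illustrate that legitimate use (a hedged sketch; the program path and variable are made up): a wrapper whose only job is to set things up and then replace itself with the real program:
#!/bin/bash
# wrapper: prepare the environment, then hand this PID over to the real program
export APP_OPTS="--verbose"           # hypothetical setup, for illustration only
exec /usr/local/bin/real-program "$@"
# nothing after the exec line ever runs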
Also, you should try some quoting techniques:
for i in "$ACTIONS_DIR"/*
do
if [ -x "$i" ]; then
"./$i" nap
fi
done
You might also look into using find for this operation:
find "$ACTIONS_DIR" \
-maxdepth 1 \
-type f \
-perm +0111 \
-exec {} nap \;
exec never returns to the caller. Just try:
if [ -x "${i}" ]
then
  "${i}" nap
fi
