BASH script: integrated document creation hangs - Linux

I've found that a piece of my bash script causes the hang-up. I've extracted it here:
#!/bin/bash
cat << EndOfFspreadFile >> ./myscript.sh
echo Enter Source Path :
read SRCPATH
FILECNT=`find $SRCPATH/* 2>/dev/null | wc -l`
FILECNTERR=`find $SRCPATH/* 2>&1 | grep "find:" | wc -l`
echo count : $FILECNT
echo problems : $FILECNTERR
EndOfFspreadFile
echo done
This script is only expected to append the block between the heredoc markers to the myscript.sh file. But it just HANGS!
Thanks!
- Mohamed -

Your $ variables and backquotes get expanded while the heredoc is being written, because the delimiter is unquoted. You need to escape them in the script. Right now the backquoted find commands run immediately, while $SRCPATH is still empty, so you end up searching the entire filesystem: find $SRCPATH/* 2>/dev/null | wc -l gets executed as find /* 2>/dev/null | wc -l, which is why the script appears to hang.
Here is how you can rewrite it (just a one-line example):
FILECNT=\$(find \$SRCPATH/* 2>/dev/null | wc -l)
By the way, it's easy to see this for yourself if you run bash -x <your script>.
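For completeness, here is a sketch of the whole block with every expansion escaped (same script, with the backquotes replaced by the easier-to-escape \$( ) form):
#!/bin/bash
cat << EndOfFspreadFile >> ./myscript.sh
echo Enter Source Path :
read SRCPATH
FILECNT=\$(find \$SRCPATH/* 2>/dev/null | wc -l)
FILECNTERR=\$(find \$SRCPATH/* 2>&1 | grep "find:" | wc -l)
echo count : \$FILECNT
echo problems : \$FILECNTERR
EndOfFspreadFile
echo done
Alternatively, quoting the delimiter on the first line (cat << 'EndOfFspreadFile') disables all expansion inside the heredoc, so nothing needs to be escaped.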


Limit number of parallel jobs in bash [duplicate]

I want to read links from a file, which is passed as an argument, and download the content from each. How can I do it in parallel, with 20 processes?
I understand how to do it with an unlimited number of processes:
#!/bin/bash
filename="$1"
mkdir -p saved
while read -r line; do
    url="$line"
    name_download_file_sha="$(echo "$url" | sha256sum | awk '{print $1}').jpeg"
    curl -L "$url" > "saved/$name_download_file_sha" &
done < "$filename"
wait
You can add this test:
until [ "$(jobs -lr 2>&1 | wc -l)" -lt 20 ]; do
    sleep 1
done
This will keep at most 20 instances of curl running in parallel: the loop waits until the number of running jobs drops to 19 or lower before starting another one.
If you are using GNU sleep, you can use sleep 0.5 to shorten the polling interval.
So your code will be:
#!/bin/bash
filename="$1"
mkdir -p saved
while read -r line; do
    until [ "$(jobs -lr 2>&1 | wc -l)" -lt 20 ]; do
        sleep 1
    done
    url="$line"
    name_download_file_sha="$(echo "$url" | sha256sum | awk '{print $1}').jpeg"
    curl -L "$url" > "saved/$name_download_file_sha" &
done < "$filename"
wait
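Assuming the script is saved as download.sh (a hypothetical name) and links.txt contains one URL per line, you would run it as:
bash download.sh links.txt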
xargs -P is the simple solution. It gets somewhat more complicated when you want to save the output to separate files, but you can use sh -c to add that bit.
: ${processes:=20}
< "$filename" xargs -P "$processes" -I% sh -c '
line="$1"
url_file="$line"
name_download_file_sha="$(echo "$url_file" | sha256sum | awk "{print \$1}").jpeg"
curl -L "$url_file" > "saved/$name_download_file_sha"
' -- %
Based on triplee's suggestions, I've lower-cased the environment variable and changed its name to 'processes' to be more correct.
I've also made the suggested corrections to the awk script to avoid quoting issues.
You may still find it easier to replace the awk script with cut -f1, but you'll need to specify the cut delimiter, since sha256sum separates its fields with spaces (not tabs).
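For example, that line rewritten with cut (a sketch; sha256sum separates the hash from the file name with spaces):
name_download_file_sha="$(echo "$url_file" | sha256sum | cut -d' ' -f1).jpeg"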

BASH: How to add text on the same line after a command

I have to print the number of folders in my directory, so I use
ls -l $1 | grep "^d" | wc -l
After that, I would like to add some text on the same line.
Any ideas?
If you don't want to use a variable to hold the output, you can use echo and put your command in $( ) on that echo line.
echo $(ls -l $1 | grep "^d" | wc -l) more text to follow here
Assign the result to a variable, then print the variable on the same line as the directory name.
folders=$(ls -l "$1" | grep "^d" | wc -l)
printf "%s %d\n" "$1" "$folders"
Also, remember to quote your variables, otherwise your script won't work when filenames contain whitespace.
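If you would rather not parse ls output at all, here is a sketch of the same count using find (assuming you only want the directories directly inside $1):
folders=$(find "$1" -mindepth 1 -maxdepth 1 -type d | wc -l)
printf "%s %d\n" "$1" "$folders"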

while loop in ssh going infinite in shell scripting [duplicate]

I'm a shell script newbie, so I must be doing something stupid, why won't this work:
#!/bin/sh
myFile=$1
while read line
do
ssh $USER@$line <<ENDSSH
ls -d foo* | wc -l
count=`ls -d foo* | wc -l`
echo $count
ENDSSH
done <$myFile
Two lines should be printed, and each should have the same value... but they don't. The first print statement [the result of ls -d foo* | wc -l] has the correct value; the second print statement is incorrect: it always prints a blank line. Do I need to do something special to assign the value to $count?
What am I doing wrong?
Thanks
#!/bin/sh
while read line; do
echo Begin $line
ssh $USER@$line << \ENDSSH
ls -d foo* | wc -l
count=`ls -d foo* | wc -l`
echo $count
ENDSSH
done < $1
The only problem with your script was that when the heredoc token is not quoted, the shell does variable expansion, so $count was being expanded by your local shell before the remote commands were shipped off...
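You can see the difference with a minimal local test:
cat <<EOF
$HOME
EOF
cat <<\EOF
$HOME
EOF
The first prints your home directory, because the unquoted token lets the local shell expand $HOME; the second prints the literal text $HOME, because escaping (or quoting) the token suppresses expansion.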

Calling a shell script that is stored in another shell script variable

I searched SO but could not find any relevant post on this specific problem. I would like to know how to call a shell script whose path is stored in a variable of another shell script.
In the script below, I am trying to read a service name and its corresponding start script, check whether the service is running, and if not, start it using the shell script associated with that service name. I have tried multiple options shared in various forums (like eval, etc.) with no luck. Please share your suggestions on this.
checker.sh
#!/bin/sh
while read service
do
servicename=`echo $service | cut -d: -f1`
servicestartcommand=`echo $service | rev | cut -d: -f1 | rev`
if (( $(ps -ef | grep -v grep | grep $servicename | wc -l) > 0 ))
then
echo "$servicename Running"
else
echo "!!$servicename!! Not Running, calling $servicestartcommand"
eval "$servicestartcommand"
fi
done < names.txt
names.txt
WebSphere:/opt/software/WebSphere/startServer.sh
WebLogic:/opt/software/WebLogic/startWeblogic.sh
Your script can be refactored into this:
#!/bin/bash
while IFS=: read -r servicename servicestartcommand; do
if ps cax | grep -q "$servicename"; then
echo "$servicename Running"
else
echo "!!$servicename!! Not Running, calling $servicestartcommand"
$servicestartcommand
fi
done < names.txt
No need to pipe grep's output to wc -l: you can use grep -q
No need to read the full line and then split it with cut, rev, etc.: you can set IFS=: and read the line into two separate variables
No need to use eval at all
It is much simpler than you expect. Instead of:
eval "$servicestartcommand"
eval should only be used in extreme circumstances. All you need is
$servicestartcommand
Note: no quotes.
As an example, try this on the command-line:
cmd='ls -l'
$cmd
That should work. But:
"$cmd"
will fail. It will look for a program with a space in its name called 'ls -l'.
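If the command ever needs an argument that itself contains a space, unquoted expansion breaks down as well; a bash array handles that case (a sketch, and it requires bash rather than plain sh):
cmd=(ls -l 'My Documents')
"${cmd[@]}"
Each array element is passed as a separate word, so the embedded space survives.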
Maybe I don't get the idea, but why not use environment variables?
export FOO=bar
echo $FOO
bar

Bash grep command finding the same file 5 times

I'm building a little bash script to run another bash script that's found in multiple directories. Here's the code:
cd /home/mainuser/CaseStudies/
grep -R -o --include="Auto.sh" [\w] | wc -l
When I execute just that part, it finds the same file 5 times in each folder. So instead of getting 49 results, I get 245. I've written a recursive bash script before and I used it as a template for this problem:
grep -R -o --include=*.class [\w] | wc -l
This code has always worked perfectly, without any duplication. I've tried running the first command with and without the quotes, and I've tried -r as well. I've read through the documentation and I can't seem to find out why I'm getting this duplication, or how to prevent it. Any thoughts on how to get around this?
As a separate but related question: I'd like to launch Auto.sh inside each directory, so that its output is dumped into that directory, without having to place a copy of Auto.sh in each folder. That would probably be much more efficient than what I'm currently doing, and it would also probably fix my current duplication problem.
This is the code for Auto.sh:
#!/bin/bash
index=1
cd /home/mainuser/CaseStudies/
grep -R -o --include=*.class [\w] | wc -l
grep -R -o --include=*.class [\w] |awk '{print $3}' > out.txt
while read LINE; do
echo 'Path '$LINE > 'Outputs/ClassOut'$index'.txt'
javap -c $LINE >> 'Outputs/ClassOut'$index'.txt'
index=$((index+1))
done <out.txt
Preferably I would like to make it dump only the javap outputs for the application it's currently looking at. Since those .class files could be in any number of sub-directories, I'm not sure how to make them all dump into the top folder without executing a modified Auto.sh in the top directory of each application.
OK, so to fix the multiple matches:
grep -R -o --include="Auto.sh" [\w] | wc -l
Should be:
grep -R -l --include=Auto.sh '\w' | wc -l
The reason this was happening is that with -o grep prints every match rather than every matching file, and the pattern was matching the letter w, which occurred 5 times in Auto.sh. With -l, grep lists each matching file only once.
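You can see the difference with a minimal test (a sketch, assuming Auto.sh contains the letter w five times):
grep -o 'w' Auto.sh | wc -l
grep -l 'w' Auto.sh | wc -l
The first prints 5 (one line per match); the second prints 1 (one line per matching file).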
However, the overall fix, which doesn't require placing Auto.sh in every directory, is something like this:
MAIN_DIR=/home/mainuser/CaseStudies/
cd "$MAIN_DIR"
ls -d */ > DirectoryList.txt
while read LINE; do
    cd "$LINE"
    mkdir -p ProjectOutputs
    bash /home/mainuser/Auto.sh
    cd "$MAIN_DIR"
done <DirectoryList.txt
That calls this Auto.sh code:
index=1
grep -R -o --include=*.class '\w' | wc -l
grep -R -o --include=*.class '\w' | awk '{print $3}' > ProjectOutputs.txt
while read LINE; do
    echo "Path $LINE" > "ProjectOutputs/ClassOut$index.txt"
    javap -c "$LINE" >> "ProjectOutputs/ClassOut$index.txt"
    index=$((index+1))
done <ProjectOutputs.txt
Thanks again for everyone's help!
