How to run all the scripts found by find - linux

I'm trying to find all the init scripts created for WebSphere.
I know all the scripts end with -init, so the first part of the command is:
find /etc/rc.d/init.d -name "*-init"
Also, I need all the scripts that run on a specific path, so the second part would be
| grep -i "/opt/ibm"
Finally, I need help with the last part. Once I have found the scripts, I need to run them with the stop argument.
find /etc/rc.d/init.d -name "*-init" | grep -i "/opt/ibm" | <<run script found with stop argument>>
How can I run the commands found by find?

Use a loop so that we are a little more careful while executing them:
#!/bin/bash
shopt -s globstar
for file in /etc/rc.d/init.d/**/*-init; do     # grab all -init scripts
    script=$(readlink -f "$file")              # resolve the actual file in case of a symlink
    [[ -f $script ]] || continue               # skip if not a regular file
    [[ $script = */opt/ibm/* ]] || continue    # not under "/opt/ibm/", skip
    printf '%s\n' "Executing script '$script'"
    "$script" stop; exit_code=$?
    printf '%s\n' "Script '$script' finished with exit code $exit_code"
done
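
For reference, if the init scripts really are symlinks into /opt/ibm and are executable, GNU find can do the whole job in one (less careful) line; a sketch using the -lname test, which matches a symlink's target:
find /etc/rc.d/init.d -name "*-init" -lname "*/opt/ibm/*" -exec {} stop \;
Here -exec runs each matching script directly with the stop argument.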

If you omit find and use grep directly, you could do something like this:
grep -i "/opt/ibm" /etc/rc.d/init.d/* | sed 's/:.*/ stop/' | sort -u | bash
It uses grep directly, which prefixes each match with the filename: filename:matched line.
Since you only need the filename and not the match, sed replaces the ':' and the rest of the line with ' stop' (note the space before stop).
sort -u makes sure each script is executed only once.
The result is piped into a shell.
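
A slightly more careful variant of the same idea (a sketch, not from the original answer) uses grep -l to print just the names of the matching files and runs them in a loop instead of piping generated commands into a shell:
grep -li "/opt/ibm" /etc/rc.d/init.d/*-init | while read -r script; do
    "$script" stop
done
grep -l prints each matching file once, so no sort -u is needed, and the *-init glob keeps the name filter from the original find.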

While loop with sed

I have the following code, but it doesn't work: when I execute the script, the file th2.csv is empty.
The sed is meant to replace two words. I don't know how to make the script work correctly.
It must be done with the while loop.
bash th1.csv > th2.csv
The bash script:
#!/bin/bash
while read -r line; do
    echo "$line" | sed -E "s/,True,/,ll,/g;s/,False,/,th,/" th1.csv
done < th1.csv
Given the requirement that you must loop and apply the regex line by line, consider:
#!/bin/bash
while read -r line; do
    echo "$line" | sed -E "s/,True,/,ll,/g;s/,False,/,th,/" >> th2.csv
done < th1.csv
This reads line by line via a while loop, and each line is passed to sed on stdin. Note that we remove the th1.csv at the end of your original sed command: a filename argument overrides reading from stdin, so sed was ignoring the piped line and re-processing the whole file on every iteration. Lastly, we append (>>) to your th2.csv file on each iteration.
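As an aside: if the while-loop requirement were ever dropped, the whole loop collapses into a single sed invocation, since sed already processes its input line by line:
sed -E "s/,True,/,ll,/g;s/,False,/,th,/" th1.csv > th2.csv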
Guessing a step ahead that you may want to pass the two files in as parameters to the script (based on your first code snippet), you can change this to:
#!/bin/bash
while read -r line; do
    echo "$line" | sed -E "s/,True,/,ll,/g;s/,False,/,th,/" >> "$2"
done < "$1"
And, assuming this script is called myscript.sh you can call it like:
/bin/bash myscript.sh 'th1.csv' 'th2.csv'
Or, if you make it executable with chmod +x myscript.sh then:
./myscript.sh 'th1.csv' 'th2.csv'

Multiple process curl command for URLs to output to individual files

I am attempting to curl multiple URLs in a bash command. Eventually I will be curling a large number of URLs, so I am using xargs with multiple processes to speed things up.
My file consists of x number of URLs:
https://someurl.com
https://someotherurl.com
My issue comes when attempting to output the results to separate files named after the URLs I curl.
The bash command I have is:
xargs -P 5 -n 1 -I% curl -k -L % -o % < urls.txt
When I run this I get 'Failed to create file https://someotherurl.com'
You cannot create a file with / in the filename. You could do it this way:
#!/bin/bash
while IFS= read -r line
do
    echo "LINE: $line"
    if [[ "$line" != "" ]]
    then
        filename="${line#https://}"
        echo "FILENAME: $filename"
        # YOUR CURL COMMAND HERE, USING $filename
    fi
done < urls.txt
It ignores empty lines.
Variable substitution is used to remove the https:// part of each URL, which is what allows the file to be created.
Note: if your URLs contain sub-directories, those must be dealt with as well.
Ex: say you want to fetch https://www.exemple.com/some/sub/dir
The script I suggested here would try to create a file named "www.exemple.com/some/sub/dir". In this case, you could replace the / with _ using tr.
The script would become:
#!/bin/bash
while IFS= read -r line
do
    echo "LINE: $line"
    if [[ "$line" != "" ]]
    then
        filename=$(echo "$line" | tr '/' '_')
        filename2=${filename#https:__}
        echo "FILENAME: $filename2"
        # YOUR CURL COMMAND HERE, USING $filename2
    fi
done < urls.txt
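To tie this back to the xargs parallelism from your original command, here is a minimal sketch (the -P 5 value and urls.txt come from your command; the renaming logic is the tr/substitution shown above):
xargs -P 5 -n 1 sh -c '
    filename=$(printf %s "$1" | tr "/" "_")
    curl -k -L -o "${filename#https:__}" "$1"
' sh < urls.txt
Each URL becomes one sh invocation (bound to $1), so the renaming happens per URL while xargs keeps five downloads running in parallel.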
Because your question is ambiguous, I would assume:
You have a file urls.txt that contains URLs separated by LF.
You want to download all URLs with curl and use each URL as its filename.
Unfortunately, that's not possible, because a URL contains invalid characters like the slash /. Alternatively, for this case, I would suggest you encode each URL with the URL-safe Base64 variant (RFC 3548) before using it as a filename.
After applying this requirement, your script would become something like:
seq 100 | xargs -I# echo 'https://example.com?#' > urls.txt
xargs -P0 -L1 sh -c 'curl -SskL0 -o $(printf %s "$1" | uuencode -m /dev/stdout | sed "1d;\$d" | tr +/ -_) "$1"' sh < urls.txt
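To map a saved file back to its URL later, reverse the transformation; a sketch assuming the base64 utility is available for decoding and that the encoded name fit on a single uuencode output line ($filename here is a placeholder for one of the saved names):
printf %s "$filename" | tr -- '-_' '+/' | base64 -d
The tr call undoes the +/ to -_ substitution, and base64 -d recovers the original URL.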

BASH: grep doesn't work in shell script but echo shows correct command and it works on command line

I need to write a script that checks over 20k files for over 2k search strings, and it needs to be flexible, so I came up with this script:
#!/bin/bash
# This script checks all files in a given directory against a list of criteria
shopt -s expand_aliases
source ~/.bashrc

TIMESTAMP=$(date "+%Y-%m-%d-%T")
ROOT_DIR=/data
PROJECT_NAME=$1
FILE_DIR=$ROOT_DIR/projects/$1/$2
RESULT_DIR=$ROOT_DIR/projects/$1/check_result
SEARCHTEXT_FILE=$ROOT_DIR/scripts/$3

OIFS="$IFS"
IFS=$'\n'
files=$(find $FILE_DIR -type f -name '*.json')
for file in $files; do
    while read line; do
        grep -H -o $line "$file" >> $RESULT_DIR/check_result_$TIMESTAMP.log
    done < $SEARCHTEXT_FILE
done
IFS="$OIFS"
This script only produces an empty $RESULT_DIR/check_result_$TIMESTAMP.log file with the correct name.
Because the file names sometimes contain spaces, I added the IFS statements and enclosed $file in double quotes (copied from another post).
The content of the $SEARCHTEXT_FILE is for example:
'Tel alt........'
'City ..........'
If I place an echo before the grep, like this:
echo grep -H -o $line "$file"
then the output I get is
grep -H -o 'Tel alt........' /data/projects/DNAR/input/report-157538.json
and I can execute this line as-is and get the correct result.
I tried to put various combinations of ", ', `, (), or {} around every part of this grep command, but nothing changed.
Somewhere I read about aliases, and the alias set for grep is
alias grep='grep --color=auto'
After many hours of searching the internet I couldn't find any post that helped me, as most of them cover issues around wrong quotes or inline bash problems.
What am I missing here?
The echoed command only looks correct: the single quotes around 'Tel alt........' are literal characters read from $SEARCHTEXT_FILE, and the shell never re-parses them when the loop actually runs grep, so grep searches for a pattern that includes the quote characters and matches nothing. The simple and obvious workaround is to remove all that complexity and simply use the features of the commands you are running anyway (note that grep -f also takes each line of the file literally, so the quotes should be stripped from $SEARCHTEXT_FILE as well):
find "$FILE_DIR" -type f -name '*.json' \
    -exec grep -H -o -f "$SEARCHTEXT_FILE" {} + > "$RESULT_DIR/check_result_$TIMESTAMP.log"
Notice also the quoting fixes; see When to wrap quotes around a shell variable. To avoid mishaps, you should also switch to lower case for your private variables (see Correct Bash and shell script variable capitalization).
The shopt -s expand_aliases and source ~/.bashrc lines merely look superfluous, but they could contribute to whatever problem you are trying to troubleshoot; they should basically never be part of a script you plan to use in production.
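
Putting those recommendations together, a trimmed version of the whole script might look like this; a sketch only, keeping your variable layout but lower-cased and quoted, without the alias machinery:
#!/bin/bash
timestamp=$(date "+%Y-%m-%d-%T")
root_dir=/data
file_dir=$root_dir/projects/$1/$2
result_dir=$root_dir/projects/$1/check_result
searchtext_file=$root_dir/scripts/$3

# grep -f reads one pattern per line; -H and -o keep the filename and the matched text
find "$file_dir" -type f -name '*.json' \
    -exec grep -H -o -f "$searchtext_file" {} + > "$result_dir/check_result_$timestamp.log"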

How can I move/group specific folders in bash?

I have a folder structure like the following:
2020-123-1
2020-123-2
2020-123-3
2020-124-1
2020-124-2
...
I need to create folders named after the first two number groups, omitting whatever follows the second dash (-). Then I need to move the existing folders under the newly created ones.
2020-123
->2020-123-1
->2020-123-2
->2020-123-3
2020-124
->2020-124-1
->2020-124-2
I tried to write a script in bash like this:
ls -d */ > folder.txt
cut -f1,2 -d"-" folder.txt |cut -f1 -d"/" |sort|uniq > mainfolder.txt
while read line; do mkdir $line ; done < mainfolder.txt
while read line; do mv $(cut -f1,2 -d"-" $line) $line/ ; done < folder.txt
I couldn't make the last line work; I know it has issues.
Actually, you don't have to parse the directory names and build the hierarchy yourself. You can make use of the -p option of mkdir; an awk one-liner will do the job:
awk -F'-' '{top=$1 FS $2;printf "mkdir -p %s; mv %s %s\n",top, $0, top}' dir.txt
The output with your example:
mkdir -p 2020-123; mv 2020-123-1 2020-123
mkdir -p 2020-123; mv 2020-123-2 2020-123
mkdir -p 2020-123; mv 2020-123-3 2020-123
mkdir -p 2020-124; mv 2020-124-1 2020-124
mkdir -p 2020-124; mv 2020-124-2 2020-124
Note:
This one-liner just prints the commands without executing them; pipe the output to sh if everything looks fine. Examine the output commands and change the printf format/values for adjustment.
I didn't quote the filenames, since your example doesn't contain any special characters. Do so if that is the case.
So the final script is as follows:
ls -d */ | cut -f1 -d"/" > folder.txt
awk -F'-' '{top=$1 FS $2;printf "mkdir -p %s; mv %s %s\n",top, $0, top}' folder.txt |sh
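If the directory names could ever contain spaces or shell metacharacters (the note above deliberately left them unquoted), a quoted variant of the printf is a reasonable sketch:
ls -d */ | cut -f1 -d"/" > folder.txt
awk -F'-' '{top=$1 FS $2; printf "mkdir -p \"%s\"; mv \"%s\" \"%s\"\n", top, $0, top}' folder.txt | sh
Double quotes survive simple spaces; names containing double quotes or dollar signs would still need proper escaping.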
In pure bash:
#!/bin/bash
for src in *-*-*; do
    destdir=${src%-*}
    [[ -d $destdir ]] || mkdir "$destdir" || exit
    # This just prints out the command that will be called.
    # Remove the "echo" in the actual script after making sure it will run as intended.
    echo mv "$src" "$destdir"
done
In the script above it is assumed that each directory name to be moved contains exactly two dashes. If a name can contain more than two dashes, then the destdir=${src%-*} line should be replaced with these two lines:
suffix=${src#*-*-}
destdir=${src%"-$suffix"}
For detailed information, read the "shell parameter expansion" section in the bash reference.
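For example, here is how the two forms behave on a hypothetical name with three dashes:
src=2020-123-1-b
echo "${src%-*}"          # 2020-123-1 (only the last dash group is stripped)
suffix=${src#*-*-}        # 1-b (everything after the second dash)
echo "${src%"-$suffix"}"  # 2020-123 (everything after the second dash is stripped)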
Additionally, a good read is: Why you shouldn't parse the output of ls

Bash Script Variable

#!/bin/bash
RESULT=$(grep -i -e "\.[a-zA-z]\{3\}$" ./test.txt)
for i in $(RESULT); do
    echo "$i"
    FILENAME="$(dirname $RESULT)"
done
I have a problem with the line FILENAME="$(dirname $RESULT)". Running the script in debugging mode (bash -x script-name), the output is:
test.sh: line 9: RESULT: command not found
For some reason, it can't take the contents of the variable RESULT and save the output of dirname to the new variable FILENAME. I can't understand why this happens.
After lots of tries, I found how to save the full path and the filename to two different variables.
Now, for each filename, I want to find it case-insensitively. For example, when looking for the file image.png, it shouldn't matter if the file is image.PNG.
I am running the script
while read -r name; do
    echo "$name"
    FILENAME="$(dirname $name)"
    BASENAME="$(basename $name)"
done < <(grep -i -e "\.[a-zA-z]\{3\}$" ./test.txt)
and then enter the command:
find . $FILENAME -iname $BASENAME
but it says the commands FILENAME and BASENAME are not found.
The syntax:
$(RESULT)
denotes command substitution. Writing $(RESULT) attempts to run a command named RESULT.
In order to substitute the result of the variable RESULT, say:
${RESULT}
instead.
Moreover, if the command output spans more than one line or contains whitespace, iterating over it with for would word-split the result, so this approach wouldn't work reliably.
Instead say:
while read -r name; do
    echo "$name"
    FILENAME="$(dirname $name)"
done < <(grep -i -e "\.[a-zA-z]\{3\}$" ./test.txt)
The <(command) syntax is referred to as Process Substitution.
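This also matters for your follow-up problem: if you pipe grep into the while loop instead, the loop runs in a subshell, so FILENAME vanishes when the loop ends and is unset by the time you call find. A quick sketch (with a simplified, hypothetical pattern):
grep -i "png$" ./test.txt | while read -r name; do FILENAME=$(dirname "$name"); done
echo "$FILENAME"   # empty: the pipeline ran the loop in a subshell

while read -r name; do FILENAME=$(dirname "$name"); done < <(grep -i "png$" ./test.txt)
echo "$FILENAME"   # set: process substitution keeps the loop in the current shell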
for i in $(RESULT) isn't right. You can use $RESULT or ${RESULT}.
