Unable to pass string from each field into for loop

I'm new to Bash scripting and have been writing a script to check whether various log files exist, but I'm a bit stuck here.
clientlist=/path/to/logfile/which/consists/of/client/names
# I grepped only the client name from the logfile,
# and piped into awk to add ".log" to each client name
clients=$(grep -i 'list of client assets:' $clientlist | cut -d":" -f1 | awk '{print $NF".log"}')
echo "Clients : $clients"
#For example "Clients: Apple.log
# Samsung.log
# Nokia.log
# ...."
export alertfiles="*_$clients" #path/to/each/client/logfiles
for file in $alertfiles
do
# I will test each ".log" file of each client, if it exists or not
test -f "$file" && echo $file exists || { echo Error: $file does not exist && exit; }
done
The code above greps the client name from the log file and, using awk, appends .log to each client name. From the output, I'm trying to pass each clientname.log from each field into one variable, i.e. alertfiles, and construct a path to be tested for file existence.
The number of clients is indefinite and may vary from time to time.
The code I have returns the client name as a whole:
"Clients: Apple.log
Samsung.log
Nokia.log
....."
I'm unsure of how to pass each client name one by one into the loop, so that each client name log file will be tested if it exists or not. How can I do this?
export alertfiles="*_$clients" #path/to/each/client/logfiles
I want to have $clients output listed here one by one, so that it returns all client name one by one, and not as a whole thing, and I can pass that into the loop, so the client log filename gets checked one by one.

Use bash arrays.
(BTW: I can't test this as you have not supplied an example of the input data)
clientlist=/path/to/logfile/which/consists/of/client/names
logfilebase=/path/to/where/the/logfiles/should/exist
declare -a clients=($(grep -i 'list of client assets:' $clientlist | cut -d":" -f1))
for item in "${clients[@]}"; do
if [ -e "${logfilebase}/${item}.log" ]; then
echo "$item exists"
else
echo "$item does not exist - quit"
exit 1
fi
done
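Note that filling an array from an unquoted command substitution splits the output on whitespace, so a client name containing spaces would break apart into several elements. If that can happen, reading the lines with mapfile is safer. A minimal sketch of that variant (assuming bash 4+ and the same hypothetical paths as above):
declare -a clients
mapfile -t clients < <(grep -i 'list of client assets:' "$clientlist" | cut -d":" -f1)
# each array element is now one whole line, spaces included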

It's really not clear what you are asking. $clients is already a list of tokens which you can loop over, though saving it in a variable seems like an unnecessary waste of memory.
Also, why are you looping over the wildcard and then checking if the files exist? With nullglob you can make sure that file is not looped at all if there are no matches on the wildcard.
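For example (a minimal sketch; the path and pattern here are made up):
shopt -s nullglob
for file in /var/log/*_Apple.log; do
    echo "$file"    # never runs if the glob matches nothing; without nullglob the literal pattern would leak through
done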
I'm guessing your actual question is how to check whether the log files exist in the directory you nominated.
I have refactored your code to do the grep and cut in Awk, too. See useless use of grep
shopt -s nullglob # bash feature
awk -F: 'tolower($0) ~ /list of client assets:/ {
    print tolower($1) ".log" }' "$clientlist" |
while read -r client; do
# some heavy guessing here
for file in path/to/each/"$client"/logfiles/*; do
test -f "$file" && echo "$file" exists || { echo "Error: $file does not exist" && exit; }
done
done

Related

Can I get the name of the file currently being read in a for loop?

I want to write a script that takes a word as an argument and searches the files in the current directory and its subdirectories for that word. If it is found in any of the files, the script should echo a message containing the file name and the line the word is found on.
This is what I have so far, but I can't find a way to actually store the file name of the file being read or the line number.
word=$1
for var in $(grep -R "$word *")
do
filename=$(find . -type f -name "*") ------- //this doesnt work
linenmbr=$(grep -n "$ord" file) ----------- //this doesnt work
echo found $word in $filename on line number $linenmbr
done
In bash, any time you are looping, you want to avoid calling utilities (e.g. grep and find) within the loop. That is horribly inefficient because it spawns a separate subshell for every utility on every iteration (for 10 iterations, that is 20 additional subshells; it adds up quickly). So in your case, you call grep to feed the loop, and then spawn a separate subshell calling grep again within the loop, as well as a separate subshell for find.
You should think of a way to only call grep (or a utility that will provide the needed information) only once, and then parse the output.
If you did want to use grep, then calling grep -rn within a process substitution which is used to feed a while loop is probably as good as you are going to get. You can then use the bash builtin parameter expansions to isolate the filename and line-numbers which will be about as efficient as bash could get, e.g.
#!/bin/bash
[ -z "$1" ] && { ## validate at least 1 input given
printf "error: insufficient input.\nusage: %s srch_term\n" "${0##*/}"
exit 1
}
while read -r line; do ## read each line of grep output
fn="${line%%:*}" ## isolate filename
no="${line#*:}" ## remove filename
no="${no%%:*}" ## isolate number
printf "found %s in %s on line number %d\n" "$1" "$fn" "$no"
done < <(grep -rn "$1") ## grep in process substitution
Choosing A More Efficient Method
If you can accomplish what you are attempting with one of the stream editing tools, e.g. awk or sed, you are likely to be able to isolate the wanted information an order of magnitude faster. For example, using awk and setting globstar you could do something similar to the following:
#!/bin/bash
shopt -s globstar ## set globstar
[ -z "$1" ] && { ## validate at least 1 input given
printf "error: insufficient input.\nusage: %s srch_term\n" "${0##*/}"
exit 1
}
## find all matching files and line numbers
awk -v word="$1" '$0 ~ word {
print "found",word,"in",FILENAME,"on line number",FNR; next
}' **/* 2>/dev/null
Give both a try and let me know if you have further questions.
If you want to compare and ensure both are producing the same output, you can use diff to confirm, e.g.
$ diff <(grepscript.sh | sort) <(awkscript.sh | sort)
(if no difference is reported, the output is the same)

Create file with egrep matches and file names

I need some help...
I'm creating a unit-test script in shell script. The script stores all the beeline calls from all files inside a directory.
The script serves its purpose, but I don't want to append the file name if grep does not return any results.
That's my code:
for file in $(ls)
do
cat $file | egrep -on '^( +)?\bbeeline.*password=;"?' >> testa_scripts.sh
echo $file >> testa_scripts.sh
done
How can I do that?
Thanks
grep returns a falsy exit status (1) if it doesn't find any matching lines, so you can put in an if statement to test if it matched anything. Inverted with ! here:
for file in ./*; do
if ! egrep -on '...' "$file" >> somefile; then
echo 'grep did not match anything'
fi
done
(I don't think there's any need for the ls instead of just a shell glob here.)
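If you want the inverse of the example above (append the file name only when grep did match something, as the question describes), drop the negation and append inside the if; a minimal sketch reusing the question's pattern and output file:
for file in ./*; do
    if egrep -on '^( +)?\bbeeline.*password=;"?' "$file" >> testa_scripts.sh; then
        echo "$file" >> testa_scripts.sh    # only reached when egrep found a match
    fi
done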

Get current directory (not full path) with filename only when sub folder is present in Linux bash

I have prepared a bash script to get only the directory (not the full path) plus the file name where a file is present. It has to be done only when the file is located in a subdirectory.
For example:
If input is src/email/${sub_dir}/Bank_Casefeed.email, output should be ${sub_dir}/Bank_Casefeed.email.
If input is src/layouts/Bank_Casefeed.layout, output should be Bank_Casefeed.layout. I can easily get this using the basename command.
src/basefolder is always constant. In some cases (under the src/email base folder), subdirectories will be present.
The script below works, and I can use it (only when the module is email) to get the output, but it should also work when a subdirectory is present in other modules. Maybe I should count the directories? If there are more than two (src/basefolder), the script should pick up the subdirectories. Is there a better way to handle both scenarios?
#!/bin/bash
filename=`basename src/email/${sub_dir}/Bank_Casefeed.email`
echo "filename is $filename"
fulldir=`dirname src/email/${sub_dir}/Bank_Casefeed.email`
dir=`basename $fulldir`
echo "subdirectory name: $dir"
echo "concatenate $filename $dir"
Entity=$dir/$filename
echo $Entity
Using shell parameter expansion:
sub_dir='test'
files=( "src/email/${sub_dir}/Bank_Casefeed.email" "src/email/Bank_Casefeed.email" )
for f in "${files[@]}"; do
if [[ $f == *"/$sub_dir/"* ]]; then
echo "${f/*\/$sub_dir\//$sub_dir\/}"
else
basename "$f"
fi
done
test/Bank_Casefeed.email
Bank_Casefeed.email
I know there might be an easier way to do this. But I believe you can just manipulate the input string. For example:
#!/bin/bash
sub_dir='test'
DIRNAME1="src/email/${sub_dir}/Bank_Casefeed.email"
DIRNAME2="src/email/Bank_Casefeed.email"
echo $DIRNAME1 | cut -f3- -d'/'
echo $DIRNAME2 | cut -f3- -d'/'
This will remove the first two directories.
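The same result is also available without spawning cut, using shell parameter expansion to strip the two leading path components (a sketch with the same variables):
echo "${DIRNAME1#*/*/}"    # -> test/Bank_Casefeed.email
echo "${DIRNAME2#*/*/}"    # -> Bank_Casefeed.email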

check if a username appears in the output of who

The task requires writing a bash script that will search the output of the who command for a given user ID provided via a command-line argument.
This script will display whether or not this user ID is logged in
So far I know that to get the user ID, one can do:
who | cut -d' ' -f1 | grep "userIdToSearchFor"
This grep will display the user ID if it exists, or nothing if it doesn't, so it seems like a good method
I believe the $1 variable will hold the first command line argument
How can I implement this in a bash script file please?
EDIT:
Current working script looks like this
#!/bin/bash
userid=$(who | cut -d' ' -f1 | grep "$1")
if [ "$1" == "$userid" ]
then
echo "online"
else
echo "offline"
fi
This should work for you :
STRING=$(who | cut -d' ' -f1 | grep "$1")
if [ "$1" = "$STRING" ]
then
echo "online"
else
echo "offline"
fi
Some comments and suggestions:
No spaces on either side of the = when you assign variables (that's where your error message comes from).
To assign a command's result to a variable, you must use the $( ) syntax. See command substitution for more.
Quote your vars in your test to prevent word splitting.
You should loop on the test; there could be multiple identical usernames.
Avoid caps in your variable names, so as not to confuse them with environment variables, which are capitalized by convention.
Avoid using the type of the var as its name; in your case username would be a better choice. A sketch applying these suggestions follows below.
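Putting those suggestions together, a revised version might look like this (a sketch: grep -Fx matches the whole line literally, keeping substring matches out, and sort -u collapses duplicate logins from multiple sessions):
#!/bin/bash
username=$(who | cut -d' ' -f1 | sort -u | grep -Fx -- "$1")
if [ "$1" = "$username" ]; then
    echo "online"
else
    echo "offline"
fi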
You're doing it the hard way.
$ cat user.sh
#!/bin/bash
# user.sh username - shows whether username is logged on or not
if who | grep --silent "^$1 " ; then
echo online
else
echo offline
fi
$ ./user.sh msw
online

Make SED command work for any variable

deploy.sh
USERNAME="Tom"
PASSWORD="abc123"
FILE="config.conf"
sed -i "s/\PLACEHOLDER_USERNAME/$USERNAME/g" $FILE
sed -i "s/\PLACEHOLDER_PASSWORD/$PASSWORD/g" $FILE
config.conf
deloy="PLACEHOLDER_USERNAME"
pass="PLACEHOLDER_PASSWORD"
This puts the variables defined in deploy.sh into my config file. I can't source the file, so I want to put my variables in this way.
Question
I want a command generic enough to work for all placeholder variables, using some sort of loop rather than one command per variable. That is, any term starting with PLACEHOLDER_ in the file should be replaced with the value of the matching variable already defined in deploy.sh.
All variables should be set and not empty. The ability to print a warning when a variable can't be found would be nice, but it isn't mandatory.
Basically, use shell code to write a sed script and then use sed -i .bak -f sed.script config.conf to apply it:
trap "rm -f sed.script; exit 1" 0 1 2 3 13 15
for var in USERNAME PASSWORD
do
echo "s/PLACEHOLDER_$var/${!var}/"
done > sed.script
sed -i .bak -f sed.script config.conf
rm -f sed.script
trap 0
The main 'tricks' here are:
knowing that ${!var} expands to the value of the variable named by $var, and
knowing that sed will take a script full of commands via -f sed.script, and
knowing how to use trap to ensure temporary files are cleaned up.
You could also use sed -e "s/.../.../" -e "s/.../.../" -i .bak config.conf too, but the script file is easier, I think, especially if you have more than 2 values to substitute. If you want to go down this route, use a bash array to hold the arguments to sed. A more careful script would use at least $$ in the script file name, or use mktemp to create the temporary file.
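The indirect expansion trick is worth seeing on its own (a minimal sketch with made-up values):
USERNAME="Tom"
var=USERNAME
echo "${!var}"    # prints: Tom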
Revised answer
The trouble is, although much closer to being generic, it is still not generic, since I have to manually put in what variables I want to change. Can it not be more like "for each PLACEHOLDER_, find the variable in deploy.sh and add that variable", so it can work for any number of variables?
So, find what the variables are in the configuration file, then apply the techniques of the previous answer to solve that problem:
tmp=$(mktemp)
trap "rm -f $tmp; exit 1" 0 1 2 3 13 15
for file in "$@"
do
for var in $(sed -n 's/.*PLACEHOLDER_\([A-Z0-9_]*\).*/\1/p' "$file")
do
value="${!var}"
[ -z "$value" ] && { echo "$0: variable $var not set for $file" >&2; exit 1; }
echo "s/PLACEHOLDER_$var/$value/"
done > $tmp
sed -i .bak -f $tmp "$file"
rm -f $tmp
done
trap 0
This code still pulls the values from the environment. You need to clarify what is required if you want to extract the settings from the shell script, but it can be done — the script will have to be sufficiently self-aware to find its source so it can search it for the names. But the basics are in this answer; the rest is a question of tinkering until it does what you need.
#!/bin/ksh
TemplateFile=$1
SourceData=$2
(sed 's/.*/#V0r:PLACEHOLDER_&:r0V#/' ${SourceData}; cat ${TemplateFile}) | sed -n "
s/$/²/
H
$ {
x
s/^\(\n *\)*//
# also reset t flag
t varxs
:varxs
s/^#V0r:\([a-zA-Z0-9_]\{1,\}\)=\([^²]*\):r0V#²\(\n.*\)\"\1\"/#V0r:\1=\2:r0V#²\3\2/
t varxs
# clean the line when no more occurance in text
s/^[^²]*:r0V#²\n//
# and next
t varxs
# clean the marker
s/²\(\n\)/\1/g
s/²$//
# display the result
p
}
"
call like this: YourScript.ksh YourTemplateFile YourDataSourceFile where:
YourTemplateFile is the file that contains the structure with generic values like deloy="PLACEHOLDER_USERNAME"
YourDataSourceFile is the file that contains all the generic value = specific value pairs, like USERNAME="Tom"
