How to print an array value by value in a loop in Linux? - linux

So my shell script look something like this:
VAR=$(shuf -i 1-10 -n 3)
N=1
while [$N le 3 ]; do
NUM=$VAR | awk '{print$`echo $N`}'
#some commands that uses $NUM
N=$(($N+1))
done
But I think awk does not work here, since
echo $VAR | awk '{print$`echo $N`}'
gives me
awk: cmd. line:1: {print$`echo $N`}
awk: cmd. line:1: ^ invalid char '`' in expression
awk: cmd. line:1: {print$`echo $N`}
awk: cmd. line:1: ^ syntax error
So I tried the following command
echo $VAR | awk '{print$$(echo $N)}'
This time I always see all three values, regardless of what $N was
Are there other commands I could try?

Multiple syntax issues and anti-patterns in use! Do check your script in ShellCheck for the trivial syntax issues.
Variables in shell are not meant to store multi-line items. Use an array and loop over it.
Bash test constructs are space-sensitive, and the numeric comparison operator needs a leading dash: [$N le 3 ] has to be written as [ "$N" -le 3 ].
The syntax for running commands and storing their output in a variable is wrong. The actual command substitution syntax is var=$(..), where the $(..) contains the commands to be run.
You can't run command substitution (back-ticks or $(..)) inside awk. Remember awk is not shell. You don't need awk or any third-party tool to iterate over an array; just use the shell internals.
Since shuf prints each item on its own line, use mapfile/readarray to store the items safely into an array, i.e.
mapfile -t randomElements < <(shuf -i 1-10 -n 3)
The <() is a special construct in bash called process substitution, which makes the output of a process (shuf in your case) appear as a temporary file to read from.
We now use a loop to iterate over the elements,
for ((i = 0; i < ${#randomElements[@]}; i++)); do
printf '%s\n' "${randomElements[i]}"
done
If by chance mapfile/readarray, which should be available in bash versions 4.0 and later, is not present, use the read command in a while loop:
while IFS= read -r line; do arr+=("$line"); done < <(shuf -i 1-10 -n 3)
and use the printing logic as usual.
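Putting it all together, a minimal rewrite of the original script could look like the sketch below; the placeholder comment stands in for whatever commands actually use $NUM:
#!/bin/bash
# Store the three random numbers from shuf into an array, one element per line.
mapfile -t randomElements < <(shuf -i 1-10 -n 3)
# Iterate over the array directly; no awk needed to pick out single values.
for NUM in "${randomElements[@]}"; do
    # some commands that use "$NUM"
    printf 'Current value: %s\n' "$NUM"
done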

Related

Pass command-line arguments to grep as search patterns and print lines which match them all

I'm learning about grep commands.
I want to make a program that, when a user enters more than one word, outputs the lines in the data file that contain those words.
So I joined the words the user typed with '|' and put them into the grep command, which gave me the program I intended.
But that is an OR operation; I want an AND operation.
So I learned how to get AND behaviour with grep as follows.
cat <file> | grep 'pattern1' | grep 'pattern2' | grep 'pattern3'
But I don't know how to put the user input into the 'pattern1', 'pattern2', 'pattern3' positions, because the number of words the user enters is not fixed.
As the number of inputs grows, grep has to be executed with more and more pipes, and I don't know how to build this part.
The user input is as follows:
$ [the name of my program] 'pattern1' 'pattern2' 'pattern3' ...
I'd really appreciate your help.
With grep -f you can grep multiple items, when each of them is on a line in a file.
With <(command) you can let Bash think that the result of command is a file.
With printf "%s\n" and a list of arguments, each argument is printed on a new line.
Together:
grep -f <(printf "%s\n" "$@") datafile
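For illustration, a hypothetical wrapper script using the pieces above (datafile stands for whatever data file the program searches); note that grep -f treats the listed patterns as alternatives, so a line is printed as soon as it matches at least one of them:
#!/bin/bash
# matchany.sh (hypothetical name): each argument becomes one pattern line
# that grep reads from the process substitution as if it were a file.
grep -f <(printf "%s\n" "$@") datafile
# Example invocation:
#   ./matchany.sh 'pattern1' 'pattern2' 'pattern3'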
Suggesting to use awk pattern logic instead:
awk '/RegExp-pattern-1/ && /RegExp-pattern-2/ && /RegExp-pattern-3/' input.txt
The advantages: you can combine the RegExp patterns with the logical operators && and ||, and you scan the whole file only once.
The disadvantages: you must provide a file list (it can't traverse subdirectories), and the RegExp syntax is limited compared to grep -E or grep -P.
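The pattern list in that one-liner is hard-coded; to accept a variable number of user-supplied patterns as in the question, one option (a sketch, assuming the patterns arrive as positional parameters and contain no / or ' characters) is to build the awk program string in the shell first:
#!/bin/bash
# Build an awk program of the form: /p1/ && /p2/ && /p3/
prog=""
for pat in "$@"; do
    prog="${prog:+$prog && }/$pat/"
done
# awk prints by default when the combined pattern is true for a line.
awk "$prog" input.txt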
In principle, what you are asking could be done with a loop with output to a temporary file.
file=inputfile
temp=$(mktemp -d -t multigrep.XXXXXXXXX) || exit
trap 'rm -rf "$temp"' ERR EXIT
for regex in "$@"; do
grep "$regex" "$file" >"$temp"/output
mv "$temp"/output "$temp"/input
file="$temp"/input
done
cat "$temp"/input
However, a better solution is probably to arrange for Awk to check for all the patterns in one go, and avoid reading the same lines over and over again.
Passing the arguments to Awk with quoting intact is not entirely trivial. Here, we simply pass them as command-line arguments and process those into an array within the Awk script itself.
awk 'BEGIN { for(i=1; i<ARGC; ++i) a[i]=ARGV[i];
             ARGV[1]="-"; ARGC=2 }
     { for(n=1; n<=i; ++n) if ($0 !~ a[n]) next; }1' "$@" <file
In brief, in the BEGIN block, we copy the command-line arguments from ARGV to a, then replace ARGV and ARGC to pass Awk a new array of (apparent) command-line arguments which consists of just - which means to read standard input. Then, we simply iterate over a and skip to the next line if the current input line from standard input does not match. Any remaining lines have matched all the patterns we passed in, and are thus printed.
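As a quick sanity check of the awk version (hypothetical file and patterns), only lines matching every supplied pattern survive:
$ printf '%s\n' 'red green blue' 'red blue' 'blue green red' > colors.txt
$ awk 'BEGIN { for(i=1; i<ARGC; ++i) a[i]=ARGV[i];
             ARGV[1]="-"; ARGC=2 }
     { for(n=1; n<=i; ++n) if ($0 !~ a[n]) next; }1' red green <colors.txt
red green blue
blue green red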

numeric variable in egrep regular expression bash script

So I am trying to make a script that contains egrep and accepts a numeric variable
#!/bin/bash
var=$1
list="egrep "^.{$var}$ /usr/share/dict/words"
cat list
For example, if var is 5, I would like this script to print out every line with 5 characters. For some reason the script does not do that. Help would be greatly appreciated!
Your script doesn't work because there are several problems with these lines:
list="egrep "^.{$var}$ /usr/share/dict/words"
cat list
The first line isn't complete; it's missing a closing quote,
Even if you fixed it, you're assigning a literal string to list, not the output of a command,
The regular expression and the filename should be separated,
cat doesn't print a variable's content; echo does that.
So:
#!/bin/bash
var="$1"
list="$(egrep '^.{'"$var"'}$' /usr/share/dict/words)"
echo "$list"
should work.
Or even better, you can use just an awk command:
awk 'length==5' /usr/share/dict/words
with $1 or any other variable:
awk -v n="$1" 'length==n' /usr/share/dict/words
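A quick way to exercise either variant (assuming the fixed script is saved as, say, wordlen.sh and /usr/share/dict/words exists on your system):
$ ./wordlen.sh 5                                  # every 5-character word via egrep
$ awk -v n=7 'length==n' /usr/share/dict/words    # same idea with awk, no script needed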

Using awk command in Bash

I'm trying to loop an awk command using bash script and I'm having a hard time including a variable within the single quotes for the awk command. I'm thinking I should be doing this completely in awk, but I feel more comfortable with bash right now.
#!/bin/bash
index="1"
while [ $index -le 13 ]
do
awk "'"/^$index/ {print}"'" text.txt
done
Use the standard approach -- -v option of awk to set/pass the variable:
awk -v idx="$index" '$0 ~ "^"idx' text.txt
Here I have set the awk variable idx to the value of the shell variable $index. Inside awk, I have simply used idx as an awk variable.
$0 ~ "^"idx matches if the record starts with (^) whatever the variable idx contains; if so, print the record.
awk '/'"$index"'/' text.txt
# A lil play with the script part where you split the awk command
# and sandwich the bash variable in between using double quotes
# Note awk prints by default, so idiomatic awk omits the '{print}' too.
should do; alternatively, use grep like
grep "$index" text.txt # Mind the double quotes
Note: -le is for comparing numbers, so you may want to change index="1" to index=1.
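Putting this together, a minimal corrected version of the original loop might look like the following; note that the original loop also never incremented index, so it would have run forever:
#!/bin/bash
index=1
while [ "$index" -le 13 ]; do
    # Print every line of text.txt that starts with the current index.
    awk -v idx="$index" '$0 ~ "^"idx' text.txt
    index=$((index + 1))
done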

Unable to print by using awk command in the shell script

#!/bin/bash
echo "Number of hosts entered are "$#
echo "Hostnames are "$#
for i in "$#"
do
echo "Logging in to the host "$i
pbsh root@$i '
ipaddr=`ip r | awk '{print $9}'`
if [ ipaddr = 172.*.*.* ]
then
echo "Script can not be run in this IP series"
exit
else
cd /var/tmp ; wget http://**********
fi'
done
After executing the above script, it throws the error below. The script does run, but not in the desired way.
awk: cmd. line:1: {print
awk: cmd. line:1: ^ unexpected newline or end of string
I am a newbie to scripting. Kindly correct me if anything is wrong in the script.
In the listing you posted, the opening single quote on the pbsh line is closed by the single quote that starts the awk command. You can escape the latter by prefixing it with a backslash.
If pbsh also accepts the command to be executed from stdin, an alternative would be to use a HERE document (see the bash man page, section Here Documents).
UPDATE: Gordon Davisson is right in his comment; \' doesn't work either. Since pbsh insists on getting the command to be executed as a single argument, you could either fiddle with the quotes, as he suggested, or put the whole input for pbsh into a separate file and use, e.g.,
pbsh root@$i "$(<input_script.pbsh)"
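As a sketch of that separate-file approach (input_script.pbsh is a hypothetical name, and the remote-side test is rewritten with a case pattern because [ ipaddr = 172.*.*.* ] would not work as intended), the awk quotes no longer clash with anything because they never pass through an outer quoted string:
# input_script.pbsh -- commands to run on the remote host
ipaddr=$(ip r | awk '{print $9}')
case $ipaddr in
    172.*) echo "Script can not be run in this IP series" ;;
    *)     cd /var/tmp && wget http://********** ;;
esac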

Linux: Append variable to end of line using line number as variable

I am new to shell scripting. I am using ksh.
I have this particular line in my script which I use to append the text in a variable q to the end of a particular line whose number is given by the variable a.
sed -i ''$a's#$#'"$q"'#' test.txt
Now the variable q can contain a large amount of text, with all sorts of special characters, such as !@#$%^&*()_+:"<>.,/;'[]= etc., no exceptions. For now, I use a couple of sed commands in my script to remove any ' and " in this text (sed "s/'/ /g" | sed 's/"/ /g'), but I still get the following error when I execute the above command
sed: -e expression #1, char 168: unterminated `s' command
Any sed, awk, or perl suggestions are very much appreciated.
The difficulty here is to quote (escape) the substitution separator characters # in the sed command:
sed -i ''$a's#$#'"$q"'#' test.txt
For example, if q contains # it will not work. The # will terminate the replacement pattern prematurely. Example: q='a#b', a=2, and the command expands to
sed -i 2s#$#a#b# test.txt
which will not append a#b to the end of line 2, but rather a#.
This can be solved by escaping the # characters in q:
sed -i 2s#$#a\#b# test.txt
However, this escaping could be cumbersome to do in shell.
Another approach is to use another level of indirection. Here is an example using a Perl one-liner. First, q is passed to the script in quoted form. Then, within the script, that value is assigned to a new internal variable $q. With this approach there is no need to escape the substitution separator characters:
perl -pi -E 'BEGIN {$q = shift; $a = shift} s/$/$q/ if $. == $a' "$q" "$a" test.txt
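For example (hypothetical values, assuming test.txt has at least three lines), appending a string full of special characters to line 3 then needs no escaping at all:
q='100% of !@#$ & /slashes/ are fine'
a=3
perl -pi -E 'BEGIN {$q = shift; $a = shift} s/$/$q/ if $. == $a' "$q" "$a" test.txt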
Do not bother trying to sanitize the string. Just put it in a file, and use sed's r command to read it in:
echo "$q" > tmpfile
sed -i -e ${a}rtmpfile test.txt
Ah, but that creates an extra newline that you don't want. You can remove it with:
sed -e ${a}rtmpfile test.txt | awk 'NR=='$a'{printf "%s", $0; next}1' > output
Another approach is to use the patch utility if present in your system.
patch test.txt <<-EOF
${a}c
$(sed "${a}q;d" test.txt)$q
.
EOF
${a}c expands to the line number followed by c, which tells patch that the operation is a change of line ${a}.
The second line is the replacement text for the change: the original line concatenated with the added text.
The sole . marks the end of the replacement text.
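To make the mechanics concrete (hypothetical values a=2 and q=' appended', with 'second line' as the original content of line 2), the here-document above expands into an ed-style change command like this before it is handed to patch:
2c
second line appended
.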
