I have the following CSV file:
more file.csv
1,yes,yes,customer1,1,2,3,4
2,no,yes,customer5,34,56,33,2
3,yes,yes,customer11
4,no,no,customer14
5,yes,no,customer15
6,yes,yes,customer21
7,no,yes,customer34
8,no,yes,customer89
The following awk line was written to take lines from the CSV and put each element (line) into the parameter LINES:
declare LINES=` awk -F, 'BEGIN{IGNORECASE=1} $2=="yes" {printf "\"Line number %d customer %s\"\n", $1, $4}' file.csv `
echo $LINES
"Line number 1 customer customer1" "Line number 3 customer customer11" "Line number 5 customer customer15" "Line number 6 customer customer21"
But when I want to print the number of elements in parameter LINES, I get 1:
echo ${#LINES[*]}
1
While actually I need to get 4 elements (lines).
Please advise how to fix the awk line in order to get 4 elements.
Remark:
Please see this example; when I edit LINES manually, the element count is 4:
declare LINES=( "Line number 1 customer customer1" "Line number 3 customer customer11" "Line number 5 customer customer15" "Line number 6 customer customer21" )
echo ${#LINES[*]}
4
The awk output isn't being stored in an array. You'd need declare -a LINES=( $(...) ) to do that. But even then, bash splits array elements on any whitespace, not only newlines. And if you were to wrap the command substitution in quotes, LINES=( "$(...)" ), you would get only a single element containing the entire output of the command.
You could do the necessary text manipulation with a read loop, which keeps each whitespace-containing line as a single array element.
declare -a lines
while IFS=, read -r line_number answer _ customer _; do
if [[ $answer == @(yes|YES) ]]; then
lines+=("Line number $line_number customer $customer")
fi
done < file.csv
As noted in the comments, depending on the bash version, usage of @(...) inside [[ ... ]] may require shopt -s extglob.
Alternatively, the if could be replaced with a case:
case $answer in
yes|YES)
lines+=("Line number $line_number customer $customer")
;;
esac
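If extglob isn't available, another option is bash's lowercasing expansion ${answer,,} (bash 4+). This is a sketch recreating the question's sample file, not part of the original answer:

```shell
#!/usr/bin/env bash
# Recreate the sample file from the question so the sketch is self-contained.
cat > file.csv <<'EOF'
1,yes,yes,customer1,1,2,3,4
2,no,yes,customer5,34,56,33,2
3,yes,yes,customer11
4,no,no,customer14
5,yes,no,customer15
6,yes,yes,customer21
7,no,yes,customer34
8,no,yes,customer89
EOF

declare -a lines
while IFS=, read -r line_number answer _ customer _; do
    # ${answer,,} lowercases the field, so yes/YES/Yes all match
    if [[ ${answer,,} == yes ]]; then
        lines+=("Line number $line_number customer $customer")
    fi
done < file.csv

echo "${#lines[@]}"    # 4 with the sample data
```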
Try this:
a=$(awk -F, 'BEGIN{IGNORECASE=1} $2=="yes" {printf "Line number %d customer %s;", $1, $4}' file.csv)
IFS=';' read -a LINES <<< "${a}"
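A self-contained check of this approach, recreating the sample file; tolower() replaces IGNORECASE here, since the case handling is just a plain string comparison:

```shell
#!/usr/bin/env bash
cat > file.csv <<'EOF'
1,yes,yes,customer1,1,2,3,4
2,no,yes,customer5,34,56,33,2
3,yes,yes,customer11
4,no,no,customer14
5,yes,no,customer15
6,yes,yes,customer21
7,no,yes,customer34
8,no,yes,customer89
EOF

# Join the matching lines with ';' and split them back into an array.
a=$(awk -F, 'tolower($2)=="yes" {printf "Line number %d customer %s;", $1, $4}' file.csv)
IFS=';' read -r -a LINES <<< "$a"

echo "${#LINES[@]}"    # 4; the trailing ';' does not create an empty element
```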
As @JohnB mentioned, you are populating LINES as a scalar variable, not an array. Try this:
$ IFS=$'\n' LINES=( $(awk 'BEGIN{for(i=1;i<=3;i++) printf "\"Line number %d\"\n", i}') )
$ echo ${#LINES[*]}
3
$ echo "${LINES[0]}"
"Line number 1"
$ echo "${LINES[1]}"
"Line number 2"
$ echo "${LINES[2]}"
"Line number 3"
and tweak to suit your real input/output which would probably result in:
IFS=$'\n' LINES=( $(awk -F, 'BEGIN{IGNORECASE=1} $2=="yes"{printf "\"Line number %d customer %s\"\n", $1, $4}' file.csv) )
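One caveat with the unquoted command substitution: the resulting words also undergo pathname expansion, so an output line containing * could match filenames. A defensive sketch (not part of the original answer), again with the sample file:

```shell
#!/usr/bin/env bash
cat > file.csv <<'EOF'
1,yes,yes,customer1,1,2,3,4
2,no,yes,customer5,34,56,33,2
3,yes,yes,customer11
4,no,no,customer14
5,yes,no,customer15
6,yes,yes,customer21
7,no,yes,customer34
8,no,yes,customer89
EOF

set -f          # disable pathname expansion so no generated word is treated as a glob
IFS=$'\n'       # split the command substitution on newlines only
LINES=( $(awk -F, 'tolower($2)=="yes"{printf "\"Line number %d customer %s\"\n", $1, $4}' file.csv) )
set +f
unset IFS       # restore default word splitting

echo "${#LINES[@]}"
```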
If you're using bash, you can just use the mapfile builtin:
$ mapfile -t LINES < \
<(awk -F, 'BEGIN{IGNORECASE=1}
$2=="yes" {printf "\"Line number %d customer %s\"\n", $1, $4}' file.csv)
$ echo "${#LINES[*]}"
4
$ echo "${LINES[@]}"
"Line number 1 customer customer1" "Line number 3 customer customer11" "Line number 5 customer customer15" "Line number 6 customer customer21"
I would like to write a little shell script that checks whether all lines in a file have the same number of ; characters.
I have a file in the following format:
$ cat filename.txt
34567890;098765456789;098765567;9876;9876;EXTG;687J;
4567800987987;09876789;9667876YH;9876;098765;098765;09876;
SLKL987H;09876LKJ;POIUYT;PÖIUYT;88765K;POIUYTY;LKJHGFDF;
TYUIO;09876LKJ;POIUYT;LKJHG;88765K;POIUYTY;OIUYT;
...
...
...
SDFGHJK;RTYUIO9876;4567890LKJHGFD;POIUYTRF56789;POIUY;POIUYT;9876;
I use the following command to determine the number of ; on each line:
awk -F';' 'NF{print (NF-1)}' filename.txt
I have the following output :
7
7
7
7
...
...
...
7
because the number of ; on each line of this file is 7.
Now I want to write a script that verifies that all the lines in the file have 7 semicolons. If so, it tells me that the file is correct. Otherwise, if there is a single line containing a different number of semicolons, it tells me that the file is not correct.
Rather than printing output, return a value, e.g.
awk -F';' 'NR==1{count = NF} NF!=count{status=1} END{exit status}' filename.txt
If there are no lines, or if all lines contain the same number of fields, this will exit with status 0. Otherwise, it exits with status 1 to indicate failure.
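The exit status can then drive the good/bad message directly. A sketch recreating two of the sample lines:

```shell
#!/usr/bin/env bash
cat > filename.txt <<'EOF'
34567890;098765456789;098765567;9876;9876;EXTG;687J;
4567800987987;09876789;9667876YH;9876;098765;098765;09876;
EOF

# Exit status 0 means every line has the same number of fields as the first.
if awk -F';' 'NR==1{count=NF} NF!=count{status=1} END{exit status}' filename.txt; then
    echo "file is correct"
else
    echo "file is not correct"
fi
```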
Count the number of unique lines and verify that the count is 1.
if (($(awk -F';' 'NF{print (NF-1)}' filename.txt | uniq | wc -l) == 1)); then
echo good
else
echo bad
fi
Just pipe the result through sort -u | wc -l. If all lines have the same number of fields, this will produce one line of output.
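Put together, the whole check fits in a few lines; printing NF directly works the same way as printing the NF-1 semicolon counts (a sketch):

```shell
#!/usr/bin/env bash
cat > filename.txt <<'EOF'
34567890;098765456789;098765567;9876;9876;EXTG;687J;
4567800987987;09876789;9667876YH;9876;098765;098765;09876;
EOF

# One distinct field count across all lines means the file is consistent.
n=$(awk -F';' '{print NF}' filename.txt | sort -u | wc -l)
if [ "$n" -eq 1 ]; then echo good; else echo bad; fi
```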
Alternatively, just look for a line in awk that doesn't have the same number of fields as the first line.
awk -F';' 'NR==1 {linecount=NF}
linecount != NF { print "Bad line " $0; exit 1}
' filename.txt && echo "Good file"
You can also adapt the old trick used to output only the first of duplicate lines.
awk -F';' '{a[NF]=1}; length(a) > 1 {exit 1}' filename.txt
Each line records its field count as a key in a; the program exits with status 1 as soon as a has more than one entry. Basically, a acts as a set of all field counts seen so far.
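Note that length(a) on an array is a GNU awk extension; a portable sketch keeps its own counter of distinct field widths instead:

```shell
#!/usr/bin/env bash
cat > filename.txt <<'EOF'
34567890;098765456789;098765567;9876;9876;EXTG;687J;
4567800987987;09876789;9667876YH;9876;098765;098765;09876;
EOF

# seen[NF]++ is 0 the first time a field count appears; kinds counts distinct ones.
awk -F';' '!seen[NF]++ && ++kinds > 1 {exit 1}' filename.txt && echo consistent
```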
Based on all the information you have given me, I ended up doing the following. And it works for me.
nbCol=$(awk -F';' 'NR==1{print NF}' "$1")
val=7
awk -F';' 'NR==1{count = NF} NF != count { exit 1}' "$1"
result=$?
if [ "$result" -eq 0 ] && [ "$nbCol" -eq "$val" ]; then
echo "Good Format"
else
echo "Bad Format"
fi
I want to count the lines which do not have any words separated by spaces.
Example in domainlist.txt:
Hi My name is Ritesh Mishra
my.name
my
The script should give the output: 2
I have written the below code:
#!/bin/bash
param=" "
cat domainlist.txt | while read line
do
d=`echo $line | awk '{print $2}' `
if [[ $d == $param ]];
then
let count++
fi
done
echo $count
It should count the lines which do not have any space-separated words, but it's not showing any output.
Using awk to count the lines that don't have any space-separated words:
$ awk 'NF==1{c++}END{print c}' file
2
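One edge case: if no line matches, c is never set and the END block prints an empty line. Printing c+0 forces a numeric 0 (a small sketch on the question's data):

```shell
#!/usr/bin/env bash
printf 'Hi My name is Ritesh Mishra\nmy.name\nmy\n' > domainlist.txt

# c+0 coerces an unset c to 0 instead of printing an empty line.
count=$(awk 'NF==1{c++} END{print c+0}' domainlist.txt)
echo "$count"   # 2

printf 'two words\n' > nomatch.txt
awk 'NF==1{c++} END{print c+0}' nomatch.txt   # 0
```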
Seems like we are overcomplicating this.
Why not just grep?
$: cat file
Hi My name is Ritesh Mishra
my.name
my
$: grep -vc ' ' file
2
I need a shell script that reads in a number and compares it with the numbers in another file.
Here is an example:
I have a file called numbers.txt, which contains the following:
name;type;value;description
samsung;s5;1500;blue
iphone;6;1000;silver
I read in a number, for example 1200, and it should print out the values from the file which are less than 1200 (in my example it should print out 1000).
Here is the code that I started to write but I don't know how to finish it.
echo " Enter a number"
read num
if [ $numbersinthefile -le $num ]; then
echo "$numbersinthefile"
I hope I defined my question properly. Can somebody help me?
Use:
#!/bin/bash
echo -n "Enter the number: "
read num
awk -F\; '$3 < '$num' {print $0;}' myfile
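Splicing $num into the awk program this way breaks if the input is empty or contains characters awk treats as syntax; passing it with -v, and skipping the header line with NR>1, is a safer variant (a sketch, not the original answer):

```shell
#!/usr/bin/env bash
cat > myfile <<'EOF'
name;type;value;description
samsung;s5;1500;blue
iphone;6;1000;silver
EOF

num=1200
# -v passes the shell variable in safely; $3+0 forces a numeric comparison,
# and NR>1 keeps the header line out of the comparison.
awk -F';' -v num="$num" 'NR>1 && $3+0 < num' myfile
```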
Try this: first use sed to remove the first line, then use cut to get the actual number from each line, and compare that number to the input.
echo " Enter a number"
read num
sed '1d' numbers.txt | while read line; do
numbersinthefile=$(echo $line | cut -d';' -f3);
if [ $numbersinthefile -lt $num ]; then
echo $line;
fi
done
I was trying to read a file, count a specific number at a specific place, and show how many times it appears. For example:
The 1st field is a number, the 2nd field the brand name, the 3rd field a group they belong to; the 4th and 5th are not important.
1:audi:2:1990:5
2:bmw:2:1987:4
3:bugatti:3:1988:19
4.buick:4:2000:12
5:dodge:2:1999:4
6:ferrari:2:2000:4
As output, I want to search by column 3, group together the 2's (by brand name), and count how many of them I have.
The output i am looking for should look like this:
1:audi:2:1990:5
2:bmw:2:1987:4
5:dodge:2:1999:4
6:ferrari:2:2000:4
4 -> showing how many lines there are.
I have tried this approach but can't figure it out:
file="cars.txt"; sort -t ":" -k3 $file #sorting by the 3rd field
grep -c '2' cars.txt # this counts all the 2's in the file including number 2.
I hope you understand, and thank you in advance.
I am not sure exactly what you mean by "group together by brand name", but the following will get you the output that you describe.
awk -F':' '$3 == 2' Input.txt
If you want a line count, you can pipe that to wc -l.
awk -F':' '$3 == 2' Input.txt | wc -l
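Both steps fit in one awk pass that prints the matching lines and then the count the question asks for (a sketch; it assumes the . in line 4 is really a :):

```shell
#!/usr/bin/env bash
cat > cars.txt <<'EOF'
1:audi:2:1990:5
2:bmw:2:1987:4
3:bugatti:3:1988:19
4:buick:4:2000:12
5:dodge:2:1999:4
6:ferrari:2:2000:4
EOF

# Print every group-2 line, then the total in the END block.
awk -F':' '$3 == 2 {print; c++} END{print c+0}' cars.txt
```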
I guess line 4 is 4:buick and not 4.buick. Then I suggest this:
$ awk 'BEGIN{FS=":"} $3==2{total++; print} END{print "TOTAL --- " total}' Input.txt
Plain bash solution:
#!/bin/bash
while IFS=":" read -ra line; do
if (( ${line[2]} == 2 )); then
(IFS=":"; echo "${line[*]}")
(( count++ ))
fi
done < file
echo "Count = $count"
Output:
1:audi:2:1990:5
2:bmw:2:1987:4
5:dodge:2:1999:4
6:ferrari:2:2000:4
Count = 4
I'm trying to get the following code to read in variables from the user: files to search, a search string, the whitespace wanted for the output, and the number of fields to be output.
The first issue is with the awk command. If I enter a valid whitespace character such as " " (a single space) or "\t", I get an "unterminated string" and a syntax error, which only occurs if I request more than one field to be output (otherwise no whitespace is added on).
Secondly, grep seems to be a bit picky when using the search string. I've had to add double quotation marks to the start and finish of the variable in order for the entire string to be used.
#!/bin/bash
#****************************************************
#Name: reportCreator.sh
#Purpose: Create reports from log files
#Author:
#Date Written: 11/01/2013
#Last Updated: 11/01/2013
#****************************************************
clear
#Determine what to search for
printf "Please enter input file name(s): "
read inputFile
printf "Please enter your search query: "
read searchQuery
printf "Please enter the whitespace character: "
IFS= read whitespace
printf "Please enter the amount of fields to be displayed: "
read fieldAmount
#Add quotation marks to whitespace and searchQuery
whitespace=\""$whitespace"\"
searchQuery=\""$searchQuery"\"
#Declare variables
declare -i counter=0
declare -a fields[$fieldAmount]
declare -a fieldInsert[$fieldAmount]
#While loop for entering fields
while [[ "$counter" -ne "$fieldAmount" ]]
do
#Ask for field numbers
printf "Please enter number for required field $((counter+1)): "
read fields[$counter]
((counter++))
done
#Create function to add '$' before every field and the whitespace characters
function fieldFunction
{
for (( counter=0; counter <= ($fieldAmount-1); counter++ ))
do
fieldInsert[$fieldAmount]="$""${fields[$counter]}"
if (( counter!=($fieldAmount-1) ))
then
printf "${fieldInsert[*]}$whitespace"
else
printf "${fieldInsert[*]}"
fi
done
}
printf "%b\n"
tac $inputFile | grep "$searchQuery" | less #| awk '{print $(fieldFunction)}'
exit 0
Any help would be appreciated.
Grep doesn't understand quotes, so delete the line that adds them to $searchQuery.
Use double quotes instead of single quotes for awk, so $(fieldFunction) will expand.
Fixing this (as well as uncommenting the awk, of course), it works:
user@host 15:00 ~ $ cat script
#!/bin/bash
#****************************************************
#Name: reportCreator.sh
#Purpose: Create reports from log files
#Author:
#Date Written: 11/01/2013
#Last Updated: 11/01/2013
#****************************************************
clear
#Determine what to search for
printf "Please enter input file name(s): "
read inputFile
printf "Please enter your search query: "
read searchQuery
printf "Please enter the whitespace character: "
IFS= read whitespace
printf "Please enter the amount of fields to be displayed: "
read fieldAmount
#Add quotation marks to whitespace and searchQuery
whitespace=\""$whitespace"\"
#Declare variables
declare -i counter=0
declare -a fields[$fieldAmount]
declare -a fieldInsert[$fieldAmount]
#While loop for entering fields
while [[ "$counter" -ne "$fieldAmount" ]]
do
#Ask for field numbers
printf "Please enter number for required field $((counter+1)): "
read fields[$counter]
((counter++))
done
#Create function to add '$' before every field and the whitespace characters
function fieldFunction
{
for (( counter=0; counter <= ($fieldAmount-1); counter++ ))
do
fieldInsert[$fieldAmount]="$""${fields[$counter]}"
if (( counter!=($fieldAmount-1) ))
then
printf "${fieldInsert[*]}$whitespace"
else
printf "${fieldInsert[*]}"
fi
done
}
printf "%b\n"
tac $inputFile | grep "$searchQuery" | awk "{print $(fieldFunction)}"
exit 0
user@host 15:01 ~ $ cat file
foo two three four
foo two2 three2 four2
bar two three four
user@host 15:01 ~ $ bash script
Please enter input file name(s): file
Please enter your search query: foo
Please enter the whitespace character:
Please enter the amount of fields to be displayed: 2
Please enter number for required field 1: 4
Please enter number for required field 2: 2
four2 two2
four two
user@host 15:01 ~ $
Let's take a look at this section of code:
#Create function to add '$' before every field and the whitespace characters
function fieldFunction
{
for (( counter=0; counter <= ($fieldAmount-1); counter++ ))
do
fieldInsert[$fieldAmount]="$""${fields[$counter]}"
if (( counter!=($fieldAmount-1) ))
then
printf "${fieldInsert[*]}$whitespace"
else
printf "${fieldInsert[*]}"
fi
done
}
printf "%b\n"
tac $inputFile | grep "$searchQuery" | awk "{print $(fieldFunction)}"
It can be re-written more simply and robustly as (untested):
tac "$inputFile" |
awk -v fieldsStr="${fields[*]}" -v searchQuery="$searchQuery" -v OFS="$whitespace" '
BEGIN{ numFields = split(fieldsStr,fieldsArr) }
$0 ~ searchQuery {
for ( i=1; i <= numFields; i++ )
printf "%s%s", (i==1?"":OFS), $(fieldsArr[i])
print ""
}
'
I don't know why you need "tac" to open the file but I assume you have your reasons so I left it in.