Conditional Awk hashmap match lookup - linux

I have 2 tabular files. One file, lookup_file.txt, contains a mapping of only 50 key/value pairs. The other file, data.txt, has the actual tabular data, with 30 columns and millions of rows.
I would like to replace the id column of the second file with the values from lookup_file.txt.
How can I do this? I would prefer using awk in a bash script.
Also, is there a hashmap data structure I can use in bash for storing the 50 key/value pairs rather than another file?

Assuming your files have comma-separated fields and the "id column" is field 3:
awk '
BEGIN{ FS=OFS="," }
NR==FNR { map[$1] = $2; next }
{ $3 = map[$3]; print }
' lookup_file.txt data.txt
If any of those assumptions are wrong, clue us in if the fix isn't obvious...
EDIT: and if you want to avoid the (IMHO negligible) performance impact of the NR==FNR test, this would be one of those very rare cases where the use of getline is appropriate:
awk '
BEGIN{
FS=OFS=","
while ( (getline line < "lookup_file.txt") > 0 ) {
split(line,f)
map[f[1]] = f[2]
}
}
{ $3 = map[$3]; print }
' data.txt
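As a quick sanity check, here is the first one-liner run on a tiny pair of comma-separated sample files (the file names and the id-in-column-3 assumption are the same as above; the sample values are made up):

```shell
printf '1,one\n2,two\n' > lookup_file.txt
printf 'a,b,1,d\ne,f,2,h\n' > data.txt

# First file fills the map; second file gets its 3rd column replaced.
awk 'BEGIN{FS=OFS=","} NR==FNR{map[$1]=$2; next} {$3=map[$3]; print}' \
    lookup_file.txt data.txt
# a,b,one,d
# e,f,two,h
```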

You could use a mix of "sort" and "join" via bash instead of having to write it in awk/sed, and it is likely to be even faster:
key.csv (id,name)
1,homer
2,marge
3,bart
4,lisa
5,maggie
data.csv (name,animal,owner,age)
snowball,dog,3,1
frosty,yeti,1,245
cujo,dog,5,4
Now, you need to sort both files first on the id columns:
sort -t, -k1,1 key.csv > sorted_keys.csv
sort -t, -k3,3 data.csv > sorted_data.csv
Now join the 2 files:
join -1 1 -2 3 -o "2.1 2.2 1.2 2.4" -t, sorted_keys.csv sorted_data.csv > replaced_data.csv
This should produce (note that join emits rows in the sort order of the join key, i.e. sorted by id):
frosty,yeti,homer,245
snowball,dog,bart,1
cujo,dog,maggie,4
This:
-o "2.1 2.2 1.2 2.4"
says which columns from the 2 files you want in the final output, in file.field notation (so 2.1 is the first field of the second file).
It is pretty fast for finding and replacing multiple gigabytes of data compared to other scripting languages. I haven't done a direct comparison to sed/awk, but writing a bash script wrapping this is much easier than writing it in sed/awk (for me at least).
Also, you can speed up the sort by using a recent version of GNU coreutils, which lets you run the sort in parallel:
sort --parallel=4 -t, -k3,3 data.csv > sorted_data.csv
Here 4 is the number of threads to run. I was told that 2 threads per machine core will usually max out the machine, but if the machine is dedicated just to this, that is fine.

There are several ways to do this. But if you want an easy one-liner, without much in the way of validation, I would go with an awk/sed solution.
Assume the following:
the files are tab delimited
you are using bash shell
the id in the data file is in the first column
your files look like this:
lookup
1 one
2 two
3 three
4 four
5 five
data
1 col2 col3 col4 col5
2 col2 col3 col4 col5
3 col2 col3 col4 col5
4 col2 col3 col4 col5
5 col2 col3 col4 col5
I would use awk and sed to accomplish this task like this:
awk '{print "sed -i s/^"$1"/"$2"/ data"}' lookup | bash
What this does is go through each line of lookup and write the following to stdout:
sed -i s/^1/one/ data
sed -i s/^2/two/ data
and so on.
It next pipes each line to the shell (| bash), which executes the sed expression. -i is for in-place editing; you may want -i.bak to create a backup file (you can change the extension to whatever you like).
The sed is looking for the id at the start of the line, as indicated by the ^; you don't want to replace an 'id' in a column that might not contain an id. Be aware, though, that ^1 also matches ids like 10 or 123, so anchoring on the trailing tab as well (s/^1\t/one\t/) is safer.
your output would look like the following:
one col2 col3 col4 col5
two col2 col3 col4 col5
three col2 col3 col4 col5
four col2 col3 col4 col5
five col2 col3 col4 col5
Of course, your ids are probably not simply 1 to one, 2 to two, etc., but this might get you started in the right direction. And I use the term right very loosely. Bear in mind too that each sed -i pass rewrites the entire data file, so with 50 keys that multi-million-row file gets rewritten 50 times.
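A variation on the same generated-sed idea that only rewrites the file once: emit all 50 substitutions into a sed script and apply it in a single pass. This is a sketch assuming the tab-delimited lookup and data files from the example above; replace.sed is a hypothetical name for the generated script.

```shell
# Build one sed script with a substitution per lookup line,
# anchored on the leading id and its trailing tab.
awk '{print "s/^"$1"\t/"$2"\t/"}' lookup > replace.sed

# Apply all substitutions in a single pass over the data file.
sed -i -f replace.sed data
```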

The way I'd do this is to use awk to write an awk program to process the larger file:
awk -f <(awk '
BEGIN{print " BEGIN{"}
{printf " a[\"%s\"]=\"%s\";",$1,$2}
END {print " }";
print " {$1=a[$1];print $0}"}
' lookup_file.txt
) data.txt
That assumes that the id column is column 1; if not, you need to change both instances of $1 in $1=a[$1]
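To see what the inner awk actually generates, you can run it on its own. With a hypothetical two-entry lookup_file.txt it emits a complete awk program:

```shell
printf '1 one\n2 two\n' > lookup_file.txt

# Run only the generator; its output is itself an awk program.
awk '
BEGIN{print " BEGIN{"}
{printf " a[\"%s\"]=\"%s\";",$1,$2}
END {print " }";
print " {$1=a[$1];print $0}"}
' lookup_file.txt
# Emits:
#  BEGIN{
#  a["1"]="one"; a["2"]="two"; }
#  {$1=a[$1];print $0}
```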

Related

Iterate over columns of a csv file bash script

I have been trying to iterate over a file with 20 columns in two ways. The first is creating an array with the names of the columns and then passing it using for, but it does not work.
#!/bin/bash
a="${#}"
columns=('$col2' col3 col4 col5 col6 coll7 col8 col9 col10 col11 col12 col13 col14 col15 col16
col17 col18 col19 col20)
for elem in ${columns[*]}
do
while IFS=, read -r col1 col2 col3 col4 col5 col6 coll7 col8 col9 col10 col11 col12;do
b+=($elem)
done < $a
printf '%s\n' "${b[*]}"
done
The other method seems to take the entire row, and that is not the idea; I would like to take each column individually, not go row by row. However, this one did not work either; it looks like there is a problem with the way the for loop is written.
#!/bin/bash
a="${#}"
while IFS= read -r line; do
IFS=, read -ra fields <<<"$line"
for ((i=${fields[#]} ; i >= 1 ; i-- ))
do
printf '%s' "${fields[i]}"
done
done < $a
I have the following table, which represents some sales per year. I would like to take the information for each product per year and sum it, in order to verify the total, because in some cases the total value is incorrect. So for example in the year 2004, if you sum each product (45.000 + 70.000 + 100.000), the output is not the 323.000 the table shows.
https://easyupload.io/3lpz6p
While Bash may not be the most efficient way to verify the total of each column, there is a way to do it and I had fun finding the solution.
Background
I have found a way to obtain the total of the values in one column using Bash. My idea was to use the read command and make sure to only read the values in the column specified by the user in the variable col_num. This is because the read command goes through a file row by row. In the example below, I specified col_num to be 0, which means the read command will go through the .csv input line by line, while grabbing only the values in the first column. I then did addition of the values the Bash way according to this.
I believe that only a small adjustment to my code is needed to enable it to go through all 20 columns before it terminates. But since you only have 20 columns, I thought it would not be too bad to increment col_num for every column.
My solution
#!/bin/bash
{
# this reads the first row which has the column names so we will not go
# through that row in the loop below
read
# this is where you specify the column number
col_num=0
# var_2 specifies the number of values (length) grabbed per line. we only
# want one value from each line so let it be 1
var2=1
while IFS=, read -r -a arr
do
final_arr+=$(echo "${arr[@]:$col_num:$var2} + ")
column_total="$(($final_arr 0)) "
done
echo $column_total
} < input.csv
Important note: The script works for every column except for the last column, probably because it does not end with a comma.
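For comparison, the same column total can be had from a one-line awk sketch. This assumes the input.csv name from the script above, a header row to skip, and that you want the first column (awk columns are 1-based, unlike the 0-based col_num above):

```shell
# Sum column 1 of input.csv, skipping the header line.
awk -F, 'NR > 1 { total += $1 } END { print total }' input.csv
```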

Compare fields in two files and print to another in shell

Please help me with the below. I want to print values into a 3rd file in shell scripting.
The formatting of the files is as follows:
a_file - col1 col2 col3
b_file - col1 col2
a_file:
1 P I
1 1Q JI
b_file:
1 I
How do I compare the 1st and 3rd fields of a_file to the 1st and 2nd fields of b_file? Can anyone please help me with this?
Well, if I understand your question correctly, the simplest way is to read both files line by line, compare the specified columns, and write customized output to the 3rd file.
# read 3 columns from file descriptor 11 and two columns from file descriptor 12
while read -u 11 acol1 acol2 acol3 && read -u 12 bcol1 bcol2; do
# do comparisons between columns
if [ "$acol1" = "$bcol1" ] && [ "$acol3" = "$bcol2" ]; then
echo Comparison true
# print values into the 3rd file
echo "$acol1 $bcol1 $acol3 $bcol2" >>3rd_file
else
echo Comparison false
fi
# bind a_file to file descriptor 11, and b_file to file descriptor 12
done 11<a_file 12<b_file
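An alternative sketch in awk, under the same assumptions (whitespace-delimited files, output file named 3rd_file as in the question): load b_file's two columns into an array on the first pass, then print the a_file rows whose 1st and 3rd fields match a stored pair.

```shell
# First pass (NR==FNR): remember b_file's (col1,col2) pairs.
# Second pass: print a_file lines whose 1st and 3rd fields match a pair.
awk 'NR==FNR { seen[$1,$2]; next } ($1,$3) in seen' b_file a_file > 3rd_file
```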

replace tab in an enclosed string in a tab delimited file linux

I have a tab-delimited txt file in which the third column contains an enclosed string that might also have a tab. Because of this extra tab I am getting 5 columns when I try to read the tab-delimited file. So I want to replace that tab with a space.
Following is the sample file.
col1 col2 col3 col4
1 abc "pqr xyz" asd
2 asd "lmn pqr" aws
3 abc "asd" lmn
I want the output like this
col1 col2 col3 col4
1 abc "pqr xyz" asd
2 asd "lmn pqr" aws
3 abc "asd" lmn
Here is what I have tried:
awk -F"\t" '{ gsub("\t","",$3); print $3 }' file.txt
After that I am getting the following output:
col3
"pqr
"lmn
"asd"
Please help
With GNU awk (gawk) you can use the following expression:
gawk '{gsub("\t"," ",$3)}1' OFS='\t' FPAT='"[^"]*"|[^\t]*' file
The key here is the FPAT variable. It defines how a field can look like instead of just specifying the field delimiter.
In our case a field can either be a sequence of non-double-quote chars enclosed in double quotes ("[^"]*") or a sequence of zero or more non-tab characters ([^\t]*) — zero, to handle empty fields properly.
Since we specify the quoted-field pattern first, it takes precedence.

linux: extract pattern from file

I have a big tab-delimited .txt file of 4 columns
col1 col2 col3 col4
name1 1 2 ens|name1,ccds|name2,ref|name3,ref|name4
name2 3 10 ref|name5,ref|name6
... ... ... ...
Now I want to extract from this file everything that starts with 'ref|'. This pattern is only present in col4
So for this example I would like to have as output
ref|name3
ref|name4
ref|name5
ref|name6
I thought of using 'sed' for this, but I don't know where to start.
I think awk is better suited for this task:
$ awk '{for (i=1;i<=NF;i++){if ($i ~ /ref\|/){print $i}}}' FS='( )|(,)' infile
ref|name3
ref|name4
ref|name5
ref|name6
FS='( )|(,)' sets a multi-pattern FS so the record is split into fields on both commas and blank spaces; the loop then prints each field that matches the ref pattern.
Now I want to extract from this file everything that starts with
'ref|'. This pattern is only present in col4
If you are sure that the pattern is only present in col4, you could use grep:
grep -o 'ref|[^,]*' file
output:
ref|name3
ref|name4
ref|name5
ref|name6
One solution I had was to first use awk to only get the 4th column, then use sed to convert commas into newlines, and then use grep (or awk again) to get the ones that start with ref:
awk '{print $4}' < data.txt | sed -e 's/,/\n/g' | grep "^ref"
This might work for you (GNU sed):
sed 's/\(ref|[^,]*\),/\n\1\n/;/^ref/P;D' file
Surround the required strings by newlines and only print those lines that begin with the start of the required string.

Linux bash script: how to search on a column but return full row?

I have a tab-delimited file with data like this:
col1 col2 col3
I wrote a bash script that allows the file to be searched using this code:
echo -en "Search term: "
read search
data=`cat data.data | egrep -i "$search"`
This works great for searching the entire file, but I'm now wanting to search only on a specific column (which the user can choose).
I am aware of the cut command and can search on a column using this:
cat data.data | cut -f$col | egrep -i "$search"
But then only that column is outputted, so if I use this method then I somehow need to get the rest of the row back.
How can I search on a column in the file, but return the full rows for the results?
You can pass two variables to awk: the column number and the search term.
awk -vcol="$col" -vsearch="$search" '$col ~ search' data.data
If the value of $col is 2, then $2 in awk will correspond to the second column. The ~ operator is used to do a regular expression pattern match. The line will be printed if the column matches the regular expression.
Testing it out:
$ cat data.data
col1 col2 col3
$ col=2
$ search=l2
$ awk -vcol="$col" -vsearch="$search" '$col ~ search' data.data
col1 col2 col3
$ search=l3
$ awk -vcol="$col" -vsearch="$search" '$col ~ search' data.data
# no output
If you want to do case-insensitive pattern matching, you have two options: convert everything to upper or lower case (tolower($col) ~ tolower(search)), or if you are using GNU awk, set the IGNORECASE variable:
$ search=L2
$ awk -vIGNORECASE=1 -vcol="$col" -vsearch="$search" '$col ~ search' data.data
col1 col2 col3
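The tolower() route works in any POSIX awk, not just gawk; a minimal sketch using the same sample file and variables as above:

```shell
col=2
search=L2

# Lower-casing both sides makes the match case-insensitive portably.
awk -v col="$col" -v search="$search" \
    'tolower($col) ~ tolower(search)' data.data
```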
awk is easier for this:
data=$(awk -v col="$col" -v term="$term" 'toupper($col)==toupper(term)' file)
col - column number
term - search term
You could also pass field separator with -F if needed.
