Transpose rows to columns in linux

The input (in a file) looks like this:
IndexName ColumnName
Ind1 Col1
Ind1 Col2
Ind2 Col3
Now, how can I generate the output below on Linux, using sed, awk, or other built-in tools?
IndexName ColumnName
Ind1 ("Col1","Col2")
Ind2 ("Col3")
Thanks,

Here's some ugly awk that I'm certain can be written more elegantly. Also, for the record, this is more "string aggregation" than "transpose".
awk -v quote='"' 'NR==1{print $0}NR>1{a[$1]=a[$1]","quote$2quote}END{for (ind in a){print ind, "("substr(a[ind], 2, length(a[ind])-1)")"}}'
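A possibly easier-to-read spelling of the same idea (just a sketch; it assumes the input is in a file called file, uses substr(x, 2) to drop the leading comma, and note that for (ind in a) does not guarantee output order):
awk -v quote='"' '
NR==1 { print; next }                       # print the header unchanged
      { a[$1] = a[$1] "," quote $2 quote }  # append ,"ColX" to the list for this index
END   {
  for (ind in a)                            # iteration order is unspecified
    print ind, "(" substr(a[ind], 2) ")"    # drop the leading comma, wrap in parentheses
}' file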

Related

How can I add a column to a specific position in a csv file using cat, sed, awk or cut?

I've got a csv file with this content:
col1, col2, col5
1, 1, 1
2, 2, 2
3, 3, 3
4, 4, 4
I need to add a column in the third position of the file with a header but with empty values.
col1, col2, col3, col4
1, 1, , 1
2, 2, , 2
3, 3, , 3
4, 4, , 4
Is there any way to do it using one of the Linux commands like cat, sed, awk or cut?
I know that with cut it is possible to pick columns from a file: cut -d , -f1,2,3 file1 > file2. But it does not seem to have an option for adding a custom column.
Any help would be much appreciated.
This is the best I could do with awk
$ awk -v FS=',' -v OFS=',' 'BEGIN{print "col1, col2, col3, col4"} NR>1{k=$3; $3="\t"; $4=k; print $0}' file
col1, col2, col3, col4
1, 1, , 1
2, 2, , 2
3, 3, , 3
4, 4, , 4
You could write it to a new file using the redirection operator (> newfile at the end)
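For instance, keeping the original file intact and writing the result to newfile (assuming the input is in file):
awk -v FS=',' -v OFS=',' 'BEGIN{print "col1, col2, col3, col4"} NR>1{k=$3; $3="\t"; $4=k; print $0}' file > newfile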
The logic is straightforward:
-v FS=',' -v OFS=',' sets the input and output field separators to comma (,)
BEGIN{print "col1, col2, col3, col4"} prints a new header with the updated column names; remember that the BEGIN block in awk runs before the file is actually processed
In the action part, NR>1{k=$3; $3="\t"; $4=k; print $0} skips the header from the original file, backs up the 3rd column in k, overwrites the 3rd column with an empty tab, and restores the old 3rd-column value as the 4th column
Another cool suggestion by fedorqui in the comments is the following, even simpler, way to do it:
$ awk 'BEGIN {FS=OFS=","} NR==1{$0="col1, col2, col3, col4"} NR>1{$3=sprintf("\t,%s",$3)}1' file
col1, col2, col3, col4
1, 1, , 1
2, 2, , 2
3, 3, , 3
4, 4, , 4
$ awk -v f=3 'BEGIN{FS=OFS=","} {$f = (NR>1?"":"col"f) FS $f} 1' file
col1,col2,col3,col4
1,1,,1
2,2,,2
3,3,,3
4,4,,4
The above was run on this file:
$ cat file
col1,col2,col4
1,1,1
2,2,2
3,3,3
4,4,4
add white space to taste...
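For example, one way to put a space back after each comma is to pipe the previous command through sed (just a sketch):
$ awk -v f=3 'BEGIN{FS=OFS=","} {$f = (NR>1?"":"col"f) FS $f} 1' file | sed 's/,/, /g'
col1, col2, col3, col4
1, 1, , 1
2, 2, , 2
3, 3, , 3
4, 4, , 4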
If there really is space before the first column like the question shows, let's abuse it:
$ awk -F"(,* +)" '{for(i=j=1; ++j<NF; i++) $i=$j; $i=""} NR==1 {$3="col3"; OFS=","} 1' file
col1,col2,col3,col5
1,1,,1
2,2,,2
3,3,,3
4,4,,4
If not:
$ awk '{$(NF+=1)=$NF; $(NF-1)=(NR==1?"col3":"")","} 1' file
col1, col2, col3, col5
1, 1, , 1
2, 2, , 2
3, 3, , 3
4, 4, , 4

linux: extract pattern from file

I have a big tab delimited .txt file of 4 columns
col1 col2 col3 col4
name1 1 2 ens|name1,ccds|name2,ref|name3,ref|name4
name2 3 10 ref|name5,ref|name6
... ... ... ...
Now I want to extract from this file everything that starts with 'ref|'. This pattern is only present in col4.
So for this example I would like to have as output
ref|name3
ref|name4
ref|name5
ref|name6
I thought of using 'sed' for this, but I don't know where to start.
I think awk is better suited for this task:
$ awk '{for (i=1;i<=NF;i++){if ($i ~ /ref\|/){print $i}}}' FS='( )|(,)' infile
ref|name3
ref|name4
ref|name5
ref|name6
FS='( )|(,)' sets a multi-part FS so the record is split on both blanks and commas; the loop then prints every field that matches the ref pattern.
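Much the same thing can be written with a character class as the separator (tab included in the class, since the question describes the file as tab-delimited) and the match anchored to the start of each field; a sketch:
$ awk -F'[ \t,]' '{for (i=1;i<=NF;i++) if ($i ~ /^ref\|/) print $i}' infile
ref|name3
ref|name4
ref|name5
ref|name6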
Now I want to extract from this file everything that starts with
'ref|'. This pattern is only present in col4
If you are sure that the pattern is only present in col4, you could use grep:
grep -o 'ref|[^,]*' file
output:
ref|name3
ref|name4
ref|name5
ref|name6
One solution I had was to first use awk to only get the 4th column, then use sed to convert commas into newlines, and then use grep (or awk again) to get the ones that start with ref:
awk '{print $4}' < data.txt | sed -e 's/,/\n/g' | grep "^ref"
This might work for you (GNU sed):
sed 's/\(ref|[^,]*\),/\n\1\n/;/^ref/P;D' file
Surround the required strings by newlines and only print those lines that begin with the start of the required string.
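Spelled out over several lines with comments (still GNU sed), the same script reads roughly like this:
sed '
  # put the first comma-terminated "ref|..." item on its own line, dropping the comma
  s/\(ref|[^,]*\),/\n\1\n/
  # if the pattern space now starts with ref, print up to the first newline
  /^ref/P
  # delete up to the first newline and rerun; if no newline is left, read the next input line
  D
' file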

Linux bash script: how to search on a column but return full row?

I have a tab-delimited file with data like this:
col1 col2 col3
I wrote a bash script that allows the file to be searched using this code:
echo -en "Search term: "
read search
data=`cat data.data | egrep -i "$search"`
This works great for searching the entire file, but I'm now wanting to search only on a specific column (which the user can choose).
I am aware of the cut command and can search on a column using this:
cat data.data | cut -f$col | egrep -i "$search"
But then only that column is outputted, so if I use this method then I somehow need to get the rest of the row back.
How can I search on a column in the file, but return the full rows for the results?
You can pass two variables to awk: the column number and the search term.
awk -vcol="$col" -vsearch="$search" '$col ~ search' data.data
If the value of $col is 2, then $2 in awk will correspond to the second column. The ~ operator is used to do a regular expression pattern match. The line will be printed if the column matches the regular expression.
Testing it out:
$ cat data.data
col1 col2 col3
$ col=2
$ search=l2
$ awk -vcol="$col" -vsearch="$search" '$col ~ search' data.data
col1 col2 col3
$ search=l3
$ awk -vcol="$col" -vsearch="$search" '$col ~ search' data.data
# no output
If you want to do case-insensitive pattern matching, you have two options: convert everything to upper or lower case (tolower($col) ~ tolower(search)), or if you are using GNU awk, set the IGNORECASE variable:
$ search=L2
$ awk -vIGNORECASE=1 -vcol="$col" -vsearch="$search" '$col ~ search' data.data
col1 col2 col3
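And the portable tolower variant mentioned above gives the same result without needing GNU awk:
$ awk -vcol="$col" -vsearch="$search" 'tolower($col) ~ tolower(search)' data.data
col1 col2 col3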
awk is easier for this:
data=$(awk -v col=$col -v term="$term" 'toupper($col)==toupper(term)' file)
col - column number
term - search term
You could also pass field separator with -F if needed.
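For example, since the question's data is tab-delimited, that might look like this (a sketch, with file standing in for your data file):
data=$(awk -F'\t' -v col="$col" -v term="$term" 'toupper($col)==toupper(term)' file)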

insert values of one column into another column

I have a tab-delimited .txt file with two columns and long list of values in both columns
col1 col2
1 a
2 b
3 c
... ...
I want to convert this now to
col1
1
a
2
b
3
c
So that the values from column 2 are inserted into column 1 at the correct locations.
Is there any way to do this, maybe using awk, or something else through the command line?
You can ask awk to print the first column and then the second column. By using a separate print for each, you make sure there is a newline between them:
awk -F"\t" '{print $1; print $2}' file
Or the following if you just want to print the 1st column on the first line:
awk -F"\t" 'NR==1 {print $1; next} {print $1; print $2}' file
The second command returns the following for your given input:
col1
1
a
2
b
3
c
this should do:
awk -F"\t" -v OFS="\n" '{$1=$1}7' file
(the $1=$1 assignment forces awk to rebuild the record with the new OFS, and 7 is simply a true condition that triggers the default print action)

Conditional Awk hashmap match lookup

I have 2 tabular files. One file contains a mapping of 50 key values only called lookup_file.txt.
The other file has the actual tabular data with 30 columns and millions of rows. data.txt
I would like to replace the id column of the second file with the values from lookup_file.txt.
How can I do this? I would prefer using awk in a bash script.
Also, is there a hashmap data structure I can use in bash to store the 50 key/values, rather than another file?
Assuming your files have comma-separated fields and the "id column" is field 3:
awk '
BEGIN{ FS=OFS="," }
NR==FNR { map[$1] = $2; next }
{ $3 = map[$3]; print }
' lookup_file.txt data.txt
If any of those assumptions are wrong, clue us in if the fix isn't obvious...
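For illustration only, with made-up comma-separated files where the id sits in column 3 of data.txt, it would behave like this:
$ cat lookup_file.txt
1,one
2,two
$ cat data.txt
a,b,1,d
e,f,2,h
$ awk 'BEGIN{FS=OFS=","} NR==FNR{map[$1]=$2; next} {$3=map[$3]; print}' lookup_file.txt data.txt
a,b,one,d
e,f,two,h
The NR==FNR test is only true while the first file is being read, so the map is fully built from lookup_file.txt before any line of data.txt is touched.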
EDIT: and if you want to avoid the (IMHO negligible) NR==FNR test performance impact, this would be one of those very rare cases when use of getline is appropriate:
awk '
BEGIN {
    FS=OFS=","
    while ( (getline line < "lookup_file.txt") > 0 ) {
        split(line,f)
        map[f[1]] = f[2]
    }
}
{ $3 = map[$3]; print }
' data.txt
You could use a mix of sort and join from bash instead of having to write it in awk/sed, and it is likely to be even faster:
key.csv (id, name)
1,homer
2,marge
3,bart
4,lisa
5,maggie
data.csv (name,animal,owner,age)
snowball,dog,3,1
frosty,yeti,1,245
cujo,dog,5,4
Now, you need to sort both files first on the user id columns:
cat key.csv | sort -t, -k1,1 > sorted_keys.csv
cat data.csv | sort -t, -k3,3 > sorted_data.csv
Now join the 2 files:
join -1 1 -2 3 -o "2.1 2.2 1.2 2.4" -t , sorted_keys.csv sorted_data.csv > replaced_data.csv
This should produce:
snowball,dog,bart,1
frosty,yeti,homer,245
cujo,dog,maggie,4
This part:
-o "2.1 2.2 1.2 2.4"
says which columns from the two files you want in the final output: 2.1 is the name from the data file, 2.2 the animal, 1.2 the name from the key file, and 2.4 the age.
It is pretty fast for finding and replacing multiple gigs of data compared to other scripting languages. I haven't done a direct comparison to SED/AWK, but it is much easier to write a bash script wrapping this than writing in SED/AWK (for me at least).
Also, you can speed up the sort by using an upgraded version of GNU coreutils so that you can do the sort in parallel:
cat data.csv | sort --parallel=4 -t, -k3,3 > sorted_data.csv
4 being how many threads you want it to use. I was told that 2 threads per machine core will usually max out the machine, but if it is dedicated just to this, that is fine.
There are several ways to do this, but if you want an easy one-liner without much in the way of validation, I would go with an awk/sed solution.
Assume the following:
the files are tab delimited
you are using bash shell
the id in the data file is in the first column
your files look like this:
lookup
1 one
2 two
3 three
4 four
5 five
data
1 col2 col3 col4 col5
2 col2 col3 col4 col5
3 col2 col3 col4 col5
4 col2 col3 col4 col5
5 col2 col3 col4 col5
I would use awk and sed to accomplish this task like this:
awk '{print "sed -i s/^"$1"/"$2"/ data"}' lookup | bash
What this does is go through each line of lookup and write the following to stdout:
sed -i s/^1/one/ data
sed -i s/^2/two/ data
and so on.
It next pipes each line to the shell (| bash), which executes the sed expression. -i edits in place; you may want -i.bak to create a backup file. Note that you can change the extension to whatever you would like.
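For example, one of those generated commands written with a backup suffix would look like this (GNU sed):
sed -i.bak 's/^1/one/' data    # edits data in place, keeping the original as data.bak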
The sed looks for the id at the start of the line, as indicated by the ^; you don't want to be replacing an 'id' in a column that might not contain an id.
Your output would look like the following:
one col2 col3 col4 col5
two col2 col3 col4 col5
three col2 col3 col4 col5
four col2 col3 col4 col5
five col2 col3 col4 col5
Of course, your ids are probably not simply 1 to one, 2 to two, etc., but this might get you started in the right direction. And I use the term right very loosely.
The way I'd do this is to use awk to write an awk program to process the larger file:
awk -f <(awk '
    BEGIN{print " BEGIN{"}
    {printf " a[\"%s\"]=\"%s\";",$1,$2}
    END {print " }";
         print " {$1=a[$1];print $0}"}
' lookup_file.txt
) data.txt
That assumes that the id column is column 1; if not, you need to change both instances of $1 in $1=a[$1].
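To make that concrete: with a lookup file like the 1/one, 2/two example shown in the previous answer, the inner awk would emit a small program along these lines (whitespace aside), which the outer awk -f then runs against data.txt:
BEGIN{ a["1"]="one"; a["2"]="two"; a["3"]="three"; a["4"]="four"; a["5"]="five"; }
{$1=a[$1];print $0}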
