Split a column on the space into two columns - Linux

I have a large file (~9 GB) with each row in this format:
12345,6789,Jim Bob
My desired output is this:
12345,6789,Jim,Bob
How would I do this using awk? It seems to be the fastest way to process this, and I am new to using the terminal for such things. Thanks!

Using awk and regex to replace the first space with a comma:
$ awk '{sub(/ /,",")}1' file
12345,6789,Jim,Bob
or using awk and regex to replace space with a comma in the third field ($3):
$ awk 'BEGIN{FS=OFS=","}{sub(/ /,",",$3)}1' file
12345,6789,Jim,Bob

Another solution using awk:
awk '$1=$1' OFS=, file
you get:
12345,6789,Jim,Bob
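The trick is that assigning $1 back to itself forces awk to rebuild the record using the output field separator (a comma, via OFS=,), and the non-empty result of the assignment makes the default print action fire. Piping one sample row through it shows the effect:
$ echo '12345,6789,Jim Bob' | awk '$1=$1' OFS=,
12345,6789,Jim,Bob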

I have a feeling sed would be a lot faster for your requirement, given the huge size of the input file:
sed -E 's/ ([^ ]+)$/,\1/' file > file.modified
or, for in-place editing:
sed -i.bak -E 's/ ([^ ]+)$/,\1/' file
Benchmarking with a 36 MB file, dummy.txt:
$ time awk 'BEGIN{FS=OFS=","}{sub(/ /,",",$3)}1' dummy.txt >/dev/null
real 0m3.357s
user 0m3.337s
sys 0m0.016s
$ time awk '{sub(/ /,",")}1' dummy.txt >/dev/null
real 0m3.182s
user 0m3.166s
sys 0m0.014s
$ time awk '$1=$1' OFS=, dummy.txt >/dev/null
real 0m3.150s
user 0m3.130s
sys 0m0.018s
$ time sed -E 's/ ([^ ]+)$/,\1/' dummy.txt >/dev/null
real 0m1.646s
user 0m1.633s
sys 0m0.013s
sed is 2x faster than awk! For a 9 GB file, this difference could be even more dramatic.

Well, you can also use tr if that suits you:
tr -s ' ' ',' < file.txt > tr.txt
where file.txt is your input file
and tr.txt is the output file.
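On the sample row, tr should behave like this (note that -s also squeezes any run of spaces into a single comma, which is fine here since there is only one space):
$ echo '12345,6789,Jim Bob' | tr -s ' ' ','
12345,6789,Jim,Bob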
Well, if you want to use awk only, you could choose space as the field separator and have awk print a ',' between the two columns:
awk -F' ' '{print $1","$2}' file.txt
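Checked against the same sample row, that should give:
$ echo '12345,6789,Jim Bob' | awk -F' ' '{print $1","$2}'
12345,6789,Jim,Bob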
Benchmarking done with a 283 MB file:
Using tr
time tr -s ' ' ',' < file.txt >tr.txt
real 0m10.976s
user 0m1.042s
sys 0m0.966s
Using awk
time awk -F' ' '{print $1","$2}' file.txt > /dev/null
real 0m14.141s
user 0m13.909s
sys 0m0.199s
Using @codeforester's method
time sed -E 's/ ([^ ]+)$/,\1/' file.txt >/dev/null
real 0m42.183s
user 0m41.659s
sys 0m0.435s
tr works even faster than sed and awk

Related

How do you change column names to lowercase with Linux and store the file as it is?

I am trying to change the column names to lowercase in a csv file. I found the code to do that online, but I don't know how to replace the old column names (uppercase) with the new column names (lowercase) in the original file. I did something like this:
$ cat xxx.csv | head -n1 | tr "[A-Z]" "[a-z]"
But it just prints out the column names in lowercase, which is not enough for me.
I tried to add sed -i but it did not do any good. Thanks!!
Using awk (readability winner):
concise way:
awk 'NR==1{print tolower($0);next}1' file.csv
or using ternary operator:
awk '{print (NR==1) ? tolower($0): $0}' file.csv
or using if/else statements:
awk '{if (NR==1) {print tolower($0)} else {print $0}}' file.csv
To change the file for real:
awk 'NR==1{print tolower($0);next}1' file.csv | tee /tmp/temp
mv /tmp/temp file.csv
For your information, sed with the in-place edit switch -i does the same: it uses a temporary file under the hood.
You can check this by using:
strace -f -s 800 sed -i'' '...' file
Using perl:
perl -i -pe '$_=lc() if $.==1' file.csv
It replaces the file on the fly with the -i switch.
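Without -i, perl just writes the result to stdout, so you can preview the change first, e.g.:
$ printf 'COL1,COL2\nFoO,bAr\n' | perl -pe '$_=lc() if $.==1'
col1,col2
FoO,bAr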
You can use sed to tell it to replace the first line with all lower-case and then print the rest as-is:
sed '1s/.*/\L&/' ./xxx.csv
Redirect the output or use -i to do an in-place edit.
Proof of Concept
$ echo -e "COL1,COL2,COL3\nFoO,bAr,baZ" | sed '1s/.*/\L&/'
col1,col2,col3
FoO,bAr,baZ
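As a sketch of the in-place edit mentioned above (assuming GNU sed, since \L is a GNU extension):
sed -i '1s/.*/\L&/' ./xxx.csv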

Unable to redirect data using awk or cat

I am using AIX for the following code:
#!/bin/sh
cat ip.txt | awk -F ' ' '{print $2,$1}' >op.txt
or
awk -F ' ' '{print $2,$1}' ip.txt > op2.txt
It is generating an unknown file named "oxb1du".
Also, I can see the file op2.txt after ls -ltr, but it does not contain any data.
Input file:
name 1
info 21
city 28
pin 31
state 34
Maybe you are looking for:
cat ip.txt | awk '{print $2,$1}' > op.txt
You probably have binary characters in your file. Try cleaning it first.
tr -cd '[:graph:]\n\t ' <"$file" >$TEMP_FILE && mv $TEMP_FILE "$file"
dos2unix and other programs may work, but I've had issues with dos2unix only removing carriage returns and not other garbage, so I've given you the above (obviously assign or replace the variables). Then just use:
awk -F" " '{print $2,$1}' ip.txt > op2.txt
I only changed the quotes for readability: having them hanging away from the -F and before other single quotes looks wonky. This way is quicker to read.

How to run grep inside awk?

Suppose I have a file input.txt with a few columns and a few rows, where the first column is the key, and a directory dir with files that contain some of these keys. I want to find all lines in the files in dir which contain these keywords. At first I tried to run the command
cat input.txt | awk '{print $1}' | xargs grep dir
This doesn't work because it thinks the keys are paths on my file system. Next I tried something like
cat input.txt | awk '{system("grep -rn dir $1")}'
But this didn't work either; eventually I have to admit that even this doesn't work:
cat input.txt | awk '{system("echo $1")}'
After I tried to use \ to escape the white space and the $ sign, I came here to ask for your advice. Any ideas?
Of course I can do something like
for x in `cat input.txt` ; do grep -rn $x dir ; done
This is not good enough, because it takes two commands, but I want only one. This also shows why xargs doesn't work: the parameter is not the last argument.
You don't need grep with awk, and you don't need cat to open files:
awk 'NR==FNR{keys[$1]; next} {for (key in keys) if ($0 ~ key) {print FILENAME, $0; next} }' input.txt dir/*
Nor do you need xargs, or shell loops or anything else - just one simple awk command does it all.
If input.txt is not a file, then tweak the above to:
real_input_generating_command |
awk 'NR==FNR{keys[$1]; next} {for (key in keys) if ($0 ~ key) {print FILENAME, $0; next} }' - dir/*
All it's doing is creating an array of keys from the first file (or input stream) and then looking for each key from that array in every file in the dir directory.
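For readability, here is the same program laid out with comments; the logic is unchanged:
awk '
  NR==FNR { keys[$1]; next }                      # while reading the first file (input.txt), remember column 1 as a key
  {                                               # for every line of every file in dir/*:
    for (key in keys)                             #   try each stored key as a regex against the line
      if ($0 ~ key) { print FILENAME, $0; next }  #   on the first match, print the file name and the line
  }
' input.txt dir/*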
Try the following:
awk '{print $1}' input.txt | xargs -n 1 -I pattern grep -rn pattern dir
First thing you should do is research this.
Next ... you don't need to grep inside awk. That's completely redundant. It's like ... stuffing your turkey with .. a turkey.
Awk can process input and do "grep" like things itself, without the need to launch the grep command. But you don't even need to do this. Adapting your first example:
awk '{print $1}' input.txt | xargs -n 1 -I % grep % dir
This uses xargs' -I option to put xargs' input into a different place on the command line it runs. In FreeBSD or OSX, you would use a -J option instead.
But I prefer your for loop idea, converted into a while loop:
while read key junk; do grep -rn "$key" dir ; done < input.txt
Use process substitution to create a keyword "file" that you can pass to grep via the -f option:
grep -f <(awk '{print $1}' input.txt) dir/*
This will search each file in dir for lines containing keywords printed by the awk command. It's equivalent to
awk '{print $1}' input.txt > tmp.txt
grep -f tmp.txt dir/*
grep requires parameters in order: [what to search] [where to search]. You need to merge keys received from awk and pass them to grep using the \| regexp operator.
For example:
arturcz#szczaw:/tmp/s$ cat words.txt
foo
bar
fubar
foobaz
arturcz#szczaw:/tmp/s$ grep 'foo\|baz' words.txt
foo
foobaz
Finally, you will finish with:
grep `commands|to|prepare|a|keywords|list` directory
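One concrete way to prepare that keyword list, assuming the keys contain no regex metacharacters, is to join them with | and use extended regexps:
grep -rnE "$(awk '{print $1}' input.txt | paste -sd'|')" dir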
In case you still want to use grep inside awk, make sure $1, $2, etc. are outside the quotes.
e.g. this works perfectly:
cat file_having_query | awk '{system("grep " $1 " file_to_be_greped")}'
# notice the space after grep and before the file name
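Roughly, the distinction matters because anything inside the double quotes is passed verbatim to the shell, where $1 means the shell's (usually empty) positional parameter rather than awk's first field:
cat file_having_query | awk '{system("grep $1 file_to_be_greped")}'      # wrong: the shell, not awk, expands $1
cat file_having_query | awk '{system("grep " $1 " file_to_be_greped")}'  # right: awk concatenates its own $1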

Is there any equivalent to grep -nP "\t" some_file, using sed or awk?

I am trying to find the occurrences of tabs in a file some_file and print those lines with a leading line number.
grep -nP "\t" some_file works well for me, but I want an equivalent sed or awk command for the same.
To emulate: grep -nP "\t" file.txt
Here's one way using GNU awk:
awk '/\t/ { print NR ":" $0 }' file.txt
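For instance, with GNU awk, on three sample lines where only the first and third contain a tab, this should print:
$ printf 'a\tb\nno tab here\nc\td\n' | awk '/\t/ { print NR ":" $0 }'
1:a	b
3:c	d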
Here's one way using GNU sed:
< file.txt sed -n '/\t/{ =;p }' | sed '{ N;s/\n/:/ }'
Well, you can always do it in sed:
cat -n test.txt | sed -n "/\t/p"
Unfortunately, sed can only print line numbers to stdout followed by a newline, so in any case, more than one command is necessary. A more lengthy (unnecessarily so) version of the above, but one only using sed, would be:
sed = test.txt | sed -n "N;s/\n/ /;/\t/p"
but I like the one with cat more. CATS ARE NICE.

How to reverse order of fields using AWK?

I have a file with the following layout:
123,01-08-2006
124,01-09-2007
125,01-10-2009
126,01-12-2010
How can I convert it into the following by using AWK?
123,2006-08-01
124,2007-09-01
125,2009-10-01
126,2010-12-01
Didn't read the question properly the first time. You need a field separator that can be either a dash or a comma. Once you have that you can use the dash as an output field separator (as it's the most common) and fake the comma using concatenation:
awk -F',|-' 'OFS="-" {print $1 "," $4,$3,$2}' file
Pure awk
awk -F"," '{ n=split($2,b,"-");$2=b[3]"-"b[2]"-"b[1];$i=$1","$2 } 1' file
sed
sed -r 's/(^.[^,]*,)([0-9]{2})-([0-9]{2})-([0-9]{4})/\1\4-\3-\2/' file
sed 's/\(^.[^,]*,\)\([0-9][0-9]\)-\([0-9][0-9]\)-\([0-9]\+\)/\1\4-\3-\2/' file
Bash
#!/bin/bash
while IFS="," read -r a b
do
IFS="-"
set -- $b
echo "$a,$3-$2-$1"
done <"file"
Unfortunately, I think standard awk only allows one field separator character, so you'll have to pre-process the data. You can do this with tr, but if you really want an awk-only solution, use:
pax> echo '123,01-08-2006
124,01-09-2007
125,01-10-2009
126,01-12-2010' | awk -F, '{print $1"-"$2}' | awk -F- '{print $1","$4"-"$3"-"$2}'
This outputs:
123,2006-08-01
124,2007-09-01
125,2009-10-01
126,2010-12-01
as desired.
The first awk changes the , characters to - so that you have four fields separated with the same character (this is the bit I'd usually use tr ',' '-' for).
The second awk prints them out in the order you specified, correcting the field separators at the same time.
If you're using an awk implementation that allows multiple FS characters, you can use something like:
gawk -F ',|-' '{print $1","$4"-"$3"-"$2}'
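For example, on one input line this should give:
$ echo '123,01-08-2006' | gawk -F ',|-' '{print $1","$4"-"$3"-"$2}'
123,2006-08-01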
If it doesn't need to be awk, you could use Perl too:
$ perl -nle 'print "$1,$4-$3-$2" while (/(\d{3}),(\d{2})-(\d{2})-(\d{4})\s*/g)' < file.txt
