I am using the awk command to read my CSV file. I want to apply a condition, and when the condition holds, write only some of the columns of the matching row to another CSV file.
for example:
the CSV File:
fname lname id address street phone telephone
row1:myfname mylname 123 serlanka j12street 05666355 02365410
row2...
row3...
the condition: if the row has the id "123" -> then I want just the fname, lname and id columns in the new CSV.
I've already used an awk command in my code:
zcat "$FileName" | awk -F'\t' '(($4 >=400) && ($4 <=599)) {Str="HTTP Error: " $4;print Str >> "New.csv"}'
How can I write the whole row, or just some fields from the matching row?
Thanks.
More or less add this to the end of your script:
$3 ~ /123/ {print $1,$2,$3 >> "New.csv"}
Try this if the field delimiter is just spaces:
awk '$3 == "123" {print $1,$2,$3}'
or this if the field delimiter is a ,
awk -F, '$3 == "123" {print $1,$2,$3}'
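Putting the pieces together on the question's own data (the sample file contents and the tab delimiter here are assumptions based on the example):

```shell
# Build a small tab-separated sample matching the question's layout (assumed)
printf 'fname\tlname\tid\taddress\n' > sample.csv
printf 'myfname\tmylname\t123\tserlanka\n' >> sample.csv
printf 'ofname\tolname\t456\televen\n' >> sample.csv

# Exact match on the id column; keep only fname, lname and id
awk -F'\t' -v OFS='\t' '$3 == "123" {print $1, $2, $3}' sample.csv > New.csv

cat New.csv
```

For a gzipped input, the same awk stage works after `zcat "$FileName" |`, as in the question.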
Related
How to get 1st field of a file only when 2nd field matches a given string?
#cat temp.txt
Ankit pass
amit pass
aman fail
abhay pass
asha fail
ashu fail
cat temp.txt | awk -F"\t" '$2 == "fail" { print $1 }'*
gives no output
Another syntax with awk:
awk '$2 ~ /^fail$/{print $1}' input_file
This also avoids the useless cat command.
^ anchors the start of the string
$ anchors the end of the string
Anchoring is the safest way to match the pattern exactly.
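To see why the anchors matter, compare the anchored pattern against a file that also contains a longer word (sample data assumed):

```shell
printf 'abel failed\naman fail\n' > input_file

# Anchored: matches only the exact word "fail"
awk '$2 ~ /^fail$/{print $1}' input_file

# Unanchored: /fail/ would also match "failed", so abel would be printed too
awk '$2 ~ /fail/{print $1}' input_file
```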
Either:
Your fields are not tab-separated or
You have blanks at the end of the relevant lines or
You have DOS line-endings and so there are CRs at the end of every
line and so also at the end of every $2 in every line (see
Why does my tool output overwrite itself and how do I fix it?)
With GNU cat you can run cat -Tev temp.txt to see tabs (^I), CRs (^M) and line endings ($).
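If CRs from DOS line endings turn out to be the culprit, one sketch is to strip a trailing CR inside awk before comparing (the sample file here is assumed):

```shell
# Simulate a DOS-format file: each line ends with \r\n
printf 'aman\tfail\r\nankit\tpass\r\n' > temp.txt

# Remove a trailing CR from the record, then compare as usual
awk -F'\t' '{sub(/\r$/, "")} $2 == "fail" {print $1}' temp.txt
```

Assigning to $0 via sub() re-splits the record, so $2 is clean by the time the comparison runs.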
Your code seems to work fine when I remove the * at the end
cat temp.txt | awk -F"\t" '$2 == "fail" { print $1 }'
The other thing to check is whether your file is using tabs or spaces. My copy/paste of your data file copied spaces, so I needed this line:
cat temp.txt | awk '$2 == "fail" { print $1 }'
The other way of doing this is with grep:
cat temp.txt | grep 'fail$' | awk '{ print $1 }'
I want to split the content into multiple files using a date format as below:
Test_<ID name><ddmmyyyy>.CSV
How can I split according to that format? Before this I used:
awk -F"," 'NR>1 {print > "Test_<ID name><ddmmyyyy>.CSV_"$1".csv"}' Original.CSV
Edit
I got there with
awk -v DATE="$(date +"%d%m%Y")" -F"," 'BEGIN{OFS=","}NR>1 { gsub(/"/,"",$1); print > "Assignment_"$1"_"DATE".csv"}' Test_01012020.CSV
but then I want to include my column name too. How?
You could try using variables from the shell in your awk command:
_DATE=` date '+%d%m%Y' `
_ID=my_value
F_EXT=${_ID}${_DATE}
# here "var" is set to the value defined from the shell "F_EXT"
awk -v var=${F_EXT} -F"," 'NR>1 {print > "Test_" var ".CSV_"$1".csv"}' Original.CSV
(I didn't get where you were taking your "ID name", so here it's my_value)
Edit
If you want to include your column name, then read it with the case when NR==1:
awk -v DATE="$(date +"%d%m%Y")" -F"," 'BEGIN{OFS="," } NR==1 {COLUMN_NAME=$1} NR>1 { gsub(/"/,"",$1); print > "Assignment_"$1"_"COLUMN_NAME"_"DATE".csv"}' a.txt
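If "include my column name" instead means carrying the header row into each split file, one sketch (the file contents and filenames here are assumptions) writes the header the first time each output file is opened:

```shell
printf 'id,val\n"A",1\n"B",2\n"A",3\n' > Test_01012020.CSV

awk -F',' -v OFS=',' -v DATE="01012020" '
  NR == 1 { hdr = $0; next }                          # remember the header line
  {
    gsub(/"/, "", $1)                                 # strip quotes from the key
    f = "Assignment_" $1 "_" DATE ".csv"
    if (!(f in seen)) { print hdr > f; seen[f] = 1 }  # header once per file
    print > f
  }' Test_01012020.CSV
```

Note that assigning to $1 makes awk rebuild $0 with OFS, which is why OFS is set to a comma.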
I have a file where the data is:
90|123456|.. some more fields
90|654321|... some more fields
.... some more lines starting with 90
91|123456|.. some more fields
91|654321|... some more fields
.... some more lines starting with 91
92|123456|.. some more fields
92|654321|... some more fields
.... some more lines starting with 92
The 2nd field is the key value for me,
and the 1st field will have the values 90, 91 and 92.
90|keyvalue will always be there
91|keyvalue .. not mandatory
92|keyvalue .. not mandatory
The expected output, for all key values, is:
90|keyvalue [mandatory]
91|keyvalue --> print if it exists in the file
92|keyvalue --> print if it exists in the file
What I did was:
grep "^90" origfilename |awk -F '|' '{print $2}'> temp90.txt #this gives me all keyvalues
awk '{print "90|"$0"|"}' temp90.txt >> temp90-1.txt
awk '{print "91|"$0"|"}' temp90.txt >> temp90-1.txt
awk '{print "92|"$0"|"}' temp90.txt >> temp90-1.txt
grep -f temp90-1.txt origfilename
This gets me the output, but I don't think it is an efficient way to do it.
How can I do this with a single awk, or some other way?
awk to the rescue!
$ awk -F'|' 'NR==FNR && /^90/ {k[$2]}
NR!=FNR && $2 in k{print}' file{,}
90|123456|.. some more fields
90|654321|... some more fields
91|123456|.. some more fields
91|654321|... some more fields
92|123456|.. some more fields
92|654321|... some more fields
Explanation: in the first scan, collect the keys; in the second scan, print the lines with a matching key. Note that file{,} is brace expansion for file file, which makes awk scan the input file twice.
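A quick run on a miniature version of the file (contents assumed) shows the effect:

```shell
printf '90|111|a\n90|222|b\n91|111|c\n92|333|d\n' > file

# First pass collects keys from the 90| lines; second pass keeps matching rows
awk -F'|' 'NR==FNR && /^90/ {k[$2]} NR!=FNR && $2 in k{print}' file file
```

The 92|333 line is dropped because key 333 never appears on a 90| line.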
Hi, I have the following text and I need to use awk or sed to print 3 separate columns:
11/13/14 101 HUDSON AUBONPAINJERSEY CITY NJ $4.15
11/22/14 MTAMVM*110TH ST/CATNEW YORK NY $19.05
11/22/14 DUANE READE #14226 0NEW YORK NY $1.26
So I'd like to produce a file containing all the dates, another file containing all the descriptions, and a third file containing all the prices.
I can use awk to print the first column with print $1, and use the -F'[$]' option to print the last column, but I'm not able to print just the middle column because of the spaces. Can I ignore the spaces, or is there a better way of doing this?
Thanking you in advance
Try doing this :
$ awk '
{
print $1 > "dates"; $1=""
print $NF > "prices"; $NF=""
print $0 > "desc"
}
' file
or :
awk -F' +' '
{
print $1 > "dates"
print $2 > "desc"
print $3 > "prices"
}
' file
Then :
$ cat dates
$ cat desc
$ cat prices
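Here is the first variant run on one line of the sample data. Note that clearing $1 and $NF leaves stray spaces around the description, so this sketch adds a trim step that is not in the answer above:

```shell
printf '11/13/14 AUBONPAIN NY $4.15\n' > file

awk '{
  print $1  > "dates";  $1  = ""   # first field: the date
  print $NF > "prices"; $NF = ""   # last field: the price
  gsub(/^ +| +$/, "")              # trim spaces left by the cleared fields
  print > "desc"                   # what remains is the description
}' file
```

After this, dates holds 11/13/14, prices holds the $4.15 amount, and desc holds AUBONPAIN NY.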
Wasn't fast enough to be the first to give an awk solution, so here's one with grep and sed...
grep -o '^.*/.*/1.' file #first col
sed 's/^.*\/.*\/1.//;s/\$.*//' file #middle col
grep -o '\$.*$' file #last col
I have a huge file composed of the following:
this is text
1234.1234567
this is another text
1234.1234567
and so on
I would like to transform it to:
this is text:1234.1234567
this is another text:1234.1234567
Is this possible using sed, or any other similar command?
Thanks
If you just want to join lines using : as separator, you could use paste:
paste -d : - - < file.txt
Or using awk:
awk -v sep=: '{ if (NR % 2 == 0) { print prev sep $0 } else prev = $0 }' file.txt
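Since the question asked about sed: joining every pair of lines is also possible with sed's N command, which appends the next input line to the pattern space so the embedded newline can be replaced:

```shell
printf 'this is text\n1234.1234567\nthis is another text\n1234.1234567\n' > file.txt

# N pulls in the next line; the embedded \n is then replaced with :
sed 'N; s/\n/:/' file.txt
```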
If you have some lines containing just alphabetic text and others containing floating point numbers, you can do the following:
awk '/[a-zA-Z]+/ {printf "%s:", $0}
/[0-9.]+/ {print $0}' data
data is the filename. You can redirect the output to another file.