Hi everyone. I have two files.
file 1.log:
text1 value11 text
text text
text2 value12 text
file 2.log:
text1 value21 text
text text
text2 value22 text
I want:
value11;value12
value21;value22
For now I grep the values into separate files and paste them together later into another file, but I think this is not a very elegant solution because I need to read all the files more than once. So I tried to use grep to extract all the data with a single cat | grep line, but the result is not what I expected.
I use:
cat *.log | grep -oP "(?<=text1 ).*?(?= )|(?<=text2 ).*?(?= )" | tr '\n' '; '
or
cat *.log | grep -oP "(?<=text1 ).*?(?= )|(?<=text2 ).*?(?= )" | xargs
but I get, respectively:
value11;value12;value21;value22;
value11 value12 value21 value22
Thank you so much.
Try:
$ awk -v RS='[[:space:]]+' '$0=="text1" || $0=="text2"{getline; printf "%s%s",sep,$0; sep=";"} ENDFILE{if(sep)print""; sep=""}' *.log
value11;value12
value21;value22
For those who prefer their commands spread over multiple lines:
awk -v RS='[[:space:]]+' '
$0=="text1" || $0=="text2" {
getline
printf "%s%s",sep,$0
sep=";"
}
ENDFILE {
if(sep)print""
sep=""
}' *.log
How it works
-v RS='[[:space:]]+'
This tells awk to treat any sequence of whitespace (newlines, blanks, tabs, etc) as a record separator.
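For example, a quick way to see the resulting records (this needs GNU awk, since treating a multi-character RS as a regex is a gawk extension, as is ENDFILE above):
$ printf 'text1 value11 text\ntext text\n' | awk -v RS='[[:space:]]+' '{print NR": "$0}'
1: text1
2: value11
3: text
4: text
5: text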
$0=="text1" || $0=="text2"{getline; printf "%s%s",sep,$0; sep=";"}
This tells awk to look for records that match either text1 or text2. For those records, and those records only, the commands in the curly braces are executed. Those commands are:
getline tells awk to read in the next record.
printf "%s%s",sep,$0 tells awk to print the variable sep followed by the word in the record.
After we print the first match, the command sep=";" is executed, which tells awk to set the value of sep to a semicolon.
As we start each file, sep is empty. This means that the first match from any file is printed with no separator preceding it. All subsequent matches from the same file will have a ; to separate them.
ENDFILE{if(sep)print""; sep=""}
After the end of each file is reached, we print a newline if sep is not empty and then we set sep back to an empty string.
Alternative: Printing the second word if the first word ends with a number
In an alternative interpretation of the question (hat tip: David C. Rankin), we want to print the second word on any line for which the first word ends with a number. In that case, try:
$ awk '$1~/[0-9]$/{printf "%s%s",sep,$2; sep=";"} ENDFILE{if(sep)print""; sep=""}' *.log
value11;value12
value21;value22
In the above, $1~/[0-9]$/ selects the lines for which the first word ends with a number and printf "%s%s",sep,$2 prints the second field on that line.
Discussion
The original command was:
$ cat *.log | grep -oP "(?<=text1 ).*?(?= )|(?<=text2 ).*?(?= )" | tr '\n' '; '
value11;value12;value21;value22;
Note that cat is rarely needed when using standard Unix commands. In this case, for example, grep accepts a list of files, so we can drop the extra cat process and get the same output (-h suppresses the file-name prefixes that grep would otherwise add when given multiple files):
$ grep -hoP "(?<=text1 ).*?(?= )|(?<=text2 ).*?(?= )" *.log | tr '\n' '; '
value11;value12;value21;value22;
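If you wanted to stay closer to the original grep approach, one sketch that preserves the per-file line breaks is to loop over the files and let paste join each file's matches:
$ for f in *.log; do grep -oP "(?<=text1 ).*?(?= )|(?<=text2 ).*?(?= )" "$f" | paste -sd';' -; done
value11;value12
value21;value22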
I agree with @John1024, and how you approach this problem will really depend on what the actual text you are looking for is. If, for instance, your lines of concern start with text{1,2,...} and what you want in the second field can be anything, then his approach is optimal. However, if the values in the first field can vary and what you are really interested in is records with valueXX in the second field, then an approach keyed off the second field may be what you are looking for.
Taking your second field as an example: if the text you are interested in has the form valueXX (where XX is two or more digits at the end of the field), you can process only those records where the second field matches, use a simple conditional testing whether FNR == 1 to control the ';' delimiter output, and use ENDFILE to control the newline, similar to:
awk '$2 ~ /^value[0-9][0-9][0-9]*$/ {
printf "%s%s", (FNR == 1) ? "" : ";", $2
}
ENDFILE {
print ""
}' file1.log file2.log
Example Use/Output
$ awk '$2 ~ /^value[0-9][0-9][0-9]*$/ {
printf "%s%s", (FNR == 1) ? "" : ";", $2
}
ENDFILE {
print ""
}' file1.log file2.log
value11;value12
value21;value22
Look things over and consider your actual input files; either one of these two approaches should get you there.
If I understood you correctly, you want the values but search for text[12], i.e. you want the word after the matching search word, not the matching search word itself:
$ awk -v s="^text[12]$" ' # set the search regex *
FNR==1 { # in the beginning of each file
b=b (b==""?"":"\n") # terminate current buffer with a newline
}
{
for(i=1;i<NF;i++) # iterate all but last word
if($i~s) # if current word matches search pattern
b=b (b~/^$|\n$/?"":";") $(i+1) # add following word to buffer
}
END { # after searching all files
print b # output buffer
}' *.log
Output:
value11;value12
value21;value22
* The regex could also be, for example, ^(text1|text2)$.
I have the below CSV file:
,,,Test File,
,todays Date:,01/10/2018,Generation date,10/01/2019 11:20:58
Header 1,Header 2,Header 3,Header 4,Header 5
,My account no,100102GFC,,
A,B,C,D,E
A,B,C,D,E
A,B,C,D,E
TEST
I need to extract today's date, which is in the 3rd column of the second line,
and also the account number, which is in the 3rd column of the 4th line.
Below is the new file that I have to create; the values extracted
from the 2nd and 4th lines need to be appended at the end of each line.
The new file will contain the data from the 5th line through the (n-1)th line:
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
Could you kindly help me do the same in a shell script?
Here is what I tried; I am new to shell scripting and unable to combine all these pieces.
To extract the date from second row
sed -sn 2p test.csv| cut -d ',' -f 3
To extract the account no
sed -sn 4p test.csv | cut -d ',' -f 3
To extract the actual data
tail -n +5 test.csv | head -n -1 > temp.csv
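For completeness, those three pieces can be combined into one short script; this is just a sketch of your own approach, assuming GNU head (for head -n -1) and that the extracted values contain no | character (used here as the sed delimiter):
#!/bin/bash
date=$(sed -n 2p test.csv | cut -d ',' -f 3)   # today's date from line 2, column 3
acct=$(sed -n 4p test.csv | cut -d ',' -f 3)   # account number from line 4, column 3
tail -n +5 test.csv | head -n -1 | sed "s|\$|,$date,$acct|" > new.csv   # append both values to each data row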
Try awk:
awk -F, 'NR==2{d=$3}NR==4{a=$3}NR>4{if (line) print line; line = $0 "," d "," a;}' Inputfile.csv
Eg:
$ cat file1
,,,Test File,
,todays Date:,01/10/2018,Generation date,10/01/2019 11:20:58
Header 1,Header 2,Header 3,Header 4,Header 5
,My account no,100102GFC,,
A,B,C,D,E
A,B,C,D,E
A,B,C,D,E
TEST
$ awk -F, 'NR==2{d=$3}NR==4{a=$3}NR>4{if (line) print line; line = $0 "," d "," a;}' file1
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
I misunderstood your meaning before I edited your question, and updated my answer afterwards.
In the awk command:
NR is the line number, -F assigns the field separator, d stores the date and a the account number.
We just concatenate the line $0 with d and a.
You don't want the last line, so I used line to delay printing; the last line is never printed out (though it is saved in line, and could be used if an END block were given).
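For instance, if the input did not end with a TEST marker and you wanted the last data row too, an END block would emit the buffered line (a hypothetical variant; with the sample input above it would also append the values to the TEST line):
$ awk -F, 'NR==2{d=$3}NR==4{a=$3}NR>4{if (line) print line; line = $0 "," d "," a}END{if (line) print line}' file1
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
TEST,01/10/2018,100102GFC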
You can also try Perl:
$ cat dawn.txt
,,,Test File,
,todays Date:,01/10/2018,Generation date,10/01/2019 11:20:58
Header 1,Header 2,Header 3,Header 4,Header 5
,My account no,100102GFC,,
A,B,C,D,E
A,B,C,D,E
A,B,C,D,E
TEST
$ perl -F, -lane ' $dt=$F[2] if $.==2 ; $ac=$F[2] if $.==4; if($.>4 and ! eof) { print "$_,$dt,$ac" } ' dawn.txt
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
$
$ cat tst.awk
BEGIN { FS=OFS="," }
NR == 2 { date = $3 }
NR == 4 { acct = $3 }
NR>4 && NF>1 { print $0, date, acct }
$ awk -f tst.awk file
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
or, depending on your requirements and actual input data:
$ cat tst.awk
BEGIN { FS=OFS="," }
NR == 2 { date = $3 }
NR == 4 { acct = $3 }
NR>4 {
if (out != "") {
print out
}
out = $0 OFS date OFS acct
}
$ awk -f tst.awk file
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
A,B,C,D,E,01/10/2018,100102GFC
In Linux:
There are many .csv files in the folder, and I have to select those CSV files that have a column named PREDICT containing the value 646.
Check this link:
https://prnt.sc/gone85
What kind of query works?
Providing test data, since none was provided:
$ cat > file1
ACTUAL PREDICT
1 2
3 646
$ cat > file2
ACTUAL PREDICT
1 2
3 666
Then some GNU awk (using nextfile) to select the files where the PREDICT column contains the value 646:
$ awk 'FNR==1{for(i=1;i<=NF;i++)if($i=="PREDICT")p=i}$p==646{print FILENAME;nextfile}' file1 file2
file1
Explained:
awk '
FNR==1 { # get the column number of PREDICT column for each file
for(i=1;i<=NF;i++)
if($i=="PREDICT")
p=i # set it to p
}
$p==646 { # if p==646, we have a match
print FILENAME # print the filename
nextfile # and move on to the next file
}' file1 file2 # all the candidate files
A GNU awk solution without a loop:
$ cat tst.awk
BEGIN{FS=","}
FNR==1 && (s=substr($0,1,index($0,"PREDICT"))) { # look for index of PREDICT
i=gsub(/,/, "", s) + 1 # and count nr of times you
# can replace "," in the preceding
# substring
}
s && $i==646 { print FILENAME; nextfile }
some input:
$ cat file1.csv
ACTUAL,PREDICT,COUNTRY,REGION,DIVISION,PRODUCTTYPE,PRODUCT,QUARTER,YEAR,MONTH
925,850,CANADA,EAST,EDUCATION,FURNITURE,SOFA,1,1993,12054
925,533,CANADA,EAST,EDUCATION,FURNITURE,SOFA,1,1993,12054
925,646,CANADA,EAST,EDUCATION,FURNITURE,SOFA,1,1993,12054
$ cat file2.csv
ACTUAL,PREDICT,COUNTRY,REGION,DIVISION,PRODUCTTYPE,PRODUCT,QUARTER,YEAR,MONTH
925,850,CANADA,EAST,EDUCATION,FURNITURE,SOFA,1,1993,12054
925,533,CANADA,EAST,EDUCATION,FURNITURE,SOFA,1,1993,12054
925,111,CANADA,EAST,EDUCATION,FURNITURE,SOFA,1,1993,12054
and:
$ cp file1.csv file3.csv
gives:
$ awk -f tst.awk *.csv
file1.csv
file3.csv
Or use a one-liner:
$ awk -F, 'FNR==1 && (s=substr($0,1,index($0,"PREDICT"))) {i=gsub(/,/, "", s) + 1} s && $i==646 { print FILENAME; nextfile }' *.csv
file1.csv
file3.csv
I have a directory full of files like this:
[Location]
state=California
city=Palo Alto
[Outlet]
id=23
manager=John Doe
I want to write a small script, that outputs one line for each file like this:
John Doe,Palo Alto
How do I do that? I suspect some grep and looping. So far I have:
#!/bin/bash
echo Manager,City > result.txt
for f in *.config
do
cat "$f" | grep manager= >> result.txt
cat "$f" | grep city= >> result.txt
done
but that's of course incomplete since grep returns the whole line on its own line and I only want the part after the first = sign.
echo Manager,City > result.txt
for f in *.config; do
manager=$(awk -F= '$1=="manager" {print $2}' "$f")
city=$( awk -F= '$1=="city" {print $2}' "$f")
echo "$manager,$city"
done >> result.txt
awk -F= uses an equal sign as the field separator, and then checks for the desired variables ($1) and prints their values ($2). $(cmd) captures the output of a command and yields strings that can be assigned to the two variables $manager and $city.
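One caveat: because -F= splits on every equal sign, a value that itself contains = would be truncated to its first part by $2. If that can happen in your files, a sketch that keeps everything after the first = instead:
manager=$(awk -F= '$1=="manager" {sub(/^[^=]*=/, ""); print}' "$f")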
Similar to John Kugelman's answer but using grep.
echo Manager,City > result.txt
for file in *.config; do
name=$(grep -oP '(?<=manager\=).*' "$file")
location=$(grep -oP '(?<=city\=).*' "$file")
echo "$name,$location"
done >> result.txt
You can do this with a single awk command, as per the following transcript:
pax> cat 1.config
[Location]
state=California
city=Palo Alto
[Outlet]
id=23
manager=John Doe
pax> cat 2.config
[Location]
state=Western Australia
city=Perth
[Outlet]
id=24
manager=Pax Diablo
pax> awk '
/^city=/ {gsub (/^city=/, "", $0); city=$0}
/^manager=/{gsub(/^manager=/, "", $0); print $0 "," city}
' *.config
John Doe,Palo Alto
Pax Diablo,Perth
Note that this assumes the city comes before the manager, and that all files have both city and manager. If those assumptions are incorrect, the awk script becomes a little more complex but it's still doable.
In that case, it becomes something like:
awk '
FNR==1 {city = ""; mgr = ""}
/^city=/ {gsub (/^city=/, "", $0); city = $0}
/^manager=/ {gsub (/^manager=/, "", $0); mgr = $0}
{if (city!="" && mgr!=""){
print mgr "," city; city = ""; mgr = "";
}}
' *.config
What this does is make the order irrelevant. It resets the city and manager variables to the empty string at the start of each file and stores them when it finds the relevant lines. After every line, if both are set, it prints and clears them.
I have 2 CSV files:
file_1 columns: id,user_id,message_id,rate
file_2 columns: id,type,timestamp
The relation between the files is that file_1.message_id = file_2.id.
I want to create a 3rd file that will have the following columns:
file_1.id,file_1.user_id,file_1.message_id,file_1.rate,file_2.timestamp
Any ideas on how to do this in Linux?
You can use the join command like this:
join -t, -1 3 -2 1 -o 1.1,1.2,1.3,1.4,2.3 <(sort -t, -k 3,3 file1) <(sort file2)
It first sorts the files (file1 is sorted by the 3rd field) and then joins them using the 3rd field of file1 and the 1st field of file2. It then outputs the fields you need.
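A quick sanity check with made-up rows (hypothetical, headerless data):
$ cat file1
1,3334,424,44
$ cat file2
424,rr,22222
$ join -t, -1 3 -2 1 -o 1.1,1.2,1.3,1.4,2.3 <(sort -t, -k 3,3 file1) <(sort file2)
1,3334,424,44,22222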
Seems to be a job for SQLite. Using the SQLite shell:
create table f1(id,user_id,message_id,rate);
create table f2(id,type,timestamp);
.separator ,
.import 'file_1.txt' f1
.import 'file_2.txt' f2
CREATE INDEX i1 ON f1(message_id ASC); -- optional
CREATE INDEX i2 ON f2(id ASC); -- optional
.output 'output.txt'
.separator ,
SELECT f1.id, f1.user_id, f1.message_id, f1.rate, f2.timestamp
FROM f1
JOIN f2 ON f2.id = f1.message_id;
.output stdout
.q
Note that if there is a single error in the number of commas on a single line, the import stage will fail. You can make the rest of the script stop running on such an error by putting .bail on at the beginning of the script.
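For example, with .bail on as the first line of the script, the whole thing can be run non-interactively (the script and database names here are hypothetical):
$ sqlite3 join.db < join_files.sql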
If you want unmatched ids you can try:
SELECT f1.* FROM f1 LEFT JOIN f2 on f2.id = f1.message_id WHERE f2.id IS NULL
Which will select every row from f1 for which no corresponding row in f2 has been found.
You can try this:
1. Change all lines to start with the key:
awk -F',' '{ print $3 " file1 " $1 " " $2 " " $4 }' < file1 > temp
awk -F',' '{ print $1 " file2 " $2 " " $3 }' < file2 >> temp
Now the lines look like:
message_id file1 id user_id rate
id file2 type timestamp
2. Sort temp by the first two columns. Now related lines are adjacent, with the file1 line first:
sort -k 1,1 -k 2,2 < temp > temp2
3. Run awk to read the lines: on file1 lines save the fields, on file2 lines print them, as sketched below.
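A sketch of that last step, following the field layout from step 1 (the variable names are my own):
awk '$2 == "file1" { id = $3; user = $4; rate = $5; key = $1 }                     # save the file1 fields
     $2 == "file2" && $1 == key { print id "," user "," key "," rate "," $4 }' < temp2   # print the joined line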
With awk you can try something like this:
awk -F, 'NR==FNR{a[$3]=$0;next} ($1 in a){print a[$1]","$3 > "file_3"}' file_1 file_2
Test:
[jaypal:~/Temp] cat file_1 # Contents of File_1
id,user_id,message_id,rate
1,3334,424,44
[jaypal:~/Temp] cat file_2 # Contents of File_2
id,type,timestamp
424,rr,22222
[jaypal:~/Temp] awk -F, 'NR==FNR{a[$3]=$0;next} ($1 in a){print a[$1]","$3 > "file_3"}' file_1 file_2
[jaypal:~/Temp] cat file_3 # Contents of File_3 made by the script
1,3334,424,44,22222