Splunk: search a bunch of strings and display a table of _raw

I want to search for a set of strings using OR (any better way is appreciated). Is there a way to assign a name to the strings?
index=blah host=123 "ERROR" ("FILE1" OR "FILE2" OR "FILE3") | rex field=_raw ".*errorDesc\":\"(?<RESP_JSON>.*)\",\"errorCode.*" | table _time RESP_JSON
Now I want to add the filename as another column in the table. If a file is not present, show empty values for the rest of the columns.
Note: the filename is not a field, it's just a string in the _raw field.
Sample Splunk events:
[12/12/2015:12:12:12.123] ERROR occured while processing FILE1. errorDesc":"{field:123,code:124}","errorCode
[12/12/2015:13:13:12.123] ERROR occured while processing FILE3. errorDesc":"{field:125,code:124}","errorCode
Example output:
File      _time                      RESP_JSON
FILE1     12/12/2015:12:12:12.123    {field:123,code:124}
FILE2
FILE3     12/12/2015:13:13:12.123    {field:125,code:124}
No log entry for FILE2 is present, so an empty row with just the file name is displayed.

Have you tried the following to extract the filename?
index=blah host=123 "ERROR" ("FILE1" OR "FILE2" OR "FILE3") | rex field=_raw "(?<filename>\S+)\. errorDesc" | table _time RESP_JSON filename
Regarding the first question, naming search terms: have you looked at macros, or at using subsearches with lookups?

Give this a shot:
index=blah host=123 "ERROR" ("FILE1" OR "FILE2" OR "FILE3" ) | rex "processing\s+(?<filename>[^\.]+)\.\s+" | table _time RESP_JSON filename
It's the same search as above, just a different regex extraction.

Related

How to extract two part-numerical values from a line in shell script

I have multiple text files in this format. I would like to extract lines matching this pattern "pass filters and QC".
File1:
Before main variant filters, 309 founders and 0 nonfounders present.
0 variants removed due to missing genotype data (--geno).
9302015 variants removed due to minor allele threshold(s)
(--maf/--max-maf/--mac/--max-mac).
7758518 variants and 309 people pass filters and QC.
Calculating allele frequencies... done.
I was able to grep the line, but when I tried to assign it to a variable it just didn't work.
grep 'people pass filters and QC' File1
line="$(echo grep 'people pass filters and QC' File1)"
I am new to shell scripting and would appreciate it if you could help me do this.
I want to create a tab-separated file with just
"File1" "7758518 variants" "309 people"
GNU awk
gawk '
BEGIN { patt = "([[:digit:]]+ variants) .* ([[:digit:]]+ people) pass filters and QC" }
match($0, patt, m) {printf "\"%s\" \"%s\" \"%s\"\n", FILENAME, m[1], m[2]}
' File1
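Since you mention multiple files and a tab-separated result, the same one-liner extends naturally; a minimal sketch, assuming GNU awk and that the other files follow the same layout (File2 and summary.tsv are just placeholder names):
gawk '
BEGIN {
    OFS = "\t"     # tab-separated output, as requested
    patt = "([[:digit:]]+ variants).*([[:digit:]]+ people) pass filters and QC"
}
match($0, patt, m) { print "\"" FILENAME "\"", "\"" m[1] "\"", "\"" m[2] "\"" }
' File1 File2 > summary.tsv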
You are almost there; just remove the double quotes and the echo from your command:
line=$(grep 'people pass filters and QC' File1)
Now view the value stored in the variable:
echo "$line"
And if your file structure is the same, i.e. it is always of the form 7758518 variants and 309 people pass filters and QC, you can use awk to pick the columns you need from the output. The complete command would then look like this:
OIFS=$IFS;IFS=$'\n';for i in $line;do echo $i;echo '';done | awk -F "[: ]" '{print $1"\t"$2" "$3"\t"$5" "$6}';IFS=$OIFS
Explanation:
IFS is the internal field separator; we set it to the newline character so that the for loop iterates over lines rather than words.
Before doing that, we back it up in another variable, OIFS, so we can restore it afterwards.
The for loop walks through all the matched lines, and awk picks out the 1st, 2nd, 3rd, 5th and 6th columns to build the output you asked for (this assumes grep prefixed each line with its filename, e.g. File1:).
But please note, if your file structure varies, we may need a different technique to extract the "7758518 variants" and "309 people" parts.
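If you'd rather skip the variable and IFS juggling altogether, the same extraction can be piped straight from grep into awk. A minimal sketch, assuming grep prefixes each match with its filename (which it does when given more than one file, or with the -H option; File2 is a placeholder):
grep -H 'people pass filters and QC' File1 File2 \
    | awk -F '[: ]' '{ print $1 "\t" $2 " " $3 "\t" $5 " " $6 }'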

Deleting complete record in File 1 when a string from file 2 is found

I have one file which has data:
ID Name
1 abc
2 def
....
I have another file which has the strings:
ID
1
9
10
...
I want to search the data file for the strings in the string file (which contains just the ID), and if a string is found in the data file, delete ONLY that record.
So in the above example:
Only the record with ID=1 should get deleted from the data file.
This has to be done on a unix system.
I have tried this:
grep -v -F -f /path/to/string/file/string.txt /path/to/data/file/data.csv
This lists out the content but does not edit/delete the record in the data file.
Thanks
You can use this awk solution; it first remembers the IDs from string.txt, then prints only the data.csv lines whose first field is not among them:
awk 'NR==FNR{keep[$1]; next} !($1 in keep)' string.txt data.csv
2 def
Reference: Effective AWK Programming
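Like your grep -v attempt, this only prints the filtered rows; to actually change the data file you still have to write the output somewhere and move it back. A minimal sketch, assuming overwriting data.csv in place is what you want:
awk 'NR==FNR{keep[$1]; next} !($1 in keep)' string.txt data.csv > data.csv.tmp && mv data.csv.tmp data.csv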

Finding duplicate entries across very large text files in bash

I am working with very large data files extracted from a database. There are duplicates across these files that I need to remove. If there are duplicates, they will exist across files, not within the same file. The files contain entries that look like the following:
File1
623898/bn-oopi-990iu/I Like Potato
982347/ki-jkhi-767ho/Let's go to Sesame Street
....
File2
568798/jj-ytut-786hh/Hello Mike
982347/ki-jkhi-767ho/Let's go to Sesame Street
....
So the Sesame Street line will have to be removed, possibly even across 5 files, but it should remain in at least one of them. From what I have been able to put together so far, I can run cat * | sort | uniq -cd to get each duplicated line and the number of times it has been duplicated, but I have no way of getting the file names. cat * | sort | uniq -cd | grep "" * doesn't work. Any ideas or approaches for a solution would be great.
Expanding your original idea:
sort * | uniq -cd | awk '{print $2}' | grep -Ff- *
i.e. from the output, print only the duplicate strings, then search all the files for them (the list of things to search for is taken from -, i.e. stdin), literally (-F).
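One caveat: awk's $2 only keeps the first word of each duplicated line (the Sesame Street line has spaces), which still works here because -F does substring matching. You can sidestep that by dropping -c so uniq -d prints the whole lines, and adding -x/-n for exact whole-line matches with line numbers; a small variant sketch, assuming GNU grep:
sort * | uniq -d | grep -Fxnf - *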
Something along these lines might be useful:
awk '!seen[$0] { print $0 > (FILENAME ".new") } { seen[$0] = 1 }' file1 file2 file3 ...
twalberg's solution works perfectly, but if your files are really large it could exhaust the available memory, because it creates one entry in an associative array per unique record encountered. If that happens, you can try a similar approach where there is only one entry per duplicate record (I assume you have GNU awk and your files are named *.txt):
sort *.txt | uniq -d > dup
awk 'BEGIN {while(getline < "dup") {dup[$0] = 1}} \
!($0 in dup) {print >> (FILENAME ".new")} \
$0 in dup {if(dup[$0] == 1) {print >> (FILENAME ".new");dup[$0] = 0}}' *.txt
Note that if you have many duplicates this could also exhaust the available memory. You can solve that by splitting the dup file into smaller chunks and running the awk script on each chunk.
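For completeness, a rough sketch of that chunking idea (GNU awk, split and coreutils assumed; the chunk size, the work/ copies and the dup.part. prefix are arbitrary choices of mine, not part of the answer above):
mkdir -p work && cp *.txt work/                  # keep the originals untouched
sort work/*.txt | uniq -d > dup
split -l 1000000 dup dup.part.                   # at most 1M duplicate lines per chunk

for part in dup.part.*; do
    awk -v dupfile="$part" '
        BEGIN { while ((getline line < dupfile) > 0) dup[line] = 1 }
        !($0 in dup) { print > (FILENAME ".new"); next }           # not a duplicate: keep it
        dup[$0] == 1 { print > (FILENAME ".new"); dup[$0] = 0 }    # first copy of a duplicate: keep it
    ' work/*.txt
    for f in work/*.txt; do                      # the .new files become the input for the next chunk
        if [ -f "$f.new" ]; then mv "$f.new" "$f"; else : > "$f"; fi
    done
done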

Bash CSV sorting and unique-ing

A Linux question: I have the CSV file data.csv with the following fields and values:
KEY,LEVEL,DATA
2.456,2,aaa
2.456,1,zzz
0.867,2,bbb
9.775,4,ddd
0.867,1,ccc
2.456,0,ttt
...
The field KEY is a float value, while LEVEL is an integer. I know that the first field can have repeated values, and so can the second, but taken together they form a unique pair.
What I would like to do is sort the file on the KEY column and then, for each unique value of KEY, keep only the row with the highest value of LEVEL.
Sorting is not a problem:
$> sort -t, -k1,2 data.csv # fields: KEY,LEVEL,DATA
0.867,1,ccc
0.867,2,bbb
2.456,0,ttt
2.456,1,zzz
2.456,2,aaa
9.775,4,ddd
...
but then how can I filter the rows so that I get what I want, which is:
0.867,2,bbb
2.456,2,aaa
9.775,4,ddd
...
Is there a way to do it using command line tools like sort, uniq, awk and so on? Thanks in advance
try this line:
your sort...|awk -F, 'k&&k!=$1{print p}{p=$0;k=$1}END{print p}'
output:
kent$ echo "0.867,1,bbb
0.867,2,ccc
2.456,0,ttt
2.456,1,zzz
2.456,2,aaa
9.775,4,ddd"|awk -F, 'k&&k!=$1{print p}{p=$0;k=$1}END{print p}'
0.867,2,ccc
2.456,2,aaa
9.775,4,ddd
The idea is: because your file is already sorted, just go through the input from the top; whenever the first column (KEY) changes, print the previous line, which holds the highest LEVEL of the previous KEY.
Try it with your real data; it should work.
The whole logic (including your sort) could also be done by awk in a single process, as sketched below.
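For reference, a rough sketch of that single-process idea, assuming GNU awk (for PROCINFO["sorted_in"]); no separate sort is needed:
gawk -F, '
    FNR == 1 { next }                                # skip the CSV header
    !($1 in lvl) || $2+0 > lvl[$1]+0 {               # keep the highest-LEVEL row per KEY
        lvl[$1] = $2; row[$1] = $0
    }
    END {
        PROCINFO["sorted_in"] = "@ind_num_asc"       # print KEYs in ascending numeric order
        for (k in row) print row[k]
    }
' data.csv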
Use:
$> sort -r data.csv | uniq -w 5 | sort
This works given your floats are all formatted "0.000"-"9.999", so the first five characters are exactly the KEY.
Perl solution:
perl -aF, -ne '$h{$F[0]} = [@F[1,2]] if $F[1] > $h{$F[0]}[0]
}{
print join ",", $_, @{$h{$_}} for sort {$a<=>$b} keys %h' data.csv
Note that the result differs from the demo output above (though it matches your original sample data): the first line contains bbb, not ccc.

Input file isn't being read correctly by an awk script

Hi I have two input files.
input1.txt:
id above
id below
id still
id getting
input2.txt
id above
value above the sky
id below
value under the chair
I'm trying to run an awk command and the output shows up empty.
awk -f find.awk input1.txt input2.txt
I know my awk script works, because when I feed it two other txt files all the output is correct and visible.
The difference between the two versions of input2.txt is this:
when I open the file in Notepad on a Windows machine, the whole file shows up as one long string, while in any other text editor it is formatted as separate lines.
Example of input2.txt viewed in Notepad:
id above value above the sky id below value under the chair
I can't just reparse this input by id, because my real txt file has more data ... which is inconsistent, so I can't just search for a string or regular expression.
find.awk:
FNR==NR { id[$0]; next }   # first file: remember each id line
$0 in id { f=1 }           # second file: a matching id line turns printing on
f                          # print the current line while the flag is set
NF==0 { f=0 }              # a blank line turns printing off
Any idea why my awk isn't working?
Run "dos2unix" on your input files before running awk on them. man dos2unix.
