I am trying to use the sha256sum command to hash an email address.
$ echo -n example#gmail.com | sha256sum | awk '{print $1}'
264e53d93759bde067fd01ef2698f98d1253c730d12f021116f02eebcfa9ace6
Now I want to apply the same to an input file containing only email addresses, but the command below produces a different hash, and it looks like cat is the culprit here. Let me know how to overcome the issue.
$ echo -n | cat test.txt | sha256sum | awk '{print $1}'
5c98fab97a397b50d060913638c18f7fd42345248bb973c486b6347232e8013e
Ideally, I would like to see the output below if test.txt has only one record, example#gmail.com (it can have any number of email addresses):
example#gmail.com|264e53d93759bde067fd01ef2698f98d1253c730d12f021116f02eebcfa9ace6
It's a bit unclear, but I think your test.txt file looks like this:
example#gmail.com
rkj#stackoverflow.com
sean#stackoverflow.com
And you want to produce the output:
example#gmail.com|264e53d93759bde067fd01ef2698f98d1253c730d12f021116f02eebcfa9ace6
rkj#stackoverflow.com|2a583d8e55db9bac7247cac8dc4b52780010583217844e864e159c458ce0185c
sean#stackoverflow.com|797f02327b00f12486f2ed5e85e61680ecb75a0f969b54a627e329506aaf595c
If that is the case, this will do it:
while IFS= read -r email; do SHA=$(echo -n "$email" | sha256sum | awk '{print $1}'); echo "$email|$SHA"; done < test.txt
In your initial command, you are using echo -n which prints the arguments without a trailing newline. So you are getting the hash of just the e-mail address itself. Once you start working with a file, every line is going to have a newline on it, so for each line you have to strip off the newline and get the hash. That is effectively what my suggested solution is doing.
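To see the newline effect directly, here is a quick check (printf is used here instead of echo -n; the second hash should match the value your cat pipeline produced, assuming test.txt holds exactly that one address):
$ printf '%s' 'example#gmail.com' | sha256sum | awk '{print $1}'
264e53d93759bde067fd01ef2698f98d1253c730d12f021116f02eebcfa9ace6
$ printf '%s\n' 'example#gmail.com' | sha256sum | awk '{print $1}'
5c98fab97a397b50d060913638c18f7fd42345248bb973c486b6347232e8013e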
I have a file with the content below and want to validate it as follows:
1. The file has entries of the form rec$NUM, and each rec$NUM should be repeated exactly 7 times.
For example, for entries like rec1.any_attribute, rec1 should appear only 7 times in the whole file.
2. I need a validation script for this.
If a rec$NUM appears fewer than 7 or more than 7 times, the script should report that record.
The file is as follows:
rec1:sourcefile.name=
rec1:mapfile.name=
rec1:outputfile.name=
rec1:logfile.name=
rec1:sourcefile.nodename_col=
rec1:sourcefle.snmpnode_col=
rec1:mapfile.enc=
rec2:sourcefile.name=abc
rec2:mapfile.name=
rec2:outputfile.name=
rec2:logfile.name=
rec2:sourcefile.nodename_col=
rec2:sourcefle.snmpnode_col=
rec2:mapfile.enc=
rec3:sourcefile.name=abc
rec3:mapfile.name=
rec3:outputfile.name=
rec3:logfile.name=
rec3:sourcefile.nodename_col=
rec3:sourcefle.snmpnode_col=
rec3:mapfile.enc=
Simple awk:
awk -F: '/^rec/{a[$1]++}END{for(t in a){if(a[t]!=7){print "Some error for record: " t}}}' test.rc
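For example, assuming the file is saved as test.rc and one of the rec2 lines has been removed, this would report:
$ awk -F: '/^rec/{a[$1]++}END{for(t in a){if(a[t]!=7){print "Some error for record: " t}}}' test.rc
Some error for record: rec2
On the file exactly as posted (each of rec1, rec2 and rec3 appears 7 times) it prints nothing.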
grep '^rec1' file.txt | wc -l
grep '^rec2' file.txt | wc -l
grep '^rec3' file.txt | wc -l
All above should return 7.
The commands:
grep rec file2.txt | cut -d':' -f1 | uniq -c | egrep -v '^ *7'
will print nothing if the file follows your rules, and will print the offending record (with its count) if it doesn't.
(Insert a sort before uniq -c if the record numbers can be interleaved rather than grouped.)
I have the following in one of my files. Could anyone please explain to me what this command is doing?
path = /document/values.txt
where different usernames are specified, e.g. username1 = john, username2=marry
cat ${path} | grep -e username1 | cut -d'=' -f2
My question here is: the cat command is reading the value of username1 from the file, but why do we need to use the cut command?
cat is printing the file. The file has username1=something on one of its lines. The cut command splits that line and prints out the second field.
Your command is not written well; the cat is useless.
You can do:
grep -e pattern "$path"|cut ...
You can of course do it with a single process in awk if you like. In any case, the line in your question does not smell good.
awk example:
awk -F'=' '/pattern/{print $2}' inputFile
cut -d'=' -f2
This cut uses -d'=', which means '=' is used as the field delimiter, and -f2 takes only the second field.
So in this case you get only the value after the '='.
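For example, with a made-up line in the same key=value format:
$ echo 'username1=john' | cut -d'=' -f2
john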
Suppose I have a file input.txt with a few columns and a few rows, where the first column is the key, and a directory dir with files that contain some of these keys. I want to find all lines in the files in dir which contain these keys. At first I tried to run the command
cat input.txt | awk '{print $1}' | xargs grep dir
This doesn't work because it thinks the keys are paths on my file system. Next I tried something like
cat input.txt | awk '{system("grep -rn dir $1")}'
But this didn't work either; eventually I have to admit that even this doesn't work:
cat input.txt | awk '{system("echo $1")}'
After trying to use \ to escape the whitespace and the $ sign, I came here to ask for your advice. Any ideas?
Of course I can do something like
for x in `cat input.txt` ; do grep -rn $x dir ; done
This is not good enough because it takes two commands, while I want only one. It also shows why xargs doesn't work here: the parameter is not the last argument.
You don't need grep with awk, and you don't need cat to open files:
awk 'NR==FNR{keys[$1]; next} {for (key in keys) if ($0 ~ key) {print FILENAME, $0; next} }' input.txt dir/*
Nor do you need xargs, or shell loops or anything else - just one simple awk command does it all.
If input.txt is not a file, then tweak the above to:
real_input_generating_command |
awk 'NR==FNR{keys[$1]; next} {for (key in keys) if ($0 ~ key) {print FILENAME, $0; next} }' - dir/*
All it's doing is creating an array of keys from the first file (or input stream) and then looking for each key from that array in every file in the dir directory.
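As a quick illustration with made-up files, where input.txt has the keys alice and bob in its first column and dir contains a single log file:
$ cat input.txt
alice 42
bob 17
$ cat dir/a.log
line about alice
line about carol
$ awk 'NR==FNR{keys[$1]; next} {for (key in keys) if ($0 ~ key) {print FILENAME, $0; next} }' input.txt dir/*
dir/a.log line about alice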
Try the following:
awk '{print $1}' input.txt | xargs -n 1 -I pattern grep -rn pattern dir
The first thing you should do is research this.
Next... you don't need to grep inside awk. That's completely redundant. It's like... stuffing your turkey with... a turkey.
Awk can process input and do "grep" like things itself, without the need to launch the grep command. But you don't even need to do this. Adapting your first example:
awk '{print $1}' input.txt | xargs -n 1 -I % grep % dir
This uses xargs' -I option to put xargs' input into a different place on the command line it runs. In FreeBSD or OSX, you would use a -J option instead.
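A tiny illustration of what -I does (made-up input, just echoing instead of grepping):
$ printf 'alice\nbob\n' | xargs -I % echo searching for %
searching for alice
searching for bob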
But I prefer your for loop idea, converted into a while loop:
while read -r key junk; do grep -rn "$key" dir; done < input.txt
Use process substitution to create a keyword "file" that you can pass to grep via the -f option:
grep -f <(awk '{print $1}' input.txt) dir/*
This will search each file in dir for lines containing keywords printed by the awk command. It's equivalent to
awk '{print $1}' input.txt > tmp.txt
grep -f tmp.txt dir/*
grep requires parameters in order: [what to search] [where to search]. You need to merge keys received from awk and pass them to grep using the \| regexp operator.
For example:
arturcz#szczaw:/tmp/s$ cat words.txt
foo
bar
fubar
foobaz
arturcz#szczaw:/tmp/s$ grep 'foo\|baz' words.txt
foo
foobaz
Finally, you will finish with:
grep `commands|to|prepare|a|keywords|list` directory
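One sketch of preparing that keyword list from input.txt (paste joins the keys with |, and -E makes grep treat | as alternation; keys containing regex metacharacters would need escaping, or use the grep -f approach shown earlier):
grep -E -r "$(awk '{print $1}' input.txt | paste -s -d '|' -)" dir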
In case you still want to use grep inside awk, make sure $1, $2 etc. are outside the quotes.
E.g. this works perfectly:
cat file_having_query | awk '{system("grep " $1 " file_to_be_greped")}'
(notice the space after grep and before the file name)
Is there a way to perform a grep based on the results of a previous grep, rather than just piping multiple greps into each other. For example, say I have the log file output below:
ID 1000 xyz occured
ID 1001 misc content
ID 1000 misc content
ID 1000 status code: 26348931276572174
ID 1000 misc content
ID 1001 misc content
To begin with, I'd like to grep the whole log file to see if "xyz occured" is present. If it is, I'd like to get the ID number of that event and grep through all the lines in the file with that ID number, looking for the status code.
I'd imagined that I could use xargs or something like that but I can't seem to get it work.
grep "xyz occured" file.log | awk '{ print $2 }' | xargs grep "status code" | awk '{print $NF}'
Any ideas on how to actually do this?
A general answer for grepping the grepped output:
grep 'pattern1' *.txt | grep 'pattern2'
Notice that the second grep is not pointing at a file.
More about cool grep stuff here
You're almost there. But while xargs can sometimes be used to do what you want (depending on how the next command takes its arguments), you aren't actually using it to grep for the ID you just extracted. What you need to do is take the output of the first grep (containing the ID code) and use that in the next grep's expression. Something like:
grep "^ID `grep 'xyz occured' file.log | awk '{print $2}'` status code" file.log
Obviously another option would be to write a script to do this in one pass, a-la Ed's suggestion.
Yet another way:
for x in `grep "xyz occured" file.log | cut -d' ' -f2`
do
grep $x file.log
done
The thing I like about this method is that, if you wanted to, you could write the output to a separate file for each ID:
grep $x file.log >> /var/tmp/$x.out
This is all about retrieving files within a narrowed search scope. In your case the search scope is determined by a file's content.
I have hit this problem often while narrowing the search scope through successive searches (applying filters to the previous grep results).
Trying to give a general answer:
Generate a list of files from the result of the first grep (run it against multiple files so each output line starts with a filename):
grep pattern *.txt | awk -F':' '{print $1}'
Then grep within that list of files, like here:
xargs grep -i pattern
Apply this cascading filter as many times as you need, just adding awk to keep only the filenames and xargs to pass the filenames on to grep -i.
For example:
grep 'pattern1' *.txt | awk -F':' '{print $1}' | xargs grep -i 'pattern2'
Just use awk:
awk '{info[$2] = info[$2] $0 ORS} /xyz occured/{ids[$2]} END{ for (id in ids) printf "%s",info[id]}' file.log
or:
awk '/status code/{code[$2]=$NF} /xyz occured/{ids[$2]} END{ for (id in ids) print code[id]}' file.log
depending on what you really want to output. Some expected output in your question would help.
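For reference, on the sample log from the question the first command should print every ID 1000 line:
ID 1000 xyz occured
ID 1000 misc content
ID 1000 status code: 26348931276572174
ID 1000 misc content
and the second should print just the status code:
26348931276572174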
Grep the result of a previous Grep:
Given this file contents:
ID 1000 xyz occured
ID 1001 misc content
ID 1000 misc content
ID 1000 status code: 26348931276572174
ID 1000 misc content
ID 1001 misc content
This command:
grep "xyz" file.log | awk '{ print $2 }' > f.log; grep `cat f.log` file.log;
returns this:
ID 1000 xyz occured
ID 1000 misc content
ID 1000 status code: 26348931276572174
ID 1000 misc content
It looks for "xyz" in file.log places the result in f.log. Then greps for that ID in file.log. If the outer grep returns multiple ID numbers, then the inner grep will only search the first ID number and error out on the others.
I have a number of log files in a directory. I am trying to write a script to search all the log files for a string and echo the name of the files and the line number that the string is found.
I figure I will probably have to use two greps, piping the output of one into the other, since the -l option only returns the name of the file and nothing about the line numbers. Any insight into how I can successfully achieve this would be much appreciated.
$ grep -Hn root /etc/passwd
/etc/passwd:1:root:x:0:0:root:/root:/bin/bash
Combining -H and -n does what you expect.
If you want to echo the required information without the matched string:
$ grep -Hn root /etc/passwd | cut -d: -f1,2
/etc/passwd:1
or with awk:
$ awk -F: '/root/{print "file=" ARGV[1] "\nline=" NR}' /etc/passwd
file=/etc/passwd
line=1
If you want to create shell variables:
$ eval "$(awk -F: '/root/{print "file=" ARGV[1] "\nline=" NR}' /etc/passwd)"
$ echo $line
1
$ echo $file
/etc/passwd
Use -H. If you are using a grep that does not have -H, specify two filenames. For example:
grep -n pattern file /dev/null
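For example, reusing the /etc/passwd example from above:
$ grep -n root /etc/passwd /dev/null
/etc/passwd:1:root:x:0:0:root:/root:/bin/bash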
My version of grep kept returning text from the matching line, which I wasn't sure you were after... You can also pipe the output to an awk command to have it print ONLY the file name and line number:
grep -rHn "text" . | awk -F: '{print $1 ":" $2}'