Shell script to mail script output in table format [duplicate] - linux

I am new to shell scripting. I need your help with the scenario below.
Script: wc file1 file2 file3 file4
Results:
1488 2977 2248 file1
123 345 657 file2
123 896 456 file3
567 987 124 file4
Now I need to mail this result in the format below, with a header row.
Here, the 2nd column is always the default value.
Filename Destname rowcount bytesize
file1 default 1488 2248
file2 default 123 657
file3 default 123 456
file4 default 567 124
Could someone please help me write this script?

$ wc file1 file2 file3 file4 |
awk 'BEGIN{print "Filename Destname rowcount bytesize"}   # header row
     $NF=="total"{exit}                                    # stop before the "total" line wc appends
     {print $NF, "default", $1, $3}' |
column -t |
mail -s "table" you@host.tld
Output:
Filename Destname rowcount bytesize
file1 default 1488 2248
file2 default 123 657
file3 default 123 456
file4 default 567 124
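If you want to reuse this for other recipients or file lists, it can be wrapped in a small script; a minimal sketch (the wrapper itself, its name, and taking the recipient as the first argument are assumptions, not part of the answer above):
#!/bin/bash
# usage (hypothetical): ./mail_table.sh you@host.tld file1 file2 file3 file4
to="$1"; shift
wc "$@" |
awk 'BEGIN{print "Filename Destname rowcount bytesize"}
     $NF=="total"{exit}
     {print $NF, "default", $1, $3}' |
column -t |
mail -s "table" "$to"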

Related

Print the missing words and the file name - linux

I have two files in the following format:
File 1:
India 215.0
country 165.0
Indian 163.0
s 133.0
Maoist 103.0
Nepal 89.0
group 85.0
Kathmandu 85.0
File 2:
Nepal 89.0
would 88.0
Kathmandu 85.0
rule 82.0
king 80.0
parliament 79.0
card 79.0
I want to print the words that are present in one file but not the other. The file in which each word is found should also be printed next to the word. For example, I want the output to be:
India 215.0, file 1
country 165.0, file 1
group 85.0, file 1
....
....
would 88.0, file 2
I tried using:
grep -v file1 file2
I get the words that are not present in file2, but I want the words that are present in file1 and not file2 and vice-versa, with their respective file names. How can I achieve this? Please help!
# print out all the rows only in file2 and append filename
$ awk 'NR==FNR{a[$1]++;next} !($1 in a){print $0, FILENAME}' file1 file2
would 88.0 file2
rule 82.0 file2
king 80.0 file2
parliament 79.0 file2
card 79.0 file2
# print all the rows only in file1 and append filename
$ awk 'NR==FNR{a[$1]++;next} !($1 in a){print $0, FILENAME}' file2 file1
India 215.0 file1
country 165.0 file1
Indian 163.0 file1
s 133.0 file1
Maoist 103.0 file1
group 85.0 file1
The default field separator is whitespace; $1 is the first column (the word).
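To get both directions at once, in the format the question asks for (with ", file 1" or ", file 2" appended), the same two runs can simply be placed back to back; a minimal sketch:
# rows only in file1, then rows only in file2, with the requested suffix
awk 'NR==FNR{a[$1]++;next} !($1 in a){print $0", file 1"}' file2 file1
awk 'NR==FNR{a[$1]++;next} !($1 in a){print $0", file 2"}' file1 file2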

How to append a column for the result set in shell script

I need a script for the scenario below. I am very new to shell scripting.
wc file1 file2
The above command produces the following output:
40 149 947 file1
2294 16638 97724 file2
Now I need to get the result as follows: the 1st, 3rd, and 4th columns of the above output, plus a new column with default values:
40 947 file1 DF.tx1
2294 97724 file2 DF.rb2
Here the last column always holds known values, i.e. DF.tx1 for file1 and DF.rb2 for file2.
Even if the filenames are given in a different order, the default values should not change.
Please help me write this script. Thanks in advance!
You can use awk:
wc file1 file2 file3 |
awk '$4 != "total"{if ($4 ~ /file1/) f="DF.tx1"; else if ($4 ~ /file2/) f="DF.rb2";
     else if ($4 ~ /file3/) f="foo.bar"; print $1, $3, $4, f}'
1 12 file1 DF.tx1
9 105 file2 DF.rb2
5 15 file3 foo.bar
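As an alternative to the if/else chain, the filename-to-value mapping can be kept in an awk array; a minimal sketch under the same assumptions (the foo.bar value for file3 is taken from the answer above; note the array lookup needs $4 to be the exact filename, unlike the ~ pattern match used above):
wc file1 file2 file3 |
awk 'BEGIN{m["file1"]="DF.tx1"; m["file2"]="DF.rb2"; m["file3"]="foo.bar"}
     $4 != "total" {print $1, $3, $4, m[$4]}'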

how to mail script output in table format

I have script output like the one below.
Current result:
Filename Destname rowcount bytesize
file1 default 1488 2248
file2 default 123 657
file3 default 123 456
file4 default 567 124
The actual result should look like the table below (with borders, if possible):
Filename Destname rowcount bytesize
file1 default 1488 2248
file2 default 123 657
file3 default 123 456
file4 default 567 124
I need to mail the above content in the same format.
#!/bin/bash
input="/path/to/your/file.txt"
tmpfile="/path/to/tmpfile.html"
# Declare the body as HTML so the mail client renders the table
echo 'Content-Type: text/html; charset="us-ascii"' > "$tmpfile"
# Turn each whitespace-separated field of every input line into a table cell
awk 'BEGIN{print "<html><body><table border=1>"}
     {print "<tr>"; for(i=1;i<=NF;i++) print "<td>" $i "</td>"; print "</tr>"}
     END{print "</table></body></html>"}' "$input" >> "$tmpfile"
mail -s "test" abc@xyz.com < "$tmpfile"
Source: http://www.unix.com/302556864-post5.html
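Whether the table is actually rendered as HTML depends on the local mail command honouring a Content-Type line at the top of the body. If it does not, one common alternative is to build the complete message, headers included, and hand it to sendmail -t; a sketch, assuming a sendmail binary at /usr/sbin/sendmail and that "$tmpfile" now holds only the awk-generated HTML (without the echoed header line):
{
  echo "To: abc@xyz.com"
  echo "Subject: test"
  echo "MIME-Version: 1.0"
  echo 'Content-Type: text/html; charset="us-ascii"'
  echo
  cat "$tmpfile"   # the HTML table built by the awk step above
} | /usr/sbin/sendmail -t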

script to join 2 separate text files and also add specified text

Hi there. I want to create a bash script that does the following:
I have 2 text files: one contains adf.ly links and the other recipe names.
I want to create a batch script that takes each line from each text file and produces the following:
<li><a href="adf.ly link from line 1 of the links file">Recipe name from line 1 of the recipes file</a></li>
<li><a href="adf.ly link from line 2 of the links file">Recipe name from line 2 of the recipes file</a></li>
etc., and save all the results to another text file called LINKS.txt.
Could someone please help, or point me in the direction of a Linux bash script?
This awk one-liner will do the job:
awk 'BEGIN{l="<li><a href=\"%s\">%s</a></li>\n"}NR==FNR{a[NR]=$0;next}{printf l, a[FNR],$0}' file1 file2
A clearer version (the same script):
awk 'BEGIN{l="<li><a href=\"%s\">%s</a></li>\n"}
NR==FNR{a[NR]=$0;next}
{printf l, a[FNR],$0}' file1 file2
example:
kent$ seq -f"%g from file1" 7 >file1
kent$ seq -f"%g from file2" 7 >file2
kent$ head file1 file2
==> file1 <==
1 from file1
2 from file1
3 from file1
4 from file1
5 from file1
6 from file1
7 from file1
==> file2 <==
1 from file2
2 from file2
3 from file2
4 from file2
5 from file2
6 from file2
7 from file2
kent$ awk 'BEGIN{l="<li><a href=\"%s\">%s</a></li>\n"};NR==FNR{a[NR]=$0;next}{printf l, a[FNR],$0}' file1 file2
<li><a href="1 from file1">1 from file2</a></li>
<li><a href="2 from file1">2 from file2</a></li>
<li><a href="3 from file1">3 from file2</a></li>
<li><a href="4 from file1">4 from file2</a></li>
<li><a href="5 from file1">5 from file2</a></li>
<li><a href="6 from file1">6 from file2</a></li>
<li><a href="7 from file1">7 from file2</a></li>
EDIT, for the OP's comment:
If you have only one file (the foo here is just dummy text):
awk 'BEGIN{l="<li><a href=\"%s\">foo</a></li>\n"}{printf l,$0}' file1
Output from the same file1 example:
<li><a href="1 from file1">foo</a></li>
<li><a href="2 from file1">foo</a></li>
<li><a href="3 from file1">foo</a></li>
<li><a href="4 from file1">foo</a></li>
<li><a href="5 from file1">foo</a></li>
<li><a href="6 from file1">foo</a></li>
<li><a href="7 from file1">foo</a></li>
If you want to save the output to a file:
awk 'BEGIN{l="<li><a href=\"%s\">foo</a></li>\n"}{printf l,$0}' file1 > newfile
Try doing this:
$ cat file1
aaaa
bbb
ccc
$ cat file2
111
222
333
$ paste file1 file2 | while read a b; do
printf '<li><a href="%s">%s</a></li>\n' "$a" "$b"
done | tee newfile
Output
<li><a href="aaaa">111</a></li>
<li><a href="bbb">222</a></li>
<li><a href="ccc">333</a></li>
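If you prefer to avoid awk and paste, the same pairing can be done in plain bash with two file descriptors; a minimal sketch, assuming file1 holds the adf.ly links and file2 the recipe names, and writing to the LINKS.txt file the question asks for:
#!/bin/bash
# Read the links file (fd 3) and the names file (fd 4) line by line in parallel
# and write one combined <li> entry per pair to LINKS.txt.
while IFS= read -r link <&3 && IFS= read -r name <&4; do
    printf '<li><a href="%s">%s</a></li>\n' "$link" "$name"
done 3<file1 4<file2 > LINKS.txt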

shell script for log analysis

I am getting logs in the following format on my Linux server:
id \t IP \t login-id \t login-error Code \t attempts
I want to know all possible login-error codes which a user might have encountered.
The sample file is:
123 10.12.34.234 anshul 11 1
432 10.12.34.234 ra 11 2
342 10.12.34.234 anshul 12 1
445 10.12.34.234 yahoo 3 1
and the output should be:
anshul: 11,12
I have tried:
cat aaa | sort +2 -3 | grep anshul | awk -F"\t" '{print $4}'
This would print
11
12
But I want the output in the format anshul: 11,12.
Can we store the values in variables and display them as required?
Another problem with this code is that it catches every login containing anshul, whether it is anshulg, anshuln, or anshulp. Any suggestions to solve this?
I did the sorting on the login column just to verify that the data I am getting is correct, since all identical names are then grouped into a single chunk.
1) A simple solution, but you will get an extra , at the end:
cat aaa | grep "anshul" | awk '{print $4}' | tr '\n' ','
output: 11,12,
2) Without the extra ,:
tmp=`cat aaa | grep "anshul" | awk '{print $4}' | tr '\n' ','`
echo ${tmp%?}
output: 11,12
Of course you can easily convert this to a script that takes the username as a parameter and outputs something like "user: anshul error(s): 11,12", as sketched below.
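A minimal sketch of such a wrapper, assuming the log is in the file aaa (the script name and the output wording are just placeholders):
#!/bin/bash
# usage (hypothetical): ./user_errors.sh anshul
user="$1"
codes=$(grep "$user" aaa | awk '{print $4}' | tr '\n' ',')
echo "user: $user error(s): ${codes%?}"   # ${codes%?} drops the trailing comma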
#% cat t
123 10.12.34.234 anshul 11 1
432 10.12.34.234 ra 11 2
342 10.12.34.234 anshul 12 1
445 10.12.34.234 yahoo 3 1
One-line Perl:
perl -ane 'BEGIN{$x="anshul"} push @{$X{$F[2]}}, $F[3]; END{print "$x: ", join(",", @{$X{$x}}), "\n"}' < t
Gives:
anshul: 11,12
awk '{a[$3]=a[$3]","$4;next}END{for(i in a)print i,substr(a[i],2)}' <your_file>|grep anshul
Or you can directly use awk without a grep.
awk '{a[$3]=a[$3]","$4;next}END{print "anshul",substr(a["anshul"],2)}' <your_file>
Tested below:
> cat temp
123 10.12.34.234 anshul 11 1
432 10.12.34.234 ra 11 2
342 10.12.34.234 anshul 12 1
445 10.12.34.234 yahoo 3 1
> awk '{a[$3]=a[$3]","$4;next}END{for(i in a)print i,substr(a[i],2)}' temp
anshul 11,12
ra 11
yahoo 3
> awk '{a[$3]=a[$3]","$4;next}END{for(i in a)print i,substr(a[i],2)}' temp|grep anshul
anshul 11,12
>
> awk '{a[$3]=a[$3]","$4;next}END{print "anshul",substr(a["anshul"],2)}' temp
anshul 11,12
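If you also want to avoid the partial-match problem the question mentions (anshulg, anshuln, and so on matching a grep for anshul), you can compare the login field exactly instead of grepping; a minimal sketch, passing the username in with -v (the variable name u is just a placeholder):
> awk -v u="anshul" '$3 == u {a = a "," $4} END{print u ": " substr(a, 2)}' temp
anshul: 11,12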
