I want to create a bash script which will take entries from a text file and process them based on the requirement.
I am trying to find out the last login time for each of the mail accounts. I have a text file, email.txt, that contains all the email addresses line by line, and I need to check the last login time for each of these accounts using the command below:
cat /var/log/maillog | grep 'mail1@address.com' | grep 'Login' | tail -1 > result.txt
So the result would be inside result.txt
Thanks,
I think you are looking for something like this.
while read email_id ; do grep "$email_id" /var/log/maillog | grep 'Login' | tail -1 >> result.txt ; done < email.txt
To look up all the email addresses in a single pass (rather than repeatedly scanning what is probably a very large log file):
grep -f file_with_email_addrs /var/log/maillog
Whether this is worthwhile depends on the size of the log file and the number of e-mail addresses you are looking up. A small log file or a small number of users will probably render this moot. But a big log file and a lot of users will make this a good first step, which will then require some extra processing.
I am unsure about the format of the log lines or the relative distribution of Logins in the file, so what happens next depends on more information.
# If logins are rare?
grep Login /var/log/maillog | grep -f file_with_email_addrs | sort -u -k <column of email>
# If user count is small
grep -f file_with_email_addrs /var/log/maillog | awk '/Login/ {S[$<column of email>] = $0;} END {for (loop in S) {printf("%s\n",S[loop]);}}'
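For illustration only, assuming (hypothetically) that the email address is field 7 of each maillog line, the single-pass version that keeps just the last Login line per address might look like the sketch below; adjust $7 to match your actual log format.
# Sketch: $7 is an assumed field position for the email address, not a known maillog layout
grep 'Login' /var/log/maillog | grep -Ff file_with_email_addrs |
  awk '{ last[$7] = $0 } END { for (addr in last) print last[addr] }' > result.txt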
Your question is unclear; do you want to look for the last login for 'mail_address1', which is read from a file?
If so:
#!/bin/bash
while read mail_address1
do
    # Your command line here
    cat /var/log/maillog | grep "$mail_address1" | grep 'Login' | tail -1 >> result.txt
done < file_with_email_addrs
PS: I've changed the answer to reflect the suggestions (extremely valid ones) in the comments that advise against using a for i in $(cat file) style loop to loop through a file.
STEP1
Like I said in the title, I would like to save the output of
tail -f example | grep "DESIRED"
to a different file.
I have tried
tail -f example | grep "DESIRED" | tee -a different
tail -f example | grep "DESIRED" >> different
but none of them works,
and I have searched similar questions and read several experts suggesting buffering-related fixes,
but I cannot use them.
Is there any other way I can do it?
STEP2
Once the above is done, I would like to make "different" (the file name from above) time-varying. I want to change its name every 30 minutes.
For example like
20221203133000
20221203140000
20221203143000
...
I have tried
tail -f example | grep "DESIRED" | tee -a $(date +%Y%m%d%H)$([ $(date +%M) -lt 30 ] && echo 00 || echo 30)00
The problem is that since I have not solved the first step, I could not test the second step. But I think this command would only create one file, named based on the time I run the command. Could I please get some advice?
The code below should do what you want.
Some explanations: as you want bash to execute some "code" (in your case, dumping to a different file name), you need two things running in parallel: the tail + grep, and the code that decides where to dump.
To connect the two processes I use a named fifo (created with mkfifo): what tail + grep writes (using > tmp_fifo) is read by the while loop (using < tmp_fifo). Once inside the while loop, you are free to output to whatever file name you want.
Note: without --line-buffered (as in your question) grep will still work, it will just wait until it has more data (probably 8k) before dumping it to the file. So if not much data is generated in "example", nothing gets written until there is enough.
rm -f tmp_fifo
mkfifo tmp_fifo
(tail -f input | grep --line-buffered TEXT_TO_CHECK > tmp_fifo &)
while read LINE; do
    CURRENT_NAME=$(date +%Y%m%d%H)
    # or any other code that determines to what file to dump ...
    echo "$LINE" >> "${CURRENT_NAME}"
done < tmp_fifo
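For step 2 of the question, the target file name can simply be recomputed for every line read, so the output rolls over to a new file at each half-hour boundary. A minimal sketch, reusing the rounding idea and the file/pattern names from the question:
rm -f tmp_fifo
mkfifo tmp_fifo
(tail -f example | grep --line-buffered DESIRED > tmp_fifo &)
while read LINE; do
    # round the current time down to the nearest half hour, e.g. 20221203133000
    if [ "$(date +%M)" -lt 30 ]; then MIN=00; else MIN=30; fi
    CURRENT_NAME="$(date +%Y%m%d%H)${MIN}00"
    echo "$LINE" >> "$CURRENT_NAME"
done < tmp_fifo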
I created a script called monitornsuaccounts.sh that should append its output file to useraccountstatus.log. useraccountstatus.log is in the directory /var/local/nsu/logs/.
The output of this script should state every username and the following information about each: username, last login, the user's home directory, and associated groups. Preferably the information should be laid out in columns.
The command I use for the usernames is sudo cat /etc/passwd | grep ‘/home’. last is used to find the last login of each user, and groups to find the groups of each user. When I run the script, the output file only shows the data for my current user rather than for all users. Any recommendations would be greatly appreciated.
#!/bin/bash
usernames=sudo cat /etc/passwd | grep ‘/home’
echo “$usernames” > /home/daniel/names.txt
mlast=$(cat names.txt | xargs -n1 last)
mgroup=$(cat names.txt | xargs -n1 groups)
cat names.txt > /var/local/nsu/logs/useraccountstatus.log
echo “$mlast” >>/var/local/nsu/logs/useraccountstatus.log
echo “$mgroup” >>/var/local/nsu/logs/useraccountstatus.log
There are a lot of issues in your script.
Your definition of users. Are you sure that this is what you want? For example: root does not have a directory under /home.
Watch your quotes. cat /etc/passwd | grep ‘/home’ returns nothing, while cat /etc/passwd | grep 'home' returns a list of stanzas in /etc/passwd
You'll probably want just a list of usernames, not a list of stanzas. Something along the line of
cat /etc/passwd | grep 'home' | sed 's/:.*//'
Why sudo in sudo cat /etc/passwd?
Look at your assignment in the
usernames=sudo cat /etc/passwd | grep ‘/home’
This does not make sense. You might try to do a
usernames=`sudo cat /etc/passwd | grep '/home'| sed 's/:.*//'`
And that is just the first line of the script.
Anyway, if your script does not work as intended, you will need to do some debugging. First question, especially if you are inexperienced, is "do the commands that I write give the result that I expect?" So in your case, you should have tried cat /etc/passwd | grep ‘/home’ and you would have seen that it does not give you the expected results. Even with the correct quotes, you'll get a list of stanzas, which is also not what you expected. Have you looked at /home/daniel/names.txt and was the content of the file what you wanted? I guess not: it was empty.
Just a quick hint to get you started in the right direction (although there are still some issues, and people might object to the backticks):
#!/bin/bash
usernames=`sudo cat /etc/passwd | grep '/home'| sed 's/:.*//'`
mlast=`echo $usernames | xargs -n1 last`
mgroup=`echo $usernames| xargs -n1 groups`
echo $usernames > /var/local/nsu/logs/useraccountstatus.log
echo "$mlast" >>/var/local/nsu/logs/useraccountstatus.log
echo "$mgroup" >>/var/local/nsu/logs/useraccountstatus.log
You will want to polish this and make the output more useful.
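As a rough sketch of what that polishing might look like (my own take, not the only way: it assumes the log path from the question and that last and groups behave as on a typical Linux box), you could print one formatted line per user:
#!/bin/bash
logfile=/var/local/nsu/logs/useraccountstatus.log
printf '%-16s %-40s %-25s %s\n' "USER" "LAST LOGIN" "HOME" "GROUPS" > "$logfile"
grep '/home' /etc/passwd | while IFS=: read -r user _ _ _ _ home _; do
    # last -n 1 prints the most recent login record for the user (empty if never logged in)
    lastlogin=$(last -n 1 "$user" | head -n 1)
    printf '%-16s %-40s %-25s %s\n' "$user" "${lastlogin:-never logged in}" "$home" "$(groups "$user" 2>/dev/null)" >> "$logfile"
done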
I need to retrieve last 100 lines of logs from the log file.
I tried the sed command
sed -n -e '100,$p' logfilename
Please let me know how can I change this command to specifically retrieve the last 100 lines.
You can use the tail command as follows:
tail -100 <log file> > newLogfile
Now the last 100 lines will be present in newLogfile.
EDIT:
More recent versions of tail, as mentioned by twalberg, use the command:
tail -n 100 <log file> > newLogfile
"tail" is command to display the last part of a file, using proper available switches helps us to get more specific output. the most used switch for me is -n and -f
SYNOPSIS
tail [-F | -f | -r] [-q] [-b number | -c number | -n number] [file ...]
Here
-n number :
The location is number lines.
-f : The -f option causes tail to not stop when end of file is reached, but rather to wait for additional data to be appended to the input. The -f option is ignored if the standard input is a pipe, but not if it is a FIFO.
Retrieve last 100 lines logs
To get last static 100 lines
tail -n 100 <file path>
To get real time last 100 lines
tail -f -n 100 <file path>
You can simply use the following command:
tail -NUMBER_OF_LINES FILE_NAME
e.g. tail -100 test.log
will fetch the last 100 lines from test.log
In case you want the output of the above in a separate file, you can use redirection as follows:
tail -NUMBER_OF_LINES FILE_NAME > OUTPUT_FILE_NAME
e.g. tail -100 test.log > output.log
will fetch the last 100 lines from test.log and store them in a new file, output.log.
Look, the sed script that prints the last 100 lines can be found in the documentation for sed (https://www.gnu.org/software/sed/manual/sed.html#tail):
$ cat sed.cmd
1! {; H; g; }
1,100 !s/[^\n]*\n//
$p
h
$ sed -nf sed.cmd logfilename
For me it is way more difficult than your script, so
tail -n 100 logfilename
is much, much simpler. And it is quite efficient: it will not read the whole file if that is not necessary. See my answer with an strace report for tail ./huge-file: https://unix.stackexchange.com/questions/102905/does-tail-read-the-whole-file/102910#102910
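If you want to verify that yourself on Linux, you can trace the system calls and watch tail seek near the end of the file instead of reading all of it (the file name here is just an example):
# tail lseek()s close to the end and only read()s the final blocks
strace -e trace=openat,lseek,read tail -n 100 ./huge-file > /dev/null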
I know this is very old, but posting for whoever it may help.
less +F my_log_file.log
That's just the basics; with less you can do much more powerful things. Once you are viewing the log you can search, jump to a line number, search for a pattern, and much more, plus it is faster for large files.
It's like vim for logs (totally my opinion).
Original less documentation: https://linux.die.net/man/1/less
less cheat sheet: https://gist.github.com/glnds/8862214
len=`cat filename | wc -l`
l=$(( $len - 99 ))
sed -n "${l},${len}p" filename
The first line takes the length (total lines) of the file.
Since we have to fetch the last 100 records, subtract 99 from the total length to get the starting line.
Then just put the variables in the sed command to fetch the last 100 lines from the file.
I hope this will help you.
Is there a way to figure out where in a file a program is reading from? It seems like might be doable with strace or dtrace?
To clarify the question and give motivation, say I have a 10GB log file and am counting the number of unique lines:
$ cat log.txt | sort | uniq | wc -l
Can I check where in the file cat currently is, effectively giving the progress of the command? Using lsof, I can't seem to get the offset of the last read, which I think is what would do the trick:
$ lsof log.txt
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
cat 16021 erik 3r REG 0,22 13416118210 1078133219
Edit: I apologize, the example I gave is too narrow and misses the point. Ideally, for an arbitrary program, I would like to see where in the file reads are occurring (regardless of pipe).
You can do what you want with the progress command. It shows the progress of coreutils tools such as cat, as well as other programs, in reading their files.
File and offset information is available in Linux in /proc/<PID>/fd and /proc/<PID>/fdinfo.
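For example, on Linux you can read the offset by hand from /proc; the PID lookup and fd number below are illustrative (the question's lsof output shows log.txt open on fd 3):
pid=$(pgrep -f 'cat log.txt')      # PID of the reader
ls -l /proc/"$pid"/fd              # shows which fd points at log.txt (here: 3)
grep pos: /proc/"$pid"/fdinfo/3    # 'pos:' is the current byte offset into the file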
Instead of cat:
pv log.txt | sort | uniq | wc -l
Piping with pv:
SIZE=$( ls -l log.txt | awk '{print $5}'); cat log.txt | sort | pv -s $SIZE | uniq | wc -l
If the example is truly your use case, then I'd recommend pipe viewer.
I have a Linux driver running in the background that is able to return the current system data/stats. I view the data by running a console utility (let's call it dump-data) in a console. All data is dumped every time I run dump-data. The output of the utility is like below
Output:
- A=reading1
- B=reading2
- C=reading3
- D=reading4
- E=reading5
...
- variableX=readingX
...
The list of readings returned by the utility can be really long. Depending on the scenario, certain readings would be useful while everything else would be useless.
I need a way to grep only the useful readings, whose names might have nothing in common (via a bash script). I.e., sometimes I'll need to collect A, D, and E; other times I'll need C, D, and E.
I'm attempting to graph the readings over time to look for trends, so I can't run something like this:
# forgive my pseudocode
Loop
dump-data | grep A
dump-data | grep D
dump-data | grep E
End Loop
to collect A, D, and E, as that would actually give me readings from 3 separate calls of dump-data, which would not be accurate.
If you want to save all the grep results in the same file, you can just join all the expressions into one:
grep -E 'expr1|expr2|expr3'
But if you want to have results (for expr1, expr2 and expr3) in separate files, things are getting more interesting.
You can do this using tee >(command).
For example, here I process the same pipe with three different commands:
$ echo abc | tee >(sed s/a/_a_/ > file1) | tee >(sed s/b/_b_/ > file2) | sed s/c/_c_/ > file3
$ grep "" file[123]
file1:_a_bc
file2:a_b_c
file3:ab_c_
But the command seems to be too complex.
I would rather save the dump-data results to a file and then grep it.
TEMP=$(mktemp /tmp/dump-data-XXXXXXXX)
dump-data > ${TEMP}
grep A ${TEMP}
grep B ${TEMP}
grep C ${TEMP}
You can use dump-data | grep -E "A|D|E". Note the -E option of grep. Alternatively you could use egrep without the -E option.
You can simply use:
dump-data | grep -E 'A|D|E'
awk '/MY PATTERN/{print > "matches-"FILENAME;}' myfile{1,3}
Thanks to Guru at Stack Exchange.