I would like to run a shell script to parse a dynamic log file in a folder and search for the keyword "error" in the latest log file - Linux

I'm currently moving from Windows administration to Linux. I'm trying to monitor a folder where new log files are created every day.
For example:
log-02-17-2023
log-02-18-2023
How can we figure out the latest file in that folder using its modified time, and after finding the latest log file, parse it and search for the keyword "error"?
I kind of have something written in Perl, but it's still in progress. Is it possible to execute a *.sh file that shows me every match for the keyword "error" in the latest file, along with its line number, and echoes/prints the complete line containing the error?
If I do this, I can get it to display the file names that contain the keyword "error", but I also want it to display the matching line itself, and it should pick only the latest log file.
$ find /var/monitor/ -type f -name "*" -exec grep -l "error" {} \+ 2>/dev/null
Output:
/var/monitor/logfile
/var/monitor/logfile-16022023
Is it possible to get a simple sh script that does this? I know this is not a code-writing forum, but any quick help would be greatly appreciated, and I promise to mark the answer immediately. Thanks in advance.

Try this:
ls -dt /var/monitor/* | head -1 | xargs grep -ns "error"
ls -t sorts the files by modification time, newest first; head -1 keeps only the latest one; and grep -n prints each matching line with its line number (-s just silences error messages). To put it in a shell.sh:
#!/bin/bash
latest=$(ls -dt /var/monitor/* | head -1)
grep -ns "error" "$latest"
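If the folder may contain oddly named files, a more robust variant (a sketch, assuming GNU find and xargs) sorts explicitly by modification time instead of trusting the order of ls output:
find /var/monitor -maxdepth 1 -type f -printf '%T@ %p\n' \
    | sort -nr | head -1 | cut -d' ' -f2- \
    | xargs -d '\n' grep -n "error"
# %T@ prints the modification time in seconds since the epoch, so
# sort -nr puts the newest file first; cut drops the timestamp, and
# xargs -d '\n' keeps paths containing spaces intact.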

Related

How to run grep from a bash script and store the output in a file in the destination directory

I am trying to filter lines out of a file through a bash script. I am able to find the path of the file from the script's location by running the command
Fgff=`find $D -maxdepth 1 -type f -name "*.gff"`
I can add a column to the found .gff file by running the command
sed -i '1 s/$/\tsample/; 1! s/$/\t'${D##*/}'/' $Fpsi
However, if I try to filter the file and write the output to another file in the same folder, it's not working:
grep 'ENSG00000155657\|ENSG00000198947' $Fgff > "$Fgff$filtered"
I want to know why grep is not working.
How can I filter all the lines containing the substring ENSG00000155657 or ENSG00000198947 in the file ./dira/dirb/apple.gff and store them in ./dira/dirb/applefiltered.gff?
Thanks
Providing that your $Fgff contains the correct filename, the pattern itself is fine: in GNU grep's basic regex syntax, \| means alternation, so 'ENSG00000155657\|ENSG00000198947' matches lines containing either ID. The real problem is the redirection target: $filtered is never set, so "$Fgff$filtered" expands to just "$Fgff", and the shell truncates your input file to zero length before grep ever reads it. Build the output filename explicitly instead.
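A minimal sketch of the fix, reusing the variables from the question (${Fgff%.gff} strips the extension, so ./dira/dirb/apple.gff becomes ./dira/dirb/applefiltered.gff):
Fgff=$(find "$D" -maxdepth 1 -type f -name "*.gff")
grep 'ENSG00000155657\|ENSG00000198947' "$Fgff" > "${Fgff%.gff}filtered.gff"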

Using cut in Linux Mint Terminal more precisely

In the directory /usr/lib on Linux Mint there are, among other things, files that go by the name xxx.so.d, where xxx is their name and d is a number. The assignment is to find all files with the .so ending and write out their name, xxx. The code I have so far is
ls | grep "\.so\." | cut -d "." -f 1
The problem is that cut cuts some filenames short. As an example, there is a file called libgimp-2.0.so.0, where the wanted output would be libgimp-2.0, since that part is in front of .so.
Is there any way to make cut cut at ".so" instead of at the first "."?
The answer given by pacholik can give you wrong files (i.e. 'xyz.socket' will appear in your list). To correct his script:
for i in *.so.*; do echo "${i%%.so*}"; done
Another way to do this (easier to read in my opinion) is to use a little Perl:
ls | grep "\.so\." | perl -ne 'print((split(/\.so/))[0], "\n")'
Sorry, I don't think there is a way to use only "cut" as you asked.
for i in *.so*; do echo "${i%.so*}"; done
just a bash parameter substitution
http://www.tldp.org/LDP/abs/html/parameter-substitution.html
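A quick illustration of the difference between % (shortest match removed from the end) and %% (longest match), using the filenames from this thread:
f=libgimp-2.0.so.0
echo "${f%.so*}"     # -> libgimp-2.0
f=xxx.so.so
echo "${f%.so*}"     # shortest match: only the trailing .so goes -> xxx.so
echo "${f%%.so*}"    # longest match: everything from the first .so -> xxx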
Just use sed instead:
ls | grep -v "\.socket" | grep "\.so" | sed "s/\.so.*//"
This will delete everything after the first .so found in the file names, so files named xxx.so.so would also work.
Depending on the size of the directory, using find could be the best option. As a starting point, give this a try:
find . -iname "*.so.*" -exec basename {} \; | sed 's/\.so\..*//'
Besides cut there are many other options, like sed and awk, that can in some cases achieve the same result in a faster way; here sed avoids cutting libgimp-2.0 short at its first dot.

Copying files from text list

I have a list of sample names in text format e.g.
Sample1
Sample2
etc....
I'm trying to find and copy files with these names and a specific extension using the one-liner below:
find ./ | egrep fq.gz | fgrep -f list.txt | perl -ne 'chomp; system "cp $_ /data/copy_of_files/"'
No errors are thrown up but nothing is copied.
This line works until I pass the output to Perl (the list of correct files prints in the terminal from the fgrep), so I think my issue is with the Perl section...
Any suggestions?
Both your original command and this command worked for me:
find . -name '*.gz' | fgrep -f list.txt | \
perl -ne 'chomp; system("cp $_ <DIR>");'
Have you verified that your user or group has write permission on /data/copy_of_files?
Suggestions:
- Don't use Perl here; instead, create a pipeline that just spits out the cp commands (see the sketch below). As soon as you're satisfied with what you see, append | sh -x.
- Be absolutely sure that none of your file names contain whitespace or other characters special to the shell. If some do, but only a little (e.g. only spaces), you may get by with appropriate quoting in the cp commands, but if anything is possible in filenames, a different approach will be required, and I would probably write the whole thing in Perl using File::Find::Rule.
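A sketch of the first suggestion, assuming (as the question's egrep implies) that the wanted files end in fq.gz and that no filename contains whitespace:
find . -name '*fq.gz' | fgrep -f list.txt | awk '{print "cp", $0, "/data/copy_of_files/"}'
# once the generated commands look right, append: | sh -x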

Need help editing multiple files using sed in linux terminal

I am trying to do a simple operation here, which is to cut a few characters from one file (style.css) and do a find-and-replace on another file (client_custom.css), for more than 100 directories with different names.
When I use the following command
for d in */; do sed -n 73p ~/assets/*/style.css | cut -c 29-35 | xargs -I :hex: sed -i 's/!BGCOLOR!/:hex:/' ~/assets/*/client_custom.css $d; done
It keeps giving me the following error for all the directories
sed: couldn't edit dirname/: not a regular file
I am confused about why it's giving me that error message when I explicitly gave the full path to the file. It works perfectly fine without a for loop.
Can anyone please help me out with this issue?
sed doesn't support folders as input.
for d in */;
puts folders into $d. If you write sed ... $d, then bash will pass the folder name as an argument to sed, and the poor tool will be confused.
Also, ~/assets/*/client_custom.css will expand to all the files matching this pattern, so sed will be called once with all the file names. You probably want to invoke sed once per file name.
Try
for f in ~/assets/*/client_custom.css; do
... | sed -i 's/!BGCOLOR!/:hex:/' "$f"
done
or, even better:
for f in ~/assets/*/client_custom.css; do
... | sed 's/!BGCOLOR!/:hex:/' "${f}.in" > "${f}"
done
(which doesn't overwrite the input file). This way, you can keep the "*.in" files, edit them with the patterns and then use sed to "expand" all the variables.
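Putting it together, a sketch of the whole loop, assuming (as in the question) that line 73 of each style.css carries the colour and characters 29-35 are the hex value:
for d in ~/assets/*/; do
    # pull the hex colour out of this directory's style.css
    hex=$(sed -n '73p' "${d}style.css" | cut -c 29-35)
    # substitute it for the placeholder in the matching client_custom.css
    sed -i "s/!BGCOLOR!/$hex/" "${d}client_custom.css"
done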

E-Mail Directory Listing in Linux [duplicate]

This question already has answers here:
How do I send a file as an email attachment using Linux command line?
(25 answers)
Closed 8 years ago.
I am on Linux OS 2.1. Options like mailx or uuencode are not configured on these servers. My objective is to email the list of files (with date and time) from a directory that were updated within one day. I have managed to write a script that lets me do that, but when I receive the email, all the lines appear as one continuous output; there are no breaks, because Outlook ignores the line breaks. This list has to go to some big users, and I can't ask them all to fix the setting in Outlook that removes the line breaks. Can this be achieved from the script I am using?
This is the script that I am using.
#!/bin/bash
dir=/path-to-dir
cd "$dir"
find . -maxdepth 1 -type f -mtime -1 -exec ls -lrth {} \; > /tmp/filelist
awk -F/ '{print $1,$2}' /tmp/filelist | awk '{print $6,$7,$8,$10}' | mail -s "Today's Directory List" email@address.com
I have to send this directory list once a day, so I will set up a cron job to execute the script.
I even tried sending the file as an attachment, but uuencode is not configured on the server.
Hence I am looking for help with this.
Thanks
Your issue may be the end-of-line character difference between Unix and Windows.
Try changing:
awk '{print $6,$7,$8,$10}'
To:
awk '{print $6,$7,$8,$10,"\r"}'
and see if that helps.
Add 2 extra spaces at the beginning of each line to trick Outlook into not removing line-breaks. You can do this easily in the last awk of the pipeline.
It's probably worth including the "\r" suggested in another answer as well, so that each line has the CR-LF terminator that Outlook expects.
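For example, both tweaks can be folded into the last awk of the pipeline (a sketch based on the script in the question; the leading string adds the two spaces, the trailing "\r" gives CR-LF):
awk '{print "  " $6,$7,$8,$10 "\r"}'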
So this is the final script, which is working fine for me. Thanks to Emmet for suggesting the extra spaces in front to trick Outlook.
#!/bin/bash
dir=/path-to-dir
cd "$dir"
find . -maxdepth 1 -type f -mtime -1 -exec ls -lrth {} \; | awk -F/ '{print $1,$2}' | awk '{print $6,$7,$8,$10,"\r"}' > /tmp/filelist
awk '$0="  "$0' /tmp/filelist > /tmp/list.txt
mail -s "Today's Directory List" email@address.com < /tmp/list.txt
Thanks again everyone.
