Storing application logs for a day - linux

I have an application running in production that writes log files. The maximum log file count is set to 10, and the maximum size is set so that a new log file is created once the current one reaches 6 MB.
So the logs roll over with file names like:
<file_name>.log
<file_name>.log.1
<file_name>.log.2
...
<file_name>.log.10
My problem is that these 10 log files only ever hold about 15 minutes of logs.
I know I can update my code base to use DailyRollingFileAppender, but what I'm looking for is a short-term solution to keep a full day of logs with no (or minimal) code/configuration changes. For example, maybe I can achieve this via a cron job or a Linux command.
Note:- I'm running this application on Linux OS in production.
Any quick help is highly appreciated.
~Thanks

You can do this by creating a shell script and adding it to your cron jobs:
#!/bin/bash
# Archive the rolled-over logs into a timestamped folder, alert if too few
# files were collected, then zip the folder and prune old archives.
NOW_DATE=$(date +"%m-%d-%Y-%H-%M")
cd /var/log/OLD_LOGS
mkdir /var/log/OLD_LOGS/$NOW_DATE
cd /var/log/
mv server.log.* /var/log/OLD_LOGS/$NOW_DATE/
mv *.zip /var/log/OLD_LOGS/$NOW_DATE/
cp server.log /var/log/OLD_LOGS/$NOW_DATE/
cd /var/log/OLD_LOGS/$NOW_DATE
# "ls -l | wc -l" also counts the "total" header line, hence the -le 1 check
x=$(ls -l | wc -l)
if [ $x -le 1 ]; then
    SUBJECT="There is an issue with generating server log - less number of files"
    EMAIL="support@abc.com"
    EMAILMESSAGE="/tmp/errormsg.txt"
    /bin/mail -s "$SUBJECT" "$EMAIL" < $EMAILMESSAGE
fi
cd /var/log/OLD_LOGS/
zip -r $NOW_DATE.zip $NOW_DATE
rm -r -f $NOW_DATE
# Remove archives older than 180 days
find /var/log/ -type f -mtime +180 -exec rm {} \;
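Assuming the script above is saved as, for example, /usr/local/bin/archive_logs.sh (the path is just a placeholder) and made executable, a crontab entry like the following runs it often enough that rolled-over files are archived before the appender overwrites them; with only ~15 minutes of history across the 10 files, every 10 minutes is a reasonable starting interval:
# crontab -e  (adjust the interval to how fast your logs roll over)
*/10 * * * * /usr/local/bin/archive_logs.sh >> /var/log/OLD_LOGS/archive_cron.log 2>&1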

If the application would otherwise have to create .log.11 and instead overwrites the oldest file, as your configuration dictates, there is no way to keep a full day of logs in those files alone.
What I understand is that the application logs so heavily that the 10 files together only cover the last 15 minutes before they are overwritten again.
The application's logging configuration should be changed so that it captures a full day of logs. Also, make sure to zip the archived files at regular intervals so that you save some disk space.

Related

How to run several nohup jobs at the same time?

I have a tool that trims content from a file. I need to run it on 33 files, and one file takes about 2 hours to process.
I want to run it 33 times simultaneously, because one instance uses one core and my machine has 128 cores.
So I wrote script:
#!/bin/bash
FILES=/home/ab/raw/*
for f in $FILES
do
base=${f##*/}
nohup /home/ab/trimmer -a /home/ab/trimmer/adapters.fa -o "OUT$base" $f
done
About the main line:
I run trimmer; -a is the file with patterns to delete, -o is the new output file (OUT + basename), and the last argument, $f, is the file being processed.
My intention was that the script would run separate tasks for each file.
But unfortunately, after running it, only one nohup job is launched at a time. In htop only one core is working at 100%.
How can I fix it?
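The loop blocks because nohup alone does not background a command; without a trailing &, the loop waits for each trimmer run to finish before starting the next file. A minimal sketch of the corrected loop (the per-file log redirect is an addition, not from the original, so 33 jobs don't all append to one nohup.out):
#!/bin/bash
FILES=/home/ab/raw/*
for f in $FILES
do
    base=${f##*/}
    # "&" sends each run to the background so all files are processed in parallel
    nohup /home/ab/trimmer -a /home/ab/trimmer/adapters.fa -o "OUT$base" "$f" > "trim.$base.log" 2>&1 &
done
wait    # optional: block here until every background job has finished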

How to monitor a directory with time and file using inotifywait?

I need to watch a directory. I can detect when a file is created in this directory, and it works:
inotifywait -m -r -e moved_to -e create "$DIR" --format "%f" | while read f
do
if [[ $f = *.csv ]] ; then
do something
fi
done
But if the file is not created, I need to send emails at 12:00 and 19:00, and at 19:00 I also need to kill the process.
So how can I monitor both the file and the system time using inotifywait? I tried using a double condition in the while loop, but it doesn't work.
Use incrontab(5) with incrond(8). Of course you need to install it, and combine it with test(1).
My opinion is that scripting languages like Guile, Python, or Lua are better suited for such tasks; you'll find inotify(7) extensions for all of them.
As for "I need to send emails at 12:00 and 19:00 (at 19:00)": that is more a job for crontab(5).
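A sketch of what those crontab(5) entries might look like; the mail recipient, the /tmp/csv_arrived marker file, and identifying the watcher via pkill are all assumptions, not details from the question:
# m  h  dom mon dow  command   (crontab -e)
# 12:00 - warn if no CSV has arrived yet; the watcher script is assumed to
#         "touch /tmp/csv_arrived" when it sees a .csv file
0 12 * * *  [ -e /tmp/csv_arrived ] || echo "No CSV received by 12:00" | mail -s "CSV missing" ops@example.com
# 19:00 - send the second warning, then stop the inotifywait watcher
0 19 * * *  [ -e /tmp/csv_arrived ] || echo "No CSV received by 19:00" | mail -s "CSV missing" ops@example.com
0 19 * * *  pkill -f 'inotifywait -m -r'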

Rsync files across a dodgy network link - hangs instead of timeout

I am trying to keep 3 large directories (9G, 400G, 800G) in sync between our home site and another in a land far, far away across a network link that is a bit dodgy (slow and drops occasionally). Data was copied onto disks prior to installation so the rsync only needs to send updates.
The problem I'm having is the rsync hangs for hours on the client side.
The smaller 9G job completed, the 400G job has been in limbo for 15 hours - no output to the log file in that time, but has not timed out.
What I've done to set this up (after reading many forum articles about rsync, rsync server, and --partial, since I am not really a system admin):
I set up an rsync server (/etc/rsyncd.conf) on our home system, entered it into xinetd, and wrote a script to run rsync on the distant server; the script loops if rsync fails, in an attempt to deal with the dodgy network. The rsync command in the script looks like this:
rsync -avzAXP --append root@homesys01::tools /disk1/tools
Note the "-P" option is equivalent to "--progress --partial"
I can see in the log file that rsync did fail at one point and the loop restarted rsync, data was transferred after that based on entries in the log file, but the last update to the log file was 15 hours ago, and the rsync process on the client is still running.
CNT=0
while [ 1 ]
do
rsync -avzAXP --append root@homesys01::tools /disk1/tools
STATUS=$?
if [ $STATUS -eq 0 ] ; then
echo "Successful completion of tools rsync."
exit 0
else
CNT=`expr ${CNT} + 1`
echo " Rsync of tools failure. Status returned: ${STATUS}"
echo " Backing off and retrying(${CNT})..."
sleep 180
fi
done
So I expected these jobs to take a long time; I expected to see the occasional failure message in the log files (which I have) and to see rsync restart (which it has). I was not expecting rsync to just hang for 15 hours or more with no progress and no timeout error.
Is there a way to tell if rsync on the client is hung versus dealing with the dodgy network?
I set no timeout in the /etc/rsyncd.conf file. Should I, and how do I determine a reasonable timeout setting?
I set rsync up to be available through xinetd, but I don't always see the "rsync --daemon" process running. It restarts if I run rsync from the remote system. But shouldn't it always be running?
Any guidance or suggestions would be appreciated.
To tell whether the rsync client is still working, keep the verbose option and send the output to a log file.
Change this line:
rsync -avzAXP --append root@homesys01::tools /disk1/tools
to:
rsync -avzAXP --append root@homesys01::tools /disk1/tools >>/tmp/rsync.log.`date +%F`
This produces one log file per day under /tmp. You can then use tail -f to follow the most recent log file; if it is still growing, rsync is still working.
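For example, to follow today's log (the date suffix simply mirrors the `date +%F` used in the redirect above):
tail -f /tmp/rsync.log.$(date +%F)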
See also "rsync - what means the f+++++++++ on rsync logs?" to understand more about what appears in the log.
I thought I would post my final solution, in case it helps anyone else. I added --timeout 300 and --append-verify. The timeout eliminates the case of rsync hanging indefinitely; the loop restarts it after the timeout expires. The --append-verify is needed so that rsync verifies the data already present in any partially transferred file before appending to it.
Note the following code is in a shell script and the output is redirected to a log file.
CNT=0
while [ 1 ]
do
rsync -avzAXP --append-verify --timeout 300 root@homesys01::tools /disk1/tools
STATUS=$?
if [ $STATUS -eq 0 ] ; then
echo "Successful completion of tools rsync."
exit 0
else
CNT=`expr ${CNT} + 1`
echo " Rsync of tools failure. Status returned: ${STATUS}"
echo " Backing off and retrying(${CNT})..."
sleep 180
fi
done
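As for the /etc/rsyncd.conf question: the daemon side also accepts a per-module timeout, which acts as a safety net if a client dies silently. A sketch of what that might look like (the module path is an assumption; only the [tools] module name comes from the question):
# /etc/rsyncd.conf on the home system
[tools]
    path = /export/tools     # assumed location of the tools tree
    read only = false
    timeout = 600            # drop a connection after 10 minutes with no I/O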

split scp of backup files to different smb shares based on date

I back up files to tar archives once a day, grab them from our Ubuntu servers using a backup shell script, and put them in a share. Our shares are only 5 TB each, but we can have several.
At the moment we need more space, as we keep 30 days' worth of tar files.
I need a method where the first 10 days go to share one, the next ten to share two, and the next 11 to share three.
Currently each server VM runs the following script to back up and tar the folders and place them in another folder, ready to be grabbed by the backup server:
#!/bin/bash
appname=myapp.com
dbname=mydb
dbuser=myDBuser
dbpass=MyDBpass
datestamp=`date +%d%m%y`
rm -f /var/mybackupTars/* > /dev/null 2>&1
mysqldump -u$dbuser -p$dbpass $dbname > /var/mybackups/$dbname-$datestamp.sql && gzip /var/mybackups/$dbname-$datestamp.sql
tar -zcf /var/mybackups/myapp-$datestamp.tar.gz /var/www/myapp > /dev/null 2>&1
tar -zcf /var/mydirectory/myapp-$datestamp.tar.gz /var/www/html/myapp > /dev/null 2>&1
I then grab the backups using a script on the backup server and put them in a share
#!/bin/bash
#
# Generate a list of myapps to grab
df|grep myappbackups|awk -F/ '{ print $NF }'>/tmp/myapplistlistsmb
# Get each app in turn
for APPNAME in `cat /tmp/myapplistlistsmb`
do
cd /srv/myappbackups/$APPNAME
scp $APPNAME:* .
done
I know this is a tough one, but I really need 3 shares with ten days' worth of backups in each share.
I do not anticipate changing the backup script that runs on each server VM;
only, maybe, the grabber script that puts the dated backups into the shares on the backup server.
Or am I wrong??
Any help would be great
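One way to approach this in the grabber script alone (so the per-VM backup scripts stay untouched) is to pick the destination share from the day encoded in each file's %d%m%y datestamp. A rough sketch; the share mount points (/srv/share1 ... /srv/share3) are placeholders, not paths from the question, and it only handles the .tar.gz archives:
#!/bin/bash
# Route each grabbed tar file to a share based on the day of month in its
# %d%m%y datestamp: days 01-10 -> share1, 11-20 -> share2, 21-31 -> share3.
for f in /srv/myappbackups/*/*-??????.tar.gz
do
    [ -e "$f" ] || continue    # skip if the glob matched nothing
    stamp=${f##*-}             # e.g. "140623.tar.gz"
    day=${stamp:0:2}           # first two digits are the day of month (%d)
    day=$((10#$day))           # force base 10 so "08"/"09" don't break arithmetic
    if   [ "$day" -le 10 ]; then dest=/srv/share1
    elif [ "$day" -le 20 ]; then dest=/srv/share2
    else                         dest=/srv/share3
    fi
    mv "$f" "$dest"/
done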

Linux - copying only new files from one server to another

I have a server where files are transferred via FTP to a location. Every file since the transfers began (January 2015) is still there.
I want to set up a new server and transfer the files over from the first server's location.
Basically, I need a cron job to run scp and transfer only new files since last run.
The SSH connection between the servers is working and I can transfer files between them without restriction.
How can I achieve this in Ubuntu?
The suggested duplicate doesn't apply because on my destination server I will keep just one file holding the date of the last cron run, and the files copied from the first server will be parsed and deleted afterwards.
rsync would simply make sure that all files exist on both servers, correct?
I managed to set up the cron job on the remote computer using the following.
First I created a timestamp file which keeps the time of the last cron run:
touch timestamp
Then I copy all files with ssh and scp commands:
ssh username@remote find <files_path> -type f -newer timestamp | xargs -i scp username@remote:'{}' <local_path>
Then I touch the timestamp file to update its modification time:
touch -m timestamp
The only problem with this approach is that if a file is copied to the remote host while the ssh command is running, before the timestamp is touched the second time, that file is ignored on later runs.
Later edit:
To make sure there is no gap between the timestamp and the actual run caused by the duration of the ssh command, the script was changed to:
touch timestamp_new
ssh username@remote find <files_path> -type f -newer timestamp | xargs -i scp username@remote:'{}' <local_path>
rm -rf timestamp
mv timestamp_new timestamp
