deletion of file after 7 days - linux

I have to delete multiple files after 7 days on a regular basis, and the deletion dates and locations are different for each file. Yes, I could apply a cronjob for each folder separately, but that would involve many cronjobs (at least 15).
In order to avoid this, I want to create a script which will go to each folder and delete the data.
For example:
-rw-r--r-- 1 csbackup other 20223605295 Jun 12 06:40 IO.tgz
As you can see, IO.tgz was created on 12/06/2015 at 06:40. Now I want to delete this file at 17/06/2015 00:00 hours. This is one reason I'm unable to use mtime, since that would delete the file exactly 7*24 hours after it was written.
I was thinking of comparing the file's timestamps, but the stat utility is not present on my machine, and it's not even letting me install it.
Can anyone please guide me with a script which I can use to delete files after n days?

You can make a list of directories you want to search in a file.
# cat file
/data
/d01
/u01/files/
Now you can use a for loop to go through those directories one by one and remove files older than 7 days.
for dir in $(cat file); do
    find "$dir" -type f -mtime +7 -exec rm -f {} +
done
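If the clean-up should happen at midnight and count whole calendar days, rather than firing exactly 7*24 hours after a file was written (the concern in the question above), one option is to compute an explicit cut-off date and compare against it with GNU find's -newermt test. A rough sketch, assuming GNU date and find and the same list file as above; review the matches with -ls before enabling the rm:
#!/bin/sh
# Run from cron at 00:00. "file" is the directory list shown above.
# The cut-off is a calendar date, so deletion is not tied to the exact
# hour and minute the file was created.
cutoff=$(date -d '7 days ago' +%Y-%m-%d)   # midnight seven days back; adjust as needed
for dir in $(cat file); do
    # Removes everything last modified at or before that midnight.
    # Swap "-exec rm -f {} +" for "-ls" first to see what would go.
    find "$dir" -type f ! -newermt "$cutoff" -exec rm -f {} +
done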

Related

How can I use a cronjob when another program makes the commands in the cronjob fail?

I have a cron job which cds into a directory and performs actions.
For example:
0 12,00 * * * cd /var/lib/test/0001 && cp *.zip /home/bobby/
However, the program that creates the .zip files in /var/lib/test/0001 changes the directory name every day. So on the second day, the directory is /var/lib/test/0002 and on the third day /var/lib/test/0003 and so on. This model cannot be changed.
Of course, when the directory migrates from 0001 to 0002, the cronjob fails.
Is there a way to use cron to cd into 000* and then 001* and so on so that the cp command will be run? Perhaps there is an alternative way? Thank you.
EDIT MARCH 13:
There is another issue that I am finding hard to solve.
I only want to cp files that are above a certain filesize. I want to copy .zip files to /home/bobby/ which are more than 28,000 bytes. If they are less than 28,000 bytes, then they don't get copied. How would I do this, thanks?
As before, this would happen in /var/lib/test/**** (where **** goes from 0000 to FFFF and increments every day).
You can do this with a sample script:
dir=$(ls -tr1 /var/lib/test | tail -1)
cd "/var/lib/test/$dir" && cp *.zip /home/bobby/
ls -tr1 lists the entries one per line, sorted by modification time in reverse order, so the last line is the most recently created directory; the script then changes into it and copies the zip files.
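For the edit about size, find's -size test with a c (bytes) suffix can do the filtering. A sketch along the same lines as the snippet above (GNU find assumed):
# copy only .zip files strictly larger than 28,000 bytes from the newest directory
dir=$(ls -tr1 /var/lib/test | tail -1)
find "/var/lib/test/$dir" -maxdepth 1 -type f -name '*.zip' -size +28000c \
    -exec cp {} /home/bobby/ \;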

mysqldump problem in Crontab and bash file

I have created a crontab entry to back up my DB every 30 minutes...
*/30 * * * * bash /opt/mysqlbackup.sh > /dev/null 2>&1
The crontab works well; every 30 minutes I get my backup with the script below.
#!/bin/sh
find /opt/mysqlbackup -type f -mtime +2 -exec rm {} +
mysqldump --single-transaction --skip-lock-tables --user=myuser --password=mypass mydb | gzip -9 > /opt/mysqlbackup/$(date +%Y-%m-%d-%H.%M)_mydb.sql.gz
But my problem is that the rm command that should delete old backups isn't working; nothing ever gets deleted. Do you know why?
Also, the name of my backup comes out as 2020-02-02-12.12_mydb.sql.gz?, so I always have a ? at the end of the file name. Do you know why?
Thank you for your help.
The question mark typically indicates a character that can't be displayed; the fact that it's at the end of a line makes me think that your script has Windows line endings rather than Unix. You can fix that with the dos2unix command:
dos2unix /path/to/script.sh
It's also good practice not to throw around MySQL passwords on the CLI or store them in executable scripts. You can accomplish this by using MySQL Option files, specifically the file that defines user-level options (~/.my.cnf).
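As a sketch (the credentials are just the placeholders from the question), such an option file would look like this, and should be readable only by its owner (chmod 600 ~/.my.cnf):
# ~/.my.cnf of the user that runs the cron job
[client]
user=myuser
password=mypass
mysqldump reads the [client] group, so the --user and --password options can then be dropped from the script.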
This would require us to figure out which user is executing that cronjob, however. My assumption is that you did not make that definition inside the system-level crontab; if you had, you'd actually be trying to execute /opt/mysqlbackup.sh > /dev/null 2>&1 as the user bash. This user most likely doesn't (and shouldn't) exist, so cron would fail to execute the script entirely.
As this is not the case (you say it's executing the mysqldump just fine), this makes me believe you have the definition in a user-level crontab instead. Once we figure out which user that actually is as I asked for in my comment, we can identify the file permissions issue as well as create the aforementioned MySQL options file.
Using find with mtime is not the best choice. If for some reason mysqldump stops creating backups, then in two days all backups will be deleted.
You can use my Python script "rotate-archives" for smart deletion of backups (https://gitlab.com/k11a/rotate-archives). The script adds the current date at the beginning of the file or directory name, like 2020-12-31_filename.ext, and subsequently uses this date to decide on deletion.
Running the script for your case:
rotate-archives.py test_mode=off age_from-period-amount_for_last_timeslot=0-0-48 archives_dir=/mnt/archives
In this case, 48 new archives will always be saved. Old archives in excess of this number will be deleted.
An example of more flexible archives deletion:
rotate-archives.py test_mode=off age_from-period-amount_for_last_timeslot=7-5,31-14,365-180-5 archives_dir=/mnt/archives
As a result, archives between 7 and 30 days old are kept with an interval of 5 days between them, archives between 31 and 364 days old with an interval of 14 days, and archives from 365 days old with an interval of 180 days and a count of 5.

Cron Job to auto delete folder older than 7 days Linux

I am having an issue storing my server backup on a storage VPS. My server is not deleting old backup folders, so the storage is getting full and the backup fails midway. My backup runs once every week.
Can anyone help me create a cron job script that deletes folders older than 7 days and runs one day before the backup to remove the old ones?
Any help appreciated.
For example, a crontab entry that deletes files older than 7 days under /path/to/backup/ every day at 4:02 AM looks like this:
02 4 * * * find /path/to/backup/* -mtime +7 -exec rm {} \;
Please make sure, before executing rm, that the targets are the intended files. You can check the targets by specifying -ls as the action for find:
find /path/to/backup/* -mtime +7 -ls
mtime refers to the last modification timestamp, so the results of find may not be the files you expect, depending on how the backup writes them.
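Since the question is about whole backup folders rather than individual files, a variant restricted to directories may be closer to what is needed. A sketch, assuming the backups are the top-level directories under /path/to/backup (again, check with -ls first):
# delete top-level backup directories older than 7 days, every day at 4:02 AM
02 4 * * * find /path/to/backup -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +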

Batch rename log files from one time format to another

I'm looking for a way to batch rename almost 1,000 log files created by an Eggdrop bot. A few years ago, I had to set up my bot from scratch and neglected to set the log format properly, so all of these files now have the format:
channelname.log.%d%b%Y (channelname.log.14Jan2014)
I want to rename all those files to match all my old log files, which are in the format of:
channelname.log.%Y%m%d (channelname.log.20140101)
I've already made the change in my eggdrop.conf file, but I would like to rename all the newer log files to match the format of the old ones.
This is on a Linux shell, so some sort of bash command would be ideal. Thanks!
# rename channelname.log.14Jan2014 -> channelname.log.20140114
find . -type f -name '*.log.*[^0-9]*' -print0 | while IFS= read -r -d '' logfile; do
    mv "${logfile}" "${logfile%.log.*}.log.$(date -d "${logfile##*.log.}" +%Y%m%d)"
done
Assuming it's in a locale date knows how to handle.
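It may be worth doing a dry run first; this variant only prints the mv commands it would execute:
find . -type f -name '*.log.*[^0-9]*' -print0 | while IFS= read -r -d '' logfile; do
    echo mv "${logfile}" "${logfile%.log.*}.log.$(date -d "${logfile##*.log.}" +%Y%m%d)"
done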

Debian: Cron bash script every 5 minutes and lftp

We have to run a script every 5 minutes to download data from an FTP server. We have the FTP script working, but now we want to download the data automatically every 5 minutes.
We can use: "0 * * * * /home/kbroeren/import.ch"
where import.ch is the FTP script that downloads the data files.
The point is that new data files become available on the FTP server every 5 minutes, sometimes with a minute of offset. It would be nice to download each file a couple of seconds after it becomes available on the FTP server. Maybe a function that checks the FTP folder whether the file is already available and then downloads it, and if not, retries again after about 10 seconds.
Another point to fix is the runtime of the FTP script. There are 12k files in the folder, and we only need the newest ones every time we run the script. Right now scanning the folder takes about 3 minutes, which is way too long. The filenames of all the data files contain a date and time; is it possible to build a dynamic filename so the right file is downloaded every 5 minutes?
Lots of questions; I hope someone can help me out with this!
Thank you
Kevin Broeren
Our FTP script:
#!/bin/bash
HOST='ftp.mysite.com'
USER='****'
PASS='****'
SOURCEFOLDER='/'
TARGETFOLDER='/home/kbroeren/datafiles'
lftp -f "
open $HOST
user $USER $PASS
lcd $SOURCEFOLDER
mirror --newer-than=now-1day --use-cache $SOURCEFOLDER $TARGETFOLDER
bye
"
find /home/kbroeren/datafiles/* -mtime +7 -exec rm {} \;
Perhaps you might want to give curlftpfs a try. Using this FUSE filesystem you can mount an FTP share into your local filesystem. If you do so, you won't have to download the files from FTP and you can iterate over the files as if they were local. You can give it a try following these steps:
# Install curlftpfs
apt-get install curlftpfs
# Make sure FUSE kernel module is loaded
modprobe fuse
# Mount the FTP Directory to your datafiles directory
curlftpfs USER:PASS@ftp.mysite.com /home/kbroeren/datafiles -o allow_other,disable_eprt
You are now able to process these files as you wish. You'll always have the most recent files in this directory. But be aware of the fact, that this is not a copy of the files. You are working directly on the FTP server. For example removing a file from /home/kbroeren/datafiles will remove it from the FTP server.
If this works for you, you might want to write this information into /etc/fstab to make sure the directory is mounted on each start of the machine:
curlftpfs#USER:PASS@ftp.mysite.com /home/kbroeren/datafiles fuse auto,user,uid=USERID,allow_other,_netdev 0 0
Make sure to change USERID to match the UID of the user who needs access to these files.
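Once the share is mounted this way, both the polling and the "only the newest file" parts of the question can be handled locally. A rough sketch, assuming GNU find and a hypothetical /home/kbroeren/processed target directory:
#!/bin/bash
# Wait up to ~2 minutes for a file modified within the last 5 minutes to
# appear in the mounted directory, then copy only the newest one and stop.
for attempt in $(seq 1 12); do
    newest=$(find /home/kbroeren/datafiles -maxdepth 1 -type f -mmin -5 -printf '%T@ %p\n' \
             | sort -n | tail -1 | cut -d' ' -f2-)
    if [ -n "$newest" ]; then
        cp "$newest" /home/kbroeren/processed/
        break
    fi
    sleep 10   # nothing new yet; retry in 10 seconds
done
Scheduled from cron with */5 * * * *, a script like this stays inside its 5-minute slot even when a file shows up a minute late.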
