crontab and deleting older and specific file names - cron

I have a script that makes files every day like:
last-2014-08-01.csv
out-2014-08-01.csv
Following this idea:
http://www.idevelopment.info/data/Oracle/DBA_tips/Unix/UNIX_7.shtml
30 12 * * * /u01/app/oracle/bin/rman_backup.pl > /u01/app/oracle/log/rman_backup_$(date +\%Y\%m\%d).log 2>&1
I came up with this idea for a cron job:
#daily find /app/calculo/api/last-$(date +\%Y-)*.csv -mtime +30 -delete
#daily find /app/calculo/api/out-$(date +\%Y-)*.csv -mtime +30 -delete
The problem is that it is not deleting any files when I test it.
Thanks

Can you double-check whether you quoted the percent sign correctly?
https://unix.stackexchange.com/questions/29578/how-can-i-execute-date-inside-of-a-cron-tab-job
In general I'd suggest writing a shell script which contains all the logic and just calling it from cron. You will be free of quoting problems, and debugging will also be easier.
#daily /path/to/cleanup-csv.bash
Be sure to set the $PATH environment variable correctly in your shell script; cron provides only a minimal set of directories in $PATH.
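As a rough sketch of what that script could look like, assuming the directory /app/calculo/api and the 30-day threshold from the question (the name cleanup-csv.bash is just a placeholder):
#!/bin/bash
# cleanup-csv.bash - delete CSV files older than 30 days.
# Set PATH explicitly, since cron provides only a minimal one.
PATH=/usr/local/bin:/usr/bin:/bin

# No % escaping is needed here; that is only required inside the crontab itself.
year=$(date +%Y)

find /app/calculo/api -name "last-${year}-*.csv" -mtime +30 -delete
find /app/calculo/api -name "out-${year}-*.csv" -mtime +30 -delete
Calling this from cron and redirecting its output to a log file also makes it much easier to see why nothing gets deleted.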

Related

Shell script does not run completely when run by cron

The script in the file modBackup.sh does not run completely when started by cron; the result is a corrupted tar.gz file that is a fraction of the size of the one I get when I run it manually. It still contains some content, but it cannot be opened normally, the archive is damaged.
file modBackup.sh:
#!/bin/sh
find /home/share/ -mmin -720 -type f -exec tar -rvf /mnt/archives/`date +%F`-modified.tar.gz "{}" +
The automatic run seems to be interrupted and never finishes.
When I run it manually, the script creates a valid archive named [current date]-modified.tar.gz
Here is the crontab -e:
00 18 * * 1-5 /home/myScripts/modBackup.sh
Edit:
There is no information in the logs except that crond has started: nothing in the mail log, nor in the cron log, nor in messages.
(I use a very old CentOS :( but I don't think this is the reason for the error.)
For testing only: I added %H%M to the file name in the script and did the following:
I ran it manually: sh /home/myScripts/modBackup.sh
and used crontab -e to run the same command two minutes later.
After a few minutes, two files appeared and grew at the same time, but then the one created by the cron job stopped growing.
I use the same GUI tool (Archive Manager) to open them in both cases.
The file created by manually starting the script opens, but the one from the cron job cannot be opened, even after I changed the extension; the error is 'unexpected EOF in archive'.
Suggestion: include the user's environment context, with $PATH and the other critical environment variables the application needs, in the script:
modBackup.sh:
#!/bin/sh
source ~/.profile
find /home/share/ -mmin -720 -type f -exec tar -rvf /mnt/archives/`date +%F`-modified.tar.gz "{}" +
I found that in the cron environment the "find" command misinterprets filenames containing specific characters, even after explicitly changing the encoding by adding "export LANG=en_US.UTF-8; LC_CTYPE=..." at the beginning of the script. With many other combinations and attempts I had no success.
That's why I dropped the "find" command and instead use tar with an option that archives recently modified files. This works perfectly now:
fromDate=$(date --date='15 hours ago')
/bin/tar -N "$fromDate" -zcf /mnt/archives/`date +%F-%H%M`-share.modified.tar.gz /home/share/

mysqldump problem in Crontab and bash file

I have created a crontab entry to back up my DB every 30 minutes...
*/30 * * * * bash /opt/mysqlbackup.sh > /dev/null 2>&1
The crontab works well: every 30 minutes I get my backup with the script below.
#!/bin/sh
find /opt/mysqlbackup -type f -mtime +2 -exec rm {} +
mysqldump --single-transaction --skip-lock-tables --user=myuser --password=mypass mydb | gzip -9 > /opt/mysqlbackup/$(date +%Y-%m-%d-%H.%M)_mydb.sql.gz
But my problem is that the rm command to delete old data isn't working; nothing is ever deleted. Do you know why?
Also, the name of my backup is 2020-02-02-12.12_mydb.sql.gz?
I always have a ? at the end of my file name. Do you know why?
Thank you for your help
The question mark typically indicates a character that can't be displayed; the fact that it's at the end of a line makes me think that your script has Windows line endings rather than Unix. You can fix that with the dos2unix command:
dos2unix /path/to/script.sh
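If you want to confirm that line endings really are the culprit before converting, a quick check (cat -A prints carriage returns visibly; the path is the script from your crontab):
cat -A /opt/mysqlbackup.sh | head -n 5    # Windows (CRLF) line endings show up as ^M$ at the end of each line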
It's also good practice not to throw around MySQL passwords on the CLI or store them in executable scripts. You can accomplish this by using MySQL Option files, specifically the file that defines user-level options (~/.my.cnf).
This would require us to figure out which user is executing that cronjob, however. My assumption is that you did not make that definition inside the system-level crontab; if you had, you'd actually be trying to execute /opt/mysqlbackup.sh > /dev/null 2>&1 as the user bash. This user most likely doesn't (and shouldn't) exist, so cron would fail to execute the script entirely.
As this is not the case (you say it's executing the mysqldump just fine), this makes me believe you have the definition in a user-level crontab instead. Once we figure out which user that actually is as I asked for in my comment, we can identify the file permissions issue as well as create the aforementioned MySQL options file.
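As a sketch of what that option file could look like, assuming the job runs from that user's crontab and reusing the myuser/mypass credentials from the question:
# ~/.my.cnf of the user whose crontab runs the backup
# Keep it private: chmod 600 ~/.my.cnf
[client]
user=myuser
password=mypass
mysqldump then reads the credentials from the option file, so the script no longer needs them on the command line:
mysqldump --single-transaction --skip-lock-tables mydb | gzip -9 > /opt/mysqlbackup/$(date +%Y-%m-%d-%H.%M)_mydb.sql.gz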
Using find with mtime is not the best choice. If for some reason mysqldump stops creating backups, then in two days all backups will be deleted.
You can use my Python script "rotate-archives" for smart delete backups. (https://gitlab.com/k11a/rotate-archives). The script adds the current date at the beginning of the file or directory name. Like 2020-12-31_filename.ext. Subsequently uses this date to decide on deletion.
Running the script for your case:
rotate-archives.py test_mode=off age_from-period-amount_for_last_timeslot=0-0-48 archives_dir=/mnt/archives
In this case, 48 new archives will always be saved. Old archives in excess of this number will be deleted.
An example of more flexible archives deletion:
rotate-archives.py test_mode=off age_from-period-amount_for_last_timeslot=7-5,31-14,365-180-5 archives_dir=/mnt/archives
As a result, archives from 7 to 30 days old will be kept with a 5-day interval between them, archives from 31 to 364 days old with a 14-day interval, and archives 365 days and older with a 180-day interval, limited to 5 archives.

linux bash find if not execute

I need to create a bash script to run another script command if none of the files in the directory have been created within 30 mins.
I am not sure of the code I need, but it has to find and execute the script if nothing matches, something like:
find /folder/to/watch/* if-not 30 mins -exec fixscript.sh or something?
When the script is run, I want it to check the files in the folder; if no file has been created within the last 30 minutes, then run fixscript.sh.
Thanks in advance
I don't think this can be combined into a single find statement. The following would work, with the caveat that a modified file would also be detected as "newer than 30 minutes".
if [ `find /folder/to/watch -mmin -30 | wc -l` -eq 0 ]; then
/path/to/fixscript.sh
fi
Linux/Unix does not have an independent file creation attribute. Some filesystems might have one, but it can't be accessed from the shell without C code and a call to stat(). The check above uses the "file modified" timestamp, which changes not only on file creation but also on every file edit.
In Linux you don't have metadata for creation time, but only for last modification time, last file status change and last access time.
You can do:
if test -z "$(find "your_file" -mmin +30)"
then
    path/to/your/script.sh
fi
The -mmin option tests against the last modification time.

I have to write an automation script

I have to back up and delete old log files every month. I will delete files older than 6 months and back up files older than 2 months as a zip file.
I am trying to write a script that will automate and do it every month instead of me doing it manually every time.
I have the UNIX commands to do it, but I need to put them into a script file that will run automatically on the specified day.
You can schedule a daily cron job which runs commands inside a script, such as:
find foldername -mtime +120 -name "*.log" -exec gzip {} \;
The above will take care of compressing all .log files older than 120 days. The part inside quotes after -name can be modified as per your requirement, and so can the +120.
find foldername -mtime +180 -name "*" -exec rm {} \;
The above will remove all files inside foldername older than 180 days.
For the automation part, you can look at the wiki link provided in the answer below, though I will include it in my answer too.
You can use crontab to schedule commands (https://en.wikipedia.org/wiki/Cron)
You can add an entry by typing crontab -e and use it to schedule jobs, after adding your UNIX commands to a script.
For example, if you have a /home/test/test.sh file, you can run it every day by adding the line below to your crontab:
0 0 * * * /home/test/test.sh
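Putting the pieces together, a minimal sketch of such a monthly job could look like this (monthly_cleanup.sh and the folder path are placeholders; the +120/+180 day thresholds are the ones used in the find commands above):
#!/bin/sh
# monthly_cleanup.sh - delete very old logs first, then compress the remaining old ones.
LOGDIR=/path/to/foldername

# Remove everything older than 180 days so it is not compressed for nothing.
find "$LOGDIR" -type f -mtime +180 -exec rm {} \;

# Compress the remaining .log files older than 120 days.
find "$LOGDIR" -type f -mtime +120 -name "*.log" -exec gzip {} \;
A crontab entry that runs it at midnight on the first day of every month would then be:
0 0 1 * * /home/test/monthly_cleanup.sh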

Linux find command questions

I do not have a working Linux system to try these commands on, so I am asking here whether what I am planning to do is correct. (I am doing this while downloading an ISO over a connection that I think dial-up would beat.)
1, I am trying to find all files with the .log extension in the /var/log directory and sub-directories, writing standard output to logdata.txt and standard error to logerrors.txt
I believe the command would be:
$ find /var/log/ -name *.log 1>logdata.txt 2>/home/username/logs/logerrors.txt
2, Find all files with .conf in the /etc directory. standard out will be a file called etcdata and standard error to etcerrors.
$ find /etc -name *.conf 1>etcdata 2>etcerrors
3, find all files that have been modified in the last 30 minutes in the /var directory. standard out is to go into vardata and errors into varerrors.
Would that be:
$ find /var -mmin 30 1>vardata 2>varerrors
Are these correct? If not what am I doing wrong?
1, I am trying to find all files with the .log extension in the /var/log directory and sub-directories, writing standard output to logdata.txt and standard error to logerrors.txt
Here you go:
find /var/log/ -name '*.log' >logdata.txt 2>/home/username/logs/logerrors.txt
Notes:
You need to quote '*.log', otherwise the shell will expand it before passing it to find.
No need to write 1>file; >file is enough.
2, Find all files with .conf in the /etc directory. standard out will be a file called etcdata and standard error to etcerrors.
As earlier:
find /etc -name \*.conf >etcdata 2>etcerrors
Here I escaped the * another way, for the sake of an example. This is equivalent to '*.conf'.
3, find all files that have been modified in the last 30 minutes in the /var directory. standard out is to go into vardata and errors into varerrors.
find /var -mmin -30 >vardata 2>varerrors
I changed -mmin 30 to -mmin -30. This way it matches files modified within the last 30 minutes. Otherwise it matches only files that were modified exactly 30 minutes ago.
When using wildcards in the command, you need to make sure that they do not get interpreted by the shell. So, it is better to include the expression with wildcards in quotes. Thus, the first one will be:
find /var/log/ -name "*.log" 1>logdata.txt 2>/home/username/logs/logerrors.txt
Same comment on the second one where you should have "*.conf".
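To illustrate why the quoting matters (report.log here is just a made-up file name): if the current directory happens to contain a file matching the pattern, the shell expands the wildcard before find ever sees it:
$ touch report.log                # a stray .log file in the current directory
$ find /var/log -name *.log       # the shell turns this into: find /var/log -name report.log
$ find /var/log -name '*.log'     # quoted: find receives the pattern itself and matches inside /var/log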

Resources