I have created a crontab entry to back up my DB every 30 minutes:
*/30 * * * * bash /opt/mysqlbackup.sh > /dev/null 2>&1
The crontab works well: every 30 minutes I get my backup from the script below.
#!/bin/sh
find /opt/mysqlbackup -type f -mtime +2 -exec rm {} +
mysqldump --single-transaction --skip-lock-tables --user=myuser --password=mypass mydb | gzip -9 > /opt/mysqlbackup/$(date +%Y-%m-%d-%H.%M)_mydb.sql.gz
But my problem is that the rm command to delete old data isn't working; nothing is ever deleted. Do you know why?
Also, the name of my backup comes out as 2020-02-02-12.12_mydb.sql.gz?
I always have a ? at the end of my file name. Do you know why?
Thank you for your help
The question mark typically indicates a character that can't be displayed; the fact that it's at the end of a line makes me think that your script has Windows line endings rather than Unix. You can fix that with the dos2unix command:
dos2unix /path/to/script.sh
It's also good practice not to throw around MySQL passwords on the CLI or store them in executable scripts. You can accomplish this by using MySQL Option files, specifically the file that defines user-level options (~/.my.cnf).
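A minimal sketch of such an option file (the user name and password are placeholders; mysqldump reads both the [client] and [mysqldump] sections):

```shell
# Sketch: create a user-level MySQL option file so the password stays off
# the command line and out of the backup script. Values are placeholders.
cat > "$HOME/.my.cnf" <<'EOF'
[mysqldump]
user=myuser
password=mypass
EOF
chmod 600 "$HOME/.my.cnf"   # keep it readable only by this user
```

With this in place, the script can call mysqldump without --user or --password.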
This would require us to figure out which user is executing that cronjob, however. My assumption is that you did not make that definition inside the system-level crontab; if you had, you'd actually be trying to execute /opt/mysqlbackup.sh > /dev/null 2>&1 as the user bash. This user most likely doesn't (and shouldn't) exist, so cron would fail to execute the script entirely.
As this is not the case (you say it's executing the mysqldump just fine), this makes me believe you have the definition in a user-level crontab instead. Once we figure out which user that actually is as I asked for in my comment, we can identify the file permissions issue as well as create the aforementioned MySQL options file.
Using find with mtime is not the best choice. If for some reason mysqldump stops creating backups, then in two days all backups will be deleted.
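One hedged alternative is to rotate by count instead of age, so a fixed number of backups always survives even if new dumps stop appearing. This is only a sketch; the directory, filename pattern, and keep-count are assumptions, and it assumes filenames contain no newlines or spaces:

```shell
# Keep only the $2 newest *_mydb.sql.gz files in directory $1.
rotate_by_count() {
    dir=$1 keep=$2
    # ls -1t lists newest first; everything after the first $keep entries
    # is piped to rm (xargs -r does nothing if the list is empty)
    ls -1t "$dir"/*_mydb.sql.gz 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm -f --
}

# e.g. after the dump in the cron script:
# rotate_by_count /opt/mysqlbackup 96   # 48 hours of twice-hourly dumps
```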
You can use my Python script "rotate-archives" for smart backup rotation (https://gitlab.com/k11a/rotate-archives). The script adds the current date at the beginning of the file or directory name, like 2020-12-31_filename.ext, and subsequently uses this date to decide on deletion.
Running the script for your case:
rotate-archives.py test_mode=off age_from-period-amount_for_last_timeslot=0-0-48 archives_dir=/mnt/archives
In this case, 48 new archives will always be saved. Old archives in excess of this number will be deleted.
An example of more flexible archives deletion:
rotate-archives.py test_mode=off age_from-period-amount_for_last_timeslot=7-5,31-14,365-180-5 archives_dir=/mnt/archives
As a result, archives from 7 to 30 days old are kept with a 5-day interval between archives; from 31 to 364 days old, with a 14-day interval; and from 365 days on, with a 180-day interval and a count of 5.
Related
The script in file modBackup.sh does not run completely when started by cron; the result is a corrupted tar.gz file that is half the size of the one produced when I run it manually. In any case, its size is many times smaller than the manually created one, and although it still contains some content, it cannot be opened normally: the archive is damaged.
file modBackup.sh:
#!/bin/sh
find /home/share/ -mmin -720 -type f -exec tar -rvf /mnt/archives/`date +%F`-modified.tar.gz "{}" +
The behavior of the automatic run seems to be interrupted and does not finish.
When I run it manually, the script creates a genuine archive as [current date]-modified.tar.gz.
Here is the crontab -e:
00 18 * * 1-5 /home/myScripts/modBackup.sh
Edit:
There is no information in the logs except that crond has started:
nothing in the mail log, the cron log, or messages.
(I use a very old CentOS :( but I don't think this is the reason for the error.)
For testing only: I added %H%M to the file name in the script and did the following:
I ran it manually: sh /home/myScripts/modBackup.sh
and set it with crontab -e to run the same command two minutes later.
After a few minutes, two files appeared that grew at the same time, but then the one created by the cron job stopped growing.
I use the same GUI tool (Archive Manager) to open in both cases.
The file created by manually starting the script opens fine, but the one from the cron job cannot be opened, even after I changed the extension; the error is 'unexpected EOF in archive'.
Try including the user's environment context ($PATH and the other environment variables the application needs) in the script:
modBackup.sh:
#!/bin/sh
. ~/.profile   # 'source' is a bashism; use '.' under #!/bin/sh
find /home/share/ -mmin -720 -type f -exec tar -rvf /mnt/archives/`date +%F`-modified.tar.gz "{}" +
I found that in the cron environment the find command misinterprets filenames containing certain characters, even after explicitly changing the encoding by adding "export LANG=en_US.UTF-8; LC_CTYPE=..." at the beginning of the script. With many other combinations and attempts I had no success.
That's why I dropped the find command and instead use tar's own option to archive recently modified files. This works perfectly now:
fromDate=$(date --date='15 hours ago')
/bin/tar -N "$fromDate" -zcf /mnt/archives/`date +%F-%H%M`-share.modified.tar.gz /home/share/
I have an hourly rsync cron task that adds new files to the backup server. The directory structure is /myfiles/year/month/date, where year, month and date are the actual dates of the files. The cron task is defined as a file in /etc/cron.d.
The problem is that I have to point rsync at the "root" /myfiles directory to make it replicate my folder structure in the backup location with every new day. The number of files is substantial (up to 1000 a day), so rsync has to iterate through the whole year's files to build its copy list, even though I only need to copy today's files. As of April, it takes ~25 minutes even with the --ignore-existing option.
Can someone help me create a script, or whatever it takes, to add the current year, month and date to the rsync path in the cron task? The final result should look like this:
0 * * * * root rsync -rt --ignore-existing /myfiles/2020/04/26 user@myserver:/myfiles/2020/04/26
where /2020/04/26 is the variable part that changes every day.
I have very limited experience with *nix systems so I feel that is possible but basically have no clue how to start.
To add an actual date to the path one can use the date utility or the builtin printf from the bash shell.
Using date
echo "/myfiles/$(date +%Y/%m/%d)"
Using printf
echo "/myfiles/$(printf '%(%Y/%m/%d)T')"
In your case when using the builtin printf you need to define the shell as bash in the cron entry.
0 * * * * root rsync -rt --ignore-existing "/myfiles/$(printf '\%(\%Y/\%m/\%d)T')" "user@myserver:/myfiles/$(printf '\%(\%Y/\%m/\%d)T')"
When using date, either set PATH to include the directory where the date utility lives, or just use an absolute path:
0 * * * * root rsync -rt --ignore-existing "/myfiles/$(/bin/date +\%Y/\%m/\%d)" "user@myserver:/myfiles/$(/bin/date +\%Y/\%m/\%d)"
The date syntax should work on both GNU and BSD date.
The % needs to be escaped inside the cron entry.
See your local cron(5) documentation on how to add the PATH and SHELL variables; normally SHELL=/bin/bash and PATH=/sbin:/bin:/usr/sbin:/usr/bin will do.
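For example, at the top of a /etc/cron.d entry (the SHELL and PATH values shown are the typical defaults; adjust to your system):

```
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
0 * * * * root rsync -rt --ignore-existing "/myfiles/$(printf '\%(\%Y/\%m/\%d)T')" "user@myserver:/myfiles/$(printf '\%(\%Y/\%m/\%d)T')"
```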
I'd like to monitor a directory for new files daily using a linux bash script.
New files are added to the directory every 4 hours or so, so I'd like to process all of them at the end of the day.
By process I mean convert them to an alternative file type then pipe them to another folder once converted.
I've looked at inotify to monitor the directory but can't tell if you can make this a daily thing.
Using inotify I have got this code working in a sample script:
#!/bin/bash
while read line
do
echo "close_write: $line"
done < <(inotifywait -mr -e close_write "/home/tmp/")
This does notify when new files are added and it is immediate.
I was considering using this and keeping track of the new files, then processing them all at once at the end of the day.
I haven't done this before so I was hoping for some help.
Maybe something other than inotify will work better.
Thanks!
You can use a daily cron job: http://linux.die.net/man/1/crontab
You should definitely look into using a cron job. Edit your crontab and put this in:
0 0 * * * /path/to/script.sh
That means run your script at midnight every day. Then in your script.sh, all you would do is, for all the files, "convert them to an alternative file type then pipe them to another folder once converted".
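A hedged sketch of such a script.sh: the directories are stand-ins, and gzip here is only a placeholder for whatever converter you actually need:

```shell
#!/bin/sh
# Convert everything that accumulated in $1 and move the results to $2.
# The gzip step is a stand-in for your real conversion command.
process_dir() {
    in=$1 out=$2
    mkdir -p "$out" || return 1
    for f in "$in"/*; do
        [ -f "$f" ] || continue
        # convert, then remove the original only if conversion succeeded
        gzip -c "$f" > "$out/$(basename "$f").gz" && rm -- "$f"
    done
}

# e.g. from the midnight cron job:
# process_dir /home/tmp /home/processed
```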
Your cron job (see other answers on this page) should keep a list of the files you have already processed and then use comm -3 processed-list all-list to get the new files.
man comm
It's a better alternative to
awk 'FNR==NR{a[$0];next}!($0 in a)' processed-list all-list
and probably more robust than using find, since you record the ones you have actually processed.
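A sketch of that bookkeeping (the directory and state-file paths are assumptions); note that comm requires both inputs to be sorted:

```shell
# Print files under $1 that are not yet listed in state file $2,
# then record them as seen for the next run.
new_files() {
    dir=$1 state=$2
    touch "$state"                       # empty state on the first run
    find "$dir" -type f | sort > "$state.all"
    # -13: print only lines unique to the second file, i.e. the new ones
    comm -13 "$state" "$state.all"
    # mark everything as seen, keeping the state file sorted for comm
    sort -u "$state" "$state.all" > "$state.tmp" && mv "$state.tmp" "$state"
}

# e.g. in the nightly job:
# new_files /home/tmp /var/tmp/processed-list | while read -r f; do : ...; done
```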
To collect the files by the end of day, just use find:
find "$DIR" -daystart -mtime -1 -type f
Then as others pointed out, set up a cron job to run your script.
I have a server at hostname.com/files. Whenever a file has been uploaded I want to download it.
I was thinking of creating a script that constantly checked the files directory. It would check the timestamp of the files on the server and download them based on that.
Is it possible to check the files timestamp using a bash script? Are there better ways of doing this?
I could just download all the files in the server every 1 hour. Would it therefore be better to use a cron job?
If you have a regular interval at which you'd like to update your files, yes, a cron job is probably your best bet. Just write a script that does the checking and run that at an hourly interval.
As @Barmar commented above, rsync could be another option. Put something like this in the crontab and you should be set:
# min hour day month day-of-week user command
17 * * * * user rsync -av http://hostname.com/ >> rsync.log
would grab files from the server in that location and append the details to rsync.log at the 17th minute of every hour. Right now, though, I can't seem to get rsync to fetch files from a web server.
Another option using wget is:
wget -Nrb -np -o wget.log http://hostname.com/
where -N re-downloads only files newer than the timestamp on the local version, -b sends the process to the background, -r recurses into directories, -o specifies a log file, and -np keeps it from ascending into the parent directory (which would otherwise end up spidering far more of the server's content). This works against an arbitrary web server.
More details, as usual, will be in the man pages of rsync or wget.
--- file makebackup.sh
#!/bin/bash
DATE=$(date)
mysqldump --all-databases | gzip -9 > /backup/temp_db.gz
tar -Pcf /backup/temp_ftp.tar /public_html/
tar -Pcf /backup/temp_backup.tar /home/temp_db.gz /backup/temp_ftp.tar
sleep 60 && /backup/upload.sh $DATE
--- file upload.sh
#!/usr/bin/expect -f
# connect via scp
spawn scp /backup/temp_backup.tar root@mybackup.com:/home/backup_$argv.tar
#######################
expect {
-re ".*es.*o.*" {
exp_send "yes\r"
exp_continue
}
-re ".*sword.*" {
exp_send "mypassword\r"
}
}
interact
Why does this not work? I don't want to use sleep; I need to know when the last tar is over and then execute upload.sh. Instead, it always executes as soon as the last tar starts.
&& does not change anything, even if I remove sleep 60.
As you say 'it always executes as soon as the last tar starts': normally that means there is an '&' at the end of the line. Or are you sure the tar is really working? Are you looking at an old tar file that was created early on? Make sure it is a new tar file of the correct size.
Edit I'm not saying you have to delete files, just double-check that what is being put into the final tar makes sense.
Are you checking the sizes of the input files to your final tar command (/home/temp_db.gz /backup/temp_ftp.tar)? Edit By this I mean that an uncompressed tar file (temp_ftp.tar) should be just slightly larger than the sum of the sizes of all the files it contains. If you know you have 1 MB of files composing temp_ftp.tar and the file is 1.1 MB, that is good; if it is 0.5 MB, that is bad. (Also consider gzipping the whole thing to reduce transmission time to your remote host.) Your compressed db file is harder to judge; presumably that part is working, but a file size of something like 25 bytes would indicate an error in creating it.
Otherwise, what you are describing really seems impossible. It is one of these things, or something else is bollixing things up.
One way to debug how long the last tar is taking is to wrap the command in two date commands, i.e.
date
tar -Pcf /backup/temp_backup.tar /home/temp_db.gz /backup/temp_ftp.tar
rc=$?
date
printf "return code from quick tar was ${rc}\n"
Also, per your title, 'check previous step', I've added capturing the return code from tar and printing the value.
Again, to reinforce what you know already, in a linux shell script, there is no way (excepting a background job with the '&' char) for one command to start executing before the previous one has completed.
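A quick way to convince yourself of that sequencing, using placeholder files in a temp directory that mirror the question's tar-then-upload chain:

```shell
#!/bin/sh
# Build an inner archive, then an outer one: the second tar cannot start
# until the first has exited, and && gates the "upload" step on success.
# All paths here are throwaway placeholders, not the real backup paths.
work=$(mktemp -d)
echo data > "$work/db.gz"
tar -cf "$work/ftp.tar" -C "$work" db.gz
tar -cf "$work/backup.tar" -C "$work" db.gz ftp.tar \
    && echo "backup.tar complete; safe to upload"
```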
EDIT Ownership and permissions on your files might be screwing things up. Use
ls -l /backup/temp_backup.tar /home/temp_db.gz /backup/temp_ftp.tar
to confirm that your user ID owns the files and that you can write to them. If you want, edit your posting to include that information.
Also, your headline says 'cron': are you capturing all possible output of this script to help debug the situation? Turn on shell debugging with set -vx near the top of makebackup.sh, and add '-v' to your tar commands for verbose output.
Capture the cron output of your whole process like
59 23 31 12 * { /path/to/makebackup.sh 2>&1 ; } > /tmp/makebackup.`/bin/date +\%Y\%m\%d.\%H\%M\%S`.trace_log
And be sure you don't find any error messages.
(Crontab sample: min hr day mon day-of-week (0-6 or *); change the date/time to meet your testing needs.)
Your expect script uses '\r'; don't you want '\n' in the Unix/Linux environment? If it's a Windows-based server, then you want '\r\n'.
Edit Does the expect script work? Have you proved to your satisfaction that files are being copied, that they are the same size on the backup site, and that the date changes?
If you expect backups to save your systems someday, you have to develop a better understanding of how the whole process should work and if it is working as expected. Depending on your situation and availability of alternate computers, you should schedule a test of your restoring your backups to see if they will really work. As you're using -P to preserve full-path info, you'll really need to be careful not to overwrite your working system with old files.
To summarize my advice: double-check everything.
I hope this helps.