I want to make a backup of my history.txt file, where I store some information about my system. I would like to do this using crontab, and this is what I have in my crontab now:
0 * * * * cp -a history.txt "history-$(date + "%Y%m%d-%h%m%s")" ; mv "history-$(date + "%Y%m%d-%h%m%s")" l-systems
As you can see, I want to perform the backup every hour and give the file a name containing the date. First I make a copy of the file and rename it; after that I try to move the new file into a directory called l-systems. This doesn't work right now. Can someone help?
I would advise you to make a backup shell script like so:
#!/bin/sh
DATE=$(date +"%Y%m%d-%H%M%S")   # no space after +; %H%M%S gives hour-minute-second
cp -a history.txt "history-$DATE"
mv "history-$DATE" l-systems
Then call this script from crontab. Since we use the same variable for both commands, the timestamp will be identical regardless of how long each command takes. (As a side note, a literal % has a special meaning in a crontab line and must be escaped as \%, which is one reason your original one-liner fails.)
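A crontab entry along these lines then runs it every hour (the script path is just an example; point it at wherever you saved the script):
0 * * * * /home/user/backup-history.sh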
The script in the file modBackup.sh does not run to completion when started by cron: the result is a corrupted tar.gz file that is much smaller than the one I get when I run the script manually. It still contains some content, but it cannot be opened normally; the archive is damaged.
file modBackup.sh:
#!/bin/sh
find /home/share/ -mmin -720 -type f -exec tar -rvf /mnt/archives/`date +%F`-modified.tar.gz "{}" +
The automatic (cron) run seems to be interrupted and never finishes.
When I run it manually, the script creates a valid archive named [current date]-modified.tar.gz.
Here is the crontab -e:
00 18 * * 1-5 /home/myScripts/modBackup.sh
Edit:
There is no information in the logs except that crond has started: nothing in the mail log, nothing in the cron log, nothing in messages.
(I use a very old CentOS :( but I don't think this is the reason for the error.)
For testing only, I added %H%M to the file name in the script and did the following:
I ran it manually: sh /home/myScripts/modBackup.sh
and set the same command with crontab -e to run two minutes later.
After a few minutes, two files appeared and grew at the same time, but then the one created by the cron job stopped growing.
I use the same GUI tool (Archive Manager) to open the archive in both cases.
The file created by running the script manually opens fine, but the one created by the cron job does not, even after I changed the extension; the error is 'unexpected EOF in archive'.
My suggestion is to include the user's environment context ($PATH and other critical environment variables the application needs) so the script works under cron:
modBackup.sh:
#!/bin/sh
. ~/.profile
find /home/share/ -mmin -720 -type f -exec tar -rvf /mnt/archives/`date +%F`-modified.tar.gz "{}" +
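While debugging, it can also help to capture the script's output directly from the crontab entry, for example (the log path is arbitrary):
00 18 * * 1-5 /home/myScripts/modBackup.sh >> /tmp/modBackup.log 2>&1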
I found that in the cron environment the "find" command misinterprets filenames containing certain characters, even after explicitly changing the encoding by adding "export LANG = en_US.UTF-8; LC_CTYPE=..." at the beginning of the script. I had no success with many other combinations and attempts either.
That's why I dropped the "find" command and instead use tar with an option to archive recently modified files. This works perfectly now:
fromDate=$(date --date='15 hours ago')
/bin/tar -N "$fromDate" -zcf /mnt/archives/`date +%F-%H%M`-share.modified.tar.gz /home/share/
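Putting the pieces together, the whole modBackup.sh might look like this (the 15-hour window and the paths are the ones above; sourcing ~/.profile is the environment fix suggested earlier):
#!/bin/sh
# Pull in the user's environment (PATH, locale); cron provides a very minimal one
. ~/.profile
# Archive everything under /home/share/ modified in the last 15 hours
fromDate=$(date --date='15 hours ago')
/bin/tar -N "$fromDate" -zcf /mnt/archives/$(date +%F-%H%M)-share.modified.tar.gz /home/share/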
I have an hourly rsync cron task that adds new files to the backup server. The directory structure is /myfiles/year/month/date, where year, month and date are the actual dates of the files. The cron task is defined as a file in /etc/cron.d.
The problem is that I have to point rsync at the "root" /myfiles directory to make it replicate my folder structure in the backup location as each new day arrives. The number of files is substantial (up to 1000 files a day), so rsync has to iterate through all of the year's files to build its copy list, which isn't needed at all because I only want to copy today's files. As of April, it takes ~25 minutes even with the --ignore-existing option.
Can someone help me create a script, or anything else, to append the current year, month and date to the rsync path in the cron task, if possible? The final result should look like this:
0 * * * * root rsync -rt --ignore-existing /myfiles/2020/04/26 user@myserver:/myfiles/2020/04/26
where /2020/04/26 is the variable part that changes every day.
I have very limited experience with *nix systems, so I feel this should be possible but have basically no clue where to start.
To add the current date to the path, you can use the date utility or the printf builtin from the bash shell.
Using date
echo "/myfiles/$(date +%Y/%m/%d)"
Using printf
echo "/myfiles/$(printf '%(%Y/%m/%d)T')"
In your case, when using the printf builtin, you need to define the shell as bash in the cron entry.
0 * * * * root rsync -rt --ignore-existing "/myfiles/$(printf '\%(\%Y/\%m/\%d)T')" "user@myserver:/myfiles/$(printf '\%(\%Y/\%m/\%d)T')"
When using date, either define PATH to include the directory where the date utility lives, or just use an absolute path:
0 * * * * root rsync -rt --ignore-existing "/myfiles/$(/bin/date +\%Y/\%m/\%d)" "user@myserver:/myfiles/$(/bin/date +\%Y/\%m/\%d)"
The date syntax should work on both GNU and BSD date.
The % needs to be escaped inside the cron entry.
See your local cron(5) documentation for how to add the PATH and SHELL variables, although normally SHELL can be set to SHELL=/bin/bash and PATH to PATH=/sbin:/bin:/usr/sbin:/usr/bin.
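If escaping the % signs in the crontab feels error-prone, another option is to put the command in a small wrapper script and call that from cron. The script path /usr/local/bin/rsync-today.sh below is only an assumption; adjust it to taste:
#!/bin/bash
# Build today's /YYYY/MM/DD suffix once and reuse it for source and destination
day=$(date +%Y/%m/%d)
rsync -rt --ignore-existing "/myfiles/$day" "user@myserver:/myfiles/$day"
and then in /etc/cron.d:
0 * * * * root /usr/local/bin/rsync-today.sh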
I have created a crontab entry to back up my DB every 30 minutes:
*/30 * * * * bash /opt/mysqlbackup.sh > /dev/null 2>&1
The crontab works well; every 30 minutes I get my backup from the script below.
#!/bin/sh
find /opt/mysqlbackup -type f -mtime +2 -exec rm {} +
mysqldump --single-transaction --skip-lock-tables --user=myuser --password=mypass mydb | gzip -9 > /opt/mysqlbackup/$(date +%Y-%m-%d-%H.%M)_mydb.sql.gz
But my problem is that the rm command that should delete old data isn't working; nothing is ever deleted. Do you know why?
Also, the name of my backup comes out as 2020-02-02-12.12_mydb.sql.gz? (I always get a ? at the end of the file name). Do you know why?
Thank you for your help
The question mark typically indicates a character that can't be displayed; the fact that it's at the end of a line makes me think that your script has Windows line endings rather than Unix. You can fix that with the dos2unix command:
dos2unix /path/to/script.sh
It's also good practice not to throw around MySQL passwords on the CLI or store them in executable scripts. You can accomplish this by using MySQL Option files, specifically the file that defines user-level options (~/.my.cnf).
This would require us to figure out which user is executing that cronjob, however. My assumption is that you did not make that definition inside the system-level crontab; if you had, you'd actually be trying to execute /opt/mysqlbackup.sh > /dev/null 2>&1 as the user bash. This user most likely doesn't (and shouldn't) exist, so cron would fail to execute the script entirely.
As this is not the case (you say it's executing the mysqldump just fine), this makes me believe you have the definition in a user-level crontab instead. Once we figure out which user that actually is as I asked for in my comment, we can identify the file permissions issue as well as create the aforementioned MySQL options file.
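For reference, a minimal user-level option file for this case might look like the following (the credentials are the placeholders from the question); with it in place, --user and --password can be dropped from the mysqldump call and the password no longer shows up in the process list:
# ~/.my.cnf of the user that runs the cron job (chmod 600 ~/.my.cnf)
[mysqldump]
user=myuser
password=mypass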
Using find with mtime is not the best choice. If for some reason mysqldump stops creating backups, then in two days all backups will be deleted.
You can use my Python script "rotate-archives" for smart deletion of backups (https://gitlab.com/k11a/rotate-archives). The script adds the current date to the beginning of the file or directory name, like 2020-12-31_filename.ext, and later uses this date to decide what to delete.
Running the script for your case:
rotate-archives.py test_mode=off age_from-period-amount_for_last_timeslot=0-0-48 archives_dir=/mnt/archives
In this case, 48 new archives will always be saved. Old archives in excess of this number will be deleted.
An example of more flexible archives deletion:
rotate-archives.py test_mode=off age_from-period-amount_for_last_timeslot=7-5,31-14,365-180-5 archives_dir=/mnt/archives
As a result, archives from 7 to 30 days old are kept with an interval of 5 days between archives, archives from 31 to 364 days old with an interval of 14 days, and archives from 365 days old with an interval of 180 days and a count of 5.
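If you would rather stay with plain shell, one alternative sketch for the mtime concern above is to prune old files only after the new dump has succeeded (directory, database and retention follow the question; credentials are assumed to come from ~/.my.cnf as described in the other answer):
#!/bin/bash
# Prune backups older than 2 days only if today's dump succeeded
set -o pipefail
backup_dir=/opt/mysqlbackup
if mysqldump --single-transaction --skip-lock-tables mydb | gzip -9 > "$backup_dir/$(date +%Y-%m-%d-%H.%M)_mydb.sql.gz"; then
    find "$backup_dir" -type f -mtime +2 -exec rm {} +
fi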
I wrote a backup script for my computer. The backup scenario is like this:
All directories under root are bundled into a tar.gz twice a day (3 AM and 12 noon), and this archive is uploaded to Google Drive with the gdrive app every day at 3 AM.
and here is the script
#!/bin/bash
#Program: arklab backup script version 2.0
#Author: namil son
#Last modified date: 160508
#Contact: 21100352#handong.edu
#It should be executed as a super user
export LANG=en
MD=`date +%m%d`
TIME=`date +%y%m%d_%a_%H`
filename=`date +%y%m%d_%a_%H`.tar.gz
HOST=$HOSTNAME
backuproot="/local_share/backup/"
backup=`cat $backuproot/backup.conf`
gdriveID="blablabla" #This argument should be manually substituted to google-drive directory ID for each server.
#Start a new backup period at January first and June first.
if [ $MD = '0101' -o $MD = '0601' ]; then
mkdir $backuproot/`date +%y%m`
rm -rf $backuproot/`date --date '1 year ago' +%y%m`
echo $backuproot/`date +%y%m` > $backuproot/backup.conf #Save directory name for this period in backup.conf
backup=`cat $backuproot/backup.conf`
gdrive mkdir -p $gdriveID `date +%y%m` > $backup/dir
awk '{print $2}' $backup/dir > dirID
rm -f $backup/dir
fi
#make tar ball
tar -g $backup/snapshot -czpf $backup/$filename / --exclude=/tmp/* --exclude=/mnt/* --exclude=/media/* --exclude=/proc/* --exclude=/lost+found/* --exclude=/sys/* --exclude=/local_share/backup/* --exclude=/home/* \
--exclude=/share/*
#upload backup file using gdrive under the path written in dirID
if [ `date +%H` = '03' ]; then
gdrive upload -p `cat $backup/dirID` $backup/$filename
gdrive upload -p `cat $backup/dirID` $backup/`date --date '15 hour ago' +%y%m%d_%a_%H`.tar.gz
fi
Here is the problem!
When this script is run from crontab, it works pretty well except for uploading the tarball to Google Drive, whereas the whole script works perfectly when I run it manually. Only the upload step fails when it is run from crontab!
Can anybody help me?
Crontab entry is like this:
0 3,12 * * * sh /local_share/backup/backup2.0.sh &>> /local_share/backup/backup.sh.log
I had the same case.
This is my solution:
Change your gdrive command to the absolute path of the gdrive binary.
Example:
Don't set cron like this
0 1 * * * gdrive upload abc.tar.gz
Use absolute path
0 1 * * * /usr/local/bin/gdrive upload abc.tar.gz
It will work perfectly
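If you are not sure where the gdrive binary lives, you can look it up first and use that path in the crontab:
which gdrive
# or, in any POSIX shell:
command -v gdrive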
I had the exact same issue with minor differences. I'm using gdrive on a CentOS system. Setup was fine. As root, I set up gdrive. From the command line, 'drive list' worked fine. I used the following blog post to set up gdrive:
http://linuxnewbieguide.org/?p=1078
I wrote a PHP script to do a backup of some directories. When I ran the PHP script as root from the command line, everything worked and uploaded to Google Drive just fine.
So I threw:
1 1 * * * php /root/my_backup_script.php
Into root's crontab. The script executed fine, but the upload to Google Drive wasn't working. I did some debugging; the line:
drive upload --file /root/myfile.bz2
Just wasn't working. The only command-line return was a null string. Very confusing. I'm no unix expert, but I thought that when crontab runs as a user, it runs as that user (in this case root). To test, I did the following, and this is very insecure and not recommended:
I created a file with the root password at /root/.rootpassword
chmod 500 .rootpassword
Changed the crontab line to:
1 1 * * * cat /root/.rootpassword | sudo -kS php /root/my_backup_script.php
And now it works, but this is a horrible solution, as the root password is stored in a plain text file on the system. The file is readable only by root, but it is still a very bad solution.
I don't know why (again, no unix expert) I have to have root's crontab run the command through sudo to make this work. I know the issue is with the gdrive token generated during gdrive setup. When crontab runs, the token doesn't match and the upload fails. But when crontab uses sudo as root to run the PHP script, it works.
I have thought of a possible solution that doesn't require storing the root password in a text file on the system. I am tired right now and haven't tried it. I have been working on this issue for about 4 days, trying various Google Drive backup solutions... all failing. It basically goes like this:
Run the gdrive setup entirely within the PHP/Apache interpreter. This will (perhaps) set the gdrive token up for the apache user. For example:
Create a PHP script at /home/public_html/gdrive_setup.php. This file needs to step through the entire gdrive and token setup.
Run the script in a browser, get gdrive and the token all set up.
Test gdrive, write a PHP script something like:
$cmd = exec("drive list");
echo $cmd;
Save as gdrive_test.php and run in a browser. If it outputs your google drive files, it's working.
Write up your backup script in php. Put it in a non-indexable web directory and call it something random like 2DJAj23DAJE123.php
Now whenever you pull up 2DJAj23DAJE123.php in a web browser your backup should run.
Finally, edit crontab for root and add:
1 1 * * * wget http://my-website.com/non-indexable-directory/2DJAj23DAJE123.php >/dev/null 2>&1
In theory this should work. No passwords are stored. The only security hole is someone else might be able to run your backup if they executed 2DJAj23DAJE123.php.
Further checks could be added, like checking the system time at the start of 2DJAj23DAJE123.php and make sure it matches the crontab run time, before executing. If the times don't match, just exit the script and do nothing.
The above is all theory and not tested. I think it should work, but I am very tired from this issue.
I hope this was helpful and not overly complicated, but Google Drive IS complicated since their switch over in authentication method earlier this year. Many of the posts/blog posts you find online will just not work.
Sometimes the problem occurs because of gdrive's config path: gdrive cannot find its default configuration when run from cron. To avoid such problems, add the --config flag:
gdrive upload --config /home/<you>/.gdrive -p <google_drive_folder_id> /path/to/file_to_be_uploaded
Source: GDrive w/ CRON
I have had the same issue and fixed it by indicating where the drive command file is.
Ex:
/usr/sbin/drive upload --file xxx..
Scenario: I need to create a cron job that scans through a directory and sftps each file to another machine
bash script : /home/user/sendFiles.sh
cron interval : 1 minute
directory: /home/user/myfiles
sftp destination: 10.10.10.123
Create the cron job
crontab -u user 1 * * * /home/user/sendFiles.sh
Create the Script
#!/bin/bash
/usr/bin/scp -r user@10.10.10.123:/home/user/myfiles .
#REMOVE FILES AFTER ALL HAVE BEEN SENT
rm -rf *
Problem: I'm not exactly sure whether that crontab invocation is correct, or how to sftp an entire directory from a cron job.
If this is going to be executed as a cron job, I'm assuming it is in order to keep the directory in sync.
In that case, I would use rdiff-backup to make an incremental backup. That way, only the things that change will be transferred.
This still uses ssh to transfer the data, but with rdiff-backup instead of a plain scp. The major benefit of doing it this way is speed: it is faster to transfer only the parts that have changed.
This is the command to do a copy over ssh using rdiff-backup:
rdiff-backup /some/local-dir hostname.net::/whatever/remote-dir
Add that to a cron job, making sure the user that executes rdiff-backup has ssh keys set up, so it does not require a password.
(About ssh keys: read about them here: http://www.linuxproblem.org/art_9.html. Once they are set up, try a regular ssh to see if you can log in without a password.)
Something like this:
* * * * * rdiff-backup /some/local-dir hostname.net::/whatever/remote-dir
will do the copy every minute. (Your example, 1 * * * *, will execute at the first minute of each hour; that is, once every hour instead of once every minute.)
Keep in mind that this can cause problems if a transfer is so large that it hasn't finished before the next one starts. But I guess what you want is to transfer files that are not that huge, or your network is fast enough. Otherwise, change it to do the transfer every 5 minutes by using */5 * * * * instead.
And finally, read more about rdiff-backup here : http://www.nongnu.org/rdiff-backup/examples.html
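As a quick sketch of the key setup mentioned above (assuming OpenSSH on both machines; the hostname is the placeholder from the example):
ssh-keygen -t ed25519            # accept the default path; an empty passphrase allows unattended runs
ssh-copy-id user@hostname.net    # install the public key on the remote machine
ssh user@hostname.net            # should now log in without asking for a password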
rdiff-backup is a good option; there is also rsync:
rsync -az user@10.10.10.123:/home/user/myfiles .
I notice you are also deleting files; is this simply because you don't want to re-copy them? rsync will only copy new or updated files.
You might also be interested in unison, which does a two-way sync.
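For completeness, a crontab entry that runs such an rsync once a minute (mirroring the pull direction used above; the trailing slashes sync the directory contents) might look like:
* * * * * /usr/bin/rsync -az user@10.10.10.123:/home/user/myfiles/ /home/user/myfiles/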
These are both good answers; if you stick with scp you may want to make a slight change to your script:
#!/bin/bash
/usr/bin/scp -r user@10.10.10.123:/home/user/myfiles .
#REMOVE FILES AFTER ALL HAVE BEEN SENT
cd /home/user/myfiles # make sure you're in the right directory before rm
rm -rf *
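If the intent of the rm is to clear the directory only once everything has been copied, a slightly more defensive variant (same paths as above, only adding a success check) would be:
#!/bin/bash
cd /home/user/myfiles || exit 1          # stop if the directory is missing
if /usr/bin/scp -r user@10.10.10.123:/home/user/myfiles .; then
    rm -rf ./*                           # only delete after a successful copy
fi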