Would this Cron job be possible? - linux

I am using Red Hat Linux 5 and my application is a Java EE application.
We allow users to upload pictures on our website.
These pictures are stored inside a folder on our server.
Now my question: on a daily basis, at a particular time, I want to move all the images from that folder to another folder, where the destination folder's name would be the day the images were moved.
Please let me know if this is possible.
Thank you very much.

man cron
man crontab
Write a small bash script that implements your desired behaviour. Add it to your crontab, or however cron jobs are set up in your distribution. (I'm using Arch Linux, so I don't want to give distribution-specific instructions, because of the differences between distributions...)
Or use a Java cron implementation and write everything in Java.
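For the shell-script route, a minimal illustration of installing such a job non-interactively might look like this (the script path and the 02:00 schedule are placeholders, not values from the question):
# Append a daily 02:00 entry to the current user's crontab without opening an editor
(crontab -l 2>/dev/null; echo "0 2 * * * /path/to/move_images.sh") | crontab -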

You will have to create a cron job to do so, as well as a shell script.
In cron:
# At the first minute of the first hour of each day, run the script
1 1 * * * /scripts/move_images
In /scripts/move_images:
#!/bin/bash
# Today's date (YYYY-MM-DD)
date=$(date +%Y-%m-%d)
# Create the new directory
mkdir -p /local_of_new_folder/"$date"
# Move all images from the old folder to the new folder
mv /old_folder/* /local_of_new_folder/"$date"
Make the script executable:
chmod +x /scripts/move_images
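A slightly more defensive variant of the same idea (a sketch, not part of the original answer) matches only common image extensions and copes with an empty source folder:
#!/bin/bash
date=$(date +%Y-%m-%d)
mkdir -p /local_of_new_folder/"$date"
# nullglob makes unmatched patterns expand to nothing; nocaseglob also matches .JPG etc.
shopt -s nullglob nocaseglob
files=(/old_folder/*.jpg /old_folder/*.jpeg /old_folder/*.png /old_folder/*.gif)
if [ ${#files[@]} -gt 0 ]; then
    mv -- "${files[@]}" /local_of_new_folder/"$date"/
fi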
Sorry about my English, I'm Brazilian
:)

Related

mysqldump problem in Crontab and bash file

I have created a crontab entry to back up my DB every 30 minutes:
*/30 * * * * bash /opt/mysqlbackup.sh > /dev/null 2>&1
The crontab works well; every 30 minutes I get my backup from the script below.
#!/bin/sh
find /opt/mysqlbackup -type f -mtime +2 -exec rm {} +
mysqldump --single-transaction --skip-lock-tables --user=myuser --password=mypass mydb | gzip -9 > /opt/mysqlbackup/$(date +%Y-%m-%d-%H.%M)_mydb.sql.gz
But my problem is that the rm part that should delete old data isn't working; nothing is ever deleted. Do you know why?
Also, the name of my backup comes out as 2020-02-02-12.12_mydb.sql.gz?
I always have a ? at the end of my file name. Do you know why?
Thank you for your help
The question mark typically indicates a character that can't be displayed; the fact that it's at the end of a line makes me think that your script has Windows line endings rather than Unix. You can fix that with the dos2unix command:
dos2unix /path/to/script.sh
It's also good practice not to throw around MySQL passwords on the CLI or store them in executable scripts. You can accomplish this by using MySQL Option files, specifically the file that defines user-level options (~/.my.cnf).
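For example, a minimal option file could look like this (a sketch that reuses the myuser/mypass placeholders from the question; the file belongs to whichever user runs the cron job):
cat > ~/.my.cnf <<'EOF'
[mysqldump]
user=myuser
password=mypass
EOF
chmod 600 ~/.my.cnf
With that in place, the --user and --password arguments can be dropped from the mysqldump call.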
This would require us to figure out which user is executing that cronjob, however. My assumption is that you did not make that definition inside the system-level crontab; if you had, you'd actually be trying to execute /opt/mysqlbackup.sh > /dev/null 2>&1 as the user bash. This user most likely doesn't (and shouldn't) exist, so cron would fail to execute the script entirely.
As this is not the case (you say it's executing the mysqldump just fine), this makes me believe you have the definition in a user-level crontab instead. Once we figure out which user that actually is as I asked for in my comment, we can identify the file permissions issue as well as create the aforementioned MySQL options file.
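For reference, these commands help pin down where a cron entry actually lives (the spool paths shown are common defaults and vary slightly between distributions):
crontab -l                 # the current user's crontab
sudo crontab -l -u root    # root's crontab
ls /var/spool/cron/        # per-user crontab files (Debian/Ubuntu use /var/spool/cron/crontabs/)
cat /etc/crontab           # system-level crontab, with the extra user field
ls /etc/cron.d/            # additional system-level entries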
Using find with mtime is not the best choice. If for some reason mysqldump stops creating backups, then in two days all backups will be deleted.
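For illustration, a count-based cleanup that always keeps the 48 newest dumps regardless of their age might look like this (a sketch; it assumes the backup filenames contain no spaces or newlines):
# List dumps newest-first, skip the first 48, delete the rest
ls -1t /opt/mysqlbackup/*_mydb.sql.gz | tail -n +49 | xargs -r rm --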
You can use my Python script "rotate-archives" for smart deletion of old backups (https://gitlab.com/k11a/rotate-archives). The script adds the current date at the beginning of the file or directory name, like 2020-12-31_filename.ext, and subsequently uses this date to decide on deletion.
Running the script for your case:
rotate-archives.py test_mode=off age_from-period-amount_for_last_timeslot=0-0-48 archives_dir=/mnt/archives
In this case, 48 new archives will always be saved. Old archives in excess of this number will be deleted.
An example of more flexible archives deletion:
rotate-archives.py test_mode=off age_from-period-amount_for_last_timeslot=7-5,31-14,365-180-5 archives_dir=/mnt/archives
As a result, archives from 7 to 30 days old will be kept with a time interval of 5 days between archives, archives from 31 to 364 days old with an interval of 14 days, and archives from 365 days on with an interval of 180 days and a maximum count of 5.

uploading file to google-drive using gdrive is not working on crontab

I wrote a backup script for my computer. The backup scenario is like this:
All directories under root are bundled into a tar.gz twice a day (3 AM and 12 PM), and this archive is uploaded to Google Drive using the gdrive app every day at 3 AM.
and here is the script
#!/bin/bash
#Program: arklab backup script version 2.0
#Author: namil son
#Last modified date: 160508
#Contact: 21100352@handong.edu
#It should be executed as a super user
export LANG=en
MD=`date +%m%d`
TIME=`date +%y%m%d_%a_%H`
filename=`date +%y%m%d_%a_%H`.tar.gz
HOST=$HOSTNAME
backuproot="/local_share/backup/"
backup=`cat $backuproot/backup.conf`
gdriveID="blablabla" #This argument should be manually substituted to google-drive directory ID for each server.
#Start a new backup period at January first and June first.
if [ $MD = '0101' -o $MD = '0601' ]; then
    mkdir $backuproot/`date +%y%m`
    rm -rf $backuproot/`date --date '1 year ago' +%y%m`
    echo $backuproot/`date +%y%m` > $backuproot/backup.conf #Save directory name for this period in backup.conf
    backup=`cat $backuproot/backup.conf`
    gdrive mkdir -p $gdriveID `date +%y%m` > $backup/dir
    awk '{print $2}' $backup/dir > dirID
    rm -f $backup/dir
fi
#make tar ball
tar -g $backup/snapshot -czpf $backup/$filename / --exclude=/tmp/* --exclude=/mnt/* --exclude=/media/* --exclude=/proc/* --exclude=/lost+found/* --exclude=/sys/* --exclude=/local_share/backup/* --exclude=/home/* \
--exclude=/share/*
#upload backup file using gdrive under the path written in dirID
if [ `date +%H` = '03' ]; then
gdrive upload -p `cat $backup/dirID` $backup/$filename
gdrive upload -p `cat $backup/dirID` $backup/`date --date '15 hour ago' +%y%m%d_%a_%H`.tar.gz
fi
Here is the problem!
When this script is run from crontab, it works pretty well except for uploading the tarball to Google Drive, even though the whole script works perfectly when run manually. Only the upload step fails when it is run from crontab!
Can anybody help me?
Crontab entry is like this:
0 3,12 * * * sh /local_share/backup/backup2.0.sh &>> /local_share/backup/backup.sh.log
I had the same case.
This is my solution:
Change your gdrive command to the absolute path of the gdrive command.
Example:
Don't set cron like this
0 1 * * * gdrive upload abc.tar.gz
Use absolute path
0 1 * * * /usr/local/bin/gdrive upload abc.tar.gz
It will work perfectly
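To find that absolute path on your machine, or to avoid hard-coding it everywhere, you can also declare PATH at the top of the crontab (the paths below are examples; check your own system):
which gdrive
# e.g. /usr/local/bin/gdrive

# Alternative: set PATH in the crontab itself so bare command names resolve for cron jobs
PATH=/usr/local/bin:/usr/bin:/bin
0 3,12 * * * sh /local_share/backup/backup2.0.sh &>> /local_share/backup/backup.sh.log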
I had the exact same issue with minor differences. I'm using gdrive on a CentOS system. Setup was fine. As root, I set up gdrive. From the command line, 'drive list' worked fine. I used the following blog post to set up gdrive:
http://linuxnewbieguide.org/?p=1078
I wrote a PHP script to do a backup of some directories. When I ran the PHP script as root from the command line, everything worked and uploaded to Google Drive just fine.
So I threw:
1 1 * * * php /root/my_backup_script.php
into root's crontab. The script executed fine, but the upload to Google Drive wasn't working. I did some debugging; the line:
drive upload --file /root/myfile.bz2
Just wasn't working. The only command-line return was a null string. Very confusing. I'm no unix expert, but I thought when crontab runs as a user, it runs as a user (in this case root). To test, I did the following, and this is very insecure and not recommended:
I created a file with the root password at /root/.rootpassword
chmod 500 .rootpassword
Changed the crontab line to:
1 1 * * * cat /root/.rootpassword | sudo -kS php /root/my_backup_script.php
And now it works, but this is a horrible solution, as the root password is stored in a plain text file on the system. The file is readable only by root, but it is still a very bad solution.
I don't know why (again, no unix expert) I have to have root's crontab run the command via sudo to make this work. I know the issue is with the gdrive token generated during gdrive setup. When crontab runs, the token doesn't match and the upload fails. But when you have crontab sudo as root and run the PHP script, it works.
I have thought of a possible solution that doesn't require storing the root password in a text file on the system. I am tired right now and haven't tried it. I have been working on this issue for about 4 days, trying various Google Drive backup solutions... all failing. It basically goes like this:
Run the gdrive setup entirely within the PHP/Apache interpreter. This will (perhaps) set up the gdrive token for the apache user. For example:
Create a PHP script at /home/public_html/gdrive_setup.php. This file needs to step through the entire gdrive and token setup.
Run the script in a browser, get gdrive and the token all set up.
Test gdrive, write a PHP script something like:
$cmd = exec("drive list");
echo $cmd;
Save as gdrive_test.php and run in a browser. If it outputs your google drive files, it's working.
Write up your backup script in php. Put it in a non-indexable web directory and call it something random like 2DJAj23DAJE123.php
Now whenever you pull up 2DJAj23DAJE123.php in a web browser your backup should run.
Finally, edit crontab for root and add:
1 1 * * * wget http://my-website.com/non-indexable-directory/2DJAj23DAJE123.php >/dev/null 2>&1
In theory this should work. No passwords are stored. The only security hole is someone else might be able to run your backup if they executed 2DJAj23DAJE123.php.
Further checks could be added, like checking the system time at the start of 2DJAj23DAJE123.php and make sure it matches the crontab run time, before executing. If the times don't match, just exit the script and do nothing.
The above is all theory and not tested. I think it should work, but I am very tired from this issue.
I hope this was helpful and not overly complicated, but Google Drive IS complicated since their switch over in authentication method earlier this year. Many of the posts/blog posts you find online will just not work.
Sometimes the problem occurs because of gdrive's config path: gdrive cannot find its default configuration. To avoid such problems, add the --config flag:
gdrive upload --config /home/<you>/.gdrive -p <google_drive_folder_id> /path/to/file_to_be_uploaded
Source: GDrive w/ CRON
I have had the same issue and fixed it by indicating where the drive command file is.
Ex:
/usr/sbin/drive upload --file xxx..

Debian: Cron bash script every 5 minutes and lftp

We have to run a script every 5 minutes to download data from an FTP server. We have arranged the FTP script, but now we want to download the data automatically every 5 minutes.
We can use: "0 * * * * /home/kbroeren/import.ch"
where import.ch is the FTP script that downloads the data files.
The point is, the data files become available on the FTP server every 5 minutes, sometimes with a minute of offset. It would be nice to download each file a couple of seconds after it becomes available on the FTP server. Maybe a function that scans the FTP folder to check whether the file is already available and then downloads it; if not, the script retries again after about 10 seconds (see the sketch after the script below).
One other point to fix is the runtime of the FTP script. There are 12k files in the folder, and we should only fetch the newest ones every time we run the script. Right now scanning the folder takes about 3 minutes, which is way too long. The filenames of all the data files contain the date and time; is there a possibility to build a dynamic filename so we download only the right file every 5 minutes?
Lots of questions; I hope someone can help me out with this!
Thank you
Kevin Broeren
Our FTP script:
#!/bin/bash
HOST='ftp.mysite.com'
USER='****'
PASS='****'
SOURCEFOLDER='/'
TARGETFOLDER='/home/kbroeren/datafiles'
lftp -f "
open $HOST
user $USER $PASS
LCD $SOURCEFOLDER
mirror --newer-than=now-1day --use-cache $SOURCEFOLDER $TARGETFOLDER
bye
"
find /home/kbroeren/datafiles/* -mtime +7 -exec rm {} \;
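As a rough sketch of the check-and-retry idea mentioned above: assuming the expected filename can be derived from the current time (the data_YYYYMMDD_HHMM.csv pattern below is made up; adjust it to the real naming scheme), the script could poll a few times before giving up:
#!/bin/bash
HOST='ftp.mysite.com'
USER='****'
PASS='****'
SOURCEFOLDER='/'
TARGETFOLDER='/home/kbroeren/datafiles'
EXPECTED="data_$(date +%Y%m%d_%H%M).csv"   # hypothetical filename pattern
# Try up to 6 times, waiting ~10 seconds between attempts, until the file shows up
for attempt in 1 2 3 4 5 6; do
    lftp -u "$USER,$PASS" "$HOST" -e "get ${SOURCEFOLDER}${EXPECTED} -o ${TARGETFOLDER}/${EXPECTED}; bye" 2>/dev/null
    if [ -s "${TARGETFOLDER}/${EXPECTED}" ]; then
        break
    fi
    sleep 10
done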
Perhaps you might want to give a try to curlftpfs. Using this FUSE filesystem you can mount an FTP share into your local filesystem. If you do so, you won't have to download the files from FTP and you can iterate over the files as if they were local. You can give it a try following these steps:
# Install curlftpfs
apt-get install curlftpfs
# Make sure FUSE kernel module is loaded
modprobe fuse
# Mount the FTP Directory to your datafiles directory
curlftpfs USER:PASS@ftp.mysite.com /home/kbroeren/datafiles -o allow_other,disable_eprt
You are now able to process these files however you wish. You'll always have the most recent files in this directory. But be aware of the fact that this is not a copy of the files; you are working directly on the FTP server. For example, removing a file from /home/kbroeren/datafiles will remove it from the FTP server.
If this works for you, you might want to write this information into /etc/fstab, to make sure the directory is mounted with each start of the machine:
curlftpfs#USER:PASS@ftp.mysite.com /home/kbroeren/datafiles fuse auto,user,uid=USERID,allow_other,_netdev 0 0
Make sure to change USERID to match the UID of the user who needs access to these files.
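Once the share is mounted, picking up only the files that have appeared since the last run could look roughly like this (the marker-file approach and the paths are assumptions, not part of the original answer):
#!/bin/bash
MOUNT=/home/kbroeren/datafiles
MARKER=/home/kbroeren/.last_import
# Create the marker on the first run so find has something to compare against
[ -f "$MARKER" ] || touch -d '1970-01-01' "$MARKER"
# Process every file newer than the previous run, then advance the marker
find "$MOUNT" -type f -newer "$MARKER" -print0 | while IFS= read -r -d '' f; do
    echo "processing $f"    # replace with the real import step
done
touch "$MARKER"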

Keep files updated from remote server

I have a server at hostname.com/files. Whenever a file has been uploaded there, I want to download it.
I was thinking of creating a script that constantly checks the files directory. It would check the timestamps of the files on the server and download them based on that.
Is it possible to check a file's timestamp using a bash script? Are there better ways of doing this?
I could just download all the files on the server every hour. Would it therefore be better to use a cron job?
If you have a regular interval at which you'd like to update your files, yes, a cron job is probably your best bet. Just write a script that does the checking and run that at an hourly interval.
As @Barmar commented above, rsync could be another option. Put something like this in the crontab and you should be set:
# min hour day month day-of-week user command
17 * * * * user rsync -av http://hostname.com/ >> rsync.log
would grab files from the server at that location and append the details to rsync.log at the 17th minute of every hour. Right now, though, I can't seem to get rsync to fetch files from a web server.
Another option using wget is:
wget -Nrb -np -o wget.log http://hostname.com/
where -N re-downloads only files newer than the timestamp on the local version, -b sends the process to the background, -r recurses into directories, and -o specifies a log file. This works with an arbitrary web server. -np makes sure it doesn't ascend into a parent directory, which would otherwise effectively spider the entire server's content.
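Tied back to cron, an hourly entry could look something like this (a sketch; in a cron job the -b background flag is unnecessary, and the log path is just an example):
0 * * * * wget -Nr -np -o /var/log/wget-mirror.log http://hostname.com/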
More details, as usual, will be in the man pages of rsync or wget.

Cron job in Ubuntu

This should be pretty straightforward, but I can't seem to get it working despite reading several tutorials via Google, Stack Overflow and the man page.
I created a cron job to run every minute (for testing), and all it basically does is print the date.
crontab -l
* * * * * /var/test/cron-test.sh
The cron-test.sh file is:
echo "$(date): cron job run" >> test.log
After waiting many minutes, I never see a test.log file.
I can call cron-test.sh manually and get it to output and append.
I'm wondering what I'm missing? I'm also doing this as root. I wonder if I'm misunderstanding something about root's location? Is my path messed up because it's appending some kind of home directory to it?
Thanks!
UPDATE -----------------
It does appear that I'm not following something with the directory path. If I change directory to root's home directory:
# cd
I see my output file "test.log" with all the dates printed out every minute.
So, I will update my question to be: what am I misunderstanding about the path? Is there a term I need to use to have it start from the root directory?
Cheers!
UPDATE 2 -----------------
Ok, so I got what I was missing.
The crontab entry was set up correctly; it was finding the script relative to the root of the filesystem, i.e.:
* * * * * /var/test/cron-test.sh
But the output path inside cron-test.sh was not relative to the root of the filesystem. Thus, when root ran the script, it dumped the log into root's home directory. My thinking was that since the script lived in /var/test/, the file would also be dumped in /var/test/.
Instead, I need to set the full location in the script file so it writes the log out correctly.
echo "$(date): cron job run" >> /var/test/test.log
And that works.
You have not provided any path for test.log, so it will be created in the current working directory (which is the home directory of the user by default).
You should update your script and provide the full path, e.g:
echo "$(date): cron job run" >> /var/log/test.log
To restate the answer you gave yourself more explicitly: cron jobs are started in the home directory of the executing user, in your case root. That's why a relative file path ended up in ~root.
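If you want a script not to depend on cron's working directory at all, another common pattern (a sketch, not taken from the answers above) is to change into the script's own directory first:
#!/bin/bash
# Change to the directory containing this script so relative paths resolve there
cd "$(dirname "$0")" || exit 1
echo "$(date): cron job run" >> test.log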
