I have a QNAP NAS running Google Drive sync so that my QNAP, my computers, and Google Drive are all in sync.
When I create a file on my work computer and then open it on the QNAP at home, I get an access denied error on the file I created at work.
If I view the permissions I can see they are set incorrectly. From the QNAP web manager I simply right-click the folder containing my files and set permissions to "Reapply and apply to subfolders/files".
How would one go about doing the above via a cron job that runs, say, every 5 minutes?
I had a similar problem myself and also made a cron job for it.
Start off by making a script in an easy-to-find place.
I used "/share/MD0_DATA/" because all the shares live there.
Create a file like perms.sh and add the following:
#!/bin/bash
# Move into the share you want to fix; stop if the cd fails so we don't touch the wrong directory
cd /share/MD0_DATA/(folder you want to apply this to) || exit 1
# Recursively reset permissions and ownership
chmod -R 775 *
chown -R nobody:nogroup *
I used nobody:nogroup just as an example; you can use any user and group you want.
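Cron will also need permission to execute the script, so make it executable first (assuming the path used above):
chmod +x /share/MD0_DATA/perms.sh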
Now you need to add this script to crontab.
To see what's in your crontab, use:
crontab -l
To edit the crontab, use:
crontab -e
This editor works like vi. If you don't like vi and want to access the file directly, edit:
/etc/config/crontab
Add this line to your crontab:
*/5 * * * * /share/MD0_DATA/perms.sh
The */5 means the script runs at a 5-minute interval.
Then you need to let crontab know about the new commands:
crontab /etc/config/crontab
I hope this helped you.
I configured my server with cPanel, CloudLinux, LiteSpeed, CWAF, CageFS, and CXS.
All my services are running smoothly.
However, I can create a cron job as one user and access other users' files through symlinks.
For example, I can read the config.php file in user2's public_html folder by adding a cron job to user1 as follows:
ln -s /home/user2/public_html/config.php config.txt
When the cron job runs, a symlink named config.txt is created under user1. When we view this config.txt file, the contents of user2's config.php appear.
This is a serious vulnerability; how can I prevent it?
My English is not good. Forgive me.
Thanks.
How exactly are you reading this file after the symlink has been created? Because it doesn't work on any of the cPanel servers I've tested.
Additionally, the cronjob is executed as the user, so I'm not sure how this would allow an escalation to happen, because it would be similar to executing it in a shell anyway.
If you're within user1's jail (su - user1), add a cron job such as:
0 * * * * ln -s /home/user2/public_html/wp-config.php /home/user1/config.txt
When the symlink is actually created and you then do a cat /home/user1/config.txt as user1, you'll end up with 'No such file or directory':
cat: config.txt: No such file or directory
Why? Because you're creating a symlink that points to a file that doesn't exist (within CageFS).
But if you're absolutely sure that it's possible (despite not being able to replicate it), then report it to CloudLinux, because it would clearly be something they'd have to fix.
Heck, I'm surprised you didn't create a ticket with them in the first place, and instead decided to go to Stack Overflow to bring up your issue.
I wrote a backup script for my computer. The backup scenario is like this:
All directories under root are archived into a tar.gz twice a day (at 03:00 and 12:00), and the archives are uploaded to Google Drive with the gdrive app every day at 3 AM.
Here is the script:
#!/bin/bash
#Program: arklab backup script version 2.0
#Author: namil son
#Last modified date: 160508
#Contact: 21100352@handong.edu
#It should be executed as a super user
export LANG=en
MD=`date +%m%d`
TIME=`date +%y%m%d_%a_%H`
filename=`date +%y%m%d_%a_%H`.tar.gz
HOST=$HOSTNAME
backuproot="/local_share/backup/"
backup=`cat $backuproot/backup.conf`
gdriveID="blablabla" #This argument should be manually substituted to google-drive directory ID for each server.
#Start a new backup period at January first and June first.
if [ $MD = '0101' -o $MD = '0601' ]; then
mkdir $backuproot/`date +%y%m`
rm -rf $backuproot/`date --date '1 year ago' +%y%m`
echo $backuproot/`date +%y%m` > $backuproot/backup.conf #Save directory name for this period in backup.conf
backup=`cat $backuproot/backup.conf`
gdrive mkdir -p $gdriveID `date +%y%m` > $backup/dir
awk '{print $2}' $backup/dir > $backup/dirID #Extract the new folder ID and save it for later uploads
rm -f $backup/dir
fi
#make tar ball
tar -g $backup/snapshot -czpf $backup/$filename / --exclude=/tmp/* --exclude=/mnt/* --exclude=/media/* --exclude=/proc/* --exclude=/lost+found/* --exclude=/sys/* --exclude=/local_share/backup/* --exclude=/home/* \
--exclude=/share/*
#upload backup file using gdrive under the path written in dirID
if [ `date +%H` = '03' ]; then
gdrive upload -p `cat $backup/dirID` $backup/$filename
gdrive upload -p `cat $backup/dirID` $backup/`date --date '15 hour ago' +%y%m%d_%a_%H`.tar.gz
fi
Here is the problem!
When this script runs from crontab, it works pretty well except for uploading the tarball to Google Drive, even though the whole script works perfectly when I run it manually. Only the upload step fails when it runs from crontab!
Can anybody help me?
The crontab entry is like this:
0 3,12 * * * sh /local_share/backup/backup2.0.sh &>> /local_share/backup/backup.sh.log
I had the same case.
This is my solution:
Change your gdrive command to the absolute path of the gdrive binary.
Example:
Don't set up the cron entry like this:
0 1 * * * gdrive upload abc.tar.gz
Use the absolute path:
0 1 * * * /usr/local/bin/gdrive upload abc.tar.gz
It will work perfectly.
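Alternatively, a sketch that sets PATH at the top of the crontab so every entry can find gdrive (this assumes a Vixie-style cron that accepts environment assignments in the crontab and that gdrive really lives in /usr/local/bin):
PATH=/usr/local/bin:/usr/bin:/bin
0 3,12 * * * sh /local_share/backup/backup2.0.sh >> /local_share/backup/backup.sh.log 2>&1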
I had the exact same issue with minor differences. I'm using gdrive on a CentOS system. Setup was fine. As root, I set up gdrive. From the command line, 'drive list' worked fine. I used the following blog post to set up gdrive:
http://linuxnewbieguide.org/?p=1078
I wrote a PHP script to do a backup of some directories. When I ran the PHP script as root from the command line, everything worked and uploaded to Google Drive just fine.
So I threw:
1 1 * * * php /root/my_backup_script.php
Into root's crontab. The script executed fine, but the upload to Google Drive wasn't working. I did some debugging; the line:
drive upload --file /root/myfile.bz2
Just wasn't working. The only command-line return was a null string. Very confusing. I'm no unix expert, but I thought that when crontab runs as a user, it runs as that user (in this case root). To test, I did the following, and this is very insecure and not recommended:
I created a file with the root password at /root/.rootpassword
chmod 500 .rootpassword
Changed the crontab line to:
1 1 * * * cat /root/.rootpassword | sudo -kS php /root/my_backup_script.php
And now it works, but this is a horrible solution, as the root password is stored in a plain text file on the system. The file is readable only by root, but it is still a very bad solution.
I don't know why (again, no unix expert) I have to have root's crontab run the command through sudo to make this work. I know the issue is with the gdrive token generated during gdrive setup: when crontab runs, the token doesn't match and the upload fails. But when you have crontab sudo as root and run the PHP script, it works.
I have thought of a possible solution that doesn't require storing the root password in a text file on the system. I am tired right now and haven't tried it. I have been working on this issue for about 4 days, trying various Google Drive backup solutions... all failing. It basically goes like this:
Run the gdrive setup entirely within the PHP/Apache interpreter. This will (perhaps) set up the gdrive token for the apache user. For example:
Create a PHP script at /home/public_html/gdrive_setup.php. This file needs to step through the entire gdrive and token setup.
Run the script in a browser, get gdrive and the token all set up.
Test gdrive; write a PHP script something like:
<?php
// run the drive CLI and echo the last line of its output
$cmd = exec("drive list");
echo $cmd;
Save as gdrive_test.php and run in a browser. If it outputs your google drive files, it's working.
Write up your backup script in php. Put it in a non-indexable web directory and call it something random like 2DJAj23DAJE123.php
Now whenever you pull up 2DJAj23DAJE123.php in a web browser your backup should run.
Finally, edit crontab for root and add:
1 1 * * * wget http://my-website.com/non-indexable-directory/2DJAj23DAJE123.php >/dev/null 2>&1
In theory this should work. No passwords are stored. The only security hole is someone else might be able to run your backup if they executed 2DJAj23DAJE123.php.
Further checks could be added, like checking the system time at the start of 2DJAj23DAJE123.php and making sure it matches the crontab run time before executing. If the times don't match, just exit the script and do nothing.
The above is all theory and not tested. I think it should work, but I am very tired from this issue.
I hope this was helpful and not overly complicated, but Google Drive IS complicated since their switch over in authentication method earlier this year. Many of the posts/blog posts you find online will just not work.
Sometimes the problem occurs because of gdrive's config path: gdrive cannot find its default configuration when run from cron, so to avoid such problems we add the --config flag:
gdrive upload --config /home/<you>/.gdrive -p <google_drive_folder_id> /path/to/file_to_be_uploaded
Source: GDrive w/ CRON
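A crontab entry using this flag might look like the sketch below; the config path, folder ID, and file path are placeholders to substitute with your own:
0 3 * * * /usr/local/bin/gdrive upload --config /root/.gdrive -p <google_drive_folder_id> /path/to/file_to_be_uploaded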
I have had the same issue and fixed it by indicating where the drive binary is.
Ex:
/usr/sbin/drive upload --file xxx..
Scenario: I need to create a cron job that scans through a directory and sftps each file to another machine
bash script : /home/user/sendFiles.sh
cron interval : 1 minute
directory: /home/user/myfiles
sftp destination: 10.10.10.123
Create the cron job
crontab -u user 1 * * * * /home/user/sendFiles.sh
Create the Script
#!/bin/bash
/usr/bin/scp -r user@10.10.10.123:/home/user/myfiles .
#REMOVE FILES AFTER ALL HAVE BEEN SENT
rm -rf *
Problem: I'm not exactly sure if that crontab entry is correct, or how to SFTP an entire directory from the cron job.
If it is going to be executed from a cron job, I'm assuming it's in order to sync the directory.
In that case, I would use rdiff-backup to make an incremental backup. That way, only the things that change will be transferred.
This system will use ssh for transferring the data, but using rdiff-backup instead of a plain scp. The major benefit of doing it this way is speed; it is faster to transfer only the parts that have changed.
This is the command to do a copy over ssh using rdiff-backup:
rdiff-backup /some/local-dir hostname.net::/whatever/remote-dir
Add that to a cron job, making sure the user that executes rdiff-backup has ssh keys set up, so it does not require a password.
(About ssh keys: read about them here: http://www.linuxproblem.org/art_9.html. Once they are set up, try a regular ssh to see if you can log in without a password.)
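A minimal key setup sketch, assuming OpenSSH and the host from the question:
ssh-keygen -t rsa                # generate a key pair (use an empty passphrase for unattended cron use)
ssh-copy-id user@10.10.10.123    # install the public key on the remote machine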
Something like this:
* * * * * rdiff-backup /some/local-dir hostname.net::/whatever/remote-dir
will do the copy every minute. (Your example, 1 * * * *, will execute on the first minute of each hour; that is, once every hour instead of once every minute.)
Keep in mind that this can cause problems if a transfer is huge and hasn't finished before the next one starts. But I guess what you want is to transfer files that are not that big, or your network is fast enough. Otherwise, change it to do the transfer every 5 minutes by using */5 * * * * instead.
And finally, read more about rdiff-backup here: http://www.nongnu.org/rdiff-backup/examples.html
rdiff-backup is a good option; there is also rsync:
rsync -az user@10.10.10.123:/home/user/myfiles .
I notice you are also deleting files; is this simply because you don't want to recopy them? rsync will only copy new or updated files.
You might also be interested in unison, which does a two-way sync.
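A minimal crontab sketch of the rsync variant, assuming rsync is installed at /usr/bin/rsync, key-based ssh is already set up, and /home/user is the local destination (mirroring the command above):
*/5 * * * * /usr/bin/rsync -az user@10.10.10.123:/home/user/myfiles /home/user/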
These are both good answers; if you stick with scp you may want to make a slight change to your script:
#!/bin/bash
/usr/bin/scp -r user@10.10.10.123:/home/user/myfiles .
#REMOVE FILES AFTER ALL HAVE BEEN SENT
cd /home/user/myfiles # make sure you're in the right directory before rm
rm -rf *
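As for the crontab entry itself, a sketch that runs the script every minute for that user (assuming the script is executable and you add the entry with crontab -u user -e):
* * * * * /home/user/sendFiles.sh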
I have made a backup script that works well: it makes a backup zip file and then uploads it via FTP to another server. It's located here: /home/www/web5/backup/backup
Then I decided to put this script into crontab so it runs automatically.
I'm doing (as root):
crontab -e
On the blank row I put:
*/1 * * * * /home/www/web5/backup/backup
Escape key, :wq!, Enter
I set it to run each minute to test it.
Then I went to the FTP folder where the script uploads the files. I'm waiting, but nothing happens: the directory is empty after each refresh in my Total Commander.
But when I execute /home/www/web5/backup/backup manually (as root as well), it works just fine and I see the new file at FTP.
What's wrong? This server is kind of a legacy box, so I might not know everything about it. Where should I check first? The OS is
Linux s090 2.6.18.8-0.13-default
(kind of very old CentOS).
Thanks for any help!
UPD: /home/www/web5/backup/backup has chmod 777
UPD2: /var/log/cron doesn't exist. But /var/log/ directory exists and contains logs of apache, mail, etc.
*/1 may be the problem. Just use *.
* * * * * /home/www/web5/backup/backup
Also, make sure /home/www/web5/backup/backup is executable with chmod 775 /home/www/web5/backup/backup
Check /var/log/cron as well. That may show errors leading to a fix.
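If /var/log/cron doesn't exist (as noted in the update), a simple way to see what cron is doing is to redirect the job's own output to a log file of your choosing, for example:
* * * * * /home/www/web5/backup/backup >> /tmp/backup-cron.log 2>&1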
From Crontab – Quick Reference
Crontab Environment
cron invokes the command from the user’s HOME
directory with the shell, (/usr/bin/sh). cron supplies a default
environment for every shell, defining:
HOME=user’s-home-directory
LOGNAME=user’s-login-id
PATH=/usr/bin:/usr/sbin:.
SHELL=/usr/bin/sh
Users who desire to have their .profile executed must explicitly do so
in the crontab entry or in a script called by the entry.
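In practice, that means you can source your profile in the crontab entry itself; a sketch for this job (assuming a Bourne-compatible shell and that the needed environment lives in ~/.profile):
* * * * * . $HOME/.profile; /home/www/web5/backup/backup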
This should be pretty straightforward, but I can't seem to get it working despite reading several tutorials via Google, Stack Overflow, and the man page.
I created a cron job that runs every minute (for testing), and all it basically does is spit out the date.
crontab -l
* * * * * /var/test/cron-test.sh
The cron-test.sh file is:
echo "$(date): cron job run" >> test.log
After waiting many minutes, I never see a test.log file.
I can call cron-test.sh manually and get it to output and append.
I'm wondering what I'm missing. I'm also doing this as root. I wonder if I'm misunderstanding something about root's location? Is my path messed up because it's appending some kind of home directory to it?
Thanks!
UPDATE -----------------
It does appear that I'm not following something with the directory path. If I change directory to root's home directory:
# cd
I see my output file "test.log" with all the dates printed out every minute.
So, I will update my question to be: what am I misunderstanding about the path? Is there something I need to add to have it start from the root directory?
Cheers!
UPDATE 2 -----------------
OK, so I got what I was missing.
The crontab setup was working right; it was finding the script by its path relative to the root directory, i.e.:
* * * * * /var/test/cron-test.sh
But the output path inside "cron-test.sh" was not set relative to the root directory. Thus, when "root" ran the script, it dumped the log back into "root's" home directory. My thinking was that since the script was being run from "/var/test/", the file would also be dumped in "/var/test/".
Instead, I need to set the full path in the script file to dump it out correctly:
echo "$(date): cron job run" >> /var/test/test.log
And that works.
You have not provided any path for test.log, so it will be created in the current directory (which is the home directory of the user by default).
You should update your script and provide the full path, e.g.:
echo "$(date): cron job run" >> /var/log/test.log
To restate the answer you gave yourself more explicitly: cron jobs are started in the home directory of the executing user, in your case root. That's why a relative file path ended up in ~root.
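If you prefer to keep relative paths inside the script, an alternative sketch is to cd to the target directory first (using the /var/test path from the question):
#!/bin/bash
# change to the directory the log should live in before writing relative paths
cd /var/test || exit 1
echo "$(date): cron job run" >> test.log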