rsync will run from the command line, but not in cron

I have a web server (odin) and a backup server (jofur). On jofur, I can run the following code to rsync my web directories (via key authentication) from odin to jofur:
rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin
If I enter this into the command line, everything rsyncs perfectly:
myuser@jofur:~$ rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin
receiving incremental file list
sent 23 bytes received 1921 bytes 1296.00 bytes/sec
total size is 349557271 speedup is 179813.41
I want this to run every morning, so I edited my crontab to read this:
0 4 * * * rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin
This doesn't work. The following message is deposited in /var/mail/myuser:
Could not create directory '/home/myuser/.ssh'.
Host key verification failed.
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(605) [Receiver=3.0.9]
I'm not sure what this error means. I'm wary of futzing blindly with permissions because I don't want to leave any backdoors open. Any suggestions?

It's hard to tell whether cron is using the wrong rsync binary or whether rsync requires some environment variable that is not being set in cron. Please redirect stdout/stderr as shown below and pass on the output of the log file.
Also, try running "which rsync" from the command line; this will tell you which rsync you are using from the command line.
0 4 * * * rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin > /tmp/cron_output.log 2>&1
EDIT:
Can you create a shell script called SOME_DIR/cron_job_rsync.sh containing the following? Make sure you set the execute bit.
#!/bin/sh
/usr/sbin/rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin
And modify the cron job as shown below:
0 4 * * * SOME_DIR/cron_job_rsync.sh >/tmp/cron_output.log 2>&1

I had a similar issue. Mine was that the HOME directory was encrypted.
If your user is logged in, known_hosts works fine.
But when it runs from cron, cron uses the right user, BUT it does not have access to your $HOME/.ssh directory, because it is encrypted :-(
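One possible workaround, sketched below with assumed names (the /etc/cron-ssh path, the id_rsa key name, and the myuser account are not from the original posts): keep an unencrypted copy of the key and known_hosts outside the encrypted home and point ssh at them explicitly. Note this trades away some of the benefit of the encrypted home, so weigh that first.
```
# Hypothetical one-time setup, done while logged in so the encrypted home is mounted
sudo mkdir -p /etc/cron-ssh
sudo cp ~/.ssh/id_rsa ~/.ssh/known_hosts /etc/cron-ssh/
sudo chown -R myuser:myuser /etc/cron-ssh && sudo chmod -R go-rwx /etc/cron-ssh

# Crontab entry pointing ssh at the copies instead of the (encrypted) $HOME/.ssh
0 4 * * * rsync -avz -e "ssh -i /etc/cron-ssh/id_rsa -o UserKnownHostsFile=/etc/cron-ssh/known_hosts" backups@odin.mydomain.net:/home/backups /home/myuser/odin
```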

I got the same error as you.
I finally found that the user's home directory is a mount point, which only gets mounted when you log in.
You can use the mount command to check whether your home directory works the same way.
So I logged in, ran cd /, then did:
```
cp -ar ${HOME}/.ssh /tmp/    # save a copy of .ssh while the per-login home is still mounted
sudo umount ${HOME}          # unmount the per-login home directory
mv /tmp/.ssh ${HOME}         # put .ssh onto the underlying, always-available directory
```
This may fail, because you need write permission on ${HOME}; if you don't have it, try sudo or make ${HOME} writable.
After that, everything worked fine.

Please follow the steps at the link below to avoid the error:
http://umasarath52.blogspot.in/2013/09/solved-rsync-not-executing-via-cron.html

I resolved this issue by communicating with the administrators for my server. Here is what they told me:
For advanced security and performance, we use 1H (Hive) which
utilizes a chrooted environment for users. Libraries and binaries
should be copied to the chrooted environment to make them accessible.
They sent me a follow-up email telling me that the relevant packages had been installed. At that point, the problem was resolved. Unfortunately, I didn't get any additional information from them. The host was Arvixe, but I'm guessing that anyone using 1H (Hive) will encounter a similar problem. Hopefully this answer will be helpful.

Use the rrsync script together with a dedicated ssh key as follows:
REMOTE server
mkdir ~/bin
gunzip /usr/share/doc/rsync/scripts/rrsync.gz -c > ~/bin/rrsync
chmod +x ~/bin/rrsync
LOCAL computer
ssh-keygen -f ~/.ssh/id_remote_backup -C "Automated remote backup" #NO passphrase
scp ~/.ssh/id_remote_backup.pub devel@10.10.10.83:/home/devel/.ssh
REMOTE computer
cat id_remote_backup.pub >> authorized_keys
Prepend the following to the newly added line:
command="$HOME/bin/rrsync -ro ~/backups/",no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding
So that the result looks like
command="$HOME/bin/rrsync -ro ~/backups/",no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding ssh-rsa AAA...vp Automated remote backup
LOCAL
Put the following script (with execute permission) in your crontab:
#!/bin/sh
echo ""
echo ""
echo "CRON:" `date`
set -xv
rsync -e "ssh -i $HOME/.ssh/id_remote_backup" -avzP devel@10.10.10.83:/ /home/user/servidor
Source: http://www.guyrutenberg.com/2014/01/14/restricting-ssh-access-to-rsync/

I did several steps to make it work.
Check your paths. For every command you'll use, check which [command] and use that full path in the crontab.
Open the crontab as the user you want to run it as, so it has access to that user's ssh key.
Add (remember to use which) ssh-agent && [your ssh-command] so it can connect over ssh.
If authentication still fails at this point, try generating a passwordless ssh key. This way you can skip the password prompt.
For debugging it is useful to add -vvv to the ssh command in rsync. It makes it clear what goes wrong; see the sketch below.
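Putting those steps together, a debugging crontab entry might look like the sketch below; the binary paths and key name are assumptions, so substitute whatever which rsync and which ssh report on your system:
```
# Full paths everywhere, verbose ssh for debugging, everything captured to a log
0 4 * * * /usr/bin/rsync -avz -e "/usr/bin/ssh -vvv -i /home/myuser/.ssh/id_rsa" backups@odin.mydomain.net:/home/backups /home/myuser/odin >> /tmp/rsync_cron.log 2>&1
```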

Using the correct keyring solved the issue for me. Add the following line to your crontab:
SSH_AUTH_SOCK=/run/user/1000/keyring/ssh
In total, your crontab (edited by calling crontab -e from your terminal) should now look as follows:
SSH_AUTH_SOCK=/run/user/1000/keyring/ssh
0 4 * * * rsync -avz backups@odin.mydomain.net:/home/backups /home/myuser/odin
Background: It turns out that some Linux distributions use a keyring to protect your public-private key pair, so the key pair is password-protected without you ever noticing. Consequently, rsync is not able to open your ssh key for authentication.
Note that I also omitted the -e ssh; I think it is not necessary here.
Further Troubleshooting: rsync does not provide a lot of debugging output. What helped me identify the problem was to put a dummy scp command, which is much more verbose, in my crontab. A crontab entry for troubleshooting may look something like:
* * * * * scp -v backups@odin.mydomain.net:/home/backups/dummy.txt /home/myuser/odin/dummy.txt >> /home/myuser/odin/dummy.txt.log 2>&1
The above command will run every minute (great for development) and will copy the file /home/backups/dummy.txt to your local machine. All logs (stdout and stderr) are written to /home/myuser/odin/dummy.txt.log. Inspect these logs to see precisely where the error comes from.
Reference: The troubleshooting explained above led me to the solution: https://unix.stackexchange.com/a/332353/395749

Related

SCP not working in crontab but works on commandline

After much research, I couldn't find a solution, so I'm posting this question.
I have computers A and B, both Ubuntu desktop. I want to copy files from A to B. These are the steps I followed:
1. ssh-keygen in computer A
2. Left password blank
3. Copied id_rsa.pub to computer B ~/.ssh/ from computer A
4. Renamed id_rsa.pub to authorized_keys in computer B
5. On computer A I ran scp -i ~/.ssh/id_rsa -r /var/www/abc abc@ip:/home/abc/
If I run step 5 on the command line, it works fine. But when I put the same thing in crontab:
22 10 * * * root scp -i ~/.ssh/id_rsa -r /var/www/abc abc@ip:/home/abc
it does nothing.
I had tried virtually every answer I could find related to the problem. The fix came accidentally:
I typed the username instead of root and it worked. I don't know how, but it worked. Hope this will help people like me.
2 10 * * * root /usr/bin/scp -i /home/username/.ssh/id_rsa -r /var/www/abc abc@ip:/home/abc
2 10 * * * username /usr/bin/scp -i /home/username/.ssh/id_rsa -r /var/www/abc abc@ip:/home/abc
This is my solution, made on a Raspberry Pi running Jessie.
Set up the connection to the server with a public key and no passphrase. You can find tutorials everywhere.
The important thing is to do it as the same user that will create the crontab.
In my case I made the keys as pi (the user on my Raspberry). Make sure you can log in to your server without a password.
Then I created a script that uploads all txt files in a directory to the server every 5 minutes.
For example:
#!/bin/bash
scp /mnt/www/hus/*.txt xxxxxx.se@ssh.xxxxx.se:/www/images/hustemp
Save it as xxxxxxx.sh in your home dir and make it executable (chmod +x xxxxxxx.sh).
Then it's time to create the cron job. I think you have to be in your home dir. Just run crontab -e (no sudo in front) and edit it to what you want. In my case:
*/5 * * * * /home/pi/upload.sh
It works perfectly!
Good Luck
Anders
Why don't you try putting the scp command in a bash script and putting the bash script in cron? Remember to put the shebang in your script: #!/bin/bash (that's generally the path; confirm it by typing which bash in your shell). Also chmod a+x your script to make it executable, try it from bash as ./script.sh, and then put it in the crontab.
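As a minimal sketch of that wrapper (the script name and paths are placeholders, not from the question):
```
#!/bin/bash
# script.sh -- wrapper so cron only has to run one thing; adjust the key and paths to your setup
/usr/bin/scp -i /home/username/.ssh/id_rsa -r /var/www/abc abc@ip:/home/abc
```
Then reference it in the crontab by its full path, e.g. 22 10 * * * /home/username/script.sh.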
Why did the scp command not work in crontab?
The following post does a good job of explaining the different kinds of problems one faces with cron jobs:
https://askubuntu.com/questions/23009/reasons-why-crontab-does-not-work
In your case it's an environment problem. Cron's environment is different from an interactive bash session's.
Hope this helps.
In crontab, you get a bare execution of the command line without the goodies of an interactive shell, that is, a fully populated PATH variable and other bash conveniences such as ~ interpretation (unsure about that last one).
So the rule is: always use full paths in crontab:
22 10 * * * root /usr/bin/scp -i /home/username/.ssh/id_rsa -r /var/www/abc abc@ip:/home/abc
For those struggling with this issue: none of the answers above solved my problem. Actually, you have to run the scp once as root in the terminal, so that you get this message:
The authenticity of host 'XXXX (123.123.123.123)' can't be established.
ECDSA key fingerprint is SHA256:XXXXXXXXXXXxxxxXXXxxXXXxxXXxXxxXXX.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
Then you type "yes" and your next crons will work like a charm.
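In other words, accept the host key once by hand as the same user the cron job runs as (root here). A non-interactive alternative, only if you trust the network path, is to pre-populate known_hosts with ssh-keyscan. A sketch, using the abc@ip placeholders from the question:
```
# One-time interactive run as root: answer "yes" at the prompt so the host key gets stored
sudo ssh abc@ip true

# Non-interactive alternative: append the remote host's keys to root's known_hosts
sudo mkdir -p /root/.ssh
sudo sh -c 'ssh-keyscan ip >> /root/.ssh/known_hosts'
```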
Create a shell script containing the scp command in root's home directory
Make the script executable
Put the script in crontab
PATH=/usr/bin
32 18 * * * cd /root/ ; (time ./infra.sh)
Step 5 doesn't work; maybe steps 3 and 4 didn't work well.
3. Copied id_rsa.pub to computer B ~/.ssh/ from computer A
4. Renamed id_rsa.pub to authorized_keys in computer B
You should use the ssh-copy-id command to copy the .pub file.
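For example, run on computer A (the abc@ip target comes from the question's scp line and is a placeholder here):
```
# Installs A's public key into B's ~/.ssh/authorized_keys with the right permissions
ssh-copy-id -i ~/.ssh/id_rsa.pub abc@ip
```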

sh file not running on cron ubuntu

I am trying to run a shell script from crontab on Ubuntu. I have tried Googling and other links but nothing has helped so far.
This is my crontab:
*/2 * * * * sudo bash /data/html/mysite/site_cleanup.sh
This is the content of my sh file:
#!/bin/sh
# How many days retention do we want ?
DAYS=0
# getting the present day
now=$(date +"%m_%d_%Y")
# Where is the base directory
BASEDIR=/data/html/mysite
#where is the backup directory
BKPDIR=/data/html/backup
# Where is the log file
LOGFILE=$BKPDIR/log/mysite.log
# add to tar
tar -cvzf $now.tar.gz $BASEDIR
mv $now.tar.gz $BKPDIR
# REMOVE OLD FILES
echo `date` Purge Started >> $LOGFILE
find $BASEDIR -mtime +$DAYS | xargs rm
echo `date` Purge Completed >> $LOGFILE
The same script runs from a terminal and gives the desired result.
Generic troubleshooting for noninteractive shell scripts
Put set -x; exec 2>/path/to/logfile at the top of your script to log all subsequent commands to a file as they're run. If this doesn't work, you'll know that your script isn't being run at all; if it does, you'll know where it fails and how.
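Applied to the script from this question, the top of the file would look something like the sketch below (the trace path is just an example):
```
#!/bin/sh
set -x                            # print each command as it is executed
exec 2>/tmp/site_cleanup.trace    # send that trace (and all other stderr) to a file
# ... rest of the original script unchanged ...
```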
If this is a personal crontab
If you're running crontab -e as a user (without sudo), then the crontab being modified is one for commands run with that user's permissions. Check that file permissions allow that user to modify the content in question (which, if these files are in a cgi-bin directory, may require being run by the same user as the web server).
If your intent is to have commands run as root, rather than as your own user, be sure you use sudo when editing the crontab to edit the system crontab instead (but please take care as to your script's correctness in this case -- carelessness such as missing quotes or lack of appropriate precautions in xargs usage can cause a script to delete the wrong files if malicious filenames are created):
sudo crontab -e ## to edit the system (root) crontab
...or, if you're cleaning up files owned by the apache user (for example; check which account is correct for your own operating system and web server):
sudo -u apache crontab -e ## to edit the apache user's crontab
Troubleshooting for a system crontab
Do not attempt to put a sudo command within the commands run by cron; with sudo's default configuration, it requires a TTY (a keyboard and screen) to be attached to a session in order to run. Thus, your crontab line should not contain sudo, but instead should look like the following:
*/2 * * * * bash /data/html/mysite/site_cleanup.sh
Your issue is likely coming from the sudo call in your user-level cron. Unless you've configured sudoers to allow that script to run without a password, it'll hang every time.
So you can look up how to run a script with no password by adding a sudoers rule, remove the sudo call if you aren't doing something in your script that requires superuser permissions, or, as a last-ditch, extremely bad idea, call your script from root's cron by doing sudo crontab -e (or sudo env EDITOR=nano crontab -e if you prefer nano as your editor).
Try adding this line to the crontab of the root user, without the sudo, like this:
*/2 * * * * bash /data/html/mysite/site_cleanup.sh

Linux cron job fails to execute part of script which works in shell

I'm having a problem with a script that writes to a log file during a backup procedure. It tested perfectly when called from the root shell, but fails when run from the cron daemon.
Backup is done over a series of partitions and the on-site admin will rotate the drives in the top dock weekly. In order to know where the most recent backup is located I've included the following lines
sudo hdparm -I /dev/sdb | grep 'Model Number'
sudo hdparm -I /dev/sdb | grep 'Serial Number'
I've tried this with a >> /batch/backup.log and without.
When the bash script is run from the command line, it works beautifully. But when the crontab calls the script the output from these lines is blank.
crontab entry: 00 00 * * * /batch/backup.bat >> /batch/backup.log
I have no idea why other than the possibility that cron can't handle the pipe or the grep or something.
I have isolated the lines in a test.bat but they remain blank.
The backup script uses the hdparm to spin down the drive at the end, but now I wonder if that's not working properly either if cron can't handle hdparm.
That is probably because hdparm is not in the PATH when the script is executed through cron. Although less likely, the same might apply to grep as well.
Try replacing hdparm with /full/path/to/hdparm in your script.
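For example, the two lines from the question would become something like the sketch below; /sbin and /bin are common locations but not guaranteed, so check with which hdparm and which grep first (the sudo itself may still be an issue from cron, which the other answers address):
```
sudo /sbin/hdparm -I /dev/sdb | /bin/grep 'Model Number'
sudo /sbin/hdparm -I /dev/sdb | /bin/grep 'Serial Number'
```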
You need to either put this in the root crontab, or you need to store your password in plain text and pipe it into the sudo command. That second option is obviously NOT RECOMMENDED. See https://askubuntu.com/questions/173924/how-to-run-a-cron-job-using-the-sudo-command
As @Paul hinted, it is also possible to create a directive in /etc/sudoers to override the need for a password for a specific user / host / command combination. See https://askubuntu.com/a/159009
Copying just a little bit from that answer:
If your user is called user and your host is called host you could add these lines to /etc/sudoers:
user host = (root) NOPASSWD: /sbin/shutdown
user host = (root) NOPASSWD: /sbin/reboot
This will allow the user user to run the desired commands on host without entering a password. All other sudoed commands will still require a password.
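Applied to this question, a sudoers line like the sketch below would let the backup script call hdparm through sudo from cron without a password (the path is an assumption; use what which hdparm reports):
```
user host = (root) NOPASSWD: /sbin/hdparm
```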
Edit the crontab entry as below
00 00 * * * /batch/backup.bat 1> /batch/backup.op 2> /batch/backup.err
Standard output will be redirected to /batch/backup.op
Standard error will be redirected to /batch/backup.err
Check the errors in /batch/backup.err and fix them.

ssh from crontab returning 'tcgetattr: Invalid argument'

I'm defining something like this in my crontab:
* * * * * ssh -tt otherhost whoami
And I'm getting the following output:
tcgetattr: Invalid argument
me
Running ssh with fewer -tt options leads to other errors besides tcgetattr.
The solution posted in "why is the `tcgetattr` error seen when ssh is used for dumping the backup file on another server?" doesn't really work well, because in this case I'm using several ssh connections to run monitoring scripts on different hosts and I need to capture output sent to stderr and email it.
Any ideas on how to workaround this?
You could use something like this:
ssh -tt otherhost "your_monitoring_script 2>&1" 2> /dev/null
That way the errors from ssh go in the bucket, but the errors from your script show up on stdout. For that to work you should mark errors from your script with "ERROR:" so that you can find them if your script produces lots of output.

how to write a bash shell script to ssh to a remote machine, change user, export an env variable and run other commands

I have a webservice that runs on multiple different remote Red Hat machines. Whenever I want to update the service, I sync down the new webservice source code, written in Perl, from a version control depot (I use Perforce) and restart the service using that newly synced Perl code. I think it is too tedious to log in to the remote machines one by one and run that series of commands to restart the service manually. So I wrote a bash script, update.sh, like the one below, in order to "do it one time in one place and update all machines". I run this shell script on my local machine. But it doesn't seem to work: only the first command, "sudo -u webservice_username -i", is executed, as far as I can tell from the command line on my local machine. (The code below only shows how it updates one of the remote webservices. The "export P4USER=myname" is for use of the Perforce client.)
#!/bin/sh
ssh myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log'
How do I know only the first command is executed? Well, because after I enter the password for ssh on my local machine, it shows:
Your environment has been modified. Please check /tmp/webservice.env.
And it just gets stuck there; it never returns.
As suggested by a commenter, I added "-t" to ssh:
#!/bin/sh
ssh -t myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log'
This lets the local command line return. But it seems weird: it cannot cd to that "dir"; it says "cd: dir: No such file or directory", and it also says "p4: command not found". So it looks like the sudo -u command executes with no effect, and the export command has either not executed or executed with no effect.
A detailed local log looks like this:
Your environment has been modified. Please check /tmp/dir/.env.
bash: line 0: cd: dir: No such file or directory
bash: p4: command not found
bash: line 0: cd: bin: No such file or directory
bash: ./prog: No such file or directory
tail: cannot open `../logs/service.log' for reading: No such file or directory
tail: no files remaining
Instead of connecting via ssh and then immediately changing users, can you not use something like ssh -t webservice_username@remotehost1 to connect with the desired username to begin with? That would avoid needing to sudo altogether.
If that isn't a possibility, try wrapping up all of the commands that you want to run in a shell script and store it on the remote machine. If you can get your task working from a script, then your ssh call becomes much simpler and should encounter fewer problems:
ssh myname@remotehost1 '/path/to/script'
For easily updating this script, you can write a short script for your local machine that uploads the most recent version via scp and then uses ssh to invoke it.
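A minimal sketch of such a local helper (the script and path names are placeholders, not from the question):
```
#!/bin/sh
# Push the latest copy of the remote-side script, then run it there
scp restart_webservice.sh myname@remotehost1:/home/myname/restart_webservice.sh
ssh myname@remotehost1 'sh /home/myname/restart_webservice.sh'
```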
Note that when you run:
#!/bin/sh
ssh myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log'
Your ssh session runs sudo -u webservice_username -i, waits for it to exit, and then runs the rest of the commands; it does not start sudo and then run the following commands inside it. This has to do with the context in which you're running the series of commands: all of the commands get executed in the shell of myname@remotehost1, and all sudo -u webservice_username -i does is start a shell for webservice_username; it doesn't actually run any of your commands.
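One way to actually run the whole sequence as webservice_username is to hand the entire command string to the shell started by sudo; a sketch (it assumes sudo works for this user over the ssh -t session):
```
ssh -t myname@remotehost1 "sudo -u webservice_username bash -c 'export P4USER=myname; cd dir && p4 sync && cd bin && ./prog --domain=config_file restart'"
```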
Really the best solution here is what bta said: write a script, rsync/scp it to the destination, and then run that using sudo.
The export command simply doesn't work with ssh like this. What you want to do is modify the remote ~/.bashrc; it will be sourced each time you log in over ssh.
