I'm trying to create a script that will ssh into a server, back up some files, sleep for 3 minutes, then remove the files.
While the remote side is sleeping, the same script comes back to the local machine and rsyncs the file down. Then, when the 3 minutes are up, the file is removed.
I'm trying this so that I don't have to connect twice over ssh.
ssh $site "
tar -zcf $domain-$date.tar.gz $path;
{ sleep 3m && rm -f $domain-$date.tar.gz };
"
rsync -az $site:$domain-$date.tar.gz ~/WebSites/$domain/BackUp/$date;
I tried command grouping with () to create a subshell, but I think the variables would not be read there. I'm not sure.
Your ssh command will sleep for 3 minutes and remove the files, then your script proceeds to try to rsync the files that got removed. There is no easy workaround for having your first ssh command sleep while your own script proceeds to run rsync.
Do either:
ssh into the server twice. After rsync completes, ssh into the server again and remove the files.
Tell rsync to remove the files after it has synced them, by adding the --remove-source-files option to your rsync command.
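For example, a minimal sketch of the second option, assuming the tarball is created in the remote user's home directory as in your command:
```
# Single ssh connection: create the archive remotely
ssh "$site" "tar -zcf $domain-$date.tar.gz $path"

# Let rsync delete the remote tarball once it has been transferred successfully
rsync -az --remove-source-files "$site:$domain-$date.tar.gz" ~/WebSites/"$domain"/BackUp/"$date"
```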
Related
This myprogram.sh script, run from the Windows command line using Cygwin (installed with Chocolatey), with a host alias server01 defined in the .ssh config folder, works fine:
# File myprogram.sh
ssh -p 66622 user@localhost << HERE
ssh server01 << EOF
command1
command2
EOF
HERE
Because I have several servers, I have to build a separate .sh file for each set of commands, so I end up creating a lot of .sh files.
But I've been unable to run the same instructions as a single line from the command line. Is that possible, so I can run this chain of instructions from one place?
#!/bin/bash
array=(server1 server2 server3 .... serverN)
for i in "${array[@]}"
do
echo ${i}
ssh -p 66622 user@${i} "command1"
done
You can change "command1" to "command.sh" if you want each server to run a whole script instead.
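If you would rather not keep a script file at all, the same loop can also be typed as a single command line (the server names here are placeholders):
```
for i in server1 server2 server3; do ssh -p 66622 "user@$i" "command1; command2"; done
```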
I have a bash script in my cron that has a passwordless rsync command to pass files from my local system to a web server.
Bash Script Code:
rsync -avzhe "ssh -p2222" --chmod=Du=rw,Dg=r,Do=r,Fu=rw,Fg=r,Fo=r -p /home/sysadmin/some_file_{0..10}.png username@web.server:public_html/some.directory/
Over the past few days, I have noticed that the connection is sometimes randomly refused. I have correctly set up openssh-client and openssh-server and have successfully set up passwordless ssh login, so I'm not sure what is causing the connection to be refused at random.
Now, I am looking for some code to force the rsync command to rerun until the files are successfully transferred to the web server.
Rsync Code:
RC=1
while [[ $RC -ne 0 ]]
do
rsync -avzhe "ssh -p2222" --chmod=Du=rw,Dg=r,Do=r,Fu=rw,Fg=r,Fo=r -p /home/sysadmin/some_file_{0..10}.png username@web.server:public_html/some.directory/
RC=$?
done
Is this the best method to try and circumvent the issue?
You don't need to store the return code of rsync, just do a loop like
#!/bin/bash
until rsync -avzhe "ssh -p2222" --chmod=Du=rw,Dg=r,Do=r,Fu=rw,Fg=r,Fo=r -p /home/sysadmin/some_file_{0..10}.png username@web.server:public_html/some.directory/; do
sleep 5 # waiting for 5 seconds before re-starting the command.
done
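If you are worried about looping forever while the server stays unreachable, a capped variant is possible; the limit of 10 attempts below is arbitrary:
```
#!/bin/bash
attempts=0
until rsync -avzhe "ssh -p2222" --chmod=Du=rw,Dg=r,Do=r,Fu=rw,Fg=r,Fo=r -p \
      /home/sysadmin/some_file_{0..10}.png username@web.server:public_html/some.directory/; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge 10 ]; then
        echo "rsync still failing after $attempts attempts, giving up" >&2
        exit 1
    fi
    sleep 5  # wait before retrying
done
```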
I have a web server (odin) and a backup server (jofur). On jofur, I can run the following code to rsync my web directories (via key authentication) from odin to jofur:
rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin
If I enter this into the command line, everything rsyncs perfectly:
myuser@jofur:~$ rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin
receiving incremental file list
sent 23 bytes received 1921 bytes 1296.00 bytes/sec
total size is 349557271 speedup is 179813.41
I want this to run every morning, so I edited my crontab to read this:
0 4 * * * rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin
This doesn't work. The following message is deposited in /var/mail/myuser:
Could not create directory '/home/myuser/.ssh'.
Host key verification failed.
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(605) [Receiver=3.0.9]
I'm not sure what this error means. I'm wary of futzing blindly with permissions because I don't want to leave any backdoors open. Any suggestions?
It's hard to tell whether cron is using the wrong rsync binary or whether rsync requires some environment variable that is not being set under cron. Redirect stdout/stderr as shown below and share the contents of the log file.
Also, run "which rsync" from the command line; this will tell you which rsync binary you are using interactively.
0 4 * * * rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin > /tmp/cron_output.log 2>&1
EDIT :
Can you create a shell script called SOME_DIR/cron_job_rsync.sh containing the following? Make sure you set the execute bit.
#!/bin/sh
/usr/sbin/rsync -avz -e ssh backups@odin.mydomain.net:/home/backups /home/myuser/odin
And modify the cronjob as shown below
0 4 * * * SOME_DIR/cron_job_rsync.sh >/tmp/cron_output.log 2>&1
I had a similar issue. In my case, the HOME directory was encrypted.
When your user is logged in, ssh can read known_hosts and your keys just fine.
But when the job runs from cron, cron uses the right user BUT it does not have access to your $HOME/.ssh directory, because it is still encrypted :-(
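One hedged workaround, assuming you are able to keep a copy of the key and known_hosts outside the encrypted home (the /etc/cron-keys path below is just an example), is to point ssh at them explicitly in the cron job:
```
# Hypothetical paths: key and known_hosts stored outside the encrypted $HOME
rsync -avz -e "ssh -i /etc/cron-keys/id_rsa -o UserKnownHostsFile=/etc/cron-keys/known_hosts" \
    backups@odin.mydomain.net:/home/backups /home/myuser/odin
```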
I got the same error as you.
I finally found that the user's home directory is a mount point that changes when the user logs in.
You can use the mount command to check whether your home directory is set up the same way.
So I logged in, ran cd /, then did:
```
cp -ar ${HOME}/.ssh /tmp/
sudo umount ${HOME}
mv /tmp/.ssh ${HOME}
```
This may fail if you do not have write permission on ${HOME}; in that case try sudo or make ${HOME} writable.
After that, everything worked fine.
Please follow the steps below to avoid the error:
http://umasarath52.blogspot.in/2013/09/solved-rsync-not-executing-via-cron.html
I resolved this issue by communicating with the administrators for my server. Here is what they told me:
For advanced security and performance, we use 1H (Hive) which
utilizes a chrooted environment for users. Libraries and binaries
should be copied to the chrooted environment to make them accessible.
They sent me a follow-up email telling me that the relevant packages had been installed. At that point, the problem was resolved. Unfortunately, I didn't get any additional information from them. The host was Arvixe, but I'm guessing that anyone using 1H (Hive) will encounter a similar problem. Hopefully this answer will be helpful.
Use the rrsync script together with a dedicated ssh key as follows:
REMOTE server
mkdir ~/bin
gunzip /usr/share/doc/rsync/scripts/rrsync.gz -c > ~/bin/rrsync
chmod +x ~/bin/rrsync
LOCAL computer
ssh-keygen -f ~/.ssh/id_remote_backup -C "Automated remote backup" #NO passphrase
scp ~/.ssh/id_remote_backup.pub devel@10.10.10.83:/home/devel/.ssh
REMOTE computer
cat id_remote_backup.pub >> authorized_keys
Prepend the following to the newly added line:
command="$HOME/bin/rrsync -ro ~/backups/",no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding
So that the result looks like
command="$HOME/bin/rrsync -ro ~/backups/",no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding ssh-rsa AAA...vp Automated remote backup
LOCAL
Add the following script (with execute permission) to your crontab:
#!/bin/sh
echo ""
echo ""
echo "CRON:" `date`
set -xv
rsync -e "ssh -i $HOME/.ssh/id_remote_backup" -avzP devel#10.10.10.83:/ /home/user/servidor
Source: http://www.guyrutenberg.com/2014/01/14/restricting-ssh-access-to-rsync/
I did several steps to make it work.
Check your paths. For every command you use, run which [command] and use that full path in the crontab.
Open the crontab as the user you want it to run as, so it has access to that user's ssh key.
Add (remember to use which) ssh-agent && [your ssh command] so it can connect over ssh.
If authentication still fails at this point, try generating a passwordless ssh key so you can skip the password prompt.
For debugging, it is useful to add -vvv to the ssh command inside rsync; it makes clear exactly what goes wrong (see the sketch below).
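Putting those points together, a debugging crontab entry might look roughly like this; the binary paths and key path are examples, so substitute the output of which and your own key:
```
0 4 * * * /usr/bin/rsync -avz -e "/usr/bin/ssh -i /home/myuser/.ssh/id_rsa -vvv" backups@odin.mydomain.net:/home/backups /home/myuser/odin >> /tmp/rsync_debug.log 2>&1
```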
Using the correct keyring solved the issue for me. Add the following line to your crontab:
SSH_AUTH_SOCK=/run/user/1000/keyring/ssh
In total, your crontab (edited by calling crontab -e from your terminal) should now look as follows:
SSH_AUTH_SOCK=/run/user/1000/keyring/ssh
0 4 * * * rsync -avz backups@odin.mydomain.net:/home/backups /home/myuser/odin
Background: It turns out that some Linux distributions use a keyring to protect your public-private key pair, so the key pair is password-protected without you ever noticing. Consequently, rsync is not able to open your ssh key for authentication.
Note that I also omitted the -e ssh; I think it is not necessary here.
Further Troubleshooting: rsync does not provide a lot of debugging output. What helped me identify the problem was to put a dummy scp command, which is much more verbose, in my crontab. A crontab entry for troubleshooting may look something like:
* * * * * scp -v backups@odin.mydomain.net:/home/backups/dummy.txt /home/myuser/odin/dummy.txt >> /home/myuser/odin/dummy.txt.log 2>&1
The above command will run every minute (great while debugging) and copy the file /home/backups/dummy.txt to your local machine. All logs (stdout and stderr) are written to /home/myuser/odin/dummy.txt.log. Inspect these logs to see precisely where the error comes from.
Reference: The troubleshooting explained above led me to the solution: https://unix.stackexchange.com/a/332353/395749
I have a web service that runs on several remote Red Hat machines. Whenever I want to update the service, I sync down the new web service source code (written in Perl) from a version control depot (I use Perforce) and restart the service using the newly synced code. Logging in to the remote machines one by one and running the same series of commands manually is tedious, so I wrote a bash script, update.sh, shown below, in order to "do it one time in one place and update all machines". I run this shell script on my local machine, but it doesn't seem to work: only the first command, sudo -u webservice_username -i, is executed, as far as I can tell from the command line on my local machine. (The code below only shows how it updates one of the remote web services. The export P4USER=myname is for the Perforce client.)
#!/bin/sh
ssh myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log'
How do I know that only the first command is executed? Because after I enter the password for ssh on my local machine, it shows:
Your environment has been modified. Please check /tmp/webservice.env.
And it just gets stuck there; it never returns.
As suggested by a commenter, I added -t to ssh:
#!/bin/sh
ssh -t myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log'
This lets the local command line return. But it seems weird: it cannot cd to that "dir" (it says "cd: dir: No such file or directory"), and it also says "p4: command not found". So it looks like the sudo -u command executes with no effect, and the export command has either not executed or executed with no effect.
A detailed local log looks like this:
Your environment has been modified. Please check /tmp/dir/.env.
bash: line 0: cd: dir: No such file or directory
bash: p4: command not found
bash: line 0: cd: bin: No such file or directory
bash: ./prog: No such file or directory
tail: cannot open `../logs/service.log' for reading: No such file or directory
tail: no files remaining
Instead of connecting via ssh and then immediately changing users, can you not use something like ssh -t webservice_username@remotehost1 to connect with the desired username to begin with? That would avoid needing to sudo altogether.
If that isn't a possibility, try wrapping up all of the commands that you want to run in a shell script and store it on the remote machine. If you can get your task working from a script, then your ssh call becomes much simpler and should encounter fewer problems:
ssh myname@remotehost1 '/path/to/script'
For easily updating this script, you can write a short script for your local machine that uploads the most recent version via scp and then uses ssh to invoke it.
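A rough sketch of that setup, assuming the service user can run the script and that /path/to/dir stands in for your real working directory:
```
#!/bin/sh
# restart_service.sh -- stored on remotehost1, run as webservice_username
export P4USER=myname
cd /path/to/dir && p4 sync
cd bin && ./prog --domain=config_file restart
```
And from the local machine, upload the latest copy and invoke it in one go:
```
scp restart_service.sh myname@remotehost1:/tmp/restart_service.sh
ssh -t myname@remotehost1 'sudo -u webservice_username sh /tmp/restart_service.sh'
```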
Note that when you run:
#!/bin/sh
ssh myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log'
Your ssh session runs sudo -u webservice_username -i, waits for it to exit, and only then runs the rest of the commands; it does not start sudo and then run the following commands under that user. This has to do with the context in which the series of commands runs: all of them are executed in the shell of myname@remotehost1, and sudo -u webservice_username -i merely starts a shell for webservice_username without actually running any of your commands in it.
Really, the best solution here is what bta said: write a script, rsync/scp it to the destination, and then run it using sudo.
The export command simply does not persist across an ssh invocation like this. What you want to do is modify ~/.bashrc on the remote host; it will be sourced each time you log in over ssh.
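For example, a one-off way to add that line to the remote ~/.bashrc over ssh (assuming the variable is all you need to persist):
```
# Append the export once; subsequent ssh logins will pick it up from ~/.bashrc
ssh myname@remotehost1 'echo "export P4USER=myname" >> ~/.bashrc'
```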
I have checked for a solution here but cannot seem to find one. I am dealing with a very slow WAN connection, about 300 kb/sec. For my downloads I use a remote box and then download the files to my house. I am trying to run a cron job that rsyncs two directories between my remote and local servers every hour. I got everything working, but if there is a lot of data to transfer, the rsync runs overlap and end up creating two instances of the same file, so duplicate data gets sent.
Instead, I want to call a script that runs my rsync command, but only if rsync isn't already running.
The problem with creating a "lock" file, as suggested in a previous solution, is that the lock file might already exist if the script responsible for removing it terminated abnormally.
This could happen, for example, if the user terminates the rsync process or there is a power outage. Instead, one should use flock, which does not suffer from this problem.
As it happens flock is also easy to use, so the solution would simply look like this:
flock -n lock_file -c "rsync ..."
The command after the -c option is only executed if there is no other process holding a lock on the lock_file. If the locking process terminates for any reason, the lock on the lock_file is released. The -n option tells flock to be non-blocking, so if another process already holds the lock, nothing will happen.
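For example, as a crontab entry (the lock file path is arbitrary; flock will create it if it does not already exist):
```
0 * * * * flock -n /tmp/rsync_backup.lock -c "rsync -avz /data/ otherhost:/data/"
```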
In the script you can create a "lock" file. If the file exists, the cron job should skip the run; otherwise it should proceed. Once the script completes, it should delete the lock file.
if [ -e /home/myhomedir/rsyncjob.lock ]
then
echo "Rsync job already running...exiting"
exit
fi
touch /home/myhomedir/rsyncjob.lock
#your code in here
#delete lock file at end of your job
rm /home/myhomedir/rsyncjob.lock
To use the lock file example given in the answer above, a trap should be used to ensure that the lock file is removed when the script exits for any reason.
if [ -e /home/myhomedir/rsyncjob.lock ]
then
echo "Rsync job already running...exiting"
exit
fi
touch /home/myhomedir/rsyncjob.lock
#delete lock file at end of your job
trap 'rm /home/myhomedir/rsyncjob.lock' EXIT
#your code in here
This way the lock file will be removed even if the script exits before the end of the script.
A simple solution without using a lock file is to just do this:
pgrep rsync > /dev/null || rsync -avz ...
This will work as long as it is the only rsync job you run on the server, and you can then run this directly in cron, but you will need to redirect the output to a log file.
If you do run multiple rsync jobs, you can get pgrep to match against the full command line with a pattern like this:
pgrep -f rsync.*/data > /dev/null || rsync -avz --delete /data/ otherhost:/data/
pgrep -f rsync.*/www > /dev/null || rsync -avz --delete /var/www/ otherhost:/var/www/
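As a usage example, here is what one of those guards might look like as a crontab entry with the output redirected to a log file, as mentioned above (the paths are illustrative):
```
0 * * * * pgrep -f 'rsync.*/data' > /dev/null || rsync -avz --delete /data/ otherhost:/data/ >> /home/myuser/rsync_data.log 2>&1
```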
As a definitive solution, kill any rsync processes that are still running before the new one starts in the crontab (a sketch follows below).
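A hedged sketch of that approach as a crontab entry; note that pkill aborts any transfer still in progress, so the lock-based solutions above are usually safer:
```
# Kill any rsync left over from the previous run, then start a fresh transfer
0 * * * * pkill -x rsync; rsync -avz /data/ otherhost:/data/
```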