I'm trying to create automated backups of the MySQL databases on my virtual host to my NAS storage.
I'm only just starting to learn shell commands, so please bear with me. What I've found so far is:
mysqldump \
-uusername \
-ppassword \
--opt database_name |
gzip -c |
ssh user@ipaddress \
"cat > /path-to-the-directory-on-nas/$(date +%Y-%m-%d_%H.%I.%S).sql.gz"
but this seems to return the following error:
-bash: /path-to-the-directory-on-nas/$(date +%Y-%m-%d_%H.%I.%S).sql.gz: No such file or directory
Does anyone know how to overcome this problem and actually save it to the designated storage?
Change
cat > /path-to-the-directory-on-nas/$(date +%Y-%m-%d_%H.%I.%S).sql.gz
to
cat > /path-to-the-directory-on-nas/`date +%Y-%m-%d_%H.%I.%S`.sql.gz
Make sure the folder already exists; at least, this worked on my Ubuntu box :)
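Putting the two fixes together, a complete one-liner might look like this (a sketch, assuming the NAS directory already exists; note that %I is the 12-hour clock hour, so if you intended minutes you probably want %M):
mysqldump -uusername -ppassword --opt database_name |
gzip -c |
ssh user@ipaddress "cat > /path-to-the-directory-on-nas/`date +%Y-%m-%d_%H.%M.%S`.sql.gz"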
Check that the directory /path-to-the-directory-on-nas/ exists on the remote server.
If it is missing, you can create it over ssh with the following command:
ssh user@ipaddress mkdir -p /path-to-the-directory-on-nas/
(the -p flag creates any intermediate directories in the tree that are missing)
If you wanted to create the directory with a timestamp, you could do the following:
ssh user@ipaddress "mkdir -p /path-to-the-directory-on-nas/$(date '+%Y%m%d')/"
If you choose to include a timestamp in the directory path, you need to include it in the path that your mysqldump command uses.
Example:
Successfully creating a file in a directory that exists on the remote system (/var/tmp):
$ date | ssh user@ipaddress 'cat > /var/tmp/file.txt'
$ ssh user@ipaddress cat /var/tmp/file.txt
Fri Oct 12 19:39:16 EST 2012
Failing with the same error you are getting, trying to write to a directory that doesn't exist:
$ date | ssh user@ipaddress 'cat > /var/Xtmp/file.txt'
bash: /var/Xtmp/file.txt: No such file or directory
You should debug further. First, try writing a plain file to the target path:
cat > /path-to-the-directory-on-nas/test.sql.gz
After that, check whether the date command substitution works:
echo $(date +%Y-%m-%d_%H.%I.%S)
Then you'll know whether the path exists or whether the date substitution fails. From your error message it seems like the date is the problem, but you need to be sure first. Then you could assign the date to a variable:
#!/bin/bash
filename=$(date +%Y-%m-%d_%H.%I.%S)
mysqldump \
-uusername \
-ppassword \
--opt database_name |
gzip -c |
ssh user@ipaddress \
"cat > /path-to-the-directory-on-nas/$filename.sql.gz"
Replace
ssh user@ipaddress \
"cat > /path-to-the-directory-on-nas/$(date +%Y-%m-%d_%H.%I.%S).sql.gz"
with
ssh user@ipaddress \
"cat > /path-to-the-directory-on-nas/"$(date +%Y-%m-%d_%H.%I.%S)".sql.gz"
Related
I have 3 Linux machines, namely client, server1 and server2.
I am trying to achieve something like this: I would like to copy the latest file in a particular directory of server1 to server2. I will not be doing this by logging into server1 directly; I always log on to the client machine first. Let me list the step-by-step approach that happens now:
Log on to client machine using ssh
SSH into server1
Find the latest file in the directory /home/user1 on server1 using the command ls -Art /home/user1 | tail -n 1
SCP the latest file to the /home/user2 directory of server2
This manual operation works just fine. I have automated it using a one-line script like the one below:
ssh user1@server1 scp -o StrictHostKeyChecking=no /home/user1/test.txt user2@server2:/home/user2
But as you can see, I am uploading the file /home/user1/test.txt. How can I modify this script to always upload the latest file in the directory /home/user1?
If zsh is available on server1, you could use its advanced globbing features to find the most recent file to copy:
ssh user1@server1 \
"zsh -c 'scp -o StrictHostKeyChecking=no /home/user1/*(om[1]) user2@server2:/home/user2'"
The quoting is important in case your remote shell on server1 is not zsh.
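For reference, the glob qualifiers do the real work here: om orders the matches by modification time, newest first, and [1] keeps only the first match. You can test the expression on its own (zsh only):
# prints the most recently modified entry in /home/user1
print /home/user1/*(om[1])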
You can use SSH to list the most recent file and then use scp to copy it, as follows (note that ls prints bare filenames, so the path is prefixed with $REMOTE_DIR for the copy):
FILE=$(ssh user1@server1 "ls -tp $REMOTE_DIR | grep -v / | grep -m1 \"\""); scp user1@server1:$REMOTE_DIR/$FILE user2@server2:/home/user2
Before launching this command, set the remote directory in which to search for the most recently modified file:
REMOTE_DIR=/home/user1
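As a self-contained sketch of the same idea (the -3 flag is an addition here; it makes scp route the transfer through the client machine, so server1 does not need direct SSH access to server2):
#!/bin/bash
# find the newest regular file on server1 (-tp sorts newest first and marks
# directories with a trailing /, grep -v / drops them, grep -m1 "" takes the
# first remaining line)
REMOTE_DIR=/home/user1
FILE=$(ssh user1@server1 "ls -tp $REMOTE_DIR | grep -v / | grep -m1 \"\"")
# copy it to server2, routing the data through this client machine
scp -3 user1@server1:$REMOTE_DIR/$FILE user2@server2:/home/user2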
I have a shell script that contains an s3cmd command, on Ubuntu 12.04 LTS.
I configured cron for this shell script; the script runs, but the file is not pushed to S3. When I run the shell script manually, it pushes the file to S3 without any error. I checked the log and found nothing about it. Here is my shell script:
#!/bin/bash
User="abc"
datab="abc_xyz"
pass="abc#123"
Host="abc1db.instance.com"
FILE="abc_rds`date +%d_%b_%Y`.tar.gz"
S3_BKP_PATH="s3://abc/db/"
cd /abc/xyz/scripts/
mysqldump -u $User $datab -h $Host -p$pass | gzip -c > $FILE | tee -a /abc/xyz/logs/app-bkp.log
s3cmd --recursive put /abc/xyz/scripts/$FILE $S3_BKP_PATH | tee -a /abc/xyz/logs/app-bkp.log
mv /abc/xyz/scripts/$FILE /abc/xyz/backup2015/Database/
#END
This is really weird. Any suggestions would be a great help.
Check whether the user configured in the crontab has the correct permissions and keys in its environment.
I am guessing the keys are configured in an env file, as they are not here in the script.
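Cron also runs with a minimal environment: no login shell, a short PATH, and possibly a different HOME, so s3cmd may fail to find either its binary or its ~/.s3cfg. A minimal sketch of a cron-safe setup, assuming s3cmd is installed at /usr/bin/s3cmd and the config lives at /home/abc/.s3cfg (both paths are assumptions):
# crontab: give cron an explicit PATH and capture all output to a log
PATH=/usr/local/bin:/usr/bin:/bin
0 3 * * * /bin/bash /abc/xyz/scripts/backup.sh >> /abc/xyz/logs/cron.log 2>&1
# and inside the script, point s3cmd at its config file explicitly:
/usr/bin/s3cmd --config=/home/abc/.s3cfg --recursive put /abc/xyz/scripts/$FILE $S3_BKP_PATH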
I have opened a remote SSH session from a script, and on the remote server there is a file containing version information.
I am trying to assign that version to a variable and move the current version's contents to a folder named after the version.
The main script is running in Jenkins.
I am doing something like this:
ssh -i /home/user/.ssh/id_rsa -t -t remoteServer<<EOF
cd $WEB_DIR
VERSION=$(cat $WEB_DIR/version.info)
mv -f $WEB_DIR $BACKUP_DIR/$VERSION
exit
EOF
My VERSION variable is always empty. When I run the same commands locally on that server, I get the version value. Something is different over a remote SSH session within a script.
Actually, I found a way to do it in two steps.
WEB_DIR is a local variable set in the main script:
WEB_DIR="/usr/local/tomcat/webapps/ROOT"
OLD_VERSION=$(ssh -i /home/user/.ssh/id_rsa -tt user@remoteServer "cat $WEB_DIR/version.info")
ssh -i /home/user/.ssh/id_rsa -t -t user@remoteServer <<EOF
cd $WEB_DIR
mv -f $WEB_DIR $BACKUP_DIR/$OLD_VERSION
# I am executing more commands in here
exit
EOF
The double quotes in the first command are required if you want the local variable to be expanded.
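The underlying cause, for the record: with an unquoted heredoc delimiter, the local shell expands $(cat $WEB_DIR/version.info) before ssh ever runs, so the cat executes locally, where version.info does not exist, and VERSION ends up empty. Quoting the delimiter defers all expansion to the remote shell. A minimal sketch of that alternative (the backup path is an assumption):
# quoting 'EOF' means nothing is expanded locally; everything runs remotely
ssh -i /home/user/.ssh/id_rsa -tt user@remoteServer <<'EOF'
WEB_DIR=/usr/local/tomcat/webapps/ROOT
VERSION=$(cat $WEB_DIR/version.info)
mv -f $WEB_DIR /usr/local/tomcat/backups/$VERSION
exit
EOF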
I want to copy a directory from a Windows server to a Linux server using pscp.
The target directory name on the Linux server has to be new each time. When I run the command below,
pscp -p -l root -pw mypassword -r C:\ProgramFiles\Mybackups\ root@linux_server:/root/mywindowsbackups/$(date)
the command substitution $(date) doesn't work, since cmd.exe has no such syntax.
Can anyone suggest how I would run this?
Try the following:
:: %date% typically expands to something like "Fri 10/12/2012".
:: The first substitution replaces "/" with "-"; the second strips
:: everything up to the first space (the day of the week),
:: leaving "10-12-2012".
set target_date=%date:/=-%
set target_date=%target_date:* =%
:: Copy the directory to the Linux server, named after this system's date
pscp -p -l root -pw mypassword -r C:\ProgramFiles\Mybackups\ root@linux_server:/root/mywindowsbackups/%target_date%
I have an EC2 instance running and I am able to SSH into it.
However, when I try to rsync, it gives me the error Permission denied (publickey).
The command I'm using is:
rsync -avL --progress -e ssh -i ~/mykeypair.pem ~/Sites/my_site/* root@ec2-XX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com:/var/www/html/
I also tried
rsync -avz ~/Sites/mysite/* -e "ssh -i ~/.ssh/id_rsa.pub" root@ec2-XX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com:/var/www/html/
I just received that same error. I had been consistently able to ssh with:
ssh -i ~/path/mykeypair.pem \
ubuntu@ec2-XX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com
But when using the longer rsync construction, it seemed to cause errors. I ended up enclosing the ssh statement in quotes and using the full path to the key. In your example:
rsync -avL --progress -e "ssh -i /path/to/mykeypair.pem" \
~/Sites/my_site/* \
root@ec2-XX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com:/var/www/html/
That seemed to do the trick.
Below is what I used, and it worked. The source was EC2 and the target was my home machine.
sudo rsync -azvv -e "ssh -i /home/ubuntu/key-to-ec2.pem" ec2-user@xx.xxx.xxx.xx:/home/ec2-user/source/ /home/ubuntu/target/
Use rsync to copy files between servers.
Copy a file from the local machine to a server:
rsync -avz -e "ssh -i /path/to/key.pem" /path/to/file.txt <username>@<ip/domain>:/path/to/directory/
Copy a file from a server to the local machine:
rsync -avz -e "ssh -i /path/to/key.pem" <username>@<ip/domain>:/path/to/directory/file.txt /path/to/directory/
Note: run the command with sudo if you are not a root user.
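One rsync subtlety worth knowing when copying whole directories: a trailing slash on the source copies the directory's contents, while omitting it copies the directory itself. A quick illustration (paths and host are placeholders):
# copies the contents of /src/dir into /dst/dir on the server
rsync -avz -e "ssh -i /path/to/key.pem" /src/dir/ <username>@<ip/domain>:/dst/dir
# copies the directory itself, producing /dst/dir/dir on the server
rsync -avz -e "ssh -i /path/to/key.pem" /src/dir <username>@<ip/domain>:/dst/dir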
After suffering a little bit, I believe this will help:
I am using the below command and it has worked without problems:
rsync -av --progress -e ssh /folder1/folder2/* root@xxx.xxx.xxx.xxx:/folder1/folder2
First consideration:
Use the --rsync-path option if rsync is not in the remote server's default PATH. Locally, I prefer to call rsync through a shell script:
#!/bin/bash
RSYNC=/usr/bin/rsync
$RSYNC [options] [source] [destination]
Second consideration:
Create a public key for communication between the servers in question with the command below. It will not be the same as the one provided by Amazon.
ssh-keygen -t rsa
Do not forget to enable public-key authentication on the target server in /etc/ssh/sshd_config (Ubuntu and CentOS).
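A minimal sketch of that key exchange (the key filename, remote user, and address are assumptions):
# generate a dedicated key pair for the server-to-server sync
ssh-keygen -t rsa -f ~/.ssh/ec2_sync_key
# install the public key on the target instance
ssh-copy-id -i ~/.ssh/ec2_sync_key.pub ec2-user@target-instance
# then rsync with the new key
rsync -av -e "ssh -i ~/.ssh/ec2_sync_key" /folder1/folder2/ ec2-user@target-instance:/folder1/folder2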
Sync files from one EC2 instance to another
http://ask-leo.com/how_can_i_automate_an_sftp_transfer_between_two_servers.html
Use the -v option for verbose output, to better identify errors.
Third consideration:
If both servers are on EC2, restrict access with a security group.
In the destination server's security group, add an inbound rule:
TCP port 22, with the source set to the IP (or security group) of the source server.
This worked for me:
nohup rsync -zravu --partial --progress -e "ssh -i xxxx.pem" ubuntu@xx.xx.xx.xx:/mnt/data /mnt2/ &