How to do file transfer inside ssh command line? - linux

I have to transfer a file from Server A to Server B and then trigger a script on Server B. Server B is a load-balanced server which redirects you to either Server B1 or B2; we don't know which.
I have achieved this as below.
sftp user@Server
put file
exit
then executing the below code to trigger the target script
ssh user@Server "script.sh"
But the problem here is that, as I said, it is a load-balanced server: sometimes I put the file on one server and the script gets triggered on another. How can I overcome this problem?
I am thinking some solutions like below
ssh user@server "Command for sftp; sh script.sh"
i.e. if I do the put and trigger the script in the same server call, it will not give me the above-mentioned problem. How can I do sftp inside an ssh connection? Otherwise, any other suggestions?

If you're just copying a file up and then executing a script, and it can't happen as two separate commands, you can do:
gzip -c srcfile | ssh user@remote 'gunzip -c >destfile; script.sh'
This gzips srcfile, sends it through ssh to the remote end, gunzips it on that side, then executes script.sh.
If you want more than one file, you can use tar rather than gzip:
tar czf - <srcfiles> | ssh user@remote 'tar xzf -; script.sh'
If you want to get the results back from the remote end and they're files, you can just replicate the tar after the script…
tar czf - <srcfiles> | ssh user@remote 'tar xzf -; script.sh; tar czf - <remotedatafiles>' | tar xzf -
i.e. create a new pipe from ssh back to the local environment. This only works if script.sh doesn't generate any output. If it generates output, you have to redirect it, for example to /dev/null in order to prevent it messing up the tar:
tar czf - <srcfiles> | ssh user@remote 'tar xzf -; script.sh >/dev/null; tar czf - <remotedatafiles>' | tar xzf -
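If you only want script.sh to run when the extraction actually succeeded (an assumption about the desired behaviour), a small variation is to chain the commands with && instead of ;:
tar czf - <srcfiles> | ssh user@remote 'tar xzf - && script.sh'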

You can use the scp command first to upload your file and then call the remote command via ssh.
$ scp filename user@machine:/path/to/file && ssh user@machine 'bash -s' < script.sh
This example is about uploading a local file, but there is no problem running it on server A.

You could create a fifo (named pipe) on the server and start a program that tries to read from it. The program will block; it won't eat any CPU.
From sftp, try to write to the pipe -- you will fail, indeed, but the listening program will wake up and check for uploaded files.
# ls -l /home/inp/alertme
prw------- 1 inp system 0 Mar 27 16:05 /home/inp/alertme
# date; cat /home/inp/alertme; date
Wed Jun 24 12:07:20 CEST 2015
<waiting for 'put'>
Wed Jun 24 12:08:19 CEST 2015
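A minimal sketch of that listener, assuming the fifo path from the example above and a hypothetical check_uploads.sh that processes whatever was uploaded:
# one-time setup on the server
mkfifo /home/inp/alertme
# listener: the read blocks without using CPU until something opens the pipe,
# e.g. an sftp 'put' aimed at it
while true; do
    cat /home/inp/alertme > /dev/null
    /home/inp/check_uploads.sh   # hypothetical: handle the newly uploaded files
done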

Transfer testing with tar gzip compression vs. ssh compression (default settings), using pv as a pipe meter (apt-get install pv).
Testing on a site folder containing about 80k small images; total size of the folder is about 1.9 GB.
Using non-standard ssh port 2204.
1) tar gzip, no ssh compression
tar cpfz - site.com|pv -b -a -t|ssh -p 2204 -o cipher=none root@remoteip "tar xfz - -C /destination/"
The pv meter started at about 4 MB/sec and degraded to 1.2 MB/sec by the end. pv shows about 1.3 GB of transferred bytes (the folder's total size is 1.9 GB).
2) tar without gzip, ssh compression:
tar cpf - site.com|pv -b -a -t|ssh -p 2204 root@remoteip "tar xf - -C /destination/"
The pv meter started at 8-9 MB/sec and degraded to 1.8 MB/sec by the end.

Related

Loop in shell script not working on remote server

The code tries to ssh from my local server to a remote server and run some commands.
ssh root@$remoteip 'bash -s' <<END3
gcdadirs=`strings binary | egrep '.gcda$'`
for dir in ${gcdadirs[@]}; do
directory=$(dirname ${dir})
echo $dir >> dirstr.txt
mkdir -p $directory
chown $root:$root $directory
chmod 777 $directory
done
END3
The above creates a directory structure on the remote server, which is working fine.
I want to tar up the same directory structure, so I'm using the same logic as above.
ssh root@$remoteip 'bash -s' <<END3
touch emptyfile
tar -cf gcda.tar emptyfile
gcdadirs=`strings binary | egrep '.gcda$'`
for dir in ${gcdadirs[@]}; do
tar -rf gcda.tar $dir
done
END3
The above piece of code should create a tar with all the directories returned by the for loop. I tried the logic by copying the code to the remote server and running it there, and it worked. But if I connect over ssh from my local server to the remote server and try it, it does not enter the for loop; nothing gets appended to the tar file created with the empty file in the second line.
Try <<'END3'
Note the quotes around END3, they prevent shell substitutions inside the here-document. You want the $-signs to be transferred to the other side of the ssh connection, not interpreted locally. Same for the backticks.
Extracted from the comments as the accepted answer. Posting as community wiki
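A minimal way to see the difference, using $HOME as a harmless stand-in for the variables in your script:
# unquoted delimiter: $HOME is expanded locally, before ssh sends the script
ssh root@$remoteip 'bash -s' <<END3
echo "expanded on the local side: $HOME"
END3
# quoted delimiter: the here-document is passed verbatim, so the remote bash
# expands $HOME (and backticks, $(...), etc.) instead
ssh root@$remoteip 'bash -s' <<'END3'
echo "expanded on the remote side: $HOME"
END3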

SCP from remote server to another remote server

I have a server "B" which can SCP files to/from server "A" and can also SCP files to/from server "C".
i.e.
A <-----> B <-----> C
Server "A" and server "C" cannot reach each other. Only server B can reach both.
I would like to transfer a file from A to C without (or minimal) storage on server B.
Is there a way of piping files across from A to C without storing it in B or with minimal steps?
Thanks.
From scp(1):
DESCRIPTION
... Copies between two remote hosts
are also permitted.
scp host1:foo.txt host2:foo.txt
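One caveat, assuming a reasonably recent OpenSSH: when run from B, scp may try to open a direct connection from host1 to host2, which won't work here since A and C cannot reach each other. The -3 flag routes the copy through the machine you run it on:
# run on B; -3 makes the data flow A -> B -> C instead of A -> C directly
scp -3 userA@A:/source/file userC@C:/dest/file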
You can do this without scp if you like. Log into machine 'B' and run this:
ssh userA@A 'cat /source/file' | ssh userC@C 'cat > /dest/file'
You should set up one or both of these ssh instances to use a key for login, so that you're not prompted for passwords by two ssh instances at the same time.
If you want the file copy process to be a little more error-proofed, or if you want to transfer more than one file at a time, you can use tar:
ssh userA@A 'cd /source/dir && tar cf - file1 file2...' |
ssh userC@C 'cd /dest/dir && tar xvf -'
If you'd rather run the command from A, then something like this should work:
tar cf - file... | ssh userB@B 'ssh userC@C "cd /dest/dir && tar xvf -" '
You could do it with a tunnel:
# Open a tunnel to server C
$ ssh -L 2222:<server C>:22 -N -l user <server B> &
# Copy the file to server C
$ scp -P 2222 <file> localhost:<remote filename>
Note that the tunnel is still running after step 2.
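When the copy is done you can tear the tunnel down by killing the backgrounded ssh from step 1, for example:
# stop the background tunnel
$ kill %1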

ssh mysqldump from oscent to remote server

I'm trying to create automated backups of the mysql databases from my virtual host to my NAS storage.
I'm only just starting to learn shell commands so please bear with me - what I've found so far is:
mysqldump -uusername -ppassword --opt database_name |
  gzip -c |
  ssh user@ipaddress "cat > /path-to-the-directory-on-nas/$(date +%Y-%m-%d_%H.%I.%S).sql.gz"
but this seem to return the following error:
-bash: /path-to-the-directory-on-nas/$(date +%Y-%m-%d_%H.%I.%S).sql.gz: No such file or directory
Does anyone know how to overcome this problem and actually save it to the designated storage?
Change
cat > /path-to-the-directory-on-nas/$(date +%Y-%m-%d_%H.%I.%S).sql.gz
to
cat > /path-to-the-directory-on-nas/`date +%Y-%m-%d_%H.%I.%S`.sql.gz
Make sure the folder already exists. At least it worked on my Ubuntu :)
Check that the directory /path-to-the-directory-on-nas/ exists on the remote server.
If it is missing you can create it over ssh with the following command:
ssh user@ipaddress mkdir -p /path-to-the-directory-on-nas/
(using -p in case a tree of multiple directories needs to be created)
If you wanted to create the directory with a time stamp you should do the following:
ssh user@ipaddress "mkdir -p /path-to-the-directory-on-nas/$(date +%Y%m%d)/"
If you choose to include a timestamp in the directory path, you need to include it in the path that your mysqldump command uses.
Example:
Successfully creating the file in a directory that exists on the remote system, /var/tmp:
$ date | ssh user@ipaddress 'cat > /var/tmp/file.txt'
$ ssh user@ipaddress cat /var/tmp/file.txt
Fri Oct 12 19:39:16 EST 2012
Failing with the same error you are getting, when trying to write to a directory that doesn't exist:
$ date | ssh user@ipaddress 'cat > /var/Xtmp/file.txt'
bash: /var/Xtmp/file.txt: No such file or directory
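Putting those pieces together, a hedged sketch (paths, credentials and the chosen date formats are assumptions) that creates the timestamped directory first and then dumps into it, so both commands agree on the same path:
# compute the timestamped directory once, locally, so mkdir and the dump agree
backup_dir="/path-to-the-directory-on-nas/$(date +%Y%m%d)"
ssh user@ipaddress "mkdir -p $backup_dir"
mysqldump -uusername -ppassword --opt database_name | gzip -c |
  ssh user@ipaddress "cat > $backup_dir/$(date +%Y-%m-%d_%H.%I.%S).sql.gz"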
You should debug further. First try
cat > /path-to-the-directory-on-nas/test.sql.gz
After that, check whether the date command works:
echo $(date +%Y-%m-%d_%H.%I.%S)
Then you'll know whether the path exists or whether the date part fails. From your error message it seems like the date is the problem, but you need to be sure first. Then you could try assigning the date to a variable:
#!/bin/bash
filename=$(date +%Y-%m-%d_%H.%I.%S);
mysqldump -uusername -ppassword --opt database_name |
  gzip -c |
  ssh user@ipaddress "cat > /path-to-the-directory-on-nas/$filename.sql.gz"
Replace
ssh user@ipaddress
"cat > /path-to-the-directory-on-nas/$(date +%Y-%m-%d_%H.%I.%S).sql.gz"
with
ssh user@ipaddress
"cat > /path-to-the-directory-on-nas/"$(date +%Y-%m-%d_%H.%I.%S)".sql.gz"

linux tar command for remote machine

How can I create a .tar archive of a file (say /root/bugzilla) on a remote machine and store it on a local machine? SSH keys are set up (via ssh-keygen), so I can bypass password authentication.
I am looking for something along the lines of:
tar -zcvf Localmachine_bugzilla.tar.gz /root/bugzilla
ssh <host> tar -zcvf - /root/bugzilla > bugzilla.tar.gz
avoids an intermediary copy.
See also this post for a couple of variants: Remote Linux server to remote linux server dir copy. How?
Something like:
ssh <host> tar -zcvf bugzilla.tar.gz /root/bugzilla
scp <host>:bugzilla.tar.gz Localmachine_bugzilla.tar.gz
Or, if you are compressing it just for the sake of the transfer, scp's compression option can be useful:
scp -r -C <host>:/root/bugzilla .
This is going to copy the whole /root/bugzilla directory using compression on the wire.

Copying files and dirs on remote server while excluding some of them

Server 1 is connected to Server 2 via SSH.
We know this:
I can execute a command such as
ssh server2 "cp -rv /var/www /tmp"
which will copy the entire /var/www dir to /tmp. However, inside /var/www we have the following structure (sample ls output below):
$ ls
/web1
/web2
/web3
file1.php
file2.php
file3.php
How can I execute a cp command that will exclude /web1, /web3, file1.php and file3.php? (Obviously just copying web2 and file2.php by hand is not an option, since there are significantly more files than just these 6.)
Note: I am using this to back up Server 2 prior to rsyncing from Server 1.
The first two posters both have good suggestions about rsync. Here's a more complete outline of the process.
(1) You want to back up server 2 before you sync from server 1, so let's do that with rsync. Here's the command as seen from server 1 (assuming it has access to server 2):
ssh user@server2 "rsync $RSYNC_OPTS /var/www/ /path/to/backup"
(2) With server 2 backed up, let's now sync from server 1 (again, as seen from server 1)
rsync $RSYNC_OPTS /path/to/www/ user@server2:/var/www/
As long as you use sane RSYNC_OPTS, the backup and sync should both be reasonable. Richard had a reasonable suggestion for the options:
RSYNC_OPTS="--exclude-from rsync-exclude.txt --stats -avz --numeric-ids -e ssh"
If you want an accurate reproduction, I'd recommend --delete or --delete-after as well. Be sure to lookup details on any options you're unfamiliar with.
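For example, with the same options, a mirror-style sync (an assumption about what you want) would look like:
# --delete removes files on server2 that no longer exist on server1
rsync $RSYNC_OPTS --delete /path/to/www/ user@server2:/var/www/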
For this you should really be using rsync.
I tend to use an rsync-exclude.txt file to specify what I don't want, as it's more future-proof.
/public_ftp/.ftpquota
/tmp
/var/local/backups/rsyncs
/backup/rsync
/proc
/dev
So a command could be:
rsync --exclude-from rsync-exclude.txt --stats -avz -e ssh \
--numeric-ids /syncfrom/dir user@example.com:/backup/sync-to-dir
Edit:
In the case of a local server you can still use rsync; however, you could also use tar and exclude what you don't want.
(cd dir1;tar --exclude 'web2/*' -cf -) | (cd dir2; tar -xvf -)
or
find dir1 dir2 >exclude-files
(cd dir1;tar --exclude-from exclude-files -cf -) | (cd dir2; tar -xvf -)
I do the same thing when deploying new releases to my webserver. Is it possible for you to use rsync over ssh? rsync allows you to specify an --exclude option and specify either the dirs/files on the command line or via a file. Here's a pretty good writeup that I've used in the past:
http://articles.slicehost.com/2007/10/10/rsync-exclude-files-and-folders
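For this particular layout, a hedged sketch (the destination path is an assumption) that runs rsync locally on server2 over ssh, excluding the unwanted entries on the command line:
# local rsync on server2: back up /var/www to /tmp/www-backup, skipping web1, web3, file1.php and file3.php
ssh server2 "rsync -av --exclude='web1/' --exclude='web3/' --exclude='file1.php' --exclude='file3.php' /var/www/ /tmp/www-backup/"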
Since what you want to do is "copy all files except those", following your example you could do this:
ssh server2 "cp -rv /var/www/!(web1|web3|file1.php|file3.php) /tmp"
But remember that this is a very ugly way to back up your server :p
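One caveat: !(...) is bash's extglob syntax and is typically disabled in non-interactive shells, so the remote command may fail with a syntax error near '('. A sketch of a workaround (enabling extglob before the pattern is parsed):
# -O extglob turns the option on before bash parses the copy command
ssh server2 "bash -O extglob -c 'cp -rv /var/www/!(web1|web3|file1.php|file3.php) /tmp'"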
