I have a server "B" which can SCP files to/from server "A" and can also SCP files to/from server "C".
i.e.
A <-----> B <-----> C
Server "A" and server "C" cannot reach each other. Only server B can reach both.
I would like to transfer a file from A to C with no (or minimal) storage on server B.
Is there a way of piping files across from A to C without storing them on B, or with minimal steps?
Thanks.
From scp(1):
DESCRIPTION
... Copies between two remote hosts
are also permitted.
scp host1:foo.txt host2:foo.txt
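Note that by default `scp host1:foo.txt host2:foo.txt` makes host1 connect to host2 directly, which won't work here since A and C can't reach each other. Newer OpenSSH versions of scp have a -3 option that routes the data through the machine running scp, so running on B (user names and paths below are placeholders) the file is relayed through B without being stored there:

```
# Run on B: copy from A to C, relaying the data through B
scp -3 userA@A:/source/file userC@C:/dest/file
```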
You can do this without scp if you like. Log into machine 'B' and run this:
ssh userA@A 'cat /source/file' | ssh userC@C 'cat > /dest/file'
You should set up one or both of these ssh instances to use a key for login, so that you're not prompted for passwords by two ssh instances at the same time.
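A minimal sketch of that key setup, run on B (user and host names are placeholders):

```shell
# Generate a key pair on B with no passphrase, for unattended piping
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519_relay

# Install the public key on A and C so the piped ssh commands
# can log in without prompting for a password
ssh-copy-id -i ~/.ssh/id_ed25519_relay.pub userA@A
ssh-copy-id -i ~/.ssh/id_ed25519_relay.pub userC@C
```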
If you want the file copy process to be a little more error-proofed, or if you want to transfer more than one file at a time, you can use tar:
ssh userA@A 'cd /source/dir && tar cf - file1 file2...' |
ssh userC@C 'cd /dest/dir && tar xvf -'
If you'd rather run the command from A, then something like this should work:
tar cf - file... | ssh userB@B 'ssh userC@C "cd /dest/dir && tar xvf -" '
You could do it with a tunnel:
# Open a tunnel to server C
$ ssh -L 2222:<server C>:22 -N -l user <server B> &
# Copy the file to server C
$ scp -P 2222 <file> localhost:<remote filename>
Note that the tunnel is still running after step 2.
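Since the tunnel was backgrounded with &, its PID is available in $!, so a sketch for tearing it down once the copy finishes (placeholders as above):

```
ssh -L 2222:<server C>:22 -N -l user <server B> &
tunnel_pid=$!
scp -P 2222 <file> localhost:<remote filename>
# Close the tunnel when done
kill "$tunnel_pid"
```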
Related
I have 3 Linux machines, namely client, server1 and server2.
I am trying to achieve the following: copy the latest file in a particular directory of server1 to server2. I will not be doing this by logging into server1 directly; I always log on to the client machine first. Here is the step-by-step approach that happens now:
Log on to client machine using ssh
SSH into server1
Go to the directory /home/user1 on server1 and find the latest file using the command ls -Art /home/user1 | tail -n 1
SCP the latest file to /home/user2 directory of server2
This manual operation is happening just fine. I have automated this using a one line script like below:
ssh user1@server1 scp -o StrictHostKeyChecking=no /home/user1/test.txt user2@server2:/home/user2
But as you can see, I am uploading the file /home/user1/test.txt. How can I modify this script to always upload the latest file in the directory /home/user1?
If zsh is available on server1, you could use its advanced globbing features to find the most recent file to copy:
ssh user1@server1 \
"zsh -c 'scp -o StrictHostKeyChecking=no /home/user1/*(om[1]) user2@server2:/home/user2'"
The quoting is important in case your remote shell on server1 is not zsh.
You can use SSH to list the latest file and then use scp to copy it, as follows:
FILE=$(ssh user1@server1 "ls -tp $REMOTE_DIR | grep -v / | head -n 1"); scp user1@server1:"$REMOTE_DIR/$FILE" user2@server2:/home/user2
Before launching this command, set the remote directory in which to search for the most recently modified file:
REMOTE_DIR=/home/user1
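The file-selection part of that pipeline can be checked locally; a sketch, assuming file names contain no newlines:

```shell
# Pick the most recently modified regular file in a directory:
# -t sorts newest first, -p appends / to directories, grep -v / drops them
latest_in() {
    ls -tp "$1" | grep -v / | head -n 1
}

# Demonstrate on a throwaway directory
dir=$(mktemp -d)
touch "$dir/old.txt"
sleep 1
touch "$dir/new.txt"
latest_in "$dir"    # prints: new.txt
rm -rf "$dir"
```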
Objective: I am trying to copy a folder and its files from HOST_C to HOST_A. ssh or scp can only be done through HOST_B due to keys.
Infrastructure:
HOST_A<-->HOST_B<-->HOST_C
Current procedure:
ssh to host_B
scp -r from folder at C to folder on B
exit ssh from B
scp -r from folder on B to folder on A
ssh to host_B again
rm -r folders created
I have made some attempts using ProxyCommand but without luck.
Any suggestions are welcome
You could connect from host B to host C with ssh, create a tar archive of the folder to copy and send the output to STDOUT and pipe all this to a second ssh session which connects to host A and unpacks the tar archive received on STDIN.
ssh host_C "cd /somewhere; tar czpf - folder" | ssh host_A "cd /somewhere; tar xzpf -"
A <--ssh--> B <--ssh--> C. Computer A can NOT copy files to C with scp directly.
I have installed lrzsz on computer C. How can I transfer files between A and C?
You need nc installed on B; then you can use B as a proxy and connect from A directly to C using:
ssh -o ProxyCommand="ssh user1@B nc %h %p 2>/dev/null" user2@C
Replace "user1" and "user2" with your login names on the respective computers.
Replace the outer ssh with scp and user2@C with the source and destination files using the syntax accepted by scp and you're good to go!
For example:
scp -o ProxyCommand="ssh user1@B nc %h %p 2>/dev/null" user2@C:file1.txt .
copies file1.txt from your home directory on C to the current local directory.
I didn't actually test the command lines exactly as written above. For both ssh/scp commands I also added -i with the path to my private key, to avoid ssh and scp asking for passwords.
To make the things pleasant to use, you can write in your ~/.ssh/config on A something like this:
Host = B
User = user1
IdentityFile = ~/.ssh/id_dsa_B
Host = C
User = user2
IdentityFile = ~/.ssh/id_dsa_C
ProxyCommand = ssh user1@B nc %h %p 2>/dev/null
Replace in the sample above "user1" and "user2" with your actual login names on the corresponding machines and "id_dsa_B" and "id_dsa_C" with your private key(s) you use for the two machines. I assume you know how to generate and use public/private keys.
Then you can simply use:
ssh C
scp C:file.txt .
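On OpenSSH 7.3 and later, the nc-based ProxyCommand can be replaced with the built-in ProxyJump directive; a sketch for ~/.ssh/config on A, reusing the names above:

```
Host C
    User = user2
    IdentityFile = ~/.ssh/id_dsa_C
    ProxyJump = user1@B
```

Or equivalently on the command line, without any config file: ssh -J user1@B user2@C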
If B's ssh server allows port forwarding, you can do, for example,
A$ ssh -N -L10022:C:22 B
and leave that running, then in a separate shell,
A$ scp -P 10022 file.txt localhost:
Read more about ssh port forwarding. You don't have to use port 10022; any unused port will work.
I have to transfer a file from server A to server B and then trigger a script on server B. Server B is behind a load balancer that redirects to either server B1 or B2; we don't know which.
I have achieved this as below.
sftp user#Server
put file
exit
then executing the below code to trigger the target script
ssh user#Server "script.sh"
But the problem is, as I said, that it is a load-balanced server: sometimes I put the file on one server and the script gets triggered on the other. How can I overcome this problem?
I am thinking some solutions like below
ssh user#server "Command for sftp; sh script.sh"
i.e. if the put and the trigger happen within the same server connection, the problem mentioned above will not occur. How can I do sftp inside an ssh connection? Or do you have any other suggestions?
If you're just copying a file up and then executing a script, and it can't happen as two separate commands, you can do:
gzip -c srcfile | ssh user#remote 'gunzip -c >destfile; script.sh'
This gzips srcfile, sends it through ssh to the remote end, gunzips it on that side, then executes script.sh.
If you want more than one file, you can use tar rather than gzip:
tar czf - <srcfiles> | ssh user#remote 'tar xzf -; script.sh'
If you want to get the results back from the remote end and they're files, you can just replicate the tar after the script:
tar czf - <srcfiles> | ssh user#remote 'tar xzf -; script.sh; tar czf - <remotedatafiles>' | tar xzf -
i.e. create a new pipe from ssh back to the local environment. This only works if script.sh doesn't generate any output. If it generates output, you have to redirect it, for example to /dev/null in order to prevent it messing up the tar:
tar czf - <srcfiles> | ssh user#remote 'tar xzf -; script.sh >/dev/null; tar czf - <remotedatafiles>' | tar xzf -
You can use scp command first to upload your file and then call remote command via ssh.
$ scp filename user#machine:/path/to/file && ssh user#machine 'bash -s' < script.sh
This example uploads a local file, but there is no problem running it on server A instead.
You could create a fifo (Named Pipe) on the server, and start a program that tries to read from it. The program will block, it won't eat any CPU.
From sftp, try to write to the pipe -- the write will fail, indeed, but the listening program will wake up and can check for uploaded files.
# ls -l /home/inp/alertme
prw------- 1 inp system 0 Mar 27 16:05 /home/inp/alertme
# date; cat /home/inp/alertme; date
Wed Jun 24 12:07:20 CEST 2015
<waiting for 'put'>
Wed Jun 24 12:08:19 CEST 2015
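A minimal local sketch of the listening side: a loop body that blocks on the fifo and reacts each time something is written to it (the fifo path here is a throwaway; in practice use a fixed path like /home/inp/alertme):

```shell
# Create the fifo; readers and writers block until the other side opens it
fifo=$(mktemp -u)
mkfifo "$fifo"

# Stand-in for the sftp 'put' that pokes the pipe
( echo wakeup > "$fifo" ) &

# cat blocks until a writer opens the fifo, then the check runs
cat "$fifo" > /dev/null
echo "upload detected, checking incoming directory..."
rm -f "$fifo"
```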
Transfer testing with tar gzip compression vs. ssh compression, using pv as a pipe meter (apt-get install pv).
Testing on a site folder containing about 80k small images, total folder size about 1.9GB.
Using non-standard ssh port 2204.
1) tar with gzip, no ssh compression:
tar czpf - site.com | pv -b -a -t | ssh -p 2204 -o cipher=none root@remoteip "tar xzf - -C /destination/"
pv's meter started at about 4MB/s and degraded to 1.2MB/s by the end. pv showed about 1.3GB transferred (the folder's total size is 1.9GB).
2) tar without gzip, ssh compression:
tar cpf - site.com | pv -b -a -t | ssh -p 2204 root@remoteip "tar xf - -C /destination/"
pv's meter started at 8-9MB/s and degraded to 1.8MB/s by the end.
I can copy a file via SSH using scp like this:
cd /root/dir1/dir2/
scp filename root@192.168.0.19:$PWD/
But if some directories are absent on the remote server (for example, the remote server has only /root/ and doesn't have dir1 and dir2), then this fails with an error.
How can I copy the file, creating any absent directories over SSH, and what is the easiest way to do it?
By "easiest" I mean that the script should obtain the current path only from $PWD, i.e. it must be portable without any changes.
This command will do it:
rsync -ahHv --rsync-path="mkdir -p $PWD && rsync" -e "ssh -v" filename root@192.168.0.19:"$PWD/"
I can make the same directories on the remote server and then copy the file into them via SSH using scp, like this:
cd /root/dir1/dir2/
ssh -n root@192.168.0.19 "mkdir -p '$PWD'"
scp -p filename root@192.168.0.19:$PWD/