Objective: I am trying to copy a folder and its files from HOST_C to HOST_A. Due to key restrictions, ssh/scp can only be done through HOST_B.
Infrastructure:
HOST_A<-->HOST_B<-->HOST_C
Current procedure:
ssh to HOST_B
scp -r the folder from HOST_C to HOST_B
exit the ssh session on HOST_B
scp -r the folder from HOST_B to HOST_A
ssh to HOST_B again
rm -r the folders created there
I have made some attempts using ProxyCommand but without luck.
Any suggestions are welcome
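For reference, since ProxyCommand was already attempted: recent OpenSSH (7.3+) can chain the hops with ProxyJump, so the copy becomes a single scp run on HOST_A. A minimal sketch, assuming host_b and host_c are the resolvable names of HOST_B and HOST_C:

```shell
# ~/.ssh/config on HOST_A:
#   Host host_c
#       ProxyJump host_b
# (older clients: ProxyCommand ssh -W %h:%p host_b)

# With that in place, one command pulls the folder straight through HOST_B:
scp -r host_c:/path/to/folder /local/destination/

# Or without touching the config file:
scp -r -o ProxyJump=host_b host_c:/path/to/folder /local/destination/
```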
You could connect from host B to host C with ssh, create a tar archive of the folder to copy, send the output to stdout, and pipe all of this into a second ssh session that connects to host A and unpacks the tar archive received on stdin:
ssh host_C "cd /somewhere; tar czpf - folder" | ssh host_A "cd /somewhere; tar xzpf -"
We need to move all the files from a particular folder to an FTP server. We wrote a script but are getting a "directory not found" error. Below is the script.
#!/bin/sh
HOST='192.168.1.100'
USER='ramesh'
PASSWD='rameshftp'
ftp -inv $HOST << EOF
user $USER $PASSWD
cd /home/Ramesh
put *.zip
bye
EOF
Our requirement is to copy all the files that reside in some directory on a SUSE Linux server to an FTP server, e.g. copy all the contents of the /home/Ramesh directory to the FTP server.
You can do this in one line with ncftp:
ncftpput -u username -p password ftp-host-name /path/to/remote/dir /path/to/local/dir/*
See http://linux.die.net/man/1/ncftp for more info
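If the local directory also contains subdirectories, ncftpput's -R option (recursive) can be added; a sketch with placeholder credentials and paths:

```shell
# -R recurses into subdirectories; -u/-p pass the credentials
ncftpput -R -u username -p password ftp-host-name /path/to/remote/dir /path/to/local/dir
```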
The code below tries to ssh from my local server to a remote server and run some commands:
ssh root@$remoteip 'bash -s' <<END3
gcdadirs=`strings binary | egrep '.gcda$'`
for dir in ${gcdadirs[@]}; do
directory=$(dirname ${dir})
echo $dir >> dirstr.txt
mkdir -p $directory
chown $root:$root $directory
chmod 777 $directory
done
END3
The above creates a directory structure on the remote server, which works fine.
I want to tar up the same directory structure, so I am using the same logic as above:
ssh root@$remoteip 'bash -s' <<END3
touch emptyfile
tar -cf gcda.tar emptyfile
gcdadirs=`strings binary | egrep '.gcda$'`
for dir in ${gcdadirs[@]}; do
tar -rf gcda.tar $dir
done
END3
The above piece of code should create a tar archive containing all the directories returned by the for loop. I tried the logic by copying the code to the remote server and running it there, and it worked. But when I connect from my local server to the remote server via ssh, it does not enter the for loop: nothing is appended to the tar file created with the empty file in the second line.
Try <<'END3'
Note the quotes around END3: they prevent shell substitutions inside the here-document. You want the $-signs to be transferred to the other side of the ssh connection, not interpreted locally. The same goes for the backticks.
Extracted from the comments as the accepted answer. Posting as community wiki
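The effect of quoting the delimiter can be demonstrated locally, without any ssh; a minimal sketch where an inner `bash -s` stands in for the remote shell:

```shell
#!/bin/sh
VAR=local_value

# Unquoted delimiter: the local shell expands $VAR before bash -s sees it
unquoted=$(bash -s <<END
echo $VAR
END
)

# Quoted delimiter: the here-document is passed through verbatim, so the
# inner bash (standing in for the remote shell) does the expansion itself
quoted=$(VAR=remote_value bash -s <<'END'
echo $VAR
END
)

echo "unquoted: $unquoted"   # unquoted: local_value
echo "quoted:   $quoted"     # quoted: remote_value
```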
I have a server "B" which can SCP files to/from server "A" and can also SCP files to/from server "C".
i.e.
A <-----> B <-----> C
Server "A" and server "C" cannot reach each other. Only server B can reach both.
I would like to transfer a file from A to C without (or minimal) storage on server B.
Is there a way of piping files across from A to C without storing it in B or with minimal steps?
Thanks.
From scp(1):
DESCRIPTION
... Copies between two remote hosts
are also permitted.
scp host1:foo.txt host2:foo.txt
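One caveat: by default, remote-to-remote scp makes host1 connect to host2 directly, which is exactly what this topology forbids. OpenSSH's -3 flag routes the data through the machine running the command (here B) instead:

```shell
# Run on B: the file travels A -> B -> C, streamed through B rather
# than requiring A and C to reach each other
scp -3 userA@A:/source/file userC@C:/dest/file
```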
You can do this without scp if you like. Log into machine 'B' and run this:
ssh userA@A 'cat /source/file' | ssh userC@C 'cat > /dest/file'
You should set up one or both of these ssh instances to use a key for login, so that you're not prompted for passwords by two ssh instances at the same time.
If you want the file copy process to be a little more error-proofed, or if you want to transfer more than one file at a time, you can use tar:
ssh userA@A 'cd /source/dir && tar cf - file1 file2...' |
ssh userC@C 'cd /dest/dir && tar xvf -'
If you'd rather run the command from A, then something like this should work:
tar cf - file... | ssh userB@B 'ssh userC@C "cd /dest/dir && tar xvf -" '
You could do it with a tunnel:
# Open a tunnel to server C
$ ssh -L 2222:<server C>:22 -N -l user <server B> &
# Copy the file to server C
$ scp -P 2222 <file> localhost:<remote filename>
Note that the tunnel is still running after step 2.
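Since the backgrounded ssh from step 1 keeps running, it is worth tearing it down once the copy is done; a small sketch of the same tunnel with explicit cleanup (file and server names are placeholders):

```shell
# Open the tunnel in the background and remember its PID
ssh -L 2222:serverC:22 -N -l user serverB &
TUNNEL_PID=$!

# Copy the file through the tunnel (port option before the arguments)
scp -P 2222 somefile localhost:/remote/path/

# Close the tunnel when finished
kill "$TUNNEL_PID"
```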
I can copy a file over SSH using scp like this:
cd /root/dir1/dir2/
scp filename root@192.168.0.19:$PWD/
But if some directories are absent on the remote server, for example the remote server has only /root/ and lacks dir1 and dir2, then I get an error.
How can I copy a file over SSH while creating any missing directories, and what is the easiest way to do it?
By easiest I mean that the script should obtain the current path only from $PWD, i.e. it must be portable without any changes.
This command will do it:
rsync -ahHv --rsync-path="mkdir -p $PWD && rsync" filename -e "ssh -v" root@192.168.0.19:"$PWD/"
I can create the same directories on the remote server and copy the file to them over SSH using scp like this:
cd /root/dir1/dir2/
ssh -n root@192.168.0.19 "mkdir -p '$PWD'"
scp -p filename root@192.168.0.19:$PWD/
I'm having issues with my Unix FTP script...
It only transfers the first three files in the directory that I lcd into during the FTP session.
Here's the bash script that I'm using:
#!/bin/sh
YMD=$(date +%Y%m%d)
HOST='***'
USER='***'
PASSWD=***
FILE=*.png
RUNHR=19
ftp -inv ${HOST} <<EOF
quote USER ${USER}
quote PASS ${PASSWD}
cd /models/rtma/t2m/${YMD}/${RUNHR}/
mkdir /models/rtma/t2m/${YMD}/
mkdir /models/rtma/t2m/${YMD}/${RUNHR}/
lcd /home/aaron/grads/syndicated/rtma/t2m/${YMD}/${RUNHR}Z/
binary
prompt
mput ${FILE}
quit
EOF
exit 0
Any ideas?
I encountered the same issue: I had to transfer 400K files, but mput * or mput *.pdf would not move all the files in one go.
I tried adjusting the timeout: fails.
I tried -r (recursive): fails.
I tried increasing the data/control timeout in IIS: fails.
I tried -i with prompt turned off in a script: fails.
Finally I used portable FileZilla to connect to the source and transferred all the files.
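For bulk transfers like this, a scriptable alternative to the stock ftp client is lftp, whose mirror command recurses into subdirectories and retries failed files on its own; a minimal sketch with placeholder host and credentials, assuming lftp is installed:

```shell
# mirror -R uploads (reverse-mirrors) the local tree to the remote side
lftp -u username,password ftp.example.com <<'EOF'
mirror -R /path/to/local/dir /path/to/remote/dir
bye
EOF
```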