I would like an efficient method to scp a huge directory to another machine while simultaneously compressing it. I need only the compressed directory on the destination machine.
Is it possible without having to do this in 2 steps manually?
Use tar:
tar cfz - /path/to/local | ssh user@remotehost 'cd /desired/location; tar xfz -'
The local tar creates and compresses your file structure and writes it to stdout (- for the filename); that output is piped through ssh to a tar on the remote host, which reads the compressed stream from stdin (- as the filename again) and extracts the contents.
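The same pipe can skip the cd by letting the remote tar change directory itself with -C (a minor variant, same placeholder paths as above):

tar cfz - /path/to/local | ssh user@remotehost 'tar xfz - -C /desired/location'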
If you only want the compressed file written out, then
tar ... | ssh user@remotehost 'cat - > file.tar.gz'
Related
The code below tries to ssh from my local server to a remote server and run some commands.
ssh root@$remoteip 'bash -s' <<END3
gcdadirs=`strings binary | egrep '.gcda$'`
for dir in ${gcdadirs[@]}; do
directory=$(dirname ${dir})
echo $dir >> dirstr.txt
mkdir -p $directory
chown $root:$root $directory
chmod 777 $directory
done
END3
The above creates the directory structure on the remote server, which works fine.
I want to tar up the same directory structure, so I'm using the same logic as above.
ssh root@$remoteip 'bash -s' <<END3
touch emptyfile
tar -cf gcda.tar emptyfile
gcdadirs=`strings binary | egrep '.gcda$'`
for dir in ${gcdadirs[@]}; do
tar -rf gcda.tar $dir
done
END3
The above piece of code should create a tar with all the directories returned by the for loop included. I tried the logic by copying the code to the remote server and running it there, and it worked. But if I connect from my local server to the remote server over ssh and try it, it is not entering the for loop: it appends nothing to the tar file created with the empty file in the second line.
Try <<'END3'
Note the quotes around END3, they prevent shell substitutions inside the here-document. You want the $-signs to be transferred to the other side of the ssh connection, not interpreted locally. Same for the backticks.
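A quick way to see the difference locally, before involving ssh at all (output shown as comments):

cat <<END
$HOME
END
# prints e.g. /home/me -- the local shell expanded it

cat <<'END'
$HOME
END
# prints $HOME literally -- exactly what should be sent over ssh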
I have to transfer a file from server A to B and then trigger a script on server B. Server B is a load-balancing server which will redirect you to either server B1 or B2; we don't know which.
I have achieved this as below.
sftp user@Server
put file
exit
Then I execute the code below to trigger the target script:
ssh user@Server "script.sh"
But the problem here, as I said, is that it is a load-balancing server: sometimes I put the file on one server and the script gets triggered on another. How can I overcome this problem?
I am thinking of solutions like the one below:
ssh user@server "Command for sftp; sh script.sh"
That is, if I both put the file and trigger the script within the same server connection, the problem above will not occur. How can I do sftp inside an ssh connection? Or do you have any other suggestions?
If you're just copying a file up and then executing a script, and it doesn't have to happen as two separate commands, you can do:
gzip -c srcfile | ssh user@remote 'gunzip -c >destfile; script.sh'
This gzips srcfile, sends it through ssh to the remote end, gunzips it on that side, then executes script.sh.
If you want more than one file, you can use tar rather than gzip:
tar czf - <srcfiles> | ssh user@remote 'tar xzf -; script.sh'
If you want to get the results back from the remote end and they're files, you can just replicate the tar after the script…
tar czf - <srcfiles> | ssh user@remote 'tar xzf -; script.sh; tar czf - <remotedatafiles>' | tar xzf -
i.e. create a new pipe from ssh back to the local environment. This only works if script.sh doesn't generate any output on stdout. If it generates output, you have to redirect it, for example to /dev/null, to prevent it corrupting the tar stream:
tar czf - <srcfiles> | ssh user@remote 'tar xzf -; script.sh >/dev/null; tar czf - <remotedatafiles>' | tar xzf -
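If you still want to see what script.sh prints, note that only stdout travels down the tar pipe, so redirecting the script's output to stderr keeps it visible without corrupting the stream (same placeholders as above):

tar czf - <srcfiles> | ssh user@remote 'tar xzf -; script.sh >&2; tar czf - <remotedatafiles>' | tar xzf -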
You can use scp command first to upload your file and then call remote command via ssh.
$ scp filename user@machine:/path/to/file && ssh user@machine 'bash -s' < script.sh
This example uploads a local file, but there is no problem running it on server A.
You could create a fifo (named pipe) on the server and start a program that tries to read from it. The program will block without eating any CPU.
From sftp, try to write to the pipe. The write itself will fail, but the listening program will wake up and can check for uploaded files.
# ls -l /home/inp/alertme
prw------- 1 inp system 0 Mar 27 16:05 /home/inp/alertme
# date; cat /home/inp/alertme; date
Wed Jun 24 12:07:20 CEST 2015
<waiting for 'put'>
Wed Jun 24 12:08:19 CEST 2015
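A minimal sketch of the listening side, assuming the pipe lives at /home/inp/alertme and uploads land in /home/inp/uploads (both paths and the processing step are placeholders):

#!/bin/sh
# create the named pipe once (skip if it already exists)
[ -p /home/inp/alertme ] || mkfifo /home/inp/alertme
while true; do
    # blocks here, using no CPU, until something writes to the pipe
    cat /home/inp/alertme > /dev/null
    # woken up: check for uploaded files and process them
    for f in /home/inp/uploads/*; do
        [ -e "$f" ] || continue
        echo "processing $f"    # replace with your real handling
    done
done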
Transfer testing with tar gzip compression versus ssh compression, using pv as a pipe meter (apt-get install pv).
Testing on a site folder with about 80k small images, total folder size about 1.9 GB.
Using the non-standard ssh port 2204.
1) tar gzip, no ssh compression
tar cpfz - site.com | pv -b -a -t | ssh -p 2204 -o cipher=none root@remoteip "tar xfz - -C /destination/"
The pv meter started at 4 MB/sec and degraded to 1.2 MB/sec by the end. pv showed about 1.3 GB of transferred bytes (versus the folder's total size of 1.9 GB).
2) tar no gzip, ssh compression:
tar cpf - site.com | pv -b -a -t | ssh -p 2204 root@remoteip "tar xf - -C /destination/"
The pv meter started at 8-9 MB/sec and degraded to 1.8 MB/sec by the end.
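If compression is not already enabled in your ssh client config, you can request it per connection with -C; a variant of test 2 under that assumption (same placeholder host, port and paths):

tar cpf - site.com | pv -b -a -t | ssh -C -p 2204 root@remoteip "tar xf - -C /destination/"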
A bash scripting question. Suppose we have a calling host H and a remote server S. Is it possible (using an ssh remote invocation of tar from H to S) to uncompress a file archive residing on S (and thus using the computing resources of S) such that the files and directories of the archive are created only on H?
If your tarball is gzipped, you can remotely gunzip it and locally untar it with
ssh S 'gzip -dc < archive.tar.gz' | tar xvf -
For this to actually be fast you need a very fast network and a very slow workstation.
You can't untar the archive remotely unless you have a shared filesystem (NFS, CIFS, ...).
How can I create a .tar archive of a file (say /root/bugzilla) on a remote machine and store it on a local machine? SSH keys are set up, so I can bypass password authentication.
I am looking for something along the lines of:
tar -zcvf Localmachine_bugzilla.tar.gz /root/bugzilla
ssh <host> tar -zcvf - /root/bugzilla > bugzilla.tar.gz
This avoids an intermediate copy.
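If you want a progress indication for a large archive, the same pipe can run through pv (used in the benchmark above), assuming pv is installed locally; dropping -v avoids tar's file listing interleaving with pv's meter:

ssh <host> tar -zcf - /root/bugzilla | pv > bugzilla.tar.gz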
See also this post for a couple of variants: Remote Linux server to remote linux server dir copy. How?
Something like:
ssh <host> tar -zcvf bugzilla.tar.gz /root/bugzilla
scp <host>:bugzilla.tar.gz Localmachine_bugzilla.tar.gz
Or, if you are compressing it just for the sake of the transfer, scp's compression option can be useful:
scp -r -C <host>:/root/bugzilla .
This is going to copy the whole /root/bugzilla directory using compression on the wire.
When I untar Doctrine
-rw-r--r-- 1 root root 660252 2010-10-16 23:06 Doctrine-1.2.0.tgz
I always get these error messages:
root@X100e:/usr/local/lib/Doctrine/stable# tar -xvzf Doctrine-1.2.0.tgz
.
.
.
Doctrine-1.2.0/tests/ViewTestCase.php
Doctrine-1.2.0/CHANGELOG
gzip: stdin: decompression OK, trailing garbage ignored
Doctrine-1.2.0/COPYRIGHT
Doctrine-1.2.0/LICENSE
tar: Child returned status 2
tar: Error is not recoverable: exiting now
The untar operation works, but I always get these error messages.
Any clues as to what I am doing wrong?
I would try to gunzip and untar separately and see what happens:
mv Doctrine-1.2.0.tgz Doctrine-1.2.0.tar.gz
gunzip Doctrine-1.2.0.tar.gz
tar xf Doctrine-1.2.0.tar
It's possible your tar file is not actually gzipped. I just had this same error, but all I had was a plain old tar file. So try just removing the z from your flags. The z flag gunzips your tar file in addition to whatever the other flags request, i.e. try:
tar -xvf Doctrine-1.2.0.tgz
Notice I removed the z from -xvzf
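Before picking flags you can also ask the file utility what the archive really is; it reports the actual format regardless of the extension:

file Doctrine-1.2.0.tgz
# "gzip compressed data ..." -> use tar -xvzf
# "POSIX tar archive"        -> use tar -xvf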
If you got "Error is not recoverable: exiting now", you might have specified incorrect path references:
[me@host ~]$ tar -xvf nameOfMyTar.tar -C /someSubDirectory/
tar: /someSubDirectory: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
[me@host ~]$
Make sure you provide correct relative or absolute directory references, e.g.:
[me@host ~]$ tar -xvf ./nameOfMyTar.tar -C ./someSubDirectory/
./foo/
./bar/
[me@host ~]$
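Note that tar will not create the -C target directory for you; if it might not exist, create it first:

mkdir -p ./someSubDirectory && tar -xvf ./nameOfMyTar.tar -C ./someSubDirectory/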
Try to get your archive using wget. I had the same issue when downloading the archive through a browser. I just copied the archive link and used this command in the terminal:
wget http://PATH_TO_ARCHIVE
The problem is that you do not have bzip2 installed. The tar program relies upon this external program to do compression.
How to install bzip2 depends on the system you are using. For example, on Ubuntu:
sudo apt-get install bzip2
The GNU tar program does not know how to compress an existing file such as user-logs.tar (bzip2 does that). What tar can do is use the external compression programs gzip, bzip2 or xz by opening a pipe to one of them, sending the tar archive through the pipe to the compression utility, which compresses the data it reads from tar and writes the result to the filename which tar specifies.
Alternatively, the tar and compression utility can be the same program: BSD tar does its compression using libarchive (they're not really distinct except in name).
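That internal pipe can also be written out by hand; assuming a logs/ directory as a stand-in, the two commands below produce equivalent archives:

# let tar spawn bzip2 itself
tar -cjf user-logs.tar.bz2 logs/
# or pipe an uncompressed tar stream through bzip2 explicitly
tar -cf - logs/ | bzip2 > user-logs.tar.bz2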
Use sudo:
sudo tar -zxvf xxxxxxxxx.tar.gz
If you get these error messages, an easy way to fix the issue:
1) Remove the downloaded file
2) Download the file again
3) Extract it again: tar -xzvf (or tar -xvf) FreeFileSync**.tar.gz
Had the same error code:
tar -xvfz processed.json.gz
tar: z: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
It turned out the file had the .gz extension but wasn't actually compressed. I just removed the .gz and opened it.