MySQL Dump to tar.gz from remote without shell access - linux

I'm trying to get a dump from MySQL to my local client. This is what I currently have:
mysqldump -u $MyUSER -h $MyHOST -p$MyPASS $db | gzip -9 > $FILE
What I want, though, is a .tar.gz instead of a plain gzip archive. I have shell access on the local client but not on the server, so I can't create the tar remotely and copy it over. Is there a way of piping the gzip output into a .tar.gz? (Currently, the .gz does not get recognized as a tar archive.)
Thanks.

If you are issuing the above command on the client side, the compression is done on the client side: mysqldump connects to the remote server and downloads the data without any compression.
mysqldump -u $MyUSER -h $MyHOST -p$MyPASS $db > filename
tar cfz filename.tar.gz filename
rm filename
Probably some Unix gurus will have a one-liner to do it.
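A one-liner version of the same idea, simply chaining the three steps and reusing the $db and $FILE variables from the question:
mysqldump -u $MyUSER -h $MyHOST -p$MyPASS $db > "$db.sql" && tar czf "$FILE" "$db.sql" && rm "$db.sql"
A temporary dump file is still needed: tar writes each member's size into the member header before the data, so it cannot take a member's contents from a pipe.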

No. The files (yes, plural, since tar is usually used for more than one file) are first placed in a tar archive, and then that is compressed. If you are trying to use the tar command line tool then you will need to save the result in a temporary file and then tar that.
Personally though, I'd rather hit the other side with a cluebat.

mysqldump -u $MyUSER -h $MyHOST -p$MyPASS $db | tar -zcvf $FILE -
Where $FILE is your filename.tar.gz

An archived backup, named by date and time (the \% escaping is for running the line from a crontab, where an unescaped % is treated specially):
/usr/bin/mysqldump -u $MyUSER -h $MyHOST -p$MyPASS $db | gzip -c > /home/backup_`/bin/date +"\%Y-\%m-\%d_\%H:\%M"`.gz
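If you run the same line by hand rather than from cron, drop the backslashes before the % signs, roughly:
/usr/bin/mysqldump -u $MyUSER -h $MyHOST -p$MyPASS $db | gzip -c > /home/backup_$(/bin/date +"%Y-%m-%d_%H:%M").gz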

Related

Creating compressed tar file, with only subset of files, remotely over SSH

I've successfully managed to transfer a tar file over SSH on stdout from a remote system, creating a compressed file locally, by doing something like this:
read -s sudopass
ssh me@remote "echo $sudopass | sudo -S tar cf - '/dir'" 2>/dev/null | XZ_OPT='-6 -T0 -v' xz > dir.tar.xz
As expected this gets me a dir.tar.xz locally which is all of the remote /dir compressed.
I've also managed to figure out how to locally only compress a subset of files, by passing a filelist to tar with -T on STDIN:
find '/dir' -name '*.log' | XZ_OPT='-6 -T0 -v' tar cJvf /root/logs.txz -T -
My main question is: how would I go about doing the first thing (transfer a plain tar remotely, then compress locally) while at the same time telling tar that I only want to do it on a specific subset of files?
When I try combining the two:
ssh me@remote "echo $sudopass | sudo -S find '/dir' -name '*.log' | tar cf
-T -" | XZ_OPT='-6 -T0 -v' xz > cypress_logs.tar.xz
I get errors like:
tar: -: Cannot stat: No such file or directory
I feel like tar isn't liking the fact that I'm both passing it something on STDIN as well as expecting it to output to STDOUT. Adding another - didn't seem to help either.
Also, as a bonus question, if anyone has a better idea on how to pass $sudopass above that would be great, since this method -- while avoiding having the password in the bash history -- makes the sudo password show up in the process list while it's running.
Remember that the f option requires an argument, so when you write cf -T -, I suspect that the -T is getting consumed as the argument to f, which throws off the rest of the command line.
This works for me:
ssh me@remote "echo $password | sudo -S find /tmp/dir -name '*.log' | tar -cf- -T-"
You could also write it like this:
ssh me@remote "echo $password | sudo -S find /tmp/dir -name '*.log' | tar cf - -T-"
But I prefer to always use - for options, rather than legacy tar's weird options without any prefix.
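Putting that together with the local xz step from the question, the whole pipeline would presumably look something like this (untested sketch; note that tar itself still runs without sudo here, as in the original attempt, so it needs read access to the listed files):
ssh me@remote "echo $sudopass | sudo -S find '/dir' -name '*.log' | tar -cf- -T-" | XZ_OPT='-6 -T0 -v' xz > cypress_logs.tar.xz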

How to exclude a specific file in scp linux shell command?

I am trying to execute the scp command in such a way that it copies .csv files from source to sink, except for a few specific CSV files.
For example in the source folder I am having four files:
file1.csv, file2.csv, file3.csv, file4.csv
Out of those four files, I want to copy all files, except file4.csv, to the sink location.
When I was using the scp command below:
scp /tmp/source/*.csv /tmp/sink/
It would copy all four CSV files to the sink location.
How can I achieve the same by using the scp command or through writing a shell script?
You can use rsync with the --exclude switch, e.g.
rsync /tmp/source/*.csv /tmp/sink/ --exclude file4.csv
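If the sink is on a remote host, the same idea can be written with a directory source and rsync's ordered filter rules; user@host here is just a placeholder:
# rules are matched in order: drop file4.csv, keep other *.csv, skip everything else
rsync -av --exclude='file4.csv' --include='*.csv' --exclude='*' /tmp/source/ user@host:/tmp/sink/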
Bash has an extended globbing feature which allows for this. On many installations, you have to separately enable this feature with
shopt -s extglob
With that in place, you can
scp /tmp/source/!(fnord*).csv /tmp/sink/
to copy all *.csv files except fnord.csv.
This is a shell feature; the shell will expand the glob to a list of matching files - scp will have no idea how that argument list was generated.
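A quick way to preview what the shell will hand to scp, using the file names from the question:
shopt -s extglob
echo /tmp/source/!(file4*).csv
With file1.csv through file4.csv present, this prints everything except file4.csv.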
As mentioned in your comment, rsync is not an option for you. The solution presented by tripleee works only if the source is on the client side. Here I present a solution using ssh and tar. tar does have the --exclude flag, which allows us to exclude patterns:
from server to client:
$ ssh user@server 'tar -cf - --exclude "file4.csv" /path/to/dir/*csv' \
| tar -xf - --transform='s#.*/##' -C /path/to/destination
This essentially creates a tarball which is sent over /dev/stdout and piped into a tar extract. To mimic scp we need to strip the leading path with --transform (see U&L). Optionally you can set the destination directory with -C.
from client to server:
We do essentially the same, but reverse the roles:
$ tar -cf - --exclude "file4.csv" /path/to/dir/*csv \
| ssh user@server 'tar -xf - --transform="s#.*/##" -C /path/to/destination'
You could use a bash array to collect your larger set, then remove the items you don't want. For example:
files=( /tmp/src/*.csv )
for i in "${!files[@]}"; do
[[ ${files[$i]} = *file4.csv ]] && unset files[$i]
done
scp "${files[#]}" host:/tmp/sink/
Note that our for loop steps through array indices rather than values, so that we'll have the right input for the unset command if we need it.
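To sanity-check the list before copying, you can print the remaining array elements first:
printf '%s\n' "${files[@]}"
This should list every .csv except file4.csv.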

shell sftp download remote file

How do I download the latest file via sftp on the command line?
The following connects to the server and lists the current directory, but how can I find the last file sorted by filename and download it?
sshpass -p $pass sftp root@$host << EOF
cd /var/www/bak/db
dir
quit
EOF
update
#!/bin/sh
pass="pwd"
host="ftps://host:22"
mkdir /ftp
cd /ftp
curlftpfs $host /ftp -o user=root:$pass
ls
error
Error connecting to ftp: gnutls_handshake() failed: An unexpected TLS packet was received.
Maybe something like this. Get the latest file name and write a get command for it to a batch file:
ssh user@server "find /path/to/dir -type f -printf 'get %p\n' | sort -n | tail -1" > batchfile
And get file:
sftp -b batchfile user@server:/
I checked and it works!
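Adapted to the host, path, and sshpass setup from the question (assuming $host holds a bare host name rather than an ftps:// URL, and that the same password works for the plain ssh call), that would be roughly:
sshpass -p "$pass" ssh root@$host "find /var/www/bak/db -type f -printf 'get %p\n' | sort -n | tail -1" > batchfile
sshpass -p "$pass" sftp -b batchfile root@$host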
It may be more convenient to use CurlFtpFS to mount sftp folder. Tutorial on "using curlftpfs to mount a FTP folder" explains details.
And then use standard commands to achieve what you want to do.
Or for sshfs follow tutorial on how to Mount a SFTP Folder (SSH + FTP).
Not sure which sftp you mean.

Shell Scripting for archiving and encrypting

I need to create a script which receives from the CLI the name of a file with the extension .tar.gz and a folder (e.g. ./archivator.sh box.tar.gz MyFolder). The script should archive the files from the folder (only the files WITHIN the folder, without any compression) into the archive given as the first parameter. The archive will then be encrypted (using aescrypt) with the password 'apple'.
OS: Debian 6
Note: The final encrypted archive will have the same name as the first given parameter.
What I have tried so far is this:
tar -cvf $1 $2/* | aescrypt -e -p apple - > $1.aes | mv $1.aes $1
And this is what I receive when I am trying to check my script:
tar: This does not look like a tar archive
tar: Exiting with a failure status due to previous errors
Try doing this :
tar cf - $2/* | aescrypt -e -p apple - > $1
Here, - means standard input/output: tar writes the archive to stdout and aescrypt reads it from stdin.
Works well on Linux (archlinux) with GNU tar 1.26
If it doesn't work, run the script in debug mode:
bash -x script.sh
then come again to post the output.
After a little research, here is the solution:
pushd $2
tar cvf $1 .
openssl aes-256-cbc -in $1 -out $1.enc -pass pass:apple
mv $1.enc $1
popd
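To verify the result, the matching decryption step would be something along these lines (check.tar is just an arbitrary name for the decrypted output; the openssl defaults must match those used for encryption):
openssl aes-256-cbc -d -in box.tar.gz -out check.tar -pass pass:apple
tar tvf check.tar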
Your error seems to signal that tar is receiving a file which is not a tar archive, yet it expects one. Have you checked to make sure the file you're providing is a tar archive?

Wget Output in Recursive mode

I am using wget -r to download 3 .zip files from a specified webpage. Here is what I have so far:
wget -r -nd -l1 -A.zip http://www.website.com/example
Right now, the zip files are all named abc_*.zip, where * seems to be a random string. I want the first downloaded file to be called xyz_1.zip, the second xyz_2.zip, and the third xyz_3.zip.
Is this possible with wget?
Many thanks!
I don't think it's possible with wget alone. After downloading you could use some simple shell scripting to rename the files, like:
i=1; for f in abc_*.zip; do mv "$f" "xyz_$i.zip"; i=$(($i+1)); done
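If there is a chance that no abc_*.zip files were downloaded, a nullglob guard keeps the loop from acting on the literal pattern:
shopt -s nullglob
i=1; for f in abc_*.zip; do mv "$f" "xyz_$i.zip"; i=$((i+1)); done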
Try to get a listing first and then download each file separately.
let n=1
wget -nv -l1 -r --spider http://www.website.com/example 2>&1 | \
egrep -io 'http://.*\.zip'| \
while read url; do
wget -nd -nv -O $(echo $url|sed 's%^.*/\(.*\)_.*$%\1%')_$n.zip "$url"
let n++
done
I don't think there is a way you can do it within a single wget command.
wget does have a -O option which you can use to tell it which file to output to, but it won't work in your case because multiple files will get concatenated together.
You will have to write a script which renames the files from abc_*.zip to xyz_*.zip after wget has completed.
Alternatively, invoke wget for one zip file at a time and use the -O option.
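A minimal sketch of that last approach, with purely hypothetical URLs standing in for the real listing:
n=1
for url in http://www.website.com/example/abc_foo.zip http://www.website.com/example/abc_bar.zip; do
    wget -nv -O "xyz_$n.zip" "$url"
    n=$((n+1))
done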
