How to use mysqldump, pv, and zip command together? - linux

I want to use mysqldump to get a table from a remote MySQL server, then compress the downloaded dump using zip. While it downloads, I'd like to watch the progress using pv.
Can I do all of the above with | in a single line of command?
These are what I've tried:
mysqldump -uuser_name -ppassword -hremote_address --routines my_database my_table | pv | zip > my_database_my_table.sql.zip
The problem with this command is that when I run unzip my_database_my_table.sql.zip, I get - as the name of the extracted file. I'd like to be able to choose that name when I run the zip command.
Is it possible to set the name of the extracted file?
mysqldump -uuser_name -ppassword -hremote_address --routines my_database my_table | pv | zip my_database_my_table.sql > my_database_my_table.sql.zip
This command gives me mysqldump: Got errno 32 on write error.

Here is how I used them together:
mysqldump -u db_user -pdb_password db_name| pv | zip > backup.zip
Here is a full automatic backup script if someone is interested: https://stackoverflow.com/a/50985546/3778130
Hope it helps someone down the road.
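A minimal sketch of such a script (the credentials, database name, and output path below are placeholders, not taken from the linked answer):
#!/bin/bash
# Sketch only: dump one database, show throughput with pv, and compress with zip.
DB_USER=db_user
DB_PASS=db_password
DB_NAME=db_name
OUT=/backups/${DB_NAME}_$(date +%Y-%m-%d).zip
mysqldump -u"$DB_USER" -p"$DB_PASS" "$DB_NAME" | pv | zip > "$OUT"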

A quick man zip shows that you can use -O or --output-file to specify the output name. That should do you.

For pv to show overall progress, it needs to know the total size of the data. Unfortunately, when mysqldump is piped into it, pv has no way of knowing that size.
I would recommend dumping the data first, then transferring and restoring it afterwards.
Dump it, then compress it with progress shown:
pv mysqldump.sql | zip > ~/mysqldump.sql.zip
Copy it over:
scp remotehost:/home/folder/mysqldump.sql ./
Restore it, again with progress shown:
pv mysqldump.sql | mysql -u<dbuser> -p <dbname>
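If you do want a percentage while piping straight out of mysqldump, one workaround is to pass pv an estimated size with -s. The estimate below comes from information_schema and is only approximate, so the progress bar will be rough:
est=$(mysql -uuser_name -ppassword -hremote_address -N -e "SELECT data_length + index_length FROM information_schema.tables WHERE table_schema='my_database' AND table_name='my_table';")
mysqldump -uuser_name -ppassword -hremote_address --routines my_database my_table | pv -s "$est" | zip my_database_my_table.sql.zip -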

I am not familiar with pv, but my zip documentation states that for zip to read from stdin, the dash must be specified. So I use this:
mysqldump --user=username --password=password mydatabase | zip mydatabase.zip -
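Since the entry stored inside the archive is then named -, you can extract it back to a real filename with unzip -p, for example:
unzip -p mydatabase.zip > mydatabase.sql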

Related

postgresql pg_dump backup transfer

I need to transfer around 50 databases from one server to another. I tried two different backup options.
1. pg_dump -h XX.XXX.XX.XXX -p pppp -U username -Fc -d db -f db.cust -- all backups in the backup1 folder, total size 10GB
2. pg_dump -h XX.XXX.XX.XXX -p pppp -U username -j 4 -Fd -d db -f db.dir -- all backups in the backup2 folder, total size 10GB
Then I transferred them to the other server for restoration using scp:
scp -r backup1 postgres@YY.YYYY.YY.YYYY:/backup/
scp -r backup2 postgres@YY.YYYY.YY.YYYY:/backup/
I noticed a strange thing. Though the backup folders are the same size, they take very different times to transfer with scp: the directory-format backup takes about four times as long as the custom-format one. Both transfers were done on the same network and tried multiple times, but the results are the same. I also tried rsync, with no difference.
Please suggest what could be the reason for the slowness and how I can speed it up. I am open to using any other method to transfer.
Thanks
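One thing worth trying, as a sketch only: a directory-format backup is made of many small files, and scp pays a per-file overhead, so streaming the whole directory as a single tar over ssh is often much faster:
tar -C backup2 -cf - . | ssh postgres@YY.YYYY.YY.YYYY 'mkdir -p /backup/backup2 && tar -C /backup/backup2 -xf -'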

Shell script to remove files from multiple hosts

I have a script which SSHes into a list of hosts and deletes files. The problem I am facing is that when the deletion happens, the console asks for a password.
To get around that, I now use the code below, which hardcodes the password:
time cat ../hosts.txt | xargs -P 16 -I foo ssh foo ' echo "MYPASSWORD" | sudo -kS rm -rf /tmp/randomfile*'
Is there any way to avoid hardcoding the password?
You could set up an ssh-agent.
I would suggest using an SSH key; you can check this article describing how to create one and add it to your ssh-agent:
https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/
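A rough sketch of that setup (the key type and paths are just common defaults, not specific to your hosts):
ssh-keygen -t ed25519                 # generate a key pair
eval "$(ssh-agent -s)"                # start the agent
ssh-add ~/.ssh/id_ed25519             # load the key into the agent
ssh-copy-id user@somehost             # install the public key on each host in hosts.txt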

smbclient -c with ls -l option

I am trying to get folder listings from a remote server, and it is not possible to mount the remote server on my local computer (because of a permission issue).
I used
smbclient "//165.186.89.21/DeptDQ_141Q_FOTA" "--user=myid" -c 'ls;'
to get a listing of the folder,
and it succeeded.
But I actually want to use ls -l with the above command line,
and when I try to get results using the line
smbclient "//165.186.89.21/DeptDQ_141Q_FOTA" "--user=LGE\final.lee" -c 'ls -l;'
it returns
NT_STATUS_NO_SUCH_FILE listing \-l
64000 blocks of size 16777216. 6503 blocks available
...
How should I use smbclient with the ls -l option?
Please help me!
smbclient ls does not run a native ls command, but rather invokes built-in functionality. As such, it does not support the usual options which a native, POSIX-compliant ls command would provide.
Thus, you cannot do this.
If your goal is to read metadata, consider trying the smbclient stat [filename] subcommand instead (if your server supports UNIX extensions), or smbclient allinfo [filename] (otherwise).
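For example (the file name here is just a placeholder):
smbclient "//165.186.89.21/DeptDQ_141Q_FOTA" "--user=myid" -c 'allinfo some_file;'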

mysql dump 200DB ~ 40GB from one server to another

What would be the most efficient way for me to export 200 databases, with a total of 40GB of data, and import them into another server? I was originally planning on running a script that would export each DB to its own SQL file and then import them into the new server. If this is the best way, are there some additional flags I can pass to mysqldump that will speed it up?
The other option I saw was to pipe the mysqldump directly into an import over SSH. Would this be a better option? If so, could you provide some info on what the script might look like?
If the servers can ping each other, you could use pipes to do it in one step:
mysqldump -hHOST_FROM -u -p db-name | mysql -hHOST_TO -u -p db-name
Straightforward!
[EDIT]
Answer to your question:
mysqldump -hHOST_FROM -u -p --all-databases | mysql -hHOST_TO -u -p
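If you would rather have one file per database, as described in the question, a rough sketch of such a loop (credentials, host names, and the excluded system schemas are assumptions):
for db in $(mysql -hHOST_FROM -uUSER -pPASS -N -e 'SHOW DATABASES;' | grep -Ev '^(information_schema|performance_schema|mysql|sys)$'); do
  mysqldump -hHOST_FROM -uUSER -pPASS --single-transaction "$db" | gzip > "$db.sql.gz"
done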
The quickest way is to use Percona XtraBackup to take hot backups. It is very fast and you can use it on a live system, whereas mysqldump can cause locking. Please avoid copying the /var/lib directory to the other server in the case of InnoDB; this can have very bad effects.
Try Percona XtraBackup; here is some more information on installation and configuration. Link here.
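A minimal invocation looks roughly like this (credentials and target directory are placeholders):
xtrabackup --backup --user=USER --password=PASS --target-dir=/data/backups/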
If both MySQL servers will have the same databases and configuration, I think the best method is to copy the /var/lib/mysql directory using rsync. Stop both servers before doing the copy to avoid table corruption.
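For example (a sketch only; stop mysqld on both machines before running it):
rsync -av /var/lib/mysql/ root@HOST_TO:/var/lib/mysql/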
Export the MySQL database over SSH with the command
mysqldump -p -u username database_name > dbname.sql
Move the dump by running wget from the new server over SSH.
wget http://www.domainname.com/dbname.sql
Import the MySQL database over SSH with the command
mysql -p -u username database_name < dbname.sql
Done!!
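If the dump is not in a web-accessible directory, scp works just as well for the transfer step, for example:
scp dbname.sql username@newserver:/path/to/dbname.sql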

ssh mysqldump from oscent to remote server

I'm trying to create automated backups of the MySQL databases on my virtual host to my NAS storage.
I'm only just starting to learn shell commands, so please bear with me. What I've found so far is:
mysqldump \
  -uusername \
  -ppassword \
  --opt database_name |
gzip -c |
ssh user@ipaddress \
  "cat > /path-to-the-directory-on-nas/$(date +%Y-%m-%d_%H.%I.%S).sql.gz"
but this seems to return the following error:
-bash: /path-to-the-directory-on-nas/$(date +%Y-%m-%d_%H.%I.%S).sql.gz: No such file or directory
Does anyone know how to overcome this problem and actually save it to the designated storage?
Change
cat > /path-to-the-directory-on-nas/$(date +%Y-%m-%d_%H.%I.%S).sql.gz
to
cat > /path-to-the-directory-on-nas/`date +%Y-%m-%d_%H.%I.%S`.sql.gz
Make sure the folder already exists. At least it worked on my Ubuntu :)
Check that the directory /path-to-the-directory-on-nas/ exists on the remote server.
If it is missing you can create it over ssh with the following command:
ssh user@ipaddress mkdir -p /path-to-the-directory-on-nas/
(using -p in case a tree of multiple directories needs to be created)
If you wanted to create the directory with a timestamp, you could do the following:
ssh user@ipaddress mkdir -p /path-to-the-directory-on-nas/$(date '+%Y%m%d')/
If you choose to include a timestamp in the directory path, you need to include it in the path that your mysqldump command uses.
Example:
Successfully creating the file in a directory that exists on the remote system, /var/tmp:
$ date | ssh user@ipaddress 'cat > /var/tmp/file.txt'
$ ssh user@ipaddress cat /var/tmp/file.txt
Fri Oct 12 19:39:16 EST 2012
Failing with the same error you are getting, when trying to write to a directory that doesn't exist:
$ date | ssh user@ipaddress 'cat > /var/Xtmp/file.txt'
bash: /var/Xtmp/file.txt: No such file or directory
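Putting the pieces together, a combined sketch (paths and credentials are placeholders) that creates a dated directory first and then streams the compressed dump into it:
dir="/path-to-the-directory-on-nas/$(date +%Y%m%d)"
ssh user@ipaddress "mkdir -p '$dir'"
mysqldump -uusername -ppassword --opt database_name | gzip -c | ssh user@ipaddress "cat > '$dir/$(date +%Y-%m-%d_%H.%I.%S).sql.gz'"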
You should debug further. First try
cat > /path-to-the-directory-on-nas/test.sql.gz
After that, check whether the date command works:
echo $(date +%Y-%m-%d_%H.%I.%S)
Then you'll know whether the path exists or whether date... fails. From your error message it seems like the date is the problem, but you need to be sure first. Then you could try assigning the date to a variable:
#!/bin/bash
filename=$(date +%Y-%m-%d_%H.%I.%S);
mysqldump \
  -uusername \
  -ppassword \
  --opt database_name |
gzip -c |
ssh user@ipaddress \
  "cat > /path-to-the-directory-on-nas/$filename.sql.gz"
Replace
ssh user@ipaddress "cat > /path-to-the-directory-on-nas/$(date +%Y-%m-%d_%H.%I.%S).sql.gz"
with
ssh user@ipaddress "cat > /path-to-the-directory-on-nas/"$(date +%Y-%m-%d_%H.%I.%S)".sql.gz"
