I need to transfer around 50 databases from one server to another. I tried two different backup options.
1. Custom format: pg_dump -h XX.XXX.XX.XXX -p pppp -U username -Fc -d db -f db.cust (all backups in the backup1 folder, total size 10 GB)
2. Directory format: pg_dump -h XX.XXX.XX.XXX -p pppp -U username -j 4 -Fd -d db -f db.dir (all backups in the backup2 folder, total size 10 GB)
Then I transferred them to the other server for restoration using scp:
scp -r backup1 postgres@YY.YYYY.YY.YYYY:/backup/
scp -r backup2 postgres@YY.YYYY.YY.YYYY:/backup/
I noticed a strange thing: although both backup folders are the same size, they take very different amounts of time to transfer with scp. The directory-format backup takes about 4 times as long to transfer as the custom-format backup. Both transfers were done on the same network and repeated multiple times, with the same result. I also tried rsync, but it made no difference.
Please suggest what could be the reason for the slowness and how I can speed it up. I am open to using any other method to transfer the data.
Thanks
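A likely reason for the difference: a directory-format dump (-Fd) consists of many files (one per table plus a table of contents), while a custom-format dump (-Fc) is a single file per database, and scp/rsync pay a per-file overhead, so thousands of small files transfer more slowly than a few large files of the same total size. One possible workaround (a sketch; the host and folder names reuse the placeholders above) is to send the directory-format backups as a single tar stream:
tar -cf - backup2 | ssh postgres@YY.YYYY.YY.YYYY 'tar -C /backup -xf -'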
I'm using fstab to mount a samba share on boot
//ip/share /mnt/share cifs credentials=/home/user/.smbcredentials,uid=user 0 0
and scheduled rsync via a cron job to copy the contents to a local drive once a week:
0 2 * * 7 /usr/bin/rsync -av --delete /mnt/share/ /mnt/backup/ --log-file=/var/log/rsyncbackup.log
It occurred to me that if the host was unavailable, /mnt/share would be empty; if the cron job then ran, it would wipe all the data on my local backup mount because of the difference and the --delete flag. I want to keep that flag, as I want a clone of my share.
I'm relatively new to Linux and curious what approach might add a safeguard here. Could I run "ls" to check for content and only continue if something is present? Otherwise, what would ensure I don't inadvertently delete everything on my backup mount?
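The check described in the question could look something like this as a cron wrapper script (a sketch, not taken from the thread; mountpoint -q and ls -A are just one way to test):
#!/bin/sh
# Hypothetical safeguard: only sync if the share is actually mounted and non-empty.
if mountpoint -q /mnt/share && [ -n "$(ls -A /mnt/share)" ]; then
    /usr/bin/rsync -av --delete /mnt/share/ /mnt/backup/ --log-file=/var/log/rsyncbackup.log
fi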
I solved my problem by reading the rsync and ssh manuals a little more.
Generated an ssh key on the client: ssh-keygen
Copied it to the host: ssh-copy-id user@host
Modified the cron job: 0 2 * * 7 /usr/bin/rsync -av --delete user@ip:/mnt/driveuid/share/ /mnt/backup/ --log-file=/var/log/rsyncbackup.log
Now, if my computer can't connect to the host, the job doesn't run.
I run a mixed Windows and Linux network with various desktops, notebooks and Raspberry Pis. I am trying to establish an off-site backup between a local Raspberry Pi and a remote Raspberry Pi. Both run DietPi/Raspbian and have an external NTFS HDD to store the backup data. As the data to be backed up is around 800 GB, I already mirrored the data onto the external HDD initially, so that only new files have to be sent to the remote drive via rsync.
I have now tried various combinations of options, including --ignore-existing, --size-only, -u, -c, and of course combinations of other options like -avz etc.
The problem is: none of the above really changes anything; the system tries to upload all the files (although they already exist remotely), or at least a good number of them.
Could you give me a hint how to solve this?
I do this exact thing. Here is my solution to this task.
rsync -re "ssh -p 1234" -K -L --copy-links --append --size-only --delete pi@remote.server.ip:/home/pi/source-directory/* /home/pi/target-directory/
The options I am using are:
-r - recursive
-e - specifies the remote shell used to transmit the data (ssh here); note that my ssh command uses the non-standard port 1234, which is set with -p inside the -e string
-K - keep directory links
-L - copy links
--copy-links - a duplicate flag it would seem...
--append - this will append data onto smaller files in case of a partial copy
--size-only - this skips files that match in size
--delete - CAREFUL - this will delete local files that are not present on the remote device..
This solution will run on a schedule and will "sync" the files in the target directory with the files from the source directory. To test it out, you can always run the command with --dry-run, which will not make any changes at all and will only show you what would be transferred and/or deleted...
All of this info and additional details can be found in the rsync man page (man rsync).
NOTE: I use ssh keys to allow connection/transfer without having to respond to a password prompt between these devices.
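Before putting this on a schedule, it may be worth running the same command once with --dry-run to confirm that only the expected files would be transferred or deleted (a sketch; host, port and paths are the placeholders from above):
rsync -re "ssh -p 1234" -K -L --copy-links --append --size-only --delete --dry-run pi@remote.server.ip:/home/pi/source-directory/* /home/pi/target-directory/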
I want to transfer a file from my server to another. The network between these servers isn't very good, so I want to use lftp to speed things up. My script is like this:
lftp -u user,password -e "set sftp:connect-program 'ssh -a -x -i /key'; mirror --use-pget=5 -i data.tar.gz -r -R /data/ /tmp; quit" sftp://**.**.**.**:22
I found that data.tar.gz wasn't segmented when uploading, but when I use lftp to download a file, it works.
What should I do?
Segmented uploads are not implemented in lftp. If you have ssh access to the server, log in there and use lftp to download the file instead. If there were many files, you could also upload different files in parallel using the -P option of mirror.
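As a sketch of both suggestions (the source address user@source.server is a made-up placeholder; the other values reuse the question's):
# on the receiving server, pulling the file with 5 segments:
lftp -e "pget -c -n 5 /data/data.tar.gz -o /tmp/data.tar.gz; quit" sftp://user@source.server
# on the sending server, uploading several files in parallel instead:
lftp -u user,password -e "set sftp:connect-program 'ssh -a -x -i /key'; mirror -R --parallel=4 /data/ /tmp; quit" sftp://**.**.**.**:22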
What would be the most efficient way for me to export 200 databases with a total of 40GB of data and import them into another server? I was originally planning on running a script that would export each DB to its own .sql file and then import them into the new server. If this is the best way, are there some additional flags I can pass to mysqldump that will speed it up?
The other option I saw was to directly pipe the mysqldump into an import over SSH. Would this be a better option? If so could you provide some info on what the script might look like?
If the servers can ping each other you could use PIPES to do so:
mysqldump -hHOST_FROM -uUSER -p db-name | mysql -hHOST_TO -uUSER -p db-name
Straightforward!
[EDIT]
Answer for your question:
mysqldump -hHOST_FROM -uUSER -p --all-databases | mysql -hHOST_TO -uUSER -p
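If you would rather pipe the dump over SSH explicitly (the question asked what such a script might look like), a minimal sketch is below; it assumes credentials are stored in ~/.my.cnf on both hosts, so no interactive password prompt interferes with the pipe, and HOST_TO is a placeholder:
mysqldump --all-databases --single-transaction | gzip | ssh user@HOST_TO 'gunzip | mysql'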
The quickest way is to use Percona XtraBackup to take hot backups. It is very fast and you can use it on a live system, whereas mysqldump can cause locking. Please avoid copying the /var/lib directory to the other server in the case of InnoDB; this would have very bad effects.
Try Percona XtraBackup; its documentation has more information on installation and configuration.
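For reference, a typical XtraBackup run looks roughly like this (a sketch; the target directory is an example and connection options are omitted):
xtrabackup --backup --target-dir=/data/backups/full    # take a hot backup on the source
xtrabackup --prepare --target-dir=/data/backups/full   # make the backup consistent
# copy /data/backups/full to the new server (e.g. with rsync), then with MySQL stopped and an empty datadir:
xtrabackup --copy-back --target-dir=/data/backups/full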
If both MySQL servers will have the same DBs and config, I think the best method is to copy the /var/lib/mysql directory using rsync. Stop both servers before doing the copy to avoid table corruption.
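As a sketch of that approach (the host name and service name are placeholders, and it assumes both servers run the same MySQL version):
systemctl stop mysql                                   # on the source and the destination
rsync -av /var/lib/mysql/ root@HOST_TO:/var/lib/mysql/
systemctl start mysql                                  # on both servers once the copy completes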
Export the MySQL database over SSH with the command:
mysqldump -p -u username database_name > dbname.sql
From an SSH session on the new server, fetch the dump with wget (this assumes the dump has been placed somewhere web-accessible on the old server):
wget http://www.domainname.com/dbname.sql
Import the MySQL database over SSH with the command:
mysql -p -u username database_name < dbname.sql
Done!!
I need to copy the whole contents of a linux server, but I'm not sure how to do it recursively.
I have a migration script to run on the server itself, but it won't run because the disk is full, so I need something I can run remotely that just gets everything.
How about
scp -r root@remotebox:/ your_local_copy
sudo rsync -hxDPavil -H --stats --delete / remote:/backup/
This will copy everything (permissions, owners, timestamps, devices, sockets, hardlinks, etc.). It will also delete anything on the destination that no longer exists in the source. (Note that -x means only files within the same mountpoint are copied.)
If you want to preserve owners but the receiving end is not on the same domain, use --numeric-ids
To automate incremental backup w/snapshots, look at rdiff-backup or rsnapshot.
Also, gnu tar is highly underrated
sudo tar cpf - / | ssh remote 'cd /backup && tar xvf -'
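On a slow link it may also be worth adding compression, e.g. sudo tar czpf - / | ssh remote 'cd /backup && tar xzvf -', or using ssh -C, at the cost of some extra CPU.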