Is it possible to restore a dump file from a remote server?
mysql -u root -p < dump.sql
Can dump.sql be located on a remote server? If so, how do I refer to it in the command above? Copying it to this server isn't an option as there isn't enough space on the server. I'm on Red Hat 5.
Set up SSH remote access from the remote server to the local MySQL host (user@mysqlbox below is a placeholder for that host),
then run on the remote server:
cat dump.sql | ssh user@mysqlbox 'mysql -u root -pPASSWORD'
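If it is easier to drive this from the MySQL box instead, the same idea works in the pull direction (a sketch; user@remotebox stands for the server holding the dump, and mysql will still prompt for the password on the local terminal):
ssh user@remotebox 'cat dump.sql' | mysql -u root -p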
OR
Implement MySQL access from the remote server:
set up all privileges for root@REMOTESERVER,
then run on the remote server:
mysql -h mysql.yourcompany.com -u root -p < dump.sql
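Granting that remote access might look roughly like this, run on the MySQL server itself; the host and password are placeholders, and the pre-5.7 GRANT ... IDENTIFIED BY form is assumed here since the question mentions Red Hat 5:
mysql -u root -p -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'REMOTESERVER' IDENTIFIED BY 'PASSWORD'; FLUSH PRIVILEGES;"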
Maybe try this? Pick an available server port (call it 12345)
On your MySQL box: nc -l 12345 | mysql -u root -p
On the box with the dump file: nc mysqlbox 12345 < local_dump_file.sql
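If bandwidth is tight, the same pipe can be compressed in flight with gzip (a sketch, reusing the host and port from above):
On your MySQL box: nc -l 12345 | gunzip | mysql -u root -p
On the box with the dump file: gzip -c local_dump_file.sql | nc mysqlbox 12345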
Look at the -h option for mysql connections:
MySQL Connection Docs
Edit:
Oh, I misread: you are not trying to load onto a remote server, you want the file to come from a remote server. If you can log into the remote server where the file is, you can then run:
remotebox:user>>mysql -h <your db box> < local_dump_file.sql
I've been trying to import a .sql file from a local directory into my PostgreSQL database ("bikeshare"). I'm currently using Linux (Ubuntu).
I made the recommended adjustments to both files listed below:
/etc/postgresql/10/main/postgresql.conf (tried listen_address = "*" and "::")
/etc/postgresql/10/main/pg_hba.conf (local all postgres md5)
I've also tried allowing access to all databases for all users with an encrypted password. (host all all 0.0.0.0/0 md5)
Every time I try this (after changing into a directory with the .sql file):
$ psql -U Postgres -d bike-share -f status.sql
I get this error message:
psql:/home/glenn/Documents/PostgreSQL/status.sql:27612953: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
psql:/home/glenn/Documents/PostgreSQL/status.sql:27612953: connection to server was lost
I've also tried doing this:
$ sudo -u postgres psql
postgres=# \c bikeshare
postgres=# \i '/home/glenn/Documents/PostgreSQL/status.sql'
And this:
$ psql -h Glenn -d bikeshare -U postgres -f status.sql
And every time I get the same error above.
Is there a step I'm missing?
I am trying to automatically connect server to server on startup using ssh with port forwarding. I need this so that the 1st server can connect to the 2nd server's postgres DB.
For the connection I am using:
ssh -i /root/.ssh/id_rsa -L 5434:localhost:5432 user@ipAddress
This works fine when I try it manually and I can connect to my DB with
psql -U postgres -h localhost -p 5434
having the .pgpass file in the home directory.
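For reference, a minimal ~/.pgpass entry that makes this non-interactive login through the tunnel work could look like this (format is hostname:port:database:username:password; the password value is a placeholder):
localhost:5434:*:postgres:PASSWORD
and the file must be private, otherwise libpq ignores it:
chmod 600 ~/.pgpass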
But the problem is that the ssh connection is NOT made by itself on startup. I thought of using @reboot in the root crontab (sudo crontab -e), but that did not work. Then I tried moving the script to /etc/rc.local based on this, but also with no luck.
Please can someone help me establish the ssh connection on startup?
Thanks in advance
I think I have solved it by adding "-N" to the ssh connection parameters (-N tells ssh not to run a remote command, just set up the forwarding), and it seems to be working.
So now I have
ssh -N -i /root/.ssh/id_rsa -L 5434:localhost:5432 user@ipAddress
in root's crontab and it connects after a reboot. This does not cover the "cold start" case, but since it is a server it will mostly be restarted rather than powered down and started up again.
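For completeness, the @reboot entry in root's crontab (edited with sudo crontab -e) would look roughly like this, using the command above:
@reboot ssh -N -i /root/.ssh/id_rsa -L 5434:localhost:5432 user@ipAddress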
Hello, I am using PostgreSQL's pg_dump to dump a database, but there are multiple databases on the PostgreSQL instance. Can the .pgpass file hold passwords for multiple databases?
pg_dump command: -h = host, -p = port, -U = user, -w = never prompt for a password (rely on .pgpass instead)
pg_dump -h localhost -p 7432 -U scm -w > /nn5/scm_server_db_backup
.pgpass file looks like this:
localhost:7432:scm:scm:password
There are other databases running on this instance of PostgreSQL and I would like to add them to the file so I only need to use one .pgpass file.
I think the user in the dump command needs to change as well?
localhost:7432:amon:amon:password
So, by adding multiple lines to the .pgpass file, I was able to handle more than one database at a time.
Example .pgpass file:
localhost:7432:scm:scm:password
localhost:7432:amon:amon:password
and the dump commands need to go in a script file one after the other:
pg_dump -h localhost -p 7432 -U scm -w > /nn5/scm_server_db_backup
pg_dump -h localhost -p 7432 -U amon -w > /nn5/amon_server_db_backup
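As a sketch, such a script file could look like the following; note that with no database name on the command line, pg_dump falls back to a database named after the connecting user, which is why -U alone selects the scm and amon databases here, and each dump goes to its own output file:
#!/bin/sh
# dump the "scm" database (pg_dump defaults to a database named after the user)
pg_dump -h localhost -p 7432 -U scm -w > /nn5/scm_server_db_backup
# dump the "amon" database into a separate file so the first dump is not overwritten
pg_dump -h localhost -p 7432 -U amon -w > /nn5/amon_server_db_backup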
I use PuTTY to connect to a remote Ubuntu machine. I want to transfer a table from another Windows computer into a database on the Ubuntu machine.
I searched the Internet and the command is:
pg_dump -C -t table_name -h 192.168.1.106 -p 5432 database_name1 | psql -h localhost -p 5432 -U postgres database_name2
But PuTTY shows:
Password for user postgres: Password:
I need to input two passwords: one for the postgres user on Ubuntu, and the other for the postgres user on the Windows computer.
I guess the right way is to input the Ubuntu user's password first, and then the Windows computer's password. But it shows:
pg_dump: [archiver (db)] connection to database "database_name1" failed
Is this error caused by PuTTY? What would it be like if I used the Ubuntu computer directly?
I also tried the separate way: first pg_dump to a file, then psql. That worked.
Can anyone tell me why I cannot transfer a table using pg_dump | psql directly?
This won't work well because both programs are trying to read from the same terminal input stream at the same time. Each of them gets some part of what you type, so the passwords are mangled.
Use a ~/.pgpass file to supply the passwords, or use the PGPASSWORD environment variable:
PGPASSWORD='pw_for_dump' pg_dump .... | PGPASSWORD='pw_for_restore' psql ...
The .pgpass file is preferable.
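For the specific commands in this question, the ~/.pgpass on the Ubuntu machine might contain two lines like these (the passwords are placeholders, and the file needs chmod 600):
192.168.1.106:5432:database_name1:postgres:SOURCE_PASSWORD
localhost:5432:database_name2:postgres:LOCAL_PASSWORD
With that in place, the pg_dump | psql pipeline runs without prompting for either password.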
What would be the most efficient way for me to export 200 databases with a total of 40GB of data, and import them into another server? I was originally planning on running a script that would export each DB to its own .sql file, and then import them into the new server. If this is the best way, are there some additional flags I can pass to mysqldump that will speed it up?
The other option I saw was to directly pipe the mysqldump into an import over SSH. Would this be a better option? If so, could you provide some info on what the script might look like?
If the servers can ping each other you could use pipes to do so:
mysqldump -hHOST_FROM -uUSER -p db-name | mysql -hHOST_TO -uUSER -p db-name
Straightforward!
[EDIT]
Answer for your question:
mysqldump -hHOST_FROM -uUSER -p --all-databases | mysql -hHOST_TO -uUSER -p
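If the MySQL ports are not reachable between the two machines but SSH is, roughly the same pipe can run over an SSH connection instead, executed on the old server (a sketch; USER, the passwords and the SSH login are placeholders, and the passwords must go on the command line because the remote mysql cannot prompt when its stdin is the pipe):
mysqldump -uUSER -pLOCALPASS --all-databases | ssh login@HOST_TO "mysql -uUSER -pREMOTEPASS"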
The quickest way is to use Percona XtraBackup to take hot backups. It is very fast and you can use it on a live system, whereas mysqldump can cause locking. Please avoid copying the /var/lib directory to the other server with InnoDB; this would have very bad effects.
Try Percona XtraBackup; here is some more information on installation and configuration. Link here.
If both MySQL servers will have the same databases and config, I think the best method is to copy the /var/lib/mysql dir using rsync. Stop both servers before doing the copy to avoid table corruption.
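A minimal sketch of that copy over SSH, with mysqld stopped on both machines first (user and host are placeholders):
rsync -av /var/lib/mysql/ user@HOST_TO:/var/lib/mysql/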
Export MySQL database using SSH with the command
mysqldump -p -u username database_name > dbname.sql
Move the dump using wget from the new server over SSH:
wget http://www.domainname.com/dbname.sql
Import the MySQL database using SSH with the command
mysql -p -u username database_name < dbname.sql
Done!!
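Note that the wget step assumes dbname.sql is reachable over HTTP on the old server; since SSH access is already available, copying the dump with scp is a common alternative (a sketch, run from the new server; the user, host and path are placeholders):
scp user@oldserver:/path/to/dbname.sql .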