Hello, I am using PostgreSQL's pg_dump to dump a database, but there are multiple databases on the PostgreSQL instance. Can the .pgpass file hold passwords for multiple databases?
pg_dump options: -h = host, -p = port, -U = user, -w = never prompt for a password (the password is taken from the .pgpass file instead)
pg_dump -h localhost -p 7432 -U scm -w > /nn5/scm_server_db_backup
.pgpass file looks like this:
localhost:7432:scm:scm:password
There are other databases running on this PostgreSQL instance, and I would like to add them to the file so I only need one .pgpass file.
I assume the user in the dump command needs to change as well?
localhost:7432:amon:amon:password
By adding multiple lines to the .pgpass file, I was able to dump more than one database at a time.
EX: .pgpass file:
localhost:7432:scm:scm:password
localhost:7432:amon:amon:password
and the dump commands need to go in a script file, one after the other, each writing to its own output file:
pg_dump -h localhost -p 7432 -U scm -w > /nn5/scm_server_db_backup
pg_dump -h localhost -p 7432 -U amon -w > /nn5/amon_server_db_backup
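Since the pattern is the same for every database, a small loop keeps the script short; a minimal sketch, assuming each database is named after its user (pg_dump also defaults the database name to the user when none is given) and that /nn5 is the backup directory:

#!/bin/bash
# dump each database using the credentials stored in ~/.pgpass
for db in scm amon; do
    pg_dump -h localhost -p 7432 -U "$db" -w "$db" > "/nn5/${db}_server_db_backup"
done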
Is it possible to restore a dump file from a remote server?
mysql -u root -p < dump.sql
Can the dump.sql be located on a remote server? If so, how do I refer to it in the command above? Copying it to the server isn't an option, as there is not enough space on the server. I'm on Red Hat 5.
Implement SSH remote access from the remote server to the local MySQL box, then
run on the remote server:
cat dump.sql | ssh user@mysqlbox 'mysql -u root -pPASSWORD'
OR
Implement MySQL access from the remote server:
set up all privileges for root@REMOTESERVER
run on the remote server:
mysql -h mysql.yourcompany.com -u root -p < dump.sql
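If bandwidth between the two machines is tight, the SSH variant can also stream the dump compressed; a minimal sketch, assuming gzip is available on both ends and dbname stands in for the target database:

gzip -c dump.sql | ssh user@mysqlbox 'gunzip -c | mysql -u root -pPASSWORD dbname'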
Maybe try this? Pick an available server port (call it 12345)
On your mysql box do nc -l 12345 | mysql -u root -p
On the box with the dumpfile nc mysqlbox 12345 < local_dump_file.sql
look at the -h option for mysql connections
MySQL Connection Docs
Edit:
Oh, I misread: you are not trying to load onto a remote server, you want the file to come from a remote server. If you can log into the remote server where the file is, you can then do:
remotebox:user>>mysql -h <your db box> < local_dump_file.sql
I'm doing a dump like this: pg_dump prod-db -U postgres > prod-db.sql, but it would be cool if I could do a dump like pg_dump prod-db -U postgres > prod-db-{date}.sql and have it generate a file like prod-db-06-02-13.sql via the shell...
I have no idea how to start or where to look. Any ideas, links, or docs would be much appreciated.
Try this:
pg_dump prod-db -U postgres > prod-db-$(date +%d-%m-%y).sql
Here's the date manual, for other format options.
Use backticks and the date command.
i.e.
pg_dump prod-db -U postgres > prod-db-`date +%d-%m-%y`.sql
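If the dumps should sort chronologically in a directory listing, a year-first date format works better than day-month-year; a small variation on the same idea, assuming a /backups directory exists:

#!/bin/bash
# daily dump named prod-db-YYYY-MM-DD.sql so the files sort in date order
pg_dump prod-db -U postgres > "/backups/prod-db-$(date +%Y-%m-%d).sql"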
I'm writing a shell script that will perform a pg_dump from one server, and later it will restore the dump into a database on another server.
To access the Postgres database on the first server, I use export PGPASSWORD in order to avoid a password prompt.
When I perform the restore into the Postgres database on the second server, I am prompted for a password. I tried re-declaring export PGPASSWORD (since the password for this database is different), but this does not work. I suspect it has something to do with the fact that I have to make a double SSH hop to reach the second database:
export PGPASSWORD=******
perform pg_dump
...
export PGPASSWORD=********
cat out.sql | ssh -i .ssh/server1.pem root@dev.hostname.com "ssh -i .ssh/server2 \
0.0.0.0 psql -U postgres -h localhost -p 5432 DBNAME"
The above results in:
Password for user postgres:
psql: FATAL: password authentication failed for user "postgres"
Does anyone have any suggestions? Thanks!
You can create a file named .pgpass in your home directory and give the password details there, like:
cat .pgpass
hostname:port:database:username:password
localhost:2323:test_db:test_user:test123
Try this one; hopefully it solves your problem.
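One detail that often trips people up: libpq silently ignores a .pgpass file whose permissions allow access by group or others, so the file must be readable by its owner only:

chmod 600 ~/.pgpass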
The .ssh/environment file is what you want. I think that file needs to be on server1, though, and server2 needs to be configured to allow users to modify their environment (via the PermitUserEnvironment option). See man ssh under "ENVIRONMENT" for more details.
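Alternatively, since the variable has to exist where psql actually runs (at the far end of the second hop), you can set it inline in the remote command itself; a minimal sketch, with SECRET standing in for the second server's password:

cat out.sql | ssh -i .ssh/server1.pem root@dev.hostname.com \
  "ssh -i .ssh/server2 0.0.0.0 'PGPASSWORD=SECRET psql -U postgres -h localhost -p 5432 DBNAME'"

Be aware that the password is then visible in the process list on the intermediate host.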
I know this is more like a serverfault question than a stackoverflow question, but since serverfault isn't up yet, here I go:
I'm supposed to move an application from one Red Hat server to another, and without very good knowledge of the internal workings of the application, how would I move the OpenLDAP database from the one machine to the other, schemas and all?
What files would I need to copy over? I believe the setup is pretty standard.
The problem with SourceRebels' answer is that slapcat(8) does not guarantee that the data is ordered for ldapadd(1)/ldapmodify(1).
From man slapcat (from OpenLDAP 2.3):
The LDIF generated by this tool is suitable for use with slapadd(8).
As the entries are in database order, not superior first order, they
cannot be loaded with ldapadd(1) without first being reordered.
(FYI: In OpenLDAP 2.4 that section was rephrased and expanded.)
Plus, dumping the database with a tool that reads the backend files and then loading the LDIF with a tool that goes through the LDAP protocol is not very consistent.
I'd suggest using a combination of slapcat(8)/slapadd(8) OR ldapsearch(1)/ldapmodify(1). My preference would be the latter, as it needs neither shell access to the LDAP server nor moving files around.
For example, to dump the database from a master server under dc=master,dc=com and load it into a backup server:
$ ldapsearch -Wx -D "cn=admin_master,dc=master,dc=com" -b "dc=master,dc=com" -H ldap://my.master.host -LLL > ldap_dump-20100525-1.ldif
$ ldapadd -Wx -D "cn=admin_backup,dc=backup,dc=com" -H ldap://my.backup.host -f ldap_dump-20100525-1.ldif
The -W flag above prompts for the ldap admin_master password, but since we are redirecting output to a file you won't see the prompt - just an empty line. Go ahead, type your ldap admin_master password, press Enter, and it will work. The first line of your output file ("Enter LDAP Password:") will need to be removed before running ldapadd.
Last hint, ldapadd(1) is a hard link to ldapmodify(1) with the -a (add) flag turned on.
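Concretely, these two invocations are equivalent (the admin DN and LDIF file here are placeholders):

ldapadd -Wx -D "cn=admin,dc=example,dc=com" -f dump.ldif
ldapmodify -a -Wx -D "cn=admin,dc=example,dc=com" -f dump.ldif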
ldapsearch and ldapadd are not necessarily the best tools to clone your LDAP DB. slapcat and slapadd are much better options.
Export your DB with slapcat:
slapcat > ldif
Import the DB with slapadd (make sure the LDAP server is stopped):
slapadd -l ldif
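Put together, a migration between the two Red Hat boxes might look like the sketch below; the service name ldap and the /var/lib/ldap path are assumptions that vary by distribution and OpenLDAP version:

# on the old server
slapcat > /tmp/backup.ldif
scp /tmp/backup.ldif newserver:/tmp/
# on the new server (slapd stopped, as noted above)
service ldap stop
slapadd -l /tmp/backup.ldif
chown -R ldap:ldap /var/lib/ldap   # slapadd ran as root; return ownership to the ldap user
service ldap start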
Some notes:
Save your custom schema and objectClass definitions from your old server. You can find the included files in slapd.conf, for example (this is part of my slapd.conf):
include /etc/ldap/schema/core.schema
Include your custom schemas and objectClasses in your new OpenLDAP installation.
Use the slapcat command to export your full LDAP tree to one or more LDIF files.
Use ldapadd to import the LDIF files into your new LDAP installation.
I prefer to copy the database through the protocol:
first of all, be sure you have the same schemas on both servers.
dump the database with ldapsearch:
ldapsearch -LLL -Wx -D "cn=admin,dc=domain" -b "dc=domain" > domain.ldif
and import it in the new server:
ldapmodify -Wx -D "cn=admin,dc=domain" -a -f domain.ldif
in one line:
ldapsearch -LLL -Wx -D "cn=admin,dc=domain" -b "dc=domain" | ldapmodify -w pass -x -D "cn=admin,dc=domain" -a
By using the bin/ldap* commands you are talking directly to the server, while with the bin/slap* commands you are dealing with the backend files.
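One caveat with the one-liner: -w pass puts the password on the command line, where any local user can see it in the process list. The OpenLDAP client tools can read the password from a file instead with -y; a variant sketch, assuming the password sits in a hypothetical /root/.ldappw (written without a trailing newline, since the file's entire contents are used as the password):

ldapsearch -LLL -Wx -D "cn=admin,dc=domain" -b "dc=domain" | ldapmodify -x -D "cn=admin,dc=domain" -y /root/.ldappw -a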
(Not enough reputation to write a comment...)
Ldapsearch opens a connection to the LDAP server.
Slapcat instead accesses the database directly, and this means that ACLs, time and size limits, and other byproducts of the LDAP connection are not evaluated, and hence will not alter the data. (Matt Butcher, "Mastering OpenLDAP")
Thanks, Vish. Worked like a charm! I edited the command:
ldapsearch -z max -LLL -Wx -D "cn=Manager,dc=domain,dc=fr" -b "dc=domain,dc=fr" >/tmp/save.ldif
ldapmodify -c -Wx -D "cn=Manager,dc=domain,dc=fr" -a -f /tmp/save.ldif
I just added -z max to avoid the size limitation and -c to continue even if the target domain already exists (my case).