I'm doing a dump like this: pg_dump prod-db -U postgres > prod-db.sql, but it would be nice if I could do a dump like pg_dump prod-db -U postgres > prod-db-{date}.sql and end up with a file like prod-db-06-02-13.sql via the shell...
I have no idea how to start or where to look. Any ideas, links, or docs would be much appreciated.
Try this:
pg_dump prod-db -U postgres > prod-db-$(date +%d-%m-%y).sql
Here's the date manual, for other format options.
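If you want the files to sort chronologically, an ISO-style timestamp works too; a minimal sketch (the /backups directory is just an assumption):
pg_dump prod-db -U postgres > /backups/prod-db-$(date +%F).sql   # %F expands to YYYY-MM-DD, e.g. prod-db-2013-02-06.sql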
Use backticks and the date command.
i.e.
pg_dump prod-db -U postgres > prod-db-`date +%d-%m-%y`.sql
Related
I need to clone a Postgres schema programmatically in Node.js.
I have tried this approach
How to duplicate schemas in PostgreSQL
but it is quite old and I could not make it work.
I have tried using pg_dump with the -s flag to copy the schema structure, then replacing in that file all occurrences of the original schema name with the new schema name.
Then I run psql instead of pg_restore, because the file is not compatible with that command.
psql lims -U lims -h localhost -f new_schema.dump
psql:new_schema.dump:1127: ERROR: no existe el esquema «cliente2» (i.e., schema «cliente2» does not exist)
How could I create the schema programmatically?
I think the most robust and sanity-preserving option is to use the database software itself to rename the schema. So something like this:
pg_dump -s -n old_schema > dmp_old.sql
createdb scratch_db_465484
psql scratch_db_465484 -1 -f dmp_old.sql
psql scratch_db_465484 -c 'alter schema old_schema rename to new_schema'
pg_dump scratch_db_465484 -s -n new_schema > dmp_new.sql
psql -1 -f dmp_new.sql
dropdb scratch_db_465484
I think the main problem here would be that any extensions which installed their objects into old_schema would not get dumped but might be depended upon.
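Since the question mentions doing this programmatically, the same steps can be wrapped in a small shell script; a minimal sketch, where the schema names and the scratch database name are placeholders:
#!/bin/bash
set -e                                                   # stop at the first failing step
OLD=old_schema                                           # placeholder: the schema to copy
NEW=cliente2                                             # placeholder: the new schema name
SCRATCH=scratch_db_$$                                    # throwaway database, dropped at the end
pg_dump -s -n "$OLD" > dmp_old.sql                       # structure-only dump of the source schema
createdb "$SCRATCH"
psql "$SCRATCH" -1 -f dmp_old.sql                        # load it into the scratch database
psql "$SCRATCH" -c "ALTER SCHEMA $OLD RENAME TO $NEW"    # let PostgreSQL rewrite every reference
pg_dump "$SCRATCH" -s -n "$NEW" > dmp_new.sql            # re-dump the schema under its new name
psql -1 -f dmp_new.sql                                   # restore into the target database (PGDATABASE or default)
dropdb "$SCRATCH"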
I suppose that:
you created a dump of the schema,
then search-and-replaced the old schema name with the new one (I suppose it's cliente2 in your case),
then you tried to run it against PostgreSQL.
Problem: "file is not compatible with that command"
I suppose you forgot to run
CREATE SCHEMA cliente2;
before running your script.
That's why you get an error message saying that the schema cliente2 does not exist.
Other (possible) problems
A. Most probably the auto-generated script will need manual edits to be correct, especially if you copied the schema public into a new schema.
B. Usually you need to add GRANTs so users can use the schema and the objects inside it.
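Putting A and B together with the missing CREATE SCHEMA, one hedged way to script it from the shell (the lims database/user/host come from the question; the grants are only an example):
psql lims -U lims -h localhost -c 'CREATE SCHEMA IF NOT EXISTS cliente2'      # create the target schema first
psql lims -U lims -h localhost -f new_schema.dump                             # then load the edited dump
psql lims -U lims -h localhost -c 'GRANT USAGE ON SCHEMA cliente2 TO lims'    # example grants; adjust to your roles
psql lims -U lims -h localhost -c 'GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA cliente2 TO lims'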
I am trying to build an HA two-node cluster with Pacemaker and Corosync for postgresql-9.3. I am using the link below as my guide.
http://kb.techtaco.org/#!linux/postgresql/building_a_highly_available_multi-node_cluster_with_pacemaker_&_corosync.md
However, I cannot get past the part where I need to do pg_basebackup, as shown below.
[root@TKS-PSQL01 ~]# runuser -l postgres -c 'pg_basebackup -D /var/lib/pgsql/9.3/data -l `date +"%m-%d-%y"`_initial_cloning -P -h TKS-PSQL02 -p 5432 -U replicator -W -X stream'
pg_basebackup: directory "/var/lib/pgsql/9.3/data" exists but is not empty
/var/lib/pgsql/9.3/data in TKS-PSQL02 is confirmed empty.
[root@TKS-PSQL02 9.3]# ls -l /var/lib/pgsql/9.3/data/
total 0
Any idea why I am getting this error? And is there a better way to do PostgreSQL HA? Note: I am not using shared storage for the database, so I could not proceed with Red Hat clustering.
Appreciate all the answers in advance.
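For what it's worth, pg_basebackup writes the -D directory on the machine where the command runs (TKS-PSQL01 here), not on the host given with -h, so one hedged thing to check before re-running it:
runuser -l postgres -c 'ls -lA /var/lib/pgsql/9.3/data'   # run this on TKS-PSQL01: if anything is listed, move it aside or empty the directory first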
I'm new to Postgres and Bash so I'm not sure what the difference is.
I'm trying to automate updating a table in Postgres from a bash script. I have the .sql file, and I've created a .pgpass file with permissions 600.
A script that was provided to me uses sudo -u postgres psql db -w < .sql, and it fails because it can't find the password.
Whereas sudo psql -U postgres db -w < .sql doesn't prompt for a password and is able to update.
So what's the difference? Why can't the first command get the password from the .pgpass file?
sudo -u postgres runs the rest of the command string as the UNIX user postgres.
sudo psql -U postgres db -w runs the command as the UNIX user root and (presumably) connects to Postgres as the database user "postgres".
Probably the .pgpass file doesn't exist for the UNIX user postgres.
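So one hedged fix is to give the UNIX user postgres its own ~/.pgpass (the port, database, and update.sql file name below are placeholders):
sudo -u postgres -H sh -c 'umask 077; echo "localhost:5432:db:postgres:secret" > ~/.pgpass'   # .pgpass must not be group/world readable
sudo -u postgres psql db -w < update.sql                                                      # should now pick up the password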
It is a case of peer authentication. If you're running as user x and you have user x in your database, you're trusted by Postgres, so you don't have to use a password (default installation settings). Running sudo psql -U x, you're trying to connect as the UNIX user root to the database as user x; root != x, so you need a password.
Client authentication is controlled by the configuration file pg_hba.conf. You can also provide a password via the .pgpass file. You'll find all the needed information in the PostgreSQL documentation.
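For reference, the relevant lines in pg_hba.conf look roughly like this (the exact path and the methods depend on your installation; this is only a hedged example):
# TYPE  DATABASE  USER  ADDRESS        METHOD
local   all       all                  peer    # local socket: trust the matching UNIX user
host    all       all   127.0.0.1/32   md5     # TCP: ask for a password (can come from .pgpass)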
Hello, I am using PostgreSQL's pg_dump to dump a database, but there are multiple databases on the PostgreSQL instance. Can the .pgpass file hold passwords for multiple databases?
pg_dump command (-h = host, -p = port, -U = user, -w = never prompt for a password, i.e. use the .pgpass file):
pg_dump -h localhost -p 7432 -U scm -w > /nn5/scm_server_db_backup
.pgpass file looks like this:
localhost:7432:scm:scm:password
There are other databases running on this PostgreSQL instance, and I would like to add them to the file so I only need to use one .pgpass file.
I think the user in the dump command needs to change as well, for example:
localhost:7432:amon:amon:password
By adding multiple lines to the .pgpass file, I was able to dump more than one database at a time.
Example .pgpass file:
localhost:7432:scm:scm:password
localhost:7432:amon:amon:password
and the dump commands need to go in a script file, one after the other:
pg_dump -h localhost -p 7432 -U scm -w > /nn5/scm_server_db_backup
pg_dump -h localhost -p 7432 -U amon -w > /nn5/amon_server_db_backup
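If more databases get added later, a small loop keeps the script and the .pgpass file in sync; a sketch using the names from the question (the output directory comes from the question, everything else is an assumption):
for db in scm amon; do                                                 # one entry per line in .pgpass
  pg_dump -h localhost -p 7432 -U "$db" -w "$db" > "/nn5/${db}_server_db_backup"
done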
What would be the most efficient way for me to export 200 databases with a total of 40 GB of data and import them into another server? I was originally planning on running a script that would export each DB to its own SQL file and then import them into the new server. If this is the best way, are there some additional flags I can pass to mysqldump that will speed it up?
The other option I saw was to pipe the mysqldump directly into an import over SSH. Would this be a better option? If so, could you provide some info on what the script might look like?
If the servers can ping each other you could use PIPES to do so:
mysqldump -h HOST_FROM -u USER -p db_name | mysql -h HOST_TO -u USER -p db_name
Straightforward!
[EDIT]
Answer for your question:
mysqldump -h HOST_FROM -u USER -p --all-databases | mysql -h HOST_TO -u USER -p
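Since the question also asks about piping over SSH, a hedged variant of the same idea (the user names are placeholders, and it assumes the target host can read MySQL credentials from ~/.my.cnf):
mysqldump -u USER -p --all-databases | gzip -c | ssh USER@HOST_TO 'gunzip -c | mysql'   # compresses the stream in transit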
The quickest way is to use Percona XtraBackup to take hot backups. It is very fast, and you can use it on a live system, whereas mysqldump can cause locking. Please avoid copying the /var/lib directory to the other server in the case of InnoDB; this can have very bad effects.
Try Percona XtraBackup; here is some more information on installation and configuration. Link here.
If both MySQL servers will have the same DBs and config, I think the best method is to copy the /var/lib/mysql directory using rsync. Stop the servers before doing the copy to avoid table corruption.
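A hedged sketch of that approach (the paths are the MySQL defaults; the service name and target host are assumptions):
# run on the old server, with mysqld stopped on both ends as noted above
sudo systemctl stop mysql
rsync -avz /var/lib/mysql/ root@NEW_SERVER:/var/lib/mysql/   # NEW_SERVER is a placeholder
# then start mysqld again on both servers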
Export the MySQL database over SSH with the command:
mysqldump -p -u username database_name > dbname.sql
On the new server (over SSH), download the dump with wget:
wget http://www.domainname.com/dbname.sql
Import the MySQL database over SSH with the command:
mysql -p -u username database_name < dbname.sql
Done!!