Clone a Postgres schema programmatically - Node.js

I need to clone a Postgres schema programmatically in Node.js.
I have tried the approach from How to duplicate schemas in PostgreSQL, but it is quite old and I could not make it work.
I have also tried using pg_dump with the -s flag to copy the schema structure, then replacing every occurrence of the original schema name in that file with the new schema name.
Then I run the file with psql instead of pg_restore, because a plain-SQL dump is not compatible with that command.
psql lims -U lims -h localhost -f new_schema.dump
psql:new_schema.dump:1127: ERROR: schema "cliente2" does not exist
How could I create the schema programmatically?

I think the most robust and sanity-preserving option is to use the database software itself to rename the schema. So something like this:
pg_dump -s -n old_schema > dmp_old.sql
createdb scratch_db_465484
psql scratch_db_465484 -1 -f dmp_old.sql
psql scratch_db_465484 -c 'alter schema old_schema rename to new_schema'
pg_dump scratch_db_465484 -s -n new_schema > dmp_new.sql
psql -1 -f dmp_new.sql
dropdb scratch_db_465484
I think the main problem here would be that any extensions which installed their objects into old_schema would not get dumped, but might still be depended upon.
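Since the question asks for a programmatic solution, here is a minimal shell sketch of that sequence with placeholder names (lims, cliente1 and cliente2 are assumptions); from Node.js the same commands could be run with child_process.execFile, or the ALTER SCHEMA step issued through a driver such as pg.
#!/usr/bin/env bash
# Sketch only: copy the structure of one schema into a new one via a scratch database.
# Assumes password-less access (e.g. via ~/.pgpass); names below are placeholders.
set -euo pipefail
DB=lims                 # database holding the schema to clone
OLD=cliente1            # existing schema (assumed name)
NEW=cliente2            # name the copy should get
SCRATCH="scratch_db_$$" # throwaway database used only for the rename
pg_dump "$DB" -s -n "$OLD" > dmp_old.sql
createdb "$SCRATCH"
psql "$SCRATCH" -1 -f dmp_old.sql
psql "$SCRATCH" -c "ALTER SCHEMA $OLD RENAME TO $NEW"
pg_dump "$SCRATCH" -s -n "$NEW" > dmp_new.sql
psql "$DB" -1 -f dmp_new.sql
dropdb "$SCRATCH"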

I suppose that:
you created a dump of the schema,
then search-and-replaced the old schema name with the new one (I suppose it's cliente2 in your case),
then tried to run it against PostgreSQL.
Problem: "file is not compatible with that command"
I suppose you forgot to run
CREATE SCHEMA cliente2;
before running your script.
That's why you get the error message that schema cliente2 does not exist.
Other (possible) problems:
A. Most probably the auto-generated script will need manual editing to be correct, especially if you copied the public schema into a new schema.
B. Usually you need to add GRANTs so that users can use the schema and the objects inside it (see the sketch below).
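A minimal sketch of both fixes, reusing the connection options from the question (cliente2 is the new schema and app_user is a hypothetical role; adjust the grants to your own roles and privileges):
psql lims -U lims -h localhost -c 'CREATE SCHEMA cliente2;'
psql lims -U lims -h localhost -f new_schema.dump
psql lims -U lims -h localhost -c 'GRANT USAGE ON SCHEMA cliente2 TO app_user;'
psql lims -U lims -h localhost -c 'GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA cliente2 TO app_user;'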

Related

POSTGRES (psql) - getting ERROR: must be owner of database, but I am doing it with the owner of the database

I have a script which deploys a web app to production. Currently, I am trying to implement a script at the beginning which will "empty" the test database before the database migrations are applied to it. This is done to prevent "invalid/dirty database" problems, as everything will be newly created before attempting the migrations. Just a note: this runs in a CentOS Docker container.
Anyway, this is what I tried to do:
psql -U MYUSER -h ${postgres_host} -c "DROP DATABASE my_test_database;"
psql -U MYUSER -h ${postgres_host} -c "CREATE DATABASE my_test_database;"
psql -U MYUSER -h ${postgres_host} -d my_test_database -c "CREATE EXTENSION IF NOT EXISTS citext WITH SCHEMA public;"
Running the script gave me the following error:
psql: FATAL: Peer authentication failed for user MYUSER
Doing a little bit of research, I found out about peer authentication and found a way around it (without having to edit the pg_hba.conf file): passing the password as well.
So, for the three commands I did the following:
PGPASSWORD=${postgres_password} psql -U MYUSERNAME -h ${postgres_host} -c "DROP DATABASE test_database;"
PGPASSWORD=${postgres_password} psql -U MYUSERNAME -h ${postgres_host} -c "CREATE DATABASE test_database;"
PGPASSWORD=${postgres_password} psql -U MYUSERNAME -h ${postgres_host} -d test_database -c "CREATE EXTENSION IF NOT EXISTS citext WITH SCHEMA public;"
The parameters I pass are defined earlier in the script. But now, I get the following error:
ERROR: must be owner of database test_database
However, the user I specify is THE owner. Can anyone point me toward the right way to do it or give me advice? If you need more info, I will be happy to oblige. Thank you in advance!
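One way to double-check which role actually owns the database (a generic psql sketch reusing the connection options above; the catalog query itself is standard Postgres):
PGPASSWORD=${postgres_password} psql -U MYUSERNAME -h ${postgres_host} -d postgres \
  -c "SELECT datname, pg_get_userbyid(datdba) AS owner FROM pg_database WHERE datname = 'test_database';"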

What is the difference between sudo -u postgres psql and sudo psql -U postgres?

I'm new to Postgres and Bash, so I'm not sure what the difference is.
I'm trying to automate updating a table in Postgres from a bash script. I have the .sql file and I've created a .pgpass file with permissions 600.
The script that was provided to me uses sudo -u postgres psql db -w < .sql and it fails because it can't find the password.
Whereas sudo psql -U postgres db -w < .sql doesn't prompt for a password and is able to update.
So what's the difference? Why can't the first command get the password from the .pgpass file?
sudo -u postgres is running the rest of the command string as the UNIX user postgres.
sudo psql -U postgres db -w is running the command as the UNIX user root and (presumably) connecting to Postgres as the database user "postgres".
Probably the .pgpass file doesn't exist for the UNIX user postgres.
It is a case of peer authentication. If you're running as UNIX user x and you have user x in your database, you're trusted by Postgres, so you don't have to use a password (the default settings of the installation). Running sudo psql -U x, you're trying to connect from user root to the database as user x... root != x, so you need a password. Client authentication is controlled by the configuration file pg_hba.conf. You can also provide the password via a .pgpass file. You'll find all the needed information in the PostgreSQL documentation.
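To make this concrete, a sketch of where .pgpass is looked up in each case (update.sql stands in for the caller's SQL file, and the home directories are typical defaults that may differ on your system):
sudo -u postgres psql db -w < update.sql   # psql runs as UNIX user postgres -> reads ~postgres/.pgpass (e.g. /var/lib/pgsql/.pgpass)
sudo psql -U postgres db -w < update.sql   # psql runs as root -> reads /root/.pgpass
# Either file must be owned by the user running psql, have mode 0600,
# and contain a matching line of the form hostname:port:database:username:password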

mysql dump 200DB ~ 40GB from one server to another

What would be the most efficient way for me to export 200 databases with a total of 40GB of data and import them into another server? I was originally planning on running a script that would export each DB to its own SQL file and then import them into the new server. If this is the best way, are there some additional flags I can pass to mysqldump that will speed it up?
The other option I saw was to directly pipe the mysqldump into an import over SSH. Would this be a better option? If so, could you provide some info on what the script might look like?
If the servers can reach each other, you could use pipes to do it:
mysqldump -h HOST_FROM -u USER -p db-name | mysql -h HOST_TO -u USER -p db-name
Straightforward!
[EDIT]
Answer for your question:
mysqldump -h HOST_FROM -u USER -p --all-databases | mysql -h HOST_TO -u USER -p
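If the MySQL port is not reachable between the two servers, the same pipe idea works over SSH; a rough sketch with placeholder hosts and credentials (compression helps a lot at 40GB):
mysqldump -u USER -p'SOURCE_PASSWORD' --all-databases \
  | gzip \
  | ssh user@NEW_SERVER "gunzip | mysql -u USER -p'DEST_PASSWORD'"
# Passwords are inline here because stdin is occupied by the dump stream; an option file (~/.my.cnf) is safer.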
The quickest way is to use Percona XtraBackup to take hot backups. It is very fast and you can use it on a live system, whereas mysqldump can cause locking. Please avoid copying the /var/lib/mysql directory to the other server in the case of InnoDB; this would have very bad effects.
Try Percona XtraBackup; here is some more information on installation and configuration. Link here.
If both MySQL servers will have the same DBs and config, I think the best method is to copy the /var/lib/mysql dir using rsync. Stop the servers before doing the copy to avoid table corruption.
Export the MySQL database over SSH with the command:
mysqldump -p -u username database_name > dbname.sql
Move the dump to the new server, e.g. by fetching it with wget from the new server's SSH session:
wget http://www.domainname.com/dbname.sql
Import the MySQL database over SSH with the command:
mysql -p -u username database_name < dbname.sql
Done!

Create PostgreSQL backup files with timestamp

I'm doing a dump like this: pg_dump prod-db -U postgres > prod-db.sql, but it would be cool if I could do a dump like pg_dump prod-db -U postgres > prod-db-{date}.sql and generate a file like prod-db-06-02-13.sql via the shell...
I have no idea how to start or where to look. Any ideas, links or docs would be much appreciated.
Try this:
pg_dump prod-db -U postgres > prod-db-$(date +%d-%m-%y).sql
Here's the date manual, for other format options.
Use backticks and the date command.
i.e.
pg_dump prod-db -U postgres > prod-db-`date +%d-%m-%y`.sql
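As a slightly fuller sketch, the same idea dropped into a small script (the backup directory is an assumption; using %Y-%m-%d makes the file names sort chronologically):
#!/usr/bin/env bash
set -euo pipefail
BACKUP_DIR=/var/backups/postgres   # assumed location, adjust to taste
mkdir -p "$BACKUP_DIR"
pg_dump prod-db -U postgres > "$BACKUP_DIR/prod-db-$(date +%Y-%m-%d_%H%M%S).sql"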

How do I clone an OpenLDAP database

I know this is more like a serverfault question than a stackoverflow question, but since serverfault isn't up yet, here I go:
I'm supposed to move an application from one Red Hat server to another, and without very good knowledge of the internal workings of the application, how would I move the OpenLDAP database from one machine to the other, schemas and all?
What files would I need to copy over? I believe the setup is pretty standard.
The problem with SourceRebels' answer is that slapcat(8) does not guarantee that the data is ordered for ldapadd(1)/ldapmodify(1).
From man slapcat (from OpenLDAP 2.3) :
The LDIF generated by this tool is suitable for use with slapadd(8).
As the entries are in database order, not superior first order, they
cannot be loaded with ldapadd(1) without first being reordered.
(FYI: In OpenLDAP 2.4 that section was rephrased and expanded.)
Plus, using a tool that reads the backend files to dump the database and then a tool that loads the LDIF through the LDAP protocol is not very consistent.
I'd suggest using a combination of slapcat(8)/slapadd(8) OR ldapsearch(1)/ldapmodify(1). My preference would go to the latter, as it does not need shell access to the LDAP server or moving files around.
For example, dump database from a master server under dc=master,dc=com and load it in a backup server
$ ldapsearch -Wx -D "cn=admin_master,dc=master,dc=com" -b "dc=master,dc=com" -H ldap://my.master.host -LLL > ldap_dump-20100525-1.ldif
$ ldapadd -Wx -D "cn=admin_backup,dc=backup,dc=com" -H ldap://my.backup.host -f ldap_dump-20100525-1.ldif
The -W flag above prompts for the ldap admin_master password; however, since we are redirecting output to a file, you won't see the prompt, just an empty line. Go ahead, type your ldap admin_master password, press Enter, and it will work. The first line of your output file (Enter LDAP Password:) will need to be removed before running ldapadd.
Last hint, ldapadd(1) is a hard link to ldapmodify(1) with the -a (add) flag turned on.
ldapsearch and ldapadd are not necessarily the best tools to clone your LDAP DB. slapcat and slapadd are much better options.
Export your DB with slapcat:
slapcat > ldif
Import the DB with slapadd (make sure the LDAP server is stopped):
slapadd -l ldif
Some notes:
Save your customized schema and objectClass definitions on your new server. You can find the included files in slapd.conf, for example (this is a part of my slapd.conf):
include /etc/ldap/schema/core.schema
Include your customized schemas and objectClasses in your new OpenLDAP installation.
Use the slapcat command to export your full LDAP tree to one or more LDIF files.
Use ldapadd to import the LDIF files into your new LDAP installation (see the sketch below).
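Put together, a rough sketch of that procedure (paths follow the /etc/ldap layout shown above and will differ on Red Hat, where /etc/openldap is typical; the data-directory ownership also varies by distribution):
# On the old server (stop slapd first, or accept a point-in-time snapshot):
slapcat -l full-tree.ldif
scp /etc/ldap/schema/*.schema full-tree.ldif newserver:/tmp/
# On the new server, with the schemas included in slapd.conf and slapd stopped:
slapadd -l /tmp/full-tree.ldif
chown -R ldap:ldap /var/lib/ldap   # adjust the user/group and data directory to your installation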
I prefer copying the database through the protocol:
First of all, be sure you have the same schemas on both servers.
Dump the database with ldapsearch:
ldapsearch -LLL -Wx -D "cn=admin,dc=domain" -b "dc=domain" > domain.ldif
and import it in the new server:
ldapmodify -Wx -D "cn=admin,dc=domain" -a -f domain.ldif
in one line:
ldapsearch -LLL -Wx -D "cn=admin,dc=domain" -b "dc=domain" | ldapmodify -w pass -x -D "cn=admin,dc=domain" -a
By using the bin/ldap* commands you are talking directly to the server, while with the bin/slap* commands you are dealing with the backend files.
(Not enough reputation to write a comment...)
Ldapsearch opens a connection to the LDAP server.
Slapcat instead accesses the database directly, and this means that ACLs, time and size limits, and other byproducts of the LDAP connection are not evaluated, and hence will not alter the data. (Matt Butcher, "Mastering OpenLDAP")
Thanks, Vish. Worked like a charm! I edited the command:
ldapsearch -z max -LLL -Wx -D "cn=Manager,dc=domain,dc=fr" -b "dc=domain,dc=fr" >/tmp/save.ldif
ldapmodify -c -Wx -D "cn=Manager,dc=domain,dc=fr" -a -f /tmp/save.ldif
Just added the -z max to avoid the size limitation and the -c to go on even if the target domain already exists (my case).
