pg_basebackup: directory "/var/lib/pgsql/9.3/data" exists but is not empty - linux

I am trying to build an HA two-node cluster with Pacemaker and Corosync for PostgreSQL 9.3. I am using the link below as my guide.
http://kb.techtaco.org/#!linux/postgresql/building_a_highly_available_multi-node_cluster_with_pacemaker_&_corosync.md
However, I cannot get past the part where I need to do a pg_basebackup, as shown below.
[root@TKS-PSQL01 ~]# runuser -l postgres -c 'pg_basebackup -D /var/lib/pgsql/9.3/data -l `date +"%m-%d-%y"`_initial_cloning -P -h TKS-PSQL02 -p 5432 -U replicator -W -X stream'
pg_basebackup: directory "/var/lib/pgsql/9.3/data" exists but is not empty
/var/lib/pgsql/9.3/data on TKS-PSQL02 is confirmed empty:
[root@TKS-PSQL02 9.3]# ls -l /var/lib/pgsql/9.3/data/
total 0
Any idea why I am getting this error? And is there a better way to do PostgreSQL HA? Note: I am not using shared storage for the database, so I could not proceed with Red Hat clustering.
Appreciate all the answers in advance.
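In case it helps, one thing worth checking (an assumption based on the prompts shown, not something stated in the question): pg_basebackup creates the backup in the -D directory on the machine where it is invoked, so the directory that has to be empty here is /var/lib/pgsql/9.3/data on TKS-PSQL01, the node running the command; -h TKS-PSQL02 only names the server being cloned. A quick check, and a cleanup only if that data directory is disposable on this node:
runuser -l postgres -c 'ls -lA /var/lib/pgsql/9.3/data'
# only if the existing contents on TKS-PSQL01 can be thrown away:
runuser -l postgres -c 'rm -rf /var/lib/pgsql/9.3/data/*'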

Related

With bash, is it possible to enter commands inside a docker container?

I'm trying to see if I can run commands after "entering" a container:
#!/bin/sh
# 1 - create a new user in the db for mautic
podman exec -it postal-mariadb mysql -h 127.0.0.1 -u root -p$1 <<EOF
CREATE DATABASE mauticdb;
CREATE USER 'mautic' IDENTIFIED BY /'$1/';
GRANT ALL PRIVILEGES ON mauticdb.* TO 'mautic';
FLUSH PRIVILEGES;
exit
EOF
This gives me an error: Error: container create failed (no logs from conmon): EOF
But I'm thinking maybe this is not a good use of here docs.
Something like this doesn't work either:
echo $1 | podman exec -it postal-mariadb mysql -h 127.0.0.1 -u root -p postal-server-1 -e 'select * from deliveries limit 10;'
That's a fine (and common) use of here docs, although you probably want to drop the -t from your podman command line. If I have a mariadb container running:
podman run -d --name mariadb -e MARIADB_ROOT_PASSWORD=secret docker.io/mariadb:10
Then if I put your shell script into a file named createdb.sh, modified to look like this for my environment:
podman exec -i mariadb mysql -u root -p$1 <<EOF
CREATE DATABASE mauticdb;
CREATE USER 'mautic' IDENTIFIED BY '$1';
GRANT ALL PRIVILEGES ON mauticdb.* TO 'mautic';
FLUSH PRIVILEGES;
EOF
I've made three changes:
I've removed the -t from the podman exec command line, since we're passing input on stdin rather than starting an interactive terminal;
I removed the unnecessary exit command (the interactive mysql shell will exit when it reaches end-of-file);
I removed the weird forward slashes around your quotes (/'$1/' -> '$1').
I can run it like this:
sh createdb.sh secret
And it runs without errors. The database exists:
$ podman exec mariadb mysql -u root -psecret -e 'show databases'
Database
information_schema
mauticdb <--- THERE IT IS
mysql
performance_schema
sys
And the user exists:
$ podman exec mariadb mysql -u root -psecret mysql -e 'select user from user where user="mautic"'
User
mautic

POSTGRES (psql) - getting ERROR: must be owner of database, but I am doing it with the owner of the database

I have a script which deploys a web app to production. Currently, I am trying to implement a script at the beginning which will "empty" the test database before the database migrations are applied to it. This is done to prevent "invalid/dirty database" problems, as everything will be newly created before attempting the migrations. Just a note: this runs in a CentOS Docker container.
Anyways, this is what I tried to do:
psql -U MYUSER \
-h ${postgres_host} -c "DROP DATABASE my_test_database;"
psql -U MYUSER \
-h ${postgres_host} -c "CREATE DATABASE my_test_database;"
psql -U MYUSER -h ${postgres_host} -d \
my_test_database -c "CREATE EXTENSION IF NOT EXISTS citext WITH SCHEMA public;"
Running the script gave me the following error:
psql: FATAL: Peer authentication failed for user MYUSER
Doing a little bit of research, I found out about peer authentication and found a way around it (without having to edit the pg_hba.conf file): pass the password as well.
So, for the three commands I did the following:
PGPASSWORD=${postgres_password} psql -U MYUSERNAME \
-h ${postgres_host} -c "DROP DATABASE test_database;"
PGPASSWORD=${postgres_password} psql -U MYUSERNAME \
-h ${postgres_host} -c "CREATE DATABASE test_database;"
PGPASSWORD=${postgres_password} psql -U MYUSERNAME -h ${postgres_host} -d \
test_database -c "CREATE EXTENSION IF NOT EXISTS citext WITH SCHEMA public;"
The parameters I pass are defined earlier in the script. But now, I get the following error:
ERROR: must be owner of database test_database
However, the user I specify is THE owner. Can anyone point me toward the right way to do it or give me advice? If you need more info, I will be happy to oblige. Thank you in advance!
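In case it is useful, one way to double-check the ownership claim (a hedged suggestion reusing the variables from the commands above, not part of the original question) is to ask the catalog directly:
PGPASSWORD=${postgres_password} psql -U MYUSERNAME -h ${postgres_host} -d postgres \
-c "SELECT datname, pg_get_userbyid(datdba) AS owner FROM pg_database WHERE datname = 'test_database';"
If the owner column shows a different role, the DROP DATABASE has to run as that role or as a superuser, or the database has to be reassigned with ALTER DATABASE ... OWNER TO first.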

Docker volume option creates folder as "root" user

I am logged in on my PC (Fedora 24) as rperez. I have set up Docker so it can be run by this user, and I am running a container as follows:
$ docker run -d \
-it \
-e HOST_IP=192.168.1.66 \
-e PHP_ERROR_REPORTING='E_ALL & ~E_STRICT' \
-p 80:80 \
-v ~/var/www:/var/www \
--name php55-dev reypm/php55-dev
Notice the $ sign, meaning I am running the command as a non-root user (root would use #). The command above creates the directory /home/rperez/var/www, but the owner is set to root. I believe this is because Docker runs as the root user behind the scenes.
With this setup I am not able to create a file under ~/var/www as rperez, because the owner is root, so ...
What is the right way to deal with this? I have read this and this, but it was not so helpful.
Any help?
As discussed here, this is expected behavior of Docker. You can create the target volume directory before running the docker command, or change the owner to your current user after the directory is created by Docker:
chown $(whoami) -R /path/to/your/dir
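For the first option (pre-creating the directory), a minimal sketch using the paths and image from the question; this is an illustration, not part of the original answer:
mkdir -p ~/var/www    # created as rperez, so the host directory keeps that ownership
docker run -d -it -p 80:80 -v ~/var/www:/var/www --name php55-dev reypm/php55-dev
Since the bind-mount source already exists, Docker has no need to create it as root.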
I hit this same issue (also in a genomics context, for the very same reason) and also found it quite unintuitive. What is the recommended way to "inherit ownership"? Sorry if this is described elsewhere, but I couldn't find it. Is it something like:
docker run ... -u $(id -u):$(id -g) ...
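For what it's worth, that flag does what the comment hopes: -u $(id -u):$(id -g) runs the container process with the host user's uid/gid, so files it writes into the bind mount come out owned by that user. A hedged sketch reusing the image from the question above:
docker run -d -it -p 80:80 -v ~/var/www:/var/www -u $(id -u):$(id -g) --name php55-dev reypm/php55-dev
One caveat: that uid/gid pair usually has no entry in the container's /etc/passwd, which some images and applications handle poorly.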

PostgreSQL .pgpass file with multiple passwords?

Hello, I am using PostgreSQL pg_dump to dump a database, but there are multiple databases on the PostgreSQL instance. Can the .pgpass file have passwords for multiple databases in it?
pg_dump options: -h = host, -p = port, -U = user, -w = never prompt for a password (rely on .pgpass)
pg_dump -h localhost -p 7432 -U scm -w > /nn5/scm_server_db_backup
.pgpass file looks like this:
localhost:7432:scm:scm:password
There are other databases running on this instance of PostgreSQL, and I would like to add them to the file so I only need one .pgpass file.
I think the user in the dump command needs to change as well?
localhost:7432:amon:amon:password
By adding multiple lines to the .pgpass file, I was able to handle more than one database at a time.
Example .pgpass file:
localhost:7432:scm:scm:password
localhost:7432:amon:amon:password
And the dump commands need to be in a script file, one after the other:
pg_dump -h localhost -p 7432 -U scm -w > /nn5/scm_server_db_backup
pg_dump -h localhost -p 7432 -U amon -w > /nn5/amon_server_db_backup
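Two side notes on .pgpass behavior (general libpq facts, not part of the original answer): the file is ignored unless its permissions block group/other access, and any field may be the * wildcard, so a single entry can cover several databases for the same role:
chmod 0600 ~/.pgpass
# hypothetical wildcard entry matching any database for the scm role:
localhost:7432:*:scm:password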

How do I clone an OpenLDAP database

I know this is more like a serverfault question than a stackoverflow question, but since serverfault isn't up yet, here I go:
I'm supposed to move an application from one Red Hat server to another, and without very good knowledge of the internal workings of the application: how would I move the OpenLDAP database from one machine to the other, with schemas and all?
What files would I need to copy over? I believe the setup is pretty standard.
The problem with SourceRebels' answer is that slapcat(8) does not guarantee that the data is ordered for ldapadd(1)/ldapmodify(1).
From man slapcat (from OpenLDAP 2.3) :
The LDIF generated by this tool is suitable for use with slapadd(8).
As the entries are in database order, not superior first order, they
cannot be loaded with ldapadd(1) without first being reordered.
(FYI: In OpenLDAP 2.4 that section was rephrased and expanded.)
Besides, using a tool that reads the backend files to dump the database and then a tool that loads the LDIF through the LDAP protocol is not very consistent.
I'd suggest using either slapcat(8)/slapadd(8) or ldapsearch(1)/ldapmodify(1). My preference goes to the latter, as it does not need shell access to the LDAP server or moving files around.
For example, to dump the database from a master server under dc=master,dc=com and load it into a backup server:
$ ldapsearch -Wx -D "cn=admin_master,dc=master,dc=com" -b "dc=master,dc=com" -H ldap://my.master.host -LLL > ldap_dump-20100525-1.ldif
$ ldapadd -Wx -D "cn=admin_backup,dc=backup,dc=com" -H ldap://my.backup.host -f ldap_dump-20100525-1.ldif
The -W flag above prompts for the LDAP admin_master password; however, since we are redirecting output to a file, you won't see the prompt - just an empty line. Go ahead and type your LDAP admin_master password, press Enter, and it will work. The first line of your output file (Enter LDAP Password:) will need to be removed before running ldapadd.
Last hint, ldapadd(1) is a hard link to ldapmodify(1) with the -a (add) flag turned on.
ldapsearch and ldapadd are not necessarily the best tools to clone your LDAP DB. slapcat and slapadd are much better options.
Export your DB with slapcat:
slapcat > ldif
Import the DB with slapadd (make sure the LDAP server is stopped):
slapadd -l ldif
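One practical detail that is easy to trip over (an assumption about a typical Red Hat layout, not something from this answer): slapadd writes the backend files directly on the destination host, so after the import they usually need to be handed back to the user slapd runs as before the service is started:
slapcat > backup.ldif                # on the old server
slapadd -l backup.ldif               # on the new server, with slapd stopped
chown -R ldap:ldap /var/lib/ldap     # typical Red Hat user and path; adjust to your installation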
A few notes:
Save your custom schema and objectClass definitions for your new server. You can find the included files in slapd.conf, for example (this is part of my slapd.conf):
include /etc/ldap/schema/core.schema
Include your custom schemas and objectClasses in your new OpenLDAP installation.
Use the slapcat command to export your full LDAP tree to one or more LDIF files.
Use ldapadd to import the LDIF files into your new LDAP installation.
I prefer copying the database through the protocol:
First of all, be sure you have the same schemas on both servers.
Dump the database with ldapsearch:
ldapsearch -LLL -Wx -D "cn=admin,dc=domain" -b "dc=domain" > domain.ldif
And import it into the new server:
ldapmodify -Wx -D "cn=admin,dc=domain" -a -f domain.ldif
In one line:
ldapsearch -LLL -Wx -D "cn=admin,dc=domain" -b "dc=domain" | ldapmodify -w pass -x -D "cn=admin,dc=domain" -a
By using the bin/ldap* commands you are talking directly to the server, while with the bin/slap* commands you are dealing with the backend files.
(Not enough reputation to write a comment...)
Ldapsearch opens a connection to the LDAP server.
Slapcat instead accesses the database directly, and this means that ACLs, time and size limits, and other byproducts of the LDAP connection are not evaluated, and hence will not alter the data. (Matt Butcher, "Mastering OpenLDAP")
Thanks, Vish. Worked like a charm! I edited the command:
ldapsearch -z max -LLL -Wx -D "cn=Manager,dc=domain,dc=fr" -b "dc=domain,dc=fr" >/tmp/save.ldif
ldapmodify -c -Wx -D "cn=Manager,dc=domain,dc=fr" -a -f /tmp/save.ldif
Just added the -z max to avoid the size limitation and the -c to go on even if the target domain already exists (my case).
