PostgreSQL .pgpass file with multiple passwords? - linux

Hello, I am using pg_dump to dump a database, but there are multiple databases on the PostgreSQL instance. Can the .pgpass file hold passwords for multiple databases?
pg_dump options: -h = host, -p = port, -U = user, -w = never prompt for a password (rely on the .pgpass file)
pg_dump -h localhost -p 7432 -U scm -w > /nn5/scm_server_db_backup
.pgpass file looks like this:
localhost:7432:scm:scm:password
There are other databases running on this PostgreSQL instance, and I would like to add them to the file so that I only need one .pgpass file.
I think the user in the dump command needs to change as well?
localhost:7432:amon:amon:password

By adding multiple lines to the .pgpass file I was able to dump more than one database at a time.
EX: .pgpass file:
localhost:7432:scm:scm:password
localhost:7432:amon:amon:password
The dump commands then go in a script file, one after the other; note that each dump should be written to its own output file.
pg_dump -h localhost -p 7432 -U scm -w > /nn5/scm_server_db_backup
pg_dump -h localhost -p 7432 -U amon -w > /nn5/amon_server_db_backup
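Put together, a minimal backup script might look like the sketch below. The database names and the backup directory are taken from the commands above; the key points are that .pgpass must have permissions 0600 or libpq ignores it, and that each line follows the hostname:port:database:username:password format.
#!/bin/sh
# Sketch: dump several databases using one ~/.pgpass file.
# Assumes ~/.pgpass contains one line per database, e.g.:
#   localhost:7432:scm:scm:password
#   localhost:7432:amon:amon:password
chmod 0600 ~/.pgpass              # .pgpass is ignored unless it is 0600
BACKUP_DIR=/nn5
for DB in scm amon; do
    # -w: never prompt for a password; credentials come from ~/.pgpass
    pg_dump -h localhost -p 7432 -U "$DB" -w "$DB" > "$BACKUP_DIR/${DB}_server_db_backup"
done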

Related

With bash, is it possible to enter commands inside a docker container?

I'm trying to see if I can run commands after "entering" a container:
#!/bin/sh
# 1 - create a new user in the db for mautic
podman exec -it postal-mariadb mysql -h 127.0.0.1 -u root -p$1 <<EOF
CREATE DATABASE mauticdb;
CREATE USER 'mautic' IDENTIFIED BY /'$1/';
GRANT ALL PRIVILEGES ON mauticdb.* TO 'mautic';
FLUSH PRIVILEGES;
exit
EOF
This gives me an error: Error: container create failed (no logs from conmon): EOF
But I'm thinking maybe this is not a good use of here documents.
Something like this doesn't work either:
echo $1 | podman exec -it postal-mariadb mysql -h 127.0.0.1 -u root -p postal-server-1 -e 'select * from deliveries limit 10;'
That's a fine (and common) use of here docs, although you probably want to drop the -t from your podman command line. If I have a mariadb container running:
podman run -d --name mariadb -e MARIADB_ROOT_PASSWORD=secret docker.io/mariadb:10
Then if I put your shell script into a file named createdb.sh, modified to look like this for my environment:
podman exec -i mariadb mysql -u root -p$1 <<EOF
CREATE DATABASE mauticdb;
CREATE USER 'mautic' IDENTIFIED BY '$1';
GRANT ALL PRIVILEGES ON mauticdb.* TO 'mautic';
FLUSH PRIVILEGES;
EOF
I've made three changes:
I've removed the -t from the podman exec command line, since we're passing input on stdin rather than starting an interactive terminal;
I removed the unnecessary exit command (the interactive mysql shell will exit when it reaches end-of-file);
I removed the weird forward slashes around your quotes (/'$1/' -> '$1').
I can run it like this:
sh createdb.sh secret
And it runs without errors. The database exists:
$ podman exec mariadb mysql -u root -psecret -e 'show databases'
Database
information_schema
mauticdb <--- THERE IT IS
mysql
performance_schema
sys
And the user exists:
$ podman exec mariadb mysql -u root -psecret mysql -e 'select user from user where user="mautic"'
User
mautic

pg_basebackup: directory "/var/lib/pgsql/9.3/data" exists but is not empty

I am trying to build an HA two-node cluster with Pacemaker and Corosync for postgresql-9.3. I am using the link below as my guide.
http://kb.techtaco.org/#!linux/postgresql/building_a_highly_available_multi-node_cluster_with_pacemaker_&_corosync.md
However, I cannot get past the part where I need to run pg_basebackup, as shown below.
[root@TKS-PSQL01 ~]# runuser -l postgres -c 'pg_basebackup -D /var/lib/pgsql/9.3/data -l `date +"%m-%d-%y"`_initial_cloning -P -h TKS-PSQL02 -p 5432 -U replicator -W -X stream'
pg_basebackup: directory "/var/lib/pgsql/9.3/data" exists but is not empty
/var/lib/pgsql/9.3/data on TKS-PSQL02 is confirmed empty:
[root@TKS-PSQL02 9.3]# ls -l /var/lib/pgsql/9.3/data/
total 0
Any idea why I am getting this error? And is there a better way to do PostgreSQL HA? Note: I am not using shared storage for the database, so I could not proceed with Red Hat clustering.
Appreciate all the answers in advance.
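One thing worth noting (offered as a sketch, not a confirmed diagnosis): pg_basebackup creates the -D directory on the host where the command is executed, while -h names the server being cloned. So in the command above it is the data directory on TKS-PSQL01 that has to be empty, not the one on TKS-PSQL02. A minimal pre-check on the node running the command might be:
# Run on the node that executes pg_basebackup (TKS-PSQL01 here).
# Make sure PostgreSQL is stopped, then check the target directory.
ls -lA /var/lib/pgsql/9.3/data/
# If old cluster files are present and no longer needed, clear them
# before re-running pg_basebackup (destructive!).
rm -rf /var/lib/pgsql/9.3/data/*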

What is the difference between sudo -u postgres psql and sudo psql -U postgres?

I'm new to Postgres and Bash so I'm not sure what the difference is.
I'm trying to automate updating a table in Postgres from a bash script. I have the .sql file and I've created a .pgpass file with permissions 600.
A script that was provided to me uses sudo -u postgres psql db -w < .sql, and it fails because it can't find the password.
Whereas sudo psql -U postgres db -w < .sql doesn't prompt for a password and is able to update.
So what's the difference? Why can't the first command get the password from the .pgpass file?
sudo -u postgres is running the rest of the command string as the UNIX user postgres.
sudo psql -U postgres db -w is running the command as the UNIX user root and (presumably) connecting to PostgreSQL as the database user "postgres".
Most likely the .pgpass file doesn't exist for the UNIX user postgres.
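If that is the case, a minimal fix (a sketch; the source path is a placeholder and the postgres home directory varies by distribution) is to give that UNIX user its own .pgpass:
# Create the .pgpass entry for the UNIX user postgres.
# ~postgres expands to that user's home directory (e.g. /var/lib/pgsql).
sudo cp /path/to/your/.pgpass ~postgres/.pgpass
sudo chown postgres:postgres ~postgres/.pgpass
sudo chmod 0600 ~postgres/.pgpass   # libpq ignores the file with looser permissions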
It is a case of peer authentication. If you are running as UNIX user x and there is a database user x, PostgreSQL trusts you and no password is required (the default setting on installation). Running sudo psql -U x you are trying to connect as UNIX user root to the database as user x; root != x, so you need a password. Client authentication is controlled by the configuration file pg_hba.conf, and you can also supply the password via a .pgpass file. You'll find all the details in the PostgreSQL documentation.
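For reference, the relevant pg_hba.conf lines on a stock installation often look something like this sketch (the exact contents vary by distribution); the peer method is what lets a matching UNIX user in without a password, while md5 entries are the ones that consult a password or .pgpass:
# TYPE  DATABASE  USER      ADDRESS        METHOD
local   all       postgres                 peer    # UNIX user must match the DB user
local   all       all                      peer
host    all       all       127.0.0.1/32   md5     # password-based (can come from .pgpass)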

mysql dump 200DB ~ 40GB from one server to another

What would be the most efficient way for me to export 200 databases with a total of 40 GB of data and import them into another server? I was originally planning on running a script that would export each DB to its own SQL file and then import them into the new server. If this is the best way, are there some additional flags I can pass to mysqldump that will speed it up?
The other option I saw was to directly pipe the mysqldump into an import over SSH. Would this be a better option? If so could you provide some info on what the script might look like?
If the servers can ping each other you could use PIPES to do so:
mysqldump -hHOST_FROM -uUSER -p db-name | mysql -hHOST_TO -uUSER -p db-name
Straightforward!
[EDIT]
Answer to your question:
mysqldump -hHOST_FROM -uUSER -p --all-databases | mysql -hHOST_TO -uUSER -p
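If the MySQL port is not open between the hosts, the same idea works over SSH, which the question also asked about. A sketch (user and host names are placeholders; the remote mysql client is assumed to authenticate without prompting, e.g. via the remote ~/.my.cnf, because its stdin carries the dump):
# Dump everything and load it on the new server through one SSH pipe.
# -C enables SSH compression, which helps on slow links.
mysqldump -uUSER -p --all-databases | ssh -C USER@HOST_TO mysql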
The quickest way is to use Percona XtraBackup to take hot backups. It is very fast and you can use it on a live system, whereas mysqldump can cause locking. In the case of InnoDB, avoid copying the /var/lib directory to the other server; this can have very bad effects.
Try Percona XtraBackup; here is some more information on installation and configuration. Link here.
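For orientation, a basic XtraBackup run looks roughly like the sketch below (paths and credentials are placeholders; check the Percona documentation for the exact invocation on your version):
# Take a hot backup of the running server into /data/backups/full.
xtrabackup --backup --user=root --password=SECRET --target-dir=/data/backups/full
# Prepare the backup so it is consistent and ready to be restored.
xtrabackup --prepare --target-dir=/data/backups/full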
If both MySQL servers will have the same databases and configuration, I think the best method is to copy the /var/lib/mysql directory using rsync. Stop both servers before doing the copy to avoid table corruption.
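A minimal version of that copy might look like this sketch (the host name is a placeholder; both mysqld instances are assumed to be stopped first):
# Stop MySQL on both machines, then copy the data directory.
systemctl stop mysqld
rsync -avz /var/lib/mysql/ user@HOST_TO:/var/lib/mysql/
# On the destination, fix ownership and start the server again.
ssh user@HOST_TO 'chown -R mysql:mysql /var/lib/mysql && systemctl start mysqld'
systemctl start mysqld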
Export the MySQL database over SSH with the command:
mysqldump -p -u username database_name > dbname.sql
Fetch the dump from the new server over SSH using wget (this assumes the dump is reachable over HTTP):
wget http://www.domainname.com/dbname.sql
Import the MySQL database over SSH with the command:
mysql -p -u username database_name < dbname.sql
Done!!

Restore mysql database from a remote dump file

Is it possible to restore a dump file from a remote server?
mysql -u root -p < dump.sql
Can dump.sql be located on a remote server? If so, how do I refer to it in the command above? Copying it to the server isn't an option, as there is not enough space on the server. I'm on Red Hat 5.
Set up SSH access from the remote server (where the dump lives) to the local MySQL host, then
run on the remote server:
cat dump.sql | ssh USER@MYSQL_HOST 'mysql -u root -pPASSWORD'
OR
Set up MySQL access from the remote server:
set up all privileges for root@REMOTESERVER,
then run on the remote server:
mysql -h mysql.yourcompany.com -u root -p < dump.sql
Maybe try this? Pick an available server port (call it 12345)
On your mysql box do nc -l 12345 | mysql -u root -p
On the box with the dumpfile nc mysqlbox 12345 < local_dump_file.sql
Look at the -h option for mysql connections:
MySQL Connection Docs
Edit:
Oh, I misread: you are not trying to load onto a remote server; you want the file to come from a remote server. If you can log into the remote server where the file is, you can then:
remotebox:user>>mysql -h <your db box> < local_dump_file.sql