Background:
I am trying to set up streaming replication between two servers. Although PostgreSQL is running on both boxes without any errors, when I add or change a record on the primary, the changes are not reflected on the secondary server.
I have the following set up on my primary database server:
(Note: I'm using fake ip addresses but 10.222.22.12 represents the primary server and .21 the secondary)
primary server - postgresql.conf
listen_addresses = '10.222.22.12'
unix_socket_directory = '/tmp/postgresql'
wal_level = hot_standby
max_wal_senders = 5 # max number of walsender processes
# (change requires restart)
wal_keep_segments = 32 # in logfile segments, 16MB each; 0 disables
primary server - pg_hba.conf
host all all 10.222.22.21/32 trust
host replication postgres 10.222.22.0/32 trust
primary server - firewall
I've checked to make sure all incoming to the fw is open and that all traffic out is allowed.
secondary server - postgresql.conf
listen_addresses = '10.222.22.21'
wal_level = hot_standby
max_wal_senders = 5 # max number of walsender processes
# (change requires restart)
wal_keep_segments = 32 # in logfile segments, 16MB each; 0 disables
hot_standby = on
secondary server - pg_hba.conf
host all all 10.222.22.12/32 trust
host all all 10.222.22.21/32 trust
host replication postgres 10.222.22.0/24 trust
secondary server - recovery.conf
standby_mode='on'
primary_conninfo = 'host=10.222.22.12 port=5432 user=postgres'
secondary server firewall
everything is open here too.
What I've tried so far
Made a change in data on the primary. Nothing replicated over.
Checked the firewall settings on both servers.
Checked the ARP table on the secondary box to make sure it can communicate with the primary.
Checked the postmaster.log file on both servers. They are empty.
Checked the syslog file on both servers. No errors noticed.
Restarted postgresql on both servers to make sure it starts without errors.
I'm not sure what else to check. If you have any suggestions, I'd appreciate it.
EDIT 1
I've checked the pg_stat_replication table on the master and I get the following results:
psql -U postgres testdb -h 10.222.22.12 -c "select * from pg_stat_replication;"
 pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | state | sent_location | write_location | flush_location | replay_location | sync_priority | sync_state
-----+----------+---------+------------------+-------------+-----------------+-------------+---------------+-------+---------------+----------------+----------------+-----------------+---------------+------------
(0 rows)
And on the slave, notice the results from the following query:
testdb=# select now()-pg_last_xact_replay_timestamp();
 ?column?
----------

(1 row)
I changed the pg_hba.conf file on the primary and added the exact ip addr of my slave like so:
host all all 10.222.22.21/32 trust
host replication postgres 10.222.22.0/32 trust
#added the line below
host replication postgres 10.222.22.12/32 trust
Then I restarted postgresql and it worked.
I guess I was expecting that the existing line above my new one would match, but it doesn't. I have to do more reading on subnetting.
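The mask was indeed the issue: in pg_hba.conf a /32 leaves no host bits, so 10.222.22.0/32 matches only the single address 10.222.22.0 and never the slave at 10.222.22.21, while /24 covers the whole 10.222.22.x range. A quick sanity check of that matching logic (addresses from this post):

```python
import ipaddress

slave = ipaddress.ip_address("10.222.22.21")

# /32 leaves no host bits: the "network" is the single address 10.222.22.0
print(slave in ipaddress.ip_network("10.222.22.0/32"))   # False
# /24 covers 10.222.22.0 - 10.222.22.255, so the slave matches
print(slave in ipaddress.ip_network("10.222.22.0/24"))   # True
# a /32 naming the slave itself also matches, exactly one host
print(slave in ipaddress.ip_network("10.222.22.21/32"))  # True
```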
On the master, listen_addresses must be set to an address the slave can reach, i.e. the master's own interface (or '*'):
listen_addresses = '10.222.22.12'
It also seems your Postgres logging is not well configured.
My guess is that the slave cannot stream because it has fallen behind the master; that can happen due to network latency.
My suggestion is that you archive the WAL files, so that if the slave falls behind the master, it can replay WAL files from the archive.
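For example, WAL archiving could be wired up roughly like this; the archive directory is a placeholder, not something from the post:

```
# postgresql.conf on the master
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'

# recovery.conf on the standby
restore_command = 'cp /mnt/wal_archive/%f %p'
```

Both servers need access to the archive location (a shared mount, or an scp/rsync variant of the commands).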
You can also check by doing
select * from pg_stat_replication;
on the master. If it does not show any rows, streaming is failing, probably because the slave has fallen behind the master.
You can also check by issuing :
select now()-pg_last_xact_replay_timestamp();
on the slave. The query shows how far the slave lags behind the master.
Streaming replication lag is usually under 1s. If the slave falls so far behind that the master has already recycled the WAL segments it needs, streaming is cut off; wal_keep_segments (or WAL archiving) guards against that.
Related
Good Morning all,
I have an HA setup consisting of 2 Postgres servers with pgpool on top of them. The setup is working fine.
I am trying to replicate the data of this cluster to a third Postgres server (outside the current cluster) using the pg_basebackup command. When I provided pg_basebackup with the master Postgres IP, it worked fine and the data was replicated to the third server.
Now I am trying to perform the same activity, but this time, instead of providing the IP of the master Postgres, I am providing the pgpool IP to the pg_basebackup command, and I receive the following errors.
On the master Postgres server logs
LOG: could not receive data from client: Connection reset by peer
ERROR: cannot execute SQL commands in WAL sender for physical replication
On the Postgres server from where pg_basebackup is executed
pg_basebackup: could not send replication command "SHOW data_directory_mode": FATAL: Backend throw an error message
DETAIL: Exiting current session because of an error from backend
HINT: BACKEND Error: "cannot execute SQL commands in WAL sender for physical replication"
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Note: pgpool is configured so that all read/write requests are directed only to the master Postgres server, and I am using streaming replication with pg_basebackup.
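One common workaround: pg_basebackup speaks the physical replication protocol, and the error suggests pgpool is injecting SQL onto that connection (hence "cannot execute SQL commands in WAL sender for physical replication"), so the backup is usually run directly against the real master rather than the pgpool IP. A sketch, with host, user, and target directory as placeholders:

```
pg_basebackup -h <master-ip> -p 5432 -U replication_user \
    -D /var/lib/postgresql/standby -X stream -P
```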
I just installed the new version of elementary OS and lost the configuration that made my PostgreSQL work.
I have an app that works perfectly online with a remote DB on Heroku, but when I run it on my local machine I can't reach the server. I think I'm missing something in pg_hba.conf, because all services are up and running and all ports for the DB are open. This is my current config file:
# Database administrative login by Unix domain socket
local all postgres peer
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all peer
# IPv4 local connections:
host all all 127.0.0.1/32 md5
# IPv6 local connections:
host all all ::1/128 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication all peer
host replication all 127.0.0.1/32 md5
host replication all ::1/128 md5
host all all 0.0.0.0/0 md5
host all all ::/0 md5
I hope you can give me a way to contact my DB. In my last installation I was able to, but I lost the config file.
Hello, and thanks for the replies. I read the docs, but they only describe how to connect to a local service or how to configure a server running on one machine. I had already done all the steps (just in case): adding the user, configuring the local DB, granting the admin user on the DB, etc.
This line (the last one)
host all all ::/0 md5
is the one the docs say to add, but it only works for DB calls from the same machine.
I've used web monitors, scanned ports, and checked whatever I could see on the Linux system: the PostgreSQL ports are open, the service runs, and everything seems fine. The DB is reachable via pgAdmin with the same credentials as in the app. The app is a Node.js service that calls the DB for an interactive website.
For those reasons I believe it must be a configuration problem. I also have no active firewall and no rules other than the actual PostgreSQL config file.
As I said, the app works perfectly when everything is local or everything is on the server. I need a cross configuration for development: a quick way to work against the actual online DB with a local (editable) copy of the web app, and to let several people develop at the same time from different machines.
There is no error; the app just can't reach the DB, loops trying to find it, and eventually times out.
Last time I fixed this with a similar line, so obviously mine is not in the right form. What I'm asking for is simply a line of config. I'm not skilled in server configuration and I don't need to be: once this is online, the server will already be configured. I don't even care which SQL flavor I work with; the app has a parser that makes all its SQL compatible.
I had to restore the system because of a problem; otherwise everything was working before, and I had changed just that one line, but I can't remember how...
I hope this clears up the situation.
I've installed Cassandra on one EC2 instance that contains one keyspace with SimpleStrategy and replication factor 1.
I've also made port 9042 accessible from anywhere in the security group.
I have a Node.js application that contains the following code:
const cassandra = require('cassandra-driver');
const client = new cassandra.Client({ contactPoints: ['12.34.567.890:9042',], keyspace: 'ks1' });
const query = 'CREATE TABLE table.name ( field1 text, field2 text, field3 counter, PRIMARY KEY (field1, field2));';
client.execute(query)
.then(result => console.log(result));
which produces the following error:
NoHostAvailableError: All host(s) tried for query failed. First host
tried, 12.34.567.890:9042: DriverError: Connection timeout. See
innerErrors.
I use cassandra-driver.
I've made sure Cassandra is running.
Edit:
As Aaron suggested, I have installed cqlsh on the client machine. When I run cqlsh 12.34.567.890 9042, it returns:
Connection error: ('Unable to connect to any servers',
{'12.34.567.890': error(10061, "Tried connecting to [('12.34.567.890',
9042)]. Last error: No connection could be made because the target
machine actively refused it")})
As Aaron suggested, I have edited cassandra.yaml on the server and replaced localhost with 12.34.567.890. I'm still getting the same error, though.
First of all, you don't need to specify the port. Try this:
const client = new cassandra.Client({ contactPoints: ['12.34.567.890'], keyspace: 'ks1' });
Secondly, where is your Node.js app running from? Install and run cqlsh from there, just to make sure a connection is possible. You can also use telnet to make sure you can reach your node on 9042.
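If telnet isn't available, a tiny TCP probe does the same job. The demo below connects to a throwaway local listener just to show it working; in practice you would substitute the Cassandra node's host and port:

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# demo: bind a throwaway local listener and probe it
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # OS picks a free port
srv.listen(1)
port = srv.getsockname()[1]
print(can_connect("127.0.0.1", port))  # True: something is listening
srv.close()
# for the Cassandra node: can_connect("12.34.567.890", 9042)
```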
Also, you're going to want to enable authentication and authorization, and never use SimpleStrategy again. Enabling auth and building your keyspaces with NetworkTopologyStrategy are good habits to get into.
I just noticed that you said this:
instance that contains one keystore
Did you mean "keyspace" or are you using client-to-node SSL? If so, you're going to need to adjust your connection code to present a SSL certificate which matches the one in your node's keystore.
If you're still having problems, the next thing to do, would be to go about ensuring that you are connecting to the correct IP address. Grep your cassandra.yaml to see that:
$ grep "_address:" conf/cassandra.yaml
listen_address: 192.168.0.2
broadcast_address: 10.1.1.4
# listen_on_broadcast_address: false
rpc_address: 192.168.0.2
broadcast_rpc_address: 10.1.1.4
If they're configured, you'll want to use the "broadcast" address. These different addresses are typically useful for deployments where you have both an internal and external IP address.
$ grep "_address:" conf/cassandra.yaml
listen_address: localhost
# broadcast_address: 1.2.3.4
# listen_on_broadcast_address: false
rpc_address: localhost
# broadcast_rpc_address: 1.2.3.4
If you see output that looks like this, it means that Cassandra is listening on your local IP of 127.0.0.1. In which case, you wouldn't even need to specify it.
grep "_address:" cassandra.yaml returns exactly what you wrote in the second quote (with the localhost). Is it good or I need to change it?
You will need to change this. Otherwise, it will only accept connections on 127.0.0.1, which will not allow anything outside that node to connect to it.
Then what should I write there? I'd have guessed Cassandra isn't supposed to be aware of the IP address of the machine hosting it.
Actually, the main problem is that Cassandra is very aware of which IP it is on. Since you're trying to connect on 12.34.567.890 (which I know isn't a real IP), you should definitely use that.
You only need to specify broadcast addresses if each of your instances has both internal and external IP addresses. Typically the internal address gets specified as both rpc and listen, while the external becomes the broadcast addresses. But if your instance only has one IP, then you can leave the broadcast addresses commented-out.
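Putting that together for a node with a single routable IP, the relevant cassandra.yaml lines would look roughly like this (reusing the question's placeholder IP):

```
listen_address: 12.34.567.890
rpc_address: 12.34.567.890
# broadcast_address and broadcast_rpc_address stay commented out
```

Cassandra must be restarted for the change to take effect.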
We can configure two more mongos server IPs in the Node.js application. If we configure three mongos IPs, which IP will be used? Is it chosen round-robin, or by some other policy, and how does it work?
Is it mainly helpful for automatic failover or for load balancing?
How do we find out which mongos IP was used for the current operation?
Replication is the process of synchronizing data across multiple servers. Replication provides redundancy and increases data availability: with multiple copies of data on different database servers, replication protects a database from the loss of a single server. It also allows you to recover from hardware failure and service interruptions.
MongoDB achieves replication through replica sets. A replica set is a group of mongod instances that host the same data set. In a replica set, one node is the primary, which receives all write operations. All other instances, the secondaries, apply operations from the primary so that they have the same data set. A replica set can have only one primary node.
A replica set is a group of two or more nodes (generally a minimum of 3 nodes is required).
In a replica set, one node is the primary and the remaining nodes are secondaries.
All data replicates from the primary to the secondary nodes.
At the time of automatic failover or maintenance, an election is held and a new primary node is chosen.
After the failed node recovers, it rejoins the replica set and works as a secondary node.
Start the mongod server by specifying the --replSet option.
The basic syntax of --replSet is given below:
mongod --port "PORT" --dbpath "YOUR_DB_DATA_PATH" --replSet "REPLICA_SET_INSTANCE_NAME"
like this
mongod --port 27017 --dbpath "D:\set up\mongodb\data" --replSet rs0
To add members to the replica set, start mongod instances on multiple machines. Then start a mongo client and issue the command rs.add(HOST_NAME:PORT).
You can add a mongod instance to the replica set only when you are connected to the primary node. To check whether you are connected to the primary, issue the command db.isMaster() in the mongo client.
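Putting the steps above together, a mongo shell session on the first node might look like this (host names are placeholders):

```
> rs.initiate()                          // make this mongod the first member of rs0
> rs.add("mongodb1.example.net:27017")   // add a second member
> rs.add("mongodb2.example.net:27017")   // add a third member
> db.isMaster()                          // shows "ismaster" : true on the primary
```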
When a customer signs up for my service, I would like to create an A DNS entry for them:
username.mydomain.tld pointing to the IPv4 address of the server that hosts their page
This DNS system would ideally:
Be fairly light-weight
Be distributed. A master/slaves model would be fine, potentially with master failover or going read-only when the master is offline.
Support changes being made via a nice API (mainly, create/remove A entries)
Apply changes instantly (understanding that DNS takes time to propagate)
Run on Linux
Is there something awesome fitting that description?
Thanks :-)
You can just use dynamic DNS updates. Here's a very rudimentary application:
Generate a shared symmetric key which will be used by the DNS server and update client:
dnssec-keygen -a HMAC-MD5 -b 512 -n HOST key.name.
The key name is a domain name, but you can use anything you want: it's more or less just a name for the key.
Configure bind to allow this key to make changes to the zone mydomain.tld:
key "key.name." {
algorithm hmac-md5;
secret "copy-the-base64-string-from-the-key-generated-above==" ;
}
zone "mydomain.tld" {
...
allow-update { key key.name. ; };
...
}
Make changes using nsupdate:
nsupdate -k <pathname-to-file-generated-by-dnssec-keygen>
As input to the nsupdate command:
server dns.master.server.name
update delete username.mydomain.com
update add username.mydomain.com a 1.2.3.4
update add username.mydomain.com aaaa 2002:1234:5678::1
Don't forget the blank line after the update command. nsupdate doesn't send anything to the server until it sees a blank line.
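To automate this per customer, the nsupdate input can be generated programmatically. The helper below is a hypothetical sketch; the zone, server name, and TTL are just the placeholder values used in this answer:

```python
def nsupdate_input(username, ipv4, zone="mydomain.tld",
                   server="dns.master.server.name", ttl=300):
    """Build stdin for `nsupdate -k <keyfile>` that points
    username.<zone> at ipv4. Zone and server are placeholders."""
    fqdn = f"{username}.{zone}"
    lines = [
        f"server {server}",
        f"update delete {fqdn} A",
        f"update add {fqdn} {ttl} A {ipv4}",
        "send",  # explicit send, equivalent to the trailing blank line
    ]
    return "\n".join(lines) + "\n"

print(nsupdate_input("alice", "192.0.2.10"))
```

Pipe the result into nsupdate -k &lt;keyfile&gt;; the explicit send line plays the role of the trailing blank line.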
As is normal with bind and other DNS servers, there is no high availability of the master server, but you can have as many slaves as you want, and if they get incremental updates (as they should by default) then changes will be propagated quickly. You might also choose to use a stealth master server whose only job is to receive and process these DDNS updates and feed the results to the slaves.