When Hazelcast adds a new node to a cluster, the new node may be listening on a different port number. If we have to access the data from that node, do we have to add its IP address and port number on the client side?
No, you don't have to.
On the client side, it is enough to configure one member. The client connects to one node and gets the cluster information from that node.
Providing more member addresses ensures that, when the node holding the client connection crashes, the client tries to connect to the other members provided in its configuration.
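For illustration, a minimal sketch of a Java client configuration (the member addresses are hypothetical placeholders): any one reachable address is enough for cluster discovery, and the extra entries serve as fallbacks.

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;

public class ClientExample {
    public static void main(String[] args) {
        ClientConfig config = new ClientConfig();
        // One reachable member is enough; the second address is a fallback.
        config.getNetworkConfig().addAddress("10.0.0.1:5701", "10.0.0.2:5701");
        HazelcastInstance client = HazelcastClient.newHazelcastClient(config);
        // The client learns the rest of the cluster from whichever member it reaches.
    }
}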
I have locally set up port forwarding to DocumentDB, which works successfully with mongodb driver versions 3.x. When I update the mongodb package to 4.x, I get a timeout error with the reason ReplicaSetNoPrimary.
The code is very simple:
const MongoClient = require('mongodb').MongoClient;
const client = new MongoClient('mongodb://xxxx:xxxx@localhost:27017');
client.connect(function (err) {
  if (err) {
    console.log(err);
    return;
  }
  const db = client.db('testdb');
  console.log("Connected successfully to server");
  client.close();
});
Has anyone been able to connect to DocumentDB locally using port forwarding with the 4.x driver? Am I missing some sort of config options? (Keep in mind I have disabled all TLS to make it simpler to connect and, as previously stated, I connect successfully when using the mongodb 3.x packages.)
When connecting to a replica set, the driver:
uses the host in the connection string as a seed to make an initial connection.
runs the isMaster or hello command on that initial connection to get the full list of host:port replica set members and their current status
drops the initial connection
connects to each of the members discovered in step #2
during operations, automatically monitors all of the members, sending operations to the primary even if a different node becomes primary
In your scenario, even though you are connecting to localhost, the initial connection returns the host:port pairs that are included in the replica set configuration.
The reason this just became a problem is that the MongoDB driver specifications changed to use the unified topology by default.
Unified topology permits the driver to automatically detect if it is connecting to a standalone instance, replica set, or sharded cluster, which simplifies the connection process and reduces the administrative overhead required when changing how the database is deployed.
Since your connection is failing, I assume the hostname:port pairs listed in the replica set config are either not resolvable or not reachable from the test host.
To resolve this situation, either:
make it so this machine can resolve the hostnames via DNS or a hosts file, and permit connections to those ports through any firewalls
use the directConnection=true connection option to disable topology discovery
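If you go with the second option, a minimal sketch (the credentials and port are the placeholders from the question):

const { MongoClient } = require('mongodb');

// directConnection=true tells the 4.x driver to talk only to the host in the
// connection string and to skip replica set topology discovery.
const client = new MongoClient('mongodb://xxxx:xxxx@localhost:27017/?directConnection=true');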
I have a situation to deal with involving redis-cluster. We want to move to redis-cluster for high availability. Currently we have one transaction server, and we are using Redis for managing mini-statements. We have a single instance of Redis running on the default port, bound to 0.0.0.0. In my transaction server, I have a configuration file in which I put the Redis IP and port for the connection.
My Question:
1) Suppose I have two machines with Redis servers, and I want the following: if one machine dies, my transaction server automatically uses the second machine for its work, and that machine should have all the keys available. What IP and port should I configure in my transaction server's config file, and what should the Redis setup be to achieve this goal?
A suggestion or a link will be helpful!
If you are looking for a high-availability solution for Redis, you might want to look into Redis Sentinel rather than Redis Cluster.
Redis Sentinel offers exactly what you need; see the official documentation for more information.
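As an illustration, a minimal sketch with the Jedis client (the client library, master name, and Sentinel addresses are assumptions, since the question doesn't name a client). The transaction server is configured with the Sentinel addresses rather than a single Redis IP, and the pool resolves the current master and follows failovers automatically.

import java.util.HashSet;
import java.util.Set;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class SentinelExample {
    public static void main(String[] args) {
        // Hypothetical Sentinel addresses; "mymaster" is the monitored master's name.
        Set<String> sentinels = new HashSet<>();
        sentinels.add("10.0.0.1:26379");
        sentinels.add("10.0.0.2:26379");
        JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels);
        try (Jedis jedis = pool.getResource()) {
            // Reads and writes always go to whichever node is currently master.
            jedis.set("mini-statement:123", "...");
        }
        pool.close();
    }
}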
I have 10 Cassandra nodes running on Kubernetes on my server and 1 contact point that exposes the service on port 10023.
However, when the DataStax driver tries to establish a connection with the other nodes of the cluster, it uses the exposed port instead of the default one, and I get the following error:
com.datastax.driver.core.ConnectionException: [/10.210.1.53:10023] Pool was closed during initialization
Is there a way to expose one single contact point and have the driver communicate with the other nodes on the standard port (9042)?
I checked the DataStax documentation for anything related to this, but I didn't find much.
This is how I connect to the cluster:
Cluster.Builder builder = Cluster.builder()
        .addContactPoints(address)
        .withPort(10023)
        .withCredentials(user, password)
        .withMaxSchemaAgreementWaitSeconds(600)
        .withSocketOptions(new SocketOptions()
                .setConnectTimeoutMillis(Integer.valueOf(timeout))
                .setReadTimeoutMillis(Integer.valueOf(timeout)))
        .withoutJMXReporting();
Cluster cluster = builder.build();
Session session = cluster.connect();
After the driver contacts the first node, it fetches information about the cluster and uses it; this information includes the ports on which Cassandra listens.
To implement what you want, Cassandra needs to listen on the corresponding port; this is configured via the native_transport_port parameter in cassandra.yaml.
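For example, an excerpt (assuming you want every node to serve CQL on the exposed port):

# cassandra.yaml (excerpt)
# Port on which the node listens for CQL clients; the default is 9042.
native_transport_port: 10023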
Also, by default the Cassandra driver will try to connect to all nodes in the cluster, because it uses a DCAware/TokenAware load balancing policy. If you want to use only one node, then you need to use WhiteListPolicy instead of the default policy, but this is not optimal from a performance point of view.
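A minimal sketch with the 3.x Java driver (address and port are taken from the question; the round-robin child policy is an assumption):

import java.net.InetSocketAddress;
import java.util.Collections;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.RoundRobinPolicy;
import com.datastax.driver.core.policies.WhiteListPolicy;

// Restrict the driver to the single exposed contact point instead of
// letting it connect to every node it discovers.
Cluster cluster = Cluster.builder()
        .addContactPoints(address)
        .withPort(10023)
        .withLoadBalancingPolicy(new WhiteListPolicy(
                new RoundRobinPolicy(),
                Collections.singletonList(new InetSocketAddress(address, 10023))))
        .build();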
I would suggest re-thinking how you expose Cassandra to clients.
We are using the latest Apache Cassandra database server, and the Datastax Node.js client, running in the cloud.
When our Cassandra servers are rebuilt, they get new IP addresses. Then any running service clients can't find the new servers; the client driver evidently caches the IP addresses instead of using DNS.
Is there some way around this problem, other than shutting the client down and creating a new one in our services when we encounter an error accessing the database?
If you only have 1 server, there is nothing you can do.
Otherwise, when the node rebuilds (if it is a single node in a cluster of many), it will advertise the new IP to the cluster, and the cluster topology is updated. So the peers table will be updated, and the driver can register this event (AFAIK).
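A minimal sketch (assuming the Node.js cassandra-driver, which emits host events as the topology changes; the contact point is a hypothetical DNS name):

const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['cassandra.example.com'], // hypothetical DNS name
  localDataCenter: 'datacenter1'
});

// Observe rebuilt nodes re-joining under new addresses.
client.on('hostAdd', host => console.log('Host added:', host.address));
client.on('hostRemove', host => console.log('Host removed:', host.address));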
But why not use private static addresses for your Cassandra nodes?
I'm working on a project which uses Socket.IO and should be horizontally scalable. I'm using:
A Load Balancer using HAProxy
Multiple Node Servers (2-4)
Database servers (Redis and MongoDB)
I'm able to redirect my incoming socket connections to Node servers using the round-robin method. The socket connection is stable, and if I use socket.emit() I receive the data. I'm also able to emit to other socket connections connected to the same Node server.
I'm facing an issue in the following scenario:
User A is connected to Node server 1 and User B is connected to Node server 2.
My intention is to store the socket data in Redis.
If User A wants to send some data to User B, how can Node server 1 tell Node server 2 to emit the data to User B?
Please let me know how I can achieve this (with references if possible).
Thanks in advance.
This scenario is a good match for Redis Pub/Sub.
If you haven't already, you should try Pub/Sub.
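As an illustration, a minimal sketch with the node-redis v3 API (the channel name and message shape are assumptions): every Node server subscribes to a shared channel, and whichever server holds the target socket delivers the message locally.

const redis = require('redis');

const sub = redis.createClient();
const pub = redis.createClient();

sub.subscribe('messages');
sub.on('message', (channel, payload) => {
  const { to, data } = JSON.parse(payload);
  // Look up the local socket for `to` and emit if it is connected to this server.
});

// On the server where User A is connected:
pub.publish('messages', JSON.stringify({ to: 'userB-socket-id', data: 'hello' }));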
Have a look at the socket.io Redis adapter. It should be exactly what you need.
The clients() method in particular looks promising. Keep in mind that socket.io creates a unique room for each client.
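A minimal sketch with the socket.io-redis adapter (the Redis address and event names are assumptions): once the adapter is attached, broadcasts and room emits are propagated across all Node servers through Redis.

const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');

io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', socket => {
  socket.on('private message', ({ to, data }) => {
    // `to` can be a socket id; socket.io puts each client in a room named
    // after its id, and the adapter routes the emit across servers.
    io.to(to).emit('private message', data);
  });
});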