Create Two or More Mongos Connection Objects in Node JS - node.js

We can configure two or more mongos server IPs in a Node.js application. If we configure three mongos IPs, which IP will be used, and is the choice based on round robin or some other policy? How does it work?
Is this mainly helpful for automatic failover or for load balancing?
How do we find out which mongos IP was used for the current operation?

Replication is the process of synchronizing data across multiple servers. Replication provides redundancy and increases data availability by keeping multiple copies of the data on different database servers; it protects a database from the loss of a single server. Replication also allows you to recover from hardware failure and service interruptions.
MongoDB achieves replication with replica sets. A replica set is a group of mongod instances that host the same data set. In a replica set, one node is the primary node and receives all write operations. All other instances, the secondaries, apply operations from the primary so that they have the same data set. A replica set can have only one primary node.
A replica set is a group of two or more nodes (generally a minimum of 3 nodes is required).
In a replica set, one node is the primary node and the remaining nodes are secondaries.
All data replicates from the primary to the secondary nodes.
At the time of automatic failover or maintenance, an election is held and a new primary node is elected.
After a failed node recovers, it rejoins the replica set and works as a secondary node.
Start the MongoDB server by specifying the --replSet option.
The basic syntax of --replSet is given below:
mongod --port "PORT" --dbpath "YOUR_DB_DATA_PATH" --replSet "REPLICA_SET_INSTANCE_NAME"
For example:
mongod --port 27017 --dbpath "D:\set up\mongodb\data" --replSet rs0
To add members to the replica set, start mongod instances on multiple machines. Then start a mongo client and issue the command rs.add(HOST_NAME:PORT).
You can add a mongod instance to the replica set only when you are connected to the primary node. To check whether you are connected to the primary, issue the command db.isMaster() in the mongo client.
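If it helps to see the client side, here is a minimal sketch of connecting to such a replica set with the MongoDB Java driver (3.x legacy API; the hostnames, the test database and the rs0 name are placeholders matching the setup above, and the mongo-java-driver dependency is assumed). Listing several mongos routers in a driver connection string uses the same comma-separated host form, and the driver then typically spreads requests and fails over among the listed hosts.

import com.mongodb.MongoClient;
import com.mongodb.MongoClientURI;
import org.bson.Document;

public class ReplicaSetConnect {
    public static void main(String[] args) {
        // host1/host2/host3 are placeholder hostnames for the members started above
        MongoClientURI uri = new MongoClientURI(
                "mongodb://host1:27017,host2:27017,host3:27017/test?replicaSet=rs0");
        MongoClient client = new MongoClient(uri);
        // isMaster reports which member answered and whether it is the primary
        Document result = client.getDatabase("test").runCommand(new Document("isMaster", 1));
        System.out.println(result.toJson());
        client.close();
    }
}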

Related

Cassandra create pool with hosts on different ports

I have 10 Cassandra nodes running on Kubernetes on my server and 1 contact point that exposes the service on port 10023.
However, when the DataStax driver tries to establish a connection with the other nodes of the cluster, it uses the exposed port instead of the default one and I get the following error:
com.datastax.driver.core.ConnectionException: [/10.210.1.53:10023] Pool was closed during initialization
Is there a way to expose one single contact point and have it communicate with the other nodes on the standard port (9042)?
I checked the DataStax documentation for anything related to this, but I didn't find much.
This is how I connect to the cluster:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SocketOptions;

Cluster.Builder builder = Cluster.builder()
        .addContactPoints(address)
        .withPort(10023)
        .withCredentials(user, password)
        .withMaxSchemaAgreementWaitSeconds(600)
        .withSocketOptions(new SocketOptions()
                .setConnectTimeoutMillis(Integer.valueOf(timeout))
                .setReadTimeoutMillis(Integer.valueOf(timeout)));
Cluster cluster = builder.withoutJMXReporting().build();
Session session = cluster.connect();
After the driver contacts the first node, it fetches information about the cluster and uses it; this information includes the ports on which Cassandra listens.
To implement what you want, you need Cassandra to listen on the corresponding port; this is configured via the native_transport_port parameter in cassandra.yaml.
Also, by default the Cassandra driver will try to connect to all nodes in the cluster because it uses the DCAware/TokenAware load balancing policy. If you want to use only one node, you need to use WhiteListPolicy instead of the default policy, but this is not optimal from a performance point of view.
I would suggest rethinking how you expose Cassandra to clients.
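As a rough sketch of the WhiteListPolicy route (class names are from the DataStax Java driver 2.x/3.x; the address 10.210.1.53 and port 10023 are the placeholders from the question):

import java.net.InetSocketAddress;
import java.util.Collections;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.RoundRobinPolicy;
import com.datastax.driver.core.policies.WhiteListPolicy;

public class SingleContactPoint {
    public static void main(String[] args) {
        String contactPoint = "10.210.1.53";   // the single exposed address (placeholder)
        int port = 10023;                      // the exposed service port
        // Only the whitelisted host is used; the driver will not open pools to the rest of the cluster
        Cluster cluster = Cluster.builder()
                .addContactPoint(contactPoint)
                .withPort(port)
                .withLoadBalancingPolicy(new WhiteListPolicy(
                        new RoundRobinPolicy(),
                        Collections.singletonList(new InetSocketAddress(contactPoint, port))))
                .build();
        Session session = cluster.connect();
        System.out.println("Connected to: " + cluster.getMetadata().getClusterName());
        cluster.close();
    }
}

The whitelisted node then acts as coordinator for every request, which is the performance trade-off mentioned above.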

How do I start 2 nodes in the elastic-search cluster?

I can't find good information on running two node servers in an Elasticsearch cluster.
I just want a clear explanation of which commands I have to run or what I have to change in the elasticsearch.yml config file.
Assuming you're not using automatic discovery, in elasticsearch.yml:

# Identical for all nodes
cluster.name: My-Cluster-Name

# Specific per node
node.name: [node name]
network.host: [hostname]
http.port: 92xx
transport.tcp.port: 93xx
# (If they run on the same host they need different ports; if they run on two different hosts you can use the same port numbers.)

# Disable multicast and list all the nodes in your cluster as host:port
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["server1:9300","server2:9300","server2:9301"]

# Must be set to (number of nodes / 2) + 1 to avoid "split brain", i.e. for two or three nodes you would set it to 2
discovery.zen.minimum_master_nodes: 2

# Not mandatory, but if the network has issues you don't want the cluster to start panicking too quickly
discovery.zen.fd.ping_timeout: 600s
The important points are that if you don't use multicast discovery, all the nodes need to know the host and port of all other nodes, and this has to be set in the elasticsearch.yml files. If you only use one node per host you can use the default 9200 and 9300 ports.
Once the nodes are set up you just start them all and check the output. They should log that they have found the other nodes, and you can view the active nodes using the _cat/nodes API:
http://server1:9200/_cat/nodes?v&h=id,ip,port,v,m,d,fdp,r,get.current,n,u
Each node should have its own copy of the Elasticsearch software and will store its own data in its own data folder.
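If you prefer to run that check from code rather than a browser, here is a small sketch in plain Java using only the standard library (server1 is a placeholder for any node of the cluster):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class CheckNodes {
    public static void main(String[] args) throws Exception {
        // Same _cat/nodes request as the URL above; each active node prints as one row
        URL url = new URL("http://server1:9200/_cat/nodes?v&h=id,ip,port,v,m,d,fdp,r,get.current,n,u");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}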

Cassandra client port enable

How do I enable the Cassandra port to connect with a BI application? My setup is a Cassandra cluster of multiple nodes (192.xxx.xx.01, 192.xxx.xx.02, 192.xxx.xx.03). In this scenario, which node will act as the master / coordinator for my application?
I have configured listen_address, rpc_address, broadcast_rpc_address and seeds, and I opened both TCP ports 9042 and 9160.
Version: 3.10
Kindly lead me in the right direction.
Cassandra uses a masterless architecture; all nodes are equal in Cassandra.
When you connect to one of the nodes, that node acts as the coordinator node; any node can be the coordinator.
The coordinator is selected by the driver based on the policy you have set. Common policies are DCAwareRoundRobinPolicy and TokenAwarePolicy.
For DCAwareRoundRobinPolicy, the driver selects the coordinator node based on its round robin policy. See more here: http://docs.datastax.com/en/drivers/java/2.1/com/datastax/driver/core/policies/DCAwareRoundRobinPolicy.html
For TokenAwarePolicy, it selects a coordinator node that has the data being queried - to reduce "hops" and latency. More info: http://docs.datastax.com/en/drivers/java/2.1/com/datastax/driver/core/policies/TokenAwarePolicy.html
native_transport_port is 9042 by default, and clients use the native transport by default.
Hence your BI application should connect to the Cassandra hosts on port 9042.
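For illustration, here is a minimal sketch of wiring these policies into a client with the DataStax Java driver (3.x-style builder API; the contact point is one of the node IPs from the question and "datacenter1" is a placeholder for your data center name):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

public class BiConnection {
    public static void main(String[] args) {
        // Any node can serve as the contact point; the driver discovers the rest of the cluster
        Cluster cluster = Cluster.builder()
                .addContactPoint("192.xxx.xx.01")   // placeholder node IP
                .withPort(9042)                     // native_transport_port
                .withLoadBalancingPolicy(new TokenAwarePolicy(
                        DCAwareRoundRobinPolicy.builder().withLocalDc("datacenter1").build()))
                .build();
        Session session = cluster.connect();
        System.out.println("Connected to cluster: " + cluster.getMetadata().getClusterName());
        cluster.close();
    }
}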

How to verify a CouchDB 2.0 cluster setup

I just set up my three CouchDB instances as a cluster. This is how I did it:
Added "-kernel inet-dist-listen-minimum/maximum" from 9100 to 9200 to the vm.args file, and shut down the firewall.
Set all three CouchDB instances to use the same admin username and password.
Changed the bind address to 0.0.0.0 for both the chttpd and httpd sections in Fauxton.
Chose one of the CouchDB instances to set up as the cluster, then added the other two nodes (by entering their IP addresses).
All done.
After these steps I believe the cluster should be set up properly. However, when I ran the command
curl http://username:login@localhost/_membership
on all three VMs, only the main one of the three nodes showed that it had three members in the cluster (cluster_nodes).
This is what it looks like at http://localhost:9000/_membership (an SSH tunnel from my computer to port 5984):
{"all_nodes":["couchdb#localhost"],"cluster_nodes":["couchdb#130.56.252.xxx","couchdb#130.56.252.xxx","couchdb#localhost"]}
And this is what the other instances show:
{"all_nodes":["couchdb#localhost"],"cluster_nodes":["couchdb#localhost"]}
So now I have two questions:
Did I set up the cluster correctly?
How can I tell whether I set it up properly?

What happens if I don't put all the C* hosts in DevCenter?

Let's say I have 4 nodes: host1, host2, host3 and host4. However, I only add host1 and host2 as contact hosts. What would happen if I perform any operation in DevCenter? Will the action propagate to host3 and host4? Will this cause data corruption?
Here's what will happen:
DevCenter will use the whitelist load balancing policy to connect to the provided nodes.
While DevCenter uses the DataStax Java driver as the underlying connector, it uses the above-mentioned load balancing policy to reduce the time needed to obtain connections (instead of the driver's default load balancing policy, which requires discovering all the nodes in the cluster and initiating connection pools to all of them).
DevCenter will send the request to the nodes in the list you provided
If data is local to these nodes they will take care of the requests. If data is found on the other nodes in the cluster, the nodes used for the connection will act as coordinators (basically they'll relay the requests to the nodes having the data)
Bottom line: there's no risk of data corruption, and the results you get will be exactly the same as when connecting to all the nodes.
