I am using Node.js in three environments, and Cassandra is running on all three nodes.
I understand that nodetool status will give me the status of each node, but if the current node is down I cannot run nodetool status on it. So is there a way to get the status using the Node.js Cassandra driver?
Any help is appreciated.
EDITED:
As per dilsingi's suggestion, I used client.hosts, but the problem is that in the following cluster 172.30.56.60 is down and it is still shown as available.
How do I get the status of each node?
const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['172.30.56.60', '172.30.56.61', '172.30.56.62'],
  keyspace: 'test',
  policies: { loadBalancing: new cassandra.policies.loadBalancing.RoundRobinPolicy() }
});

async function read() {
  await client.connect();
  console.log('Connected to cluster with %d host(s): %j', client.hosts.length, client.hosts.keys());
  client.hosts.forEach(function (host) {
    console.log(host.address, host.datacenter, host.rack);
  });
}

read();
nodetool status output:
Datacenter: newyork
===================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 172.30.56.62 1.13 MiB 256 34.8% e93827b7-ba43-4fba-8a51-4876832b5b22 rack1
DN 172.30.56.60 1.61 MiB 256 33.4% e385af22-803e-4313-bee2-16219f73c213 rack1
UN 172.30.56.61 779.4 KiB 256 31.8% be7fc52e-c45d-4380-85a3-4cbf6d007a5d rack1
Node.js code:
const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['172.30.56.60', '172.30.56.61', '172.30.56.62'],
  keyspace: 'qcs',
  policies: { loadBalancing: new cassandra.policies.loadBalancing.RoundRobinPolicy() }
});

async function read() {
  await client.connect();
  console.log('Connected to cluster with %d host(s): %j', client.hosts.length, client.hosts.keys());
  client.hosts.forEach(function (host) {
    console.log(host.address, host.datacenter, host.rack, host.isUp(), host.canBeConsideredAsUp());
  });
}

read();
Node.js output:
Connected to cluster with 3 host(s): ["172.30.56.60:9042","172.30.56.61:9042","172.30.56.62:9042"]
172.30.56.60:9042 newyork rack1 true true
172.30.56.61:9042 newyork rack1 true true
172.30.56.62:9042 newyork rack1 true true
The drivers in general, including the Node.js driver, are aware of the entire Cassandra cluster topology. Upon initial contact with one or more of the node IP addresses in the connection string, the driver automatically identifies all the node IPs that make up the Cassandra ring. It is intelligent enough to know when a node goes down or a new node joins the cluster, and it can even continue working with completely different nodes (IPs) than the ones it started with.
So there is no need to write code for node status, as the driver automatically handles that for you. It is recommended to provide more than one IP in the connection string, to provide redundancy when making the initial connection.
Here is the Node.js driver documentation, and this section describes the "Auto node discovery" feature and "Cluster & Schema Metadata".
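If you still want to observe node state from application code, the Client also emits host state events. A minimal sketch (assuming a client configured as in the question; event names as documented for the DataStax Node.js driver):
// Sketch: the driver's control connection reports topology and state changes as events.
client.on('hostUp', host => console.log('Host is back up:', host.address));
client.on('hostDown', host => console.log('Host went down:', host.address));
client.on('hostAdd', host => console.log('Host added to the cluster:', host.address));
client.on('hostRemove', host => console.log('Host removed from the cluster:', host.address));

// host.isUp() reflects the driver's current view of each node.
client.hosts.forEach(host => console.log(host.address, host.isUp() ? 'UP' : 'DOWN'));
Keep in mind the driver updates this state from its own connections and gossip-driven events, so a node that just went down may take a moment to be reflected.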
I am trying to connect to Redshift from my Node.js code to run a COPY command that loads data from S3 into Redshift.
I am using the node-redshift package for this, with the code below.
var Redshift = require('node-redshift');

var client = {
  user: 'awsuser',
  database: 'dev',
  password: 'zxxxx',
  port: '5439',
  host: 'redshift-cluster-1.xxxxxxxxxx.us-east-1.redshift.amazonaws.com',
};

var redshiftClient = new Redshift(client);

// file_name is defined earlier in the original code
var pg_query = "copy test1 from 's3://aws-bucket/" + file_name + "' ACCESS_KEY_ID 'xxxxxxx' SECRET_ACCESS_KEY 'xxxxxxxxxx';";

redshiftClient.query(pg_query, {raw: true}, function (err1, pgres) {
  if (err1) {
    console.log('error here');
    console.error(err1);
  } else {
    // upload successful
    console.log('success');
  }
});
I have also tried calling connect explicitly, but in every case I get the timeout error below:
Error: Error: connect ETIMEDOUT XXX.XX.XX.XX:5439
The Redshift cluster is assigned a role with full S3 access and also has the default security group attached.
Am I missing something here?
Make sure your cluster is publicly accessible. The cluster sits in a particular subnet, and for that subnet the security group's inbound rules in the VPC should have an entry allowing connections to your Redshift cluster on port 5439 (from all IPs, or at least from yours).
Only if your public IP is covered by those rules will you be able to connect to the cluster.
For example, say you have SQL Workbench/J, which can connect to a Redshift cluster. If you are able to connect with that SQL client, you can ignore the above, because it means your IP can already reach the Redshift cluster.
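To rule out basic network reachability from the machine running the Node.js code (assuming the ETIMEDOUT comes from the security group or public-accessibility setting rather than from node-redshift itself), a quick TCP probe with Node's built-in net module can help; the host below is the cluster endpoint from the question:
const net = require('net');

// Probe: succeeds only if port 5439 on the cluster endpoint is reachable from this machine.
const socket = net.connect({
  host: 'redshift-cluster-1.xxxxxxxxxx.us-east-1.redshift.amazonaws.com',
  port: 5439
});
socket.setTimeout(5000);
socket.on('connect', () => { console.log('TCP connection succeeded'); socket.end(); });
socket.on('timeout', () => { console.log('TCP connect timed out - likely a security group / VPC / public accessibility issue'); socket.destroy(); });
socket.on('error', (err) => console.error('TCP connect failed:', err.message));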
When I was using cassandra-driver version 3.x everything worked fine. Now that I have upgraded, I get the following message...
Error: ArgumentError: 'localDataCenter' is not defined in Client options and also was not specified in constructor. At least one is required.
My client declaration looks like this...
const client = new Client({
  contactPoints: this.servers,
  keyspace: "keyspace",
  authProvider,
  sslOptions,
  pooling: {
    coreConnectionsPerHost: {
      [distance.local]: 1,
      [distance.remote]: 1
    }
  },
  // TODO: Needed because in spite of the documentation provided by DataStax the default value is not 0
  socketOptions: {
    readTimeout: 0
  }
});
What should I use for the localDataCenter property?
To find your data center name, check your node's cassandra-rackdc.properties file:
$ cat cassandra-rackdc.properties
dc=HoldYourFire
rack=force10
Or, run a nodetool status:
$ bin/nodetool status
Datacenter: HoldYourFire
========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 172.0.0.1 575.64 KiB 16 ? 5c5cfc93-2e61-472e-b69b-a4fc40f94876 force10
UN 172.0.0.2 575.64 KiB 16 ? 4f040fef-5a6c-4be1-ba13-c9edbeaff6e1 force10
UN 172.0.0.3 575.64 KiB 16 ? 96626294-0ea1-4775-a08e-45661dc84cfa force10
If you have multiple data centers, pick the one that your application is deployed in.
Since v4.0, localDataCenter is a required Client option:
When using DCAwareRoundRobinPolicy, which is used by default, a local data center must now be provided to the Client options parameter as localDataCenter. This is necessary to prevent routing requests to nodes in remote data centers.
Refer to the upgrade guide here.
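As a minimal sketch (hypothetical contact points and keyspace; the data center name is the one from the nodetool output above), the option is passed straight to the Client constructor:
const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['172.0.0.1', '172.0.0.2'],  // hypothetical contact points
  localDataCenter: 'HoldYourFire',            // must match the DC name reported by nodetool status
  keyspace: 'my_keyspace'                     // hypothetical keyspace
});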
I'm using the Azure CosmosDB Emulator in Cassandra API mode. I could not find any documentation on the proper localDataCenter property, so I just tried datacenter1 to see what would happen.
const client = new cassandra.Client({
  contactPoints: ['localhost'],
  localDataCenter: 'datacenter1',
  authProvider: new cassandra.auth.PlainTextAuthProvider('localhost', 'key provided during emulator startup'),
  protocolOptions: {
    port: 10350
  },
  sslOptions: {
    rejectUnauthorized: true
  }
});

client.connect()
  .then(r => console.log(r))
  .catch(e => console.error(e));
This gave me a very helpful error message:
innerErrors: {
'127.0.0.1:10350': ArgumentError: localDataCenter was configured as 'datacenter1', but only found hosts in data centers: [South Central US]
Once I changed my data center to "South Central US" my connection was successful.
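For completeness, the working configuration differs only in the localDataCenter value (a sketch; all other options exactly as in the snippet above):
const client = new cassandra.Client({
  contactPoints: ['localhost'],
  localDataCenter: 'South Central US',  // the data center name reported by the emulator
  authProvider: new cassandra.auth.PlainTextAuthProvider('localhost', 'key provided during emulator startup'),
  protocolOptions: { port: 10350 },
  sslOptions: { rejectUnauthorized: true }
});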
It should be the data center where the application is running, or the one closest to it.
Example copied from the DataStax Node.js driver documentation:
const client = new cassandra.Client({
  contactPoints: ['host1', 'host2'],
  localDataCenter: 'datacenter1'
});
We have a 3-node cluster with a replication factor (RF) of 3.
As soon as we drain one node from the cluster, we see many errors like:
All host(s) tried for query failed (no host was tried)
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:214)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
All our writes and reads use a consistency level of QUORUM or ONE, so with one node down everything should work perfectly. But as long as the node is down, these exceptions are thrown.
We use Cassandra 2.2.4 + Java Cassandra Driver 2.1.10.2
Here's how we create our cluster:
new Cluster.Builder()
    .addContactPoints(CONTACT_POINTS)
    .withCredentials(USERNAME, PASSWORD)
    .withRetryPolicy(new LoggingRetryPolicy(DefaultRetryPolicy.INSTANCE))
    .withReconnectionPolicy(new ExponentialReconnectionPolicy(10, 10000))
    .withLoadBalancingPolicy(new TokenAwarePolicy(new RoundRobinPolicy()))
    .withSocketOptions(new SocketOptions().setReadTimeoutMillis(12_000))
    .build();
CONTACT_POINTS is a String array of the three public IPs of the nodes.
A few months ago the cluster worked fine with only two nodes temporarily available, but for an unknown reason that is no longer the case and I'm running out of ideas :(
Thanks a lot for your help!
Problem solved.
Further analysis showed that the issue was an IP problem. Our Cassandra servers use private local IPs (10.0.x.x) to communicate with each other, while our app servers have the public IPs in their configuration.
When they were on the same network this worked properly, but once they moved to a different network the app servers could only connect to one machine of the cluster; the other two were considered down because the driver was trying to reach their private local IPs instead of the public ones.
The solution was to add an AddressTranslater to the cluster builder:
.withAddressTranslater(new ToPublicIpAddressTranslater())
With the following code:
private static class ToPublicIpAddressTranslater implements AddressTranslater {

    private Map<String, String> internalToPublicIpMap = new HashMap<>();

    public ToPublicIpAddressTranslater() {
        // Build the private-to-public IP mapping from the two parallel arrays.
        for (int i = 0; i < CONTACT_POINT_PRIVATE_IPS.length; i++) {
            internalToPublicIpMap.put(CONTACT_POINT_PRIVATE_IPS[i], CONTACT_POINTS[i]);
        }
    }

    @Override
    public InetSocketAddress translate(InetSocketAddress address) {
        String publicIp = internalToPublicIpMap.get(address.getHostString());
        if (publicIp != null) {
            return new InetSocketAddress(publicIp, address.getPort());
        }
        return address;
    }
}
I have set up a Redis cluster in Google Compute Engine using the click-to-deploy option. Now I want to connect to this Redis server from my Node.js code using 'ioredis'. Here is my code for connecting to a single instance of Redis:
var Redis = require("ioredis");
var store = new Redis(6379, 'redis-ob0g');//to store the keys
var pub = new Redis(6379, 'redis-ob0g');//to publish a message to all workers
var sub = new Redis(6379, 'redis-ob0g');//to subscribe a message
var onError = function (err) {
console.log('fail to connect to redis ',err);
};
store.on('error',onError);
pub.on('error',onError);
sub.on('error',onError);
And it worked. Now I want to connect to Redis as a cluster, so I changed the code to:
/**
 * list of servers in the replica set
 * @type {{port: number, host: string}[]}
 */
var nodes = [
  { port: port, host: hostMaster },
  { port: port, host: hostSlab1 },
  { port: port, host: hostSlab2 }
];

var store = new Redis.Cluster(nodes); // to store the keys
var pub = new Redis.Cluster(nodes);   // to publish a message to all workers
var sub = new Redis.Cluster(nodes);   // to subscribe a message channel
Now it throws this error:
Here is my Redis cluster in my Google Compute console:
OK, I think there is some confusion here.
A Redis Cluster deployment is not the same as a number of standard Redis instances protected by Sentinel. They are two very different things.
The click-to-deploy option on GCE deploys a number of standard Redis instances protected by Sentinel, not Redis Cluster.
ioredis can handle both kinds of deployment, but you have to use the corresponding API. Here you were trying to use the Redis Cluster API, resulting in this error (cluster-related commands are not enabled on standard Redis instances).
According to ioredis documentation, you are supposed to connect with:
var redis = new Redis({
  sentinels: [
    { host: hostMaster, port: 26379 },
    { host: hostSlab1, port: 26379 },
    { host: hostSlab2, port: 26379 }
  ],
  name: 'mymaster'
});
Of course, check the Sentinel ports and the name of the master. ioredis will automatically manage the switch to a slave instance when the master fails, after Sentinel has ensured that the slave is promoted to master.
Note that since you use pub/sub, you will need several Redis connections, as sketched below.
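A connection that has subscribed to a channel cannot issue regular commands, so one sketch (reusing the Sentinel settings above and the hostMaster/hostSlab variables from the question) is to create separate clients for storing, publishing, and subscribing:
var Redis = require('ioredis');

var sentinelOptions = {
  sentinels: [
    { host: hostMaster, port: 26379 },
    { host: hostSlab1, port: 26379 },
    { host: hostSlab2, port: 26379 }
  ],
  name: 'mymaster'
};

var store = new Redis(sentinelOptions); // regular commands (keys)
var pub = new Redis(sentinelOptions);   // publishing messages
var sub = new Redis(sentinelOptions);   // subscribing (this connection enters subscriber mode)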
We've been advised to try Ganglia as a monitoring tool for our cluster.
The installation was pretty smooth, but I have a problem with connectivity between gmond and gmetad.
The meta node is able to see (in the web UI) only the local gmond host (itself).
The configuration of gmetad (10.45.11.26 is the gmetad host):
data_source "hbase" 10.45.11.26
The configuration of gmond (10.45.11.27 is the gmond host):
cluster {
  name = "hbase"
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}
udp_send_channel {
  host = 10.45.11.26
  port = 8649
  ttl = 1
}
udp_recv_channel {
  port = 8649
  bind = 10.45.11.27
}
tcp_accept_channel {
  port = 8649
}
Telnet from gmetad to gmond on port 8649 returns XML.
I can see UDP traffic coming from gmond on the gmetad node (tcpdump).
What am I missing here?
I don't know if you still need help, but it can help to add
globals {
(......)
send_metadata_interval = 60 /*gmond heartbeats in secs */
}
in gmond.conf
In this case, you may have to wait 60 seconds after node startup before it is seen.