What should I use for localDataCenter in Cassandra-Driver 4.x?

When I was using cassandra-driver version 3.x everything worked fine. Now that I have upgraded I get the following message...
Error: ArgumentError: 'localDataCenter' is not defined in Client options and also was not specified in constructor. At least one is required.
My client declaration looks like this...
const client = new Client({
  contactPoints: this.servers,
  keyspace: "keyspace",
  authProvider,
  sslOptions,
  pooling: {
    coreConnectionsPerHost: {
      [distance.local]: 1,
      [distance.remote]: 1
    }
  },
  // TODO: needed because, despite the DataStax documentation, the default value is not 0
  socketOptions: {
    readTimeout: 0
  }
});
What should I use for the localDataCenter property?

To find your data center name, check your node's cassandra-rackdc.properties file:
$ cat cassandra-rackdc.properties
dc=HoldYourFire
rack=force10
Or run nodetool status:
$ bin/nodetool status
Datacenter: HoldYourFire
========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 172.0.0.1 575.64 KiB 16 ? 5c5cfc93-2e61-472e-b69b-a4fc40f94876 force10
UN 172.0.0.2 575.64 KiB 16 ? 4f040fef-5a6c-4be1-ba13-c9edbeaff6e1 force10
UN 172.0.0.3 575.64 KiB 16 ? 96626294-0ea1-4775-a08e-45661dc84cfa force10
If you have multiple data centers, you should pick the same one that your application is deployed in.

Since v4.0, localDataCenter is a required Client option
When using DCAwareRoundRobinPolicy, which is used by default, a local data center must now be provided to the Client options parameter as localDataCenter. This is necessary to prevent routing requests to nodes in remote data centers.
Refer to the upgrade guide for details.
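For example, using the data center name from the nodetool output above, a minimal v4.x client sketch would be (the contact point and keyspace are placeholders):
const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['172.0.0.1'],
  localDataCenter: 'HoldYourFire', // must match the name reported by nodetool status
  keyspace: 'keyspace'
});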

I'm using the Azure CosmosDB Emulator in Cassandra API mode. I could not find any documentation on the proper localDataCenter property, so I just tried datacenter1 to see what would happen.
const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['localhost'],
  localDataCenter: 'datacenter1',
  authProvider: new cassandra.auth.PlainTextAuthProvider('localhost', 'key provided during emulator startup'),
  protocolOptions: {
    port: 10350
  },
  sslOptions: {
    rejectUnauthorized: true
  }
});

client.connect()
  .then(r => console.log(r))
  .catch(e => console.error(e));
This gave me a very helpful error message:
innerErrors: {
'127.0.0.1:10350': ArgumentError: localDataCenter was configured as 'datacenter1', but only found hosts in data centers: [South Central US]
Once I changed my data center to "South Central US" my connection was successful.
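For reference, the corrected configuration (same placeholders as before) is:
const client = new cassandra.Client({
  contactPoints: ['localhost'],
  localDataCenter: 'South Central US', // the data center named in the error message
  authProvider: new cassandra.auth.PlainTextAuthProvider('localhost', 'key provided during emulator startup'),
  protocolOptions: { port: 10350 },
  sslOptions: { rejectUnauthorized: true }
});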

It should be the data center where the application is running, or the one closest to it.
Example copied from the DataStax Node.js documentation:
const client = new cassandra.Client({
  contactPoints: ['host1', 'host2'],
  localDataCenter: 'datacenter1'
});

Related

Azure Function connect Azure PostgreSQL ETIMEDOUT, errno: -4039

I have an Azure (AZ) Function that does two things:
validate submitted info, involving 3rd-party packages;
when OK, call a PostgreSQL function at AZ to fetch a small set of data.
Testing with Postman against localhost, this AF's response time is < 40 ms. Deployed to the cloud, with the URL changed to the AZ one and the same set of data, it took 30 seconds and returned Status: 500 Internal Server Error.
A search turned up this SO question, which suggested I might need to bump my subscription to a more expensive tier to avoid cold starts.
But more investigation, running parts 1 and 2 individually and combined, found:
the validation part alone runs perfectly at AZ, response time < 40 ms, just like locally, which suggests cold start/npm installation is not the issue;
the pg function call always takes long and returns status 500, whether it runs alone or after part 1, with no data returned.
Application Insights is enabled and I added a Diagnostic setting with:
FunctionAppLogs and AllMetrics selected
Send to Log Analytics workspace and Stream to an event hub selected
The following queries found no errors/exceptions:
requests | order by timestamp desc |limit 100 // success is "true", time taken 30 seconds, status = 500
traces | order by timestamp desc | limit 30 // success is "true", time taken 30 seconds, status = 500
exceptions | limit 30 // no data returned
How complicated is my pg call? It's a standard connection, simple and short:
require('dotenv').config({ path: './environment/PostgreSql.env' });
const fs = require('fs');
const pgp = require('pg-promise')(); // () = taking default initOptions

const db = pgp({
  user: process.env.PGuser,
  host: process.env.PGhost,
  database: process.env.PGdatabase,
  password: process.env.PGpassword,
  port: process.env.PGport,
  ssl: {
    rejectUnauthorized: true,
    ca: fs.readFileSync('./environment/DigiCertGlobalRootCA.crt.pem').toString(),
  },
});
const pgTest = (nothing) => {
  return new Promise((resolve, reject) => {
    const sql = 'select * from schema.test()'; // test() does a select from a 2-row narrow table.
    db.any(sql)
      .then(
        good => resolve(good),
        bad => reject({ status: 555, body: bad })
      );
  });
};

module.exports = { pgTest };
AF test1 is a standard httpTrigger with anonymous access:
const x1 = require("package1");
...
const xx = require("packagex");
const pgdb = require("db");

module.exports = function (context) {
  try {
    pgdb.pgTest(1)
      .then(
        good => { context.res = { body: good }; context.done(); },
        bad => { context.res = { body: bad }; context.done(); }
      )
      .catch(err => { console.log(err); });
  }
  catch (e) {
    context.res = { body: e }; context.done();
  }
};
Note:
AZ = Azure.
AZ pg doesn't require SSL.
pg connectivity method: public access (allowed IP addresses)
Postman tests on a local F5 run go against the same AZ pg database, all in the same region.
pgAdmin and psql both run fast against the same database.
AF deploy is a zip-file deployment; my understanding is that it uses the same configuration.
I'm new to Azure, but based on my experience, if it were about credentials it would fail right away.
Update 1: FunctionAppLogs | where TimeGenerated between ( datetime(2022-01-21 16:33:20) .. datetime(2022-01-21 16:35:46) )
Is it because my pg network access is set to Public access?
My AZ pg DB is a flexible server; its current networking is Public access (allowed IP addresses), and I have added some firewall rules with client IP addresses. My assumption was that access is allowed within AZ, but it's not.
Solution 1: simply check the box Allow public access from any Azure service within Azure to this server at the bottom of Settings -> Networking.
Solution 2: find all of the AF's outbound IPs and add them to the firewall rules under Settings -> Networking. The reason to add them all is that Azure selects an outbound IP at random.
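For Solution 2, one way to list a Function App's outbound IPs is the Azure CLI; the possibleOutboundIpAddresses property covers every IP the app might ever use (the resource group and app name are placeholders):
$ az functionapp show --resource-group <resource-group> --name <function-app-name> \
    --query possibleOutboundIpAddresses --output tsv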

How to pull values out of kubernetes config into a web application

I'm looking to create a small web application that lists some data about the ingresses in my cluster. The application will be hosted in the cluster itself, so I assume I'm going to need a service account attached to a backend application that calls the Kubernetes API to get the data, then serves that up to the front end through a GET via axios etc. Am I along the right lines here?
You can use the JavaScript Kubernetes client package directly in your Node application to access the kube-apiserver over its REST APIs:
npm install @kubernetes/client-node
There are several ways to provide authentication information to your Kubernetes client.
This is code which worked for me:
const k8s = require('@kubernetes/client-node');

const cluster = {
  name: '<cluster-name>',
  server: '<server-address>',
  caData: '<certificate-data>'
};

const user = {
  name: '<cluster-user-name>',
  certData: '<certificate-data>',
  keyData: '<certificate-key>'
};

const context = {
  name: '<context-name>',
  user: user.name,
  cluster: cluster.name,
};

const kc = new k8s.KubeConfig();
kc.loadFromOptions({
  clusters: [cluster],
  users: [user],
  contexts: [context],
  currentContext: context.name,
});

const k8sApi = kc.makeApiClient(k8s.NetworkingV1Api);
k8sApi.listNamespacedIngress('<namespace>').then((res) => {
  console.log(res.body);
});
You need to pick the API client according to your ingress API version; in my case I was using NetworkingV1Api.
You can get further options from the JS client repo: https://github.com/kubernetes-client/javascript
There are different ways to authenticate; the service account you mentioned is one of them.
Yes, you will require one; however, if you plan to run your script on the cluster itself, there is no need to configure credentials manually.
You can directly use this method to authenticate:
const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
kc.loadFromDefault();

const k8sApi = kc.makeApiClient(k8s.NetworkingV1beta1Api); // before 1.14 use extensions/v1beta1
k8sApi.listNamespacedIngress('<Namespace name>').then((res) => {
  console.log(res.body);
});
You can check out these examples: https://github.com/kubernetes-client/javascript/tree/master/examples. You can also use TypeScript.
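To then serve that data to a front end over a GET, as the question describes, here is a hypothetical sketch using Node's built-in http module (the namespace, port, and route are illustrative; it assumes a pre-1.x @kubernetes/client-node where responses carry a body property, as in the answers above):
const http = require('http');
const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
kc.loadFromDefault(); // in-cluster, this picks up the pod's service account

const k8sApi = kc.makeApiClient(k8s.NetworkingV1Api);

// GET /ingresses returns name and hosts for each ingress in the namespace
http.createServer(async (req, res) => {
  if (req.method === 'GET' && req.url === '/ingresses') {
    try {
      const { body } = await k8sApi.listNamespacedIngress('default');
      const summary = body.items.map((ing) => ({
        name: ing.metadata.name,
        hosts: (ing.spec.rules || []).map((r) => r.host)
      }));
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify(summary));
    } catch (e) {
      res.writeHead(500);
      res.end(String(e));
    }
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);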

Timeout when trying to connect to redshift from node using node-redshift

I am trying to connect to Redshift from my Node.js code, to run a command that copies from S3 into Redshift.
I am using the node-redshift package for this, with the code below.
var Redshift = require('node-redshift');

var client = {
  user: 'awsuser',
  database: 'dev',
  password: 'zxxxx',
  port: '5439',
  host: 'redshift-cluster-1.xxxxxxxxxx.us-east-1.redshift.amazonaws.com',
};

var redshiftClient = new Redshift(client);

var pg_query = "copy test1 from 's3://aws-bucket/" + file_name + "' ACCESS_KEY_ID 'xxxxxxx' SECRET_ACCESS_KEY 'xxxxxxxxxx';";

redshiftClient.query(pg_query, { raw: true }, function (err1, pgres) {
  if (err1) {
    console.log('error here');
    console.error(err1);
  } else {
    // upload successful
    console.log('success');
  }
});
I have also tried an explicit connect, but in every case I get the timeout error below:
Error: Error: connect ETIMEDOUT XXX.XX.XX.XX:5439
The redshift cluster is assigned to a role for S3 full access and also has the default security group assigned.
Am I missing something here?
Make sure your cluster is publicly accessible. The cluster sits in a particular subnet; for that subnet, the security group's inbound rules in the VPC need an entry allowing the relevant IPs to connect to your Redshift cluster on port 5439.
Only if your public IP is covered by those rules can you connect to the cluster.
Say you have SQL Workbench/J, which lets you connect to the Redshift cluster. If you can connect with that SQL client, you can ignore the above, because it means your IP can already reach the cluster.
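A quick way to confirm basic network reachability from the machine running the Node.js code (assuming netcat is available; substitute your cluster endpoint):
$ nc -zv redshift-cluster-1.xxxxxxxxxx.us-east-1.redshift.amazonaws.com 5439
If this also times out, the problem is the security group or public accessibility setting, not the Node.js code.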

Cassandra status check using nodejs

I use Node.js in three environments, and Cassandra is running on all three nodes.
I understand that with nodetool status I can get the status of each node. But if my current node is down, I will not be able to run nodetool status on that node. So is there a way to get the status using the Node.js Cassandra driver?
Any help is appreciated.
EDITED:
As per dilsingi's suggestion, I used client.hosts, but the problem is that in the following cluster 172.30.56.60 is down (see the DN row in the nodetool output below), yet the driver still shows it as available.
How do I get the status of each node?
const cassandra = require('cassandra-driver');
const client = new cassandra.Client({
  contactPoints: ['172.30.56.60', '172.30.56.61', '172.30.56.62'],
  keyspace: 'test',
  policies: { loadBalancing: new cassandra.policies.loadBalancing.RoundRobinPolicy() }
});

async function read() {
  client.connect().then(function () {
    console.log('Connected to cluster with %d host(s): %j', client.hosts.length, client.hosts.keys());
    client.hosts.forEach(function (host) {
      console.log(host.address, host.datacenter, host.rack);
    });
  });
}

read();
nodetool status output:
Datacenter: newyork
===================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 172.30.56.62 1.13 MiB 256 34.8% e93827b7-ba43-4fba-8a51-4876832b5b22 rack1
DN 172.30.56.60 1.61 MiB 256 33.4% e385af22-803e-4313-bee2-16219f73c213 rack1
UN 172.30.56.61 779.4 KiB 256 31.8% be7fc52e-c45d-4380-85a3-4cbf6d007a5d rack1
Node.js code:
const cassandra = require('cassandra-driver');
const client = new cassandra.Client({
  contactPoints: ['172.30.56.60', '172.30.56.61', '172.30.56.62'],
  keyspace: 'qcs',
  policies: { loadBalancing: new cassandra.policies.loadBalancing.RoundRobinPolicy() }
});

async function read() {
  client.connect().then(function () {
    console.log('Connected to cluster with %d host(s): %j', client.hosts.length, client.hosts.keys());
    client.hosts.forEach(function (host) {
      console.log(host.address, host.datacenter, host.rack, host.isUp(), host.canBeConsideredAsUp());
    });
  });
}

read();
Node.js output:
Connected to cluster with 3 host(s): ["172.30.56.60:9042","172.30.56.61:9042","172.30.56.62:9042"]
172.30.56.60:9042 newyork rack1 true true
172.30.56.61:9042 newyork rack1 true true
172.30.56.62:9042 newyork rack1 true true
Drivers in general, including the Node.js driver, are aware of the entire Cassandra cluster topology. Upon initial contact with one or more of the node IP addresses in the connection string, the driver automatically identifies all the node IPs that make up the Cassandra ring. It is intelligent enough to know when a node goes down or a new node joins the cluster, and it can even continue working with a completely different set of nodes (IPs) than the ones it began with.
So there is no requirement to code for node status, as the driver handles that for you automatically. It is recommended to provide more than one IP in the connection string, to provide redundancy when making the initial connection.
The Node.js driver documentation describes the "Auto node discovery" feature and "Cluster & Schema Metadata".
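If you still want to observe node status from code, the Node.js driver's Client is an EventEmitter with documented hostUp/hostDown events; a minimal sketch (keyspace and contact points taken from the question) could be:
const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['172.30.56.60', '172.30.56.61', '172.30.56.62'],
  localDataCenter: 'newyork', // required in driver 4.x, per the first answer
  keyspace: 'qcs'
});

// These events fire as the driver detects topology changes
client.on('hostUp', host => console.log('host up:', host.address));
client.on('hostDown', host => console.log('host down:', host.address));

client.connect();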

Connection to Redis cluster failed

I have set up a Redis cluster in Google Compute Engine using the click-to-deploy option. Now I want to connect to this Redis server from my Node.js code using 'ioredis'. Here is my code to connect to a single instance of Redis:
var Redis = require("ioredis");
var store = new Redis(6379, 'redis-ob0g'); // to store the keys
var pub = new Redis(6379, 'redis-ob0g');   // to publish a message to all workers
var sub = new Redis(6379, 'redis-ob0g');   // to subscribe to a message channel

var onError = function (err) {
  console.log('failed to connect to redis', err);
};
store.on('error', onError);
pub.on('error', onError);
sub.on('error', onError);
And it worked. Now I want to connect to Redis as a cluster, so I changed the code to:
/**
 * List of servers in the replica set
 * @type {{port: number, host: string}[]}
 */
var nodes = [
  { port: port, host: hostMaster },
  { port: port, host: hostSlab1 },
  { port: port, host: hostSlab2 }
];

var store = new Redis.Cluster(nodes); // to store the keys
var pub = new Redis.Cluster(nodes);   // to publish a message to all workers
var sub = new Redis.Cluster(nodes);   // to subscribe to a message channel
Now it throws this error:
Here is my Redis cluster in my Google Compute console:
OK, I think there is some confusion here.
A Redis Cluster deployment is not the same as a number of standard Redis instances protected by Sentinel. They are two very different things.
The click-to-deploy option of GCE deploys a number of standard Redis instances protected by Sentinel, not Redis Cluster.
ioredis can handle both kinds of deployment, but you have to use the corresponding API. Here, you were trying to use the Redis Cluster API, resulting in this error (cluster-related commands are not activated for standard Redis instances).
According to the ioredis documentation, you are supposed to connect with:
var redis = new Redis({
  sentinels: [
    { host: hostMaster, port: 26379 },
    { host: hostSlab1, port: 26379 },
    { host: hostSlab2, port: 26379 }
  ],
  name: 'mymaster'
});
Of course, check the sentinel ports and the name of the master. ioredis will automatically manage the switch to a slave instance when the master fails, and Sentinel will ensure the slave is promoted to master just beforehand.
Note that since you use pub/sub, you will need several redis connections.
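Mirroring the store/pub/sub pattern from the question, a minimal sketch might look like the following (the sentinel hosts, master name, and the jobs channel are assumptions to adapt to your deployment):
var Redis = require('ioredis');

// Sentinel endpoints are placeholders; adjust hosts, ports, and master name
var sentinelOptions = {
  sentinels: [
    { host: 'redis-ob0g', port: 26379 },
    { host: 'redis-ob0g-2', port: 26379 },
    { host: 'redis-ob0g-3', port: 26379 }
  ],
  name: 'mymaster'
};

// One connection per role: a subscriber connection enters subscriber mode
// and cannot issue regular commands, so store/pub/sub stay separate.
var store = new Redis(sentinelOptions);
var pub = new Redis(sentinelOptions);
var sub = new Redis(sentinelOptions);

sub.subscribe('jobs', function (err) {
  if (err) console.log('subscribe failed', err);
});
sub.on('message', function (channel, message) {
  console.log('received', message, 'on', channel);
});
pub.publish('jobs', 'hello');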
