I have set up a Redis cluster in Google Compute Engine using the click-to-deploy option. Now I want to connect to this Redis server from my Node.js code using 'ioredis'. Here is my code to connect to a single instance of Redis:
var Redis = require("ioredis");
var store = new Redis(6379, 'redis-ob0g');//to store the keys
var pub = new Redis(6379, 'redis-ob0g');//to publish a message to all workers
var sub = new Redis(6379, 'redis-ob0g');//to subscribe a message
var onError = function (err) {
  console.log('failed to connect to redis ', err);
};
store.on('error',onError);
pub.on('error',onError);
sub.on('error',onError);
And it worked. Now I want to connect to Redis as a cluster, so I changed the code to:
/**
 * List of servers in the replica set
 * @type {{port: number, host: string}[]}
 */
var nodes = [
  { port: port, host: hostMaster },
  { port: port, host: hostSlab1 },
  { port: port, host: hostSlab2 }
];
var store = new Redis.Cluster(nodes);//to store the keys
var pub = new Redis.Cluster(nodes);//to publish a message to all workers
var sub = new Redis.Cluster(nodes);//to subscribe a message channel
Now it throws this error:
Here is my Redis cluster in my Google Compute console:
OK, I think there is some confusion here.
A Redis Cluster deployment is not the same as a number of standard Redis instances protected by Sentinel. They are two very different things.
The click-to-deploy option of GCE deploys a number of standard Redis instances protected by Sentinel, not Redis Cluster.
ioredis can handle both kinds of deployments, but you have to use the corresponding API. Here, you were trying to use the Redis Cluster API, resulting in this error (cluster-related commands are not enabled on standard Redis instances).
According to the ioredis documentation, you are supposed to connect with:
var redis = new Redis({
  sentinels: [
    { host: hostMaster, port: 26379 },
    { host: hostSlab1, port: 26379 },
    { host: hostSlab2, port: 26379 }
  ],
  name: 'mymaster'
});
Of course, check the sentinel ports and the name of the master. ioredis will automatically switch to a slave instance when the master fails, once Sentinel has promoted that slave to master.
Note that since you use pub/sub, you will need several Redis connections, as in the sketch below.
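A minimal sketch, assuming the same sentinel hosts, ports, and master name as above, with one connection per role:

var Redis = require('ioredis');

var sentinelConfig = {
  sentinels: [
    { host: hostMaster, port: 26379 },
    { host: hostSlab1, port: 26379 },
    { host: hostSlab2, port: 26379 }
  ],
  name: 'mymaster'
};

var store = new Redis(sentinelConfig); // regular commands (get/set)
var pub = new Redis(sentinelConfig);   // publisher
var sub = new Redis(sentinelConfig);   // subscriber: once it subscribes, this connection can only issue pub/sub commands

Each instance discovers the current master through the sentinels, so all three will fail over together.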
Related
I'm building a websocket backend that connects to a Kafka topic (with only one partition), consumes data from the earliest position, and keeps consuming new data until the websocket connection is closed. More than one websocket connection can exist at a time.
To ensure all data from the beginning is consumed, every time a websocket connection is made I create a new consumer group and subscribe to the topic:
const Kafka = require('node-rdkafka')
const { v4: uuidv4 } = require('uuid')

const kafkaConfig = (uuid) => ({
  'group.id': `my-topic-${uuid}`,
  'metadata.broker.list': KAFKA_URL,
})

const topicName = 'test-topic'
const consumer = new Kafka.KafkaConsumer(kafkaConfig(uuidv4()), {
  'auto.offset.reset': 'earliest',
})

console.log('attempting to connect to topic')
consumer.connect({ topic: topicName, timeout: 300 }, (err) => {
  if (err) {
    console.log('error connecting consumer to topic', topicName)
    throw err
  }
  console.log(`consumer connected to topic ${topicName}`)
  consumer.subscribe([topicName])
  consumer.consume((_err, data) => {
    // send data to websocket
  })
})
This seems to work as expected. However, when the number of consumers/consumer groups exceeds 4, the consumer connection seems to wait indefinitely: in the above snippet I see the log 'attempting to connect' but nothing after it.
I read the Kafka documentation and it looks like there is no limit on the number of consumer groups.
I'm running Kafka/ZooKeeper in a Docker container on my localhost and I haven't set any limits on topics.
My docker-compose file:
zookeeper:
  image: confluentinc/cp-zookeeper:latest
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000
kafka:
  image: confluentinc/cp-kafka:latest
  labels:
    - 'custom.project=faster-cms'
    - 'custom.service=kafka'
  depends_on:
    - zookeeper
  ports:
    - 9092:9092
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_LOG4J_ROOT_LOGLEVEL: INFO
    KAFKA_LOG4J_LOGGERS: 'kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO'
    CONFLUENT_SUPPORT_METRICS_ENABLE: 'false'
My question is: why does the connection wait indefinitely, and how do I raise the consumer limit or throw an error when it gets stuck?
Apparently this is a limitation of the node-rdkafka package: each connected consumer/producer ties up a libuv worker thread, and the default libuv thread pool size is 4. If you want to run more consumer groups, raise that limit by setting the UV_THREADPOOL_SIZE environment variable before starting Node, and the package will be able to open more connections.
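A minimal sketch of raising the limit, assuming a value of 16 (pick whatever fits your workload); the variable must take effect before the libuv thread pool is first used, so the command-line form is the safest:

// Option 1: set it when launching the process
//   UV_THREADPOOL_SIZE=16 node server.js
// Option 2: set it at the very top of the entry file, before node-rdkafka creates any client,
// since the libuv thread pool is created lazily on first use.
process.env.UV_THREADPOOL_SIZE = '16';

const Kafka = require('node-rdkafka');
// ...create consumers as before; roughly 16 blocking consumer connections can now run concurrently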
I am trying to connect to Redshift from my Node.js code to run a COPY command that loads data from S3 into Redshift.
I am using the node-redshift package for this, with the code below.
var Redshift = require('node-redshift');

var client = {
  user: 'awsuser',
  database: 'dev',
  password: 'zxxxx',
  port: '5439',
  host: 'redshift-cluster-1.xxxxxxxxxx.us-east-1.redshift.amazonaws.com',
};

var redshiftClient = new Redshift(client);

var pg_query = "copy test1 from 's3://aws-bucket/" + file_name + "' ACCESS_KEY_ID 'xxxxxxx' SECRET_ACCESS_KEY 'xxxxxxxxxx';";

redshiftClient.query(pg_query, {raw: true}, function (err1, pgres) {
  if (err1) {
    console.log('error here');
    console.error(err1);
  } else {
    // upload successful
    console.log('success');
  }
});
I have also tried calling connect explicitly, but in every case I get the timeout error below:
Error: Error: connect ETIMEDOUT XXX.XX.XX.XX:5439
The Redshift cluster is assigned a role with full S3 access and also has the default security group attached.
Am I missing something here?
Make sure your cluster is publicly accessible. The cluster sits in a particular subnet, and the inbound rules of the VPC security group for that subnet must contain an entry allowing connections to your Redshift cluster on port 5439.
Only if your public IP is covered by those inbound rules will you be able to connect to the cluster.
For example, if you have a SQL client such as SQL Workbench/J and can already connect to the Redshift cluster with it, you can ignore the above, because it means your IP is able to reach the cluster.
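As an illustration only (the security group id, region, and CIDR below are placeholders, not values from the question), such an inbound rule can be added with the AWS SDK:

const { EC2Client, AuthorizeSecurityGroupIngressCommand } = require('@aws-sdk/client-ec2');

const ec2 = new EC2Client({ region: 'us-east-1' });

// Allow inbound TCP 5439 (Redshift) on the security group attached to the cluster's subnet
ec2.send(new AuthorizeSecurityGroupIngressCommand({
  GroupId: 'sg-0123456789abcdef0',            // placeholder group id
  IpPermissions: [{
    IpProtocol: 'tcp',
    FromPort: 5439,
    ToPort: 5439,
    IpRanges: [{ CidrIp: '203.0.113.7/32' }], // your public IP, or 0.0.0.0/0 to allow all IPs
  }],
})).then(() => console.log('rule added'), console.error);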
I have used embedded Elasticsearch as part of a Spring application in Java, like this:
Node node;

@SuppressWarnings("unused")
@Bean
public Client es() {
    node = nodeBuilder().local(true).node();
    Client client = node.client();
    boolean indexExists = client.admin().indices().prepareExists(INDEX).execute().actionGet().isExists();
    if (!indexExists) {
        client.admin().indices().prepareCreate(INDEX).execute().actionGet();
    }
    return client;
}
I'm trying to do something similar with Node.js so I don't have to run a separate Elasticsearch instance (super low traffic). In the Spring case, I just set .local(true) and it's good to go. I can't find any option like that in Node.
This is what I'm doing now
var elasticsearch = require('elasticsearch');
var client = new elasticsearch.Client({
  // log: 'trace',
  host: 'localhost:9200'
});
and it works fine for an external server.
You can't have an embedded Elasticsearch node client in Node.js; the JavaScript client only talks to a separately running Elasticsearch server over HTTP. The second approach is the way to go.
I'm trying to cluster Socket.io using net.createServer. All the examples use the client IP to decide which worker a connection goes to. However, I'm using 4 servers behind a load balancer that spreads the IPs across the different servers.
So in the Node cluster I would like to use a unique id to route each connection to a specific worker.
Figure that each user that wants to connect can add a parameter to the connection URL: ws://localhost/socket.io?id=xxyyzz
How can I get the connection URL in net.createServer?
Today's code, based on IP:
var server = net.createServer({ pauseOnConnect: true }, function (connection) {
  // We received a connection and need to pass it to the appropriate
  // worker. Get the worker for this connection's source IP and pass
  // it the connection.
  var remote = connection.remoteAddress;
  var local = connection.localAddress;
  var ip = (remote + local).match(/[0-9]+/g)[0].replace(/,/g, '');
  var wIndex = ip % num_processes;
  var worker = workers[wIndex];
  worker.send('sticky-session:connection', connection);
});
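For reference, a hedged sketch of one way the id-based routing could look (the regex, hash function, and message shape below are illustrative assumptions, not part of the original code): peek at the first data chunk, extract the id from the upgrade request, and hash it to pick a worker.

var server = net.createServer({ pauseOnConnect: true }, function (connection) {
  connection.resume();
  connection.once('data', function (chunk) {
    connection.pause();
    // The first chunk contains the HTTP upgrade request, e.g. "GET /socket.io/?id=xxyyzz HTTP/1.1"
    var match = chunk.toString('utf8').match(/[?&]id=([^&\s]+)/);
    var id = match ? match[1] : (connection.remoteAddress || '');

    // Simple string hash -> worker index
    var hash = 0;
    for (var i = 0; i < id.length; i++) {
      hash = (hash * 31 + id.charCodeAt(i)) | 0;
    }
    var worker = workers[Math.abs(hash) % num_processes];

    // Pass the socket plus the bytes already read, so the worker can replay them
    // (e.g. connection.unshift(Buffer.from(msg.chunk, 'base64')) before emitting 'connection').
    worker.send({ cmd: 'sticky-session:connection', chunk: chunk.toString('base64') }, connection);
  });
});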
I use the node module memjs with Redis Labs Memcached Cloud. Is there any way to close a connection? Thank you.
In github.com/alevy/memjs/blob/master/lib/memjs/memjs.js there is a close method that loops over the connected servers and closes the connection to each of them. Another method, quit, actually makes use of close.
// Closes (abruptly) connections to all the servers.
Client.prototype.close = function() {
  for (var i in this.servers) {
    this.servers[i].close();
  }
}
As the Redis Labs documentation shows, creating the client like this:
var memjs = require('memjs');
var mc = memjs.Client.create('hostname:port', {
  username: 'username',
  password: 'password'
});
will give you the option to close it like this:
mc.close();
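And, as noted above, memjs also exposes quit, which does a graceful shutdown by sending the memcached QUIT command before closing (a small variant of the same call):

mc.quit();   // sends QUIT to each server, then closes the connections
// or
mc.close();  // closes the sockets abruptly, without the QUIT round trip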