I am currently trying to connect to my Redis cluster, which runs on another instance, from a server running my application. I am using ioredis to interface between my application and my Redis instance, and it worked fine when there was only a single Redis node running. However, after setting up the cluster connection in my Node application, it constantly loops on the connection. My cluster setup itself works correctly.
As of now, I have tried the following configuration in my application to connect to the cluster. The issue is that the 'connect' event constantly loops, printing 'Connected to Redis!'. The 'ready' and 'error' events are never fired.
import { Cluster } from 'ioredis';

const cache: Cluster = new Cluster([
  { port: 8000, host: REDIS_HOST },
  { port: 8001, host: REDIS_HOST },
  { port: 8002, host: REDIS_HOST },
]);

cache.on('connect', () => {
  console.log('Connected to Redis!');
});
In the end, the 'connect' event should only fire once. Does anyone have any thoughts on this?
This kind of error, as I discovered today, is not related to ioredis but to the Redis instance setup. In my case, the problem I had was with p3x-redis-ui, which uses ioredis; it was the cluster that had not been initialized.
See https://github.com/patrikx3/redis-ui/issues/48
Maybe you'll find some clues there to help you resolve your bug.
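If you run into the same symptom, a minimal diagnostic sketch (assuming the same REDIS_HOST and ports as the question) is to subscribe to every lifecycle event the ioredis Cluster emits; a cluster that was never initialized typically cycles through connect/close/reconnecting forever while 'ready' never fires.

const Redis = require('ioredis');

const cache = new Redis.Cluster([
  { port: 8000, host: REDIS_HOST },
  { port: 8001, host: REDIS_HOST },
  { port: 8002, host: REDIS_HOST },
]);

// Log every lifecycle event, not just 'connect', to make the loop visible.
['connect', 'ready', 'error', 'close', 'reconnecting', 'end', 'node error']
  .forEach((event) => {
    cache.on(event, (...args) => console.log('cluster event:', event, ...args));
  });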
I have tried following https://socket.io/docs/using-multiple-nodes/:
const io = require('socket.io')(3000);
const redis = require('socket.io-redis');
io.adapter(redis({ host: 'localhost', port: 6379 }));
but it didn't work with multiple server cores. Any guidance from an expert here would be much appreciated.
I am using PM2 for Node process clustering,
and the issue I am facing is that a user on one worker process cannot connect over Socket.IO, while all connections between users on the same Socket.IO process work fine.
In short, I want to cluster multiple Socket.IO servers for load balancing.
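For reference, a minimal sketch of the adapter wiring from the linked docs (host and ports are assumptions). One caveat that commonly bites PM2 setups: Socket.IO's HTTP long-polling transport also needs sticky sessions, because every request of a handshake must reach the same worker.

const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');

// The Redis adapter relays events between workers, so broadcasts reach
// sockets connected to other PM2 processes.
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', (socket) => {
  socket.on('chat', (msg) => {
    io.emit('chat', msg); // goes through Redis to every worker
  });
});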
I'm running a Node.js app on Google Cloud that uses a Redis caching server. It ran fine for a couple of months, but it suddenly started throwing connection errors and occasionally stops responding.
The app is running in the standard environment and connects to the VM that is running the Redis instance via a VPC connector. I suspect it is a networking issue because the issue doesn't seem to appear when I run the Node.js app from my own computer (connected to the same Redis server) or when the app is run in a flex environment and connects to the subnetwork directly. However, I'd prefer the app to run in the standard environment because as far as I know that's the only way to force the traffic over https.
When I monitor via redis-cli, the server simply doesn't receive any commands once the connection has failed.
The timeout in redis.conf is set to 0.
Redis version: 5.0.5
Here's the Redis code. I don't think it is the issue, though; it was running without problems a couple of weeks ago.
const redis = require('redis')

const redisOptions = {
  host: process.env.REDIS_IP,
  port: process.env.REDIS_PORT,
  password: process.env.REDIS_PASS,
  enable_offline_queue: false,
}

// Pass the whole options object: createClient(host, port) is not a valid
// node_redis signature, so the password and enable_offline_queue settings
// were being dropped.
const client = redis.createClient(redisOptions)

// Log any errors
client.on('error', function (error) {
  console.log('Error:')
  console.log(error)
})

module.exports = client
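Not from the original post, but one thing worth checking when ETIMEDOUT leaves the client unusable: node_redis accepts a retry_strategy option that controls reconnection. A sketch, reusing the same options object:

// Return a number of milliseconds to schedule the next reconnect attempt,
// or an Error to stop retrying altogether.
redisOptions.retry_strategy = function (options) {
  if (options.total_retry_time > 60 * 1000) {
    return new Error('Redis retry time exhausted');
  }
  return Math.min(options.attempt * 100, 3000); // capped backoff
};

const client = redis.createClient(redisOptions);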
These errors regularly show up in the Google App Engine log. When they occur, commands sent to Redis do not show up in the logs.
A 2019-08-31T12:42:27.162834Z { Error: Redis connection to 10.128.15.197:6379 failed - read ETIMEDOUT
A 2019-08-31T12:42:27.162868Z     at TCP.onStreamRead (internal/stream_base_commons.js:111:27) errno: 'ETIMEDOUT', code: 'ETIMEDOUT', syscall: 'read' }
I have seen the same issue many times with different databases, and you have already found the cause: the number of open connections is a limited and costly resource. Try the following pattern (it is just an example):
// Inside your db module
function dbCall(userFunc) {
  const client = anyDb.createClient(host, port, ...);
  userFunc(client, () => { client.quit(); /* client.close() or whatever */ });
}
// Usage
dbCall((client, done) => {
  client.doSomethingWithCallback(..., () => {
    // user code
    done();
  });
});

dbCall((client, done) => {
  client.doSomePromise(...)
    .finally(done);
});
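The same idea carries over to promise-based code; a small variant under the same assumptions:

// Open a client, run the user's function, and always close the connection.
function withDb(userFunc) {
  const client = anyDb.createClient(host, port, ...);
  return Promise.resolve()
    .then(() => userFunc(client))
    .finally(() => client.quit());
}

// Usage
withDb((client) => client.doSomePromise(...));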
I have a Redis setup in an Azure Linux VM running one master, one slave, and a sentinel in the same VM (A). When I try to connect to the Redis sentinel from another VM (B) using redis-cli, I am able to connect and set and get values. But when I try to connect to the Redis sentinel using the ioredis module in Node.js from VM (B), it throws a connection timeout error. I use the following code snippet to connect to the sentinel from the Node application:
var Redis = require('ioredis');
var redis = new Redis({
  sentinels: [{ host: 'x.x.x.x', port: 26379 }],
  name: 'mymaster'
});
The confusing part is that when I run the Redis master, slave, and sentinel in the same VM (A) and use '127.0.0.1' instead of 'x.x.x.x', the code works fine.
Any help is much appreciated.
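A guess, not confirmed by the poster: this is the classic symptom of the sentinel announcing the master's address as 127.0.0.1. redis-cli talks to whatever address you give it, but ioredis asks the sentinel for the master's address and then connects there, so a loopback answer times out from VM (B). You can check what the sentinel is announcing with a snippet like this (x.x.x.x and 'mymaster' as in the question):

var Redis = require('ioredis');

// Connect to the sentinel itself (not through it) and ask which address
// it advertises for the master.
var sentinel = new Redis({ host: 'x.x.x.x', port: 26379 });
sentinel.call('sentinel', 'get-master-addr-by-name', 'mymaster')
  .then(function (addr) {
    // If this prints 127.0.0.1, VM (B) ends up connecting to itself.
    console.log('Sentinel reports master at', addr[0] + ':' + addr[1]);
    sentinel.disconnect();
  });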
I have 2 Redis servers: one master and the other a slave (replica). Once the master goes down for some reason, the slave becomes master and continues to act as master until something goes wrong with that server.
I have a Node.js server from which I want to push data to whichever Redis server is currently the master. I have a sentinel that monitors the Redis servers, but my question is: how do I fetch the master information from the sentinel using Node.js?
And if there is a way, will it automatically push data to the alternative Redis server without any service restart?
ioredis supports Sentinel, like this:
var redis = new Redis({
  sentinels: [{ host: 'localhost', port: 26379 }, { host: 'localhost', port: 26380 }],
  name: 'mymaster'
});
redis.set('foo', 'bar');
The name identifies a master and its replicas, and it must match the master name you specified in the Sentinel's sentinel.conf. You can refer to this page on how to configure Sentinel.
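For example, a minimal sentinel.conf entry might look like this (the address and quorum here are assumptions):

sentinel monitor mymaster 127.0.0.1 6379 2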
About your second question, see below:
ioredis guarantees that the node you connected to is always a master even after a failover. When a failover happens, instead of trying to reconnect to the failed node (which will be demoted to slave when it's available again), ioredis will ask sentinels for the new master node and connect to it. All commands sent during the failover are queued and will be executed when the new connection is established so that none of the commands will be lost.
And node-redis does not support it at the moment.
Edit: My issue is now this: I can connect to iot.eclipse.org using http://www.hivemq.com/demos/websocket-client, using port 80. When I connect via a browserified mqtt.js client, I get the following error:
WebSocket connection to 'ws://iot.eclipse.org/' failed: Error during
WebSocket handshake: Unexpected response code: 200
I've tried ports 8080, 8000, 1883 and 80, without any luck. Any suggestions?
------------ Original question below -----------
I want to connect with a mqtt broker using mqtt over websockets. My client will need to run in a browser.
To achieve this I am using the mqtt.js library and am following these instructions.
Everything works when running against the public broker at broker.mqttdashboard.com. However, when I connect to the public brokers at iot.eclipse.org and test.mosquitto.org, I get HTTP errors.
I think the problem is an incorrect client configuration when running against the latter two brokers, but I'm struggling to find any help.
Here's the configuration; is there anyone out there who can help me?
// Works fine
var options = {
  host: "broker.mqttdashboard.com",
  port: 8000
};

// Doesn't work
/*var options = {
  host: "m2m.eclipse.org",
  protocolId: 'MQIsdp',
  protocolVersion: 3
};*/

// Doesn't work
/*var options = {
  host: "test.mosquitto.org",
  protocolId: 'mosqOtti',
  protocolVersion: 3
};*/

var client = mqtt.connect(options);
Let me know if there's any more information you need!
Mark
test.mosquitto.org and iot.eclipse.org are both websockets-enabled (and have been for a long time now, actually).
You have already got test.mosquitto.org working; the key there is using port 8080.
The current iot.eclipse.org configuration expects the connection url to be ws://iot.eclipse.org/mqtt.
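With mqtt.js that would look something like this (the protocol options are carried over from the question and may not be needed):

// Pass a full ws:// URL including the /mqtt path instead of host/port options.
var client = mqtt.connect('ws://iot.eclipse.org/mqtt', {
  protocolId: 'MQIsdp',
  protocolVersion: 3
});

client.on('connect', function () {
  console.log('Connected over websockets');
});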
I don't think m2m.eclipse.org / iot.eclipse.org or test.mosquitto.org have websockets enabled.
broker.mqttdashboard.com runs HiveMQ underneath, which has native websocket support.
So in short, I don't think this is a configuration problem on your side. To make sure, you can check this web application and see if the other brokers work with that: http://www.hivemq.com/demos/websocket-client/