I'm running a Node.js app on Google Cloud that uses a Redis caching server. It ran fine for a couple of months, but it suddenly started throwing connection errors and occasionally stops responding.
The app runs in the App Engine standard environment and connects to the VM running the Redis instance via a VPC connector. I suspect it's a networking issue, because the problem doesn't seem to appear when I run the Node.js app from my own computer (connected to the same Redis server) or when the app runs in the flexible environment and connects to the subnetwork directly. However, I'd prefer the app to run in the standard environment, because as far as I know that's the only way to force traffic over HTTPS.
When I monitor with redis-cli, the server simply doesn't receive any commands once the connection has failed.
The timeout setting in redis.conf is set to 0.
Redis version: 5.0.5
Here's the Redis code. I don't think it's the cause, though; it was running without issues a couple of weeks ago.
const redis = require('redis')

const redisOptions = {
  host: process.env.REDIS_IP,
  port: process.env.REDIS_PORT,
  password: process.env.REDIS_PASS,
  enable_offline_queue: false,
}

// Pass the whole options object so that password and enable_offline_queue
// actually take effect (createClient(host, port) matches no valid signature)
const client = redis.createClient(redisOptions)

// Log any errors
client.on('error', function(error) {
  console.log('Error:')
  console.log(error)
})

module.exports = client
These errors regularly show up in the Google App Engine log. When they occur, commands sent to Redis do not show up in the logs.
A 2019-08-31T12:42:27.162834Z { Error: Redis connection to 10.128.15.197:6379 failed - read ETIMEDOUT
A 2019-08-31T12:42:27.162868Z at TCP.onStreamRead (internal/stream_base_commons.js:111:27) errno: 'ETIMEDOUT', code: 'ETIMEDOUT', syscall: 'read' }
I've seen the same issue many times with different databases, and you've already found the culprit: the number of open connections is a limited and costly resource. Try the following pattern (it's just an example):
// Inside your db module
function dbCall(userFunc) {
  const client = anyDb.createClient(host, port /* , ... */);
  userFunc(client, () => { client.quit(); /* client.close() or whatever */ });
}
// Usage
dbCall((client, done) => {
  client.doSomethingWithCallback(..., () => {
    // user code
    done();
  });
});

dbCall((client, done) => {
  client.doSomePromise(...)
    .finally(done);
});
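For node_redis specifically, a rough sketch of the same pattern might look like this (a minimal sketch assuming redis@2.x option names; the env vars are the same ones used in the question, and some-key is just an illustrative placeholder):

const redis = require('redis');

// Open a short-lived connection per call instead of one shared client.
// host/port/password come from the same env vars as in the question.
function withRedis(userFunc) {
  const client = redis.createClient({
    host: process.env.REDIS_IP,
    port: process.env.REDIS_PORT,
    password: process.env.REDIS_PASS,
  });
  userFunc(client, () => client.quit());
}

// Usage: fetch a key, then release the connection.
withRedis((client, done) => {
  client.get('some-key', (err, value) => {
    if (err) console.log(err);
    else console.log('value:', value);
    done();
  });
});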
I'm quite new to backend work and databases. I have a solution that currently uses MongoDB. It was working fine until yesterday, when I started getting the error connect ETIMEDOUT 13.37.254.237:27017. Nothing in the URI path was changed or tampered with; the error just started, and I haven't been able to sort it out.
Is there any help available, please?
I have created another cluster and it's working well, but my initial cluster, which holds live client data, still won't connect.
My connection code is below. It hasn't worked: it was connecting fine all through yesterday, but today, without my having touched the code, it can't connect to my MongoDB.
const mongoose = require('mongoose'); // import needed by both snippets below

mongoose.connect(process.env.MONGO_URI, { useNewUrlParser: true, useUnifiedTopology: true });

const connectDB = async () => {
  try {
    const conn = await mongoose.connect(process.env.MONGO_URL);
    console.log(`MongoDB Connected: ${conn.connection.host}`);
  } catch (error) {
    console.log(error);
    process.exit(1);
  }
};
(Screenshot: my mongoose connection and the timed-out error.)
Connecting to MongoDB from a system whose IP address keeps changing causes this kind of issue. It can also be due to your network connection. So I will advise you to:
1. Allow connections from any IP address (but make sure your URI is not made public, to avoid attacks/access from unwanted users).
2. Check your network status (data).
3. Test the Mongo URI on your Atlas.
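If it helps with diagnosing, here is a minimal sketch (serverSelectionTimeoutMS is a standard MongoDB driver option; MONGO_URI is the same env var as above) that makes the timeout surface after a few seconds instead of hanging:

const mongoose = require('mongoose');

// Fail fast: cap how long the driver retries server selection before
// rejecting with the underlying reason for the ETIMEDOUT.
const connectDB = async () => {
  try {
    const conn = await mongoose.connect(process.env.MONGO_URI, {
      serverSelectionTimeoutMS: 5000,
    });
    console.log(`MongoDB Connected: ${conn.connection.host}`);
  } catch (error) {
    console.log(error); // error.reason (when present) explains the failure
    process.exit(1);
  }
};

connectDB();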
I’m trying to perform a simple query on a local database. I expect this query to return the schema names of the database.
I am running Postgres version 13.1 and I installed it by following the steps shown here: https://postgresapp.com/
As per the guidelines on the Postgres wiki, I'm including my config file changes; I only manually edited settings to enable logging.
This computer is running macOS Big Sur version 11.0.1.
I'm using Node.js and Postgres is running on port 5432 and I can access it with psql.
The relevant changes I've made are the following:
Endpoint in server.js:
router.post('/mock_call', async (ctx) => {
  try {
    console.log('sup')
    await sql.mockCall()
    ctx.body = {
      status: "It's good",
      data: 'good'
    }
  } catch (e) {
    console.log(e)
    ctx.body = {
      status: "Failed",
      data: e
    }
    ctx.res.statusCode = 422;
  }
})
SQL File:
require("openssl")
const { Pool, Client } = require('pg');

const client = new Client({
  user: 'user1',
  host: 'localhost',
  database: 'postgres',
  password: 'mypass',
  port: 5432,
});

module.exports = {
  mockCall: function () {
    console.log('mockCall begin')
    client.connect(err => {
      if (err) {
        console.error('connection error', err.stack)
      } else {
        console.log('connected')
      }
    })
    console.log('before query')
    client.query("SELECT schema_name FROM information_schema.schemata", (err, res) => {
      if (err) {
        console.log('theres an error:')
        console.log(err)
        return // res is undefined on error, so don't fall through
      }
      console.log('theres a response:')
      console.log(res)
      for (let row of res.rows) {
        console.log(JSON.stringify(row));
      }
      client.end();
    });
  }
}
Logs that actually get printed out when I hit the endpoint on localhost:
sup
mockCall begin
before query
The PostgreSQL logs are not helpful; it's as if the server never gets hit.
This exact project and code works on my personal computer, and the query goes through as expected. It used to work on a Heroku server I had set up. The only difference on the Heroku server is that the connection is made like so:
const client = new Client({
  connectionString: process.env.DATABASE_URL,
  ssl: {
    rejectUnauthorized: false
  }
});
This connection had been working on a server I'd had for over a year. My database was running out of space, so I upgraded from a hobby database to a standard plan on Heroku, and the app continued to work. A couple of weeks after this upgrade, I pushed a new commit that included a couple of new features, and this broke the PostgreSQL connection. After this push I immediately checked out my last working commit and pushed that one; the issue, however, was still there.
I currently have the program running on my personal computer, but I need to move it back to Heroku as quickly as possible. The pictures and logs I've included above are the result of running the app locally on my friend's computer, which seems to have the same issue I'm having on Heroku, so I'm hoping that if I figure out the issue on his computer I'll be able to solve what's going on in Heroku.
These are the logs printed out from my personal computer, which is working (screenshot not included).
Edits:
Running psql -d postgres -U user1 -h localhost -p 5432 successfully connects me to the database on the command line.
The new feature I added was a new endpoint for my app's customers. This commit works fine on my personal computer, so I don't think the issue is with the new features. Additionally, I've since reverted to my previous commit, which used to work, so none of that new code is present anymore.
I'm running the entire app locally on my friend's computer. I set up Postgres from scratch just as I did a year ago on my own computer. However, now only my personal computer is working.
I haven't changed anything in pg_hba.conf in either setup; both files look identical (screenshot not included).
At first I thought the problem was with Heroku, since my local app was working fine. However, after reaching out and talking with support for a couple of days, they said:
Hi there,
It looks like your application is able to successfully connect to the database, but something else in the application or framework is preventing the data from being retrieved. Unfortunately, as this is an application issue it falls outside the nature of the Heroku Support policy. I recommend searching our Knowledge Base or asking the community on Stack Overflow for more answers.
It turns out I was using an old version of pg, 7.8. I upgraded to 8.5 and now it works.
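For anyone hitting the same silent hang: you can check which pg version your app actually resolves with npm ls pg and upgrade with npm install pg@8 (version numbers here match the ones in this post). This matches the symptom above, where neither the connect callback nor the query callback ever fired.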
I'm using a very simple Redis pub/sub application, in which I have a Redis server in AWS and a Node.js-based Redis client inside an office LAN that subscribes to a channel.
This worked great until the network changed, and it seems that some device is now interfering with outgoing connections (I also started receiving socket hang-ups on outbound SSH connections, which I mitigated with the ServerAliveInterval 60 setting in the SSH config).
Since the network change, whenever the Redis client application is executed, it creates a Redis client, subscribes to a channel, and acts upon messages published to that channel.
It works fine for several minutes, but then it stops receiving any messages.
I registered the Redis client for all known connection events (including the "error" event), added a "retry_strategy" handler, and also set "socket_keepalive" to true and "socket_initdelay" to 10 seconds (see code below).
Nevertheless, no event is triggered when the connection is interfered with.
When the application stops receiving messages, I can see that the connection on the Redis port is still valid:
dev#server:~> sudo netstat -tlnpua | grep 6379
tcp 0 0 10.43.22.150:52052 <server_ip>:6379 ESTABLISHED 27014/node
I also captured a PCAP on port 6379, in which I don't see any resets or TCP errors; from the connection's perspective everything appears valid.
I tried running another Node.js application from within the LAN that creates a client connecting to the AWS Redis server, registers for all events, and only publishes messages once in a while.
After several minutes (in which the connection breaks), I try publishing another command, and the error event handler is indeed triggered:
> client.publish("channel", "ANOTHER TRY")
true
> Error: Redis connection to <server_hostname>:6379 failed - read ECONNRESET
Redis connection ended
Redis reconnecting
Redis connected
Redis connection is ready
So if I try publishing via the client after the connection has been interfered with, the connection event callbacks are indeed called and I can run some kind of reconnection logic.
But in the scenario where I subscribe and wait for publishes to the channel, no connection event handler is called and the application is basically broken.
Application code:
const redis = require('redis');

const config = {
  "host": <hostname>,
  "port": 6379,
  "socket_keepalive": true,
  "socket_initdelay": 10
};

config.retry_strategy = function (options) {
  console.log("retry strategy. error code: " + (options.error ? options.error.code : "N/A"));
  console.log("options.attempt", options.attempt, "options.total_retry_time", options.total_retry_time);
  return 2000;
};

const client = redis.createClient(config);

client.on('message', function(channel, message) {
  console.log("Channel", channel, ", message", message);
});

client.on("error", function (err) {
  console.log("Error " + err);
});

client.on("end", function () {
  console.log("Redis connection ended");
});

client.on("connect", function () {
  console.log("Redis connected");
});

client.on("reconnecting", function () {
  console.log("Redis reconnecting");
});

client.on("ready", function () {
  console.log("Redis connection is ready");
});

const channel = "channel";
console.log("Subscribing to channel", channel);
client.subscribe(channel);
I'm using redis#2.8.0 and node v8.11.3.
The solution to this issue is quite sad.
First, there is indeed some network device between the Redis client and server that drops inactive connections after some timeout. It seems that this timeout is really low (several minutes).
Redis has a socket_keepalive configuration which is enabled by default, and its default value is Node.js's default socket keep-alive value (which is 2 hours, if I'm not mistaken).
As can be seen above, I used a socket_initdelay configuration parameter that should have changed this default, but unfortunately the code that reads this parameter isn't in the published redis npm package, only in the node-redis repository.
To summarize:
There is no configuration setting to change the keep-alive timeout value in redis#2.8.0 (the latest version at the time of writing).
You can either:
Use node-redis from the repository, which accepts the socket_initdelay setting.
Modify the timeout manually by running the following:
const client = redis.createClient();
client.on("connect", function () {
  client.stream.setKeepAlive(true, <timeout_value_in_milliseconds>);
});
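Whichever option you use, the keep-alive value presumably needs to sit well below the idle timeout of the network device in between; with connections dropping after several minutes, as observed here, something like 60000 (one minute) seems a reasonable starting point.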
I am currently trying to connect from a server running my application to my Redis cluster, which is hosted on another instance. I am using ioredis to interface between my application and my Redis instance, and it worked fine when there was only a single Redis node running. However, after setting up the cluster connection in my Node application, it constantly loops on the connection. My cluster setup itself works correctly.
So far, I have tried the following configuration in my application to connect to the cluster. The issue is that the 'connect' event constantly loops, printing out 'Connected to Redis!'. The 'ready' and 'error' events are never fired.
const cache: Cluster = new Cluster([{
  port: 8000,
  host: REDIS_HOST
}, {
  port: 8001,
  host: REDIS_HOST
}, {
  port: 8002,
  host: REDIS_HOST
}]);

cache.on('connect', () => {
  console.log('Connected to Redis!');
});
In the end, the 'connect' event should only fire once. Does anyone have any thoughts on this?
This kind of error, as I discovered today, is not related to ioredis but to the Redis instance setup. In my case (the problem I had was with p3x-redis-ui, which uses ioredis), it was the cluster that was not initialized.
See https://github.com/patrikx3/redis-ui/issues/48; maybe you'll find some clues there to help you resolve your bug.
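If it helps with debugging, here is a minimal sketch (event names are from the ioredis documentation; REDIS_HOST and the ports mirror the question) that registers the remaining cluster lifecycle events, which should reveal what is driving the 'connect' loop:

const Redis = require('ioredis');

const REDIS_HOST = '127.0.0.1'; // placeholder; use your actual host

const cache = new Redis.Cluster([
  { port: 8000, host: REDIS_HOST },
  { port: 8001, host: REDIS_HOST },
  { port: 8002, host: REDIS_HOST },
]);

// Log every lifecycle event, not just 'connect', to see where setup stalls.
cache.on('connect', () => console.log('Connected to Redis!'));
cache.on('ready', () => console.log('Cluster is ready'));
cache.on('error', (err) => console.log('Cluster error:', err.message));
cache.on('node error', (err, address) => console.log('Node error at', address, ':', err.message));
cache.on('reconnecting', () => console.log('Reconnecting...'));
cache.on('end', () => console.log('Connection ended'));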
I have been researching this for days and haven't been able to find a way to do it.
I am building a React app, with Express running on the backend, that needs to access data in a remote database that lives inside a VPN. At the moment the app lives on my localhost, so it's enough for me to connect my machine using an OpenVPN client and everything works a beauty. The problem will arise when the app goes live and needs access to the VPN by (I'm guessing) having a VPN client running on the site/domain.
Has anyone done this before?
I have tried the node-openvpn package, which seems like it could do the job, but unfortunately I can't manage to make it work, as the connection doesn't seem to be configured properly.
This is the function I call to connect to the VPN. It systematically fails at the line openvpnmanager.authorize(auth):
const openvpnmanager = require('node-openvpn');
...

const connectToVpn = () => {
  const opts = {
    host: 'wopr.remotedbserver.com',
    port: 1337, // port of the OpenVPN management console
    timeout: 1500, // timeout for connection - optional
    logpath: '/log.txt'
  };
  const auth = {
    user: 'userName',
    pass: 'passWord',
  };

  const openvpn = openvpnmanager.connect(opts);

  openvpn.on('connected', function() {
    console.log('connecting..');
    openvpnmanager.authorize(auth); // <-- Error: Unhandled "error" event. (Cannot connect)
  });

  openvpn.on('console-output', function(output) {
    console.log(output);
  });

  openvpn.on('state-change', function(state) { // emits OpenVPN state changes as an array
    console.log(state); // was `output`, which doesn't exist in this handler
  });
};
Am I misusing this function? Is there a better way?
Any help will be extremely appreciated.
Thank you!
The problem will arise when the app goes live and I will need it to have access to the VPN by (I'm guessing) having an OpenVPN client running on the site/domain.

That's correct: you will need an OpenVPN client instance on the server where you run the backend.
The library above (node-openvpn) is simply a library to interact with a local OpenVPN client instance. It cannot create a connection on its own; it depends on the OpenVPN binary, which must already be running.
The solution you need is simply to run the OpenVPN client on your server (apt-get install openvpn) and let the daemon run. Check out the references below.
A node-openvpn issue that points out that a running instance of the client is needed
OpenVPN CLI tutorial
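Putting it together, a rough sketch (reusing the API calls from the question's own code; the substantive change is that the management console host is local, and it assumes the OpenVPN daemon is already running with management 127.0.0.1 1337 in its config):

const openvpnmanager = require('node-openvpn');

// node-openvpn attaches to the management console of an OpenVPN daemon
// that is ALREADY running on this machine; it does not start the tunnel.
const opts = {
  host: '127.0.0.1', // the management console is local, not the remote VPN server
  port: 1337,
  timeout: 1500,
};
const auth = {
  user: 'userName', // credentials as in the question
  pass: 'passWord',
};

const openvpn = openvpnmanager.connect(opts);

openvpn.on('connected', function() {
  openvpnmanager.authorize(auth);
});

openvpn.on('error', function(err) { // handle 'error' so it is not an unhandled event
  console.log('management console error:', err);
});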