How long can an ioredis connection be re-used - Azure

I am using ioredis to create a Redis client in Node for an App Service deployed in Azure. The code looks something like this: [screenshot of the client setup omitted].
I am creating one connection per instance, which will live forever until there is a problem, at which point it will try to re-connect. I am trying to understand whether there is any idle-timeout configuration in ioredis that will close these connections. As per the Microsoft documentation, some client libraries send a ping to keep the connection open; I am not sure if ioredis is one of them.

Related

What should one prefer while using node-oracledb - using one connection for all routes, or getting a connection from the pool for each route and then closing it?

I am working on a node-express project and wanted some guidance on what is preferable:
Using one database connection for all routes, or getting a connection from a pool for each route and then closing it?
You want to use a connection pool, because while one connection is busy sending or receiving data, another connection from the pool can perform more work. This improves overall performance.
That is something you can't do with only one connection.
You create your pool when the program starts, then request a connection from it in your route and release it when the route has finished its work.

Connection pooling in TypeORM with PostgreSQL

I've gone through plenty of articles and the official TypeORM documentation on setting up connection pooling with TypeORM and PostgreSQL, but couldn't find a solution.
All the articles I've seen so far explain adding the max/poolSize attribute to the ORM configuration, but this does not set up a pool of idle connections in the database.
When I check the pg_stat_activity table after the application bootstraps, I cannot see any idle connections in the DB, but when a request is sent to the application I can see an active connection to the DB.
The max/poolSize attribute defined under extras in the ORM configuration merely acts as the maximum number of connections that can be opened from the application to the DB concurrently.
What I'm expecting is that during bootstrap the application opens a predefined number of connections to the database and keeps them idle. When a request comes into the application, one of the idle connections is picked up and the request is served.
Can anyone provide insight on how to achieve this configuration with TypeORM and PostgreSQL?
TypeORM uses node-postgres, which has a built-in pg-pool, and as far as I can tell it doesn't have that kind of option. It supports a max, and as your app needs more connections it will create them, so if you want to pre-warm the pool, or load/stress test it and see those additional connections, you'll need to write some code that kicks off a bunch of async queries/inserts.
I think I understand what you're looking for, as I used to do enterprise Java, and connection pools in servers like GlassFish and JBoss have more options that let you keep hot, unused connections in the pool. There are no such options in TypeORM/node-postgres, though.

gRPC DEADLINE_EXCEEDED even though the server is up and running

I have two microservices that communicate with each other through gRPC. A is the RPC client and B is the RPC server, both written in NodeJS using the grpc NPM module.
Everything works fine until, at some point in time, A unexpectedly stops being able to send requests to B; calls fail with a timeout (5s) and throw this error:
Error: Deadline Exceeded
Both microservices are Docker containers, run on AWS ECS, and communicate through an AWS ELB (not an ALB, because it does not support HTTP/2, among other problems).
I tried to run telnet from A to the ELB of B, both from the EC2 instance and from the running ECS task (the Docker container itself), and it connected fine. Still, the NodeJS application in A cannot reach the NodeJS application in B over the gRPC connection.
The only way to solve it is to stop and start the ECS tasks, after which A succeeds in connecting to B again (until the next time the same scenario is unexpectedly reproduced), but of course that's not a solution.
Has anyone faced this kind of issue?
Do you use unary or streaming API? Do you set any deadline?
A gRPC deadline is per-stream, so in the streaming case, when you set a deadline of X milliseconds, you'll get DEADLINE_EXCEEDED X milliseconds after you opened the stream (not after sending or receiving any messages!). And you'll keep getting it forever for that stream; the only way to get rid of it is to reopen the stream.
I have found that I need not only to create a new stub, but also to re-create the connection after some errors in order to get it to reconnect. (Also running on ECS.)

Meteor's Remote Database Connection Timeout and Reconnect

Does Meteor have a setting to timeout and retry if its MongoDB does not give a response in x seconds? Wondering if anyone has tried this.
I am interested in running a MongoDB database remote to the Meteor production app. The Meteor-to-Mongo connection will be quick, just 3-9 milliseconds away, but I also want to understand how Meteor (and NodeJS) would react to a brief network outage. Would the app hang while waiting for a long timeout period? How can I force a 1 second timeout/retry to avoid a hang?
You can specify timeout in the mongo URL:
MONGO_URL=mongodb://host:port/db?connectTimeoutMS=60000&socketTimeoutMS=60000
but let's say you have a network outage: what does a short timeout give you?
Your app will hang anyway...
To get high availability, look into replica sets.
https://docs.mongodb.com/manual/tutorial/deploy-replica-set/
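With a replica set, the URL lists all members so the driver can fail over instead of hanging on a single node. A sketch, where the hostnames, set name, and timeout values are placeholders:

```shell
# List every replica-set member and name the set; serverSelectionTimeoutMS
# bounds how long the driver waits to find a usable node before erroring.
export MONGO_URL="mongodb://db1:27017,db2:27017,db3:27017/app?replicaSet=rs0&connectTimeoutMS=5000&socketTimeoutMS=60000&serverSelectionTimeoutMS=2000"
```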

Sails mongodb missing connection handling

A very simple (and maybe stupid) question I came up with today while playing around with Sails and MongoDB (via the Waterline ODM adapter). I shut down Mongo while my Sails server was still running and saw nothing, neither in the Sails logs nor in the browser. How can I handle such a situation?
If Sails.js attempts a connection to the DB, you will almost certainly get a message in your console when the Mongo server is shut down; but if no request requiring a DB connection is made, then you will not see any error, because no error yet exists.
It sounds like you might need a process that monitors your DB and checks that it is still running. There are Sails.js options for creating a cron job to check the connection, or you could use an application monitoring service like New Relic to monitor your DB.
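A cron-style check along those lines can be sketched as a periodic cheap query; the `user` model name and interval here are placeholders:

```javascript
// Starts a periodic health check against the datastore by running any
// cheap query through a model; logs when the DB becomes unreachable.
function startDbHealthCheck(sails, intervalMs = 30 * 1000) {
  return setInterval(async () => {
    try {
      await sails.models.user.count(); // placeholder model; any model works
    } catch (err) {
      sails.log.error("Database appears to be down:", err.message);
    }
  }, intervalMs);
}
```

The returned timer can be cleared with `clearInterval` on shutdown; a real setup would likely also flip a health-check endpoint so the load balancer stops routing traffic to the instance.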
