Are Redis connection pools necessary with Node.js asynchronous I/O?
Most of the Redis libraries I see let you create client connections, but there aren't many connection pool modules, so I assume pooling isn't as important.
The one thing that confuses me is that Redis has a default of 16 different/segmented databases in one Redis instance.
So if you create a connection pool, which database of the 16 are you connected to? Can you connect to all 16 at once with the same connection pool?
Is there a Node.js Redis library that creates a connection pool with 1 client per database, depending on how many databases you are using?
You've asked too many questions in one post. Trying to answer them:
Are Redis connection pools necessary with Node.js asynchronous I/O?
Duplicate of Node.js Redis Connection Pooling
So if you create a connection pool, which database of the 16 are you connected to?
By default you're always connected to database 0. If you're wondering why 0: Redis databases are numbered, and they cannot be renamed to a string.
Can you connect to all 16 at once with the same connection pool?
Connection pools are not necessary. A single Redis connection is bound to one database at a time (chosen with the SELECT command), so to talk to several databases concurrently you would open one client per database.
Is there a Node.js Redis library that creates a connection pool with 1 client per database, depending on how many databases you are using?
After searching, I found two:
node-redis-pool
redis-connection-pool
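To illustrate the one-client-per-database idea, here is a minimal sketch assuming node-redis v4, where the database option issues a SELECT on connect (the lazy caching scheme is just one possible approach):

    const { createClient } = require("redis");

    const clients = new Map();

    // Lazily create and cache one client per numbered Redis database.
    async function clientFor(db) {
      if (!clients.has(db)) {
        const client = createClient({ database: db }); // SELECTs `db` on connect
        await client.connect();
        clients.set(db, client);
      }
      return clients.get(db);
    }

    // Usage: const c = await clientFor(2); await c.set("key", "value");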
Related
I've gone through enough articles and the official TypeORM documentation on setting up connection pooling with TypeORM and PostgreSQL, but couldn't find a solution.
All the articles I've seen so far explain adding the max/poolSize attribute to the ORM configuration for connection pooling, but this does not set up a pool of idle connections in the database.
When I check the pg_stat_activity table after the application bootstraps, I cannot see any idle connections in the DB, but when a request is sent to the application I can see an active connection to the DB.
The max/poolSize attribute defined under extras in the ORM configuration merely acts as the maximum number of connections that can be opened from the application to the DB concurrently.
What I'm expecting is that during bootstrap the application opens a predefined number of connections to the database and keeps them idle. When a request comes into the application, one of the idle connections is picked up and the request is served.
Can anyone provide insights on how to achieve this configuration with TypeORM and PostgreSQL?
TypeORM uses node-postgres, which has pg-pool built in, and that doesn't have this kind of option as far as I can tell. It supports a max, and as your app needs more connections it will create them; so if you want to pre-warm it, or maybe load/stress test it and see those additional connections, you'll need to write some code that kicks off a bunch of async queries/inserts.
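For example, a rough pre-warming sketch, assuming a TypeORM DataSource named dataSource and a configured pool max of at least size (both names are placeholders):

    // Fire `size` trivial queries concurrently; each checks out a distinct
    // connection while the others are in flight, forcing pg-pool to open
    // up to `size` physical connections.
    async function warmPool(dataSource, size) {
      await Promise.all(
        Array.from({ length: size }, () => dataSource.query("SELECT 1"))
      );
    }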
I think I understand what you're looking for, as I used to do enterprise Java, and connection pools in things like GlassFish and JBoss have more options that let you keep hot unused connections in the pool. There are no such options in TypeORM/node-postgres, though.
We have a project on Node.js that is based on restify, and we are using RethinkDB as the database. The problem is that RethinkDB should be accessed from different parts of the code (route handlers, middleware), but not on every request. I am wondering what the best way to connect to RethinkDB is in this case.
I see these options:
have one long-lived connection that is stored somewhere (the approach we use now),
connect to RethinkDB on each HTTP request, with some of those connections potentially never being used,
connect in each part individually, with potentially several connections per HTTP request, but no useless connections.
I ask this question because I am not sure how well RethinkDB handles short/long connections and how expensive they are. For instance, MongoDB prefers long connections, but all the examples in the RethinkDB docs use one connection per HTTP request.
I recommend a connection pool or one connection per query, especially if you use features like changefeeds, which are recommended to run on their own connection.
When you use a single connection for everything, you also have to handle re-connection when the connection times out or breaks. I think it's easier to just use a connection per query, or share one connection per request/response.
Just ensure you close your connection after using it; otherwise you will leak connections and new connections cannot be created.
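A minimal sketch of the connection-per-query pattern with the official rethinkdb driver (the host, port, and table name are assumptions):

    const r = require("rethinkdb");

    async function getUser(id) {
      const conn = await r.connect({ host: "localhost", port: 28015 });
      try {
        return await r.table("users").get(id).run(conn);
      } finally {
        await conn.close(); // close even on error to avoid leaking connections
      }
    }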
Some drivers go further and don't require you to think about connections at all, such as https://github.com/neumino/rethinkdbdash
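With rethinkdbdash the pool is managed for you, so queries run without an explicit connection. A sketch, with the server settings and table name as assumptions:

    const r = require("rethinkdbdash")({
      servers: [{ host: "localhost", port: 28015 }],
    });

    async function getUser(id) {
      // No connection argument: a pooled connection is checked out and
      // returned behind the scenes.
      return r.table("users").get(id).run();
    }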
The Elixir RethinkDB driver has a similar issue to create a connection pool: https://github.com/hamiltop/rethinkdb-elixir/issues/32
RethinkDB itself has an issue related to connection pooling: https://github.com/rethinkdb/rethinkdb/issues/281
That's probably where the community is heading, too.
Is it possible to have a single-connection pool ({ poolSize: 1 }) for multiple concurrent web service hits in MongoDB?
Will it reuse the connection, or will it throw an exception?
I'm using the Mongoose driver in Node.js, with MongoDB as the database.
It will reuse the connection, but it will slow down your database calls, as there is a limit of only one active connection to the database. This will cause issues when you scale up to higher loads.
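For reference, a minimal sketch of configuring the pool size through Mongoose. Note the option is poolSize in older Mongoose/driver versions and maxPoolSize in current ones; the URI and database name are placeholders:

    const mongoose = require("mongoose");

    async function connect() {
      // With a pool of 1, concurrent queries queue for the single socket.
      await mongoose.connect("mongodb://localhost:27017/app", {
        maxPoolSize: 1,
      });
    }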
The chat room app runs on multiple servers and consists of two services:
1. connection manager
Before joining a chat room, the client first asks the connection manager for a chat service URL.
2. chat service
A typical Socket.IO-based chat implementation.
I need to store each client's connection status in Redis, such as which room a user is connected to, how many users are in a room, and so on, so that the connection manager can use the data for load balancing.
I can use socket connect/disconnect events to maintain the current connection status in Redis, but in case of a Node.js server failure, how do I make sure the Node and Redis data stay synchronized? What's the best way to do this?
I can use socket connect/disconnect events to maintain the current connection status in Redis, but in case of a Node.js server failure, how do I make sure the Node and Redis data stay synchronized?
For example, you can create a set in Redis that contains references to the keys managed by a specific server node. If a node goes down or is restarted, you can invalidate those keys.
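A minimal sketch of that idea, assuming node-redis v4; the key scheme and node identifier are hypothetical:

    const { createClient } = require("redis");

    const NODE_ID = "chat-1"; // identifier for this server node
    const client = createClient();
    // (call `await client.connect()` once at startup)

    async function trackConnection(userId, roomId) {
      const key = `presence:${userId}`;
      await client
        .multi()
        .hSet(key, { roomId, node: NODE_ID })
        .sAdd(`node:${NODE_ID}:keys`, key) // remember which node owns this key
        .exec();
    }

    async function invalidateNode(nodeId) {
      // On restart or failure, drop every presence key that node owned.
      const keys = await client.sMembers(`node:${nodeId}:keys`);
      if (keys.length > 0) await client.del(keys);
      await client.del(`node:${nodeId}:keys`);
    }

Each node can call invalidateNode with its own ID during startup, so stale presence data from a crash is cleared before new connections are tracked.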
When using the native MongoDB driver for Node, should I open one connection per application, one per page served, or open and close a connection whenever I need one?
I've seen a few older answers, but I know the project is always developing, so I want to know what the status is today.
This isn't a situation that will change; opening a new connection to a server will be less performant than using an established connection.
Note: this is the general case for server applications, and not specific to MongoDB.
Typical overhead includes:
resolving server names to IPs
establishing network connection to server
per connection memory allocated on the server
For MongoDB in particular:
opening a new connection means a new socket connection and thread on the server
each connection (as of MongoDB 2.0) allocates 1 MB of RAM on the server (see also: Checking Memory Usage)
there is a per process limit on open files/connections (see also: Too Many Open Files)
For the MongoDB Node.js driver you can take advantage of connection pooling by setting the poolSize in the constructor. A blog post with an example of using this: Node.js: Connection Pools and MongoDB.
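A minimal sketch of that approach with a current driver, where the constructor option is maxPoolSize (it was poolSize in the legacy 2.x/3.x drivers); the URI, database, and collection names are placeholders:

    const { MongoClient } = require("mongodb");

    // One client per application, created at startup and reused everywhere.
    const client = new MongoClient("mongodb://localhost:27017", {
      maxPoolSize: 10, // up to 10 pooled sockets shared across requests
    });

    async function main() {
      await client.connect();
      const users = client.db("app").collection("users");
      console.log(await users.countDocuments());
    }

    main().finally(() => client.close());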