I am trying to create a small app in Node.js that queries a database. For this I am using the following module: https://node-postgres.com/
I have managed to establish the connection with the PostgreSQL database and it connects correctly.
This is my code:
const express = require('express');
const fs = require('fs');
const app = express();
const port = 8080;

const { Pool } = require('pg');
const pool = new Pool({
  user: 'Inv',
  host: 'localhost',
  database: 'database',
  password: 'password',
  port: 3000,
  idleTimeoutMillis: 1000,
  connectionTimeoutMillis: 0,
});

pool.connect()
  .then(function (client) {
    let query = 'SELECT * FROM Clients';

    function query_db(query) {
      client.query(query)
        .then(function (res) {
          console.log('query');
        })
        .catch(function (err) {
          console.log(err.stack);
        })
        .finally(function () {
          client.release();
          console.log('client disconnect');
        });
    }

    query_db(query);
    return;
  });

console.log(pool.totalCount);
setInterval(() => { console.log(pool.totalCount); }, 100);

pool.on('error', (err, client) => {
  console.error('Unexpected error on idle client', err);
  process.exit(-1);
});

app.listen(port, () => {
  console.log(`\u001b[7mServer in: http://localhost:${port}\u001b[0m\n`);
});
My question is: how does pooling really work in this module?
From what I have learned so far, pooling exists so that you do not open a new connection for every client: each new connection has to be authenticated by the database, which takes time and server resources and would slow the program down. For this reason, the pool connects a group of clients through the same database user.
Within the program I create the pool and it connects correctly to Postgres with pool.connect(), which I verify against the list of users connected to the database in pgAdmin 4. After this I run a query with client.query(query), and the query executes correctly. The problem is that, as I understand the node-postgres documentation, after the query finishes the client must be returned to the pool so that other clients can take the slot left by the client that finished its transaction, and this is done with client.release(). But when client.release() is called, the pool's user that is connected to Postgres disconnects entirely. Why is that?
Shouldn't the pool's user remain connected in Postgres, with release() only freeing the slot for another client?
If the entire pool disconnects, isn't the very objective of having a pool lost?
Doing tests with client.release() omitted, this behavior stops, but then: if the client is never released, the pool's client limit will be reached and new clients will be left waiting forever, right?
Also, according to the documentation, idleTimeoutMillis: 1000 sets how long the pool waits before disconnecting an idle client, and it does so, but when it disconnects that client it again, as in the previous case, disconnects the entire pool from Postgres.
So what is the real behavior of the pool? If what I understand is correct, then there is no difference between using the pool and using individual clients, right?
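For reference, this is roughly the behavior I expected (my own sketch against the same pg Pool API; the table name is just an example):

const { Pool } = require('pg');
const pool = new Pool({ idleTimeoutMillis: 30000 }); // keep idle clients around longer

async function twoQueries() {
  const client = await pool.connect();            // check out a client
  await client.query('SELECT * FROM Clients');
  client.release();                               // expected: back to the pool, still connected
  console.log('after release:', pool.totalCount); // expected to stay at 1

  const again = await pool.connect();             // expected: reuses the same backend connection
  await again.query('SELECT 1');
  again.release();
}

twoQueries().catch(err => console.error(err.stack));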
I'm sorry for so many questions, and in case some of them are very obvious or silly: I am somewhat new to Node and I have already searched ad nauseam on Google, but the documentation is very basic and minimal. Thanks for taking your time to read my doubts :'D
When I use pool.end() after a query I get the error message "Cannot read property 'rows' of undefined". It seems to me that I shouldn't be using pool.end() after queries have finished. So when should I use pool.end()?
Below is my code snippet:
const pool = new Pool({
  user: process.env.PGUSER,
  host: process.env.PGHOST,
  database: process.env.PGDATABASE,
  password: process.env.PGPASSWORD,
  port: process.env.PGPORT
})

// Display schedule on home page
app.get('/', (req, res) => {
  const displaySchedule = `SELECT * FROM schedule`;
  pool.query(displaySchedule, (err, results) => {
    if (err) {
      throw err;
    } else {
      res.render('index', { schedules: results.rows });
    }
  })
  //pool.end();
});
pool.end() shuts down a pool completely. In your case, in a web scenario, you do not want to do this; otherwise you would have to create a new pool on every request, which defeats the purpose of pooling.
In your example, without calling pool.end(), you are using the pool.query method. You are all set here and do not need any kind of client cleanup or pool ending.
The pool is usually a long-lived object in your application. You almost never have to shut it down yourself in a web application.
You will have to shut it down when you are creating pools dynamically or when you are attempting a graceful shutdown.
For example, in a testing environment where you connect to a pool before all the tests and disconnect after they have run, you call pool.end() at the end on all dynamically created pools.
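A minimal sketch of that testing scenario (hypothetical Jest-style hooks; the pool reads its config from the PG* environment variables):

const { Pool } = require('pg');

let pool;

beforeAll(() => {
  pool = new Pool(); // dynamically created pool for this test file
});

afterAll(async () => {
  await pool.end(); // graceful shutdown once all tests have run
});

test('database is reachable', async () => {
  const { rows } = await pool.query('SELECT 1 AS ok');
  expect(rows[0].ok).toBe(1);
});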
This issue describes the use of pool.end():
https://github.com/brianc/node-postgres/issues/1670
I have a Function App that serves a Node.js API. We are hitting the 900 concurrent connections limit with tedious connected to Azure SQL and realize we should add connection pooling (unless there is a better recommendation, of course).
Azure Functions + Azure SQL + Node.js and connection pooling between requests? seems to answer our prayers, but I wanted to validate how you can use a single connection pool with Azure Functions.
Is the best practice to put "let pool = new ConnectionPool(poolConfig, connectionConfig);" above module.exports in all functions? Is that not creating a new pool every time an individual function is called?
Microsoft doesn't have clear documentation on this for Node.js, unfortunately, so any help would be greatly appreciated!
To make the whole Function App share one single pool, we need to put the initialization part into a shared module. Christiaan Westerbeek posted a wonderful solution using mssql; there's not much difference between a Function App and a web app in this respect.
I recommend using mssql (which uses tedious and generic-pool internally) instead of tedious-connection-pool, which does not seem to have been updated for two years.
Put the connection code in poolConfig.js under a SharedLib folder.
const sql = require('mssql');

const config = {
  pool: {
    max: 50 // default: 10
  },
  user: '',
  password: '',
  server: '',
  database: '',
  options: {
    encrypt: true // for Azure SQL
  }
};

const poolPromise = new sql.ConnectionPool(config)
  .connect()
  .then(pool => {
    console.log('Connected to MSSQL');
    return pool;
  })
  .catch(err => console.log('Database Connection Failed! Bad Config: ', err));

module.exports = {
  sql,
  poolPromise
};
Then load the module to connect to SQL. Since we use await to get the ConnectionPool, the function should be async (the default for v2 JavaScript functions).
const { poolPromise } = require('../SharedLib/poolConfig');

module.exports = async function (context, req) {
  var pool = await poolPromise;
  var result = await pool.request().query("");
  ...
};
Note that if the Function App is scaled out to multiple instances, a new pool will be created for each instance as well.
My stack is Node, Express and the pg module. I have really tried to understand the documentation and some outdated tutorials, but I don't know when and how to disconnect and end a client.
For some routes I decided to use a pool. This is my code:
const pool = new pg.Pool({
  user: 'pooluser',
  host: 'localhost',
  database: 'mydb',
  password: 'pooluser',
  port: 5432
});

pool.on('error', (err, client) => {
  console.log('error ', err);
  process.exit(-1);
});
app.get('/', (req, res) => {
  pool.connect()
    .then(client => {
      return client.query('select ....')
        .then(resolved => {
          client.release();
          console.log(resolved.rows);
        })
        .catch(e => {
          client.release();
          console.log('error', e);
        });
      pool.end();
    });
});
In the routes of the CMS, I use a client instead of the pool; it has different db privileges than the pool user.
const client = new pg.Client({
  user: 'clientuser',
  host: 'localhost',
  database: 'mydb',
  password: 'clientuser',
  port: 5432
});
client.connect();
const signup = (user) => {
  return new Promise((resolved, rejeted) => {
    getUser(user.email)
      .then(getUserRes => {
        if (!getUserRes) {
          return resolved(false);
        }
        client.query('insert into user(username, password) values ($1,$2)', [user.username, user.password])
          .then(queryRes => {
            client.end();
            resolved(true);
          })
          .catch(queryError => {
            client.end();
            rejeted('username already used');
          });
      })
      .catch(getUserError => {
        return rejeted('error');
      });
  });
};

const getUser = (username) => {
  return new Promise((resolved, rejeted) => {
    client.query('select username from user WHERE username= $1', [username])
      .then(res => {
        client.end();
        if (res.rows.length == 0) {
          return resolved(true);
        }
        resolved(false);
      })
      .catch(e => {
        client.end();
        console.error('error ', e);
      });
  });
};
In this case, if I get "username already used" and try to re-post with another username, the getUser query never starts and the page hangs. If I remove the client.end() calls from both functions, it works.
I am confused, so please advise on how and when to disconnect and to completely end a pool or a client. Any hint, explanation or tutorial will be appreciated.
Thank you.
First, from the pg documentation*:
const { Pool } = require('pg')

const pool = new Pool()

// the pool will emit an error on behalf of any idle clients
// it contains if a backend error or network partition happens
pool.on('error', (err, client) => {
  console.error('Unexpected error on idle client', err) // your callback here
  process.exit(-1)
})

// promise - checkout a client
pool.connect()
  .then(client => {
    return client.query('SELECT * FROM users WHERE id = $1', [1]) // your query string here
      .then(res => {
        client.release()
        console.log(res.rows[0]) // your callback here
      })
      .catch(e => {
        client.release()
        console.log(e.stack) // your callback here
      })
  })
This code/construct is sufficient to get your pool working, once you provide the "your xxx here" parts. The pool is built precisely to keep its connections alive and reuse them; it only shuts down when you use the manual way of closing it described in the last section of the article.
Also look at the previous red section of the docs, which says "You must always return the client...": the client.release() instruction is mandatory, and it is issued before accessing the result argument.
Note that you scope/closure the client within your callbacks.
Then, from the pg.client documentation*:
Plain text query with a promise
const { Client } = require('pg')
const client = new Client()

client.connect()

client.query('SELECT NOW()') // your query string here
  .then(result => console.log(result)) // your callback here
  .catch(e => console.error(e.stack)) // your callback here
  .then(() => client.end())
This seems to me the clearest syntax:
you end the client whatever the result;
you access the result before ending the client;
you don't scope/closure the client within your callbacks.
It is this sort of opposition between the two syntaxes that may be confusing at first sight, but there is no magic there; it is just construction syntax.
Focus on your callbacks and queries, not on those constructs; just pick the one that looks most elegant to your eyes and feed it with your code.
*I added the comments // your xxx here for clarity
You shouldn't disconnect the pool after every query; a connection pool is supposed to provide "hot" connections.
I usually create a global pool on startup and close it only on application stop (if ever); you just have to release the connection back to the pool every time a query ends, as you already do, and use the same pool in the signup function as well.
Sometimes I need to preserve connections, so I use a wrapper around the query function that checks whether the connection is still active before performing the query, but it's just an optimization; a rough sketch follows.
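A rough sketch of what such a wrapper might look like (my own hypothetical helper, not a pg API; the error check is a simplification):

const { Pool } = require('pg');
const pool = new Pool();

// Hypothetical helper: run a query through the pool, retrying once
// if the connection died while sitting idle.
async function safeQuery(text, params) {
  try {
    return await pool.query(text, params);
  } catch (err) {
    // a terminated/reset connection usually means a stale idle client;
    // retrying checks a fresh client out of the pool
    if (err.code === 'ECONNRESET' || /terminated/.test(err.message)) {
      return pool.query(text, params);
    }
    throw err;
  }
}

// usage
safeQuery('select username from users where username = $1', ['alice'])
  .then(res => console.log(res.rows))
  .catch(console.error);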
In case you don't want to manage opening/closing connections/pools or releasing clients, you could try https://github.com/vitaly-t/pg-promise; it manages all that stuff silently and it works well.
The documentation on node-postgres's GitHub says:
pro tip: unless you need to run a transaction (which requires a single client for multiple queries) or you have some other edge case like streaming rows or using a cursor, you should almost always just use pool.query. It's easy, it does the right thing ™️, and won't ever forget to return clients back to the pool after the query is done.
So for a non-transactional query, the code below is enough.
var pool = new Pool()

pool.query('select username from user WHERE username = $1', [username], function (err, res) {
  if (err) {
    return console.error(err.stack)
  }
  console.log(res.rows[0].username)
})
By using pool.query, the library will take care of releasing the client after the query is done.
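Conversely, a transaction is the case where you do need a dedicated client, as the pro tip above says. A minimal sketch (the accounts table and the transfer logic are placeholders):

const { Pool } = require('pg');
const pool = new Pool();

async function transferFunds(fromId, toId, amount) {
  const client = await pool.connect(); // one client for every query in the transaction
  try {
    await client.query('BEGIN');
    await client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, fromId]);
    await client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, toId]);
    await client.query('COMMIT');
  } catch (e) {
    await client.query('ROLLBACK');
    throw e;
  } finally {
    client.release(); // always return the client to the pool
  }
}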
It's quite simple: a client connection (a single connection) opens, you query with it, and once you are done you end it.
The pool concept is different: in the case of MySQL, you have to .release() the connection back to the pool once you are done with it, but it seems that with pg it is a different story:
From an issue on the github repo : Cannot use a pool after calling end on the pool #1635
"Cannot use a pool after calling end on the pool"
You can't reuse a pool after it has been closed (i.e. after calling
the .end() function). You would need to recreate the pool and discard
the old one.
The simplest way to deal with pooling in a Lambda is to not do it at
all. Have your database interactions create their own connections and
close them when they're done. You can't maintain a pool across
freeze/thaw cycles anyway as the underlying TCP sockets would be
closed.
If opening/closing the connections becomes a performance issue then
look into setting up an external pool like pgbouncer.
So I would say that your best option is to not end the pool, unless you are shutting down the server.
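If you do need to end it, e.g. for a graceful shutdown, here is a minimal sketch (assuming an Express server and the pg Pool API; the signal handling is my own addition):

const express = require('express');
const { Pool } = require('pg');

const pool = new Pool(); // reads PG* environment variables
const app = express();
const server = app.listen(8080);

// stop accepting requests first, then drain the pool
process.on('SIGTERM', () => {
  server.close(() => {
    pool.end()                      // waits for checked-out clients to be released
      .then(() => process.exit(0))
      .catch(() => process.exit(1));
  });
});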
I have developed and deployed a Node.js application in Apigee Edge, which performs a few CRUD operations. To establish the db connection I have used the trireme-jdbc module. I am able to perform all CRUD operations well through trireme-jdbc, but I have a problem closing the DB session with the db.close() function: when I use db.close(), it does not close the currently active/open session from the pool. I want to know if there is any other process or way to close the db connection properly. I also want to close all active connections in the pool.
Any help will be appreciated. Below is the sample code that establishes the connection, runs a select query and uses db.close() to close the session. My database is OpenEdge/Progress.
var jdbc = require('trireme-jdbc');

var db = new jdbc.Database({
  url: connectionURL,
  properties: {
    user: 'XXXXXX',
    password: 'XXXXXX',
  },
  minConnections: 1,
  maxConnections: 2,
  idleTimeout: 10
});

db.execute('select * from users ', function (err, result, rows) {
  console.log(err);
  console.log(result);
  rows.forEach(function (row) {
    console.log(row);
  });
  db.close(); // Used to close the DB session/connection.
});
Here is my code using socket.io as the WebSocket layer, with a Redis pub/sub backend.
var io = io.listen(server),
    buffer = [];

var redis = require("redis");
var subscribe = redis.createClient(); // <-- opens a new connection: overhead

io.on('connection', function (client) {
  console.log(client.request.headers.cookie);

  subscribe.get("..", function (err, replies) {
  });

  subscribe.on("message", function (channel, message) {
    var msg = { message: [client.sessionId, message] };
    buffer.push(msg);
    if (buffer.length > 15) buffer.shift();
    client.send(msg);
  });

  client.on('message', function (message) {
  });

  client.on('disconnect', function () {
    subscribe.quit();
  });
});
Every new io connection will create a new redis connection. If someone opens a browser with 100 tabs, the redis client will open 100 connections. It doesn't look nice.
Is it possible to reuse the redis connection if the cookies are the same, so that someone opening many browser tabs is treated as one open connection?
Actually, you are only creating a new redis client for every connection if you instantiate the client inside the "connection" event handler. What I prefer to do when creating a chat system is to create three redis clients: one for publishing, one for subscribing, and one for storing values in redis.
For example:
var socketio = require("socket.io")
var redis = require("redis")

// redis clients
var store = redis.createClient()
var pub = redis.createClient()
var sub = redis.createClient()

// ... application paths go here

var socket = socketio.listen(app)

sub.subscribe("chat")

socket.on("connection", function (client) {
  client.send("welcome!")

  client.on("message", function (text) {
    store.incr("messageNextId", function (e, id) {
      store.hmset("messages:" + id, { uid: client.sessionId, text: text }, function (e, r) {
        pub.publish("chat", "messages:" + id)
      })
    })
  })

  client.on("disconnect", function () {
    client.broadcast(client.sessionId + " disconnected")
  })

  sub.on("message", function (channel, key) {
    store.hgetall(key, function (e, obj) {
      client.send(obj.uid + ": " + obj.text)
    })
  })
})
Redis is optimized for a high level of concurrent connections. There is also discussion about multiple database connections and a connection pool implementation in the node_redis module.
Is it possible to reuse the redis connection if the cookies are the same, so that someone opening many browser tabs is treated as one open connection?
You can, for example, use HTML5 storage on the client side to keep only one tab actively connected, while the other tabs handle communication/messages through storage events. It's related to this question.
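A minimal sketch of that idea (browser-side JavaScript; the key names, the io.connect call and the handleMessage function are my own placeholders):

// One tab grabs a lock in localStorage and owns the socket connection;
// the other tabs receive messages relayed through storage events.
var LOCK_KEY = 'socketLeader';
var MSG_KEY = 'relayedMessage';

function start() {
  if (!localStorage.getItem(LOCK_KEY)) {
    localStorage.setItem(LOCK_KEY, String(Date.now()));
    var socket = io.connect('/'); // only the leader tab opens a connection
    socket.on('message', function (msg) {
      localStorage.setItem(MSG_KEY, JSON.stringify(msg)); // relay to other tabs
      handleMessage(msg);
    });
  } else {
    // follower tabs listen for relayed messages instead of connecting
    window.addEventListener('storage', function (e) {
      if (e.key === MSG_KEY) handleMessage(JSON.parse(e.newValue));
    });
  }
}

function handleMessage(msg) {
  console.log('chat message', msg);
}

start();

A real implementation would also have to release the lock when the leader tab closes and deal with races between tabs; this only shows the storage-event mechanism.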
I had this exact problem, with an extra requirement: clients must be able to subscribe to private channels, and publishes to those channels should not be sent to all listeners. I have attempted to solve this problem by writing a miniature plugin. The plugin:
uses only 2 redis connections, one for pub, one for sub;
only subscribes to "message" once in total (not once per redis connection);
allows clients to subscribe to their own private channels, without messages being sent to all other listening clients.
It is especially useful if you're prototyping in a place where you have a redis connection limit (such as redis-to-go).
SO link: https://stackoverflow.com/a/16770510/685404
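A rough sketch of that approach (my own reconstruction, not the plugin's actual code, assuming socket.io and node_redis; the channel and event names are made up): a single "message" listener on one shared sub connection dispatches each message only to the sockets subscribed to that channel.

var http = require('http');
var socketio = require('socket.io');
var redis = require('redis');

var pub = redis.createClient(); // connection 1: publishing
var sub = redis.createClient(); // connection 2: subscribing

var channelSockets = {}; // channel name -> array of sockets

// a single "message" listener in total, not one per socket
sub.on('message', function (channel, message) {
  (channelSockets[channel] || []).forEach(function (socket) {
    socket.emit('channel message', channel, message);
  });
});

var server = http.createServer();
var io = socketio.listen(server);

io.sockets.on('connection', function (socket) {
  socket.on('subscribe', function (channel) {
    if (!channelSockets[channel]) {
      channelSockets[channel] = [];
      sub.subscribe(channel); // first subscriber triggers the redis SUBSCRIBE
    }
    channelSockets[channel].push(socket);
  });

  socket.on('publish', function (channel, message) {
    pub.publish(channel, message);
  });

  socket.on('disconnect', function () {
    Object.keys(channelSockets).forEach(function (channel) {
      channelSockets[channel] = channelSockets[channel].filter(function (s) {
        return s !== socket;
      });
    });
  });
});

server.listen(8080);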
You need to remove the listener when the client disconnects.
var io = io.listen(server),
    buffer = [];

var redis = require("redis");
var subscribe = redis.createClient();

io.on('connection', function (client) {
  console.log(client.request.headers.cookie);

  subscribe.get("..", function (err, replies) {
  });

  var redis_handler = function (channel, message) {
    var msg = { message: [client.sessionId, message] };
    buffer.push(msg);
    if (buffer.length > 15) buffer.shift();
    client.send(msg);
  };

  subscribe.on("message", redis_handler);

  client.on('message', function (message) {
  });

  client.on('disconnect', function () {
    subscribe.removeListener('message', redis_handler);
    //subscribe.quit();
  });
});
See Redis, Node.js, and Socket.io: Cross server authentication and node.js understanding.
Using redis as a store has become much simpler since this question was asked/answered. It is built in now.
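For reference, this is roughly what the built-in redis store looked like in the socket.io 0.9 era (a sketch from memory; check your version's documentation, since later versions moved this to the socket.io-redis adapter):

var io = require('socket.io').listen(server);
var redis = require('redis');
var RedisStore = require('socket.io/lib/stores/redis');

// socket.io's RedisStore expects three separate clients
var pub = redis.createClient();
var sub = redis.createClient();
var client = redis.createClient();

io.set('store', new RedisStore({
  redisPub: pub,
  redisSub: sub,
  redisClient: client
}));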
Note that if you are using redis because you are using the new Node clustering capabilities (utilizing multiple CPUs), you have to create the server and attach the listeners inside each of the cluster forks (this is never actually explained anywhere in any of the documentation ;) ). The only good code example online that I have found is written in CoffeeScript, and I see a lot of people saying this type of thing "just doesn't work"; it definitely doesn't if you do it wrong. Here's an example of "doing it right" (but it is in CoffeeScript).
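In plain JavaScript the shape is roughly this (my own sketch of the cluster pattern, assuming Node's cluster module and socket.io; the redis store wiring from above goes inside the worker branch):

var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
  // the master only forks workers; it must not create the server
  // or attach any listeners itself
  os.cpus().forEach(function () {
    cluster.fork();
  });
} else {
  // each worker creates its own server, socket.io instance and redis clients
  var http = require('http');
  var server = http.createServer();
  var io = require('socket.io').listen(server);

  // ... set up the redis store / pub-sub listeners here, once per worker

  io.sockets.on('connection', function (client) {
    client.send('hello from worker ' + cluster.worker.id);
  });

  server.listen(8080); // workers share this port under cluster
}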