Quickly using up all connections on PostgreSQL in Node.js

I am running an app on GCP with Node.js and PostgreSQL (Cloud SQL, lowest tier, i.e. 25 connections), using the 'pg' package ("pg": "^8.7.3"). I am quite new to this configuration, so there may be some very basic errors here.
I configure my pg_client like this:

// CLOUD SQL POSTGRESQL DATABASE
const { Client, Pool } = require('pg')
const pg_client = new Pool({
  user: process.env.PG_USER,
  host: process.env.PG_HOST,
  database: process.env.PG_DB,
  password: process.env.PG_PWD,
  port: 5432,
})
and then, in order to copy the data from a NoSQL database with some 50,000+ items, I go through them pretty much like this. I know the code doesn't make perfect sense, but this is how the SQL calls are being made:
fiftyThousandOldItems.forEach(async (item) => {
  let nameId = await pg_client.query("SELECT id from some1000items where name='John'")
  pg_client.query("INSERT into items (id, name, url) VALUES (nameId, 1, 2)")
})
This does, however, quickly produce "sorry, too many clients already :: proc.c:362" and "error: remaining connection slots are reserved for non-replication superuser connections".
I have done similar runs before without experiencing this issue (but then with about 1000 items).
As far as I understand, I do not need to call pg_client.connect() and pg_client.release() (or is it .end()?) any longer, according to an SO answer I unfortunately can't find any more. Is this really correct? (When I tried to before, I ended up with a lot of other issues that caused other types of problems.)
So, my questions are:
What am I doing wrong? Do I need to use pg_client.connect() before every SQL call and then pg_client.release() after every SQL call? Or is it pg_client.end()?
Is there a way to have this handled automatically? The manual approach doesn't seem very DRY and seems bug-prone.
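For illustration, here is a minimal sketch of one way to keep the number of in-flight queries bounded: awaiting each statement in a plain for...of loop (so iterations run one at a time) and passing values as query parameters. The item.name and item.url fields are assumptions for the sake of the example, not taken from the actual data.

// A sketch only: processes the items sequentially, so at most one
// query is in flight at a time instead of ~50,000 at once.
async function copyItems(fiftyThousandOldItems) {
  for (const item of fiftyThousandOldItems) {
    // item.name is a hypothetical field used for illustration
    const res = await pg_client.query(
      "SELECT id FROM some1000items WHERE name = $1", [item.name]
    )
    const nameId = res.rows[0].id // assumes a matching row exists
    await pg_client.query(
      "INSERT INTO items (id, name, url) VALUES ($1, $2, $3)",
      [nameId, item.name, item.url] // item.url is also hypothetical
    )
  }
}

With a single shared Pool, pg queues query requests beyond the pool's max on its own, so bounding concurrency in application code mainly keeps memory and queue length under control.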

Related

Querying my Cloud SQL DB from my Node.js backend

Recently I've been learning Google Cloud SQL. It took a little while, but I was able to connect my Cloud SQL Auth Proxy to my Postgres client. However, I'm not sure how to query or make POST requests to my Cloud SQL database. Originally I was just doing:
const Pool = require("pg").Pool;
const pool = new Pool({
  user: "postgres",
  password: "****",
  host: "localhost",
  port: 5432,
  database: "somedb"
});
I'm not sure how to convert this over to query the Cloud SQL DB. I did try converting it and got:
const Pool = require("pg").Pool;
const pool = new Pool({
  user: "postgres",
  password: "****",
  host: "[cloud sql ip]",
  port: 5432,
  database: "[pg/gc db]"
});
I end up getting the error [pg_hba.conf rejects connection for host "[ipv4 ip]", user "postgres", database "[pg/gc db]", no encryption]. I know that the documentation has a code sample, but I don't understand it and can't really find any resources explaining it.
Edit: I am uploading files to a bucket in Cloud Storage, which I was able to do successfully. I plan on mapping all these files onto a webpage. However, I would like to filter them by certain features, so I am making a second request after I store the file. My second request will store attributes in a database that I can then relate to the files for filtering.
If you're running the Auth Proxy from the local machine where you're running your application, then the code will be the same from your application's perspective. You'll still connect to localhost (although you may need to connect to 127.0.0.1, depending on how you have hosts set up on the machine).
The database field will depend on how you've set up the database in Cloud SQL, but it should be the same as your local database. E.g. if you created a database named "somedb" in Cloud SQL, you don't have to change anything to connect to it in Cloud SQL. The proxy running locally will make everything behave as if you're running the database locally from the application's perspective.
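As a minimal sketch (credentials and database name are placeholders), the configuration with the proxy running locally looks identical to connecting to a local database:

// The Cloud SQL Auth Proxy listens locally and forwards to Cloud SQL,
// so the application connects as if the database were local.
const Pool = require("pg").Pool;
const pool = new Pool({
  user: "postgres",
  password: "****",
  host: "127.0.0.1", // or "localhost", depending on your hosts setup
  port: 5432,
  database: "somedb" // the name you gave the database in Cloud SQL
});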
Edit: This particular answer wasn't the issue they were having, but in the comments it came up that both the Proxy and SSL-only mode were being used. That is generally not recommended, because it doubles up the SSL/TLS usage: the Proxy already uses generated SSL certificates to connect to Cloud SQL, so database-level SSL is a redundancy that's likely not needed. There are some edge cases where you may want both, but broadly speaking one or the other is recommended.

Force pool requests to resolve in the order that they're made

I have a Node.js server running with a PostgreSQL database. I have the client set up to make a request to edit data, wait for that request to resolve, and then send another request to receive all the data with the new changes applied.
This works well; however, it fails every so often and I don't know why. The data in the database DOES get changed, and the new data IS requested, but the new changes aren't showing up in the returned data (refreshing the page, though, causes it to reflect the new changes). I assumed my requests were sometimes arriving out of order, but that doesn't appear to be the case. My best guess is that the database is falling behind: my Node.js code executes extremely fast and tells the client the new data is ready, but by the time it asks the database for that data, the database sometimes hasn't finished updating its contents, and so it returns old data.
The code I'm using to make calls to the database is such:
const { Pool } = require("pg");
const pool = new Pool({
  user: "postgres",
  database: "databasename",
  password: "mypassword",
  port: 5432
});

// (assumed to run inside a Promise executor, hence the resolve() calls)
pool.connect((err, client, done) => {
  client.query("SELECT id, title, description, people, due_date, real_due_date FROM table ORDER BY due_date;", (error, result) => {
    done(); // release the client back to the pool once the query has finished
    if (error) {
      console.log(error);
      resolve("error");
    } else {
      resolve(JSON.stringify(result.rows));
    }
  });
});
(this is just a snippet to demonstrate the structure I use, the actual code's content isn't specific to the question.)
I'm using the same pool to make all the requests to the database. Is there a way to force this pool to complete requests in the order they're received, so that it only executes the SELECT query after the UPDATE has fully completed? If so, do you think that will fix my problem?
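For illustration, a minimal sketch of one way to enforce that ordering with async/await; the statements echo the snippet above, but the exact queries and parameter names are placeholders:

// Awaiting the UPDATE before issuing the SELECT ensures the change has
// been fully applied before the fresh data is read back.
async function editAndFetch(newTitle, id) {
  await pool.query("UPDATE table SET title = $1 WHERE id = $2", [newTitle, id]);
  const result = await pool.query(
    "SELECT id, title, description, people, due_date, real_due_date FROM table ORDER BY due_date;"
  );
  return result.rows;
}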

How can I perfectly work with Oracle DB in my Node.js project

I am currently working on a website using Angular, Node.js, Express and an Oracle database.
I'm still not familiar with all these technologies and I'm doing my best! My problem is that the Oracle database connection can't be exported. I searched about this and I think it's a promise thing. The only solution I found is to connect to the database every time I use a new SQL query, which is impossible for me because the project is big.
Is there any method I can use to make the database connection in a separate file and export it when I need it?
With a multi-user app, you should use a connection pool. You open it at app start and then access it as needed; you don't have to pass anything around.
If you open the connection pool in your initialization routine:
await oracledb.createPool({
  user: dbConfig.user,
  password: dbConfig.password,
  connectString: dbConfig.connectString
});
console.log('Connection pool started');
then any other module that needs a connection can simply do:
conn = await oracledb.getConnection();
result = await conn.execute(statement, binds, opts);
await conn.close();
Read up on connection pools, sizing, and threads in the node-oracledb documentation on Connection Pooling.
Also see the series Build REST APIs for Node.js and its code at https://github.com/oracle/oracle-db-examples/tree/master/javascript/rest-api. In particular, look at database.js.
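A minimal sketch of the separate-file pattern the question asks about might look like this; the file name and helper names are illustrative, not taken from the linked examples:

// db.js - wraps the node-oracledb default pool in a reusable module
const oracledb = require('oracledb');

async function initialize(dbConfig) {
  // creates the default connection pool once, at app start
  await oracledb.createPool({
    user: dbConfig.user,
    password: dbConfig.password,
    connectString: dbConfig.connectString
  });
}

async function execute(statement, binds = [], opts = {}) {
  let conn;
  try {
    conn = await oracledb.getConnection(); // borrows from the default pool
    return await conn.execute(statement, binds, opts);
  } finally {
    if (conn) await conn.close(); // returns the connection to the pool
  }
}

module.exports = { initialize, execute };

Any route handler can then require this module and call execute() without passing a connection around.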

node-postgres pool management

I'm trying to connect Node.js to a PostgreSQL database; for that I'm using node-postgres.
var pool = new Pool({
  user: username,
  password: password,
  host: server,
  database: database,
  max: 25
});

module.exports = {
  execute_query: function (query2) {
    // usage of query
    pool.query(query2, function (err, result) {
      return (result);
    });
  }
};
Then, in my application, the function execute_query is called in different places.
Locally it works, but I wonder how the pool is managed: is this enough configuration to manage concurrent users if my application is used by different people?
Do I need to do anything else to ensure that I have clients in the pool?
Or should I use the old way of managing clients by hand?
I read the documentation of node-postgres and it says that pool.query is the simplest way, but it doesn't say how it manages the connections...
Do you have any information?
Thank you
is this enough configuration to manage concurrent users if my application is used by different people?
This is a very broad question and depends on more than one thing. Let me still give this a shot.
The number of connections in the pool is the number of active connections your server will maintain with the database. Each connection has a cost, as Postgres maintains each one as a separate process. So just focusing on the connection pool is not enough.
pgtune gives you a good recommendation on your postgresql.conf settings based on your hardware.
If you want to test your application, you can use jMeter or any other load-testing tool to see how it performs under a certain load.
Some good resources to read on the topic: stack overflow answer, postgres wiki
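On the pool.query question specifically: it checks a client out of the pool, runs the query, and releases the client automatically. Here is a minimal sketch of the wrapper returning results as a promise (keeping the question's execute_query name; how the caller consumes it is an assumption):

// pool.query without a callback returns a Promise, so callers can
// await the result instead of relying on a return inside a callback
module.exports = {
  execute_query: function (query2) {
    return pool.query(query2).then(function (result) {
      return result.rows;
    });
  }
};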

Opening fewer connections with RethinkDB

My question is: can I have fewer connections open when using RethinkDB? Right now I'm opening a new one every time I want to insert or get some data. I'm afraid that this is not the right thing to do. Is there any way I can open just one and use it instead? I'm using Node.js.
Yes. You can run multiple queries on a connection. That's the recommended way of doing things.
The best way is to use a connection pool. For Node.js, for example, we are using rethinkdb-pool.
I haven't looked into the open-source connection pools for RethinkDB, but I have a Node app that uses RethinkDB and will have a limited number of users, so I save my one connection to RethinkDB as a global variable and then use it for all queries.
'use strict';

var r = require('rethinkdb');
var rethinkConnect = null;

r.connect(
  {
    'host': 'localhost',
    'port': '28015',
    'password': 'noneya',
  },
  function (err, conn) {
    if (err) {
      console.log(err);
    } else {
      rethinkConnect = conn;
    }
  }
);
Now all the queries the Node.js server makes can use this connection. Keep in mind this code is async, so you cannot make a query on the line immediately after r.connect(). You could, however, use the payload of an inbound socket.io event as the params of a RethinkDB query.
I'd advise you to use rethinkdbdash, an "advanced Node.js RethinkDB driver" that includes connection pools and some other advanced features. It has many more stars and contributors than rethinkdb-pool.
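As a minimal sketch of what that looks like (the table name is illustrative):

// rethinkdbdash maintains its own connection pool, so .run() takes no
// connection argument; each query borrows a pooled connection.
const r = require('rethinkdbdash')({ host: 'localhost', port: 28015 });

async function getItems() {
  return r.table('items').run(); // resolves with the query results
}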
