Can I use two connections with different databases using Prisma? - node.js

I'm trying to use two different databases with Prisma: one database would be read-only and the other would be used for writes. I need to configure this in my project and use both connections inside my UseCase. How can I do that?
I tried configuring two schemas, but I couldn't call them inside the UseCase.

Here's an example of how you can create two PrismaClient instances, one with read access and one with write access:
import { PrismaClient } from '@prisma/client'

const client1 = new PrismaClient({ datasources: { db: { url: 'postgres://localhost/db1' } } })
const client2 = new PrismaClient({ datasources: { db: { url: 'postgres://localhost/db2' } } })
You are essentially overriding the database URL while instantiating each Prisma client. Here client1 could be the read-only connection and client2 the one with write access.
You can read more about it here: #2443
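To use both connections inside a UseCase, one option is to inject the two clients into the class. Here's a minimal sketch under that assumption; the UserUseCase name, the user model and the file name are placeholders for illustration, not something from the question:
// useCases/userUseCase.js (hypothetical file)
import { PrismaClient } from '@prisma/client'

// read-only and write connections, created as shown above
const readClient = new PrismaClient({ datasources: { db: { url: 'postgres://localhost/db1' } } })
const writeClient = new PrismaClient({ datasources: { db: { url: 'postgres://localhost/db2' } } })

class UserUseCase {
  constructor(readDb, writeDb) {
    this.readDb = readDb   // used for queries
    this.writeDb = writeDb // used for mutations
  }

  async getUser(id) {
    // reads go through the read-only connection
    return this.readDb.user.findUnique({ where: { id } })
  }

  async createUser(data) {
    // writes go through the write connection
    return this.writeDb.user.create({ data })
  }
}

export const userUseCase = new UserUseCase(readClient, writeClient)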

Related

loopback3: associate users to different databases

I'm developing a project in loopback3 where I need to create accounts for multiple companies, and each company has its own database. I'm fully aware that the loopback3 docs have a section explaining how to create datasources programmatically and how to create models from those datasources. I've used that to write the following code, which receives a parameter in the request (I called it dbname) that switches the model over to the wanted datasource.
userclinic.js
Userclinic.observe('before save', async (ctx, next) => {
  const dbname = ctx.instance.dbname; // database selection
  const dbfound = await Userclinic.app.models.Clinics.findOne({ where: { dbname } }) // check that this database really exists among our registered clients' databases
  if (dbfound) { // if database found
    await connectToDatasource(dbname, Userclinic) // link the model to that database
  } else { // otherwise
    next(new Error('cancelled...')) // cancel the save
  }
})
utils.js (from where I export my connectToDatasource method)
const connectToDatasource = (dbname, Model) => {
  console.log("welcome");
  var DataSource = require('loopback-datasource-juggler').DataSource;
  var dataSource = new DataSource({
    connector: require('loopback-connector-mongodb'),
    host: 'localhost',
    port: 27017,
    database: dbname
  });
  Model.attachTo(dataSource);
}

module.exports = {
  connectToDatasource
}
So my problem is that the datasource is actually changing, but the save happens against the previously selected datasource (which means it saves the instance to the old database) and doesn't save to the new one until I send the request again. So changing the datasource takes two requests to take effect, and it also saves the instance in both databases.
I guess that when the request happens, loopback checks the datasource related to that model before allowing any action on it. I really need to get this done by tonight and I hope someone can help out.
PS: if anyone has a solution to this, or knows how to associate multiple clients (users) with multiple databases (programmatically, of course) in any way using loopback 3, I'm all ears (eyes).
Thanks in advance.

Can you keep a PostgreSQL connection alive from within a Next.js API?

I'm using Next.js for my side project. I have a PostgreSQL database hosted on ElephantSQL. Inside the Next.js project, I have a GraphQL API set up using the apollo-server-micro package.
Inside the file where the GraphQL API is set up (/api/graphql), I import a database helper-module. Inside that, I set up a pool connection and export a function which uses a client from the pool to execute a query and return the result. This looks something like this:
// import node-postgres module
import { Pool } from 'pg'

// set up pool connection using environment variables with a maximum of three active clients at a time
const pool = new Pool({ max: 3 })

// query function which uses the next available client to execute a single query and return results on success
export async function queryPool(query) {
  let payload
  try {
    // try executing the query; the pool checks out a client internally
    const res = await pool.query(query)
    payload = res.rows
  } catch (e) {
    console.error(e)
  }
  return payload
}
The problem I'm running into is that the Next.js API doesn't seem to (always) keep the connection alive but rather opens a new one (either for every connected user or maybe even for every API query), which results in the database quickly running out of connections.
I believe that what I'm trying to achieve is possible for example in AWS Lambda (by setting context.callbackWaitsForEmptyEventLoop to false).
It is very possible that I don't have a proper understanding of how serverless functions work, and this might not be possible at all, but maybe someone can suggest a solution.
I have found a package called serverless-postgres and I wonder if that might be able to solve it but I'd prefer to use the node-postgres package instead as it has much better documentation. Another option would probably be to move away from the integrated API functionality entirely and build a dedicated backend-server, which maintains the database connection but obviously this would be a last resort.
I haven't stress-tested this yet, but it appears that the mongodb next.js example solves this problem by attaching the database connection to global in a helper function. The important bit in their example is here.
Since the pg connection is a bit more abstract than mongodb, it appears this approach just takes a few lines for us pg enthusiasts:
// eg, lib/db.js
const { Pool } = require("pg");

if (!global.db) {
  global.db = { pool: null };
}

export function connectToDatabase() {
  if (!global.db.pool) {
    console.log("No pool available, creating new pool.");
    global.db.pool = new Pool();
  }
  return global.db;
}
then in, eg, our API route, we can just:
// eg, pages/api/now
export default async (req, res) => {
  const { pool } = connectToDatabase();
  try {
    const time = (await pool.query("SELECT NOW()")).rows[0].now;
    res.end(`time: ${time}`);
  } catch (e) {
    console.error(e);
    res.status(500).end("Error");
  }
};

Is declaring a Node.js redis client as a const in multiple helpers a safe way to use it?

This is a little hard to articulate, so I hope my title isn't too terrible.
I have a frontend/backend React/Node.js (REST API) web app that I want to add Redis support to for storing and retrieving app-global settings and per-user settings (like language preference, last login, etc... simple stuff). So I was considering adding a /settings branch to my backend REST API to push/pull this information from a Redis instance.
This is where my Node.js inexperience comes through. I'm looking at using the ioredis client and it seems too easy. If I have a couple of helpers (more than one .js file that will call upon Redis), will constructing the client as a const in each be safe to do? Or would reusing a single instance of it be the recommended approach?
Here's a sample of what I'm thinking of doing. Imagine I had 3 helper modules that require access to the redis client. Should I declare it as a const in each? Or centralize it in a single helper module and get the client from there? Is there a disadvantage to doing either?
const config = require('config.json');
const redis_url = config.redis_url;

// redis setup
const Redis = require('ioredis');
const redis = new Redis(redis_url);

module.exports = {
  test
}

async function test(id) {
  redis.get(id, function (err, result) {
    if (err) {
      console.error(err);
      throw (err);
    } else {
      return result;
    }
  });
}
Thank you.
If no redis conflicts...
If the different "helper" modules you are referring to have no conflicts when interacting with redis, such as overwriting / using the same redis keys, then I can't see any reason not to use the same redis instance (as outlined by garlicman) and export this to the different modules in which it is used.
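For example, a minimal sketch of that shared-instance approach (file names are illustrative; Node's module cache means every require of the same file gets the same connection):
// redisClient.js -- create the connection once and export it
const Redis = require('ioredis');
const redis = new Redis(process.env.REDIS_URL); // or the url from your config.json
module.exports = redis;

// helperA.js -- every helper that requires this module shares the same connection
const redis = require('./redisClient');

async function getSetting(key) {
  return redis.get(key);
}

module.exports = { getSetting };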
Otherwise use separate redis databases...
If you do require separate redis database connections, redis ships with 16 databases so you can specify which to connect to when creating a new instance - see below:
const redis = new Redis({ // SET UP CONFIG FOR CONNECTION TO REDIS
  port: 6379,        // Redis port
  host: '127.0.0.1', // Redis host
  family: 4,         // 4 (IPv4) or 6 (IPv6)
  db: 10,            // Redis database to connect to
});
Normally what I would do (in Java, say) is implement an explicit class with singleton access to hold the connection and any connection error/reconnect handling.
All modules in Node.js are already singletons, I believe, but what I will probably go with is a client class to hold the connection and my own access-related methods. Something like:
const config = require('config.json');
const Redis = require("ioredis");

// named RedisClient so it doesn't shadow the imported ioredis Redis class
class RedisClient {
  constructor() {
    this.client = new Redis(config.redis_url);
  }
  get(key) {
    return this.client.get(key);
  }
  set(key, value, ttl) {
    let rp;
    if (ttl === 0) {
      rp = this.client.set(key, value);
    } else {
      rp = this.client.set(key, value)
        .then((res) => {
          // arrow function keeps `this` bound to the RedisClient instance
          this.client.expire(key, ttl);
        });
    }
    return rp;
  }
}

module.exports = new RedisClient();
I'll probably include a data_init() method to check and preload an initial key/value structure on first connect.
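Roughly, that might look like the following (the marker key and default values below are placeholders, not a finished design):
// inside the RedisClient class above
async data_init() {
  // check a marker key and preload defaults on first connect
  const initialized = await this.client.get('app:initialized'); // placeholder key
  if (!initialized) {
    await this.client.set('app:default_language', 'en'); // placeholder default
    await this.client.set('app:initialized', '1');
  }
}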

Imported Sequelize connection works properly in one file but not another

I am writing a NodeJS API with express that uses Sequelize as an ORM to connect to a postgres database.
I have several API files containing the endpoint functions for each model, and I group related functions in these files.
In these files, I load the db connection by requiring the models folder, which contains all the model definitions, and an index file that instantiates & exports the database connection with the models.
I instantiate this at the top of any file that needs access to the database. My problem is that when I enter an endpoint, I can access the database connection perfectly. But when I call any function from another file that also accesses the database, all of my models from the required file are undefined, and it throws an error.
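For reference, the index file in the models folder follows the usual Sequelize pattern; a simplified sketch of it (the real file loads every model definition in the folder) looks like this:
/* models/index.js (simplified) */
const Sequelize = require('sequelize')
const sequelize = new Sequelize(process.env.DATABASE_URL)

const db = {}
// each model definition is registered on the shared db object
db.foo = require('./foo')(sequelize, Sequelize.DataTypes)
db.log = require('./log')(sequelize, Sequelize.DataTypes)

db.sequelize = sequelize
db.Sequelize = Sequelize

module.exports = db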
/* api/fooApi.js */
const db = require('../models')
const logApi = require('../logApi')

async function createFoo(req, res, next) {
  try {
    // db.foo and db.log are defined here, and accessible
    const foo = await db.foo.create(req.body)
    const log = await logApi.logCreation('foo', foo)
  } catch (err) {
    next(err)
  }
}

module.exports = { createFoo }
/* api/logApi.js */
const db = require('../models')

async function logCreation(recordType, record) {
  // db.foo and db.log are not defined here when the function is called from fooApi.js
  const log = await db.log.create({
    event: 'create',
    type: recordType,
    details: `${recordType} record created by user ${record.createdBy}`
  })
  return log
}

module.exports = { logCreation }
When I enter the endpoint function createFoo(), I have full access to everything that I expect to be in db, including db.foo.create(). But in logCreation(), I cannot access these same functions.
Many different models will access the logCreation function, so it needs to be defined in one place. The require statement at the top of the file is exactly the same as that in fooApi, but when I debug the function, db is an empty object without any of the properties it should have.
If I pass db as an argument to logCreation, then the function works, but I'd like to avoid this if I can, as it would involve a major restructuring.
Before I had things set up this way, I had the following in each api file:
let db = null

function init(dbConn) {
  db = dbConn
}
At setup, I would call init() in every API file using the same instance as the argument. However, this was a super clunky way of doing things that I wanted to move away from.
So my question is: What is the correct way to set up Sequelize so that I can access the database across multiple files?
I am using implicit transactions using namespace and cls-hooked as directed in the docs.
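(That part is just the standard cls-hooked wiring from the Sequelize docs, roughly:)
const cls = require('cls-hooked')
const namespace = cls.createNamespace('my-app-namespace') // namespace name is arbitrary
const Sequelize = require('sequelize')
// managed transactions are kept in CLS and picked up automatically by queries
Sequelize.useCLS(namespace)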
I solved my problem: it had nothing to do with how I was importing the file, and was entirely due to a circular dependency, because a helper function on one of the models was accessing a helper function in an API file.
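To illustrate the kind of cycle that causes this (hypothetical, simplified file contents; the real project's paths will differ):
// models/index.js
const fooApi = require('../api/fooApi') // a model helper pulls in the API layer
const db = {}
// ... model definitions attached to db here ...
module.exports = db

// api/fooApi.js
const db = require('../models') // the API layer requires the models back
// When Node hits this cycle it hands back the partially-built exports of
// models/index.js, so db can show up as an empty object ({}) at require time.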

efficiency of mongodb/mongoskin access by multiple modules approach?

I'm developing an express app that provides a REST API; it uses mongodb through mongoskin. I wanted a layer that splits routing from db access. I have seen an example that creates a database bridge by creating a module file, for example models/profiles.js:
var mongo = require('mongoskin'),
    db = mongo.db('localhost:27017/profiler'),
    profs = db.collection('profiles');

exports.examplefunction = function (info, cb) {
  // code that accesses the profs collection and does the query
}
Later, this module is required in the routing files.
My question is: if I use this approach of creating one module for each collection, will it be efficient? Do I have an issue of connecting and disconnecting from mongo multiple (unnecessary) times by doing that?
I was thinking that maybe exporting the db variable from one module to the others that handle each collection would solve the supposed issue, but I'm not sure.
Use a single connection and then create your modules, passing in the shared db instance. You want to avoid setting up a separate db pool for each module. One way of doing this is to construct the module as a class.
exports.build = function (db) {
  return new MyClass(db);
}

var MyClass = function (db) {
  this.db = db;
}

// instance method so queries can use the shared db handle
MyClass.prototype.doQuery = function () {
  // e.g. this.db.collection('profiles').find(...)
}
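A quick usage sketch of that pattern, assuming models/profiles.js is refactored to expose such a build() function (the connection string matches the question's setup):
// app setup: create one shared connection and build each collection module from it
var mongo = require('mongoskin');
var db = mongo.db('localhost:27017/profiler');

var profiles = require('./models/profiles').build(db); // every module reuses the same db
profiles.doQuery();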
