PostgreSQL Row Level Security in Node JS

I have a database which is shared amongst multiple tenants/users. However, I want to add row-level-security protection so that any given tenant can only see those rows that belong to them.
As such, for each tenant I have a user in PostgreSQL, such as "client_1" and "client_2". In each table, there is a column "tenant_id", the default value of which is "session_user".
Then, I have row level security as such:
CREATE POLICY policy_warehouse_user ON warehouse FOR ALL
TO PUBLIC USING (tenant_id = current_user);
ALTER TABLE warehouse ENABLE ROW LEVEL SECURITY;
This works great, and if I set the user "SET ROLE client_1" I can only access those rows in which the tenant_id = "client_1".
However, I am struggling with how to best set this up in the Node JS back-end. Importantly, for each tenant, such as "client_1", there can be multiple users connected. So several users on our system, all of whom work at company X, will connect to the database as "client_1".
What I am currently doing is this:
const { Pool } = require('pg');

let config = {
    user: 'test_client2',
    host: process.env.PGHOST,
    database: process.env.PGDATABASE,
    max: 10, // default value
    password: 'test_client2',
    port: process.env.PGPORT,
}
const pool = new Pool(config);
const client = await pool.connect()
await client.query('sql...')
client.release();
I feel like this might be a bad solution, especially since I am creating a new Pool each time a query is executed. So the question is, how can I best ensure that each user executes queries in the database using the ROLE that corresponds to their tenant?

Maybe you can have a setupDatabase method that returns the pool for your app; this will be called once at app bootstrap:
const { Pool } = require('pg');

function setUpDatabase() {
    let config = {
        user: 'test_client2',
        host: process.env.PGHOST,
        database: process.env.PGDATABASE,
        max: 10, // default value
        password: 'test_client2',
        port: process.env.PGPORT,
    }
    const pool = new Pool(config);
    return pool
}
and then, once you have identified the tenant, you set the role before executing the query:
// note: SET ROLE cannot take bind parameters, so the role name has to be
// interpolated safely, e.g. with client.escapeIdentifier from node-postgres
await client.query(`SET ROLE ${client.escapeIdentifier(currentTenant)}`);
// my assumption is that the next line will use the role you set before
await client.query('select * from X where Y');
This is just a suggestion, I haven't tested it.
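For what it's worth, here is a minimal sketch along those lines with a single shared pool: the role is set when a client is checked out and reset before it goes back into the pool. The queryAsTenant helper and the environment variable names are only illustrative, not part of node-postgres:

const { Pool } = require('pg');

// one pool for the whole app, created once at bootstrap;
// the login role is a low-privilege user that is a member of every client_* role
const pool = new Pool({
    host: process.env.PGHOST,
    database: process.env.PGDATABASE,
    port: process.env.PGPORT,
    user: process.env.PGUSER,
    password: process.env.PGPASSWORD,
    max: 10,
});

// hypothetical helper: run a query under the tenant's role, then reset it
async function queryAsTenant(tenant, sql, params) {
    const client = await pool.connect();
    try {
        // SET ROLE cannot be parameterized, so escape the identifier instead
        await client.query(`SET ROLE ${client.escapeIdentifier(tenant)}`);
        return await client.query(sql, params);
    } finally {
        // make sure the pooled connection does not keep the tenant's role
        await client.query('RESET ROLE').catch(() => {});
        client.release();
    }
}

// usage: every user who works at company X queries as "client_1"
// const result = await queryAsTenant('client_1', 'SELECT * FROM warehouse');

Keeping one pool and switching roles per checkout avoids creating a new Pool per request, and RESET ROLE prevents one tenant's role from leaking to the next request that reuses the same connection.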

Related

Azure Function connect Azure PostgreSQL ETIMEDOUT, errno: -4039

I have an Azure (AZ) Function that does two things:
validate submitted info involving 3rd-party packages.
when OK, call a PostgreSQL function at AZ to fetch a small set of data.
Testing with Postman, this AF's response time on localhost is < 40 ms. Deployed to the cloud, with the URL changed to AZ and the same set of data, it took 30 seconds and returned Status: 500 Internal Server Error.
Did a search and thought this SO post might be the case, i.e. that I need to bump my subscription to a more expensive one to avoid cold starts.
But more investigation, running parts 1 and 2 individually and combined, found:
the validation part alone runs perfectly at AZ, response time < 40 ms, just like local, which suggests cold start/npm installation is not the issue.
the pg function call always takes long and returns status 500, regardless of whether it runs alone or after part 1, and no data is returned.
Application Insights is enabled and I added a Diagnostic setting with:
FunctionAppLogs and AllMetrics selected
Send to Log Analytics workspace and Stream to an event hub selected
The following queries found no errors/exceptions:
requests | order by timestamp desc |limit 100 // success is "true", time taken 30 seconds, status = 500
traces | order by timestamp desc | limit 30 // success is "true", time taken 30 seconds, status = 500
exceptions | limit 30 // no data returned
How complicated is my pg call? It's a standard connection, simple and short:
require('dotenv').config({ path: './environment/PostgreSql.env'});
const fs = require("fs");
const pgp = require('pg-promise')(); // () = taking default initOptions
const db = pgp(
    {
        user: process.env.PGuser,
        host: process.env.PGhost,
        database: process.env.PGdatabase,
        password: process.env.PGpassword,
        port: process.env.PGport,
        ssl:
        {
            rejectUnauthorized: true,
            ca: fs.readFileSync("./environment/DigiCertGlobalRootCA.crt.pem").toString(),
        },
    }
);

const pgTest = (nothing) =>
{
    return new Promise((resolve, reject) =>
    {
        var sql = 'select * from schema.test()'; // test() does a select from a 2-row narrow table.
        db.any(sql)
            .then
            (
                good => resolve(good),
                bad => reject({ status: 555, body: bad })
            )
    });
}

module.exports = { pgTest }
AF test1 is a standard httpTrigger with anonymous access:
const x1 = require("package1");
...
const xx = require("packagex");
const pgdb = require("db");

module.exports = function(context)
{
    try
    {
        pgdb.pgTest(1)
            .then
            (
                good => { context.res = { body: good }; context.done(); },
                bad => { context.res = { body: bad }; context.done(); }
            )
            .catch(err => { console.log(err) })
    }
    catch (e)
    { context.res = { body: e }; context.done(); } // was "bad", which is undefined here
}
Note:
AZ = Azure.
AZ pg doesn't require SSL.
pg connectivity method: public access (allowed IP addresses)
Postman tests on a local F5 run go against the same AZ pg database, all in the same region.
pgAdmin and psql also run fast against the same database.
The AF deploy is a zip-file deployment; my understanding is that it uses the same configuration.
I'm new to Azure, but based on my experience, if it were a credential problem the error should come back right away.
Update 1, FunctionAppLogs | where TimeGenerated between ( datetime(2022-01-21 16:33:20) .. datetime(2022-01-21 16:35:46) )
Is it because my pg network access is set to Public access?
My AZ pg DB is a flexible server whose current Networking setting is Public access (allowed IP addresses), and I had added some firewall rules with client IP addresses. My assumption was that access is allowed from within AZ, but it's not.
Solution 1: simply check this box: Allow public access from any Azure service within Azure to this server, at the bottom of Settings -> Networking.
Solution 2: find all of the AF's outbound IPs and add them to the firewall rules under Settings -> Networking. The reason to add them all is that Azure selects an outbound IP randomly.

Managing syncs between databases with different permissions

I am building an app using PouchDB and CouchDB. The structure is that each user has their own database (I have activated the per-user option).
Then, to simplify, let's say there is a database that aggregates data for all users, a location database.
This database syncs with the user databases.
For each user database, the user has the admin role.
The location database has an admin user as admin. Regular users are not added as admins to this database. Each document has a userId attribute. The sync between userdb and locationdb is filtered by userId.
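For context, a filtered sync like that is typically driven by a design-document filter; here is a minimal sketch, where the "sync" and "by_user" names are only illustrative, not taken from the question:

// hypothetical design document in the database being replicated from
const designDoc = {
    _id: '_design/sync',
    filters: {
        // only replicate documents that belong to the requested user
        by_user: function (doc, req) {
            return doc.userId === req.query.userId;
        }.toString()
    }
};

// replication can then be restricted to one user's documents:
// db.replicate.from(remote, {
//     filter: 'sync/by_user',
//     query_params: { userId: 'some-user-id' }
// });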
Now, when I log in to the app as a user, I have permission to launch the sync between, let's say, localdb on PouchDB and userdb on CouchDB, since the user is admin on userdb. So far so good.
var remoteUser =
    new PouchDB(
        'https://domain:6984/' + 'userdb-' + hex,
        {
            auth: {
                username: 'user',
                password: 'password'
            }
        }
    )

db.replicate.from(remoteUser).on('complete', function () {
    db.sync(remoteUser, { live: true, retry: true })
        .on('change', function (info) {
            dispatch('syncPrintQueue')
            console.log('sync remote user')
        }).on('pause', function () {
            console.log('user remote syncing done')
        })
})
But then, from the app, I want to sync userdb to locationdb. As a user I cannot do that, so I add auth as admin, and now I can launch the sync.
var remoteLocation =
    new PouchDB(
        'https://domain:6984/' + 'locationdb-' + locationHex,
        {
            auth: {
                username: 'admin',
                password: 'password'
            }
        }
    )

remoteUser.replicate.from(remoteLocation).on('complete', function () {
    remoteUser.sync(remoteLocation, {
        live: true,
        retry: true
    })
        .on('change', function (info) {
            console.log('location remote syncing ')
        }).on('pause', function () {
            console.log('location remote syncing done')
        })
})
dispatch('syncCompany', remoteLocation)
},
The problem is that now I'm logged in as admin in the current session.
What I am doing right now is storing user info in localStorage right after login, and I use that for filtering or validating on Couch, instead of the user returned from checking the current session, which would allow me to correctly filter server-side.
Adding each user to the general database as admin is not an option. So the only idea I have is to move all the syncs and authorization to middleware in, say, Rails or Node.
Is there a solution within CouchDB to manage this situation?
The standard persistent replication generally hasn't scaled to replicating infrequent updates from (or to) many databases. It has been improving, with support for cycling through many persistent replications using a scheduler, so you could look at that to see if it is now sufficient.
The interim solution has been the spiegel project, which has listener processes that observe the _global_changes feed and match database names by regex pattern to identify which source databases have changed and need to be re-examined by one of its change or replicator processes.
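If the main goal is just to keep the admin credentials out of the browser, one way to apply the scheduler-based approach is to define the filtered sync server-side as a document in the _replicator database, created once by the admin. A rough sketch, where the URLs are placeholders and sync/by_user is the hypothetical filter sketched earlier:

// hypothetical _replicator document, created by the admin (e.g. from a small
// Node script), so the client app never needs the admin password
const replicationDoc = {
    _id: 'userdb-<hex>-to-locationdb',
    source: 'https://admin:password@domain:6984/userdb-<hex>',
    target: 'https://admin:password@domain:6984/locationdb-<locationHex>',
    continuous: true,
    filter: 'sync/by_user',
    query_params: { userId: 'the-user-id' }
};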

Hyperledger Fabric 2.0, Can't Access Fabtokens of Users Using Node.js SDK

I'm trying to issue some Fabtokens to users and then use them in various scenarios such as transferring, redeeming, etc. I follow the Node SDK documentation here: https://fabric-sdk-node.github.io/master/tutorial-fabtoken.html
This is how they do the Fabtoken operations:
// create a TokenClient instance from client
const tokenClient = client.newTokenClient(mychannel);

// create a transaction ID for "issuer"
const txId = client.newTransactionID();

// create two parameters for issue, one for user1 and one for user2
const param1 = {
    owner: user1.getIdentity().serialize(),
    type: 'USD',
    quantity: '500',
};
const param2 = {
    owner: user2.getIdentity().serialize(),
    type: 'EURO',
    quantity: '300',
};

// create the token request for issue
const issueRequest = {
    params: [param1, param2],
    txId: txId,
};

// issuer calls issue method to issue tokens to user1 and user2
const result = await tokenClient.issue(issueRequest);
And then they use a different TokenClient to list the tokens of user1:
const user1TokenClient = client1.newTokenClient(mychannel);

// user1 lists tokens
const mytokens = await user1TokenClient.list();

// iterate the tokens to get token id, type, and quantity for each token
for (const token of mytokens) {
    // get token.id, token.type, and token.quantity
    // token.id will be used for transfer and redeem
}
It's mentioned on the Node SDK's Client class page here: https://fabric-sdk-node.github.io/master/Client.html that switching userContexts with the same client instance is an anti-pattern and not recommended since client instances are stateful.
As they suggest, I create my client instances with different user contexts. This is how I create my clients, set their user context and create my tokenClient instances:
const adminClient = new Fabric_Client();
const admin = await adminClient.createUser(user_opts);
adminClient.setUserContext(admin, true);
let adminConfig = {
    admin: admin,
    adminClient: adminClient,
    adminTokenClient: adminClient.newTokenClient(channel)
}

const serverClient = new Fabric_Client(); // the server client is created the same way
const server = await serverClient.createUser(server_opts);
serverClient.setUserContext(server, true);
let serverConfig = {
    server: server,
    serverClient: serverClient,
    serverTokenClient: serverClient.newTokenClient(channel)
}
Later on, I'm using these config objects to issue some tokens to different users. This is how I issue tokens to my server account from my issuer (admin) account:
const txId = adminConfig.adminClient.newTransactionID();
let issueQuery = {
    tokenClient: adminConfig.adminTokenClient,
    txId: txId,
    channel: channel,
    params: []
}
for (let i = 0; i < 3; ++i) {
    let param = {
        owner: serverConfig.server.getIdentity().serialize(),
        type: 'test',
        quantity: '1'
    }
    issueQuery.params.push(param);
}
let issueTx = await waitForIssue(issueQuery);
This successfully issues three tokens to the server as expected. The problem is that when I try to access the tokens of my server with code similar to the example they provide:
let server_tokens = await serverConfig.serverTokenClient.list();
for (let server_token of server_tokens) {
    console.log(server_token.id);
}
The result is just empty and I don't get any error messages. However, when I check the token issue transaction I generated using queryTransaction(txId), I can see that the owner of the issued tokens in that transaction is the server, which is how I know the tokens were issued to the server successfully. Is there any other way to check my server's tokens? Or shouldn't I use a different client and user context per user, as they suggest? Previously I was able to see the server's tokens when I used a single client and a single user context to issue and list tokens, but that approach caused me problems when I was trying to transfer tokens asynchronously.
As far as I know, FabTokens have now been removed from the master branch of Fabric 2.0, as described in these links:
https://gerrit.hyperledger.org/r/c/fabric/+/32979
https://lists.hyperledger.org/g/fabric/topic/fabtoken/34150195?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,34150195
I would expect the tutorial and the information in the Fabric docs to be removed in due course.

One connection per user

I know that this question was asked already, but it seems that some more things have to be clarified. :)
The database is designed so that each user has the proper privileges to read documents, so the pool needs to hold connections for different users, which falls outside the usual connection pool concept. For optimization and performance reasons I need to run a so-called "user preparation", which includes setting session variables, calculating and caching values, etc., and only then execute queries.
For now, I have two solutions. In the first solution, I first check that everything is prepared for the user and then execute one or more queries. If it is not prepared, I need to call the "user preparation" first and then execute the query or queries. With this solution I lose a lot of performance, because I have to do the check every time, so I've decided on another solution.
The second solution uses a "database pool" where each pool is for one user. Only on the first connection, useCount === 0 (I do not use {direct: true}), do I call the "user preparation" (a stored procedure that sets some session variables and prepares the cache), and then I execute the SQL queries.
The user preparation is done in the connect event within the initOptions parameter for initializing pg-promise. I used the pg-promise-demo, so I do not need to explain the rest of the code.
The code for pgp initialization with the wrapper of database pooling looks like this:
import * as promise from "bluebird";
import pgPromise from "pg-promise";
import { IDatabase, IMain, IOptions } from "pg-promise";
import { IExtensions, ProductsRepository, UsersRepository, Session, getUserFromJWT } from "../db/repos";
import { dbConfig } from "../server/config";

// pg-promise initialization options:
export const initOptions: IOptions<IExtensions> = {
    promiseLib: promise,
    async connect(client: any, dc: any, useCount: number) {
        if (useCount === 0) {
            try {
                await client.query(pgp.as.format("select prepareUser($1)", [getUserFromJWT(session.JWT)]));
            } catch (error) {
                console.error(error);
            }
        }
    },
    extend(obj: IExtensions, dc: any) {
        obj.users = new UsersRepository(obj);
        obj.products = new ProductsRepository(obj);
    }
};

type DB = IDatabase<IExtensions> & IExtensions;
const pgp: IMain = pgPromise(initOptions);

class DBPool {
    private pool = new Map();
    public get = (ct: any): DB => {
        const checkConfig = { ...dbConfig, ...ct };
        const { host, port, database, user } = checkConfig;
        const dbKey = JSON.stringify({ host, port, database, user });
        let db: DB = this.pool.get(dbKey) as DB;
        if (!db) {
            // const pgp: IMain = pgPromise(initOptions);
            db = pgp(checkConfig) as DB;
            this.pool.set(dbKey, db);
        }
        return db;
    }
}
export const dbPool = new DBPool();

import diagnostics = require("./diagnostics");
diagnostics.init(initOptions);
And the web API looks like:
GET("/api/getuser/:id", (req: Request) => {
const user = getUserFromJWT(session.JWT);
const db = dbPool.get({ user });
return db.users.findById(req.params.id);
});
I'm interested in whether the source code instantiates pgp correctly, or whether it should be instantiated inside the if block in the get method (the commented-out line)?
I've seen that pg-promise uses a DatabasePool singleton exported from dbPool.js, which is similar to my DBPool class but exists to produce the warning "WARNING: Creating a duplicate database object for the same connection". Is it possible to use that DatabasePool singleton instead of my dbPool singleton?
It seems to me that dbContext (the second parameter in pgp initialization) could solve my problem, but only if it could be passed as a function, not as a value or object. Am I wrong, or can dbContext be dynamic when accessing a database object?
I wonder if there is a third (better) solution? Or any other suggestion.
If you are troubled by this warning:
WARNING: Creating a duplicate database object for the same connection
but your intent is to maintain a separate pool per user, you can indicate so by providing any unique parameter for the connection. For example, you can include a custom property with the user name:
const cn = {
    database: 'my-db',
    port: 12345,
    user: 'my-login-user',
    password: 'my-login-password',
    // ...
    my_dynamic_user: 'john-doe'
}
This will be enough for the library to see that there is something unique in your connection, which doesn't match the other connections, and so it won't produce that warning.
This will work for connection strings as well.
Please note that what you are trying to achieve can only work well when the total number of connections well exceeds the number of users. For example, if you can use up to 100 connections with up to 10 users, then you can allocate 10 pools, each with up to 10 connections. Otherwise, the scalability of your system will suffer, as the total number of connections is a very limited resource; you would typically never go beyond 100 connections, since running that many physical connections concurrently creates excessive load on the CPU. That's why sharing a single connection pool scales much better.
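To tie this back to the DBPool class from the question, here is a minimal JavaScript sketch of the same idea combined with the unique-property trick; getDbForUser is an illustrative helper name, and my_dynamic_user is just the custom property from the example above, not a pg-promise option:

const pgp = require('pg-promise')();
const pool = new Map();

function getDbForUser(cn, userName) {
    let db = pool.get(userName);
    if (!db) {
        // the extra property makes this connection object unique per user,
        // so pg-promise won't warn about duplicate database objects
        db = pgp({ ...cn, my_dynamic_user: userName });
        pool.set(userName, db);
    }
    return db;
}

// usage (hypothetical):
// const db = getDbForUser({ database: 'my-db', port: 12345, user: 'my-login-user', password: 'my-login-password' }, 'john-doe');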

Share a database connection pool between sequelize and pg

I have a server which I've written using Express and node-postgres (pg). It creates its own DB pool:
const dbPool = new pg.Pool(dbConfig);
and runs SQL queries directly using this connection.
Now I'm adding a new table and corresponding REST API. I'd like to use sequelize and epilogue to reduce the boilerplate. Unfortunately, sequelize wants to create its own database connection pool:
const sequelize = new Sequelize(database, user, password, config);
Is it possible to re-use the existing connection pool or otherwise share it between my existing pg code and my new sequelize code?
Sequelize does not offer the option to pass a custom pool, but you can pass options that will get used to create the sequelize pool, such as min and max connections.
What I would do in your case is check your total DB connection limit and split it between the two pools based on their expected usage.
For example, if you have a maximum of 20 connections on your database:
const dbPool = new pg.Pool({
    max: 10
});

const sequelize = new Sequelize(database, user, password, {
    // ...
    pool: {
        max: 10,
        min: 0,
        acquire: 30000,
        idle: 10000
    }
});
I would also suggest using environment variables to set the max connections on your sequelize pool and node-pg pool, so that you can easily adjust the split if needed.
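A small sketch of that suggestion, reusing the pieces above; the PG_POOL_MAX and SEQUELIZE_POOL_MAX variable names are only illustrative:

// read the per-pool limits from the environment, with fallbacks
const pgMax = parseInt(process.env.PG_POOL_MAX || '10', 10);
const sequelizeMax = parseInt(process.env.SEQUELIZE_POOL_MAX || '10', 10);

const dbPool = new pg.Pool({ ...dbConfig, max: pgMax });

const sequelize = new Sequelize(database, user, password, {
    ...config,
    pool: { max: sequelizeMax, min: 0, acquire: 30000, idle: 10000 }
});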
