Managing syncs between databases with different permissions - couchdb

I am building an app using PouchDB and CouchDB. Each user has their own database (I have enabled the per-user option). To simplify, let's say there is also a database that aggregates data for all users: a location database. This database syncs with the user databases.
For each user database, the user has the admin role. The location database has an admin user as admin; regular users are not added as admins to this database. Each document has a userId attribute, and the sync between userdb and locationdb is filtered by userId.
Now, when I log in to the app as a user, I have permission to launch the sync between, say, localdb on PouchDB and userdb on CouchDB, since the user is admin on userdb. So far so good.
var remoteUser = new PouchDB(
  'https://domain:6984/' + 'userdb-' + hex,
  {
    auth: {
      username: 'user',
      password: 'password'
    }
  }
)

db.replicate.from(remoteUser).on('complete', function () {
  db.sync(remoteUser, { live: true, retry: true })
    .on('change', function (info) {
      dispatch('syncPrintQueue')
      console.log('sync remote user')
    }).on('pause', function () {
      console.log('user remote syncing done')
    })
})
But from the app I also want to sync userdb to locationdb. As the user I cannot do that, so I add auth as admin, and now I can launch the sync.
var remoteLocation = new PouchDB(
  'https://domain:6984/' + 'locationdb-' + locationHex,
  {
    auth: {
      username: 'admin',
      password: 'password'
    }
  }
)

remoteUser.replicate.from(remoteLocation).on('complete', function () {
  remoteUser.sync(remoteLocation, {
    live: true,
    retry: true
  })
    .on('change', function (info) {
      console.log('location remote syncing')
    }).on('pause', function () {
      console.log('location remote syncing done')
    })
})
dispatch('syncCompany', remoteLocation)
},
The problem is that now I'm logged in as admin in the current session.
What I am doing right now is storing user info in localStorage right after login, and using that for filtering or validating on Couch, instead of the user returned from checking the current session, which would allow me to filter correctly server side.
Adding each user to the general database as admin is not an option. So the only idea I have left is to move all the syncs and authorization to a middleware in, say, Rails or Node.
Is there a solution within couchdb to manage this situation?

Standard persistent replication historically didn't scale to replicating infrequent updates from (or to) many databases. It has been improving, with support for cycling through many permanent replications using a scheduler, so you could look at that to see whether it is now sufficient.
The interim solution has been the Spiegel project, which has listener processes that observe the _global_changes feed and match database names by regex pattern to identify which source databases have changed and need to be re-examined by one of its change or replicator processes.
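If the scheduler route turns out to be sufficient, each user-to-location replication can be declared as a document in the `_replicator` database instead of being driven from the client with admin credentials. A minimal sketch of what such a document might look like, assuming a hypothetical `app/by_user` design-doc filter on `userId` (the filter name and field names are not from the question):

```javascript
// Sketch only: build a scheduler-managed, filtered replication document.
// One such doc per user database lets the server own the admin-credentialed
// sync, so the browser never needs the admin password.
function buildReplicationDoc(userHex, userId) {
  return {
    _id: 'userdb-' + userHex + '-to-locationdb', // doc id in the _replicator database
    source: 'https://domain:6984/userdb-' + userHex,
    target: 'https://domain:6984/locationdb',
    filter: 'app/by_user',                       // hypothetical server-side filter on userId
    query_params: { userId: userId },
    continuous: true                             // picked up by the replication scheduler
  };
}
```

The admin credentials then live only in the `_replicator` docs (or the server process that writes them), not in the client session.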

Related

PostgreSQL Row Level Security in Node JS

I have a database which is shared amongst multiple tenants/users. However, I want to add row-level-security protection so that any given tenant can only see those rows that belong to them.
As such, for each tenant I have a user in PostgreSQL, such as "client_1" and "client_2". In each table, there is a column "tenant_id", the default value of which is "session_user".
Then, I have row level security as such:
CREATE POLICY policy_warehouse_user ON warehouse FOR ALL
TO PUBLIC USING (tenant_id = current_user);
ALTER TABLE warehouse ENABLE ROW LEVEL SECURITY;
This works great, and if I set the user "SET ROLE client_1" I can only access those rows in which the tenant_id = "client_1".
However, I am struggling with how best to set this up in the Node.js back-end. Importantly, for each tenant, such as "client_1", there can be multiple users connected: several users on our system, all of whom work at company X, will connect to the database as "client_1".
What I am currently doing is this:
let config = {
  user: 'test_client2',
  host: process.env.PGHOST,
  database: process.env.PGDATABASE,
  max: 10, // default value
  password: 'test_client2',
  port: process.env.PGPORT,
}

const pool = new Pool(config);
const client = await pool.connect()
await client.query('sql...')
client.release();
I feel like this might be a bad solution, especially since I am creating a new Pool each time a query is executed. So the question is, how can I best ensure that each user executes queries in the database using the ROLE that corresponds to their tenant?
Maybe you can have a setupDatabase method that returns the pool for your app; it would be called once at app bootstrap:
function setUpDatabase() {
  let config = {
    user: 'test_client2',
    host: process.env.PGHOST,
    database: process.env.PGDATABASE,
    max: 10, // default value
    password: 'test_client2',
    port: process.env.PGPORT,
  }
  const pool = new Pool(config);
  return pool
}
and then, when you identify the tenant before executing the query, you set the role:
// SET ROLE cannot take $1-style bind parameters, so the tenant name
// must be validated/escaped rather than passed as a query parameter
await client.query('SET ROLE ' + currentTenant);
// my assumption is that the next line will then run under the role set above
await client.query('select * from X where Y');
This is just a suggestion, I haven't tested it.
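Building on that suggestion, here is a hedged sketch of the per-request flow: one shared pool created at bootstrap, and a helper that checks out a client, sets the tenant role, runs the query, and resets the role before releasing. Since `SET ROLE` cannot use bind parameters, the tenant name is validated and quoted as an identifier. The pool is passed in as an argument (in a real app it would come from the `pg` package's `Pool`):

```javascript
// Quote a role name as a SQL identifier after validating it against a
// whitelist pattern, since SET ROLE does not accept $1-style placeholders.
function quoteIdent(name) {
  if (!/^[a-z_][a-z0-9_]*$/.test(name)) throw new Error('invalid role name');
  return '"' + name + '"';
}

// pool: any pg-style pool (pool.connect() -> client with query()/release()).
// Untested sketch: run one query under the tenant's role, then reset,
// so the pooled connection is returned in a neutral state.
async function queryAsTenant(pool, tenant, sql, params) {
  const client = await pool.connect();
  try {
    await client.query('SET ROLE ' + quoteIdent(tenant));
    return await client.query(sql, params);
  } finally {
    await client.query('RESET ROLE'); // never leak a role into the next checkout
    client.release();
  }
}
```

The key point is that the pool connects once as a low-privilege login role, and the tenant role is assumed per checkout rather than per `Pool`.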

How to manage user authentication for hyperledger using msp and fabric-ca-client?

I am developing an application on Fabric 1.3. I have built a network in a multi-node setup, connected the peers, instantiated chaincode, and have my network up and ready for invocations and queries.
Now, I am thinking of making a log-in portal through which a user can register/enroll and perform invokes/queries. All my peers and the orderer are on the cloud, and I am planning to provide this log-in feature using the Node SDK exposed on a cloud instance.
I went through the official doc:
https://hyperledger-fabric-ca.readthedocs.io/en/latest/users-guide.html#registering-a-new-identity
I can see that we need the fabric-ca component to register users and enroll them for queries. Upon enrollment, we get cert files under ~/.hfc-key-store.
Now I want to understand how should I go ahead with my flow.
User signs up on network:
fabric_ca_client.register({enrollmentID: 'user1', affiliation: 'org1.department1'}, admin_user)
User log in with his secret:
fabric_ca_client.enroll({ enrollmentID: 'user1', enrollmentSecret: secret })
.then((enrollment) => {
  console.log('Successfully enrolled member user "user1"');
  return fabric_client.createUser({
    username: 'user1',
    mspid: 'Org1MSP',
    cryptoContent: { privateKeyPEM: enrollment.key.toBytes(), signedCertPEM: enrollment.certificate }
  });
}).then((user) => {
  member_user = user;
  return fabric_client.setUserContext(member_user);
})
Invoke/Query as user1:
var store_path = path.join(os.homedir(), '.hfc-key-store');
Fabric_Client.newDefaultKeyValueStore({ path: store_path })
.then((state_store) => {
  // assign the store to the fabric client
  fabric_client.setStateStore(state_store);
  var crypto_suite = Fabric_Client.newCryptoSuite();
  // use the same location for the state store (where the users' certificates are kept)
  // and the crypto store (where the users' keys are kept)
  var crypto_store = Fabric_Client.newCryptoKeyStore({ path: store_path });
  crypto_suite.setCryptoKeyStore(crypto_store);
  fabric_client.setCryptoSuite(crypto_suite);
  // get the enrolled user from persistence; this user will sign all requests
  return fabric_client.getUserContext('user1', true);
}).then((user_from_store) => {
  if (user_from_store && user_from_store.isEnrolled()) {
    console.log('Successfully loaded user1 from persistence');
    member_user = user_from_store;
  } else {
    throw new Error('Failed to get user1.... run registerUser.js');
  }
})
Now, what should I do when a user logs out? Delete the ~/.hfc-key-store certs? Since these certs are stored on the server side where the Node script is running, that doesn't make sense.
Also, is my flow correct, or is there a better way to accomplish my objective?
I had a similar login implementation to do. As you said, I created certificates for each registered user and also stored basic user information in MongoDB.
The flow I went with is that once the user is registered, the user certificate is created and their login credentials (username and password) are stored in MongoDB.
When the user tries to log back in, I check MongoDB as well as the certificate to see whether the user has already registered. Once logged in, the user holds an auth token which they can then use to interact with the fabric client.
When an identity (certs and keys) is issued by the CA, it should be persisted for that particular user for future interactions (by saving it in a client wallet or by other means). If you delete it, there will be a re-enroll process on each login, which will also slow things down.
To resolve it:
1. Create separate logic, like a JWT token, for login and session management.
2. Save the keys and certs in a server directory (not the best way, but it will work for now).
Let me know if that satisfies your query.
Maybe a little late, but it may help someone. My approach is to do what you do in the login inside a register method, where I return the certificate and the private key. Then on login the user needs to provide both the certificate and the private key that were generated (and also the certificate and private key for the TLS connection). With this information I recreate the identity and store it in a MemoryWallet, and with this MemoryWallet I can create the gateway and connect to the blockchain.
It would be something like this:
const identity = X509WalletMixin.createIdentity(mspId, certificate, privateKey);
const identityTLS = X509WalletMixin.createIdentity(mspId, certificateTLS, privateKeyTLS);
const wallet = new InMemoryWallet();
await wallet.import(userId, identity);
await wallet.import(userId + '-tls', identityTLS);
const gateway = new Gateway();
await gateway.connect(ccpPath, { wallet, identity: userId, discovery: { enabled: true, asLocalhost: false } });
const client = gateway.getClient();
const userTlsCert = await wallet.export(userId + '-tls') as any;
client.setTlsClientCertAndKey(userTlsCert.certificate, userTlsCert.privateKey);
Hope it helps

Storing firebase authenticated user info (such as FirstName, LastName, Gender and etc.) to our own database

I use user registration and login through Firebase Authentication. The moment the user registers or logs in, I want to store additional user information (such as first name, last name, gender, etc.) in my own database.
Here is what I am doing, but what happens when the REST call to store the new user information fails? The second time they log in they are no longer a new user, so the profile is never created.
loginWithFacebook() {
  const provider = new firebase.auth.FacebookAuthProvider();
  provider.addScope('user_birthday');
  provider.addScope('user_friends');
  provider.addScope('user_gender');
  return new Promise<any>((resolve, reject) => {
    this.afAuth.auth
      .signInWithPopup(provider) // a call made to sign up via fb
      .then(res => {
        if (res) {
          resolve(res);
          if (res.additionalUserInfo.isNewUser) { // creating a profile only if he is a new user
            this.createProfile(res); // a call to store the response in the db
          }
          this.setTokenSession(res.credential.accessToken);
        }
      }, err => {
        console.log(err);
        reject(err);
      })
  })
}
How to always ensure that the new user information is stored in my own db?
I don't know Angular, but I think you need to use the user's uid as a key in the Firebase database. This uid is generated when a user first logs into your app and stays the same for the entire life of the account. So when a user logs in, you can get the uid and check in the Firebase DB whether there is an entry for it; if not, it means you have a new user, and you should create an entry in the DB.
More details here: https://firebase.google.com/docs/auth/web/manage-users.
If you want to see how to find UID : Is there any way to get Firebase Auth User UID?
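The uid-keyed check can be sketched like this; `db` stands in for whatever store you use (Realtime Database, Firestore, or your own REST backend), and the profile field names are assumptions. Because the existence check runs on every login rather than relying on `isNewUser`, a failed write is simply retried on the next login:

```javascript
// Untested sketch: ensure a profile exists for the signed-in user,
// keyed by the Firebase uid, regardless of whether this login is "new".
async function ensureProfile(db, authResult) {
  const uid = authResult.user.uid;
  const existing = await db.get('users/' + uid);
  if (existing) return existing;          // returning user with a stored profile

  const info = (authResult.additionalUserInfo &&
                authResult.additionalUserInfo.profile) || {};
  const profile = {
    firstName: info.first_name || '',     // field names are provider-specific
    lastName: info.last_name || '',
    gender: info.gender || ''
  };
  await db.set('users/' + uid, profile);  // if this write fails, the next login retries it
  return profile;
}
```

The design choice here is idempotency: creating the profile is safe to attempt on every login, so a transient failure can never leave the account permanently without one.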

Is there a way to trigger a Firebase Function when the user node is updated?

I have two nodes that contain a user's associated email. Whenever the user resets their authentication email, it is updated in the Firebase Authentication user node, as well as two additional nodes in my database via a fan-out technique. Each time a user updates their Authentication email they're sent an email address change notification which allows them to revert the email address change.
The problem I am running into is that if a user reverts these changes, the proper email address is no longer reflected in the database and it's left in an inconsistent state. My solution would be to have these nodes automatically updated via Cloud Function whenever a user changes their authentication email.
Is this possible? If so, what would I use to implement it? If not, is there another workaround that anyone knows of to keep my database in a consistent state for Authentication email changes?
After quite a few hours of sleuthing, I figured out that this is possible through the Firebase Admin SDK. See https://firebase.google.com/docs/auth/admin/manage-users for more details.
Basically, you make a Cloud Function which uses the Admin SDK to reset the email without sending that pesky notification to the user and, on success, uses server-side fan-out to update the database.
For example:
const functions = require('firebase-functions');
const admin = require("firebase-admin");

// Initializes app when using Firebase Cloud Functions
admin.initializeApp(functions.config().firebase);
const databaseRef = admin.database().ref();

exports.updateEmail = functions.https.onRequest((request, response) => {
  // Cross-origin headers
  response.setHeader("Access-Control-Allow-Methods", "GET, POST, PUT, OPTIONS");
  response.setHeader("Access-Control-Allow-Origin", "YOUR-SITE-URL");
  response.setHeader("Access-Control-Allow-Headers", "Content-Type");

  const email = request.body.email;
  const uid = request.body.uid;

  // Update auth user
  admin.auth().updateUser(uid, {
    "email": email
  })
  .then(function() {
    // Update database nodes on success
    let fanoutObj = {};
    fanoutObj["/node1/" + uid + "/email/"] = email;
    fanoutObj["/node2/" + uid + "/email/"] = email;
    // Update the nodes in the database
    databaseRef.update(fanoutObj).then(function() {
      // Success
      response.send("Successfully updated email.");
    }).catch(function(error) {
      // Error
      console.log(error.message);
      // TODO: Roll back user email update
      response.send("Error updating email: " + error.message);
    });
  })
  .catch(function(error) {
    console.log(error.message);
    response.send("Error updating email: " + error.message);
  });
});
This technique can be used for user information changes where you have to perform some task afterwards, since Firebase does not yet have a Cloud Function trigger that runs when a user's profile data changes, as noted by Doug Stevenson in the comments.

How should ACL work in a REST API?

A REST API is written in ExpressJs 4.x.x / NodeJS.
Let's assume an interface :
app.delete('/api/v1/users/:uid', function (req, res, next) {
...
}
So with that interface users can be deleted.
Let's assume there are two Customers in the system, and each Customer has Users. A User can have the privilege of deleting other Users via a role named CustomersAdmin.
But such a User should only be able to delete Users from his own Company (Customer).
So, let's bring ACL into the scene, assuming that in our ACL we can define roles, resources, and permissions. (Code is adapted from http://github.com/OptimalBits/node_acl#middlware.)
app.delete('/api/v1/users/:uid', acl.protect(), function (req, res, next) {
  // ? Delete user with uid = uid, or check
  // ? first whether the current user is in the same company as user uid
}
There are two things to consider: one is protecting the route from persons without permission to HTTP DELETE on that route (/api/v1/users/:uid), and the other is that a person with role CustomersAdmin shall not be allowed to delete Users belonging to another Customer.
Is ACL supposed to do both? Or only to protect the route /api/v1/users?
So, would I use it like this:
acl.allow([
  {
    roles: 'CustomersAdmin',
    allows: [
      { resources: '/api/v1/users', permissions: 'delete' }
    ]
  }
])

app.delete('/api/v1/users/:uid', acl.middleware(3), function (req, res, next) {
  // Make sure uid is a User from the same Customer as the request is from (req.session.userid)
}
This would allow every User with Role CustomersAdmin to delete whatever user he wants.
Or is it preferable to define each possible Users route as a Resource and define multiple Roles which can interact with them?
acl.allow([
  {
    roles: 'CustomersAdminOne',
    allows: [
      { resources: ['/api/v1/users/1', '/api/v1/users/2'], permissions: 'delete' }
    ]
  },
  {
    roles: 'CustomersTwoAdmin',
    allows: [
      { resources: ['/api/v1/users/3', '/api/v1/users/4'], permissions: 'delete' }
    ]
  }
])

app.delete('/api/v1/users/:uid', acl.middleware(), function (req, res, next) {
  // no logic needed to be sure the request is from a user within the same customer
}
The way I solved this was to create a role per user. I use a Mongoose post-save hook:
acl.addUserRole(user._id, ['user', user._id]);
Then in a post save hook for a resource I do this:
acl.allow(['admin', doc.user._id], '/album/' + doc._id, ['*']);
acl.allow(['guest', 'user'], '/album/' + doc._id, ['get']);
You can then use the isAllowed method to check if req.user has the right permissions.
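Tying the per-user roles back to the route, the check can be done explicitly in middleware. An untested sketch, assuming a configured node_acl instance (`acl`) and `req.session.userid` as in the question; the resource string must match what was registered in the post-save hook:

```javascript
// Express-style middleware: let the request through only when the session
// user is permitted the given action on the uid-specific resource.
function protectResource(acl, permission) {
  return function (req, res, next) {
    const resource = '/api/v1/users/' + req.params.uid;
    acl.isAllowed(req.session.userid, resource, permission, function (err, allowed) {
      if (err) return next(err);
      if (!allowed) return res.status(403).send('Forbidden');
      next(); // permission was granted only to same-customer admins at grant time
    });
  };
}
```

With `app.delete('/api/v1/users/:uid', protectResource(acl, 'delete'), handler)`, the handler needs no same-customer logic, because the grants themselves encode it.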
