Prevent duplicated user id using a locked thread in Next.js - node.js

I use MongoDB to store user data. The user id increments (1, 2, 3, 4, ...) each time a new user registers.
I have the following code to generate the user id. "users" is the name of the collection where I store the user data.
// generate new user id
let uid;
const collections = await db.listCollections().toArray();
const collectionNames = collections.map(collection => collection.name);
if (collectionNames.indexOf("users") === -1) {
    uid = 1;
} else {
    const newest_user = await db.collection("users").find({}).sort({ "_id": -1 }).limit(1).toArray();
    uid = newest_user[0]["_id"] + 1;
}
user._id = uid;
// add and save user
db.collection("users").insertOne(user).catch((error) => {
    throw error;
});
One concern I have is that when two users register at the same time, both requests will read the same maximum user id and generate the same new user id. One way to prevent this would be to lock the critical section, but I don't think Node.js and Next.js support multi-threading.
What alternatives do I have to solve this problem?
In addition, _id will be the field for the uid. Does that make a difference, since _id can't be duplicated?

Why not have the database generate the auto-incrementing ID? https://www.mongodb.com/basics/mongodb-auto-increment
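For reference, the pattern described at that link keeps a dedicated counters collection and increments it atomically. A minimal sketch with the Node.js driver (the "counters" collection and "seq" field names are assumptions, and the result shape shown is the driver v4/v5 one; newer drivers return the document directly):
// Atomically fetch the next user id from a "counters" collection.
// The upsert creates the counter document on first use.
async function getNextUserId(db) {
    const result = await db.collection("counters").findOneAndUpdate(
        { _id: "userId" },
        { $inc: { seq: 1 } },
        { upsert: true, returnDocument: "after" }
    );
    return result.value.seq; // driver v4/v5; v6+ returns the document itself
}

user._id = await getNextUserId(db);
await db.collection("users").insertOne(user);
Because the increment is a single atomic operation on the server, two concurrent registrations can never receive the same value.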

One idea I have is to use a transaction, which can solve the concurrency issue. Transactions obey the ACID rules, so the writes to the database from concurrent requests run in isolation.
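A minimal sketch of that idea, assuming client is the connected MongoClient and the deployment is a replica set (MongoDB transactions require one). Since _id is unique, a concurrent request that computed the same id fails on insert rather than creating a duplicate:
// Read the current max id and insert the new user inside one transaction.
// withTransaction retries the callback on transient transaction errors.
const session = client.startSession();
try {
    await session.withTransaction(async () => {
        const newest = await db.collection("users")
            .find({}, { session }).sort({ _id: -1 }).limit(1).toArray();
        user._id = newest.length ? newest[0]._id + 1 : 1;
        await db.collection("users").insertOne(user, { session });
    });
} finally {
    await session.endSession();
}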

Related

getting error on associating token in hashgraph/sdk. "publicKey._toProtobufSignature is not a function"

// TOKEN ASSOCIATION WITH ALICE'S ACCOUNT
let associateAliceTx = await new TokenAssociateTransaction()
    .setAccountId(aliceId)
    .setTokenIds([tokenId])
    .freezeWith(global.client)
    .sign(aliceKey);
// SUBMIT THE TRANSACTION
let associateAliceTxSubmit = await associateAliceTx.execute(global.client);
// GET THE RECEIPT OF THE TRANSACTION
let associateAliceRx = await associateAliceTxSubmit.getReceipt(global.client);
// LOG THE TRANSACTION STATUS
console.log(`- Token association with Alice's account: ${associateAliceRx.status} \n`);
The code is above. I am trying to associate and transfer custom tokens with another user, but I get the same error for both TokenAssociateTransaction and TransferTransaction. What's the problem here?
Instead of aliceKey, write PublicKey.fromString(aliceKey).
You have to pass the tokenId that you want to associate with Alice's account. Otherwise, keep an account that creates a token and pass that to setTokenIds (ideally it is the NFT collection id; inside the collection you get a serial number, i.e. the token inside the NFT). You can only associate that token and transfer the same.
Check the .env file to identify the id that is making this transaction.
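A minimal sketch of that fix, assuming aliceKey came out of the .env file as a raw string (ALICE_PRIVATE_KEY is a hypothetical variable name). Note that .sign() on a frozen transaction expects a key object, and signing in particular needs the account's private key, so PrivateKey.fromString is used here rather than passing the string directly:
const { PrivateKey, TokenAssociateTransaction } = require("@hashgraph/sdk");

// Hypothetical env variable; convert the stored string into a key object
const alicePrivateKey = PrivateKey.fromString(process.env.ALICE_PRIVATE_KEY);

const associateAliceTx = await new TokenAssociateTransaction()
    .setAccountId(aliceId)
    .setTokenIds([tokenId])
    .freezeWith(global.client)
    .sign(alicePrivateKey); // a PrivateKey instance, not a raw string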

Cloud Functions and Cloud Firestore USER ASSOCIATION FUNCTION

Shortly, imagine I have a Cloud Firestore DB where I store some user data, such as email, geo-location data (as a geopoint) and some other things.
In Cloud Functions I have "myFunc", which tries to "link" two users based on a geo-query (I use GeoFirestore for it).
Now everything works well, but I cannot figure out how to avoid this kind of situation:
User A calls myFunc trying to find a person to be associated with, and finds User B as a possible one.
At the same time, User B calls myFunc too, trying to find a person to be associated with, BUT finds User C as a possible one.
In this case User A would be associated with User B, but User B would be associated with User C.
I already have a field called "associated" set to FALSE on each user initialization, that becomes TRUE whenever a new possible association has been found.
But this code cannot guarantee the right association if User A and User B trigger the function at the same time, because when the function triggered by User A finds User B, B's "associated" field will still be false, since B is still searching and has not found anybody yet.
I need to find a solution, otherwise I'll end up with wrong associations (User A pointing at User B, but User B pointing at User C).
I also thought about adding a snapshotListener to the user who is searching, so that if another user updated the searching user's document, I could terminate the function, but I'm not really sure it would work as expected.
I'd be incredibly grateful if you could help me with this problem.
Thanks a lot!
Cheers,
David
HERE IS MY CODE:
exports.myFunction = functions.region('europe-west1').https.onCall(async (data, context) => {
    const userDoc = await firestore.collection('myCollection').doc(context.auth.token.email).get();
    if (!userDoc.exists) {
        return null;
    }
    const userData = userDoc.data();
    if (userData.associated) { // IF THE USER HAS ALREADY BEEN ASSOCIATED
        return null;
    }
    const latitude = userData.g.geopoint["latitude"];
    const longitude = userData.g.geopoint["longitude"];
    // Create a GeoQuery based on a location
    const query = geocollection.near({ center: new firebase.firestore.GeoPoint(latitude, longitude), radius: userData.maxDistance });
    // Get query (as Promise)
    let otherUser = []; // ARRAY TO SAVE THE FIRST USER FOUND
    query.get().then((value) => {
        // CHECK EVERY USER DOC
        value.docs.map((doc) => {
            doc['data'] = doc['data']();
            // IF THE USER HAS NOT BEEN ASSOCIATED YET
            if (!doc['data'].associated) {
                // SAVE ONLY THE FIRST USER FOUND
                if (otherUser.length < 1) {
                    otherUser = doc['data'];
                }
            }
            return null;
        });
        return value.docs;
    }).catch(error => console.log("ERROR FOUND: ", error));
    // HERE I HAVE TO RETURN AN .update() OF DATA ON 2 DOCUMENTS, IN ORDER TO UPDATE
    // THE "associated" AND "userAssociated" FIELDS OF THE USER WHO WAS SEARCHING AND THE USER FOUND
    return ........update({
        associated: true,
        userAssociated: otherUser.name
    });
}); // END FUNCTION
You should use a Transaction in your Cloud Function. Since Cloud Functions use the Admin SDK in the back-end, transactions in a Cloud Function use pessimistic concurrency controls.
Pessimistic transactions use database locks to prevent other operations from modifying data.
See the docs for more details. In particular, you will read that:
In the server client libraries, transactions place locks on the documents they read. A transaction's lock on a document blocks other transactions, batched writes, and non-transactional writes from changing that document. A transaction releases its document locks at commit time. It also releases its locks if it times out or fails for any reason.
When a transaction locks a document, other write operations must wait for the transaction to release its lock. Transactions acquire their locks in chronological order.
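A minimal sketch of the association done inside a transaction (not the full geo-query logic; the collection and field names are taken from the question):
const admin = require('firebase-admin');
const db = admin.firestore();

async function associate(searchingUserEmail, candidateUserEmail) {
    const meRef = db.collection('myCollection').doc(searchingUserEmail);
    const otherRef = db.collection('myCollection').doc(candidateUserEmail);
    return db.runTransaction(async (t) => {
        // All reads happen first, and lock both documents
        const [meSnap, otherSnap] = await t.getAll(meRef, otherRef);
        // Re-check both flags inside the transaction: if B was associated
        // by a concurrent call in the meantime, abort instead of overwriting
        if (meSnap.data().associated || otherSnap.data().associated) {
            return null;
        }
        t.update(meRef, { associated: true, userAssociated: otherSnap.data().name });
        t.update(otherRef, { associated: true, userAssociated: meSnap.data().name });
        return candidateUserEmail;
    });
}
Because the second function invocation must wait for the first transaction's locks, it will see the updated "associated" flag and abort instead of producing the A→B, B→C situation.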

Insert 1 million rows in postgresql (nodejs)

I'm creating an app in react-native with a nodejs backend. The app is almost done, and I'm now stress testing the backend.
In my postgresql database, I have a table called notifications to store all the notifications a user receives.
In my app, a user can follow pages. When a page posts a new message, I want to send a notification to all users following that page. Every user should receive an individual notification.
Let's say a page is followed by 1 million users, and the page posts a new message: this means 1 million notifications (eg. 1 million rows) should be inserted into the database.
My solution (for now) is to chunk the array of user IDs (of the users following the page) into chunks of 1000 user IDs each, and run an insert query for every chunk.
const db = require('./db');
const format = require('pg-format');

const userIds = [1, 2, 3, 4, 5, ..., 1000000];
// split up the user IDs array into chunks of 1000 user IDs each;
// chunkArray is a function that splits an array into multiple arrays of x items, here x = 1000
const chunks = chunkArray(userIds, 1000);
// loop over each chunk (a plain forEach callback can't await, so use for...of in an async function)
for (const chunk of chunks) {
    // create an array of 1000 objects, each containing a user ID, notification type
    // and page ID (for inserting into the database)
    const array = chunk.map(userId => ({ userId, type: 'post', pageId: _PAGE_ID_ }));
    // create and run the query
    const query = format("INSERT INTO notifications (userId, type, pageId) VALUES %L", array);
    const result = await db.query(query);
}
I'm using node-postgres for the database connection, and I'm creating a connection pool. I fetch one client from the pool, so only one connection is used for all the queries in the loop.
This all works, but inserting 1 million rows takes a few minutes. I'm not sure this is the right way to do this.
Another solution I came up with is using "general notifications". When a page updates a post, I only insert 1 notification into the notifications table, and when I query for all notifications for a specific user, I check which pages the user is following and fetch all the general notifications of those pages with the query. Would this be a better solution? It would leave me with a LOT fewer notification rows, and I think it would increase performance.
Thank you for all responses!
I'm trying to implement my other solution. When a page updates a post, I insert only one notification without a user ID (because it has no specific destination), but with the page ID.
When I fetch all the notifications for a user, I first check for all the notifications with that user ID, and then for all notifications without a user ID but with a page ID of a page that the user is following.
I think this is not the easiest solution, but it will reduce the number of rows and if I do a good job with indexes and stuff, I think I'm able to write a pretty performant query.
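A hypothetical sketch of such a fetch query (table and column names assumed): the user's direct notifications, plus the "general" page-wide ones from pages they follow:
// Direct notifications for the user, plus "general" notifications
// (no userId) from pages the user follows
const query = `
    SELECT n.* FROM notifications n
    WHERE n.userId = $1
    UNION ALL
    SELECT n.* FROM notifications n
    JOIN followers f ON f.pageId = n.pageId
    WHERE n.userId IS NULL AND f.userId = $1`;
const { rows } = await db.query(query, [userId]);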
Without getting into which solution would be better, one way to solve it could be this, provided that you keep all the pages and followers in the same database.
INSERT INTO notifications (userId, type, pageId)
SELECT users.id, 'post', pages.id
FROM pages
JOIN followers ON followers.pageId = pages.id
JOIN users ON followers.userId = users.id
WHERE pages.id = _PAGE_ID_
This would allow the DB to handle everything, which should speed up the insert since you don't need to send each individual row from the server.
If you don't have the users/pages in the same DB then it's a bit more tricky.
You could prepare a CSV file, upload it to the database server, and use the COPY command. If you don't have access to the server, you might be able to stream the data directly, as the COPY command can read from stdin (that depends on the library; I'm not familiar with node-postgres, so I can't tell if it's possible).
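For what it's worth, node-postgres can stream into COPY ... FROM STDIN via the pg-copy-streams package. A sketch, assuming the notifications table from the question:
const { from: copyFrom } = require('pg-copy-streams');
const { Readable } = require('stream');
const { pipeline } = require('stream/promises');

async function copyNotifications(client, userIds, pageId) {
    // COPY reads CSV rows directly from the stream, bypassing per-row INSERTs
    const copyStream = client.query(
        copyFrom('COPY notifications (userId, type, pageId) FROM STDIN WITH (FORMAT csv)')
    );
    // Generate one CSV line per user instead of building a giant string in memory
    const rows = Readable.from(
        (function* () {
            for (const id of userIds) yield `${id},post,${pageId}\n`;
        })()
    );
    await pipeline(rows, copyStream);
}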
Alternatively, you can do everything in one transaction by issuing a BEGIN before you do the inserts. This is the slowest alternative, but it saves the overhead of Postgres creating an implicit transaction for each statement. Just don't forget to COMMIT afterwards. The library might even have explicit ways to create transactions and insert data through them.
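A minimal sketch of that explicit-transaction variant with node-postgres:
// One explicit transaction around all the chunked inserts
await client.query('BEGIN');
try {
    for (const chunk of chunks) {
        const rows = chunk.map(userId => ({ userId, type: 'post', pageId: _PAGE_ID_ }));
        await client.query(format('INSERT INTO notifications (userId, type, pageId) VALUES %L', rows));
    }
    await client.query('COMMIT');
} catch (err) {
    await client.query('ROLLBACK'); // undo everything on any failure
    throw err;
}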
That said, I would probably do a variation of your second solution since it would create less rows in the DB, but that depends on your other requirements, it might not be possible if you need to track notifications or perform other actions on them.
Use async.eachOfLimit to insert X chunks in parallel.
In the following example you will insert 10 chunks in parallel:
const async = require('async');
const format = require('pg-format');

const userIds = [1, 2, 3, 4, 5, ..., 1000000];
const chunks = chunkArray(userIds, 1000);
const BATCH_SIZE_X = 10;

async.eachOfLimit(chunks, BATCH_SIZE_X, async (c, i) => {
    // with an async iteratee no callback is passed; the returned promise is awaited
    const rows = c.map(e => ({ userId: e, type: 'post', pageId: _PAGE_ID_ }));
    const query = format("INSERT INTO notifications (userId, type, pageId) VALUES %L", rows);
    await db.query(query);
}, function (err) {
    if (err) {
        // at least one insert failed
    } else {
        // all chunks inserted successfully
    }
});

How to update a quantity in another document when creating a new document in the firebase firestore collection?

When I create a new document in the note collection, I want to update the quantity in the info document. What am I doing wrong?
exports.addNote = functions.region('europe-west1').firestore
    .collection('users/{userId}/notes').onCreate((snap, context) => {
        const uid = admin.user.uid.toString();
        var t;
        db.collection('users').doc('{userId}').collection('info').doc('info').get((querySnapshot) => {
            querySnapshot.forEach((doc) => {
                t = doc.get("countMutable").toString();
            });
        });
        let data = {
            countMutable: t + 1;
        };
        db.collection("users").doc(uid).collection("info").doc("info").update({ countMutable: data.get("countMutable") });
    });
You have... a lot going on here. A few problems:
You can't trigger firestore functions on collections, you have to supply a document.
It isn't clear you're being consistent about how to treat the user id.
You aren't using promises properly (you need to chain them, and return them out of the function if you want them to execute properly).
I'm not clear about the relationship between the userId context parameter and the uid you are getting from the auth object. As far as I can tell, admin.user isn't actually part of the Admin SDK.
You risk multiple function calls doing an increment at the same time giving inconsistent results, since you aren't using a transaction or the increment operation. (Learn More Here)
The document won't be created if it doesn't already exist. Maybe this is ok?
In short, this all means you can do this a lot more simply.
This should do you though. I'm assuming that the uid you actually want is actually the one on the document that is triggering the update. If not, adjust as necessary.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

exports.addNote = functions.firestore.document('users/{userId}/notes/{noteId}').onCreate((snap, context) => {
    const uid = context.params.userId;
    return db.collection("users").doc(uid).collection("info").doc("info").set({
        countMutable: admin.firestore.FieldValue.increment(1)
    }, { merge: true });
});
If you don't want to create the info document if it doesn't exist, and instead you want to get an error, you can use update instead of set:
return db.collection("users").doc(uid).collection("info").doc("info").update({
    countMutable: admin.firestore.FieldValue.increment(1)
});

How can we remove specific user's session in ServiceStack?

An admin can disable or suspend a user's account.
We can check whether the user is disabled at login, but checking the DB for each request is not a good option.
So we want to remove the user's session when the admin disables their account.
How can we achieve this?
If you know or have kept the sessionId you can remove a session from the cache with:
using (var cache = TryResolve<ICacheClient>())
{
var sessionKey = SessionFeature.GetSessionKey(sessionId);
cache.Remove(sessionKey);
}
But ServiceStack doesn't keep a map of all users' session ids itself. One way to avoid DB lookups on each request is, when disabling the account, to keep a record of the disabled user ids, which you can later validate in a global Request Filter to ensure the user isn't locked.
The best place to store the locked user ids is in the cache; that way the visibility and lifetime of the locked user ids live in the same cache that stores the sessions. You can use a custom cache key to record locked user ids, e.g:
GlobalRequestFilters.Add((req, res, dto) =>
{
var session = req.GetSession();
using (var cache = TryResolve<ICacheClient>())
{
if (cache.Get<string>("locked-user:" + session.UserAuthId) != null)
{
var sessionKey = SessionFeature.GetSessionKey(session.Id);
cache.Remove(sessionKey);
req.Items.Remove(ServiceExtensions.RequestItemsSessionKey);
}
}
});
This will remove the locked users' sessions the next time they try to access ServiceStack, forcing them to log in again, at which point they will notice they've been locked out.
A new RemoveSession API was added in this commit which makes this a little nicer (from v4.0.34+):
if (cache.Get<string>("locked-user:" + session.UserAuthId) != null)
req.RemoveSession(session.Id);
