I am trying to update some Firestore documents in a batch in Nodejs. Some of the fields I'm updating are nested Map fields with periods in their names that are generated dynamically. I understand this has been covered before and the solution is:
var email = 'test#email.com';
var myPath = new admin.firestore.FieldPath('email', email);
batch.update(db.collection('collection').doc('document'), myPath, admin.firestore.FieldValue.delete());
This would delete the field "email.'test#email.com'". However, I'm trying to update multiple fields like this:
var email = 'test#email.com';
var myPath = new admin.firestore.FieldPath('email', email);
var updateObject = {[myPath]: admin.firestore.FieldValue.delete()};
updateObject = {...updateObject, count: admin.firestore.FieldValue.increment(1)};
batch.update(db.collection('collection').doc('document'), updateObject);
When I try this, the count field is updated, but the nested email field is unchanged. I'm assuming there is some issue with how I'm getting the FieldPath object in there. All the examples I can find only show updating one field at a time. There are also cases where I'll need to update multiple nested fields (such as two fields in the email map). How should this be done correctly?
I just ran this tiny test on a database on my own:
const docRef = admin.firestore().doc("68821373/i6ESA7AZwZRhPsdDHGmY");
const updates = {
  toDelete: admin.firestore.FieldValue.delete(),
  toIncrement: admin.firestore.FieldValue.increment(1)
};
docRef.update(updates);
This incremented the toIncrement field and removed the toDelete field. So the operations can be combined in a single call, although I am not sure how your code is different.
I also quickly ran a test with a batch, just in case that makes a difference:
const docRef = admin.firestore().doc("68821373/i6ESA7AZwZRhPsdDHGmY");
const batch = admin.firestore().batch();
const updates = {
  toDelete: admin.firestore.FieldValue.delete(),
  toIncrement: admin.firestore.FieldValue.increment(1)
};
batch.update(docRef, updates);
await batch.commit();
But here too, the increment and delete are both executed without problems for me.
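If the goal is to combine the FieldPath-based delete with other field updates in the same batch entry, one thing worth trying (a sketch only, reusing the db, batch, and email variables from the question) is the alternating field/value overload of WriteBatch.update(). A FieldPath used as a computed object key gets coerced to a string, so it may no longer be interpreted as the intended nested path; passing it as an argument avoids that:
const docRef = db.collection('collection').doc('document');
const emailPath = new admin.firestore.FieldPath('email', email);

// Pass field/value pairs as arguments instead of object keys, so the
// FieldPath object is preserved rather than being stringified.
batch.update(
  docRef,
  emailPath, admin.firestore.FieldValue.delete(),
  'count', admin.firestore.FieldValue.increment(1)
);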
Here's an abbreviated version of the data model for my firestore collections:
// /posts/{postId}
interface Post {
  id: string;
  lastCommentedAt: Timestamp | null;
}

// /posts/{postId}/comments/{commentId}
interface Comment {
  id: string;
  createdAt: Timestamp;
}
So I have a collection of posts, and within each post is a subcollection of comments.
I want to do the following:
When a comment is created, update the lastCommentedAt field of the parent Post document with the comment's createdAt value
When a comment is deleted, the parent Post's lastCommentedAt field may no longer be valid, so we need to fetch the Post's comments, find the most recent one, and use its createdAt to update lastCommentedAt.
I see a couple ways to do this:
In my client code, I can do the above logic inside of functions like createComment(post, comment), and deleteComment(post, comment)
Especially in the case of deletion, it seems less than ideal to require the client to fetch all of a post's comments and iterate through them just to delete one.
Would I need to use transactions for this since someone could be deleting a comment at the same time someone was creating a new one?
In my cloud functions I could create triggers on /posts/{postId}/comments/{commentId} for create and delete and do this logic on the backend (see the sketch after this list)
Is there a risk of race conditions here as well? Again, maybe I should use a transaction?
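For illustration, a rough sketch of what the create-side trigger could look like (this assumes the first-generation firebase-functions API; the delete side would additionally need the "query the latest remaining comment" logic described above):
const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

// Sketch: when a comment is created, copy its createdAt onto the parent post.
exports.onCommentCreated = functions.firestore
  .document("posts/{postId}/comments/{commentId}")
  .onCreate((snapshot, context) => {
    return admin.firestore()
      .collection("posts")
      .doc(context.params.postId)
      .update({ lastCommentedAt: snapshot.data().createdAt });
  });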
The use case for this lastCommentedAt field is that I want to be able to query for posts and sort them by the ones that have recent comments.
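For what it's worth, the sort itself would presumably be a simple ordered query (a sketch using the v9 client SDK and the same firestore handle as the snippet below):
// Hypothetical query: posts with the most recent comment activity first.
const recentlyActivePosts = query(
  collection(firestore, "posts"),
  orderBy("lastCommentedAt", "desc")
);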
Edit: possible implementation for deleteComment using a batched write. Is it actually safe to do a query for documents before the writes though?
async function deleteComment(post, comment) {
  const batch = writeBatch(firestore);
  const postRef = doc(firestore, "posts", post.id);
  const commentRef = doc(firestore, "posts", post.id, "comments", comment.id);
  const commentsCollection = collection(firestore, "posts", post.id, "comments");

  // Find the most recent comment other than the one being deleted.
  const recentCommentSnapshot = await getDocs(
    query(
      commentsCollection,
      where("id", "!=", comment.id),
      orderBy("createdAt", "desc"),
      limit(1)
    )
  );

  let lastCommentedAt = null;
  if (recentCommentSnapshot.docs.length > 0) {
    lastCommentedAt = recentCommentSnapshot.docs[0].data().createdAt;
  }

  batch.delete(commentRef);
  batch.update(postRef, { lastCommentedAt });
  await batch.commit();
}
For adding comments, you can use a batched write to ensure the comment is added and the parent document is updated with the current timestamp in a single atomic operation.
const batch = db.batch();
const postDocRef = db.collection('posts').doc('postId');
const commentRef = postDocRef.collection("comments").doc();
batch.update(postDocRef, { lastCommentedAt: FieldValue.serverTimestamp() });
batch.set(commentRef, { createdAt: FieldValue.serverTimestamp() });
await batch.commit();
When deleting comments, you might have to query the latest remaining comment using the createdAt field and then update the parent document.
const getLastComment = (postId: string) => {
  const commentsRef = db.collection('posts').doc(postId).collection('comments');
  return commentsRef.orderBy('createdAt', 'desc').limit(1).get();
}
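Tying that together with the delete might look roughly like this (only a sketch; the function name and the two-comment trick for skipping the comment being deleted are illustrative, not from the original code):
// Sketch: delete a comment and refresh the parent's lastCommentedAt atomically.
async function deleteCommentAndRefresh(postId, commentId) {
  const postRef = db.collection('posts').doc(postId);
  const commentRef = postRef.collection('comments').doc(commentId);

  // Fetch the two newest comments so the one being deleted can be skipped.
  const snapshot = await postRef.collection('comments')
    .orderBy('createdAt', 'desc')
    .limit(2)
    .get();
  const remaining = snapshot.docs.find((d) => d.id !== commentId);

  const batch = db.batch();
  batch.delete(commentRef);
  batch.update(postRef, {
    lastCommentedAt: remaining ? remaining.data().createdAt : null
  });
  await batch.commit();
}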
Something strange is going on, or I did something silly, but I have the following problem.
I have a web app with an online menu for a restaurant.
The structure of the products is as follows.
Facility->Category->Item->Name
So all item models have saved the name of the category they belong to as a string.
But sometimes you want to change the name of the category. What I wanted to do was find all the items in this category and change the name of the assigned category to the new one. Everything looked great until I saw that I had to run the controller twice before the new name was fully saved both on the category and on the items.
The category name changed immediately, but the items only picked up the new name on the second run. Weird, right?
So, what is it that you can see that I don't? In the meantime I implemented the silliest bugfix in the history of bugfixes.
Here is the controller - route.
module.exports.updateCtg = async (req, res) => {
  const { id } = req.params;
  // The silly workaround: run the whole update twice so the items pick up the new name.
  for (let i = 0; i < 2; i++) {
    category = await CategoryModel.findByIdAndUpdate(id, { ...req.body.category });
    await category.save();
    items = await ItemModel.find({});
    for (const item of items) {
      if (item.facility === category.facility) {
        item.category = category.name;
        await item.save();
      }
    }
  }
  res.render('dashboard/ctgview', { id: category._id });
}
By default, the findByIdAndUpdate function returns the found document, i.e. the document before any updates are applied.
This means that on the first run through, category is set to the original document. Since the following loop uses category.name, it is setting the category of each item to the unmodified name.
The second iteration finds the modified document, and the nested loop uses the new value in category.name.
To get this in a single pass, use
item.category = req.body.category.name;
or, if you aren't certain it will contain a new name, use
item.category = req.body.category.name || category.name;
Or perhaps, instead of a loop, use updateMany:
if (req.body.category.name) {
  await ItemModel.updateMany(
    { facility: category.facility },
    { category: req.body.category.name }
  );
}
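Another option is to ask Mongoose for the post-update document directly by passing the new: true option to findByIdAndUpdate, so category.name already holds the updated value on the first pass. A small sketch of that call:
// { new: true } makes findByIdAndUpdate resolve to the document *after* the update.
const category = await CategoryModel.findByIdAndUpdate(
  id,
  { ...req.body.category },
  { new: true }
);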
I am using GraphQL to get some data from a MongoDB database. I was making an API which, when run, saves data in the main collection but also saves it in another collection with a couple of extra fields. I was trying to delete _id from the data that I get back after saving to the main collection, but it's not working and I can't figure out why.
Here's what's happening
// Default resolver from the graphql-compose library to update a record;
// it returns the whole document in `record`.
const data = await User.getResolver('updateById').resolve({
  args: { ...rp.args }
});

const version = '4';
const NewObject = _.cloneDeep(data.record);
NewObject.version = version;
NewObject.oldId = data._id;
_.unset(NewObject, '_id'); // this doesn't work, and neither does `delete NewObject._id`

// This still prints _id, and surprisingly the _.unset inside console.log returns true,
// which should mean it successfully deleted the property.
console.log('new obj is', NewObject, version, NewObject._id, _.unset(NewObject, '_id'));
I am very confused to what I am doing wrong.
Edit: Sorry, I should have mentioned that, according to the lodash docs, _.unset returns true if it successfully deletes a property.
Turns out the _id on the returned Mongoose document is a non-configurable property. So, to do this I had to make use of the toObject() method to make an editable copy of my object.
const NewObject = _.cloneDeep(data.record).toObject();
With this I was able to delete _id.
Alternatively, lodash's _.omit also works.
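A minimal sketch of that variant (assuming data.record is the Mongoose document returned by the resolver):
// Convert the Mongoose document to a plain object first, then drop _id.
const NewObject = _.omit(data.record.toObject(), ['_id']);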
I'm trying to create a new Mongoose document first
let newTrade = new TradeModel({
  userId: userId,
  symbol: symbol
});
Then I need to send this item to another server, to get the other details
let orderReceived = await sendOrderToServer(newTrade);
And then I want to merge this in with the new document and save:
newTrade = {...newTrade, ...orderReceived}
But once I alter the original document, I lose access to the .save() method. I can't run .save() first because it's missing required fields. I really just need the Trade._id before sending to the other server, which is why I'm doing it this way. Any suggestions? Thanks.
You can use the mongoose.Types.ObjectId() constructor to create an id and then send that to your server, when the response comes back, create a document based on that.
EDIT: Adding few examples for clarity
let newTradeId = new mongoose.Types.ObjectId(); // works with or without "new"
let orderReceived = await sendOrderToServer(newTradeId);
let newTrade = new TradeModel({ _id: newTradeId, ...orderReceived }); // Create newTrade from the order received, reusing the pre-generated id
// TADA! We are done!
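Another possibility, not from the original answer: keep the original Mongoose document so .save() stays available, and merge the returned fields onto it with Document#set instead of spreading into a plain object. A rough sketch:
let newTrade = new TradeModel({ userId, symbol }); // _id is generated here
const orderReceived = await sendOrderToServer(newTrade);

newTrade.set(orderReceived); // merge the extra fields onto the same document
await newTrade.save();       // still a Mongoose document, so .save() works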
I am running an iOS app where I display a list of users that are currently online.
I have an API endpoint where I return 10 (or N) users randomly, so that you can keep scrolling and always see new users. Therefore I want to make sure I don't return a user that I have already returned before.
I cannot use a cursor or a normal pagination as the users have to be returned randomly.
I tried 2 things, but I am sure there is a better way:
At first, what I did was send the IDs of the users that were already seen as parameters of the request.
ex:
But if the user keeps scrolling and has gone through 200 profiles, the list gets long and it doesn't look clean.
Then, in the database, I tried adding a field to each user, "online_profiles_already_sent", where I would store an array of the IDs that were already sent to that user (I am using MongoDB).
I can't figure out how to do it in a better/cleaner way.
EDIT:
I found a way to do it with MySQL, using RAND(seed)
but I can't figure out if there is a way to do the same thing with Mongo
PHP MySQL pagination with random ordering
Thank you :)
I think the only way that you will be able to guarantee that users see unique users every time is to store the list of users that have already been seen. Even in the RAND example that you linked to, there is a possibility of intersection with a previous user list because RAND won't necessarily exclude previously returned users.
Random Sampling
If you do want to go with random sampling, consider Random record from MongoDB, which suggests using an aggregation with the $sample operator. The implementation would look something like this:
const {
  MongoClient
} = require("mongodb");

const
  DB_NAME = "weather",
  COLLECTION_NAME = "readings",
  MONGO_DOMAIN = "localhost",
  MONGO_PORT = "32768",
  MONGO_URL = `mongodb://${MONGO_DOMAIN}:${MONGO_PORT}`;

(async function () {
  const client = await MongoClient.connect(MONGO_URL),
    db = await client.db(DB_NAME),
    collection = await db.collection(COLLECTION_NAME);

  // Pull 5 random documents using the $sample aggregation stage.
  const randomDocs = await collection
    .aggregate([{
      $sample: {
        size: 5
      }
    }])
    .map(doc => {
      return {
        id: doc._id,
        temperature: doc.main.temp
      };
    })
    .toArray();

  randomDocs.forEach(doc => console.log(`ID: ${doc.id} | Temperature: ${doc.temperature}`));

  client.close();
}());
Cache of Previous Users
If you go with maintaining a list of previously viewed users, you could write an implementation using the $nin filter and store the _id of previously viewed users.
Here is an example using a weather database that I have returning entries 5 at a time until all have been printed:
const {
  MongoClient
} = require("mongodb");

const
  DB_NAME = "weather",
  COLLECTION_NAME = "readings",
  MONGO_DOMAIN = "localhost",
  MONGO_PORT = "32768",
  MONGO_URL = `mongodb://${MONGO_DOMAIN}:${MONGO_PORT}`;

(async function () {
  const client = await MongoClient.connect(MONGO_URL),
    db = await client.db(DB_NAME),
    collection = await db.collection(COLLECTION_NAME);

  let previousEntries = [], // Track ids of things we have seen
    empty = false;

  while (!empty) {
    const findFilter = {};
    if (previousEntries.length) {
      findFilter._id = {
        $nin: previousEntries
      };
    }

    // Get items 5 at a time
    const docs = await collection
      .find(findFilter, {
        limit: 5,
        projection: {
          main: 1
        }
      })
      .map(doc => {
        return {
          id: doc._id,
          temperature: doc.main.temp
        };
      })
      .toArray();

    // Keep track of already seen items
    previousEntries = previousEntries.concat(docs.map(doc => doc.id));

    // Are we still getting items?
    console.log(docs.length);
    empty = !docs.length;

    // Print out the docs
    docs.forEach(doc => console.log(`ID: ${doc.id} | Temperature: ${doc.temperature}`));
  }

  client.close();
}());
I have encountered the same issue and can suggest an alternate solution.
TL;DR: Grab all the ObjectIds in the collection on first landing, randomize them using NodeJS, and use them later on.
Disadvantage: slow first landing if you have millions of records
Advantage: subsequent executions are probably quicker than the other solutions
Let's get to the detailed explanation :)
To explain it better, I will make the following assumptions.
Assumptions:
Assume the programming language used is NodeJS
The solution works for other programming languages as well
Assume you have 4 total objects in your collection
Assume the pagination limit is 2
Steps:
On first execution:
Grab all ObjectIds
Note: I have considered performance; this execution takes a split second for a collection of around 10,000 documents. If you are solving a million-record problem, then maybe use some form of partition logic first, or use the other solutions listed.
db.getCollection('my_collection').find({}, {_id:1}).map(function(item){ return item._id; });
OR
db.getCollection('my_collection').find({}, {_id:1}).map(function(item){ return item._id.valueOf(); });
Result:
ObjectId("FirstObjectID"),
ObjectId("SecondObjectID"),
ObjectId("ThirdObjectID"),
ObjectId("ForthObjectID"),
Randomize the retrieved array using NodeJS (see the shuffle sketch below)
Result:
ObjectId("ThirdObjectID"),
ObjectId("SecondObjectID"),
ObjectId("ForthObjectID"),
ObjectId("FirstObjectID"),
Store this randomized array:
If this is a server-side script that randomizes pagination for each user, consider storing it in a cookie / session
I suggest a cookie (with an expiry linked to browser close) for scaling purposes
On each retrieval:
Retrieve the stored array
Grab the pagination items (e.g. the first 2 items)
Find the objects for those items using find with $in
db.getCollection('my_collection')
.find({"_id" : {"$in" : [ObjectId("ThirdObjectID"), ObjectId("SecondObjectID")]}});
Using NodeJS, sort the retrieved objects back into the order of the pagination items (see the sketch below)
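A small sketch of that re-ordering step ($in does not preserve order, so we restore it manually; pageIds and docs are illustrative names):
// pageIds: the ObjectIds for this page, in their randomized order.
// docs: the documents returned by find({ _id: { $in: pageIds } }).
const order = new Map(pageIds.map((id, index) => [id.toString(), index]));
docs.sort((a, b) => order.get(a._id.toString()) - order.get(b._id.toString()));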
There you go! A randomized MongoDB query for pagination :)