I have set up a selector and my PouchDB syncs documents based on that selector.
Here is the selector. It matches all documents of type "job" whose permission array includes the ID 1371, OR any document of type "load".
const myselector = {
  $or: [
    {
      type: { $eq: 'job' },
      permission: { $elemMatch: { $eq: 1371 } }
    },
    {
      type: { $eq: 'load' }
    }
  ]
};
Here's how I sync:
this.db.replicate
  .from(this.remoteDB, {
    selector: myselector
  })
  .on('complete', info => {
    // then two-way, continuous, retriable sync
    this.db
      .sync(this.remoteDB, {
        live: true,
        retry: true,
        selector: myselector
      })
      .on('change', change => {
        console.log('my change0', change);
      })
      .on('error', error => {
        console.log('my error', error);
      });
  })
  .on('error', error => {
    console.log('my error2', error);
  });
This works perfectly and notifies me of a change in any matching document. But if, for example, the permission array changes and the ID 1371 is removed, I am not notified of that change. And if I then change any other field in that document, that change does not replicate either, since the document is no longer being synced.
I'm using a selector instead of a CouchDB filter function because selectors are apparently about 10x faster, but this sync behaviour becomes a problem.
In short: if a document changes so that it no longer matches the selector criteria, the change handler is not called.
The behaviour you described is correct.
The filtered changes feed only includes documents that currently match the selector conditions; the filtering does not consider the previous state of the document.
This question describes a similar issue: CouchDB filter function and continuous feed
I see 2 approaches to manage this:
Make filtering data part of the document identity:
Design your docs so that the filtering attributes are part of the document IDs; a change to any filtering attribute is then handled as a deletion plus the creation of a new document, as sketched below. The local database will contain only the valid documents.
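A minimal sketch of this idea, assuming a hypothetical _id scheme like 'job:<permissionId>:<jobKey>' (the scheme and the helper are illustrative, not part of PouchDB):

// Changing a job's permission means deleting the old document and
// creating a new one, because the permission ID is part of the _id.
async function changeJobPermission(db, doc, newPermissionId) {
  const { _rev, ...rest } = doc;                // drop the old revision
  const [type, , jobKey] = doc._id.split(':');  // e.g. _id = 'job:1371:abc'
  await db.remove(doc);                         // replicates as a deletion
  return db.put({ ...rest, _id: `${type}:${newPermissionId}:${jobKey}` });
}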
Keep history of the filtering data in the doc:
Keep the history of the values of the filtering attributes in the documents and expand your selector to check both the current and the old values, as sketched below. In this case your client application is responsible for determining which documents in the local database are valid at any given moment, as not every document in the local database is valid for the client.
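A minimal sketch of the expanded selector, assuming a hypothetical permissionHistory array that your app appends old permission IDs to whenever they are removed:

const myselector = {
  $or: [
    {
      type: { $eq: 'job' },
      $or: [
        // still mine
        { permission: { $elemMatch: { $eq: 1371 } } },
        // used to be mine, so the revocation still replicates
        { permissionHistory: { $elemMatch: { $eq: 1371 } } }
      ]
    },
    {
      type: { $eq: 'load' }
    }
  ]
};

Documents that only match via permissionHistory can then be treated as revoked and cleaned up locally.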
After watching numerous videos about GCP Firestore, I'm still asking myself: what is the best way to work with a huge amount of data coming from Firestore?
Say 800 000 products; I would like to display all of them in a datatable like this with Quasar:
How do I make it work in real time, listening for each item's changes, without exceeding my quota and paying high bills for careless code?
Backend:
app.get('/products/get', async (req, res) => {
const snapshot = await db.collection('products3P').limit(20).get()
const result = snapshot.docs.map(doc => doc.data())
res.json(result)
})
Frontend:
<q-table title="Products"
:rows="rows"
:columns="columns"
:filter="filter"
row-key="name"
no-data-label="No results"
/>
// Fetching results from backend once page is loaded:
api.get('/api/products/get').then(({ data }) => {
data.forEach(i => {
rows.value.push({
name: i.name,
sku: i.sku,
model: i.model,
brand: i.brand,
description: i.description
})
})
})
Binance's Market page is a perfect example. What are the best solutions for making datatables efficient?
Any links or suggestions will be highly appreciated.
You can paginate and show only 20 items (or however many you prefer) at a time. When the user changes the page, load the next set of items and show them, as sketched below. Quasar tables have a loading state which you can use while the next documents are loading for the first time.
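A minimal sketch of that pagination using Firestore query cursors (somefield is a placeholder ordering field; adapt it to your schema):

let lastVisible = null; // cursor: last document of the page shown so far

async function loadNextPage() {
  let query = db.collection('products3P').orderBy('somefield').limit(20);
  if (lastVisible) {
    query = query.startAfter(lastVisible); // continue after the previous page
  }
  const snapshot = await query.get();
  lastVisible = snapshot.docs[snapshot.docs.length - 1];
  return snapshot.docs.map(doc => doc.data());
}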
To show realtime updates, you can then listen to the documents of those 20 items only:
db.collection("products3P").orderBy("somefield")
.onSnapshot((querySnapshot) => {
const products = [];
querySnapshot.forEach((doc) => {
products.push(doc.data().name);
});
console.log("Current products: ", products.join(", "));
// Update the Table
});
You can set that array (of objects) as the table's rows (rows.value) and render the data. You can make sure the changes are just existing documents being updated, and not new ones being added (if you want to), by checking the change type:
querySnapshot.docChanges().forEach((change) => {
  if (change.type === "modified") {
    console.log("Modified product: ", change.doc.data());
  }
  // else don't update the array
});
To update the changes in the existing rows array, you can follow this answer on updating an item in an array; a sketch of the idea follows.
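Assuming sku uniquely identifies a row (an assumption about your data):

querySnapshot.docChanges().forEach((change) => {
  if (change.type === "modified") {
    const updated = change.doc.data();
    // replace the matching row so the table re-renders
    const index = rows.value.findIndex(row => row.sku === updated.sku);
    if (index !== -1) rows.value[index] = { ...rows.value[index], ...updated };
  }
});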
Do note that your server is not involved in this process. If you need to do this through your server, you'll have to use WebSockets, but using Firestore directly as shown above is definitely easier. Also make sure you detach the listeners of the previous products, as sketched below.
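onSnapshot returns an unsubscribe function, so detaching might look like this:

let unsubscribe = null;

function listenToProducts(query) {
  if (unsubscribe) unsubscribe(); // detach the previous page's listener
  unsubscribe = query.onSnapshot((querySnapshot) => {
    // ...update the table as shown above
  });
}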
I'm beginning with Mongoose and I have to use the watch() method on a collection.
When I want to catch inserts, there is no problem.
Nevertheless, when I want to retrieve the changes from an update, in some cases Mongoose seems to change the names of my fields, and I don't know why.
registration.watch().on('change', data => {
  if (data.operationType == "update") {
    console.log(data.updateDescription.updatedFields);
  }
});
My registration collection is made up of people who can accept or decline an invitation, and a person can change their answer. So it's basically a removal of the person from one array of data to be put into the other one.
The only problem I have is that my array's name sometimes "changes":
{
__v: 100,
accepted: [
{
_id: 5faa76d048dd6e0017e631d4,
user: 5faa752848dd6e0017e631d2
},
{
_id: 5faa9ab06048a20017774610,
user: 5fa8fabc60260ec31606d71e
},
],
'declined.1': { _id: 5faf037a141f030017863484, user: 5faa74de48dd6e0017e631d0 },
}
For example, here my field declined changed to "declined.1". Why is this happening, and how can I avoid it? Or at least, how can I get the declined array in this situation?
When you update a document in MongoDB, it only writes the deltas to the operations log, which is what the watch function pulls from.
The dot notation declined.1 means index 1 of the declined array. The change document you provided would be expected from pushing a new object onto the declined array. Essentially, it is saving space by not repeating all of the array elements that didn't change.
If you need to retrieve the entire document, you could set the fullDocument to updateLookup. See http://mongodb.github.io/node-mongodb-native/3.0/api/Collection.html#watch
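For example, building on your snippet (the options object is passed through to the MongoDB driver):

registration.watch([], { fullDocument: 'updateLookup' })
  .on('change', data => {
    if (data.operationType === 'update') {
      // fullDocument holds the whole post-update document,
      // so the complete declined array is available here
      console.log(data.fullDocument.declined);
    }
  });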
I have a collection in MongoDB with more than 5 million documents. Whenever I create a document in that collection, I have to check whether a document with the same title already exists, and if it does, I must not add the new one to the database.
Example: here is my MongoDB document:
{
"_id":ObjectId("3a434sa3242424sdsdw"),
"title":"Lost in space",
"desc":"this is description"
}
Now whenever a new document is created in the collection, I want to check if the same title already exists in any of the documents, and only add it to the database if it does not.
Currently, I am using a findOne query to check for the title, and only if it is not found is the document added. I am facing a performance issue with this approach: it takes too much time. Please suggest a better one.
async function addToDB(data) {
  let result = await db.collection('testCol').findOne({ title: data.title });
  if (result == null) {
    await db.collection('testCol').insertOne(data);
  } else {
    console.log("already exists in db");
  }
}
You can halve the network round-trip time: you currently execute two queries, one to find and one to insert. You can combine them into a single query as below.
db.collection.update(
  <query>,
  { $setOnInsert: { <field1>: <value1>, ... } },
  { upsert: true }
)
It will not update the document if it already exists:
db.test.update(
  { "key1": "1" },
  { $setOnInsert: { "key": "2" } },
  { upsert: true }
)
It looks for a document whose key1 is "1". If it finds one, it skips the write; if not, it inserts a document built from the query fields plus the object given to $setOnInsert.
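Applied to your title example, the check and the insert collapse into a single round trip. A sketch against your snippet:

async function addToDB(data) {
  const result = await db.collection('testCol').updateOne(
    { title: data.title },      // match on the title
    { $setOnInsert: data },     // only written if no match was found
    { upsert: true }
  );
  if (result.upsertedCount === 0) {
    console.log("already exists in db");
  }
}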
I've got a Page:
const PageSchema = new Schema({
children: [{type: mongoose.Schema.Types.ObjectId, ref: 'Page'}]
});
As you can see, each Page has an array of children, which are also Pages.
Now what I need to do is fetch all "main" pages and populate their children arrays. Easy enough, except that I need to do this recursively, since a Page contains an array of Pages.
Mongoose doesn't have out-of-the-box support for this; it only supports populating to a fixed depth that you spell out by hand.
Here's my current query (all extra stuff removed for readability), without the .populate method (since it's not going to work anyway):
Page.find(query)
.exec((err, pages) => {
if (err) {
return next(err);
}
res.json(pages);
});
I looked at this question, which is similar but not exactly what I need:
mongoose recursive populate
It uses a parent field to populate recursively, and it also starts from just one document, whereas my scenario starts from an array of documents, since I'm using .find rather than .findOne.
How can I create my own deep recursive populate function for this?
Sidenote:
I am aware that the solution I need isn't recommended due to performance but I've come to the conclusion that it is the only solution that is going to work for me. I need to do recursive fetching regardless if it's in the frontend or backend, and doing it right in the backend will simplify things massively. Also the number of pages won't be big enough to cause performance issues.
You can recursively populate a field also like:
User.findOne({ name: 'Joe' })
.populate({
path: 'blogPosts',
populate: {
path: 'comments',
model: 'comment',
populate: {
path: 'user',
model: 'user'
}
}
})
.then((user) => {});
Please note that for the first population you don't need to specify the model attribute, as it is already defined in your schema, but for the nested populations that follow you do.
The answer actually lay in one of the answers to the question linked above, although it was a bit vague. Here's what I ended up with, and it works really well:
Page.find(query)
  .or({ label: new RegExp(config.query, 'i') })
  .sort(config.sortBy)
  .limit(config.limit)
  .skip(config.offset)
  .exec((err, pages) => {
    if (err) {
      return next(err);
    }

    // takes a collection and a document id and returns this document
    // fully nested with its children
    const populateChildren = (coll, id) => {
      return coll.findOne({ _id: id })
        .then((page) => {
          if (!page.children || !page.children.length) {
            return page;
          }
          return Promise.all(page.children.map(childId => populateChildren(coll, childId)))
            .then(children => Object.assign(page, { children }));
        });
    };

    Promise.all(pages.map((page) => {
      return populateChildren(Page, page._id);
    })).then((pages) => {
      res.json({
        error: null,
        data: pages,
        total: total,
        results: pages.length
      });
    });
  });
The function itself should be refactored into a utils function that can be used anywhere, and it should be made a bit more general so it can be used for other deep populations as well; a sketch of that follows.
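For instance, a more general version (populateDeep is a hypothetical name) might take the field to recurse on as a parameter:

// Recursively populate any self-referencing array field on a document
const populateDeep = (coll, id, field) => {
  return coll.findOne({ _id: id }).then((doc) => {
    if (!doc[field] || !doc[field].length) {
      return doc;
    }
    return Promise.all(doc[field].map(childId => populateDeep(coll, childId, field)))
      .then(children => Object.assign(doc, { [field]: children }));
  });
};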
I hope this helps someone else in the future :)
I have a User schema with basic fields, which include interests and location co-ordinates.
I need to perform a POST request with a specific user ID to get the results.
app.post('api/users/search/:id', function (req, res) {
  // find all the documents whose search is enabled;
  // among the documents returned above, find those that have at least
  // 3 common interests (req.body.interests) with the user with ':id'
  // -----OR-----
  // find the documents that stay within 'req.body.distance' of the
  // location of the user with ':id'
  // Something like this:
  return User
    .find({ isBuddyEnabled: true })
    .find({ "interests": { "$all": req.body.interests } })
    .find({ "_id": req.params.id }, geoLib.distance([[req.body.gcordinates], []]));
});
Basically I need to perform a find inside a find, or a query inside a query.
As per the comments in your code, you want multiple conditions in your find query such that the result is returned when either of them is satisfied. You can use $or and $and to achieve this. Sample code with conditions similar to yours is given below.
find({
  $or: [
    { isBuddyEnabled: true },
    { "interests": { "$all": req.body.interests } },
    {
      $and: [
        { "_id": req.params.id },
        { geoLib.distance...rest_of_the_condition }
      ]
    }
  ]
});