How to create a linked list structure in MongoDB? - node.js

I have billions of documents in a collection. I'm trying to store a reference to the next document for a particular account id.
{"_id": "1234", "title": "Document1", accountId:145, "next": "1236"}
{"_id": "1235", "title": "Document2", accountId:146, "next": "1238"}
{"_id": "1236", "title": "Document1a", accountId:145, }
{"_id": "1238", "title": "Document2a", accountId:146,"next": "1240"} }
{"_id": "1239", "title": "Document3", accountId:147}
{"_id": "1240", "title": "Document2b", accountId:146} }
How do I get the list of documents with a limit?
Since I'll need the whole 'history' of a document, including the next documents, I guess I'll have to perform a multitude of queries depending on the size of the list?
Any suggestions on how to create a performant index? A different structure for storing linked lists would also be interesting.
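One way to avoid issuing one query per hop is to let the server follow the next pointers with the $graphLookup aggregation stage, combined with an index on accountId for locating the head document. A minimal sketch with the Node.js driver, assuming a collection named documents and a known head _id ("1234"); treat the names and the maxDepth bound as placeholders:

// Sketch: follow the "next" chain server-side with $graphLookup.
const { MongoClient } = require('mongodb');

async function getChain(uri) {
  const client = await MongoClient.connect(uri);
  const docs = client.db('mydb').collection('documents');   // assumed db/collection names

  // Index so the head document for an account can be found cheaply.
  await docs.createIndex({ accountId: 1 });

  const chain = await docs.aggregate([
    { $match: { _id: '1234' } },            // head of the list
    { $graphLookup: {
        from: 'documents',
        startWith: '$next',                 // first hop
        connectFromField: 'next',           // follow next -> _id
        connectToField: '_id',
        as: 'history',
        maxDepth: 100                       // safety bound, adjust to your chain length
    } }
  ]).toArray();
  // Note: "history" is not guaranteed to come back in chain order;
  // re-order client-side by following "next" if order matters.

  await client.close();
  return chain;
}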


Best Practice to Update MongoDB Document

Sorry, this is a technical question about how to implement a transaction or pipeline in MongoDB. I have an orders collection; each order document has an array of order items. Each order item has an array of material objects; those materials are the ingredients of an item, and each material object contains an inventoryId and an amount.
I have an inventories collection which has an availability property.
When an order is finished, the inventory availability needs to be updated based on the material amounts.
Here is an example of the documents.
// orders collection
{
  "_id": "SomeId",
  "items": [
    {
      "_id": "SomeId",
      "materials": [
        {
          "inventoryId": "abc",
          "amount": 5
        }
      ]
    }
  ]
}
// inventories collection
{
  "_id": "abc",
  "availability": 100
}
Thanks in advance for any help.
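For the transactional part, one common approach is a multi-document transaction (MongoDB 4.0+ on a replica set) that reads the order and decrements each inventory document. A rough sketch with the Node.js driver, assuming collections named orders and inventories and a database named shop:

// Sketch: decrement inventory availability for every material in an order,
// inside a multi-document transaction.
async function finishOrder(client, orderId) {
  const db = client.db('shop');                     // assumed database name
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      const order = await db.collection('orders').findOne({ _id: orderId }, { session });
      for (const item of order.items) {
        for (const material of item.materials) {
          await db.collection('inventories').updateOne(
            { _id: material.inventoryId },
            { $inc: { availability: -material.amount } },  // subtract the used amount
            { session }
          );
        }
      }
    });
  } finally {
    await session.endSession();
  }
}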

I can't query over populated children attributes

I am trying to query over populated children attributes using Mongoose, but it straight up doesn't work and returns empty arrays all the time.
Even hardcoding correct, existing information as values for the query returns empty arrays.
My schema is a business schema with a 1-to-1 relationship to the user schema via the attribute createdBy. The user schema has an attribute name which I am trying to query on.
So if I make a query like this:
business.find({'createdBy.name': {$regex:"steve"}}).populate('createdBy')
the above never returns any documents, although without the find condition everything works fine.
Can I search by the name inside a populated child or not? All tutorials say this should work fine, but it just doesn't.
EDIT: an example of what the record looks like:
{
  "_id": "5fddedd00e8a7e069085964f",
  "status": 6,
  "addInfo": "",
  "descProduit": "",
  "createdBy": {
    "_id": "5f99b1bea9ba194dec3bd6aa",
    "status": 1,
    "fcmtokens": [],
    "emailVerified": 1,
    "phoneVerified": 0,
    "userType": "User",
    "name": "steve buschemi",
    "firstName": "steve",
    "lastName": "buschemi",
    "tel": "",
    "email": "steve#buschemi.com",
    "register_token": "747f1e1e8fa1ecd2f1797bb402563198",
    "createdAt": "2020-10-28T18:00:30.814Z",
    "updatedAt": "2020-12-18T13:52:07.430Z",
    "__v": 19,
    "business": "5f99b1e101bfff39a8259457",
    "credit": 635
  },
  "createdAt": "2020-12-19T12:10:57.703Z",
  "updatedAt": "2020-12-19T12:11:16.538Z",
  "__v": 0,
  "nid": "187"
}
It seems there is no way to filter parent documents by conditions on child documents:
From the official documentation:
In general, there is no way to make populate() filter stories based on properties of the story's author. For example, the below query won't return any results, even though author is populated.
const story = await Story.
  findOne({ 'author.name': 'Ian Fleming' }).
  populate('author').
  exec();
story; // null
If you want to filter stories by their author's name, you should use denormalization.
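If restructuring the data isn't an option, another common workaround is to do the join with an aggregation $lookup instead of populate(), so the filter runs on the joined user. A rough sketch, assuming the User model is backed by a collection named users:

// Sketch: filter businesses by the name of the referenced user via $lookup.
const results = await Business.aggregate([
  { $lookup: {
      from: 'users',                // collection backing the User model (assumed name)
      localField: 'createdBy',
      foreignField: '_id',
      as: 'createdBy'
  } },
  { $unwind: '$createdBy' },        // 1-to-1 relationship, so unwind the single joined element
  { $match: { 'createdBy.name': { $regex: 'steve', $options: 'i' } } }
]);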

How to delete specific object under document in DocumentDB?

I am using DocumentDB as the backend for my project.
I have created a collection named ResellerCollection.
In it I added Reseller documents with ids assigned to them.
Each Reseller document contains a list of Customers, and now I want to delete a customer of a reseller document by a specific id.
The JSON generated in DocumentDB is as follows.
{
  "id": "73386791-5895-4a56-9108-df4a773331fe",
  "Name": "Nadeem",
  "PrimaryContact": "1234",
  "Address": "bhusari clny",
  "City": "pune",
  "State": "maharashtra",
  "Country": "india",
  "ZipCode": "222",
  "Telephone": "45234343",
  "Email": "abc#xyz.com",
  "Website": "asdfsd.com",
  "Customer": [
    {
      "id": "4acf3ca9-f9e4-4117-a471-7ce8f905baec",
      "FullName": "Test Cust1",
      "Company": "safds",
      "JobTitle": "sadf",
      "Email": "abcd#xyz.com",
      "Address": "asdfsaf",
      "City": "sdf",
      "State": "sdf",
      "Country": "sadf",
      "ZipCode": "2343",
      "Telephone": "45234343",
      "MerchantID": "232",
      "IdentificationNo": "2342343",
      "IsActive": true,
      "CustomerGroupID": "34",
      "ResellerID": "73386791-5895-4a56-9108-df4a773331fe"
    },
    {
      "id": "e0d6d099-3d5d-4776-9b84-14b7ae0b9911",
      "FullName": "Test Cust2",
      "Company": "safds",
      "JobTitle": "sadf",
      "Email": "abcd#xyz.com",
      "Address": "asdfsaf",
      "City": "sdf",
      "State": "sdf",
      "Country": "sadf",
      "ZipCode": "2343",
      "Telephone": "sadf",
      "MerchantID": "232",
      "IdentificationNo": "2342343",
      "IsActive": true,
      "CustomerGroupID": "34",
      "ResellerID": "73386791-5895-4a56-9108-df4a773331fe"
    }
  ],
  "UserId": "f807f027-2e21-45b1-b786-e4d2b3d677cb",
  "_rid": "+JBQAOQWHQENAAAAAAAAAA==",
  "_self": "dbs/+JBQAA==/colls/+JBQAOQWHQE=/docs/+JBQAOQWHQENAAAAAAAAAA==/",
  "_etag": "\"0a004764-0000-0000-0000-583bd8b50000\"",
  "_attachments": "attachments/",
  "_ts": 1480317104
}
Please suggest how I should write the delete function for a customer in MVC. Should I write a delete that removes the specific customer, or do I have to update the whole customer list?
Two ways of doing this:
1. Implement "AddCustomer" and "RemoveCustomer" stored procedures in DocumentDB. These stored procedures will read the reseller document, then append or remove the customer passed in as an argument. Then just call ExecuteStoredProcedureAsync within your controller.
2. Instead of the stored procedure approach, perform three steps within the controller: read the document, make changes, and replace the document.
In both implementations, you'll probably want to use the ETag to perform a conditional write to avoid any clobbering across multiple clients/writers.
As Larry and David pointed out, you should also consider different JSON modeling options such as storing the customers data as separate documents, or by storing only the IDs within the array vs. all relevant customer data.
The only way to delete an element of an array (or any other change to the document) is to fetch the entire document, modify it, then create/upsert the entire document. You can do this client-side or in a stored procedure.
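As an illustration of that read-modify-replace pattern, here is a sketch using the newer @azure/cosmos Node.js SDK (the question targets the MVC/.NET client, so treat the exact calls as an assumption; the ETag check mirrors the conditional-write advice above):

// Sketch: remove one customer from a reseller document by id via
// read -> modify -> conditional replace (fails with 412 if another writer got there first).
const { CosmosClient } = require('@azure/cosmos');  // "container" below comes from client.database(...).container(...)

async function removeCustomer(container, resellerId, customerId) {
  // Assumes the reseller id is also the partition key value.
  const { resource: reseller } = await container.item(resellerId, resellerId).read();

  reseller.Customer = reseller.Customer.filter(c => c.id !== customerId);

  await container.item(resellerId, resellerId).replace(reseller, {
    accessCondition: { type: 'IfMatch', condition: reseller._etag }
  });
}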
Try not to think of collections as tables in traditional databases or collections in MongoDB. I used to say, think of them as "partitions" but with partitioned collections, even that distinction is not useful. I use a single partitioned collection for everything now. I distinguish document types by having an element in each document: isReseller = true or isCustomer = true for your example. Alternatively, you can do type = 'Reseller' or type = 'Customer'. However, if the customer of one reseller is ever a reseller themselves, the former will allow you to add both is___ = true and the latter will not.
What David Makogon says is definitely a worry. Unless Resellers are restricted to a small number of Customers, you are better off storing them separately and having a foreign key link them. That way, deleting or adding one is a single step. However, getting the Reseller and all its Customers will be two round trips.

Elasticsearch delete/update a document in index1 and index2

Say I have two indices, e.g. sample1 and sample2.
If I delete or update a value in sample1, then the corresponding document should also be deleted or updated in sample2.
Data: sample1: {name: 'Tom', id: '1', city: 'xx', state: 'yy', country: 'zz'}
sample2: {id: '1', city: 'xx', state: 'yy', country: 'zz'}
If I delete id: '1', then this document should be deleted from both indices on the server side itself. How do I do this?
The problem is that if I delete the values separately and run into a network issue after deleting the value from one index, the other index will still hold the value. How do I avoid this?
You can use the bulk API for doing this and you'll have better guarantees that both delete/update operations succeed or fail since everything happens in a single network call:
For deleting both documents in two different indices:
POST _bulk
{"delete": {"_index": "index1", "_type": "type1", "_id": "1"}}
{"delete": {"_index": "index2", "_type": "type2", "_id": "1"}}
For updating both documents in two different indices:
POST _bulk
{"index": {"_index": "index1", "_type": "type1", "_id": "1"}}
{"name": "Tom", id: "1", "city": "xx", "state": "yy", "country": "zz"}
{"index": {"_index": "index2", "_type": "type2", "_id": "1"}}
{"id": "1", "city": "xx", "state": "yy", "country": "zz"}
UPDATE
After discussing this, it seemed the needed solution was a mix of the delete-by-query API (don't forget to install the plugin if you're on ES 2.x) for deleting documents matching a country in multiple indices, and the update-by-query API for updating documents in multiple indices.
There is no clean way to do this with Elasticsearch. What you want/need is effectively a transaction, and that is not possible with Elasticsearch. What you could do is send a bulk request with the two operations to update/delete the item in both indices. You still have to check the response of the bulk request to see if both operations went well, but the chance of one of them failing is a little smaller.
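Checking the bulk response for partial failures might look like this (a sketch with the @elastic/elasticsearch Node.js client, v7-style, where mapping types are gone and the payload sits under body):

// Sketch: issue both deletes as one bulk request and verify each action succeeded.
const { Client } = require('@elastic/elasticsearch');
const client = new Client({ node: 'http://localhost:9200' });

async function deleteFromBothIndices(id) {
  const { body } = await client.bulk({
    body: [
      { delete: { _index: 'index1', _id: id } },
      { delete: { _index: 'index2', _id: id } }
    ]
  });

  if (body.errors) {
    // "items" mirrors the actions in order; inspect the ones that carry an error.
    const failures = body.items.filter(item => item.delete && item.delete.error);
    console.error('Some deletes failed, retry or reconcile:', failures);
  }
}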
I don't think you can do both at the same time, i.e. delete the same document in two different indices in one operation.
But deleting a document from a single index can be done using the Delete By Query API by supplying a matching query so that the appropriate document is deleted.
Source: Delete By Query API
Elasticsearch cannot guarantee you that it will do those two operations atomically, like a transaction in RDBs. I suggest looking at nested documents or parent/child relationships for what Elasticsearch understands by joined documents.
In this way, if you deleted the parent, the children will be deleted as well.

Optimal way to model documents hierarchy in CouchDB

I'm trying to model a document hierarchy in CouchDB to use in my system, which is conceptually similar to a blog. Each blog post belongs to at least one category and each category can have many posts. Categories are hierarchical, meaning that if a post belongs to CatB in the hierarchy "CatA->CatB" ("CatB is in CatA"), it also belongs to CatA.
Users must be able to quickly find all posts in a category (and all its children).
Solution 1
Each document of the post type contains a "category" array representing its position in the hierarchy (see 2).
{
  "_id": "8e7a440862347a22f4a1b2ca7f000e83",
  "type": "post",
  "author": "dexter",
  "title": "Hello",
  "category": ["OO", "Programming", "C++"]
}
Solution 2
Each document of the post type contains the "category" string representing its path in the hierarchy (see 4).
{
  "_id": "8e7a440862347a22f4a1b2ca7f000e83",
  "type": "post",
  "author": "dexter",
  "title": "Hello",
  "category": "OO/Programming/C++"
}
Solution 3
Each document of the post type contains its parent "category" id representing its path in the hierarchy (see 3). A hierarchical category structure is built through linked "category" document types.
{
  "_id": "8e7a440862347a22f4a1b2ca7f000e83",
  "type": "post",
  "author": "dexter",
  "title": "Hello",
  "category_id": "3"
}
{
  "_id": "1",
  "type": "category",
  "name": "OO"
}
{
  "_id": "2",
  "type": "category",
  "name": "Programming",
  "parent": "1"
}
{
  "_id": "3",
  "type": "category",
  "name": "C++",
  "parent": "2"
}
Question
What's the best way to store this kind of relationship in CouchDB? What's the most efficient solution in terms of disk space, scalability and retrieval speed?
Can such a relation be modelled to take into account localised category names?
Disclaimer
I know this question has been asked a few times already here on SO, but it seems there's no definitive answer to it nor an answer which deals with the pros and cons of each solution. Sorry for the length of the question :)
Read so far
CouchDB - The Definitive Guide
Storing Hierarchical Data in CouchDB
Retrieving Hierarchical/Nested Data From CouchDB
Using CouchDB group_level for hierarchical data
There's no right answer to this question, hence the lack of a definitive answer. It mostly depends on what kind of usage you want to optimize for.
You state that retrieval speed of documents that belong to a certain category (and their children) is most important. The first two solutions allow you to create a view that emits a blog post multiple times, once for each category in the chain from the leaf to the root. Thus selecting all documents can be done using a single (and thus fast) query. The only difference between the second solution and the first is that you move the parsing of the category "path" into components from the code that inserts the document to the map function of the view. I would prefer the first solution as it's simpler to implement the map function and a bit more flexible (e.g. it allows a category's name to contain a slash character).
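For Solution 1, such a map function could be as simple as the following sketch, which emits the post once for every category in its category array (query with include_docs=true to fetch the posts themselves):

function (doc) {
  if (doc.type == 'post') {
    for (var i in doc.category) {
      emit(doc.category[i], null)   // one row per category in the root-to-leaf chain
    }
  }
}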
In your scenario you probably also want to create a reduced view which counts the number of blog posts for each category. This is very simple with either of these solutions. With a fitting reduce function, the number of posts in every category can be retrieved using a single request.
A downside of the first two solutions is that renaming or moving a category from one parent to another requires every document to be updated. The third solution allows that without touching the documents. But from the description of your scenario I assume that retrieval by category is very frequent and category renaming/moving is very rare.
Solution 4
I propose a fourth solution where blog post documents hold references to category documents but still reference all the ancestors of the post's category. This allows categories to be renamed without touching the blog posts and allows you to store additional metadata with a category (e.g. translations of the category name or a description):
{
  "_id": "8e7a440862347a22f4a1b2ca7f000e83",
  "type": "post",
  "author": "dexter",
  "title": "Hello",
  "category_ids": ["3", "2", "1"]
}
{
  "_id": "1",
  "type": "category",
  "name": "OO"
}
{
  "_id": "2",
  "type": "category",
  "name": "Programming",
  "parent": "1"
}
{
  "_id": "3",
  "type": "category",
  "name": "C++",
  "parent": "2"
}
You will still have to store each category's parent with the category documents, even though this duplicates information already contained in the posts, to allow the category tree to be traversed (e.g. for displaying a tree of categories for navigation).
You can extend this solution or any of your solutions to allow a post to be categorized under multiple categories, or a category to have multiple parents. When a post is categorized in multiple categories, you will need to store the union of the ancestors of each category in the post's document while preserving the categories selected by the author to allow them to be displayed with the post or edited later.
Let's assume that there is an additional category named "Ajax" with ancestors "JavaScript", "Programming" and "OO". To simplify the following example, I've chosen the document IDs of the categories to equal the category names.
{
  "_id": "8e7a440862347a22f4a1b2ca7f000e83",
  "type": "post",
  "author": "dexter",
  "title": "Hello",
  "category_ids": ["C++", "Ajax"],
  "category_anchestor_ids": ["C++", "Programming", "OO", "Ajax", "JavaScript"]
}
To allow a category to have multiple parents, just store multiple parent IDs with a category. You will need to eliminate duplicates while finding all the ancestors of a category.
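A small helper for that ancestor-collection step might look like the following sketch, assuming the category documents have already been loaded into a map keyed by _id and may carry either a single parent or a parents array:

// Sketch: collect the de-duplicated union of ancestor category ids.
function collectAncestorIds(categoryIds, categoriesById) {
  const seen = new Set();
  const stack = [...categoryIds];
  while (stack.length > 0) {
    const id = stack.pop();
    if (seen.has(id)) continue;
    seen.add(id);
    const category = categoriesById[id];
    if (!category) continue;
    const parents = category.parents || (category.parent ? [category.parent] : []);
    stack.push(...parents);
  }
  return [...seen];
}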
View for Solution 4
Suppose you want to get all the blog posts for a specific category. We will use a database with the following sample data:
{ "_id": "100", "type": "category", "name": "OO" }
{ "_id": "101", "type": "category", "name": "Programming", "parent_id": "100" }
{ "_id": "102", "type": "category", "name": "C++", "parent_id": "101" }
{ "_id": "103", "type": "category", "name": "JavaScript", "parent_id": "101" }
{ "_id": "104", "type": "category", "name": "AJAX", "parent_id": "103" }
{ "_id": "200", "type": "post", "title": "OO Post", "category_id": "104", "category_anchestor_ids": ["100"] }
{ "_id": "201", "type": "post", "title": "Programming Post", "category_id": "101", "category_anchestor_ids": ["101", "100"] }
{ "_id": "202", "type": "post", "title": "C++ Post", "category_id": "102", "category_anchestor_ids": ["102", "101", "100"] }
{ "_id": "203", "type": "post", "title": "AJAX Post", "category_id": "104", "category_anchestor_ids": ["104", "103", "101", "100"] }
In addition to that, we use a view called posts_by_category in a design document called _design/blog with the following map function:
function (doc) {
  if (doc.type == 'post') {
    for (var i in doc.category_anchestor_ids) {
      emit([doc.category_anchestor_ids[i]], doc)
    }
  }
}
Then we can get all the posts in the Programming category (which has ID "101") or one of its subcategories using a GET request to the following URL.
http://localhost:5984/so/_design/blog/_view/posts_by_category?reduce=false&key=["101"]
This will return a view result with the keys set to the category ID and the values set to the post documents. The same view can also be used to get a summary list of all categories and the number of posts in each category and its children. We add the following reduce function to the view:
function (keys, values, rereduce) {
  if (rereduce) {
    return sum(values)
  } else {
    return values.length
  }
}
And then we use the following URL:
http://localhost:5984/so/_design/blog/_view/posts_by_category?group_level=1
This will return a reduced view result with the keys again set to the category ID and the values set to the number of posts in each category. In this example, the category names would have to be fetched separately, but it is possible to create a view where each row in the reduced view result already contains the category name.
