Consider the following document:
{
"title": "My first blog entry",
"text": "Starting to get the hang of this...",
"tags": [ "testing" ],
"views": 0
}
I need to run a kind of upsert operation. If I encounter data like
{
"id": 1,
"tags": [ "new tag" ]
}
I want to update the existing document with the same id, so the result should be:
{
"id": 1,
"title": "My first blog entry",
"text": "Starting to get the hang of this...",
"tags": [ "testing", "new tag" ],
"views": 0
}
If a document with the same id does not exist, I want to create a new one.
Now, in databases like MongoDB, I could use update with the $addToSet or $push operators. I could not find a similar operation in Elasticsearch.
I read that it can be done by writing scripts in Groovy. However, this needs to be done on a file containing 200 million records. I am not sure if I can use Groovy in combination with the bulk API. Is that possible?
You don't need to use the bulk API for this. You can use an upsert request. An upsert request can ALSO be embedded in a bulk request.
curl -XPOST 'localhost:9200/test/type1/1/_update' -d '{
"script": "if (ctx._source.tags.contains(\"tags\")) {ctx._source.tags += tag;} else {ctx._source.tags = [tag]}",
"params": {
"tag": "newTag"
},
"upsert": {
"title": "My first blog entry",
"text": "Starting to get the hang of this...",
"tags": [
"newTag"
],
"views": 0
}
}'
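If you need to apply this across many records, the same scripted upsert can also be embedded in a _bulk request. Here is a rough sketch (index, type and id mirror the example above; note that the action line and its body must each sit on a single line, and the request body must end with a newline):
curl -XPOST 'localhost:9200/test/type1/_bulk' -d '
{ "update": { "_id": "1" } }
{ "script": "if (!ctx._source.tags.contains(tag)) { ctx._source.tags += tag }", "params": { "tag": "newTag" }, "upsert": { "title": "My first blog entry", "text": "Starting to get the hang of this...", "tags": [ "newTag" ], "views": 0 } }
'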
I am new to MongoDB. This is a MERN project and I am using the MongoDB database. I am trying to delete comments, which are nested in an array of objects, by their id. My document looks like this:
{
"_id": "60aa8ed98ad79d380cb250b4",
"course_id": "609a2cedd07ba73200ada318",
"instructor_id": "609a2875aeefe40a6c9befc5",
"title": "New Annoucement",
"description": "new announcement is posted",
"video_url": "youtube.com",
"comments": [
{
"comments": [
{
"comments": [
{
"comments": [
{
"comments": [],
"_id": "60aa912b301ccc397057ded6",
"username": "Sandy",
"user_id": "609a2875aeefe40a6c9befc5",
"content": "1",
"parentId": "60aa9123301ccc397057ded5",
"type": "reply",
"annoucementId": "60aa8ed98ad79d380cb250b4"
}
],
"_id": "60aa9123301ccc397057ded5",
"username": "Sandy",
"user_id": "609a2875aeefe40a6c9befc5",
"content": "why its not deleting",
"parentId": "60aa8f648ad79d380cb250b6",
"type": "reply",
"annoucementId": "60aa8ed98ad79d380cb250b4"
}
],
"_id": "60aa8f648ad79d380cb250b6",
"username": "Sandy",
"user_id": "609a2875aeefe40a6c9befc5",
"content": "two",
"parentId": "60aa8edd8ad79d380cb250b5",
"type": "reply",
"annoucementId": "60aa8ed98ad79d380cb250b4"
}
],
"_id": "60aa8edd8ad79d380cb250b5",
"username": "Sandy",
"user_id": "609a2875aeefe40a6c9befc5",
"content": "one",
"type": "comment",
"annoucementId": "60aa8ed98ad79d380cb250b4"
}
],
"createdAt": "2021-05-23T17:20:25.579Z",
"updatedAt": "2021-05-23T17:30:19.116Z",
"__v": 6
}
This is a big object, an announcement, which has a comments array containing comment objects. If someone replies to a comment, I create another comments array inside that comment and store all the reply objects there.
I want to delete the object with _id: 60aa912b301ccc397057ded6, which is the reply object with the content "1".
I have tried find, findById, findOneAndRemove, and findOneAndDelete, but none of them find the document.
You could perhaps change the way you structure the comments and not nest them. Since each comment already has a unique id, nesting them only makes it harder to access the comment object, and it's a rabbit hole where the nesting might never end.
I propose you add a property on comment object A that points to another comment object B if A is a reply or quote of B.
That way you can keep a flat array of comment objects and modify them much more easily.
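For illustration only, a flattened announcement could look something like this (reusing the ids from your document; every comment and reply sits at the top level and points to its parent via parentId):
{
  "_id": "60aa8ed98ad79d380cb250b4",
  "title": "New Annoucement",
  "comments": [
    { "_id": "60aa8edd8ad79d380cb250b5", "parentId": null, "type": "comment", "content": "one" },
    { "_id": "60aa8f648ad79d380cb250b6", "parentId": "60aa8edd8ad79d380cb250b5", "type": "reply", "content": "two" },
    { "_id": "60aa9123301ccc397057ded5", "parentId": "60aa8f648ad79d380cb250b6", "type": "reply", "content": "why its not deleting" },
    { "_id": "60aa912b301ccc397057ded6", "parentId": "60aa9123301ccc397057ded5", "type": "reply", "content": "1" }
  ]
}
Deleting a comment then becomes a single $pull update on the array, e.g. { $pull: { comments: { _id: commentId } } }, instead of a search through arbitrarily deep nesting.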
Cheers.
I have a database with two different document types:
{
"id": "1",
"type": "User",
"username": "User 1"
}
and a second document type with the following structure:
{
"id": "2",
"type": "Asset",
"name": "Asset one",
"owner_id": "1" //id of the user who owns the asset
}
We need to display the list of existing assets and the name of the owner (side by side). We were able to achieve this by using views and linked documents. The problem is that now we need to be able to search and sort, which is not supported by views.
Is what we're trying to accomplish possible using CouchDB? Can we do this using search indexes?
We're using CouchDB 2.3.1 and we're not able to upgrade (at least for now).
I need to search by username and asset name and also be able to sort by these fields. We don't need a full-featured search; something like case-insensitive matching is good enough.
The id / owner_id specified in the examples represent the document _id. A user will not own more than ~10 assets; the normal scenario is 2 or 3 assets.
Without knowing the complete nature of the asset documents (e.g. lifetime, immutability, etc.), this may get you moving in a positive direction. The problem appears to be that information from both documents is needed to generate a meaningful view, which isn't happening.
Assuming asset names are immutable and the number of assets per user is low, consider decoupling and denormalizing the owner_id relationship by keeping a list of assets in the User document.
For example, a User document where the assets property contains a collection of owned asset document information (_id, name):
{
"_id": "1",
"type": "User",
"username": "User 1",
"assets": [
[
"2",
"Asset one"
],
[
"10",
"Asset ten"
]
]
}
Given this structure, an Asset document is fairly thin:
{
"_id": "2",
"type": "Asset",
"name": "Asset one"
}
I will assume there is much more information in the Asset documents than presented.
So how do we get search and sorted results? Consider a design doc _design/user/_view/assets with the following map function:
function (doc) {
if(doc.type === "User" && doc.assets) {
for(var i = 0; i < doc.assets.length; i++) {
/* emit user name, asset name, value as asset doc id */
emit(doc.username + '/' + doc.assets[i][1], { _id: doc.assets[i][0] });
/* emit asset name with leading /, value as User doc _id */
emit('/' + doc.assets[i][1], { _id: doc._id })
}
}
}
Let's assume the database only has the one user "User 1" and two Asset documents "Asset one" and "Asset ten".
This query (using cURL)
curl -G <db endpoint>/_design/user/_view/assets
yields
{
"total_rows":4,"offset":0,"rows":[
{"id":"1","key":"/Asset one","value":{"_id":"1"}},
{"id":"1","key":"/Asset ten","value":{"_id":"1"}},
{"id":"1","key":"User 1/Asset one","value":{"_id":"2"}},
{"id":"1","key":"User 1/Asset ten","value":{"_id":"10"}}
]
}
Not very interesting, except notice that the rows are returned in ascending order by key. To reverse the order, simply add the descending=true parameter:
curl -G <db endpoint>/_design/user/_view/assets?descending=true
yields
{
"total_rows":4,"offset":0,"rows":[
{"id":"1","key":"User 1/Asset ten","value":{"_id":"10"}},
{"id":"1","key":"User 1/Asset one","value":{"_id":"2"}},
{"id":"1","key":"/Asset ten","value":{"_id":"1"}},
{"id":"1","key":"/Asset one","value":{"_id":"1"}}
]
}
Now here's where things get cool, and those cool things are startkey and endkey.
Given the nature of the keys, we can query all assets for "User 1" and have the Asset documents returned in order of asset name, leveraging the slash in the key:
curl -G <db endpoint>/_design/user/_view/assets
-d "startkey="""User%201/"""" -d "endkey="""User%201/\uFFF0""""
note I'm on Windows, where we have to escape double quotes ;(
yields
{
"total_rows":4,"offset":2,"rows":[
{"id":"1","key":"User 1/Asset one","value":{"_id":"2"}},
{"id":"1","key":"User 1/Asset ten","value":{"_id":"10"}}
]
}
This is a prefix search. Note the use of the high unicode character \uFFF0 as a terminator; we're asking for all documents in the view that start with "User 1/".
Likewise to get a sorted list of all Assets
curl -G <db endpoint>/_design/user/_view/assets
-d "startkey="""/"""" -d "endkey="""/\uFFF0""""
yields
{
"total_rows":4,"offset":0,"rows":[
{"id":"1","key":"/Asset one","value":{"_id":"1"}},
{"id":"1","key":"/Asset ten","value":{"_id":"1"}}
]
}
Since the Asset document _id is emitted, use include_docs to fetch the Asset document:
curl -G <db endpoint>_design/user/_view/assets -d "include_docs=true"
-d "startkey="""User%201/"""" -d "endkey="""User%201/\uFFF0""""
yields
{
"total_rows": 4,
"offset": 2,
"rows": [
{
"id": "1",
"key": "User 1/Asset one",
"value": {
"_id": "2"
},
"doc": {
"_id": "2",
"_rev": "2-f4e78c52b04b77e4b5d2787c21053155",
"type": "Asset",
"name": "Asset one"
}
},
{
"id": "1",
"key": "User 1/Asset ten",
"value": {
"_id": "10"
},
"doc": {
"_id": "10",
"_rev": "2-30cf9245b2f3e95f22a06cee6789d91d",
"type": "Asset",
"name": "Asset 10"
}
}
]
}
The same goes for the asset-only rows, where the User _id is emitted and include_docs fetches the User document.
Caveat
The major drawback here is that deleting an Asset document requires updating the User document; not the end of the world but it would be ultra nice to avoid that dependency.
Given the original 1-1 relationship of asset to user, getting rid of the Asset document altogether and simply storing all Asset data in the User document might be feasible depending on your usage, and it wildly reduces complexity.
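For instance, a fully denormalized User document might simply embed the asset data directly (a sketch, assuming the Asset documents hold nothing that must live in a document of its own):
{
  "_id": "1",
  "type": "User",
  "username": "User 1",
  "assets": [
    { "id": "2", "name": "Asset one" },
    { "id": "10", "name": "Asset ten" }
  ]
}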
I hope the above inspires a solution. Good luck!
I would like to do a query that matches against two properties of the same item in a sub-collection.
Example:
[
{
"name": "Person 1",
"contacts": [
{ "type": "email", "value": "person.1#xpto.org" },
{ "type": "phone", "value": "555-12345" },
]
}
]
I would like to be able to search for emails that contain xpto.org, but
doing something like the following doesn't work:
search.ismatchscoring('email','contacts/type,','full','all') and search.ismatchscoring('/.*xpto.org/','contacts/value,','full','all')
Instead, it evaluates each condition in the context of the main object, so objects like the following will also match:
[
{
"name": "Person 1",
"contacts": [
{ "type": "email", "value": "555-12345" },
{ "type": "phone", "value": "person.1#xpto.org" },
]
}
]
Is there any way around this without having an additional field that concatenates type and value?
Just saw the official doc. At this moment, there's no support for correlated search:
This happens because each clause applies to all values of its field in the entire document, so there's no concept of a "current sub-document".
https://learn.microsoft.com/en-us/azure/search/search-howto-complex-data-types
and https://learn.microsoft.com/en-us/azure/search/search-query-understand-collection-filters
The solution I implemented was to create a different collection per contact type.
This way I'm able to search directly in, let's say, the email collection without the need for correlated search. It might not be the solution for all cases, but it works well in this case.
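For reference, the reshaped document from the question would look roughly like this (the emails / phones field names are just an example, not a requirement):
[
  {
    "name": "Person 1",
    "emails": [ { "value": "person.1#xpto.org" } ],
    "phones": [ { "value": "555-12345" } ]
  }
]
A query along the lines of search.ismatchscoring('/.*xpto.org/','emails/value','full','all') then only ever touches email values, so no correlation between type and value is needed.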
I have a content type entry in Contentful that has fields like this:
"fields": {
"title": "How It Works",
"slug": "how-it-works",
"countries": [
{
"sys": {
"type": "Link",
"linkType": "Entry",
"id": "3S5dbLRGjS2k8QSWqsKK86"
}
},
{
"sys": {
"type": "Link",
"linkType": "Entry",
"id": "wHfipcJS6WUSaKae0uOw8"
}
}
],
"content": [
{
"sys": {
"type": "Link",
"linkType": "Entry",
"id": "72R0oUMi3uUGMEa80kkSSA"
}
}
]
}
I'd like to run a query that would only return entries if they contain a particular country.
I played around with this query:
https://cdn.contentful.com/spaces/aoeuaoeuao/entries?content_type=contentPage&fields.countries=3S5dbLRGjS2k8QSWqsKK86
However, I get this error:
The equals operator cannot be used on fields.countries.en-AU because it has type Object.
I'm playing around with Postman, but I will be using the .NET API.
Is it possible to search for entries and filter on arrays that contain objects?
I'm still learning the API, so I'm guessing it should be pretty straightforward.
Update:
I looked at the request the Contentful Web CMS makes, as this functionality is possible there. They use query params like this:
filters.0.key=fields.countries.sys.id&filters.0.val=3S5dbLRGjS2k8QSWqsKK86
However, this did not work in the delivery API, and might only be an internal query format.
Figured this out. I used the following URL:
https://cdn.contentful.com/spaces/aoeuaoeua/entries?content_type=contentPage&fields.countries.sys.id=wHfipcJS6WUSaKae0uOw8
Note the query parameter fields.countries.sys.id
I have a web-form builder for science events. The event moderator creates a registration form with an arbitrary number of boolean, integer, enum and text fields.
The created form is used to:
register a new member for an event;
search through registered members.
What is the best search tool for the second task (searching the members of an event)? Is Elasticsearch a good fit for this?
I wrote a post about how to index arbitrary data into Elasticsearch and then search it by specific fields and values, all without blowing up your index mapping.
The post is here: http://smnh.me/indexing-and-searching-arbitrary-json-data-using-elasticsearch/
In short, you will need to do the following steps to get what you want:
Create a special index described in the post.
Flatten the data you want to index using the flattenData function:
https://gist.github.com/smnh/30f96028511e1440b7b02ea559858af4.
Create a document with the original and flattened data and index it into Elasticsearch:
{
"data": { ... },
"flatData": [ ... ]
}
Optional: use Elasticsearch aggregations to find which fields and types have been indexed.
Execute queries on the flatData object to find what you need.
Example
Based on your original question, let's assume that the first event moderator created a form with the following fields to register members for a science event:
name string
age long
sex long - 0 for male, 1 for female
In addition to this data, the related event probably has some sort of id, let's call it eventId. So the final document could look like this:
{
"eventId": "2T73ZT1R463DJNWE36IA8FEN",
"name": "Bob",
"age": 22,
"sex": 0
}
Now, before we index this document, we will flatten it using the flattenData function:
flattenData(document);
This will produce the following array:
[
{
"key": "eventId",
"type": "string",
"key_type": "eventId.string",
"value_string": "2T73ZT1R463DJNWE36IA8FEN"
},
{
"key": "name",
"type": "string",
"key_type": "name.string",
"value_string": "Bob"
},
{
"key": "age",
"type": "long",
"key_type": "age.long",
"value_long": 22
},
{
"key": "sex",
"type": "long",
"key_type": "sex.long",
"value_long": 0
}
]
Then we will wrap this data in a document as shown before and index it.
Then the second event moderator creates another form that has a new field, a field with the same name and type, and also a field with the same name but a different type:
name string
city string
sex string - "male" or "female"
This event moderator decided that instead of having 0 and 1 for male and female, his form will allow choosing between two strings - "male" and "female".
Let's try to flatten the data submitted by this form:
flattenData({
"eventId": "F1BU9GGK5IX3ZWOLGCE3I5ML",
"name": "Alice",
"city": "New York",
"sex": "female"
});
This will produce the following data:
[
{
"key": "eventId",
"type": "string",
"key_type": "eventId.string",
"value_string": "F1BU9GGK5IX3ZWOLGCE3I5ML"
},
{
"key": "name",
"type": "string",
"key_type": "name.string",
"value_string": "Alice"
},
{
"key": "city",
"type": "string",
"key_type": "city.string",
"value_string": "New York"
},
{
"key": "sex",
"type": "string",
"key_type": "sex.string",
"value_string": "female"
}
]
Then, after wrapping the flattened data in a document and indexing it into Elasticsearch, we can execute complicated queries.
For example, to find members named "Bob" registered for the event with ID 2T73ZT1R463DJNWE36IA8FEN we can execute the following query:
{
"query": {
"bool": {
"must": [
{
"nested": {
"path": "flatData",
"query": {
"bool": {
"must": [
{"term": {"flatData.key": "eventId"}},
{"match": {"flatData.value_string.keyword": "2T73ZT1R463DJNWE36IA8FEN"}}
]
}
}
}
},
{
"nested": {
"path": "flatData",
"query": {
"bool": {
"must": [
{"term": {"flatData.key": "name"}},
{"match": {"flatData.value_string": "bob"}}
]
}
}
}
}
]
}
}
}
Elasticsearch automatically detects the field content in order to index it correctly, even if the mapping hasn't been defined previously. So yes, Elasticsearch suits these cases well.
However, you may want to fine-tune this behavior, or the default mapping applied by Elasticsearch may not correspond to what you need: in that case, take a look at the default mapping or, for even further control, the dynamic templates feature.
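As a rough sketch (the members index name is made up here, and the exact mapping syntax differs between Elasticsearch versions), a dynamic template that indexes every new string field as a keyword could look like:
curl -XPUT 'localhost:9200/members' -H 'Content-Type: application/json' -d '{
  "mappings": {
    "dynamic_templates": [
      {
        "strings_as_keywords": {
          "match_mapping_type": "string",
          "mapping": { "type": "keyword" }
        }
      }
    ]
  }
}'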
If you let your end users decide the keys you store things in, you'll have an ever-growing mapping and cluster state, which is problematic.
This case and a suggested solution are covered in this article on common problems with Elasticsearch.
Essentially, you want to have everything that can possibly be user-defined as a value. Using nested documents, you can have a key field and differently mapped value fields to achieve pretty much the same thing.
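A minimal mapping sketch of that idea (the form_data index name and the attributes / value_* field names are illustrative, not a fixed convention) might be:
curl -XPUT 'localhost:9200/form_data' -H 'Content-Type: application/json' -d '{
  "mappings": {
    "properties": {
      "attributes": {
        "type": "nested",
        "properties": {
          "key": { "type": "keyword" },
          "value_string": { "type": "text" },
          "value_long": { "type": "long" },
          "value_bool": { "type": "boolean" }
        }
      }
    }
  }
}'
Queries then combine a term clause on key with a clause on the appropriate value_* field inside a nested query, much like the flatData example above.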