I have an index with multiple documents. Each document contains the fields below:
name
adhar_number
pan_number
acc_number
I want to create an Elasticsearch DSL query. Two inputs are available for this query, adhar_number and pan_number, and the query should combine them with an OR condition.
Example: if a document contains only the provided adhar_number, I still want that document returned.
I have a dictionary (my_dict) with the following contents:
{
"adhar_number": "123456789012",
"pan_number": "BGPPG4315B"
}
I tried like below:
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search
es = Elasticsearch([{'host': 'localhost', 'port': 9200}])
s = Search(using=es, index="my_index")
for key, value in my_dict.items():
    s = s.query("match", **{key: value})
print(s.to_dict())
response = s.execute()
print(response.to_dict())
It creates the query below:
{
'query': {
'bool': {
'must': [
{
'match': {
'adhar_number': '123456789012'
}
},
{
'match': {
'pan_number': 'BGPPG4315B'
}
}
]
}
}
}
The above code gives me results with an AND condition instead of an OR condition.
Please suggest how to build this query with an OR condition.
To fix the ES query itself, all you need to do is use 'should' instead of 'must':
{
'query': {
'bool': {
'should': [
{
'match': {
'adhar_number': '123456789012'
}
},
{
'match': {
'pan_number': 'BGPPG4315B'
}
}
]
}
}
}
To achieve this in Python, see the following example from the elasticsearch_dsl docs. The default logic is AND, but you can override it to OR as shown below.
Query combination
Query objects can be combined using logical operators:
Q("match", title='python') | Q("match", title='django')
# {"bool": {"should": [...]}}
Q("match", title='python') & Q("match", title='django')
# {"bool": {"must": [...]}}
~Q("match", title="python")
# {"bool": {"must_not": [...]}}
When you call the .query() method multiple times, the & operator will be used internally:
s = s.query().query()
print(s.to_dict())
# {"query": {"bool": {...}}}
If you want to have precise control over the query form, use the Q shortcut to directly construct the combined
query:
q = Q('bool',
      must=[Q('match', title='python')],
      should=[Q(...), Q(...)],
      minimum_should_match=1)
s = Search().query(q)
So for your case you want something like:
q = Q('bool', should=[Q('match', **{key: value}) for key, value in my_dict.items()])
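Equivalently, since the | operator produces a bool/should query, a minimal sketch reusing the es client, index name, and my_dict from the question could look like this:

from elasticsearch_dsl import Q, Search

q = None
for key, value in my_dict.items():
    clause = Q("match", **{key: value})
    q = clause if q is None else q | clause  # OR the clauses together

s = Search(using=es, index="my_index").query(q)
print(s.to_dict())  # {'query': {'bool': {'should': [...]}}}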
You can use should as also mentioned by #ifo20. Note that you most likely want to define the minimum_should_match parameter as well:
You can use the minimum_should_match parameter to specify the number or percentage of should clauses returned documents must match.
If the bool query includes at least one should clause and no must or filter clauses, the default value is 1. Otherwise, the default value is 0.
{
'query': {
'bool': {
'should': [
{
'match': {
'adhar_number': '123456789012'
}
},
{
'match': {
'pan_number': 'BGPPG4315B'
}
}
],
"minimum_should_match" : 1
}
}
}
Note also that the should clauses contribute to the final score. I don't know how to avoid this, but you may not want that as part of an OR logic.
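If you are building the query from Python with elasticsearch_dsl as in the question, minimum_should_match can be passed directly when constructing the bool query; a small sketch reusing my_dict and the es client from the question:

from elasticsearch_dsl import Q, Search

q = Q('bool',
      should=[Q('match', **{key: value}) for key, value in my_dict.items()],
      minimum_should_match=1)
s = Search(using=es, index="my_index").query(q)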
On a daily basis, I'm pushing data (a time series) to Elasticsearch. My indices are named myindex_*, where * is that day's date, and an index pattern has been set up for them. Thus after a week, I have: myindex_2022-06-20, myindex_2022-06-21, ... myindex_2022-06-27.
Let's assume my indices store product prices. Thus inside each myindex_*, I have:
myindex_2022-06-26 includes many product prices like this:
{
"reference_code": "123456789",
"price": 10.00
},
...
myindex_2022-06-27:
{
"reference_code": "123456789",
"price": 12.00
},
I'm using this query to get the reference code and the corresponding prices. And it works great.
const data = await elasticClient.search({
index: 'myindex_2022-06-27',
body: {
query: {
match: {
"reference_code": "123456789"
}
}
}
});
But I would like a query such that, if there is no data in the index for 2022-06-27, it checks the previous index, 2022-06-26, and so on (up to e.g. 10 indices back).
I'm not sure, but it seems to do this when I replace myindex_2022-06-27 with myindex_* (not sure whether that's the default behaviour).
The issue is that when I query this way, I do get prices from other indices, but it seems to use the oldest one. I would like to get the newest one instead.
How should I proceed?
If you query with an index wildcard, Elasticsearch returns a list of documents where every document includes some meta fields such as _index and _id.
You can sort by _index to make Elasticsearch return the latest document at position [0] in your list.
const data = await elasticClient.search({
  index: 'myindex_2022-*',
  body: {
    query: {
      match: {
        "reference_code": "123456789"
      }
    },
    sort: { "_index": "desc" }
  }
});
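Depending on the client version the response may be wrapped in a body property; assuming a client that returns the search response body directly, the newest document is then simply the first hit:

const newestHit = data.hits.hits[0]; // first hit after sorting by _index descending
const newestPrice = newestHit ? newestHit._source.price : undefined;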
I have documents in a MongoDB as below -
[
{
"_id": "17tegruebfjt73efdci342132",
"name": "Test User1",
"obj": "health=8,type=warrior",
},
{
"_id": "wefewfefh32j3h42kvci342132",
"name": "Test User2",
"obj": "health=6,type=magician",
}
.
.
]
I want to run a query, say health>6, and it should return the "Test User1" entry. The obj key is indexed as a text field, so I can do {$text:{$search:"health=8"}} to get an exact match, but I am trying to incorporate mathematical operators into the search.
I am aware of the $gt and $lt operators; however, they cannot be used in this case because health is not a key of the document. The easiest way out would be to make health a key of the document, but I cannot change the document structure due to certain constraints.
Is there anyway this can be achieved? I am aware that mongo supports running javascript code, not sure if that can help in this case.
I don't think it's possible with a $text search index, but you can transform your obj conditions into an array of objects using an aggregation query:
$split to split obj by "," which returns an array
$map to iterate over the above split result array
$split to split the current condition by "=" which returns an array
$let to declare the variable cond holding the result of the above split
$first to return the first element of that split as k, the key of the condition
$last to return the last element of that split as v, the value of the condition
Now we have an array of key/value objects parsed from the string conditions:
"objTransform": [
{ "k": "health", "v": "9" },
{ "k": "type", "v": "warrior" }
]
$match condition for key and value to match in the same object using $elemMatch
$unset to remove transform array objTransform, because it's not needed
db.collection.aggregate([
{
$addFields: {
objTransform: {
$map: {
input: { $split: ["$obj", ","] },
in: {
$let: {
vars: {
cond: { $split: ["$$this", "="] }
},
in: {
k: { $first: "$$cond" },
v: { $last: "$$cond" }
}
}
}
}
}
}
},
{
$match: {
objTransform: {
$elemMatch: {
k: "health",
v: { $gt: "8" }
}
}
}
},
{ $unset: "objTransform" }
])
A second, simplified version of the above aggregation query does less work in the condition transformation, if you can manage the rest on the client side:
$split to split obj by "," which returns an array
$map to iterate over the above split result array
$split to split the current condition by "=" which returns an array
Now we have a nested array of string conditions:
"objTransform": [
["type", "warrior"],
["health", "9"]
]
$match condition for key and value to match in the array element using $elemMatch, "0" to match the first position of the array and "1" to match the second position of the array
$unset to remove transform array objTransform, because it's not needed
db.collection.aggregate([
{
$addFields: {
objTransform: {
$map: {
input: { $split: ["$obj", ","] },
in: { $split: ["$$this", "="] }
}
}
}
},
{
$match: {
objTransform: {
$elemMatch: {
"0": "health",
"1": { $gt: "8" }
}
}
}
},
{ $unset: "objTransform" }
])
Using JavaScript is one way of doing what you want. Below is a find that uses the index on obj by finding documents that have health= text followed by an integer (if you want, you can anchor that with ^ in the regex).
It then uses a JavaScript function to parse out the actual integer after substringing your way past the health= part, doing a parseInt to get the int, and then the comparison operator/value you mentioned in the question.
db.collection.find({
// use the index on obj to potentially speed up the query
"obj":/health=\d+/,
// now apply a function to narrow down and do the math
$where: function() {
var i = this.obj.indexOf("health=") + 7;
var s = this.obj.substring(i);
var m = s.match(/\d+/);
if (m)
return parseInt(m[0]) > 6;
return false;
}
})
You can of course tweak it to your heart's content to use other operators.
NOTE: I'm using a JavaScript regex literal, which may not be supported in your MongoDB environment. I used the mongo shell r4.2.6, where it is supported. If it isn't supported in yours, you will have to extract the integer in the JavaScript a different way.
I provided a Mongo Playground to try it out if you want to tweak it, but you'll get
Invalid query:
Line 3: Javascript regex are not supported. Use "$regex" instead
until you change it to account for the regex issue noted above. Still, if you're using the latest and greatest, this shouldn't be a limitation.
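If JavaScript regex literals are unavailable in your environment, a rough sketch of the same idea that uses $regex for the filter and plain string parsing inside $where (no regex in the function body) might look like this:

db.collection.find({
  // server-side $regex instead of a JavaScript regex literal
  "obj": { "$regex": "health=\\d+" },
  "$where": function() {
    var start = this.obj.indexOf("health=") + 7;
    var end = this.obj.indexOf(",", start);
    var num = end === -1 ? this.obj.substring(start) : this.obj.substring(start, end);
    return parseInt(num, 10) > 6;
  }
})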
Performance
Disclaimer: This analysis is not rigorous.
I ran two queries against a small collection (a bigger one could possibly have produced different results) with Explain Plan in MongoDB Compass. The first query is the one above; the second is the same query, but with the obj filter removed.
The plans turned out to be different: the number of documents examined is fewer for the first query, and the first query uses the index.
The execution times are meaningless because the collection is small. The results do seem to square with the documentation, but the documentation seems a little at odds with itself. Here are two excerpts
Use the $where operator to pass either a string containing a JavaScript expression or a full JavaScript function to the query system. The $where provides greater flexibility, but requires that the database processes the JavaScript expression or function for each document in the collection.
and
Using normal non-$where query statements provides the following performance advantages:
MongoDB will evaluate non-$where components of query before $where statements. If the non-$where statements match no documents, MongoDB will not perform any query evaluation using $where.
The non-$where query statements may use an index.
I'm not totally sure what to make of this, TBH. As a general solution it might be useful because it seems you could generate queries that can handle all of your operators.
In the following example, assume the document is in the db.people collection.
How do I remove the 3rd element of the interests array by its index?
{
"_id" : ObjectId("4d1cb5de451600000000497a"),
"name" : "dannie",
"interests" : [
"guitar",
"programming",
"gadgets",
"reading"
]
}
This is my current solution:
var interests = db.people.findOne({"name":"dannie"}).interests;
interests.splice(2,1)
db.people.update({"name":"dannie"}, {"$set" : {"interests" : interests}});
Is there a more direct way?
There is no straightforward way of pulling/removing by array index. In fact, this is an open issue (http://jira.mongodb.org/browse/SERVER-1014); you may vote for it.
The workaround is using $unset and then $pull (interests.2 is the 3rd element, counting from zero):
db.lists.update({}, {$unset : {"interests.2" : 1 }})
db.lists.update({}, {$pull : {"interests" : null}})
Update: as mentioned in some of the comments this approach is not atomic and can cause some race conditions if other clients read and/or write between the two operations. If we need the operation to be atomic, we could:
Read the document from the database
Update the document and remove the item in the array
Replace the document in the database. To ensure the document has not changed since we read it, we can use the "update if current" pattern described in the MongoDB docs; a rough sketch follows below.
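Here is a minimal shell sketch of that read-modify-replace approach for the document in the question (assuming updateOne is available):

var doc = db.people.findOne({ "name": "dannie" });
var original = doc.interests.slice(); // keep a copy of what we read
doc.interests.splice(2, 1);           // remove the 3rd element locally

// only write back if the array is still exactly what we read ("update if current")
var res = db.people.updateOne(
    { "_id": doc._id, "interests": original },
    { "$set": { "interests": doc.interests } }
);
// res.modifiedCount === 0 means another client changed the document in between; retry if needed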
You can use the $pull modifier of the update operation for removing a particular element from an array. In the case you provided, the query will look like this:
db.people.update({"name":"dannie"}, {'$pull': {"interests": "guitar"}})
Also, you may consider using $pullAll for removing all occurrences. More about this on the official documentation page - http://www.mongodb.org/display/DOCS/Updating#Updating-%24pull
This doesn't use the index as a criterion for removing an element, but it still might help in cases similar to yours. IMO, using indexes for addressing elements inside an array is not very reliable, since MongoDB isn't consistent about element order as far as I know.
In MongoDB 4.2 you can do this:
db.example.update({}, [
{$set: {field: {
$concatArrays: [
{$slice: ["$field", P]},
{$slice: ["$field", {$add: [1, P]}, {$size: "$field"}]}
]
}}}
]);
P is the index of the element you want to remove from the array.
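For example, to drop the 3rd interest (P = 2) from the document in the question, the same pipeline update might look like this:

db.people.update({ "name": "dannie" }, [
    { $set: { interests: {
        $concatArrays: [
            { $slice: ["$interests", 2] },                          // elements before index 2
            { $slice: ["$interests", 3, { $size: "$interests" }] }  // elements after index 2
        ]
    } } }
]);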
If you want to remove from index P till the end:
db.example.update({}, [
{ $set: { field: { $slice: ["$field", P] } } },
]);
Starting in Mongo 4.4, the $function aggregation operator allows applying a custom javascript function to implement behaviour not supported by the MongoDB Query Language.
For instance, in order to update an array by removing an element at a given index:
// { "name": "dannie", "interests": ["guitar", "programming", "gadgets", "reading"] }
db.collection.update(
{ "name": "dannie" },
[{ $set:
{ "interests":
{ $function: {
body: function(interests) { interests.splice(2, 1); return interests; },
args: ["$interests"],
lang: "js"
}}
}
}]
)
// { "name": "dannie", "interests": ["guitar", "programming", "reading"] }
$function takes 3 parameters:
body, which is the function to apply, whose parameter is the array to modify. The function here simply consists in using splice to remove 1 element at index 2.
args, which contains the fields from the record that the body function takes as parameter. In our case "$interests".
lang, which is the language in which the body function is written. Only js is currently available.
Rather than using $unset (as in the accepted answer), I solve this by setting the element to a unique value (i.e. not NULL) and then immediately pulling that value. A little safer from an async perspective. Here is the code:
var update = {};
var key = "ToBePulled_" + new Date().toString(); // a unique sentinel value
update['feedback.' + index] = key;               // index = position of the element to remove
Venues.update(venueId, {$set: update});
return Venues.update(venueId, {$pull: {feedback: key}});
Hopefully mongo will address this, perhaps by extending the $position modifier to support $pull as well as $push.
I would recommend using a GUID (I tend to use ObjectID) field, or an auto-incrementing field for each sub-document in the array.
With this GUID it is easy to issue a $pull and be sure that the correct one will be pulled. Same goes for other array operations.
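For example, with a hypothetical structure where each interest is a subdocument carrying its own _id, the removal becomes an ordinary $pull:

// hypothetical structure: { "interests": [ { "_id": ObjectId(...), "value": "guitar" }, ... ] }
var idToRemove = db.people.findOne({ "name": "dannie" }).interests[2]._id;
db.people.update(
    { "name": "dannie" },
    { $pull: { "interests": { "_id": idToRemove } } }
);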
For people who are looking for an answer using Mongoose with Node.js, this is how I do it.
exports.deletePregunta = function (req, res) {
  let codTest = req.params.tCodigo;
  let indexPregunta = req.body.pregunta; // the index that comes from the frontend
  let inPregunta = `tPreguntas.0.pregunta.${indexPregunta}`; // my field in my db
  let inOpciones = `tPreguntas.0.opciones.${indexPregunta}`; // my other field in my db
  let inTipo = `tPreguntas.0.tipo.${indexPregunta}`; // my other field in my db

  Test.findOneAndUpdate({ tCodigo: codTest }, {
    '$unset': {
      [inPregunta]: 1, // computed property names need the []
      [inOpciones]: 1,
      [inTipo]: 1
    }
  }).then(() => {
    // return the inner promise so its errors reach the catch below
    return Test.findOneAndUpdate({ tCodigo: codTest }, {
      '$pull': {
        'tPreguntas.0.pregunta': null,
        'tPreguntas.0.opciones': null,
        'tPreguntas.0.tipo': null
      }
    });
  }).then(testModificado => {
    if (!testModificado) {
      res.status(404).send({ accion: 'deletePregunta', message: 'No se ha podido borrar esa pregunta' });
    } else {
      res.status(200).send({ accion: 'deletePregunta', message: 'Pregunta borrada correctamente' });
    }
  }).catch(err => {
    res.status(500).send({ accion: 'deletePregunta', message: 'error en la base de datos ' + err });
  });
}
I can rewrite this answer if it isn't clear, but I think it's okay.
Hope this helps you; I lost a lot of time on this issue.
It is a little bit late, but those who are using Robo 3T may find this useful:
db.getCollection('people').update(
{"name":"dannie"},
{ $pull:
{
interests: "guitar" // you can change value to
}
},
{ multi: true }
);
If you have values something like -
property: [
{
"key" : "key1",
"value" : "value 1"
},
{
"key" : "key2",
"value" : "value 2"
},
{
"key" : "key3",
"value" : "value 3"
}
]
and you want to delete the entry whose key is key3, then you can use something like this:
db.getCollection('people').update(
{"name":"dannie"},
{ $pull:
{
property: { key: "key3"} // you can change value to
}
},
{ multi: true }
);
The same goes for the nested property.
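For instance, if the property array were nested one level deeper (a hypothetical details.property path), the same $pull works on the dotted path:

db.getCollection('people').update(
    { "name": "dannie" },
    { $pull: { "details.property": { key: "key3" } } },
    { multi: true }
);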
If the element you want to remove happens to be the first or the last one in the array, this can be done with the $pop operator:
db.getCollection('collection_name').updateOne( {}, {$pop: {"path_to_array_object": 1}}) // 1 removes the last element, -1 the first
{
"_id" : ObjectId("5852725660632d916c8b9a38"),
"response_log" : [
{
"campaignId" : "AA",
"created_at" : ISODate("2016-12-20T11:53:55.727Z")
},
{
"campaignId" : "AB",
"created_at" : ISODate("2016-12-20T11:55:55.727Z")
}]
}
I have a document which contains an array. I want to select all documents that do not have a response_log.created_at within the last 2 hours from the current time, and whose count of response_log.created_at entries in the last 24 hours is less than 3.
I am unable to figure out how to go about it. Please help.
You can use the aggregation framework to filter the documents. A pipeline with $match and $redact steps will do the filtering.
Consider running the following aggregate operation where $redact allows you to process the logical condition with the $cond operator and uses the system variables $$KEEP to "keep" the document where the logical condition is true or $$PRUNE to "remove" the document where the condition is false.
This operation is similar to having a $project pipeline that selects the fields in the collection and creates a new field that holds the result from the logical condition query and then a subsequent $match, except that $redact uses a single pipeline stage which is more efficient:
var MongoClient = require('mongodb').MongoClient, // assuming the native Node.js driver
config = require('./config'), // assuming a local module exposing the connection string
moment = require('moment'),
last2hours = moment().subtract(2, 'hours').toDate(),
last24hours = moment().subtract(24, 'hours').toDate();
MongoClient.connect(config.database)
.then(function(db) {
return db.collection('MyCollection')
})
.then(function (collection) {
return collection.aggregate([
{ '$match': { 'response_log.created_at': { '$not': { '$gt': last2hours } } } }, // no entry within the last 2 hours
{
'$redact': {
'$cond': [
{
'$lt': [
{
'$size': {
'$filter': {
'input': '$response_log',
'as': 'res',
'cond': { // keep only the entries from the last 24 hours
'$gt': [
'$$res.created_at',
last24hours
]
}
}
}
},
3
]
},
'$$KEEP',
'$$PRUNE'
]
}
}
]).toArray();
})
.then(function(docs) {
console.log(docs)
})
.catch(function(err) {
throw err;
});
Explanations
In the above aggregate operation, if you execute the first $match pipeline step
collection.aggregate([
{ '$match': { 'response_log.created_at': { '$not': { '$gt': last2hours } } } }
])
The documents returned will be the ones that do not have "response_log.created_at" in last 2 hours from current time where the variable last2hours is created with the momentjs library using the subtract API.
The following $redact pipeline stage will then further filter the documents from the above by using the $cond ternary operator, which evaluates a logical expression that uses $size to get the count and $filter to return a filtered array containing only the elements created within the last 24 hours:
{
'$lt': [
{
'$size': {
'$filter': {
'input': '$response_log',
'as': 'res',
'cond': { '$gt': ['$$res.created_at', last24hours] }
}
}
},
3
]
}
to $$KEEP the document if this condition is true or $$PRUNE to "remove" the document where the evaluated condition is false.
I know that this is probably not the answer that you're looking for but this may not be the best use case for Mongo. It's easy to do that in a relational database, it's easy to do that in a database that supports map/reduce but it will not be straightforward in Mongo.
If your data looked different and you kept each log entry as a separate document that references the object (with id 5852725660632d916c8b9a38 in this case) instead of being a part of it, then you could make a simple query for the latest log entry with that id. This is what I would do in your case if I were to use Mongo for that (which I wouldn't).
What you can also do is keep a separate collection in Mongo, or add a new property to the object you have here that stores the date of the latest campaign added. Then it would be very easy to search for what you need.
When you are working with a database like Mongo, the shape of your data must reflect what you need to do with it, as in this case. Adding a last-campaign date and updating it on every campaign added would let you search for the campaigns you need very easily.
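For instance, a hypothetical last_response_at field kept up to date on every push would reduce the first condition to an ordinary range query (reusing the last2hours variable from the code above):

// hypothetical: maintain last_response_at whenever a log entry is pushed
db.MyCollection.updateOne(
    { "_id": ObjectId("5852725660632d916c8b9a38") },
    {
        $push: { "response_log": { "campaignId": "AC", "created_at": new Date() } },
        $max:  { "last_response_at": new Date() }
    }
);

// "no response in the last 2 hours" then becomes a plain query
db.MyCollection.find({ "last_response_at": { $lt: last2hours } });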
If you want to be able to make any searches and aggregates possible then you may be better off using a relational database.
Is it possible to $addToSet and determine which items were added to the set?
i.e. $addToSet tags to a post and return which ones were actually added
Not really, and not with a single statement. The closest you can get is with the findAndModify() method, comparing the original document form to the fields that you submitted in your $addToSet statement:
So considering an initial document:
{
"fields": [ "B", "C" ]
}
And then processing this code:
var setInfo = [ "A", "B" ];
var matched = [];
var doc = db.collection.findAndModify(
{ "_id": "myid" },
{
"$addToSet": { "fields": { "$each": setInfo } }
}
);
doc.fields.forEach(function(field) {
if ( setInfo.indexOf(field) != -1 ) {
matched.push(field);
}
});
return matched;
So that is a basic JavaScript abstraction of the method, not the exact syntax for the native Node.js driver or Mongoose, but it does describe the basic premise.
So as long as you are using a "default" implementation that returns the "original" state of the document before it was modified, you can play "spot the difference", as shown in the code example.
But doing this over general "update" operations is just not possible, as they are designed to possibly affect one or more objects and never return this detail.