When I query a collection in MongoDB, it returns results like:
[ { details:
[ { owner: '57f52829bcc705bb1c37d611',
nameprd: 'fsfsdaf',
price: 15000000,
descrice: 'sfsdfsdaf',
number: 4,
dateOff: '2016-06-12T17:00:00.000Z',
_csrf: 'CPlxeLpq-vYfTTWTgSpR6bsyapbDVgDCKzTc',
image: 'samsung-galaxy-note-7.png',
createdAt: '2016-10-06T16:43:11.109Z',
updatedAt: '2016-10-06T16:43:13.061Z',
id: '57f67f1f7ab99e5824652208' } ],
name: 'Máy tính bảng',
_csrf: 'Ze6OhtgL-2hZvG7TuP9NO4fjY90rA7x46bWA',
createdAt: '2016-10-05T16:19:53.331Z',
updatedAt: '2016-10-06T16:43:13.021Z',
id: '57f52829bcc705bb1c37d611' },
]
Now, how can I get the value called details from this result?
Thanks.
You should add the following projection to the query: ,{'details':1}
For example:
If that is the original query:
db.person.find({'name':'joe'})
Then the following query returns only the details value:
db.person.find({'name':'joe'},{'details':1})
Adding ,{'details':1} means you only want the data for details. It acts as a projection that limits the fields returned by the query.
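Once the query returns, each document still carries its details as an array field. A minimal plain-JavaScript sketch of pulling it out of the result shape shown in the question (values trimmed to a subset for brevity):

```javascript
// Hypothetical shape of the result array from the question
const results = [
  {
    name: 'Máy tính bảng',
    details: [{ nameprd: 'fsfsdaf', price: 15000000, number: 4 }],
  },
];

// Each returned document exposes its own details array
const details = results.map(doc => doc.details);
console.log(details[0][0].price); // 15000000
```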
The official documentation already shows how to do that. Below is an example that works fine:
Example: 1
LET documents = [
{ name: 'Doc 1', value: 111, description: 'description 111' },
{ name: 'Doc 2', value: 222, description: 'description 2' },
{ name: 'Doc 3', value: 333, description: 'description 3' }
]
FOR doc IN documents
UPSERT { name: doc.name, description: doc.description }
INSERT doc
UPDATE doc
IN MyCollection
But, I want to check different multiple keys for each document on UPSERT, like:
Example: 2
LET documents = [
{ name: 'Doc 1', value: 777, description: 'description 111' },
{ name: 'Doc 2', value: 888, description: 'description 2' },
{ name: 'Doc 3', value: 999, description: 'description 3' }
]
FOR doc IN documents
UPSERT {
{ name: doc.name, description: doc.description },
{ value: doc.value, description: doc.description },
{ name: doc.name, value: doc.value }
}
INSERT doc
UPDATE doc
IN MyCollection
Or is there any other way (using a filter or something)? I have tried, but nothing works.
If I understand your problem, you want to update a document if an existing one matches on at least 2 fields, and otherwise insert it as new.
UPSERT won't be able to do that: it can only do one match expression, so a subquery is necessary. In the solution below, I run a query to find the key of the first document that matches at least 2 fields. If there is no such document, it returns null.
Then UPSERT can work by matching _key against that.
LET documents = [
{ name: 'Doc 1', value: 777, description: 'description 111' },
{ name: 'Doc 2', value: 888, description: 'description 2' },
{ name: 'Doc 3', value: 999, description: 'description 3' }
]
FOR doc IN documents
LET matchKey = FIRST(
FOR rec IN MyCollection
FILTER (rec.name==doc.name) + (rec.value==doc.value) + (rec.description==doc.description) > 1
LIMIT 1
RETURN rec._key
)
UPSERT {_key:matchKey}
INSERT doc
UPDATE doc
IN MyCollection
Note: there's a trick here with adding booleans together, which works because true is converted to 1 and false to 0. You can also write it out explicitly, like this: (rec.name==doc.name?1:0)
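The boolean-addition trick can be checked outside AQL, since plain JavaScript coerces booleans the same way when adding them (values below are taken from the example documents):

```javascript
// true coerces to 1 and false to 0 when added, so summing the
// comparisons counts how many fields match.
const rec = { name: 'Doc 1', value: 777, description: 'description 111' };
const doc = { name: 'Doc 1', value: 888, description: 'description 111' };

const matches =
  (rec.name === doc.name) +
  (rec.value === doc.value) +
  (rec.description === doc.description);

console.log(matches); // 2 -> at least two fields match, so this pair would be merged
```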
While this will work for you, it's not a very efficient solution. In fact, there is no efficient one for this case, because all the existing documents need to be scanned to find a matching one for each document to be added or updated. I'm not sure what kind of problem you are trying to solve with this, but it might be better to rethink your design so that the matching condition can be simpler.
Imagine I have an array of objects, available before the aggregate query:
const groupBy = [
{
realm: 1,
latest_timestamp: 1318874398, //Date.now() values, usually different to each other
item_id: 1234, //always the same
},
{
realm: 2,
latest_timestamp: 1312467986, //actually it's $max timestamp field from the collection
item_id: 1234,
},
{
realm: ..., //there are many of them
latest_timestamp: ...,
item_id: 1234,
},
{
realm: 10,
latest_timestamp: 1318874398, //but sometimes they can be the same
item_id: 1234,
},
]
And collection (example set available on MongoPlayground) with the following schema:
{
realm: Number,
timestamp: Number,
item_id: Number,
field: Number, //any other useless fields in this case
}
My problem is: how do I $group the values from the collection via the aggregation framework, using the already available set of data (from groupBy)?
What has been tried already.
Okay, let's skip crap ideas, like:
for (const element of groupBy) {
//array of `find` queries
}
My current working aggregation query is something like that:
//first stage
{
$match: {
"item_id": 1234,
"realm": { $in: [1, 2, 3, 4, ..., 10] }
}
},
{
$group: {
_id: {
realm: '$realm',
},
latest_timestamp: {
$max: '$timestamp',
},
data: {
$push: '$$ROOT',
},
},
},
{
$unwind: '$data',
},
{
$addFields: {
'data.latest_timestamp': {
$cond: {
if: {
$eq: ['$data.timestamp', '$latest_timestamp'],
},
then: '$latest_timestamp',
else: '$$REMOVE',
},
},
},
},
{
$replaceRoot: {
newRoot: '$data',
},
},
//At last, after this stages I can do useful job
but I find it a bit clumsy, and I have heard that using mapReduce could solve my problem a bit faster than this query (though the official docs don't sound promising about it). Is that true?
As of now, I am using 4 or 5 stages before I start working with useful (for me) documents.
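What the $group/$max portion of the pipeline computes can be sketched in plain JavaScript: for each realm, keep only the document carrying the highest timestamp. Sample values below are made up for illustration:

```javascript
// For each realm, retain the document with the $max timestamp,
// mirroring the $group + $addFields + $replaceRoot stages above.
const docs = [
  { realm: 1, timestamp: 1318874398, item_id: 1234 },
  { realm: 1, timestamp: 1312467986, item_id: 1234 },
  { realm: 2, timestamp: 1312467986, item_id: 1234 },
];

const latestByRealm = {};
for (const d of docs) {
  const cur = latestByRealm[d.realm];
  if (!cur || d.timestamp > cur.timestamp) latestByRealm[d.realm] = d;
}

const result = Object.values(latestByRealm);
// result holds one document per realm, each with that realm's latest timestamp
```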
Recent update:
I have checked the $facet stage and find it promising for this particular case. It will probably help me out.
For what it's worth:
After receiving documents from the necessary stages, I build a representative cluster chart, which you may also know as a heatmap.
After that I was iterating over each document (or array of objects) one by one to find their correct x and y coordinates, which should be:
[
{
x: x (number, actual $price),
y: y (number, actual $realm),
value: price * quantity,
quantity: sum_of_quantity_on_price_level
}
]
As of now, it's old, awful code with nested for...loops, but in the future I will use the $facet => $bucket operators for that kind of job.
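The per-(price, realm) rollup behind those heatmap points can be sketched in plain JavaScript. The field names (price, quantity) are assumptions for illustration, not the real schema:

```javascript
// Accumulate one point per (price, realm) pair, summing quantity and
// computing value = price * quantity, as described for the chart above.
const orders = [
  { realm: 1, price: 10, quantity: 2 },
  { realm: 1, price: 10, quantity: 3 },
  { realm: 2, price: 15, quantity: 1 },
];

const points = {};
for (const o of orders) {
  const key = `${o.price}:${o.realm}`;
  points[key] = points[key] || { x: o.price, y: o.realm, value: 0, quantity: 0 };
  points[key].value += o.price * o.quantity;  // price * quantity
  points[key].quantity += o.quantity;         // sum_of_quantity_on_price_level
}
const chart = Object.values(points);
```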
So, I have found an answer to my question in a different but related way.
I was thinking about using the $facet operator, and to be honest it's still an option, but using it as below is bad practice.
//building the $facet query before aggregation
const ObjectQuery = {}
for (const realm of realms) {
  Object.assign(ObjectQuery, { [realm.name]: [ ... ] })
}
//mongoose query here
aggregation([{
$facet: ObjectQuery
},
...
])
So I chose a $project stage with the $switch operator to filter results the way $group does.
Using mapReduce could also solve this problem, but for some reason the official Mongo docs recommend avoiding it and choosing aggregation instead: the $group and $merge operators.
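A hedged sketch of the $switch approach: build the branches from the precomputed groupBy array, so each document picks up the latest_timestamp of its realm. The stage shape is standard aggregation syntax; the exact field names are taken from the question:

```javascript
// Two entries from the question's groupBy array
const groupBy = [
  { realm: 1, latest_timestamp: 1318874398, item_id: 1234 },
  { realm: 2, latest_timestamp: 1312467986, item_id: 1234 },
];

// One $switch branch per precomputed realm; documents from other
// realms get no latest_timestamp field at all ($$REMOVE).
const addLatest = {
  $addFields: {
    latest_timestamp: {
      $switch: {
        branches: groupBy.map(g => ({
          case: { $eq: ['$realm', g.realm] },
          then: g.latest_timestamp,
        })),
        default: '$$REMOVE',
      },
    },
  },
};
// addLatest can then be dropped into the pipeline after the $match stage
```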
I ran into a strange error. I insert data, sent with a Vue axios POST, into the database; one column of this data is converted to a JSON string with JSON.stringify. When I later try to update this row, I get the error below. I have shared the data and the SQL string below.
{
id: null,
table_id: 1425,
user_id: 15,
order_time: 1586975000253,
order_status: 1,
order: [
{
product: 8,
amount: 1,
portion: [Object],
order_time: 1586974979254,
status: 0,
desc: ''
},
{
product: 4,
amount: 1,
portion: [Object],
order_time: 1586974979707,
status: 0,
desc: ''
},
{
product: 8,
amount: 1,
portion: [Object],
order_time: 1586974980271,
status: 0,
desc: ''
}
],
corp: 'sssxx'
}
ER_PARSE_ERROR: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'order = '[{\"product\":8,\"amount\":1,\"portion\":{\"id\":1,\"title\":\"Tam Pors' at line 1
UPDATE orders SET order = '[{\\"product\\":8,\\"amount\\":1,\\"portion\\":{\\"id\\":1,\\"title\\":\\"Tam Porsiyon\\",\\"price\\":\\"25\\"},\\"order_time\\":1586974979254,\\"status\\":0,\\"desc\\":\\"\\"},{\\"product\\":4,\\"amount\\":1,\\"portion\\":{\\"id\\":1,\\"title\\":\\"Tam Porsiyon\\",\\"price\\":\\"25\\"},\\"order_time\\":1586974979707,\\"status\\":0,\\"desc\\":\\"\\"},{\\"product\\":8,\\"amount\\":1,\\"portion\\":{\\"id\\":1,\\"title\\":\\"Tam Porsiyon\\",\\"price\\":\\"25\\"},\\"order_time\\":1586974980271,\\"status\\":0,\\"desc\\":\\"\\"}]', order_status = 1 WHERE id = NULL AND table_id = 1425 and corp = 'sssxx'
The problem with your code has nothing to do with using JSON.
The word order is a reserved keyword (see https://mariadb.com/kb/en/reserved-words/). You can't use it as a column name unless you delimit it with back-ticks:
UPDATE orders SET `order` = ...whatever...
The clue in the error message is that it complained about the word order, not about anything in your JSON.
...for the right syntax to use near 'order = ...
Syntax errors show you exactly the point in your SQL syntax where the parser got confused.
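A sketch of the fixed update, assuming the `mysql` npm package (or a compatible driver): placeholders avoid hand-escaping the JSON string, and the reserved column name stays back-ticked.

```javascript
// Shape of the data from the question, trimmed to one order row
const order = {
  table_id: 1425,
  corp: 'sssxx',
  order_status: 1,
  order: [{ product: 8, amount: 1, status: 0, desc: '' }],
};

// Back-tick the reserved column name; let the driver escape the values
const sql =
  'UPDATE orders SET `order` = ?, order_status = ? WHERE table_id = ? AND corp = ?';
const values = [
  JSON.stringify(order.order),
  order.order_status,
  order.table_id,
  order.corp,
];

// connection.query(sql, values, (err, result) => { ... });
```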
In the MongoDB collection I'm querying, each document represents an item at a specific time. When updating a document, a new document is created with the same item id and a new timestamp. All items have unique item ids.
To illustrate, consider this example. We start with one revision of an item:
{
_id: x,
itemId: 123,
createdOn: ISODate("2013-01-30T11:16:20.102Z"),
field1: "foo",
field2: "bar"
}
After an update, we have two revisions of the item, with the same itemId and different timestamps.
[{
_id: x,
itemId: 123,
createdOn: ISODate("2013-01-30T11:16:20.102Z"),
field1: "foo",
field2: "bar"
},
{
_id: y,
itemId: 123,
createdOn: ISODate("2014-02-09T14:26:20.102Z"),
field1: "baz",
field2: "fiz"
}]
How can I find all items that in their most recent revision satisfy a certain query?
My current (wrong) approach is to first find the matching documents, then sort by timestamp, group them by itemId, and return the values from the first document in the group:
ItemModel.aggregate({ $match: { field1: "foo"} }).sort({createdOn: -1}).group(
{
_id: '$itemId', // grouping key
createdOn: {$first: '$createdOn'},
field1: {$first: '$field1'},
field2: {$first: '$field2'}
}).exec(...);
This is wrong because it matches old revisions of items. Only the latest revisions of items should match. In the example above, this approach returns item "123", while the correct result is an empty result set.
You are mixing a few methods here when you can do everything in the aggregation pipeline. Otherwise it's just a matter of getting your stages in the right order:
db.collection.aggregate([
{$sort: { createdOn: -1 }},
{$group: { _id: "$itemId",
createdOn: {$first: "$createdOn"},
field1: {$first: "$field1" },
field2: {$first: "$field2" }
}},
{$match: { field1: "foo" }}
])
So sort first to get the newest documents, group on itemId (order is maintained for $first), and then filter with $match if you must. Your grouped documents will be the latest ones.
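Why the stage order matters can be checked in plain JavaScript: take the latest revision per itemId first, then match. With the example data from the question, the result is empty, because the latest revision of item 123 has field1 === 'baz':

```javascript
// Two revisions of the same item, as in the question
const docs = [
  { itemId: 123, createdOn: new Date('2013-01-30T11:16:20.102Z'), field1: 'foo' },
  { itemId: 123, createdOn: new Date('2014-02-09T14:26:20.102Z'), field1: 'baz' },
];

// Group: keep only the latest revision per itemId
const latest = {};
for (const d of docs) {
  const cur = latest[d.itemId];
  if (!cur || d.createdOn > cur.createdOn) latest[d.itemId] = d;
}

// Match last: only latest revisions are considered
const matched = Object.values(latest).filter(d => d.field1 === 'foo');
console.log(matched.length); // 0
```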
You could consider changing the document schema to better suit your queries and reduce the overhead of aggregation. Instead of creating a new document for each revision, you could push revision sub-documents onto an array and maintain the latest revision in the parent document; for example:
{
_id: x,
itemId: 123,
createdOn: ISODate("2014-02-09T14:26:20.102Z"),
field1: "baz",
field2: "fiz",
revisions: [
{createdOn: ISODate("2013-01-30T11:16:20.102Z"), field1: "foo", field2: "bar"},
{createdOn: ISODate("2014-02-09T14:26:20.102Z"), field1: "baz", field2: "fiz"}
]
}
Keep in mind that MongoDB enforces a document-size limit of 16MB; this should suffice for most use cases. This would make your queries very simple: db.collection.find({field1: "foo"})
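Maintaining that embedded-revisions shape can be simulated in plain JavaScript: push the new revision and mirror its fields onto the parent. In MongoDB this would be a single update combining $push and $set; the helper name below is made up for illustration:

```javascript
// Parent document mirrors its latest revision, as in the schema above
const item = {
  itemId: 123,
  createdOn: new Date('2013-01-30T11:16:20.102Z'),
  field1: 'foo',
  field2: 'bar',
  revisions: [
    { createdOn: new Date('2013-01-30T11:16:20.102Z'), field1: 'foo', field2: 'bar' },
  ],
};

function addRevision(doc, rev) {
  doc.revisions.push(rev);      // keep full history   (MongoDB: $push)
  Object.assign(doc, rev);      // promote latest data (MongoDB: $set)
}

addRevision(item, {
  createdOn: new Date('2014-02-09T14:26:20.102Z'),
  field1: 'baz',
  field2: 'fiz',
});
// item.field1 is now 'baz', and item.revisions holds both revisions
```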
Just another approach...
Coverage Model.
var CoverageSchema = new Schema({
module : String,
source: String,
namespaces: [{
name: String,
types: [{
name: String,
functions: [{
name: String,
coveredBlocks: Number,
notCoveredBlocks: Number
}]
}]
}]
});
I need coveredBlocks aggregations on every level:
*Module: {moduleBlocksCovered}, // SUM(blocksCovered) GROUP BY module, source
**Namespaces: [{nsBlocksCovered}] // SUM(blocksCovered) GROUP BY module, source, ns
****Types: [{typeBlocksCovered}] // SUM(blocksCovered) BY module, source, ns, type
How do I get this result with Coverage.aggregate in Mongoose ?
{
  module: 'module1',
  source: 'source1',
  coveredBlocks: 7, // SUM of all functions in module
  namespaces: [
    {
      name: 'ns1',
      nsBlocksCovered: 7, // SUM of all functions in namespace
      types: [
        {
          name: 'type1',
          typeBlocksCovered: 7, // SUM(3, 4) of all functions in type
          functions: [
            { name: 'func1', blocksCovered: 3 },
            { name: 'func2', blocksCovered: 4 }
          ]
        }
      ]
    }
  ]
}
My idea is to deconstruct everything using $unwind, then reconstruct the document again using $group and $project.
aggregate flow:
//deconstruct functions
unwind(namespaces)
unwind(namespaces.types)
unwind(namespaces.types.functions)
//calc typeBlocksCovered
group module&source, ns, type to sum the functions' blocksCovered -> typeBlocksCovered + push functions back into types
project to transform fields to be easier for the next group
//calc nsBlocksCovered
group module&source, ns to sum typeBlocksCovered -> nsBlocksCovered + push types back into ns
project to transform fields to be easier for the next group
//calc coveredBlocks
group module&source to sum nsBlocksCovered -> coveredBlocks
project to transform fields to match your mongoose docs
Here is my sample query in mongo shell syntax, which seems to be working; my guess is that you are using the collection name "Coverage":
db.Coverage.aggregate([
{"$unwind":("$namespaces")}
,{"$unwind":("$namespaces.types")}
,{"$unwind":("$namespaces.types.functions")}
,{"$group": {
_id: {module:"$module", source:"$source", nsName: "$namespaces.name", typeName : "$namespaces.types.name"}
, typeBlocksCovered : { $sum : "$namespaces.types.functions.blocksCovered"}
, functions:{ "$push": "$namespaces.types.functions"}}}
,{"$project" :{module:"$_id.module", source:"$_id.source"
,namespaces:{
name:"$_id.nsName"
,types : { name: "$_id.typeName",typeBlocksCovered : "$typeBlocksCovered" ,functions: "$functions"}
}
,_id:0}}
,{"$group": {
_id: {module:"$module", source:"$source", nsName: "$namespaces.name"}
, nsBlocksCovered : { $sum : "$namespaces.types.typeBlocksCovered"}
, types:{ "$push": "$namespaces.types"}}}
,{"$project" :{module:"$_id.module", source:"$_id.source"
,namespaces:{
name:"$_id.nsName"
,nsBlocksCovered:"$nsBlocksCovered"
,types : "$types"
}
,_id:0}}
,{"$group": {
_id: {module:"$module", source:"$source"}
, coveredBlocks : { $sum : "$namespaces.nsBlocksCovered"}
, namespaces:{ "$push": "$namespaces"}}}
,{"$project" :{module:"$_id.module", source:"$_id.source", coveredBlocks : "$coveredBlocks", namespaces: "$namespaces",_id:0}}
])
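The three-level rollup the pipeline computes (functions -> typeBlocksCovered -> nsBlocksCovered -> coveredBlocks) can be sanity-checked in plain JavaScript against the example numbers from the question:

```javascript
// One module with one namespace, one type, and two functions (3 + 4 blocks)
const coverage = {
  module: 'module1',
  source: 'source1',
  namespaces: [{
    name: 'ns1',
    types: [{
      name: 'type1',
      functions: [
        { name: 'func1', blocksCovered: 3 },
        { name: 'func2', blocksCovered: 4 },
      ],
    }],
  }],
};

// Sum bottom-up, level by level, as the $group stages do
for (const ns of coverage.namespaces) {
  for (const t of ns.types) {
    t.typeBlocksCovered = t.functions.reduce((s, f) => s + f.blocksCovered, 0);
  }
  ns.nsBlocksCovered = ns.types.reduce((s, t) => s + t.typeBlocksCovered, 0);
}
coverage.coveredBlocks = coverage.namespaces.reduce((s, ns) => s + ns.nsBlocksCovered, 0);
console.log(coverage.coveredBlocks); // 7
```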