Liferay 7 ignores AssetEntryQuery parameters - liferay

AssetEntryQuery is ignoring all the parameters I set, thus executing a wildcard query and retrieving all my asset entries.
Snippet of what I'm doing right now:
long[] classTypeIds = new long[] { 30837 };
AssetEntryQuery assetQuery = new AssetEntryQuery();
assetQuery.setClassTypeIds(classTypeIds);
List<AssetEntry> entries = AssetEntryLocalServiceUtil.getEntries(assetQuery);
for (AssetEntry entry : entries) {
    System.out.printf("Entry with classTypeId %d expected %d%n", entry.getClassTypeId(), 30837);
}
When I execute this code, it returns a list of ALL the entries from Liferay, without filtering on classTypeId.
Output:
[...]
Entry with classTypeId 0 expected 30837
Entry with classTypeId 0 expected 30837
Entry with classTypeId 28920 expected 30837
Entry with classTypeId 30331 expected 30837
Entry with classTypeId 30837 expected 30837
Entry with classTypeId 30837 expected 30837
Entry with classTypeId 30837 expected 30837
Entry with classTypeId 30921 expected 30837
[...]
Am I missing something that makes Liferay ignore my AssetEntryQuery params, or is it a bug within Liferay itself?

I don't know if it's intentional or not (probably not), but in the assetentry table of my DB (I'm only looking at it for dev purposes, as it's never recommended to do anything directly to the DB), the classtypeid field is 0 for all entries in my build of the latest liferay-portal master branch. This is just a shot in the dark, but you could try calling
AssetEntryQuery.setClassNameIds
instead of setClassTypeIds, and see if that does the trick.

Related

Get 1st & last item from array of unknown length - MongoDB

I have documents in a collection. Each document may or may not have a log field. If it does, this log field will be an array. This array is of unknown length. I've been trying to use the $slice operator here as best I can, and I have gotten it to return the last item in the array with log: { $slice: -1 } or the first item with log: { $slice: 1 }, but I cannot figure out how to get both from a single db find query. My query to the db is this:
db.collection('entities').find({}, {
  name: 1,
  log: {
    $slice: -1 // returning the last item
  }
})
Is this possible with a simple find query or will I have to use an aggregation query?
I did attempt something like:
db.collection('entities').find({}, {
  name: 1,
  "log.0": 1,
  log: {
    $slice: -1
  }
})
But this failed, apparently due to a conflict with $slice, and I imagine it would fail anyway since the log field may or may not exist.
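For what it's worth, a hedged aggregation sketch (not from the question; it assumes MongoDB 3.4+ and the same entities collection and field names, with firstLog/lastLog as illustrative output names) that returns both endpoints in one round trip and guards against a missing log field with $ifNull:
db.collection('entities').aggregate([
  {
    $project: {
      name: 1,
      // $ifNull guards documents that have no log field at all
      firstLog: { $arrayElemAt: [ { $ifNull: [ "$log", [] ] }, 0 ] },
      lastLog: { $arrayElemAt: [ { $ifNull: [ "$log", [] ] }, -1 ] }
    }
  }
]).toArray()
On documents without a log field, firstLog and lastLog are simply absent from the output rather than causing an error.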

Sorting CouchDB result by unix timestamp and paginate

I have been struggling to get pagination working for a few days now. I have a database with docs and a view that uses the timestamp as key so I can sort descending. But I can't seem to get the next set of docs ...
If I run it, I get the top 5 docs. When I try to use startkey or startkey_docid I only seem to get the same lines again.
Going by the CouchDB documentation, I am not sure what I need to make it work.
The CouchDB design document looks like this:
{
  "_id": "_design/filters",
  "views": {
    "blog": {
      "map": "function (doc) { if (doc.published && doc.type == 'post') emit(doc.header.date); }"
    }
  }
}
... header.date is generated with +new Date()
On the Node.js side, with github/nano, I use something similar to:
import nano from 'nano';
let db = nano(SERVICE_URL).use('blog_main');
let lastDocId = ctx.query.lastDocId;
let lastSkip = ctx.query.lastSkip ? +ctx.query.lastSkip + 5 : null;
let query = {
  limit: 1 + 4, // limit to 5
  descending: true, // reverse order: newest to top
  include_docs: true,
}
if (lastDocId) { // initially off
  query.startkey = lastDocId;
}
if (lastSkip) { // other method for tests
  query.skip = lastSkip; // ----> this results in some previous and some new items
}
let itemRows = await db.view('filters', 'blog', query);
let items = itemRows.rows;
// each doc is in items[].doc
I have seen sort by value; sorting works for me, but I can't seem to get pagination to work.
I'm uncertain regarding the statement "I get the same lines again". That is reproducible if startkey is the first rather than the last key of the prior result - and that would be the first problem.
Regardless, assuming startkey is correct the parameters skip and startkey are conflicting. Initially skip should be 0 and afterwards it should be 1 in order to skip over startkey in successive queries.
This technique is clearly outlined in the CouchDB pagination documentation [1].
Details
Assume the complete view (where key is a unix timestamp) is
{
  "total_rows": 7,
  "offset": 0,
  "rows": [
    {"id":"821985c5140ca583e108653fb6091ac8","key":1580050872331,"value":null},
    {"id":"821985c5140ca583e108653fb6092c3b","key":1580050872332,"value":null},
    {"id":"821985c5140ca583e108653fb6093f47","key":1580050872333,"value":null},
    {"id":"821985c5140ca583e108653fb6094309","key":1580050872334,"value":null},
    {"id":"821985c5140ca583e108653fb6094463","key":1580050872335,"value":null},
    {"id":"821985c5140ca583e108653fb60945f4","key":1580050872336,"value":null},
    {"id":"821985c5140ca583e108653fb60949f3","key":1580050872339,"value":null}
  ]
}
Given the initial query conditions
{
  limit: 5,
  descending: true,
  include_docs: false // for brevity
}
indeed produces the expected result, 5 rows with the most recent first
{
  "total_rows": 7,
  "offset": 0,
  "rows": [
    {"id":"821985c5140ca583e108653fb60949f3","key":1580050872339,"value":null},
    {"id":"821985c5140ca583e108653fb60945f4","key":1580050872336,"value":null},
    {"id":"821985c5140ca583e108653fb6094463","key":1580050872335,"value":null},
    {"id":"821985c5140ca583e108653fb6094309","key":1580050872334,"value":null},
    {"id":"821985c5140ca583e108653fb6093f47","key":1580050872333,"value":null}
  ]
}
Now assume the second query is as follows
{
  limit: 5,
  descending: true,
  include_docs: false, // for brevity
  startkey: 1580050872333,
  skip: 5
}
startkey (the key of the last row of the prior result) is correct but the skip parameter is literally skipping past the next (logical) set of rows. Specifically with those parameters and the example view above, the query would blow past the remaining keys resulting in an empty row set.
This is what is desired:
{
  limit: 5,
  descending: true,
  include_docs: false, // for brevity
  startkey: 1580050872333,
  skip: 1 // just skip the last doc (startkey)
}
[1] CouchDB Pagination Recipes, 3.2.5.5. Paging (Alternate Method)
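As a minimal sketch of that recipe with nano (reusing the view and SERVICE_URL from the question; the fetchPage helper and its return shape are made up for illustration):
import nano from 'nano';

const db = nano(SERVICE_URL).use('blog_main');

// Fetch one page of 5 posts; pass the key of the last row of the previous page
// to get the next page.
async function fetchPage(lastKey) {
  const query = {
    limit: 5,
    descending: true,
    include_docs: true,
  };
  if (lastKey !== undefined) {
    query.startkey = lastKey; // resume at the last row of the previous page...
    query.skip = 1;           // ...and skip that row itself
  }
  const result = await db.view('filters', 'blog', query);
  const last = result.rows[result.rows.length - 1];
  return {
    docs: result.rows.map(r => r.doc),
    lastKey: last && last.key, // feed this into the next call
  };
}
If several rows can share the same key you would also carry startkey_docid, which leads into the next answer.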
Using startkey or skip returned results that included some of the skipped rows again, or all previous ones (strangely mixed up).
I solved it by extending the result keys with a second part.
Since the key was based on a date without a time component, entries with equal keys could be rearranged on each request. Adding a second part that is also sortable (I used the created timestamp) fixed it.
The key is now [datetimestamp, createdtimestamp]; both parts can be sorted descending.
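A sketch of what that composite-key map function could look like (the created field name is an assumption; substitute whatever timestamp your documents actually carry):
function (doc) {
  if (doc.published && doc.type == 'post') {
    // composite key: [date timestamp, created timestamp], both sort descending together
    emit([doc.header.date, doc.created]);
  }
}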

How to skip an update if one of the fields in the document has changed? Cloud Firestore

Given a Firebase Document:
{ a: "hello", b: False}
When I do a request:
update {b=True} but only if {a == "hello"}
And in the meantime, a is set to "world".
I expect that the document should stay like: {a: "world", b: False}
But what I see is this: {a: "world", b: True}
Is it even possible to drop a Cloud Firestore update request on a document when the document has been changed in the meantime?
What I ended up doing was not checking a single field, but checking whether the whole document had changed in the meantime, and aborting the update if so.
from google.api_core.exceptions import FailedPrecondition
[...]
doc_ref = self.client.collection(path).document(id)
snapshot = doc_ref.get()
if snapshot.exists:
    if snapshot.get("a") == "hello":
        write_option = self.client.write_option(
            last_update_time=snapshot.update_time
        )
        try:
            doc_ref.update({"b": True}, option=write_option)
        except FailedPrecondition:
            # you can try again here, as the document was updated in the meantime, so `a` may no longer be "hello"
            [...]

Handling errors with bulkinsert in Mongo NodeJS [duplicate]

This question already has answers here:
How to Ignore Duplicate Key Errors Safely Using insert_many
(3 answers)
Closed 5 years ago.
I'm using NodeJS with MongoDB and Express.
I need to insert records into a collection where email field is mandatory.
I'm using insertMany function to insert records. It works fine when unique emails are inserted, but when duplicate emails are entered, the operation breaks abruptly.
I tried using try catch to print the error message, but the execution fails as soon as a duplicate email is inserted. I want the execution to continue and store the duplicates. I want to get the final list of the records inserted/failed.
Error Message:
Unhandled rejection MongoError: E11000 duplicate key error collection: testingdb.gamers index: email_1 dup key: 
Is there any way to handle the errors or is there any other approach apart from insertMany?
Update:
Email is a unique field in my collection.
If you want to keep inserting the remaining documents rather than stopping on the first error, consider setting the {ordered: false} option on insertMany(), e.g.
db.collection.insertMany(
  [ , , ... ],
  { ordered: false }
)
According to the docs, unordered operations will continue to process any remaining write operations in the queue but still show your errors in the BulkWriteError.
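A rough sketch of what that could look like with the Node.js driver (collection and index names are taken from the error message above, docs stands in for your array of records, and the writeErrors property follows the BulkWriteError shape quoted further down):
try {
  // unique index on email; with ordered: false duplicates no longer abort the whole batch
  const result = await db.collection('gamers').insertMany(docs, { ordered: false });
  console.log('inserted:', result.insertedCount);
} catch (e) {
  // the non-duplicate documents were still inserted; duplicates are reported here
  console.log('duplicates rejected:', (e.writeErrors || []).length);
}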
I can't comment yet, so this goes as an answer:
Is your database collection using a unique index for this field, or does your schema have a unique attribute for the field? Please share more information about your code.
From the MongoDB docs:
"Inserting a duplicate value for any key that is part of a unique index, such as _id, throws an exception. The following attempts to insert a document with a _id value that already exists:"
try {
  db.products.insertMany( [
    { _id: 13, item: "envelopes", qty: 60 },
    { _id: 13, item: "stamps", qty: 110 },
    { _id: 14, item: "packing tape", qty: 38 }
  ] );
} catch (e) {
  print (e);
}
Since _id: 13 already exists, the following exception is thrown:
BulkWriteError({
  "writeErrors" : [
    {
      "index" : 0,
      "code" : 11000,
      "errmsg" : "E11000 duplicate key error collection: restaurant.test index: _id_ dup key: { : 13.0 }",
      "op" : {
        "_id" : 13,
        "item" : "envelopes",
        "qty" : 60
      }
    }
  ],
(some code omitted)
Hope it helps.
Since you know that the error is occurring due to duplicate key insertions, you can separate the initial array of objects into two parts. One with unique keys and the other with duplicates. This way you have a list of duplicates you can manipulate and a list of originals to insert.
let a = [
  {'email': 'dude#gmail.com', 'dude': 4},
  {'email': 'dude#yahoo.com', 'dude': 2},
  {'email': 'dude#hotmail.com', 'dude': 2},
  {'email': 'dude#gmail.com', 'dude': 1}
];
let i = a.reduce((i, j) => {
  i.original.map(o => o.email).indexOf(j.email) == -1 ? i.original.push(j) : i.duplicates.push(j);
  return i;
}, {'original': [], 'duplicates': []});
console.log(i);
EDIT: I just realised that this won't work if the keys are already present in the DB. So you should probably not use this answer. But I'll just leave it here as a reference for someone else who may think along the same lines.
Nic Cottrell's answer is right.

Model.aggregate gives me different results as in the mongo console (none for mongoose)

When I execute the following in the console, I get one nameID and a min and a max.
db.prices.aggregate([
{"$match":{ nameID: { '$in': [ ObjectId('longid') ] } } }//replaced the actual value with longid
,{"$group":{"_id":"$nameID","nameID":{"$first": "$nameID"} ,"min":{"$min":"$price"},"max":{"$max":"$price"}}}
,{"$sort":{"min":1}}
,{"$limit":5}
])
When in mongoose I get an empty array:
Price.aggregate([
{"$match":{ nameID: { '$in': [ 'longid' ] } }}//replaced the actual value with longid
,{"$group":{"_id":"$nameID","nameID":{"$first": "$nameID"} ,"min":{"$min":"$price"},"max":{"$max":"$price"}}}
,{"$sort":{"min":1}}
,{"$limit":5}
],function(){}//needs a callback passed or no promise is returned
).then(function(res){
console.log('got a totals:',res);//res is [] here
return Price.find({ nameID: { '$in': [ 'longid' ] } }).exec();
}).then(function(res){
console.log(res.length);//yes, got 3 records
console.log(res[0]['price']);//yes, the price
});
Doing a Price.find with the query gets me 3 records.
Am I missing something or is this a known problem with mongoose?
[UPDATE]
With aggregate I cannot pass the id as a string, so I converted it to an ObjectId and it worked. Here is an example of how:
Price.aggregate([
{"$match":{ nameID: { '$in': [
new mongoose.Types.ObjectId('longid')//replaced the actual value with longid
]
}
}}
...
Oops. In your last comment replying to mine, you have highlighted the problem here. Mongoose will "inspect" and convert "types" based on the schema applied to the model for most operations. It cannot and "will not" do this in aggregation pipelines. So you need to do this:
var ObjectID = require("mongodb").ObjectID;
var actual_hex_string; // as received from input parameters
// then
Price.aggregate([
  {"$match": {
    "nameID": {
      "$in": [ ObjectID(actual_hex_string) ]
    }
  }}
  (...) // the rest of the pipeline
Also, $in does not seem appropriate here unless you intend to use an actual "array" of possible (and already converted) ObjectID values. Don't make the common mistake of thinking that because "nameID" is an "array" you must use $in. MongoDB does not care whether this is an array or a single value. $in is for "multiple matches" in an $or context for the same field. A simple equality match works just as well against an "array" in the document.
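Putting both points together, a hedged sketch of the pipeline from the question with a single converted ObjectID and a plain equality match instead of $in (actual_hex_string stands in for the real id string received from input):
var ObjectID = require("mongodb").ObjectID;

Price.aggregate([
  {"$match": { "nameID": ObjectID(actual_hex_string) }} // plain equality, no $in needed
  , {"$group": {"_id": "$nameID", "nameID": {"$first": "$nameID"}, "min": {"$min": "$price"}, "max": {"$max": "$price"}}}
  , {"$sort": {"min": 1}}
  , {"$limit": 5}
]).exec().then(function(res){
  console.log('totals:', res);
});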
