I have a Profile table with a huge number of rows. I am trying to filter profiles based on super_category and account_id (these are fields on the Profile model).
Assume I have lists of ids named bulk_account_ids and super_categories:
list_of_ids = Profile.objects.filter(account_id__in=bulk_account_ids, super_category__in=super_categories).values_list('id', flat=True)
list_of_ids = list(list_of_ids)
SomeTask.delay(ids=list_of_ids)
This particular query times out when it gets evaluated on the second line.
Can I use .iterator() at the end of the queryset to optimize this, i.e. list(list_of_ids.iterator())? If not, what else can I do?
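For what it's worth, .iterator() mainly avoids caching the whole result set on the queryset; the database still executes the same query, so on its own it is unlikely to cure a server-side timeout. One alternative worth sketching is to split bulk_account_ids into smaller chunks so no single query carries a huge IN (...) list. The chunked() helper and CHUNK_SIZE below are illustrative assumptions, not part of the original code:
# Illustrative sketch: batch the lookup so no single query has a huge IN (...) clause.
# CHUNK_SIZE and chunked() are assumptions for this example; tune for your database.
CHUNK_SIZE = 1000

def chunked(seq, size):
    # Yield successive slices of seq of length size.
    for start in range(0, len(seq), size):
        yield seq[start:start + size]

list_of_ids = []
for account_chunk in chunked(list(bulk_account_ids), CHUNK_SIZE):
    list_of_ids.extend(
        Profile.objects.filter(
            account_id__in=account_chunk,
            super_category__in=super_categories,
        ).values_list('id', flat=True)
    )

SomeTask.delay(ids=list_of_ids)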
pymongo: 3.12.0
mongoengine: 0.23.1
I have a document:
class Logs(Document):
    reference_id = StringField(default=None)
    data = DictField(default=None)
In the data field I have a list, failed_stories. It can have hundreds of elements and I want to paginate it. So I write this query:
start_idx = 0
page_size = 10
reference_id = 'asdfg345678'
Logs.objects(reference_id=reference_id).fields(slice__data__failed_stories=[start_idx, page_size])
With this, I get one document in which all fields are None except the document id (_id).
The following query, by contrast, returns the document with the correct data in all of its fields:
Logs.objects(reference_id=reference_id).get()
Is there any issue with the way I am writing this?
Note: I would like to do this with mongoengine only, if possible.
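If the server-side slice keeps misbehaving, one fallback is to fetch only the data field and paginate client-side. This is a minimal sketch, assuming the failed_stories list fits comfortably in memory; it is a workaround, not the $slice projection the question is aiming for:
# Fallback sketch: fetch only `data` and slice the list in Python.
log = Logs.objects(reference_id=reference_id).only('data').first()
failed_stories = (log.data or {}).get('failed_stories', []) if log else []
page = failed_stories[start_idx:start_idx + page_size]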
I have two DACs, POReceipt and POReceiptLine. POReceiptLine contains a field called MfrPartNbr.
I want the user to be able to look up all the POReceipts where POReceiptLine.MfrPartNbr is equal to an entered value.
The SQL would be
SELECT *
FROM dbo.POReceipt
WHERE POReceipt.ReceiptNbr IN
(
    SELECT ReceiptNbr
    FROM dbo.POReceiptLine
    WHERE MfrPartNbr = 'MY_ENTERED_PART_NBR'
)
Any idea how to write the BQL Statement for this?
As stated, an inner join won't work in this case because you would receive the same POReceipt multiple times (once for each matching POReceiptLine). The following BQL query shows how you can get the desired results using a subquery. If mfrPartNbr is an extension field, replace POReceiptLine.mfrPartNbr with the correct extension name (e.g. POReceiptLineExtension.mfrPartNbr).
PXSelect<POReceipt,
    Where<Exists<
        Select<POReceiptLine,
            Where<POReceiptLine.receiptNbr, Equal<POReceipt.receiptNbr>,
                And<POReceiptLine.mfrPartNbr, Equal<Required<POReceiptLine.mfrPartNbr>>>>>>>>
    .Select(this, "MY_ENTERED_PART_NBR");
OK, I have this class in my model:
class GPSpecial(BaseModel):
    hotel = models.ForeignKey('Hotel')
    rooms = models.ManyToManyField('Room')
    agencys = models.ManyToManyField('Agency')
I want to get the agencys values, which is a many-to-many field on this class, and store them in a list or array. The relation stores agency_id together with the id of my class in a separate table, and Agency has its own table as well.
You can make it a bit more compact by using the flat=True parameter:
agencys_spe = list(GPSpecial.objects.values_list('agencys', flat=True))
The list(..) part is not necessary: without it, you have a QuerySet that contains the ids, and the query is postponed. By using list(..) we force the data into a list (and the query is executed).
It is possible that multiple GPSpecial objects have a common Agency; in that case its id will be repeated. We can use the .distinct() function to prevent that:
agencys_spe = list(GPSpecial.objects.values_list('agencys', flat=True).distinct())
If, however, you are interested in the Agency objects themselves, for example those of GPSpecials that satisfy a certain predicate, you are better off querying the Agency objects directly, for example:
agencies = Agency.objects.filter(gpspecial__is_active=True).distinct()
will produce all Agency objects for which a GPSpecial object exists where is_active is set to True.
I think I found the answer to my question:
agencys_sp = GPSpecial.objects.filter(agencys=32,is_active=True).values_list('agencys')
agencys_spe = [i[0] for i in agencys_sp]
I have a paginated list displayed on a Visualforce page, and in the backend I am using a StandardSetController to control the pagination. However, one column of the table is an aggregated field whose calculation is done in a wrapper class. Recently I wanted to sort the paginated list by that calculated field, and unfortunately the calculation cannot be done at the data model (SObject) level.
So I am thinking of passing a sorted list of SObjects to the StandardSetController constructor, i.e. sorting the records before they are passed into the StandardSetController.
The code is like below:
List<Job__c> jobs = new List<Job__c>();
List<Job__c> tempJobs = Database.Query(basicQuery + filterExpression);
//sort with values
List<JobWrapper> jws = createJobWrappers(tempJobs);
JobWrapper.sortBy = JobWrapper.SORTBY_CALCULATEDFIELD_ASC;
jws.sort();
for(JobWrapper jw : jws){
    jobs.add(jw.JobRecord);
}
jobs = jobs.deepClone(true, true, true);
StandardSetController con = new ApexPages.StandardSetController(jobs);
con.setPageSize(10);
However, after executing the last line the system throws the exception: Modified rows exist in the records collection!
I did not modify any rows in the controller. Could anyone help me understand the exception?
I have a CouchDB database with documents of the form: { Name, Timestamp, Value }
I have a view that shows a summary grouped by name with the sum of the values. This is a straightforward reduce function.
Now I want to filter the view to only take into account documents whose timestamp falls in a given range.
AFAIK this means I have to include the timestamp in the emitted key of the map function, e.g. emit([doc.Timestamp, doc.Name], doc)
But as soon as I do that, the reduce function no longer sees the rows grouped together to calculate the sum. If I put the name first I can group at level 1 only, but how do I filter at level 2?
Is there a way to do this?
I don't think this is possible with only one HTTP fetch and/or without additional logic in your own code.
If you emit([time, name]) you would be able to query startkey=[timeA]&endkey=[timeB]&group_level=2 to get items between timeA and timeB grouped where their timestamp and name were identical. You could then post-process this to add up whenever the names matched, but the initial result set might be larger than you want to handle.
An alternative would be to emit([name,time]). Then you could first query with group_level=1 to get a list of names [if your application doesn't already know what they'll be]. Then for each one of those you would query startkey=[nameN]&endkey=[nameN,{}]&group_level=2 to get the summary for each name.
(Note that in my query examples I've left the JSON start/end keys unencoded, so as to make them more human readable, but you'll need to apply your language's equivalent of JavaScript's encodeURIComponent on them in actual use.)
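As a rough illustration of that two-step approach, here is a sketch in Python using the requests library (which takes care of the URL encoding mentioned above). The view URL is a placeholder; it assumes a stored view that emits [doc.Name, doc.Timestamp] as the key with a reduce that sums the values, and the per-name query additionally bounds the key range by the time window so the client-side post-processing only adds up rows inside that window:
import json
import requests

# Placeholder URL; assumes a stored view emitting [doc.Name, doc.Timestamp] as key
# with a reduce that sums the values.
VIEW = "http://localhost:5984/mydb/_design/stats/_view/by_name_time"
time_a, time_b = 1609459200, 1612137600  # example bounds, assumed numeric timestamps

# Step 1: group_level=1 yields one row per name.
names = [row["key"][0]
         for row in requests.get(VIEW, params={"group_level": 1}).json()["rows"]]

# Step 2: one query per name, restricted to the time window, grouped per [name, time].
totals = {}
for name in names:
    params = {
        "startkey": json.dumps([name, time_a]),
        "endkey": json.dumps([name, time_b]),
        "group_level": 2,
    }
    rows = requests.get(VIEW, params=params).json()["rows"]
    totals[name] = sum(row["value"] for row in rows)  # client-side post-processing

print(totals)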
You cannot make a view on top of a view. You need to write another map/reduce view that does the filtering and the grouping in the end. Something like:
map:
function(doc) {
  if (doc.timestamp > start && doc.timestamp < end) {
    emit(doc.name, doc.value);
  }
}
reduce:
function(key, values, rereduce) {
  return sum(values);
}
I suppose you cannot store this view, and instead have to issue it as an ad-hoc (temporary) query from your application.
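For completeness, a hedged sketch of how such an ad-hoc query could be issued from Python, assuming a CouchDB version that still supports temporary views (they were removed in CouchDB 2.0). The database URL and timestamp bounds are placeholders; the time range is baked into the map source at request time, as described above:
import requests

DB = "http://localhost:5984/mydb"  # placeholder database URL
start, end = 1609459200, 1612137600  # example bounds, assumed numeric timestamps

# Bake the range into the map function source, then post it as a temporary view.
temp_view = {
    "map": (
        "function(doc) {"
        "  if (doc.timestamp > %d && doc.timestamp < %d) {"
        "    emit(doc.name, doc.value);"
        "  }"
        "}" % (start, end)
    ),
    "reduce": "function(key, values, rereduce) { return sum(values); }",
}

# group=true groups the reduce by name, so each row is one name's sum in the range.
resp = requests.post(DB + "/_temp_view", json=temp_view, params={"group": "true"})
print(resp.json()["rows"])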