Manual Pagination in Cassandra - node.js

I use the manual pagination feature in Cassandra.
client.eachRow(query, params, options, function (n, row) {
  // Invoked per each row in all the pages
  console.log("row", row);
}, function (err, result) {
  if (typeof result !== 'undefined') {
    pageState = result.pageState;
    console.log("pageState output:", pageState);
    if (pageState != null) {
      //
    }
  }
});
Say we have 4 rows/entries in a table 'test'.
When I query with fetchSize set to 2, it returns two entries along with result.pageState; I then use the same pageState to query the next page, and it fetches successfully.
But here is the problem:
Since the total number of entries is 4 and the fetch size is 2, when I get the next page I expect the 3rd and 4th entries with pageState set to null (since there are no more entries available), but it returns the 3rd and 4th entries with yet another page state.
1) In this case: is the page state received the next value's page state, or the last received value's page state? AFAIK it is the last received value's (the 4th entry's) page state, so to identify that we have reached the last entry I always make one more call (result.nextPage), and if it is undefined I consider that there are no more entries available. I feel it is an overhead to make one more call for every pagination.
2) How can I identify that we have reached the end without making the result.nextPage check?
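For reference, this is roughly the flow I have today, sketched with the pageState option instead of result.nextPage (the end is only detected by the follow-up request coming back short or empty, which is exactly the extra call I'd like to avoid):

function fetchPage(pageState) {
  const options = { prepare: true, fetchSize: 2, pageState: pageState };
  let rowsInPage = 0;
  client.eachRow(query, params, options, function (n, row) {
    rowsInPage++;
    console.log("row", row);
  }, function (err, result) {
    if (err) return console.error(err);
    if (result.pageState && rowsInPage === options.fetchSize) {
      // there may still be more rows; this follow-up request is the overhead in question
      fetchPage(result.pageState);
    } else {
      console.log("reached the end");
    }
  });
}
fetchPage();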

Related

SAP UI5 to Implement "go to" specific page on the table

I am new to SAP UI5 development. Currently the table uses "growing" and "growingThreshold", so users can click "More" to see the next page of data. Since we have thousands of rows in that table, it takes users a long time to click "More" again and again to load the next pages. We are trying to implement a function where the user can enter a page number, click a button, and jump to that specific page.
<Table id="genTable" growing="true" growingThreshold="60" fixedLayout="false" selectionChange="onHandleSelectChange"
backgroundDesign="Solid" updateFinished="onHandleGeneratorQueueUpdateFinished">
Expected UI:
I added a bar and the UI now displays as expected.
<Bar design="SubHeader">
  <contentMiddle>
    <Input type="Number" id="pageNumber" width="50px"></Input>
    <Button id="goToButton" text="Go to" type="Emphasized" press="onHandleGoTo"></Button>
  </contentMiddle>
</Bar>
For the backend logic, I referred to the articles below, but it still doesn't work.
https://blogs.sap.com/2016/12/14/sapui5-pagination-in-sap.m-table-on-button-click-using-odata-service/
https://sapyard.com/advance-sapui5-19-pagination-in-table-control-with-top-and-skip-query-options/
I tried to use read; it gets the data back from the OData service, but the data is not refreshed in the table.
oModel.read("/ViewQueueSet", {
  urlParameters: {
    "$top": top,
    "$skip": count
  },
  filters: [new Filter("RoleCode", FilterOperator.EQ, "G")],
  useBatch: true,
  success: function (tdata) { // successful read from the server
    var json = new JSONModel();
    json.setData(tdata);
    that.getView().setModel(json, "sapmodel");
    sap.ui.core.BusyIndicator.hide();
  },
  error: function () {
    sap.ui.core.BusyIndicator.hide();
  }
});
I also tried to call bindItems:
//that.getView().setModel(json,"sapmodel");
//oTable.setModel(json); //JSON is preferred data format
//oTable.bindItems("/results",that.oGenQueueTemplate);
that.getView().byId("genTable").setModel(json);
that.getView().byId("genTable").bindItems("/results",that.oGenQueueTemplate);
Another approach I tried is bindItems; it does send the request to the OData service, but it does not add the $top and $skip parameters.
oTable.bindItems({
  path: "/ViewQueueSet",
  model: "sapmodel",
  filters: [new Filter("RoleCode", FilterOperator.EQ, "G")],
  template: this.oGenQueueTemplate,
  // urlParameters: {
  //   "$top": top,
  //   "$skip": count
  // },
  parameters: {
    "$top": top,
    "$skip": count
  }
});
Does anyone have any idea how to implement this functionality?
Before I go into detail, please consider using other controls and/or UX patterns. Imagine having thousands or millions of elements in the backend and the user requests to scroll to page 9292929: for a responsive table (sap.m.Table) you would need to load all elements up to that page. Maybe filtering, or a completely different approach, would be the right one.
The correct way to do this is to get the list binding and ask it to load more elements. How to ask the binding may depend on the type of binding as well.
oTable = ... // get a reference to the table
oItemsBinding = oTable.getBinding("items");
oItemsBinding.getLength()          // will give you the total number of elements
oItemsBinding.isLengthFinal()      // will tell you if the length is final
oItemsBinding.getCurrentContexts() // will give you an array of all loaded contexts
Now a few words on the length and the length being final. If you have a binding implementation that knows the total number of objects (e.g. JSON, since it loads all elements to the client, or OData, if the count feature is implemented in the backend), then getLength will tell you the total number of objects.
If the backend doesn't have the count feature implemented, the length becomes final once you reach the end of the list (the backend gives you fewer elements than you requested; e.g. top=10, skip=90 returns 10 elements => length 100, not final; top=10, skip=100 returns 4 elements => length=104, final).
Now, you can have a look at the various binding implementations. But be aware that there is a lot to consider (direction of growing, upwards/downwards); at least you don't need to think about filtering/sorting, as this is part of the binding.
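For example, a minimal sketch of using these to decide whether a requested page can exist at all (iPageSize and iRequestedPage are just illustrative names):
var iPageSize = 20;      // the table's growingThreshold
var iRequestedPage = 7;  // the page the user typed in
// the page has at least one element if the total length exceeds everything on the previous pages;
// if the length is not final yet, we simply cannot tell for sure
var bPageMayExist = !oItemsBinding.isLengthFinal()
  || oItemsBinding.getLength() > iPageSize * (iRequestedPage - 1);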
There is a nice (private) feature in sap.m.Table (or in sap.m.ListBase, to be more precise) called GrowingEnablement. You can use it like this:
// don't forget to check whether _oGrowingDelegate is not undefined or similar
oTable._oGrowingDelegate.requestNewPage()
This will load one more page => you could start by reading the implementation of this method if you want to load several pages in one go.
You could also do a simple trick:
// assume you have 20 elements per page (default)
// and want to get to the 7th page (elements 121 - 140)
// checks for "7th page exists" and "7th page not yet loaded" are omitted
oTable.setGrowingThreshold(70) // half of 140, so the following load will load the second page => 71 to 140
oTable._oGrowingDelegate.requestNewPage() // this will load the second page 71 - 140
// once loading is finished (take care of asynchronicity)
oItemsBinding.attachEventOnce("dataReceived", function(oEvent){
  // reset the growing threshold to 20
  oTable.setGrowingThreshold(20)
  // scroll to the first element of the 7th page (index 120, since the count starts from 0)
  oTable.scrollToIndex(120)
})

How to filter large array based on "in-between" value in a sub-array? (Node.js)

I have a large database of items that have somewhat fluid statuses. I need to get an array of those items based on what each item's status was on a given date.
Here's an excerpt from an example record:
{"status":[
{"date":{"$date":"2019-06-14T06:17:41.625Z"},"statusCode":200},
{"date":{"$date":"2019-11-04T02:02:58.020Z"},"statusCode":404},
{"date":{"$date":"2020-08-07T01:11:16.184Z"},"statusCode":200},
{"date":{"$date":"2020-08-07T03:54:09.703Z"},"statusCode":404}
]}
Using this example, the status on 2020-01-13 would be 404 (as it would be also on 2020-01-12 or any other givenDate until the status changed back to 200).
So how would I filter my big array down to only the items (like this record) whose status was 404 as of 2020-01-13? (And I would do the same for 200.)
Note that I can't simply filter for objects with date < givenDate && statusCode == 200 because that would ignore if the status changed after those records. (The above example would return for either 200 or 404 since both records exist before givenDate.)
My only idea at the moment is that I could first filter the status array to anything before givenDate, and then compare based on the last record (since this filtered array's last record would then always be before givenDate). But this seems more complicated than necessary.
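Roughly, that idea would look something like this (just a sketch, where item stands in for one record from my array and the status array is assumed to be in chronological order):
const givenDate = new Date('2020-01-13');
// keep only status entries up to the given date, then take the last one,
// i.e. the status that was in effect on that date
const statusOnDate = item.status
  .filter(s => new Date(s.date['$date']) <= givenDate)
  .pop();
const was404 = statusOnDate && statusOnDate.statusCode === 404;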
Processing time isn't important to me on this because I'm trying to make some one-time corrections to past statistics.
Thanks in advance!
A bit verbose, but I think this should do what you want.
var feedHistory = {"status":[
  {"date":{"$date":"2019-06-14T06:17:41.625Z"},"statusCode":200},
  {"date":{"$date":"2019-11-04T02:02:58.020Z"},"statusCode":404},
  {"date":{"$date":"2020-08-07T01:11:16.184Z"},"statusCode":200},
  {"date":{"$date":"2020-08-07T03:54:09.703Z"},"statusCode":404}
]};

const filterByStatus = (feedHistory, statusDate) => {
  let foundRecord = false;
  feedHistory.forEach((record) => {
    let recordDate = new Date(Date.parse(record.date['$date']));
    // keep the most recent record that is still before statusDate
    if (recordDate < statusDate && (!foundRecord || foundRecord.parsedDate < recordDate)) {
      record.parsedDate = recordDate;
      foundRecord = record;
    }
  });
  return foundRecord;
};

var statusDate = new Date('2019-06-15');
var statusOnDate = filterByStatus(feedHistory.status, statusDate);
console.log(`On ${statusDate} the status was ${statusOnDate.statusCode}`);

firebase Starting point was already set

I use firebase-admin and the Realtime Database on Node.js.
My data looks like this:
When I want to get data where batch = batch-7, I was doing this:
let batch = "batch-7";
let ref = admin.database().ref('qr/');
ref.orderByChild("batch").equalTo(batch).on('value', (snapshot) => {
  res.json(Object.assign({}, snapshot.val()));
  ref.off();
});
All was OK!
But now I need to implement pagination, i.e. I should receive data in chunks of 10 elements depending on the page.
I use this code:
let page = req.query.page;    // page number
let batch = req.params.batch; // batch name
let ref = admin.database().ref('qr/');
ref.orderByChild("batch").startAt(+page * 10).limitToFirst(10).equalTo(batch)
  .on('value', (snapshot) => {
    res.json(Object.assign({}, snapshot.val()));
    ref.off();
  });
But I get this error:
Query.equalTo: Starting point was already set (by another call to startAt or equalTo)
How do I get N items, starting at position M, where batch equals my batch?
You can only call one startAt (and/or endAt) OR equalTo. Calling both is not possible, nor does it make a lot of sense.
You seem to have a general misunderstanding of how startAt works though, as you're passing in an offset. Firebase queries are not offset-based, but work purely on the value, often also referred to as an anchor node.
So when you want to get the data for a second page, and you order by batch, you need to pass in the value of batch for the anchor node: the first item that you want to be returned. This anchor node is typically the last item of the previous page, since you don't know the first item of the next page yet. And for this anchor node you need to know the value of the item you order on (batch), and usually also its key (if/when there may be multiple nodes with the same value for batch).
It also means that you usually request one item more than you need, which is the anchor node.
So when you request the first page, you should track the key/batch of the last node:
var lastKey, lastValue;
ref.orderByChild("batch").equalTo(batch).limitToFirst(10).on('value', (snapshot) => {
  snapshot.forEach((child) => {
    lastKey = child.key;
    lastValue = child.child('batch').val();
  });
});
Then when you need the second page, you run a query like this:
ref.orderByChild("batch").startAt(lastValue, lastKey).endAt(lastValue + "\uf8ff").limitToFirst(11).on('value', (snapshot) => {
  snapshot.forEach((child) => {
    lastKey = child.key;
    lastValue = child.child('batch').val();
  });
});
There's one more trick in here: I use startAt instead of equalTo so that pagination can work, but then use endAt to ensure we still end at the correct item, by using the last known Unicode character appended to the batch value as the upper bound.
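Putting that together, the 'value' callback of that second query would typically drop the anchor item again, since it was already shown on the previous page. A rough sketch (reusing lastKey from the first page):
const items = [];
snapshot.forEach((child) => {
  // skip the anchor node itself; everything after it belongs to the new page
  if (child.key !== lastKey) {
    items.push(child.val());
  }
});
res.json(items);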
I'd also highly recommend checking out some of the previous questions on pagination with the Firebase Realtime Database.

Paginating a mongoose mapReduce, for a ranking algorithm

I'm using a MongoDB mapReduce to implement a ranking feed algorithm. It almost works; the last thing left to implement is pagination. mapReduce supports limiting the results, but how could I implement the offset (skipping), based e.g. on the latest viewed _id of the results, knowing that I'm using mongoose?
This is the procedure I wrote:
o = {};
o.map = function() {
  //log10(likes+comments) / elapsed hours from the post creation
  emit(Math.log(this.likes + this.comments + 1) / Math.LN10 / Math.abs((now - this.createdAt) / 6e7 + 1), this);
};
o.reduce = function(key, values) {
  //sort the values, when they have the same score
  values.sort(function(a, b) {
    return a.createdAt - b.createdAt;
  });
  //serialize the values, because mongoose does not support multiple returned values
  return JSON.stringify(values);
};
o.scope = {now: new Date()};
o.limit = 15;
Posts.mapReduce(o, function(err, results) {
  if (err) return console.log(err);
  console.log(results);
});
Also, if mapReduce is not the way to go, do you have other suggestions on how to implement something like this?
What you need is a page delimiter, which is not the id of the latest viewed item as you say, but your sorting property. In this case, it seems to be the formula Math.log(this.likes + this.comments + 1) / Math.LN10 / Math.abs((now - this.createdAt) / 6e7 + 1).
So the query of your mapReduce needs to hold a where condition on the value of that formula above; specifically, formula >= the last score you served. It also needs to hold the value of createdAt of the last item on the previous page, since you don't sort by that (assuming createdAt is unique). So the query of your mapReduce would say something like where: theFormulaExpression, createdAt: { $lt: lastCreatedAt }.
If you do allow multiple identical createdAt values, you have to play a little outside of the database itself.
So you just search by formula.
Ideally, that gives you one element with exactly that value, and the next ones sorted after it. So in the reply to the module caller, remove this first element from the array (and make sure you actually ask for more results than you need because of this).
Now, since you allow for multiple similar values, you need another identifying prop, say the object id or created_at. Your consumer (the caller of this module) will have to provide both (the last value of the score and the createdAt of the last object). Say you have a page split exactly in the middle: one or more objects are on the previous page, another set is on the next. You'd have to not simply remove the top value (because that same score was already served on the previous page), but possibly several of them from the top.
Then it gets really tricky, because potentially your whole page was already served: compare the _ids and look for the first one after the one your module caller provided. Or look into the data and determine how many matching values like that there are, and try to get at least as many more values from mapReduce than your actual page size.
Aside from that, I would do this with aggregation instead; it should be much more performant.
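For illustration, a rough sketch of what that aggregation could look like (field names are taken from the map function above; lastScore and lastCreatedAt are assumed names for the values taken from the last item of the previous page, and the $log10/$addFields stages require a MongoDB version that supports them):
Posts.aggregate([
  // compute the same score the map function emits
  { $addFields: { score: { $divide: [
    { $log10: { $add: ["$likes", "$comments", 1] } },
    { $abs: { $add: [{ $divide: [{ $subtract: [new Date(), "$createdAt"] }, 6e7] }, 1] } }
  ] } } },
  // on pages after the first, keep only items ranked after the last one already served
  { $match: { $or: [
    { score: { $lt: lastScore } },
    { score: lastScore, createdAt: { $lt: lastCreatedAt } }
  ] } },
  { $sort: { score: -1, createdAt: -1 } },
  { $limit: 15 }
], function (err, results) {
  if (err) return console.log(err);
  console.log(results);
});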

mongoose limit & nin not working properly

I am trying to limit the number of records returned in a query:
Property.find(searchParams).nin('_id', prop_ids).limit(5).exec(function(err, properties) {
When the first call comes in, I get 5 records back. Then I make a second call and pass in an array of ids (prop_ids). This array has the ids of all the records that were returned in the first call... in this case I get no records back. I have a total of 7 records in my database, so the second call should return 2 records. How should I go about doing this?
I think mongoose might apply the limit before the nin condition is applied, so you will always just get those five. If it's a kind of pagination you want to perform, where you get 5 objects and then the next 5, you can use the skip option instead:
var SKIP = ... // 0, 5, 10...
Property.find(searchParams, null, {
  skip: SKIP,
  limit: 5,
}, function(err, properties) {
})
This is what I took from your question; maybe you had something else in mind with the nin call?
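For example, a small sketch of driving that skip value from a page number (PAGE_SIZE and page are just illustrative names):
var PAGE_SIZE = 5;
var page = 1; // 0 = first page, 1 = second page, ...
Property.find(searchParams, null, {
  skip: page * PAGE_SIZE,
  limit: PAGE_SIZE,
}, function(err, properties) {
  // properties holds at most PAGE_SIZE records belonging to the requested page
});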
