REST API: Infinite scroll pagination in the GUI, but allow searching through all entries - node.js

I have Express running in a Node.js server, which serves as a backend for my React frontend application.
The frontend application fetches data from the backend (which is stored in Mongo) through a REST call, and displays this data in a table.
The amount of data is growing by the day, so I thought I should look into reducing the amount of data transferred to the frontend application, to avoid unnecessary strain on the backend.
I'm not sure if this is the right way to approach this, but I've been thinking I would look into having the backend fetch a limited number of entries, so that only this data is displayed in the frontend table.
The problem arises with searching - when the user wants to search the data in the table, I'll need to be able to search through all entries, not just the data loaded into the table.
I guess one option would be to have the search function actually query the REST API, instead of searching the table itself.
If I'm on the right track, I guess I could implement REST API pagination, along the lines of the example found at https://refactoringfactory.wordpress.com/2012/09/08/pagination-in-node-js-and-express/. Other suggestions on how to implement pagination are welcome.
I'd very much like some input on the approach I described, and suggestions for smarter ways to implement this.
EDIT: I changed the title somewhat to include "infinite scroll pagination". This is what I'm looking to implement. At the moment I have a click-through page pagination setup, but I would like to replace it with infinite scroll pagination.

I've been thinking I would look into having the backend fetch a limited number of entries, so that only this data is displayed in the frontend table.
This is common practice in my experience. The term for it is "pagination." Have a look at this SO question regarding best practices for pagination in REST APIs: API pagination best practices.
The problem arises with searching - when the user wants to search the data in the table, I'll need to be able to search through all entries, not just the data loaded into the table.
I guess one option would be to have the search function actually query the REST API, instead of searching the table itself.
Again, you got it. Doing small filters/searches on the client is fine for a limited number of entries, but if you need to only retrieve items matching search criteria in the first place, then adding that functionality to your REST API is the right choice.
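For illustration, here's a minimal browser-side sketch of that combination: the search box queries the REST API instead of filtering the rows already loaded, and scrolling appends the next page. The /api/entries path and the page/limit/search parameter names are assumptions made up for the sketch, not something your backend already exposes.

    // Minimal infinite-scroll + server-side search sketch (assumed endpoint /api/entries).
    let page = 1;
    let done = false;
    let loading = false;
    let currentSearchTerm = '';
    const limit = 25;

    function renderRow(item) {
      // Placeholder: the real app would append a row to the React table instead.
      console.log(item);
    }

    async function loadNextPage() {
      if (loading || done) return;
      loading = true;
      const params = new URLSearchParams({ page, limit, search: currentSearchTerm });
      const res = await fetch(`/api/entries?${params}`);
      const { items, hasMore } = await res.json();
      items.forEach(renderRow);
      if (hasMore) page += 1; else done = true;  // stop once the server runs out of data
      loading = false;
    }

    // Searching goes to the API, so it covers all entries, not just the loaded ones.
    function onSearch(term) {
      currentSearchTerm = term;
      page = 1;
      done = false;
      // The real app would also clear the previously rendered rows here.
      loadNextPage();
    }

    // Fetch the next page when the user scrolls near the bottom of the page.
    window.addEventListener('scroll', () => {
      const nearBottom =
        window.innerHeight + window.scrollY >= document.body.offsetHeight - 200;
      if (nearBottom) loadNextPage();
    });

    loadNextPage(); // initial page

Resetting the page counter whenever the search term changes is what keeps the infinite scroll and the server-side search consistent with each other.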

Right, you should do both.
Pagination: you might implement it by exposing two arguments on the REST endpoint for the listing:
?p=<number>: page number, defaults to 1
?l=<number>: number of items per page / page length, defaults to something between 10 and 100
Search: implement it by exposing one argument on the REST endpoint for the listing:
?q=<string>: you can define this to be whatever you want, for example a string that is matched against one or more fields of the data
If you want to minimize network traffic, you might also add one more parameter to explicitly select the fields you want returned, like this:
?f=<string>: the string could be something like id,name,age, and the API should then return only those three fields per record.
All these parameters should be accepted by the list endpoint of your RESTful API (a minimal sketch of such an endpoint follows the example below).
Example:
http://example.com/api/cars/?p=2&l=15&q=toyota&f=id,brand,model,color
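For concreteness, here is a minimal Express/Mongoose sketch of such a list endpoint. The Car model, its fields, and the default/maximum page lengths are assumptions made up for the example.

    // Minimal list endpoint accepting ?p= (page), ?l= (page length),
    // ?q= (search) and ?f= (field selection). Model and fields are assumptions.
    const express = require('express');
    const mongoose = require('mongoose');

    mongoose.connect('mongodb://localhost:27017/cars');

    const Car = mongoose.model('Car', new mongoose.Schema({
      brand: String,
      model: String,
      color: String,
    }));

    const app = express();

    app.get('/api/cars', async (req, res) => {
      const page = Math.max(parseInt(req.query.p, 10) || 1, 1);
      const length = Math.min(Math.max(parseInt(req.query.l, 10) || 25, 1), 100);

      // ?q= is matched case-insensitively against a couple of fields
      // (in real code, escape the user input before building a RegExp).
      const filter = req.query.q
        ? { $or: [
            { brand: new RegExp(req.query.q, 'i') },
            { model: new RegExp(req.query.q, 'i') },
          ] }
        : {};

      let query = Car.find(filter).skip((page - 1) * length).limit(length);
      // ?f=id,brand,model becomes a Mongoose field selection.
      if (req.query.f) query = query.select(req.query.f.split(',').join(' '));

      const [items, total] = await Promise.all([query, Car.countDocuments(filter)]);
      res.json({ items, page, total, hasMore: page * length < total });
    });

    app.listen(3000);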

Related

How to better implement a more complex sorting strategy

I have an application with posts. Those posts are shown in the home view in descending order by creation date.
I want to implement a more complex sorting strategy based on, for example, posts from users who have more posts, posts with more likes, or more views. Nothing complex, simple things, with some randomness mixed in. Let's say I take the 100 most-liked posts and pick 10 of them at random.
To achieve this I don't want to do it in the same query, since I don't want to affect its performance. I am using MongoDB, and I would need to use $lookup, which wouldn't be advisable in the most critical query of the app.
What would be the best approach to implement this?
I thought about doing all those calculations with, for example, AWS Lambda or triggers in MongoDB Atlas, every 30 seconds, and storing the resulting information in the database, where it can be consumed by the query (a rough sketch of this idea follows below).
That way, every 30 seconds, let's say the first 30 posts would be updated depending on the criteria.
I don't really know if this is a good approach or not. I need something that isn't complex, but is able to "mix" all the posts and show first the ones that comply with the criteria.
Thanks!
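A rough sketch of the precompute-and-store idea from this question, written as a small Node script that could run from a scheduled Lambda, an Atlas trigger, or a cron job: take the 100 most-liked posts, sample 10 of them at random, and write the result to a separate collection the home query can read cheaply. The posts and featured_posts collections and the likes field are assumptions.

    // Periodically precompute a random pick of top-liked posts into featured_posts.
    const { MongoClient } = require('mongodb');

    async function refreshFeaturedPosts(db) {
      const picks = await db.collection('posts').aggregate([
        { $sort: { likes: -1 } },    // most-liked first
        { $limit: 100 },             // keep the top 100
        { $sample: { size: 10 } },   // pick 10 of those at random
      ]).toArray();

      // Replace the precomputed list; the home view only ever reads this collection.
      await db.collection('featured_posts').deleteMany({});
      if (picks.length) await db.collection('featured_posts').insertMany(picks);
    }

    async function main() {
      const client = await MongoClient.connect('mongodb://localhost:27017');
      const db = client.db('app');
      await refreshFeaturedPosts(db);
      setInterval(() => refreshFeaturedPosts(db), 30 * 1000); // every 30 seconds
    }

    main().catch(console.error);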

Does Powerapps return the delegatable filtered results, prior to performing the non-delegatable filtering on the app?

I am setting up a large (2000+ records) "task tracking register" using a SharePoint List, and intend to use Powerapps as the UI.
As you would imagine, there are numerous drop-down fields in the list which I would like to use as filters within the Powerapp, but because these are "complex" fields, they are non-delegatable.
I'm led to believe that I can avoid this by creating additional columns in the SharePoint list, populated by a Flow with plain text based on the drop-down selection.
This is a bit of pain, so I'd like to limit the quantity of these helper columns as much as possible.
Can anyone advise whether a Powerapps Gallery will first filter the results being returned using the delegatable functions, and then perform the non-delegatable search functions on those items, or whether the inclusion of a non-delegatable search criterion means that the whole query is performed in a non-delegatable manner?
i.e.
Filter 3000 records down to 800 using delegatable search, then perform the additional filtering of those 800 on the app for the non-delegatable search criteria.
I understand that it may be possible to do this by loading the initial filtered results into a collection within the app and then filtering that collection, but I have read some conflicting information about the efficacy of this method, so I'm not sure if this is the route I should take.
Delegation can be a challenge. Here are some methods for handling it:
Users rarely need more than a few dozen records at any time in a mobile app. Try to use delegable queries to create a Collection locally. From there, it's lightning fast.
If you MUST pull in all 3k+ of your records, here's my favorite hack: collect chunks of your data source, then combine them into a single collection.
If you want the function (and the user's wait time) to scale, you can determine the first and last ID and use them to build the chunks dynamically.
Good luck!

Redis pagination strategy for infinite scrolling page

TL;DR: which of the three options below is the most efficient for paginating with Redis?
I'm implementing a website with multiple user-generated posts, which are saved in a relational DB and then copied to Redis in the form of Hashes with keys like site:{site_id}:post:{post_id}.
I want to perform simple pagination queries against Redis in order to implement lazy-load pagination (i.e. the user scrolls down, we send an Ajax request to the server asking for the next batch of posts) in a Pinterest-style interface.
Then I created a Set to keep track of published post IDs, with keys like site:{site_id}:posts. I've chosen Sets because I don't want duplicated IDs in the collection, and I can add to it quickly with a simple SADD (no need to check whether the ID exists) on every DB update.
Well, as Sets aren't ordered, I'm weighing the pros and cons of the options I have to paginate:
1) Using the SSCAN command to paginate my already-implemented Sets
In this case, I could persist the returned SCAN cursor in the user's session, then send it back to the server on the next request (it doesn't seem reliable with multiple users accessing and updating the database: at some point the cursor would become invalid and return weird results, unless there is some caveat that I'm missing).
2) Refactor my sets to use Lists or Sorted Sets instead
Then I could paginate using LRANGE or ZRANGE. A List seems to be the most performant and natural option for my use case. It's perfect for pagination and ordering by date, but I simply can't check for a single item's existence without looping through the whole list. Sorted Sets seem to combine the advantages of both Sets and Lists, but consume more server resources.
3) Keep using regular sets and store the page number as part of the key
It would be something like site:{site_id}:{page_number}:posts. It was the recommended way before the SCAN commands were implemented.
So, the question is: which one is the most efficient / simplest approach? Is there any other recommended option not listed here?
"Best" is best served subjective :)
I recommend you go with the 2nd approach, but definitely use Sorted Sets over Lists. Not only do they make sense for this type of job (see ZRANGE), they're also more efficient in terms of complexity compared to LRANGE-ing a List.
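For illustration, here is a small sketch of that Sorted Set approach using the ioredis client, scoring each post by its creation timestamp so ZREVRANGE returns the newest posts first. The key naming follows the question's site:{site_id}:posts convention.

    // Sorted-Set pagination sketch: score = creation timestamp, member = post id.
    const Redis = require('ioredis');
    const redis = new Redis();

    // Called whenever a post is published/copied into Redis.
    async function trackPost(siteId, postId, createdAtMs) {
      await redis.zadd(`site:${siteId}:posts`, createdAtMs, postId);
    }

    // One "page" for the infinite scroll: page 0 is the newest perPage posts.
    async function getPostIds(siteId, page, perPage = 20) {
      const start = page * perPage;
      const stop = start + perPage - 1;
      return redis.zrevrange(`site:${siteId}:posts`, start, stop);
    }

    // Membership check stays cheap (O(log N)), unlike scanning a whole List.
    async function isTracked(siteId, postId) {
      return (await redis.zscore(`site:${siteId}:posts`, postId)) !== null;
    }

Since ZADD simply updates the score when the member already exists, this keeps the no-duplicate-IDs property you originally wanted from plain Sets, while ZREVRANGE gives you ordered pages.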

Dojo JsonRest on one side and Mongodb on the other side: pagination/filtering?

I am experimenting with Dojo's dgrid (which is great!). I am using Nodejs/Mongoose on the server side.
I want to write a "log browser": I have a big mongodb table containing lots of log entries; using dgrid, I want to be able to 1) Filter by certain parameters 2) Paginate using dgrid's native pagination.
Hence the problem: dojo's JsonRest stores will send a request like this:
Accept:application/javascript, application/json
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
...
Host:localhost:3000
Range:items=0-24
So it will give a range (that's all it can do, really) and will display things on the client side according to what it receives from the server.
It's unrealistic to expect a client-side JsonRest object to make requests other than "ranges". However, I am aware that skip/limit doesn't go very well with Mongoose:
What is the best way to do ajax pagination with MongoDb and Nodejs?
My idea was to render the dgrid, allow the users to pick filters, and let them happily paginate through their logs. However, given that skip/limit is out of the question, I am in a bit of a pickle...
Any pearls of wisdom, other than ditching dgrid altogether and implementing pagination on my own without using Dojo stores?
Merc.
Front-end
The filtering isn't as feature-full in dgrid as it is in the Dojo EnhancedGrid filter plugin, so you will probably need to implement that part yourself.
The good news is you get the paging simply by mixing-in "dgrid/OnDemandGrid" when you create your grid.
Back-end
The docs seem to indicate that your best bet for performance is to do some tricks with indices and query based on those to get your ranges.
You are probably already referencing these, but here they are:
http://mongoosejs.com/docs/api.html#query_Query-skip
http://docs.mongodb.org/manual/reference/method/cursor.skip/
Since log data is usually sequential and rarely modified, you could probably just use a monotonically increasing index for each row of log data and query on that to get the right offset into, and count of, the rows.
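As a rough illustration of that idea, the sketch below translates the JsonRest Range header into a query on an indexed seq field instead of using skip(). The LogEntry model, the seq field, and the /logs route are assumptions, and seq is assumed to be dense (0, 1, 2, ...) so it can double as the row offset.

    // Serve dgrid/JsonRest "Range: items=0-24" requests from a sequence index.
    const express = require('express');
    const mongoose = require('mongoose');

    mongoose.connect('mongodb://localhost:27017/logs');

    const LogEntry = mongoose.model('LogEntry', new mongoose.Schema({
      seq: { type: Number, index: true },   // assigned sequentially on insert
      level: String,
      message: String,
      createdAt: Date,
    }));

    const app = express();

    app.get('/logs', async (req, res) => {
      // Parse "Range: items=0-24"; default to the first 25 rows.
      const match = /items=(\d+)-(\d+)/.exec(req.headers.range || '');
      const start = match ? parseInt(match[1], 10) : 0;
      const end = match ? parseInt(match[2], 10) : 24;

      // Range over the indexed sequence instead of skip(): stays cheap deep into the log.
      const items = await LogEntry.find({ seq: { $gte: start, $lte: end } })
        .sort({ seq: 1 });
      const total = await LogEntry.estimatedDocumentCount();

      // The JsonRest store reads the total item count from the Content-Range header.
      res.set('Content-Range', `items ${start}-${end}/${total}`);
      res.json(items);
    });

    app.listen(3000);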

How does solr work with data split into different services and therefore not synchronously available?

Take, for instance, an ecommerce store with catalog and price data in different web services. Now, we know that Solr does not allow partial updates to a document field (JIRA bug), so how do you index these two services?
I see three possibilities, but I'm not sure which one is correct:
Partial update - not possible
Solr join - keep price and catalog in separate indexes and join them in Solr. You can't join them in your client-side code without screwing up pagination and facet counts. I don't know if this is possible in pre-Solr 4.0.
Have some sort of intermediate indexing service, which composes an entire document based on the results from both these services and sends it for indexing. However, there are two problems with this approach:
3.1 You can still compose documents partially and then, when the document is complete, set a flag indicating that it is a complete document. However, to do this, each time a document has to be indexed you first have to check whether the document already exists in the index, edit it, and push it back. So: big performance hit.
3.2 Your intermediate service checks whether a particular ID is available from all services - if not, it silently drops it and hopes that when it appears in the other service, the first service will already be populated. This is OK, but it means that an item is not available in search until all fields are available (not always desirable - if you don't have a price, you can simply set it to out-of-stock and still have it available).
Of all these methods, only #3.2 looks viable to me - does anyone know how you do this kind of thing with DIH? Because now you have two different entry points (two different web services) into indexing, and each has to check the other.
The usual way to solve this is close to your 3.2: write code that creates the document you want to index from the different available services. The usual flow would be to fetch all the items from the catalog, then fetch the prices when indexing. Whether you want to have items in the search from the catalog that don't have prices available depends on your business rules for the service. If you want to speed up the process (fetch product, fetch price, repeat), expand the API to fetch 1000 products and then prices for all of those products at the same time (roughly as in the sketch below).
There is no reason why you should drop an item from the index if it doesn't have a price, unless you don't want items without prices in your index. It's up to you and your particular needs what kind of information you need to have available before indexing the document.
As far as I remember, 4.0 will probably support partial updates as it moves to the new abstraction layer for the index files, although I'm not sure it'll make your situation that much more flexible.
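For illustration, here is a rough Node sketch of that compose-then-index flow: pull a batch of products from the catalog service, fetch prices for the whole batch in one call, merge them into complete documents, and post the batch to Solr's JSON update handler. The two service URLs, the products core name and the field names are assumptions, and the global fetch() assumes Node 18+.

    // Compose catalog + price data into full documents, then index them in batches.
    const CATALOG = 'http://catalog.internal/api/products';   // assumed endpoint
    const PRICES  = 'http://prices.internal/api/prices';      // assumed endpoint
    const SOLR    = 'http://localhost:8983/solr/products/update?commit=true';

    async function indexBatch(offset, size = 1000) {
      const products = await (await fetch(`${CATALOG}?offset=${offset}&limit=${size}`)).json();
      if (products.length === 0) return 0;

      // One price lookup for the whole batch instead of one call per product.
      const ids = products.map((p) => p.id).join(',');
      const prices = await (await fetch(`${PRICES}?ids=${ids}`)).json(); // assumed shape: { [id]: price }

      // A missing price becomes "out of stock" rather than dropping the item.
      const docs = products.map((p) => ({
        id: p.id,
        name: p.name,
        category: p.category,
        price: prices[p.id] ?? null,
        in_stock: prices[p.id] != null,
      }));

      await fetch(SOLR, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(docs),
      });
      return products.length;
    }

    async function fullReindex() {
      let offset = 0;
      let indexed;
      do {
        indexed = await indexBatch(offset);
        offset += indexed;
      } while (indexed > 0);
    }

    fullReindex().catch(console.error);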
Approach 3.2 is the most common, though I think about it slightly differently. First, think about what you want in your search results, then create one Solr document for each potential result, with as much information as you can get. If it is OK to have a missing price, then add the document that way.
You may also want to match the documents in Solr, but get the latest data for display from the web services. That gives fresh results and avoids skew between the batch updates to Solr and the live data.
Don't hold your breath for fine-grained updates to be added to Solr and Lucene. It gets a lot of its speed from not having record-level locking and updates.
