Does PowerApps return the delegable filtered results before performing the non-delegable filtering in the app? - sharepoint

I am setting up a large (2000+ record) "task tracking register" using a SharePoint list, and intend to use PowerApps as the UI.
As you would imagine, there are numerous drop-down fields in the list that I would like to use as filters within the PowerApp, but because these are "complex" fields, they are non-delegable.
I'm led to believe that I can work around this by creating additional columns in the SharePoint list and using a Flow to populate them with plain text based on the drop-down selection.
This is a bit of a pain, so I'd like to limit the number of these helper columns as much as possible.
Can anyone advise whether a PowerApps gallery will first filter the results being returned using the delegable functions, and then perform the non-delegable search functions on those items, or whether the inclusion of a non-delegable search criterion means that the whole query is performed in a non-delegable manner?
i.e.
Filter 3,000 records down to 800 using a delegable search, then perform the additional filtering of those 800 in the app for the non-delegable search criteria.
I understand that it may be possible to do this by loading the initial filtered results into a collection within the app and filtering that collection, but I have read some conflicting information about the efficacy of this method, so I'm not sure if this is the route I should take.

Delegation can be a challenge. Here are some methods for handling it:
Users rarely need more than a few dozen records at any one time in a mobile app. Try to use delegable queries to create a collection locally. From there, it's lightning fast.
If you MUST pull in all 3k+ of your records, here's my favorite hack: collect chunks of your data source and then combine them into a single collection - for example, a series of delegable Filter calls over successive ID ranges, each appended to the same collection.
If you want the function (and the user's wait time) to scale, you can read the first and last ID and build those chunks dynamically.
Good luck!

Related

Creating a Bot Framework lookup field

I use the MS Bot Framework to create a bot for MS Teams.
I need to figure out how to implement a lookup field so that it fetches information through an OData feed from a SharePoint list.
I can give a little help on the SharePoint side. If possible, you might consider using a filter to limit the number of values returned:
http://www.andrewconnell.com/blog/Applying-Filters-to-Lookup-Fields-with-the-SP2013-REST-API
SharePoint 2013 REST How to select a look up field and also filter based on look up field?
https://sharepoint.stackexchange.com/questions/118633/how-to-select-and-filter-list-items-lookup-column-with-sharepoint-2013-rest-feat/118659#118659
Depending on how many items the filtered set returns (if you filter at all), you might also, or instead, want to use paging (a sketch combining a lookup filter with paging follows the links below):
https://platinumdogs.me/2013/05/14/client-and-server-driven-paging-with-the-sharepoint-rest-api/
https://sharepoint.stackexchange.com/questions/45719/paging-using-rest-odata-with-sp-2013
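For example, here's a rough sketch of what the REST call could look like from JavaScript, combining a lookup-column filter with server-driven paging. The site URL, the "Tasks" list and the "Category" lookup column are placeholders, and you'd still need whatever authentication your hosting context requires:
const listUrl = "https://contoso.sharepoint.com/sites/demo/_api/web/lists/getbytitle('Tasks')/items";
const query =
    "?$select=Id,Title,Category/Title" +
    "&$expand=Category" +                                        // lookup columns must be expanded
    "&$filter=" + encodeURIComponent("Category/Title eq 'HR'") + // filter on the lookup's text value
    "&$top=20";                                                  // page size
async function fetchAllPages(url, items = []) {
    const response = await fetch(url, {
        headers: { Accept: "application/json;odata=verbose" }
    });
    const data = await response.json();
    items.push(...data.d.results);
    // SharePoint hands back the link to the next page (server-driven paging)
    return data.d.__next ? fetchAllPages(data.d.__next, items) : items;
}
fetchAllPages(listUrl + query).then(items => console.log(items.length + " items"));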
On the Bot Framework side: how do you want to present the data to the users? Do you want to supply a limited amount of data (say, 20 values) and then have them give feedback on whether or not it contains what they need? If so, you can split the data returned from SharePoint into chunks and use a waterfall dialog to accomplish that. Optionally, if you paged the data from SharePoint you could get one page, query the user, get another page, query the user again, and so on until the goal is achieved.
Unfortunately, you're not giving enough information on the actual goal or what you are expecting this functionality to look like in the end.

Can you find a specific document's position in a sorted Azure Search index?

We have several Azure Search indexes that use a Cosmos DB collection of 25K documents as a source and each index has a large number of document properties that can be used for sorting and filtering.
We have a requirement to allow users to sort and filter the documents and then search for and jump to a specific document's page in the paginated result set.
Is it possible to query an Azure Search index with sorting and filtering and get the position/rank of a specific document ID from the result set? Would I need to look at an alternative option? I believe there could be a way of doing this with a SQL back-end, but obviously that would be a major undertaking to implement.
I've yet to find a way of doing this other than writing a query to paginate through until I find the required document, which would be a relatively expensive and possibly slow task in terms of processing on the server.
There is no mechanism in Azure Search for filtering within the resultset of another query. You'd have to page through results, looking for the document ID on the client side. If your queries aren't very selective and produce many pages of results, this can be slow as $skip actually re-evaluates all results up to the page you specify.
You could use caching to make this faster. At least one Azure Search customer is using Redis to cache search results. If your queries are selective enough, you could even cache the results in memory so you'd only pay the cost of paging once.
Trying this at the moment. I'm using a two-step process:
Generate your query but set $count=true and $top=0. The query result should contain a field named @odata.count.
You can then pick an index and use $top=1 and $skip=<index> to return a single entry. There is one caveat: $skip will only accept numbers less than 100000.
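A rough sketch of those two calls against the Azure Search REST API in JavaScript - the service name, index name, api-version, key, and the example filter/orderby fields are all placeholders to adjust for your setup:
const endpoint = "https://my-service.search.windows.net/indexes/my-index/docs/search?api-version=2020-06-30";
const headers = { "Content-Type": "application/json", "api-key": "<query-key>" };
async function search(body) {
    const response = await fetch(endpoint, { method: "POST", headers, body: JSON.stringify(body) });
    return response.json();
}
async function getDocumentAtPosition(position) {
    const query = { search: "*", filter: "status eq 'open'", orderby: "createdDate desc" };
    // Step 1: same filter/sort, but count only - no documents are returned.
    const countResult = await search({ ...query, count: true, top: 0 });
    const total = countResult["@odata.count"];
    // Step 2: pull back the single document at the requested position.
    // Caveat from above: skip must be less than 100000.
    const pageResult = await search({ ...query, top: 1, skip: position });
    return { total, document: pageResult.value[0] };
}
getDocumentAtPosition(799).then(result => console.log(result));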

Best way to build a custom table linked to a NotesView with sorting and paging

I have a view in my XPages application that contains a lot of elements. From this view I need to build a table with custom rows (I can't just display the view; I need to build the rows myself because I need to compute data from another database - something you can't do directly in a view).
To do so, I know that I can use a Data View, Data Table or repeat control (other ideas, maybe?). I certainly can't bring all the data to the client - it's way too much.
I am looking for a solution that will allow me to do paging (easy with the pager component) but, more importantly, sorting on header click. To be clear, I need sorting across all the entries of the view, not only the page currently displayed on the client.
What is the most efficient way to do this? I really have a lot of data to compute, so I need the fastest approach.
(I can create several views with different sorting criteria if needed.)
Any repeating control can have pagers applied to it. Also, View Panels can include data that is not in the current view - just set the columnName property to blank and compute the value property. Bear in mind you will not be able to sort on those columns, though - they're not columns, they're values computed at display time.
Any computed data is only computed for the entries currently shown. So if you have 5,000 entries in the view but are only displaying 30 at a time, the computed data will only be computed for the current 30.
If your users need to be able to sort on all columns and you have a lot of data, they basically have to accept that their requirements mean all that data needs computing when they enter the view - and whenever it's updated, by themselves or any other user. That's never going to be quick, and the requirements are the issue there, not the architecture. An RDBMS may be better as a back-end, if that's a requirement, as long as the data doesn't go across multiple tables. Otherwise a graph database structure may be a better alternative.
The bigger question is why the users need to sort on every column. Do the users really want to sort on the fifth column and then scroll down to entries beginning with "H"? Do they want to sort on the fourth column and scroll down to entries for May 2014? In the Notes Client, that's a traditional approach, because it's easier than filtering. But usually users know what they're looking for - they don't want entries beginning with "H", they want entries where the department is HR. If that's the case, sorting on all columns plus paging is not the most efficient method, either from a database design or a usability point of view.
To keep the processing fast and lightweight, I use JSON with jQuery DataTables.
Depending on the data size and usage, the JSON can be generated on the fly or on a scheduled basis and saved in Lotus Notes documents or applicationScope variables.
// Assumes `data` is the JSON array and `dataTable` is an initialised DataTables instance
$.each(data, function (i, item) {
    dataTable.row.add([item.something1, item.something2, item.something3]); // one row per JSON entry
});
dataTable.draw(); // redraw once after all rows have been added
You can compute a viewColumn but if you have a lot going on I wouldn't go that route.
This is where Java in XPages SHINES!
Build a Java object to represent your row, and use back-end logic in Java to gather all the data you need. Let's say you have a report of sales orders for a company, and the sales orders pull data from different places. Your company object would have a method like:
List<salesOrder> getOrders() {}
So in the repeat you call company.getOrders() and it returns all the rows that you worked out and populated in Java. Your "rowData" collection name in the repeat can then access all the data you want - just build it into a table.
But now the sorting... We've been using jQuery DataTables to do just this. It's all client side: your repeat comes down and then DataTables kicks in and makes everything sortable - no need to rely on views. Works great.
It's client side, but it supports paging and works pretty decently. If you're pumping out LOTS of records - 6,000+ - then you might want to look at outputting the data as JSON and taking advantage of some server caching. We're starting to use it with some really big output - LOTS of rows - and it's working well so far. Hopefully I'll have some examples on NotesIn9.com in the near future.
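For what it's worth, the DataTables setup itself is tiny. A minimal sketch, assuming your repeat renders a plain HTML table with id="ordersTable":
$(document).ready(function () {
    $("#ordersTable").DataTable({
        paging: true,        // client-side paging
        pageLength: 30,      // rows per page
        ordering: true,      // click-to-sort on every column header
        order: [[0, "asc"]]  // initial sort: first column, ascending
    });
});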

How does Solr work with data split into different services and therefore not synchronously available?

Take, for instance, an e-commerce store with catalog and price data in different web services. Now, we know that Solr does not allow partial updates to a document field (JIRA bug), so how do you index these two services?
I can think of three possibilities, but I'm not sure which one is correct:
Partial update - not possible.
Solr join - have price and catalog in separate indexes and join them in Solr. You can't join them in your client-side code without breaking pagination and facet counts. I don't know if this is possible pre-Solr 4.0.
Have some sort of intermediate indexing service which composes an entire document based on the results from both of these services and sends it for indexing. However, there are two problems with this approach:
3.1 You can still compose documents partially, and then when the document is complete, set a flag indicating that it is a complete document. However, to do this, each time a document has to be indexed it first has to check whether the document exists in the index, edit it, and push it back. So, big performance hit.
3.2 Your intermediate service checks whether a particular ID is available from all services - if not, it silently drops it and hopes that when it appears in the other service, the first service will already be populated. This is OK, but it means that an item is not available in search until all fields are available (not always desirable - if you don't have a price, you can simply set it to out-of-stock and still have the item available).
Of all these methods, only #3.2 looks viable to me - does anyone know how you do this kind of thing with DIH? Because now you have two different entry points into indexing (two different web services), and each has to check the other.
The usual way to solve this is close to your 3.2: write code that creates the document you want to index from the different available services. The usual flow would be to fetch all the items from the catalog, then fetch the prices while indexing. Whether you want items from the catalog that don't have prices available to appear in search depends on your business rules for the service. If you want to speed up the process (fetch product, fetch price, repeat), expand the API to fetch 1,000 products and then the prices for all of those products at the same time.
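A rough JavaScript sketch of that intermediate indexing step - the catalog/pricing URLs, the field names, and the "products" core are all made up, and note the JSON update endpoint is /update/json on Solr 3.x and /update on 4.x and later:
async function indexBatch(offset, batchSize) {
    // 1. Pull a batch of catalog items, then the prices for exactly those items.
    const products = await fetch("https://catalog.example.com/products?offset=" + offset + "&limit=" + batchSize)
        .then(r => r.json());
    const ids = products.map(p => p.id).join(",");
    const prices = await fetch("https://pricing.example.com/prices?ids=" + ids)
        .then(r => r.json()); // e.g. { "42": 19.99, ... }
    // 2. Compose one complete Solr document per product; a missing price just
    //    means the field is left out (or the item is flagged out-of-stock).
    const docs = products.map(p => ({
        id: p.id,
        name: p.name,
        category: p.category,
        price: prices[p.id],                 // undefined fields are dropped by JSON.stringify
        inStock: prices[p.id] !== undefined
    }));
    // 3. Send the whole batch to Solr in a single request.
    await fetch("http://localhost:8983/solr/products/update?commit=true", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(docs)
    });
}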
There is no reason why you should drop an item from the index just because it doesn't have a price, unless you don't want items without prices in your index. It's up to you and your particular needs what kind of information must be available before indexing the document.
As far as I remember, 4.0 will probably support partial updates as it moves to the new abstraction layer for the index files, although I'm not sure it'll make your situation that much more flexible.
Approach 3.2 is the most common, though I think about it slightly differently. First, think about what you want in your search results, then create one Solr document for each potential result, with as much information as you can get. If it is OK to have a missing price, then add the document that way.
You may also want to match the documents in Solr, but get the latest data for display from the web services. That gives fresh results and avoids skew between the batch updates to Solr and the live data.
Don't hold your breath for fine-grained updates to be added to Solr and Lucene. It gets a lot of its speed from not having record-level locking and update.

SharePoint 2007: List theory Question

I'm writing a solution around MOSS 2007 and storing fairly large quantities of data in a list.
My first question is: can lists handle large quantities of data - around 200,000 items? I've already read up about it, and it seems the limitation is on the number of items a view can display (2,000). So the question is: is this a recommendation or a real limitation? No documentation actually confirms this.
My second question: if it is a physical limitation on how many items a view can display, does that mean it's impossible to check for duplicates in a SharePoint list that contains vast quantities of data?
In the sense that to call wsList.getListItems you have to pass a view - if the list contains 100,000 records and the view can only return 2,000 records, how is it possible to check for duplicates?
Thanks
Huge list performance
You may want to read "Scaling to Extremely Large Lists and Performant Access Methods" and "Best Practices for LARGE SharePoint Lists and Documents Libraries".
Another thing this article does not mention: adding list items with SPList.Items.Add is a huge performance penalty on large lists. What you should do instead is build an efficient query that returns no items and then add the item to that collection (somewhere I read that the web services perform well at adding items, but I can't find that article any more).
You can also see some tests (or other tests) on how huge lists perform.
As for duplicates
You may want to create a scheduled job (SPJobDefinition) that runs somewhere in the night and checks for duplicates.
A better idea than looping over all SPListItems and then querying the list for each item to check for duplicates would probably be to get a DataTable (SPListItemCollection.GetDataTable()) for all items and use some technique on it to determine duplicates.
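The grouping step itself is simple. Here's an illustrative sketch, shown in JavaScript only for brevity - the real timer job would do the same thing in C# over the rows returned by GetDataTable(), and "Title"/"DueDate" are just example columns that define what counts as a duplicate:
function findDuplicates(rows, keyColumns) {
    const groups = new Map();
    for (const row of rows) {
        const key = keyColumns.map(c => row[c]).join("||"); // composite key of the fields that define a duplicate
        if (!groups.has(key)) groups.set(key, []);
        groups.get(key).push(row);
    }
    // Only the groups with more than one row are duplicates.
    return [...groups.values()].filter(group => group.length > 1);
}
const rows = [
    { Title: "Order follow-up", DueDate: "2010-05-01" },
    { Title: "Order follow-up", DueDate: "2010-05-01" },
    { Title: "Send invoice",    DueDate: "2010-05-02" }
];
console.log(findDuplicates(rows, ["Title", "DueDate"]));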
As for views
Filter the items, order them so the relevant ones come first, and define your RowLimit. That's the key for views - you just need the most relevant items, don't you?
You can have very large lists, but the performance is going to SUCK.
We had lists with 50,000+ items in a project, and we found the best way to query and process the contents was using SPSiteDataQuery and CrossListQueryCache and formatting the queries in the obscure, annoying SharePoint CAML dialect.
If possible, breaking the items up into containers such as folders will help with performance. If one of the list item fields is some type of classification lookup, it could be replaced by putting items in folders of that classification type.
