I am using NearBySearch from Microsoft Azure. The official documentation says that when you make a query, the totalResults the API can return is X. However, you can also read that there is a limit on the number of items returned, which is at most 100.
When totalResults >= limit == 100, the API only returns the first 100 results, so the remaining ones are not shown.
Question: Would you be able to suggest a way to retrieve the additional results using the NearBySearch function?
Note: the Google NearbySearch API has a parameter called next_page_token, which lets you page through all the possible results. Is there something similar in Azure?
Each query is limited to 100 results. Say you have 150 totalResults: you execute the query with ofs=0 and limit=100 to get the first 100 entries. After that, you execute a second query with ofs=100 (it works like an index into the result set) and limit=100, which returns the next batch. Because there are only 50 results left, numResults will be 50.
I hope that is understandable.
Would you be able to suggest a way to retrieve the additional results using the NearBySearch function?
Looking at the documentation, I noticed that there is an offset parameter (ofs) which defaults to zero. You should be able to use it to get the next set of results when the total number of results is more than the limit you specified.
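To make that concrete, here is a rough Node.js sketch of paging with ofs and limit. It assumes the Azure Maps Search Nearby endpoint with lat, lon, limit, and ofs query parameters and a summary.totalResults field in the response; check the current documentation for the exact names and maximum values.

// Sketch only: page through "Search Nearby" results 100 at a time using ofs/limit.
const fetch = require('node-fetch');

async function searchNearbyAll(lat, lon, subscriptionKey) {
  const pageSize = 100;                 // per-request maximum mentioned in the docs
  let offset = 0;
  let all = [];
  while (true) {
    const url = 'https://atlas.microsoft.com/search/nearby/json' +
      `?api-version=1.0&subscription-key=${subscriptionKey}` +
      `&lat=${lat}&lon=${lon}&limit=${pageSize}&ofs=${offset}`;
    const body = await (await fetch(url)).json();
    all = all.concat(body.results);
    offset += pageSize;
    // Stop once we have paged past totalResults or a page comes back empty.
    if (body.results.length === 0 || offset >= body.summary.totalResults) break;
  }
  return all;
}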
I have a logic app that is fetching data from an API endpoint. The API uses pagination, with a limit of 50 objects per request, and then provides a link for the next 50 objects until all the objects have been returned; however, I have no idea how many objects there will be for a given request. My flow is briefly described below:
First make an initial HTTP request against the endpoint
Parse the HTTP response body to be able to use the nextLink URL provided.
An Until loop with the condition to run until nextLink is equal to null.
In the Until loop, a Set Variable action that sets the variable to a new URL for each request made, with new pagination appended to the end of the URL: "&_offset=100"
The issue with the Until loop is that you can set limits for count and timeout, as you can see here. As I have no clue how many pages there will be, I am expecting this loop to run until the condition specified is met. However, I have tried specifying some different values, listed below:
Count = 1 - Resulted in just 1 run
Count = empty - Resulted in it running for an hour (approx 3300 loops), as specified by the Timeout value.
Count = 60 - Resulted in it running 60 times
I have researched how many pages this specific request has, and it turns out there are 290. My expectation is that this Until loop will run until nextLink is equal to null, which will be after 290 loops. But I wonder if there is any possibility to specify a dynamic value for Count in the Until action?
I am expecting the Until action to run as many times as needed based on how many pages there are; at least, that is what I suppose it should do, because if I need to specify how many times it has to run, then this action is pretty useless. Hopefully there is someone in here who has faced the same issue.
Best regards
As far as I know, the "Until" action requires us to define at least one limit to prevent endless loops.
For your problem, you can just define a count that is large enough to allow your endpoint to show all of the pages. If you want to specify a dynamic value for the count, you need to meet two conditions:
You have to be able to access the total number of pages (for example, if your endpoint provides a URL to get it).
The count set in the "Until" action can only reference trigger inputs, trigger outputs, and parameters.
According to the statement in your question, I guess you can't meet these two conditions, so I think you can just set a count that is large enough.
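If it helps to see the pattern the Until loop is reproducing, here is a rough Node.js sketch of the same nextLink-until-null pagination. The nextLink property name comes from your description; the value field holding the objects in each page is a placeholder, so adjust it to your API's payload.

const fetch = require('node-fetch');

// Illustrative only: keep requesting pages until the API stops returning a nextLink.
async function fetchAllPages(initialUrl) {
  let url = initialUrl;
  let items = [];
  while (url) {                          // same condition as the Until loop: nextLink is not null
    const body = await (await fetch(url)).json();
    items = items.concat(body.value);    // 'value' is a placeholder for the array of objects
    url = body.nextLink || null;         // follow the link provided for the next 50 objects
  }
  return items;
}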
Table Storage & Node.js
When using the "queryEntities" function, sometimes result.entries.length is 0, even though I am pretty sure there are a lot of entries in the table. The "where" parameters are fine, but sometimes (maybe one call in 100) it returns 0 entries. No error is returned. Just 0 entries.
And that is causing trouble in my function.
My theory is that the database is sometimes saturated: this function executes every 10 seconds, and maybe sometimes, before one call finishes, another one starts and both operate on the same table, and instead of an error it returns a length of 0, which is awful.
Is there any way to resolve this? Shouldn't it return an error?
This is expected behavior. In this particular scenario, please check for the presence of a continuation token in the response. The presence of this token indicates that there may be more entities matching the query, and you should execute the same query again with the continuation token you received.
Please read this document for an explanation: https://learn.microsoft.com/en-us/rest/api/storageservices/query-timeout-and-pagination.
From this link:
A query against the Table service may return a maximum of 1,000 items at one time and may execute for a maximum of five seconds. If the result set contains more than 1,000 items, if the query did not complete within five seconds, or if the query crosses the partition boundary, the response includes headers which provide the developer with continuation tokens to use in order to resume the query at the next item in the result set.
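For reference, a minimal Node.js sketch of that pattern with the azure-storage package might look like the following; the connection string, table name, and query are placeholders.

var azure = require('azure-storage');
var tableService = azure.createTableService('<your connection string>');

// Collect every matching entity by following continuation tokens until none is returned.
function queryAll(table, query, token, entries, callback) {
  tableService.queryEntities(table, query, token, function (error, result) {
    if (error) { return callback(error); }
    entries = entries.concat(result.entries);
    if (result.continuationToken) {
      // A page with 0 entries but a continuation token means "keep going", not "no matches".
      queryAll(table, query, result.continuationToken, entries, callback);
    } else {
      callback(null, entries);
    }
  });
}

var query = new azure.TableQuery().where('PartitionKey eq ?', 'somePartition');
queryAll('mytable', query, null, [], function (error, entries) {
  if (!error) { console.log('total entities:', entries.length); }
});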
In the Instagram API documentation, for the API GET /users/user-id/media/recent,
it doesn't mention the maximum count supported for each API call.
Does anyone have any idea about it? Thanks.
It appears there is no limit for /users/user-id/media/recent - at least, I just ran this and got 5500+ results back for user '787132' (natgeo). If you want to limit it, use the count parameter.
Note that other endpoints seem to have limits, e.g. /media/popular will usually return a max of 20.
Also be aware that if you do not limit using the count param, you might reach your global or endpoint-specific rate limit as per http://instagram.com/developer/limits/
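If you want to combine the count parameter with paging, a rough Node.js sketch could look like this; the data and pagination.next_url field names are taken from the legacy API's response format, and the access token is a placeholder.

const fetch = require('node-fetch');

// Sketch only: cap each call with `count` and follow pagination.next_url until it disappears.
async function recentMedia(userId, accessToken, perPage) {
  let url = `https://api.instagram.com/v1/users/${userId}/media/recent/` +
            `?access_token=${accessToken}&count=${perPage}`;
  let media = [];
  while (url) {
    const body = await (await fetch(url)).json();
    media = media.concat(body.data);
    // next_url is only present while there are more pages; watch your rate limits here.
    url = (body.pagination && body.pagination.next_url) || null;
  }
  return media;
}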
Suppose I want to search for bugs reported in the last 2 years. The initial result page says "This result was limited to 500 bugs".
Apparently there are more than 500 bugs, so I click See all search results for this query. This time, it shows 10000 bugs, but with a message saying "This list is too long for Bugzilla's little mind; the Next/Prev/First/Last buttons won't appear on individual bugs"
So my questions are:
How do I know the exact number of bugs returned by my query? (It's unlikely to be exactly 10000.)
How do I view the entire search results? Currently it seems that if the search results exceed 10000, they are truncated, and I didn't find any prev/next page button to navigate the search results pages.
You may not see all the bugs because of the configuration your administrator set on your Bugzilla instance.
However, using the search function from the Bugzilla web service you can retrieve the list of bugs. If the number of bugs returned by the query is capped, then iterate on the search query using a higher offset and limit. Here is some pseudocode:
offset = 0
limit = 5000

// first page
currentcount = ws.search(criterias, offset, limit).count

// keep paging while each page comes back full
while currentcount == limit
{
    offset += limit
    currentcount = ws.search(criterias, offset, limit).count
}

// all full pages plus the final partial page
totalbugs = currentcount + offset
The same algorithm would work if you also wanted to get the whole list of bugs instead of just the count.
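As a concrete (hedged) example, the same loop against the Bugzilla REST API might look like the Node.js sketch below. It assumes GET /rest/bug accepts limit and offset and returns a bugs array, as the REST documentation describes; the base URL and search criteria are placeholders.

const fetch = require('node-fetch');

// Count all bugs matching a query by paging until a page comes back smaller than `limit`.
async function countBugs(baseUrl, criteria) {
  const limit = 5000;
  let offset = 0;
  let total = 0;
  let pageCount;
  do {
    const url = `${baseUrl}/rest/bug?${criteria}&limit=${limit}&offset=${offset}`;
    const body = await (await fetch(url)).json();
    pageCount = body.bugs.length;
    total += pageCount;
    offset += limit;
  } while (pageCount === limit);   // a full page means there may be more
  return total;
}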
If the idea of sending multiple queries to the web service doesn't feel right, you may have to talk to the admin to find out what hard limits are set on your install of Bugzilla and see how you can tweak them to get the results you need.
There is no limit on the code; how long the code is doesn't matter.
I want to do this because I got it from my company and I have to write a script for it; any idea regarding this is accepted.
nlapiSearchRecord(type, id, filters, columns)
Note: This API returns 1000 results at a time, so if the saved search has more than 1000 results you would need to run a sorted search in a loop and concatenate each batch with the results of the previous search (see the sketch below).
or nlapiLoadSearch(type, id)
Note: This returns 4000 results at a time.
These APIs allow you to fetch the results from the saved search.
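A rough SuiteScript 1.0 sketch of that loop-and-concatenate approach could look like this: sort by internal ID, and after each batch of 1000 filter on internalidnumber greater than the last ID seen. The record type and saved search ID are placeholders.

// Sketch only: fetch every result of a saved search 1000 rows at a time.
function getAllResults(recordType, savedSearchId) {
  var allResults = [];
  var lastId = 0;
  var page;
  do {
    var filters = [new nlobjSearchFilter('internalidnumber', null, 'greaterthan', lastId)];
    var columns = [new nlobjSearchColumn('internalid').setSort()];
    page = nlapiSearchRecord(recordType, savedSearchId, filters, columns) || [];
    allResults = allResults.concat(page);
    if (page.length > 0) {
      lastId = parseInt(page[page.length - 1].getId(), 10);
    }
  } while (page.length === 1000);   // a full page means there are more results to fetch
  return allResults;
}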
Regarding the API limit, a RESTlet allows 5000 usage units, so that's enough for most purposes. If you exceed this limit, you can make use of the API
nlapiYieldScript();
which creates a resume point so that the script resumes from that point.
** If any further clarification is needed please ask **
Cheers!!!!