I have a SpringCM folder that contains thousands of small files. I'm retrieving them this way:
GET /v201411/folders/{id}/documents
but when it executes, I get back only 20 files. The sum of all their sizes is 1.8 MB, and the Content-Length in the response headers is only 3.8 MB.
I didn't find anything in their documentation that mentions a limit on retrieving documents via the API.
Is that really a limitation of SpringCM?
From the documentation on API Collections:
Limit (integer): The maximum number of elements retrieved per request.
Default limit is 20. Maximum limit is 100.
When there are more items in the collection than the specified limit,
the application can page through the collection, retrieving the
objects in chunks by specifying the limit and/or offset on the query
string when the collection is requested. The first, previous, next,
and last properties are added as a convenience by appending the
appropriate limit and offset to the URI; a GET request can be made
to the URIs specified by these properties to navigate the collection.
To minimize the number of sequential calls you need to make, you can adjust the limit parameter up to the maximum of 100.
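A minimal paging sketch in Python with requests, assuming the api.springcm.com host, an OAuth bearer token, and the Items/Next collection properties described above (the exact property names are my assumption from SpringCM's collection conventions, so verify them against a real response):

import requests

token = "your-access-token"   # placeholder
folder_id = "your-folder-id"  # placeholder
headers = {"Authorization": "Bearer " + token}

documents = []
# Start at the maximum page size of 100 instead of the default 20.
url = "https://api.springcm.com/v201411/folders/%s/documents?limit=100" % folder_id
while url:
    page = requests.get(url, headers=headers).json()
    documents.extend(page.get("Items", []))
    # The collection exposes a next link while more items remain.
    url = page.get("Next")

print(len(documents))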
Related
I am using NearBySearch from Microsoft Azure. The official documentation says that when you make a query, the totalResults the API can return is X. However, you can also read that there is a limit on the number of items returned, which is at most 100.
In the case that totalResults >= limit == 100, the API will only return the first 100 results, leaving out the remaining ones.
Question: Would you be able to suggest a way to retrieve the additional results using the NearBySearch function?
Note: the Google API NearBySearch has a parameter called next_page_token, which allows you to retrieve all the possible results. Is there something similar in Azure?
Each query returns at most 100 results. If you have 150 totalResults, you can execute the query with ofs=0 and limit=100 to get the first 100 entries. After that, you execute a second query with ofs=100 (it works like an index) and limit=100 again; you will get the next batch, and because only 50 results are left, numResults will be 50.
I hope that is understandable.
Would you be able to suggest a way to retrieve the additional results
using the NearBySearch function?
Looking at the documentation, I noticed that there is an offset parameter (ofs), which defaults to zero. You should be able to use it to get the next set of results when the total results exceed the limit you specified.
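A sketch of that loop in Python, assuming the Azure Maps Search Nearby REST endpoint with a subscription key; the results and summary.totalResults field names are assumptions based on the Azure Maps search response format:

import requests

url = "https://atlas.microsoft.com/search/nearby/json"
params = {
    "api-version": "1.0",
    "subscription-key": "your-key",  # placeholder
    "lat": 47.6062,                  # placeholder coordinates
    "lon": -122.3321,
    "limit": 100,                    # maximum page size
    "ofs": 0,                        # offset into the result set
}

results = []
while True:
    page = requests.get(url, params=params).json()
    batch = page.get("results", [])
    results.extend(batch)
    total = page.get("summary", {}).get("totalResults", 0)
    params["ofs"] += len(batch)      # advance the offset by what we received
    if not batch or params["ofs"] >= total:
        break

print(len(results))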
I have created a Lambda function that uses the awswrangler Data API to read data from an RDS Serverless Aurora PostgreSQL database with a query. The query contains a conditional that is a list of IDs. If the query has fewer than 1K IDs it works great; if over 1K I get this message:
Maximum BadRequestException retries reached for query
An example query is:
"""select * from serverlessDB where column_name in %s""" % ids_list
I adjusted the serverless RDS instance to force scaling and increased the concurrency of the Lambda function. Is there a way to fix this issue?
With PostgreSQL, 1000+ items in a WHERE IN clause should be just fine.
I believe you are running into a Data API limit.
There isn't a fixed upper limit on the number of parameter sets. However, the maximum size of the HTTP request submitted through the Data API is 4 MiB. If the request exceeds this limit, the Data API returns an error and doesn't process the request. This 4 MiB limit includes the size of the HTTP headers and the JSON notation in the request. Thus, the number of parameter sets that you can include depends on a combination of factors, such as the size of the SQL statement and the size of each parameter set.
The response size limit is 1 MiB. If the call returns more than 1 MiB of response data, the call is terminated.
The maximum number of requests per second is 1,000.
Either your request exceeds 4 MiB or the response of your query exceeds 1 MiB. I suggest you split your query into multiple smaller queries, as sketched below.
Reference: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html
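A sketch of that split with awswrangler, assuming numeric IDs, a Data API connection created with wr.data_api.rds.connect(), and an arbitrary chunk size of 500 (tune it to your row sizes):

import awswrangler as wr
import pandas as pd

con = wr.data_api.rds.connect(
    resource_arn="arn:aws:rds:...",           # placeholder cluster ARN
    database="mydb",                          # placeholder
    secret_arn="arn:aws:secretsmanager:...",  # placeholder
)
ids_list = [1, 2, 3]  # your full list of IDs

def chunks(seq, size=500):
    # Yield successive slices of seq, each at most size long.
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

frames = []
for batch in chunks(ids_list):
    # Inlining numeric IDs keeps each request well under the 4 MiB cap
    # and each individual response under the 1 MiB cap.
    placeholders = ", ".join(str(int(x)) for x in batch)
    sql = "select * from serverlessDB where column_name in (%s)" % placeholders
    frames.append(wr.data_api.rds.read_sql_query(sql=sql, con=con))

result = pd.concat(frames, ignore_index=True)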
I have a Liferay 6.2 server that has been running for years and is starting to take a lot of database space, despite limited actual content.
Table            Size    Number of rows
---------------------------------------
DLFileRank       5 GB    16 million
DLFileEntry      90 MB   60,000
JournalArticle   2 GB    100,000
The size of the DLFileRank table seems abnormally large to me (if it is totally normal, please let me know).
While the file ranking feature of Liferay is nice to have, we would not really mind resetting it if it halves the size of the database.
Question: Would a DELETE FROM DLFileRank be safe? (stop Liferay, run that SQL command, maybe set dl.file.rank.enabled=false in portal-ext.properties, start Liferay again)
Is there any better way to do it?
Bonus if there is a way to keep recent ranking data and throw away only the old data (not a strong requirement).
Wow. According to the documentation here (Ctrl-F rank), I'd not have expected the number of entries to be so high - did you configure those values differently?
Set the interval in minutes on how often CheckFileRankMessageListener
will run to check for and remove file ranks in excess of the maximum
number of file ranks to maintain per user per file. Defaults:
dl.file.rank.check.interval=15
Set this to true to enable file rank for document library files.
Defaults:
dl.file.rank.enabled=true
Set the maximum number of file ranks to maintain per user per file.
Defaults:
dl.file.rank.max.size=5
And according to the implementation of CheckFileRankMessageListener, it should be enough to just trigger DLFileRankLocalServiceUtil.checkFileRanks() yourself (e.g. through the scripting console, as sketched below). Why you accumulate that large a number of file ranks is beyond me...
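A sketch for the 6.2 script console (Control Panel > Server Administration > Script, with Python selected as the language); the service class and method are the ones named above, the console setup is my assumption:

# Runs the same cleanup that CheckFileRankMessageListener performs on its
# schedule: trims ranks beyond dl.file.rank.max.size per user per file.
from com.liferay.portlet.documentlibrary.service import DLFileRankLocalServiceUtil

DLFileRankLocalServiceUtil.checkFileRanks()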
As you might know, I can never be quoted as saying that direct database manipulation is the way to go - in fact, I refuse to think about the problem that way.
When using Python to make a GET request to the Instagram API, passing the required variables as shown below:
photos = api.media_search(lat=latitude, lng=longitude, distance=distance, count=count)
I have attempted to set the count parameter to over 100, but the API returns a maximum of 100 results.
Is this a limitation of the API, or am I doing something wrong?
The Instagram API documentation says there is a maximum value of count for each endpoint; from the docs:
On views where pagination is present, we also support the "count" parameter. Simply set this to the number of items you'd like to receive. Note that the default values should be fine for most applications - but if you decide to increase this number there is a maximum value defined on each endpoint.
However, I couldn't find any indication of that number in the documentation, neither for the media request nor for other requests. So I would assume that they don't guarantee any specific number.
They do specify that if the application is in sandbox mode, the data is restricted to the 20 most recent media.
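A quick way to see the cap in action with the client from the question (a sketch: latitude, longitude, and distance are placeholders, and api is the already-authenticated client):

# Ask for more than 100 items and count what actually comes back.
latitude, longitude, distance = 40.7128, -74.0060, 1000  # placeholders
photos = api.media_search(lat=latitude, lng=longitude,
                          distance=distance, count=200)
print(len(photos))  # at most 100, regardless of the requested count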
I am able to use skip, limit (and order by) to fetch the contents of a particular page in the UI.
E.g., to render the nth page with page size m, the UI asks for skip n*m and limit m.
But the UI also wants to generate links for all the possible pages. For that I have to return the total number of rows available in Neo4j.
E.g., for p total rows, the UI will generate hyperlinks 1, 2, 3, ... (p/m).
What is the best way (in terms of performance) to get the total number of rows while using skip and limit in the Cypher query?
In general it is not advisable, as fetching all results requires you to pull large swaths of the graph into memory.
You have two options:
use a simpler version of your query as a separate count query (which might also run asynchronously); see the sketch after the example below
merge the count query and your real query into one, as shown below; this will be much more expensive than your plain skip/limit query, in the worst case totalcount/pageSize times more expensive
// legacy index lookup for the user node
start n=node:User(name={username})
// first pass: count every KNOWS relationship to get the total
match n-[:KNOWS]->()
with n, count(*) as total
// second pass: fetch the requested page, carrying the total along
match n-[:KNOWS]->m
return m.name, total
skip {offset}
limit {pagesize}
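For the first option, here is a sketch with the official neo4j Python driver, assuming a modern server where the legacy START/index lookup becomes a MATCH on a name property (connection details are placeholders):

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))  # placeholders

def fetch_page(username, page, page_size):
    with driver.session() as session:
        # Separate, cheaper count query: no ordering, no pagination.
        total = session.run(
            "MATCH (:User {name: $username})-[:KNOWS]->() "
            "RETURN count(*) AS total",
            username=username,
        ).single()["total"]

        # The paged query itself.
        rows = session.run(
            "MATCH (:User {name: $username})-[:KNOWS]->(m) "
            "RETURN m.name AS name SKIP $offset LIMIT $limit",
            username=username, offset=page * page_size, limit=page_size,
        )
        return total, [record["name"] for record in rows]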