Google Photos API mediaItems list/search methods ignore pageSize param - google-photos-api

I am attempting to retrieve all of the media items a given Google Photos user has, irrespective of which album(s) they are in. However, when I use either the mediaItems.list or the mediaItems.search method, the pageSize parameter I include in the request is either being ignored or not fully fulfilled.
Details of mediaItems.list request
GET https://photoslibrary.googleapis.com/v1/mediaItems?pageSize=<###>
Details of mediaItems.search request
POST https://photoslibrary.googleapis.com/v1/mediaItems:search
BODY { 'pageSize': <###> }
I have made a simple implementation of these two requests here as an example for this question; it just requires a valid accessToken to use:
https://jsfiddle.net/zb2htog1/
Running this script with the following pageSize values against a Google Photos account with hundreds of photos and tens of albums consistently returns the same unexpected number of results for both methods:
Request pageSize | Returned media items count
1                | 1
25               | 9
50               | 17
100              | 34
I know that Google states the following for the pageSize parameter for both of these methods:
“Maximum number of media items to return in the response. Fewer media
items might be returned than the specified number. The default
pageSize is 25, the maximum is 100.”
I originally assumed that fewer media items might be returned because an account has fewer media items in total than the requested pageSize, or because a request with a pageToken has reached the end of a set of paged results. However, I am now wondering if this just means that results may vary in general.
Can anyone else confirm whether they have the same experience when using these methods without an album ID, for an account with a suitable number of photos to test this? Or am I perhaps constructing my requests in an incorrect fashion?

I experience something similar: I get back half of what I expect.
If I don't set the pageSize, I get back just 13; if I set it to 100, I get back 50.
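Whichever behaviour you see, the documented way to collect everything is to keep following nextPageToken until the response no longer contains one. A minimal TypeScript sketch (assuming a fetch-capable environment and a valid accessToken, as in the fiddle above):

async function fetchAllMediaItems(accessToken: string) {
  const items: unknown[] = [];
  let pageToken: string | undefined;
  do {
    const body: Record<string, unknown> = { pageSize: 100 };
    if (pageToken) body.pageToken = pageToken;
    const res = await fetch('https://photoslibrary.googleapis.com/v1/mediaItems:search', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${accessToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
    });
    const data = await res.json();
    items.push(...(data.mediaItems ?? []));  // pages may be short or even empty
    pageToken = data.nextPageToken;          // absent on the last page
  } while (pageToken);
  return items;
}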

Related

How to request all chats/groups of the user through Telegram Database Library(TDLib) for Node.js

The official example from Telegram explains that in order to use the getChats() command, one needs to set two parameters, 'offset_order' and 'offset_chat_id'.
I'm using this node.js wrapper for the TDLib.
So when I use getChats() with the following params:
'offset_order': '9223372036854775807',
'offset_chat_id': 0,
'limit': 100
just like it is explained in the official docs:
For example, to get a list of chats from the beginning, the
offset_order should be equal to 2^63 - 1
as a result I get 100 chats from the top of the user's list.
What I can't understand is how do I iterate through that list? How do I use the API pagination?
When I try to enter a legitimate chat_id from the middle of the first 100, I still get the same first 100, so it seems like it makes no difference.
If I change that offset_order to ANY other number, I get an empty list of chats in return...
Completely lost here, as every single example I found says the same thing as the official docs, i.e. how to get the first 100.
Had the same problem, tried different approaches and re-read the documentation for a long time, and here is a solution:
1. Do getChats as you do, with the '9223372036854775807' offset_order parameter.
2. Do a getChat request with the id of the last chat you got. It's an offline request, just to TDLib.
3. Here you get a chat object with a positions property; take a position from it, which looks like this:
positions: [
  {
    _: 'chatPosition',
    list: [Object],
    order: '6910658003385450706',
    is_pinned: false
  }
],
4. Next request: getChats again, using positions[0].order from step (3) as offset_order.
5. Go to step (2) if there are more chats.
It wasn't easy to come to this, so I'd be glad if it helps anybody who came from Google like me :)
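Roughly, that loop looks like this in code (a TypeScript sketch assuming a client object with an invoke() method that sends raw TDLib requests, as the common Node.js wrappers expose; the parameter names follow the older getChats signature used in the question):

async function getAllChatIds(client: { invoke(query: object): Promise<any> }) {
  const allIds: number[] = [];
  let offsetOrder = '9223372036854775807'; // 2^63 - 1: start from the top of the list
  let offsetChatId = 0;
  while (true) {
    // Steps (1)/(4): fetch a page of up to 100 chats below the current offset
    const { chat_ids } = await client.invoke({
      _: 'getChats',
      offset_order: offsetOrder,
      offset_chat_id: offsetChatId,
      limit: 100,
    });
    if (!chat_ids || chat_ids.length === 0) break; // step (5): nothing left
    allIds.push(...chat_ids);
    // Steps (2)+(3): fetch the last chat (offline request) and read its position order
    const lastId = chat_ids[chat_ids.length - 1];
    const chat = await client.invoke({ _: 'getChat', chat_id: lastId });
    offsetOrder = chat.positions[0].order;
    offsetChatId = lastId;
  }
  return allIds;
}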

twitter api count more than 100, using twitter search api

I want to search tweets related to 'data' and get a count of more than 100.
This is my Python code:

from twython import Twython

twitter = Twython(app_key=APP_KEY, app_secret=APP_SECRET)
f = open('tweets.txt', 'w')  # output file for the results (f was not defined in the original snippet)

for status in twitter.search(q='"data"', count=10000)["statuses"]:
    user = status["user"]["screen_name"].encode('utf-8')
    text = status["text"]
    data = "{0} {1} {2}".format(user, text, '\n\n')
    print(data)
    f.writelines(data)
So what you're trying to do uses the Twitter API. Specifically the GET search/tweets endpoint.
In the docs for this endpoint:
https://dev.twitter.com/rest/reference/get/search/tweets
We can see that count has a maximum value of 100:
So even though you specify 10000, it only returns 100 because that's the max.
I've not tried either, but you can likely use the until or max_id parameters also mentioned in the docs to get more results/the next 100 results.
Keep in mind that "the search index has a 7-day limit. In other words, no tweets will be found for a date older than one week" - the docs.
Hope this helps!
You can use the field next_token of the response to get more tweets.
Refer to these articles:
https://lixinjack.com/how-to-collect-more-than-100-tweets-when-using-twitter-api-v2/
https://developer.twitter.com/en/docs/twitter-api/tweets/search/integrate/paginate
The max_id parameter is the key and it is further explained here:
To use max_id correctly, an application’s first request to a timeline
endpoint should only specify a count. When processing this and
subsequent responses, keep track of the lowest ID received. This ID
should be passed as the value of the max_id parameter for the next
request, which will only return Tweets with IDs lower than or equal to
the value of the max_id parameter.
https://developer.twitter.com/en/docs/tweets/timelines/guides/working-with-timelines
In other words, using the lowest id retrieved from a search, you can access the older tweets. As mentioned by Tyler, the non-commercial version is limited to 7 days, but the commercial version can search up to 30 days.
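As a sketch of that loop (TypeScript with fetch against the standard v1.1 search endpoint rather than Twython; the bearer token is a placeholder), track the lowest id_str you have seen and pass it back as max_id on the next request:

async function searchOlderTweets(query: string, bearerToken: string, pages = 5) {
  const seen = new Set<string>();
  const tweets: any[] = [];
  let maxId: string | undefined;
  for (let page = 0; page < pages; page++) {
    const params = new URLSearchParams({ q: query, count: '100' });
    if (maxId) params.set('max_id', maxId); // returns tweets with ids <= max_id
    const res = await fetch(`https://api.twitter.com/1.1/search/tweets.json?${params}`, {
      headers: { Authorization: `Bearer ${bearerToken}` },
    });
    const { statuses = [] } = await res.json();
    if (statuses.length === 0) break;
    for (const status of statuses) {
      if (!seen.has(status.id_str)) { // the tweet equal to max_id comes back again, so skip duplicates
        seen.add(status.id_str);
        tweets.push(status);
      }
    }
    maxId = statuses[statuses.length - 1].id_str; // lowest id in this page
  }
  return tweets;
}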

How Do I Get Additional Results from EnvelopesAPI.ListStatusChanges

I'm trying to practice defensive programming. Following the advice from the documentation, I want to poll using the API, passing in a value 3 minutes before the last time I polled. Considering I could get a resultSetSize less than the totalSetSize, I'd like to ask for the next set of results starting at the next result.
So, as an example, I request the following (using the REST API explorer):
GET https://demo.docusign.net/restapi/v2/accounts/#####/envelopes?count=2&from_date=2017-01-01&from_to_status=changed HTTP/1.1
(note the count = 2)
This returns:
Object
resultSetSize: "2"
totalSetSize: "8"
startPosition: "0"
endPosition: "1"
nextUri: "/accounts/#####/envelopes?start_position=2&count=2&from_date=1%2f1%2f2017+12%3a00%3a00+AM&from_to_status=changed"
previousUri: ""
envelopes: Array [2]
Ok, great, exactly as I expect. Now, I want to get the second "page" of results. I add a start_position of 2, right? (Since the end position is 1, I'd expect to get startPosition 2 and endPosition 3 to be returned.)
GET https://demo.docusign.net/restapi/v2/accounts/#####/envelopes?count=2&from_date=2017-01-01&from_to_status=changed&start_position=2 HTTP/1.1
No dice... 400 Bad Request:
Object
errorCode: "INVALID_REQUEST_PARAMETER"
message: "The request contained at least one invalid parameter. Query parameter 'count' was not a positive integer."
The count parameter is a positive integer...
Please, someone tell me what I'm doing wrong. I would like to request as many envelopes as the API will return at a time, and if there are more, repeat until all envelopes have been retrieved, but that "count" error is concerning.
From the documentation:
The start_position parameter is reserved for DocuSign use only.
It looks like pagination is not supported with the listStatusChanges API.
If you call the nextUri address, what happens? You will need to prepend your base URL.
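For example, a minimal TypeScript sketch (the base URL, account id and auth header are placeholders) that keeps following nextUri until it comes back empty:

async function listAllEnvelopes(authHeader: string, accountId: string) {
  const baseUrl = 'https://demo.docusign.net/restapi/v2'; // nextUri is relative to this
  const envelopes: any[] = [];
  let uri: string | undefined =
    `/accounts/${accountId}/envelopes?count=2&from_date=2017-01-01&from_to_status=changed`;
  while (uri) {
    const res = await fetch(baseUrl + uri, { headers: { Authorization: authHeader } });
    const page = await res.json();
    envelopes.push(...(page.envelopes ?? []));
    uri = page.nextUri || undefined; // empty string when there are no more pages
  }
  return envelopes;
}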

How to modify Server side paging size

I have an OData Web API service using .NET 4.5. It has a WebApi controller derived from EntitySetController:
public class WorkItemsController : EntitySetController<WorkItem, string>
{
    [Queryable(PageSize=100)]
    public override IQueryable<WorkItem> Get()
    {
        // go to AWS DynamoDb, get the workitems and then return
    }
}
As you can see, I set the server-side page size to 100 by default. Later I realized that I need to increase the size programmatically inside the Get() function. Does anyone know how to do it?
If you want to know the reason, here is why:
AWS DynamoDb doesn't support $skip or $top queries. Each time a client wants to get a collection of work items, I need to get all work items from DynamoDb. When the number is big, it takes a very long time if each time we only return 100 items back to the user. So my strategy is to double/triple the number of work items we return to the user each time, so the user will get 100, 200, 400, 800 work items with consecutive requests. Assuming there are 1500 work items in DynamoDb, I will query only 4 times to return all of them back to the user. If we keep a constant page size, like 100, I need to query 15 times.
You can accept an ODataQueryOptions parameter in your method and apply it yourself with whatever page size you want.
public IQueryable Get(ODataQueryOptions queryOptions)
{
    var settings = new ODataQuerySettings { PageSize = 100 };
    var result = GetResult();
    return queryOptions.ApplyTo(result, settings);
}
This is exactly the issue that LINQ2DynamoDB addresses. To support $skip and $top (that is, Enumerable.Skip() and Enumerable.Take()), it caches the results returned by DynamoDb in ElastiCache, so server-side paging works much more efficiently and the number of read operations is greatly reduced.
Moreover, LINQ2DynamoDB supports OData automatically, so maybe you don't even need to write any WebApi controllers.
Why not try it? :)

Is there any way to show more than 20 photos from the Instagram API?

I'm trying to display a feed of more than 20 photos on a website like this one:
http://snap20.com.br/instagram/
Is there any way to do this?
Simple. Just append &count=-1 at the back of your api call.
For instance:
https://api.instagram.com/v1/tags/YOURTAG/media/recent?access_token=YOURACCESSTOKEN&count=-1
* Update April 2014 (credits: @user1406691): count=-1 is no longer available. Response:
{"meta":{"error_type":"APIInvalidParametersError","code":400,
"error_message":"Count must be larger than zero."}}
You may wish to use this instead:
https://api.instagram.com/v1/tags/YOURTAG/media/recent?access_token=YOURACCESSTOKEN&count=35
There's also another method via rss + db but it's longer, although it's not limited to 30 calls / hour.
Actually, there is a way to get the next 20 pictures, and after that the next 20, and so on...
In the JSON response there is a "pagination" object:
"pagination":{
"next_max_tag_id":"1411892342253728",
"deprecation_warning":"next_max_id and min_id are deprecated for this endpoint; use min_tag_id and max_tag_id instead",
"next_max_id":"1411892342253728",
"next_min_id":"1414849145899763",
"min_tag_id":"1414849145899763",
"next_url":"https:\/\/api.instagram.com\/v1\/tags\/lemonbarclub\/media\/recent?client_id=xxxxxxxxxxxxxxxxxx\u0026max_tag_id=1411892342253728"
}
This is the information for a specific API call, and the "next_url" value is the URL to get the next 20 pictures, so just take that URL and call it for the next 20 pictures...
for more information about the Instagram API check this out: https://medium.com/@KevinMcAlear/getting-friendly-with-instagrams-api-abe3b929bc52
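In code, that loop looks roughly like this (a TypeScript sketch against the old v1 endpoint, which has since been retired; the tag and access token are placeholders):

async function fetchAllTagMedia(tag: string, accessToken: string, maxPages = 10) {
  const media: any[] = [];
  let url: string | undefined =
    `https://api.instagram.com/v1/tags/${tag}/media/recent?access_token=${accessToken}`;
  for (let page = 0; url && page < maxPages; page++) {
    const res = await fetch(url);
    const body = await res.json();
    media.push(...(body.data ?? []));   // each page holds up to 20 items
    url = body.pagination?.next_url;    // undefined once there is nothing left
  }
  return media;
}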
Instagram has a 20-image limit on their API; check out this thread and my answer:
What is the maximum number of requests for Instagram?
Also, have a look at this link to bypass the pagination and display all results:
http://thegregthompson.com/displaying-instagram-images-ignoring-page-pagination/
