I'm trying to get paginated results from Youtube Data API v3,
https://www.googleapis.com/youtube/v3/playlistItems?part=snippet&playlistId=UUCj956IF62FbT7Gouszaj9w&key={YOUR_API_KEY}
Now, in the response there's the handy parameter nextPageToken that points to the next page of the result set:
"nextPageToken": "CAUQAA"
Is there any way to jump to a specific page of the results, say the 6th?
Apologies, but no, there is not. This is intentional. You can only move forward one page of results at a time.
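If you really need the 6th page, the only way is to walk forward, passing each response's nextPageToken back as the pageToken parameter of the next request. A minimal sketch in Python (assuming the requests library; the key and playlist ID are the placeholders from your example):

    import requests

    API_KEY = "YOUR_API_KEY"      # placeholder
    URL = "https://www.googleapis.com/youtube/v3/playlistItems"

    params = {
        "part": "snippet",
        "playlistId": "UUCj956IF62FbT7Gouszaj9w",
        "maxResults": 50,
        "key": API_KEY,
    }

    page = 0
    token = None
    while True:
        if token:
            params["pageToken"] = token        # token from the previous response
        resp = requests.get(URL, params=params).json()
        page += 1
        # ... process resp["items"] for this page ...
        token = resp.get("nextPageToken")
        if not token or page == 6:             # stop once the 6th page has been fetched
            break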
I am using the marketing API
I have an issue with pagination: it seems like the API ignores start and count.
I am using the start and count query parameters, and no matter what number I put in count, I get a response with all the results.
I followed this document for pagination:
https://learn.microsoft.com/en-us/linkedin/shared/api-guide/concepts/pagination
I use this endpoint:
https://api.linkedin.com/v2/adAnalyticsV2
my parameters include:
start=0&count=10&q=statistics&timeGranularity=MONTHLY ....
In the response I received all 821 elements with no pagination applied, instead of 10 per page.
Going by the logic in the docs, the values I pass for start and count have no effect on the results or the query.
what am I doing wrong?
I don't want to use it without pagination and find out later that I missed records.
Thanks,
Roiy
I have used pagination to retrieve an organization's posts, and it's working for me.
my api call is
api_url = "https://api.linkedin.com/v2/shares?q=owners&owners=urn%3Ali%3Aorganization%3A123456&sharesPerOwner=1000&count=100&start=0"
One important thing I noticed is that this endpoint does not require the Restli 2.0 protocol version (don't pass that header).
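For what it's worth, here is roughly the loop I use to walk the pages (a sketch using Python's requests library; the token is a placeholder, the organization URN is the one from my URL above, and the elements field is what my responses contain):

    import requests

    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"   # placeholder
    headers = {"Authorization": "Bearer " + ACCESS_TOKEN}   # note: no Restli 2.0 version header

    start, count = 0, 100
    shares = []
    while True:
        params = {
            "q": "owners",
            "owners": "urn:li:organization:123456",
            "sharesPerOwner": 1000,
            "start": start,
            "count": count,
        }
        data = requests.get("https://api.linkedin.com/v2/shares",
                            headers=headers, params=params).json()
        elements = data.get("elements", [])
        shares.extend(elements)
        if len(elements) < count:    # a short page means we've reached the end
            break
        start += count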
I have troubles paging through message search results with the rest API.
I have a request looking like this:
outlook.office.com/api/v2.0/me/messages/?$search="deni"
the request returns a proper result and also includes a 'next page' link looking like this:
"@odata.nextLink": "https://outlook.office.com/api/v2.0/me/messages/?%24search=%22deni%22&%24top=10&%24skiptoken=aT01NjMzYWQ3OS02MmJjLTQ5ZDEtODg4ZC0zYTgwNDlhOTY3Nzkmcz0xMA%3d%3d"
I guess this link is URL encoded so I URL decode it to get this:
outlook.office.com/api/v2.0/me/messages/?$search="deni"&$top=10&$skiptoken=aT01NjMzYWQ3OS02MmJjLTQ5ZDEtODg4ZC0zYTgwNDlhOTY3Nzkmcz0xMA==
However, when I try to make a request with the next link, I get 405 Method Not Allowed with the following error:
"The OData request is not supported."
I've tried it in the sandbox as well (oauthplay.azurewebsites.net) with the same result. What could I be doing wrong? What is the right way to page through the search results?
I know that there is a limit of 250 messages that could be searched, but this is not the case here. I have 10 and I am trying to read the next 10.
Of course I have tried paging with the $skip and $top parameters, but $skip is not supported together with $search.
I can't seem to find a definitive answer in the documentation on how to page through search results, or whether it is possible at all.
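For reference, this is roughly how I fetch the first page and then request the next link (a minimal sketch using Python's requests library; the token is a placeholder):

    import requests

    TOKEN = "YOUR_OAUTH_TOKEN"   # placeholder
    headers = {"Authorization": "Bearer " + TOKEN, "Accept": "application/json"}

    url = 'https://outlook.office.com/api/v2.0/me/messages/?$search="deni"'
    first = requests.get(url, headers=headers).json()

    next_link = first.get("@odata.nextLink")
    if next_link:
        # I have tried the link both as returned (still URL-encoded) and after
        # decoding it; both give me the 405 described above
        second = requests.get(next_link, headers=headers).json()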
Thanks to anyone willing to help.
I am using the google API and I wanted to do a youtube search.
I request a search with https://www.googleapis.com/youtube/v3/search?part=snippet&q=[word that I searched for]&key=my_key to get the items array. I noticed though that the items array does not return the results in the same order as when you search YouTube yourself.
Ex: if I search for the word 'bad', the first result in the items array is "Young Lex ft AwKarin - BAD ( Official Music Video Clip )", while if I search as a YouTube user it's "Michael Jackson - Bad (Shortened Version)".
Perhaps they are not ordered and I have to order them using a property or something I have missed.
So my question is: how can I make the first item in the items array be the first result that would have appeared in the YouTube search?
edit: I have tried adding chart=MostPopular while leaving videoCategoryId at its default, but it still showed the same first result.
Well, it seems that YouTube just behaves that way; it is its natural behavior. The only thing you can do to make the API match the YouTube site itself is to pass the same filter to both, for example upload date or viewCount.
Here is the example request for viewCount.
https://developers.google.com/apis-explorer/#p/youtube/v3/youtube.search.list?part=snippet&maxResults=10&order=viewCount&q=spider&_h=1&
and this is for the YouTube site
https://www.youtube.com/results?q=spider&sp=CAM%253D
Hope this slight information helps you.
https://api.github.com/search/issues?q=stress+test+label:bug+language:python+state:closed
The above query is supposed to return 76 results, but when I run it, it only returns 30. I guess GitHub returns results in portions when there are more than 30. Any idea how I can get the rest of the results?
You need to use the page parameter, e.g. for the next 30 results use page=2:
https://api.github.com/search/issues?q=stress+test+label:bug+language:python+state:closed&page=2
You can also use the per_page parameter to change the default page size of 30; it supports a maximum of 100. Like this:
https://api.github.com/search/issues?q=stress+test+label:bug+language:python+state:closed&per_page=100
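Putting both parameters together, a quick sketch with Python's requests library that keeps requesting pages until everything has been collected (it relies on the total_count and items fields of the search response):

    import requests

    url = "https://api.github.com/search/issues"
    params = {
        "q": "stress test label:bug language:python state:closed",
        "per_page": 100,
        "page": 1,
    }

    items = []
    while True:
        data = requests.get(url, params=params).json()
        batch = data.get("items", [])
        items.extend(batch)
        if not batch or len(items) >= data.get("total_count", 0):
            break
        params["page"] += 1

    print(len(items))   # 76 for the query above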
More detail can be found here
The Problem: the GitHub API response doesn't contain all the relevant data.
Solution: the server limits the number of items returned and splits them into pages (pagination). You should explicitly specify in your request how many items you'd like to receive per page, using the GitHub pagination query string:
?page=1&per_page=<numberOfItemsYouSpecify>
For example: I'd like to get all the collaborator info for my private repo. I perform a curl request to GitHub containing my username, authentication token, organization and repository name, and the API call with the pagination parameters.
curl -u johnDoe:abc123$%^ "https://api.github.com/repos/MyOrganizationName/MyAwesomeRepo/collaborators?page=1&per_page=1000"
Explanation:
What is Pagination: pagination is the process of splitting the contents or a section of a website into discrete pages. Users tend to get lost when there is a large amount of data, and with pagination they can concentrate on a particular slice of the content. A hierarchical, paginated structure improves the readability of the content; pages load faster because each one holds less content, and every page has a separate URL that is easy to refer to.
In this use case the GitHub API splits the result into 30 items per response, depending on the request.
Github reference:
Different API calls respond with different defaults. For example, a call to List public repositories provides paginated items in sets of 30, whereas a call to the GitHub Search API provides items in sets of 100.
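The same call as a short Python sketch (requests library; the credentials, organization and repository names are the placeholders from the curl example), looping until the server returns an empty page:

    import requests

    auth = ("johnDoe", "abc123$%^")   # placeholder credentials from the curl example
    url = "https://api.github.com/repos/MyOrganizationName/MyAwesomeRepo/collaborators"

    collaborators = []
    page = 1
    while True:
        batch = requests.get(url, auth=auth,
                             params={"page": page, "per_page": 100}).json()
        if not batch:          # an empty page means we have walked past the last one
            break
        collaborators.extend(batch)
        page += 1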
I'm working on an instagram scraper for something and I'm trying to figure out if it's possible to get all photos for a tag that have an id or timestamp later than the last one I have.
The instagram API docs are useless in that they don't have any real info on pagination (which I presume I'll have to abuse).
Does anyone have any ideas?
I've been slogging through the Instagram API for the last couple of days so here's my 2 cents worth:
As far as I can see, if you call the API with /tags/tag-name/media/recent it only returns a list of items. If the amount exceeds about 25, you have to make another request with the pagination value returned in the previous request.
In order to gain some control I am initially iterating through all images and storing the results (just the URL not the actual image) to a database. Now I can manipulate however I want. When I feel like updating (I'm doing it manually now but could be a cron job or use the real-time api) I re-read all the images, compare to what I have in my DB and add possible new images. My app then reads out the url and info from my DB (which btw is a heck of a lot faster than going through the instagram api, which will only return about 25 images per request - regardless of any 'count' parameter value you put in the request url) and displays it.
I am developing this for a client who is afraid of people posting nsfw or whatever pics using their dedicated hashtag (for a contest) - with the above set up I can offer them an interface where they can check and mark images that are then displayed in the app.
One thing to watch out for is when a user deletes his picture; you will have to find a way to check for this. Currently (since I'm lazy) I load all images and use jquery to check for an error loading the image. If there is one I delete the image from the DB (via ajax).
I'm not sure the pagination is going to help you: as far as I can see, the pagination response has no relation to the IDs of the actual image objects on each page, so theoretically a pagination ID that jumps to a certain page (i.e. date) might not work tomorrow if enough images have been deleted in the meantime.
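Roughly, my collection loop looks like this (just a sketch; it assumes the tags/{tag-name}/media/recent endpoint and the pagination.next_url and images fields I see in my responses; error handling and the DB write are omitted):

    import requests

    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"   # placeholder
    url = "https://api.instagram.com/v1/tags/your-tag/media/recent"
    params = {"access_token": ACCESS_TOKEN}

    media = []
    while url:
        data = requests.get(url, params=params).json()
        for item in data.get("data", []):
            media.append({
                "id": item["id"],
                "url": item["images"]["standard_resolution"]["url"],
                "created": item["created_time"],
            })
        # follow the pagination link from the previous response, if there is one
        url = data.get("pagination", {}).get("next_url")
        params = {}   # next_url already carries the query string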
To get all images instead of the latest 20, just append &count=-1 to your API call; it's that simple.
In either case, there is a timestamp on each json object - or if you prefer, you can use max_tag_id
Check out my post here: Is there any way to show more than 20 photos of the Instagram API?
* Update April 2014: count=-1 is no longer available.