Collecting a neighborhood in Foursquare

I'm making an app using the Foursquare API; with it I can get some places in a neighborhood.
I'm calling it like this:
https://api.foursquare.com/v2/venues/explore
?ll=40.7,-74
&limit=50
&venuePhotos=1
Using that call I can get 50 venues around my point, but how can I get additional venues beyond those, like a second page of results?

You can use the "offset" parameter to query for subsequent pages of results. For example, to get the second page, you would supply "offset=50" to indicate skipping the first 50 results.
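A minimal sketch of how that paging could look in Python with requests, assuming the usual client_id, client_secret and v parameters the v2 API expects (the credential values below are placeholders, and the groups/items shape is the explore endpoint's documented response format):

import requests

BASE = "https://api.foursquare.com/v2/venues/explore"
PAGE_SIZE = 50

def fetch_page(page):
    # offset = page * PAGE_SIZE skips the venues already fetched
    params = {
        "ll": "40.7,-74",
        "limit": PAGE_SIZE,
        "offset": page * PAGE_SIZE,
        "venuePhotos": 1,
        "client_id": "YOUR_CLIENT_ID",          # placeholder
        "client_secret": "YOUR_CLIENT_SECRET",  # placeholder
        "v": "20210826",
    }
    resp = requests.get(BASE, params=params)
    resp.raise_for_status()
    return resp.json()["response"]["groups"][0]["items"]

first_page = fetch_page(0)   # venues 1-50
second_page = fetch_page(1)  # venues 51-100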

Related

FourSquare API returns venues that I have not called for

We are interested in producing a list of venues in an app that we are building. We call the FourSquare API for this, and we want it to return only venues that fit the parameters we specify in the call. We have specified that we want venues in the category "gay bar", within 100 meters of the coordinates that the call is made from.
When we make the call from coordinates in Oslo, the API returns one gay bar, but does not return two other gay bars that are within 100m of the coordinates (these are in FourSquare's database). Instead, the API returns a set of places that are not in the category we have specified (offices, convention centres, regular pubs, etc). We are obviously not interested in these venues - we are interested in the two venues that the API does not return.
The URL for the call is below. Please, if you can help me understand how to correct this, I would be very grateful.
Ben
https://api.foursquare.com/v2/venues/search?categoryIds=4bf58dd8d48988d1d8941735&ll=59.915286,10.740464&radius=100&limit=15&v=20210826&intent=browse
The documentation suggests that the parameter name is categoryId (not plural). Try that.
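Assuming that is the only problem, the corrected call would be the same URL with the parameter renamed:
https://api.foursquare.com/v2/venues/search?categoryId=4bf58dd8d48988d1d8941735&ll=59.915286,10.740464&radius=100&limit=15&v=20210826&intent=browse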

Cloudant/Couch db pagination in search API - How to skip n number of records

I am building a typical pagination that allows the user to click on a particular page number and view the results (similar to the Google search result view). I am using the Cloudant search API for this. The Cloudant search API provides the limit option but no skip option. How can I skip n number of results if the user is on page 1 and clicks on page 4?
I can see that the pagination is implemented using bookmarks. Does it mean that I need to first get the bookmark for page 4 by sending 3 additional requests one after another to the search API?
There are a couple of different ways of handling this - one is the one you already suggested, which is just to fetch the pages as needed to get the bookmarks. I'm not sure there are many alternatives for search results where we can't pre-calculate the results.
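A rough Python sketch of that bookmark walk, where each response's bookmark is fed into the next request (the account, database, design document and index names are placeholders):

import requests

SEARCH_URL = "https://ACCOUNT.cloudant.com/mydb/_design/mydesign/_search/myindex"
AUTH = ("user", "password")   # placeholder credentials
PAGE_SIZE = 20

def fetch_page(bookmark=None):
    params = {"q": "type:product", "limit": PAGE_SIZE}
    if bookmark:
        params["bookmark"] = bookmark
    resp = requests.get(SEARCH_URL, params=params, auth=AUTH)
    resp.raise_for_status()
    body = resp.json()
    return body["rows"], body.get("bookmark")

def fetch_page_n(n):
    # Reaching page n means walking the bookmarks of pages 1..n-1 first.
    rows, bookmark = fetch_page()
    for _ in range(n - 1):
        rows, bookmark = fetch_page(bookmark)
    return rows

page_4 = fetch_page_n(4)  # three extra round trips to reach page 4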
Another alternative, and this depends a bit on the details of what you are trying to do, is to create a view containing the data and use the keys to narrow down the view to the results you need. View outputs support use of limit and skip which would enable you to implement pagination.
There's also a good example of pagination in the docs: http://docs.couchdb.org/en/2.1.0/ddocs/views/pagination.html
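As a sketch of the view approach, a query with limit and skip can jump straight to a page, at the cost of the server still reading through the skipped rows (names are placeholders again):

import requests

VIEW_URL = "https://ACCOUNT.cloudant.com/mydb/_design/mydesign/_view/by_date"
AUTH = ("user", "password")   # placeholder credentials
PAGE_SIZE = 20

def fetch_view_page(page):
    # skip drops the rows belonging to the earlier pages
    params = {"limit": PAGE_SIZE, "skip": (page - 1) * PAGE_SIZE}
    resp = requests.get(VIEW_URL, params=params, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["rows"]

page_4 = fetch_view_page(4)  # rows 61-80 in a single request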

GitHub Search API only returns 30 results

https://api.github.com/search/issues?q=stress+test+label:bug+language:python+state:closed
The above query is supposed to return 76 results, but when I run it, it only returns 30. I guess GitHub returns results in portions when there are more than 30. Any idea how I can get the rest of the results?
You need to use the page parameter, e.g. for the next 30 results use page=2:
https://api.github.com/search/issues?q=stress+test+label:bug+language:python+state:closed&page=2
You can also use the per_page parameter to change the default page size of 30. It supports a maximum of 100. Like this:
https://api.github.com/search/issues?q=stress+test+label:bug+language:python+state:closed&per_page=100
More detail can be found here
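Putting page and per_page together, a small Python sketch that keeps requesting pages until a short page comes back and so collects all 76 results (unauthenticated search requests are rate-limited, so you may need to add a token in practice):

import requests

URL = "https://api.github.com/search/issues"
QUERY = "stress test label:bug language:python state:closed"
PER_PAGE = 100

def fetch_all_issues():
    items, page = [], 1
    while True:
        resp = requests.get(URL, params={"q": QUERY, "per_page": PER_PAGE, "page": page})
        resp.raise_for_status()
        batch = resp.json()["items"]
        items.extend(batch)
        if len(batch) < PER_PAGE:   # last page reached
            break
        page += 1
    return items

print(len(fetch_all_issues()))   # all matching issues, not just the first 30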
The Problem: the GitHub API response doesn't contain all the relevant data.
Solution: the server limits how many items the client gets and splits them into pages (pagination). You should explicitly specify in your request how many items you'd like to receive per page, using GitHub's pagination parameters:
?page=1&per_page=<numberOfItemsYouSpecify>
For example, I'd like to get info on all the collaborators in my private repo, so I perform a curl request to GitHub containing my username, authentication token, organization and repository name, plus the pagination parameters:
curl -u johnDoe:abc123$%^ "https://api.github.com/repos/MyOrganizationName/MyAwesomeRepo/collaborators?page=1&per_page=1000"
Explanation:
What is pagination: pagination is the process of splitting the contents of a website, or a section of it, into discrete pages. Users tend to get lost when there's a bunch of data, and splitting it into pages lets them concentrate on a manageable amount of content. A hierarchical, paginated structure also improves readability, pages load faster because each one carries less content, and each page gets a separate URL that is easy to refer to.
In this use case the GitHub API splits the result into pages of 30 items per response by default, depending on the endpoint being requested.
Github reference:
Different API calls respond with different defaults. For example, a call to List public repositories provides paginated items in sets of 30, whereas a call to the GitHub Search API provides items in sets of 100.
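Rather than guessing where the pages end, you can also follow the Link header GitHub attaches to paginated responses; requests already parses it into response.links. A hedged sketch with the same search query:

import requests

url = "https://api.github.com/search/issues"
params = {"q": "stress test label:bug language:python state:closed", "per_page": 100}

items = []
while url:
    resp = requests.get(url, params=params)
    resp.raise_for_status()
    items.extend(resp.json()["items"])
    # The Link header carries a ready-made URL for the next page, if there is one.
    url = resp.links.get("next", {}).get("url")
    params = None   # the next URL already contains the query string

print(len(items))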

Instagram: Get photos from a tag after a specified photo

I'm working on an Instagram scraper for something and I'm trying to figure out if it's possible to get all photos for a tag that have an id or timestamp later than the last one I have.
The Instagram API docs are useless in that they don't have any real info on pagination (which I presume I'll have to abuse).
Does anyone have any ideas?
I've been slogging through the Instagram API for the last couple of days so here's my 2 cents worth:
As far as I can see, if you call the API with /tags/tag-name/media/recent it only returns a list of items. If the amount exceeds about 25 you have to make another request with the pagination value returned in the previous request.
In order to gain some control I am initially iterating through all images and storing the results (just the URL, not the actual image) in a database. Now I can manipulate them however I want. When I feel like updating (I'm doing it manually now, but it could be a cron job or use the real-time API) I re-read all the images, compare them to what I have in my DB and add any new ones. My app then reads the URL and info from my DB (which, by the way, is a heck of a lot faster than going through the Instagram API, which will only return about 25 images per request regardless of any 'count' parameter value you put in the request URL) and displays it.
I am developing this for a client who is afraid of people posting nsfw or whatever pics using their dedicated hashtag (for a contest) - with the above set up I can offer them an interface where they can check and mark images that are then displayed in the app.
One thing to watch out for is when a user deletes his picture; you will have to find a way to check for this. Currently (since I'm lazy) I load all images and use jquery to check for an error loading the image. If there is one I delete the image from the DB (via ajax).
I'm not sure the pagination is going to help you: as far as I can see the pagination response has no relation to the IDs of the actual image objects on each page - so theoretically a pagination id that jumps to a certain page (i.e. date) might not work tomorrow if enough images have been deleted in the meantime.
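For reference, a rough sketch of that loop as it worked against the old /tags/tag-name/media/recent endpoint: each response carried a pagination object whose next_url (built from max_tag_id) pointed at the next batch. The endpoint has since been retired, and the tag and token below are placeholders:

import requests

TAG = "sometag"                 # placeholder tag
TOKEN = "YOUR_ACCESS_TOKEN"     # placeholder token
url = "https://api.instagram.com/v1/tags/%s/media/recent" % TAG
params = {"access_token": TOKEN}

image_urls = []
while url:
    resp = requests.get(url, params=params)
    resp.raise_for_status()
    body = resp.json()
    image_urls.extend(item["images"]["standard_resolution"]["url"]
                      for item in body["data"] if item["type"] == "image")
    # Keep following pagination.next_url until it disappears.
    url = body.get("pagination", {}).get("next_url")
    params = None   # next_url already includes the token and max_tag_id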
To get all images instead of the latest 20, just append &count=-1 to your API call - it's that simple.
In either case, there is a timestamp on each JSON object - or, if you prefer, you can use max_tag_id.
Check out my post here: Is there any way to show more than 20 photos of the Instagram API?
* Update April 2014: count=-1 is no longer available.

Facebook Place Search get no results of new places

After I create a new place with the Facebook app, I use the Graph API to search for the place at the exact same location. However, I cannot get the place I just created even if I increase the distance to 1000 ft.
My search URL is as follows:
https://graph.facebook.com/search?type=place&center=25.091075, 121.55983449999997&distance=100&limit=100&offset=0&access_token=XXXX
In addition, if I add the q="My Place" parameter, I can get the place.
Is it possible to get the new place information without the q=My Place parameter?
Most likely, it takes a while for Facebook's search service to index the new place. I would be very surprised if you're still having trouble with this after a day or so.
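If you want to handle that indexing delay in code, one hedged option is simply to poll the same place search a few times with a pause until the new place shows up (the place ID and token below are placeholders):

import time
import requests

SEARCH_URL = "https://graph.facebook.com/search"
PARAMS = {
    "type": "place",
    "center": "25.091075,121.55983449999997",
    "distance": 100,
    "limit": 100,
    "access_token": "XXXX",   # placeholder token
}

def wait_for_place(place_id, attempts=10, delay=60):
    # Retry the search until the freshly created place has been indexed.
    for _ in range(attempts):
        resp = requests.get(SEARCH_URL, params=PARAMS)
        resp.raise_for_status()
        if any(p["id"] == place_id for p in resp.json()["data"]):
            return True
        time.sleep(delay)
    return False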
