There is a discrepancy between searching via the API endpoint provided by Flickr and searching on Flickr itself via the text box. When searching for certain terms like "Jerry Brown" using flickr.photos.search, I get a different set of results than when searching directly on flickr.com.
No additional parameters are passed to the API endpoint apart from *per_page* and the *page* option, which defaults to 1.
This is why they differ:
Flickr search (flickr.com) displays the search results sorted by relevance.
If no sort argument is specified in the flickr.photos.search API call, it defaults to most recent.
Example
http://www.flickr.com/search/?q=cat&f=hp
http://www.flickr.com/services/api/explore/flickr.photos.search
Fill in the fields:
text = cat
sort = relevance
per_page = 1
You can verify the result by appropriately constructing the image URL as per http://farm{farm-id}.static.flickr.com/{server-id}/{id}_{secret}.jpg
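As a rough sketch of the same check against the REST endpoint (YOUR_API_KEY is a placeholder; drop the sort parameter to see the ordering fall back to most recent):

<?php
// Sketch: flickr.photos.search with an explicit relevance sort, then rebuild the photo URL.
$params = http_build_query([
    'method'         => 'flickr.photos.search',
    'api_key'        => 'YOUR_API_KEY',      // placeholder
    'text'           => 'cat',
    'sort'           => 'relevance',         // omit this and results come back by most recent
    'per_page'       => 1,
    'format'         => 'json',
    'nojsoncallback' => 1,
]);
$response = json_decode(file_get_contents('https://api.flickr.com/services/rest/?' . $params), true);
$photo = $response['photos']['photo'][0];

// http://farm{farm-id}.static.flickr.com/{server-id}/{id}_{secret}.jpg
printf("http://farm%s.static.flickr.com/%s/%s_%s.jpg\n",
    $photo['farm'], $photo['server'], $photo['id'], $photo['secret']);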
When using https://nominatim.openstreetmap.org/search?format=xml&q=bahamas%20ponte%20nova&addressdetails=1&limit=3
I get exactly the result that I want.
But if I remove the name of my city, "Ponte Nova", the results contain references from Spain and nothing from my city.
How do I get the results to focus on a given radius?
Please see the Nominatim API documentation.
According to the Result Limitation section, you can use viewbox=<x1>,<y1>,<x2>,<y2> and bounded=1 to restrict the search results to a specific area.
Example: https://nominatim.openstreetmap.org/search?format=xml&q=bahamas&addressdetails=1&limit=3&viewbox=-43.00804%2C-20.36925%2C-42.73699%2C-20.44969&bounded=1
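A rough sketch of the same call from PHP (the viewbox values are the ones from the example above, i.e. a box around Ponte Nova; the User-Agent string is a placeholder that your app should set per the Nominatim usage policy):

<?php
// Sketch: restrict a Nominatim search to a bounding box and only return results inside it.
$params = http_build_query([
    'format'         => 'xml',
    'q'              => 'bahamas',
    'addressdetails' => 1,
    'limit'          => 3,
    'viewbox'        => '-43.00804,-20.36925,-42.73699,-20.44969', // x1,y1,x2,y2 (lon/lat)
    'bounded'        => 1,
]);
// Nominatim asks for an identifying User-Agent; "my-app/1.0" is a placeholder.
$context = stream_context_create(['http' => ['header' => "User-Agent: my-app/1.0\r\n"]]);
echo file_get_contents('https://nominatim.openstreetmap.org/search?' . $params, false, $context);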
I am building typical pagination that allows the user to click on a particular page number and view the results (similar to the Google search results view). I am using the Cloudant Search API for this. The Cloudant Search API provides a limit option but no skip option. How can I skip n results if the user is on page 1 and clicks on page 4?
I can see that the pagination is implemented using bookmarks. Does that mean I need to first get the bookmark for page 4 by sending 3 additional requests, one after another, to the Search API?
There are a couple of different ways of handling this - one is the one you already suggested, which is just to fetch the pages as needed to get the bookmarks. I'm not sure there are many alternatives for search results where we can't pre-calculate the results.
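A rough sketch of that first approach, walking the bookmarks forward (the database URL and the _design/app/_search/ads index are made-up names):

<?php
// Sketch: walk Cloudant search bookmarks until the requested page is reached.
function fetchSearchPage($dbUrl, $query, $pageSize, $targetPage) {
    $bookmark = null;
    $page = null;
    for ($i = 1; $i <= $targetPage; $i++) {
        $params = ['q' => $query, 'limit' => $pageSize];
        if ($bookmark !== null) {
            $params['bookmark'] = $bookmark;   // continue from the previous page
        }
        $url = $dbUrl . '/_design/app/_search/ads?' . http_build_query($params);
        $page = json_decode(file_get_contents($url), true);
        $bookmark = $page['bookmark'];         // returned with every search response
    }
    return $page;  // rows for the requested page
}
// e.g. fetchSearchPage('https://ACCOUNT.cloudant.com/ads', 'title:bike', 10, 4);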
Another alternative, and this depends a bit on the details of what you are trying to do, is to create a view containing the data and use the keys to narrow down the view to the results you need. View outputs support use of limit and skip which would enable you to implement pagination.
There's also a good example of pagination in the docs: http://docs.couchdb.org/en/2.1.0/ddocs/views/pagination.html
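With a view, skip and limit map directly onto page number and page size; a sketch against a hypothetical _design/app/_view/by_date view:

<?php
// Sketch: jump straight to page 4, ten rows per page, with limit/skip on a view.
$dbUrl = 'https://ACCOUNT.cloudant.com/ads';   // placeholder account and database
$pageSize = 10;
$pageNumber = 4;
$params = http_build_query([
    'limit' => $pageSize,
    'skip'  => ($pageNumber - 1) * $pageSize,  // skip the first three pages
]);
$result = json_decode(file_get_contents($dbUrl . '/_design/app/_view/by_date?' . $params), true);
$rows = $result['rows'];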
We have a website where users put up ads for things they want to sell, with parameters such as price, location, title and description. These can then be searched via Sphinx, allowing users to specify a min and max price, a location with a search radius (using Google Maps), etc. Users can choose to save these searches and get emails when new ads appear that fit their search.

Herein lies the problem: we want to perform a reverse search every time an ad is posted. With the price, location, title and description as parameters, we want to search through all the saved searches and find the ones that would have matched the ad. The min and max price should just be handled in a query, I suppose, along with some quorum syntax to get all matches with at least 2, or maybe just 1, occurrence in the title/description. Our problem lies mostly in the geo-search: how do we find all saved searches whose search circles would include the newly posted location, without running a separate search for every saved search?
That is the main question; any comments on our suggested solutions to the other problems are also very welcome. Thank you in advance / Jenny
The standard 'geo-search' support in Sphinx should work just as well on a prospective index as in a normal retrospective search.
Having built a Sphinx 'index' of all the saved searches...
...you run a query using the 'ad' as the search query. Rather than filtering with a fixed radius, you use the radius from the attribute (i.e. the radius stored on the particular saved search). If using the API you can't use setFilterRange directly for this; you need setSelect to create a new virtual attribute:
// 1 if the new ad falls inside this saved search's stored radius, 0 otherwise
$cl->setSelect("*,IF(#geodist<radius,1,0) as myfilter");
$cl->setFilter('myfilter', array(1));
(And yes, the min/max price can be handled with normal filters too; just invert the logic relative to what you would use in a retrospective search.)
... the complication is in the 'full-text' query, if the saved search is anything more than a single keyword, but you appear to have already figured out that part.
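Pulling that together, a rough sketch with the Sphinx PHP client. The index name, the lat/lng/minprice/maxprice attribute names, the sample ad values, and the assumption that coordinates are stored in radians are all made up for illustration; the match mode is just a simple stand-in for whatever full-text handling you settle on.

<?php
// Sketch: one prospective query per new ad against an index of saved searches.
require('sphinxapi.php');

// Details of the newly posted ad (sample values).
$adLatRadians  = deg2rad(59.33);
$adLonRadians  = deg2rad(18.06);
$adPrice       = 250;
$adTitle       = 'Blue mountain bike';
$adDescription = 'Hardly used, 26 inch wheels';

$cl = new SphinxClient();
$cl->setServer('localhost', 9312);

// Distance from each saved search's stored point to the new ad's location.
$cl->setGeoAnchor('lat', 'lng', $adLatRadians, $adLonRadians);

// Keep only the searches whose own stored radius covers that distance.
$cl->setSelect("*,IF(#geodist<radius,1,0) as myfilter");
$cl->setFilter('myfilter', array(1));

// Inverted price logic: the ad's price must fall inside each search's min/max range.
$cl->setFilterRange('minprice', 0, $adPrice);
$cl->setFilterRange('maxprice', $adPrice, PHP_INT_MAX);

// Match the saved keyword searches against the ad's title and description.
$cl->setMatchMode(SPH_MATCH_ANY);
$result = $cl->query($adTitle . ' ' . $adDescription, 'saved_searches');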
I'm starting to use the Google Custom Search Engine in order to track the use over time of selected words in an online newspaper.
I see, for example, that my query reports a total of 22,000 retrieved articles, but when I try to retrieve results past index 100 I can't get anything.
I also tried searching directly on the Google web page, but after page 10 I can't go any further, so it only shows me the first 1000 results at most.
Is it possible to retrieve every single result, or can I only get a small portion of them?
Thanks
I'm making an app using the Foursquare API; with it I can get some places in a neighborhood.
I'm using a request like this:
https://api.foursquare.com/v2/venues/explore
?ll=40.7,-74
&limit=50
&venuePhotos=1
Using that request I can get 50 venues around my point, but how can I get more venues, like a second page of those results?
You can use the "offset" parameter to query for subsequent pages of results. For example, to get the second page, you would supply "offset=50" to indicate skipping the first 50 results.
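A rough sketch of that second-page request (client_id, client_secret and the v version date are the usual required parameters; the values here are placeholders):

<?php
// Sketch: fetch the second page of venues/explore by skipping the first 50 results.
$params = http_build_query([
    'client_id'     => 'YOUR_CLIENT_ID',
    'client_secret' => 'YOUR_CLIENT_SECRET',
    'v'             => '20140806',   // API version date
    'll'            => '40.7,-74',
    'limit'         => 50,
    'offset'        => 50,           // skip the first 50 venues returned by page one
    'venuePhotos'   => 1,
]);
$page2 = json_decode(file_get_contents('https://api.foursquare.com/v2/venues/explore?' . $params), true);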