Get Photos and Detailed Info for Many Foursquare Venues in One Call

I am working on an iPhone app which allows users to search Foursquare. I am using the venues/explore endpoint for the search which works great, but the results don't include the images for a place or the priceTier.
Right now I am calling /venues/VENUE_ID for each of the returned results, which is generating a lot of API calls. Is there a better way to get this info in a single call?
Related question: If I use the multi endpoint to batch these requests, does that count as a single request towards the limit or as multiple requests?

Sounds like you're worried about limits more than network latency? If you're concerned that making the extra call for details will make you hit rate limits faster, that is actually why we generally ask developers to cache details such as prices or photos :) A single multi request is not a single API call; it counts as however many requests are bundled into it.
There is some help with photos, though: if you pass the venuePhotos=1 param as part of an explore request, you should get photos back in the response.
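For example, a minimal sketch of such an explore call (assuming Node 18+ with global fetch in an ES module; the credentials, location, and exact response shape are placeholders based on the legacy v2 API):

    // Ask the explore endpoint to include photos inline via venuePhotos=1.
    const params = new URLSearchParams({
      ll: '40.7,-74',                    // search location (placeholder)
      venuePhotos: '1',                  // request photos in the same response
      client_id: 'YOUR_CLIENT_ID',
      client_secret: 'YOUR_CLIENT_SECRET',
      v: '20150402',                     // API version date
    });

    const res = await fetch(`https://api.foursquare.com/v2/venues/explore?${params}`);
    const json = await res.json();
    for (const item of json.response.groups[0].items) {
      console.log(item.venue.name, item.venue.photos); // photos come along now
    }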

Related

How can I make my REST API faster with Node.js and Express?

My problem: I have built a slow API, and I want to make it faster.
I am using Node.js and Express to build REST APIs,
and I have one API with an average response time of 3.5 s.
Below is what it does:
It gets all userIds.
Based on the userIds, it makes parallel REST calls to get other user data such as projectIds and groupIds. (Since the data lives in different databases, separate REST calls are unavoidable.)
Based on the user data (userId, projectId, groupId, etc.), it filters, e.g. projectId = 1.
It makes 4 more REST calls in parallel, for things such as schedule and timezone.
I believe I am making too many requests: with 10 accounts I end up making 54 requests in total to build one response.
One idea is to fetch each whole DB table in a single request and combine the results in code.
Is there a better way to do it?
Thanks in advance.
You can try the following to decrease the response time:
- Use logging to find which query takes most of the time, and optimize it.
- Use Redis: cache frequent, common requests so that a cache hit removes the need for any additional queries (see the sketch below this list).
- Make sure all the requests and queries involved actually run asynchronously, in parallel where possible.
- Try to consolidate this data into a common database or table instead of querying several databases.
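A minimal sketch combining the caching and parallelism points above (the three endpoints are made-up stand-ins for your separate databases; the key layout and 60-second TTL are arbitrary choices):

    import { createClient } from 'redis';

    const redis = createClient();
    await redis.connect();

    async function getUserData(userId) {
      const cacheKey = `user:${userId}`;
      const cached = await redis.get(cacheKey);
      if (cached) return JSON.parse(cached); // cache hit: no further queries

      // Hit the separate services in parallel instead of one at a time.
      const [projects, groups, schedule] = await Promise.all([
        fetch(`https://projects.example.com/users/${userId}`).then(r => r.json()),
        fetch(`https://groups.example.com/users/${userId}`).then(r => r.json()),
        fetch(`https://schedule.example.com/users/${userId}`).then(r => r.json()),
      ]);

      const data = { userId, projects, groups, schedule };
      await redis.set(cacheKey, JSON.stringify(data), { EX: 60 }); // expire after 60 s
      return data;
    }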

How to improve performance on the backend when data is fetched from multiple APIs in a sequential manner?

I am creating a Node.js app that consumes APIs from multiple servers in a sequential manner, as the next request depends on the results of previous requests.
For instance, user registration is done on our platform in a PostgreSQL database. User feeds, chats, and posts are stored on getStream servers. User roles and permissions are managed through a CMS. If on a page we want to display a list of a user's followers with some buttons according to the user's permissions, then first I need to get the list of my current user's followers from getStream, then enrich them from my PostgreSQL DB, then fetch their permissions from the CMS. Since each request has to wait for the previous one, it takes a long time to respond.
I need to serve all that data in a certain format. I have used Promise.all() where requests did not depend on each other.
I thought of storing pre-processed data that is ready to be served, but I am not sure how to do that. What is the best way to solve this problem?
"...sequential manner as the next request depends on results from previous requests"

You could try using async/await so that each request runs in a sequential manner while the code still reads top to bottom.
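A minimal sketch of that shape; the three helper functions are hypothetical stand-ins for the getStream, PostgreSQL, and CMS calls described above:

    async function getFollowerPage(userId) {
      // Each step needs the previous step's output, so they must run in order.
      const followers = await fetchFollowersFromGetStream(userId);
      const enriched  = await enrichFromPostgres(followers);
      const withPerms = await attachPermissionsFromCms(enriched);
      return withPerms;
    }

Within each step, lookups that do not depend on each other can still run in parallel with Promise.all(), as you are already doing.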

Best way to request an API and store the result every minute

I have an app that is hitting the rate limit of an API, which is hurting the user experience. I have an idea to solve this, but I have no idea whether it is how this should ideally be solved. Does this idea make sense, and is it a good way to solve the issue? And how should I go about implementing it? I'm using React Native and Node.js.
Here is the idea:
My app will request the data from a "middleman" API that I make. The middleman API will request data once per minute from the main API that I am having the rate-limit problem with (this should solve the rate-limit issue), then store it for the minute until it updates again. I was thinking the best way to do this is to spin up a server on AWS that requests from the other API every minute (is this the easiest way to make a request every minute?), then store the result on either a blank middleman webpage (or do I need to store it in a database like MongoDB?). Then my app will call that middleman webpage/API.
Your idea is good.
Your middleman would be a caching proxy. It would act just as you stated. Have a look at https://github.com/active-video/caching-proxy; it does almost what you want: it creates a server that receives requests for URLs, fetches and caches them, and serves the cached version from then on.
The only downside is that it does not have a lifetime option for the cache. You could either fork it to add that option, or run a daemon that deletes files that are too old, to force a re-fetch.
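If you would rather hand-roll the middleman than fork the proxy, a minimal sketch looks like this (assuming Node 18+ with Express; the upstream URL, port, and one-minute interval are placeholders):

    import express from 'express';

    const UPSTREAM_URL = 'https://api.example.com/data'; // the rate-limited API
    let cached = null;

    async function refresh() {
      try {
        const res = await fetch(UPSTREAM_URL);
        cached = await res.json();
      } catch (err) {
        console.error('refresh failed, keeping the stale copy', err);
      }
    }

    await refresh();                 // warm the cache on startup
    setInterval(refresh, 60 * 1000); // exactly one upstream call per minute

    const app = express();
    app.get('/data', (req, res) => res.json(cached)); // clients only see the cache
    app.listen(3000);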
EDIT:
A very interesting addition to the caching proxy would be a HEAD request to find out whether the result has changed. While not every API provides this, it can be useful when yours exposes such info. Only if HEAD requests do not count toward your API limits, of course...
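Extending the sketch above, that idea could look like the following, assuming the upstream API returns a meaningful ETag header (many do not):

    let etag = null;

    async function refreshIfChanged() {
      const head = await fetch(UPSTREAM_URL, { method: 'HEAD' });
      const current = head.headers.get('etag');
      if (current && current === etag) return; // unchanged: skip the full GET

      const res = await fetch(UPSTREAM_URL);
      etag = res.headers.get('etag');
      cached = await res.json();
    }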

GET Request URL Capability

I recently began working with JavaScript and am looking at the various GET and POST requests one can send to a server.
With GET, as far as I know, all of the information in the query is contained in the URL that the user triggers. On the server side this has to be dissected to retrieve the necessary parameters.
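For example, a plain Node.js server might pull the parameters back out of the URL like this (a minimal sketch; the parameter names are invented):

    import http from 'node:http';

    http.createServer((req, res) => {
      // req.url holds the path plus query string, e.g. /search?q=shoes&page=2
      const url = new URL(req.url, `http://${req.headers.host}`);
      const term = url.searchParams.get('q');
      const page = url.searchParams.get('page');
      res.end(`searching for "${term}", page ${page}`);
    }).listen(3000);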
I was just wondering how larger and more detailed requests are handled with the GET method. For instance, what if I had millions and millions of parameters making up my whole request? Would they all be jumbled into the URL? Is there a limit to the number of unique URLs one can have? I read this post:
How do URL shorteners guarantee unique URLs when they don't expire?
I would really like some more input.
Thank You!

Bulk async calls to FB and G+ count endpoints fail

My app has a controller that outputs the total social count for various pages: http://pastebin.com/MLBTb3mi
It works fine when I'm making a few calls at a time, but when a website has, say, 1000 URLs and I want to update the social count for each of them, it breaks under high-volume asynchronous calls; G+ and Facebook in particular break. Here's the console error I get as the response to the call made to FB's graph ID (this is the 'body' response; please refer to line 74 of the controller: http://pastebin.com/MLBTb3mi):
facebook body is {"id":"970371719648388","created_time":"2015-04-02T07:43:09+0000","is_scraped":false,"type":"website","updated_time":"2015-04-02T07:43:09+0000","url":"http:\/\/www.zappos.com\/womens-clothing\/"}
facebook result is undefined
Does anyone know how to solve this issue and make the controller work smoothly?
I tried making bulk calls using sharedcount.com and I was able to make tens of thousands of simultaneous calls without any problem. How can I write my controller to handle such bulk operations without relying on external services like sharedcount?
You should examine the status code of the response. My guess is that you're being rate-limited. See the Facebook page on rate limiting for details; it doesn't give specific numbers on what will get you blocked, but it does describe what to look for to tell that you're being throttled. The solution is to throttle requests on your end so that they don't go out too fast; you can use something as simple as Lodash's _.throttle for this.
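Note that _.throttle drops intermediate calls rather than queuing them, so for a fixed list of URLs a small hand-rolled batcher may fit better. A sketch (fetchCount is a hypothetical stand-in for one Graph API call; the batch size and delay are arbitrary):

    // Run at most `limit` calls at a time, pausing between batches so the
    // outgoing rate stays under the API's ceiling.
    async function throttledAll(urls, limit = 10, delayMs = 1000) {
      const results = [];
      for (let i = 0; i < urls.length; i += limit) {
        const batch = urls.slice(i, i + limit);
        results.push(...await Promise.all(batch.map(fetchCount)));
        if (i + limit < urls.length) {
          await new Promise(resolve => setTimeout(resolve, delayMs));
        }
      }
      return results;
    }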
