According to Gmail's API docs, the limits are as follows:
API Limit Type          Limit
Daily Usage             1,000,000,000 quota units per day
Per User Rate Limit     250 quota units per user per second, moving average (allows short bursts)
In the table further below, the docs say that a messages.get costs 5 quota units.
In my case, I am interested in polling my inbox every second to check for new messages, and getting the contents of those messages if there are any.
Question 1: Does this mean that I'd be spending 5 quota units each second, and that I'd be well under my quota?
Question 2: How should I check for only "new" messages? That is, messages that have arrived since the last time I made the API call? Would I need to add "read" labels to the messages after each API call (spending extra quota units on the "modify" API call), or is there an easier way?
Question 1:
That's right. You would spend (5 * 60 * 60 * 24 =) 432,000 quota units per day on the polling, which is nowhere near the limit. You could also implement push notifications if you want Google to notify you of new messages rather than polling yourself.
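For instance, here is a minimal sketch of push notifications in Python with google-api-python-client; the Pub/Sub topic name is a placeholder you would create yourself, and `service` is assumed to be an authorized Gmail client:

# Sketch: ask Gmail to publish a Pub/Sub message whenever the INBOX
# changes, instead of polling every second. Assumes `service` was built
# with googleapiclient.discovery.build('gmail', 'v1', credentials=creds).
request_body = {
    'labelIds': ['INBOX'],
    'topicName': 'projects/your-project/topics/new-mail',  # placeholder topic
}
response = service.users().watch(userId='me', body=request_body).execute()
# The response's historyId marks where to start reading changes; the watch
# expires after about a week and must be renewed with another watch() call.
print(response['historyId'], response['expiration'])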
Question 2:
Listing messages has an undocumented feature of querying for messages after a certain timestamp, given in seconds since the epoch.
If you would like to get messages after Sun, 29 May 2016 07:00:00 GMT, you would just pass the value after:1464505200 in the q query parameter.
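Put together, a minimal polling loop in Python with google-api-python-client might look like this; `service` is assumed to be an authorized Gmail client, and the field names follow the messages.list/messages.get responses:

# Sketch: every second, list only the messages received since the last
# poll, then fetch the contents of each new message.
import time

last_poll = int(time.time())
while True:
    query = 'after:%d' % last_poll
    last_poll = int(time.time())
    listing = service.users().messages().list(userId='me', q=query).execute()
    for ref in listing.get('messages', []):
        msg = service.users().messages().get(userId='me', id=ref['id']).execute()
        print(msg['snippet'])  # placeholder: handle the new message here
    time.sleep(1)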
Question 1:
You're right about that, as also detailed in the documentation you linked.
Question 2:
An easier way, as you've asked, and one that is also encouraged, is to use Batching Requests. As discussed in the documentation, the Gmail API supports batching to allow your client to put several API calls into a single HTTP request; a sketch follows below.
This related SO post - Gmail API limitations for getting mails - can provide further helpful ideas on the usage of batching.
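As a rough illustration in Python with google-api-python-client (the message IDs are placeholders, and `service` is an authorized Gmail client), fetching several messages in one HTTP request could look like:

# Sketch: bundle several messages.get calls into a single HTTP request
# using the client library's batch support.
def handle_message(request_id, response, exception):
    if exception is not None:
        print('request %s failed: %s' % (request_id, exception))
    else:
        print(response['snippet'])

batch = service.new_batch_http_request(callback=handle_message)
for msg_id in ['id1', 'id2', 'id3']:  # placeholder message IDs
    batch.add(service.users().messages().get(userId='me', id=msg_id))
batch.execute()

Note that each call inside the batch still costs its own quota units; batching reduces HTTP overhead, not quota.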
When creating a notification from an extension using the chrome.notifications API, the NotificationOptions eventTime seems to be ignored: the notification is created immediately, when it was supposed to be delayed by the milliseconds set for eventTime. As per the documentation:
eventTime
A timestamp associated with the notification, in milliseconds past the epoch (e.g. Date.now() + n).
It is not clearly stated that this will delay creation of the notification by the milliseconds set in eventTime, but if it doesn't, then what is the purpose of this option?
I found that a similar question was asked some years ago: Chrome notification ignoring eventTime, but the question above was not answered there. Instead, the solutions discussed other approaches (setTimeout, chrome.alarms) to delaying the creation of the notification.
According to the source code, eventTime does not delay the notification.
It's displayed in the title of the notification (source).
For example, setting it to Date.now() + (5 * 60 + 1) * 1000 shows In 5m i.e. five minutes.
Note that we had to add one second; otherwise the API shows 4m, as it subtracts the few microseconds spent creating the notification internally.
It's used to sort notifications within the same priority group (source)
I have created a Logic App that triggers when a tweet is posted with a given hashtag. The trigger is set to check every 10 seconds. In reality, the Logic App does not run, even if I wait minutes for it; but if I then run it manually, it executes with the expected input. Any idea what is happening here?
I was having a similar issue, and believe this is due to the specific limitations set for the Twitter Connector (item 4 below, Frequency of trigger polls: 1 hour).
https://learn.microsoft.com/en-us/connectors/twitterconnector/
LIMITS
The following are some of the limits and restrictions:
Maximum number of connections per user: 2
API call rate limit for POST operation: 12 per hour
API call rate limit for other operations: 600 per hour
Frequency of trigger polls: 1 hour
Maximum size of image upload: 5 MB
Maximum size of video upload: 15 MB
Maximum number of search results: 100
Maximum number of new tweets tracked within one polling interval: 5
Some error has probably occurred. You can inspect all runs of the trigger on the 'Trigger History' blade. This page gives a good overview of monitoring Logic Apps: https://azure.microsoft.com/en-us/documentation/articles/app-service-logic-monitor-your-logic-apps/
I am working on a system where we call the Vision Read API to extract the contents of raster PDFs. Files are of different sizes, ranging from one page to several hundred pages.
Files are stored in Azure Blob Storage, and a function will push them to the Read API once all files have been uploaded to the blob. There could be hundreds of files.
Therefore, when the process starts, a large number of documents are expected to be sent for text extraction per second. But the Vision API has a limit of 10 transactions per second, including Read.
I am wondering what the best approach would be: some type of throttling, or a queue?
Is there any integration available (say, with a queue) from which the Read API can pull documents, and is there any type of push notification available to signal completion of the Read operation? How can I prevent timeouts due to exceeding the 10 TPS limit?
Per my understanding, there are 2 key points you want to know:
How to overcome the 10 TPS limit while you have a lot of files to read.
The best approach to get the Read operation status and result.
Your question is a bit broad, but maybe I can provide you with some suggestions:
For Q1: generally, if you reach the TPS limit, you will get an HTTP 429 response, and you must wait for some time before calling the API again, or else the next call will be refused. Usually we retry the operation using something like an exponential back-off retry policy to handle the 429 error:
1) Check the HTTP response code in your code.
2) When the HTTP response code is 429, retry the operation after N seconds, where you define N yourself, e.g. 10 seconds.
For example, the following is a 429 response. You could set your wait time to (26 + n) seconds (you define n yourself here, e.g. n = 5):
{
"error":{
"statusCode": 429,
"message": "Rate limit is exceeded. Try again in 26 seconds."
}
}
3) If the retry succeeds, continue with the next operation.
4) If the retry fails with 429 again, retry the operation after N*N seconds (again, you define this yourself); this is the exponential back-off retry policy.
5) If that fails with 429 too, retry after N*N*N seconds, and so on.
6) Always wait for the current operation to succeed before continuing; the waiting time grows exponentially. A sketch of this policy follows below.
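Put together, a rough Python sketch of this retry policy; the URL, headers, and base delay N are placeholders:

# Sketch: exponential back-off around a Read call, retrying on HTTP 429.
import time
import requests

def call_with_backoff(url, headers, body, n=10, max_attempts=5):
    delay = n
    for attempt in range(max_attempts):
        response = requests.post(url, headers=headers, json=body)
        if response.status_code != 429:
            return response   # success, or an error other than throttling
        time.sleep(delay)     # wait before retrying
        delay *= n            # N, then N*N, then N*N*N ... seconds
    raise RuntimeError('still throttled after %d attempts' % max_attempts)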
For Q2: as we know, we can use the Get Read Result API to check the Read operation status and fetch its result.
If you want a completion notification and the result, you could poll each of your operations at an interval, e.g. sending a check request every 10 seconds. You can use an Azure Function or an Azure Automation runbook to create asynchronous tasks that check the Read operation status and, once it is done, handle the result based on your requirements.
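A minimal polling sketch in Python; the endpoint, key, and document URL are placeholders, and the Operation-Location header and status values follow the Read API's asynchronous pattern:

# Sketch: submit a document to the Read API, then poll the returned
# Operation-Location URL every 10 seconds until the analysis finishes.
import time
import requests

endpoint = 'https://<your-resource>.cognitiveservices.azure.com'  # placeholder
headers = {'Ocp-Apim-Subscription-Key': '<your-key>'}             # placeholder

submit = requests.post(
    endpoint + '/vision/v3.2/read/analyze',
    headers=headers,
    json={'url': 'https://example.com/sample.pdf'},  # placeholder document
)
operation_url = submit.headers['Operation-Location']

while True:
    result = requests.get(operation_url, headers=headers).json()
    if result['status'] in ('succeeded', 'failed'):
        break
    time.sleep(10)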
Hope it helps. If you have any further concerns, please feel free to let me know.
In the Instagram API documentation, for the API GET /users/user-id/media/recent, it doesn't mention the maximum count supported for each API call.
Does anyone have an idea about it? Thanks.
It appears there is no limit for /users/user-id/media/recent - at least I just ran this and got 5,500+ results back for user '787132' (natgeo). If you want to limit it, use the count parameter.
Note that other endpoints seem to have limits, e.g. /media/popular will usually return a max of 20.
Also be aware that if you do not limit using the count param, you might reach your global / endpoint-specific rate limit as per http://instagram.com/developer/limits/
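For illustration, limiting and paging with count might look like this in Python; the access token is a placeholder, and the endpoint and response shape follow the v1 API the question refers to:

# Sketch: fetch a user's recent media 20 at a time, following the
# pagination links instead of pulling everything in one call.
import requests

url = 'https://api.instagram.com/v1/users/787132/media/recent'
params = {'access_token': '<your-token>', 'count': 20}  # placeholder token

while url:
    data = requests.get(url, params=params).json()
    for item in data['data']:
        print(item['id'])
    url = data.get('pagination', {}).get('next_url')  # absent on the last page
    params = {}  # next_url already carries the query string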
I am using JMeter (I started using it a few days ago) as a tool to simulate a load of 30 threads using a CSV data file that contains login credentials for 3 system users.
The objective I set out to achieve was to measure 30 users (threads) logging in and navigating to a page via the menu over a time span of 30 seconds.
I have set my thread group as:
Number of threads: 30
Ramp-up Period: 30
Loop Count: 10
I ran the test successfully. Now I'd like to understand what the results mean, what counts as a good or bad measurement, and what can be suggested to improve the results. Below is a table of the results collated in the Summary report of JMeter.
I have conducted research, only to find blogs/sites telling me the same info as what is defined on the jmeter.apache.org site. One blog (Nicolas Vahlas) that I came across gave me some very useful information, but it still hasn't helped me understand what to do next with my results.
Can anyone help me understand these results and what I could do next following the execution of this test plan? Or point me in the right direction of an informative blog/site that will help me understand what to do next.
Many thanks.
In my opinion, the deviation is high.
You know your application better than all of us.
You should focus on whether the average response time you got, and the maximum response time's frequency and value, are acceptable to you and your users. This applies to throughput also.
The report shows that the average response time is below 0.5 seconds and the maximum response time is also below 1 second, which is generally acceptable, but that should be defined by you (is it acceptable to your users?). If the answer is yes, try with more load to check scaling.
Your requirement mentions that you need 30 concurrent users performing different actions. The response time of your requests is low and you have a ramp-up of 30 seconds. Please check the total number of active threads during the test. I believe the time during which there are actually 30 concurrent users in the system is quite short, so the average response time you are seeing may be misleading. I would suggest running the test for longer, so that there are 30 concurrent users in the system throughout; that would give a correct reading per your requirements.
You can use the Aggregate report instead of the Summary report. In performance testing, the following can be used for analysis:
Throughput - requests/second
Response Time - 90th percentile
Target application resource utilization (CPU, processor queue length, and memory)
Normally the SLA for websites is 3 seconds, but this requirement changes from application to application.
Your test results are good, assuming the users are actually logging into the system/portal.
Samples: the number of requests sent for a particular module.
Average: the average response time, over the 300 samples.
Min: the minimum response time among the 300 samples (the fastest of the 300).
Max: the maximum response time among the 300 samples (the slowest of the 300).
Standard Deviation: a measure of the variation (across the 300 samples).
Error: the failure percentage.
Throughput: the number of requests processed per second.
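If you want to recompute these figures yourself, or add a 90th percentile (which the Summary report lacks), a small Python sketch over a JMeter results file could look like this; the file name and column names assume JMeter's default CSV (.jtl) output:

# Sketch: recompute summary statistics from a JMeter CSV results file.
import csv
import statistics

with open('results.jtl', newline='') as f:
    rows = list(csv.DictReader(f))

elapsed = [int(row['elapsed']) for row in rows]   # response times in ms
errors = sum(1 for row in rows if row['success'] != 'true')

print('Samples:    %d' % len(elapsed))
print('Average:    %.0f ms' % statistics.mean(elapsed))
print('Min / Max:  %d / %d ms' % (min(elapsed), max(elapsed)))
print('Std. dev.:  %.0f ms' % statistics.pstdev(elapsed))
print('90th pct.:  %d ms' % sorted(elapsed)[int(0.9 * len(elapsed))])
print('Error %%:    %.1f' % (100.0 * errors / len(elapsed)))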
Hope this will help.