What are the units for the time remaining value in GitHub's rate limiting message?

According to GitHub's API documentation, when you go over the rate limit, you get a response that looks like this:
HTTP/1.1 403 Forbidden
Date: Tue, 20 Aug 2013 14:50:41 GMT
Status: 403 Forbidden
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1377013266
{
  "message": "API rate limit exceeded for xxx.xxx.xxx.xxx. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)",
  "documentation_url": "https://developer.github.com/v3/#rate-limiting"
}
What are the units on the X-RateLimit-Reset value? In other words, how can I tell from the error message how long in seconds or minutes I need to wait before I can send another request?

It's a Unix timestamp; see this note in the GitHub API documentation.
With the timestamp from that example the reset time would have been 20 Aug 2013 at 15:41:06.
According to a Wikipedia article the GitHub docs link to, a Unix timestamp is:
defined as the number of seconds that have elapsed since 00:00:00 Coordinated Universal Time (UTC), Thursday, 1 January 1970, not counting leap seconds.
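So to work out how long to wait, subtract the current epoch time from the header value. A minimal Python sketch, using the header value from the example above:

import time

reset_epoch = 1377013266  # value of the X-RateLimit-Reset header: seconds since the Unix epoch (UTC)

# Seconds left until the limit resets; clamp at zero in case the reset has already passed.
wait_seconds = max(0, reset_epoch - time.time())
print("Sleep for %d seconds before retrying" % wait_seconds)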

Related

Azure resource usage api says “bad request” with minutes part in the reported start and end datetimes

I am facing an issue while calling the Azure usage API. The API requires a reported start date-time and a reported end date-time. If I provide a minutes part in these date-times, like:
2017-02-09T03%3a30%3a00Z
then it fails with a Bad Request error.
It works fine as long as the date-time is specified only down to the hour; the moment any minutes part is given, it fails. I tried to make sure that:
• both the start and end datetimes are in the past relative to the current date-time
• both datetimes are provided in UTC ISO 8601 format
• the end datetime is after (i.e., later than) the start datetime.
As a result of the above issue, the minimum time gap between the reported date-times that I can use is one hour. Please let me know what I could try, or anything wrong that I might be doing.
Thanks in advance,
Rahul
If you use hourly granularity, minutes and seconds are not supported.
https://msdn.microsoft.com/en-us/library/azure/mt219001.aspx
The body of the Bad Request response should look like this:
{
  "error": {
    "code": "InvalidInput",
    "message": "The reportedendtime for hourly aggregation granularity needs to have the time set using only the hours portion, with zeros for minutes and seconds (1:00:00Z, 2:00:00Z, 3:00:00Z, etc.)."
  }
}
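So the fix is to zero out the minutes and seconds before building the query string. A minimal Python sketch (the helper name is mine, not part of the API):

from datetime import datetime, timezone
from urllib.parse import quote

def hourly_iso8601(dt):
    # Hourly granularity requires zeros for minutes and seconds, so truncate to the hour.
    truncated = dt.replace(minute=0, second=0, microsecond=0, tzinfo=timezone.utc)
    return truncated.strftime("%Y-%m-%dT%H:%M:%SZ")

start = hourly_iso8601(datetime(2017, 2, 9, 3, 30))  # "2017-02-09T03:00:00Z"
print(quote(start, safe=""))  # URL-encoded, as in the question: "2017-02-09T03%3A00%3A00Z"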

Inconsistent counts in Virtuoso 7.1 for large graphs

I have an instance of Virtuoso 7.1 running and DBpedia set up as clearly elucidated in this blog. Now I have a very basic requirement of finding some count values. However I am confused by the results of my query:
select count(?s)
where {?s ?p ?o .
FILTER(strstarts(str(?s),"http://dbpedia.org/resource")) }
With this query I'd like to see how many resources are present in DBpedia that have a URI starting with "http://dbpedia.org/resource". Essentially my hope is to find resources like <http://dbpedia.org/resource/Hillary_Clinton> or <http://dbpedia.org/resource/Bill_Clinton> and so on.
My confusion lies in the fact that Virtuoso returns different results each time.
I tried it on two different machines, a local machine and our server, and in both cases I see wildly different results; to give a sense of how wildly, sample counts were 1101000, 36314, 328014, and 292014.
As for the execution timeout, I did try changing it from the default 0 to 5000 and then 8000. That did not noticeably change the results.
I know DBpedia provides statistics for their dump, but I'd like to do this right in Virtuoso. Why does this anomaly occur?
Furthermore I saw this discussion as well, where they refer to something that might be related. I would just like to know how to get the counts right for DBpedia in Virtuoso. If not Virtuoso, is there any other graph store (e.g., Jena, RDF4J, Fuseki) that would do this right?
First thing -- Virtuoso 7.1 is very old (shipped 2014-02-17). I'd strongly advise updating to a current build, 7.2.4 (version string 07.20.3217) or later, whether Commercial or Open Source.
Now -- the query you're running has to do a lot of work to produce your result. It has to check every ?s for your string, and then tot up the count. That's going to need a (relatively) very long time to run; exactly how long is dependent on the runtime environment and total database size, among other significant factors.
The HTTP response headers (specifically, X-SQL-Message) will include notice of such query timeouts, as seen here —
$ curl -LI "http://dbpedia.org/sparql?default-graph-uri=http%3A%2F%2Fdbpedia.org&query=SELECT+COUNT%28%3Fs%29+%0D%0AWHERE%0D%0A++%7B++%3Fs++%3Fp++%3Fo++.+%0D%0A+++++FILTER%28strstarts%28str%28%3Fs%29%2C%22http%3A%2F%2Fdbpedia.org%2Fresource%22%29%29+%0D%0A++%7D+&format=text%2Fhtml&CXML_redir_for_subjs=121&CXML_redir_for_hrefs=&timeout=3000000&debug=on"
HTTP/1.1 200 OK
Date: Tue, 06 Sep 2016 16:39:44 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 128
Connection: keep-alive
Server: Virtuoso/07.20.3217 (Linux) i686-generic-linux-glibc212-64 VDB
X-SPARQL-default-graph: http://dbpedia.org
X-SQL-State: S1TAT
X-SQL-Message: RC...: Returning incomplete results, query interrupted by result timeout. Activity: 12.43M rnd 13.28M seq 0 same seg 8.023M same pg 3.369M same par 0 disk 0 spec disk 0B / 0 m
X-Exec-Milliseconds: 121040
X-Exec-DB-Activity: 12.43M rnd 13.28M seq 0 same seg 8.023M same pg 3.369M same par 0 disk 0 spec disk 0B / 0 messages 0 fork
Expires: Tue, 13 Sep 2016 16:39:44 GMT
Cache-Control: max-age=604800
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: HEAD, GET, POST, OPTIONS
Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Accept-Encoding
Accept-Ranges: bytes
On your own instance, you can set larger or even infinite timeouts (e.g., MaxQueryExecutionTime) to ensure that you always get the most complete result available.
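If you want to detect an incomplete result programmatically rather than eyeballing curl output, you can check for that header. A minimal Python sketch against the public endpoint (the query is the one from the question; this assumes the header is returned for JSON results the same way it is for the HTML output above):

import requests

ENDPOINT = "http://dbpedia.org/sparql"
QUERY = """
SELECT COUNT(?s)
WHERE { ?s ?p ?o .
        FILTER(strstarts(str(?s), "http://dbpedia.org/resource")) }
"""

resp = requests.get(ENDPOINT, params={
    "query": QUERY,
    "format": "application/sparql-results+json",
})

# Virtuoso flags a partial (timed-out) result with the X-SQL-Message header.
if "X-SQL-Message" in resp.headers:
    print("Incomplete result:", resp.headers["X-SQL-Message"])
else:
    print(resp.json())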

How Do the Gmail API's Quota Units Work?

According to the Gmail API docs, the limits are as follows:
API Limit Type         Limit
Daily Usage            1,000,000,000 quota units per day
Per User Rate Limit    250 quota units per user per second, moving average (allows short bursts)
In the table further below, the docs say that a messages.get costs 5 quota units.
In my case, I am interested in polling my inbox every second to check for new messages, and getting the contents of those messages if there are any.
Question 1: Does this mean that I'd be spending 5 quota units each second, and that I'd be well under my quota?
Question 2: How should I check for only "new" messages? That is, messages that have arrived since the last time I made the API call? Would I need to add "read" labels to the messages after each API call (spending extra quota units on the "modify" API call), or is there an easier way?
Question 1:
That's right. You would spend (5 * 60 * 60 * 24 =) 432000 quota units per day on the polling, which is nowhere near the limit. You could also implement push notifications if you want Google to notify you of new messages rather than polling yourself.
Question 2:
Listing messages has an undocumented feature of querying for messages after a certain timestamp, given in seconds since the epoch.
If you would like to get messages after Sun, 29 May 2016 07:00:00 GMT, you would just give the value after:1464505200 in the q query parameter.
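As a sketch of how that polling loop might look with the google-api-python-client library (the authorized service object is assumed to already exist and is not built here):

import time

def poll_new_messages(service, last_poll):
    # `q` accepts the same search operators as the Gmail UI; `after:` with an
    # epoch timestamp is the undocumented trick described above.
    resp = service.users().messages().list(
        userId="me", q="after:%d" % last_poll
    ).execute()
    return resp.get("messages", []), int(time.time())

# Usage: keep the timestamp of the previous poll and pass it to the next call:
# messages, last_poll = poll_new_messages(service, last_poll)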
Question 1:
You're right about that, as also detailed in the documentation you linked.
Question 2:
An easier, and also encouraged, way is to use Batching Requests. As discussed in the documentation, the Gmail API supports batching to allow your client to put several API calls into a single HTTP request.
This related SO post - Gmail API limitations for getting mails can further provide helpful ideas on the usage of batching.
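For illustration, a minimal batching sketch with google-api-python-client (again assuming an authorized service object; the message IDs are placeholders):

def handle_message(request_id, response, exception):
    # Called once per sub-request; `response` is the full message resource.
    if exception is None:
        print(response["id"], response.get("snippet", ""))

batch = service.new_batch_http_request(callback=handle_message)
for msg_id in ["id1", "id2", "id3"]:  # placeholder message IDs
    batch.add(service.users().messages().get(userId="me", id=msg_id))
batch.execute()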

Foursquare API Rate limit reset

If I make a Foursquare API (Venue Details API) call at exactly 12:00 PM and consume all the hourly rate limit by 12:15 PM, when will Foursquare refresh the rate limit so that I can make calls again?
In other words, if my API call limit is 500 at 12:00 PM, will it be reset again at 01:00 PM?
How does Foursquare maintain the hourly limit of an API? Is the window fixed (i.e., 12:00 PM to 1:00 PM, then 1:00 PM to 2:00 PM, and so on), or does it count the hour from the first API call?
The rate limit window is rolling (i.e., it doesn't use fixed hourly buckets).
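To illustrate the difference, here is a conceptual Python sketch of how a rolling window behaves (not Foursquare's actual implementation):

from collections import deque
import time

class RollingWindowLimiter:
    # Each call "expires" individually one window after it was made, so capacity
    # comes back gradually rather than all at once on the hour.
    def __init__(self, limit=500, window_seconds=3600):
        self.limit = limit
        self.window = window_seconds
        self.calls = deque()

    def allow(self):
        now = time.time()
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False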

Varnish - TTL and the current Date

I'd like to set the TTL based on the current date.
http://site.com/2011/03/ should have 5 days as TTL.
http://site.com/2011/04/ should have 1 day as TTL.
Current Date: 15 April 2011
How is this possible in Varnish?
thx
Probably the simplest solution is to set the headers appropriately in the backend instances.
E.g., set Cache-Control: max-age=... on each backend response. Varnish takes this header information into account when computing the TTL.
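As a hypothetical backend-side sketch (the URL scheme is the one from the question; the helper name is mine), choosing a max-age from the archive month in the path:

from datetime import datetime, timezone

def cache_control_for(path, now):
    # path looks like "/2011/03/"; older archives change rarely, so cache longer.
    year, month = (int(part) for part in path.strip("/").split("/")[:2])
    is_current_month = (year, month) == (now.year, now.month)
    max_age = 86400 if is_current_month else 5 * 86400  # 1 day vs. 5 days
    return "max-age=%d" % max_age

now = datetime(2011, 4, 15, tzinfo=timezone.utc)
print(cache_control_for("/2011/04/", now))  # max-age=86400
print(cache_control_for("/2011/03/", now))  # max-age=432000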
