I've been making the same request repeatedly, and each time I get different results.
I'm trying to get the total number of appearances of a certain term (in this case: bigdata) by making two requests through 'dateRestrict' and subtracting the "totalResults" values I get from the response JSON. Even though I do not change anything, it always gives me something like:
first time I run it:
LowerRange total: 6710000
UpperRange total: 6760000
next time:
LowerRange total: 6720000
UpperRange total: 6770000
and the next:
LowerRange total: 6740000
UpperRange total: 6760000
LowerRange: 1 March.
UpperRange: 1 January.
Everything is pretty close, but still different.
Any insights on why this is happening? I'm not familiar with how this response is generated, so it might be correct and I am just misunderstanding it.
My request for March is this: https://www.googleapis.com/customsearch/v1?q=bigdata&dateRestrict=d144&key=MY_API_KEY&cx=MY_UNIQUE_KEY
Another doubt I have is whether the API rounds the results, because I never get something like 6740107.
Google's search algorithm provides different results to each user. If you search for a particular item on Google, it will give different output every time, depending on the user, location, time, etc. The totals it reports are estimates, not exact counts. It is the beauty of Google Search. Go through the link for reference.
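For what it's worth, the subtraction itself can be sketched like this. The searchInformation.totalResults field is where the Custom Search JSON response reports its (estimated) total, as a string; the sample payloads below are made up:

```javascript
// Sketch: compute the difference between two Custom Search responses.
function totalResults(response) {
  // totalResults arrives as a string in the CSE v1 JSON
  return parseInt(response.searchInformation.totalResults, 10);
}

function appearancesBetween(upperResponse, lowerResponse) {
  // Both totals are estimates, so the difference is an estimate too
  // and can fluctuate between runs, exactly as observed above.
  return totalResults(upperResponse) - totalResults(lowerResponse);
}

// Made-up payloads mirroring the numbers from the question:
const lower = { searchInformation: { totalResults: '6710000' } };
const upper = { searchInformation: { totalResults: '6760000' } };
console.log(appearancesBetween(upper, lower)); // 50000
```

Since each call returns a fresh estimate, averaging the difference over several runs is about the best you can do.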
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.github.com/repos/vuejs/vue/issues');
xhr.send();
With the above code, I can receive the top 30 issues of the vue project. But if I want to get the top 30 issues whose issue number is less than 8000, how can I do it?
In the GitHub v3 API docs, there is only a feature that lets you get issues since a certain point in time.
One way using API v3 would be to traverse the issues and find the ones you want. In any case, the Issues API returns issues in descending order of creation date, which means you just need to traverse them until you find the ones with an issue number lower than 8000.
In the particular case of vuejs/vue, you can increase the number of issues per page to 100 and then find the issues numbered below 8000 on the second page:
https://api.github.com/repos/vuejs/vue/issues?per_page=100&page=2
I feel this is a better option than using the issue Search API (v3), since you do not have to deal with the very low rate limit of GitHub's Search APIs.
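A sketch of that traversal: the filtering helper is the concrete part, and the paging loop (commented out) assumes a fetch-capable environment, so treat it as illustrative only:

```javascript
// Collect the first `limit` issues whose number is below `maxNumber`,
// relying on the Issues API returning issues newest-first.
function firstIssuesBelow(issues, maxNumber, limit) {
  return issues.filter(issue => issue.number < maxNumber).slice(0, limit);
}

// Paging sketch (assumed environment with fetch; not exercised here):
// async function topIssuesBelow8000() {
//   const wanted = [];
//   for (let page = 1; wanted.length < 30; page++) {
//     const res = await fetch(
//       `https://api.github.com/repos/vuejs/vue/issues?per_page=100&page=${page}`);
//     const issues = await res.json();
//     if (issues.length === 0) break;
//     wanted.push(...firstIssuesBelow(issues, 8000, 30 - wanted.length));
//   }
//   return wanted;
// }

// Tiny synthetic example of the filtering step:
const sample = [{ number: 8123 }, { number: 7999 }, { number: 7500 }];
console.log(firstIssuesBelow(sample, 8000, 30).map(i => i.number)); // [7999, 7500]
```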
According to the documentation: https://www.instagram.com/developer/limits/
The rate-limit control works under a "time-sliding" window, and the question is:
What is the refill frequency of the remaining-calls HTTP header (x-ratelimit-remaining)? Seconds? Minutes? An hour?
Reading the docs ("5000/hr per token for Live apps"; our company's app already went Live), I assumed a frequency limiter recalculated each second or minute, but after several days of trying different strategies, the value doesn't seem to follow any deducible behaviour.
Possible answers (depending on how it is coded) could be:
(a sliding window, like a frequency limiter)
It gains 1 credit every 720 ms (3600 s (1 hr) / 5000 remaining calls) while no request is made, up to 5000, and decays towards 0 otherwise.
If we made one request at exactly that frequency, we would never exhaust the 5000 calls, so we could spend them strategically: dispersed, clustered, traffic-adapted.
(a limited sink recharging each hour)
With 5000 remaining, it decays 1 credit per request, no matter the frequency; one hour after that first request, it goes back to 5000.
It renews to 5000 every hour, counting from when the token was used for its first request.
It decays 1 credit per request and resets to 5000 at fixed clock hours, like 12:00, 13:00, 14:00, 15:00...
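For what it's worth, the first hypothesis (the sliding-window frequency limiter) can be simulated with a simple credit bucket. Everything here (the class name, the numbers, the simulated clock) is just a model of the guess above, not Instagram's documented behaviour:

```javascript
// Model of hypothesis (a): a bucket holding up to 5000 credits that
// refills one credit every 720 ms. Purely illustrative.
class CreditBucket {
  constructor(capacity = 5000, refillMs = 720) {
    this.capacity = capacity;
    this.refillMs = refillMs;
    this.credits = capacity;
    this.lastRefill = 0; // simulated clock, in ms
  }
  refill(nowMs) {
    const earned = Math.floor((nowMs - this.lastRefill) / this.refillMs);
    if (earned > 0) {
      this.credits = Math.min(this.capacity, this.credits + earned);
      this.lastRefill += earned * this.refillMs;
    }
  }
  tryRequest(nowMs) {
    this.refill(nowMs);
    if (this.credits === 0) return false; // would be an HTTP 429
    this.credits -= 1;
    return true;
  }
}

const bucket = new CreditBucket();
bucket.tryRequest(0);        // spend one credit at t=0
console.log(bucket.credits); // 4999
bucket.refill(720);          // one refill interval later...
console.log(bucket.credits); // 5000, the credit came back
```

Under this model, one request every 720 ms would keep the credit count pinned at the top, which is exactly what the testing below shows does not happen.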
I'm using jInstagram 1.1.7.
After a lot of testing....
I have some temporary conclusions...
Starting from 5000, if you fetch at a uniform rate (720 ms/req), you will reach 500 at around minute 50; then Instagram begins to grant credit in portions of less than 500. So at minute 60 you'll have about 150 remaining calls left, and Instagram will give you another credit portion, generally bringing you back to around 500 on average and then going down again, of course...
If you stop consuming for about 30 minutes, you will get back to 5000 credits.
Also, although they give you 5000 remaining calls, they seem to keep counters indexed by IP: if you make requests from different IPs with the same credentials, each counter acts as if it ignores the others.
Besides that, Instagram has many errors in keeping a consistent value for the x-ratelimit-remaining HTTP header it returns on every HTTP request.
It looks related to some overriding, and some kind of race between the servers replicating the last value.
Shame on you, Instagram; I spent a lot of time adapting my cool throttling algorithm to your buggy behaviour, assuming you had good engineering down there!
Please fix it so we can play fair with you, instead of playing hide-and-seek and stealth tricks.
I have the following problem:
I want to keep statistics for data that should be constantly increasing, for example, the number of visits to a link. After some time, these visits get reset and start again from zero. To keep a continuously increasing series, I want to record the statistics somewhere, and I use a site that does this for me. In its configuration you can choose COUNTER, GAUGE, AVERAGE, etc., and I want to use COUNTER. The system is built on Nagios.
My question is how to use this COUNTER. I assume it is the same as the one in RRD, but I ran into some strange behaviour when creating such a COUNTER.
I submit the value '1', then '2', and expect the chart to come up with 3, but when I do this, it doesn't work. After a reset, for example, I submit 1 again and expect the total to become 4.
Could anyone who has dealt with these things tell me briefly how this COUNTER works?
I saw that COUNTER is used for traffic on routers, etc., but I want to use it for a regular graph that just increases.
The RRD data type COUNTER converts the input data into a rate by taking the difference between this sample and the last sample and dividing by the time interval (note that data normalisation also takes place, and this is dependent on the Interval setting of the RRD).
Thus, updating with a constantly increasing count will result in a rate-of-change value to be graphed.
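The rate conversion just described can be sketched as follows. This is only an illustration of the arithmetic, not rrdtool's actual implementation (which also handles normalisation and counter wraps):

```javascript
// What COUNTER does with two consecutive samples of an
// ever-increasing count: rate = (current - previous) / seconds elapsed.
function counterRate(prevValue, currValue, intervalSeconds) {
  return (currValue - prevValue) / intervalSeconds;
}

// Two samples of a visit counter taken 300 s apart:
console.log(counterRate(1000, 1600, 300)); // 2 visits per second
```

So submitting a constantly growing total of visits gets you a graph of visits per second, which is why submitting 1 and then 2 never makes a 3 appear.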
If you want to see your graph actually constantly increasing, IE showing the actual count of packets transferred (for example) rather than the rate of transfer, you would need to use type GAUGE which assumes any rate conversion has already been done.
If you want to submit the rate values (e.g., 2 in the last minute) but display the overall constantly increasing total (in other words, the inverse of how the COUNTER data type works), then you would need to store the values as GAUGE and use a CDEF in your rrdgraph command of the form CDEF:x=y,PREV,+ to obtain the ongoing total. Of course, you would only have this relative to the start of the graph time window; maybe a separate call would let you determine what base value to use.
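As an illustration, the running total that CDEF:x=y,PREV,+ produces is just a prefix sum over the stored rate samples. The base parameter here is hypothetical, standing in for whatever starting value a separate call would give you:

```javascript
// Prefix sum over the stored GAUGE samples, mimicking the effect of
// CDEF:x=y,PREV,+ within one graph window.
function runningTotal(samples, base = 0) {
  let total = base;
  return samples.map(v => (total += v));
}

// Four per-interval counts become an ever-growing series:
console.log(runningTotal([2, 5, 1, 3])); // [2, 7, 8, 11]
```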
As you use Nagios, you may like to investigate Nagios add-ons such as pnp4nagios which will handle much of the graphing for you.
Everyone knows that MRTG needs at least one value to be passed on its input.
In its per-target options, MRTG has 'gauge', 'absolute', and a default (no option) behaviour for what to do with incoming data, or how to count it.
Let's look at an elementary yet popular example:
We pass cumulative data from network interface statistics: how many packets were received by the interface.
We take it from /proc/net/dev or look at the ifconfig output for a certain network interface. The number of received bytes increases every time; it's cumulative.
So, as I imagine it, there could be two types of possible statistics:
1. How fast this value changes over a time interval. In other words: activity.
2. A simple, ever-growing graph that just draws every new value each minute (or any other time interval).
The first graph will be saltatory (activity). The second will just keep growing.
I have read rrdtool's and MRTG's docs twice and can't understand which of the options mentioned above counts what.
I suppose (I am not sure) that 'gauge' draws values as-is, without any differentiation calculations (good for measuring how much memory or CPU is used every 5 minutes), and that the default or 'absolute' behaviour tries to calculate the rate between neighbouring measurements, but what's the difference between the last two?
Can you explain, in a simple manner, which behaviour stands behind which of the three possible options?
Thanks in advance.
MRTG assumes that everything is being measured as a rate (even if it isn't a rate).
Type 'gauge' assumes that you have already calculated the rate; thus, the provided value is stored as-is (after Data Normalisation). This is appropriate for things like CPU usage.
Type 'absolute' assumes the value passed is the count since the last update. Thus, the value is divided by the number of seconds since the last update to get a rate in thingies per second. This is rarely used, and only for certain unusual data sources that reset their value on being read; e.g., a script that counts the number of lines in a log file and then truncates the log file.
Type 'counter' (the default) assumes the value passed is a constantly growing count, possibly one that wraps around at 32 or 64 bits. The difference between the value and its previous value is divided by the number of seconds since the last update to get a rate in thingies per second. If it sees the value decrease, it assumes a counter wraparound at the 32- or 64-bit boundary. This is appropriate for something like network traffic counters, which is why it is the default behaviour (MRTG was originally written for network traffic graphs).
Type 'derive' is like 'counter', but will allow the counter to decrease (resulting in a negative rate). This is not possible directly in MRTG but you can manually create the necessary RRD if you want.
All types subsequently perform Data Normalisation to adjust the timestamp to a multiple of the Interval. This will be more noticeable for Gauge types where the value is small than for counter types where the value is large.
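The counter-to-rate conversion with wraparound handling can be sketched like this. It is an illustration of the arithmetic only; the 32-bit case is shown, since exact 64-bit deltas would need BigInt in JavaScript:

```javascript
// What the 'counter' type computes from two consecutive samples.
function counterRate32(prev, curr, intervalSeconds) {
  let delta = curr - prev;
  // A decrease is treated as a 32-bit counter wraparound rather than a
  // negative rate ('derive' would keep the negative delta instead).
  if (delta < 0) delta += 2 ** 32;
  return delta / intervalSeconds;
}

console.log(counterRate32(100, 700, 300));           // 2 per second
// The counter wrapped past 2^32 between samples:
console.log(counterRate32(2 ** 32 - 100, 200, 300)); // 1 per second
```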
For more information on this, see Alex van der Bogaerdt's excellent tutorial.
I've found this gem: http://watirwebdriver.com/page-performance/
But I can't seem to understand what this measures:
browser.performance.summary[:response_time]/1000
Does it start measuring from the second I open the browser?
Watir::Browser.new :chrome
Or from the last watir-webdriver command written?
And how can I set when it starts the timer?
I've tried several scripts, but I keep getting 0 seconds; that's why I'm not sure.
From what I have read (I have not actually used it on a project), the response_time is the time from the start of navigation to the end of the page loading; see Tim's (the gem's author) answer to a previous question. The graphical image on Tim's blog helps to understand the different values: http://90kts.com/2011/04/19/watir-webdriver-performance-gem-released/.
The gem is for getting the performance results of a single response, rather than the overall usage of a browser during a script, so there is no need to start/stop a timer.
If you are getting 0 seconds, it likely means that the response_time is less than 1000 milliseconds (in Ruby, integer division like 999/1000 gives 0). To make sure you get something non-zero, try doing:
browser.performance.summary[:response_time]/1000.0
Dividing by 1000.0 ensures that you get decimal values (e.g., 0.013).