How can I get issues since any number? - github-api

var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.github.com/repos/vuejs/vue/issues');
xhr.send();
With the above code, I can receive the list of the top 30 issues of the vue project. But if I want to get the top 30 issues whose issue number is less than 8000, how can I do that?
In the GitHub v3 API docs, there is only a feature that allows you to get issues since a point in time.

One way using API v3 would be to traverse the issues and find the ones you want. The call to the Issues API returns issues in descending order of creation date, which means you just need to keep traversing the issues until you reach the ones with an issue number lower than 8000.
In the particular case of vuejs/vue, you can increase the number of issues returned per page to 100 and then find the issues numbered below 8000 on the second page:
https://api.github.com/repos/vuejs/vue/issues?per_page=100&page=2
I feel this is a better option than using the issue Search API (v3), since you do not have to deal with the very low rate limit of the GitHub Search APIs.
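As a minimal sketch, building on the XMLHttpRequest code from the question and the per_page/page parameters mentioned above (the number of the page containing issues below 8000 is an assumption specific to vuejs/vue at the time of writing):

var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.github.com/repos/vuejs/vue/issues?per_page=100&page=2');
xhr.onload = function () {
    var issues = JSON.parse(xhr.responseText);
    // Keep the first 30 issues whose issue number is below 8000.
    var wanted = issues.filter(function (issue) {
        return issue.number < 8000;
    }).slice(0, 30);
    console.log(wanted.length + ' issues found below #8000');
};
xhr.send();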

Related

What to understand by loaded by in performance result

Lately I have seen a high time to first byte for my sites.
Most of the time it is caused by JavaScript. A test at webpagetest.org usually shows something like:
URL: http://example.com/
Loaded by: http://example.com/some-kind-of-javascript.js
When I remove that JavaScript, another JavaScript file appears in its place.
What does "loaded by" mean exactly?
Here is an example test result:
https://www.webpagetest.org/result/190729_JY_cb028989b0f44671fba830c9eaca29d7/1/details/#waterfall_view_step1
I'm not sure why you think JavaScript is the problem. It looks to me like it's the initial HTML that is causing the problem.
Time to first byte for a resource is the time taken from after sending the request (e.g. GET /) until the first byte of the response is received. It excludes the DNS lookup, TCP connection and SSL handshake time, so it really is a measure of the time taken to start receiving that resource. Note that the "first byte" time at the top of the waterfall is the full end-to-end time, including DNS/TCP/SSL and any redirects, whereas for each individual resource the TTFB is split out from those phases.
I don't know how your home page is created - I would guess it's not a static page so whatever is generating this (PHP?) is taking too long. Whether this is due to bad backend code, an under-resourced server, a slow database, or something else is impossible to say from the outside. I would suggest getting in touch with your hosting provider and/or reviewing your code and server.
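As a rough illustration (not part of the original answer), the browser's Navigation Timing API exposes roughly the same phases that the WebPageTest waterfall reports, so you can check the main document's TTFB yourself in any browser that supports performance.timing:

// Rough phase breakdown for the main document using the Navigation Timing API.
var t = window.performance.timing;
console.log('DNS lookup:       ' + (t.domainLookupEnd - t.domainLookupStart) + ' ms');
console.log('TCP connect:      ' + (t.connectEnd - t.connectStart) + ' ms');
console.log('Request to TTFB:  ' + (t.responseStart - t.requestStart) + ' ms');
console.log('End-to-end TTFB:  ' + (t.responseStart - t.navigationStart) + ' ms');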

1000 rows limit for chef-api module/wrapper

So I'm using this Node module to connect to Chef from my API.
https://github.com/normanjoyner/chef-api
It contains a method called "partialSearch" which fetches selected attributes for all nodes that match a given query. The problem I have is that one of our environments has 1386 nodes attached to it, but the module seems to return a maximum of 1000 results.
There does not seem to be any way to "offset" the results. The module works pretty well otherwise, and it's a shame this feature is not implemented, since its absence really limits the module's usefulness.
Has anyone bumped into a similar issue with this module and can advise how to work around it?
Here is an extract of my code:
chef.config(SetOptions(environment));

console.log("About to search for any servers ...");

chef.partialSearch('node',
  {
    q: "name:*"
  },
  {
    name: ['name'],
    ipaddress: ['ipaddress'],
    chef_environment: ['chef_environment'],
    ip6address: ['ip6address'],
    run_list: ['run_list'],
    chef_client: ['chef_client'],
    ohai_time: ['ohai_time']
  },
  function (err, chefRes) {
    // ... handle the (at most 1000) returned rows here
  });
Regards!
The maximum is 1000 results per page, but you can still request the pages in order. The Chef API doesn't have a formal cursor system for pagination, so it's just separate requests with a different start value. That can sometimes lead to minor desync (an item at the end of one page might shift in ordering and also show up at the start of the next page), so make sure you handle that. That said, the higher-level API in the client library you linked doesn't seem to expose that option, so you'll have to add it or otherwise work around the problem. Check out https://github.com/sethvargo/chef-api/blob/master/lib/chef-api/resources/partial_search.rb#L34 for a Ruby implementation that does handle it. A sketch of what that pagination loop could look like follows.
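As a rough sketch only: the chef-api module linked in the question does not currently expose the start/rows parameters, so fetchPage below is a hypothetical stand-in for whatever patched search call you end up with. It only illustrates the offset-based loop the Chef search API expects.

// Hypothetical pagination loop; fetchPage(start, rows, cb) stands in for a
// patched partialSearch call that passes `start` and `rows` through to the
// Chef search API.
var PAGE_SIZE = 1000;

function fetchAllNodes(fetchPage, callback, start, acc) {
    start = start || 0;
    acc = acc || [];
    fetchPage(start, PAGE_SIZE, function (err, rows) {
        if (err) return callback(err);
        acc = acc.concat(rows);
        if (rows.length < PAGE_SIZE) {
            // Last page reached; consider de-duplicating in case an item
            // shifted between pages while you were iterating.
            return callback(null, acc);
        }
        fetchAllNodes(fetchPage, callback, start + PAGE_SIZE, acc);
    });
}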
We have run into similar issues with Chef libraries. One workaround you might find useful is to pick some node attribute that you can use to segment all of your nodes into smaller groups of fewer than 1000.
If you have no such naturally segmentation-friendly attribute already, a simple implementation would be to create a new attribute called segment and, during your Chef runs, set its value randomly to a number between 1 and 5.
Now you can perform 5 queries (each query searching a single segment) and you should find all of your nodes; if the randomness is working, each group will contain roughly 277 nodes (1386/5).
As your node population grows you'll need to keep increasing the number of segments to ensure each segment stays below 1000 nodes. A sketch of the per-segment queries follows.
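As a rough sketch only, using the partialSearch call from the question; the segment attribute and the number of segments are the assumptions described above, and the response is assumed to follow the Chef search format with a rows array:

// Query each segment separately so no single search exceeds the 1000-row cap.
var SEGMENTS = 5;
var allNodes = [];
var pending = SEGMENTS;

for (var i = 1; i <= SEGMENTS; i++) {
    chef.partialSearch('node',
        { q: 'segment:' + i },  // only nodes tagged with this segment value
        { name: ['name'], ipaddress: ['ipaddress'] },
        function (err, chefRes) {
            if (err) throw err;
            // Assumption: the response body exposes the matching rows as chefRes.rows.
            allNodes = allNodes.concat(chefRes.rows);
            if (--pending === 0) {
                console.log('Found ' + allNodes.length + ' nodes in total');
            }
        });
}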

Get device's moving speed

I'm working on a project on Sencha Touch and NodeJS.
What I am trying to do now is get the device's moving speed. From what I have seen in Sencha Docs I'm supposed to use Ext.device.Geolocation in order to use the device's geolocation services instead of the browser's.
Here's the way I'm doing it...
Ext.device.Geolocation.getCurrentPosition({
    success: function(position) {
        alert(position.coords.speed);
        alert(JSON.stringify(position));
    },
    failure: function() {
        console.log('something went wrong!');
    }
});
The position var in there gets this value when running from my iPhone:
{ busId: '186', position:'{"speed":null,"accuracy":81.15007979037921,
"altitudeAccuracy":10,"altitude":9.457816123962402,"longitude":-54.950113117326524,
"heading":null,"latitude":-34.93604986481545}'}
In which the speed property is null, and I can't figure out why.
An even stranger thing happens though. If I access my app through an Android device, the position variable is empty. All I get is {}.
The app is running on localhost on my PC, and when I say I access it through my phone I mean I access it via my PC's IP. I don't know whether that is supposed to work or not, though.
I have added the Cordova API, or tried to, to my project's folder, like this...
And included in my index.html like this...
<script type="text/javascript" src="./cordova-ios/CordovaLib/cordova.js"></script>
I have zero experience with Cordova, so I have no idea what I'm doing here. What am I doing wrong?
According to the Sencha Touch 2 documentation, if the speed value returns null then the feature is unsupported on the device (http://docs.sencha.com/touch/2.3.1/#!/api/Ext.util.Geolocation). If it is supported then it should return at least 0.
If you don't plan on packaging the application for Android or iOS then there is no point in using Cordova. What I would suggest is writing your own getSpeed function which checks whether the speed is null or 0 and, if so, falls back to calculating it yourself.
For the fallback, you'll need to be able to calculate the distance between two sets of GPS coordinates. There is a solution for that here: Calculate distance between 2 GPS coordinates
You will also need to track the time between the two GPS readings. The closer that you grab those readings, the more accurate your speed calculation will be. For instance, a 1 minute interval in between would be more accurate than a 10 minute interval since a vehicle or person could have stopped several times within that 10 minutes.
I would also make sure that you've set allowHighAccuracy to true on your Geolocation object. Also to prevent any cached GPS data, set maximum age to 0 in order to retrieve fresh data every time.
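As a rough sketch of the fallback described above (not Sencha-specific): given two position readings with latitude/longitude and the timestamps at which they were taken, a haversine distance divided by the elapsed time gives an approximate speed. The function names here are illustrative, not part of any API.

// Approximate speed (m/s) from two geolocation readings taken at different times.
function haversineMeters(lat1, lon1, lat2, lon2) {
    var R = 6371000; // mean Earth radius in metres
    var toRad = function (deg) { return deg * Math.PI / 180; };
    var dLat = toRad(lat2 - lat1);
    var dLon = toRad(lon2 - lon1);
    var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
            Math.sin(dLon / 2) * Math.sin(dLon / 2);
    return 2 * R * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

function fallbackSpeed(prev, curr) {
    // prev/curr: { latitude, longitude, timestamp } with timestamps in milliseconds
    var metres = haversineMeters(prev.latitude, prev.longitude,
                                 curr.latitude, curr.longitude);
    var seconds = (curr.timestamp - prev.timestamp) / 1000;
    return seconds > 0 ? metres / seconds : 0;
}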

Give reads priority over writes in Elasticsearch

I have an EC2 server running Elasticsearch 0.9 with an nginx server for read/write access. My index has about 750k small-to-medium documents. I have a pretty continuous stream of minimal writes (mainly updates) to the content. The speed and consistency I get from search is fine with me, but I have some sporadic timeout issues with multi-get (/_mget).
On some pages in my app, our server will request a multi-get of a dozen to a few thousand documents (this usually takes less than 1-2 seconds). The requests that fail, fail with a 30,000 millisecond timeout from the nginx server. I am assuming this happens because the index was temporarily locked for writing/optimizing purposes. Does anyone have any ideas on what I can do here?
A temporary solution would be to lower the timeout and return a user friendly message saying documents couldn't be retrieved (however they still would have to wait ~10 seconds to see an error message).
Some of my other thoughts were to give read priority over writes. Anytime someone is trying to read a part of the index, don't allow any writes/locks to that section. I don't think this would be scalable and it may not even be possible?
Finally, I was thinking I could have a read-only alias and a write-only alias. I can figure out how to set this up through the documentation, but I am not sure if it will actually work like I expect it to (and I'm not sure how I can reliably test it in a local environment). If I set up aliases like this, would the read-only alias still have moments where the index was locked due to information being written through the write-only alias?
I'm sure someone else has come across this before. What is the typical solution to make sure a user can always read data from the index, with a higher priority than writes? I would consider increasing our server power if required. Currently we have two m2.xlarge EC2 instances: one holds the primary shards and the other the replicas, with 4 shards each.
An example dump of cURL info from a failed request (with an error of Operation timed out after 30000 milliseconds with 0 bytes received):
{
"url":"127.0.0.1:9200\/_mget",
"content_type":null,
"http_code":100,
"header_size":25,
"request_size":221,
"filetime":-1,
"ssl_verify_result":0,
"redirect_count":0,
"total_time":30.391506,
"namelookup_time":7.5e-5,
"connect_time":0.0593,
"pretransfer_time":0.059303,
"size_upload":167002,
"size_download":0,
"speed_download":0,
"speed_upload":5495,
"download_content_length":-1,
"upload_content_length":167002,
"starttransfer_time":0.119166,
"redirect_time":0,
"certinfo":[
],
"primary_ip":"127.0.0.1",
"redirect_url":""
}
After more monitoring using the Paramedic plugin, I noticed that I would get timeouts when my CPU would hit ~80-98% (no obvious spikes in indexing/searching traffic). I finally stumbled across a helpful thread on the Elasticsearch forum. It seems this happens when the index is doing a refresh and large merges are occurring.
Merges can be throttled at the cluster or index level, and I've updated the indices.store.throttle.max_bytes_per_sec setting from the default of 20mb down to 5mb. This can be done at runtime with the cluster update settings API.
PUT /_cluster/settings HTTP/1.1
Host: 127.0.0.1:9200

{
  "persistent" : {
    "indices.store.throttle.max_bytes_per_sec" : "5mb"
  }
}
So far Paramedic is showing a decrease in CPU usage, from an average of ~5-25% down to an average of ~1-5%. Hopefully this will help me avoid the 90%+ spikes that were locking up my queries before; I'll report back by selecting this answer if I don't have any more problems.
As a side note, I guess I could have opted for more balanced EC2 instances (rather than memory-optimized). I think I'm happy with my current choice, but my next purchase will also take more CPU into account.

Watir Measure Page Performance

I've found this gem: http://watirwebdriver.com/page-performance/
But I can't seem to understand what this measures:
browser.performance.summary[:response_time]/1000
Does it start measuring from the second I open the browser?
Watir::Browser.new :chrome
Or from the last watir-webdriver command written?
And how can I set when it starts the timer?
I've tried several scripts but I keep getting 0 seconds; that's why I'm not sure.
From what I have read (I have not actually used it on a project), the response_time is the time from starting navigation to the end of the page loading - see Tim's (the gem's author) answer in a previous question. The graphical image on Tim's blog helps to understand the different values - http://90kts.com/2011/04/19/watir-webdriver-performance-gem-released/.
The gem is for getting the performance results of a single response, rather than the overall usage of a browser during a script, so there is no need to start/stop a timer.
If you are getting 0 seconds, it likely means that the response_time is less than 1000 milliseconds (i.e. in Ruby, integer division means 999/1000 gives 0). To make sure you are getting something non-zero, try doing:
browser.performance.summary[:response_time]/1000.0
Dividing by 1000.0 will ensure that you get the decimal values (e.g. 0.013).
