Why are the results of the HERE isolines API V8 much larger than the V7 when using the same drive time?

I have been generating isolines for a while with the V7 API with traffic enabled. When I use the V8 version, the isolines are consistently larger (roughly 30% larger). Was there a change made in V8 to make them less conservative?
I tried running both at exactly the same time and I still get different sizes.

Related

Why should I not use incremental builds for release binaries?

I noticed that as my project grows, the release compilation/build time gets slower at a faster pace than I expected (and hoped for). I decided to look into what I could do to improve compilation speed. I am not talking about the initial build time, which involves compilation of dependencies and is largely irrelevant.
One thing that seems to help significantly is the incremental = true profile setting. On my project, it seems to shorten build time by ~40% on 4+ cores. With fewer cores the gains are even larger, as builds with incremental = true don't seem to use (much) parallelization. With the default for --release, incremental = false, build times are 3-4 times slower on a single core than on 4+ cores.
What are the reasons to refrain from using incremental = true for production builds? I don't see any (significant) increase in binary size or storage size of cached objects. I read somewhere it is possible that incremental builds lead to slightly worse performance of the built binary. Is that the only reason to consider or are there others, like stability, etc.?
I know this could vary, but is there any data available on how much of a performance impact might be expected on real-world applications?
Don't use an incremental build for production releases, because it is:
not reproducible (i.e. you can't get the exact same binary by compiling it again) and
quite possibly subtly broken (incremental compilation is way more complex and way less tested than clean compilation, in particular with optimizations turned on).
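For reference, the setting in question lives in Cargo.toml. A minimal sketch of opting in, assuming the standard release profile (this just shows where the flag goes, not a recommendation given the caveats above):

# Cargo.toml
[profile.release]
incremental = true   # opt release builds in to incremental compilation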

Synthetic performance AB test

I have deployed two versions of our single-page web app: one from master (A) and one from a branch with some changes that could affect load time somehow (B). The change is usually some new feature on the front end, a refactoring, a small performance optimization, etc. The difference is not large, and the load time varies much more for other reasons (load on the testing machines, load on the servers, the network, etc.). So webpagetest.org, even with 9 runs, varies much more (14-20 s SpeedIndex) than the real difference could be (0.5 s on average, for example).
Basically, I need one number which tells me: this feature increases/decreases load time by this much.
Is there some tool which could measure such differences?
My idea was to deploy WebPagetest to a server with minimal load and run it on both versions at the same time, in random order, so I avoid most of the noise; take a lot of samples (1000+) and check the average (or median) value.
But before I start working on that, I would like to ask if there is some service which already solves that problem.
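If you do end up building it yourself, the core of the idea is just interleaving the samples and comparing medians. A minimal sketch in Node.js, where measureLoadTime(url) is a hypothetical function wrapping whatever measurement tool you use (a private WebPagetest instance, Puppeteer, etc.):

// Interleave A/B samples so that noise (server load, network) affects both
// versions roughly equally, then compare medians.
// measureLoadTime(url) is hypothetical - plug in your own wrapper that
// resolves to a load-time metric in milliseconds.
async function abTest(urlA, urlB, samples) {
  const a = [], b = [];
  for (let i = 0; i < samples; i++) {
    // Randomize the order within each pair to avoid systematic bias.
    if (Math.random() < 0.5) {
      a.push(await measureLoadTime(urlA));
      b.push(await measureLoadTime(urlB));
    } else {
      b.push(await measureLoadTime(urlB));
      a.push(await measureLoadTime(urlA));
    }
  }
  const median = xs => xs.slice().sort((x, y) => x - y)[Math.floor(xs.length / 2)];
  return { medianA: median(a), medianB: median(b), diff: median(b) - median(a) };
}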

Is there any JIT pre-caching support in NodeJS?

I am using a rather large and performance-intensive nodejs program to generate hinting data for CJK fonts (sfdhanautohint), and for some better dependency tracking I had to end up calling the nodejs program tens of thousands of times from a makefile like this.
This immediately brought me to the concern that doing such is actually putting a lot of overhead in starting and pre-heating the JIT engine, so I decided to find something like ngen.exe for nodejs. It appears that V8 already has some support for code caching, but is there anything I can do to use it in NodeJS?
Searching for kProduceCodeCache in NodeJS's GitHub repo doesn't return any non-bundled-v8 results. Perhaps it's time for a feature request…
Yes, this happens automatically. Node 5.7.0+ automatically pre-caches (pre-heats the JIT engine for your source) the first time you run your code (since PR #4845 / January 2016 here: https://github.com/nodejs/node/pull/4845).
It's important to note you can even pre-heat the pre-heat (before your code is ever even run on a machine, you can pre-cache your code and tell Node to load it).
Andres Suarez, a Facebook developer who works on Yarn, Atom and Babel, created v8-compile-cache, a tiny module that will JIT your code and its require()s and save your Node cache into your $TMP folder, then use it if it's found. Check out the source for how it's done, to suit other needs.
You can, if you'd like, have a little check that runs on start, and if the machine architecture is in your set of cache files, just load the cached files instead of letting Node JIT everything. This can cut your load time in half or more for a real-world large project with tons of requires, and it can do it on the very first run.
Good for speeding up containers and getting them under that 500ms "microservice" boot time.
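For the common case, using v8-compile-cache comes down to a single require at the very top of your entry point. A minimal sketch (the ./app module and its run() function are made up placeholders for your own program):

// index.js - load the compile cache before anything else so every
// subsequent require() is compiled from (and saved to) the cache.
require('v8-compile-cache');

var app = require('./app');  // hypothetical entry module of your program
app.run();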
It's important to note:
Caches are binaries; they contain machine-executable code. They aren't your original JS code.
Node cache binaries are different for each target CPU architecture you intend to run on (IA-32, x64, ARM, etc.). If you want to ship pre-built caches to your users, you must build a cache for each target architecture you want to support.
Enjoy a ridiculous speed boost :)

Node JS, Highcharts Memory usage keeps climbing

I am looking after an app built with Node JS that's producing some interesting issues. It was originally running on Node JS v0.3.0 and I've since upgraded to v0.10.12. We're using Node JS to render charts on the server and we've noticed the memory usage keeps climbing chart after chart.
Q1: I've been monitoring the RES column in top for the Node JS process; is this correct, or should I be monitoring something else?
I've been setting variables to null to try to release memory back to the system (I read this somewhere as a solution), and it makes only a slight difference.
I've pushed the app all the way to 1.5 GB and it then ceases to function, and the process doesn't appear to die. No error messages, which I found odd.
Q2: Is there anything else I can do?
Thanks
Steve
That is a massive jump in versions. You may want to share what code changes you made to get it working on the latest stable. The API is not the same as back in v0.3, so that may be part of the problem.
If not, then the issue you see is more likely from heap fragmentation than from an actual leak. In later V8 versions garbage collection is more liberal with cleanup to improve performance. (See http://code.google.com/p/chromium/issues/detail?id=112386 for some discussion on this.)
You may try running the application with --max_old_space_size=32, which will limit the amount of memory V8 can use to around 32 MB. Note the docs say "max size of the old generation", so it won't be exactly 32 MB, just around it, for lack of a better technical explanation.
Also you can track the amount of external memory usage with --trace_external_memory. This will allow you to know if external memory (i.e. Buffers) is being retained in your application.
Your note about the application hanging around 1.5 GB tells me you're probably on a 64-bit system. You only mentioned it ceases to function, but didn't note whether the CPU is spinning during that time. Also, since I don't have example code, I'm not sure what might be causing this to happen.
I'd try running on latest development (v0.11.3 at the time of this writing) and see if the issue is fixed. A lot of performance/memory enhancements are being worked on that may help your issue.
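Regarding Q1: besides watching RES in top, you can have the process report its own numbers with process.memoryUsage(), which separates the V8 heap from overall resident memory. A small sketch (log after each chart render, or on an interval as here):

// Log memory usage periodically to see whether growth is inside the V8 heap
// (heapUsed/heapTotal) or outside it (rss only), e.g. in Buffers.
setInterval(function () {
  var m = process.memoryUsage();  // { rss, heapTotal, heapUsed } on Node 0.10
  console.log('rss=' + (m.rss / 1048576).toFixed(1) + 'MB',
              'heapTotal=' + (m.heapTotal / 1048576).toFixed(1) + 'MB',
              'heapUsed=' + (m.heapUsed / 1048576).toFixed(1) + 'MB');
}, 10000);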
I guess you have a memory leak somewhere (in the form of a closure?) that keeps the (no longer used?) diagrams(?) in memory.
V8 sometimes needs a bit of tweaking when it comes to > 1 GB of memory. Try out --noincremental_marking and/or --max_old_space_size=8192 (the value is in MB, so 8192 if you have 8 GB available).
Check for more options with node --v8-options and go through the --trace* parameters to find out what slows down/stops node.

Random access to large files (needing support 64-bit file offsets) in node.js?

I'm thinking of porting some of my cross-platform scripts to node.js partly to learn node.js, partly because I'm more familiar with JavaScript these days, and partly due to problems with large file support in other scripting languages.
Some scripting languages seem to have patchy support for large file offsets, depending on such things as whether they are running on a 32-/64-bit OS or processor, or need to be specifically compiled with certain flags.
So I want to experiment with node.js anyway, but Googling I'm not finding much either way on its support (or its library/framework support, etc.) for large files with 64-bit offsets.
I realize that to some extent this will depend on JavaScript's underlying integer support. If I correctly read What is JavaScript's Max Int? What's the highest Integer value a Number can go to without losing precision?, it seems that JavaScript uses floating point internally even for integers and therefore
the largest exact integral value is 2^53.
Then again node.js is intended for servers and servers should expect large file support.
Does node.js support 64-bit file offsets?
UPDATE
Despite the _LARGEFILE_SOURCE and _FILE_OFFSET_BITS build flags, now that I've started porting my project that requires this, I've found that fs.read(files.d.fd, chunk, 0, 1023, 0x7fffffff, function (err, bytesRead, data) {...}) succeeds, but a position of 0x80000000 fails with EINVAL. This is with version v0.6.11 running on 32-bit Windows 7.
So far I'm not sure whether this is a limitation only in fs, a bug in node.js, or a problem only on Windows builds.
Is it intended that greater-than-31-bit file offsets work in node.js in all core modules on all platforms?
Node.js is compiled with _LARGEFILE_SOURCE and _FILE_OFFSET_BITS on all platforms, so internally it should be safe for large file access. (See the common.gypi in the root of the source dir.)
In terms of the libraries, it uses Number for the start (and end) options when creating read and write streams (see fs.createReadStream), which means you can address positions up to 2^53 through Node (also relevant: What is JavaScript's highest integer value that a Number can go to without losing precision?). This is visible in the lib/fs.js code.
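For example, streaming a slice that starts beyond the 31-bit boundary is just a matter of passing plain Numbers. A minimal sketch (the file path is made up):

// Read 4 KB starting at offset 0x80000000 (2 GiB) from a hypothetical large file.
var fs = require('fs');
var stream = fs.createReadStream('/data/huge.bin', {
  start: 0x80000000,          // 2147483648 - beyond 31 bits, well within 2^53
  end:   0x80000000 + 4095    // 'end' is inclusive
});
stream.on('data', function (chunk) { console.log('got', chunk.length, 'bytes'); });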
It was a little difficult to track down, but node.js has only supported 64-bit file offsets since version 0.7.9 (unstable), from the end of May 2012, and in stable versions since 0.8.0, from the end of June 2012.
fs: 64bit offsets for fs calls (Igor Zinkovsky)
On earlier versions, failure modes when using larger offsets vary from silently seeking to the beginning of the file to throwing an exception with EINVAL.
See the (now closed) bug report:
File offsets over 31 bits are not supported
To check for large file support programmatically from node.js code:
var v = process.version.substring(1).split('.').map(Number);
if (v[0] > 0 || v[1] > 7 || (v[1] === 7 && v[2] >= 9)) {
  // use 64-bit file offsets...
}
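And a corresponding sketch of the fs.read call from the question at an offset just past 31 bits, assuming Node >= 0.8 (the file path is made up):

var fs = require('fs');
fs.open('/data/huge.bin', 'r', function (err, fd) {
  if (err) throw err;
  var chunk = new Buffer(1024);          // Buffer.alloc(1024) on modern Node
  // Position 0x80000000 (2 GiB) fails with EINVAL before 0.7.9, works from 0.8 on.
  fs.read(fd, chunk, 0, 1024, 0x80000000, function (err, bytesRead) {
    if (err) throw err;
    console.log('read', bytesRead, 'bytes');
    fs.close(fd, function () {});
  });
});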
