Travis CI - bypass 50m timeout for Haskell Stack builds

I have a Haskell project with 300+ files (mostly auto-generated). I can build it in a few minutes on my four-year-old processor (by specifying ghc-options: $everything: -j in stack.yaml), but on Travis things become really slow. It seems that modules are processed sequentially, and even a single module's compilation takes much longer (about one second on my machine vs. tens of seconds on Travis). Eventually I hit the Travis timeout (50 minutes per job). Is there any way to speed up the Travis build, or to split the compilation across multiple jobs? I would accept a paid Travis plan; I need a solution that just works without a complex setup.
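For reference, the parallel-compilation setting mentioned above sits in stack.yaml roughly like this (a minimal sketch; the rest of the file is omitted):

ghc-options:
  "$everything": -j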

This configuration uses stages: https://github.com/google/codeworld/blob/f20020ca78fee51afdf6a5ef13eacc6d15c15724/.travis.yml
However, there are unpredictable problems with the cache, or perhaps problems with the Travis config: https://travis-ci.org/google/codeworld/builds/626216910 Also, I am not sure how Travis utilizes the cache(s) for simultaneous builds.
The sections at https://github.com/google/codeworld/blob/f20020ca78fee51afdf6a5ef13eacc6d15c15724/.travis.yml#L52-L63 and https://github.com/google/codeworld/blob/f20020ca78fee51afdf6a5ef13eacc6d15c15724/.travis.yml#L74, together with the redundant calls to stack upgrade --binary-only, are attempts to work around these issues.
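For illustration, a minimal sketch of the staged approach (the stage names and scripts here are mine, not taken from the codeworld config): each stage runs as a separate job with its own 50-minute budget, and the cached Stack directories carry the dependency work from one stage to the next.

# .travis.yml (sketch)
cache:
  directories:
    - $HOME/.stack
    - .stack-work
jobs:
  include:
    - stage: dependencies
      script: stack build --only-dependencies
    - stage: build
      script: stack build --test --no-run-tests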

Related

3000 tests - Major performance issues. What can be done?

Our react project has ~3000 jest tests. Most of them are just typical "render without crashing".
When we run npm test, memory usage slowly climbs all the way to 22 GB.
On machines with only 16 GB, the tests grind the entire machine to a halt and take a very long time to finish.
What we have tried that has not worked, or has made the issue worse:
--maxWorkers=50% or --maxWorkers=4 etc
--runInBand (way too slow)
--detectLeaks (half our tests have memory leaks according to this experimental option, but we have no idea what they are or even if they are the cause of this problem)
The only thing that works is running the tests on a machine with a large amount of RAM (>= 32 GB).
Any idea on how we can reduce the amount of memory used by these tests?
We worked around this problem by splitting the test run into multiple steps.
For example, if you have two source folders:
a/
b/
Then instead of running npm test, you can run it in two steps:
react-scripts test src/a && react-scripts test src/b
Each step runs in a fresh process, so the memory used by one step is released before the next begins.
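A hedged sketch of wiring this into package.json (the test:a and test:b script names are made up for illustration; --watchAll=false keeps each run from dropping into watch mode):

{
  "scripts": {
    "test:a": "react-scripts test src/a --watchAll=false",
    "test:b": "react-scripts test src/b --watchAll=false",
    "test": "npm run test:a && npm run test:b"
  }
}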

Incredibly slow Angular source map build

In an effort to debug a production Angular issue, I'm trying to generate a source map for the project. As suggested on a number of SO articles I'm doing as below:
export NODE_OPTIONS=--max-old-space-size=2048
ng build --prod --sourcemaps
The choice of 2 GB of RAM in the first line above is based on the fact that I'm running this under VirtualBox on a laptop that I need to run other stuff on. It seems to have decided to steal a few gigs of swap over and above that anyway; the HDD activity light has barely been off since the build started ...
The ng build process has been running for about 14 hours now, with practically that entire time having been stuck on this line:
69% building modules 1392/1397 modules 5 active ...b/node_modules/chartjs-color/index.js
This isn't a remarkably big project; how on earth is it taking this long?
I'll add that I don't really know Angular, just looking at this while the maintainer is on leave, so please don't assume I haven't missed anything obvious.
Literally all I want is the source map, not interested in anything else being built. Is there anything I can skip?
Edit:
I followed the one upvoted comment and tried restarting the build - same problem over and over.
Tried checking out to a fresh project and reinstalling node modules locally, as another dev suggested that checking out production on top of the dev branch might be the issue - same.
Tried doubling RAM - same.
What appears to have fixed it is the addition of the option --no-aot. But I don't know if that means it's a non-identical build, at least in terms of the source map? Will find out I guess ...
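For the record, the invocation that finally completed was presumably along these lines (--no-aot disables ahead-of-time compilation; whether that changes the emitted source map is exactly the open question above):

export NODE_OPTIONS=--max-old-space-size=2048
ng build --prod --sourcemaps --no-aot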

Jest with coverage takes too long in TeamCity

We migrated our project from Jasmine to Jest a couple of months ago and now want to add coverage in our TeamCity CI server. What we noticed is that on a local dev machine, the first Jest run (with coverage) takes about 2-2.5 minutes and all subsequent runs take about 20 seconds, but in TeamCity it takes about 6 minutes with coverage and only 1:30 without. Is there any way to speed up tests with coverage in TeamCity?
It is a known issue [3] that coverage makes Jest runs slower, but there is no explanation of what cures it; the only tip was to try the -i flag when running the tests.
My source [2] explains why that flag can improve test performance: it disables multiprocessing, and on some machines with limited resources this (they say) speeds tests up twofold.
Source [1] also says that versions after 22.4.4 have a performance regression (significantly slower than 22.4.4) that had not been fixed when the article was written.
They also recommend in [1] using the Node test environment rather than JSDOM, because Node is faster.
So, use:
// package.json
"jest": {
  "testEnvironment": "node"
}
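If you invoke Jest directly in CI, the tips above can be combined on the command line; a hedged example (--ci, --coverage and -i are all standard Jest flags):

jest --ci --coverage -i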
Hopefully these rocket-speed your tests enough that you can tolerate the slowdown from turning the coverage option on.
Sources:
[1] https://itnext.io/how-to-make-your-sluggish-jest-v23-tests-go-faster-1d4f3388bcdd
[2] Why does Jest --runInBand speed up tests?
[3] https://github.com/facebook/jest/issues/2586
Try adding the dotnet.cli.test.reporting parameter, which should bring the running time back to normal.
Another possible workaround is to use the vstest command instead of test, since it supports more precise test adapter path declaration.

Very slow RedHawk component builds

We have some components that build 15+ object files before linking them. We find that if we modify a .h file used by many or all of them, builds are VERY slow; some of our components take over an hour to build. It appears that RedHawk issues a make -j (or a make -j with a large number), so we have 15+ compiles running simultaneously. This overwhelms even 4 GB of RAM and results in excessive swapping and VERY slow execution (the CPU is nearly locked up, and other windows are dead until it completes). If we run a simple make from a shell in the component directory, it completes in 5 minutes. Is there a way to change RedHawk to issue a simple make, or a make with an adjustable maximum number of processes?
If you're referring to how the IDE invokes the build, you can check the build console. I'm pretty sure it either calls the top-level build.sh or the build.sh within your implementation's folder. In either case, you can modify that file to perform the build however you'd like.
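As a hedged sketch (the path and job count are illustrative), capping the parallelism inside that script might look like:

# in <component>/<implementation>/build.sh:
# replace the unbounded parallel build
#   make -j
# with a bounded one, e.g. at most two jobs
make -j2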

Is there any JIT pre-caching support in NodeJS?

I am using a rather large and performance-intensive Node.js program to generate hinting data for CJK fonts (sfdhanautohint), and for better dependency tracking I ended up calling the program tens of thousands of times from a makefile like this.
This immediately raised the concern that doing so adds a lot of overhead in starting and pre-heating the JIT engine, so I decided to look for something like ngen.exe for Node.js. It appears that V8 already has some support for code caching; is there anything I can do to use that from Node.js?
Searching for kProduceCodeCache in NodeJS's GitHub repo doesn't return any non-bundled-v8 results. Perhaps it's time for a feature request…
Yes, this happens automatically. Node 5.7.0+ automatically pre-caches (pre-heats the JIT engine for your source) the first time you run your code (since PR #4845 / January 2016 here: https://github.com/nodejs/node/pull/4845).
It's important to note you can even pre-heat the pre-heat (before your code is ever even run on a machine, you can pre-cache your code and tell Node to load it).
Andres Suarez, a Facebook developer who works on Yarn, Atom and Babel, created v8-compile-cache, a tiny module that JITs your code and its require()s, saves the Node cache into your $TMP folder, and then uses it whenever it's found. Check out the source to see how it's done, in case you want to adapt it to other needs.
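Usage is a one-liner; a minimal sketch, assuming your entry point is index.js:

// index.js -- install the compile cache before anything else is required,
// so every subsequent require() is compiled from (and saved to) the cache
require('v8-compile-cache');
// ...the rest of your requires and application code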
You can, if you'd like, have a little check that runs on start, and if the machine architecture is in your set of cache files, just load the cached files instead of letting Node JIT everything. This can cut your load time in half or more for a real-world large project with tons of requires, and it can do it on the very first run.
Good for speeding up containers and getting them under that 500ms "microservice" boot time.
It's important to note:
Caches are binaries; they contain machine-executable code. They aren't your original JS code.
Node cache binaries are different for each target CPU you intend to run on (IA-32, IA-64, ARM etc). If you want to pre-cache pre-caches for your users, you must make cache targets for each target architecture you want to support.
Enjoy a ridiculous speed boost :)
