Meteor file modified refresh taking 2 minutes - node.js

I am having a big problem with Meteor. The build process ("meteor run") is extremely slow: it takes about 10 minutes, but that is not the bad part, since it only happens once when starting.
The bad part is that it takes ~2 minutes to show my changes; the file-change watcher is taking too long.
When working with a basic example the feedback was much better (~5 seconds) and it was workable, but now that I am working on a real project it is impossible to make any progress.
I have around 40 packages in my packages file and I am using the latest Meteor (1.3.2.4 at this time).
There are a ton of questions around this problem (#4284, #6750); I don't know if there is any tip to work around this issue (changing some config, adding more RAM, or anything else).
If there is no solution for that, it would be helpful to know if there is a way to limit file watching to only a certain folder at a time.
Update: I noticed there is a ".node_modules" directory in the root of the app; can it be excluded from the build process?
Thank you guys!

Try Webpack for Meteor.
It supports hot module reload, which can shorten rebuild times a lot. There are some differences compared to the default build process, so you'll need to learn a thing or two about it, but it could be well worth your time.
Try it by fetching kickstart-meteor-react-flowrouter from GitHub.
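To give an idea of what the hot-reload setup involves, here is a minimal sketch of a webpack config with HMR enabled (the entry and output paths are placeholders, not what the kickstart project actually uses):

// webpack.config.js — minimal HMR sketch; paths are placeholders
var webpack = require('webpack');

module.exports = {
  entry: './client/main.js',
  output: {
    path: __dirname + '/build',
    filename: 'bundle.js',
    publicPath: '/'
  },
  plugins: [
    // Swap changed modules into the running app instead of doing a full rebuild
    new webpack.HotModuleReplacementPlugin()
  ],
  devServer: {
    hot: true // webpack-dev-server pushes updated modules to the browser on file change
  }
};

With this kind of setup, edits to client code show up in seconds instead of waiting for a full Meteor rebuild.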

Related

Incredibly slow Angular source map build

In an effort to debug a production Angular issue, I'm trying to generate a source map for the project. As suggested in a number of SO posts, I'm doing the following:
export NODE_OPTIONS=--max-old-space-size=2048
ng build --prod --sourcemaps
The choice of 2 GB RAM in the first line above is based on the fact that I'm running this under VirtualBox on a laptop that I need to run other stuff on. It seems it has decided to steal a few gigs of swap over and above that anyway; the HDD activity light has barely been off since the build started...
The ng build process has been running for about 14 hours now, with practically that entire time having been stuck on this line:
69% building modules 1392/1397 modules 5 active ...b/node_modules/chartjs-color/index.js
This isn't a remarkably big project; how on earth is it taking this long?
I'll add that I don't really know Angular; I'm just looking at this while the maintainer is on leave, so please don't assume I haven't missed something obvious.
Literally all I want is the source map; I'm not interested in anything else being built. Is there anything I can skip?
Edit:
I followed the one upvoted comment and tried restarting the build - same problem over and over.
Tried checking out a fresh copy of the project and reinstalling node modules locally, as another dev suggested that checking out production on top of the dev branch might be an issue - same.
Tried doubling RAM - same.
What appears to have fixed it is adding the --no-aot option. But I don't know if that means it's a non-identical build, at least in terms of the source map? Will find out, I guess...

Is there any JIT pre-caching support in NodeJS?

I am using a rather large and performance-intensive Node.js program to generate hinting data for CJK fonts (sfdhanautohint), and for better dependency tracking I ended up calling the Node.js program tens of thousands of times from a makefile like this.
This immediately raised the concern that doing so puts a lot of overhead into starting and pre-heating the JIT engine, so I decided to look for something like ngen.exe for Node.js. It appears that V8 already has some support for code caching, but is there anything I can do to use it in NodeJS?
Searching for kProduceCodeCache in NodeJS's GitHub repo doesn't return any non-bundled-v8 results. Perhaps it's time for a feature request…
Yes, this happens automatically. Node 5.7.0+ automatically pre-caches (pre-heats the JIT engine for your source) the first time you run your code (since PR #4845 / January 2016 here: https://github.com/nodejs/node/pull/4845).
It's important to note you can even pre-heat the pre-heat (before your code is ever even run on a machine, you can pre-cache your code and tell Node to load it).
Andres Suarez, a Facebook developer who works on Yarn, Atom and Babel, created v8-compile-cache, a tiny module that JITs your code and its require()s, saves the resulting Node cache into your $TMP folder, and then reuses it if it's found. Check out the source to see how it's done if you need to adapt it to other needs.
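For reference, using it is roughly a one-liner at the top of your entry point (a minimal sketch; './app' stands in for your real entry module):

// index.js — install the require hook before anything else is loaded
require('v8-compile-cache'); // caches compiled code under $TMP and reuses it on later runs
require('./app');            // everything required from here on goes through the cache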
You can, if you'd like, have a little check that runs on start, and if the machine architecture is in your set of cache files, just load the cached files instead of letting Node JIT everything. This can cut your load time in half or more for a real-world large project with tons of requires, and it can do it on the very first run (see the sketch below).
Good for speeding up containers and getting them under that 500ms "microservice" boot time.
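Here is a hypothetical sketch of that "check on start" idea. The cache directory layout and the V8_COMPILE_CACHE_CACHE_DIR override are assumptions on my part; check the v8-compile-cache README for the exact knob it exposes.

// start.js — hypothetical: point the cache at a pre-built, architecture-specific directory
const os = require('os');
const fs = require('fs');
const path = require('path');

// Assumed layout: ./v8-cache/<platform>-<arch>/ shipped alongside the app
const cacheDir = path.join(__dirname, 'v8-cache', process.platform + '-' + os.arch());

if (fs.existsSync(cacheDir)) {
  // Assumption: v8-compile-cache honours this env var for its cache location
  process.env.V8_COMPILE_CACHE_CACHE_DIR = cacheDir;
}
require('v8-compile-cache'); // falls back to building a fresh cache if no match was found
require('./app');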
It's important to note:
Caches are binaries; they contain machine-executable code. They aren't your original JS code.
Node cache binaries are different for each target CPU you intend to run on (IA-32, x64, ARM, etc.). If you want to pre-cache pre-caches for your users, you must make cache targets for each architecture you want to support.
Enjoy a ridiculous speed boost :)

Grunt watch & TypeScript - How to make it faster?

I have a complicated TS compilation workflow and I want to make my watchers faster (and still smart). I currently have 3 different TS compilations that are executed when Grunt starts, but also on watch changes.
grunt-ts configuration:
https://gist.github.com/Vadorequest/f1fb95ab4bbc786f420b
grunt-watch configuration:
https://gist.github.com/Vadorequest/eaa82c292a5d3e1ee51f
It currently works, but it takes too much time to recompile every file each time a change is made to any TS file that belongs to a set of files. I'm looking for a way to compile only what needs to be compiled, in a smart way. (Meaning that if A.ts inherits from B.ts and B is changed, then A should be recompiled too; this should be possible, since the WebStorm IDE is able to do it using its File Watchers.)
I read something about fast compile at https://github.com/TypeStrong/grunt-ts#fast, but it doesn't seem like I can use it, and I'm confused about it (see https://github.com/TypeStrong/grunt-ts/issues/293).
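For context, here is roughly what a fast-enabled grunt-ts target would look like based on that README (a minimal sketch with illustrative paths, not my actual configuration — see the gists above for that):

// Gruntfile.js (sketch) — one grunt-ts target with the "fast" option plus a watch task
module.exports = function (grunt) {
  grunt.initConfig({
    ts: {
      serverCommonJs: {
        src: ['server/**/*.ts'],   // illustrative path
        options: {
          module: 'commonjs',
          fast: 'watch'            // only recompile files changed since the last run
        }
      }
    },
    watch: {
      server: {
        files: ['server/**/*.ts'],
        tasks: ['ts:serverCommonJs']
      }
    }
  });

  grunt.loadNpmTasks('grunt-ts');
  grunt.loadNpmTasks('grunt-contrib-watch');
};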
I'm looking for a solution, and also for advice, because I think my setup can be improved. It's great to have server-side TS files and even TS files shared between the server and the client, but it adds a lot of compilation workflow that is hard to understand and maintain. Maybe using the recent tsconfig.json feature would help? Any advice would be appreciated.
More details:
serverCommonJs: The server uses TS files that are compiled before starting the application, such as controllers and models.
clientCommonJs: Most of the client scripts are in CommonJS rather than AMD because they're all concatenated and minified, and it's way easier to work with CommonJS than with AMD, which requires a ton of setup.
amd: Some files are compiled to AMD, whether they're used on the server, the client, or both.
On my computer it takes about 1.5s to 2.5s to compile one set of files. Once compiled, they're all copied into a temporary folder which is served to the browser (assets). So it easily takes 5 to 10 seconds, and it could be much, much faster if only the changed files were compiled and copied.
I also have a similar issue with LESS files, but that's another story and it should be much simpler to fix since I only have one set of LESS files.

Docpad - how can I find out why it is slow?

I'm migrating my tumblr blog to docpad and have started with this boilerplate: https://github.com/ervwalter/ewalnet-docpad
Now my problem is that "docpad run" takes 58s to run, and a livereload run takes 23s. I wrote to the author of this boilerplate and he says he is seeing the same, but it doesn't bother him too much.
But I don't want to wait half a minute after every change to a blog post to see how it looks, so I'm trying to make it faster. I tried profiling with nodetime, but I don't see a per-method drill-down. My assumption is that the time is lost in the partials, as it sends the whole posts to the partials.
How can I profile DocPad so I can see where the time is lost? I know the question is very broad, but all I found on performance optimization for DocPad is that you should make DocPad not parse static files.
Update: the missing link was that I needed to start the CPU profiler in nodetime:
configure nodetime, described here
start CPU profiler on nodetime
start docpad: docpad --profile run
Unfortunately, in my case the output is not much help. The results of my run reveal that 81% of the time is spent in ambi.js, which seems to be just an intermediate layer that calls functions. I could not find out which functions are called; adding console.log(fireMethod.toString()) I only see
function () { [native code] }
so I'm not really any further. How can I find out where the time is actually spent?
For reference: here is my v8.log
Also, I'm a bit worried that DocPad relies almost exclusively on modules written by Benjamin Lupton. Why is that?
After an odyssey of about a week I came to the conclusion that DocPad is not made for speed; it is made to handle complex sites. Some facts:
even a fresh DocPad installation with only Twitter Bootstrap takes 12s to build
there is no way to regenerate only the files whose source files have changed (dependency tree); it always regenerates everything
reading threads like this shows that speed is not the focus
My use case is writing articles for a blog, and I have a lot of "change text and see how it looks" loops. I have switched to Hexo, which is a lot faster:
hexo server starts in 2.5 seconds. With livereload on, when I change a blog post, the browser tab reloads the page and shows the new content in about 1s
generating all files afresh with hexo clean and hexo generate takes only 5s.
This is the same setup (with LESS, CoffeeScript, etc.) I had for DocPad, where DocPad needed 38s to run.
In addition to speed, Hexo gave me:
themes: Hexo nicely separates the theme from the content (DocPad mingles the two). Currently there are about 30 Hexo themes to choose from
implementation of "read more": in Hexo, <!-- more --> is supported out of the box
deployment to GitHub Pages works out of the box
the architecture was a lot easier for me to understand, writing widgets is a bliss, and the documentation also looks nicer
Overall, it looks like Hexo is better suited for blogs, whereas DocPad is better suited for more complex sites. Hexo looks like it's really taking off, getting about 30 stars on GitHub per week, whereas DocPad is only getting about 10 per week.
You can use the meta
standalone: true
while you work on a file. With this meta, only that file is regenerated when you update it. Remove the meta after you finish.
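For example, while editing a post the meta block at the top would look something like this (the title is just a placeholder for whatever meta the file already has):

---
title: "My post"     # placeholder: keep whatever meta the post already has
standalone: true     # DocPad regenerates only this file on change; remove when done
---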

Monodroid GC and Sensors

I recently started deploying my test code onto an actual device and ran some sample code provided by Xamarin involving the different technologies they introduce you to. Then I came upon an issue with their garbage collector when trying to test out sensors. With the latest version it runs when you reach a certain threshold; however, that makes the device unresponsive. Using the code from http://docs.xamarin.com/android/recipes/OS%2f%2fDevice_Resources/Accelerometer/Get_Accelerometer_Readings, but changing it to add two more sensors (a gyroscope and a gravity sensor), the project lasts about 30 seconds before the GC begins to run. I noticed that every time you reference the e.Values list from the OnSensorChanged function, more references get created. Is there a way to delete those references? The app I'm working on requires those three sensors and needs to run for about 4 to 5 minutes (it's just a section of the app, but a really important section). Thanks in advance for any help you can give me.
The following link explains why that issue comes up, as well as the solution that fixes it completely.
https://bugzilla.xamarin.com/show_bug.cgi?id=1084#c6

Resources