Is there a generic way to consume my dependency's grunt build process?

Let's say I have a project where I want to use Lo-Dash and jQuery, but I don't need all of the features.
Sure, both of these projects have build tools, so I can compile exactly the versions I need to save valuable bandwidth and parsing time, but I think it's quite uncomfortable and ugly to install both of them locally, generate my versions and then check them into my repository.
I'd much rather integrate their grunt process into my own and create custom builds on the go, which would be much more maintainable.
The Lo-Dash team offers this functionality with a dedicated CLI and even wraps it in a grunt task. That's very nice indeed, but I want a generic solution to this problem, as it shouldn't be necessary for every package author to replicate this.
I tried to achieve this with some grunt-shell hackery, but as far as I know it's not possible to install devDependencies more than one level deep, which makes it even uglier to execute the required grunt tasks.
So what's your take on this, or should I just move this over to the 0.5.0 discussion of grunt?

What you ask assumes that the package has:
1. A dependency on Grunt to build a distribution; most popular libraries have this, but some of the less common ones may still use shell scripts or the npm run command for general minification/compression.
2. Some way of generating a custom build in the first place with a dedicated tool, as Modernizr and Lo-Dash have.
You could perhaps substitute number 2 with a generic tool that parses both your source code and the library code and uses code coverage to eliminate unnecessary functions from the library. This is already being developed (see goldmine); however, I can't make any claims about how good it is because I haven't used it.
Also, I'm not sure how that would work in an AMD context where there are a lot of interconnected dependencies; ideally you'd be able to run the r.js optimiser and get an almond build for production, and then filter that for unnecessary functions (most likely with Istanbul; you would then have to make sure that the filtered script passed all your unit/integration tests). Not sure how that would end up looking, but it'd be pretty cool if that could happen. :-)
However, there is a task especially for running Grunt tasks from 'sub-gruntfiles' that you might like to have a look at: grunt-subgrunt.
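As a rough sketch of how that might look in your own Gruntfile (the config keys follow grunt-subgrunt's README at the time of writing, and the Lo-Dash path and task name are illustrative assumptions):

```js
// Gruntfile.js — drive a dependency's own Gruntfile via grunt-subgrunt.
// The 'node_modules/lodash' path and the 'build' task name are assumptions.
module.exports = function (grunt) {
  grunt.initConfig({
    subgrunt: {
      lodash: {
        // run the given task(s) using the Gruntfile found in this directory
        projects: {
          'node_modules/lodash': ['build']
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-subgrunt');
  grunt.registerTask('default', ['subgrunt:lodash']);
};
```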

Related

create scaffolding generators like react/open-wc using node

I am trying to create a project structure for my team like the one implemented by open-wc or create-react-app:
you just say npm init @open-wc and it asks a couple of questions and creates the folder with the specified configuration.
I didn't find any good articles on Google, apart from exploring the GitHub projects.
Maintainer of open-wc here :)
So to get an npm init script all you need to do is define a bin in your package.json.
This is what we use for npm init @open-wc:
"name": "#open-wc/create",
"bin": {
"create-open-wc": "./dist/create.js"
},
So then for the name you have 2 choices:
create-foo will be available via npm init foo
@foo/create will be available via npm init @foo
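For illustration, the bin entry point is just a Node script with a shebang; a minimal, hypothetical create.js (not open-wc's actual generator) might look like this:

```js
#!/usr/bin/env node
// Hypothetical sketch of an npm init entry point.
// The prompt and the generated files are illustrative only.
const fs = require('fs');
const readline = require('readline');

const rl = readline.createInterface({ input: process.stdin, output: process.stdout });

rl.question('Project name: ', (name) => {
  // scaffold a folder containing a bare package.json
  fs.mkdirSync(name, { recursive: true });
  fs.writeFileSync(
    `${name}/package.json`,
    JSON.stringify({ name, version: '0.1.0' }, null, 2)
  );
  console.log(`Scaffolded ${name}/`);
  rl.close();
});
```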
The generator itself
That's a rather sad story... we looked around, but we didn't find anything that really fit our use case well. There is http://yeoman.io/ which we used initially, but it's huge, and it meant we had a bootup time of ~30-40 seconds before the menu appeared. We felt we needed to do something, so now we roll our own solution.
It covers what we need at a fraction of the size (especially by being very careful with dependencies), which reduced our bootup time to about 5-10 seconds.
We thought about promoting it as a separate standalone project, but truth be told we don't have the manpower for it. It's just 4 files you can find here: https://github.com/open-wc/open-wc/tree/master/packages/create/src - beware, as there is no documentation and quite a few rough edges.
Still, if you don't find a better solution, feel free to join us, and with some help we could make it a separate product.

What is the best and fastest way to clean-up traces of JHipster?

When you build with a code generator, sometimes you would like to remove traces, if you will, of anything in the source that clearly shows that you have used a generator to bootstrap your project.
This is something that can possibly be done manually, by refactoring, with some effort and knowledge of the code and its structure.
Of course, this is what you would want to do when your code is stable and you know that you won't be using the generator again.
The Question:
What is the best and fastest way to clean up traces of JHipster, and are there scripts out there to do this?

Arranging auxiliary tasks for a Haskell project

There are some repetitive auxiliary tasks that I usually have to run when developing or testing a project. For example: downloading some data, setting up the database, cleaning the logs, etc. In Ruby land, they are handled by rake, while other languages prefer make or something else (tasks occasionally depend on other tasks, so we may need one task to first perform the subtasks it depends on).
So, is there some conventional way to organize those tasks in a Haskell project?
I would assume that cabal could be used for that, but not all of those auxiliary tasks are about running Haskell code: sometimes it's just a case of performing rm -r logs/*.log or downloading some data with wget or curl. Would it make sense to make cabal's test target depend on other cabal targets that, ugh, run shell scripts/commands from Haskell code? (Is it even possible to have dependent targets in cabal?)
Alternatively, I could use make, but would "an average haskeller" (an "outside" project contributor, for example) find that intuitive? I believe one would first try cabal test, only to discover that it requires setting up the database for the testing first, and then running a whole chain of other tasks. Would one notice a Makefile in the first place?
I couldn't find any recipes for handling those auxiliary tasks in Haskell projects.
As far as I know, there's no de facto standard tool for this in Haskell projects.
But recently I heard of Shake, a monadic build system written in Haskell.
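A minimal sketch of what the tasks from the question could look like as Shake rules (task names, paths, and commands are illustrative assumptions):

```haskell
-- Shakefile.hs: auxiliary tasks as phony Shake rules.
-- Run with e.g. `runhaskell Shakefile.hs setup-db`.
import Development.Shake

main :: IO ()
main = shakeArgs shakeOptions $ do
  -- clean the logs
  phony "clean-logs" $
    removeFilesAfter "logs" ["*.log"]

  -- download some data (the URL is a placeholder)
  phony "fetch-data" $
    cmd_ "curl" ["-sSf", "-o", "data/input.json", "https://example.com/data.json"]

  -- set up the database; depends on the data being fetched first,
  -- much like dependent rake tasks
  phony "setup-db" $ do
    need ["fetch-data"]
    cmd_ "createdb" ["myproject_test"]
```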

Is there any good build tool that stays out of my way?

As the title says, I want a build tool that pretty much stays out of my way.
I would rather specify rules than steps in the build process. I want to say that I want a binary with a given name placed in the root directory of my project, that .o files should go in an obj/tmp dir, and that the source is in the Source directory.
I do NOT want to tell it about each and every file, as I keep adding new files rather quickly; it should just scan the source directory (and its subdirectories) looking for Ragel (.rl) and C++ (.cxx) code and do what's necessary to turn it all into an executable.
I have looked into many tools, like auto{make,conf,header} (I did not really like that it made me place its files in a subdir of the project root; Eclipse did not like that either) and CMake (it seems like I have to add all source files myself, and it is pretty much a variation of autotools in my eyes). I have also read about ant and maven (I am also allergic to XML; it's a good format for serializing data for applications, not so much for humans, and I would prefer YAML) and others on Wikipedia. And I have seen tools which seem good but which require being set up as a web server, which is kind of overkill.
Also, I really need the ability to work offline, without an internet connection!
Right now it seems like the best option is to make a little script that finds all .cxx files, writes a Unity.cxx, and builds that one with G++, which is probably quite fast, but too much of an ugly hack, I guess.
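That little script might look something like this (a hypothetical sketch; the Ragel files would still need to be translated to C++ first):

```sh
# Hypothetical unity-build hack: concatenate every .cxx under Source/
# into one translation unit and compile it in one go.
find Source -name '*.cxx' | sed 's/.*/#include "&"/' > Unity.cxx
g++ -O2 -o myapp Unity.cxx
```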
Bonus Points:
Fast builds
Ability to type build test-1 or something and it will build and directly run test-1
Multi-core builds (i.e. faster builds)
Really does not interrupt my train of thought
CMake is great. It's free, cross-platform, and reasonably well documented. It supports "out of source builds", meaning none of the build files are placed in the source directory, which makes source control a bit easier. It can be set up to find new files (globbing). Fast? It generates makefiles; after that, it's up to your compiler. Multi-core? Again, more a function of the compiler. I've used CMake on Windows, Linux, and Mac: it just works.
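A minimal sketch of that globbing setup, using the directory layout from the question (the project name is an assumption, and note that with globbing you must re-run CMake after adding files):

```cmake
# CMakeLists.txt: pick up all .cxx files under Source/ automatically.
cmake_minimum_required(VERSION 3.10)
project(myproject CXX)

file(GLOB_RECURSE SOURCES "${CMAKE_SOURCE_DIR}/Source/*.cxx")

add_executable(myproject ${SOURCES})

# Put the binary in the project root; object files stay in the build dir
set_target_properties(myproject PROPERTIES
    RUNTIME_OUTPUT_DIRECTORY "${CMAKE_SOURCE_DIR}")
```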
Another that I haven't tried but have read about and plan to test is premake... http://industriousone.com/sample-script
cake from CoffeeScript is quite good, and I'm writing a similar tool using Lua myself.
CMake and premake aren't build/make tools; they are build/make-descriptor generators, which may fit a large number of projects that aren't changing too much, but not projects where rapid prototyping is key.
Right now, I'm doing a project where the browser updates when you hit the save button in your text editor; you do not need to go to the browser and hit F5 (which would cause a small delay while the browser loads everything in again, and you would most likely lose the state of the page: say you have a menu open and wish to tweak the look of the menu, you would be forced to navigate there again in your RIA).

@Grape in scripts with multiple files

I'd like to use @Grape in my Groovy program, but my program consists of several files. The examples on the Groovy Grape page all seem to assume that your script will consist of one file. How can I do this? Should I just add it to one of the files and expect that the imports will work from the others? If so, is it common to place all the @Grape calls in one file with no other code? Do I need to add the Grape call to all files that will import the package? Do I need to download the JAR and create a Gradle file, which I had been getting away without up to this point?
The Grape engine and the @Grab annotation were created as part of core Groovy with single-file scripts in mind, to allow a chunk of text to easily become a fully functional program.
For larger applications, Gradle is an awesome build tool with lots of useful features.
But yes, you can manage all the application dependencies just with Grape.
Whether you annotate every file or a single one does not matter; just make sure the @Grab-annotated file is read before you try to use the external class.
Annotating the main class is probably better, as you will easily lose track of library versions if you have the annotations scattered.
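For example, a single entry-point script carrying the grab might look like this (the library and version are illustrative assumptions):

```groovy
// main.groovy: all @Grab annotations live on the entry point,
// so library versions stay in one place.
@Grab('org.apache.commons:commons-lang3:3.12.0')
import org.apache.commons.lang3.StringUtils

// Classes in the other files of this run can import
// org.apache.commons.lang3.* without their own @Grab.
println StringUtils.capitalize('hello from grape')
```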
And yes, you should consider Gradle for any application with more than a dozen files, or anything you might want to reuse elsewhere as a library.
In my opinion, it depends on how your program is to be run...
If your program is to be run as a collection of standalone scripts, then I'd probably stick the @Grab required for each script at the top of each of them.
If your program is more of a standard style program with a single point of entry, then I'd go for using a build tool like Gradle (as you say), as you get a lot of easy wins by using it.
Firstly, it makes it easy to define your dependencies (and to build a single large jar containing all of them).
Secondly, Gradle makes it really easy to start writing tests, include code-coverage plugins, or add useful tools like CodeNarc to suggest possible fixes or improvements to your code. These all become invaluable not only for improving your code (or knowing that your code works), but also when refactoring: you know you've not broken anything that used to work.
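As a hedged illustration, a minimal build.gradle for that single-point-of-entry style might look like this (plugins, coordinates, and versions are assumptions):

```groovy
// build.gradle: declare dependencies once instead of per-file @Grab annotations.
plugins {
    id 'groovy'
    id 'application'
}

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.codehaus.groovy:groovy:3.0.9'
    implementation 'org.apache.commons:commons-lang3:3.12.0'
    testImplementation 'junit:junit:4.13.2'
}

// single point of entry for the whole program
application {
    mainClass = 'com.example.Main'
}
```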
