Test both source code and bundled code with jest - node.js

Let's say I am developing an NPM module.
I am using Jest for the testing, Webpack to bundle it and TypeScript in general.
When I test the source code, everything is fine and code coverage is very good. But I don't think that is enough: something could break after the Webpack bundle is generated, for instance a dynamic import (a require with a variable instead of a fixed path) that becomes incorrect after bundling, or other scenarios.
How should I write tests that also cover the bundle? Should I test against both the source code (so that I get good coverage) and the bundle? Usually I import things directly from specific files (e.g. /utils/myutil.ts), but with the bundle this would be impossible. How do I handle this?

I do test against the bundle for some of my projects, in particular some npm libraries.
To do this I create some code that imports the bundle and write tests against that code. I don't care about coverage in this case; I just want to verify that my library does what it's supposed to do.
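For illustration only, such a test might look roughly like the sketch below; the dist/index.js path and the greet export are assumptions standing in for whatever your Webpack config actually emits, and they presume a CommonJS-compatible bundle:

// bundle.test.js - rough sketch; dist/index.js and greet() are hypothetical
const lib = require('../dist/index.js');

test('bundled entry point still exposes the public API', () => {
  expect(typeof lib.greet).toBe('function');
  expect(lib.greet('world')).toBe('Hello, world!');
});

Running this with a separate Jest config (or a separate test folder) keeps the bundle checks apart from the source-level unit tests.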
In another case (not a library) I'm also testing against the bundle, but there I'm running more integration/e2e tests.
Don't worry about coverage that much unless every function (or most of them) in your code is going to be used by the end user. You should test things the way they are used. 100% coverage is nice to see but very impractical to achieve once a project gets big, and in any case it's a waste of time. Of course, some people will disagree :)

Related

Test for GHC compile time errors

I'm working on proto-lens#400, tweaking a Haskell code generator. In one of the tests I'd like to verify that a certain API has not been built. Specifically, I want to ensure that a certain type of program will not type check successfully. I'd also have a similar program with one identifier changed which should compile, to guard against a typo breaking the test. Reading Extending and using GHC as a Library, I have managed to have my test write a small file and compile it using GHC as a library.
But I need the code emitted by the test to load some other modules, specifically the output of the project's code generator and its runtime environment with transitive dependencies. I have at best a very rough understanding of stack and hpack, which provide the build system. I know I can add dependencies to some package.yaml file to make them available to individual tests, but I have no clue how to access such dependencies from the GHC session set up as part of running the test. I imagine I might find some usable data in some environment variables, but I also believe such an approach might be undocumented and prone to break without warning.
How can I have a test case use GHC as a library and have it access dependencies expressed in package.yaml? Or alternatively, can I use some construct other than a regular test case to express a file with dependencies but check that the file won't compile?
I don't know if this applies to you because there are too many details going way over my head, but one way to test for type errors is to build your test suite with -fdefer-type-errors and catch the resulting exception (of type TypeError) at run time.

PWA app.js: how to move code into many smaller files

I've written a PWA application. The application isn't big, but my app.js now has 800 lines of code and many methods. How can I move these methods into separate files, divided thematically?
require doesn't work
You have a few options depending on what browsers you support.
You may be able to use native support for ES modules. You can find more information about this at https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules. This would be one of the simpler solutions as it does not require any additional tooling, but at this time support outside Chrome is not very good.
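As a rough illustration (the file names are just placeholders), each piece of code becomes its own module and the browser resolves the imports itself when the entry file is loaded with <script type="module">:

// utils.js - a native ES module exporting one helper
export function formatDate(date) {
  return date.toISOString().slice(0, 10);
}

// app.js - loaded via <script type="module" src="app.js">
import { formatDate } from './utils.js';
console.log(formatDate(new Date()));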
A second alternative is to break your code up into multiple JS files and just load them all separately. This can have performance implications, but if your files are small and few it won't cause too many problems. Just ensure that the code in each of these files puts itself onto a namespace object to avoid conflicts.
Ex file:
(function() {
  // reuse the namespace if another file already created it, so files don't overwrite each other
  window.mycode = window.mycode || {};
  window.mycode.func = function() { /* ... */ };
})();
A third option is to use an existing module loader in the browser such as https://requirejs.org/
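With RequireJS each file becomes an AMD module, roughly like this (the jquery dependency and the file name are placeholders):

// mymodule.js - an AMD module; RequireJS loads the listed dependencies first
define(['jquery'], function ($) {
  return {
    init: function () {
      $('body').addClass('ready');
    }
  };
});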
The fourth option, which is probably the most common, is to integrate a build step into your project that uses npm and a module bundler such as webpack or browserify. This also lets you integrate Babel, which is really common among large JavaScript projects. The downside is that it adds a step to your deployment that needs to be run, and you need to learn how to use tools like webpack (which is surprisingly complicated). However, if you do JavaScript development you will need to be familiar with them eventually.
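A minimal webpack configuration might look roughly like this; the entry and output paths are assumptions, and a real setup will likely also need loaders (for example babel-loader):

// webpack.config.js - minimal sketch, paths are assumptions
const path = require('path');

module.exports = {
  mode: 'production',
  entry: './src/app.js',
  output: {
    filename: 'app.bundle.js',
    path: path.resolve(__dirname, 'dist'),
  },
};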

Coverage drops when using babel

I put off the decision to use Babel, but found that it is necessary in order to write better code.
Before Babel I used Mocha and Chai to test my code and reached 100% coverage. But since using it, my code coverage has dropped significantly (of course), as I am only covering the resulting ES5 output.
So my question is: how do I test my source code without a huge drop in my statistics?
Generally, the core issue is that Babel has to insert code to cover all of the edge cases of the spec, code that may not matter from the standpoint of coverage calculation.
The best approach currently is to use https://github.com/istanbuljs/babel-plugin-istanbul to add the coverage-tracking metadata to your original ES6 code, which means that even though Babel eventually converts it to ES5, the coverage is reported against the ES6 code.
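A minimal setup might look roughly like this, assuming your test runner sets NODE_ENV (or BABEL_ENV) to "test" so the instrumentation is only added while running tests:

// babel.config.js - sketch: instrument the original sources only in the test env
module.exports = {
  presets: ['@babel/preset-env'],
  env: {
    test: {
      plugins: ['istanbul'],
    },
  },
};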

Is there a generic way to consume my dependency's grunt build process?

Let's say I have a project where I want to use Lo-Dash and jQuery, but I don't need all of the features.
Sure, both of these projects have build tools so I can compile exactly the versions I need to save valuable bandwidth and parsing time, but I think it's quite uncomfortable and ugly to install both of them locally, generate my versions, and then check them into my repository.
I'd much rather integrate their Grunt processes into my own and create custom builds on the fly, which would be much more maintainable.
The Lo-Dash team offers this functionality with a dedicated CLI and even wraps it in a Grunt task. That's very nice indeed, but I want a generic solution to this problem, as it shouldn't be necessary for every package author to replicate this.
I tried to achieve this with some grunt-shell hackery, but as far as I know it's not possible to install devDependencies more than one level deep, which makes it even more ugly, if not impossible, to execute the required Grunt tasks.
So what's your take on this, or should I just move this over to the 0.5.0 discussion of grunt?
What you ask assumes that the package has:
1. A dependency on Grunt to build a distribution; most popular libraries have this, but some of the less common ones may still use shell scripts or npm run scripts for general minification/compression.
2. Some way of generating a custom build in the first place with a dedicated tool, as Modernizr and Lo-Dash have.
You could perhaps substitute number 2 with a generic tool that parses both your source code and the library code and uses code coverage to eliminate unnecessary functions from the library. This is already being developed (see goldmine), but I can't make any claims about how good it is because I haven't used it.
Also, I'm not sure how that would work in an AMD context where there are a lot of interconnected dependencies; ideally you'd be able to run the r.js optimiser and get an almond build for production, and then filter that for unnecessary functions (most likely with Istanbul; you would then have to make sure that the filtered script still passes all your unit/integration tests). Not sure how that would end up looking, but it'd be pretty cool if that could happen. :-)
However, there is a task specifically for running Grunt tasks from 'sub-Gruntfiles' that you might like to have a look at: grunt-subgrunt.
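A Gruntfile using it might look roughly like this; the node_modules/lodash path and the 'build' task name are assumptions about the dependency's own Gruntfile:

// Gruntfile.js - sketch using grunt-subgrunt; paths and task names are assumptions
module.exports = function (grunt) {
  grunt.initConfig({
    subgrunt: {
      lodash: {
        projects: {
          // run the dependency's own Grunt task before our build
          'node_modules/lodash': 'build'
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-subgrunt');
  grunt.registerTask('default', ['subgrunt:lodash']);
};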

@Grape in scripts with multiple files

I'd like to use @Grape in my Groovy program, but my program consists of several files. The examples on the Groovy Grape page all seem to assume that your script will consist of one file. How can I do this? Should I just add it to one of the files and expect that the imports will work from the others? If so, is it common to place all the @Grape calls in one file with no other code? Do I need to add the Grape call to all files that will import the package? Do I need to download the JAR and create a Gradle file, which I was getting away without at this point?
The Grape engine and the @Grab annotation were created as part of core Groovy with single-file scripts in mind, to allow a chunk of text to easily become a fully functional program.
For larger applications, Gradle is an awesome build tool with lots of useful features.
But yes, you can manage all the application dependencies just with Grape.
Whether you annotate every file or a single one does not matter; just make sure the @Grab-annotated file is read before you try to use the external class.
Annotating the main class is probably better, as you will easily lose track of library versions if you have the annotations scattered.
And yes, you should consider Gradle for any application with more than a dozen files, or anything you might want to reuse elsewhere as a library.
In my opinion, it depends on how your program is to be run...
If your program is to be run as a collection of standalone scripts, then I'd probably stick the @Grab annotations required by each script at the top of each of them.
If your program is more of a standard style program with a single point of entry, then I'd go for using a build tool like Gradle (as you say), as you get a lot of easy wins by using it.
Firstly, it makes it easy to define your dependencies (and build a single large jar containing all of them).
Secondly, Gradle makes it really easy to start writing tests, include code coverage plugins, or add useful tools like CodeNarc to suggest possible fixes or improvements to your code. These all become invaluable not only for improving your code (or knowing that your code works), but also when refactoring it: you know you've not broken anything that used to work.
