Include custom Dojo modules in Intern coverage - IIS

I'll apologize now because I am very new to Intern and know just enough to know that I don't know anywhere near enough. I am using the latest version of Intern. I see lots of details about how to exclude files from the coverage reports that Intern generates, but nothing on what it includes in coverage by default, and how to get other things included. Intern already instruments and provides coverage reports on the test files that I run, but that doesn't do me any good. I have several custom Dojo modules that need to be instrumented for coverage, but I can't seem to find how to make that happen. I am only running functional tests at this time.
The website under test is served by local IIS, but the test files are in a completely different folder. By default, it appears that Intern instruments the test files and shows me nice reports about how much of my tests were covered in the run. Seeing this, my thought was that I needed to move the Intern install and configuration to the local IIS folder, which I did. Intern is still only providing coverage reports for the test files and not the Dojo modules.
Folder structure in IIS
wwwroot
|
--js
   |
   --Chai
   --ckeditor
   --myScripts
   --dojo
   --node_modules
Gruntfile.js
internConfig.js
package.json
I need the files in the myScripts folder instrumented for code coverage. Here is what I am excluding:
excludeInstrumentation: /^(?:Chai|node_modules|ckeditor|dojo)\//
It appears that nothing in those folders is being instrumented, so at least I have that right. I don't have anything defined under loaderOptions at this time, and I'm not entirely sure that is where the stuff in the myScripts folder should be listed when it comes to functional testing. So, the question is: how do I get the stuff in that folder instrumented for code coverage?
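For reference, the loaderOptions mentioned above would normally just map package names to locations for the loader; it is not by itself what turns instrumentation on. A hypothetical internConfig.js fragment, assuming the js folder is the base path implied by the exclude regex (the suite path is made up purely for illustration):

// Hypothetical Intern 3-style config fragment; paths are assumptions based on
// the wwwroot layout above, not a confirmed fix for the coverage problem.
define({
    excludeInstrumentation: /^(?:Chai|node_modules|ckeditor|dojo)\//,

    loaderOptions: {
        packages: [
            { name: 'myScripts', location: 'myScripts' }
        ]
    },

    functionalSuites: [ 'myScripts/tests/functional/all' ]
});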

In order to be instrumented, code needs to be requested from the HTTP server that Intern creates when you run intern-runner. If you are loading code directly from IIS, it will never be instrumented and no code coverage analysis can be performed. If you need to use IIS instead of the built-in server, you will also need to configure IIS to reverse-proxy requests for these files to Intern, as described in the testing non-CORS APIs documentation.
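To make that concrete, here is a minimal sketch of a functional test that navigates to a page through Intern's own server (Intern 3's default is localhost:9000) instead of straight to IIS; the URL and selector are assumptions for illustration only:

// Hypothetical Intern 3 functional test; scripts requested via Intern's server
// (or reverse-proxied from IIS to Intern) are what end up instrumented.
define([
    'intern!object',
    'intern/chai!assert'
], function (registerSuite, assert) {
    registerSuite({
        name: 'index page',

        'page loads through the Intern proxy': function () {
            return this.remote
                .get('http://localhost:9000/index.html')   // assumed path
                .setFindTimeout(5000)
                .findByCssSelector('body')
                .isDisplayed()
                .then(function (visible) {
                    assert.isTrue(visible, 'page should be rendered');
                });
        }
    });
});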

Related

Building monorepo babel-transpiled node JS application with dependencies

I am working on a project that is hosted as a monorepo. For simplification purposes let's say that inside there are three self-explanatory packages: server, a webapp client and library. The directory structure would be something like the following:
the-project
  packages
    server
      src
    webapp
      src
    library
      src
All packages employ Flow type annotations and use a few newer-than-ES5 features, so they all go through Babel transpilation. The key difference is that transpilation of the webapp package is done via webpack, whereas server employs a gulp task that triggers script transpilation through the gulp-babel package. library is transpiled automatically when webapp is built.
Now, the problem I have is that for server to build, babel requires library to be built first and its package.json to specify its (built) main JS source file so its transpiled artifacts can be included. As you can imagine, this would quickly become problematic if the project were to contain multiple libraries that are actively being developed (which it does), as all would require building, including any dependent packages (like server in this simple case).
As an attempt to overcome this annoyance, I initially thought of using webpack to build the server, which would take care of including whatever dependencies it requires into a bundle, but I ran into issues as apparently webpack is not meant to be used on node JS applications.
What strategies are available for building a node JS application requiring Babel transpilation, such that the application's source files as well as any dependencies are built transparently and contained in a single output directory?
Annex A
Simplified gulp task for transpilation of scripts, as employed by server.
const gulp = require('gulp');
const babel = require('gulp-babel');

gulp.task('transpile', () => gulp
    .src(['src/**/*.js'], { allowEmpty: true })
    .pipe(babel({ sourceMap: true }))
    .pipe(gulp.dest('dist')));
As can be seen above, only server's own source files are included in the task. If src were to be changed to also include library, the task would emit the dependencies' artifacts in server's own output directory and any require('library') statements within would attempt to locate the built artifacts in packages/library and not packages/server/dist, thus resulting in import failures.
First of all, I am not sure what your server is doing. If it is doing a database connection or some calculations, then I would not recommend building it with webpack. If, however, your server is just doing server-side rendering and making some API calls to other servers, then I would recommend bundling it with webpack.
A lot of projects follow this philosophy. For example, you can take a look at something similar I have done in one of my personal projects, Blubus. Specifically, you might be interested in its webpack-server-config. You can also take a look at how big projects like Spectrum do it.
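If you do go the webpack-for-the-server route, the essentials are target: 'node' and keeping node_modules out of the bundle. A rough sketch, assuming an entry point at packages/server/src/index.js and the webpack-node-externals package (neither is taken from the question):

// Hypothetical webpack.server.config.js for bundling the server package
const path = require('path');
const nodeExternals = require('webpack-node-externals');

module.exports = {
    target: 'node',                        // keep Node built-ins and require() semantics
    entry: path.resolve(__dirname, 'packages/server/src/index.js'),
    output: {
        path: path.resolve(__dirname, 'packages/server/dist'),
        filename: 'server.bundle.js'
    },
    externals: [nodeExternals()],          // don't bundle anything from node_modules
    module: {
        rules: [
            {
                test: /\.js$/,
                exclude: /node_modules/,
                use: 'babel-loader'        // reuses the existing Babel/Flow config
            }
        ]
    }
};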

How do I set up a Dojo build process with multiple applications?

I have a single-page Dojo (1.8) application, built on top of Colin Snover's Dojo Boilerplate, and it builds and works well. Now I've expanded the website into multiple pages, some of which have other Dojo applications. It works well from the source directories, but the build process doesn't pick up the additional files and thus the installed website is broken.
I need to update the build process so that it optimizes and copies all of the files, but I can't figure out where I should be adding the additional references.
(I've gone through lots of Dojo documentation, but it tends to focus on the details of the trees, or even the tree branches, without saying just what the forest looks like.)
The original boilerplate file tree is as follows:
/build.sh: the bash-based build script, which at its core runs the build tool under node.js
/profiles/app.profile.js: the "application build profile", handed to the build script with the --profile option
/webroot/: the root web server directory, containing:
  /dijit/, /dojo/, /dojox/, /util/: the standard Dojo source directories
  /app/: the application directory, containing
    main.js: the main entry point for the app, which requires everything and then parses the DOM to instantiate the various app objects
    run.js: some fundamental require()ments, handed to the build tool with the --require option
    (the rest of the app's code)
The build tool is invoked from /webroot/util/buildscripts/ as follows:
node ../../dojo/dojo.js load=build --require ../../app/run.js --profile ../../../profiles/app
I've now added two new applications: one hosted in /webroot/info.html with source in /webroot/info/, and the other in /webroot/licenses.html with source in /webroot/licenses/ (both apps have run.js and main.js based on the initial boilerplate files). The new apps use the various Dojo tools, as well as some of the classes in /webroot/app/*.
But, where do I add references to these new apps so that the build process Does The Right Thing? Here are some possibilities I've come up with:
Add new --require newApp/run.js options to the build tool
Add new profiles, included by additional --profile newApp.profile.js options to the build tool
Add new "layers" to the existing app.profile.js file
Run the build tool multiple times, each time configured for one of the apps, trusting it to properly merge the files into the destination directory (I doubt this would work, but I've considered it...)
So, where do I go from here?
The simplest approach is to create one bash file per application; you can still collapse these into a single script later by passing bash variables through from the command line ($1, $2, ...).
So basically, you copy build.sh into each app directory, adjust the paths, and then create a master shell script that calls each app's build.sh.
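Under that scheme each application ends up with its own profile next to its copied build.sh. A hypothetical info.profile.js, trimmed down from the shape of the boilerplate's app.profile.js (package and layer names are assumptions based on the layout in the question):

// Hypothetical profiles/info.profile.js - one build profile per application
var profile = {
    basePath: '../webroot/',
    releaseDir: '../release',
    action: 'release',

    packages: [
        { name: 'dojo',  location: 'dojo' },
        { name: 'dijit', location: 'dijit' },
        { name: 'dojox', location: 'dojox' },
        { name: 'app',   location: 'app' },
        { name: 'info',  location: 'info' }
    ],

    layers: {
        'info/main': {
            // the layer pulls in info/main plus everything it requires
            include: [ 'info/main' ]
        }
    }
};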

Packaging requirejs optimized files in war

In a large web application, I'm using RequireJS AMD modules so that the scripts themselves are modular and maintainable. I have the following directory structure:
web
|-src
  |-main
    |-java
    |-resources
    |-webapp
      |-static
        |-scripts
        |-styles
        |-images
      |-static-built   // output from r.js, not checked into git
      |-WEB-INF
During the build, the JS and CSS are optimized using r.js into the static-built folder. Gradle is the build tool.
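For context, that r.js step is typically driven by a build config along these lines; this is just a sketch, with the module name and mainConfigFile path assumed rather than taken from the question:

// Hypothetical build.js for r.js: optimize static/ into static-built/
({
    appDir: 'src/main/webapp/static',
    baseUrl: 'scripts',
    mainConfigFile: 'src/main/webapp/static/scripts/main.js',
    dir: 'src/main/webapp/static-built',
    modules: [
        { name: 'main' }
    ]
})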
Now the problem: the JSPs refer to the scripts in the static/scripts folder, and this is how I want it when working locally. However, when building the war, I want the static files to be served from the static-built folder. The important thing is that the source JSPs should not have to change in order to serve the optimized files from the static-built folder.
Two options that I have are: a) the Gradle build, while making the war, should include static-built instead of static; b) include static-built in addition to static and use Tuckey UrlRewrite to pick the resource from static-built rather than static.
What best practices is the community following in similar scenarios?
We've set up the server to have a runtime profile (dev, qa, prod, etc.) read from a system property, and some settings are determined based on it. When running in the production profile we serve the optimized files from the WAR. In development we serve the non-minified and non-concatenated files directly from the filesystem, outside the application context.
Files are structured according to the official multipage example.
How you configure serving the files depends on your chosen backend solution. Here's an example for Spring.
Alternatively, r.js can generate source maps and those will help with development as well.
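The source-map option mentioned above boils down to a couple of flags in that same kind of r.js build config (r.js requires license comments to be dropped when source maps are generated):

// Fragment of an r.js build config enabling source maps for the optimized output
({
    optimize: 'uglify2',
    generateSourceMaps: true,
    preserveLicenseComments: false
})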
Not sure if this question is outdated already, but I had a kind of similar problem.
I had a similar project structure, but with one difference: I split the project into two modules:
one of them (let's call it service) was a Java module for the back end;
the second one (let's call it ui) contained only JS and other stuff related to the front end.
Then, in the Gradle build, the 'assemble' task of service depends on the 'assemble' task of ui AND another custom task called 'pre-assemble'. This 'pre-assemble' task copied the optimized JS files to the place where I wanted them to be.
So, basically, I've just added another task that was responsible for placing all the optimized JS files in the proper place.

How to debug tests with karma.js + require.js

I have a setup basically described here - http://karma-runner.github.io/0.8/plus/RequireJS.html
The problem is that I can't see the source files of my tests in Chrome dev tools, so I can't debug them. Adding debugger; works, but it is very uncomfortable, almost unusable, since I can't browse any file other than the one where debugger; is currently fired.
It seems like Karma loads the files, parses them, wraps each test, and then unloads the files before the run.
ng-boilerplate has a Grunt build that will put all your plain JS files into a build directory for testing and debugging.
Take a look at the Gruntfile and karma/karma-unit.tpl.js to see how this is done.
Running grunt watch will leave your browser in a state where you can debug all your tests. Just click the debug button, set your breakpoint(s) and reload the page.
Suddenly, you are debugging any or all of your JS files.
If you need to debug your tests deeply, this is generally an indicator of badly organized code or badly made unit tests. If you follow a TDD workflow, taking small steps will help you prevent any major issues with your code. I warmly recommend you watch this video: http://blog.testdouble.com/posts/2013-10-03-javascript-testing-tactics.html?utm_source=javascriptweekly&utm_medium=email (it doesn't use Karma, but you should watch it for the workflow and the principles presented).
Then, if you really want to debug your test code, nothing beats the browser. As such, you should set up your tests so that they can be run both in Karma and the browser. We implemented this for QUnit, Jasmine and Mocha on the Backbone-Boilerplate. Feel free to base yourself on those settings to set up your own environment.
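For reference, the Karma + RequireJS setup the question links to serves source files without inlining them into the page (included: false), which is also what lets them show up as individual files in dev tools when you hit the debug button. A minimal karma.conf.js sketch; the framework choice and the src/test layout are assumptions:

// Hypothetical karma.conf.js for a RequireJS project
module.exports = function (config) {
    config.set({
        frameworks: ['requirejs', 'jasmine'],
        files: [
            { pattern: 'src/**/*.js', included: false },      // served on demand, not <script>-included
            { pattern: 'test/**/*.spec.js', included: false },
            'test/test-main.js'                               // RequireJS config that calls window.__karma__.start()
        ],
        browsers: ['Chrome'],
        singleRun: false   // keep the browser open so you can click "DEBUG" and set breakpoints
    });
};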

Building (preparing) node.js application for production (deploy)

I have a project that consists of several Node.js backend applications. The apps use the same modules (which are placed outside of each app folder, in a shared location). The apps are to be deployed on different environments (servers); some code is for test, some for debug, as usual.
If I chose a platform (for example the PaaS Nodejitsu) for one of my apps, how am I supposed to send only the production code for that app? I deployed on Nodejitsu and it just sends the app folder and uses package.json to configure the app. But there is a bunch of code that is not needed (tests, for example), and some code is external. And what if I want to obfuscate the server code too? How are these issues supposed to be solved?
For front-end applications there are tons of methods for building for production. I understand that the requirements are different, but I didn't find any information on best practices for how to correctly prepare a Node.js back-end application for deployment.
Read the section "Keeping files out of your package" on the npm Developers page. It states the following:
Use a .npmignore file to keep stuff out of your package. If there's no .npmignore file, but there is a .gitignore file, then npm will ignore the stuff matched by the .gitignore file. If you want to include something that is excluded by your .gitignore file, you can create an empty .npmignore file to override it.
Add those test files to .gitignore,
or make another branch for production in git and push the production branch.
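A hypothetical .npmignore along those lines; the patterns are only examples of the kind of files to keep out of the deployed package:

# .npmignore (example patterns only)
test/
spec/
coverage/
docs/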
