How do I set up a Dojo build process with multiple applications?

I have a single-page Dojo (1.8) application, built on top of Colin Snover's Dojo Boilerplate, and it builds and works well. Now I've expanded the website into multiple pages, some of which have other Dojo applications. It works well from the source directories, but the build process doesn't pick up the additional files and thus the installed website is broken.
I need to update the build process so that it optimizes and copies all of the files, but I can't figure out where I should be adding the additional references.
(I've gone through lots of Dojo documentation, but it tends to focus on the details of the trees, or even the tree branches, without saying just what the forest looks like.)
The original boilerplate file tree is as follows:
/build.sh: the bash-based build script, which at its core runs the build tool under node.js
/profiles/app.profile.js: the "application build profile", handed to the build script with the --profile option
/webroot/: the root web server directory, containing:
    /dijit/, /dojo/, /dojox/, /util/: the standard Dojo source directories
    /app/: the application directory, containing:
        main.js: the main entry point for the app, which requires everything and then parses the DOM to instantiate the various app objects
        run.js: some fundamental require()ments, handed to the build tool with the --require option
        (the rest of the app's code)
The build tool is invoked from /webroot/util/buildscripts/ as follows:
node ../../dojo/dojo.js load=build --require ../../app/run.js --profile ../../../profiles/app
I've now added two new applications: one hosted in /webroot/info.html with source in /webroot/info/, and the other in /webroot/licenses.html with source in /webroot/licenses/ (both apps have run.js and main.js based on the initial boilerplate files). The new apps use the various Dojo tools, as well as some of the classes in /webroot/app/*.
But, where do I add references to these new apps so that the build process Does The Right Thing? Here are some possibilities I've come up with:
Add new --require newApp/run.js options to the build tool
Add new profiles, included by additional --profile newApp.profile.js options to the build tool
Add new "layers" to the existing app.profile.js file
Run the build tool multiple times, each time configured for one of the apps, trusting it to properly merge the files into the destination directory (I doubt this would work, but I've considered it...)
So, where do I go from here?

The simplest approach is to create one bash build script per application; you can still consolidate them into a single script by passing values through bash variables from the command line ($1, $2, ...).
So basically: copy build.sh into each app directory, adjust the paths, and then create a master shell script that calls each app's build.sh, as sketched below.
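A minimal sketch of that master script, assuming per-app copies of build.sh already exist (the app names and paths are illustrative, based on the layout described in the question):

#!/bin/bash
# master-build.sh -- run each application's own build script in turn
set -e

for APP in app info licenses; do
    echo "Building $APP..."
    # each app directory holds its own build.sh, adapted from the boilerplate
    (cd "webroot/$APP" && ./build.sh)
done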

Related

How can I deploy my application within a cloned repository on Google App Engine?

I'm using a node package to run a web server (among other benefits) for my project. The catch is, my project is only loaded on the server if it's within a directory of the node package. In other words, my directory structure looks like this:
<npm_pkg>/
    <npm_pkg_src>/
    clients/
        <my_project_name>/
            <my_project_src>
I would like to be able to use standard deployment processes for my project (e.g. gcloud app deploy, Travis continuous deployment, etc.), but I need to run my project from within a subdirectory of the larger package. Is there an easy way to force a git clone <pkg> during a build step and deploy my project in the target subdirectory?
I'm pretty new to CI/CD, but I tried to search around for similar examples and couldn't find any. Note: the parent project is not owned by me, so I can't just use submodules without forking it (and I have no intention of altering it in any way). I also strictly want to be able to trigger deploys based on my actual project's repository, if possible, whereas submodules would involve maintaining two repositories and committing features twice (from what I understand).
Any help is much appreciated.
Edit: I forgot to mention that as part of this configuration I also need to run my server script from the root of the parent package. IOW, my package.json's start script will look like "start": "cd ../.. && npm start". Just in case it's relevant.
This might be what you're looking for: CI/CD with App Engine.
Clone from the repo and deploy from the subdirectory where the project is located; Cloud Source Repositories can automate the whole process for you. A rough sketch of the steps is below.
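As a hedged sketch, the build step boils down to cloning the parent package and deploying from inside the project's subdirectory (the repository URL and directory names are illustrative, mirroring the layout in the question):

# clone the parent package, then deploy from the project subdirectory
git clone https://github.com/owner/npm_pkg.git
cd npm_pkg/clients/my_project_name
gcloud app deploy --quiet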
I would also suggest you keep services separate; this will make things clearer for you and for others who will or might be working on the project with you.

How to run my node script as one of Ember's build tasks?

I am working in an Ember application. From what I understand, it builds the application using Broccoli. I have a requirement where I need to process some files in the application by running a node script before the build starts. Right now I am running the node script separately and then starting the Ember server. What is the right way to achieve this? Can I make it one of the tasks in the Ember build process? Where should I keep the node script in the directory structure?
I would recommend an in-repo addon that implements the preBuild or postBuild Ember CLI addon hooks. Addon hooks are badly documented, but there are some usage examples in other addons. E.g. ember-cli-deploy-build-plus uses the postBuild hook to remove some files from the build output. A sketch of the preBuild variant follows.
A more advanced option would be implementing a Broccoli plugin and using it in a treeFor* hook. This makes sense especially if your custom script needs to add or remove files from the build. ember-cli-addon-docs is a great example of that usage.
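A minimal sketch of such an in-repo addon, assuming it lives in lib/my-addon and is registered in the app's package.json under "ember-addon": { "paths": ["lib/my-addon"] } (the addon name and script path are illustrative):

// lib/my-addon/index.js
'use strict';

module.exports = {
  name: 'my-addon',

  isDevelopingAddon() {
    return true;
  },

  // called by Ember CLI before each build; returning a promise
  // should make the build wait for the script to finish
  preBuild() {
    const processFiles = require('../../scripts/process-files');
    return processFiles();
  }
};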
Well, one solution would be to leverage an in-repo addon, since the addon hooks provide a lot more points for customization than ember-cli-build.js does (as far as I'm aware).
If you want to go beyond the built in customizations or want/need more
advanced control in general, the following are some of the hooks
(keys) available for your addon Object in the index.js file. All hooks
expect a function as the value.
includedCommands: function() {},
blueprintsPath: // return path as String
preBuild:
postBuild:
treeFor:
contentFor:
included:
postprocessTree:
serverMiddleware:
lintTree:
In your case, preBuild sounds like the ticket:
This hook is called before a build takes place.
You can require() whatever files you need to from index.js
A simpler solution may be to call a function from your build script in ember-cli-build.js somewhere before return app.toTree();
let myBuildScript = require('./lib/my-build-script.js');
// note: await only works here if the enclosing function is async;
// otherwise call myBuildScript() without waiting on its result
await myBuildScript();
return app.toTree();
Some disadvantages to this approach include:
It will not be run as one of many parallel processes, even if that is possible on your machine.
It will not be run asynchronously with the rest of the build; instead, you will have to wait until it is done before the build starts.
You will likely have to modify your build script to export a function you can call, which returns a promise that resolves when the work is complete.

Include custom Dojo modules in Intern coverage

I'll apologize now because I am very new to Intern and know just enough to know that I don't know anywhere near enough. I am using the latest version of Intern. I see lots of details about how to exclude files from the coverage reports that Intern generates, but nothing on what it includes in coverage by default, or how to get other things included. Intern already instruments and provides coverage reports on the test files that I run, but that doesn't do me any good. I have several custom Dojo modules that need to be instrumented for coverage, but I can't seem to find how to make that happen. I am only running functional tests at this time.
The website under test is being served by local IIS, but the test files are in a completely different folder. By default, it appears that Intern is instrumenting the test files and showing me nice reports about how much of my tests were covered in the run. Seeing this, my thought was that I needed to move the Intern install and configuration to the local IIS folder, which I did. Intern is still only providing coverage reports for the test files and not the Dojo modules.
Folder structure in IIS:
wwwroot
|
--js
    |
    --Chai
    --ckeditor
    --myScripts
    --dojo
    --node_modules
    Gruntfile.js
    internConfig.js
    package.json
I need the files in the myScripts folder instrumented for code coverage. Here is what I am excluding:
excludeInstrumentation: /^(?:Chai|node_modules|ckeditor|dojo)\//
It appears that nothing in those folders is being instrumented, so at least I have that right. I don't have anything defined under loaderOptions at this time, and I'm not entirely sure whether that is where the stuff in the myScripts folder should be listed when it comes to functional testing. So, the question is: how do I get the stuff in that folder instrumented for code coverage?
In order to be instrumented, code needs to be requested from the HTTP server that Intern creates when you run intern-runner. If you are loading code directly from IIS, it will never be instrumented and no code coverage analysis can be performed. If you need to use IIS instead of the built-in server, you will also need to configure IIS to reverse-proxy requests for these files to Intern, as described in the documentation on testing non-CORS APIs.
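For reference, a minimal sketch of the relevant config, assuming the Intern 3-era format the question implies (the port value and suite path are illustrative): proxyPort and proxyUrl control the instrumenting proxy server that IIS would need to forward the myScripts requests to.

// internConfig.js (sketch)
define({
  proxyPort: 9000,
  proxyUrl: 'http://localhost:9000/',

  // keep third-party code out of the coverage report
  excludeInstrumentation: /^(?:Chai|node_modules|ckeditor|dojo)\//,

  functionalSuites: ['js/myScripts/tests/functional/all']
});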

Packaging requirejs optimized files in war

In a large web application, I'm using RequireJS AMD modules so that the scripts themselves are modular and maintainable. I have the following directory structure:
web
|-src
    |-main
        |-java
        |-resources
        |-webapp
            |-static
            |    |-scripts
            |    |-styles
            |    |-images
            |-static-built //output from r.js, not checked into git
            |-WEB-INF
During the build, JS and CSS are optimized by r.js into the static-built folder. Gradle is the build tool.
Now the problem: the JSPs refer to the scripts in the static/scripts folder, and this is how I want it when working locally. However, when building the WAR, I want the static files to be served from the static-built folder. The important thing is that the source JSPs should not have to change in order to serve the optimized files from the static-built folder.
Two options that I have are: a) the Gradle build, while making the WAR, should include static-built instead of static; b) include static-built in addition to static, and use the Tuckey UrlRewriteFilter to pick the resource from static-built rather than static.
What best practices are the community following in similar scenarios?
We've set up the server to have a runtime profile (dev, qa, prod, etc.) read from a system property, which determines some settings based on it. When running with the production profile we serve the optimized files from the WAR. In development we serve the non-minified and non-concatenated files directly from the filesystem, outside the application context.
Files are structured according to the official multipage example.
Configuring the file serving depends on your chosen backend solution. Here's an example for Spring.
Alternatively, r.js can generate source maps and those will help with development as well.
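As a hedged illustration of the client-side half of that switch (the profile flag and paths are ours, not from the answer): the JSP can expose the active profile to the page, and the RequireJS config can pick its script root from it.

// assumes the JSP emitted something like:
//   <script>window.appProfile = '${runtimeProfile}';</script>
var isProd = window.appProfile === 'prod';

require.config({
  // optimized output in production, raw sources in development
  baseUrl: isProd ? '/static-built/scripts' : '/static/scripts'
});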
Not sure if this question is outdated already, but I had a kind of similar problem.
I had a similar project structure, but with one difference: I split the project into two modules:
one of them (let's call it service) was a Java module for the back end
the second one contained only the JS and other front-end assets (let's call it ui).
Then, in the Gradle build, the service's 'assemble' task depends on the ui's 'assemble' task AND on another custom task called 'pre-assemble'. This 'pre-assemble' task copied the optimized JS files to the place where I wanted them to be.
So, basically, I've just added another task that was responsible for placing all the optimized js files in the proper place.
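A rough sketch of that 'pre-assemble' wiring in the service module's build.gradle (the project names and paths are illustrative, not the author's actual build):

// copy the r.js output from the ui module into the service's webapp folder
task preAssemble(type: Copy) {
    dependsOn ':ui:assemble'
    from project(':ui').file('static-built')
    into "$projectDir/src/main/webapp/static-built"
}
assemble.dependsOn preAssemble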

Building (preparing) node.js application for production (deploy)

I have a project that consists of several node.js backend applications. The apps use the same modules (which are placed outside of each app folder, in a shared location). The apps are to be deployed on different environments (servers); some code is for test, some for debug, as usual.
If I chose a platform (for example the PaaS Nodejitsu) for one of my apps, how am I supposed to send there only the production code for that app? I deployed on Nodejitsu, and it just sends the app folder and uses package.json to configure the app. But there is a bunch of code that is not needed (tests, for example), and some code is external. And what if I want to obfuscate the server code too? How are these issues supposed to be solved?
For front-end applications there are tons of methods for building for production. I understand that the requirements are different, but I didn't find any information on best practices for how to correctly prepare a node.js back-end application for deployment.
Read the section "Keeping files out of your package" on the npm Developers page. It states the following:
Use a .npmignore file to keep stuff out of your package. If there's no .npmignore file, but there is a .gitignore file, then npm will ignore the stuff matched by the .gitignore file. If you want to include something that is excluded by your .gitignore file, you can create an empty .npmignore file to override it.
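For instance, a minimal .npmignore for a layout like this might look as follows (the entries are illustrative):

# .npmignore -- keep dev-only files out of the deployed package
test/
*.test.js
docs/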
Add those test files to .gitignore,
or make another branch for production in git and push the production branch.
