I have a very simple program developed with ES6 and transpiled with Babel.
import kue from 'kue';
import cluster from 'cluster';
const queue = kue.createQueue();
const clusterWorkerSize = require('os').cpus().length;
if (cluster.isMaster) {
  kue.app.listen(3000);

  for (var i = 0; i < clusterWorkerSize; i++) {
    cluster.fork();
  }
} else {
  queue.process('email', 10, function(job, done){
    ...
  });
}
The problem comes when I run the program with
$ babel-node --presets es2015 program.js
The master process runs without a problem, but the children crash with:
import kue from 'kue';
SyntaxError: Unexpected reserved word
Any idea how to run the children with Babel?
NOTE: one option is to generate a dist/ folder with all the code transpiled to ES5, but I'd leave that as a last resort.
The problem here is that the child processes are run under plain node, not babel-node: cluster.fork() re-executes the entry file in a new process, without Babel.
Try the Babel require hook instead of the CLI.
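A minimal sketch of that approach, assuming Babel 6 and its babel-register package (the require hook); the bootstrap.js filename is hypothetical:

// bootstrap.js: run this with plain `node bootstrap.js`
// babel-register hooks into require(), so program.js is transpiled on the fly.
// Forked workers re-execute this same entry file, so the hook is installed
// in the children as well.
require('babel-register')({
  presets: ['es2015']
});
require('./program');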
I'm building a node.js application using worker_threads under the hood. In the script file, called worker.ts, I cannot use the import statement because Node throws an error. So I'm importing the needed packages like this:
const { parentPort } = require('worker_threads')
parentPort.on('message', (data) => {
  //Non relevant code
})
However, despite the code actually working, the following error is displayed since there is neither an import nor an export statement:
'worker.ts' cannot be compiled under '--isolatedModules' because it is considered a global script file.
How can I solve the issue?
Using the CodeSandbox link that you provided as a reference, I'll explain the changes that need to be made in both TypeScript modules in order for compilation to succeed and for the program to execute successfully:
./src/index.ts:
// Use import statements: TypeScript will transform them into "require" calls
// because you are targeting CommonJS in your TSConfig
import {Worker} from 'worker_threads';
import * as path from 'path';
const worker = new Worker(path.resolve(__dirname, './worker.js'));
/* ^^^
It is important to use the path of the **COMPILED** file,
and the extension of the compiled file will be ".js" */
worker.on('message', (data) => console.log('Main: ' + data));
worker.postMessage('Hello!');
./src/worker.ts:
// Again, use import statement
import {parentPort} from 'worker_threads';
parentPort.on('message', (data) => {
  console.log('Worker: ' + data);
  setTimeout(() => parentPort.postMessage(data), 1000);
});
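For these two files to compile as described, the tsconfig.json needs to target CommonJS and keep isolatedModules on. A minimal sketch (the sandbox's actual config may differ):

{
  "compilerOptions": {
    "module": "commonjs",
    "target": "es2017",
    "outDir": "dist",
    "isolatedModules": true
  },
  "include": ["src"]
}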
Run:
$ cd path/to/project/dir
$ tsc && node dist/index.js
Worker: Hello!
Main: Hello!
I have a test repo with redux-observable
It works with webpack-dev-server but breaks with server-side rendering, giving:
TypeError: action$.ofType(...).delay is not a function
How to reproduce:
yarn dev works okay (webpack-dev-server).
yarn build && yarn start runs the Node server-side rendering, which breaks when creating the store with Redux's createStore method.
It recognizes imported operators from rxjs within the browser (webpack-dev-server). My guess is that it might be a problem with the webpack serverConfig, more specifically with:
externals: fs.readdirSync('./node_modules').concat([
  'react-dom/server',
]).reduce((ext, mod) => {
  ext[mod] = `commonjs ${mod}`;
  return ext;
}, {}),
Importing the whole rxjs library will jeopardise your tree shaking.
Use pipeable operators instead. Note that mapTo has to move inside pipe() as well; chaining it after pipe() still relies on the prototype patching that is missing on the server:

import { delay, mapTo } from 'rxjs/operators';

const epic = action$ => action$
  .ofType('baz')
  .pipe(
    delay(5000),
    mapTo({ type: 'bar' })
  );
It turned out I had to include rxjs in server.js, where Express lives:

import 'rxjs';

But I would swear I tried that solution before posting the question.
I know that Angular 2 runs in a web browser, which does not have access to the file system.
However, I'm using Electron as my front-end, and also running the app via electron:
"build-electron": "ng build --base-href . && cp src/electron/* dist",
"electron": "npm run build-electron && electron dist"
Therefore, I run it with npm run electron which at the very end runs electron dist.
Since I'm running through electron and not ng I would think that I should be able to access the filesystem. However, when I do:
import * as fs from 'fs'
I get an error:
ng:///AppModule/AppComponent_Host.ngfactory.js:5 ERROR TypeError: __WEBPACK_IMPORTED_MODULE_0_fs__.readFileSync is not a function(…)
Similarly, when I try: var fs = require('fs');
I get:
ng:///AppModule/AppComponent_Host.ngfactory.js:5 ERROR TypeError: fs.readFileSync is not a function
This is the call resulting in the error:
this.config = ini.parse(fs.readFileSync('../../CONFIG.ini', 'utf-8'))
Does anyone have any idea what's causing this?
Thanks.
Solved it by:
1) Eject webpack: ng eject
2) Add target: 'electron-renderer' to the module.exports object inside webpack.config.js
3) Require remote, since we're in the renderer but fs is only available in the main process: var remote = require('electron').remote;
4) Require fs (this time using remote's implementation of require): var fs = remote.require('fs');
And now it works!
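Putting steps 2 to 4 together, a sketch of how the pieces fit; everything besides the commented lines is whatever ng eject generated for you:

// webpack.config.js (generated by ng eject)
module.exports = {
  // ...generated configuration...
  target: 'electron-renderer' // step 2: resolve Electron/Node builtins
};

// in the component: fs lives in the main process, so fetch it via remote
var remote = require('electron').remote; // step 3
var fs = remote.require('fs');           // step 4
this.config = ini.parse(fs.readFileSync('../../CONFIG.ini', 'utf-8'));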
I am using
Angular CLI: 7.0.7
Node: 8.12.0
OS: win32 x64
Angular: 7.0.4
I tried the ng eject method, but it didn't work in my case; it is disabled by default and will be removed completely in Angular 8.0.
Error message: The 'eject' command has been disabled and will be removed completely in 8.0.
It worked for me by creating a file called native.js in the src folder and inserting the following:
window.fs = require('fs');
Add this file to the angular-cli.json scripts array:
"scripts": [
"native.js"
]
Add the following lines to polyfills.ts:
declare global {
  interface Window {
    fs: any;
  }
}
After that you can access the filesystem with:
window.fs.writeFileSync('sample.txt', 'my data');
As I understand it, you build the application with Webpack.
You can expose all Node modules via the externals object in your webpack config.
module.exports = {
  "externals": {
    "electron": "require('electron')",
    "child_process": "require('child_process')",
    "fs": "require('fs')",
    "path": "require('path')",
    ...
  }
}
Since they are provided through the webpack externals, you do not have to require them; you can use them with imports:
import * as fs from 'fs'
You can read more about this problem in my article.
I'm late to the party, but I also stumbled upon this problem recently. For the latecomers, you can use ngx-fs:
https://github.com/Inoverse/ngx-fs
Usage:
const fs = this._fsService.fs as any;
fs.readdir("\\", function (err, items) {
if (err) {
return;
}
for (let i = 0; i < items.length; i++) {
console.log(items[i]);
}
});
I had the same problem and solved it in an easier way:
Just download this project as a starting point; the requires are already in the webpack.config.js file (along with the integration of Angular, Electron and so on):
https://github.com/maximegris/angular-electron
Import 'fs' in home.ts (or in any other component), as mentioned by @Matthias Sommer above:
import * as fs from 'fs'
Use 'fs' :)
I need to run some code after nodeunit has successfully passed all tests.
I'm testing some Firebase wrappers, and the open Firebase reference blocks nodeunit from exiting after all tests are run.
I am looking for some hook or callback that runs after all unit tests have passed, so I can terminate the Firebase process and let nodeunit exit.
I haven't found a right way to do it yet.
Here is my temporary solution:
//Put a *LAST* test to clear all if needed:
exports.last_test = function(test){
  //do_clear_all_things_if_needed();
  setTimeout(process.exit, 500); // exit in 500 milliseconds
  test.done();
}
In my case, this is used to make sure the DB connection or some network connection gets killed either way. It works because nodeunit runs tests in series.
It's not the best way, maybe not even a good one, but it lets the tests exit.
For nodeunit 0.9.0
For a recent project, we counted the tests by iterating exports, then called tearDown to count the completions. After the last test exits, we called process.exit().
See the spec for full details. Note that this goes at the end of the file (after all the tests were added onto exports):
(function(exports) {
  // firebase is holding open a socket connection;
  // this just ends the process to terminate it
  var total = 0, expectCount = countTests(exports);

  exports.tearDown = function(done) {
    if( ++total === expectCount ) {
      setTimeout(function() {
        process.exit();
      }, 500);
    }
    done();
  };

  function countTests(exports) {
    var count = 0;
    for(var key in exports) {
      if( key.match(/^test/) ) {
        count++;
      }
    }
    return count;
  }
})(exports);
As far as I can tell from the nodeunit docs, there is no way to provide a callback after all tests have run.
I suggest that you use Grunt so you can create a test workflow with tasks, for example:
Install the command line tool: npm install -g grunt-cli
Install grunt in your project: npm install grunt --save-dev
Install the nodeunit grunt plugin: npm install grunt-contrib-nodeunit --save-dev
Create a Gruntfile.js like the following:
module.exports = function(grunt) {
  grunt.initConfig({
    nodeunit: {
      all: ['tests/*.js'] // point to where your tests are
    }
  });

  grunt.loadNpmTasks('grunt-contrib-nodeunit');

  grunt.registerTask('test', [
    'nodeunit'
  ]);
};
Create your custom task that will run after the tests by changing your Gruntfile to the following:
module.exports = function(grunt) {
  // both modules are needed by the generate-build-json task below
  var fs = require('fs'),
      os = require('os');

  grunt.initConfig({
    nodeunit: {
      all: ['tests/*.js'] // point to where your tests are
    }
  });

  grunt.loadNpmTasks('grunt-contrib-nodeunit');

  // this is just an example, you can do whatever you want
  grunt.registerTask('generate-build-json', 'Generates a build.json file containing date and time info of the build', function() {
    fs.writeFileSync('build.json', JSON.stringify({
      platform: os.platform(),
      arch: os.arch(),
      timestamp: new Date().toISOString()
    }, null, 4));
    grunt.log.writeln('File build.json created.');
  });

  grunt.registerTask('test', [
    'nodeunit',
    'generate-build-json'
  ]);
};
Run your test tasks with grunt test
I came across another way to deal with this. I have to say all the answers here are correct; however, when inspecting Grunt I found out that it runs nodeunit tests via a reporter, and the reporter offers a callback that fires when all tests are finished. It can be done like this:
In a scripts/ folder, create a file some_test.js, which can contain something like this:
// loads the default reporter, but any other can be used
var reporter = require('nodeunit').reporters.default;
// safer exit, but process.exit(0) will do the same in most cases
var exit = require('exit');

reporter.run(['test/basic.js'], null, function(){
  console.log(' now the tests are finished');
  exit(0);
});
The script can then be added to, say, the scripts object in package.json:

"scripts": {
  "nodeunit": "node scripts/some_test.js"
},
Now it can be run with
npm run nodeunit
The test runs in some_test.js can be chained, or they can be run one by one using npm.
Here is a simplified version of my cluster Express app:
/index.js
module.exports = process.env.CODE_COV
? require('./lib-cov/app')
: require('./lib/app');
/lib/app.js
var cluster = require('cluster'),
    express = require('express'),
    app = module.exports = express.createServer();

if (cluster.isMaster) {
  // Considering I have 4 cores.
  for (var i = 0; i < 4; ++i) {
    cluster.fork();
  }
} else {
  // do app configurations, then...

  // Don't listen to this port if the app is required from a test script.
  if (!module.parent.parent) {
    app.listen(8080);
  }
}
/test/test1.js
var app = require('../');
app.listen(7777);
// send requests to app, then assert the response.
Questions:
var app = require('../'); will not work in this cluster environment. Which of the worker apps should it return? Should it return the cluster object instead of an Express app?
Now, obviously setting the port in the test script will not work. How would you set a port within a test script to a cluster of apps?
How would you send requests to this cluster of apps?
The only solution I can think of is to conditionally turn off the clustering feature and run only one app if the app is requested from a test script (if (module.parent.parent) ...).
Any other way to test a clustered Express app with Mocha?
It's been quite a long time since I posted this question. Since no one has answered, I will answer it myself.
I kept the /index.js as it is:
module.exports = process.env.CODE_COV
? require('./lib-cov/app')
: require('./lib/app');
In /lib/app.js, which starts the cluster, I have the following code. In brief, I start the cluster only in a non-test environment. In the test environment the cluster is not started; instead a single app/worker is started, as determined by the cluster.isMaster && !module.parent.parent condition.
var cluster = require('cluster'),
    express = require('express'),
    app = module.exports = express.createServer();

if (cluster.isMaster && !module.parent.parent) {
  // Considering I have 4 cores.
  for (var i = 0; i < 4; ++i) {
    cluster.fork();
  }
} else {
  // do app configurations, then...

  // Don't listen to this port if the app is required from a test script.
  if (!module.parent.parent) {
    app.listen(8080);
  }
}
In the above case !module.parent.parent evaluates to true only if the application was not started by a test script:
module is the current /lib/app.js script.
module.parent is its parent /index.js script.
module.parent.parent is undefined if the application was started directly via node index.js.
module.parent.parent is the test script if the application was started via one of the test scripts.
Thus, the test script can safely start the app and set a custom port.
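To see the chain for yourself, a throwaway debug line can be dropped into /lib/app.js (hypothetical, not part of the original code):

// prints "index.js null" when started via `node index.js`,
// and "index.js test1.js" when app.js was pulled in by a test script
var path = require('path');
console.log(
  path.basename(module.parent.filename),
  module.parent.parent && path.basename(module.parent.parent.filename)
);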
/test/test1.js
var app = require('../');
app.listen(7777);
// send requests to app, then assert the response.
At the same time, if I need to run the application for real, i.e. not for testing, I run node index.js and it starts up the cluster of applications.
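For completeness, here is roughly what the test side can look like. This is only a sketch: it assumes mocha with supertest, which the original snippet does not prescribe.

// /test/test1.js: a sketch, supertest is an assumed choice
var request = require('supertest');
var app = require('../'); // resolves to the single, non-clustered app

describe('app', function() {
  it('responds on GET /', function(done) {
    request(app)
      .get('/')
      .expect(200, done);
  });
});

A nice side effect of supertest is that it binds the app to an ephemeral port by itself, so the explicit app.listen(7777) is not even required.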
I have a much simpler way of doing this
if (process.env.NODE_ENV !== 'test') {
  if (cluster.isMaster) {
    var numCPUs = require('os').cpus().length;
    console.log('total cpu cores on this host: ', numCPUs);
    for (var i = 0; i < numCPUs; i++) {
      console.log('forking worker...');
      cluster.fork();
    }
    cluster.on('online', function(worker) {
      console.log('Worker ' + worker.process.pid + ' is online.');
    });
    cluster.on('exit', function(worker, code, signal) {
      console.log('worker ' + worker.process.pid + ' died.');
    });
  } else {
    console.log('Im a worker');
    // application code
    setupServer();
  }
} else {
  // when running tests
  setupServer();
}
Just make sure to set the env to test when running the tests, e.g.:
NODE_ENV=test grunt test
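If you don't want to type the variable every time, a sketch of wiring it into package.json (the script name is an assumption):

"scripts": {
  "test": "NODE_ENV=test grunt test"
}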
I kind of liked your solution because of its simplicity. However, in an environment like an MVC framework for Node, you may end up chaining module.parent up to 11 times (seriously).
I think a better approach is to simply check which script node started with. Node's command-line arguments are available in process.argv.
The first item in this array is 'node', the executable, and the second is the path to the file that node started executing; in your case that would be index.js.
So instead of checking
module.parent.parent
  ^      ^
  |      |
(app.js) (index.js)
You could do something like this
var path = require('path');
var starter = process.argv[1].split(path.sep).pop();
where starter would be index or index.js, depending on what you started your server with (node index.js vs node index).
The check would then look like:
if (cluster.isMaster && starter === 'index.js') {
  cluster.fork();
}
It worked in my environments. I hope this helps!