I'm working with an express app and am using babel to transpile my code to be able to use some ES6/7/8 goodies.
The command I'm running to transpile the files is npx babel server --out-dir lib --watch. Then, to start my server, I run nodemon lib/server.js.
The issue I'm currently running into is that all errors are happening from the transpiled files in /lib, so the trace doesn't quite match with what is actually in the source, making it hard to debug.
So if, say, an exception is thrown on line 10 of a transpiled file in /lib, that line number doesn't match up with where the error actually is in the source, since the trace is with respect to the transpiled file.
Is there a way I can get it to map correctly?
Thanks!
@Brian, I suggest using the "babel-polyfill" and "babel-register" modules.
Add these modules in your main entry file; for example, see the code below.
This way you do not need to transpile your code separately, and you can debug against your original ES6+ source.
Just add a start command as shown in the snippet below; it will run your Node.js code and transpile all of your ES6+ features on the fly at runtime.
Example:
app.js
// Added for regenerator runtime!!
require('babel-polyfill');
// Transpile on the fly
require('babel-register')({
  ignore: false,
  only: /\/src/,
});
require('dotenv/config');

let server = require('./server');

server.listen(process.env.APP_PORT, () => {
  console.info(`application started on port ${process.env.APP_PORT}`);
});
package.json
"scripts": {
  "start": "node src/app.js",
Happy Coding :)
Related
Well, the problem is simple to explain, but although I have investigated I have not found a solution.
When I start the project with either nodemon src/ --ignore src/public/* (via the npm start script) or node src/, only two .js files in src/ are executed: web3.js and index.js. Both nodemon and node completely ignore the existence of database.js.
I thought the error might be in the code in database.js, so I created the file test.js, which just does a simple console.log to check whether it gets executed, but it doesn't run either.
The files are structured as follows:
src/
- database.js
- index.js
- test.js
- web3.js
package.json
config.toml
I tried to run the database.js file independently with node and got this error, which is weird because web3.js and index.js read the toml file in the same way.
> node src/database.js
node:internal/fs/utils:345
throw err;
^
Error: ENOENT: no such file or directory, open './config.toml'
Here is the code that throws the error, for what it's worth:
const toml = require("toml"); // assumed import, not shown in the original snippet
const { readFileSync } = require("fs");
const { mongoUriOp } = toml.parse(readFileSync("./config.toml", 'utf-8'));
EDIT:
I tried (thanks to a comment) changing the config path to "../config.toml" and the file ran with node database, but it still won't run with npm start or node src/.
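A relative path like "./config.toml" is resolved against the directory node was started from, not against the file doing the reading, which is why the result changes depending on how the project is launched. A minimal sketch of resolving the path against the module's own directory instead (assuming config.toml sits one level above src/, as in the layout above):
const path = require("path");
const toml = require("toml");
const { readFileSync } = require("fs");

// resolve relative to this file, so it works no matter where node is started from
const configPath = path.join(__dirname, "..", "config.toml");
const { mongoUriOp } = toml.parse(readFileSync(configPath, "utf-8"));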
From the comments, it looks like you figured it out by importing (require) all your files in index.js. I have a feeling that your package.json file has a field main, and it is set to src/index.js.
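For illustration, a hypothetical src/index.js along those lines (the file names come from the question; the explicit requires are an assumption, not code from the post):
// src/index.js - entry point referenced by "main"; nothing loads database.js
// or test.js unless something in this require graph pulls them in
require('./database');
require('./web3');
require('./test');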
It's difficult to understand what you are having trouble with.
I'm assuming you want to watch for changes in all of your JavaScript files inside the src folder.
You could try adding the watch flag to your command inside package.json:
"scripts": {
  "start": "nodemon --watch src --ignore src/public/*"
}
Link to doc for reference
I'm trying to get the ts-node option --experimental-loader working along with mocha, and having no luck. Until I started trying to compile ES6 modules, I used to be able to run mocha tests this way:
"test": "nyc --reporter=html mocha --require ts-node/register src/**/*.spec.ts"
That doesn't work anymore when generating ES6 modules.
I'd use the TS_NODE_COMPILER_OPTIONS='{\"module\": \"commonjs\" }' solution for testing, but that won't work for me because of another complication: I'm generating ES6 modules as a first step in my build, but also generating ES5/CommonJS modules using webpack and babel. That last step doesn't work unless I add .js to the end of my local TypeScript import statements.
But adding those .js extensions turns out to break the TS_NODE_COMPILER_OPTIONS='{\"module\": \"commonjs\" }' solution, which only works again if I go back and delete all of the .js extensions. I obviously don't want a test and build process where I have to keep going back and forth between adding and removing those extensions.
To simplify for now, I've taken out nyc, and I'm trying to run tests like this:
mocha -r ts-node/register --experimental-loader ./ts-loader.mjs src/**/*.spec.ts
I get no errors this way, but nothing happens either. It's like the src/**/*.spec.ts doesn't exist.
My do-nothing (for now) dummy loader looks like this:
console.log('ts-loader loaded');
export async function resolve(specifier, context, defaultResolve) {
  console.log('resolve');
  return defaultResolve(specifier, context, defaultResolve);
}

export async function getFormat(url, context, defaultGetFormat) {
  console.log('getFormat');
  return defaultGetFormat(url, context, defaultGetFormat);
}

export async function getSource(url, context, defaultGetSource) {
  console.log('getSource');
  return defaultGetSource(url, context, defaultGetSource);
}

export async function transformSource(source, context, defaultTransformSource) {
  console.log('transformSource');
  return defaultTransformSource(source, context, defaultTransformSource);
}

export function getGlobalPreloadCode() {
  console.log('getGlobalPreloadCode');
  return '';
}
I can tell it gets loaded because the 'ts-loader loaded' message appears, but none of the functions ever get called.
I've tried other permutations, but just get errors like src/**/*.spec.ts being treated as a literal file name instead of a glob, or errors about modules not being found.
I was hoping to see my loader invoked for every import being handled, and then figuring out how to manipulate the file extensions, but I haven't managed to get that far yet. Any suggestions?
I'm using node v14.15.1. The full code for my project, with a working build, but broken tests, can be found here: https://github.com/kshetline/tubular_math
I finally found a solution, although it wasn't along the lines I was originally looking for. I gave up on trying to make mocha happy with the extra .js extensions, and found a way to make webpack happy without them. So...
import { Angle, Mode, Unit } from './angle.js';
...went back to...
import { Angle, Mode, Unit } from './angle';
My test script looks like this:
"scripts": {
  "build": "rimraf dist/ && tsc && webpack && webpack --env target=umd",
  "prepublishOnly": "npm run build",
  "lint": "eslint 'src/**/*.ts'",
  "test": "TS_NODE_COMPILER_OPTIONS='{\"module\":\"commonjs\"}' nyc --reporter=html mocha --require ts-node/register src/**/*.spec.ts"
},
And finally, most importantly, I figured out how to make webpack 5.x (4.x didn't have this issue) happy with local JavaScript imports that don't have a .js extension, which webpack now otherwise insists upon if your package.json says "type": "module":
module: {
  rules: [
    { test: /\.js$/, use: 'babel-loader', resolve: { fullySpecified: false } }
  ]
}
...where setting fullySpecified to false is the key to the solution.
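For orientation, here is roughly where that rule sits in a complete webpack 5 config; the entry/output values below are placeholders, not the project's actual settings:
// webpack.config.js - illustrative skeleton; entry/output paths are placeholders
module.exports = {
  mode: 'production',
  entry: './dist/index.js',
  output: { filename: 'index.min.js' },
  module: {
    rules: [
      // fullySpecified: false lets webpack resolve extensionless local imports
      // even when package.json declares "type": "module"
      { test: /\.js$/, use: 'babel-loader', resolve: { fullySpecified: false } }
    ]
  }
};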
UPDATE: The above example was worked out on a deliberately simple project, an easy starting point for generating an npm package with ESM modules. Now that I'm trying something a little more advanced, I've run into a snag again running unit tests. As soon as a *.spec.ts file directly or indirectly imports external code, module loading fails. I can only test code with no external dependencies until I figure out how to fix that problem. Apparently using TS_NODE_COMPILER_OPTIONS='{\"module\":\"commonjs\"}' is only letting me go one level deep into fixing the basic problem of running mocha along with ts-node.
I am trying to run tests with async/await using mocha. The project architecture was set up before I started working on it, and I have been trying to update its Node version to 8.9.4. The project is an isomorphic application and uses Babel, gulp and webpack to run.
To run the tests we run a gulp task. There are two .babelrc files in the project: one in the root folder of the project and another in the test folder.
Both have the same configuration:
{
  "presets": [
    ["env", {"exclude": ["transform-regenerator"]}],
    "react",
    "stage-1"
  ],
  "plugins": [
    "babel-plugin-root-import"
  ]
}
When I run the app locally, no error is returned anymore. However, when I run the tests with gulp test:api I consistently get the error ReferenceError: regeneratorRuntime is not defined.
This is my gulp file in the test folder:
var gulp = require('gulp')
var gutil = require('gulp-util')
var gulpLoadPlugins = require('gulp-load-plugins')
var plugins = gulpLoadPlugins()
var babel = require('gulp-babel')

require('babel-register')({
  presets: ["es2015", "react", "stage-1"]
});

// This is a cheap way of getting 'test:browser' to run fully before 'test:api' kicks in.
gulp.task('test', ['test:browser'], function () {
  return gulp.start('test:api')
});

gulp.task('test:api', function () {
  global.env = 'test'
  gulp.src(['test/unit-tests/server/**/*.spec.js'], {read: false})
    .pipe(plugins.mocha({reporter: 'spec'}))
    .once('error', function (error) {
      console.log(error)
      process.exit(1);
    })
    .once('end', function () {
      process.exit(0);
    })
});

gulp.task('default', ['test']);
Any help on why this is happening would be much appreciated.
Node version 8 already has support for async/await so you do not need Babel to transform it; indeed, your root .babelrc includes this preset to exclude the regenerator that would transform async/await (and introduce a dependency on regeneratorRuntime):
["env", {"exclude": ["transform-regenerator"]}]
However, in your test gulpfile, the babel-register call does not use this preset. Instead, it specifies the preset "es2015", which does include the unwanted transform-regenerator (as you can see at https://babeljs.io/docs/plugins/preset-es2015/). If you change it to match the presets in the root .babelrc, you'll get more consistent results.
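For example, the babel-register call in the test gulpfile could mirror the root .babelrc like this (a sketch based on the config quoted above; untested against the project itself):
// register Babel with the same presets as the root .babelrc,
// so transform-regenerator stays excluded for the tests too
require('babel-register')({
  presets: [
    ["env", { "exclude": ["transform-regenerator"] }],
    "react",
    "stage-1"
  ],
  plugins: ["babel-plugin-root-import"]
});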
Strangely, I ran into this issue after I upgraded to Node v8.10.0 from v8.6.x. I had used babel-register like so in my test-setup.js:
require('babel-register')();
The testing tools are Mocha, Chai, Enzyme + JSDOM. I was getting the same issue when making an async call to an API, and also while using generator functions via sagas. Adding babel-polyfill seemed to solve the issue:
require('babel-register')();
require('babel-polyfill');
I guess even the Babel docs themselves advocate using the polyfill for generators and such:
Polyfill not included
You must include a polyfill separately when using features that require it, like generators.
I ran into the same issue when running mocha tests from within Visual Studio Code.
The solution was to add the necessary Babel modules in the Visual Studio Code settings.json:
"mocha.requires": [
  "babel-register",
  "babel-polyfill"
],
I've run into this error before myself when using async/await with mocha and nyc, specifically when attempting to run coverage. There's never an issue when running the tests with mocha alone, only when running them through nyc for coverage.
11) Filesystem:removeDirectory
Filesystem.removeDirectory()
Should delete the directory "./tmp".:
ReferenceError: regeneratorRuntime is not defined
at Context.<anonymous> (build/tests/filesystem.js:153:67)
at processImmediate (internal/timers.js:461:21)
You can fix the issue a couple of different ways.
Method 1 - NPM's package.json:
...
"nyc": {
  "require": [
    "@babel/register",
    "@babel/polyfill"
  ],
  ...
},
...
It really depends which polyfill package you're using. It's recommended to use the scoped (@babel) variant: @babel/polyfill. However, if you're using babel-polyfill then ensure that's what you reference.
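If you're on the older unscoped packages instead, the equivalent nyc block would presumably reference those names:
"nyc": {
  "require": [
    "babel-register",
    "babel-polyfill"
  ]
}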
Method 2 - Direct Import
your-test-file.js (es6/7):
...
import '#babel/polyfill';
...
OR
your-test-file.js (CommonJS):
...
require("#babel/polyfill");
...
Don't assign it to a variable; just import or require the package, again using the package name for the variant you've installed. It includes the polyfill and resolves the error.
HTH
I've got a React app going; it runs on an Express server and is bundled with webpack. My issue is that every time I restart the server, like when I am making changes to it, it takes forever to rebuild the frontend bundle, even though I haven't made any changes to the frontend.
It would be nice to just reload the server portion and leave the current frontend bundle intact when making server/api changes that don't involve the frontend bundle.
Here is the code that runs in a dev environment:
const compiler = webpack(webpackConfig)
const middleware = webpackMiddleware(compiler, {
  publicPath: webpackConfig.output.publicPath,
  contentBase: 'src',
  stats: {
    colors: true,
    hash: false,
    timings: true,
    chunks: false,
    chunkModules: false,
    modules: false
  }
})

app.use(middleware)
app.use(webpackHotMiddleware(compiler))

app.get('*', (req, res) => {
  res.write(middleware.fileSystem.readFileSync(path.join(__dirname, 'build/app.html')))
  res.end()
})
Is there a smarter way to do this? Is it possible to leave the current frontend bundle in memory and just reload the server? Or can I detect whether the bundle needs to be updated and skip the rebuild when it doesn't?
Any tips, advice and suggestions are welcome! Let me know if you need any other info. Thanks!
Chokidar solution
If using webpack-dev-tools, a great library for watching changes is chokidar
Chokidar does still rely on the Node.js core fs module, but when using fs.watch and fs.watchFile for watching, it normalizes the events it receives, often checking for truth by getting file stats and/or dir contents.
Here is a small example of using chokidar to watch only the target folder. By targeting just a specific folder you can leave the frontend intact. I haven't tried this for your specific use case, but at first sight it seems it may suit your needs.
var production = process.env.NODE_ENV === 'production'

if (!production) {
  var chokidar = require('chokidar')
  var watcher = chokidar.watch('./targetfolder')
  watcher.on('ready', function () {
    watcher.on('all', function () {
      console.log("Clearing /targetfolder/ module cache from server")
      Object.keys(require.cache).forEach(function (id) {
        if (/[\/\\]targetfolder[\/\\]/.test(id)) delete require.cache[id]
      })
    })
  })
}
There's a great example on Github, called Ultimate Hot Reloading Example
NB: The webpack-dev-server doesn't write files to disk; it serves the result from memory through an Express instance. But webpack --watch does write files to disk.
Flag solution
You can use webpack's --watch flag.
In your package.json, in the script block that starts your server (or the one that runs webpack), add webpack --progress --colors --watch.
See Webpack documentation, it says:
We don’t want to manually recompile after every change…
When using watch mode, webpack installs file watchers to all files, which were used in the compilation process. If any change is detected, it’ll run the compilation again. When caching is enabled, webpack keeps each module in memory and will reuse it if it isn’t changed.
Example in package.json:
"scripts": {
  "dev": "webpack --progress --colors --watch"
}
I have this problem in a Spring Boot app, where I can rebuild my bundle rapidly, but that doesn't necessarily make the server look inside an actual folder and pick up the new bundle in real time. So really what you need to do is find a way to configure your server to always look in a local folder for the bundle.js file instead of pulling it from the WAR/JAR or wherever it normally pulls it from. It's not a "webpack issue"; it's an issue of how to make the server read bundle.js directly from the folder. I would give you the Spring way of doing it, but that's not your architecture.
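For the Express setup in the question, one hedged way to do that is to serve the bundle as a static file straight from the build folder on disk, so webpack (or webpack --watch) can rewrite it without the server restarting. A rough sketch, with the folder name assumed rather than taken from the question:
// sketch: serve the on-disk bundle instead of an in-memory or in-archive copy
const express = require('express');
const path = require('path');

const app = express();

// 'build' is a placeholder for wherever webpack writes bundle.js
app.use(express.static(path.join(__dirname, 'build')));

app.listen(process.env.PORT || 3000);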
How can I properly run jasmine tests using jasmine-node and RequireJS?
I already tried something like this, but it doesn't work (CoffeeScript):
requirejs = require 'requirejs'
requirejs.config { baseUrl: __dirname + '/../' }

requirejs ['MyClasses', 'FooClass'], (MyClasses, FooClass) ->
  describe "someProp", ->
    it "should be true", ->
      expect(MyClasses.FooClass.someProp).toEqual true
Finished in 0 seconds 0 tests, 0 assertions, 0 failures
My goal is to write modular classes using RequireJS and CoffeeScript, and the classes must be testable with jasmine-node (on a CI server).
How can I do that please?
Thank you!
EDIT:
I execute the tests with this command (in the directory with the tests):
jasmine-node ./
Jonathan Tran is right, it's the spec in the file name for me.
I have this:
"scripts": {
  "install": "cake install",
  "test": "node_modules/jasmine-node/bin/jasmine-node --verbose --coffee --runWithRequireJs --captureExceptions spec"
},
in my package.json, and I installed jasmine-node from inside the project with npm install jasmine-node.
Minimal test file, called RingBuffer.spec.coffee:
require ["disrasher"], (mod) ->
  describe "A test", ->
    it "should fail", ->
      expect(1).toEqual 0
It doesn't actually work at the moment because I don't think I have the project hooked up with require properly yet. I'll post back here when it does.
If anyone is running into this, much has changed since this question was asked. The first thing to check is still that you're naming your files like thing.spec.coffee.
But if you're running the tests and still seeing the output "0 tests", you need to make a JavaScript file with your requirejs config. This must be JavaScript, not CoffeeScript.
// requirejs-setup.js
requirejs = require('requirejs');
requirejs.config({ baseUrl: __dirname + '/../' });
Then tell jasmine to use this setup file:
jasmine-node --coffee --requireJsSetup requirejs-setup.js ./
One nice thing about this is that you don't need to include the requirejs config in every spec file.
I've tested this on node v12.16, jasmine-node v3.0.0, and requirejs v2.3.6.
It seems that jasmine-node and require.js are completely incompatible. That said, it is possible to run jasmine tests on require.js modules in node using a bit of extra code. Take a look at https://github.com/geddski/amd-testing to see how.