Console.log statements output nothing at all in Jest - jestjs

console.log statements output nothing at all in Jest. This was working for me yesterday and, all of a sudden, it's not working today. I have made zero changes to my config and haven't installed any updates.
I'm not using the --forceExit option. Still seeing this issue.

Jest suppresses console log messages by default. To show them, set the silent option to false on the command line:
npm run test -- --silent=false

You can run both options together, like --watch --verbose false, if you also want to watch the files and see the output.
For one-time runs, just use --verbose false. These flags can also be baked into an npm script, as in the sketch below.
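A sketch of wiring those flags into package.json (the script name test:logs is invented for illustration):
"scripts": {
  "test": "jest",
  "test:logs": "jest --silent=false --verbose=false"
}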

As per a comment on https://github.com/facebook/jest/issues/2441, try setting verbose: false (or removing it) in the jest options in package.json.

This is a pretty old question and there's still no accepted answer. However, none of the suggested solutions worked for me (settings like --silent, --verbose, etc.). The main problem is that Jest replaces the global console object, so the easiest solution is to not use the global console at all.
Instead import dedicated log functions from the console module and work with those:
import { error } from "console";
error("This is an error");
As easy as that.
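A minimal sketch of that inside a test file (the test body itself is arbitrary):
import { log, error } from "console";

test("prints even though Jest patches global.console", () => {
  // These go straight to the console module, bypassing Jest's patched global
  log("visible in the test output");
  error("so is this");
  expect(true).toBe(true);
});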

Try using console.debug() instead.
Run console.debug('Message here', yourValueHere) inside the test function and it should show in the console output when running the test script. You can verify it works by using Ctrl+F to find Message here in the standard output.
This does the trick of showing output in the console, though I understand it isn't quite an answer to how to use console.log.
I am running @testing-library/jest-dom and jest-junit 12.0.0 as devDependencies.
jest-junit has a minimal configuration of
"jest-junit": {
"usePathForSuiteName": "true"
},
in package.json. This is mainly to configure coverage reporting.
jest is configured like this:
"jest": {
"testMatch": [
"**/__tests__/**/*.[jt]s?(x)",
"**/?(*.)+(spec|test).[jt]s?(x)",
"!**/utilities.ts",
],

Check your command line flags in package.json to make sure you don't have --silent in there.
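For example, a script like this (a hypothetical illustration) would swallow all test output:
"scripts": {
  "test": "jest --silent"
}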

In addition to the --verbose option, which can cause this as mentioned, be aware that --watch may also cause this bug.

One potential reason that logging isn't printed is that console.log has been mocked, with something like the below:
// jest-setup.js
global.console = {
  // eslint-disable-next-line no-undef
  log: jest.fn(), // console.log calls are ignored in tests
  // log: console.log,
  // Keep native behaviour for the other methods; use those to print
  // things in your own tests, not console.log
  error: console.error,
  warn: console.warn,
  info: console.info,
  debug: console.debug,
};
// package.json
"jest": {
"preset": "react-native",
"moduleFileExtensions": [
"ts",
"tsx",
"js",
"jsx",
"json",
"node"
],
"setupFilesAfterEnv": [
"#testing-library/jest-native/extend-expect",
"<rootDir>/src/config/jest-setup.js"
],
"testMatch": [
"<rootDir>/src/**/__tests__/**/*.test.{ts,tsx}"
]
},
This is commonly used when you want to disable console.log in Jest.
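If you want suppression by default but still need output occasionally, a sketch of a setup file that spies on console.log instead of replacing the whole global console, so any single test can restore it:
// jest-setup.js (alternative sketch): spy rather than replace
beforeEach(() => {
  // Swallow console.log by default
  jest.spyOn(console, "log").mockImplementation(() => {});
});

afterEach(() => {
  // Undo the spy so the next test starts clean
  jest.restoreAllMocks();
});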

Also be sure that your jest config does not have silent: true. In my case, I didn't realize that someone else had added that to our config.
I don't see it in the list of config options, but the command line flag is documented here.

If using WebStorm with a Jest run configuration, click on the file name instead of the test name.

After trying a few of the config options in the previous replies, using console.debug() instead of console.log() is what worked.

In my case, the issue was caused by the .only modifier, as in:
it.only() or test.only('some text', () => {})

According to the v27 docs, silent is what you want here. verbose: false (the default) prevents Jest from outputting the result of every test in a hierarchy, while silent: true will:
Prevent tests from printing messages through the console.
Use npx jest --silent=false if you want to run Jest with that option from the CLI. Tested this just now with console.log and it works as expected.

Tried the advice given regarding Jest config settings to no avail. Instead, in my case, the issue was related to not awaiting asynchronous code:
test("test", async () => {
console.log("Does output")
new Promise(resolve => {
// some expectation depending on async code
setTimeout(() => resolve(console.log("Does not output")) , 1)
})
})
Rather, awaiting the promise does output the async log:
test("test", async () => {
console.log("Does output")
await new Promise(resolve => {
// some expectation depending on async code
setTimeout(() => resolve(console.log("Does output")) , 1)
})
})
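Equivalently, returning the promise from the test also works, since Jest waits for a returned promise to settle before ending the test (a sketch):
test("test", () => {
  return new Promise(resolve => {
    setTimeout(() => resolve(console.log("Does output")), 1)
  })
})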
Possibly related background:
https://github.com/facebook/jest/issues/2441

Try using console.info(), which is an alias for console.log(). I tried almost all the above answers but console.log() still didn't work for me by any means, so I used console.info(), which did the job.

This is what worked for me: jest --verbose true

In my case the problem was that the logs were emitted while the module was being required, i.e. before the actual test case started. Changing from a top-level import to a require inside the test case fixed the problem.
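A sketch of that change (the module name ./myModule is hypothetical):
// Before: a top-level import evaluates the module (and fires its logs)
// during collection, before the test case runs
// import { doThing } from "./myModule";

test("module logs are visible", () => {
  // After: requiring inside the test defers evaluation until the test runs
  const { doThing } = require("./myModule");
  doThing();
});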

In my case the problem was importing the functions from the compiled version (in the dist folder) instead of from the src folder, so an old build was being used. Rebuilding the project and/or importing from src fixed my issue.

On macOS with Jest version 26.6.3 I had to append --silent="false".

Renaming my file from index.spec.js to index.test.js did the trick for me.

Related

Express + Jest. Test files are running sequentially instead of in parallel

I have an Express.js server which uses jest and supertest as its testing framework.
It has been working excellently.
When I call my test npm script, it runs npx jest and all of my test files run in parallel.
However, I ran my tests recently and they ran sequentially, which takes a very long time, and they have done so ever since.
I haven't changed any jest or npm configuration, nor have I changed my test files themselves.
Has anyone experienced this? Or is it possible that something in my configuration is incorrect?
jest.config
export default {
  setupFilesAfterEnv: ['./__tests__/jest.setup.js'],
}
jest.setup.js
import { connectToDatabase } from '/conn'

// Override the dotenv to use a different .env file
require('dotenv').config({
  path: '.env.test',
})

beforeAll(() => {
  connectToDatabase()
})

test('', () => {
  // just a dummy test case
})
EDIT: Immediately after posting the question, I re-ran the tests and they ran in parallel, without me changing anything. If anyone has any knowledge around this, I'd be interested to get a second opinion.
After intermittently switching between parallel and sequential runs for unknown reasons, I have found it works consistently after adding the --no-cache arg to the npx jest call.
See below where I found the answer:
GitHub -> jest not always running in parallel
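A sketch of wiring that flag into the npm script (assuming the test script currently runs npx jest):
"scripts": {
  "test": "npx jest --no-cache"
}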

Can't enable no-console using eslint cli

I'm using eslint, and in my configuration file I have "no-console": "off".
I want to turn it on for my CI system, so I've been using the command line (Vue CLI syntax):
vue-cli-service lint --rule '"no-console": "error"'
This doesn't work.
However, if I invert things (set error in the configuration, and pass off as a flag) it does work.
Anyone know why?
EDIT: it should probably look like vue-cli-service --rule 'no-console: 2'.
PS: "error" may work too, I guess.
You can put a config in a lot of places, but the usual one is probably .eslintrc.js, in which you can write:
module.exports = {
  [...]
  // add your custom rules here
  rules: {
    "no-console": "off",
  },
}
As shown here: https://eslint.org/docs/rules/no-console
This one should work, but it always depends on how your project is set up too.
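For comparison, plain ESLint accepts an inline rule override through its own --rule flag, which may help narrow down whether the issue is in the Vue CLI wrapper (a sketch; src/ is a placeholder path):
npx eslint --rule '{"no-console": "error"}' src/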

How to disable warnings when node is launched via a (global) shell script

I am building a CLI tool with Node, and want to use the fs.promises API. However, when the app is launched, there's always an ExperimentalWarning, which is super annoying and messes up the interactive prompts. How can I disable this warning/all warnings?
I'm testing this with the latest node v10 lts release on Windows 10.
To use the CLI tool globally, I have added this to my package.json file:
{
  // ...
  "preferGlobal": true,
  "bin": { "myapp": "./index.js" }
  // ...
}
And have run npm link to link the ./index.js script. Then I am able to run the app globally simply with myapp.
After some research I noticed that there are generally two ways to disable the warnings:
set the environment variable NODE_NO_WARNINGS=1
call the script with node --no-warnings ./index.js
Although I was able to disable the warnings with the two methods above, there seems to be no way to do that while directly running the myapp command.
The shebang I placed in the entrance script ./index.js is:
#!/usr/bin/env node
// my code...
I have also read other discussions on modifying the shebang, but haven't found a universal/cross-platform way to do this - to either pass argument to node itself, or set the env variable.
If I publish this npm package, it would be great if there were a way to make sure the warnings for this single package are disabled in advance, instead of having each individual user tweak their environment themselves. Are there any hidden npm package.json configs that allow this?
Any help would be greatly appreciated!
I am now using a launcher script to spawn a child_process to work around this limitation. Ugly, but it works with npm link, global installs and whatnot.
#!/usr/bin/env node
const { spawnSync } = require("child_process");
const { resolve } = require("path");
// Say our original entrance script is `app.js`
const cmd = "node --no-warnings " + resolve(__dirname, "app.js");
spawnSync(cmd, { stdio: "inherit", shell: true });
As it's kind of like a hack, I won't be using this method next time, and will instead be wrapping the original APIs in a promise manually, sticking to util.promisify, or using the blocking/sync version of the APIs.
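For what it's worth, a variation on that launcher which also forwards CLI arguments to the child process (still a sketch; app.js remains the assumed entry script):
#!/usr/bin/env node
const { spawnSync } = require("child_process");
const { resolve } = require("path");

// Re-run the current node binary with the flag, passing user args through
const result = spawnSync(
  process.execPath,
  ["--no-warnings", resolve(__dirname, "app.js"), ...process.argv.slice(2)],
  { stdio: "inherit" }
);
process.exit(result.status === null ? 1 : result.status);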
I configured my test script like this:
"scripts": {
"test": "tsc && cross-env NODE_OPTIONS=--experimental-vm-modules NODE_NO_WARNINGS=1 jest"
},
Notice the NODE_NO_WARNINGS=1 part. It disables the warnings I was getting from setting NODE_OPTIONS=--experimental-vm-modules.
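Note that --no-warnings is itself on the list of flags NODE_OPTIONS accepts, so a variant without the extra variable might look like this (a sketch, not tested across shells):
"test": "tsc && cross-env NODE_OPTIONS=\"--experimental-vm-modules --no-warnings\" jest"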
Here's what I'm using to run node with a command line flag:
#!/bin/sh
_=0// "exec" "/usr/bin/env" "node" "--experimental-repl-await" "$0" "$@"
// Your normal Javascript here
The first line tells the shell to use /bin/sh to run the script. The second line is a bit magical: to the shell it's a variable assignment _=0// followed by "exec" ....
Node sees it as a variable assignment followed by a comment, so it's almost a no-op apart from the side effect of assigning 0 to _.
The result is that when the shell reaches line 2 it will exec node (via env) with any command line options you need.
New answer: You can also catch emitted warnings in your script and choose which ones to prevent from being logged:
const originalEmit = process.emit;
process.emit = function (name, data, ...args) {
  if (
    name === `warning` &&
    typeof data === `object` &&
    data.name === `ExperimentalWarning`
    // if you want to only stop certain messages, test for the message here:
    // && data.message.includes(`Fetch API`)
  ) {
    return false;
  }
  return originalEmit.apply(process, arguments);
};
Inspired by this patch to yarn

How can I get jasmine-ts to execute my specs with a specific seed?

I am running unit tests using jasmine-ts version 0.3.0.
The previous version worked fine, but the moment I upgraded, I'd get the output:
No specs found
I found a GitHub issue (and this one) where someone commented:
All arguments passed to jasmine-ts need to have one of these in that argument argv.config || process.env.JASMINE_CONFIG_PATH || "spec/support/jasmine.json";
Indeed, creating a jasmine.json file solved the "No specs issue":
{
  "spec_dir": "../src/**/specs",
  "spec_files": [
    "**/*[sS]pec.ts"
  ],
  "stopSpecOnExecutionFailure": false,
  "random": true
}
Running my tests randomly, I discovered that I had some failures, so I wanted to seed the jasmine execution with a specific seed to reproduce the issue.
I tried adding a "seed": 123 config to my jasmine.json, but that didn't work. I found some docs describing what jasmine.json is supposed to look like, and it didn't contain any mention of a seed config.
What did mention seed was the section about command-line options here.
So I tried:
jasmine-ts --seed=123 --config="./jasmine.json"
(Remember, the config file is apparently required - or at least I didn't see any option for specifying where my specs are without using it)
This however did not work as jasmine logged:
Randomized with seed 94263
The config file that I provide apparently overrides the command-line options. I can see this by specifying the option --random=false, but the output still says Randomized with seed ..., since my jasmine.json contains "random": true.
So... I can't specify seed in jasmine.json, and specifying --seed=... has no effect.
How can I set the seed using jasmine-ts 0.3.0 in that case?
Ran into the same problem with regular Jasmine and found that loadConfig doesn't copy the seed over for some reason, but there is a method on the Jasmine instance you create when running it from your own script:
const Jasmine = require("jasmine");

const jasmine = new Jasmine();
jasmine.seed(1234);
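A fuller runner sketch built on that (loadConfigFile and execute are standard APIs of the jasmine npm module; the config path is whatever yours is):
const Jasmine = require("jasmine");

const runner = new Jasmine();
runner.loadConfigFile("spec/support/jasmine.json");
runner.seed(1234); // reproduce the failing order
runner.execute();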
As of jasmine-ts version 0.3.2 (here's the closed issue), command line arguments now get forwarded to jasmine, so given a package.json like:
{
  ...
  "scripts": {
    "test": "jasmine-ts.cmd --config=jasmine.json"
  }
}
You can then run npm run test -- --seed=1234 from the command line.
Was running into this issue while upgrading to Angular 12. I had a particular seed that was failing due to the order of some async tests that weren't properly resolved between executions.
I was able to get a specific seed to run by updating the karma.conf.js file:
client: {
  captureConsole: true,
  clearContext: false, // leave Jasmine Spec Runner output visible in browser
  jasmine: {
    seed: 19224, // set the value here and comment it out when done
    random: false // set this to false while running the seed; switch back to true for normal builds
  }
}
Doing this was a simple way for me to run a specific jasmine seed.

Mocha - Running test ReferenceError: regeneratorRuntime is not defined

I am trying to run tests with async/await using mocha. The project architecture was set up before I started working on it, and I have been trying to update its Node version to 8.9.4. The project is an isomorphic application and uses babel, gulp and webpack to run.
To run the tests we run a gulp task. There are two .babelrc files in the project: one in the root folder of the project and another in the test folder.
Both have the same configuration:
{
  "presets": [
    ["env", {"exclude": ["transform-regenerator"]}],
    "react",
    "stage-1"
  ],
  "plugins": [
    "babel-plugin-root-import"
  ]
}
When I run the app locally, there is no error returned anymore. However, when I run the tests with gulp test:api I consistently get the error: ReferenceError: regeneratorRuntime is not defined
This is my gulp file in the test folder:
var gulp = require('gulp')
var gutil = require('gulp-util')
var gulpLoadPlugins = require('gulp-load-plugins')
var plugins = gulpLoadPlugins()
var babel = require('gulp-babel')

require('babel-register')({
  presets: ["es2015", "react", "stage-1"]
});

// This is a cheap way of getting 'test:browser' to run fully before 'test:api' kicks in.
gulp.task('test', ['test:browser'], function () {
  return gulp.start('test:api')
});

gulp.task('test:api', function () {
  global.env = 'test'
  gulp.src(['test/unit-tests/server/**/*.spec.js'], {read: false})
    .pipe(plugins.mocha({reporter: 'spec'}))
    .once('error', function (error) {
      console.log(error)
      process.exit(1);
    })
    .once('end', function () {
      process.exit(0);
    })
});

gulp.task('default', ['test']);
Any help on why this is happening would be much appreciated.
Node version 8 already has support for async/await so you do not need Babel to transform it; indeed, your root .babelrc includes this preset to exclude the regenerator that would transform async/await (and introduce a dependency on regeneratorRuntime):
["env", {"exclude": ["transform-regenerator"]}]
However, the test setup's configuration (the babel-register call in the gulpfile) does not specify this preset. Instead, it specifies the preset "es2015", which does include the unwanted transform-regenerator (as you can see at https://babeljs.io/docs/plugins/preset-es2015/). If you change this to match the presets in the root .babelrc, you'll get more consistent results.
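A sketch of the test setup brought in line with the root .babelrc (same presets, so the regenerator transform stays excluded):
// in the test gulpfile, mirror the root .babelrc presets
require('babel-register')({
  presets: [
    ["env", { "exclude": ["transform-regenerator"] }],
    "react",
    "stage-1"
  ]
});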
Strangely, I ran into this issue after I upgraded to Node v8.10.0 from v8.6.x. I had used babel-register like so in my test-setup.js:
require('babel-register')();
and the testing tools are Mocha, chai, enzyme + JSDOM. I was getting the same issue when making an async call to an API, and also while using generator functions via sagas. Adding babel-polyfill seemed to solve the issue:
require('babel-register')();
require('babel-polyfill');
I guess even the Babel docs themselves advocate using the polyfill for generators and such:
Polyfill not included
You must include a polyfill separately when using features that require it, like generators.
Ran into the same issue when running mocha tests from within Visual Studio Code.
The solution was to add the necessary babel plugins in the Visual Studio Code settings.json :
"mocha.requires": [
"babel-register",
"babel-polyfill"
],
I've run into this error before myself when using async/await, mocha, and nyc while attempting to run coverage. There's never an issue when using mocha to run the tests, only when running mocha tests under nyc for coverage.
11) Filesystem:removeDirectory
Filesystem.removeDirectory()
Should delete the directory "./tmp".:
ReferenceError: regeneratorRuntime is not defined
at Context.<anonymous> (build/tests/filesystem.js:153:67)
at processImmediate (internal/timers.js:461:21)
You can fix the issue a couple of different ways.
Method 1 - NPM's package.json:
...
"nyc": {
  "require": [
    "@babel/register",
    "@babel/polyfill"
  ],
  ...
},
...
It really depends which polyfill package you're using. It's recommended to use the scoped (@babel) variant: @babel/polyfill. However, if you're using babel-polyfill, then ensure that's what you reference.
Method 2 - Direct Import
your-test-file.js (es6/7):
...
import '@babel/polyfill';
...
OR
your-test-file.js (CommonJS):
...
require("#babel/polyfill");
...
Don't assign it to a variable; just import or require the package, again using the package name for the variant you've sourced. It includes the polyfill and resolves the error.
HTH
