How to remove console.warn messages when I run npm run test? - node.js

I am new to React. When I run npm run test I get a flood of the warnings below; please give your inputs:
console.warn node_modules/react-intl-universal/lib/index.js:101
react-intl-universal locales data "null" not exists

You likely need to initialize react-intl-universal somewhere in your test setup.
A good place to start is how react-intl-universal tests their own code.
Basically you'll need this executed before any tests that depend on it run (which sounds like a lot of them?):
import intl from 'react-intl-universal';
// common locale data
require('intl/locale-data/jsonp/en.js');
// app locale data
const locales = {
  "en-US": require('./locales/en-US.js'),
};
intl.init({ locales, currentLocale: "en-US" });
If it's just a few places then I'd do it the way they do, but if it affects your entire codebase you're better off moving it into your test config/setup.
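For example, assuming your tests run on Jest (which npm run test usually does in a React project) and that ./locales/en-US.js exists as in the snippet above, a minimal sketch is to move the initialization into a setup file. With Create React App, src/setupTests.js is picked up automatically; otherwise point Jest's setupFilesAfterEnv option at it (the paths here are assumptions for illustration):

// jest.config.js (skip this if Create React App already loads src/setupTests.js)
module.exports = {
  setupFilesAfterEnv: ['<rootDir>/src/setupTests.js'],
};

// src/setupTests.js – runs once per test file, before the tests themselves
import intl from 'react-intl-universal';

require('intl/locale-data/jsonp/en.js');

intl.init({
  locales: { 'en-US': require('./locales/en-US.js') },
  currentLocale: 'en-US',
});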

Related

BeforeEach step is repeated with cy.session using cypress-cucumber-preprocessor

I have a Cypress project where I use the Cypress session API to maintain a session throughout features.
Now I am trying to switch from the deprecated Klaveness Cypress Cucumber Preprocessor to its replacement, Badeball's Cypress Cucumber Preprocessor. But I am running into an issue: the beforeEach() step where my authentication takes place gets repeated several times before the tests start. Eventually, Cypress "snaps out of it" and starts running the actual tests - but obviously this is very resource- and time-intensive; something is going wrong.
My setup:
Dependencies:
"cypress": "^9.6.1",
"#badeball/cypress-cucumber-preprocessor": "^9.1.3",
index.ts:
beforeEach(() => {
  let isAuthInitialized = false;

  function spyOnAuthInitialized(window: Window) {
    window.addEventListener('react:authIsInitialized', () => {
      isAuthInitialized = true;
    });
  }

  login();
  cy.visit('/', { onBeforeLoad: spyOnAuthInitialized });
  cy.waitUntil(() => isAuthInitialized, { timeout: 30000 });
});
login() function:
export function login() {
  cy.session('auth', () => {
    cy.authenticate();
  });
}
As far as I can see, I follow the docs for cy.session almost literally.
My authenticate command has only application specific steps, it does include a cy.visit('/') - after which my application is redirected to a login service (different domain) and then continues.
The problem
cy.session works OK: it creates a session on the first try, then each subsequent time it logs a successful restore of a valid session. But this happens a number of times; it seems to get stuck in a loop.
Screenshot:
It looks to me like cy.visit() is somehow triggering the beforeEach() again. Perhaps clearing some session data (localstorage?) that causes my authentication redirect to happen again - or somehow makes Cypress think the test starts fresh. But of course beforeEach() should only happen once per feature.
I am looking at a diff of my code changes, and the only differences apart from the preprocessor change are:
- my .cypress-cucumber-preprocessorrc.json (which I set up according to the docs)
- typing changes; this preprocessor is stricter about typings
- the plugins/index.ts file, also set up according to the docs
Am I looking at a bug in the preprocessor? Did I make a mistake? Or something else?
There are two aspects of Cypress + the Cucumber preprocessor that make this potentially confusing:
Cypress < 10 "Run all specs" behaviour
As demonstrated in Gleb Bahmutov PhD's great blog post, if you don't configure Cypress to do otherwise, running all specs runs each hook before each test. His proposed solution is to not use the "run all specs" button, which I find excessive - because there are ways around this; see below for a working solution with the Cucumber preprocessor.
Note: as of Cypress 10, "run all specs" is no longer supported (for reasons related to this unclarity).
Cucumber preprocessor config
The Cypress Cucumber preprocessor recommends not using the config option nonGlobalStepDefinitions, but instead configuring specific paths like this (source):
"stepDefinitions": [
"cypress/integration/[filepath]/**/*.{js,ts}",
"cypress/integration/[filepath].{js,ts}",
"cypress/support/step_definitions/**/*.{js,ts}",
]
}
What it doesn't explicitly state, though, is that the file containing your hooks (in my case index.ts) should be excluded from these paths if you don't want them to run for each test! I can see how one might think this is obvious, but it's easy to accidentally include your hooks file in this filepath config.
TLDR: If I exclude my index.ts file which includes my hooks from my stepDefinitions config, I can use "run all specs" as intended - with beforeEach() running only once before each test.
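For illustration only (the exact paths are assumptions, not taken from the question), a .cypress-cucumber-preprocessorrc.json along these lines keeps a support/index.ts hooks file out of the step-definition globs, so the hooks get registered once by the support file rather than once per matched step-definition file:

{
  "stepDefinitions": [
    "cypress/integration/[filepath]/**/*.{js,ts}",
    "cypress/integration/[filepath].{js,ts}",
    "cypress/support/step_definitions/**/*.{js,ts}"
  ]
}

The hooks stay in cypress/support/index.ts, which Cypress loads as the support file anyway and which none of the globs above match.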

JEST Change pre-formatted output from test case

I have an application that runs a Jest test suite from the command line, takes the JSON output, parses it, and then fills a table in a database based on the output file. The application runs the shell command:
npm run all
and in the package.json file the all script looks like this:
"scripts": {
"all": "../node_modules/.bin/jest --json --outputFile=testResults.json",`
......
}
So I get the testResults.json file and I am able to parse it - so far so good.
But during the test-case run I would like to add some extra data to the output. Something like details - where the problem is, how to fix it, some troubleshooting information, etc. For example, to put one more field in:
require('testResults.json').testResults[x].assertionResults[y].details
You see, the details property is not part of the JSON output file format. But can I create it from within the test case? (pseudo example):
test('Industry code should match ind_full_code', async () => {
  result = await stageDb.query(QUERY);
  // And here I want to add this custom information to some globally available property?
  reporter.thisTestCase.assertionResults.details = "Here is what you should do to fix this ...."; // <- Ideally this is how easy I imagine it to be.
  expect(result.results).toEqual([]);
}, 2 * 100 * 1000)
I just want to give a little bit more information to the QA or whomever on test failure.
In other words I need the option to change the output from within the test case.
I've been looking into custom reporters, but their listeners are passed the same information as the JSON reporter.
I've run into the need for a similar feature in Jest. The ability to add documentation to a test is rarely supported by test frameworks.
However, I found a way to do this with the soon-to-be-default runner, Jest Circus. I then made my own Jest Circus environment. A custom Jest Circus environment provides more test events/lifecycles and access to the actual test code that is being run.
// Example of a custom Jest Circus environment
import NodeEnvironment from 'jest-environment-node';
import type { Circus } from '@jest/types';

export default class MyCustomNodeEnvironment extends NodeEnvironment {
  handleTestEvent(event: Circus.Event, state: Circus.State) {
    if (event.name === 'test_fn_start') {
      // will log the actual test function's source code
      console.log(event.test.fn?.toString());
    }
  }
}
// jest.config.js
module.exports = {
  testEnvironment: '<rootDir>/my-custom-environment.js',
  testRunner: 'jest-circus/runner'
};
I then used regex patterns to find comments in the test functions and add them to the Allure report (Allure report demo).
If you'd like to create your own Jest environment and implement this yourself, I've made a template repo, or, if you prefer, a gist of a basic Jest Circus environment.
If you like how Allure reports look, you should check out my open-source project jest-circus-allure-environment.
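To sketch the regex idea (the // @details: marker and the helper below are my own assumptions, not the answer's actual implementation), you could pull a specially marked comment out of the test function's source inside handleTestEvent and forward it to whatever report you generate:

// Hypothetical helper: pulls a "// @details: ..." comment out of a test body.
function extractDetails(testSource: string): string | undefined {
  const match = testSource.match(/\/\/\s*@details:\s*(.+)/);
  return match?.[1].trim();
}

// Example test body it would match (e.g. event.test.fn?.toString() in handleTestEvent):
const sample = `async () => {
  // @details: Check the ind_full_code mapping table if this fails.
  expect(result.results).toEqual([]);
}`;

console.log(extractDetails(sample)); // "Check the ind_full_code mapping table if this fails."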

Retrieve file contents during Gatsby build

I need to pull in the contents of a program source file for display in a page generated by Gatsby. I've got everything wired up to the point where I should be able to call
// my-fancy-template.tsx
import { readFileSync } from "fs";
// ...
const fileContents = readFileSync("./my/relative/file/path.cs");
However, on running either gatsby develop or gatsby build, I'm getting the following error
This dependency was not found:

* fs in ./src/templates/my-fancy-template.tsx

To install it, you can run: npm install --save fs
However, all the documentation suggests that this module is native to Node unless the code is being run in the browser. I'm not overly familiar with Node yet, but given that gatsby build also fails (this command does not even start a local server), I'd be a little surprised if this were the problem.
I even tried this from a new test site (gatsby new test) to the same effect.
I found this in the sidebar and gave that a shot, but it appears it just declared that fs was available; it didn't actually provide fs.
It then struck me that while Gatsby creates the pages at build-time, it may not render those pages until they're needed. This may be a faulty assessment, but it ultimately led to the solution I needed:
You'll need to add the file contents to a field on File (assuming you're using gatsby-source-filesystem) during exports.onCreateNode in gatsby-node.js. You can do this via the usual means:
// gatsby-node.js
const fs = require("fs");
exports.onCreateNode = ({ node, actions }) => {
  const { createNodeField } = actions;
  if (node.internal.type === `File`) {
    fs.readFile(node.absolutePath, undefined, (_err, buf) => {
      createNodeField({ node, name: `contents`, value: buf.toString() });
    });
  }
};
You can then access this field in your query inside my-fancy-template.tsx:
{
  allFile {
    nodes {
      fields { contents }
    }
  }
}
From there, you're free to use fields.contents inside each element of allFile.nodes. (This of course also applies to file query methods.)
Naturally, I'd be ecstatic if someone has a more elegant solution :-)
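For completeness, a minimal sketch of consuming this from the template (the component name and loose prop typing are my own; the page-query pattern itself is standard Gatsby):

// my-fancy-template.tsx
import * as React from "react";
import { graphql } from "gatsby";

export const query = graphql`
  {
    allFile {
      nodes {
        fields {
          contents
        }
      }
    }
  }
`;

export default function MyFancyTemplate({ data }: { data: any }) {
  return (
    <>
      {data.allFile.nodes.map((node: any, i: number) => (
        <pre key={i}>{node.fields.contents}</pre>
      ))}
    </>
  );
}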

How to detect if a mocha test is running in node.js?

I want to make sure that in case the code is running in test mode, that it does not (accidentally) access the wrong database. What is the best way to detect if the code is currently running in test mode?
As already mentioned in a comment, it is bad practice to make your code aware of tests. I can't even find this topic discussed on SO, or anywhere else for that matter.
However, I can think of ways to detect that the code was launched by a test.
For me, mocha doesn't add itself to the global scope, but it does add global.it.
So your check may be:
var isInTest = typeof global.it === 'function';
To be sure you don't get a false positive, I would suggest also checking for global.sinon and global.chai, which you most likely use in your node.js tests.
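As a sketch only (the database names here are made up), the check can then decide which database the code points at:

// Assumed example: pick a database name based on the detection above.
var isInTest = typeof global.it === 'function';
var dbName = isInTest ? 'myapp_test' : 'myapp';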
Inspecting process.argv is a good approach in my experience.
For instance if I console.log(process.argv) during a test I get the following:
[
  'node',
  '/usr/local/bin/gulp',
  'test',
  '--file',
  'getSSAI.test.unit.js',
  '--bail',
  '--watch'
]
From which you can see that gulp is being used. Using yargs makes interpreting this a whole lot easier.
I strongly agree with Kirill, and in general, that code shouldn't be aware of the fact that it's being tested (in your case perhaps you could pass in your db binding/connection via a constructor?), but for things like logging I can see why you might want to detect this.
The easiest option is to just use the detect-mocha NPM package.
var detectMocha = require('detect-mocha');

if (detectMocha()) {
  // doSomethingFancy
}
If you don't want to do that, the relevant code is just
function isMochaRunning(context) {
  return ['afterEach', 'after', 'beforeEach', 'before', 'describe', 'it'].every(function(functionName) {
    return context[functionName] instanceof Function;
  });
}
Where context is the current window or global.
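For instance, a minimal usage sketch on the Node side (isMochaRunning is the function above):

// In Node, pass the global object as the context.
if (isMochaRunning(global)) {
  // e.g. skip side effects, or point at a throwaway database
}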
I agree with @Joshua's answer, where he says "Inspecting process.argv is a good approach in my experience."
So, I've written a simple bit of mocha-detecting code:
const _MOCHA_PATH = new RegExp('(\\\\|/)node_modules\\1mocha\\1bin\\1_mocha$');
var isMochaRunning = process.argv.findIndex(arg => _MOCHA_PATH.test(arg)) > -1;
In a small project with no logging infrastructure, I use
if (process.env.npm_lifecycle_event !== 'test')
  console.error(e);
to avoid logging expected errors during testing, as they would interfere with test output.

Grunt-Karma: Use Node.js fs-framework in Jasmine Testfile

I'm writing unit-tests with the Jasmine-framework.
I use Grunt and Karma for running the Jasmine testfiles.
I simply want to load the content of a file on my local file-system (e.g. example.xml).
I thought I could do this:
var fs = require('fs');
var fileContent = fs.readFileSync("test/resources/example.xml").toString();
console.log(fileContent);
This works well in my Gruntfile.js and even in my karma.conf.js file, but not in my Jasmine file. My test file looks like this:
describe('Some tests', function() {
  it('load xml file', function() {
    var fs = require("fs");
    fileContent = fs.readFileSync("test/resources/example.xml").toString();
    console.log(fileContent);
  });
});
The first error I get is:
'ReferenceError: require is not defined'.
I don't know why I cannot use RequireJS here, because I can use it in Gruntfile.js and even in karma.conf.js?!
Okay, but when I manually add require.js to the files property in the karma.conf.js file, I get the following message:
Module name "fs" has not been loaded yet for context: _. Use require([])
With the array-syntax of requirejs, nothing happens.
I guess it is not possible to access Node.js functionality in Jasmine when running the test files with Karma. But since Karma runs on Node.js, why is it not possible to access the 'fs' module of Node.js?
Any comment/advice is welcome.
Thanks.
Your test does not work because Karma is a test runner for client-side JavaScript (JavaScript that runs in the browser), but you are trying to test Node.js code with it (code that runs on the server side). So Karma just can't run server-side tests. You need a different test runner; for example, take a look at jasmine-node.
Since this comes up first in Google searches: I received a similar error but wasn't using any Node.js-style code in my project. It turns out one of my Bower components had a full copy of Jasmine in it, including its Node.js-style code, and I had
{ pattern: 'src/**/*.js', included: false },
in my karma.conf.js.
So unfortunately Karma doesn't provide the best debugging for this sort of thing, dumping you out without telling you which file caused the issue. I had to just tear that pattern down to individual directories to find the offender.
Anyway, just be wary of Bower installs; they bring a lot of code down into your project directory that you might not really want.
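If you do need to keep such files out of the run, one option is Karma's exclude setting. This is a sketch only: the glob below is an assumed example, so point it at whichever vendored files actually contain Node-style require() calls in your project.

// karma.conf.js (excerpt)
module.exports = function (config) {
  config.set({
    files: [
      { pattern: 'src/**/*.js', included: false },
    ],
    // Assumed glob: adjust to the offending package found under bower_components
    exclude: [
      'src/**/bower_components/**/*.js',
    ],
  });
};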
I think you're missing the point of unit testing here, because it seems to me that you're copying application logic into your test suite. This defeats the purpose of a unit test, which is supposed to run your existing functions through a test suite, not to prove that fs can load an XML file. In your scenario, if your XML-handling code were changed (and a bug introduced) in the source file, the unit test would still pass.
Think of unit testing as a way to run your function through lots of sample data to make sure it doesn't break. Set up your file reader to accept input and then simply in the Jasmine test:
describe('My XML reader', function() {
  beforeEach(function() {
    this.xmlreader = new XMLReader();
  });

  it('can load some xml', function() {
    var xmldump = this.xmlreader.loadXML('inputFile.xml');
    expect(xmldump).toBeTruthy();
  });
});
Test the methods that are exposed on the object you are testing. Don't make more work for yourself. :-)
