Is there a mocha reporter that logs console output, not test code?

I tried a lot of mocha reporters that can create HTML report files, but none of them showed the console output for each test; they only showed the source code of the test.
Why is that? The console output is exactly the information I want to see in my reports!
Tried:
- mocha-simple-html-reporter
- mochawesome
- maybe others
The only reporter that shows the console output is the one bundled with IntelliJ, but there is no way I can make it create an HTML test report file from the command line.

Mochawesome generates a new report for every spec you have, and since by default it overwrites old reports, it will only keep the last spec run. You can fix this by setting the overwrite flag to false. With the flag set to false it generates a new file on each run, so you should delete old reports before running, either manually or with a small script.
{
  "reporter": "mochawesome",
  "reporterOptions": {
    "charts": false,
    "html": true,
    "json": true,
    "reportDir": "cypress/reports",
    "reportFilename": "report",
    "overwrite": false
  }
}
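If you want the cleanup scripted, here is a minimal sketch (the clean-reports.js file name is my own; the path is the reportDir configured above):

// clean-reports.js - remove stale mochawesome output before a new run
const fs = require('fs');

// Recursively delete the report directory; `force` avoids an error if it doesn't exist yet.
fs.rmSync('cypress/reports', { recursive: true, force: true });

You could run it as a pretest step, for example "pretest": "node clean-reports.js" in package.json.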

Related

BeforeEach step is repeated with cy.session using cypress-cucumber-preprocessor

I have a Cypress project where I use the Cypress session API to maintain a session across features.
Now I am trying to switch from the deprecated Klaveness Cypress Cucumber Preprocessor to its replacement, Badeball's Cypress Cucumber Preprocessor. But I am running into an issue: the beforeEach() step where my authentication takes place gets repeated several times before the tests start. Eventually Cypress "snaps out of it" and starts running the actual tests, but this is obviously very resource and time intensive; something is going wrong.
My setup:
Dependencies:
"cypress": "^9.6.1",
"#badeball/cypress-cucumber-preprocessor": "^9.1.3",
index.ts:
beforeEach(() => {
  let isAuthInitialized = false;

  function spyOnAuthInitialized(window: Window) {
    window.addEventListener('react:authIsInitialized', () => {
      isAuthInitialized = true;
    });
  }

  login();
  cy.visit('/', { onBeforeLoad: spyOnAuthInitialized });
  cy.waitUntil(() => isAuthInitialized, { timeout: 30000 });
});
login() function:
export function login() {
  cy.session('auth', () => {
    cy.authenticate();
  });
}
As far as I can see, I follow the docs for cy.session almost literally.
My authenticate command contains only application-specific steps. It does include a cy.visit('/'), after which my application is redirected to a login service (on a different domain) and then continues.
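For context, a rough outline of what that command might look like (purely hypothetical, since the real command is application specific and not shown here):

// commands file - hypothetical shape of the custom cy.authenticate command
Cypress.Commands.add('authenticate', () => {
  cy.visit('/'); // triggers the redirect to the external login service
  // ... application-specific steps on the login page,
  // e.g. typing credentials and submitting the form
});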
The problem
cy.session works OK: it creates a session on the first try, and each subsequent time it logs a successful restore of the valid session. But this happens a number of times; it seems to get stuck in a loop.
It looks to me like cy.visit() is somehow triggering the beforeEach() again. Perhaps it clears some session data (localStorage?) that causes my authentication redirect to happen again, or it somehow makes Cypress think the test starts fresh. But of course beforeEach() should only run once per test.
I am looking at a diff of my code changes, and the only differences besides the preprocessor change are:
- my .cypress-cucumber-preprocessorrc.json (which I set up according to the docs)
- typing changes, since this preprocessor is stricter about typings
- the plugins/index.ts file, also set up according to the docs
Am I looking at a bug in the preprocessor? Did I make a mistake? Or something else?
There are two aspects of Cypress + Cucumber with the preprocessor that make this potentially confusing.
Cypress < 10 "Run all specs" behaviour
As demonstrated in Gleb Bahmutov PhD's great blog post, if you don't configure Cypress to do otherwise, running all specs runs each hook before each test. His proposed solution is to not use the "Run all specs" button, which I find excessive, because there are ways around this; see below for a working solution with the Cucumber preprocessor.
Note: as of Cypress 10, "Run all specs" is no longer supported (for reasons related to this confusion).
Cucumber preprocessor config
The Cypress Cucumber preprocessor recommends not using the config option nonGlobalStepDefinitions, and instead configuring specific paths like this (source):
"stepDefinitions": [
"cypress/integration/[filepath]/**/*.{js,ts}",
"cypress/integration/[filepath].{js,ts}",
"cypress/support/step_definitions/**/*.{js,ts}",
]
}
What it doesn't explicitly state, though, is that the file which contains your hooks (in my case index.ts) should be excluded from these paths if you don't want them to run for each test! I can see how one might think this is obvious, but it's easy to accidentally include your hooks file in this filepath config; see the sketch below.
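For example, a minimal sketch of such a config (the *.steps file-naming convention here is my own, not from the docs), where the globs match only step definition files:

{
  "stepDefinitions": [
    "cypress/integration/[filepath]/**/*.steps.{js,ts}",
    "cypress/support/step_definitions/**/*.{js,ts}"
  ]
}

The hooks then live in a file outside these globs (e.g. cypress/support/index.ts), which Cypress loads through its supportFile mechanism instead.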
TL;DR: If I exclude my index.ts file, which contains my hooks, from my stepDefinitions config, I can use "Run all specs" as intended, with beforeEach() running only once before each test.

Jest: change pre-formatted output from a test case

I have an application that runs a Jest test suite from the command line, then takes the JSON output, parses it, and fills a table in a database based on the output file. The application runs the shell command:
npm run all
and in the package.json file the all script looks like this:
"scripts": {
"all": "../node_modules/.bin/jest --json --outputFile=testResults.json",`
......
}
So I get the testResults.json file and I am able to parse it; so far so good.
But during the test run I would like to add some extra data to the output, something like details: where the problem is, how to fix it, some troubleshooting information, etc. For example, to put one more field in:
require('testResults.json').testResults[x].assertionResults[y].details
You see, the details property is not part of the JSON output file format. But can I create it from within the test case? A pseudo example:
test('Industry code should match ind_full_code', async () => {
  const result = await stageDb.query(QUERY);
  // And here I want to add this custom information to some global property:
  reporter.thisTestCase.assertionResults.details = "Here is what you should do to fix this ...."; // <- ideally this is how easy I imagine it to be
  expect(result.results).toEqual([]);
}, 2 * 100 * 1000);
I just want to give a little bit more information to the QA or whoever on test failure.
In other words, I need a way to change the output from within the test case.
I've been looking into custom reporters, but their listeners are passed the same information as the JSON reporter.
I've run into the need for a similar feature in Jest. The ability to add documentation to a test is rarely supported by test frameworks.
However, I found a way to do this with the soon-to-be-default runner, jest-circus. I made my own Jest Circus environment; a custom environment provides more test events/lifecycles and access to the actual test code being run.
// Example of a custom Jest Circus environment
import NodeEnvironment from 'jest-environment-node';
import type { Circus } from '@jest/types';

export default class MyCustomNodeEnvironment extends NodeEnvironment {
  handleTestEvent(event: Circus.Event, state: Circus.State) {
    if (event.name === 'test_fn_start') {
      // logs the source of the test function that is about to run
      console.log(event.test.fn.toString());
    }
  }
}
// jest.config.js
module.exports = {
  testEnvironment: '<rootDir>/my-custom-environment.js',
  testRunner: 'jest-circus/runner'
};
I then used regex patterns to find comments in the test functions and add them to the Allure report (see the Allure report demo).
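As a rough illustration of the regex part (the "// doc:" comment convention and the helper are my own, not taken from the project):

// Hypothetical helper: pull a "// doc: ..." comment out of a test function's source,
// e.g. the string returned by event.test.fn.toString() in the environment above.
function extractDocComment(testSource) {
  const match = testSource.match(/\/\/\s*doc:\s*(.+)/);
  return match ? match[1].trim() : undefined;
}

The extracted text can then be attached to the report from handleTestEvent.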
If you'd like to create your own Jest environment and implement this yourself, I've made a template repo; there is also a gist of a basic Jest Circus environment if you prefer that.
If you like how Allure reports look, you should check out my open source project jest-circus-allure-environment.

Angular universal server error: When using the default fallback, a fallback language must be provided in the config

I have installed Angular Universal on my app.
Running npm run build:ssr - DONE, WORKS.
Running npm run server:ssr - DONE, WORKS.
After accessing the server URL (localhost:4000), the page is not fully loaded and the following error is raised in the terminal: "When using the default fallback, a fallback language must be provided in the config".
I also faced the same problem, so I would like to share my findings.
For me, there were two plausible causes/solutions:
First, in my project's default i18n JSON file, en.json, there was a problem in the structure of the JSON.
For example, I had the mistake below: I missed the comma after the second label, 'FINISH':
{
  "COMMON": {
    "EDIT": "Edit",
    "FINISH": "Finish"
    "QUIT": "Quit"
  }
}
After correcting the structure, the application ran fine without an error.
Secondly, another cause of the issue could be that, at runtime, Transloco was not able to find the correct label in the selected language, so it looked for a fallback language, and no fallback was configured in transloco-root.module.ts. After I added a fallback language there, Transloco looked the label up in that fallback language, found it, and the issue was resolved.
Note that for this second solution to work, the missing label must exist, in the correct format, in at least the fallback language's JSON file.
I added the fallback language as shown below:
useValue: translocoConfig({
  availableLangs: ['fr', 'en'],
  defaultLang: 'en',
  reRenderOnLangChange: true,
  fallbackLang: 'fr',
  prodMode: environment.production,
  missingHandler: {
    logMissingKey: true
  }
})
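To illustrate the second point with hypothetical keys: with fallbackLang: 'fr' as configured above, a COMMON.QUIT label missing from en.json is only resolved if fr.json actually contains it:

{
  "COMMON": {
    "QUIT": "Quitter"
  }
}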
In my case, Transloco (i18n) simply wasn't fully configured in the module file.

mocha --inspect-brk lets me inspect mocha's code - not my own

Following this discussion I attempted to debug a mocha test script via the following:
mocha --inspect-brk ./tests/foo.test.js
This does indeed present an inspector URL that I can attach to in Chrome, but the "Sources" panel is populated only with the source of mocha itself, not my code. Is there something I need to change to get the inspector to bring up my code and not mocha's?
(I did see a similar question but I'm hoping for an answer that doesn't involve bringing in another dependency like node-inspector.)
Add a debugger statement to one of your tests. When you resume in DevTools, execution will pause in your test code and you can browse your files.
it('should replace a template string', function () {
  debugger;
  expect(Helper.templateString('{{a}}', { a: 2 })).to.equal('2');
});
You can also step over _mocha until it loads the files, which is around line 460 in v5.0.4, labeled requires:
// requires
requires.forEach(mod => {
  require(mod);
});
After this you can browse your files and set breakpoints. DevTools will remember the breakpoints for the next run.

Write QUnit output to a file via Grunt

I need to be able to report QUnit test results to a file so my build server can parse them.
I'm using QUnit (grunt-contrib-qunit) through Grunt, along with the JUnit reporter found here.
I can get the report to write to the log just as it states, but I'm having trouble getting it into a file. I've tried QUnit callbacks in my Gruntfile, but none of them seem to receive the XML info. I also tried simply redirecting stdout, but it (of course) printed all of the non-XML command-line output along with the XML.
In short, I've got the XML echoing properly in the console.log statement; I just need to get it into a file somehow, whether through Grunt, PhantomJS, or any other means.
Well, if you're running QUnit tests from Grunt, then you have the full power of Node at your disposal. I've never used that JUnit plugin, but since it just gives you a callback in your QUnit HTML file, you would need an in-browser solution (even if that browser is PhantomJS).
PhantomJS uses QtWebKit, which has implemented the FileSystem API, so you could write the file from the JUnit reporter's callback; of course, that would fail if you run the tests in certain other browsers (namely IE9 or under). Here's how that might look (no guarantees on this being exact; I have not run it):
QUnit.jUnitReport = function (report) {
  function onInitFs(fs) {
    fs.root.getFile('qunit_report.xml', { create: true }, function (fileEntry) {
      fileEntry.createWriter(function (fileWriter) {
        fileWriter.onwriteend = function (e) { /* if you need it */ };
        fileWriter.onerror = function (e) { /* if you need it */ };
        var blob = new Blob([report.xml], { type: 'application/xml' });
        fileWriter.write(blob);
      }, someErrorHandlerFunction);
    }, someErrorHandlerFunction);
  }
  window.requestFileSystem(window.TEMPORARY, 1024 * 1024, onInitFs, someErrorHandlerFunction);
};
And again, if you need to write the file in IE9 or under (or some mobile browsers), you'll need another solution, like kicking off an AJAX request to upload the data to a server that stores the file. You could even run that "server" from within Grunt and have Node write the file; see the sketch below.
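A minimal sketch of that last idea (the port, endpoint, and file name are my own assumptions): a tiny HTTP server started from the Gruntfile writes whatever XML the browser posts to it.

// In the Gruntfile (or a module it requires): receive the XML and write it to disk
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  var body = '';
  req.on('data', function (chunk) { body += chunk; });
  req.on('end', function () {
    fs.writeFileSync('qunit_report.xml', body); // persist the uploaded report
    res.end('ok');
  });
}).listen(9999);

// In the browser, from the JUnit reporter's callback: post the XML to that server
QUnit.jUnitReport = function (report) {
  var xhr = new XMLHttpRequest();
  xhr.open('POST', 'http://localhost:9999/report');
  xhr.send(report.xml);
};

Depending on how the test page is served, you may also need to allow cross-origin requests on that little server.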
