Different results in jest debugger than in runner - jestjs

I have a pretty simple test, where I render a story from Storybook and then test its effect on the store. The store is the same store that's passed into the Provider in the global decorator.
it("tracks the user's selection", () => {
render(<Stories.s03_SingleFilters />);
// rendering the component initializes the filter
expect(Object.keys(store.getState().dataFilters)).toHaveLength(1);
The test fails: the received array is empty.
However, when I debug the test (in WebStorm) with a breakpoint on the assertion and then execute Object.keys(store.getState().dataFilters) in the debug console, I get an array with one key in it, as I expect!
How is it that I'm getting different results in the debugger than in the runner?
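For reference, a minimal sketch of the setup being described: a global Storybook decorator wrapping every story in a Provider with the same store the test imports. The file location, react-redux, and the store module path are assumptions, not taken from the question.

// .storybook/preview.js (assumed location)
import React from 'react';
import { Provider } from 'react-redux';
import { store } from '../src/store'; // hypothetical shared store module

export const decorators = [
  (Story) => (
    <Provider store={store}>
      <Story />
    </Provider>
  ),
];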

Related

Crypto.randomUUID() not working with Next.js v13

The Environment
Next.js v13 (stable, no ./app folder) is running with Node v18 in Ubuntu WSL.
As per the docs, the Crypto API has been available since roughly Node v14.
I have indeed tested this in Node, in my environment:
Node 18 importing and running crypto.randomUUID()
I also printed the whole object and it looks as the docs say it should.
The Problem
Imagine this simple component:
import crypto from 'crypto';

export default function Crypto() {
  console.log(crypto);
  return (
    <p>
      {crypto.randomUUID()}
    </p>
  );
}
Next.js says it "compiled client and server successfully in 397 ms". But after the UUID renders in the browser for a couple of milliseconds, Next.js throws a couple of errors revolving around randomUUID not being a function.
Next Runtime Error with crypto.randomUUID()
I can see that Webpack is involved somewhere in there; I haven't tried Turbopack, but that's beyond the scope of this issue.
After commenting out the method invocation within the paragraph, the console.log(crypto) runs and prints twice, as usual, in the devtools as follows:
crypto method printed
Notice how one comes from the "react devtools backend" and the other one from webpack. That leads me to believe the error gets thrown server-side, as the console.log is invoked before the UUID method.
Server-side, despite the errors thrown in the browser, the object gets printed by the Next CLI and it contains the method:
Next CLI prints crypto object and randomUUID is listed
Client-side, within the printed object, the method randomUUID() is nowhere to be found:
Inside printed crypto object in devtools
This confirms the error message: my code is not getting access to the method. Also, a couple of methods are missing when compared to the Node docs.
And yet if one runs console.log(crypto) directly from the devtools, the method is indeed present within its prototype:
randomUUID directly from devtools
Furthermore, because of the structure, I'm inclined to believe the crypto object being printed is somehow coming from Node, as the structure of the Chrome V8 crypto object is completely different. But why in the hell are those methods missing?
I tried console.log-ing the object server-side, client-side, and in between. Somehow the method gets lost in between; Webpack might be the culprit. Worst of all, albeit only for the blink of an eye, I can see the string rendered before the errors get thrown; and dismissing the error cards leaves a blank body. The string disappears.
EDIT
The reason one imports/requires crypto is so it can run in Node. Next is an SSR framework; in a nutshell, it is intended to run first on the server and deliver as much rendered HTML to the client as it can. If crypto is not imported, Node throws an error when Next tries to invoke it server-side.
Now then, I tell that piece of code to only run if the window object is available (i.e. I'm in the browser), and it runs with the native Chromium V8 crypto object.
// import crypto from 'crypto';

export default function Crypto() {
  if (typeof window !== 'undefined') {
    console.log('CLIENT: ', crypto.randomUUID());
    return (
      <p>
        {crypto.randomUUID()}
      </p>
    );
  }
  return (
    <h1>SERVER SIDE</h1>
  );
}
The only downside is that it somehow still runs twice because of Next magic, once server-side and once client-side, which means it's not because of React 18. The output tells me as much, which is to be expected, since a UUID function always returns a different result.
Browsers restrict access to some crypto APIs when not running in a secure context (as defined here).
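For what it's worth, a quick way to verify that from the browser console; window.isSecureContext is a standard browser property:

// true on https:// pages and on http://localhost; crypto.randomUUID()
// is only exposed to pages running in a secure context.
console.log(window.isSecureContext);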
Set it to state in a useEffect hook when the page initially loads so it persists, and then render it from state.
import { useEffect, useState } from 'react';

const Crypto = () => {
  const [randomUUID, setRandomUUID] = useState();

  useEffect(() => {
    if (typeof window !== 'undefined' && !randomUUID) {
      setRandomUUID(crypto.randomUUID());
    }
  }, []);

  if (!randomUUID) return <>No UUID</>;
  return <>{randomUUID}</>;
};

export default Crypto;
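Generating the UUID inside useEffect also sidesteps the server/client mismatch described above, since effects never run during server-side rendering; the server simply renders the "No UUID" fallback and the browser fills in the value after mount.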

JEST Change pre-formatted output from test case

I have an application that runs a Jest test suite from the command line, takes the JSON output, parses it, and then fills a table in a database as per the output file. The application runs the shell command:
npm run all
and in the package.json file the all script looks like this:
"scripts": {
"all": "../node_modules/.bin/jest --json --outputFile=testResults.json",`
......
}
So I get the testResults.json file and I am able to parse it - so far so good.
But during the test run I would like to add some extra data to the output, something like details: where the problem is, how to fix it, some troubleshooting information, etc. For example, to put one more field in:
require('testResults.json').testResults[x].assertionResults[y].details
You see, the details property is not part of the JSON output file format. But can I create it from within the test case? A pseudo example:
test('Industry code should match ind_full_code', async () => {
  const result = await stageDb.query(QUERY);
  // And here I want to add this custom information to some global property available?
  reporter.thisTestCase.assertionResults.details = "Here is what you should do to fix this ...."; // <- Ideally this is how easy I imagine it to be.
  expect(result.results).toEqual([]);
}, 2 * 100 * 1000);
I just want to give a little bit more information to the QA or whomever on test failure.
In other words I need the option to change the output from within the test case.
I've been looking into custom reporters, but their listeners are passed the same information as the JSON reporter.
I've run into the need for a similar feature in Jest. The ability to add documentation to a test is rarely supported by test frameworks.
However, I found a way to do this with the soon-to-be-default runner, Jest Circus. I then made my own Jest Circus environment. A custom Jest Circus environment provides more test events/lifecycles and access to the actual test code that is being run.
// Example of a custom Jest Circus environment
import NodeEnvironment from 'jest-environment-node';
import type { Circus } from '@jest/types';

export default class MyCustomNodeEnvironment extends NodeEnvironment {
  handleTestEvent(event: Circus.Event, state: Circus.State) {
    if (event.name === 'test_fn_start') {
      console.log(event.test.toString());
      // will log the actual test code.
    }
  }
}
// jest.config.js
module.exports = {
  testEnvironment: '<rootDir>/my-custom-environment.js',
  testRunner: 'jest-circus/runner'
};
I then used regex patterns to find comments in the test functions and add them to the Allure report (Allure report demo).
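As a rough sketch of that comment-extraction idea, reusing the event.test.toString() call from the snippet above; the regex and the logging are illustrative, not the actual Allure integration:

// my-custom-environment.js (sketch)
import NodeEnvironment from 'jest-environment-node';

export default class CommentLoggingEnvironment extends NodeEnvironment {
  handleTestEvent(event) {
    if (event.name === 'test_fn_start') {
      // Assuming event.test.toString() yields the test source (as above),
      // pull out the single-line comments so a reporter can attach them.
      const comments = event.test.toString().match(/\/\/.*$/gm) || [];
      console.log(comments); // forward these to your reporter of choice
    }
  }
}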
If you'd like to create your own Jest environment and implement this yourself I've made a template repo or if you prefer a gist of a basic Jest Circus environment.
If you like how Allure reports look, you should check out my open-source project jest-circus-allure-environment.

Get test title and state in selenium-webdriver.js testing

I am writing Mocha tests with selenium-webdriver.js and trying to take a screenshot only if the current test failed.
In Mocha, I can get the current test info like title and state as follows:
afterEach(function(){
  console.log('afterEach', this.currentTest.title, this.currentTest.state);
});
But selenium-webdriver.js wraps around Mocha's interface with selenium-webdriver/testing, and the original this.currentTest is not exposed anymore:
var test = require('selenium-webdriver/testing');

test.afterEach(function(){
  //console.log('afterEach', this.currentTest.title, this.currentTest.state);
});
I am wondering if such information is still exposed somehow or is there any workaround for this.
this.title - returns suite name
this.ctx.currentTest.title - returns current test name
this.ctx.currentTest.state - returns current test state
This doesn't work if an arrow function is used for describe.
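Putting that together with the original goal (screenshot only on failure), a sketch along these lines should work; driver is assumed to be the WebDriver instance created elsewhere in the suite, and the file name is arbitrary:

var test = require('selenium-webdriver/testing');
var fs = require('fs');

test.afterEach(function () {
  // Must be a regular function (not an arrow function) so `this` is Mocha's context.
  var current = this.ctx.currentTest;
  if (current && current.state === 'failed') {
    // takeScreenshot() resolves with a base64-encoded PNG.
    driver.takeScreenshot().then(function (image) {
      fs.writeFileSync(current.title + '.png', image, 'base64');
    });
  }
});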

How can I test multiple pages in a single Intern test

Is it possible for a single functional test to handle 2 different pages? For example, when I execute the following test:
return this.remote
  .get(testPage)
  .waitForElementByCssSelector('.alfresco-core-Page.allWidgetsProcessed', 5000)
  .elementByCss('#UNIT_TEST_MODEL_FIELD>DIV.control>TEXTAREA')
  .type(testData)
  .end()
  .elementByCss("#LOAD_TEST_BUTTON")
  .click()
  .sleep(2000)
  .waitForElementByCssSelector('.alfresco-core-Page.allWidgetsProcessed', 5000)
  .elementByCss("#DD1")
  .click()
  .end();
I get the following error:
Test main - firstTest - Test1 FAILED on chrome 31.0.1650.57 on LINUX:
Error: Cannot call method 'apply' of undefined
TypeError: Cannot call method 'apply' of undefined
    at null.<anonymous> (/home/dave/ScratchPad/ShareInternTests/node_modules/intern/lib/util.js:108:10)
    at /home/dave/ScratchPad/ShareInternTests/node_modules/intern/lib/wd.js:769:29
    at signalListener (/home/dave/ScratchPad/ShareInternTests/node_modules/intern/node_modules/dojo/Deferred.js:37:21)
    at signalWaiting (/home/dave/ScratchPad/ShareInternTests/node_modules/intern/node_modules/dojo/Deferred.js:28:4)
    at resolve (/home/dave/ScratchPad/ShareInternTests/node_modules/intern/node_modules/dojo/Deferred.js:192:5)
    at signalDeferred (/home/dave/ScratchPad/ShareInternTests/node_modules/intern/node_modules/dojo/Deferred.js:81:15)
    at signalListener (/home/dave/ScratchPad/ShareInternTests/node_modules/intern/node_modules/dojo/Deferred.js:52:6)
    at signalWaiting (/home/dave/ScratchPad/ShareInternTests/node_modules/intern/node_modules/dojo/Deferred.js:28:4)
    at resolve (/home/dave/ScratchPad/ShareInternTests/node_modules/intern/node_modules/dojo/Deferred.js:192:5)
    at signalDeferred (/home/dave/ScratchPad/ShareInternTests/node_modules/intern/node_modules/dojo/Deferred.js:81:15)
Because of the framework that I'm testing, I need to load a "bootstrap" test page into which I write the test data and then POST it, to persist a test model into the HTTP session that is then rendered in a resultant page.
However, whilst the first part of the test works fine (I see the test data entered and the next page is submitted), I seem to get the error on the 2nd .waitForElementByCssSelector call. I've tried various permutations but can't get this to work.
If I run a completely second test on the second page then this works fine, but ideally I'd like it all captured within a single test.
Is what I'd like to do possible or do I have to break it into separate tests?
Please try using setImplicitWaitTimeout and use the elementBy* methods instead of using waitForElementByCssSelector. It is more efficient, and should work properly.
A second option is to make sure you call end before the second waitForElementByCssSelector call; it looks like there is a defect in the way the waitForElement methods are called, where the #LOAD_TEST_BUTTON element is being used as the context for the call but waitForElement accepts no context.
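For example, the second suggestion applied to the original chain, where the only change is the extra end() that clears the element context before the second wait:

return this.remote
  .get(testPage)
  .waitForElementByCssSelector('.alfresco-core-Page.allWidgetsProcessed', 5000)
  .elementByCss('#UNIT_TEST_MODEL_FIELD>DIV.control>TEXTAREA')
  .type(testData)
  .end()
  .elementByCss("#LOAD_TEST_BUTTON")
  .click()
  .end() // clear the element context before waiting on the next page
  .sleep(2000)
  .waitForElementByCssSelector('.alfresco-core-Page.allWidgetsProcessed', 5000)
  .elementByCss("#DD1")
  .click()
  .end();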

Inconsistent results from karma e2e test runner. How can I debug?

I have a simple angular / requirejs / node project that loads correctly when viewed from a browser. I'm trying to get e2e tests with karma set up.
I've copied all of the e2e configurations and directory structures from the angular-require-js seed into my own project. Unfortunately, the tests in my own project give bizarre (and ever-changing!) results. Here's the stripped-down test I'm trying to run:
describe('My Application', function() {
  beforeEach(function() {
    browser().navigateTo('/');
    sleep(0.5);
  });

  it('shows an "Ask a Question" button on the index page', function() {
    expect(element('a').text()).toBe('Ask a Question');
  });
});
Sometimes the test fails
Executed 1 of 1 (1 FAILED) (0.785 secs / 0.614 secs)
Firefox 22.0 (Mac) My Application shows an "Ask a Question" button on the index page FAILED
element 'a' text
http://localhost:9876/base/test/lib/angular/angular-scenario.js?1375035800000:25397: Selector a did not match any elements.
(but there ARE a elements on the page!)
Sometimes the test hangs
Executed 0 of 0! In these cases the test-runner browser does show that it's trying to run a test, but it never completes:
It just stays like this forever. My app IS displayed in the browser during this hang.
Without element('a') it always passes
The only way to get consistent results is to avoid element(). If I expect(true).toBe(true) then 1 out of 1 tests always pass.
How can I debug this?
I'm at a loss for how to move forward. The test browser is correctly displaying my app, with the relevant 'a' element and everything. The test runner itself seems to only sometimes recognize that it should be running something and NEVER finds the a element. Is there a way to step through the test running process? Is this a common problem that happens when [x] is misconfigured?
Thanks for any suggestions!
karma-e2e.conf.js
basePath = '../';
files = [
  'test/lib/angular/angular-scenario.js',
  ANGULAR_SCENARIO_ADAPTER,
  'test/e2e/**/*.js'
];
autoWatch = false;
browsers = ['Firefox'];
singleRun = true;
proxies = {
  '/': 'http://localhost:3000/'
};
urlRoot = "__karma__";
junitReporter = {
  outputFile: 'test_out/e2e.xml',
  suite: 'e2e'
};
How many anchor tags do you have on the page?
You may not be referencing the actual anchor you'd expect. Add an id to the anchor and test again. If it is the only anchor tag on the page, try to match the text (toMatch) rather than expecting it to be exactly equal (toBe), i.e.:
expect((element('#anchor-tag-id').text()).toMatch(/Ask a question/);
If you use chrome open the develop tools on your a element to check the actual values, this may help a lot.
EDIT:
should be
expect(element('#anchor-tag-id').text()).toMatch(/Ask a question/);
sorry, I added an extra ( in the first example
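Applied to the original test, the suggestion would look something like this; #anchor-tag-id is the hypothetical id added to the anchor:

it('shows an "Ask a Question" button on the index page', function() {
  // The regex is case-sensitive, so match the actual button text
  // (or add the i flag for a case-insensitive match).
  expect(element('#anchor-tag-id').text()).toMatch(/Ask a Question/);
});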
