How to wait for all requests in jest tests using MSW - jestjs

I've just started using MSW (https://mswjs.io/) in my tests. However, I've come across a case that I have no clue how to deal with.
Let's say we are testing component A (rendering <A /> in a test), which makes three different API requests. All of them are fired independently from the useEffect hook of component A.
The first test checks the effect of the first request (e.g. the request returns a list and the list is rendered). The second test checks something related to the second request, and so forth. I'd like each test to be as independent as possible and to verify one thing only.
Let's see what's happening in the first test:
Rendering <A /> triggers three requests.
waitFor waits for the data from the first request and its effect on the UI.
If A renders correctly, the test passes and waitFor is over.
The second and third requests are still under way, and the first test is not waiting for them (since they are not related to the things checked in the first test). This situation causes the warning: Can't perform a React state update on an unmounted component.
What is the approach I should follow to get rid of the warning?
Should the first test explicitly wait for the second and third requests to finish? If so, it means I'm going to end up with tests that are not independent. Is that correct?
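One way to approach this (a sketch, not something from the question) is to use MSW's request life-cycle events to drain all in-flight requests before a test finishes, so no effect fires after unmount while the assertions themselves stay focused on a single request. The helper below assumes a Node setupServer instance and MSW's documented request:start / request:end events; the waitForRequestsToSettle name is made up for illustration.
// Hypothetical helper: counts MSW requests as they start and end, and lets a
// test await the moment the counter drops back to zero.
import { setupServer } from 'msw/node'

export const server = setupServer(/* ...your handlers... */)

let pending = 0
let resolvers: Array<() => void> = []

server.events.on('request:start', () => {
  pending += 1
})

server.events.on('request:end', () => {
  pending -= 1
  if (pending === 0) {
    resolvers.forEach((resolve) => resolve())
    resolvers = []
  }
})

// Await this in afterEach (or at the end of a test) so nothing is still in
// flight when the component unmounts.
export function waitForRequestsToSettle(): Promise<void> {
  return pending === 0
    ? Promise.resolve()
    : new Promise((resolve) => resolvers.push(resolve))
}
With something like this, each test still asserts only the behaviour it cares about, and a shared afterEach that awaits waitForRequestsToSettle() quietly waits out the other two requests, which keeps the tests independent while removing the unmounted-component warning.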

Related

Fatal error: stale element reference: element is not attached to the page document in selenium with nodejs

I am facing this issue while running UI regression tests. I have used TypeScript for automation with Selenium and I have around 80 test cases. When executing them all at once, some of the test cases run smoothly, but after a while this error comes up in one of the test cases and stops the execution. I am using only one thread for running these tests:
"noOfThreads": "1",
There is no issue with the individual test cases, as they work fine when run individually.
A stale element reference exception occurs when the driver tries to perform an action on an element which was once present in the DOM but does not exist anymore. For example, if you store an element in a variable, then perform some action that changes the DOM structure so that the original element is lost, and the driver afterwards tries to access it, the call will fail.
In your case, when you run the tests individually, there is no effect of one scenario on the next one, which is why the tests pass. However, when you run all of them together, an earlier test can change the DOM in a way that invalidates an element reference a later test is still holding on to.
Try to debug the failing test by running a few of the tests that precede it.
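A common way to avoid this (a sketch with selenium-webdriver in TypeScript, matching the question's stack; the #submit locator and the function name are made up) is to re-locate the element right before each interaction instead of caching a WebElement across steps that may re-render the page:
// Re-locate the element at the point of use so a DOM change between steps
// cannot leave us holding a stale reference.
import { By, until, WebDriver } from 'selenium-webdriver'

async function clickSubmit(driver: WebDriver): Promise<void> {
  // Wait until a fresh element matching the locator exists, then act on it.
  const submit = await driver.wait(until.elementLocated(By.css('#submit')), 5000)
  await submit.click()
}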

Execute SoapUI test on multi-threads

I have a SoapUI test which uses an input file, reading lines as the input for requests. There is a loop which reads the data, executes the request, and writes the output to a file. Response times are too long, so the processing of this file should be done asynchronously, but I am not sure how SoapUI can handle this. There is a file attachment in the SOAP requests, which is not handled by the current version of JMeter.
As per SoapUI's documentation below, both test cases and test suites can be executed in parallel mode.
In the case of TestSuites and TestCases these can be executed either in sequence or in parallel, as configured with the corresponding toolbar buttons.
In the toolbar referred to above, the first button stands for sequential execution and the second one (with multiple parallel arrows) stands for parallel execution mode.
The user can select either of them before executing the tests.
Hope this helps.
Note that SoapUI does not allow test steps to be executed in parallel. If you need any custom execution, i.e. the same test case and steps executed in parallel, here is a sample project done for that. It can be used as a reference and applied to your case.
I understood this question as requiring the ability to call a service asynchronously because of the time it takes to process. By this I mean SoapUI makes a request to a web service and, instead of waiting for the response, carries on. At some point later, SoapUI receives the response.
SoapUI can handle this. I haven't tried it myself, but while reading some guides recently I noticed it can be done.
See....
Blog Guide
SoapUI Forum
In short, it involves setting up a mock service to receive the response, which can then be validated.

Sequencing multiple non-main-thread HttpRequest

In a .NET application I work with several requests against a Web API, using HttpRequest. Since these requests are not run on the main thread (async, await, ...), it is possible that some requests are launched at almost the same time, so a later one starts before an earlier one has finished. In the end this leads to inconvenient behavior: for example, when a first request runs into refreshing the access token, a second request should not start a similar refresh as well, but should wait for the first one to finish and run afterwards.
My idea is to somehow schedule the requests in a FIFO way, using an array/list that is updated every time a request finishes. A starting request would then check in this list whether it is next in line, or would wait for a start signal from some array/list watcher.
However, I have strong doubts that this approach is the best/correct way of doing this. Any help, hint, or heads-up would be great!
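The question is about .NET, but the FIFO idea itself is language-agnostic, and instead of a hand-rolled list watcher it is usually expressed as a chain of promises/tasks (in C# the same shape is typically built with a SemaphoreSlim). Purely as an illustration, and with the RequestQueue/enqueue names made up here, a minimal promise-chain queue in TypeScript could look like this:
// Minimal FIFO sketch: each enqueued task starts only after every previously
// enqueued task has settled (fulfilled or rejected).
class RequestQueue {
  private tail: Promise<unknown> = Promise.resolve()

  enqueue<T>(task: () => Promise<T>): Promise<T> {
    // Chain onto the current tail; run the task even if the previous one failed.
    const result = this.tail.then(task, task)
    // The new tail settles when this task settles, regardless of outcome.
    this.tail = result.catch(() => undefined)
    return result
  }
}

// Usage sketch: the second request only starts after the first one (including
// any token refresh it triggered) has finished.
const queue = new RequestQueue()
queue.enqueue(() => fetch('https://api.example.com/a'))
queue.enqueue(() => fetch('https://api.example.com/b'))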

Testing background processes in nodejs (using tape)

This is a general question about testing, but I will frame it in the context of Node.js. I'm not as concerned with a particular technology, but it may matter.
In my application, I have several modules that are called upon to do work when my web server receives a request. In the case of some of these modules, I close the request before I call upon them.
What is a good way to test that these modules are doing what they are supposed to do?
The advice here for RSpec is to mock out the work these modules are doing and just ensure that the appropriate methods are being called. This makes sense to me, but in Node.js, since my modules are not global, I don't think I can mock out functions without changing my program architecture so that every instance receives instances of the objects it needs [1].
[1] This is a well-known programming paradigm, but I cannot remember its name right now.
The other option I see is to use setTimeout and take my best guess at when these modules are done with their work.
Neither of these seems ideal.
Am I missing something? Are background processes not tested?
Since you are speaking of integration tests of these background components, a few strategies come to mind.
Take all the asynchronicity out of their operation for test mode. I'm imagining you have some sort of queueing process (that could be a faulty assumption), you toss work into the queue, and then your modules pick up that work and do their task. You could rework your test harness such that the test harness stands in as the queuing mechanism and you effectively get direct control over when the modules execute.
Refactor your modules to take some sort of next callback function. They would end up functioning a bit like Express's middleware layer or like async's each function: into each module you'd pass a callback that it calls when its task is complete. Once all of the modules have reported in, you can check the state of the program (a sketch of this approach follows below).
Exactly what you already suggested: wait some amount of time, and if the work still isn't done, consider that a failure. Mocha sort of does that, in that if a given test runs over a definable threshold, it's a failure. I don't like this approach, though, because if you add more tests, they all have to wait the same amount of time.
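As a rough illustration of the second strategy with tape (the processUpload module and its signature are invented for this sketch), the background module signals completion through a callback, and the test simply waits for that signal instead of guessing with setTimeout:
import test from 'tape'

// Hypothetical background worker: does its work off the request path and
// reports completion through the callback it was given.
function processUpload(record: { id: string }, onDone: (err: Error | null) => void): void {
  setImmediate(() => {
    // ...the real work would happen here...
    onDone(null)
  })
}

test('background module reports completion', (t) => {
  t.plan(1)
  processUpload({ id: 'abc' }, (err) => {
    // tape's t.error asserts that err is falsy.
    t.error(err, 'background work finished without error')
  })
})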

Cucumber: Each feature passes individually, but not together

I am writing a Rails 3.1 app, and I have a set of three cucumber feature files. When run individually, as with:
cucumber features/quota.feature
-- or --
cucumber features/quota.feature:67 # specifying the specific individual test
...each feature file runs fine. However, when all run together, as with:
cucumber
...one of the tests fails. It's odd because only one test fails; all the other tests in the feature pass (and many of them do similar things). It doesn't seem to matter where in the feature file I place this test; it fails if it's the first test or way down there somewhere.
I don't think it can be the test itself, because it passes when run individually or even when the whole feature file is run individually. It seems like it must be some effect related to running the different feature files together. Any ideas what might be going on?
It looks like there is coupling between your scenarios. Your failing scenario assumes that the system is in some state. When the scenario runs individually, the system is in this state and so the scenario passes. But when you run all the scenarios, scenarios that ran previously change this state, and so it fails.
You should solve it by making your scenarios completely independent. The work of any scenario shouldn't influence the results of other scenarios. This is highly encouraged in The Cucumber Book and in Specification by Example.
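The question is about Ruby/Rails Cucumber, where the usual fix is a Before hook that resets shared state (for example with database_cleaner), but the same pattern exists in every Cucumber flavour. As a purely illustrative sketch in cucumber-js/TypeScript, with a hypothetical resetDatabase helper standing in for whatever shared state the suite has:
// Reset shared state before every scenario so no scenario depends on what an
// earlier one left behind.
import { Before } from '@cucumber/cucumber'

// Hypothetical helper: truncate tables, reload fixtures, clear caches, etc.
async function resetDatabase(): Promise<void> {
  // ...cleanup specific to your application...
}

Before(async function () {
  await resetDatabase()
})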
I had a similar problem and it took me a long time to figure out the root cause.
I was using #selenium tags to test JQuery scripts on a selenium client.
My page had an ajax call that was sending a POST request. I had a bug in the javascript and the post request was failing. (The feature wasn't complete and I hadn't yet written steps to verify the result of the ajax call.)
This error was recorded in Capybara.current_session.server.error.
When the following non-Selenium feature was executed, a Before hook within Capybara called Capybara.reset_sessions!
This then called
def reset!
  driver.reset! if @touched
  @touched = false
  raise @server.error if @server and @server.error
ensure
  @server.reset_error! if @server
end
@server.error was not nil for each scenario in the following feature(s), and Cucumber reported each step as skipped.
The solution in my case was to fix the ajax call.
So Andrey Botalov and Doug Noel were right: I had carry-over from an earlier feature.
I had to keep debugging until I found the exception that was being raised, and then investigate what was generating it.
I hope this helps someone else who didn't realise they had carry-over from an earlier feature.
