How to close the test browser when a functional test fails? - intern

When running a functional test with Intern, I've noticed that the test browser (Google Chrome in my case) automatically closes when the test succeeds, but is not closed when the test fails.
Is there a way to have the browser close even when a test fails?

Test browsers are always instructed to quit regardless of whether the test run succeeded or failed. If they are not quitting, it is probably because of a fatal error elsewhere (closed socket, etc.) that prevents Intern from sending the quit command to the browser.

Related

Debugging Node in VS Code (or other IDEs)

In VS Code there are two ways to launch the debug console for Node. One is “launch”, which executes node and passes in your script; the script runs and node exits, which I don't want to happen. The other way is “attach”: you launch node yourself using --inspect, then attach VS Code to the debugger. Then I have to go to the node console and type “.load myscript”. This keeps the node console open after the script has finished.
What I want is the ease of use of the “launch” method, but with the node console staying open at the end like the “attach” approach, so I can then type further commands or view the contents of variables. There must be a way to do this but I can't find out how. Can anyone advise how I could achieve this? I'd even be happy to use only the “launch” method if I could somehow add a breakpoint at the end of the code so that it would keep node open.
A Node.js process will not exit as long as there are events pending. A simple way to keep it alive at the end of your script is to start a server that does nothing:
const net = require('net');
net.createServer(() => {}).listen(0);
Setting the port to 0 will cause the OS to give you a random available port so you don't need to think about what port to use.
This is generally safe if you are on a local network. However, if you are worried about other software connecting to your bogus server you can simply close all incoming connections upon receiving them:
net.createServer(x=>x.end()).listen(0)

Browser crashed while manually loading a report page and failed to load the data, but the VuGen script runs fine and gets a 200 OK response

While manually loading a report containing ~50K records, the web browser crashed and failed to respond, but the VuGen script passed with the same 50K records, returning 200 OK for the same request. So does this mean that from a VuGen script (web HTTP protocol) we cannot find such performance issues, where the browser fails to load the web page?
If you are using WebHTTP you are only simulating the transport layer and you will not be able to find browser issues.
For browser issues you will have to do a functional test with a functional testing tool (e.g. UFT).
You can "cheat" though and use 1 TruClient virtual user in conjunction with the other WebHttp virtual users and then you will most likely be able to detect the problem.

Jest, single setup for multiple test files?

I am playing around with Node.js and Jest for testing and running into a problem when I have multiple test files.
In each of my test files, I have beforeAll() which starts my server and afterAll() which closes the server. When I had one test file, it worked fine. But when I have two test files, the server state conflicts. Sometimes, while the server is still running from the first file, the second file tries to start it and causes an Error: listen EADDRINUSE type of error; sometimes the first file closes the server in the middle of the second file's run. It all depends on timing, so sometimes it works and sometimes it runs into issues.
Is there a way to get my server started once before any of the test files run, then close the server after all of the test files have completed?
Or is it that I need to structure my testing differently?
Perhaps I need to put all tests which require my server into a single test file? I kind of doubt I have to do it that way, but who knows..
Any help or suggestion would be appreciated.
And I'd prefer not to use the --runInBand flag, though I am not certain it would help.
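For reference, the per-file pattern described above looks roughly like this (the server module, port, and test are hypothetical placeholders); with two such files, Jest may run them in parallel workers, so both try to bind the same port:

const server = require('./server'); // hypothetical app server module

beforeAll(async () => {
  await server.start(3000); // a second test file hitting this line is what raises EADDRINUSE
});

afterAll(async () => {
  await server.stop();
});

test('responds to a request', async () => {
  // ...assertions against the hypothetical server on port 3000...
});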

Start a new Node.js server from an existing one

I haven't found anything about my problem, so I'd like to ask you whether the following problem can be solved. I have a Node.js server which displays a website with a button. Is it possible to start another node server (which should run some SpookyJS tests and print the results to the website) when I click this button?
I found out that with nowJS you have a shared space which the server and "client" (some HTML page) share. Is this module helpful?
Thanks for your help,
Alex
In short - Yes!
But perhaps you can have both web servers running at all times. In fact, it'll be less of a load on your hardware.
1st Server - Application Server - runs at yoursite.com
2nd Server - SpookyJs/Test Server - runs at tests.yoursite.com
After the servers are up and running, the next thing I'd do is wrap the SpookyJS application with a simple RESTful interface/API to start tests and respond with the result of a test.
An important thing to note here is that when you start the SpookyJS application, let it stay open, so that every request to the SpookyJS application (through your interface) just calls the "open" or the "then" method.
Again, this is to remedy the issue of spawning too many headless browsers.
After the request goes through, go ahead and respond to the request with the result that spooky gives you.
Maybe that helps?
We are doing similar things with Zombie js... so maybe it will help you (:
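As a rough illustration of the RESTful wrapper idea above, here is a hedged sketch using Express (the route, port, and runSpookyTest helper are hypothetical and not part of the original answer):

const express = require('express');
const app = express();

// Hypothetical helper that reuses one long-lived SpookyJS instance
// instead of spawning a new headless browser per request.
const runSpookyTest = require('./run-spooky-test');

// The application server at yoursite.com can call this endpoint
// and render whatever result comes back.
app.post('/tests/run', async (req, res) => {
  try {
    const result = await runSpookyTest();
    res.json(result);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

app.listen(4000); // e.g. behind tests.yoursite.com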

How can I create "evented" it tests in mocha

I've written a series of tests in mocha for a node.js client/server application that uses UDP to send messages between the client and server.
My tests are mainly on the 'client side', treating the server as a black box and validating the responses from the server. The problem is that some of the "conversations" span multiple message send and receive events, sometimes running to dozens of seconds. It seems bizarre to have a 1,000-line test script with only one huge call to it at the top - I want to run multiple tests during the "conversation". I want more granularity about which specific parts of the test fail if it does fail (e.g. the first 2 responses from the server were fine but the 3rd was malformed), with the initial tests still passing.
I've looked into nesting the calls to it (which doesn't seem to work) and, most recently, separating the calls to it into separate steps, with each step representing the sending or receipt of one message on the client side.
This approach doesn't seem to work because mocha terminates the node application after the very first step, never waiting for the socket to receive more responses from the server and complete the rest of the steps.
How can I create "evented" it calls in mocha?
The first would be called at the start of the test, then each successive it would only be called after receiving a response from the server. I'm looking for any solution that gives me this granularity in tests without having to write enormous functions in my it call that span dozens of messages between client and server. It's also not OK to attempt validating responses from the server outside of the context of a conversation where many messages have been sent and received beforehand, because those messages determine the validity of the server's response.
See a sample implementation that I created at https://gist.github.com/4490219. You can see the result is that the first test passes but the second is never executed even though the socket is clearly still open and waiting for requests from the server.
(P.S. apologies for the text formatting of the gist - I couldn't seem to select Javascript as a language type when creating it)
(P.P.S. I really don't want to have to use a series of setTimeout calls at the start of each step to make mocha think it has to wait).
The solution is to use a combination of nested its (which actually can be achieved if you use a describe between each nested it) and a registration it with a long timeout whose done is only called when a response arrives from the server.
For future googlers I created another gist to show the solution for this problem.
https://gist.github.com/Trindaz/4490646
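For readers who don't want to open the gist, here is a hedged sketch of that pattern (the UDP port, address, and message contents are hypothetical and assume a server is listening): a "registration" it with a long timeout calls done only when the server's response arrives, and a describe nested after it holds the assertions for the next step of the conversation.

const dgram = require('dgram');
const assert = require('assert');

describe('client/server conversation', function () {
  this.timeout(30000); // allow for a long-running exchange

  const client = dgram.createSocket('udp4');
  let firstResponse;

  // Registration test: done() fires only once the server has replied.
  it('receives the first response', function (done) {
    client.once('message', (msg) => {
      firstResponse = msg;
      done();
    });
    client.send(Buffer.from('hello'), 41234, 'localhost'); // hypothetical server address
  });

  // Nested describe so the next it only runs after the one above has finished.
  describe('after the first response', function () {
    it('is well formed', function () {
      assert.ok(firstResponse && firstResponse.length > 0);
    });
  });

  after(function () {
    client.close();
  });
});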

Resources