In Intern, subsequent test suites reuse browser/session

In Intern, test suites reuse the browser from the prior test suite. Although I can see the benefit of this in certain instances, I'm wondering if there is a way to set up the setup/teardown within the test suites so that the browser is closed upon completion and a new browser instance is opened when the next suite starts. Thanks

Related

Puppeteer in-browser time acceleration

Is there some option, the opposite of slowMo, that accelerates in-browser time? What I want is to play all the page scripts/intervals/animations as quickly as possible.
I don't believe that this is achievable. You can slow down actions in the browser, as you pointed out, by using the slowMo setting, but otherwise the browser acts just like it would if a user were interacting with it - that's kind of the point really :-)
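For reference, slowMo is an option passed to puppeteer.launch; a minimal example, using an arbitrary 250 ms delay per operation:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  // slowMo delays every Puppeteer operation by the given number of milliseconds.
  const browser = await puppeteer.launch({ headless: false, slowMo: 250 });
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await browser.close();
})();
```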
The suggestions I can make to you are based on things I've tried and implemented from my own experiences with UI automation. Hopefully something in here can help you.
What I want is to play all the page scripts/intervals/animations as quickly as possible.
As I've said already, I don't believe you will be able to do anything other than wait for the page to load normally, as if a user were logging in to and/or interacting with your application. However, you do have a very powerful option which can take some of the pain away from waiting for your page to load all of the time.
For example, you can use jest-puppeteer:
https://github.com/smooth-code/jest-puppeteer/tree/master/packages/jest-puppeteer
What jest-puppeteer allows you to do is structure your test suite(s) in a behaviour-driven testing format, using describe and it statements to define your suite and test scripts respectively. By using this method, you can specify a beforeAll hook to be executed once before all the test scripts in the suite - so in there you could, say, log into your application and have it wait for everything to load once and once only. All test scripts will then be executed sequentially on the page that is displayed in the remote browser, without having to reload the browser and start all over again from scratch between tests.
This can significantly reduce the pain of having to wait for page loads every single time you want to run a test.
The idea is that you can bunch related test scripts together in each suite - i.e. in the first suite, load the login page once and then execute all login-based test scripts before tearing down. In the next suite, load the home page once and then execute all home-page-based test scripts before tearing down. You get the idea.
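As a rough sketch of that pattern with jest-puppeteer (the URL, selectors and assertions below are placeholders for your own application):

```javascript
// jest-puppeteer exposes a global `page` object connected to the remote browser.
describe('dashboard', () => {
  beforeAll(async () => {
    // Log in and wait for everything to load once, for the whole suite.
    await page.goto('https://example.com/login');
    await page.type('#username', 'test-user');
    await page.type('#password', 'secret');
    await Promise.all([
      page.waitForNavigation(),
      page.click('#submit'),
    ]);
  });

  it('shows the dashboard heading', async () => {
    const heading = await page.$eval('h1', (el) => el.textContent);
    expect(heading).toContain('Dashboard');
  });

  it('shows the logged-in user name', async () => {
    const user = await page.$eval('.current-user', (el) => el.textContent);
    expect(user).toContain('test-user');
  });
});
```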
This is the best recommendation I can personally give you. Hopefully it helps!

Testing a Node library working with Docker containers

I'm currently writing a Node library to execute untrusted code within Docker containers. It basically maintains a pool of running containers and provides an interface to run code in one of them. Once the execution is complete, the corresponding container is destroyed and replaced by a new one.
The four main classes of the library are:
Sandbox. Exposes a constructor with various options including the pool size, and two public methods: executeCode(code, callback) and cleanup(callback)
Job. A class with two attributes, code and callback (to be called when the execution is complete)
PoolManager, used by the Sandbox class to manage the pool of containers. Provides the public methods initialize(size, callback) and executeJob(job, callback). It has internal methods related to the management of the containers (_startContainer, _stopContainer, _registerContainer, etc.). It uses an instance of the dockerode library, passed in the constructor, to do all the docker related stuff per se.
Container. A class with the attributes tmpDir, dockerodeInstance, IP and a public method executeCode(code, callback) which basically sends an HTTP POST request to ContainerIP:3000/compile along with the code to compile (a minimalist API runs inside each Docker container).
In the end, the final users of the library will only be using the Sandbox class.
Now, my question is: how should I test this?
First, it seems pretty clear to me that I should begin by writing functional tests against my Sandbox class (a rough sketch of what I have in mind follows this list):
it should create X containers, where X is the required pool size
it should correctly execute code (including the security aspects: handling timeouts, fork bombs, etc. which are in the library's requirements)
it should correctly cleanup the resources it uses
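In rough Mocha terms, I imagine the first of those tests looking something like this (the poolSize option name and the use of dockerode's listContainers to count running containers are just placeholders for illustration):

```javascript
const assert = require('assert');
const Docker = require('dockerode');
const Sandbox = require('../lib/sandbox'); // hypothetical path

describe('Sandbox (functional)', function () {
  this.timeout(60000); // starting real containers is slow

  const docker = new Docker();
  let sandbox;

  before(function (done) {
    sandbox = new Sandbox({ poolSize: 3 });
    // Crude wait for the pool to come up; a "ready" callback or event would be nicer.
    setTimeout(done, 5000);
  });

  after(function (done) {
    sandbox.cleanup(done);
  });

  it('creates as many containers as the required pool size', function (done) {
    docker.listContainers(function (err, containers) {
      assert.ifError(err);
      assert.ok(containers.length >= 3);
      done();
    });
  });
});
```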
But then I'm not sure what else it would make sense to test, how to do it, and if the architecture I'm using is suitable to be correctly tested.
Any idea or suggestion related to this is highly appreciated! :) And feel free to ask for a clarification if anything looks unclear.
Christophe
Try and separate your functional and unit testing as much as you can.
If you make a minor change to your constructor on Sandbox then I think testing will become easier. Sandbox should take a PoolManager directly. Then you can mock the PoolManager and test Sandbox in isolation, which it appears is just the creation of Jobs, calling PoolManager for Containers and cleanup. Ok, now Sandbox is unit tested.
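A rough sketch of what that could look like with Mocha and Sinon (the method names on the fake PoolManager and the result shape are assumptions based on the description above):

```javascript
const assert = require('assert');
const sinon = require('sinon');
const Sandbox = require('../lib/sandbox'); // hypothetical path

describe('Sandbox (unit)', function () {
  it('wraps the code in a Job and hands it to the pool manager', function (done) {
    const fakePool = {
      initialize: sinon.stub().yields(null),
      executeJob: sinon.stub().yields(null, 'output'),
    };
    const sandbox = new Sandbox(fakePool); // constructor now takes a PoolManager

    sandbox.executeCode('console.log("hi");', function (err, result) {
      assert.ifError(err);
      assert.ok(fakePool.executeJob.calledOnce);
      assert.strictEqual(result, 'output');
      done();
    });
  });
});
```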
PoolManager may be harder to unit test, as the dockerode client might be hard to mock (the API is fairly big). Regardless of whether you mock it or not, you'll want to test the following (a sketch of the first point follows this list):
Growing/shrinking the pool size correctly
Sending more requests than there are available containers in the pool
How stuck containers are handled, both when starting and stopping
Handling of network failures (easier when you mock things)
Retries
Any other failure cases you can think of
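For example, a sketch of a test for the first point, with a stubbed dockerode instance (the stubbed methods are guesses at what PoolManager calls internally):

```javascript
const assert = require('assert');
const sinon = require('sinon');
const PoolManager = require('../lib/pool-manager'); // hypothetical path

describe('PoolManager (unit)', function () {
  it('starts one container per pool slot', function (done) {
    const fakeContainer = {
      start: sinon.stub().yields(null),
      inspect: sinon.stub().yields(null, {
        NetworkSettings: { IPAddress: '172.17.0.2' },
      }),
    };
    const fakeDocker = {
      createContainer: sinon.stub().yields(null, fakeContainer),
    };

    const pool = new PoolManager(fakeDocker);
    pool.initialize(5, function (err) {
      assert.ifError(err);
      assert.strictEqual(fakeDocker.createContainer.callCount, 5);
      done();
    });
  });
});
```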
The Container class can be tested by firing up the API from within the tests (in a container or locally). If it's that minimal, recreating it should be straightforward. Once you have that, it sounds like it's really just testing an HTTP client.
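For instance, a sketch with a throwaway local server standing in for the in-container API (the Container constructor arguments and the response shape are assumptions):

```javascript
const http = require('http');
const assert = require('assert');
const Container = require('../lib/container'); // hypothetical path

describe('Container (integration)', function () {
  let server;

  before(function (done) {
    // Stand in for the minimal API that normally listens on port 3000 inside the container.
    server = http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ output: 'ok' }));
    });
    server.listen(3000, done);
  });

  after(function (done) {
    server.close(done);
  });

  it('POSTs the code to /compile and yields the response', function (done) {
    const container = new Container({ IP: '127.0.0.1' });
    container.executeCode('console.log("hi");', function (err, result) {
      assert.ifError(err);
      assert.ok(result);
      done();
    });
  });
});
```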
The source code for the actual API within the container can be tested however you like with standard unit tests. Because you're dealing with untrusted code there are a lot of possibilities:
Doesn't compile
Never completes execution
Never starts
All sorts of bombs
Uses all of the host's disk space
Is a bot and talks over the network
The code could do basically anything. You'll have to pick the things you care about. Try and restrict everything else.
Functional tests are going to be important too; there are a lot of pieces to deal with here, and mocking Docker isn't going to be easy.
Code isolation is a difficult problem; I wish Docker was around last time I had to deal with it. Just remember that your customers will always do things you didn't expect! Good luck!

Running Spring tests in parallel with maven

I have a collection of integration tests running with SpringJUnit4ClassRunner. I'm trying to run these in parallel using Maven Surefire. However, I have noticed that the code blocks before entering the synchronized block in CacheAwareContextLoaderDelegate.loadContext().
Is there a way to bypass this cache? I tried doing this, but it seems like there is more shared state than just the cache itself since my application deadlocked inside Spring code. Or could the synchronization be made more fine-grained by somehow synchronizing on the map key rather than the entire map?
My motivation for parallelising tests is twofold:
In some tests, I replace beans with mocks. Since mocks are inherently stateful, I have to build a fresh ApplicationContext for every test method using @DirtiesContext.
In other tests, I only want to deploy a subset of Jersey resources. To do this, I specify a subset of Spring configuration classes. Since Spring uses the MergedContextConfiguration as a key in the context cache, these tests will be unable to share ApplicationContexts.
It is possible that you may get a better turn-around time for your test suite if you disable the parallel test execution. In the testing chapter of Spring's reference docs there is a paragraph about context caching:
Once the TestContext framework loads an ApplicationContext (or WebApplicationContext) for a test, that context will be cached and reused for all subsequent tests that declare the same unique context configuration within the same test suite.
Why is it implemented like this?
This means that the setup cost for loading an application context is incurred only once (per test suite), and subsequent test execution is much faster.
How does the cache work?
The Spring TestContext framework stores application contexts in a static cache. This means that the context is literally stored in a static variable. In other words, if tests execute in separate processes the static cache will be cleared between each test execution, and this will effectively disable the caching mechanism.
To benefit from the caching mechanism, all tests must run within the same process or test suite. This can be achieved by executing all tests as a group within an IDE. Similarly, when executing tests with a build framework such as Ant, Maven, or Gradle it is important to make sure that the build framework does not fork between tests. For example, if the forkMode for the Maven Surefire plug-in is set to always or pertest, the TestContext framework will not be able to cache application contexts between test classes and the build process will run significantly slower as a result.
One easy thing that I could think of is using @DirtiesContext.

Execution of Test scripts with Coded UI Test consumes more time

We are facing few issues while executing Coded UI Test scripts.
Regularly we have to execute automated scripts with Coded UI Test; earlier we used Test Partner for execution. Recently we migrated a few of our Test Partner scripts to Coded UI Test. However, we observed that the Coded UI Test scripts' execution time is longer compared to the Test Partner execution time. Our automated scripts were completely hand-written; nowhere did we use the record-and-playback feature.
And few of our observations were
The IE browser hangs when executing Coded UI Test scripts on Windows XP. Every time, we have to kill the process and recreate the scenario to continue the execution. This defeats the purpose of automation, as each and every time someone has to monitor whether the script execution goes fine without the browser hanging. It's a very frequent problem on XP.
If we execute Coded UI Test scripts on Windows 7, the execution time is quite slow - it consumes even more time than execution on XP. So our execution time drags, although the scripts complete without the browser hanging. We tried executing the scripts in release mode as well, but whenever a script halts, one has to execute it again in debug mode.
Could you please advise on this? What exactly is the point we are missing? Can we improve execution time by changing tool settings? Thanks for the support.
First of all, you should enable logging and see why the search takes up so much time.
You can also find useful information in the debug outputs, which give warnings when operations take more time than expected.
Here are two useful links for enabling those logs:
For VS/MTM 2010 and 2012 beta: http://blogs.msdn.com/b/gautamg/archive/2009/11/29/how-to-enable-tracing-for-ui-test-components.aspx
For VS/MTM 2012 : http://blogs.msdn.com/b/visualstudioalm/archive/2012/06/05/enabling-coded-ui-test-playback-logs-in-visual-studio-2012-release-candidate.aspx
A friendly .html file with logs should be created in the %temp%\UITestLogs*\LastRun\ directory.
As for a possible explanation of your issue - it doesn't matter whether your tests were recorded or coded by hand; either way they produce calls to WpfControl.Find() (or one of its deriving classes), and if the search fails at first, it will move on to performing heuristics to find the targeted control anyway.
You can set the MatchExactHierarchy setting of your Playback to true and stop using the SmartMatch feature (more on this, together with a few other useful performance tips, here:
http://blogs.msdn.com/b/mathew_aniyan/archive/2009/08/10/configuring-playback-in-vstt-2010.aspx)

Spring managed database NUnit tests fail when run in parallel with Resharper

The goal: run our database tests (which use different databases) in parallel.
We are using ReSharper's NUnit runner to integrate our unit tests into Visual Studio. ReSharper allows you to set the number of assemblies to run in parallel. The tests never fail when run serially; however, when we set the number of assemblies to run in parallel to 3 or higher (maybe two, although none have failed yet), some database tests consistently fail.
Our guess is that data providers are getting switched out from under the tests. We use Spring as an IoC container, and in our tests it also handles the transaction management. Our older tests required seeded data, but our new tests expect an empty database and create any necessary data for the test. We think that because our bootstrap for the test fixtures is setting the connection string properties of the db provider, when another test runs in parallel it can change the provider out from under the tests.
Either way, it would be nice to run the database tests in parallel and not lose the transaction management and test cleanup we get with spring.
The bootstrap that runs for each test is setting the connection string (and the db provider connection string).
Any ideas on how to get these tests (with different connection strings) to run in parallel?

Resources