Quit driver after each Geb Spock test - groovy

I am running Geb/Spock tests in Sauce Connect, and I would prefer to have unique instances of the RemoteWebDriver per test. This way, the Sauce reports would be divided by test, which makes it easy to diagnose failures. I'm not concerned (right now) about the additional performance overhead, because as it stands running all our Geb tests via one RemoteWebDriver instance is not helpful at all - it takes a very long time to coordinate the results with the Sauce screenshots/screencasts, and when timeouts occur (which is a high possibility in a long running job over Sauce Connect) there is usually some test failure spillover.
I tried this in a class that extends GebReportingSpec:
def cleanup() {
    if (System.getProperty('geb.env')?.contains('sauce')) {
        setSauceJobStatus()
        driver.quit()
    }
}
And of course, I create a new RemoteWebDriver in the setup() method.
With this approach I get a unique Sauce Connect session per test, and the results are all beautifully organized in Sauce. HOWEVER, all the tests fail due to:
"org.openqa.selenium.remote.SessionNotFoundException: Session ID is null. Using WebDriver after calling quit()?"
It turns out that the cleanup() method in GebReportingSpec calls out to this method:
void report(String label = "") {
    browser.report(ReporterSupport.toTestReportLabel(_gebReportingSpecTestCounter, _gebReportingPerTestCounter++, _gebReportingSpecTestName.methodName, label))
}
Which throws this stack trace:
at org.openqa.selenium.remote.HttpCommandExecutor.execute(HttpCommandExecutor.java:125)
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:572)
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:622)
at org.openqa.selenium.remote.RemoteWebDriver.getPageSource(RemoteWebDriver.java:459)
at geb.report.PageSourceReporter.getPageSource(PageSourceReporter.groovy:42)
at geb.report.PageSourceReporter.writePageSource(PageSourceReporter.groovy:38)
at geb.report.PageSourceReporter.writeReport(PageSourceReporter.groovy:29)
at geb.report.CompositeReporter.writeReport(CompositeReporter.groovy:31)
at geb.Browser.report(Browser.groovy:788)
at geb.spock.GebReportingSpec.report(GebReportingSpec.groovy:44)
at geb.spock.GebReportingSpec.cleanup(GebReportingSpec.groovy:39)
It's assuming that the WebDriver instance is still around when the GebReportingSpec cleanup() method is called, so that the reporting information can be prepared.
So, my approach is obviously not the "Geb way".... I'm wondering if anyone can clue me in on how to properly create a unique driver per Spock test?

Unfortunately you've hit a limitation of the GebReportingSpec implementation and the fixed order in which Spock's setup and cleanup methods execute in an inheritance hierarchy. What you should do is quit your browser in a method that overrides GebSpec.resetBrowser() instead of in cleanup():
void resetBrowser() {
    def driver = browser.driver
    super.resetBrowser()
    if (System.getProperty('geb.env')?.contains('sauce')) {
        driver.quit()
    }
}
Getting a local reference to the driver and then calling the super method is important, because the super method clears the browser reference, which means you won't be able to get hold of the driver afterwards.
Also, you should not create a new RemoteWebDriver in setup(); instead, disable driver caching. With caching disabled, a new driver is created for every driver request (a driver is requested each time a browser is created, and a new browser is created for each test) rather than a cached one being reused.
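Driver caching is controlled from GebConfig.groovy. A minimal sketch, assuming a Sauce-style remote setup (the hub URL and capabilities below are placeholders, not your actual values):
// GebConfig.groovy
import org.openqa.selenium.remote.DesiredCapabilities
import org.openqa.selenium.remote.RemoteWebDriver

cacheDriver = false  // a fresh driver per browser, i.e. per test

environments {
    sauce {
        driver = {
            def caps = DesiredCapabilities.firefox()
            // placeholder hub URL - point this at your Sauce Connect tunnel
            new RemoteWebDriver(new URL("http://localhost:4445/wd/hub"), caps)
        }
    }
}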

If you use browser.quit(), you will get an exception and the test will fail. You can try the following snippet at the very beginning of your class; it should work just fine:
def setup() {
    browser.config.cacheDriver = false      // don't reuse a cached driver
    browser.driver = browser.config.driver  // ask the config for a fresh driver
}

def cleanup() {
    browser.close()
}
Cheers!

Related

jupiter zerocode ParallelLoadExtension choose methods order

Is it possible to use some kind of @Before annotation?
I want to 'pre-load' data (POST) before launching my tests (GET).
But I only want parallel execution on the GETs.
I was thinking of defining a method with @LoadWith("preload_generation.properties") containing:
number.of.threads=1
ramp.up.period.in.seconds=1
loop.count=1
Just to be sure that we execute it only once.
But it looks like I cannot choose the order of execution, and I need this POST method to be the first one executed.
I also tried putting a TestMappings with my 'loading method' at the top of the class.
But that doesn't work either.
I am not aware of any way ZeroCode can do this, since it is specifically about re-leveraging tests already written in JUnit. My suggestion would be to follow a more traditional approach and use the standard JUnit setup methods
@BeforeClass
public static void setupClass() {
    // setup before the entire class
}

@Before
public void setup() {
    // setup before each individual test
}
rather than attempting to use a tool outside of its intended purposes.
For the scenario you describe, where you want to ensure data is loaded before your tests are executed (especially when they are run under load by ZeroCode), the suggestion is to create your data in the class-level setup:
@BeforeClass
public static void setupClass() {
    // setup before the entire class
}
While this may take a bit more thought about how you create your data, creating it before all the tests ensures that data setup time is excluded from your load test.
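For illustration only, a rough sketch of what such a one-time preload could look like (Groovy for brevity; the endpoint, payload, and class name are made up for the example):
import org.junit.BeforeClass

class PreloadedLoadTestBase {

    @BeforeClass
    static void preloadData() {
        // POST the seed data once, before any of the parallel GET tests run
        def conn = new URL("http://localhost:8080/api/items").openConnection()
        conn.requestMethod = "POST"
        conn.doOutput = true
        conn.setRequestProperty("Content-Type", "application/json")
        conn.outputStream.withWriter { it << '{"name": "seed-item"}' }
        assert conn.responseCode in 200..299
    }
}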

Execute Spock Test in Different Environments

I have a Selenium test, which is executed with the help of Spock framework. In general it looks like this:
class SeleniumSpec extends Specification {

    URL remoteAddress    // Address of SE grid
    Capabilities caps    // Desired capabilities
    WebDriver driver     // Web driver

    def setup() {
        driver = new RemoteWebDriver(remoteAddress, caps)
    }

    def "some test"() {
        expect:
        driver.findElement(By.cssSelector("p.someParagraph")).text == 'Some text'
    }

    // other tests go here ...
}
The point here is that my specification describes the behavior of some component (in most cases, web views/pages). So the methods are expected to implement some business-related logic (something like 'click on a button and expect a message in another field'). The other thing I would like to test is that the behavior is exactly the same in all browsers (capabilities).
To achieve this in an ideal world, I'd like a mechanism to specify that a particular test class should be run several times with different parameters. For now, I only see the ability to apply data sets to a single method.
I have come up with only a few ideas for implementing this (given my current knowledge of the Spock framework):
Use a list of drivers and execute each action over all list members, so each call to 'driver' is replaced with a 'drivers.each { it }' invocation. On the other hand, this approach makes it hard to discover exactly which of the drivers failed the test.
Use method parameters and data sets to initiate a fresh copy of the web driver on each iteration. This approach seems more in line with the Spock philosophy, but it requires the heavy operation of driver and web application initialization to be performed every time. It also removes the ability to perform 'step-by-step' testing, since the state of the driver won't be preserved between test methods.
A combination of these approaches, where drivers are kept in a map and each test invocation specifies the exact name of the driver to be used.
I'd appreciate it if anybody who has met this case could share ideas on how to properly organize the testing process. Other approaches, or the pros and cons of those above, are also welcome. Considering another test tool could be an option as well.
You could create an abstract BaseSpec which contains all your features, but do not set up the driver in that spec. Then create a sub-spec for each browser you want to test, e.g.
class FirefoxSeleniumSpec extends BaseSeleniumSpec {
    def setupSpec() {
        super.driver = new FirefoxDriver(...)
    }
}
And then you can run all of the sub-specs to test all the browsers
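For completeness, a minimal sketch of what the shared base spec could look like (the @Shared field and the single feature are illustrative):
import org.openqa.selenium.By
import org.openqa.selenium.WebDriver
import spock.lang.Shared
import spock.lang.Specification

abstract class BaseSeleniumSpec extends Specification {

    // assigned by each browser-specific sub-spec in its setupSpec()
    @Shared WebDriver driver

    def "some test"() {
        expect:
        driver.findElement(By.cssSelector("p.someParagraph")).text == 'Some text'
    }
}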

Geb drive method

I'm trying to learn how to use Geb and I'm getting an error. Could you guys help me out?
I'm trying to use the drive method but it is not working. I've tested a few of the other Browser methods and they all work fine; only the drive method is giving me trouble.
I've checked the API and googled around but didn't find anything helpful. The strange thing is that I don't get an error message - there is no exception. I'm running the code in Groovy's console, and Firefox just chills for a while and then the execution finishes.
Geb 0.9.2, FirefoxDriver and JDK 7
import org.openqa.selenium.WebDriver
import geb.Browser
import org.openqa.selenium.firefox.FirefoxDriver

public class MyTest {

    Browser browser

    void test() {
        browser = new Browser(driver: new FirefoxDriver())
        browser.go "http://www.google.com" // this works
        browser.$("div button", name: "btnK").text() == "Google Search" // this works
        browser.drive { // WHY U NO WORK?!!
            go "http://www.google.com"
        }
    }
}

x = new MyTest()
x.test()
You should know that drive() is a static method and it's designed to be used in scripts where you don't instantiate a browser instance yourself. You have to decide - either you use a browser instance or you use the Browser.drive {} method. You cannot do both.
You might also consider using one of the integrations with testing frameworks - by doing so you'll get Geb to manage a browser instance for you.
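As a rough sketch of the first option, a standalone script built around the static drive() method (assuming the map-based drive() overload that mirrors the Browser constructor you already use) would look something like this:
import geb.Browser
import org.openqa.selenium.firefox.FirefoxDriver

// Browser.drive creates and manages the browser for the duration of the closure
Browser.drive(driver: new FirefoxDriver()) {
    go "http://www.google.com"
    assert $("div button", name: "btnK").text() == "Google Search"
    quit()
}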

What are fixtures in programming?

I have heard of this term many times (in the context of programming) but couldn't find any explanation of what it meant. Any good articles or explanations?
I think you're referring to test fixtures:
The purpose of a test fixture is to ensure that there is a well known
and fixed environment in which tests are run so that results are
repeatable. Some people call this the test context.
Examples of fixtures:
Loading a database with a specific, known set of data
Erasing a hard disk and installing a known clean operating system installation
Copying a specific known set of files
Preparation of input data and set-up/creation of fake or mock objects
(source: wikipedia, see link above)
Here are also some practical examples from the documentation of the 'Google Test' framework.
The term fixture varies based on context, programming language, or framework.
1. A known state against which a test is running
One of the most time-consuming parts of writing tests is writing the
code to set the world up in a known state and then return it to its
original state when the test is complete. This known state is called
the fixture of the test.
PHP-Unit documentation
A test fixture (also known as a test context) is the set of
preconditions or state needed to run a test. The developer should set
up a known good state before the tests, and return to the original
state after the tests.
Wikipedia (xUnit)
2. A file containing sample data
Fixtures is a fancy word for sample data. Fixtures allow you to
populate your testing database with predefined data before your tests
run. Fixtures are database independent and written in YAML. There is
one file per model.
RubyOnRails.org
3. A process that sets up a required state. 
A software test fixture sets up the system for the testing process by
providing it with all the necessary code to initialize it, thereby
satisfying whatever preconditions there may be. An example could be
loading up a database with known parameters from a customer site
before running your test.
Wikipedia
I think the PHPUnit documentation explains this very well:
One of the most time-consuming parts of writing tests is writing the
code to set the world up in a known state and then return it to its
original state when the test is complete. This known state is called
the fixture of the test.
The Yii documentation also describes test fixtures nicely:
Automated tests need to be executed many times. To ensure the testing
process is repeatable, we would like to run the tests in some known
state called fixture. For example, to test the post creation feature
in a blog application, each time when we run the tests, the tables
storing relevant data about posts (e.g. the Post table, the Comment
table) should be restored to some fixed state.
Here's a simple example of a test fixture:
<?php
use PHPUnit\Framework\TestCase;

class StackTest extends TestCase
{
    protected $stack;

    protected function setUp()
    {
        $this->stack = [];
    }

    protected function tearDown()
    {
        $this->stack = [];
    }

    public function testEmpty()
    {
        $this->assertTrue(empty($this->stack));
    }

    public function testPush()
    {
        array_push($this->stack, 'foo');
        $this->assertEquals('foo', $this->stack[count($this->stack)-1]);
        $this->assertFalse(empty($this->stack));
    }

    public function testPop()
    {
        array_push($this->stack, 'foo');
        $this->assertEquals('foo', array_pop($this->stack));
        $this->assertTrue(empty($this->stack));
    }
}
?>
This PHPUnit test has setUp and tearDown methods: before each test runs you set up your data, and once it finishes you restore the initial state.
JUnit has a well-explained doc on exactly this topic. Here is the link!
The related portion of the article is:
Tests need to run against the background of a known set of objects. This set of objects is called a test fixture. When you are writing tests you will often find that you spend more time writing the code to set up the fixture than you do in actually testing values.
To some extent, you can make writing the fixture code easier by paying careful attention to the constructors you write. However, a much bigger savings comes from sharing fixture code. Often, you will be able to use the same fixture for several different tests. Each case will send slightly different messages or parameters to the fixture and will check for different results.
When you have a common fixture, here is what you do:
Add a field for each part of the fixture
Annotate a method with @org.junit.Before and initialize the variables in that method
Annotate a method with @org.junit.After to release any permanent resources you allocated in setUp
For example, to write several test cases that want to work with different combinations of 12 Swiss Francs, 14 Swiss Francs, and 28 US Dollars, first create a fixture:
public class MoneyTest {

    private Money f12CHF;
    private Money f14CHF;
    private Money f28USD;

    @Before
    public void setUp() {
        f12CHF = new Money(12, "CHF");
        f14CHF = new Money(14, "CHF");
        f28USD = new Money(28, "USD");
    }
}
In Xamarin.UITest it is explained as follows:
Typically, each Xamarin.UITest is written as a method that is referred
to as a test. The class which contains the test is known as a test
fixture. The test fixture contains either a single test or a logical
grouping of tests and is responsible for any setup to make the test
run and any cleanup that needs to be performed when the test finishes.
Each test should follow the Arrange-Act-Assert pattern:
Arrange – The test will setup conditions and initialize things so that the test can be actioned.
Act – The test will interact with the application, enter text, pushing buttons, and so on.
Assert – The test examines the results of the actions performed in the Act step to determine correctness. For example, the
application may verify that a particular error message is
displayed.
Link to the original article for the above excerpt
And in Xamarin.UITest code it looks like the following:
using System;
using System.IO;
using System.Linq;
using NUnit.Framework;
using Xamarin.UITest;
using Xamarin.UITest.Queries;

namespace xamarin_stembureau_poc_tests
{
    [TestFixture(Platform.Android)]
    [TestFixture(Platform.iOS)]
    public class TestLaunchScreen
    {
        IApp app;
        Platform platform;

        public TestLaunchScreen(Platform platform)
        {
            this.platform = platform;
        }

        [SetUp]
        public void BeforeEachTest()
        {
            app = AppInitializer.StartApp(platform);
        }

        [Test]
        public void AppLaunches()
        {
            app.Screenshot("First screen.");
        }

        [Test]
        public void LaunchScreenAnimationWorks()
        {
            app.Screenshot("Launch screen animation works.");
        }
    }
}
Hope this is helpful to someone in search of a better understanding of fixtures in programming.
I'm writing this answer as a quick note for myself on what a "fixture" is.
Test Fixtures: Using the Same Data Configuration for Multiple Tests
If you find yourself writing two or more tests that operate on similar data, you can use a test fixture. This allows you to reuse the same configuration of objects for several different tests.
You can read more at googletest.
Fixtures can also be used during integration tests or during development (say, UI development where data comes from a development database).
Fake users for a database or for testing:
myproject/fixtures/my_fake_user.json
[
    {
        "model": "myapp.person",
        "pk": 1,
        "fields": {
            "first_name": "John",
            "last_name": "Lennon"
        }
    },
    {
        "model": "myapp.person",
        "pk": 2,
        "fields": {
            "first_name": "Paul",
            "last_name": "McCartney"
        }
    }
]
You can read more in the Django docs.

Using selenium web driver to run test on multiple browsers

I'm trying to run the same test across multiple browsers through a for loop, but it always runs only on Firefox.
bros = ['FIREFOX', 'CHROME', 'INTERNET EXPLORER']
for bro in bros:
    print "Running " + bro + "\n"
    browser = webdriver.Remote(
        command_executor='http://10.236.194.218:4444/wd/hub',
        desired_capabilities={'browserName': bro,
                              'javascriptEnabled': True})
    browser.implicitly_wait(60000)
    browser.get("http://10.236.194.156")
One interesting observation: when I include the parameter platform: WINDOWS, it runs only on Internet Explorer.
Does Selenium WebDriver work this way, or is my understanding wrong?
I have actually done this in Java; the following works well for me:
...
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
...

DesiredCapabilities[] browsers = {DesiredCapabilities.firefox(), DesiredCapabilities.chrome(), DesiredCapabilities.internetExplorer()};
for (DesiredCapabilities browser : browsers)
{
    try {
        System.out.println("Testing in Browser: " + browser.getBrowserName());
        driver = new RemoteWebDriver(new URL("http://127.0.0.1:4444/wd/hub"), browser);
        ...
You will of course need to adapt this if you're writing your tests in a different language; I know it's possible in Java, but I'm not sure otherwise.
Also, I agree with what you're trying to do; I think it is much better to have a class that runs the same tests with different browsers, instead of duplicating code many times over. If you are doing this in Java or another language, I also highly suggest using the Page Object pattern.
Good luck!
So if I understood you correctly, you have one test case and want it to be tested against different browsers.
I don't think a loop is a good idea, even if it's possible (I don't know at the moment).
The idea is to be able to run every test case standalone against a specific browser (that's the JUnit philosophy), not to run them all in order to get to that specific browser.
So you need to create a WebDriver for the specific browser and the specific test case.
I suggest you separate the test cases by creating a test-case class for each browser.
Like: FirefoxTestOne.java, IeTestOne.java, ChromeTestOne.java.
Note that you can add multiple Firefox tests to FirefoxTestOne without problems. There's no guarantee that they will be executed in a particular order though (JUnit philosophy).
For links and tutorials, ask Google. There are already lots of examples written.
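A per-browser test class along those lines might look roughly like this (a Groovy/JUnit 4 sketch; the class name, URL, and assertion are illustrative):
import org.junit.After
import org.junit.Before
import org.junit.Test
import org.openqa.selenium.WebDriver
import org.openqa.selenium.firefox.FirefoxDriver

class FirefoxTestOne {

    WebDriver driver

    @Before
    void setUp() {
        driver = new FirefoxDriver()   // this class is pinned to Firefox
    }

    @Test
    void homePageLoads() {
        driver.get("http://10.236.194.156")
        assert driver.title != null
    }

    @After
    void tearDown() {
        driver.quit()
    }
}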
You will have to generate multiple test classes (or WebDriver instances) for the chosen browsers.
A WebDriver is defined for one browser.
As Coretek said, you need multiple WebDriver instances. You will need to run the selenium-server .jar file and provide each one with an argument specifying the browser you want that instance of the server to run.
The argument for Internet Explorer is *iexplore, the argument for Firefox is *firefox, and the argument for Chrome is *chrome. These are -forcedBrowserMode arguments; otherwise Selenium won't know what it should be running against. You may need to use *iexploreProxy for your tests; sometimes it works better than the *iexplore mode.
Check out this link for more arguments that may be useful:
http://seleniumforum.forumotion.net/t89-selenium-server-command-options-while-starting-server
This approach (see the URL below) worked for me.
http://blog.varunin.com/2011/07/running-selenium-tests-on-different.html
The following point is different from the example.
@Parameters
public static List data() {
    return Arrays.asList(new Object[][]{{"firefox"}, {"ie"}});
}

@Before
public void setUp() throws Exception {
    System.out.println("browser: " + browser);
    if (browser.equalsIgnoreCase("ie")) {
        System.setProperty("webdriver.ie.driver", "IEDriverServer64.exe");
        driver = new InternetExplorerDriver();
    } else if (browser.equalsIgnoreCase("firefox")) {
        driver = new FirefoxDriver();
    }
}
You can use TestNG for this.
The combination of Selenium + TestNG gives you a better result here:
just by adding a parameters attribute you can do this, as in the sketch below.
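A rough sketch of that idea (Groovy for brevity; the class name and page check are made up, and the browser value would come from a <parameter name="browser" .../> entry in testng.xml):
import org.openqa.selenium.WebDriver
import org.openqa.selenium.chrome.ChromeDriver
import org.openqa.selenium.firefox.FirefoxDriver
import org.testng.annotations.AfterMethod
import org.testng.annotations.BeforeMethod
import org.testng.annotations.Parameters
import org.testng.annotations.Test

class CrossBrowserTest {

    WebDriver driver

    @Parameters("browser")
    @BeforeMethod
    void setUp(String browser) {
        // the browser name is injected from testng.xml, one <test> block per browser
        driver = browser.equalsIgnoreCase("chrome") ? new ChromeDriver() : new FirefoxDriver()
    }

    @Test
    void homePageLoads() {
        driver.get("http://10.236.194.156")
        assert driver.title != null
    }

    @AfterMethod
    void tearDown() {
        driver.quit()
    }
}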
Have you considered using the composite design pattern to create a CompositeWebDriver that actually drives multiple component WebDrivers (Chrome, Gecko, ...)? To this end, you would implement the WebDriver interface with a new class (e.g. CompositeWebDriver) that just delegates its calls to all the actual WebDrivers.
This could also be done with various instances of RemoteWebDriver as components.
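A bare-bones sketch of the idea (illustrative only - a real composite would implement the full WebDriver interface and forward every method):
import org.openqa.selenium.WebDriver

class CompositeWebDriver {

    List<WebDriver> drivers = []

    // fan each call out to every wrapped driver
    void get(String url) {
        drivers.each { it.get(url) }
    }

    void quit() {
        drivers.each { it.quit() }
    }
}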
