Execute Spock Test in Different Environments - groovy

I have a Selenium test which is executed with the help of the Spock framework. In general, it looks like this:
class SeleniumSpec extends Specification {
    URL remoteAddress     // Address of SE grid
    Capabilities caps     // Desired capabilities
    WebDriver driver      // Web driver

    def setup() {
        driver = new RemoteWebDriver(remoteAddress, caps)
    }

    def "some test"() {
        expect:
        driver.findElement(By.cssSelector("p.someParagraph")).text == 'Some text'
    }

    // other tests go here ...
}
The point here is that my specification describes the behavior of some component (in most cases, web views/pages). So the methods are expected to implement some business-related logic (something like 'click on a button and expect a message in another field'). But another thing I would like to test is that the behavior is exactly the same in all browsers (capabilities).
To achieve this in an 'ideal' world, I'd like a mechanism to specify that a particular test class should be run several times, but with different parameters. For now, I only see the ability to apply data sets to a single method.
I have come up with only a few ideas to implement this (given my current knowledge of the Spock framework):
Use a list of drivers and execute each action over all list members. So each call to 'driver' would be replaced with a 'drivers.each { it }' invocation. This approach, on the other hand, makes it hard to discover exactly which of the drivers failed the test.
Use method parameters and data sets to initiate a fresh copy of the web driver on each iteration. This approach seems more logical according to the Spock philosophy, but it requires the heavy operation of driver and web application initialization to be performed every time. It also removes the ability to perform 'step-by-step' testing, since the state of the driver won't be preserved between test methods.
A combination of these approaches, where drivers are kept in a map and each test invocation names the exact driver to be used.
I'd appreciate it if anybody who has met this case could suggest how to properly organize the testing process, point out other approaches, or weigh the pros and cons of those above. Considering another test tool would also be an option.

You could create an abstract BaseSeleniumSpec which contains all of your feature methods but does not set up the driver (declare the driver there as a @Shared field so that setupSpec() can assign it). Then create a sub-spec for each browser you want to test, e.g.:
class FirefoxSeleniumSpec extends BaseSeleniumSpec {
    def setupSpec() {
        driver = new FirefoxDriver(...)
    }
}
And then you can run all of the sub-specs to test all of the browsers.

Related

xUnit for testing a DAL in .NET Core 2 and DI - a little bit of confusion

I have a little bit of confusion about using xUnit to test my DAL.
My goal is to verify that my DAL correctly accesses the DB and extracts the right data.
I created an xUnit test project and tried to do a simple test with Moq, like this:
[Fact]
public void Test1()
{
    // Arrange
    var mockMyClass = new Mock<IMyClassBLL>();

    // Set up the mock to return some fake data from our target method
    mockMyClass.Setup(ac => ac.GetBy(It.IsAny<MyClassVO>())).Returns(new List<MyClassVO>
    {
        new MyClassVO { HCO_ID = "1" },
        new MyClassVO { HCO_ID = "2" },
        new MyClassVO { HCO_ID = "3" },
        new MyClassVO { HCO_ID = "4" }
    });

    // Create our MyTest instance by injecting our mock
    var MyTest = new MyClassBLL(mockMyClass.Object);

    // ACT - call our method under test
    var result = MyTest.GetBy();

    // ASSERT - our fake data has 4 items, and we should get them all back from the method
    Assert.True(result.Count == 4);
}
The method above works fine.
Now I want to access the DB directly to get data.
Obviously something escapes me: I do not understand how to perform a data test with .NET Core 2, simulating dependency injection and accessing the data.
Can someone clarify my ideas?
Are you looking for a unit test or an integration test? They're fundamentally different things and serve different purposes.
If your goal is to ensure that GetBy (the unit of functionality under test) does what it's supposed to do, then you should not be using live data. A real connection with real data would introduce variables, causing the test to potentially fail when there's actually nothing wrong with GetBy. For a true unit test, you should only use mocks and test data.
If your goal is to ensure that your application can connect to your database and actually draw data out of it, then that's an integration test. You might potentially use GetBy/your repository in the test, but generally you'd want to avoid that. Again, connecting and querying directly via something like ADO.NET serves to remove variables, so if the test fails, you'll know it was because there actually was a problem connecting or querying, rather than just some issue with your repository or a particular method of it.
Long and short, a good test tests just one thing. If that particular thing requires external components (such as a SQL Server database), then it's an integration test, and at that point, you're testing the integration of the component. Something like a repository method should not come into play, as that would be testing two different things in one test. If you need to test GetBy then there should be no external dependencies, such as a SQL Server database.
Additionally:
I did not understand how to perform a data test with .NET Core 2 simulating dependency injection and accessing the data.
This would be an example of testing the framework, which is another no-no. You can safely assume that DI works in ASP.NET Core. It has its own test suite covering that. There is no need for you to add tests for that as well.

How to perform teardown for releasing resources after each Scenario in Serenity BDD with Cucumber

I am using Serenity with BDD and need to perform a teardown step that must be executed after completion of each Scenario. This teardown step should not be visible in the report, as it is a technical concern (such as releasing a few expensive resources) and has nothing to do with the behavior exposed through Cucumber.
I used Cucumber's @After annotation, which works as expected, but the problem is that this step is now also shown in my report, which I don't want.
Could someone please suggest a solution that allows me to perform a teardown step per scenario without it being added as a step in my Serenity report?
The current solution I have, which does not satisfy my need:
My step definition class has the following method:
@After
public void tearDown() {
    systemAction.deleteCostlyResource(id);
}
but the @After annotation makes it a candidate for a reporting step.
If you are using dependency injection, you could have your DI framework tear down the resources at the end of each scenario.
For instance, if you are using Spring:
If the "costly resource" is a class that you yourself have created, mark it with:
#Component
#Scope("cucumber-glue")
If the "costly resource" is not a class you created, but provided by a framework or whatever, you can register it as a bean in your spring (test)configuration and mark it with a "destroy method".
For example, to register Selenium WebDriver using annotation based configuration and making sure to quit after each Scenario, mark it with:
#Bean(destroyMethod = "quit")
In this example, quit() is WebDriver's method to quit(). In your situation, call "costly resource's" quit method, or equivalent thereof.
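For illustration, here is a minimal sketch of such a Spring test configuration; the class name and the choice of FirefoxDriver are illustrative, and it assumes cucumber-spring is on the classpath so that the "cucumber-glue" scope is registered:
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;

@Configuration
public class ScenarioScopedConfig {

    // One WebDriver per scenario; quit() is invoked automatically when the
    // cucumber-glue scope is destroyed at the end of each scenario, so no
    // explicit @After step shows up in the Serenity report.
    @Bean(destroyMethod = "quit")
    @Scope("cucumber-glue")
    public WebDriver webDriver() {
        return new FirefoxDriver();
    }
}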

What are fixtures in programming?

I have heard of this term many times (in the context of programming) but couldn't find any explanation of what it meant. Any good articles or explanations?
I think you're referring to test fixtures:
The purpose of a test fixture is to ensure that there is a well known
and fixed environment in which tests are run so that results are
repeatable. Some people call this the test context.
Examples of fixtures:
Loading a database with a specific, known set of data
Erasing a hard disk and installing a known clean operating system installation
Copying a specific known set of files
Preparation of input data and set-up/creation of fake or mock objects
(source: wikipedia, see link above)
Here are also some practical examples from the documentation of the 'Google Test' framework.
The term fixture varies based on context, programming language, or framework.
1. A known state against which a test is running
One of the most time-consuming parts of writing tests is writing the
code to set the world up in a known state and then return it to its
original state when the test is complete. This known state is called
the fixture of the test.
PHP-Unit documentation
A test fixture (also known as a test context) is the set of
preconditions or state needed to run a test. The developer should set
up a known good state before the tests, and return to the original
state after the tests.
Wikipedia (xUnit)
2. A file containing sample data
Fixtures is a fancy word for sample data. Fixtures allow you to
populate your testing database with predefined data before your tests
run. Fixtures are database independent and written in YAML. There is
one file per model.
RubyOnRails.org
3. A process that sets up a required state. 
A software test fixture sets up the system for the testing process by
providing it with all the necessary code to initialize it, thereby
satisfying whatever preconditions there may be. An example could be
loading up a database with known parameters from a customer site
before running your test.
Wikipedia
I think the PHPUnit documentation explains this very well:
One of the most time-consuming parts of writing tests is writing the
code to set the world up in a known state and then return it to its
original state when the test is complete. This known state is called
the fixture of the test.
The Yii documentation also describes test fixtures nicely:
Automated tests need to be executed many times. To ensure the testing
process is repeatable, we would like to run the tests in some known
state called fixture. For example, to test the post creation feature
in a blog application, each time when we run the tests, the tables
storing relevant data about posts (e.g. the Post table, the Comment
table) should be restored to some fixed state.
Here's a simple example of a test with fixtures:
<?php
use PHPUnit\Framework\TestCase;

class StackTest extends TestCase
{
    protected $stack;

    protected function setUp()
    {
        $this->stack = [];
    }

    protected function tearDown()
    {
        $this->stack = [];
    }

    public function testEmpty()
    {
        $this->assertTrue(empty($this->stack));
    }

    public function testPush()
    {
        array_push($this->stack, 'foo');
        $this->assertEquals('foo', $this->stack[count($this->stack)-1]);
        $this->assertFalse(empty($this->stack));
    }

    public function testPop()
    {
        array_push($this->stack, 'foo');
        $this->assertEquals('foo', array_pop($this->stack));
        $this->assertTrue(empty($this->stack));
    }
}
?>
This PHPUnit test has setUp and tearDown methods, so that before running each test you set up your data, and once finished you restore it to the initial state.
On exactly that topic, the JUnit documentation has a well-explained section.
The related portion of the article is:
Tests need to run against the background of a known set of objects. This set of objects is called a test fixture. When you are writing tests you will often find that you spend more time writing the code to set up the fixture than you do in actually testing values.
To some extent, you can make writing the fixture code easier by paying careful attention to the constructors you write. However, a much bigger savings comes from sharing fixture code. Often, you will be able to use the same fixture for several different tests. Each case will send slightly different messages or parameters to the fixture and will check for different results.
When you have a common fixture, here is what you do:
Add a field for each part of the fixture
Annotate a method with @org.junit.Before and initialize the variables in that method
Annotate a method with @org.junit.After to release any permanent resources you allocated in setUp
For example, to write several test cases that want to work with different combinations of 12 Swiss Francs, 14 Swiss Francs, and 28 US Dollars, first create a fixture:
public class MoneyTest {
    private Money f12CHF;
    private Money f14CHF;
    private Money f28USD;

    @Before public void setUp() {
        f12CHF = new Money(12, "CHF");
        f14CHF = new Money(14, "CHF");
        f28USD = new Money(28, "USD");
    }
}
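For illustration, a test method using that fixture could look like the sketch below, added inside MoneyTest; the Money class and its add() method are assumed from the JUnit documentation's example (with org.junit.Test and org.junit.Assert.assertEquals imported):
@Test
public void simpleAdd() {
    // Each test method starts from the fresh fixture created in setUp()
    Money expected = new Money(26, "CHF");
    Money result = f12CHF.add(f14CHF);
    assertEquals(expected, result);
}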
In Xamarin.UITest it is explained as following:
Typically, each Xamarin.UITest is written as a method that is referred
to as a test. The class which contains the test is known as a test
fixture. The test fixture contains either a single test or a logical
grouping of tests and is responsible for any setup to make the test
run and any cleanup that needs to be performed when the test finishes.
Each test should follow the Arrange-Act-Assert pattern:
Arrange – The test will set up conditions and initialize things so that the test can be actioned.
Act – The test will interact with the application: entering text, pushing buttons, and so on.
Assert – The test examines the results of the actions performed in the Act step to determine correctness. For example, the test may verify that a particular error message is displayed.
(The excerpt above is from the original Xamarin.UITest article.)
And in Xamarin.UITest code it looks like the following:
using System;
using System.IO;
using System.Linq;
using NUnit.Framework;
using Xamarin.UITest;
using Xamarin.UITest.Queries;

namespace xamarin_stembureau_poc_tests
{
    [TestFixture(Platform.Android)]
    [TestFixture(Platform.iOS)]
    public class TestLaunchScreen
    {
        IApp app;
        Platform platform;

        public TestLaunchScreen(Platform platform)
        {
            this.platform = platform;
        }

        [SetUp]
        public void BeforeEachTest()
        {
            app = AppInitializer.StartApp(platform);
        }

        [Test]
        public void AppLaunches()
        {
            app.Screenshot("First screen.");
        }

        [Test]
        public void LaunchScreenAnimationWorks()
        {
            app.Screenshot("Launch screen animation works.");
        }
    }
}
Hope this is helpful to someone in search of a better understanding of fixtures in programming.
I'm writing this answer as a quick note for myself on what a "fixture" is.
Test Fixtures: Using the Same Data Configuration for Multiple Tests
If you find yourself writing two or more tests that operate on similar data, you can use a test fixture. This allows you to reuse the same configuration of objects for several different tests.
You can read more in the googletest documentation.
Fixtures can also be used during integration tests or during development (say, UI development where data comes from a development database).
Fake users for the database or for testing:
myproject/fixtures/my_fake_user.json
[
    {
        "model": "myapp.person",
        "pk": 1,
        "fields": {
            "first_name": "John",
            "last_name": "Lennon"
        }
    },
    {
        "model": "myapp.person",
        "pk": 2,
        "fields": {
            "first_name": "Paul",
            "last_name": "McCartney"
        }
    }
]
You can read more in the Django docs.

Using Selenium WebDriver to run tests on multiple browsers

I'm trying to run the same test across multiple browsers through a for loop, but it always runs only on Firefox.
bros = ['FIREFOX', 'CHROME', 'INTERNET EXPLORER']
for bro in bros:
    print "Running " + bro + "\n"
    browser = webdriver.Remote(
        command_executor='http://10.236.194.218:4444/wd/hub',
        desired_capabilities={'browserName': bro,
                              'javascriptEnabled': True})
    browser.implicitly_wait(60000)
    browser.get("http://10.236.194.156")
One interesting observation: when I include the parameter platform: WINDOWS, it runs only on Internet Explorer.
Does Selenium WebDriver work this way, or is my understanding wrong?
I have actually done this in Java; the following works well for me:
...
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
...
DesiredCapabilities[] browsers = {DesiredCapabilities.firefox(), DesiredCapabilities.chrome(), DesiredCapabilities.internetExplorer()};
for (DesiredCapabilities browser : browsers)
{
    try {
        System.out.println("Testing in Browser: " + browser.getBrowserName());
        driver = new RemoteWebDriver(new URL("http://127.0.0.1:4444/wd/hub"), browser);
        ...
You will of course need to adapt this if you're writing your tests in a different language; I know it's possible in Java, but I'm not sure about other languages.
Also, I agree with what you're trying to do. I think it is much better to have a class that runs the same tests with different browsers than to duplicate code many times over. If you are doing this in Java or another language, I also highly suggest using a Page Object (a brief sketch follows below).
Good luck!
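As a rough sketch of the Page Object idea, assuming a hypothetical login page (the class name, URL path, and locators are purely illustrative):
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // The page object hides locators so tests express intent, not selectors
    public void open(String baseUrl) {
        driver.get(baseUrl + "/login");
    }

    public void logInAs(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login-button")).click();
    }
}
The same page object can then be reused by every browser-specific test class, so only the driver construction differs between them.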
So if I understood you correctly, you have one test case and want it to be tested against different browsers.
I don't think a loop is a good idea, even if it is possible (I don't know at the moment).
The idea is to be able to run every test case standalone against a specific browser (that's the JUnit philosophy), not to run them all in order to get to that specific browser.
So you need to create a WebDriver for the specific browser and the specific test case.
I suggest you separate test cases by creating a test-case class file for each browser.
Like: FirefoxTestOne.java, IeTestOne.java, ChromeTestOne.java.
Note that you can add multiple Firefox tests in FirefoxTestOne without problems. There's no guarantee that they will be executed in a particular order, though (JUnit philosophy).
For links and tutorials, ask Google; there are already lots of examples written.
You will have to generate multiple test classes (or WebDriver instances) for the chosen browsers.
A WebDriver instance is tied to one browser.
As Coretek said, you need multiple WebDriver instances. You will need to run the selenium-server .jar file and provide each one with an argument specifying the browser you want that instance of the server to run.
The argument for Internet Explorer is *iexplore, the argument for firefox is *firefox and the argument for chrome is *chrome. These are -forcedBrowserMode arguments. Otherwise selenium won't know what it should be running against. You may need to use *iexploreProxy for your tests, sometimes it works better than the *iexplore mode.
Check out this link for more arguments that may be useful:
http://seleniumforum.forumotion.net/t89-selenium-server-command-options-while-starting-server
The approach at the following URL worked for me.
http://blog.varunin.com/2011/07/running-selenium-tests-on-different.html
The following part differs from the example in that post:
@Parameters
public static List<Object[]> data() {
    return Arrays.asList(new Object[][]{{"firefox"}, {"ie"}});
}

@Before
public void setUp() throws Exception {
    System.out.println("browser: " + browser);
    if (browser.equalsIgnoreCase("ie")) {
        System.setProperty("webdriver.ie.driver", "IEDriverServer64.exe");
        driver = new InternetExplorerDriver();
    } else if (browser.equalsIgnoreCase("firefox")) {
        driver = new FirefoxDriver();
    }
}
You can use TestNG for this.
The combination of Selenium + TestNG gives you a better result here:
just by adding the @Parameters annotation you can do this, as sketched below.
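A minimal sketch of that approach, assuming a TestNG suite file supplies a "browser" parameter (the grid URL and application URL are taken from the question; the class and parameter names are illustrative):
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;

public class CrossBrowserTest {
    private WebDriver driver;

    @Parameters("browser")
    @BeforeMethod
    public void setUp(String browser) throws Exception {
        // "browser" comes from a <parameter name="browser" value="..."/> entry
        // in testng.xml, so the same test runs once per configured browser.
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setBrowserName(browser);
        driver = new RemoteWebDriver(new URL("http://10.236.194.218:4444/wd/hub"), caps);
    }

    @Test
    public void pageLoads() {
        driver.get("http://10.236.194.156");
    }

    @AfterMethod
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }
}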
Have you considered using the composite design pattern to create a CompositeWebDriver that actually runs multiple component WebDrivers (such as Chrome, Gecko, ...)? To this end, you would implement the WebDriver interface with a new class (e.g. CompositeWebDriver) that just delegates its calls to all of the actual WebDrivers.
This could also be done with various instances of RemoteWebDriver as components.
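A very rough sketch of that idea (not a complete WebDriver implementation; the class name and the forwarded methods are only illustrative):
import java.util.List;
import java.util.function.Consumer;
import org.openqa.selenium.WebDriver;

public class CompositeWebDriver {
    private final List<WebDriver> drivers;

    public CompositeWebDriver(List<WebDriver> drivers) {
        this.drivers = drivers;
    }

    // Forward a call to every component driver (Chrome, Firefox, ...)
    private void forEach(Consumer<WebDriver> action) {
        drivers.forEach(action);
    }

    public void get(String url) {
        forEach(d -> d.get(url));
    }

    public void quit() {
        forEach(WebDriver::quit);
    }
}
A full version would have to implement every WebDriver method (findElement, navigate, and so on) and decide how to combine their return values, which is where this approach becomes awkward.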

Code Contracts and Auto Generated Files

When I enabled code contracts on my WPF control project I ran into a problem with an auto generated file which was created at compile time (XamlNamespace.GeneratedInternalTypeHelper). Note, the generated file is called GeneratedInternalTypeHelper.g.cs and is not the same as the GeneratedInternalTypeHelper.g.i.cs which there are several obsolete blog posts about.
I'm not exactly sure what its purpose is, but I am assuming it is important for some internal reflection to resolve XAML. The problem is that it does not have code contracts, nor is the code contract system smart enough to recognize it as an auto generated file. This leads to a bunch of errors from the static checker.
I tried searching for a solution to this problem, but it seems like nobody is developing WPF controls and using code contracts. I did come across an interesting attribute, ContractVerificationAttribute, which takes a boolean value to set whether the assembly or class is to be verified. This allows you to decorate a class as not verified. Sadly, the GeneratedInternalTypeHelper is regenerated with every compile, so it is not possible to exclude just this one class. The inverse scenario is possible, though: decorate the assembly as not verified and then opt in for every class.
To mitigate this obvious hack, I wanted to create a test like the following that would at least verify that my own exposed classes have code contract verification enabled:
[Fact]
public void AllAssemblyTypesAreDecoratedWithContractVerificationTrue()
{
    var assembly = typeof(someType).Assembly;
    var exposedTypes = assembly.GetTypes().Where(t => !string.IsNullOrWhiteSpace(t.Namespace) && t.Namespace.StartsWith("MyNamespace") && !t.Name.StartsWith("<>"));
    var areAnyNotContractVerified = exposedTypes.Any(t =>
    {
        // A type counts as "not verified" unless it carries ContractVerification(true)
        var verificationAttribute = t.GetCustomAttributes(typeof(ContractVerificationAttribute), true).OfType<ContractVerificationAttribute>();
        return !(verificationAttribute.Any() && verificationAttribute.First().Value);
    });
    Assert.False(areAnyNotContractVerified);
}
As you can see, it takes all classes in the controls assembly and finds the ones from the company namespace which are not auto-generated anonymous types (<>WeirdClassName).
(I also need to exclude Resources and settings, but I hope you get the idea).
I'm not loving the solution since there are ways of avoiding contract verification, but currently it's the best I can come up with. If anyone has a better solution, please let me know.
So you can treat this class exactly like you would treat any other "3rd party" class or library. I'm sure certain assumptions will hold when interacting with this generated class, so at the interaction points, decorate your own code with Contract.Assume(result != null) or similar.
var result = new GennedClass().GetSomeValue();
Contract.Assume(result != null);
What this does is translate into an assertion that is checked at run time, but it allows the static analyzer to reason about the rest of the code that you do control.
