Pytest: class inheritance as a variable for setup

I have a setup class with setup and teardown functions, among other things. My test class currently inherits from this setup class so that all tests are run using those functions and have access to some variables.
Here's what my setup class looks like:
class SetupClass1:
    def setup(self):
        ...

    def teardown(self):
        ...
Here's what my test class looks like:
class MyTestClass(SetupClass1):
    def test_1(self):
        ...

    def test_2(self):
        ...
I have another setup class, SetupClass2, with its own setup and teardown functions, among other variables that are set up for tests to use. I was hoping to parametrize MyTestClass to take in both setup classes, but I haven't had much luck.
I was hoping to do something like this, but it didn't work:
@pytest.mark.parametrize('setup_class', [SetupClass1, SetupClass2])
class MyTestClass(setup_class):
    def test_1(self):
        ...

    def test_2(self):
        ...
I'm not sure how to accomplish this, or if I'm going about it the completely wrong way. There are a lot of tests I'm working with, so I was hoping to make minimal changes. I was reading into pytest fixtures but couldn't get them to work. Thanks in advance.
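For what it's worth, one route that usually needs only small changes is a parametrized fixture that wraps each test in the setup/teardown of each class in turn. A minimal sketch, assuming the setup classes need no constructor arguments (the fixture name here is illustrative):

import pytest

@pytest.fixture(params=[SetupClass1, SetupClass2])
def setup_obj(request):
    # request.param is SetupClass1 on the first run, SetupClass2 on the second
    instance = request.param()
    instance.setup()
    yield instance       # tests can reach shared variables through this object
    instance.teardown()

class MyTestClass:
    def test_1(self, setup_obj):
        ...

    def test_2(self, setup_obj):
        ...

Each test then runs once per setup class. Marking the class with @pytest.mark.usefixtures('setup_obj') instead would avoid touching each test's signature, at the cost of losing direct access to the instance.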


Same mock object has different ids in test class and testable class

I want to write unit tests for main.py. The file structure of my project puts my unit tests in the test/source/code_files folder. I also want to mock some methods in the main module (which use variables from source/config/config.py). I'm using patch for this.
ex:
from unittest.mock import patch
import main

@patch('main.config.retry_times')
def test_method(self, mock_retry_times):
    mock_retry_times().return_value = 'retry_times_mock_val'
    # calling the main class method
In the main module, retry_times is used like this:

from source.config import config

def method():
    var1 = config.retry_times['url']
    # Do other stuff
This gave me an error. I tried with a MagicMock object as well, but it didn't work, as per this solution. My imports also work fine.
But I figured out one thing: when I checked the mock IDs in both the testable module and the test class, they were different. It seems like an issue with these IDs; I think they must be the same in both places. Can anyone help me sort out this issue?
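For reference, here is a sketch of how this kind of patch is often written when the production code reads a dict off an imported module. The names follow the question, but treat the details as assumptions rather than a verified fix. Because main does `from source.config import config`, the attribute can be replaced where main looks it up:

from unittest.mock import patch
import main

# Replace the dict itself rather than configuring a callable mock,
# since the production code does config.retry_times['url'], not a call.
@patch('main.config.retry_times', {'url': 'retry_times_mock_val'})
def test_method():
    result = main.method()
    # assert on result here

When patch is given a replacement object like this, it does not inject an extra mock argument into the test, so the test function takes no mock parameter.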

Locust - How do I define multiple task sets for the same user?

Please consider the following code:
from locust import HttpUser, TaskSet, task, between

class Task1(TaskSet):
    @task
    def task1_method(self):
        pass

class Task2(TaskSet):
    @task
    def task2_method(self):
        pass

class UserBehaviour(TaskSet):
    tasks = [Task1, Task2]

class LoggedInUser(HttpUser):
    host = "http://localhost"
    wait_time = between(1, 5)
    tasks = [UserBehaviour]
When I execute the code above with just one user, Task2.task2_method never gets executed, only the method from Task1.
What can I do to make sure the code from both tasks gets executed for the same user?
I would like to do it this way because I want to separate the tasks into different files to better organize the project. If that is not possible, how can I define tasks in different files in a way that lets me have tasks for each of my application modules?
I think I got it. To solve the problem, I had to add a method at the end of each task set to stop its execution:

def stop(self):
    self.interrupt()
In addition to that, I had to change the inherited class to SequentialTaskSet so all tasks get executed in order.
This is the full code:

from locust import HttpUser, SequentialTaskSet, task, between

class Task1(SequentialTaskSet):
    @task
    def task1_method(self):
        pass

    @task
    def stop(self):
        self.interrupt()

class Task2(SequentialTaskSet):
    @task
    def task2_method(self):
        pass

    @task
    def stop(self):
        self.interrupt()

class UserBehaviour(SequentialTaskSet):
    tasks = [Task1, Task2]

class LoggedInUser(HttpUser):
    host = "http://localhost"
    wait_time = between(1, 5)
    tasks = [UserBehaviour]

Everything seems to be working fine now.
At first I thought this was a bug, but it is actually intended behavior (although I don't really understand why it was implemented that way):
One important thing to know about TaskSets is that they will never stop executing their tasks, and hand over execution back to their parent User/TaskSet, by themselves. This has to be done by the developer by calling the TaskSet.interrupt() method.
https://docs.locust.io/en/stable/writing-a-locustfile.html#interrupting-a-taskset
I would solve this issue with inheritance: define a base TaskSet or User class that has the common tasks, then subclass it, adding the user-type-specific tasks and code.
If you define a base User class, remember to set abstract = True if you don't want Locust to run that user as well.
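A minimal sketch of that inheritance approach, assuming a recent Locust version; the class and task names here are illustrative, not from the question:

from locust import HttpUser, task, between

class BaseUser(HttpUser):
    abstract = True              # Locust will not spawn this class directly
    host = "http://localhost"
    wait_time = between(1, 5)

    @task
    def common_task(self):
        pass

class ModuleOneUser(BaseUser):   # gets common_task plus its own tasks
    @task
    def module_one_task(self):
        pass

Each module of the application can then live in its own file as a subclass of BaseUser.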

Is it possible to inherit setup() and tearDown() methods?

I work with automation primarily in C# / Java and have been looking into Python for its speed.
In C#, I can write a class that implements a WebDriver instance, along with [SetUp] and [TearDown] methods. Then, every class containing test cases can inherit from this class, so I do not need to write my SetUp and TearDown for every test class I write.
The other benefit of a SetUp/TearDown fixture is that I can use the same WebDriver instance throughout all tests. SetUp creates the WebDriver instance and passes it into the test case class, which the test case can use to initialize PageObjects, perform clicks, etc. When the test is finished, the WebDriver instance gets passed back into TearDown for cleanup. This is highly efficient and easy to use.
My issue: I do not understand the Python best-practice on how to implement this functionality.
I have read through the Python unittest docs here and read up on Python multiple inheritance here, with little luck. I have also read this SO discussion here, but it is 10+ years old and contains many polarizing opinions. I did use the discussion as guidance to get started, but I do not want to blindly write code without understanding it.
I am hung up on the part about how to actually inherit setUp(), tearDown(), and my webdriver instance. I don't want to declare a new webdriver instance, and re-write setUp() and tearDown() methods for every single test class, as this seems inefficient.
Here's what I've tried:
This is SetUp / TearDown fixture which is meant to handle setup and teardown for ALL test cases, and also keep track of singleton WebDriver instance.
base_test_fixture.py
from selenium import webdriver
import unittest

class BaseTestFixture(unittest.TestCase):
    driver = None

    def setUp(self):
        print("Running SetUp")
        self.driver = webdriver.Chrome()

    def tearDown(self):
        print("Running Teardown")
        self.driver.close()
        self.driver.quit()
Here is test_webdriver.py:
import unittest
import BaseTestFixture

class TestWebDriver(BaseTestFixture.SetUpTearDownFixture):
    def test_should_start_webdriver(self):
        super.setUp()
        print("Running test 1")
        super.driver.get("https://www.google.com")
        assert "Google" in self.driver.title
        super.tearDown()

    def test_should_navigate_to_stackoverflow(self):
        super.setUp()
        print("Running test 2")
        super.driver.get("https://www.stackoverflow.com")
        assert "Stack Overflow" in self.driver.title
        super.teardown()

if __name__ == '__main__':
    unittest.main()
Here's the error my class declaration is showing: AttributeError: module 'BaseTestFixture' has no attribute 'SetUpTearDownFixture'
Is it possible to implement a single WebDriver, setUp(), and tearDown() for all Python test cases?
You are very close. The Python convention is that your module should be named with underscores, so I would rename BaseTestFixture.py to base_test_fixture.py; the class in the module would then be the CamelCase version of the module name.
That would give us base_test_fixture.py:

from selenium import webdriver
from unittest import TestCase

class BaseTestFixture(TestCase):
    ...
and test_web_driver.py:

import unittest
from base_test_fixture import BaseTestFixture

class TestWebDriver(BaseTestFixture):
    ...
If you're still having trouble, the problem may be in the directory structure of your package, so share that with us by editing your question above to indicate the structure of your directory and files.
Also, within your test, since the test class inherits self.driver, you just have to refer to it as self.driver (no super.).
Also, setUp() and tearDown() are automatically called by unittest, so you don't have to call them explicitly.
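Putting that advice together, a test might look like this; a minimal sketch assuming the renamed base_test_fixture.py above, with Selenium and Chrome installed:

import unittest
from base_test_fixture import BaseTestFixture

class TestWebDriver(BaseTestFixture):
    def test_should_start_webdriver(self):
        # setUp() has already run automatically, so the inherited driver is ready
        self.driver.get("https://www.google.com")
        self.assertIn("Google", self.driver.title)

if __name__ == '__main__':
    unittest.main()

Note there are no explicit setUp()/tearDown() calls and no super. references; unittest handles the lifecycle, and the driver is simply self.driver.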

patching boto3.resource and resource.Table method for unit testing

Here is my class:
# Production.py
import boto3

class Production(object):
    resource = boto3.resource('dynamodb', 'us-west-2')

    def __init__(self):
        self.table = Production.resource.Table('employee')
I am trying to test that resource.Table is called with the argument 'employee'. I wrote a unit test for it:

from unittest.mock import patch, MagicMock

def test_init():
    with patch('Production.boto3.resource') as resource:
        mock_resource = MagicMock()
        resource.return_value = mock_resource
        pd = Production()
        resource.assert_called_with('dynamodb', 'us-west-2')
        table = resource.return_value.Table.return_value
        table.assert_called_with('employee')

test_init()
But it doesn't seem to work... Can some one help me how to test this?
When you patch an object it mocks all of its methods for you. So (I didn't test the code but) I think just:
def test_resource_is_called_with_correct_params():
    with patch('Production.boto3') as mock_boto:
        Production()
        mock_boto.resource.assert_called_once_with('dynamodb', 'us-west-2')
will do the first part of your test. I would then test the init behavior separately in another test, which is clearer and simpler (generally, aim to test one thing per test):
def test_table_is_called_with_correct_params():
    with patch('Production.boto3') as mock_boto:
        Production()
        mock_resource = mock_boto.resource.return_value
        mock_resource.Table.assert_called_once_with('employee')
I would say a couple of things about this, though:
It's nice to group your tests into a class, which keeps them organised. Also, when you subclass TestCase you get a bunch of methods that come with it, such as self.assertDictEqual, that provide good, meaningful output and work well with test runners like nose2. So do something like:
class TestProduction(unittest.TestCase):
    def test1(self):
        pass

    def test2(self):
        pass
The stuff you are testing here is basically hard-coded, so these tests are not really meaningful; you are just testing that the language works. I would learn to test behaviour rather than implementation. So, what do you want your class to do? Sit down and think about it before you write the class. Then you can write the specs out and use them to design your tests before you even start coding.
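As a sketch of that grouping (my own illustration, not part of the answer above): because resource is a class attribute, it is created once at import time, so this sketch patches the already-bound attribute with patch.object rather than patching boto3; treat that switch as an assumption about what the production code needs.

import unittest
from unittest.mock import patch, MagicMock

from Production import Production

class TestProduction(unittest.TestCase):
    def test_table_is_requested_for_employee(self):
        # Replace the class attribute that __init__ actually reads
        with patch.object(Production, 'resource', MagicMock()) as mock_resource:
            Production()
            mock_resource.Table.assert_called_once_with('employee')

if __name__ == '__main__':
    unittest.main()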

best practices for passing initialization arguments to superclasses?

I'm trying to figure out the best way to initialize sub/superclasses in Python 3. Both the base and subclasses will take half a dozen parameters, all of which will be parsed from command-line arguments.
The obvious way to implement this is to parse all the args at once, and pass them all in:
class Base:
    def __init__(self, base_arg1, base_arg2, base_arg3):
        ...

class Sub(Base):
    def __init__(self, sub_arg1, sub_arg2, sub_arg3,
                 base_arg1, base_arg2, base_arg3):
        super().__init__(base_arg1, base_arg2, base_arg3)
        ...

def main():
    # parse args here
    options = parser.parse_args()
    obj = Sub(options.sub_arg1, options.sub_arg2, options.sub_arg3,
              options.base_arg1, options.base_arg2, options.base_arg3)
If I have a sub-subclass (which I will), things get really hairy in terms of the list of arguments passed up through successive super().__init__() calls.
But it occurs to me that argparse.parse_known_args() offers another path: I could have each subclass parse out the arguments it needs/recognizes and pass the rest of the arguments up the hierarchy:
class Base:
    def __init__(self, args):
        (base_options, _) = base_parser.parse_known_args(args)
        ...

class Sub(Base):
    def __init__(self, args):
        (sub_options, other_args) = sub_parser.parse_known_args(args)
        super().__init__(other_args)
        ...

def main():
    obj = Sub(sys.argv[1:])
This seems cleaner from an API point of view. But I can imagine that it violates some tenet of The Way Things Are Done In Python and is a bad idea for all sorts of reasons. My search of the web has not turned up any examples either way - could the mighty and all-knowing mind of Stack Overflow help me understand the Right Way to do this?
Look inside the argparse.py code. An ArgumentParser is a subclass of an _ActionsContainer. All the actions are subclasses of Action.
When you call

parser.add_argument('foo', action='store', ...)

the parameters are passed, mostly as *args and **kwargs, to _StoreAction, which in turn passes them on to its superclass (after setting some defaults, etc.).
As a module that is meant to be imported, and never run as a standalone script, argparse.py does not have an if __name__ == '__main__' block. But often I'll include such a block to invoke test code. That's the place to put the command-line parser, or at least to invoke it. It might be defined in a function in the body, but it normally shouldn't be called when the module is imported.
In general, argparse is a scripting tool and shouldn't be part of class definitions, unless you are subclassing ArgumentParser to add new functionality.
You might also want to look at https://pypi.python.org/pypi/plac. This package provides a different interface to argparse, and is a good example of subclassing this parser.
Thanks hpaulj! I think your response helped me figure out an even simpler way to go about it: I can parse all the options at the top level, then just pass the options namespace in and let each subclass pull out the ones it needs. Kind of face-palm simple compared to the other approaches:
class Base:
    def __init__(self, options):
        base_arg1 = options.base_arg1
        base_arg2 = options.base_arg2

class Sub(Base):
    def __init__(self, options):
        super().__init__(options)  # initialize base class
        sub_arg1 = options.sub_arg1
        sub_arg2 = options.sub_arg2

def main():
    options = parser.parse_args()
    obj = Sub(options)
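For completeness, a common alternative worth knowing (my addition, not from the thread) is cooperative **kwargs forwarding, which also avoids repeating the full argument list at each level of the hierarchy. A minimal sketch:

class Base:
    def __init__(self, base_arg1=None, base_arg2=None, **kwargs):
        super().__init__(**kwargs)  # keep the chain cooperative
        self.base_arg1 = base_arg1
        self.base_arg2 = base_arg2

class Sub(Base):
    def __init__(self, sub_arg1=None, **kwargs):
        super().__init__(**kwargs)  # unrecognized args flow upward
        self.sub_arg1 = sub_arg1

obj = Sub(sub_arg1=1, base_arg1=2, base_arg2=3)

Each class consumes only the keyword arguments it knows about and forwards the rest, which scales to deeper hierarchies without growing the signatures.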
