I have a folder structure as below:
/test
----test_abc.py
----test_bcd.py
----test_cde.py
----conftest.py
conftest.py contains all the Spark-initiation items, and each pytest file uses these fixtures.
To execute all the pytest files, I am writing a shell script that calls each test as shown below. Is this the correct way? I think that if I call these files individually as below, a new Spark session is initiated on every invocation. Is my assumption right? Can I use the same Spark session for all pytests?
bashscript.sh
pytest ./tests/test_abc.py --emr
pytest ./tests/test_bcd.py --emr
pytest ./tests/test_cde.py --emr
If you want to create a single pytest session but only call a few files, you can pass them in as positional arguments to pytest:
pytest ./tests/test_abc.py ./tests/test_bcd.py ./tests/test_cde.py --emr
This way, session-scoped fixtures are only created once.
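For illustration, a session-scoped Spark fixture in conftest.py might look roughly like the sketch below; the fixture name, the --emr handling, and the builder options are assumptions, not your actual conftest.
# conftest.py -- minimal sketch only
import pytest
from pyspark.sql import SparkSession

def pytest_addoption(parser):
    # hypothetical registration of the --emr flag used in the commands above
    parser.addoption("--emr", action="store_true", default=False)

@pytest.fixture(scope="session")
def spark(request):
    # built once per pytest invocation and shared by every collected test file
    builder = SparkSession.builder.appName("tests")
    # request.config.getoption("--emr") could switch builder settings here
    session = builder.getOrCreate()
    yield session
    session.stop()
With a single invocation covering all three files, every test that requests spark reuses the same session; three separate invocations build it three times.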
Here's my jupyter notebook's cell 1 (notebook is called tested.ipynb)
def func(a, b):
    return a + b
Here's the testbook testing python code (tester.py):
import testbook
@testbook.testbook('tested.ipynb', execute=True)
def test_func(tb):
    func = tb.ref("func")
    assert func(1, 2) == 0
I then run the following command from terminal:
python tester.py
It should fail the unit test. But I'm not getting any output at all. No failures, no messages. How do I make the failure appear?
That's because you still need to use pytest, or another unit testing library, to run your tests. Note under 'Features' it says:
"Works with any unit testing library - unittest, pytest or nose" -SOURCE
Testbook just makes writing the unit tests easier. See here under 'Unit testing with testbook' for an example of using testbook in your toolchain with pytest, although bear in mind a lot of the syntax doesn't match the current documentation. So instead of running python tester.py from the terminal, run the following command (assuming you've installed pytest):
pytest tester.py
One thing I note is that your import and decorator lines don't match the current documentation. Nevertheless, your code works when run with pytest tester.py. However, it may be best to adopt the current best practices illustrated in the documentation to keep your code more robust as development continues.
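For comparison, the current-documentation style would look roughly like this (same test, just the documented import and decorator form):
# tester.py -- sketch following the documented style
from testbook import testbook

@testbook('tested.ipynb', execute=True)
def test_func(tb):
    func = tb.ref("func")
    assert func(1, 2) == 0  # deliberately wrong expectation, so pytest reports the failure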
I have developed a very basic microservice using Flask framework.
A method in the application looks like this
@app.route('/add', methods=['POST'])
def add_info():
    final = []
    try:
        info_obj.append(json.loads(request.data))
        ...
    return jsonify(final)
Now I am attempting to write unit tests for this method and the other methods in this microservice, using import unittest.
What confuses me is how to write tests for these HTTP functions, which don't take regular arguments or return regular results, but instead fetch their arguments from the request data and return JSON based on it.
Is my approach correct? And if so, how can I test microservice-like functionality using the unittest module?
If you absolutely want unittest, follow Patrick's guide here. But I suggest using pytest; it's a breeze to get started. First you need a conftest.py. Then add your test files named test_....py. Then, from the directory where your conftest.py lives, run $ pytest.
Patrick has yet another pytest + Flask guide here. You can view a demo of a conftest.py in a project here, showing how to set up the db etc., and a test file here.
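As a rough sketch of what that can look like for your /add endpoint (the app import and fixture name are assumptions about your project layout):
# conftest.py -- rough sketch only
import pytest
from my_service import app  # hypothetical module exposing the Flask app

@pytest.fixture
def client():
    app.config["TESTING"] = True
    with app.test_client() as client:
        yield client

# test_add.py
def test_add_info(client):
    response = client.post("/add", json={"name": "example"})
    assert response.status_code == 200
    assert isinstance(response.get_json(), list)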
I am trying to create a junit xml output file in pytest with custom attributes.
I searched for answers and found xml_record_attribute and record_attribute.
def test(record_attribute):
    record_attribute('index', '15')
    ...
At first I thought it would fit my problem, but then I realized it needs to be specified in each test.
I tried to do it with pytest_runtest_call hook so it would add attributes in each test run without the need to explicitly add the attributes in each test. But then it turned out that you can't use fixtures in hook (which makes sense).
Any idea how can I add an attribute to the junit xml output file without duplicating the code?
EDIT:
I have another idea of having a decorator which does that.
def xml_decorator(test):
    def runner(xml_record_attribute):
        xml_record_attribute('index', '15')
        test()
    return runner
I am trying to hook it with pytest_collection_modifyitems and decorate each test, but it doesn't work.
def pytest_collection_modifyitems(session, config, items):
    for item in items:
        item.obj = xml_decorator(item.obj)
You could define a fixture that is automatically supplied to each test case:
import pytest
@pytest.fixture(autouse=True)
def record_index(record_xml_attribute):
    record_xml_attribute('index', '15')
Note that record_xml_attribute is not supported for xunit2, the default junit_family in pytest v6. To use record_xml_attribute with pytest v6, set the following in pytest.ini:
[pytest]
junit_family = xunit1
I'm using Python 3.8 and pytest for unit testing. I have tests similar to the following
def test_insert_record(az_sql_db_inst):
    """."""
    foreign_key_id = 10
I don't like hard-coding "10" in the file, and would prefer this be stored in some external configuration file where other tests can also access the value. Does pytest provide a way to do this (to load the values from a properties or yml file)? I could just open a file on my own, e.g.
def test_insert_record(az_sql_db_inst):
    """."""
    file = open('test.txt')
    for line in file:
        fields = line.strip().split()
        foreign_key_id = fields[0]
but it seems like there is something that can automatically do that for me.
If you simply want to initialize some constants, a separate config.py would do the trick:
from config import foreign_key_id
The PyYAML package would be a good fit if you prefer general-purpose text files.
However, if you want to specify expected test results, or even different results depending on the scenario, @pytest.mark.parametrize may be a good option.
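A rough sketch of the config-file idea, assuming a YAML file called test_config.yml (the file name and key are made up for illustration):
# conftest.py -- sketch only
import pytest
import yaml  # provided by the PyYAML package

@pytest.fixture(scope="session")
def test_config():
    with open("test_config.yml") as f:
        return yaml.safe_load(f)

# test_insert.py
def test_insert_record(az_sql_db_inst, test_config):
    foreign_key_id = test_config["foreign_key_id"]
    ...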
I use the pytest runner to get the output of results from my automated test frameworks (Selenium and RestAPI tests). I use the pytest-html plugin to generate a single page html result file at the end of the test. I initiate the test run with the following command (from an active VirtEnv session).
python -m pytest -v --html="api_name_test.html" --self-contained-html
(It's a little more complicated in that I use a PowerShell script to run this, provide a datetime-stamped result file name, and email the file when it's finished, but it's essentially the above command.)
When the report is generated and I open the HTML, I find that all the non-passing tests are expanded. I want all rows to be collapsed by default (Failed, XFailed, Error, etc.).
My project contains a conftest.py file at the directory root and a pytest.ini file where I specify the directory for the test scripts.
In the conftest.py file in my simplest projects, I have one optional hook to obtain the target url of the tests and put that in the report summary:
import pytest
from py._xmlgen import html
import os
import rootdir_ref
import simplejson
@pytest.mark.optionalhook
def pytest_html_results_summary(prefix):
    theRootDir = os.path.dirname(rootdir_ref.__file__)
    credentials_path = os.path.join(theRootDir, 'TestDataFiles', 'API_Credentials.txt')
    target_url = simplejson.load(open(credentials_path)).get('base_url')
    prefix.extend([html.p("Testing against URL: " + target_url)])
The Github page mentions that a display query can be used to collapse rows with various results, but it doesn't mention where this information is entered.
https://github.com/pytest-dev/pytest-html
"By default, all rows in the Results table will be expanded except those that have Passed. This behavior can be customized with a query parameter: ?collapsed=Passed,XFailed,Skipped"
Currently I'm unsure whether the ?collapsed=... part goes on the command line, or in conftest.py as a hook, or whether I need to edit the default style.css or main.js that comes with the pytest-html plugin. (Also, I'm not familiar with CSS and only know a small amount of HTML.) I'm assuming it goes in the conftest.py file as a hook but don't really understand how to apply it.
https://pytest-html.readthedocs.io/en/latest/user_guide.html#display-options
Auto Collapsing Table Rows
By default, all rows in the Results table will be expanded except those that have Passed.
This behavior can be customized either with a query parameter: ?collapsed=Passed,XFailed,Skipped or by setting the render_collapsed in a configuration file (pytest.ini, setup.cfg, etc).
[pytest]
render_collapsed = True
NOTE: Setting render_collapsed will, unlike the query parameter, affect all statuses.
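To be explicit about where the query parameter goes: it is appended to the report URL when you open it in the browser, not to the pytest command. For example, with a hypothetical local path:
file:///C:/results/api_name_test.html?collapsed=Failed,Error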