I'm having real difficulty getting a pytest fixture in a conftest.py file to run only once when the tests are run in parallel via a shell script. The contents of the shell script are below:
#!/usr/bin/env bash
pytest -k 'mobile' --os iphone my_tests/ &
pytest -k 'mobile' --os ipad my_tests/
wait
The pytest fixture creates resources for the mobile tests to use before running them:
@pytest.fixture(scope='session', autouse=True)
def create_resources():
    # Do stuff to create the resources
    yield
    # Do stuff to remove the resources
When running each command on its own, it works perfectly: it creates the resources, runs the tests, and finally removes the resources it created. When running in parallel via the shell script, both processes try to run the create_resources fixture at the same time.
Does anyone know of a way to run the create_resources fixture just once? If so, is it possible for the second device's process to wait until the resources are created before its tests run?
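One pattern that can address this (a sketch, not from the original post) is to serialize the setup with an inter-process file lock plus a marker file, so whichever pytest process arrives first creates the resources and the other blocks until they exist. This assumes the third-party filelock package and hypothetical paths under /tmp:

# conftest.py -- a minimal sketch assuming the `filelock` package
# and that both pytest processes run on the same machine.
import os

import pytest
from filelock import FileLock

LOCK_FILE = "/tmp/mobile_resources.lock"    # hypothetical path
READY_FILE = "/tmp/mobile_resources.ready"  # hypothetical path


@pytest.fixture(scope='session', autouse=True)
def create_resources():
    with FileLock(LOCK_FILE):
        # Only the first process finds no marker file and creates the
        # resources; the second blocks on the lock until setup is done.
        if not os.path.exists(READY_FILE):
            ...  # Do stuff to create the resources
            open(READY_FILE, "w").close()
    yield
    # Teardown is trickier: neither process knows whether the other is
    # still running, so it is safer to remove the resources (and the
    # marker file) in the shell script, after `wait` returns.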
I need to run a run_test.py script in CI:
import os
os.system("test1.py")
os.system("test2.py")
test1.py and test2.py are both unit-test scripts.
Even if there is, for example, an ERROR in test1.py, run_test.py still finishes successfully and the CI pipeline passes, though I need it to fail.
Is there any way to catch the test1.py unit-test errors so that the CI pipeline fails?
PS: running test1.py and test2.py independently in the CI file doesn't work; they only work via the run_test.py script.
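For what it's worth, os.system does return the command's exit status, but the wrapper above ignores it, so run_test.py always exits 0 and CI reports success. A minimal sketch of a wrapper that propagates failures, using subprocess.run (script names taken from the question):

import subprocess
import sys

# Run each test script with the current interpreter and remember the
# first non-zero exit code, so one failing script fails the whole run.
exit_code = 0
for script in ("test1.py", "test2.py"):
    result = subprocess.run([sys.executable, script])
    exit_code = exit_code or result.returncode

# Exiting non-zero is what makes the CI pipeline fail.
sys.exit(exit_code)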
I am trying to run a small test collection with Cypress, using the cucumber plugin and the official CircleCI orb.
I've been going through the documentation and I've got them running without issues locally. The script I'm using to run them locally is this one:
"test:usa": "cypress-tags run --headless -b chrome -e TAGS='#usa'"
*Note the cypress-tags command and the TAGS option.
For CI I use the official CircleCI orb, with a configuration like this:
- cypress/run:
    name: e2e tests
    executor: with-chrome
    record: true
    parallel: true
    parallelism: 2
    tags: 'on-pr'
    group: 2x-chrome
    ci-build-id: '${CIRCLE_BUILD_NUM}'
    requires:
      - org/deployment
As you can see, I want to spin up 2 machines across which my feature files are divided, setting the tag to 'on-pr' and grouping the run under '2x-chrome', using the ci-build-id as well.
The thing is that the official orb uses the cypress run command, which does not filter scenarios by their tags, so it is of no use here. My options were:
Using the command parameter in the orb to call the required script as I do locally:
command: npm run test:usa
My problem with this option is that the parallel configuration does not work as expected, so I discarded it.
I tried to pass the TAGS parameter as an env var within the CircleCI executor to see if the orb would pick it up, but it was of no use, as the orb runs cypress run rather than cypress-tags run:
environment:
  TAGS: '#usa'
At this point my question is: is there a workaround for this (using cypress-tags with the CircleCI orb), or should I opt for a different way of testing this?
Thanks in advance
I am using pytest with the --ignore and --junitxml options to generate a report of the test cases that are not ignored, but when my report is generated, it still takes the ignored tests into account.
I am using the following command:
pytest --ignore=tests/test_a.py --junitxml=pytest_not_a.xml
I was able to resolve this using pytest.mark.httpapi. Rather than applying it to each test suite, I added a pytest_collection_modifyitems hook, which puts the marker on the tests at collection time:
import pytest

def pytest_collection_modifyitems(config, items):
    # Runs at collection time; tags every test collected from test_a.py.
    for item in items:
        if 'test_a.py' in str(item.fspath):
            item.add_marker(pytest.mark.httpapi)
            item.add_marker(pytest.mark.common)
The above command then changes slightly to:
py.test -v -m "not httpapi" --junitxml=pytest_not_a.xml
Now the JUnit artifacts in GitLab only include the tests that actually ran, and the deselected tests are not counted in the success-rate calculation.
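One small addition worth making to this approach: recent pytest versions warn about unregistered markers, so it can help to register httpapi and common in conftest.py as well (the marker descriptions below are illustrative):

def pytest_configure(config):
    # Register the custom markers so pytest does not emit
    # PytestUnknownMarkWarning for them.
    config.addinivalue_line("markers", "httpapi: tests that exercise the HTTP API")
    config.addinivalue_line("markers", "common: tests shared by all suites")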
In a single file, test.py, I have 3 test functions: test1(), test2(), test3(). Do pytest and pytest-benchmark run these 3 test cases in parallel or in serial?
I have 3 files: test1.py, test2.py, test3.py, with a single test function in each: test1(), test2(), test3() respectively. Are these 3 tests run in parallel or in serial if I simply run pytest or pytest-benchmark in the directory they are in?
According to responses from the crosspost on Reddit and the comments above, pytest and pytest-benchmark always run in serial.
You can run the test files from a terminal in the main folder. Hope this helps.
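An easy way to see the serial behaviour for yourself is to print the process id in each test (a sketch using the function names from the question): with plain pytest -s, every test prints the same pid, while with pytest-xdist installed, pytest -n 3 -s prints a different one per worker.

# test.py -- each test reports the process it runs in.
import os

def test1():
    print("test1 pid:", os.getpid())

def test2():
    print("test2 pid:", os.getpid())

def test3():
    print("test3 pid:", os.getpid())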
I ran into a problem when running GUI tests in parallel inside a Docker container. My stack is Selenium WebDriver + pytest + pytest-xdist + Chrome.
I use the following command to run the tests:
pytest -v -n=4 --headless=True --production=True --browser=chrome --dist=loadfile --junitxml=test.xml
But all the tests fail. If I do the same outside the Docker container, or use a single thread, everything works fine.
So, how can I resolve this problem and execute the tests in parallel inside the Docker container? Thanks a lot!
I have this in the logs:
selenium.common.exceptions.WebDriverException: Message: chrome not reachable (Session info: headless chrome=73.0.3683.86) (Driver info: chromedriver=73.0.3683.20 (8e2b610813e167eee3619ac4ce6e42e3ec622017),platform=Linux 4.15.0-46-generic x86_64)
Try using boxed processes plus the --tx flag (--tx 3*popen//python=python3.6 --boxed; note that in newer pytest-xdist releases the --boxed option comes from the pytest-forked plugin, where it is spelled --forked). So, run your tests with the command below:
pytest -v --headless=True --production=True --browser=chrome --dist=loadfile --junitxml=test.xml --tx 3*popen//python=python3.6 --boxed
More information on how you can run your tests in parallel is available in that SO answer.
Good luck!
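As a side note, the "chrome not reachable" error inside containers is often about Chrome's startup flags rather than the test runner. A minimal sketch of a Docker-friendly driver setup using Selenium's Options API (the specific flags are the commonly cited fix, not something from the original answer):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")
# Chrome's sandbox cannot start as root inside most containers.
options.add_argument("--no-sandbox")
# /dev/shm is only 64 MB by default in Docker; Chrome tends to crash
# mid-session without this flag.
options.add_argument("--disable-dev-shm-usage")

driver = webdriver.Chrome(options=options)
driver.get("https://example.com")
driver.quit()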