pytest & pytest-benchmark: serial or parallel?

In a single file, test.py, I have 3 test functions: test1(), test2(), test3(). Do pytest and pytest-benchmark run these 3 test cases in parallel or in serial?
Now suppose I have 3 files: test1.py, test2.py, test3.py, each containing a single test function: test1(), test2(), test3() respectively. Are these 3 tests run in parallel or in serial if I simply run pytest or pytest-benchmark in the directory they are in?

According to responses to the crosspost on Reddit and the comments above,
pytest and pytest-benchmark always run tests serially.

pytest and pytest-benchmark always run in serial.
You can run the files from a terminal in the main folder; this should help.
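A quick, hedged way to confirm the default serial behaviour (the three-test file below is an illustration, not from the original post): three one-second tests take roughly three seconds under plain pytest, because parallel execution requires an extra plugin such as pytest-xdist.
# test.py - minimal sketch: three 1-second tests to observe execution order.
# Running `pytest test.py` completes in ~3 s, which confirms the tests run serially.
import time

def test1():
    time.sleep(1)

def test2():
    time.sleep(1)

def test3():
    time.sleep(1)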

Related

How to get immediate output from a job run within gitlab-runner?

The command gitlab-runner lets you "test" a gitlab job locally. However, the local run of a job seems to have the same problem as a gitlab job run in gitlab CI: The output is not immediate!
What I mean: Even if your code/test/whatever produces printed output, it is not shown immediately in your log or console.
Here is how you can reproduce this behavior (on Linux):
Create a new git repository
mkdir testrepo
cd testrepo
git init
Create file .gitlab-ci.yml with the following content
job_test:
  image: python:3.8-buster
  script:
    - python tester.py
Create a file tester.py with the following content:
import time

for index in range(10):
    print(f"{time.time()} test output")
    time.sleep(1)
Run this code locally
python tester.py
which produces the output
1648130393.143866 test output
1648130394.1441162 test output
1648130395.14529 test output
1648130396.1466148 test output
1648130397.147796 test output
1648130398.148115 test output
1648130399.148294 test output
1648130400.1494567 test output
1648130401.1506176 test output
1648130402.1508648 test output
with each line appearing on the console every second.
You commit the changes
git add tester.py
git add .gitlab-ci.yml
git commit -m "just a test"
You start the job within a gitlab runner
gitlab-runner exec docker job_test
....
1648130501.9057398 test output
1648130502.9068272 test output
1648130503.9079702 test output
1648130504.9090931 test output
1648130505.910158 test output
1648130506.9112566 test output
1648130507.9120533 test output
1648130508.9131665 test output
1648130509.9142723 test output
1648130510.9154003 test output
Job succeeded
Here you get essentially the same output, but you have to wait 10 seconds and then the complete output appears at once!
What I want is to see the output as it happens, i.e. one line every second.
How can I achieve that for both the local gitlab-runner and GitLab CI?
In the source code, this is controlled mostly by the clientJobTrace's updateInterval and forceSendInterval properties.
These properties are not user-configurable. In order to change this functionality, you would have to patch the source code for the GitLab Runner and compile it yourself.
The parameters for the job trace are passed from the newJobTrace function and their defaults (where you would need to alter the source) are defined here.
Also note that the UI for GitLab may not necessarily get the trace in realtime, either. So, even if the runner has sent the trace to GitLab, the javascript responsible for updating the UI only polls for trace data every ~4 or 5 seconds.
You can poll the GitLab web backend for new log lines as fast as you like:
For a running job, use a URL like https://gitlab.example.sk/grpup/project/-/jobs/42006/trace. It sends you a JSON structure with lines of the log file, offset, size and so on. See the documentation here: https://docs.gitlab.com/ee/api/jobs.html#get-a-log-file
Side note: you can pass the undocumented "state" parameter from the response in the subsequent request to get only the new lines (if any). This is handy.
Note, though, that this does not affect the latency with which new lines travel from the actual job's runner to the GitLab web/backend. See sytech's answer to this question for that.
This answer should help when a Redis cache and the incremental logging architecture are configured and you want to get logs from a currently running job in "realtime". Polling is still needed, though.
Some notes can also be found on the forum: https://forum.gitlab.com/t/is-there-an-api-for-getting-live-log-from-running-job/73072
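A minimal polling sketch along those lines, using the documented jobs API endpoint (GET /projects/:id/jobs/:job_id/trace, linked above); the project id, token and poll interval are hypothetical placeholders:
import time
import requests

GITLAB_URL = "https://gitlab.example.sk"      # placeholder instance from the answer
PROJECT_ID = 123                              # hypothetical project id
JOB_ID = 42006                                # job id taken from the example URL
HEADERS = {"PRIVATE-TOKEN": "<your-api-token>"}

printed = 0  # number of characters already printed
while True:
    resp = requests.get(
        f"{GITLAB_URL}/api/v4/projects/{PROJECT_ID}/jobs/{JOB_ID}/trace",
        headers=HEADERS,
    )
    text = resp.text
    if len(text) > printed:
        print(text[printed:], end="", flush=True)
        printed = len(text)
    time.sleep(2)  # poll interval; new lines still only arrive as fast as the
                   # runner uploads the trace (see sytech's answer above)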

How to run Cypress tests on circle-ci orb using Cypress-tags

I am trying to run a small test collection with Cypress using the cucumber plugin and the official Circle-ci Orb.
I've been going through the documentation and I've got them running without issues locally. The script I'm using to run them locally is this one:
"test:usa": "cypress-tags run --headless -b chrome -e TAGS='#usa'"
*Note the cypress-tags command and the TAGS option.
For the CI I use the official Circle-ci Orb and have a configuration like this:
- cypress/run:
    name: e2e tests
    executor: with-chrome
    record: true
    parallel: true
    parallelism: 2
    tags: 'on-pr'
    group: 2x-chrome
    ci-build-id: '${CIRCLE_BUILD_NUM}'
    requires:
      - org/deployment
As you can see, I want to spin up 2 machines across which my feature files are divided, setting the tag to 'on-pr' and grouping the run under '2x-chrome', using the ci-build-id as well.
The thing is that the official Orb uses the cypress run command which does not filter scenarios by their tags, so it is of no use here. My options were:
Using the command parameter in the orb to call the required script as I do locally:
command: npm run test:usa
My problem with this option is that the parallel configuration does not work as expected so I discarded it.
I tried to pass the TAGS parameter as an env var within the Circle-ci executor to see if the Orb was able to see it, but it was of no use as the Orb does not use cypress-tags run but cypress run.
environment:
  TAGS: '#usa'
At this point my question is: is there a workaround for this (using cypress-tags with the Circle-ci orb), or should I opt for a different way of testing this?
Thanks in advance.

Running a pytest fixture only once when running tests in parallel

I'm having some real difficulty in running a pytest fixture within a conftest file only once when the tests are run in parallel via a shell script. Contents of the shell script are below:
#!/usr/bin/env bash
pytest -k 'mobile' --os iphone my_tests/ &
pytest -k 'mobile' --os ipad my_tests/
wait
The pytest fixture creates resources for the mobile tests to use before running the tests:
import pytest

@pytest.fixture(scope='session', autouse=True)
def create_resources():
    # Do stuff to create the resources
    yield
    # Do stuff to remove the resources
When running each on its own it works perfectly: it creates the resources, runs the tests and finally removes the resources it created. When running in parallel (with the shell script), both processes try to run the create_resources fixture at the same time.
Does anyone know of a way I can run the create_resources fixture just once? If so, is it possible for the second device to wait until the resources are created before all devices run the tests?
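One possible approach, not taken from the original post: serialize the setup across the two pytest processes with a file lock and a marker file, so only the first process actually creates the resources. The sketch below assumes the third-party filelock package and writable paths in /tmp; the names are illustrative.
import os
import pytest
from filelock import FileLock

@pytest.fixture(scope='session', autouse=True)
def create_resources():
    lock_path = "/tmp/create_resources.lock"      # shared between both processes
    ready_marker = "/tmp/create_resources.ready"  # signals that setup is done
    with FileLock(lock_path):
        if not os.path.exists(ready_marker):
            # Do stuff to create the resources (only the first process gets here)
            open(ready_marker, "w").close()
    yield
    # Removing the resources (and the marker) safely needs extra coordination,
    # e.g. only the process that created them tears them down.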

pytest test with --ignore and --junitxml generate xml with ignored tests

I am using pytest with the --ignore and --junitxml options to generate a report of the test cases that are not ignored, but when my report is generated it also takes the ignored tests into account.
I am using the following command
pytest --ignore=tests/test_a.py --junitxml=pytest_not_a.xml
I was able to resolve this using pytest.mark.httpapi. Rather than applying it to each test suite, I added a pytest_collection_modifyitems hook which puts the marker on the tests at run time.
import pytest

def pytest_collection_modifyitems(config, items):
    for item in items:
        if 'test_a.py' in str(item.fspath):
            mark = getattr(pytest.mark, "httpapi")
            item.add_marker(mark)
            item.add_marker(pytest.mark.common)
The command above then changes slightly to
py.test -v -m "not httpapi" --junitxml=pytest_not_a.xml. Now the JUnit GitLab artifact only includes the processed tests and does not count the skipped tests in the success rate calculation.

mikrotik python3 API call from celery

I use the MikroTik Python3 API to create some backup files.
When Celery executes the Python script, the process doesn't complete
and the backup file is not created. I attach a screenshot so you can see the output from the API.
Any suggestions? Please advise.
When I run the Python script directly it works fine, but when the script runs through Celery I get this output. So I examined the API code and added "time.sleep(5)" after the line with "apiros.writeSentence(inputsentence)", and it works fine. It seems that the API returns before the end of the backup process and sends the output to /dev/null.
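A minimal sketch of that fix, assuming a Celery task wraps the backup call; the broker URL and the send_backup_command() placeholder are hypothetical stand-ins for the real apiros.writeSentence(inputsentence) code:
import time
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")  # assumed broker URL

def send_backup_command():
    # Hypothetical placeholder for the real MikroTik call,
    # i.e. apiros.writeSentence(inputsentence) in the RouterOS API script.
    pass

@app.task
def create_backup():
    send_backup_command()
    time.sleep(5)  # give RouterOS time to finish writing the backup before
                   # the task returns (the workaround described above)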
