Running pytest with thread - python-3.x

I have a question about pytest.
I would like to run the same pytest script with multiple threads.
But I am not sure how to create and run a thread that passes more than one parameter (and how to run threads together with pytest at all).
For example, I have test_web.py:
from selenium import webdriver
import pytest

class SAMPLETEST:
    self.browser = webdriver.Chrome()
    self.browser.get(URL)
    self.browser.maximize_window()

    def test_title(self):
        assert "Project WEB" in self.browser.title

    def test_login(self):
        print('Testing Login')
        ID_BOX = self.browser.find_element_by_id("ProjectemployeeId")
        PW_BOX = self.browser.find_element_by_id("projectpassword")
        ID_BOX.send_keys(self.ID)  # the ID goes here; this value should come from thread_run.py
        PW_BOX.send_keys(self.PW)  # the password goes here; it is not working and I am not sure how to get this data from thread_run.py
        PW_BOX.submit()
In thread_run.py:
import threading
import time

from test_web import SAMPLETEST

ID_List = ["0", "1", "2", "3", "4", "5", "6", "7"]
PW_LIST = ["0", "1", "2", "3", "4", "5", "6", "7"]
threads = []

print("1: Create thread")
for I in range(8):
    print("Append thread" + str(I))
    t = threading.Thread(target=SAMPLETEST, args=(ID_List[I], PW_LIST[I]))
    threads.append(t)

for I in range(8):
    print("Start thread:" + str(I))
    threads[I].start()
I was able to use threads to run many SAMPLETEST instances without pytest.
However, it is not working with pytest.
My questions are:
First, how do I initialize self.browser inside SAMPLETEST? I am sure the code below will not work:
self.browser = webdriver.Chrome()
self.browser.get(URL)
self.browser.maximize_window()
Second, in thread_run.py, how can I pass the two arguments (ID and password) when I start a thread that calls SAMPLETEST in test_web.py?
ID_BOX.send_keys(self.ID)  # this value should come from thread_run.py
PW_BOX.send_keys(self.PW)
I tried to add a constructor (__init__) to the SAMPLETEST class, but that did not work either.
I am not really sure how to run threads that pass arguments or parameters together with pytest.

There are two scenarios I can read from this:
Prepare the test data and pass parameters into your test method, which can be achieved with pytest_generate_tests and the parametrize concept. You can refer to the documentation here.
For running pytest in multiple threads or processes, use pytest-xdist or pytest-parallel.
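For instance, here is a minimal sketch of the first option, assuming the element ids from the question and a placeholder login URL; each credential pair becomes its own parametrized test case:

import pytest
from selenium import webdriver

ID_LIST = ["0", "1", "2", "3", "4", "5", "6", "7"]
PW_LIST = ["0", "1", "2", "3", "4", "5", "6", "7"]

@pytest.fixture
def browser():
    # one fresh browser per test case; the URL is a placeholder
    driver = webdriver.Chrome()
    driver.get("https://example.com/login")
    driver.maximize_window()
    yield driver
    driver.quit()

@pytest.mark.parametrize("user_id,password", list(zip(ID_LIST, PW_LIST)))
def test_login(browser, user_id, password):
    browser.find_element_by_id("ProjectemployeeId").send_keys(user_id)
    pw_box = browser.find_element_by_id("projectpassword")
    pw_box.send_keys(password)
    pw_box.submit()

With pytest-xdist installed, running pytest -n 8 test_web.py would then spread those eight cases across parallel worker processes instead of hand-rolled threads.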

I had a similar issue and resolved it by passing the argument in the form of a list.
For example, I replaced the line below
thread_1 = Thread(target=fun1, args=10)
with
thread_1 = Thread(target=fun1, args=[10])
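As a self-contained illustration of the same point (fun1 here is just a stand-in function): args must be an iterable of positional arguments, so a one-element tuple works as well as a list:

from threading import Thread

def fun1(x):
    print("got", x)

# args must be an iterable of positional arguments, not a bare value
thread_1 = Thread(target=fun1, args=[10])   # a list works
thread_2 = Thread(target=fun1, args=(10,))  # so does a one-element tuple
thread_1.start()
thread_2.start()
thread_1.join()
thread_2.join()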

Related

How to access a variable as a parameter from within a function from another module without running the function?

I don't know how to describe it any less confusingly than in the title, so I hope I can make it clear through the code. This is not all of it, only the parts describing what I want to do and what I don't know how to do, in order not to bloat the post.
testing_lib.py has the following function in it:
def browserconfig():
    if browser == 'Chrome':
        options = webdriver.ChromeOptions()
        options.add_argument('ignore-certificate-errors')
    else:
        raise Exception
    logp("SSL ", "passed")
    # driver and page load
    driver = webdriver.Chrome(ChromeDriverManager().install(), chrome_options=options)
    driver.maximize_window()
    return driver

def wait(driver):
    WebDriverWait(driver, 180, poll_frequency=1).until(ajax_fin, "Wait timeout")
testing_base.py has a function which (among others) calls browserconfig():
import testing_lib

def start():
    testing_lib.browserconfig()
And now the last file, testing_main.py, calls the start() function from testing_base.py but also the wait(driver) function from testing_lib.py, which needs the driver variable from browserconfig() as a parameter in order to work.
import testing_lib
import testing_base
from testing_lib import wait
testing_base.start()
wait(driver)
My main question is: how do I get "driver" as a parameter into wait(driver) in testing_main.py without running browserconfig() there, since it's only supposed to run once?
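One possible sketch (not from the original thread): since browserconfig() already returns the driver, start() can simply pass it on, and testing_main.py can keep the returned value and hand it to wait():

# testing_base.py
import testing_lib

def start():
    # forward the driver that browserconfig() already returns
    return testing_lib.browserconfig()


# testing_main.py
import testing_base
from testing_lib import wait

driver = testing_base.start()   # browserconfig() runs exactly once, here
wait(driver)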

Launching parallel tasks: Subprocess output triggers function asynchronously

The example I will describe here is purely conceptual so I'm not interested in solving this actual problem.
What I need to accomplish is to asynchronously run a function based on the continuous output of a subprocess command, in this case the Windows ping yahoo.com -t command; based on the time value from the replies I want to trigger the startme function. Inside this function there will be some more processing, including some database and/or network-related calls, so basically I/O processing.
My best bet would be to use threading, but for some reason I can't get this to work as intended. Here is what I have tried so far:
First of all I tried the old way of using Threads like this:
import subprocess
import re
import asyncio
import time
import threading

def startme(mytime: int):
    print(f"Mytime {mytime} was started!")
    time.sleep(mytime)  # including more long operation functions here such as database calls and even some time.sleep() - if possible
    print(f"Mytime {mytime} finished!")

myproc = subprocess.Popen(['ping', 'yahoo.com', '-t'], shell=True, stdout=subprocess.PIPE)

def main():
    while True:
        output = myproc.stdout.readline()
        if myproc.poll() is not None:
            break
        myoutput = output.strip().decode(encoding="UTF-8")
        print(myoutput)
        mytime = re.findall("(?<=time\=)(.*)(?=ms\s)", myoutput)
        try:
            mytime = int(mytime[0])
            if mytime < 197:
                # startme(int(mytime[0]))
                p1 = threading.Thread(target=startme(mytime), daemon=True)
                # p1 = threading.Thread(target=startme(mytime))  # tried with and without the daemon
                p1.start()
                # p1.join()
        except:
            pass

main()
But right after startme() fires for the first time, the pings stop showing and wait for startme()'s time.sleep() to finish.
I did manage to get this working using concurrent.futures' ThreadPoolExecutor, but when I tried to replace the time.sleep() with the actual database query I found that my startme() function never completes, so no "Mytime xxx finished!" message is ever shown, nor is any database entry made.
import sqlite3
import subprocess
import re
import asyncio
import time
# import threading
# import multiprocessing
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import ProcessPoolExecutor

conn = sqlite3.connect('test.db')
c = conn.cursor()
c.execute(
    '''CREATE TABLE IF NOT EXISTS mytable (id INTEGER PRIMARY KEY, u1, u2, u3, u4)''')

def startme(mytime: int):
    print(f"Mytime {mytime} was started!")
    # time.sleep(mytime)  # including more long operation functions here such as database calls and even some time.sleep() - if possible
    c.execute("INSERT INTO mytable VALUES (null, ?, ?, ?, ?)", (1, 2, 3, mytime))
    conn.commit()
    print(f"Mytime {mytime} finished!")

myproc = subprocess.Popen(['ping', 'yahoo.com', '-t'], shell=True, stdout=subprocess.PIPE)

def main():
    while True:
        output = myproc.stdout.readline()
        myoutput = output.strip().decode(encoding="UTF-8")
        print(myoutput)
        mytime = re.findall("(?<=time\=)(.*)(?=ms\s)", myoutput)
        try:
            mytime = int(mytime[0])
            if mytime < 197:
                print(f"The time {mytime} is low enough to call startme()")
                executor = ThreadPoolExecutor()
                # executor = ProcessPoolExecutor()  # I did try using processes even though it's not a CPU-related issue
                executor.submit(startme, mytime)
        except:
            pass

main()
I did try using asyncio but soon realized it does not fit this case, though I'm wondering if I should try aiosqlite.
I also thought about using asyncio.create_subprocess_shell and running both as parallel subprocesses, but I can't think of a way to wait for a certain string from the ping command that would trigger the second script.
Please note that I don't really need a return value from the startme() function, and the ping command example is conceptually derived from mitmproxy's mitmdump output command.
The first code wasn't working because I made a silly mistake when creating the thread: p1 = threading.Thread(target=startme(mytime)) calls the function right away instead of passing the function and its arguments separately, like this: p1 = threading.Thread(target=startme, args=(mytime,)).
The reason I could not get the SQL insert statement to work in my second code was this error:
SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 10688 and this is thread id 17964
which I didn't see until I wrapped my SQL statement in a try/except and captured the error. So I needed to create the SQL database connection inside my startme() function.
The other asyncio stuff was just nonsense and cannot be applied to the current issue here.
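Putting both fixes together, a minimal sketch of the corrected code (table layout copied from the question; the stand-in mytime value replaces the time parsed from the ping output):

import sqlite3
import threading

def startme(mytime: int):
    print(f"Mytime {mytime} was started!")
    # open the connection inside the worker thread: SQLite objects
    # may only be used in the thread that created them
    conn = sqlite3.connect('test.db')
    c = conn.cursor()
    c.execute("INSERT INTO mytable VALUES (null, ?, ?, ?, ?)", (1, 2, 3, mytime))
    conn.commit()
    conn.close()
    print(f"Mytime {mytime} finished!")

if __name__ == "__main__":
    # make sure the table exists (same schema as in the question)
    setup = sqlite3.connect('test.db')
    setup.execute('''CREATE TABLE IF NOT EXISTS mytable (id INTEGER PRIMARY KEY, u1, u2, u3, u4)''')
    setup.commit()
    setup.close()

    mytime = 100  # stand-in value; in the real loop this comes from the ping output
    # pass the callable and its arguments separately instead of calling it here
    p1 = threading.Thread(target=startme, args=(mytime,), daemon=True)
    p1.start()
    p1.join()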

Django initialising AppConfig multiple times

I wanted to use the ready() hook in my AppConfig to start a django-rq scheduler job. However it does so multiple times, every time I start the server. I imagine that's due to threading, but I can't seem to find a suitable workaround. This is my AppConfig:
class AnalyticsConfig(AppConfig):
    name = 'analytics'

    def ready(self):
        print("Init scheduler")
        from analytics.services import save_hits
        scheduler = django_rq.get_scheduler('analytics')
        scheduler.schedule(datetime.utcnow(), save_hits, interval=5)
Now when I do runserver, Init scheduler is displayed 3 times. I've done some digging and according to this question I started the server with --noreload which didn't help (I still got Init scheduler x3). I also tried putting
import os

if os.environ.get('RUN_MAIN', None) != 'true':
    default_app_config = 'analytics.apps.AnalyticsConfig'
in my __init__.py; however, RUN_MAIN appears to be None every time.
Afterwards I created a FileLock class, to skip configuration after the first initialization, which looks like this:
class FileLock:
    def __get__(self, instance, owner):
        return os.access(f"{instance.__class__.__name__}.lock", os.F_OK)

    def __set__(self, instance, value):
        if not isinstance(value, bool):
            raise AttributeError
        if value:
            f = open(f"{instance.__class__.__name__}.lock", 'w+')
            f.close()
        else:
            os.remove(f"{instance.__class__.__name__}.lock")

    def __delete__(self, obj):
        raise AttributeError

class AnalyticsConfig(AppConfig):
    name = 'analytics'
    locked = FileLock()

    def ready(self):
        from analytics.services import save_hits
        if not self.locked:
            print("Init scheduler")
            scheduler = django_rq.get_scheduler('analytics')
            scheduler.schedule(datetime.utcnow(), save_hits, interval=5)
            self.locked = True
This does work, however the lock is not destroyed after the app quits. I tried removing the .lock files in settings.py, but that also runs multiple times, making this pointless.
My question is: how can I prevent Django from calling ready() multiple times, or how else can I tear down the .lock files after Django exits or right after it boots?
I'm using Python 3.8 and Django 3.1.5.
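One common guard, offered here only as a sketch (it relies on runserver's autoreloader setting RUN_MAIN=true in the child process that actually serves requests, so it would not fire under --noreload or a production server, and it may not remove every extra call reported in the question), is to do the RUN_MAIN check inside ready() itself rather than in __init__.py:

import os
from django.apps import AppConfig

class AnalyticsConfig(AppConfig):
    name = 'analytics'

    def ready(self):
        # under runserver only the autoreloader's child process has RUN_MAIN=true;
        # skip scheduling in every other process (e.g. the reloader parent)
        if os.environ.get('RUN_MAIN') != 'true':
            return
        from datetime import datetime
        import django_rq
        from analytics.services import save_hits
        print("Init scheduler")
        scheduler = django_rq.get_scheduler('analytics')
        scheduler.schedule(datetime.utcnow(), save_hits, interval=5)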

How to capture the iteration number dynamically inside a test while using pytest-repeat

I'm executing my selenium script multiple times using pytest-repeat. I need to capture the iteration number during execution and make use of it.
I explored pytest.mark, pytest.collect & pytest.Collector:
class Testone():
    @pytest.fixture()
    def setup(self):
        ...

    @pytest.mark.repeat(RowCount)
    def test_create_eq(self, setup):
        # Need to capture the iteration number here.
        ...
I think there should be an easier and more straightforward way than what I describe below. pytest-repeat has a fixture __pytest_repeat_step_number which I hoped could provide the current step number for the test, but it did not.
request.node.name provides the name of the test function generated by pytest_repeat and it has the step number which can be extracted for your purpose.
import pytest

class Testone():
    @pytest.fixture()
    def setup(self):
        pass

    @pytest.mark.repeat(4)
    def test_create_eq(self, setup, request):
        current_step = request.node.name.split('[')[1].split('-')[0]  # string form; parse to int if required
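As a rough illustration (the exact id format can vary between pytest-repeat versions), the generated node names embed the repetition index before the dash, which is what the split above extracts:

# hypothetical node names generated by pytest-repeat for repeat(4):
#   test_create_eq[1-4], test_create_eq[2-4], ... test_create_eq[4-4]
name = "test_create_eq[3-4]"
step = int(name.split('[')[1].split('-')[0])  # -> 3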

Implementation of QThread in PyQt5 Application

I wrote a PyQt5 GUI (Python 3.5 on Win7). While it is running, its GUI is not responsive. To still be able to use the GUI, I tried to use QThread from this answer: https://stackoverflow.com/a/6789205/5877928
The module is now designed like this:
class MeasureThread(QThread):
    def __init__(self):
        super().__init__()

    def get_data(self):
        data = auto.data

    def run(self):
        time.sleep(600)
        # measure some things
        with open(outputfile) as f:
            write(things)

class Automation(QMainWindow):
    [...]
    def measure(self):
        thread = MeasureThread()
        thread.finished.connect(app.exit)
        for line in open(inputfile):
            thread.get_data()
            thread.start()
measure() gets called once per measurement but starts the thread once per line in inputfile. The module now exits almost immediately after being started (I guess it starts all threads at once and does not sleep), but I only want it to do all the measurements in a single other thread so the GUI can still be accessed.
I also tried to apply this to my module, but could not connect the methods used there to my methods: http://www.xyzlang.com/python/PyQT5/pyqt_multithreading.html
The way I used to use the module:
class Automation(QMainWindow):
    [...]
    def measure(self):
        param1, param2 = (1, 2)
        for line in open(inputfile):
            self.measureValues(param1, param2)

    def measureValues(self, param1, param2):
        time.sleep(600)
        # measure some things
        with open(outputfile) as f:
            write(things)
But that obviously used only the one (main) thread. Can you help me find the right method to use here (QThread, QRunnable), or map the example from the second link to my module?
Thanks!
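One possible direction, sketched here and not taken from the original thread: run the whole per-line loop inside a single worker QThread and keep a reference to it on the main window so it is not garbage collected; results can be reported back to the GUI through a signal. inputfile and the measurement itself are placeholders from the question:

from PyQt5.QtCore import QThread, pyqtSignal
from PyQt5.QtWidgets import QMainWindow

class MeasureThread(QThread):
    line_done = pyqtSignal(str)  # notify the GUI after each measurement

    def __init__(self, inputfile, parent=None):
        super().__init__(parent)
        self.inputfile = inputfile

    def run(self):
        # the whole measurement loop lives in the worker thread,
        # so the GUI stays responsive while it runs
        with open(self.inputfile) as f:
            for line in f:
                # ... measure some things for this line and write the output file ...
                self.line_done.emit(line.strip())

class Automation(QMainWindow):
    def measure(self):
        self.thread = MeasureThread(inputfile, parent=self)  # keep a reference so it is not garbage collected
        self.thread.line_done.connect(self.on_line_done)
        self.thread.start()

    def on_line_done(self, line):
        print("finished measuring", line)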
