How to unit test the components connected to RPi? - python-3.x

I have completed my project on a Raspberry Pi and it appears to work. Now I need to test my components and code. I have looked at pytest, but I am not sure how it would help in my case. Are there any automated tools that could be useful to me, or any other testing modules that come with Python?

I guess you could give pytest a try, but you'll probably need to add looped delays or some other kind of wait if you're going to test asynchronous components.
For example, if you have a class that operates on pins and want to verify a high voltage level on a button press, you could use something like this:
import time

def test_reading_pin_value():
    # let's pretend you have some class to control pins
    with InOutController() as ctrl:
        ctrl.setup_pin('button', 26, mode=pi.IN, pull_down=True)
        while True:
            value = ctrl['button']
            if value == 1:
                break
            time.sleep(0.2)
Note that instead of while True you can set a reasonable time limit and call assert on some condition, or pytest.fail if the timeout is reached:
import pytest
from time import time

def test_something_asynchronous():
    # your external components controller
    controller = Controller()
    start = time()
    wait, timeout = 10.0, False
    while not timeout:
        value = controller.wait_response()
        if value:
            assert value == 1, "Invalid response!"
            break
        elapsed = time() - start
        if elapsed >= wait:
            timeout = True
    assert not timeout, "Timeout!"
If your components show some additional stochasticity, you could try "probabilistic asserts" like:
def probabilistic_assert(condition, trials=100):
    n, ok = 0, False
    while n < trials and not ok:
        ok = condition()
        n += 1
    assert ok, "Condition failed during %d trials" % trials
Also, you could invoke the gpio command-line tool and parse its output if you want to know the voltage levels on the pins.
Anyway, I think you will probably need "manual" intervention in the tests to activate your external components, or to wait periodically for test expectations to be fulfilled if your components are activated by some external condition.
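If you want tests that run off the hardware entirely, a common complement to the above is to fake the GPIO layer with unittest.mock and unit test only your own logic. A minimal sketch, where wait_for_press and the gpio_read callable are hypothetical stand-ins for your code, not part of any real GPIO API:

```python
from unittest import mock

# Code under test: polls a pin-reading callable until it sees a high level.
# `gpio_read` is a stand-in for however your code reads the pin.
def wait_for_press(gpio_read, polls=50):
    for _ in range(polls):
        if gpio_read() == 1:
            return True
    return False

def test_wait_for_press_detects_high_level():
    # Pin reads low twice, then high on the simulated button press
    fake_pin = mock.Mock(side_effect=[0, 0, 1])
    assert wait_for_press(fake_pin) is True
    assert fake_pin.call_count == 3

def test_wait_for_press_times_out_on_silent_pin():
    fake_pin = mock.Mock(return_value=0)  # the button is never pressed
    assert wait_for_press(fake_pin, polls=10) is False
```

With this split, pytest exercises the polling logic on any machine, and only a thin hardware layer needs the on-Pi checks described above.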

Related

Trying to make a can counter on a Pi 4 in Python, struggling with code for two sensors running at the same time

So I work in the beverage industry and I decided to try to make a can counter using a Raspberry Pi 4. It needs to use two industrial sensors on the GPIO that detect the cans.
Full disclosure: I have mostly been googling code and reading up whenever I get errors in the terminal. I have done some rudimentary C# and C++ programming doing PLC stuff, but it's nothing like what I'm trying to do right now; just some simple statements for conversions and formulas.
I have it counting via one sensor with some very rudimentary code:
import RPi.GPIO as GPIO
import time

GPIN = 16
GPIO.setmode(GPIO.BCM)
GPIO.setup(GPIN, GPIO.IN)
counting = 0
while True:
    while GPIO.input(GPIN) == 0:
        time.sleep(0.1)
    counting = counting + 1
    print(counting)
    while GPIO.input(GPIN) == 1:
        time.sleep(0.1)
This counts in the terminal. Note that I need to count the on and off states with a slight delay to keep accidental double counts from happening. I had even added a GUI with guizero that made the count appear in a window, although I currently cannot replicate that from what I remember working (I foolishly didn't save it while trying to get to the next stage); roughly, instead of the print(counting) line in the code above I had the app.display() info.
The problem is I need it to count two sensors at the same time: one before the can rejector and one after. So I did some reading and figured I needed to run two (or maybe three) loops at the same time, as I have two sensors that each need a constant loop, plus it seems I need another loop that runs and refreshes the GUI. I got pointed to threading and have been trying to implement it, as that seems like what I want, but I haven't been able to make heads or tails of it. I can get the GUI to display, but the sensors don't read. If I switch back to my simple code, it counts away. I'm having trouble meshing the two together.
import threading
from guizero import App, Text, PushButton
import RPi.GPIO as GPIO
import time

GPIN1 = 16
GPIO.setmode(GPIO.BCM)
GPIO.setup(GPIN1, GPIO.IN)
GPIN2 = 15
GPIO.setmode(GPIO.BCM)
GPIO.setup(GPIN2, GPIO.IN)
counting1 = 0
counting2 = 0
counting3 = counting1 - counting2

def sensor1():
    global counting1
    while GPIO.input(GPIN1) == 0:
        time.sleep(0.1)
    counting1 = counting1 + 1
    while GPIO.input(GPIN1) == 1:
        time.sleep(0.1)

def sensor2():
    global counting2
    while GPIO.input(GPIN2) == 0:
        time.sleep(0.1)
    counting2 = counting2 + 1
    while GPIO.input(GPIN2) == 1:
        time.sleep(0.1)

x = threading.Thread(target=sensor1)
y = threading.Thread(target=sensor2)
x.start()
y.start()

while True:
    app = App(title="Can Count")
    message = Text(app, text="Total")
    message = Text(app, text=(counting1))
    message = Text(app, text="Rejected")
    message = Text(app, text=(counting3))
    app.display()
I'm just a bit stumped. I'm sure my way isn't the best way to do this; any advice, tips, or pointers in the right direction would be appreciated. I'm trying to crash-course YouTube Python tutorials on the side, but I am still coming up short.
It seems like I can get the display to show updates: if I close the window via the X, it restarts the window and shows the update. I have tried a few different things with guizero, defining a text() function above that code and calling text.repeat(10, text), thinking this would redraw the screen, but that doesn't work or breaks the GUI or the code.
Also, I know I import PushButton and don't use it, but the end goal will have a simple reset-the-counter button; I just haven't got there yet.
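The threading approach described above can work if each sensor loop lives entirely inside its thread function and the shared counts are guarded by a lock. A sketch, with a simulated sensor in place of RPi.GPIO (CanCounter and read_pin are invented names; on the Pi you would pass something like `lambda: GPIO.input(16)` and refresh the guizero display from the main thread with its repeat() method rather than rebuilding the App in a loop):

```python
import threading
import time

class CanCounter:
    """Counts one can per high pulse reported by a read_pin callable.

    read_pin is injected so the logic also runs off the Pi: pass a fake
    for testing, or something like `lambda: GPIO.input(16)` on hardware.
    """
    def __init__(self, read_pin, poll_s=0.001):
        self.read_pin = read_pin
        self.poll_s = poll_s
        self.count = 0
        self._lock = threading.Lock()
        self._stop = threading.Event()

    def run(self):
        while not self._stop.is_set():
            if self.read_pin() == 1:
                with self._lock:  # guard the shared counter
                    self.count += 1
                # wait for the pin to drop again so one can counts once
                while self.read_pin() == 1 and not self._stop.is_set():
                    time.sleep(self.poll_s)
            time.sleep(self.poll_s)

    def start(self):
        thread = threading.Thread(target=self.run, daemon=True)
        thread.start()
        return thread

    def stop(self):
        self._stop.set()
```

On the Pi you would create two of these (one per sensor), compute the rejected total as `before.count - after.count` inside the repeated GUI callback, and keep the GUI itself on the main thread.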

Running 2 infinite loops simultaneously switching control - Python3

I am building an image processing application that reads the current screen and does some processing. The code skeleton is as given below:
import my_img_proc as mip
import my_app_proc as map
import time

class myprocess:
    def perform_image_processing(self, image_name):
        while True:
            mip.perform_some_image_processing_from_screen(image_name)
            time.sleep(180)  # Switch to perform_another_processing, as the next iteration is not needed for 3 minutes. If this processing takes more than 3 minutes, complete the current iteration and then switch to the other function.

    def perform_another_processing(self, inp):
        while True:
            map.perform_some_other_image_processing_from_screen(inp)
            time.sleep(180)  # Switch to perform_image_processing, as the next iteration is not needed for 3 minutes. If this processing takes more than 3 minutes, pause it and switch to the other function.

mp = myprocess()
mp.perform_image_processing(my_image)  # Image processing to identify items from the screen capture
mp.perform_another_processing(2)  # Application processing to calculate values based on the screen capture
As of now, I am able to run only one of the functions at a time.
The question here is:
How can I run both of them simultaneously (as two separate threads/processes?), assuming both functions may need to access/switch the same screen at the same time?
One option I can think of is both functions setting a common variable (to 1/0) and passing control of execution to each other before going to sleep. Is that possible? How do I implement it?
Any help with this will help me add multiple other similar functionalities to my application.
Thanks
Bishnu
Note:
For all who could not visualize what I wanted to achieve, here is my code that works fine. This bot code will check the screens to shield (to protect from attacks) or gather (resources in the kingdom):
def shield_and_gather_loop(self):
    current_shield_sts = self.renew_shield()
    num_of_gatherers = self.num_of_troops_gathering()
    gather_sleep = False if num_of_gatherers < self.Max_Army_Size else True
    shield_sleep = True if current_shield_sts == 'SHIELDED' else False
    shield_timeout = time.time() + 60 * 3 if shield_sleep == True else None
    while True:
        while shield_sleep == False:  # and current_shield_sts != 'SHIELDED':
            self.reach_home_screen_from_anywhere()
            current_shield_sts = self.renew_shield()
            print("Current Shield Status # ", datetime.datetime.now(), " :", current_shield_sts)
            shield_sleep = True
            # self.go_to_sleep("Shield Check ", hours=0, minutes=3, seconds=0)
            shield_timeout = time.time() + 60 * 3  # 3 minutes from now
        while num_of_gatherers < self.Max_Army_Size and gather_sleep == False:
            if time.time() < shield_timeout:
                self.device.remove_processed_files()
                self.reach_kd_screen_from_anywhere()
                self.found_rss_tile = 0
                self.find_rss_tile_for_gather(random.choice(self.gather_items))
                num_of_gatherers = self.num_of_troops_gathering()
                print("Currently gathering: ", str(num_of_gatherers))
        if gather_sleep == True and shield_sleep == True:
            print("Both gather and shield are working fine. Going to sleep for 2 minutes...")
            time.sleep(120)
            # Wake up from sleep and recheck shield and gather status
            current_shield_sts = self.renew_shield()
            num_of_gatherers = self.num_of_troops_gathering()
            gather_sleep = False if num_of_gatherers < self.Max_Army_Size else True
            shield_sleep = True if current_shield_sts == 'SHIELDED' else False
Seems like the obvious thing to do is this, as it satisfies your criterion of having only one of the two actions running at a time:
import my_img_proc as mip
import my_app_proc as map
import time

class myprocess:
    def perform_image_processing(self, image_name, inp):
        while True:
            mip.perform_some_image_processing_from_screen(image_name)
            map.perform_some_other_image_processing_from_screen(inp)
            time.sleep(180)

mp = myprocess()
mp.perform_image_processing(my_image, 2)
If you need something more elaborate (such as perform_some_other_image_processing_from_screen() being able to interrupt a call to perform_some_image_processing_from_screen() before it has returned), that might require threads. However, it would also require your perform_*() functions to serialize access to the data they are manipulating, which would make them slower and more complicated; and based on the comments at the bottom of your example, it doesn't sound like the second function can do any useful work until the first function has completed its identification task anyway.
As a general principle, it's probably worthwhile to move the while True: [...] sleep(180) loop out of the function call and into the top-level code, so that the caller can control its behavior directly. Calling a function that never returns doesn't leave a lot of room for customization to the caller.
import my_img_proc as mip
import my_app_proc as map
import time

class myprocess:
    def perform_image_processing(self, image_name, inp):
        mip.perform_some_image_processing_from_screen(image_name)
        map.perform_some_other_image_processing_from_screen(inp)

mp = myprocess()
while True:
    mp.perform_image_processing(my_image, 2)
    time.sleep(180)
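If you do want the "common variable passing control to each other" idea from the question, threading.Event is a safer version of that shared flag: each worker blocks until it is handed the turn, runs once, and hands the turn back. A sketch (make_alternating_workers is an invented helper, and the two tasks here are placeholders for your processing functions):

```python
import threading

def make_alternating_workers(task_a, task_b, rounds):
    """Run task_a and task_b alternately on two threads, passing control
    back and forth with a pair of Events instead of a raw shared variable."""
    turn_a, turn_b = threading.Event(), threading.Event()
    turn_a.set()  # A goes first
    results = []

    def worker(task, my_turn, their_turn, label):
        for _ in range(rounds):
            my_turn.wait()        # block until the other side hands over
            my_turn.clear()
            results.append((label, task()))
            their_turn.set()      # hand control to the other worker

    t_a = threading.Thread(target=worker, args=(task_a, turn_a, turn_b, 'A'))
    t_b = threading.Thread(target=worker, args=(task_b, turn_b, turn_a, 'B'))
    t_a.start()
    t_b.start()
    t_a.join()
    t_b.join()
    return results
```

Because only one event is ever set at a time, the two tasks strictly alternate and never touch the screen simultaneously, which matches the constraint in the question.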

How to set timeout for a block of code which is not a function python3

After spending many hours looking for a solution on Stack Overflow, I did not find a good way to set a timeout for a block of code. There are approximations that set a timeout for a function; nevertheless, I would like to know how to set a timeout without having a function. Let's take the following code as an example:
print("Doing different things")
for i in range(0, 10):
    ...  # Doing some heavy stuff
print("Done. Continue with the following code")
So, how would you break the for loop if it has not finished after x seconds, and just continue with the following code (perhaps saving a bool to record that the timeout was reached), despite the for loop not finishing properly?
I don't think this can be implemented efficiently without using functions. Look at this code:
import datetime as dt

print("Doing different things")
# store the timeout and the start time
time_out_after = dt.timedelta(seconds=60)
start_time = dt.datetime.now()
for i in range(10):
    if dt.datetime.now() > start_time + time_out_after:
        break
    else:
        ...  # Doing some heavy stuff
print("Done. Continue with the following code")
The problem: the timeout is only checked at the beginning of every loop cycle, so it may take more than the specified timeout period to break out of the loop, or in the worst case it may never interrupt the loop, because it cannot interrupt code that never finishes an iteration.
Update:
As the OP replied that they want a more efficient way: this is a proper way to do it, but it uses functions.
import asyncio

async def test_func():
    print('doing thing here , it will take long time')
    await asyncio.sleep(3600)  # this emulates a heavy task with an actual sleep of one hour
    return 'yay!'  # this will not be executed, as the timeout occurs earlier

async def main():
    # Wait for at most 1 second
    try:
        result = await asyncio.wait_for(test_func(), timeout=1.0)  # call your function with a specific timeout
        # do something with the result
    except asyncio.TimeoutError:
        # when the timeout happens, the program breaks out of the test function and executes the code here
        print('timeout!')
    print('lets continue to do other things')

asyncio.run(main())
Expected output:
doing thing here , it will take long time
timeout!
lets continue to do other things
Note:
Now the timeout will happen after exactly the time you specify; in this example code, after one second.
You would replace this line:
await asyncio.sleep(3600)
with your actual task code.
Try it and let me know what you think. Thank you.
Read the asyncio docs:
link
Update 24/2/2019:
As the OP noted, asyncio.run was introduced in Python 3.7, and they asked for an alternative for Python 3.6.
asyncio.run alternative for Python older than 3.7: replace
asyncio.run(main())
with this code for older versions (I think 3.4 to 3.6):
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()
You may try the following way:
import time

start = time.time()
for val in range(10):
    # some heavy stuff
    time.sleep(.5)
    if time.time() - start > 3:  # 3 is timeout in seconds
        print('loop stopped at', val)
        break  # stop the loop, or sys.exit() to stop the script
else:
    print('successfully completed')
I guess it is a viable approach. The actual timeout is greater than 3 seconds and depends on the single-step execution time.
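On Unix (including the Raspberry Pi), the signal module can interrupt the block itself, even mid-iteration, without wrapping it in a function. A sketch, with the caveats that SIGALRM only works in the main thread and is not available on Windows (BlockTimeout is an invented exception name, and the sleeps stand in for the heavy stuff):

```python
import signal
import time

class BlockTimeout(Exception):
    """Raised by the alarm handler to abort the block."""

def _on_alarm(signum, frame):
    raise BlockTimeout()

signal.signal(signal.SIGALRM, _on_alarm)
signal.setitimer(signal.ITIMER_REAL, 0.2)  # timeout in seconds; setitimer allows fractions

timed_out = False
try:
    for i in range(10):
        time.sleep(0.1)  # stand-in for the heavy stuff
except BlockTimeout:
    timed_out = True  # the loop was aborted mid-iteration
finally:
    signal.setitimer(signal.ITIMER_REAL, 0)  # always cancel the timer

print('timed out' if timed_out else 'completed')  # prints "timed out" here
```

Unlike the polling approach, this fires at the deadline even if a single iteration never finishes, at the cost of being platform- and thread-restricted.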

How can I use multithreading (or multiprocessing?) for faster data upload?

I have a list of issues (Jira issues):
listOfKeys = [id1,id2,id3,id4,id5...id30000]
I want to get the worklogs of these issues; for this I used the jira-python library and this code:
listOfWorklogs = pd.DataFrame()  # I used the pandas (pd) lib
lst = {}  # helper dictionary where the worklogs will be stored
for i in range(len(listOfKeys)):
    worklogs = jira.worklogs(listOfKeys[i])  # getting list of worklogs
    if len(worklogs) == 0:
        i += 1
    else:
        for j in range(len(worklogs)):
            lst = {
                'self': worklogs[j].self,
                'author': worklogs[j].author,
                'started': worklogs[j].started,
                'created': worklogs[j].created,
                'updated': worklogs[j].updated,
                'timespent': worklogs[j].timeSpentSeconds
            }
            listOfWorklogs = listOfWorklogs.append(lst, ignore_index=True)
########### Below there is the recording to the .xlsx file ################
########### Below there is the recording to the .xlsx file ################
So I simply go into the worklog of each issue in a simple loop, which is equivalent to requesting the link
https://jira.mycompany.com/rest/api/2/issue/issueid/worklogs and retrieving information from it.
The problem is that there are more than 30,000 such issues,
and the loop is very slow (approximately 3 seconds per issue).
Can I somehow start multiple loops/processes/threads in parallel to speed up the process of getting the worklogs (maybe without the jira-python library)?
I recycled a piece of code I made into your code, I hope it helps:
from multiprocessing import Manager, Process, cpu_count

def insert_into_list(worklog, queue):
    lst = {
        'self': worklog.self,
        'author': worklog.author,
        'started': worklog.started,
        'created': worklog.created,
        'updated': worklog.updated,
        'timespent': worklog.timeSpentSeconds
    }
    queue.put(lst)
    return

# Number of cpus in the pc
num_cpus = cpu_count()
index = 0
# Manager and queue to hold the results
manager = Manager()
# The queue has controlled insertion, so processes don't step on each other
queue = manager.Queue()
listOfWorklogs = pd.DataFrame()
lst = {}
for i in range(len(listOfKeys)):
    worklogs = jira.worklogs(listOfKeys[i])  # getting list of worklogs
    if len(worklogs) == 0:
        i += 1
    else:
        # This loop replaces your "for j in range(len(worklogs))" loop
        while index < len(worklogs):
            processes = []
            elements = min(num_cpus, len(worklogs) - index)
            # Create a process for each cpu
            for i in range(elements):
                process = Process(target=insert_into_list, args=(worklogs[i + index], queue))
                processes.append(process)
            # Run the processes
            for i in range(elements):
                processes[i].start()
            # Wait for them to finish
            for i in range(elements):
                processes[i].join(timeout=10)
            index += num_cpus
# Dump the queue into the dataframe
while queue.qsize() != 0:
    listOfWorklogs = listOfWorklogs.append(queue.get(), ignore_index=True)
This should work and reduce the time by a factor of a little less than the number of CPUs in your machine. You can try changing that number manually for better performance. In any case, I find it very strange that it takes about 3 seconds per operation.
PS: I couldn't try the code because I have no example data; it probably has some bugs.
I have some troubles:
1) Indents in the code where the first for loop appears and the first if instruction begins (this instruction and everything below should be included in the loop, right?):
for i in range(len(listOfKeys) - 99):
    worklogs = jira.worklogs(listOfKeys[i])  # getting list of worklogs
    if len(worklogs) == 0:
        ....
2) cmd, the conda prompt, and Spyder did not allow your code to work, for this reason:
Python multiprocessing error: AttributeError: module '__main__' has no attribute '__spec__'
After researching on Google, I had to set __spec__ = None a bit higher in the code (but I'm not sure if this is correct) and the error disappeared.
By the way, the code in Jupyter Notebook worked without this error, but listOfWorklogs is empty and this is not right.
3) When I corrected the indents and set __spec__ = None, a new error occurred in this place:
processes[i].start()
an error like this:
"PicklingError: Can't pickle <class 'jira.resources.PropertyHolder'>: attribute lookup PropertyHolder on jira.resources failed"
If I remove the parentheses from the start and join methods, the code will run, but I will not have any entries in listOfWorklogs.
I ask again for your help!
How about thinking about it not from a technical standpoint but from a logical one? You know your code works, but at a rate of 3 seconds per issue, which means it would take 25 hours to complete. If you have the ability to split up the set of Jira issues that is passed into the script (maybe by date or issue key, etc.), you could create multiple different .py files with basically the same code; you would just pass each one a different list of Jira tickets. Then you could run, say, four of them at the same time and reduce the time to 6.25 hours each.
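Since each jira.worklogs() call spends its ~3 seconds waiting on the network rather than on the CPU, threads fit this workload better than processes, and they avoid the PicklingError entirely because nothing has to cross a process boundary. A sketch with concurrent.futures (fetch_worklogs is a stand-in for jira.worklogs; tune max_workers to what your Jira server tolerates):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_worklogs(key):
    # Stand-in for jira.worklogs(key); any network-bound call fits here.
    return [{'issue': key, 'timespent': 60}]

def fetch_all(keys, max_workers=20):
    """Fetch worklogs for many issues concurrently; pool.map preserves
    the order of `keys`, so rows stay grouped by issue."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        per_issue = pool.map(fetch_worklogs, keys)
        rows = []
        for worklogs in per_issue:
            rows.extend(worklogs)
    return rows

rows = fetch_all(['ID-1', 'ID-2', 'ID-3'])
```

Collecting plain dicts and building the frame once at the end with pd.DataFrame(rows) is also much faster than calling DataFrame.append per row.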

Python 2.7: Is it possible to make a timer without threading.Timer?

So, basically I want to make a timer, but I don't want to use threading.Timer, for efficiency.
Python produces threads by itself; they are not efficient, and it is better not to use them.
I searched articles related to this and checked that it is slow to use threads
(e.g., a single process divided into N parts and run on threads was slower).
However, at the moment I need to use a thread for this:
class Works(object):
    def __init__(self):
        self.symbol_dict = config.ws_api.get("ASSET_ABBR_LIST")
        self.dict = {}
        self.ohlcv1m = []

    def on_open(self, ws):
        ws.send(json.dumps(config.ws_api.get("SUBSCRIPTION_DICT")))
Every time I get a message from the web socket server, I store it in self.dict:
    def on_message(self, ws, message):
        message = json.loads(message)
        if len(message) > 2:
            ticker = message[2]
            pair = self.symbol_dict[(ticker[0])]
            baseVolume = ticker[5]
            timestmap = time.time()
            try:
                type(self.dict[pair])
            except KeyError as e:
                self.dict[pair] = []
            self.dict[pair].append({
                'pair': pair,
                'baseVolume': baseVolume,
            })

    def run(self):
        websocket.enableTrace(True)
        ws = websocket.WebSocketApp(
            url=config.ws_api.get("WEBSOCK_HOST"),
            on_message=self.on_message,
            on_open=self.on_open
        )
        ws.run_forever(sslopt={"cert_reqs": ssl.CERT_NONE})
Once every 60 s this runs: it aggregates self.dict, saves the result into self.ohlcv1m, and sends it to the DB; afterwards self.dict and self.ohlcv1m are initialized again to store the next minute of data from the server:
    def every60s(self):
        threading.Timer(60, self.every60s).start()
        for symbol in self.dict:
            tickerLists = self.dict[symbol]
            self.ohlcv1m.append({
                "V": sum([float(ticker['baseVolume']) for ticker in tickerLists])
            })
        # self.ohlcv1m will go to the database every 1m
        self.ohlcv1m = []  # init again
        self.dict = {}  # init again

if __name__ == "__main__":
    work = Works()
    t1 = threading.Thread(target=work.run)
    t1.daemon = True
    t1.start()
    work.every60s()
(sorry for the indentation)
I connect to the socket by running run_forever() and get real-time data.
Every 60 s I need to check and calculate the data.
Is there any way to do something every 60 s without a thread in Python 2.7?
I would appreciate any advice you can give. Thank you.
The answer comes down to whether you need the code to run exactly every 60 seconds, or whether you can just wait 60 seconds between runs (i.e., if the logic takes 5 seconds, it'll run every 65 seconds).
If you're happy with just a 60-second gap between runs, you could do:
import time

while True:
    every60s()
    time.sleep(60)
If you're really set on not using threads but having it start every 60 seconds regardless of the last poll time, you could time the last execution and subtract that from 60 seconds to get the sleep time.
However, really, with the code you've got there you're not going to run into any of the issues with Python threads you might have read about. Those issues come in when you've got multiple threads all running at the same time and all CPU bound, which doesn't seem to be the case here unless there's some very slow, CPU intensive work that's not in your provided code.
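The "subtract the last execution time from 60 seconds" idea can be sketched like this (run_every is an invented helper; time.monotonic is Python 3, so substitute time.time() on Python 2.7):

```python
import time

def run_every(interval, func, iterations=None):
    """Call func at a fixed interval, compensating for func's own runtime
    by scheduling against a fixed timeline rather than sleeping blindly."""
    next_run = time.monotonic()
    n = 0
    while True:
        func()
        n += 1
        if iterations is not None and n >= iterations:
            break
        next_run += interval
        delay = next_run - time.monotonic()
        if delay > 0:  # if func overran its slot, start the next run immediately
            time.sleep(delay)
```

Because the schedule advances by a fixed step instead of restarting after each call, slow iterations don't accumulate drift, and no extra thread is needed.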
