I am working on face detection using the Kairos API, and I have the following code:
def Test():
    image = cap.read()[1]
    cv2.imwrite("opencv_frame.png", image)
    recognized_faces = kairos_face.verify_face(file="filepath/opencv_frame.png", subject_id='David', gallery_name='Test')
    print(recognized_faces)
    if recognized_faces.get('images')[0].get('transaction').get('status') != 'success':
        print('No')
    else:
        print('Hello', recognized_faces.get('images')[0].get('transaction').get('subject_id'))
This works fine if I look straight at the camera, but if I turn my head it breaks with the following response:
kairos_face.exceptions.ServiceRequestError: {'Errors': [{'Message': 'no faces found in the image', 'ErrCode': 5002}]}
How can I handle this exception and force the Test function to keep running until a face is detected?
Can't you just catch the exception and try again?
def Test():
    captured = False
    while not captured:
        try:
            image = cap.read()[1]
            cv2.imwrite("opencv_frame.png", image)
            recognized_faces = kairos_face.verify_face(file="filepath/opencv_frame.png", subject_id='David', gallery_name='Test')
            captured = True
        except kairos_face.exceptions.ServiceRequestError:
            pass  # optionally wait before trying again
    print(recognized_faces)
    if recognized_faces.get('images')[0].get('transaction').get('status') != 'success':
        print('No')
    else:
        print('Hello', recognized_faces.get('images')[0].get('transaction').get('subject_id'))
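As written, the loop re-uploads frames to Kairos as fast as it can while no face is visible. A variation with a short delay and an attempt cap might be gentler on the API; this is only a sketch reusing cap, cv2 and kairos_face from your code, and the 0.5-second sleep and 50-attempt limit are arbitrary choices:

import time

def Test(max_attempts=50):
    recognized_faces = None
    for attempt in range(max_attempts):
        image = cap.read()[1]
        cv2.imwrite("opencv_frame.png", image)
        try:
            recognized_faces = kairos_face.verify_face(file="filepath/opencv_frame.png", subject_id='David', gallery_name='Test')
            break  # the request succeeded, stop retrying
        except kairos_face.exceptions.ServiceRequestError:
            time.sleep(0.5)  # give the user a moment to face the camera
    if recognized_faces is None:
        print('No face detected after', max_attempts, 'attempts')
        return
    print(recognized_faces)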
I have been working with exceptions, and I'm still trying to improve how to raise them correctly. I have done something like this:
main.py
from lib.utils import ExceptionCounter
class BarryException(Exception):
    pass

candy = "hello"
color = "world"

try:
    test = type(int("string"))
except Exception as err:
    # My own counter for exceptions; it sends to Discord if allowed_count is 0 by calling discord.notify(msg)
    ExceptionCounter().check(
        exception=BarryException,
        allowed_count=0,
        msg={
            'Title': 'Unexpected Error',
            'Reason': str(err),
            'Payload': str({"candy": candy, "color": color})
        }
    )
lib.utils
class ExceptionCounter:
    """
    Counter to check if we get exceptions x times in a row.
    """
    def __init__(self):
        self.exception_count = {}

    def check(self, exception, msg, allowed_count):
        exception_name = exception.__name__
        # dict.get() returns the stored value if the key exists, otherwise the default provided
        self.exception_count[exception_name] = self.exception_count.get(exception_name, 0) + 1
        if self.exception_count[exception_name] >= allowed_count:
            Discord_notify(msg)
            raise exception(msg)

    def reset(self):
        self.exception_count.clear()
However, someone from Code Review recommended that I do this instead:
Raise with contextual data in a custom exception type:
try:
    ...
except Exception as e:
    raise BarryException('Unexpected error testing', candy=candy, color=color) from e
Don't call Discord from there; don't do exception counting there; don't form a discord message payload dictionary there - just raise. In an upper method, you can except and do the Discord notification.
ExceptionCounter should not know about Discord at all, and should be strictly constrained to exception count limits and re-raising.
My problem is that I don't quite understand the part "just raise. In an upper method, you can except and do the Discord notification." I would appreciate it if someone could explain, or even show me an example of what it actually means :(
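My best guess at what that layering could look like is something like this (process_order and main are names I made up, so I'm not sure this is what was meant):

class BarryException(Exception):
    def __init__(self, message, **context):
        super().__init__(message)
        self.context = context  # keep the contextual data on the exception itself

def process_order(candy, color):
    # Lower layer: no Discord, no counting - just raise with context attached
    try:
        test = type(int("string"))
    except Exception as e:
        raise BarryException('Unexpected error testing', candy=candy, color=color) from e

def main():
    # Upper layer: this is where the Discord notification happens
    try:
        process_order("hello", "world")
    except BarryException as err:
        Discord_notify({
            'Title': 'Unexpected Error',
            'Reason': str(err),
            'Payload': str(err.context)
        })

Is that the idea?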
I am trying to have a try/except block in my code to catch an error, sleep for 5 seconds, and then continue where it left off. Below is my code; currently, as soon as it catches the exception, it does not continue and stops.
import time

from botocore.exceptions import ClientError

tries = 0
try:
    for pag_num, page in enumerate(one_submitted_jobs):
        if 'NextToken' in page:
            print("Token:", pag_num)
        else:
            print("No Token in page:", pag_num)
except ClientError as exception_obj:
    if exception_obj.response['Error']['Code'] == 'ThrottlingException':
        print("Throttling Exception Occurred.")
        print("Retrying.....")
        print("Attempt No.: " + str(tries))
        time.sleep(5)
        tries += 1
    else:
        raise
How can I make it continue after the exception? Any help would be great.
Note - I am trying to catch AWS's ThrottlingException error in my code.
The following code is a demonstration for @Selcuk, to show what I currently have based on his answer. It will be deleted as soon as we agree on whether I am doing it correctly or not.
tries = 1
pag_num = 0
# Only needed if one_submitted_jobs is not an iterator:
one_submitted_jobs = iter(one_submitted_jobs)

while True:
    try:
        page = next(one_submitted_jobs)
        # do things
        if 'NextToken' in page:
            print("Token: ", pag_num)
        else:
            print("No Token in page:", pag_num)
        pag_num += 1
    except StopIteration:
        break
    except ClientError as exception_obj:
        # Sleep if we are being throttled
        if exception_obj.response['Error']['Code'] == 'ThrottlingException':
            print("Throttling Exception Occurred.")
            print("Retrying.....")
            print("Attempt No.: " + str(tries))
            time.sleep(3)
            tries += 1
You are not able to keep running because the exception occurs in your for line. This is a bit tricky because in this case the for statement has no way of knowing if there are more items to process or not.
A workaround could be to use a while loop instead:
pag_num = 0
# Only needed if one_submitted_jobs is not an iterator:
one_submitted_jobs = iter(one_submitted_jobs)

while True:
    try:
        page = next(one_submitted_jobs)
        # do things
        pag_num += 1
    except StopIteration:
        break
    except ClientError as exception_obj:
        # Sleep if we are being throttled
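If the throttling keeps recurring, the sleep can also be wrapped in a small helper with exponential backoff. This is just a sketch; call_with_backoff is not part of botocore, and the delay and attempt cap are arbitrary:

import time
from botocore.exceptions import ClientError

def call_with_backoff(func, *args, max_attempts=5, base_delay=1.0, **kwargs):
    # Retry func() on ThrottlingException, doubling the wait each time
    for attempt in range(max_attempts):
        try:
            return func(*args, **kwargs)
        except ClientError as err:
            if err.response['Error']['Code'] != 'ThrottlingException':
                raise
            delay = base_delay * (2 ** attempt)
            print("Throttled, retrying in", delay, "seconds")
            time.sleep(delay)
    raise RuntimeError("Still throttled after %d attempts" % max_attempts)

Any call that might get throttled can then go through call_with_backoff(client_method, ...) instead of being called directly.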
I have a Python 3 script that needs to make thousands of requests to many different websites and check whether their source code passes some predefined rules.
I am using Selenium to make the requests because I need the source code after the JS finishes executing, but due to the high number of URLs I need to check, I am trying to run multiple threads simultaneously. Each thread creates and maintains an instance of the webdriver to make the requests. The problem is that after a while all the threads go silent and simply stop executing, leaving just a single thread doing all the work. Here is the relevant part of my code:
def get_browser(use_firefox=True):
    if use_firefox:
        options = FirefoxOptions()
        options.headless = True
        browser = webdriver.Firefox(options=options)
        browser.implicitly_wait(4)
        return browser
    else:
        chrome_options = ChromeOptions()
        chrome_options.add_argument("--headless")
        browser = webdriver.Chrome(chrome_options=chrome_options)
        browser.implicitly_wait(4)
        return browser
def start_validations(urls, rules, results, thread_id):
    try:
        log("thread %s started" % thread_id, thread_id)
        browser = get_browser(thread_id % 2 == 1)
        while not urls.empty():
            url = "http://%s" % urls.get()
            try:
                log("starting %s" % url, thread_id)
                browser.get(url)
                time.sleep(0.5)
                WebDriverWait(browser, 6).until(selenium_wait_reload(4))
                html = browser.page_source
                result = check_url(html, rules)
                original_domain = url.split("://")[1].split("/")[0].replace("www.", "")
                tested_domain = browser.current_url.split("://")[1].split("/")[0].replace("www.", "")
                redirected_url = "" if tested_domain == original_domain else browser.current_url
                results.append({"Category": result, "URL": url, "Redirected": redirected_url})
                log("finished %s" % url, thread_id)
            except Exception as e:
                log("couldn't test url %s" % url, thread_id)
                log(str(e), thread_id)
                results.append({"Category": "Connection Error", "URL": url, "Redirected": ""})
                browser.quit()
                time.sleep(2)
                browser = get_browser(thread_id % 2 == 1)
    except Exception as e:
        log(str(e), thread_id)
    finally:
        log("closing thread", thread_id)
        browser.quit()
def calculate_progress(urls):
    progress_folder = "%sprogress/" % WEBROOT
    if not os.path.exists(progress_folder):
        os.makedirs(progress_folder)
    initial_size = urls.qsize()
    while not urls.empty():
        current_size = urls.qsize()
        on_queue = initial_size - current_size
        progress = '{0:.0f}'.format((on_queue / initial_size * 100))
        for progress_file in os.listdir(progress_folder):
            file_path = os.path.join(progress_folder, progress_file)
            if os.path.isfile(file_path) and not file_path.endswith(".csv"):
                os.unlink(file_path)
        os.mknod("%s%s" % (progress_folder, progress))
        time.sleep(1)
if __name__ == '__main__':
    while True:
        try:
            log("scraper started")
            if os.path.isfile(OUTPUT_FILE):
                os.unlink(OUTPUT_FILE)

            manager = Manager()
            rules = fetch_rules()
            urls = manager.Queue()
            fetch_urls()
            results = manager.list()
            jobs = []

            p = Process(target=calculate_progress, args=(urls,))
            jobs.append(p)
            p.start()

            for i in range(THREAD_POOL_SIZE):
                log("spawning thread with id %s" % i)
                p = Process(target=start_validations, args=(urls, rules, results, i))
                jobs.append(p)
                p.start()
                time.sleep(2)

            for j in jobs:
                j.join()

            save_results(results, OUTPUT_FILE)
            log("scraper finished")
        except Exception as e:
            log(str(e))
As you can see, at first I thought I could only have one instance of the browser, so I tried to run at least Firefox and Chrome in parallel, but this still leaves only one thread doing all the work.
Sometimes the driver crashed and the thread stopped working even though it was inside a try/except block, so I started creating a new instance of the browser every time this happened, but it still didn't work. I also tried waiting a few seconds between creating each instance of the driver, still with no results.
Here is a pastebin of one of the log files:
https://pastebin.com/TsjZdRYf
A strange thing I noticed is that almost every time, the only thread that keeps running is the last one spawned (with id 3).
Thanks for your time and your help!
EDIT:
[1] Here is the full code: https://pastebin.com/fvVPwPVb
[2] custom selenium wait condition: https://pastebin.com/Zi7nbNFk
Am I allowed to curse on SO? I solved the problem, and I don't think this answer should exist on SO because nobody else will benefit from it. The problem was a custom wait condition that I had created. This class is in the pastebin added in edit 2, but I'll also add it here for convenience:
import time

class selenium_wait_reload:
    def __init__(self, desired_repeating_sources):
        self.desired_repeating_sources = desired_repeating_sources
        self.repeated_pages = 0
        self.previous_source = None

    def __call__(self, driver):
        while True:
            current_source = driver.page_source
            if current_source == self.previous_source:
                self.repeated_pages = self.repeated_pages + 1
                if self.repeated_pages >= self.desired_repeating_sources:
                    return True
            else:
                self.previous_source = current_source
                self.repeated_pages = 0
            time.sleep(0.3)
The goal of this class was to make Selenium wait, because the JS could still be loading additional DOM.
So this class makes Selenium wait a short time and check the source, wait a little again and check the source again. It repeats this until the source code is the same 3 times in a row.
The problem is that some pages have a JS carousel, so the source code is never the same. I thought that in cases like this the second parameter of WebDriverWait would make it crash with a TimeoutException. I was wrong.
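For reference, a version of the condition without the inner loop could look like the sketch below (not the code I actually ran). WebDriverWait polls the callable itself, so returning False instead of looping lets its timeout fire with a TimeoutException:

class selenium_wait_reload:
    def __init__(self, desired_repeating_sources):
        self.desired_repeating_sources = desired_repeating_sources
        self.repeated_pages = 0
        self.previous_source = None

    def __call__(self, driver):
        # Called repeatedly by WebDriverWait: True stops the wait, False keeps polling
        current_source = driver.page_source
        if current_source == self.previous_source:
            self.repeated_pages += 1
        else:
            self.previous_source = current_source
            self.repeated_pages = 0
        return self.repeated_pages >= self.desired_repeating_sources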
I am trying to build an application with Python using Kivy. This is what I am trying to achieve:
Use input from a video or camera to detect images using a detector that (to simplify) finds all circles in an image, and highlights them somehow, producing a new image as a result. This is done by a class which I call detector. Detector makes use of OpenCV for image processing.
The detector's output image is written into display.jpg
A Kivy app has an image field of which source is display.jpg
(This doesn't work) I reload the image source using self.image.reload() so that my application refreshes the output for the user
Here is my code
class GUI(App):
    def build(self):
        self.capture = cv2.VideoCapture(VIDEO_SOURCE)
        self.detector = detector(self.capture)

        layout = GridLayout(cols=2)
        self.InfoStream = Label(text='Info', size_hint=(20, 80))
        StartButton = Button(text='Start', size_hint=(80, 20))
        StartButton.bind(on_press=lambda x: self.start_program())
        self.image = Image(source='display.jpg')

        layout.add_widget(self.image)
        layout.add_widget(self.InfoStream)
        layout.add_widget(StartButton)
        layout.add_widget(Label(text='Test', size_hint=(20, 20)))
        return layout

    def start_program(self):
        while True:
            self.detector.detect_frame()
            self.update_image()  # After detecting each frame, I run this

    def update_image(self, *args):
        if self.detector.output_image is not None:
            cv2.imwrite('display.jpg', self.detector.output_image)
            self.image.reload()  # I use reload() to refresh the image on screen

    def exit(self):
        self.stop()

    def on_stop(self):
        self.capture.release()

if __name__ == '__main__':
    GUI().run()
What happens is that I can successfully start the application by pressing my StartButton. I see output on console proving it's cyclically running through frames from my video stream, and I also see the source image display.jpg being updated in real time.
However, after I start, the app window seems to simply freeze. It goes "Not responding" and greys out, never showing any refreshed image.
Following some existing code from other sources, I also attempted to refresh the image using a scheduled task, with Clock.schedule_interval(self.update_image, dt=1), but the result was the same.
Is my way of refreshing the image correct? Are there better ways to do so?
The while True loop on the main thread will cause your App to be non-responsive. You can eliminate that loop by using Clock.schedule_interval as below:
from kivy.clock import Clock

class GUI(App):
    def build(self):
        self.capture = cv2.VideoCapture(VIDEO_SOURCE)
        self.detector = detector(self.capture)

        layout = GridLayout(cols=2)
        self.InfoStream = Label(text='Info', size_hint=(20, 80))
        StartButton = Button(text='Start', size_hint=(80, 20))
        StartButton.bind(on_press=lambda x: self.start_program())
        self.image = Image(source='display.jpg')

        layout.add_widget(self.image)
        layout.add_widget(self.InfoStream)
        layout.add_widget(StartButton)
        layout.add_widget(Label(text='Test', size_hint=(20, 20)))
        return layout

    def start_program(self):
        # Schedule the update instead of looping on the main thread
        Clock.schedule_interval(self.update_image, 0.05)

    def update_image(self, *args):
        self.detector.detect_frame()
        if self.detector.output_image is not None:
            cv2.imwrite('display.jpg', self.detector.output_image)
            self.image.reload()  # I use reload() to refresh the image on screen

    def exit(self):
        self.stop()

    def on_stop(self):
        self.capture.release()

if __name__ == '__main__':
    GUI().run()
Here is another question where the poster is using a Texture instead of writing the captured image to a file. Might be a more efficient approach.
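For example, update_image could blit the OpenCV frame straight into a texture instead of going through display.jpg. A rough sketch, assuming output_image is a BGR numpy array:

from kivy.graphics.texture import Texture

def update_image(self, *args):
    # Drop-in replacement for GUI.update_image
    self.detector.detect_frame()
    frame = self.detector.output_image
    if frame is None:
        return
    frame = cv2.flip(frame, 0)  # Kivy textures start at the bottom-left corner, so flip vertically
    texture = Texture.create(size=(frame.shape[1], frame.shape[0]), colorfmt='bgr')
    texture.blit_buffer(frame.tobytes(), colorfmt='bgr', bufferfmt='ubyte')
    self.image.texture = texture  # no file round-trip or reload() needed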
I'm currently getting this error. I'm confused because, from what I can tell, GeneratorExit is just raised whenever a generator finishes, but I have a ton of other checkers inheriting from this class that do not raise this error. Am I setting the generator up properly? Or is there some implicit code I'm not taking into account that is calling close()?
"error": "Traceback (most recent call last):\n File \"/stashboard/source/stashboard/checkers.py\", line 29, in run\n yield self.check()\nGeneratorExit\n",
The code where this yield statement is called:
class Checker():
    def __init__(self, event, frequency, params):
        self.event = event
        self.frequency = frequency
        self.params = params

    @gen.coroutine
    def run(self):
        """ Run check method every <frequency> seconds
        """
        while True:
            try:
                yield self.check()
            except GeneratorExit:
                logging.info("EXCEPTION")
                raise GeneratorExit
            except:
                data = {
                    'status': events.STATUS_ERROR,
                    'error': traceback.format_exc()
                }
                yield self.save(data)
            yield gen.sleep(self.frequency)

    @gen.coroutine
    def check(self):
        pass

    @gen.coroutine
    def save(self, data):
        yield events.save(self.event, data)
And this is the code that inherits from it:
class PostgreChecker(Checker):
    # checks list of Post
    formatter = 'stashboard.formatters.PostgreFormatter'

    def __init__(self, event, frequency, params):
        super().__init__(event, frequency, params)
        self.clients = []
        for DB in configuration["postgre"]:
            # setup and create connections to PG servers
            postgreUri = queries.uri(DB["host"], DB["port"], DB["dbName"],
                                     DB["userName"], DB["password"])
            # creates actual link to DB
            client = queries.TornadoSession(postgreUri)
            # starts connection
            client.host = DB["host"]
            self.clients.append(client)

    @gen.coroutine
    def check(self):
        for client in self.clients:
            try:
                yield client.validate()
                self.save({'host': client.host,
                           'status': events.STATUS_OK})
            except (ConnectionError, AutoReconnect, ConnectionFailure):
                self.save({'host': client.host,
                           'status': events.STATUS_FAIL})
Tornado never calls close() on your generators, but the garbage collector does (starting in Python 3.4 I think). How is checker.run() called? Use IOLoop.spawn_callback() for fire-and-forget coroutines; this will keep a reference to them and allow them to keep running indefinitely.
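For example, instead of calling checker.run() directly, the coroutine can be started like this (a sketch; event, frequency and params stand for whatever you already pass to the constructor):

from tornado.ioloop import IOLoop

checker = PostgreChecker(event, frequency, params)
# spawn_callback keeps a reference to the running coroutine,
# so the garbage collector cannot close the generator mid-run
IOLoop.current().spawn_callback(checker.run)
IOLoop.current().start()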
The specific issue here was that my DB cursors were not automatically reconnecting. I was using the queries library, but I switched over to momoko and the issue is gone.