Strange issue with Python recursive function in Chalice framework - python-3.x

I have defined this SNS-triggered Lambda in Chalice:
@app.on_sns_message(topic='arn:aws:sns:us-west-1:XXXXXXXX:MyTopic')
def step1_photo_url_preload(event, retry=3):
    try:
        js = json.loads(event.message)
        # ... some logic here, event object is never modified ...
    except:
        if retry:
            print("WARNING: failed, %d retries remaining" % retry)
            return step1_photo_url_preload(event, retry-1)
        else:
            raise
When an exception is raised, the function should retry up to 3 times.
Instead, what I get is the exception below. Look closely at the trace: Line 56 shows the error occurs when attempting the recursive call:
[ERROR] TypeError: 'SNSEvent' object is not subscriptable
Traceback (most recent call last):
  File "/var/task/chalice/app.py", line 1459, in __call__
    return self.func(event_obj)
  File "/var/task/app.py", line 56, in step1_photo_url_preload
    return step1_photo_url_preload(event, retry-1)
  File "/var/task/chalice/app.py", line 1458, in __call__
    event_obj = self.event_class(event, context)
  File "/var/task/chalice/app.py", line 1486, in __init__
    self._extract_attributes(event_dict)
  File "/var/task/chalice/app.py", line 1532, in _extract_attributes
    first_record = event_dict['Records'][0]
Mysteriously, the function can't work with the event object that it received the first time.
What could cause this?
I suspect this might have something to do with the magic behind @app.on_sns_message, but I'm not sure where to look next.

The problem is that the function is decorated; the failure happens in the code the decorator runs. The recursive call does not invoke your function directly but Chalice's wrapper, which tries to build a fresh SNSEvent from the already-wrapped event object (hence 'SNSEvent' object is not subscriptable). Pull the functionality you want to run recursively into a separate, undecorated function and the problem should go away.
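A minimal sketch of that refactor, assuming the same imports and app object as in the question (the helper name _preload is made up for illustration):

import json

@app.on_sns_message(topic='arn:aws:sns:us-west-1:XXXXXXXX:MyTopic')
def step1_photo_url_preload(event):
    # Thin wrapper: Chalice's decorator machinery runs only here.
    return _preload(event)

def _preload(event, retry=3):
    # Plain function: recursing here never re-enters the decorator.
    try:
        js = json.loads(event.message)
        # ... some logic here, event object is never modified ...
    except Exception:
        if retry:
            print("WARNING: failed, %d retries remaining" % retry)
            return _preload(event, retry - 1)
        raise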

Related

Cannot use custom keywords with pvporcupine

I have already created an account in Picovoice and received an access key, but when I try to put in the path of my ppn file, it shows an error:
[ERROR] loading keyword file at 'C:\Users\Priyam\Desktop\hey keyazip' failed with 'IO_ERROR'
Traceback (most recent call last):
  File "e:\Personal Project\import struct.py", line 13, in <module>
    porcupine = pvporcupine.create(access_key='access key',
  File "C:\Users\Priyam\AppData\Roaming\Python\Python310\site-packages\pvporcupine\__init__.py", line 77, in create
    return Porcupine(
  File "C:\Users\Priyam\AppData\Roaming\Python\Python310\site-packages\pvporcupine\porcupine.py", line 158, in __init__
    raise self._PICOVOICE_STATUS_TO_EXCEPTION[status]()
pvporcupine.porcupine.PorcupineIOError
The code is:
import struct

import pvporcupine
import pyaudio

porcupine = None
paud = None
audio_stream = None
try:
    access_key = "access key"
    porcupine = pvporcupine.create(
        access_key=access_key,
        keyword_paths=[r'C:\Users\Priyam\Desktop\hey keyazip'],
        keywords=['hey keya'])  # pvporcupine.KEYWORDS for all keywords
    paud = pyaudio.PyAudio()
    audio_stream = paud.open(
        rate=porcupine.sample_rate, channels=1, format=pyaudio.paInt16,
        input=True, frames_per_buffer=porcupine.frame_length)
    while True:
        keyword = audio_stream.read(porcupine.frame_length)
        keyword = struct.unpack_from("h" * porcupine.frame_length, keyword)
        keyword_index = porcupine.process(keyword)
        if keyword_index >= 0:
            print("hotword detected")
finally:
    if porcupine is not None:
        porcupine.delete()
    if audio_stream is not None:
        audio_stream.close()
    if paud is not None:
        paud.terminate()
I tried the code above and the code provided by Picovoice itself, yet I am facing the same issue.
It looks like Porcupine is having trouble finding your keyword file: [ERROR] loading keyword file at 'C:\Users\Priyam\Desktop\hey keyazip' failed with 'IO_ERROR'.
The Picovoice Console delivers the keyword inside a compressed .zip file. You need to decompress that archive and update the path in your code to point at the .ppn file inside it, for example: C:\Users\Priyam\Desktop\hey-keya_en_windows_v2_1_0\hey-keya_en_windows_v2_1_0.ppn
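A minimal sketch of the corrected call (the extracted folder and file names below are assumptions; use whatever the unzipped download actually contains):

import pvporcupine

# The path must point at the extracted .ppn file, not the downloaded .zip.
porcupine = pvporcupine.create(
    access_key='access key',
    keyword_paths=[r'C:\Users\Priyam\Desktop\hey-keya_en_windows_v2_1_0\hey-keya_en_windows_v2_1_0.ppn'])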

Trouble sending a batch create entity request in Dialogflow

I have defined the following function. The purpose is to make a batch create entity request with the Dialogflow client, because sending many individual requests did not scale well.
The problem seems to be the line that defines EntityType. It seems "entityTypes" is not valid, but that is what is in the Dialogflow v2 documentation, which is the current version I am using.
Any ideas on what the issue is?
def create_batch_entity_types(self):
    client = self.get_entity_client()
    print(DialogFlowClient.batch_list)
    EntityType = {
        "entityTypes": DialogFlowClient.batch_list
    }
    response = client.batch_update_entity_types(
        parent=AGENT_PATH, entity_type_batch_inline=EntityType)

    def callback(operation_future):
        # Handle result.
        result = operation_future.result()
        print(result)

    response.add_done_callback(callback)
After running the function I received this error:
Traceback (most recent call last):
  File "df_client.py", line 540, in <module>
    create_entity_types_from_database()
  File "df_client.py", line 426, in create_entity_types_from_database
    df.create_batch_entity_types()
  File "/Users/andrewflorial/Documents/PROJECTS/curlbot/dialogflow/dialogflow_accessor.py", line 99, in create_batch_entity_types
    response = client.batch_update_entity_types(parent=AGENT_PATH, entity_type_batch_inline=EntityType)
  File "/Users/andrewflorial/Documents/PROJECTS/curlbot/venv/lib/python3.7/site-packages/dialogflow_v2/gapic/entity_types_client.py", line 767, in batch_update_entity_types
    update_mask=update_mask,
ValueError: Protocol message EntityTypeBatch has no "entityTypes" field.
The argument for entity_type_batch_inline must have the same form as EntityTypeBatch.
Look at how that type is defined: https://dialogflow-python-client-v2.readthedocs.io/en/latest/gapic/v2/types.html#dialogflow_v2.types.EntityTypeBatch
It has to have an entity_types field, not entityTypes.
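A minimal sketch of the corrected call (protobuf messages expect snake_case field names; client, AGENT_PATH, and the batch list are as in the question):

entity_type_batch = {
    "entity_types": DialogFlowClient.batch_list  # snake_case, matching EntityTypeBatch
}
response = client.batch_update_entity_types(
    parent=AGENT_PATH, entity_type_batch_inline=entity_type_batch)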

python logging.critical() to raise exception and dump stacktrace and die

I'm porting some code from Perl (log4perl) and Java (slf4j). All is fine, except that logging.critical() does not dump a stack trace and die like it does in the other frameworks; I need to add a lot of extra code, and logger.exception() also only logs at ERROR level.
Today I do:
try:
    errmsg = "--id={} not found on --host={}".format(args.siteid, args.host)
    raise GX8Exception(errmsg)
except GX8Exception as e:
    log.exception(e)
    sys.exit(-1)
This produces:
2018-01-10 10:09:56,814 [ERROR ] root --id=7A4A7845-7559-4F89-B678-8ADFECF5F7C3 not found on --host=welfare-qa
Traceback (most recent call last):
  File "./gx8-controller.py", line 85, in <module>
    raise GX8Exception(errmsg)
GX8Exception: --id=7A4A7845-7559-4F89-B678-8ADFECF5F7C3 not found on --host=welfare-qa
Is there a way to configure the Python logging module to do this, or is there another framework that does the same:
log.critical("--id={} not found on --host={}".format(args.siteid, args.host))
One approach would be to create a custom Handler that does nothing but pass log messages on to its superclass and then exit if the log level is high enough:
import logging

class ExitOnExceptionHandler(logging.StreamHandler):
    def emit(self, record):
        super().emit(record)
        if record.levelno in (logging.ERROR, logging.CRITICAL):
            raise SystemExit(-1)

logging.basicConfig(handlers=[ExitOnExceptionHandler()], level=logging.DEBUG)
logger = logging.getLogger('MYTHING')

def causeAProblem():
    try:
        raise ValueError("Oh no!")
    except Exception as e:
        logger.exception(e)

logger.warning('Going to try something risky...')
causeAProblem()
print("This won't get printed")
Output:
rat@pandion:~$ python test.py
ERROR:root:Oh no!
Traceback (most recent call last):
  File "test.py", line 14, in causeAProblem
    raise ValueError("Oh no!")
ValueError: Oh no!
rat@pandion:~$ echo $?
255
However, this could cause unexpected behavior for users of your code. If you want to log an exception and exit, it would be much more straightforward to simply leave the exception uncaught. If you want to log a traceback and exit wherever the code currently calls logging.critical, change it to raise an exception instead.
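A minimal sketch of that alternative, reusing the GX8Exception from the question:

# Instead of log.critical(...), just raise: an uncaught exception prints the
# traceback to stderr and exits the process with a non-zero status on its own.
raise GX8Exception(
    "--id={} not found on --host={}".format(args.siteid, args.host))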
I inherited some code where I could not change the handler class. I resorted to run-time patching of the handler, which is a variation on the solution by @nathan-vērzemnieks:
import logging
import types

def patch_logging_handler(logger):
    def custom_emit(self, record):
        self.orig_emit(record)
        if record.levelno == logging.FATAL:
            raise SystemExit(-1)
    handler = logger.handlers[0]
    setattr(handler, 'orig_emit', handler.emit)
    setattr(handler, 'emit', types.MethodType(custom_emit, handler))
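A usage sketch (assuming the target logger already has at least one handler attached, e.g. after basicConfig):

logging.basicConfig()
patch_logging_handler(logging.getLogger())
logging.fatal("unrecoverable; exiting")  # now raises SystemExit(-1)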
Nathan's answer is great! I've been looking for this for a long time. I will just add that you can also do:
if record.levelno >= logging.ERROR:
instead of
if record.levelno in (logging.ERROR, logging.CRITICAL):
to set the minimum level that causes an exit.

Print traceback "ending" on the assert-line, in Python3 unittest addFailure

I have made a simple, custom TestResult class (not inheriting from anything). When my Python unittest fails, addFailure(self, test, err) is called as expected.
err[2] contains a traceback.
I now print the traceback with this command: traceback.print_tb(err[2])
The printout contains two more levels than expected/desired.
File "/usr/lib64/python3.4/unittest/case.py", line 58, in testPartExecutor
yield
File "/usr/lib64/python3.4/unittest/case.py", line 580, in run
testMethod()
File "/home/xplatformer/code/tools/python/exception_test/my_test.py", line 23, in test_my4
self.assertEqual(5,4)
File "/usr/lib64/python3.4/unittest/case.py", line 800, in assertEqual
assertion_func(first, second, msg=msg)
File "/usr/lib64/python3.4/unittest/case.py", line 793, in _baseAssertEqual
raise self.failureException(msg)
How can I get the traceback to "end" at the assertEqual (line 23 in my_test.py)?
Similarly, when extracting the filename like this: err[2].tb_frame.f_code.co_filename, I get case.py and not my_test.py as expected/desired.
How can I get the filename where the assertion occurred?
From the log it is pretty clear that the self.assertEqual(5,4) on line 23, in the method test_my4 in file /home/xplatformer/code/tools/python/exception_test/my_test.py, is what fails the test case; changing it to self.assertEqual(5,5) will make the test pass.
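For the trimming itself, a sketch of one possible approach (an assumption on my part, not from the answer above; it relies on the CPython implementation detail that unittest's internal modules define a module-level __unittest = True global, which unittest itself uses to hide its own frames):

import traceback

def print_user_tb(tb):
    # Walk the traceback and keep only frames that do not belong to
    # unittest internals; the last kept frame is the assert line.
    entries = []
    while tb is not None:
        if '__unittest' not in tb.tb_frame.f_globals:
            entries.extend(traceback.format_list(traceback.extract_tb(tb, limit=1)))
        tb = tb.tb_next
    print(''.join(entries), end='')

Inside addFailure, print_user_tb(err[2]) would then stop at the self.assertEqual(5,4) frame in my_test.py, and the same filter gives you the user frame whose f_code.co_filename is my_test.py.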

Paginated request using python-asana API

I am trying to export all tasks from all of my Asana workspaces using the python-asana API, but at some point it exits after giving the following error message.
Traceback (most recent call last):
  File "export.py", line 56, in <module>
    for index, task in enumerate(tasks):
  File "build\bdist.win32\egg\asana\page_iterator.py", line 58, in items
  File "build\bdist.win32\egg\asana\page_iterator.py", line 54, in next
  File "build\bdist.win32\egg\asana\page_iterator.py", line 43, in __next__
  File "build\bdist.win32\egg\asana\page_iterator.py", line 74, in get_next
  File "build\bdist.win32\egg\asana\client.py", line 104, in get
  File "build\bdist.win32\egg\asana\client.py", line 75, in request
asana.error.InvalidRequestError: Invalid Request: Your pagination token has expired.
I read that to solve this we need to make paginated requests. But when I tried passing only a limit to my request, as follows:
tasks = client.tasks.find_all({'project' : project['id']}, limit=50)
there was no difference: I was not getting any 'next_page' value, even though there were more than 50 tasks in the project.
So my question is:
How can I make paginated requests using the python-asana API? An explanation with an example would be best!
EDIT:
I am fetching the tasks as below:
tasks = client.tasks.find_all({'project': project['id']}, item_limit=1)
print "Tasks", tasks  # Prints generator object
for index, task in enumerate(tasks):
    complete_task = client.tasks.find_by_id(task["id"])
    print complete_task  # Prints complete task dictionary
Now my question is: where will I get the next_page content for the remaining tasks, and how do I access it?
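For reference, a sketch of one way to avoid the expired-token error (an assumption, not from the thread: find_all returns a lazy iterator that fetches the next page on demand, so pagination tokens can expire if the loop body between fetches is slow; page_size is the python-asana per-request option controlling how many items each page request returns):

# Drain the iterator promptly so no pagination token goes stale, then do
# the slow per-task fetches afterwards.
tasks = list(client.tasks.find_all({'project': project['id']}, page_size=100))
for task in tasks:
    complete_task = client.tasks.find_by_id(task["id"])
    print(complete_task)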
