Paginated request using python-asana API

I am trying to export all tasks from all of my Asana workspaces using the python-asana API. But at some point it exits after giving the following error message:
Traceback (most recent call last):
File "export.py", line 56, in <module>
for index, task in enumerate(tasks):
File "build\bdist.win32\egg\asana\page_iterator.py", line 58, in items
File "build\bdist.win32\egg\asana\page_iterator.py", line 54, in next
File "build\bdist.win32\egg\asana\page_iterator.py", line 43, in __next__
File "build\bdist.win32\egg\asana\page_iterator.py", line 74, in get_next
File "build\bdist.win32\egg\asana\client.py", line 104, in get
File "build\bdist.win32\egg\asana\client.py", line 75, in request
asana.error.InvalidRequestError: Invalid Request: Your pagination token has expired.
I read that to solve this we need to make paginated requests, so I tried passing a limit to my request as follows:
tasks = client.tasks.find_all({'project' : project['id']}, limit=50)
But there was no difference: I was not getting any 'next_page' value, even though there were more than 50 tasks in the project.
So my question is:
How can I make paginated requests using the python-asana API? An explanation with an example would be best!
EDIT:
I am fetching the tasks as below:
tasks = client.tasks.find_all({'project' : project['id']}, item_limit=1)
print "Tasks", tasks  # Prints generator object
for index, task in enumerate(tasks):
    complete_task = client.tasks.find_by_id(task["id"])
    print complete_task  # Prints complete task dictionary
Now my question is: where will I get the next_page content for the remaining tasks, and how do I access it?
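For reference, a minimal sketch of both pagination styles with the 0.x python-asana client shown in the traceback, keeping the question's Python 2 style and its project dict. The access token is a placeholder, and the option names (page_size, iterator_type, full_payload) come from that client's request options, so treat them as assumptions if your version differs:

import asana

client = asana.Client.access_token('PERSONAL_ACCESS_TOKEN')  # placeholder token

# Option 1: let the client paginate. With the default iterator_type='items',
# find_all returns a generator that follows 'next_page' automatically.
tasks = client.tasks.find_all({'project': project['id']}, page_size=100)

# Materialize the pages up front so the pagination token cannot expire while
# slow per-task calls (find_by_id) run inside the loop.
for task in list(tasks):
    complete_task = client.tasks.find_by_id(task['id'])
    print complete_task

# Option 2: handle pages yourself. iterator_type=None returns one raw page;
# with full_payload=True the response keeps the 'next_page' entry, whose
# 'offset' token is passed along to fetch the following page.
page = client.tasks.find_all({'project': project['id']},
                             page_size=50, iterator_type=None,
                             full_payload=True)
print page.get('next_page')  # e.g. {'offset': '...', 'path': '...', 'uri': '...'}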

Related

Cannot use custom keywords with pvporcupine

I have already created an account in Picovoice and received an access key, but when I try to put in the path of my .ppn file, it shows an error:
[ERROR] loading keyword file at 'C:\Users\Priyam\Desktop\hey keyazip' failed with 'IO_ERROR'
Traceback (most recent call last):
File "e:\Personal Project\import struct.py", line 13, in <module>
porcupine = pvporcupine.create(access_key='access key',
File "C:\Users\Priyam\AppData\Roaming\Python\Python310\site-packages\pvporcupine\__init__.py", line 77, in create
return Porcupine(
File "C:\Users\Priyam\AppData\Roaming\Python\Python310\site-packages\pvporcupine\porcupine.py", line 158, in __init__
raise self._PICOVOICE_STATUS_TO_EXCEPTION[status]()
pvporcupine.porcupine.PorcupineIOError
The code is:
import struct

import pvporcupine
import pyaudio

porcupine = None
paud = None
audio_stream = None
try:
    access_key = "access key"
    porcupine = pvporcupine.create(access_key=access_key,
                                   keyword_paths=['C:\\Users\Priyam\Desktop\hey keyazip'],
                                   keywords=['hey keya'])  # pvporcupine.KEYWORDS for all built-in keywords
    paud = pyaudio.PyAudio()
    audio_stream = paud.open(rate=porcupine.sample_rate, channels=1,
                             format=pyaudio.paInt16, input=True,
                             frames_per_buffer=porcupine.frame_length)
    while True:
        # Read one frame of audio and unpack it into 16-bit samples.
        keyword = audio_stream.read(porcupine.frame_length)
        keyword = struct.unpack_from("h" * porcupine.frame_length, keyword)
        keyword_index = porcupine.process(keyword)
        if keyword_index >= 0:
            print("hotword detected")
finally:
    if porcupine is not None:
        porcupine.delete()
    if audio_stream is not None:
        audio_stream.close()
    if paud is not None:
        paud.terminate()
I tried the code above and the code provided by Picovoice itself, yet I am facing the same issue.
It looks like Porcupine is having trouble finding your keyword file: [ERROR] loading keyword file at 'C:\Users\Priyam\Desktop\hey keyazip' failed with 'IO_ERROR'.
The Picovoice Console provides the keyword inside a compressed .zip file. You will need to decompress the file and update the path in your code to point to the .ppn file inside. For example: C:\\Users\Priyam\Desktop\hey-keya_en_windows_v2_1_0\hey-keya_en_windows_v2_1_0.ppn
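A minimal sketch of the corrected call, assuming the archive was extracted to the folder named in the example above (the folder and file names are illustrative):

import pvporcupine

# Hypothetical path: point at the extracted .ppn file, not the .zip archive.
KEYWORD_PATH = 'C:\\Users\\Priyam\\Desktop\\hey-keya_en_windows_v2_1_0\\hey-keya_en_windows_v2_1_0.ppn'

porcupine = pvporcupine.create(
    access_key='access key',       # your Picovoice access key
    keyword_paths=[KEYWORD_PATH],  # custom keywords go through keyword_paths
)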

Trouble sending a batch create entity request in dialogflow

I have defined the following function. The purpose is to make a batch create-entity request with the Dialogflow client; I moved to this approach because sending many individual requests did not scale well.
The problem seems to be the line that defines EntityType. It seems "entityTypes" is not valid, but that is what the Dialogflow v2 documentation (the version I am using) shows.
Any ideas on what the issue is?
def create_batch_entity_types(self):
    client = self.get_entity_client()
    print(DialogFlowClient.batch_list)
    EntityType = {
        "entityTypes": DialogFlowClient.batch_list
    }
    response = client.batch_update_entity_types(
        parent=AGENT_PATH, entity_type_batch_inline=EntityType)

    def callback(operation_future):
        # Handle result.
        result = operation_future.result()
        print(result)

    response.add_done_callback(callback)
After running the function, I received this error:
Traceback (most recent call last):
File "df_client.py", line 540, in <module>
create_entity_types_from_database()
File "df_client.py", line 426, in create_entity_types_from_database
df.create_batch_entity_types()
File "/Users/andrewflorial/Documents/PROJECTS/curlbot/dialogflow/dialogflow_accessor.py", line 99, in create_batch_entity_types
response = client.batch_update_entity_types(parent=AGENT_PATH, entity_type_batch_inline=EntityType)
File "/Users/andrewflorial/Documents/PROJECTS/curlbot/venv/lib/python3.7/site-packages/dialogflow_v2/gapic/entity_types_client.py", line 767, in batch_update_entity_types
update_mask=update_mask,
ValueError: Protocol message EntityTypeBatch has no "entityTypes" field.
The argument for entity_type_batch_inline must have the same form as EntityTypeBatch.
Look at how that type is defined: https://dialogflow-python-client-v2.readthedocs.io/en/latest/gapic/v2/types.html#dialogflow_v2.types.EntityTypeBatch
It has to have an entity_types field, not entityTypes.
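A sketch of the corrected function under the question's assumptions (AGENT_PATH and DialogFlowClient.batch_list as defined there); only the field name changes:

def create_batch_entity_types(self):
    client = self.get_entity_client()
    entity_type_batch = {
        "entity_types": DialogFlowClient.batch_list  # snake_case, not "entityTypes"
    }
    response = client.batch_update_entity_types(
        parent=AGENT_PATH, entity_type_batch_inline=entity_type_batch)

    def callback(operation_future):
        # Handle the long-running operation's result.
        print(operation_future.result())

    response.add_done_callback(callback)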

Jira Python API to insert comment with WIKI formatting

While using the jira Python module to add comments, is there a way to insert comments using wiki markup?
I noticed some examples showing that REST API calls with representation:wiki can do this.
But it seems the current jira Python package only supports plain text as comments. Is this a limitation, or am I missing something?
I checked the jira source code and noticed the data is represented as a dictionary and dumped using json.dumps. I tried pushing the body as a dict: "{'storage': {'value': '== Activity: == error-rate-percentage-is-at-acceptable-limits .. Started', 'representation': 'wiki'}}"
But I got the following error back from the Jira API call:
[2019-12-02 01:07:22 DEBUG] [__init__:386] Before-control 'jira-integration' failed
Traceback (most recent call last):
File "<<HIDDEN BY ME>>>/lib/python3.7/site-packages/chaoslib/control/__init__.py", line 377, in apply_controls
settings=settings)
File "<<HIDDEN BY ME>>>/python3.7/site-packages/chaoslib/control/python.py", line 147, in apply_python_control
func(context=context, **arguments)
File "<<HIDDEN BY ME>>>/python3.7/site-packages/<<HIDDEN BY ME>>/controls/jira/tickets.py", line 220, in before_activity_control
add_comment(os.environ["SUB_TASK_TICKET"], content_as_wiki(formatting.format_as_heading2("Activity: ") + str(context["name"]) + " .. Started"))
File "<<HIDDEN BY ME>>>/python3.7/site-packages/<<HIDDEN BY ME>>/controls/jira/tickets.py", line 58, in add_comment
test = JIRA_CLIENT.add_comment(issue, comment)
File "<<HIDDEN BY ME>>/python3.7/site-packages/jira/client.py", line 126, in wrapper
result = func(*arg_list, **kwargs)
File "<<HIDDEN BY ME>>/python3.7/site-packages/jira/client.py", line 1367, in add_comment
url, data=json.dumps(data)
File "<<HIDDEN BY ME>>/python3.7/site-packages/jira/resilientsession.py", line 154, in post
return self.__verb('POST', url, **kwargs)
File "<<HIDDEN BY ME>>/python3.7/site-packages/jira/resilientsession.py", line 147, in __verb
raise_on_error(response, verb=verb, **kwargs)
File "<<HIDDEN BY ME>>/python3.7/site-packages/jira/resilientsession.py", line 57, in raise_on_error
r.status_code, error, r.url, request=request, response=r, **kwargs)
jira.exceptions.JIRAError: JiraError HTTP 400 url: https://<<HIDDEN BY ME>>>
text: Can not deserialize instance of java.lang.String out of START_OBJECT token
at [Source: com.enhancera.auditor.common.filter.RestReadingServletRequest$1#4e08280a; line: 1, column: 2] (through reference chain: com.atlassian.jira.issue.fields.rest.json.beans.CommentJsonBean["body"])
I was able to create a comment using wiki markup by passing a string formatted according to the documentation here. For a test, I created an issue and added a comment with a two-column, one-row (plus header row) table:
comment = jira.add_comment(issue, "||header1||header2||\n|one|two|")
This produced a rendered table in the Jira comment.
Make sure to wrap your comment string in double quotes.
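For context, a self-contained sketch of the same approach; the server URL, credentials, and issue key below are placeholders:

from jira import JIRA

# Placeholder connection details.
jira = JIRA(server="https://jira.example.com",
            basic_auth=("user@example.com", "API_TOKEN"))
issue = jira.issue("PROJ-123")  # hypothetical issue key

# The body is a plain string; Jira renders wiki markup (headings, tables, ...)
# server-side, so no {'storage': {...}} wrapper dict is needed.
body = "h2. Activity: error-rate-percentage-is-at-acceptable-limits .. Started"
jira.add_comment(issue, body)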

pymysql cannot execute query remotely

I'm executing a set of very heavy queries remotely against the company databases from our central server. Unfortunately, with Python logging enabled, the database raises these warnings when trying to execute a certain INSERT query:
/usr/local/lib/python3.5/dist-packages/pymysql/cursors.py:329: Warning: (1592, 'Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT. Statement is unsafe because it uses a UDF which may not return the same value on the slave.')
self._do_get_result()
/usr/local/lib/python3.5/dist-packages/pymysql/cursors.py:329: Warning: (1592, 'Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT. Statements writing to a table with an auto-increment column after selecting from another table are unsafe because the order in which rows are retrieved determines what (if any) rows will be written. This order cannot be predicted and may differ on master and the slave.')
self._do_get_result()
/usr/local/lib/python3.5/dist-packages/pymysql/cursors.py:329: Warning: (1592, 'Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT. Statement is unsafe because it invokes a trigger or a stored function that inserts into an AUTO_INCREMENT column. Inserted values cannot be logged correctly.')
self._do_get_result()
Without Python logging, the error is expressed differently and does not lock up the Python program's execution:
[2019-09-27 13:24:47,228 root ERROR] Fallo ejecucion de query: INSERT INTO movimiento (numero_sala, tipo_movimiento, fec_movimiento, hora_movimiento, nro_tarjeta, id_maquina, monto, lugar_im, fecha_hora_im) VALUES ( 3, 20, current_date, current_time, 157299522, 0, 40.000000, 2, current_timestamp)
Traceback (most recent call last):
File "/opt/cruciscripts/crucidmcs/connectionManager.py", line 58, in alter
cursor.execute(query)
File "/usr/local/lib/python3.5/dist-packages/pymysql/cursors.py", line 170, in execute
result = self._query(query)
File "/usr/local/lib/python3.5/dist-packages/pymysql/cursors.py", line 328, in _query
conn.query(q)
File "/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 517, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 732, in _read_query_result
result.read()
File "/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 1075, in read
first_packet = self.connection._read_packet()
File "/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 684, in _read_packet
packet.check_error()
File "/usr/local/lib/python3.5/dist-packages/pymysql/protocol.py", line 220, in check_error
err.raise_mysql_exception(self._data)
File "/usr/local/lib/python3.5/dist-packages/pymysql/err.py", line 109, in raise_mysql_exception
raise errorclass(errno, errval)
pymysql.err.InternalError: (1205, 'Lock wait timeout exceeded; try restarting transaction')
Depending on the SQL engine, there are limits on the size of a query.
The obvious solution was to partition the query into smaller chunks.
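A sketch of that partitioning with pymysql; the connection details, column subset, and rows data are illustrative assumptions, not the poster's actual setup:

import pymysql

CHUNK = 500  # rows per INSERT batch; tune to keep each statement small

# Assumed input: tuples matching the columns below (values are examples).
rows = [(3, 20, 157299522, 0, 40.0)] * 2000

conn = pymysql.connect(host="db.example.com", user="user",
                       password="secret", database="mydb")
sql = ("INSERT INTO movimiento "
       "(numero_sala, tipo_movimiento, nro_tarjeta, id_maquina, monto) "
       "VALUES (%s, %s, %s, %s, %s)")
try:
    with conn.cursor() as cursor:
        for i in range(0, len(rows), CHUNK):
            cursor.executemany(sql, rows[i:i + CHUNK])
            conn.commit()  # commit per chunk to keep each transaction short
finally:
    conn.close()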

Gunicorn + pyramid 'OPTIONS' method has stopped working

I have an app server and an API server running with gunicorn. Both are Pyramid apps.
The browser loads HTML and static files from the app server and then talks to the API server directly.
The browser needs to do a file upload operation; this has always worked. Because the upload is a cross-domain POST, I handle its OPTIONS method by setting:
response.headers['Access-Control-Allow-Origin'] = origin
The issue is that the local machine has started timing out the OPTIONS call to the API server. The first time, this error shows up:
File "/home/ranjith/workspace/venv/local/lib/python2.7/site-packages/gunicorn-0.17.4-py2.7.egg/gunicorn/workers/sync.py", line 87, in handle
req = six.next(parser)
File "/home/ranjith/workspace/venv/local/lib/python2.7/site-packages/gunicorn-0.17.4-py2.7.egg/gunicorn/http/parser.py", line 39, in __next__
self.mesg = self.mesg_class(self.cfg, self.unreader, self.req_count)
File "/home/ranjith/workspace/venv/local/lib/python2.7/site-packages/gunicorn-0.17.4-py2.7.egg/gunicorn/http/message.py", line 152, in __init__
super(Request, self).__init__(cfg, unreader)
File "/home/ranjith/workspace/venv/local/lib/python2.7/site-packages/gunicorn-0.17.4-py2.7.egg/gunicorn/http/message.py", line 49, in __init__
unused = self.parse(self.unreader)
File "/home/ranjith/workspace/venv/local/lib/python2.7/site-packages/gunicorn-0.17.4-py2.7.egg/gunicorn/http/message.py", line 167, in parse
line, rbuf = self.read_line(unreader, buf, self.limit_request_line)
File "/home/ranjith/workspace/venv/local/lib/python2.7/site-packages/gunicorn-0.17.4-py2.7.egg/gunicorn/http/message.py", line 217, in read_line
self.get_data(unreader, buf)
File "/home/ranjith/workspace/venv/local/lib/python2.7/site-packages/gunicorn-0.17.4-py2.7.egg/gunicorn/http/message.py", line 155, in get_data
data = unreader.read()
File "/home/ranjith/workspace/venv/local/lib/python2.7/site-packages/gunicorn-0.17.4-py2.7.egg/gunicorn/http/unreader.py", line 38, in read
d = self.chunk()
File "/home/ranjith/workspace/venv/local/lib/python2.7/site-packages/gunicorn-0.17.4-py2.7.egg/gunicorn/http/unreader.py", line 65, in chunk
return self.sock.recv(self.mxchunk)
error: [Errno 4] Interrupted system call
2013-05-27 11:37:17 [19097] [INFO] Handling signal: winch
2013-05-27 11:37:17,533 ERROR [gunicorn.error][MainThread] Error processing request.
After that it stops working altogether. All other calls work, and the OPTIONS call used to work; the same setup works on our test servers. I am not sure what is broken or why, and it has left me frustrated.
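For reference, a sketch of what such a preflight handler can look like in Pyramid; the header values and view wiring are illustrative, not the poster's actual code:

from pyramid.response import Response

def upload_options_view(request):
    # Hypothetical preflight handler for the cross-domain upload POST.
    origin = request.headers.get('Origin', '*')
    response = Response()
    response.headers['Access-Control-Allow-Origin'] = origin
    response.headers['Access-Control-Allow-Methods'] = 'POST, OPTIONS'
    response.headers['Access-Control-Allow-Headers'] = 'Content-Type'
    return response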
