getting nodes with location specified via NeomodelPoint - neomodel

Getting nodes whose location is specified with a NeomodelPoint gives the error "Invalid instantiation via no arguments", while creating a node the same way works fine (below it raises a ConstraintError instead, proving the node exists). What is the right way of getting nodes with a specific location?
location = (51.3454, -6.2434)
try:
    property = Property.nodes.get(location=NeomodelPoint(location, crs='cartesian'))
except:
    property = neo4j.Property(location=NeomodelPoint(location, crs='cartesian')).save()
ValueError: Invalid instantiation via no arguments. A Point needs default values either in x,y,z or longitude, latitude, height coordinates
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/dmitriy/anaconda3/lib/python3.7/site-packages/neomodel/util.py", line 211, in cypher_query
response = session.run(query, params)
File "/Users/dmitriy/anaconda3/lib/python3.7/site-packages/neo4j/v1/api.py", line 331, in run
self._connection.fetch()
File "/Users/dmitriy/anaconda3/lib/python3.7/site-packages/neo4j/bolt/connection.py", line 287, in fetch
return self._fetch()
File "/Users/dmitriy/anaconda3/lib/python3.7/site-packages/neo4j/bolt/connection.py", line 327, in _fetch
response.on_failure(summary_metadata or {})
File "/Users/dmitriy/anaconda3/lib/python3.7/site-packages/neo4j/v1/result.py", line 70, in on_failure
raise CypherError.hydrate(**metadata)
neo4j.exceptions.ConstraintError: Node(2484) already exists with label Property and property location = {geometry: {type: "Point", coordinates: [51.3454, -6.2434], crs: {type: link, properties: {href: "http://spatialreference.org/ref/sr-org/7203/", code: 7203}}}}

That was a bug in the module, which has since been fixed.
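For reference, a minimal sketch of how the lookup can be done once the fixed neomodel is installed; the Property model definition here is an assumption reconstructed from the snippet above.

from neomodel import StructuredNode
from neomodel.contrib.spatial_properties import NeomodelPoint, PointProperty

class Property(StructuredNode):
    # assumed model definition; crs matches the question's snippet
    location = PointProperty(crs='cartesian')

point = NeomodelPoint((51.3454, -6.2434), crs='cartesian')
node = Property.nodes.get_or_none(location=point)  # returns None instead of raising if absent
if node is None:
    node = Property(location=point).save()

Using get_or_none avoids the try/except-around-save pattern from the question, which can race into the ConstraintError shown above.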

Related

Strange issue with Python recursive function in Chalice framework

I have defined this SNS-triggered Lambda in Chalice:
@app.on_sns_message(topic='arn:aws:sns:us-west-1:XXXXXXXX:MyTopic')
def step1_photo_url_preload(event, retry=3):
    try:
        js = json.loads(event.message)
        # ... some logic here, event object is never modified ...
    except:
        if retry:
            print("WARNING: failed, %d retries remaining" % retry)
            return step1_photo_url_preload(event, retry-1)
        else:
            raise
When an exception is raised, the function should retry up to 3 times.
Instead, what I get is the exception below. Look closely at the trace: Line 56 shows the error occurs when attempting the recursive call:
[ERROR] TypeError: 'SNSEvent' object is not subscriptable
Traceback (most recent call last):
File "/var/task/chalice/app.py", line 1459, in __call__
return self.func(event_obj)
File "/var/task/app.py", line 56, in step1_photo_url_preload
return step1_photo_url_preload(event, retry-1)
File "/var/task/chalice/app.py", line 1458, in __call__
event_obj = self.event_class(event, context)
File "/var/task/chalice/app.py", line 1486, in __init__
self._extract_attributes(event_dict)
File "/var/task/chalice/app.py", line 1532, in _extract_attributes
first_record = event_dict['Records'][0]
Mysteriously, the function can't work with the event object that it received the first time.
What could cause this?
I suspect this might have something to do with the magic behind @app.on_sns_message, but I'm not sure where to look next.
The problem is that the function is decorated; the failure is in the code the decorator runs. When you recurse, you call the decorated wrapper, which tries to rebuild an SNSEvent from the SNSEvent object you pass it (see the self.event_class(event, context) frame in the trace). Pull the functionality you want to run recursively into a separate function and the problem should go away.
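A minimal sketch of that refactor (the helper name _preload is illustrative): the decorated handler stays a thin wrapper, and the recursion happens in a plain undecorated function, so the recursive call never goes back through Chalice's event-wrapping __call__.

@app.on_sns_message(topic='arn:aws:sns:us-west-1:XXXXXXXX:MyTopic')
def step1_photo_url_preload(event):
    # Chalice wraps the raw event exactly once, here.
    return _preload(event, retry=3)

def _preload(event, retry):
    try:
        js = json.loads(event.message)
        # ... some logic here ...
    except Exception:
        if retry:
            print("WARNING: failed, %d retries remaining" % retry)
            return _preload(event, retry - 1)  # plain call, no re-wrapping
        raise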

Python image save error - raise ValueError("unknown file extension: {}".format(ext)) from e ValueError: unknown file extension:

I have just four weeks of experience with Python. I am creating a tool with Tkinter to paste a new company logo on top of existing images.
The method below gets all images in the given directory and pastes the new logo at the initial position. The existing image, the edited image, the x and y positions, a preview of the image and a few other fields are stored in the instance attribute self.images_all_arr.
def get_img_copy(self):
    self.images_all_arr = []
    existing_img_fldr = self.input_frame.input_frame_data['existing_img_folder']
    for file in os.listdir(existing_img_fldr):
        img_old = Image.open(os.path.join(existing_img_fldr, file))
        img_new_copy = img_old.copy()
        self.pasteImage(img_new_copy, initpaste=True)  # process to paste new logo
        view_new_img = ImageTk.PhotoImage(img_new_copy)
        fname, fext = file.split('.')
        formObj = {
            "fname": fname,
            "fext": fext,
            "img_old": img_old,
            "img_new": img_new_copy,
            "img_new_view": view_new_img,
            "add_logo": 1,
            "is_default": 1,
            "is_opacityImg": 0,
            "pos_x": self.defult_x.get(),
            "pos_y": self.defult_y.get()
        }
        self.images_all_arr.append(formObj)
After previewing each image on the Tkinter screen, I adjust the x and y positions as needed (updating pos_x and pos_y in the list self.images_all_arr).
Once that is all done, I need to save the edited images. The method below iterates over self.images_all_arr and calls img['img_new'].save(dir_output), since img['img_new'] holds the updated image.
def generate_imgae(self):
    if len(self.images_all_arr):
        dir_output = 'xxxxx'
        for img in self.images_all_arr:
            print(img['img_new'])
            img['img_new'].save(dir_output)
        print('completed..')
But it raises the error below:
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Program Files (x86)\Python38-32\lib\site-packages\PIL\Image.py", line 2138, in save
format = EXTENSION[ext]
KeyError: ''
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Program Files (x86)\Python38-32\lib\tkinter_init_.py", line 1883, in call
return self.func(*args)
File "C:\Users\662828\WORKSPACE\AAA_python_projects\AMI_automation_poc2\position_and_images.py", line 241, in generate_imgae
img['img_new'].save(dir_output)
File "C:\Program Files (x86)\Python38-32\lib\site-packages\PIL\Image.py", line 2140, in save
raise ValueError("unknown file extension: {}".format(ext)) from e
ValueError: unknown file extension:
dir_output doesn't contain a file extension; it's just xxxxx. You need to specify what image file format you want; the error tells us this by saying "unknown file extension".
Basically, you either need to include the extension in the file name, or pass the format as the second parameter to Image.save. You can check out the documentation here. For example:
image.save('image.png')
or
image.save('image', 'png')
The code below solved my issue, by passing the exact directory, file name and extension of the image to Image.save(). Here opfile works out to, e.g., C:\Users\WORKSPACE\AAA_python_projects\test\valid.png.
def generate_imgae(self):
    if len(self.images_all_arr):
        dir_output = r"C:\Users\WORKSPACE\AAA_python_projects\test"
        if not os.path.isdir(dir_output):
            os.mkdir(dir_output)
        for img in self.images_all_arr:
            opfile = os.path.join(dir_output, '{}.{}'.format(img['fname'], img['fext']))
            img['img_new'].save(opfile)
        print('completed..')
Thanks @dantechguy

Trouble sending a batch create entity request in dialogflow

I have defined the following function. Its purpose is to make a batch create-entity request with the Dialogflow client. I switched to this approach after sending many individual requests did not scale well.
The problem seems to be the line that defines EntityType. It seems "entityTypes" is not a valid field, yet that is what the Dialogflow v2 documentation (the version I am using) shows.
Any ideas on what the issue is?
def create_batch_entity_types(self):
    client = self.get_entity_client()
    print(DialogFlowClient.batch_list)
    EntityType = {
        "entityTypes": DialogFlowClient.batch_list
    }
    response = client.batch_update_entity_types(parent=AGENT_PATH, entity_type_batch_inline=EntityType)

    def callback(operation_future):
        # Handle result.
        result = operation_future.result()
        print(result)

    response.add_done_callback(callback)
After running the function I received this error:
Traceback (most recent call last):
File "df_client.py", line 540, in <module>
create_entity_types_from_database()
File "df_client.py", line 426, in create_entity_types_from_database
df.create_batch_entity_types()
File "/Users/andrewflorial/Documents/PROJECTS/curlbot/dialogflow/dialogflow_accessor.py", line 99, in create_batch_entity_types
response = client.batch_update_entity_types(parent=AGENT_PATH, entity_type_batch_inline=EntityType)
File "/Users/andrewflorial/Documents/PROJECTS/curlbot/venv/lib/python3.7/site-packages/dialogflow_v2/gapic/entity_types_client.py", line 767, in batch_update_entity_types
update_mask=update_mask,
ValueError: Protocol message EntityTypeBatch has no "entityTypes" field.
The argument for entity_type_batch_inline must have the same form as EntityTypeBatch.
Look at how that type is defined: https://dialogflow-python-client-v2.readthedocs.io/en/latest/gapic/v2/types.html#dialogflow_v2.types.EntityTypeBatch
It has to have an entity_types field, not entityTypes.
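A minimal sketch of the corrected call, keeping everything else from the question (AGENT_PATH and DialogFlowClient.batch_list as defined there); only the dictionary key changes to the snake_case field name:

entity_type_batch = {
    "entity_types": DialogFlowClient.batch_list  # snake_case, as EntityTypeBatch expects
}
response = client.batch_update_entity_types(parent=AGENT_PATH,
                                            entity_type_batch_inline=entity_type_batch)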

TypeError: 'Context' object is not iterable for google cloud function

Can someone help diagnose and correct this issue? The error keeps referencing the "Context" object, which I theorize is being passed in where I expect schema_df.
I've been trying to deploy a cloud function that works locally on the portion below.
def convert_schema(results_df, schema_df):
    """Converts data types in dataframe to match BigQuery destination table"""
    dict(schema_df)
    print(schema_df)
    for k in schema_df:  # for each column name in the dictionary, convert the data type in the dataframe
        results_df[k] = results_df[k].astype(schema_df.get(k))
    results_df_transformed = results_df
    print("Updated schema to match BigQuery destination table")
    return results_df_transformed
schema_df = {
    '_comments': 'object',
    '_direction': 'object',
    '_fromst': 'object',
    '_last_updt': 'datetime64',
    '_length': 'float64',
    '_lif_lat': 'float64',
    '_lit_lat': 'float64',
    '_lit_lon': 'float64',
    '_strheading': 'object',
    '_tost': 'object',
    '_traffic': 'int64',
    'segmentid': 'int64',
    'start_lon': 'float64',
    'street': 'object'
}
However, when deployed it fails and does not recognize the passed-in object as an iterable dictionary:
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 349, in run_background_function
_function_handler.invoke_user_function(event_object)
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 215, in invoke_user_function
return call_user_function(request_or_event)
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 212, in call_user_function
event_context.Context(**request_or_event.context))
File "/user_code/main.py", line 43, in handler
results_df_transformed = convert_schema(results_df, schema_df)
File "/user_code/lib/data_ingestion.py", line 84, in convert_schema
dict(schema_df)
TypeError: 'Context' object is not iterable
A function that responds to an event (i.e., a "Background Function") needs to have the signature:
def my_function(data, context):
...
where data and context are provided by the Cloud Functions runtime on every new event.
You probably want to call your convert_schema function inside the top-level background function instead. See "Writing Background Functions" for more details.
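A minimal sketch of that shape (build_results_df is a hypothetical helper standing in for whatever builds the dataframe from the event payload): the runtime supplies data and context on each event, while schema_df is your own module-level dictionary and gets passed explicitly.

def handler(data, context):
    # data and context are provided by the Cloud Functions runtime
    results_df = build_results_df(data)  # hypothetical: derive the dataframe from the event
    results_df_transformed = convert_schema(results_df, schema_df)
    # ... write results_df_transformed to BigQuery ...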

Paginated request using python-asana API

I am trying to export all tasks from all of my Asana workspaces using the python-asana API, but at some point it exits with the following error message.
Traceback (most recent call last):
File "export.py", line 56, in <module>
for index, task in enumerate(tasks):
File "build\bdist.win32\egg\asana\page_iterator.py", line 58, in items
File "build\bdist.win32\egg\asana\page_iterator.py", line 54, in next
File "build\bdist.win32\egg\asana\page_iterator.py", line 43, in __next__
File "build\bdist.win32\egg\asana\page_iterator.py", line 74, in get_next
File "build\bdist.win32\egg\asana\client.py", line 104, in get
File "build\bdist.win32\egg\asana\client.py", line 75, in request
asana.error.InvalidRequestError: Invalid Request: Your pagination token has expired.
I read that to solve this we need to make paginated requests, so I tried passing a limit to my request, as follows:
tasks = client.tasks.find_all({'project': project['id']}, limit=50)
But there was no difference: I was not getting any 'next_page' value even though there were more than 50 tasks in the project.
So my question is:
How can I make paginated requests using the python-asana API? An explanation with an example would be best!
EDIT:
I am fetching the tasks as below:
tasks = client.tasks.find_all({'project': project['id']}, item_limit=1)
print "Tasks", tasks  # prints a generator object
for index, task in enumerate(tasks):
    complete_task = client.tasks.find_by_id(task["id"])
    print complete_task  # prints the complete task dictionary
Now my question is: where will I get the next_page content for the remaining tasks, and how do I access it?
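For reference, a sketch of how the transparent iterator is typically used; treat the page_size option name as an assumption to verify against the installed python-asana version. With the default iterator behaviour there is no separate next_page value to read: the generator returned by find_all follows next_page between requests on its own, so consuming it promptly in a loop is itself a paginated request.

# page_size (assumed option name) controls how many tasks each
# underlying HTTP request fetches; the iterator follows next_page itself.
tasks = client.tasks.find_all({'project': project['id']}, page_size=50)
for index, task in enumerate(tasks):
    complete_task = client.tasks.find_by_id(task["id"])
    print complete_task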
