How to inspect mapped tasks' inputs from reduce tasks in Prefect

I'm exploring Prefect's map-reduce capability as a powerful idiom for writing massively-parallel, robust importers of external data.
As an example - very similar to the X-Files tutorial - consider this snippet:
@task
def retrieve_episode_ids():
    api_connection = APIConnection(prefect.context.my_config)
    return api_connection.get_episode_ids()

@task(max_retries=2, retry_delay=datetime.timedelta(seconds=3))
def download_episode(episode_id):
    api_connection = APIConnection(prefect.context.my_config)
    return api_connection.get_episode(episode_id)

@task(trigger=all_finished)
def persist_episodes(episodes):
    db_connection = DBConnection(prefect.context.my_config)
    ...store all episodes by their ID with a success/failure flag...

with Flow("import_episodes") as flow:
    episode_ids = retrieve_episode_ids()
    episodes = download_episode.map(episode_ids)
    persist_episodes(episodes)
The peculiarity of my flow, compared with the simple X-Files tutorial, is that I would like to persist results for all the episodes that I have requested, even for the failed ones. Imagine that I'll be writing episodes to a database table as the episode ID decorated with an is_success flag. Moreover, I'd like to write all episodes with a single task instance, in order to be able to perform a bulk insert - as opposed to inserting each episode one by one - hence my persist_episodes task being a reduce task.
The trouble I'm having is in being able to gather the episode ID for the failed downloads from that reduce task, so that I can store the failed information in the table under the appropriate episode ID. I could of course rewrite the download_episode task with a try/catch and always return an episode ID even in the case of failure, but then I'd lose the automatic retry/failure functionality which is a good deal of the appeal of Prefect.
Is there a way for a reduce task to infer the argument(s) of a failed mapped task? Or, could I write this differently to achieve what I need, while still keeping the same level of clarity as in my example?

Mapping over a list preserves the order. This is a property you can use to link inputs with errors. Check the code below; more explanation follows.
from prefect import Flow, task
import prefect

@task
def retrieve_episode_ids():
    return [1, 2, 3, 4, 5]

@task
def download_episode(episode_id):
    if episode_id == 5:
        return ValueError()
    return episode_id

@task()
def persist_episodes(episode_ids, episodes):
    # Note the last element here will be the ValueError
    prefect.context.logger.info(episodes)
    # We change that ValueError into a "fail" message
    episodes = ["fail" if isinstance(x, BaseException) else x for x in episodes]
    # Note the last element here will be the "fail"
    prefect.context.logger.info(episodes)
    result = {}
    for i, episode_id in enumerate(episode_ids):
        result[episode_id] = episodes[i]
    # Check final results
    prefect.context.logger.info(result)
    return

with Flow("import_episodes") as flow:
    episode_ids = retrieve_episode_ids()
    episodes = download_episode.map(episode_ids)
    persist_episodes(episode_ids, episodes)

flow.run()
The handling largely happens in persist_episodes: pass the list of inputs in again and you can match each input with its result, including the failed ones. I added some handling around identifying errors and replacing them with whatever you want. Does that answer the question?
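Putting the two ideas together for the original flow, a minimal sketch might look like the one below. APIConnection, DBConnection, and bulk_insert are the question's own placeholders, and the claim that failed mapped children arrive in the reduce task as exception instances under trigger=all_finished is my understanding of Prefect 1.x behaviour, worth verifying against the version you run:
import datetime
import prefect
from prefect import Flow, task
from prefect.triggers import all_finished

@task
def retrieve_episode_ids():
    api_connection = APIConnection(prefect.context.my_config)
    return api_connection.get_episode_ids()

@task(max_retries=2, retry_delay=datetime.timedelta(seconds=3))
def download_episode(episode_id):
    # letting exceptions propagate keeps Prefect's retry machinery in play
    api_connection = APIConnection(prefect.context.my_config)
    return api_connection.get_episode(episode_id)

@task(trigger=all_finished)
def persist_episodes(episode_ids, episodes):
    # map() preserves order, so zip() lines each result up with its episode_id;
    # failed children are expected to show up here as exception instances
    db_connection = DBConnection(prefect.context.my_config)
    rows = [
        (episode_id,
         None if isinstance(episode, BaseException) else episode,
         not isinstance(episode, BaseException))
        for episode_id, episode in zip(episode_ids, episodes)
    ]
    db_connection.bulk_insert(rows)  # bulk insert of (id, payload, is_success) rows

with Flow("import_episodes") as flow:
    episode_ids = retrieve_episode_ids()
    episodes = download_episode.map(episode_ids)
    persist_episodes(episode_ids, episodes)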
Always happy to chat more. You can reach out in the Prefect Slack or Discourse as well.

Related

RuntimeWarning: coroutine 'NewsExtraction.get_article_data_elements' was never awaited

I have always resisted using asyncio within my code, but using it might help with some performance issues that I'm having.
Here is my scenario:
An end user provides a list of news sites to scrape
Each element is passed to an Article Class
A valid article is passed to an Extraction Class
The Extraction Class passes data to a NewsExtraction Class
90% of the time this flow is flawless, but on occasion one of the 12 functions in the NewsExtraction Class fails to extract data that exists in the HTML provided. It seems that my code is "stepping on itself," which causes the data element not to be parsed. When I rerun the code, all the elements are parsed correctly.
The NewsExtraction Class has this function get_article_data_elements, which is called from the Extraction Class.
The function get_article_data_elements call these items:
published_date = self._extract_article_published_date()
modified_date = self._extract_article_modified_date()
title = self._extract_article_title()
description = self._extract_article_description()
keywords = self._extract_article_key_words()
tags = self._extract_article_tags()
authors = self._extract_article_author()
top_image = self._extract_top_image()
language = self._extract_article_language()
categories = self._extract_article_category()
text = self._extract_textual_content()
url = self._extract_article_url()
Each of these data elements is used to populate a Python dictionary, which is eventually passed back to the end user.
I have been trying to add asyncio code to the NewsExtraction Class, but I kept getting this error message:
RuntimeWarning: coroutine 'NewsExtraction.get_article_data_elements' was never awaited
I have spent the last 3 days trying to figure this issue out. I have looked at dozens of questions on Stack Overflow on this error RuntimeWarning: coroutine never awaited. I have also looked at numerous articles on using asyncio, but I cannot figure out how to use asyncio with my NewsExtraction Class, which is called from the Extraction Class.
Can someone provide me some pointers to solve my issue?
class NewsExtraction(object):
    """
    This class is used to extract common data elements from a news article
    on xyz
    """
    def __init__(self, url, soup):
        self._url = url
        self._raw_soup = soup

    truncated...

    async def _extract_article_published_date(self):
        """
        This function is designed to extract the publish date for the article being parsed.

        :return: date article was originally published
        :rtype: string
        """
        json_date_published = JSONExtraction(self._url, self._raw_soup).extract_article_published_date()
        if json_date_published is not None:
            if len(json_date_published) != 0:
                return json_date_published
            else:
                return None
        elif json_date_published is None:
            if self._raw_soup.find(name='div', attrs={'class': regex.compile("--publishDate")}):
                date_published = self._raw_soup.find(name='div', attrs={'class': regex.compile("--publishDate")})
                if len(date_published) != 0:
                    return date_published.text
                else:
                    logger.info('The HTML tag to extract the publish date for the following article was not found.')
                    logger.info(f'Article URL -- {self._url}')
                    return None

    truncated...

    async def get_article_data_elements(self):
        """
        This function is designed to extract all the common data elements from a
        news article on xyz.

        :return: dictionary of data elements related to the article
        :rtype: dict
        """
        article_data_elements = {}

        # I have tried this:
        published_date = self._extract_article_published_date().__await__()

        # and this
        published_date = self.task(self._extract_article_published_date())
        await published_date

    truncated...
I have also tried to use:
if __name__ == "__main__":
    asyncio.run(NewsExtraction.get_article_data_elements())
    # asyncio.run(self.get_article_data_elements())
I'm really banging my head on the wall with using asyncio in my news extraction code.
If this question is off base, I will be happy to delete it and keep reading about how to use asyncio correctly.
Can someone provide me some pointers to solve my issue?
Thanks in advance for any guidance on using asyncio
You are defining _extract_article_published_date and get_article_data_elements as coroutines, and these coroutines must be awaited in your code to get the result of their execution in an asynchronous way.
You can do this by creating an instance of NewsExtraction and calling these methods with the await keyword in front. The await hands execution over to other tasks in the event loop until the awaited task completes. Note that no threads or processes are involved in this task execution; control is handed over only while a task is not using CPU time (awaiting I/O operations or sleeping).
if __name__ == '__main__':
    extractor = NewsExtraction(...)
    # this creates the event loop and runs the coroutine
    asyncio.run(extractor.get_article_data_elements())
Inside your _extract_article_published_date you must also await the coroutines that perform requests over the network. If you are using a library for the scraping, make sure it uses async/await behind the scenes, otherwise you will not get any real performance benefit from asyncio.
async def get_article_data_elements(self):
    article_data_elements = {}
    # note here that the instance is self
    published_date = await self._extract_article_published_date()
    truncated...
You must dive into the asyncio documentation to get a better understanding of these features of Python 3.7+.
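If the individual extractors are genuinely asynchronous (for example, they await an async HTTP client), you can also run them concurrently with asyncio.gather instead of one after another. A minimal, self-contained sketch with stub extractors standing in for the real ones (the actual class takes url and soup and does real parsing):
import asyncio

class NewsExtraction:
    """Cut-down stand-in for the class in the question, with stub extractors."""

    async def _extract_article_title(self):
        await asyncio.sleep(0.1)   # placeholder for awaiting an async HTTP/parsing call
        return "title"

    async def _extract_article_description(self):
        await asyncio.sleep(0.1)
        return "description"

    async def get_article_data_elements(self):
        # run the independent extractors concurrently and collect the results
        title, description = await asyncio.gather(
            self._extract_article_title(),
            self._extract_article_description(),
        )
        return {"title": title, "description": description}

if __name__ == "__main__":
    extractor = NewsExtraction()
    print(asyncio.run(extractor.get_article_data_elements()))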

What changes occur when using tf_agents.environments.TFPyEnvironment to convert a Python RL environment into a TF environment?

I noticed something weird happening when converting a Python environment into a TF environment using tf_agents.environments.TFPyEnvironment, and I'd like to ask you what general changes occur.
To clarify the question, please find my code below. I want the environment to simulate (in an oversimplified manner) interactions with customers who want to buy fruits or vegetables. The agent should learn that when a customer asks for fruits, for example, action 0 should be executed.
class CustomEnv(py_environment.PyEnvironment):
    def __init__(self):
        self._action_spec = array_spec.BoundedArraySpec(
            shape=(), dtype=np.int32, minimum=0, maximum=1)
        self._observation_spec = array_spec.BoundedArraySpec(
            shape=(1, 1), dtype=np.int32, minimum=0, maximum=1)
        self._state = [0]
        self._counter = 0
        self._episode_ended = False
        self.dictionary = {0: ["Fruits"],
                           1: ["Vegetables"]}

    def action_spec(self):
        return self._action_spec

    def observation_spec(self):
        return self._observation_spec

    def _reset(self):
        self._state = [0]
        self._counter = 0
        self._episode_ended = False
        return ts.restart(np.array([self._state], dtype=np.int32))

    def preferences(self):
        return np.random.randint(2)

    def pickedBasket(self, yes):
        reward = -1.0
        if yes:
            reward = 0.0
        return reward

    def _step(self, action):
        if self._episode_ended:
            self._reset()
        if self._counter < 50:
            self._counter += 1
            basket = self.preferences()
            condition = basket in self.dictionary[action]
            reward = self.pickedBasket(condition)
            self._state[0] = basket
            if self._counter == 50:
                self._episode_ended = True
                return ts.termination(np.array([self._state],
                                               dtype=np.int32),
                                      reward,
                                      1)
            else:
                return ts.transition(np.array([self._state],
                                              dtype=np.int32),
                                     reward,
                                     discount=1.0)
When I execute the following code to check that everything is working fine:
py_env = CustomEnv()
tf_env = tf_py_environment.TFPyEnvironment(py_env)
time_step = tf_env.reset()
action = 0
next_time_step = tf_env.step(action)
I get an unhashable type: 'numpy.ndarray' error for the line condition = basket in self.dictionary[action], so I changed it to condition = basket in self.dictionary[int(action)] and it worked just fine. I'd also like to point out that it worked as a Python environment even without the int conversion. So I'd like to ask what tf_agents.environments.TFPyEnvironment changes. I don't see how it can influence the type of the action, since it isn't related to action_spec or anything (at least not directly in the code).
Put simply, tf_agents.environments.TFPyEnvironment is a translator working between your Python environment and the TF-Agents API. The TF-Agents API does not know how many actions it is allowed to choose from, what data to observe and learn from, or, especially, how the choice of actions will influence your custom environment.
Your custom environment is there to provide the rules of the environment, and it follows certain standards so that the TFPyEnvironment can translate it correctly and the TF-Agent can work with it. You need to define certain elements and methods in your custom environment, such as:
__init__()
self._action_spec
self._observation_spec
_reset()
_step()
I'm not sure if your doubt came from the fact that you gave an action = 0 for the agent and, unrelated to the action_spec, the agent actually worked. The action_spec had no relation with your _step() function, and that is correct. Your step function takes some action and applies it to the environment. How this action is shaped is the real point.
The problem is that you chose the value yourself and gave it to the tf_env.step() function. If you had actually delegated the choice of action to the agent, via tf_env.step(agent.policy.action) (or tf_env.step(agent.policy.action.action); TF-Agents sometimes confuses me), the agent would have to look at your action_spec definition to understand what the environment expects the action to look like.
If action_spec were not defined, the agent would not know whether to choose 0 for "Fruits" or 1 for "Vegetables" - which is what you wanted and defined - or to produce unexpected results such as 2 for "Meat", or [3, 2] for two bottles of water, where 3 could stand for "Bottle of Water". The TF-Agent needs these definitions so it knows the rules of your environment.
As for the actual changes and what they do with your custom environment code, I believe you would get a better idea by looking at the source code of the TF-Agents library.
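As a small illustration of why the int() cast became necessary (this only reproduces the type issue, not TF-Agents itself): once the environment is wrapped, the action that reaches _step is typically a numpy value rather than the plain Python int you passed to tf_env.step(), and a numpy array cannot be used as a dictionary key.
import numpy as np

dictionary = {0: ["Fruits"], 1: ["Vegetables"]}

# The plain Python environment receives the int you pass in:
action = 0
print("Fruits" in dictionary[action])                  # works

# The wrapped environment may hand _step a numpy value instead,
# and np.ndarray is unhashable, so the dict lookup fails:
action = np.array(0, dtype=np.int32)
try:
    print("Fruits" in dictionary[action])
except TypeError as err:
    print(err)                                         # unhashable type: 'numpy.ndarray'

# Converting defensively inside _step keeps both cases working:
print("Fruits" in dictionary[int(np.asarray(action).item())])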

Python 3 Try/Except best practice?

This is a question about code structure, rather than syntax. I'd like to know what is best practice and why.
Imagine you've got a python programme. You've got a main class like so:
import my_db_manager as dbm

def main_loop():
    people = dbm.read_db()
    # TODO: Write rest of programme doing something with the data from people...

if __name__ == '__main__':
    main_loop()
Then you've got several separate .py files for managing interactions with various tables in a database. One of these .py files, my_db_manager looks like this:
def read_db():
    people = []
    db = connection_manager.get_connection()
    cursor = db.cursor()
    try:
        # Database reading statement
        sql = 'SELECT DISTINCT(name) FROM people'
        cursor.execute(sql)
        results = cursor.fetchall()
        people = [x[0] for x in results]
    except Exception as e:
        print(f'Error: {e}')
    finally:
        return people
In the example above the function read_db() is called from main_loop() in the main class. read_db() contains try/except clauses to manage errors when interacting with the database. While this works fine, the try/except clauses could instead be placed in main_loop() when calling read_db(), or they could live in both places. What is best practice when using try/except: putting it in the db_manager, putting it in main_loop() where you're managing the programme's logic flow, or putting it in both places? Bear in mind I'm giving a specific example here, but I'm trying to extrapolate a general rule for applying try/except when writing Python.
The best way to write try-except -- in Python or anywhere else -- is as narrow as possible. It's a common problem to catch more exceptions than you meant to handle! (Credits: This Answer)
In your particular case, it of course belongs inside the function. It:
a) creates an abstraction over database errors for the main thread
b) rules out database errors as a suspect when something else raises an exception in the main thread
c) lets you deal with all database-related errors in one place, efficiently and cleanly. Otherwise you would have to build the people list outside the function in the main thread, which doubles the mess.
Finally, stick to this minimalism inside the function as well, while still covering every place where an exception could occur:
def read_db():
    # Database reading statement
    sql = 'SELECT DISTINCT(name) FROM people'
    try:
        db = connection_manager.get_connection()
        cursor = db.cursor()
        cursor.execute(sql)
        results = cursor.fetchall()
    except Exception as e:
        print(f'Error: {e}')
        return []
    else:
        return [x[0] for x in results]
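Taking the "as narrow as possible" advice one step further, catching the specific error class your database driver raises (instead of bare Exception) stops unrelated bugs from being swallowed silently. A minimal sketch using the standard-library sqlite3 module as a stand-in for the question's connection_manager:
import sqlite3

def read_db(db_path='people.db'):
    # db_path is a hypothetical SQLite file standing in for connection_manager
    sql = 'SELECT DISTINCT(name) FROM people'
    try:
        with sqlite3.connect(db_path) as db:
            cursor = db.cursor()
            cursor.execute(sql)
            results = cursor.fetchall()
    except sqlite3.Error as e:
        # only database errors land here; a typo such as `resuls` would
        # surface as a normal traceback instead of being swallowed
        print(f'Error: {e}')
        return []
    return [x[0] for x in results]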

Creating custom component in SpaCy

I am trying to create a SpaCy pipeline component that returns Spans of meaningful text (my corpus comprises pdf documents that have a lot of garbage that I am not interested in - tables, headers, etc.)
More specifically I am trying to create a function that:
takes a doc object as an argument
iterates over the doc tokens
When certain rules are met, yield a Span object
Note I would also be happy with returning a list([span_obj1, span_obj2])
What is the best way to do something like this? I am a bit confused on the difference between a pipeline component and an extension attribute.
So far I have tried:
nlp = English()
Doc.set_extension('chunks', method=iQ_chunker)
####
raw_text = get_test_doc()
doc = nlp(raw_text)
print(type(doc._.chunks))
>>> <class 'functools.partial'>
iQ_chunker is a method that does what I explained above, and it returns a list of Span objects.
This is not the result I expect, since the function I pass in as method returns a list.
I imagine you're getting a functools partial back because you are accessing chunks as an attribute, despite having passed it in as an argument for method. If you want spaCy to intervene and call the method for you when you access something as an attribute, it needs to be
Doc.set_extension('chunks', getter=iQ_chunker)
Please see the Doc documentation for more details.
However, if you are planning to compute this attribute for every single document, I think you should make it part of your pipeline instead. Here is some simple sample code that does it both ways.
import spacy
from spacy.tokens import Doc

def chunk_getter(doc):
    # the getter is called when we access _.extension_1,
    # so the computation is done at access time
    # also, because this is a getter,
    # we need to return the actual result of the computation
    first_half = doc[0:len(doc)//2]
    second_half = doc[len(doc)//2:len(doc)]
    return [first_half, second_half]

def write_chunks(doc):
    # this pipeline component is called as part of the spacy pipeline,
    # so the computation is done at parse time
    # because this is a pipeline component,
    # we need to set our attribute value on the doc (which must be registered)
    # and then return the doc itself
    first_half = doc[0:len(doc)//2]
    second_half = doc[len(doc)//2:len(doc)]
    doc._.extension_2 = [first_half, second_half]
    return doc

nlp = spacy.load("en_core_web_sm", disable=["tagger", "parser", "ner"])
Doc.set_extension("extension_1", getter=chunk_getter)
Doc.set_extension("extension_2", default=[])
nlp.add_pipe(write_chunks)

test_doc = nlp('I love spaCy')
print(test_doc._.extension_1)
print(test_doc._.extension_2)
This just prints [I, love spaCy] twice because it's two methods of doing the same thing, but I think making it part of your pipeline with nlp.add_pipe is the better way to do it if you expect to need this output on every document you parse.
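For completeness, here is a sketch of what an iQ_chunker-style component might look like as part of the pipeline, with a stand-in rule (collect maximal runs of alphabetic tokens as Spans) since the real "meaningful text" rules aren't shown; it uses the same spaCy 2.x nlp.add_pipe(function) style as the code above:
import spacy
from spacy.tokens import Doc, Span

Doc.set_extension("chunks", default=[], force=True)

def iq_chunker(doc):
    # stand-in rule: collect maximal runs of alphabetic tokens as Spans;
    # replace this with the real rules for "meaningful" text
    spans, start = [], None
    for i, token in enumerate(doc):
        if token.is_alpha and start is None:
            start = i
        elif not token.is_alpha and start is not None:
            spans.append(Span(doc, start, i))
            start = None
    if start is not None:
        spans.append(Span(doc, start, len(doc)))
    doc._.chunks = spans
    return doc

nlp = spacy.load("en_core_web_sm", disable=["tagger", "parser", "ner"])
nlp.add_pipe(iq_chunker)
print(nlp("I love spaCy , really !")._.chunks)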

Multiprocessing a function that tests a given dataset against a list of distributions. Returning function values from each iteration through list

I am working on processing a dataset that includes dense GPS data. My goal is to use parallel processing to test my dataset against all possible distributions and return the best one with the parameters generated for said distribution.
Currently, I have code that does this in serial thanks to this answer: https://stackoverflow.com/a/37616966. Of course, it is going to take entirely too long to process my full dataset. I have been playing around with multiprocessing, but can't seem to get it to work right. I want to test multiple distributions in parallel, keeping track of the sum of squared error (SSE) for each. Then I want to select the distribution with the lowest SSE and return its name along with the parameters generated for it.
def fit_dist(distribution, data=data, bins=200, ax=None):
    # Block of code that tests the distribution and generates params
    return (distribution.name, best_params, sse)

if __name__ == '__main__':
    p = Pool()
    result = p.map(fit_dist, DISTRIBUTIONS)
    p.close()
    p.join()
I need some help with how to actually make use of the return values from each of the iterations of the multiprocessing so I can compare them. I'm really new to Python, especially multiprocessing, so please be patient with me and explain as much as possible.
The problem I'm having is that it's giving me an "UnboundLocalError" on the variables that I'm trying to return from my fit_dist function. The DISTRIBUTIONS list contains 89 objects. Could this be related to the parallel processing, or is it something to do with the definition of fit_dist?
With the help of Tomerikoo's comment and some further struggling, I got the code working the way I wanted it to. The UnboundLocalError was due to me not putting the return statement in the correct block of code within my fit_dist function. To answer the question I did the following.
from multiprocessing import Pool

def fit_dist(distribution):
    # put this return under the right section of this method
    return [distribution.name, params, sse]

if __name__ == '__main__':
    p = Pool()
    result = p.map(fit_dist, DISTRIBUTIONS)
    p.close()
    p.join()

    '''filter out the None object results. Due to the nature of the distribution fitting,
    some distributions are so far off that they result in None objects'''
    res = list(filter(None, result))

    # iterate over the nested list, storing the lowest sum of squared errors in best_sse
    best_sse = float('inf')
    for dist in res:
        if best_sse > dist[2] > 0:
            best_sse = dist[2]
        else:
            continue

    '''iterate over the list pulling out the sublist of the distribution with the best sse.
    The sublists are made up of a string, a tuple with parameters,
    and a float value for sse, so that's why sse is always index 2.'''
    for dist in res:
        if dist[2] == best_sse:
            best_dist_list = dist
        else:
            continue
The rest of the code simply consists of me using that list to construct charts and plots with that best distribution overtop of a histogram of my raw data.
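The two selection loops can also be collapsed into a single min() call over the filtered results; a small self-contained sketch, assuming the same [name, params, sse] shape and illustrative example values:
# example results in the same [name, params, sse] shape as above
res = [["norm", (0.0, 1.0), 12.5], ["gamma", (2.0, 0.0, 1.0), 8.1], ["expon", (0.0, 1.0), -1.0]]

# keep only fits with a meaningful (positive) SSE, then take the minimum in one pass
valid = [dist for dist in res if dist[2] > 0]
best_name, best_params, best_sse = min(valid, key=lambda dist: dist[2])
print(best_name, best_params, best_sse)   # gamma (2.0, 0.0, 1.0) 8.1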