How to check if a similar scheduled job exists in python-rq?

Below is the function called for scheduling a job on server start.
But somehow the scheduled job is getting called again and again, which is causing too many calls to the respective function.
Is this happening because of multiple function calls, or because of something else? Suggestions please.
def redis_schedule():
    with current_app.app_context():
        redis_url = current_app.config["REDIS_URL"]
        with Connection(redis.from_url(redis_url)):
            q = Queue("notification")
            from ..tasks.notification import send_notifs
            task = q.enqueue_in(timedelta(minutes=5), send_notifs)

Refer - https://python-rq.org/docs/job_registries/
I needed to read the scheduled_job_registry and retrieve its job IDs.
Currently the logic below works for me, as I only have a single scheduled job.
But with multiple jobs I would need to loop over those job IDs to check whether the right job already exists (a sketch of that follows the code below).
def redis_schedule():
    with current_app.app_context():
        redis_url = current_app.config["REDIS_URL"]
        with Connection(redis.from_url(redis_url)):
            q = Queue("notification")
            if len(q.scheduled_job_registry.get_job_ids()) == 0:
                from ..tasks.notification import send_notifs
                task = q.enqueue_in(timedelta(seconds=30), send_notifs)
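For the multiple-job case, a minimal sketch of that loop (is_job_scheduled is a hypothetical helper, not part of the original code; it assumes rq's Job.fetch and the job.func_name attribute, which records the task's dotted import path):

from rq.job import Job

def is_job_scheduled(queue, func_name):
    # Hypothetical helper: walk the scheduled job registry and compare
    # each job's recorded function name with the one we want to schedule.
    for job_id in queue.scheduled_job_registry.get_job_ids():
        job = Job.fetch(job_id, connection=queue.connection)
        if job.func_name == func_name:
            return True
    return False

The len(...) == 0 check above could then become something like is_job_scheduled(q, "app.tasks.notification.send_notifs"), using whatever dotted path rq records as func_name for the task, so that only the notification job is considered rather than every scheduled job on the queue.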

Related

How to inspect mapped tasks' inputs from reduce tasks in Prefect

I'm exploring Prefect's map-reduce capability as a powerful idiom for writing massively-parallel, robust importers of external data.
As an example - very similar to the X-Files tutorial - consider this snippet:
@task
def retrieve_episode_ids():
    api_connection = APIConnection(prefect.context.my_config)
    return api_connection.get_episode_ids()

@task(max_retries=2, retry_delay=datetime.timedelta(seconds=3))
def download_episode(episode_id):
    api_connection = APIConnection(prefect.context.my_config)
    return api_connection.get_episode(episode_id)

@task(trigger=all_finished)
def persist_episodes(episodes):
    db_connection = DBConnection(prefect.context.my_config)
    ...store all episodes by their ID with a success/failure flag...

with Flow("import_episodes") as flow:
    episode_ids = retrieve_episode_ids()
    episodes = download_episode.map(episode_ids)
    persist_episodes(episodes)
The peculiarity of my flow, compared with the simple X-Files tutorial, is that I would like to persist results for all the episodes that I have requested, even for the failed ones. Imagine that I'll be writing episodes to a database table as the episode ID decorated with an is_success flag. Moreover, I'd like to write all episodes with a single task instance, in order to be able to perform a bulk insert - as opposed to inserting each episode one by one - hence my persist_episodes task being a reduce task.
The trouble I'm having is in being able to gather the episode ID for the failed downloads from that reduce task, so that I can store the failed information in the table under the appropriate episode ID. I could of course rewrite the download_episode task with a try/catch and always return an episode ID even in the case of failure, but then I'd lose the automatic retry/failure functionality which is a good deal of the appeal of Prefect.
Is there a way for a reduce task to infer the argument(s) of a failed mapped task? Or, could I write this differently to achieve what I need, while still keeping the same level of clarity as in my example?
Mapping over a list preserves order. This is a property you can use to link inputs with the errors. Check the code below; more explanation follows after it.
from prefect import Flow, task
import prefect

@task
def retrieve_episode_ids():
    return [1, 2, 3, 4, 5]

@task
def download_episode(episode_id):
    if episode_id == 5:
        return ValueError()
    return episode_id

@task()
def persist_episodes(episode_ids, episodes):
    # Note the last element here will be the ValueError
    prefect.context.logger.info(episodes)
    # We change that ValueError into a "fail" message
    episodes = ["fail" if isinstance(x, BaseException) else x for x in episodes]
    # Note the last element here will be the "fail"
    prefect.context.logger.info(episodes)
    result = {}
    for i, episode_id in enumerate(episode_ids):
        result[episode_id] = episodes[i]
    # Check final results
    prefect.context.logger.info(result)
    return

with Flow("import_episodes") as flow:
    episode_ids = retrieve_episode_ids()
    episodes = download_episode.map(episode_ids)
    persist_episodes(episode_ids, episodes)

flow.run()
The handling largely happens in persist_episodes. Just pass the list of inputs in again and then we can match the inputs against the failed tasks. I added some handling around identifying errors and replacing them with whatever you want. Does that answer the question?
Always happy to chat more. You can reach out in the Prefect Slack or Discourse as well.

Complete specific task in ThreadPoolExecutor in python then loop it over?

I have been working on an algorithm in Python in which certain tasks need to be computed in parallel. I am using ThreadPoolExecutor to do it. The specific section of code is:
with concurrent.futures.ThreadPoolExecutor(max_workers=number_of_threads) as executor:
    for chunk in NearEndChunks:
        test = ED.EchoDetection()
        futures = {executor.submit(test.echoDetection, chunk, FarEndChunks, i, i + chunk_shift): i
                   for i in range(zero_ms, delay_line_max, chunk_shift)}
        if ED.EchoDetection.FOUND.value == 1:
            print(
                f'Echo detected and indexes are as : near end index = {test.NEARINDEX.value} and '
                f'Farend '
                f'index = {test.FARINDEX.value}')
            break
The problem I am facing:
The function test.echoDetection doesn't return anything, so the futures themselves carry no results in this code.
When I run it, it creates multiple threads, as set by the variable number_of_threads (15 in my case for now), but it doesn't finish all the submitted tasks before new threads are created, and this goes on and on until I get the following error:
Process finished with exit code -1073741819 (0xC0000005)
The solution I want:
How do I make it complete all the tasks before the next loop iteration runs (see the sketch below)? In Java there is ThreadPoolExecutor#getActiveCount(); what is the alternative in Python? Also, is there any other, better approach to perform these calculations?
Regards,
Khubaib
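One possible way to make each batch finish before the next loop iteration, sketched with the question's own names (echoDetection, NearEndChunks, FarEndChunks, zero_ms, delay_line_max and chunk_shift are placeholders from the post, not verified code): concurrent.futures.wait blocks until every submitted future has completed.

import concurrent.futures

with concurrent.futures.ThreadPoolExecutor(max_workers=number_of_threads) as executor:
    for chunk in NearEndChunks:
        test = ED.EchoDetection()
        futures = {executor.submit(test.echoDetection, chunk, FarEndChunks, i, i + chunk_shift): i
                   for i in range(zero_ms, delay_line_max, chunk_shift)}
        # Block until every task submitted for this chunk has finished
        concurrent.futures.wait(futures)
        if ED.EchoDetection.FOUND.value == 1:
            print(f'Echo detected: near end index = {test.NEARINDEX.value}, '
                  f'far end index = {test.FARINDEX.value}')
            break

Python's ThreadPoolExecutor does not expose a public equivalent of Java's getActiveCount(), but waiting on the futures (or iterating concurrent.futures.as_completed) usually removes the need for one.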

Multiprocessing and lists in python

I have a list of jobs, but due to certain conditions not all of the jobs should run in parallel at the same time, because sometimes it is important that a finishes before I start b, or vice versa (actually it's not important which one runs first, just that they don't both run at the same time). So I thought I would keep a list of the currently running threads, and whenever a new one starts it checks this list of currently running threads to see whether it can proceed or not. I wrote some sample code for that:
from time import sleep
from multiprocessing import Pool

def square_and_test(x):
    print(running_list)
    if not x in running_list:
        running_list = running_list.append(x)
        sleep(1)
        result_list = result_list.append(x**2)
        running_list = running_list.remove(x)
    else:
        print(f'{x} is currently worked on')

task_list = [1, 2, 3, 4, 1, 1, 4, 4, 2, 2]
running_list = []
result_list = []
pool = Pool(2)
pool.map(square_and_test, task_list)
print(result_list)
This code fails with UnboundLocalError: local variable 'running_list' referenced before assignment, so I guess my threads don't have access to global variables. Is there a way around this? If not, is there another way to solve this problem?
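One possible workaround, as a sketch rather than a definitive answer: Pool workers are separate processes, so plain module-level lists are not shared between them. multiprocessing.Manager can provide shared list proxies plus a lock, passed to each worker explicitly (the names below follow the question's example):

from time import sleep
from multiprocessing import Pool, Manager

def square_and_test(x, running_list, result_list, lock):
    # The Manager proxies are shared: appends/removes are visible to all workers.
    with lock:
        if x in running_list:
            print(f'{x} is currently worked on')
            return
        running_list.append(x)
    sleep(1)
    result_list.append(x**2)
    with lock:
        running_list.remove(x)

if __name__ == '__main__':
    task_list = [1, 2, 3, 4, 1, 1, 4, 4, 2, 2]
    with Manager() as manager:
        running_list = manager.list()
        result_list = manager.list()
        lock = manager.Lock()
        with Pool(2) as pool:
            pool.starmap(square_and_test,
                         [(x, running_list, result_list, lock) for x in task_list])
        print(list(result_list))

Note that, like the original else branch, this skips a duplicate value while it is being worked on rather than waiting for it, so the result list can end up shorter than the task list.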

Multiple MongoDB database triggers in same Python module

I am trying to implement multiple database triggers for my MongoDB (connected through Pymongo to my Python code).
I am able to successfully implement a database trigger but not able to extend this to multiple ones.
The code for the single database trigger can be found below:
try:
    resume_token = None
    pipeline = [{"$match": {"operationType": "insert"}}]
    with db.collection.watch(pipeline) as stream:
        for insert_change in stream:
            print("postprocessing logic goes here")
except pymongo.errors.PyMongoError:
    logging.error("Oops")
The problem is that once a single trigger is implemented, the post-processing code blocks while waiting for incoming changes on that collection, so I am not able to include other collection watches in the same module.
Any help appreciated.
Create separate functions for each watch and launch them in separate threads with something like:
import queue
import threading
main_q = queue.Queue()
thread1 = threading.Thread(target=function1, args=(main_q, None))
thread1.daemon = True
thread1.start()
thread2 = threading.Thread(target=function2, args=(main_q,))
thread2.daemon = True
thread2.start()
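As a sketch of what each such function could look like (watch_collection and its parameters are hypothetical, not from the answer; it simply wraps the question's change-stream loop and pushes events onto the shared queue):

import logging
import pymongo

def watch_collection(collection, out_q):
    # One watcher per collection: it owns one change stream and forwards inserts to the queue.
    pipeline = [{"$match": {"operationType": "insert"}}]
    try:
        with collection.watch(pipeline) as stream:
            for insert_change in stream:
                out_q.put(insert_change)
    except pymongo.errors.PyMongoError:
        logging.error("Oops")

function1 and function2 above would then be two such watchers bound to different collections, and the main thread (or another worker) can consume main_q to run the post-processing logic.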

Get current task of a process

I am trying to flow through tasks programmatically using custom views which will be controlled by an API. I have figured out how to flow through a process if I pass a task object to a flow_func:
task = Task.objects.get(id=task_id)
response = hello_world_approve(task, json=json, user=user)

@flow_func
def hello_world_approve(activation, **kwargs):
    activation.process.approved = True
    activation.process.save()
    activation.prepare()
    activation.done()
    return activation
However, I would like to be able to get the current task from the process object instead, like so:
process = HelloWorldFlow.process_class.objects.get(id=task_id)
task = process.get_current_task()
Is this the way I should be going about this, and is it possible, or is there another approach I am missing?
It's available as activation.task.
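So, inside a flow_func-decorated handler, the task the activation is bound to can be read off the activation directly; a minimal sketch based on the question's own function:

@flow_func
def hello_world_approve(activation, **kwargs):
    current_task = activation.task  # the Task this activation was built from
    activation.process.approved = True
    activation.process.save()
    activation.prepare()
    activation.done()
    return activation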
