Using the module schedule, run a job immediately and then again every hour - python-3.x

I'm trying to schedule a task with the module "schedule" to run every hour. My problem is that I need the task to run once immediately and then again every hour.
The code below works fine, but it waits an hour before the initial run:
import schedule
import time

def job():
    print("This happens every hour")

schedule.every().hour.do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
I would like to avoid doing this:
import schedule
import time

def job():
    print("This happens immediately then every hour")

schedule.every().hour.do(job)

i = 0
while i == 0:
    job()
    i = i + 1

while i == 1:
    schedule.run_pending()
Ideally it would be nice to have an option like this:
schedule.run_pending_now()

Probably the easiest solution is to just run it immediately as well as scheduling it, such as with:
import schedule
import time

def job():
    print("This happens every hour")

schedule.every().hour.do(job)
job()  # Runs the job once, right now.

while True:
    schedule.run_pending()  # Runs the job every hour, starting one hour from now.
    time.sleep(1)

To run all jobs regardless of whether they are scheduled to run or not, use schedule.run_all(). Jobs are re-scheduled after finishing, just as they would be if they were executed using run_pending().
import schedule

def job_1():
    print('Foo')

def job_2():
    print('Bar')

schedule.every().monday.at("12:40").do(job_1)
schedule.every().tuesday.at("16:40").do(job_2)

schedule.run_all()

# Add the delay_seconds argument to run the jobs with a number
# of seconds delay in between.
schedule.run_all(delay_seconds=10)
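Applied to the hourly example from the question, a minimal sketch (reusing the job() function from above) could combine run_all() with the usual run_pending() loop:
import time
import schedule

def job():
    print("This happens immediately, then every hour")

schedule.every().hour.do(job)
schedule.run_all()          # fires the job once right away and re-schedules it

while True:
    schedule.run_pending()  # subsequent runs happen every hour
    time.sleep(1)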

If you have many tasks that take some time to execute and you want to run them independently at startup, you can use threading:
import threading
import time

import schedule

def job():
    print("This happens every hour")

def run_threaded(task):
    job_thread = threading.Thread(target=task)
    job_thread.start()

run_threaded(job)  # Runs the job once at startup.
schedule.every().hour.do(run_threaded, job)

while True:
    schedule.run_pending()  # Runs every hour, starting one hour from now.
    time.sleep(1)

Actually, I don't think calling the function directly is so wise, since it will block the thread before the scheduler even gets to run, right?
I think there is nothing wrong with scheduling the job to be executed once, and then every 30 seconds, for example like this:
scheduler.add_job(MPOStarter.run, args=ppi_args) # run once, then every 30 sec
scheduler.add_job(MPOStarter.run, "interval", seconds=30, args=ppi_args)
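With APScheduler 3.x the same effect should also be achievable with a single interval job by passing next_run_time, so the first execution happens immediately. A minimal sketch with a placeholder function (pull_data here is illustrative, not the MPOStarter.run job from the comment above):
from datetime import datetime

from apscheduler.schedulers.background import BackgroundScheduler

def pull_data():
    print("runs immediately, then every 30 seconds")

scheduler = BackgroundScheduler()
# next_run_time=datetime.now() makes the interval job fire right away.
scheduler.add_job(pull_data, "interval", seconds=30, next_run_time=datetime.now())
scheduler.start()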

Related

python windows how to forcefully exit or join a thread

What I'm trying to do is run a function on a thread, start a timer, and then forcefully join or exit the thread once a certain duration has been reached. I have been informed that this can be done on Linux, but I don't know the Windows equivalent, so I did this:
# idea is to run the function for 7 secs but stop it at 5 secs
import time
import threading

def func():
    for i in range(0, 7):  # print "not yet done" 7 times
        print("not yet done")
        time.sleep(1)

t1 = threading.Thread(target=func())
timer = time.perf_counter
while t1.join() == False:  # loop while the function is not done
    if timer > 5:  # stop the function at 5 seconds
        print(timer)
print("done")
t1.join()
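A common cooperative approach on Windows (or any platform) is to have the worker check a stop flag and to bound the wait with join(timeout), rather than forcibly killing the thread. A minimal sketch along those lines, reusing the loop from the question:
import threading
import time

stop_event = threading.Event()

def func():
    for _ in range(7):
        if stop_event.is_set():  # cooperative exit point
            return
        print("not yet done")
        time.sleep(1)

t1 = threading.Thread(target=func)  # pass the function itself, don't call it
t1.start()

t1.join(timeout=5)   # wait at most 5 seconds
if t1.is_alive():    # still running after 5 s: ask it to stop
    stop_event.set()
    t1.join()
print("done")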

Calling function after every 24hrs without affecting following code

from threading import Timer

def Add():
    # ...Some process...
    Timer(86400, Add).start()  # 86400 secs in 24 hours.

if __name__ == '__main__':
    Add()
    # Consuming Kafka messages; will run continuously once started.
Once the code runs, the Add function is called and the program starts consuming Kafka messages, waiting for them continuously. I want to call the Add function every 24 hours without disturbing the Kafka process. For this I have tried threading.Timer, but I am not sure whether it will work.
One more question: is a new thread initialized every 24 hours, or does one thread just call the function every 24 hours?
Please advise whether I am doing this correctly and whether it will work. Thanks in advance!
You can simply use an extra function that calls add() in a loop and handles the 24-hour delay for you, running in its own thread, something like this:
import time
from threading import Thread

def task1():
    while True:
        add()
        time.sleep(86400)  # 24 hours

if __name__ == "__main__":
    thread = Thread(target=task1)
    thread.start()
    # add kafka code
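As for the threading.Timer approach from the question: each Timer(86400, Add) call creates a brand-new one-shot thread, so a fresh thread is started every 24 hours rather than one thread being reused. A minimal sketch of that variant, with the actual work elided as in the question:
from threading import Timer

def Add():
    # ...Some process...
    t = Timer(86400, Add)  # schedules the next run on a *new* timer thread
    t.daemon = True        # don't keep the process alive just for the timer
    t.start()

if __name__ == '__main__':
    Add()
    # Kafka consumption continues in the main thread.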

Get number of active instances for BackgroundScheduler jobs

I have a simple BackgroundScheduler and a simple task. The BackgroundScheduler is configured to run only a single instance for that task:
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()
scheduler.add_job(run_task, 'interval', seconds=10)
scheduler.start()
When a task starts, it takes much more than 10 seconds to complete, and I get the warning:
Execution of job "run_tasks (trigger: interval[0:00:10], next run at: 2020-06-17 18:25:32 BST)" skipped: maximum number of running instances reached (1)
This works as expected.
My problem is that I can't find a way to check if an instance of that task is currently running.
In the docs there are many ways to get all or individual scheduled tasks, but I can't find a way to check whether a task is currently running.
I would ideally want something like:
def job_in_progress():
    job = scheduler.get_job(id=job_id)
    instances = job.get_instances()
    return instances > 0
Any ideas?
This is not great because you have to access a private attribute, but it is the only thing I could find:
def job_in_progress():
    job = scheduler.get_job(id=job_id)
    instances = scheduler._instances[job_id]
    return instances > 0
If someone else comes up with a better idea, use that instead of this.
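One alternative that avoids the private attribute is to track running jobs yourself with scheduler event listeners; a rough sketch (the helper names below are illustrative, not part of the APScheduler API):
from apscheduler.events import EVENT_JOB_SUBMITTED, EVENT_JOB_EXECUTED, EVENT_JOB_ERROR
from apscheduler.schedulers.background import BackgroundScheduler

running = set()  # ids of jobs currently executing

def _on_submitted(event):
    running.add(event.job_id)

def _on_done(event):
    running.discard(event.job_id)

scheduler = BackgroundScheduler()
scheduler.add_listener(_on_submitted, EVENT_JOB_SUBMITTED)
scheduler.add_listener(_on_done, EVENT_JOB_EXECUTED | EVENT_JOB_ERROR)

def job_in_progress(job_id):
    return job_id in running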

Schedule a task apscheduler for every hour and wait if task is not complete

I'm trying to schedule a task to run every hour. The task is a function that takes longer and longer to execute each time it runs; the minimum gap it needs is an hour from its start.
Once the function starts, the one-hour clock for its next run should start as well. That creates a problem: since the function takes longer and longer to complete, it could eventually take over an hour, which means the scheduler would call it again before it finishes. The goal is to start the function every hour measured from when it last began, but to wait for it to complete once its runtime exceeds an hour.
This is what I've been trying:
from threading import Thread
from apscheduler.schedulers.blocking import BlockingScheduler
import threading
import time

def job():
    # This is a function that takes longer
    # and longer
    # and longer
    # to complete
    pass

thread = Thread(target=job)
scheduler = BlockingScheduler()
scheduler.add_job(job, 'interval', hours=1)

if thread.is_alive():
    print("waiting")
else:
    scheduler.start()
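One way to get this behaviour without fighting the scheduler is to time the job yourself in a plain loop: sleep for whatever is left of the hour after the job finishes, and skip the sleep entirely once the job takes longer than an hour. A minimal sketch, assuming a job() function like the one in the question:
import time

def job():
    # placeholder for the increasingly slow task
    pass

while True:
    started = time.monotonic()
    job()
    elapsed = time.monotonic() - started
    # Wait out the rest of the hour; if the job already took longer
    # than an hour, start the next run immediately.
    time.sleep(max(0, 3600 - elapsed))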

Cron vs APscheduler vs something else for 2 second interval

I need to pull data from a serial connection at a fixed interval of 2 seconds with a piece of Python code. The software runs on a Raspberry Pi 24/7.
As far as I can see, I have three options:
Start my Python script as a service (with systemd) and use APScheduler
Use a cron-job (possible?)
Use another solution
What is the recommended way of doing it?
Here's how you can do this job in APScheduler. (Note that a standard cron job fires at most once per minute, so option 2 can't hit a 2-second interval without workarounds.)
from apscheduler.schedulers.background import BackgroundScheduler

def pull_data():
    print("code comes here")

scheduler = BackgroundScheduler()
scheduler.add_job(pull_data, "interval", seconds=2)
scheduler.start()
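BackgroundScheduler runs its jobs in a background thread, so a standalone script (for example one started by systemd) also needs to keep the main thread alive. Continuing from the scheduler created above, the usual keep-alive loop looks roughly like this:
import time

try:
    while True:
        time.sleep(1)
except (KeyboardInterrupt, SystemExit):
    scheduler.shutdown()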
APScheduler also supports async code:
import asyncio

from apscheduler.schedulers.asyncio import AsyncIOScheduler

async def pull_data():
    print("code comes here")  # print() is not awaitable, so no await here

scheduler = AsyncIOScheduler()
scheduler.add_job(pull_data, "interval", seconds=2)
scheduler.start()

# The scheduler needs a running asyncio event loop to fire its jobs.
asyncio.get_event_loop().run_forever()
You can also do this job with the lightweight Python library schedule:
import time
import schedule

def pull_data():
    print("code comes here")

schedule.every(2).seconds.do(pull_data)

while True:
    schedule.run_pending()
    time.sleep(1)
