For example, I tried to use progress.bar and alive_progress:
from itertools import combinations_with_replacement
from alive_progress import alive_bar

with alive_bar(100) as bar:
    for i in range(100):
        for combo in combinations_with_replacement(['a','b','c','a','b','c','a','b'], 8):
            b = ''.join(combo)
        bar()
I have many annoying problems with it: it slows down the script, and it does not work as expected with print (print simply prints over the bar, or a new bar is created after every print).
If I run it with no bar/counting, it finishes in ~0.001 seconds.
As another example, I tried tqdm:
from itertools import *
from tqdm import trange

for i in trange(100):
    for j in combinations_with_replacement(['a','b','c','a','b','c','a','b','a','b','c','a','b','c','a','b','a','b','c','a','b','c','a','b'], 8):
        b = ''.join(j)
If I disable it, it finishes in less than a second; if I use it, the expected time is 3 minutes. How do I fix this without slowing down the script?
UPD: If I believe what I found on Google, progress bars and counters for Python are unusable unless you don't care about a 1000% slowdown of the script.
But somehow progress bars are used in all kinds of brute-force scripts, generators, etc.
I don't understand.
from itertools import *
from collections import Counter
from tqdm import *

#for i in tqdm(Iterable):
limit = product(['0','1','2','3','4','5','6','7','8','9','a','b','c','d','e','f'], repeat=8)

for i in trange(100):
    for j in limit:
        b = ''.join(j)
        if b in ['bbbbbbbb','11111111','aaaaaaaa','00001111']:
            print(b)
It stays static while the script runs, so it doesn't work (it sits at 0% the whole run and jumps instantly to 100% when it ends).
You can comment out the "if b in ... / print" logic; nothing changes.
The reason the script is slowing down is that the combinations_with_replacement() function is being called for each progress bar update.
I calculated the value outside the loop and the output was instant.
The rate of execution went from 0.5 it/s (2 s/it) to 60 it/s.
from itertools import *
from tqdm import trange

limit = combinations_with_replacement(['a','b','c','a','b','c','a','b','a','b','c','a','b','c','a','b','a','b','c','a','b','c','a','b'], 8)

for i in trange(100):
    for j in limit:
        b = ''.join(j)
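As a side note (my own sketch, not part of the answer above): if the goal is a bar that tracks the generator itself without much overhead, tqdm can wrap the iterable directly; the total can be computed with math.comb (Python 3.8+), and mininterval/miniters throttle how often the bar redraws.

from itertools import combinations_with_replacement
from math import comb
from tqdm import tqdm

items = ['a', 'b', 'c', 'a', 'b', 'c', 'a', 'b']
r = 8
# number of combinations with replacement of len(items) elements taken r at a time
total = comb(len(items) + r - 1, r)

# mininterval/miniters limit how often the bar is redrawn, keeping the overhead low
for combo in tqdm(combinations_with_replacement(items, r),
                  total=total, mininterval=0.5, miniters=10000):
    b = ''.join(combo)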
Related
I have a basic rich progress bar implemented like this:
import time
from rich.progress import *

with Progress(TextColumn("[progress.description]{task.description}"),
              BarColumn(), TaskProgressColumn(),
              TimeElapsedColumn()) as progress:
    total = 20
    for x in range(total):
        task1 = progress.add_task(f"[green]Processing Algorithm-{x}.",
                                  total=total)
        progress.update(task1, advance=1)
        time.sleep(0.1)
It works as expected.
But now I want to move the initialization of the progress bar into a separate file, so I created a file task_progress.py and put the code in there:
from rich.progress import *
import contextlib

@contextlib.contextmanager
def init_progress():
    yield Progress(BarColumn(), TaskProgressColumn(), TimeElapsedColumn())
And I updated the original progress bar as below:
import time
from task_progress import init_progress

with init_progress() as progress:
    total = 20
    for x in range(total):
        task1 = progress.add_task(f"[green]Processing Algorithm-{x}.",
                                  total=total)
        progress.update(task1, advance=1)
        time.sleep(0.1)
But, now when I run the code the progress bar does not appear on the terminal!
You don't need to wrap the creation of the Progress class in a context manager. The Progress class can already act as a context manager. A function that returns a Progress object will work fine:
from rich.progress import *

def init_progress():
    return Progress(BarColumn(), TaskProgressColumn(), TimeElapsedColumn())
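A usage sketch (my addition, assuming the same calling code as in the question; note that add_task is called once before the loop, so a single bar advances instead of a new task being created per iteration):

import time
from task_progress import init_progress

with init_progress() as progress:
    total = 20
    task1 = progress.add_task("[green]Processing", total=total)
    for x in range(total):
        progress.update(task1, advance=1)
        time.sleep(0.1)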
I'm trying to find a simple way to "speed up" simple functions in a big script, so I googled and found three ways to do that.
But it seems the time they need is always the same.
So what am I doing wrong when testing them?
file1:
from concurrent.futures import ThreadPoolExecutor as PoolExecutor
from threading import Thread
import time
import os
import math

# https://dev.to/rhymes/how-to-make-python-code-concurrent-with-3-lines-of-code-2fpe
def benchmark():
    start = time.time()
    for i in range(0, 40000000):
        x = math.sqrt(i)
        print(x)
    end = time.time()
    print('time', end - start)

with PoolExecutor(max_workers=3) as executor:
    for _ in executor.map((benchmark())):
        pass
file2:
# the basic way
from threading import Thread
import time
import os
import math

def calc():
    start = time.time()
    for i in range(0, 40000000):
        x = math.sqrt(i)
        print(x)
    end = time.time()
    print('time', end - start)

calc()
file3:
import asyncio
import uvloop
import time
import math

# https://github.com/magicstack/uvloop
async def main():
    start = time.time()
    for i in range(0, 40000000):
        x = math.sqrt(i)
        print(x)
    end = time.time()
    print('time', end - start)

uvloop.install()
asyncio.run(main())
Every file needs about 180-200 seconds, so I can't see a difference.
I googled for it and found 3 ways to [speed up a function], but it seems the time they need is always the same. So what am I doing wrong when testing them?
You seem to have found strategies to speed up some code by parallelizing it, but you failed to implement them correctly. First, the speedup is supposed to come from running multiple instances of the function in parallel, and the code snippets make no attempt to do that. Then, there are other problems.
In the first example, you pass the result of benchmark() to executor.map, which means all of benchmark() is immediately executed to completion, effectively disabling parallelization. (Also, executor.map is supposed to receive an iterable, not None, so this code must have printed a traceback not shown in the question.) The correct way would be something like:
# run the benchmark 5 times in parallel - if that takes less
# than 5x of a single benchmark, you've got a speedup
with ThreadPoolExecutor(max_workers=5) as executor:
    for _ in range(5):
        executor.submit(benchmark)
For this to actually produce a speedup, you should try to use ProcessPoolExecutor, which runs its tasks in separate processes and is therefore unaffected by the GIL.
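For illustration, here is a sketch of that idea based on the question's benchmark (my own code, not from the original answer; the per-iteration print is removed because console output dominates the runtime):

from concurrent.futures import ProcessPoolExecutor
import math
import time

def benchmark(n=40000000):
    start = time.time()
    for i in range(n):
        x = math.sqrt(i)
    return time.time() - start

if __name__ == '__main__':
    # five runs in separate worker processes; the GIL does not serialize them
    with ProcessPoolExecutor(max_workers=5) as executor:
        futures = [executor.submit(benchmark) for _ in range(5)]
        for f in futures:
            print('one run took', f.result(), 'seconds')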
The second code snippet never actually creates or runs a thread, it just executes the function in the main thread, so it's unclear how that's supposed to speed things up.
The last snippet doesn't await anything, so the async def works just like an ordinary function. Note that asyncio is an async framework based on switching between tasks that are blocked on IO, and as such it can never speed up CPU-bound calculations.
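For completeness (my own sketch, not part of the answer): if you do want CPU-bound work inside an asyncio program, the usual pattern is to hand it off to a process pool via run_in_executor:

import asyncio
import math
from concurrent.futures import ProcessPoolExecutor

def cpu_work(n):
    # plain CPU-bound function; runs in a worker process
    return sum(math.sqrt(i) for i in range(n))

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        # schedule four runs in parallel worker processes and wait for all of them
        results = await asyncio.gather(
            *(loop.run_in_executor(pool, cpu_work, 10000000) for _ in range(4)))
    print(results)

if __name__ == '__main__':
    asyncio.run(main())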
I'm writing a script where a user has to provide input for each element of a large list. I'm trying to use tqdm to provide a progress bar for the user, but I can't find a good way to get input within the tqdm loop without breaking the output.
I'm aware of tqdm.write() for writing to the terminal during a tqdm loop, but is there a way of getting input?
For an example of what I'm trying to do, consider the code below:
from tqdm import tqdm
import sys
from time import sleep

def do_stuff(x): sleep(0.5)

stuff_list = ['Alpha', 'Beta', 'Gamma', 'Omega']

for thing in tqdm(stuff_list):
    input_string = input(thing + ": ")
    do_stuff(input_string)
If I run this code, I get the following output:
0%| | 0/4 [00:00<?, ?it/s]Alpha: A
25%|█████████████████████ | 1/4 [00:02<00:07, 2.54s/it]Beta: B
50%|██████████████████████████████████████████ | 2/4 [00:03<00:04, 2.09s/it]Gamma: C
75%|███████████████████████████████████████████████████████████████ | 3/4 [00:04<00:01, 1.72s/it]Omega: D
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:05<00:00, 1.56s/it]
I've tried using tqdm.external_write_mode, but this simply didn't display the progress bar whenever an input was waiting, which is not the behaviour I'm looking for.
Is there an easy way of doing this, or am I going to have to swap libraries?
It isn't possible to display the progress bar while inside the input() function, because once a line is finished, it cannot be removed any more. It's a technical limitation of how command lines work: you can only rewrite the current line, up until you write a newline.
Therefore, I think the only solution is to remove the status bar, let the user input happen and then display it again.
from tqdm import tqdm
import sys
from time import sleep

def do_stuff(x): sleep(0.5)

stuff_list = ['Alpha', 'Beta', 'Gamma', 'Omega']

# To have more fine-grained control, you need to create a tqdm object
progress_iterator = tqdm(stuff_list)
for thing in progress_iterator:
    # Remove the progress bar
    progress_iterator.clear()
    # User input
    input_string = input(thing + ": ")
    # Write the progress bar again
    progress_iterator.refresh()
    # Do stuff
    do_stuff(input_string)
If you don't like the fact that the progress_iterator object exists after the loop, use the with syntax:
with tqdm(stuff_list) as progress_iterator:
    for thing in progress_iterator:
        ...
EDIT:
If you are willing to sacrifice platform independence, you can freely move the cursor and delete lines with this:
from tqdm import tqdm
import sys
from time import sleep

def do_stuff(x): sleep(0.5)

stuff_list = ['Alpha', 'Beta', 'Gamma', 'Omega']

# Special console commands
CURSOR_UP_ONE = '\x1b[1A'

# To have more fine-grained control, you need to create a tqdm object
progress_iterator = tqdm(stuff_list)
for thing in progress_iterator:
    # Move the status bar one line down
    progress_iterator.clear()
    print(file=sys.stderr)
    progress_iterator.refresh()
    # Move the cursor back up
    sys.stderr.write('\r')
    sys.stderr.write(CURSOR_UP_ONE)
    # User input
    input_string = input(thing + ": ")
    # Refresh the progress bar, to move the cursor back to where it should be.
    # This step can be omitted.
    progress_iterator.refresh()
    # Do stuff
    do_stuff(input_string)
I think this is the closest you will get to tqdm.write(). Note that the behaviour of input() can never be identical to tqdm.write(), because tqdm.write() first deletes the bar, then writes the message, and then writes the bar again. If you want to display the bar while being in input(), you have to do some platform-dependent stuff like this.
I want to run a piece of code at exact time intervals (of the order of 15 seconds)
Initially I used time.sleep(), but then the problem is the code takes a second or so to run, so it will get out of sync.
I wrote this, which I feel is untidy because I don't like using while loops. Is there a better way?
import datetime as dt
import numpy as np

iterations = 100
tstep = dt.timedelta(seconds=5)

for i in np.arange(iterations):
    startTime = dt.datetime.now()
    myfunction(doesloadsofcoolthings)
    while dt.datetime.now() < startTime + tstep:
        1 == 1
Ideally one would use threading to accomplish this. You can do something like
import threading

interval = 15

def myPeriodicFunction():
    print("This loops on a timer every %d seconds" % interval)

def startTimer():
    threading.Timer(interval, startTimer).start()
    myPeriodicFunction()
then you can just call
startTimer()
in order to start the looping timer.
Consider tracking the time it takes the code to run (a timer() function), then sleeping for 15 - exec_time seconds after completion.
import time
from datetime import datetime

start = datetime.now()
do_many_important_things()
end = datetime.now()
exec_time = end - start

time.sleep(15 - exec_time.total_seconds())
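One caveat (my addition, not part of the original answer): if the work takes longer than 15 seconds, time.sleep() would receive a negative value and raise ValueError, so it is safer to clamp it:

time.sleep(max(0, 15 - exec_time.total_seconds()))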
You can use a simple bash line (the -n interval is given in seconds):
watch -n 15 python yourcode.py
How can I run a function in Python, at a given time?
For example:
run_it_at(func, '2012-07-17 15:50:00')
and it will run the function func at 2012-07-17 15:50:00.
I tried the sched.scheduler, but it didn't start my function.
import sched
import time as time_module

scheduler = sched.scheduler(time_module.time, time_module.sleep)
t = time_module.strptime('2012-07-17 15:50:00', '%Y-%m-%d %H:%M:%S')
t = time_module.mktime(t)
scheduler_e = scheduler.enterabs(t, 1, self.update, ())
What can I do?
Reading the docs from http://docs.python.org/py3k/library/sched.html:
Going from that we need to work out a delay (in seconds)...
from datetime import datetime
now = datetime.now()
Then use datetime.strptime to parse '2012-07-17 15:50:00' (I'll leave the format string to you)
# I'm just creating a datetime in 3 hours... (you'd use output from above)
from datetime import timedelta
run_at = now + timedelta(hours=3)
delay = (run_at - now).total_seconds()
You can then use delay to pass into a threading.Timer instance, eg:
threading.Timer(delay, self.update).start()
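Putting the pieces together, a runnable sketch (my own; the target time string and the update callback are placeholders taken from the question):

import threading
from datetime import datetime

def update():
    print('running at', datetime.now())

run_at = datetime.strptime('2012-07-17 15:50:00', '%Y-%m-%d %H:%M:%S')
# if run_at is in the past, the delay is negative and the timer fires immediately
delay = (run_at - datetime.now()).total_seconds()
threading.Timer(delay, update).start()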
Take a look at the Advanced Python Scheduler, APScheduler: http://packages.python.org/APScheduler/index.html
They have an example for just this use case:
http://packages.python.org/APScheduler/dateschedule.html
from datetime import date
from apscheduler.scheduler import Scheduler

# Start the scheduler
sched = Scheduler()
sched.start()

# Define the function that is to be executed
def my_job(text):
    print(text)

# The job will be executed on November 6th, 2009
exec_date = date(2009, 11, 6)

# Store the job in a variable in case we want to cancel it
job = sched.add_date_job(my_job, exec_date, ['text'])
It might be worth installing this library: https://pypi.python.org/pypi/schedule. It basically helps do everything you just described. Here's an example:
import schedule
import time

def job():
    print("I'm working...")

schedule.every(10).minutes.do(job)
schedule.every().hour.do(job)
schedule.every().day.at("10:30").do(job)
schedule.every().monday.do(job)
schedule.every().wednesday.at("13:15").do(job)

while True:
    schedule.run_pending()
    time.sleep(1)
Here's an update to stephenbez' answer for version 3.5 of APScheduler using Python 2.7:
import os, time
from apscheduler.schedulers.background import BackgroundScheduler
from datetime import datetime, timedelta

def tick(text):
    print(text + '! The time is: %s' % datetime.now())

scheduler = BackgroundScheduler()
dd = datetime.now() + timedelta(seconds=3)
scheduler.add_job(tick, 'date', run_date=dd, args=['TICK'])
dd = datetime.now() + timedelta(seconds=6)
scheduler.add_job(tick, 'date', run_date=dd, kwargs={'text': 'TOCK'})
scheduler.start()

print('Press Ctrl+{0} to exit'.format('Break' if os.name == 'nt' else 'C'))

try:
    # This is here to simulate application activity (which keeps the main thread alive).
    while True:
        time.sleep(2)
except (KeyboardInterrupt, SystemExit):
    # Not strictly necessary if daemonic mode is enabled but should be done if possible
    scheduler.shutdown()
I've confirmed that the code in the opening post works; it was just lacking scheduler.run(). Tested, and it runs the scheduled event, so that is another valid answer.
>>> import sched
>>> import time as time_module
>>> def myfunc(): print("Working")
...
>>> scheduler = sched.scheduler(time_module.time, time_module.sleep)
>>> t = time_module.strptime('2020-01-11 13:36:00', '%Y-%m-%d %H:%M:%S')
>>> t = time_module.mktime(t)
>>> scheduler_e = scheduler.enterabs(t, 1, myfunc, ())
>>> scheduler.run()
Working
>>>
I ran into the same issue: I could not get absolute time events registered with sched.enterabs to be recognized by sched.run. sched.enter worked for me if I calculated a delay, but is awkward to use since I want jobs to run at specific times of day in particular time zones.
In my case, I found that the issue was that the default timefunc in the sched.scheduler initializer is not time.time (as in the example), but rather is time.monotonic. time.monotonic does not make any sense for "absolute" time schedules as, from the docs, "The reference point of the returned value is undefined, so that only the difference between the results of consecutive calls is valid."
The solution for me was to initialize the scheduler as
scheduler = sched.scheduler(time.time, time.sleep)
It is unclear whether your time_module.time is actually time.time or time.monotonic, but it works fine when I initialize it properly.
import datetime
import time

dateSTR = datetime.datetime.now().strftime("%H:%M:%S")
if dateSTR == ("20:32:10"):
    # do function
    print(dateSTR)
else:
    # do something useful till this time
    time.sleep(1)
    pass
I was just looking for a time-of-day / date event trigger: as long as the date string is tied to an updated time string, this works as a simple time-of-day function. You can extend the string out to a full date and time.
Whether the comparison is lexicographic or chronological, as long as the string represents a point in time, the comparison does too.
Someone kindly offered this link:
String Comparison Technique Used by Python
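For instance, a minimal polling loop built on the same idea (my sketch, not part of the original answer):

import datetime
import time

target = '20:32:10'  # hypothetical trigger time
while True:
    dateSTR = datetime.datetime.now().strftime('%H:%M:%S')
    if dateSTR == target:
        print(dateSTR)  # do the real work here
        break
    # poll more often than once per second so the matching second is not skipped
    time.sleep(0.5)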
I had a really hard time getting these answers to work the way I needed them to, but I got this working, and it's accurate to .01 seconds:
import datetime
from apscheduler.schedulers.background import BackgroundScheduler

sched = BackgroundScheduler()
sched.start()

dt = datetime.datetime

def myjob():
    print('job 1 done at: ' + str(dt.now())[:-3])

Future = dt.now() + datetime.timedelta(milliseconds=2000)
job = sched.add_job(myjob, 'date', run_date=Future)
I tested the accuracy of the timing with the code below.
At first I used 2-second and 5-second delays, but I wanted a more precise measurement, so I tried again with 2.55-second and 5.55-second delays.
dt = datetime.datetime
Future = dt.now() + datetime.timedelta(milliseconds=2550)
Future2 = dt.now() + datetime.timedelta(milliseconds=5550)

def myjob1():
    print('job 1 done at: ' + str(dt.now())[:-3])

def myjob2():
    print('job 2 done at: ' + str(dt.now())[:-3])

print(' current time: ' + str(dt.now())[:-3])
print('  do job 1 at: ' + str(Future)[:-3] + '''
  do job 2 at: ''' + str(Future2)[:-3])

job = sched.add_job(myjob1, 'date', run_date=Future)
job2 = sched.add_job(myjob2, 'date', run_date=Future2)
and got these results:
current time: 2020-12-10 19:50:44.632
do job 1 at: 2020-12-10 19:50:47.182
do job 2 at: 2020-12-10 19:50:50.182
job 1 done at: 2020-12-10 19:50:47.184
job 2 done at: 2020-12-10 19:50:50.183
That is accurate to .002 of a second in this one test, but I ran a lot of tests and the accuracy ranged from .002 to .011, never going under the 2.55 or 5.55 second delay.
# Every time you evaluate action_now, it checks your current time and tells you what you should be doing.
import datetime

current_time = datetime.datetime.now()
current_time.hour

schedule = {
    '8': 'prep',
    '9': 'Note review',
    '10': 'code',
    '11': '15 min tea break',
    '12': 'code',
    '13': 'Lunch Break',
    '14': 'Test',
    '15': 'Talk',
    '16': '30 min for code',
    '17': 'Free',
    '18': 'Help',
    '19': 'whatever',
    '20': 'whatever',
    '21': 'whatever',
    '22': 'whatever'
}

action_now = schedule[str(current_time.hour)]