Tensorflow while loop runs only once - python-3.x

The while loop below should print "\n\nInside while..." 10 times, but when I run the graph, "\n\nInside while..." is printed exactly once. Why is that?
i = tf.constant(0)
def condition(i):
    return i < 10
def body(i):
    print("\n\nInside while...", str(i))
    return i + 1
r = tf.while_loop(condition, body, [i])

Your issue comes from conflating TensorFlow graph building with graph execution.
The functions you pass to tf.while_loop are executed exactly once, to generate the TensorFlow graph responsible for executing the loop itself. Your Python print therefore runs at graph-construction time, not once per iteration. If you instead put a tf.Print op in there (for example, return tf.Print(i + 1, [i + 1])), you'd see it print 10 times when the loop is actually executed by the TensorFlow system.
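For illustration, a minimal sketch of that fix, assuming TensorFlow 1.x (where tf.Print and tf.Session are available):

import tensorflow as tf

i = tf.constant(0)

def condition(i):
    return i < 10

def body(i):
    # tf.Print is a graph op: it forwards i + 1 unchanged and prints [i + 1]
    # every time the op runs, i.e. once per loop iteration.
    return tf.Print(i + 1, [i + 1], message="Inside while... ")

r = tf.while_loop(condition, body, [i])

with tf.Session() as sess:
    print(sess.run(r))  # the message appears 10 times, then the result: 10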

I know practically nothing about TensorFlow and cannot help with your immediate problem, but you can accomplish something similar in plain Python by writing your code differently. Following the logic of your program, here is a different implementation of while_loop: it takes your condition and body and runs an ordinary while loop parameterized by those functions. The interpreter session below shows how this works.
>>> def while_loop(condition, body, local_data):
...     while condition(*local_data):
...         local_data = body(*local_data)
...     return local_data
...
>>> i = 0
>>> def condition(i):
...     return i < 10
...
>>> def body(i):
...     print('Inside while', i)
...     return i + 1,
...
>>> local_data = while_loop(condition, body, (i,))
Inside while 0
Inside while 1
Inside while 2
Inside while 3
Inside while 4
Inside while 5
Inside while 6
Inside while 7
Inside while 8
Inside while 9
>>> local_data
(10,)
>>>
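Note that body returns a one-element tuple, i + 1, (the trailing comma matters): that is why the final local_data is (10,), and why it can be unpacked with *local_data on each iteration.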

Related

Running 2 infinite loops simultaneously switching control - Python3

I am building an image processing application that reads the current screen and does some processing. The code skeleton is as given below:
import my_img_proc as mip
import my_app_proc as map
import time

class myprocess:
    def perform_image_processing(self, image_name):
        while True:
            mip.perform_some_image_processing_from_screen(image_name)
            # Switch to perform_another_processing, as the next iteration is not
            # needed for 3 minutes. If this processing takes more than 3 minutes,
            # complete the current iteration and then switch to the other function.
            time.sleep(180)

    def perform_another_processing(self, inp):
        while True:
            map.perform_some_other_image_processing_from_screen(inp)
            # Switch to perform_image_processing, as the next iteration is not
            # needed for 3 minutes. If this processing takes more than 3 minutes,
            # pause it and switch to the other function.
            time.sleep(180)

mp = myprocess()
mp.perform_image_processing(my_image)  # Identify items from the screen capture
mp.perform_another_processing(2)       # Calculate values based on screen capture
As of now, I am able to run only one of the functions at a time.
My questions:
How can I run both of them simultaneously (as two separate threads/processes?), given that both functions may need to access/switch the same screen at the same time?
One option I can think of is both functions setting a common variable (to 1/0) and passing control of execution to each other before going to sleep. Is that possible? How do I implement it?
Any help with this will let me add multiple other similar functionalities to my application.
Thanks
Bishnu
Note:
For all who could not visualize what I wanted to achieve, here is my code that works fine. This bot code checks the screen in order to shield (to protect from attacks) or gather (resources in the kingdom):
def shield_and_gather_loop(self):
    current_shield_sts = self.renew_shield()
    num_of_gatherers = self.num_of_troops_gathering()
    gather_sleep = False if num_of_gatherers < self.Max_Army_Size else True
    shield_sleep = True if current_shield_sts == 'SHIELDED' else False
    shield_timeout = time.time() + 60 * 3 if shield_sleep == True else None
    while True:
        while shield_sleep == False:  # and current_shield_sts != 'SHIELDED':
            self.reach_home_screen_from_anywhere()
            current_shield_sts = self.renew_shield()
            print("Current Shield Status # ", datetime.datetime.now(), " :", current_shield_sts)
            shield_sleep = True
            # self.go_to_sleep("Shield Check ", hours=0, minutes=3, seconds=0)
            shield_timeout = time.time() + 60 * 3  # 3 minutes from now
        while num_of_gatherers < self.Max_Army_Size and gather_sleep == False:
            if time.time() < shield_timeout:
                self.device.remove_processed_files()
                self.reach_kd_screen_from_anywhere()
                self.found_rss_tile = 0
                self.find_rss_tile_for_gather(random.choice(self.gather_items))
                num_of_gatherers = self.num_of_troops_gathering()
                print("Currently gathering: ", str(num_of_gatherers))
        if gather_sleep == True and shield_sleep == True:
            print("Both gather and shield are working fine.Going to sleep for 2 minutes...")
            time.sleep(120)
            # Wake up from sleep and recheck shield and gather status
            current_shield_sts = self.renew_shield()
            num_of_gatherers = self.num_of_troops_gathering()
            gather_sleep = False if num_of_gatherers < self.Max_Army_Size else True
            shield_sleep = True if current_shield_sts == 'SHIELDED' else False
Seems like the obvious thing to do is this, as it satisfies your criterion of having only one of the two actions running at a time:
import my_img_proc as mip
import my_app_proc as map
import time

class myprocess:
    def perform_image_processing(self, image_name, inp):
        while True:
            mip.perform_some_image_processing_from_screen(image_name)
            map.perform_some_other_image_processing_from_screen(inp)
            time.sleep(180)

mp = myprocess()
mp.perform_image_processing(my_image, 2)
If you need something more elaborate (such as perform_some_other_image_processing_from_screen() being able to interrupt a call to perform_some_image_processing_from_screen() before it has returned), that might require threads. Threads would also require your perform_*() functions to serialize access to the data they manipulate, which would make them slower and more complicated; and based on the comments at the bottom of your example, it doesn't sound like the second function can do any useful work until the first function has completed its identification task anyway.
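If you do go the threaded route, here is a minimal sketch of what that could look like, assuming a single threading.Lock is enough to serialize screen access (my_image, mip, and map are the names from the question; the lock granularity is an assumption):

import threading
import time

import my_img_proc as mip
import my_app_proc as map

screen_lock = threading.Lock()  # only one function may touch the screen at a time

def image_loop(image_name):
    while True:
        with screen_lock:  # blocks while the other loop holds the screen
            mip.perform_some_image_processing_from_screen(image_name)
        time.sleep(180)

def other_loop(inp):
    while True:
        with screen_lock:
            map.perform_some_other_image_processing_from_screen(inp)
        time.sleep(180)

t1 = threading.Thread(target=image_loop, args=(my_image,), daemon=True)
t2 = threading.Thread(target=other_loop, args=(2,), daemon=True)
t1.start()
t2.start()
t1.join()  # keep the main thread alive; both workers run forever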
As a general principle, it's probably worthwhile to move the while True: [...] sleep(180) loop out of the function call and into the top-level code, so that the caller can control its behavior directly. Calling a function that never returns doesn't leave a lot of room for customization to the caller.
import my_img_proc as mip
import my_app_proc as map
import time

class myprocess:
    def perform_image_processing(self, image_name, inp):
        mip.perform_some_image_processing_from_screen(image_name)
        map.perform_some_other_image_processing_from_screen(inp)

mp = myprocess()
while True:
    mp.perform_image_processing(my_image, 2)
    time.sleep(180)

Why is this Jupyter cell executing ahead of asynchronous loop cell?

I'm trying to send 6 API requests in one session and record how long that took, repeating this n times. I store each elapsed time in a list. After that I print out some list info and visualize the list data.
What works: downloading as .py, removing ipython references, and running the code as a command line script.
What also works: manually running the cells, executing the erroring cell only after the loop cell completes.
What doesn't work: restarting and running all cells within the Jupyter notebook. The last cell doesn't seem to wait for the prior one: it appears to execute first and complains about an empty list. Error in image below.
Cell 1:
# submit 6 models at the same time
# using support / first specified DPE above
auth = aiohttp.BasicAuth(login=USERNAME, password=API_TOKEN)

async def make_posts():
    for i in range(0, 6):
        yield df_input['deployment_id'][i]

async def synch6():
    # url = "%s/predApi/v1.0/deployments/%s/predictions" % (PREDICTIONSENDPOINT, DEPLOYMENT_ID)
    async with aiohttp.ClientSession(auth=auth) as session:
        post_tasks = []
        # prepare the coroutines that post
        async for x in make_posts():
            post_tasks.append(do_post(session, x))
        # now execute them all at once
        await asyncio.gather(*post_tasks)

async def do_post(session, x):
    url = "%s/predApi/v1.0/deployments/%s/predictions" % (PREDICTIONSENDPOINT, x)
    async with session.post(url, data=df_scoreme.to_csv(), headers=PREDICTIONSHEADERS_csv) as response:
        data = await response.text()
        # print(data)
Cell 2:
chonk_start = (datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S.000Z"))
perf1 = []
n = 100
for i in range(0, n):
    start_ts = round(time.time() * 1000)
    loop = asyncio.get_event_loop()
    loop.run_until_complete(synch6())
    end_ts = round(time.time() * 1000)
    perf1.append(end_ts - start_ts)
Cell 3:
perf_string(perf1, 'CHONKS')
The explicit error (see image) appears to simply be the result of trying to work on an empty list. The list appears to be empty because that cell is executing before the loop test cell actually populates the list - although I don't know why. This appears to only be a problem inside the notebook.
EDIT: In further testing... this appears to work fine on my local (python3, mac) jupyter notebook. Where it is failing is on a AWS Sagemaker conda python3 notebook.
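One possible workaround, offered only as a sketch: if the hosted kernel already runs its own event loop, calling loop.run_until_complete() from a cell can interact badly with cell scheduling. Recent IPython/Jupyter kernels support top-level await, so Cell 2 could drive the coroutine directly (synch6 and perf1 are the names from the question):

perf1 = []
n = 100
for i in range(0, n):
    start_ts = round(time.time() * 1000)
    await synch6()  # top-level await runs on the kernel's own event loop
    end_ts = round(time.time() * 1000)
    perf1.append(end_ts - start_ts)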

Generator function that always processes, but does not always yield

I have a generator function that does 2 things:
Reads a file and writes another file based on the output.
Yields key values for the record it's just written.
The issue is that I don't always want to do point (2), and when I call it in a way where I only want the lines written to a new file, the function body simply doesn't run (a print statement on its first line produces no output, and a try/except around the call catches nothing).
I've set up a simplified test case just to verify that this is "normal", and it reproduces the same results.
test.py
from test2 import run_generator

if __name__ == '__main__':
    print('*** Starting as generator ***')
    for num in run_generator(max=10, generate=True):
        print(f'Calling : {num}')
    print('*** Starting without yielding ***')
    run_generator(max=10, generate=False)
    print('*** Finished ***')
print('*** Finished ***')
test2.py
def run_generator(max, generate):
    print('*** In the generator function ***')
    sum = 1
    for i in range(max):
        print(f'Generator: {i}')
        sum += i
        if generate:
            yield i
    print(f'End of generator, sum={sum}')
This gives me the output:
$ python3 test.py
*** Starting as generator ***
*** In the generator function ***
Generator: 0
Calling : 0
Generator: 1
Calling : 1
Generator: 2
Calling : 2
Generator: 3
Calling : 3
Generator: 4
Calling : 4
Generator: 5
Calling : 5
Generator: 6
Calling : 6
Generator: 7
Calling : 7
Generator: 8
Calling : 8
Generator: 9
Calling : 9
End of generator, sum=46
*** Starting without yielding ***
*** Finished ***
In the test example, I'd like the generator function to still print the values when called but told not to yield. (In my real example, I still want it to do an f.write() to a different file; everything is nested under a with open(file, 'w') as f: statement.)
Am I asking it to do something stupid? The alternative seems to be two definitions that do almost the same thing, which violates the DRY principle. And since in my primary example the yield is nested within the with open, it's not really something that can be pulled out of there and done separately.
"It simply doesn't get called" — that's because calling a generator function never runs its body; it only creates a generator object, and the body executes as that object is iterated. To solve the problem, consume it like any other generator, for example the way you did in the first case: for num in run_generator(max=10, generate=False): pass.
Another way is to use next(run_generator(max=10, generate=False)) inside try/except: since yield is never reached, the body runs to completion and you get a StopIteration error.
Or something like result = list(run_generator(5, True/False)).
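Putting those together, a small sketch of the three ways to force the body to run when generate=False (using the question's test2.py):

# Option 1: iterate and discard, mirroring the generate=True call.
for _ in run_generator(max=10, generate=False):
    pass

# Option 2: drive it manually; the whole body runs before StopIteration.
gen = run_generator(max=10, generate=False)
try:
    next(gen)
except StopIteration:
    pass

# Option 3: drain it into a list (empty when nothing is yielded).
result = list(run_generator(max=10, generate=False))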

What is the best way to keep local state in a node in Bonobo-etl?

If I have an input queue with 20 numbers, how can I get e.g. the sum of all numbers? So far this is what I came up with:
import bonobo as bb
from bonobo.config import Configurable, ContextProcessor
from bonobo.util import ValueHolder

def extract_nums():
    yield 1
    yield 2
    yield 3

class TransformNumber(Configurable):
    @ContextProcessor
    def total(self, context):
        yield ValueHolder({'extract': 0, 'transform': 0})

    def __call__(self, total, num, **kwargs):
        total['extract'] += num
        transform_num = num * 10
        total['transform'] += transform_num
        if num == 3:  # Final number
            print("TOTALS:", total.get())
        yield transform_num

graph = bb.Graph()
graph.add_chain(
    extract_nums,
    TransformNumber(),
    bb.PrettyPrinter()
)
Is it ok to do it like this, or is there a better way?
There are different available options to keep local state in a Bonobo ETL node.
It's ok to do it the way you did (although I find it hard to read); I tend to prefer closures, which I think are more readable (but I agree, that's debatable):
import bonobo

def CumSum():
    total = 0
    def cum_sum(x):
        nonlocal total
        total += x
        yield x, total
    return cum_sum

def get_graph(**options):
    graph = bonobo.Graph()
    graph.get_cursor() >> range(100) >> CumSum() >> print
    return graph

# The __main__ block actually executes the graph.
if __name__ == "__main__":
    parser = bonobo.get_argument_parser()
    with bonobo.parse_args(parser) as options:
        bonobo.run(get_graph(**options))
A few examples are available in the bonobo source code, please look in https://github.com/python-bonobo/bonobo/blob/develop/bonobo/nodes/basics.py (and there are examples written in different styles).
Note that I'm using the Bonobo 0.7 (incoming) syntax here to build the graph, but the same thing can be done with the current stable version (0.6) by replacing the ">>" operators with add_chain calls.
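For reference, a sketch of what that replacement looks like in 0.6-style syntax, on the assumption (stated just above) that add_chain accepts the same nodes as the ">>" operator:

def get_graph(**options):
    graph = bonobo.Graph()
    graph.add_chain(range(100), CumSum(), print)
    return graph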

Trying to thread in python 3

I have been working on a Pentester Academy challenge to brute-force digest auth, which is now working, but I now want to thread it so it goes quicker. This, however, is not working and produces the error below.
ERROR MESSAGE
self.__target(*self.__args, **self.__kwargs)
TypeError: attempt_user() takes exactly 1 argument (5 given)
I can't figure out why it's taking 5 arguments when I'm only giving one; any help is appreciated. My code is below.
CODE
import hashlib
import requests
import re
from threading import *
from requests.auth import HTTPDigestAuth

URL = 'http://pentesteracademylab.appspot.com/lab/webapp/digest/1'
lines = [line.rstrip('\n') for line in open('wordl2.txt')]

def attempt_user(i):
    try:
        r = requests.get(URL, auth=HTTPDigestAuth('admin', i))
        test = r.status_code
        print('status code for {} is {}'.format(i, test))
        print(r.headers)
    except:
        print('fail')

# Loop ports informed by parameter
for i in lines:
    # Create a thread by calling the connect function and passing the host and port as a parameter
    t = Thread(target=attempt_user, args=(i))
    # start a thread
    t.start()
The reason this didn't work is that args should be an iterable containing the arguments. What you gave it was not (as you may have thought) a tuple, but a single value (in your case a string (!)).
This is not a tuple:
("foo")
This is a tuple:
("foo",)
So when you do t = Thread(target=attempt_user, args=(i)), the thread takes every element of i (in this case, the string's five characters) and hands them to attempt_user as individual parameters.
The fix, as stated in my comment, is to actually hand over a tuple:
# Loop ports informed by parameter
for i in lines:
    # Create a thread by calling the connect function and passing the host and port as a parameter
    t = Thread(target=attempt_user, args=(i,))
    # start a thread
    t.start()
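As a usage note: spawning one thread per word in the list can overwhelm both your machine and the target server. A bounded pool is a common refinement; a sketch, assuming concurrent.futures from the standard library:

from concurrent.futures import ThreadPoolExecutor

# map() passes each word as a single argument, so no tuple wrapping is needed
with ThreadPoolExecutor(max_workers=8) as pool:
    pool.map(attempt_user, lines)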
