Python for loop calls a function that calls another function - python-3.x

I am using a for loop to iterate over a list of switches.
For each device in switch_list, I call function1.
Function1 then calls function2.
However, that's when the processing ends.
I need to get back to the for loop so that I can process switch2, switch3, etc...
Here is the output:
We are in main
We are in function1 and the device name is switch1
We are in function2 and the device name is switch1
Here is my code:
switch_list = ['switch1', 'switch2']

def main():
    print('We are in main')
    for device in switch_list:
        main_action = function1(device)
        return(device)

def function1(device):
    print(f'We are in function1 and the device name is {device}')
    function1_action = function2(device)

def function2(device):
    print(f'We are in function2 and the device name is {device}')

if __name__ == '__main__':
    main()
Any assistance would be greatly appreciated.

It's because the return() statement in your main function is inside the for loop. Your problem would be solved if you moved it out of the for loop.
return marks the end of a function, so the moment your code hits the return statement it exits main(), and you get output for only the first device.
Since you are running a for loop, you can collect the result for each device in a list and return that list after the loop completes.
Something like this:
def main():
    main_output_list = []
    print("We are in main")
    for device in switch_list:
        main_action = function1(device)
        main_output_list.append(main_action)
    return main_output_list
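For completeness, a hedged sketch of the full fix (with one assumption not in the original code: function1 passes function2's result back up, so main collects meaningful values rather than None):

switch_list = ['switch1', 'switch2']

def function2(device):
    print(f'We are in function2 and the device name is {device}')
    return device  # assumption: return something useful to the caller

def function1(device):
    print(f'We are in function1 and the device name is {device}')
    return function2(device)

def main():
    main_output_list = []
    print('We are in main')
    for device in switch_list:
        main_output_list.append(function1(device))  # no return inside the loop
    return main_output_list

if __name__ == '__main__':
    print(main())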

As suggested by Alexander, the return keyword exits the function, returning the provided value to the place where the function was called. For example:

def give_10():
    return 10
    print("I am unreachable because I am after a return statement")

print(give_10())  # give_10() returns 10, which makes this statement
                  # equivalent to print(10), printing the value to stdout.

Related

How to stop awaiting the ainput function from another coroutine/task

I want to stop awaiting the ainput function at the 6th iteration of a for loop:
import asyncio
from aioconsole import ainput

class Test():
    def __init__(self):
        self.read = True

    async def read_from_concole(self):
        while self.read:
            command = await ainput('$>')
            if command == 'exit':
                self.read = False
            if command == 'greet':
                print('greetings :J')

    async def count(self):
        console_task = asyncio.create_task(self.read_from_concole())
        for c in range(10):
            await asyncio.sleep(.5)
            print(f'number: {c}')
            if c == 5:  # 6th iteration
                # What should I do here?
                # The following code doesn't meet my expectations
                self.read = False
                console_task.cancel()
                await console_task

    # async def run_both(self):
    #     await asyncio.gather(
    #         self.read_from_concole(),
    #         self.count()
    #     )

if __name__ == '__main__':
    o1 = Test()
    loop = asyncio.new_event_loop()
    loop.run_until_complete(o1.count())
Of course, this code is simplified, but it covers the idea: write a program where one coroutine can cancel another that is awaiting something (in this example, ainput).
asyncio.Task.cancel() is not the solution, because it won't make the coroutine stop awaiting (I need to put an arbitrary character into the console and press Enter, and this is not what I want).
I don't even know whether my approach makes sense; I'm a fresh asyncio user and, for now, I know only the basics. In my real project the situation is very similar: I have a GUI application and a console window. By clicking the 'X' button I want to close the window and terminate ainput (which reads commands from the console) to completely finish the program (the console part works on a different thread, and because of that I can't close my program completely - that thread will run until ainput receives some input from a user).
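For what it's worth, a minimal sketch of the plain cancellation pattern (an assumption, not a verified fix for aioconsole: cancel() raises CancelledError at the await point, but a blocking stdin read in a helper thread may still linger until input arrives, which matches the behaviour described above):

import asyncio
import contextlib

# inside class Test, replacing count() from the question
async def count(self):
    console_task = asyncio.create_task(self.read_from_concole())
    for c in range(10):
        await asyncio.sleep(.5)
        print(f'number: {c}')
        if c == 5:  # 6th iteration
            console_task.cancel()
            # awaiting a cancelled task raises CancelledError; suppress it
            with contextlib.suppress(asyncio.CancelledError):
                await console_task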

Is there a way to await on an asyncio.Task.result from sync code

I have a situation like the one below:
event_loop = asyncio.new_event_loop()

async def second_async():
    # some async job
    print("I'm here")
    return 123

def sync():
    return asyncio.run_coroutine_threadsafe(second_async(), loop=event_loop).result()

async def first_async():
    sync()

event_loop.run_until_complete(first_async())
I call the sync function from a different thread (where the event_loop is not running), and it works fine. The problem is that if I run the event_loop.run_until_complete(first_async()) line, the .result() call on the Future returned by run_coroutine_threadsafe blocks the execution of the loop, which makes sense. To avoid this, I tried changing it as follows:
event_loop = asyncio.new_event_loop()

async def second_async():
    # some async job
    print("I'm here")
    return 123

def sync():
    # if event_loop is running on the current thread
    res = event_loop.create_task(second_async()).result()
    # else
    res = asyncio.run_coroutine_threadsafe(second_async(), loop=event_loop).result()
    # Additional processing on res
    # Need to evaluate the result of the task right here in sync.
    return res

async def first_async():
    sync()

event_loop.run_until_complete(first_async())
This runs, but the .result() call on the Task object returned by create_task always raises an InvalidStateError: set_result is never called on the Task object.
Basically, I want the flow to be:
(async code) -> sync code -> (a non-blocking call into) async code
I know this is a bad way of doing things, but I'm integrating stuff, so I don't really have an option.
Here is a little single-threaded program that illustrates the problem.
If you uncomment the line asyncio.run(first_async1()), you get the same error you're seeing, and for the same reason: you're trying to access the result of a task without awaiting it first.
import asyncio

event_loop = asyncio.new_event_loop()

async def second_async():
    # some async job
    print("I'm here")
    return 123

def sync1():
    return asyncio.create_task(second_async()).result()

async def first_async1():
    print(sync1())

def sync2():
    return asyncio.create_task(second_async())

async def first_async2():
    print(await sync2())

# This prints "I'm here",
# then raises InvalidStateError:
# asyncio.run(first_async1())

# This works, prints "I'm here" and "123"
asyncio.run(first_async2())
With that line commented out again, the second version of the program (first_async2) runs just fine. The only difference is that the ordinary function, sync2, returns an awaitable instead of a result. The await is done in the async function that called it.
I don't see why this is a bad practice. To me, it seems like there are situations where it's absolutely necessary.
Another approach is to create a second daemon thread and set up an event loop there. Coroutines can be executed in this second thread with asyncio.run_coroutine_threadsafe, which returns a concurrent.futures.Future. Its result method will block until the Future's value is set by the other thread.
#! python3.8
import asyncio
import threading

def a_second_thread(loop):
    asyncio.set_event_loop(loop)
    loop.run_forever()

loop2 = asyncio.new_event_loop()
threading.Thread(target=a_second_thread, args=(loop2,), daemon=True).start()

async def second_async():
    # some async job
    print("I'm here")
    for _ in range(4):
        await asyncio.sleep(0.25)
    print("I'm done")
    return 123

def sync1():
    # Run the coroutine in the second thread -> get a concurrent.futures.Future
    fut = asyncio.run_coroutine_threadsafe(second_async(), loop2)
    return fut.result()

async def first_async1():
    print(sync1())

def sync2():
    return asyncio.create_task(second_async())

async def first_async2():
    print(await sync2())

# This works, prints "I'm here", "I'm done", and "123"
asyncio.run(first_async1())

# This works, prints "I'm here", "I'm done", and "123"
asyncio.run(first_async2())
Of course this will still block the event loop in the main thread until fut.result() returns. There is no avoiding that. But the program runs.
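If blocking the main thread's event loop is unacceptable, one option (a sketch, assuming sync1 from the snippet above is in scope) is to push the blocking call into the default executor, so the loop keeps running while fut.result() blocks a worker thread:

async def first_async_nonblocking():
    loop = asyncio.get_running_loop()
    # sync1 blocks on fut.result(); run it in the default thread pool
    # so this event loop stays responsive while it waits
    result = await loop.run_in_executor(None, sync1)
    print(result)

asyncio.run(first_async_nonblocking())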

Thread is blocked in django - Python

For the last 4 hours I have been trying to understand threading with Django, and nothing seems to work. I want the website to run in the foreground and let the backend communicate with some other devices on a thread. I want the thread to start at website startup, but the program is stuck when I call the thread until the thread comes to an end.
Do you know a way to fix it? Please, I need help.
The urls.py file
def add(x, y):
    i = 0
    while i < 100000000:
        x += y
        i += 1

def postpone(function):
    t = threading.Thread(target=function, args=(1,))
    t.setDaemon(True)
    t.start()
    return 0

print("Before thread")
postpone(add(4,4))
print("After thread")
The server will not start until the while loop is finished.
Thanks for reading, I hope someone knows an answer.
The add function is called before the thread starts; you need to pass add as a reference instead.
# decomposition of the original call:
# first, add gets called
r = add(4,4)
# then the result is passed to the func `postpone`
postpone(r)

# postpone accepts a function and args, which eventually get passed to the function
def postpone(function, *args):
    t = threading.Thread(target=function, args=args)
    t.setDaemon(True)
    t.start()
    return 0

print("Before thread")
# pass the func as a reference, and send the args to postpone as well
postpone(add, 4, 4)
print("After thread")

How to not wait for the python line to fully complete and jump to next line

For Example:
def abc():
    for I in range(1,10000000000):
        print(I)

def def():
    for I in range(1,1000000000000):
        print(I)

abc()
def()
How can I let abc() keep running, and jump to def() without waiting for abc() to complete?
You can use threads to perform this:
from threading import Thread
def abc():
for I in range(1,10000000000): print(I)
def other():
for I in range(1,10000000000): print(I)
abc_thread = Thread(target=abc)
abc_thread.start()
# This starts the abc() function and then immediately
# continues to the next line of code. This is possible because the
# function is executed on another thread separate from the main program's thread
other()
Also, as a side note: I'm not sure what your implementation of this will be, but because you are new, I have to point out that it is bad practice to give your functions, classes, variables, etc. the same name as a builtin Python object or keyword (def is a keyword, so def def(): is actually a SyntaxError). This will cause headaches later on when you run into errors.
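For instance (a hypothetical illustration), shadowing the builtin list works until you need the builtin again:

list = [1, 2, 3]          # shadows the builtin list type
numbers = list(range(5))  # TypeError: 'list' object is not callable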

Why is this queue.join call blocking indefinitely?

I'm playing about with a personal project in python3.6 and I've run into the following issue which results in the my_queue.join() call blocking indefinitely. Note this isn't my actual code but a minimal example demonstrating the issue.
import threading
import queue

def foo(stop_event, my_queue):
    while not stop_event.is_set():
        try:
            item = my_queue.get(timeout=0.1)
            print(item)  # Actual logic goes here
        except queue.Empty:
            pass
    print('DONE')

stop_event = threading.Event()
my_queue = queue.Queue()

thread = threading.Thread(target=foo, args=(stop_event, my_queue))
thread.start()

my_queue.put(1)
my_queue.put(2)
my_queue.put(3)
print('ALL PUT')

my_queue.join()
print('ALL PROCESSED')

stop_event.set()
print('ALL COMPLETE')
I get the following output (it's actually been consistent, but I understand that the output order may differ due to threading):
ALL PUT
1
2
3
No matter how long I wait I never see ALL PROCESSED output to the console, so why is my_queue.join() blocking indefinitely when all the items have been processed?
From the docs:
The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer thread calls task_done() to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join() unblocks.
You're never calling my_queue.task_done() inside your foo function. The foo function should look something like the worker example from the docs:
def worker():
    while True:
        item = q.get()
        if item is None:
            break
        do_work(item)
        q.task_done()
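Applied to the question's code, a minimal sketch (assumption: keeping the stop_event/timeout structure from the question rather than the docs' sentinel style):

def foo(stop_event, my_queue):
    while not stop_event.is_set():
        try:
            item = my_queue.get(timeout=0.1)
            print(item)  # Actual logic goes here
            my_queue.task_done()  # lets my_queue.join() unblock
        except queue.Empty:
            pass
    print('DONE')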
