Why are we still stuck on this function despite threading?

I want to make a function that waits for input, and if nothing is entered within 2 seconds, skips the input and moves on to the rest of the function.
I tried this approach from another Stack Overflow thread:
import time
from threading import Thread

answer = None

def check():
    time.sleep(2)
    if answer != None:
        return "ayy"
    print("Too slow")
    return "No input"

Thread(target=check).start()
answer = input("Input something: ")
print(answer)
This code asks for input, and if nothing is entered within 2 seconds it prints "Too slow". However, it never moves on to print(answer); I think it keeps waiting for user input.
I want to ask for user input, and if it takes too long, just set the answer to None and move on to the code underneath. I looked at timeout methods involving signal, but those are Linux-only and I'm on Windows.

Your assumption is right. The input() call is waiting for the user to submit input, which in your case never happens.
On Unix-like systems you can make use of select() (note that on Windows, select() only accepts sockets, not sys.stdin, so this particular approach won't work there):
import sys
import select

def timed_input(prompt, timeout=10):
    """
    Wait ``timeout`` seconds for user input.
    Returns a tuple:
    [0] -> True if input was received before the timeout
    [1] -> User input (None on timeout)
    """
    sys.stdout.write(prompt)
    sys.stdout.flush()
    # wait until stdin becomes readable, at most `timeout` seconds
    readable, _, _ = select.select([sys.stdin], [], [], timeout)
    if readable:
        return True, sys.stdin.readline().strip()
    return False, None

print(timed_input('Input something: ', timeout=2))
That's a dirty prototype. I suggest using an exception for the timeout, or a more intuitive return value for the function.
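Since the asker is on Windows, here is a thread-based sketch as an alternative. It assumes it's acceptable that the background thread stays blocked in input() until the process exits (a daemon thread won't prevent shutdown); timed_input_threaded is a hypothetical name, not part of the code above.

import threading

def timed_input_threaded(prompt, timeout=2):
    """Hypothetical Windows-friendly variant: run input() in a daemon
    thread and give up after ``timeout`` seconds."""
    result = []

    def reader():
        # Runs in a daemon thread; appends the line once the user hits Enter.
        result.append(input(prompt))

    t = threading.Thread(target=reader, daemon=True)
    t.start()
    t.join(timeout)        # wait at most `timeout` seconds for the reader
    if result:
        return True, result[0]
    return False, None     # timed out; the daemon thread lingers until exit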

Related

Best way to keep creating threads on variable list argument

I have an event that I listen to every minute and that returns a list; it could be empty, have 1 element, or more. For each element in that list, I'd like to run a function that monitors an event on that element every minute for 10 minutes.
For that I wrote this script:
from concurrent.futures import ThreadPoolExecutor
from time import sleep
import asyncio
from Client import Client  # stand-in import; the asker's client library is unspecified

client = Client()

def handle_event(event):
    for i in range(10):
        client.get_info(event)
        sleep(60)

async def main():
    while True:
        entries = client.get_new_entry()
        if len(entries) > 0:
            with ThreadPoolExecutor(max_workers=len(entries)) as executor:
                executor.map(handle_event, entries)
        await asyncio.sleep(60)

if __name__ == "__main__":
    loop = asyncio.new_event_loop()
    loop.run_until_complete(main())
However, instead of continuing to monitor for new entries, it blocks while the previous entries are still being monitored.
Any idea how I could do that, please?
First let me explain why your program doesn't work the way you want it to: it's because you use the ThreadPoolExecutor as a context manager, which does not exit until all the work submitted by the call to map has finished. So main() waits there, and the next iteration of the loop can't happen until all the work is finished.
There are ways around this. Since you are using asyncio already, one approach is to move the creation of the Executor into a separate task. Each iteration of the main loop starts one copy of this task, which runs as long as it takes to finish. It's an async def function, so many copies of this task can run concurrently.
I changed a few things in your code. Instead of Client I just used some simple print statements. I pass a list of integers, of random length, to handle_event. I increment a counter each time through the while True: loop, and add 10 times the counter to every integer in the list. This makes it easy to see how old calls continue for a time, mixing with new calls. I also shortened your time delays. All of these changes were for convenience and are not important.
The important change is to move ThreadPoolExecutor creation into a task. To make it cooperate with other tasks, it must contain an await expression, and for that reason I use executor.submit rather than executor.map. submit returns a concurrent.futures.Future, which provides a convenient way to await the completion of all the calls. executor.map, on the other hand, returns an iterator; I couldn't think of any good way to convert it to an awaitable object.
To convert a concurrent.futures.Future to an asyncio.Future, an awaitable, there is a function asyncio.wrap_future. When all the futures are complete, I exit from the ThreadPoolExecutor context manager. That will be very fast since all of the Executor's work is finished, so it does not block other tasks.
import random
from concurrent.futures import ThreadPoolExecutor
from time import sleep
import asyncio

def handle_event(event):
    for i in range(10):
        print("Still here", event)
        sleep(2)

async def process_entries(counter, entries):
    print("Counter", counter, "Entries", entries)
    x = [counter * 10 + a for a in entries]
    with ThreadPoolExecutor(max_workers=len(entries)) as executor:
        futs = []
        for z in x:
            futs.append(executor.submit(handle_event, z))
        # wrap each concurrent.futures.Future so it can be awaited
        await asyncio.gather(*(asyncio.wrap_future(f) for f in futs))

async def main():
    counter = 0
    while True:
        entries = [0, 1, 2, 3, 4][:random.randrange(5)]
        if len(entries) > 0:
            counter += 1
            asyncio.create_task(process_entries(counter, entries))
        await asyncio.sleep(3)

if __name__ == "__main__":
    asyncio.run(main())
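As a side note, on Python 3.9+ a similar effect can be had without managing the executor by hand. This sketch (reusing the handle_event above) relies on asyncio.to_thread, which runs a blocking call in the default thread pool and returns an awaitable directly:

async def process_entries_alt(counter, entries):
    # Each blocking handle_event call runs in the default thread pool;
    # gather waits for all of them without an explicit executor.
    await asyncio.gather(
        *(asyncio.to_thread(handle_event, counter * 10 + a) for a in entries)
    )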

How to stop awaiting the ainput function from another coroutine/task

I want to stop awaiting the ainput call at the 6th iteration of the for loop:
import asyncio
from aioconsole import ainput

class Test():
    def __init__(self):
        self.read = True

    async def read_from_console(self):
        while self.read:
            command = await ainput('$>')
            if command == 'exit':
                self.read = False
            if command == 'greet':
                print('greetings :J')

    async def count(self):
        console_task = asyncio.create_task(self.read_from_console())
        for c in range(10):
            await asyncio.sleep(.5)
            print(f'number: {c}')
            if c == 5:  # 6th iteration
                # What should I do here?
                # The following code doesn't meet my expectations
                self.read = False
                console_task.cancel()
                await console_task

    # async def run_both(self):
    #     await asyncio.gather(
    #         self.read_from_console(),
    #         self.count()
    #     )

if __name__ == '__main__':
    o1 = Test()
    loop = asyncio.new_event_loop()
    loop.run_until_complete(o1.count())
Of course, this code is simplified, but it covers the idea: write a program where one coroutine can cancel another which is awaiting something (in this example, ainput).
asyncio.Task.cancel() is not the solution because it won't make the coroutine stop awaiting (I still need to type an arbitrary character into the console and press Enter, which is not what I want).
I don't even know whether my approach makes sense; I'm a fresh asyncio user and for now I know only the basics. In my real project the situation is very similar: I have a GUI application and a console window. By clicking the 'X' button I want to close the window and terminate ainput (which reads commands from the console) so the program can finish completely. The console part runs on a different thread, and because of that I can't close my program completely - that thread will keep running until ainput receives some input from the user.
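For reference, the generic asyncio cancellation pattern is to cancel the task and then swallow the resulting CancelledError, as in the sketch below. Whether this actually unblocks ainput depends on aioconsole's internals; the underlying blocking read may keep running until the process exits, which is exactly the behaviour the asker describes.

import asyncio
import contextlib

async def count(self):
    console_task = asyncio.create_task(self.read_from_console())
    for c in range(10):
        await asyncio.sleep(.5)
        print(f'number: {c}')
        if c == 5:
            console_task.cancel()
            # cancel() raises CancelledError inside the awaited task;
            # suppress it here instead of letting it propagate.
            with contextlib.suppress(asyncio.CancelledError):
                await console_task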

Python: how to write to stdin of a subprocess and read its output in real time

I have 2 programs.
The first (which could actually be written in any language, and therefore cannot be altered at all) looks like this:
#!/bin/env python3
import random

while True:
    s = input()                           # get input from stdin
    i = random.randint(0, len(s))         # process the input
    print(f"New output {i}", flush=True)  # print processed input to stdout
It runs forever: it reads something from stdin, processes it and writes the result to stdout.
I am trying to write the second program in Python using the asyncio library.
It executes the first program as a subprocess and attempts to feed it input via its stdin and retrieve the result from its stdout.
Here is my code so far:
#!/bin/env python3
import asyncio
import asyncio.subprocess as asp

async def get_output(process, input):
    out, err = await process.communicate(input)
    print(err)  # shows that the program crashes
    return out
    # other attempt to implement:
    # process.stdin.write(input)
    # await process.stdin.drain()  # flush input buffer
    # out = await process.stdout.read()  # program is stuck here
    # return out

async def create_process(cmd):
    process = await asp.create_subprocess_exec(
        cmd, stdin=asp.PIPE, stdout=asp.PIPE, stderr=asp.PIPE)
    return process

async def run():
    process = await create_process("./test.py")
    out = await get_output(process, b"input #1")
    print(out)  # b'New output 4'
    out = await get_output(process, b"input #2")
    print(out)  # b''
    out = await get_output(process, b"input #3")
    print(out)  # b''
    out = await get_output(process, b"input #4")
    print(out)  # b''

async def main():
    await asyncio.gather(run())

asyncio.run(main())
I am struggling to implement the get_output function. It takes a bytestring parameter (as required by the input argument of the .communicate() method), writes it to the program's stdin, reads the response from its stdout and returns it.
Right now, only the first call to get_output works properly. This is because the implementation of the .communicate() method calls the wait() method, effectively causing the program to terminate (which it isn't meant to do). This can be verified by examining the value of err in the get_output function, which shows that the first program reached EOF. Thus, the other calls to get_output return an empty bytestring.
I have tried another way (the commented-out attempt), even less successfully, since the program gets stuck at the line out = await process.stdout.read(). I haven't figured out why.
My question is: how do I implement the get_output function so it captures the program's output in (near) real time and keeps it running? It doesn't have to use asyncio, but I have found this library to be the best one so far for this.
Thank you in advance!
If the first program is guaranteed to print only one line of output in response to the line of input that it has read, you can change await process.stdout.read() to await process.stdout.readline() and your second approach should work.
The reason it didn't work for you is that your run function has a bug: it never sends a newline to the child process. Because of that, the child process is stuck in input() and never responds. If you add \n at the end of the bytes literals you're passing to get_output, the code works correctly.
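Putting both fixes together, a minimal sketch of get_output (assuming the child prints exactly one line of output per line of input) could look like this:

async def get_output(process, line):
    process.stdin.write(line + b"\n")  # the newline unblocks input() in the child
    await process.stdin.drain()
    return await process.stdout.readline()  # read exactly one response line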

Python3 ZMQ, Interrupt function and calling another on each new message received

Here is my problem: I have 2 programs communicating via zmq on an arbitrary TCP port.
When #1 receives a message from #2, it has to call some function.
If #1 receives a message before the current function ends, I'd like #1 to interrupt the current function and call the new one.
I tried to use threading.Event to interrupt the function.
I don't know whether zmq is the right option for my needs, or whether these socket types are the right ones.
To simplify, I show the simplest version possible. Here is what I tried:
p1.py
import time
import zmq
from threading import Event

port_p2 = "6655"
context = zmq.Context()
socket = context.socket(zmq.PAIR)
socket.connect("tcp://localhost:%s" % port_p2)
print("port 6655")

__exit1 = Event()
__exit2 = Event()

def action1():
    __exit1.clear()
    __exit2.set()
    while not __exit1.is_set():
        for i in range(1, 20):
            print(i)
            time.sleep(1)
        __exit1.set()

def action2():
    __exit2.clear()
    __exit1.set()
    while not __exit2.is_set():
        for i in range(1, 20):
            print(i * 100)
            time.sleep(1)
        __exit2.set()

if __name__ == "__main__":
    try:
        while True:
            try:
                string = socket.recv(flags=zmq.NOBLOCK)
                # message received, process it
                string = str(string, 'utf-8')
                if "Action1" in string:
                    action1()
                if "Action2" in string:
                    action2()
            except zmq.Again as e:
                # No messages waiting to be processed
                pass
            time.sleep(0.1)
    except (KeyboardInterrupt, SystemExit):
        print("exit")
and p2.py
import time
import random
import zmq

port_p1 = "6655"
context = zmq.Context()
socket_p1 = context.socket(zmq.PAIR)
socket_p1.bind("tcp://*:%s" % port_p1)
print("port 6655")

if __name__ == "__main__":
    while True:
        i = random.choice(range(1, 10))
        print(i)
        try:
            if random.choice([True, False]):
                print("Action 1")
                socket_p1.send(b'Action1')
            else:
                socket_p1.send(b'Action2')
                print("Action 2")
        except zmq.Again as e:
            pass
        time.sleep(i)
For my purposes I didn't want to / can't use system signals.
I'd appreciate any input, and don't hesitate to ask for clarification; I have to confess I had trouble writing this down.
Thank you
Q: …like #1 to interrupt the current function…
Given that you have ruled out signals, #1 can only passively signal the function (whether over the present ZeroMQ infrastructure or otherwise) not to continue and to return prematurely. For that, the function itself has to be modified to do the active re-checking, ideally at a reasonably fine granularity: it regularly checks whether #1 has passively signalled ("told" the function) to return early, for whatever reason and by whatever means #1 chose to do so.
The other option is to extend the already present ZeroMQ infrastructure (the Context() instance(s)) with a socket monitor, and have the function .connect() directly to the socket-monitor resources so that it learns autonomously (without #1's initiative) about any new message arriving at #1, and can decide to return prematurely in those cases where that is feasible according to your application logic.
For the socket-monitor case, the API documentation has all the details needed for implementation, which would otherwise go well beyond the scope of this post.
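A minimal sketch of the first approach, assuming the action runs in its own thread so the receive loop stays free to notice new messages; on_message is a hypothetical helper that the recv loop would call for each incoming message:

import time
from threading import Event, Thread

def interruptible_action(name, stop):
    # Re-check the flag at fine granularity and return early once set.
    for i in range(1, 20):
        if stop.is_set():
            print(name, "interrupted")
            return
        print(name, i)
        time.sleep(1)

stop = Event()
worker = None

def on_message(message):
    global stop, worker
    if worker is not None and worker.is_alive():
        stop.set()   # passively signal the current function to return
        worker.join()
    stop = Event()   # fresh flag for the new action
    worker = Thread(target=interruptible_action, args=(message, stop))
    worker.start()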

Unittesting for correct 'continue' behaviour

I have a function that asks a user for confirmation via a prompt. It accepts y or n as answers, otherwise it asks again.
Now I want to write a unit test for this function. I can test the correct behaviour for y or n just fine, but how do I test that my function correctly rejects unacceptable input?
Here's the code for foo.py:
def get_input(text):
    """gets console input and returns it; needed for mocking during unittest
    """
    return input(text)

def confirm(message='Confirm?', default=False):
    """prompts for yes or no response from the user. Returns True for yes and
    False for no.

    'default' should be set to the default value assumed by the caller when
    the user simply types ENTER, and is marked in the prompt with square
    brackets.
    """
    if default:
        message = '%s [y]|n: ' % (message)  # default answer = yes
    else:
        message = '%s y|[n]: ' % (message)  # default answer = no
    while True:
        answer = get_input(message).lower()
        if not answer:
            return default
        if answer not in ['y', 'n']:
            print('Please enter y or n!')
            continue
        if answer == 'y':
            return True
        if answer == 'n':
            return False

if __name__ == '__main__':
    # guard so that importing foo from the test module doesn't trigger the prompt
    answer = confirm()
    print(answer)
And here is my Test class:
import unittest
import unittest.mock
import foo

class TestFoo_confirm(unittest.TestCase):
    """testing confirm function
    """
    @unittest.mock.patch('foo.get_input', return_value='y')
    def test_answer_yes(self, _):
        self.assertEqual(foo.confirm(), True)  # confirmed if 'y' was entered
So, how do I write a similar test for an input value like '1' (or how do I need to adjust my confirm() function to make it testable)?
Currently, if I call foo.confirm() from the unittest file, it just gets stuck in an infinite loop and never returns anything. (I understand why this is happening, just not how to circumvent it.)
Any ideas?
You could try this:
import unittest
import unittest.mock
import foo

class TestFoo_confirm(unittest.TestCase):
    """testing confirm function
    """
    @unittest.mock.patch('foo.get_input', return_value='y')
    def test_answer_yes(self, _):
        self.assertEqual(foo.confirm(), True)  # confirmed if 'y' was entered

    @unittest.mock.patch('builtins.print')
    @unittest.mock.patch('foo.get_input', side_effect=['1', 'yn', 'yes', 'y'])
    def test_invalid_answer(self, mock_input, mock_print):
        # side_effect makes the mock return '1', 'yn' and so on, in sequence
        self.assertEqual(foo.confirm(), True)       # it should eventually return True
        self.assertEqual(mock_input.call_count, 4)  # input should be called four times
        mock_print.assert_called_with('Please enter y or n!')
In the second test case, we imitate a user who enters three invalid inputs and, after being prompted again each time, finally enters 'y'. So we patch foo.get_input in such a way that it returns '1' the first time it's called, then 'yn', then 'yes' and finally 'y'. The first three values should cause the confirm function to prompt the user again. I also patched the print function so that the 'Please enter y or n!' message doesn't show up while running the tests; this isn't strictly necessary.
Then we assert that our mocked input was called four times, meaning that after each of the first three calls, the confirm function re-prompted.
Finally, we assert that the print function was called (at least once) with 'Please enter y or n!'.
This does not test whether the correct number of print calls were made, or whether they happened in the correct order, but I suspect this would be possible too (see the sketch below).
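For instance, a mock records every call in order in its call_args_list attribute, so an exact-sequence assertion could be added inside test_invalid_answer like this (a sketch, assuming the three invalid inputs above):

from unittest.mock import call

# inside test_invalid_answer: one warning per invalid input, in order
self.assertEqual(
    mock_print.call_args_list,
    [call('Please enter y or n!')] * 3
)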
