I'm trying to learn/understand threading in Python and found this.
I tried to use it with input(), but it doesn't work the way I imagined.
import queue
import threading

def cin(el, q):
    while input() != el:
        print('no')
        continue
    q.put(el)

checks = ['one', 'two']
q = queue.Queue()
for el in checks:
    t = threading.Thread(target=cin, args=(el, q))
    t.daemon = True
    t.start()
s = q.get()
print(s)
I'm trying to run two threads that each have a while loop checking whether the input() from the console matches one of the elements from the list. If it doesn't, the loop waits for the next input. What happens is that only the 'one' thread, i.e. the first element in the list, works. After a lot of tries the 'two' thread works, but then the other doesn't. Where is the mistake? Can only one thread use input()? Is it not possible to run two while loops at the same time?
I'm not very good with programming, but I'm currently writing a multiplication learning program for my brother and was wondering if there is any way to make him answer within a certain amount of time or else he fails the question. Here is my code:
import random

F = 1
while F == 1:
    x = random.randint(1, 10)
    y = random.randint(1, 10)
    Result = y * x
    print(y, "*", x)
    Input = int(input())
    if Result == Input:
        print("correct")
    else:
        print("Wrong, correct result:", Result)
I hope this is good enough. I would appreciate any help! Thanks a lot in advance.
You can use the threading module to create a thread that acts as a timer: if the timer runs out, the sub-thread has finished, and the program can respond that you got late.
Here's the solution:
import random
from threading import Thread
from time import sleep

def timer():
    sleep(10)  # wait for 10 seconds once the question is asked
    return True

if __name__ == '__main__':
    while True:
        x = random.randint(1, 10)
        y = random.randint(1, 10)
        Result = y * x
        print(y, "*", x)
        time = Thread(target=timer)  # creating a sub-thread for the timer
        time.start()  # starting the thread
        Input = int(input())
        if not time.is_alive():  # checking whether the timer is still alive
            print('You got late, Failed')
            break
        if Result == Input:
            print("correct")
        else:
            print("Wrong, correct result:", Result)
If you used the time.sleep() method on your main thread, your program would hang for that duration. Instead, I created a new thread that works completely independently of your main thread, so the main program does not hang.
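Note that the check above only happens after input() returns, so a slow answer is detected rather than interrupted. A simpler alternative sketch with the same behavior (not the answer's original approach) is to measure the elapsed time around the input() call:

import time

start = time.monotonic()
Input = int(input())
if time.monotonic() - start > 10:  # more than 10 seconds passed before answering
    print('You got late, Failed')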
You can define your own timer using Python's time module.
For example:
import time

def timer(t):  # t is the duration of the timer in seconds
    while t:
        mins, secs = divmod(t, 60)
        countdown = '{:02d}:{:02d}'.format(mins, secs)
        print(countdown, end='\r')  # overwrite the same console line
        time.sleep(1)
        t = t - 1
    print("Time's Up")
I have a function in my code that asks the user for input:
def function_1():
    ...
    x = input('Please provide input')
    ...
    return something
I want to be able to run my code, and when the program eventually reaches function_1 and asks the user for input, automatically provide it with some specified input. When unit testing, I can use the mock library to simulate keyboard input, as below:
from unittest import mock

@mock.patch('builtins.input', side_effect=[1, 2, 3])
def test(mock_input):
    function_1()
    function_1()
    function_1()
This calls the function three times and provides the inputs 1, 2 and 3 in turn. I'm wondering if there is a way to do the same thing outside of unit testing.
I'm aware that I can rewrite the code, or use a pipe in the terminal. But I'm more curious about whether this can be solved in the manner described above.
One way is to overwrite sys.stdin:
import sys
from io import StringIO

oldstdin = sys.stdin
sys.stdin = StringIO("1\n2\n3\n")  # each line becomes one input() result
assert input() == "1"
assert input() == "2"
assert input() == "3"
sys.stdin = oldstdin  # restore the real stdin
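If you need this in more than one place, you can wrap the pattern in a context manager so stdin is restored even if an exception is raised. A minimal sketch (fake_input is a name introduced here, not part of any library):

import sys
from contextlib import contextmanager
from io import StringIO

@contextmanager
def fake_input(*lines):
    old_stdin = sys.stdin
    sys.stdin = StringIO("".join(str(line) + "\n" for line in lines))
    try:
        yield
    finally:
        sys.stdin = old_stdin  # always restored, even on error

with fake_input(1, 2, 3):
    assert input() == "1"
    assert input() == "2"
    assert input() == "3"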
The great thing about Python is that you can override just about any function, even built-ins.
from itertools import count

def override():
    counter = count(1)  # start counting at 1 so the first call returns 1
    return lambda *args, **kwargs: next(counter)

input = override()  # shadows the built-in input at module level

def x():
    return input("Testing123")

print(x())  # 1
print(x())  # 2
print(x())  # 3
Though, this has to be done before your functions are called.
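To get the real built-in back afterwards, it's enough to delete the module-level shadow (a small sketch):

del input  # removes the module-level name; lookups fall back to builtins
print(input("type something: "))  # reads from the keyboard again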
I'm parsing the last line of a continuously updating log file. If it matches, I want to return the match to a list and start another function using that data. I need to keep watching for new entries and parse them even while the new function continues.
I've been working on this from a few different angles for about a week with varying success. I tried threading, but ran into issues getting the return value; I tried using a global variable, but couldn't get it working. I'm now trying asyncio, but having even more issues getting that to work.
def tail():
    global match_list
    f.seek(0, os.SEEK_END)
    while True:
        line = f.readline()
        if not line:
            time.sleep(0.1)
            continue
        yield line

def thread():
    while True:
        tail()

def somefun(list):
    global match_list
    # do things here
    pass

def main():
    match_list = []
    f = open(r'file.txt')
    thread = threading.Thread(target=thread, args=(f,))
    thread.start()
    while True:
        if len(match_list) >= 1:
            somefun(match_list)

if __name__ == '__main__':
    main()
I wrote the above from memory.
I want tail() to return each line to a list that somefun() can use.
I'm having issues getting it to work; I will use threading or asyncio, anything to get it running at this point.
In asyncio you might use two coroutines, one that reads from the file and one that processes it. Since they communicate using a queue, they don't need a global variable. For example:
import os, asyncio

async def tail(f, queue):
    f.seek(0, os.SEEK_END)
    while True:
        line = f.readline()
        if not line:
            await asyncio.sleep(0.1)
            continue
        await queue.put(line)

async def consume(queue):
    lines = []
    while True:
        next_line = await queue.get()
        lines.append(next_line)
        # it is not clear if you want somefun to receive the next
        # line or *all* lines, but it's easy to do either
        somefun(next_line)

def somefun(line):
    # do something with line
    print(f'line: {line!r}')

async def main():
    queue = asyncio.Queue()
    with open('file.txt') as f:
        await asyncio.gather(tail(f, queue), consume(queue))

if __name__ == '__main__':
    asyncio.run(main())
    # or, on Python older than 3.7:
    # asyncio.get_event_loop().run_until_complete(main())
The beauty of an asyncio-based solution is that you can easily start an arbitrary number of such coroutines in parallel (e.g. you could start gather(main1(), main2()) in an outer coroutine and run that, as sketched below), and have them all share the same thread.
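For example (a sketch; main1 and main2 stand for hypothetical coroutines built like main() above, each with its own file and queue):

async def supervisor():
    # both "mains" run concurrently, sharing one thread
    await asyncio.gather(main1(), main2())

asyncio.run(supervisor())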
With a few small fixes you can run this :) (comments inside):
import os
import time
import threading

match_list = []  # must be at module scope so both threads see it

def tail(f):  # take the file object as a parameter
    f.seek(0, os.SEEK_END)
    while True:
        line = f.readline()
        if not line:
            time.sleep(0.1)
            continue
        yield line

def thread(f):
    for line in tail(f):
        match_list.append(line)  # append each new line
    print("thread DONE!")

def somefun(lines):
    # do things here
    while lines:
        line = lines.pop(0)
        print(line)

def main():
    f = open(r'file.txt')
    t = threading.Thread(target=thread, args=(f,))  # don't shadow the function name
    t.start()
    while True:
        if match_list:
            somefun(match_list)
        time.sleep(0.1)  # <-- don't burn the CPU :)

if __name__ == '__main__':
    main()
I spent nearly the whole day on this and have come to the end of my knowledge:
I want to change a shared multiprocessing.Value string in a subprocess, but Python hangs as soon as the subprocess tries to change the shared value.
Below is some example code:
from multiprocessing import Process, Value, freeze_support
from ctypes import c_wchar_p

def test(x):
    with x.get_lock():
        x.value = 'THE TEST WORKED'
    return

if __name__ == "__main__":
    freeze_support()
    value = Value(c_wchar_p, '')
    p = Process(target=test, args=(value,))
    p.start()
    print(p.pid)
    # this try block is to also allow p.run()
    try:
        p.join()
        p.terminate()
    except:
        pass
    print(value.value)
What I tried and does not work:
- ctypes c_wchar_p and c_char_p; both result in the same freezing
- leaving out x.get_lock()
- leaving out freeze_support()
What works (but does not help):
- using a float as the shared value (value = Value('d', 0) and x.value = 1)
- running the Process without starting a subprocess (replacing p.start() with p.run())
I am using Windows 10 64 bit and Python 3.6.4 (Spyder, but also tried outside of Spyder).
Any help welcome!
A shared pointer won't work in another process because the pointer is only valid in the process in which it was created. Instead, use an array:
import multiprocessing as mp

def test(x):
    x.value = b'Test worked!'

if __name__ == "__main__":
    x = mp.Array('c', 15)
    p = mp.Process(target=test, args=(x,))
    p.start()
    p.join()
    print(x.value)
Output:
b'Test worked!'
Note that array type 'c' is specialized and returns a SynchronizedString, whereas other types return a SynchronizedArray. Here's how to use type 'u', for example:
import multiprocessing as mp

def test(x):
    x.get_obj().value = 'Test worked!'

if __name__ == "__main__":
    x = mp.Array('u', 15)
    p = mp.Process(target=test, args=(x,))
    p.start()
    p.join()
    print(x.get_obj().value)
Output:
Test worked!
Note that non-atomic operations on the wrapped value, such as += (which does a read/modify/write), should be protected with a with x.get_lock(): context manager.
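A minimal sketch of that locking pattern, assuming a shared integer counter incremented from several processes:

import multiprocessing as mp

def add_one(counter):
    for _ in range(1000):
        with counter.get_lock():  # guards the non-atomic read/modify/write
            counter.value += 1

if __name__ == "__main__":
    counter = mp.Value('i', 0)
    procs = [mp.Process(target=add_one, args=(counter,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)  # reliably 4000 with the lock; possibly less without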
I am following the principles laid down in this post to safely output results that will eventually be written to a file. Unfortunately, the code only prints 1 and 2, and not 3 to 6.
import os
import argparse
import pandas as pd
import multiprocessing
from multiprocessing import Process, Queue
from time import sleep

def feed(queue, parlist):
    for par in parlist:
        queue.put(par)
    print("Queue size", queue.qsize())

def calc(queueIn, queueOut):
    while True:
        try:
            par = queueIn.get(block=False)
            res = doCalculation(par)
            queueOut.put((res))
            queueIn.task_done()
        except:
            break

def doCalculation(par):
    return par

def write(queue):
    while True:
        try:
            par = queue.get(block=False)
            print("response:", par)
        except:
            break

if __name__ == "__main__":
    nthreads = 2
    workerQueue = Queue()
    writerQueue = Queue()
    considerperiod = [1, 2, 3, 4, 5, 6]
    feedProc = Process(target=feed, args=(workerQueue, considerperiod))
    calcProc = [Process(target=calc, args=(workerQueue, writerQueue)) for i in range(nthreads)]
    writProc = Process(target=write, args=(writerQueue,))
    feedProc.start()
    feedProc.join()
    for p in calcProc:
        p.start()
    for p in calcProc:
        p.join()
    writProc.start()
    writProc.join()
On running, the code prints:
$ python3 tst.py
Queue size 6
response: 1
response: 2
Also, is it possible to ensure that the write function always outputs 1, 2, 3, 4, 5, 6, i.e. in the same order in which the data is fed into the feed queue?
The error is with the task_done() call: a plain multiprocessing.Queue has no task_done() method (only multiprocessing.JoinableQueue does), so the call raises an AttributeError that your bare except swallows, breaking out of the loop. That is also why exactly two responses are printed: each of your two worker processes handles one item, hits the exception on task_done(), and exits. If you remove that call it works, but then the loop only ends because queueIn.get(block=False) throws an exception once the queue is empty. That might be just enough for your use case; a better way, though, is to use sentinels (as suggested in the multiprocessing docs, see the last example there). Here's a little rewrite so your program uses sentinels:
from multiprocessing import Process, Queue

def feed(queue, parlist, nthreads):
    for par in parlist:
        queue.put(par)
    for i in range(nthreads):
        queue.put(None)
    print("Queue size", queue.qsize())

def calc(queueIn, queueOut):
    while True:
        par = queueIn.get()
        if par is None:
            break
        res = doCalculation(par)
        queueOut.put((res))

def doCalculation(par):
    return par

def write(queue):
    while not queue.empty():
        par = queue.get()
        print("response:", par)

if __name__ == "__main__":
    nthreads = 2
    workerQueue = Queue()
    writerQueue = Queue()
    considerperiod = [1, 2, 3, 4, 5, 6]
    feedProc = Process(target=feed, args=(workerQueue, considerperiod, nthreads))
    calcProc = [Process(target=calc, args=(workerQueue, writerQueue)) for i in range(nthreads)]
    writProc = Process(target=write, args=(writerQueue,))
    feedProc.start()
    feedProc.join()
    for p in calcProc:
        p.start()
    for p in calcProc:
        p.join()
    writProc.start()
    writProc.join()
A few things to note:
- The sentinel is a None put into the queue. Note that you need one sentinel for every worker process.
- For the write function you don't need sentinel handling, as there's only one consumer process and no concurrency to handle. (If you used the empty()-then-get() pattern in your calc function instead, you would run into a problem: with only one item left in the queue, both workers could see empty() return False at the same time, both would call get(), and one of them would block forever.)
- You don't need to put feed and write into processes at all; just run them in your main function, since you don't want them in parallel anyway (a sketch of this simplification follows below).
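A minimal sketch of that simplification, reusing calc and doCalculation from above (only the workers remain separate processes):

if __name__ == "__main__":
    nthreads = 2
    workerQueue = Queue()
    writerQueue = Queue()
    for par in [1, 2, 3, 4, 5, 6]:  # feed inline, no extra process
        workerQueue.put(par)
    for _ in range(nthreads):       # one sentinel per worker
        workerQueue.put(None)
    calcProc = [Process(target=calc, args=(workerQueue, writerQueue)) for _ in range(nthreads)]
    for p in calcProc:
        p.start()
    for p in calcProc:
        p.join()
    while not writerQueue.empty():  # write inline, no extra process
        print("response:", writerQueue.get())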
> how can I have the same order in output as in input? [...] I guess multiprocessing.map can do this
Yes, map keeps the order. Here's your program rewritten into something simpler (you don't need workerQueue and writerQueue at all), with random sleeps added to prove that the output is still in order:
from multiprocessing import Pool
import time
import random

def calc(val):
    time.sleep(random.random())
    return val

if __name__ == "__main__":
    considerperiod = [1, 2, 3, 4, 5, 6]
    with Pool(processes=2) as pool:
        print(pool.map(calc, considerperiod))
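If you want to consume results as they become available while still preserving input order, Pool.imap, the lazy variant of map, does that; a small sketch reusing calc from above:

with Pool(processes=2) as pool:
    for res in pool.imap(calc, considerperiod):
        print("response:", res)  # arrives incrementally, still in input order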