python tkinter non-blocking messagebox - python-3.x

I am trying to create a non-blocking tkinter messagebox using multithreading. I know messagebox is not designed to be used this way, but I prefer the messagebox concept and its functions. My actual main program is quite large and complex, so here is a small example.
Program toto.py:
import threading
from disp_message import disp_message

msg = "This is a test message"
msgtype = 1

t1 = threading.Thread(target=disp_message, args=(msg, msgtype))
t1.start()
t1.join()

for i in range(100000):
    print(i)
disp_message(msg, msgtype)
print("Done!")
The disp_message function is in another file:
from tkinter import *
from tkinter import messagebox

def disp_message(msg, msgtype):
    top = Tk()
    top.withdraw()
    if msgtype == 1:
        messagebox.showwarning("Warning", msg)
    elif msgtype == 2:
        messagebox.showinfo("Information", msg)
    else:
        messagebox.showerror("Error", msg)
When I run this program I have two issues.
1. The following error:
Traceback (most recent call last):
File "toto.py", line 13, in <module>
disp_message(msg,msgtype)
File "c:\NSE\scripts\disp_message.py", line 8, in disp_message
messagebox.showwarning("Warning",msg)
File "C:\ProgramData\Anaconda\lib\tkinter\messagebox.py", line 87, in showwarning
return _show(title, message, WARNING, OK, **options)
File "C:\ProgramData\Anaconda\lib\tkinter\messagebox.py", line 72, in _show
res = Message(**options).show()
File "C:\ProgramData\Anaconda\lib\tkinter\commondialog.py", line 39, in show
w = Frame(self.master)
File "C:\ProgramData\Anaconda\lib\tkinter\__init__.py", line 2744, in __init__
Widget.__init__(self, master, 'frame', cnf, {}, extra)
File "C:\ProgramData\Anaconda\lib\tkinter\__init__.py", line 2299, in __init__
(widgetName, self._w) + extra + self._options(cnf))
RuntimeError: main thread is not in main loop
2. It displays the messagebox and waits for acknowledgement, while my objective is to display the messagebox and, in parallel, let the program execute and finish.
Can you please help?
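For what it's worth, the usual fix is the inverse arrangement: tkinter is not thread-safe and must stay in the main thread, so it is the long-running work that moves to a background thread. A minimal sketch of that inversion (not the original toto.py; the counting loop stands in for the real program):
import threading
from tkinter import Tk, messagebox

def worker():
    # the long-running work, moved off the main thread
    for i in range(100000):
        print(i)
    print("Done!")

t1 = threading.Thread(target=worker)
t1.start()

root = Tk()
root.withdraw()  # hide the empty root window
messagebox.showwarning("Warning", "This is a test message")  # blocks only the GUI thread
t1.join()  # the worker kept running while the box was displayed
This way the messagebox still blocks until acknowledged, but only in the GUI thread; the rest of the program runs to completion in parallel.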

Related

Why shouldn't an Event instance be put into a Queue in Python multiprocessing?

When I try to put an Event instance into a Queue, the Python interpreter raises a RuntimeError, as below:
RuntimeError: Condition objects should only be shared between processes through inheritance
My Example Code:
import time
from multiprocessing import Process, Queue, Event

def slaver(q: Queue, e: Event):
    while True:
        print("do1", e)
        _, _ = q.get(block=True)
        time.sleep(3)
        e.set()
        print("do2")

def start():
    q = Queue()
    e = Event()
    p = Process(target=slaver, args=(q, e))
    p.start()
    while True:
        print("1")
        q.put((1, e))
        print("2", e)
        wait = e.wait(timeout=1)
        print("3", wait)
        e.clear()
        print("4")
        time.sleep(5)

if __name__ == '__main__':
    start()
Output
1
2 <multiprocessing.synchronize.Event object at 0x1028d8df0>
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/queues.py", line 239, in _feed
obj = _ForkingPickler.dumps(obj)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/synchronize.py", line 220, in __getstate__
context.assert_spawning(self)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py", line 359, in assert_spawning
raise RuntimeError(
RuntimeError: Condition objects should only be shared between processes through inheritance
do1 <multiprocessing.synchronize.Event object at 0x1075b6eb0>
3 False
4
1
2 <multiprocessing.synchronize.Event object at 0x1028d8df0>
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/queues.py", line 239, in _feed
obj = _ForkingPickler.dumps(obj)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/synchronize.py", line 220, in __getstate__
context.assert_spawning(self)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py", line 359, in assert_spawning
raise RuntimeError(
RuntimeError: Condition objects should only be shared between processes through inheritance
And if I replace q.put((1, e)) with q.put((1, 2)), the exception disappears.
But there is an example of using Event with multiple threads; the difference is that my code uses processes. The multiprocessing Event is cloned from threading, so what is the difference?
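The error message itself points at the usual fix: share the Event through inheritance, i.e. hand it to the Process once via args, and send only plain picklable data through the Queue. A minimal sketch of that rearrangement (the (1, 2) payload is a stand-in):
import time
from multiprocessing import Process, Queue, Event

def slaver(q, e):
    while True:
        _, _ = q.get(block=True)  # only plain data arrives via the queue
        time.sleep(3)
        e.set()                   # e was shared through inheritance

def start():
    q = Queue()
    e = Event()
    p = Process(target=slaver, args=(q, e))  # inheritance happens here
    p.start()
    while True:
        q.put((1, 2))             # picklable payload only, not the Event
        print("signalled:", e.wait(timeout=5))
        e.clear()
        time.sleep(5)

if __name__ == '__main__':
    start()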

How to share files (file objects) between various processes in Python?

I'm implementing multiprocessing: there are a lot of JSON files that I want various processes to read and write. I don't want a race condition between them, so the processes also need to be synchronised.
I'm trying the following dummy code, but I don't know why it is not working. I'm using a multiprocessing Queue to share the open file object. Could you suggest what I'm doing wrong? I'm getting an error, and I'm new to multiprocessing.
Below is my code:
from multiprocessing import Queue, Process, Lock

def writeTofile(q, lock, i):
    print(f'some work by {i}')
    text = f" Process {i} -- "
    ans = ""
    for _ in range(10000):  # `_` avoids shadowing the process index `i`
        ans += text
    # critical section
    lock.acquire()
    file = q.get()
    q.put(file)
    file.write(ans)
    lock.release()
    print(f'updated by process {i}')

def main():
    q = Queue()
    lock = Lock()
    jobs = []
    with open("test.txt", mode='a') as file:
        q.put(file)
        for i in range(4):
            process = Process(target=writeTofile, args=(q, lock, i))
            jobs.append(process)
            process.start()
        for j in jobs:
            j.join()
    print('completed')

if __name__ == "__main__":
    main()
This is the error I'm getting:
Traceback (most recent call last):
File "/Users/akshaysingh/Desktop/ipnb/multi-processing.py", line 42, in <module>
main()
File "/Users/akshaysingh/Desktop/ipnb/multi-processing.py", line 27, in main
q.put(file)
File "<string>", line 2, in put
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/managers.py", line 808, in _callmethod
conn.send((self._id, methodname, args, kwds))
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/connection.py", line 211, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
TypeError: cannot pickle '_io.TextIOWrapper' object
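Since an open file object cannot be pickled, the usual workaround is to pass something that can be, the path, and let each process open the file itself inside the critical section. A minimal sketch under that assumption (write_to_file is a hypothetical rename of writeTofile):
from multiprocessing import Process, Lock

def write_to_file(path, lock, i):
    print(f'some work by {i}')
    text = f" Process {i} -- " * 10000
    with lock:  # critical section: one writer at a time
        with open(path, mode='a') as f:  # opened inside the worker
            f.write(text)
    print(f'updated by process {i}')

def main():
    lock = Lock()
    jobs = [Process(target=write_to_file, args=('test.txt', lock, i))
            for i in range(4)]
    for p in jobs:
        p.start()
    for p in jobs:
        p.join()
    print('completed')

if __name__ == '__main__':
    main()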

How to run a shell command from Python's subprocess module inside tf.py_function in a tf.data pipeline?

I have created a tf.data.Dataset pipeline to which I pass a Python function that runs the sox command through Python's subprocess module. The code runs fine on CPU under Windows, but fails on Google Colab's GPU runtime, which runs Linux. Here is the code -
import os
import subprocess
import tempfile

import numpy as np
from scipy.io import wavfile
import tensorflow as tf

def tf_dynamic_compression(inp_sample):
    def dynamic_compression(inp_sample, drc):  # drc was written as <tf.string object>
        try:
            fdesc, infile = tempfile.mkstemp(suffix=".wav")
            os.close(fdesc)
            fdesc, outfile = tempfile.mkstemp(suffix=".wav")
            os.close(fdesc)
            wavfile.write(infile, <sample rate>, inp_sample.numpy())  # writes audio file to disk
            arguments = ['sox', infile, outfile, '-q', 'compand',
                         *DRC_PRESET[drc.numpy().decode('utf-8')]]
            subprocess.check_call(arguments)
        finally:
            os.unlink(infile)
            os.unlink(outfile)
        return tf.convert_to_tensor(inp_sample, dtype=tf.float32)

    drc = np.random.choice(<list of strings>)
    [inp_sample, ] = tf.py_function(dynamic_compression,
                                    [inp_sample, tf.constant(drc)], [tf.float32])
    inp_sample.set_shape(target_sample_size)  # target_size = <some int>
    return inp_sample

...
inp = tf.data.Dataset.from_tensor_slices(inp)  # inp shape e.g. (4, 500)
inp = inp.map(tf_dynamic_compression)
for i in inp:
    print(i.numpy())
And the error it throws -
UnknownError: 2 root error(s) found.
(0) Unknown: CalledProcessError: Command 'None' died with <Signals.SIGINT: 2>.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/script_ops.py", line 233, in __call__
return func(device, token, args)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/script_ops.py", line 122, in __call__
ret = self._func(*args)
File "/tmp/tmpn_9q_jxm.py", line 24, in dynamic_compression
ag__.converted_call(subprocess.check_call, dynamic_compression_scope.callopts, (arguments,), None, dynamic_compression_scope)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/autograph/impl/api.py", line 541, in converted_call
result = converted_f(*effective_args)
File "/tmp/tmpozf4qyav.py", line 50, in tf__check_call
ag__.if_stmt(cond_1, if_true_1, if_false_1, get_state_1, set_state_1, (), ())
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/autograph/operators/control_flow.py", line 895, in if_stmt
return _py_if_stmt(cond, body, orelse)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/autograph/operators/control_flow.py", line 1004, in _py_if_stmt
return body() if cond else orelse()
File "/tmp/tmpozf4qyav.py", line 44, in if_true_1
raise ag__.converted_call(CalledProcessError, check_call_scope.callopts, (retcode, cmd), None, check_call_scope)
How to solve this problem?
The problem was caused by sox not being executed by the subprocess module. This has been answered before. There are two solutions -
First, change the arguments line to
arguments = " ".join(['sox',infile,outfile,'-q','compand',*DRC_PRESET[<tf.string object>.numpy().decode('utf-8')]])
and then
os.system(arguments)
OR
subprocess.check_call(arguments, shell=True)
For me, the second one worked.
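Put together, the second fix amounts to handing subprocess a single shell string. A minimal sketch, where apply_compand and preset are hypothetical stand-ins for the original names:
import subprocess

def apply_compand(infile, outfile, preset):
    # one shell string + shell=True lets the shell resolve the sox binary
    cmd = " ".join(['sox', infile, outfile, '-q', 'compand', *preset])
    subprocess.check_call(cmd, shell=True)  # raises CalledProcessError on a non-zero exit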

Does `concurrent.futures.ProcessPoolExecutor()` have any restriction on the number of processes?

I wrote a simple piece of code using concurrent.futures.ProcessPoolExecutor(), which you can see below. I'm using Python 3.7.4 on Windows 10 (64-bit) on a Core i7 laptop.
import time
import concurrent.futures

def f(x):
    lo = 0
    for i in range(x):
        lo += i
    return lo

n = 7

if __name__ == '__main__':
    t1 = time.perf_counter()
    with concurrent.futures.ProcessPoolExecutor() as executor:
        Ans = [executor.submit(f, 10**7-i) for i in range(n)]
        for f in concurrent.futures.as_completed(Ans):
            print(f.result())
    t2 = time.perf_counter()
    print('completed at', t2-t1, 'seconds')
The variable n determines how many processes are going to execute. When I set n to 1, 2, 4, or 7, everything works fine. For example, the output for n=7 is
49999995000000
49999955000010
49999965000006
49999985000001
49999975000003
49999945000015
49999935000021
completed at 2.0607623 seconds
However, for n=10 it gives the following error:
49999945000015
49999955000010
49999965000006
concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "E:\Python37\lib\multiprocessing\queues.py", line 236, in _feed
obj = _ForkingPickler.dumps(obj)
File "E:\Python37\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function f at 0x00000285BFC4E0D8>: it's not the same object as __main__.f
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "e:/Python37/Python files/Parallel struggle/Python_20191219_parallel_3.py", line 23, in <module>
print(f.result())
File "E:\Python37\lib\concurrent\futures\_base.py", line 428, in result
return self.__get_result()
File "E:\Python37\lib\concurrent\futures\_base.py", line 384, in __get_result
raise self._exception
File "E:\Python37\lib\multiprocessing\queues.py", line 236, in _feed
obj = _ForkingPickler.dumps(obj)
File "E:\Python37\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function f at 0x00000285BFC4E0D8>: it's not the same object as __main__.f
Why do some of the processes finish if something is wrong with the code? What happened to cause the error? Is it specific to Python on Windows? Is it about the number of CPUs?
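Most likely this is the code rather than a process limit: the loop variable f rebinds the module-level function f, and with n=10 some submissions are still being pickled for the workers (Windows spawns processes) after the rebinding, so the lookup of __main__.f no longer finds the original function. A minimal sketch of the fix, simply renaming the loop variable:
import time
import concurrent.futures

def f(x):
    return sum(range(x))

n = 10

if __name__ == '__main__':
    t1 = time.perf_counter()
    with concurrent.futures.ProcessPoolExecutor() as executor:
        futures = [executor.submit(f, 10**7 - i) for i in range(n)]
        for fut in concurrent.futures.as_completed(futures):  # not `f`
            print(fut.result())
    t2 = time.perf_counter()
    print('completed at', t2 - t1, 'seconds')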

Optional user input with timeout for Python 3

I am trying to write code that asks for user input and, if that input is not given within a set amount of time, assumes a default value and continues through the rest of the code without requiring the user to hit enter. I am running Python 3.5.1 on Windows 10.
I have looked through Keyboard input with timeout in Python, How to set time limit on raw_input, Timeout on a function call, and Python 3 Timed Input, black-boxing the answers, but none of them are suitable: they are not usable on Windows (principally because of signal.SIGALRM, which is only available on Linux), or they require the user to hit enter in order to exit the input.
Based on the above answers, however, I have attempted to scrape together a solution using multiprocessing which (as I think it should work) creates one process to ask for the input and another process to terminate the first one after the timeout period.
import multiprocessing
from time import time, sleep

def wait(secs):
    if secs == 0:
        return
    end = time() + secs
    current = time()
    while end > current:
        current = time()
        sleep(.1)
    return

def delay_terminate_process(process, delay):
    wait(delay)
    process.terminate()
    process.join()

def ask_input(prompt, term_queue, out_queue):
    command = input(prompt)
    process = term_queue.get()
    process.terminate()
    process.join()
    out_queue.put(command)

##### this doesn't even remotely work.....
def input_with_timeout(prompt, timeout=15.0):
    print(prompt)
    astring = 'no input'
    out_queue = multiprocessing.Queue()
    term_queue = multiprocessing.Queue()
    worker1 = multiprocessing.Process(target=ask_input, args=(prompt, term_queue, out_queue))
    worker2 = multiprocessing.Process(target=delay_terminate_process, args=(worker1, timeout))
    worker1.daemon = True
    worker2.daemon = True
    term_queue.put(worker2)
    print('Through overhead')
    if __name__ == '__main__':
        print('I am in if statement')
        worker2.start()
        worker1.start()
        astring = out_queue.get()
    else:
        print('I have no clue what happened that would cause this to print....')
        return
    print('returning')
    return astring

please = input_with_timeout('Does this work?', timeout=10)
But this fails miserably and yields:
Does this work?
Through overhead
I am in if statement
Traceback (most recent call last):
File "C:\Anaconda3\lib\multiprocessing\queues.py", line 241, in _feed
obj = ForkingPickler.dumps(obj)
File "C:\Anaconda3\lib\multiprocessing\reduction.py", line 50, in dumps
cls(buf, protocol).dump(obj)
File "C:\Anaconda3\lib\multiprocessing\queues.py", line 58, in __getstate__
context.assert_spawning(self)
File "C:\Anaconda3\lib\multiprocessing\context.py", line 347, in assert_spawning
' through inheritance' % type(obj).__name__
RuntimeError: Queue objects should only be shared between processes through inheritance
Does this work?
Through overhead
I have no clue what happened that would cause this to print....
Does this work?Process Process-1:
Traceback (most recent call last):
File "C:\Anaconda3\lib\multiprocessing\queues.py", line 241, in _feed
obj = ForkingPickler.dumps(obj)
File "C:\Anaconda3\lib\multiprocessing\reduction.py", line 50, in dumps
cls(buf, protocol).dump(obj)
File "C:\Anaconda3\lib\multiprocessing\process.py", line 287, in __reduce__
'Pickling an AuthenticationString object is '
TypeError: Pickling an AuthenticationString object is disallowed for security reasons
Traceback (most recent call last):
File "C:\Anaconda3\lib\multiprocessing\process.py", line 254, in _bootstrap
self.run()
File "C:\Anaconda3\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "C:\Anaconda3\saved_programs\a_open_file4.py", line 20, in ask_input
command = input(prompt)
EOFError: EOF when reading a line
Does this work?
Through overhead
I have no clue what happened that would cause this to print....
Traceback (most recent call last):
File "C:\Anaconda3\lib\multiprocessing\queues.py", line 241, in _feed
obj = ForkingPickler.dumps(obj)
File "C:\Anaconda3\lib\multiprocessing\reduction.py", line 50, in dumps
cls(buf, protocol).dump(obj)
File "C:\Anaconda3\lib\multiprocessing\queues.py", line 58, in __getstate__
context.assert_spawning(self)
File "C:\Anaconda3\lib\multiprocessing\context.py", line 347, in assert_spawning
' through inheritance' % type(obj).__name__
RuntimeError: Queue objects should only be shared between processes through inheritance
Process Process-2:
Traceback (most recent call last):
File "C:\Anaconda3\lib\multiprocessing\process.py", line 254, in _bootstrap
self.run()
File "C:\Anaconda3\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "C:\Anaconda3\saved_programs\a_open_file4.py", line 16, in delay_terminate_process
process.terminate()
File "C:\Anaconda3\lib\multiprocessing\process.py", line 113, in terminate
self._popen.terminate()
AttributeError: 'NoneType' object has no attribute 'terminate'
I really don't understand the multiprocessing module well, and although I have read the official docs I am unsure why this error occurred, or why the function call appears to have run three times in the process. Any help on how to resolve the error, or on achieving an optional user input in a cleaner manner, will be much appreciated by a novice programmer. Thanks!
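For the record, a simpler Windows-friendly pattern avoids multiprocessing entirely: let a daemon thread block on input() and wait on a queue with a timeout. A minimal sketch, under the assumption that it is acceptable for the daemon thread to linger on input() (which cannot be interrupted) until the process exits:
import queue
import threading

def input_with_timeout(prompt, timeout=15.0, default='no input'):
    answers = queue.Queue()

    def ask():
        answers.put(input(prompt))  # runs in a daemon thread

    threading.Thread(target=ask, daemon=True).start()
    try:
        return answers.get(timeout=timeout)  # wait up to `timeout` seconds
    except queue.Empty:
        return default  # no answer in time: fall back to the default

if __name__ == '__main__':
    please = input_with_timeout('Does this work? ', timeout=10)
    print('got:', please)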
