I'm trying to use a concurrent.futures.ProcessPoolExecutor() to run some tasks in different processes (on Windows, if that matters). I'm getting the error:
Traceback (most recent call last):
File "C:\Users\Matthew\Anaconda3\lib\multiprocessing\queues.py", line 241, in _feed
obj = _ForkingPickler.dumps(obj)
File "C:\Users\Matthew\Anaconda3\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
AttributeError: Can't pickle local object '_RunnerThread._make_new_task.<locals>.<lambda>'
I completely understand why this is happening, and I can fix it.
But how do I detect that it's happened?
Let me explain. I was initially testing this area of code using pytest and some unit tests, but what happened was that the unit test would just deadlock, taking forever to complete. No error raised, no output captured that I could see. It was only when I ran the application that I saw the above printed to the terminal. But it appears to be an exception thrown in a different thread. The Future I get back from the submit method is well-formed, and continues to claim that it's "running".
(Update: This is slightly incorrect. If I submit my task, wait a little time, and then deliberately raise an Exception, then pytest does capture the stderr output above. But if I just try to wait for my Future to complete, there is no error, just an infinite wait...)
So, from the perspective of my application, everything worked fine, just the task is taking forever to complete. This is obviously rather annoying: I'd really like to detect that I've made this mistake from a unit test. How can I do that?
Edit: A self-contained example:
import concurrent.futures

def func():
    return 5

if __name__ == "__main__":
    with concurrent.futures.ProcessPoolExecutor() as executor:
        futures = [executor.submit(lambda: func())]
        for result in concurrent.futures.as_completed(futures):
            print(result.result())
If you run this, you get a stack-trace printed to stderr, but the script itself just runs forever. I'd like to make it error out...
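For reference, here is a rough sketch of the kind of check that would at least make a test fail instead of hanging (the helper name submit_checked is made up purely for illustration; the pickle check only catches the unpicklable-callable case up front, while the timeout covers any other stuck future):

import pickle
import concurrent.futures

def submit_checked(executor, fn, *args, timeout=10):
    # Fail fast if the callable or its arguments can't be pickled,
    # instead of letting the feeder thread swallow the error.
    pickle.dumps(fn)
    pickle.dumps(args)
    future = executor.submit(fn, *args)
    # A timeout turns the "infinite wait" into a TimeoutError in the test.
    return future.result(timeout=timeout)

def func():
    return 5

if __name__ == "__main__":
    with concurrent.futures.ProcessPoolExecutor() as executor:
        print(submit_checked(executor, func))          # prints 5
        # submit_checked(executor, lambda: func())     # raises at pickle.dumps(fn)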
I'm not using recursion, but I think some third-party code I'm using has too many nested function calls.
This is the code I used as an example to create my project
https://github.com/ehong-tl/micropySX126X
I could create a cut-down example, but I don't really see the point, as you would need two of the Pico-Lora-SX126X boards to run it. (These are cool little gadgets; they can send text messages to each other over very large distances.)
The main difference between my code and the example is that I'm running this code in a second thread. If it is run in the primary thread it works, so I'm assuming there is less call depth available to a thread run on the second core.
Basically, the second thread is waiting for an incoming LoRa message while the main thread is interacting with the user. When a LoRa message comes in, it triggers the error below.
Here is my hardware and micropython version
MicroPython v1.19.1-746-gf2de289ef on 2022-12-13; Raspberry Pi Pico W with RP2040
Here is the error message
Traceback (most recent call last):
File "sx1262.py", line 275, in _onIRQ
File "subprocess.py", line 73, in cb
File "sx1262.py", line 187, in recv
File "sx1262.py", line 251, in _readData
File "sx126x.py", line 483, in startReceive
File "sx126x.py", line 540, in startReceiveCommon
File "sx126x.py", line 1133, in setPacketParams
File "sx126x.py", line 1228, in fixInvertedIQ
File "sx126x.py", line 1034, in writeRegister
File "sx126x.py", line 1274, in SPIwriteCommand
File "sx126x.py", line 1291, in SPItransfer
RuntimeError: maximum recursion depth exceeded
The function SPItransfer appears to be at or around the 10th level.
I have not modified any of these functions.
I have tried adding garbage collection calls here and there, but I was just guessing and it didn't make any difference.
Any ideas how I can increase this depth to allow for more nested function calls?
Thanks
David
Update
I found a little script that calls itself to test the possible recursion depth.
When run in the primary thread it allows 39 function calls, but only 17 when run in the second thread.
So this doesn't explain why my project is hitting this error after what appears to be about 10 levels of function calls.
# Based on an example found here
# https://forum.micropython.org/viewtopic.php?t=3091
import _thread

a = 0
fail = False

def recursionTest():
    global a, fail
    a += 1
    print("loop count=" + str(a))
    if not fail:
        try:
            recursionTest()
        except Exception as errorMsg:
            print(errorMsg)
            fail = True

# Runs in the primary thread
#print("Main thread")
#recursionTest()

# Runs in the second thread
print("Sub thread")
_thread.start_new_thread(recursionTest, ())
Output
Sub thread
>loop count=1
loop count=2
loop count=3
loop count=4
loop count=5
loop count=6
loop count=7
loop count=8
loop count=9
loop count=10
loop count=11
loop count=12
loop count=13
loop count=14
loop count=15
loop count=16
loop count=17
maximum recursion depth exceeded
I'm not sure why, but I needed to set the stack size immediately before the call that starts the second thread, or it seemed to make no difference.
Like this
_thread.stack_size(5*1024)
_thread.start_new_thread(recursionTest,())
I only needed to increase it by 1 KB from the default 4 KB for the second thread for my program to succeed.
Hope this helps someone else.
Hi all. I've run into a problem. What I want to do is run a socket server using the multiprocessing package, but as soon as I run the program an error is reported: TypeError: cannot pickle '_queue.SimpleQueue' object. I have no idea why this happens. Could anyone help me resolve it?
class xxx:
    ......
    def test_connect(self, max_num=10, alive=True, mode='TCP', IP='127.0.0.1', PORT=8080):
        p = multiprocessing.Process(target=self.create_tcp_server)
        p.start()
        p.join()
    ......
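For illustration, here is a minimal sketch of one common workaround (the name run_tcp_server and its body are placeholders, not the real create_tcp_server): give Process a module-level function and only picklable arguments, so that self, which presumably holds the _queue.SimpleQueue somewhere, never has to be pickled.

import multiprocessing
import socket

def run_tcp_server(ip, port):
    # Placeholder for the real create_tcp_server logic.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((ip, port))
        srv.listen()
        conn, addr = srv.accept()
        print("connected:", addr)

class xxx:
    def test_connect(self, IP='127.0.0.1', PORT=8080):
        # No bound method as target, so nothing unpicklable crosses the process boundary.
        p = multiprocessing.Process(target=run_tcp_server, args=(IP, PORT))
        p.start()
        p.join()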
If not, I have another question, about multi-threading.
As the picture shows, I want to run comac_connect via a thread pool, and inside comac_connect it calls another function, message_handle, through the same thread pool. But in practice message_handle never gets executed and the program just hangs there. Is there any way I can resolve this?
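A guess at what might be happening, as a minimal sketch (the function bodies are invented; only the names mirror the question): if comac_connect waits for message_handle's result on the same pool and every worker is already occupied, there is no free worker left to run message_handle, so the program hangs.

from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=1)

def message_handle():
    return "handled"

def comac_connect():
    # Blocks the only worker while waiting for a task that can never start.
    return pool.submit(message_handle).result()

# pool.submit(comac_connect).result()  # hangs: classic pool self-deadlock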
It's my first time asking a question on here so bear with me.
I'm trying to make a python3 program that runs executable files for x amount of time and creates a log of all output in a text file. For some reason the code I have so far works only with some executables. I'm new to python and especially subprocess so any help is appreciated.
import time
import subprocess

def CreateLog(executable, timeout=5):
    time_start = time.time()
    process = subprocess.Popen(executable, stdout=subprocess.PIPE,
                               stderr=subprocess.DEVNULL, text=True)
    f = open("log.txt", "w")
    while process.poll() is None:
        output = process.stdout.readline()
        if output:
            f.write(output)
        if time.time() > time_start + timeout:
            process.kill()
            break
I was recently experimenting with crypto mining and came across nanominer. I tried using this Python code on nanominer and the log file was empty. I am aware that nanominer already logs its own output, but the point is why the Python code fails.
You are interacting through .poll() (R U dead yet?) and .readline().
It's not clear you want to do that.
There seem to be two cases for your long-lived child:
it runs "too long" silently
it runs forever, regularly producing output text at e.g. one-second intervals
The 2nd case is the easy one.
Just use for line in process.stdout:, consume the line,
peek at the clock, and maybe send a .kill() just as you're already doing.
No need for .poll(), as child exiting will produce EOF on that pipe.
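Here's a minimal sketch of that 2nd case, reworked from the question's CreateLog (the rename to create_log is mine): iterating over process.stdout blocks until the next line or EOF, so the .poll()/.readline() pair isn't needed, and the clock check stays as it was.

import time
import subprocess

def create_log(executable, timeout=5):
    time_start = time.time()
    process = subprocess.Popen(executable, stdout=subprocess.PIPE,
                               stderr=subprocess.DEVNULL, text=True)
    with open("log.txt", "w") as f:
        for line in process.stdout:      # blocks until a line or EOF
            f.write(line)
            if time.time() > time_start + timeout:
                process.kill()           # EOF follows, so the loop ends
                break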
For the 1st case, you will want to set an alarm.
See https://docs.python.org/3/library/signal.html#example
signal.signal(signal.SIGALRM, handler)
signal.alarm(5)
After "too long", five seconds, your handler will run.
It can do anything you desire.
You'll want it to have access to the process handle,
which will let you send a .kill().
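And a minimal sketch of the alarm approach for the 1st case (note signal.SIGALRM is Unix-only, so this won't run on Windows; the function name is again just illustrative):

import signal
import subprocess

def create_log_with_alarm(executable, timeout=5):
    process = subprocess.Popen(executable, stdout=subprocess.PIPE,
                               stderr=subprocess.DEVNULL, text=True)

    def handler(signum, frame):
        process.kill()                   # runs after `timeout` seconds

    signal.signal(signal.SIGALRM, handler)
    signal.alarm(timeout)
    with open("log.txt", "w") as f:
        for line in process.stdout:      # killing the child produces EOF here
            f.write(line)
    signal.alarm(0)                      # cancel the alarm if the child exited early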
I'll explain my question as best as I can. I really need your help, especially from people who are experts in Python multiprocessing, because I love multiprocessing and I'm just a beginner learning it.
def __handleDoubleClick(self, item):
    self.tmx_window.show()
    processes = []
    #self.tmx_window.fill_table(item.text(),self.language_code,self.xml_filepath.text())
    process_ft = Process(target=self.tmx_window.fill_table, args=(item.text(), self.language_code, self.xml_filepath.text()))
    processes.append(process_ft)
    process_ft.start()
    for process in processes:
        process.join()
Now, I have here a function (__handleDoubleClick) that simply does something when you double-click a widget in my PyQt5 GUI. As you can see, the line self.tmx_window.show() shows the second GUI that I have. If you are curious about the self.tmx_window object, this is its class; it simply inherits from QMainWindow and Ui_TmxWindow, where Ui_TmxWindow comes from the .py file generated by Qt Designer.
class TmxWindow(QMainWindow, Ui_TmxWindow):
    def __init__(self):
        super().__init__()
        # Set up the user interface from Designer.
        self.setupUi(self)
As you can also see, there is a function call, which is this (commented-out) line:
#self.tmx_window.fill_table(item.text(),self.language_code,self.xml_filepath.text())
I have commented it out because I want to make it a Process, since I want to apply multiprocessing and I need it to run alongside other processes in the future. So, as you can see, I have applied this:
process_ft = Process(target=self.tmx_window.fill_table, args=(item.text(), self.language_code, self.xml_filepath.text()))
processes.append(process_ft)
process_ft.start()
for process in processes:
    process.join()
The target there is the function self.tmx_window.fill_table, which comes from another class that I created an object from (that object being self.tmx_window). Without multiprocessing everything works fine when I call the function directly, but when I apply multiprocessing this error comes up. By the way, you'll see "TmxWindow object" in the error; TmxWindow is the class I'm referring to, the one the function belongs to.
Traceback (most recent call last):
File "main.py", line 127, in __handleDoubleClick
process_ft.start()
File "C:\Users\LENOVO\.conda\envs\USA24\lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
File "C:\Users\LENOVO\.conda\envs\USA24\lib\multiprocessing\context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\LENOVO\.conda\envs\USA24\lib\multiprocessing\context.py", line 327, in _Popen
return Popen(process_obj)
File "C:\Users\LENOVO\.conda\envs\USA24\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
reduction.dump(process_obj, to_child)
File "C:\Users\LENOVO\.conda\envs\USA24\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'TmxWindow' object
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\LENOVO\.conda\envs\USA24\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Users\LENOVO\.conda\envs\USA24\lib\multiprocessing\spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
I then thought of trying the same thing with threading instead of Process, and IT WORKED! I am familiar with the difference between threads and processes: based on what I've read, threads share memory while processes don't, because each process has its own (correct me if I'm wrong), and that's why I wanted to apply multiprocessing instead of multithreading.
So what I'm worried about is the error I've provided, and why it works with threading but not with Process. I feel like there's a lot about multiprocessing that I do not understand yet, and I am just curious: I simply provided a function to the Process object, and that function comes from a different class that I created an instance of. Can someone help me? Please. Thank you!
#self.tmx_window.fill_table(item.text(),self.language_code,self.xml_filepath.text())
thread_ft = threading.Thread(target=self.tmx_window.fill_table,args=[item.text(),self.language_code,self.xml_filepath.text()])
threads.append(thread_ft)
thread_ft.start()
The important aspect to consider is that no access to GUI elements (including creating new ones) is allowed from any external thread besides the main Qt thread.
When using multiprocessing, each process has its own threads (I'm simplifying a lot, be aware), so if you're using multiprocessing you're basically also using different threads. Also, communication between processes in Python is done by pickling (which doesn't work very well with Qt objects).
While in some circumstances accessing UI elements from other threads might work, it's usually just by coincidence, as the current hw/sw configuration allows those threads to work synchronously, since they are probably very responsive and their communication queue isn't troubled by other events. But, again, that's mostly a coincidence. I'm pretty sure that under other conditions (slow CPU, other concurrent processes, etc.) your code won't work even when using threading.
So, the only safe solution is to use threading, but only along with Qt's signals and slots whenever you need to interact in any way with UI elements. Which means that you have to use a QThread subclass (since it inherits from QObject, it provides the signal/slot mechanism), do your "processing" there, and communicate with the main Qt thread through your custom signals, which will therefore be connected to the function that actually updates your item view.
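A minimal sketch of that pattern, assuming the heavy work can produce plain Python data for the table (the class name FillTableWorker, the slot name add_row, and the row format are illustrative, not from your code):

from PyQt5.QtCore import QThread, pyqtSignal

class FillTableWorker(QThread):
    row_ready = pyqtSignal(list)   # carries one row of plain data to the GUI thread

    def __init__(self, text, language_code, xml_filepath, parent=None):
        super().__init__(parent)
        self.text = text
        self.language_code = language_code
        self.xml_filepath = xml_filepath

    def run(self):
        # Do the slow, non-GUI work here (parsing the XML file, etc.) and emit
        # plain data; never touch widgets from this thread.
        for row in [[self.text, self.language_code], [self.xml_filepath, "..."]]:  # stand-in rows
            self.row_ready.emit(row)

# In __handleDoubleClick (main Qt thread), something along these lines:
#   self.worker = FillTableWorker(item.text(), self.language_code, self.xml_filepath.text())
#   self.worker.row_ready.connect(self.tmx_window.add_row)  # a slot that updates the table
#   self.worker.start()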
I'm totally new to python's asyncio. I understand the idea, but even the most simple task won't work due to a lack of understanding on my side.
Here's my code, which tries to read a file (and ultimately process each line of it) regularly:
#!/usr/bin/env python3

import asyncio
import aiofiles

async def main():
    async def work():
        while True:
            async with aiofiles.open('../v2.rst', 'r') as f:
                async for line in f:
                    # real work will happen here
                    pass
            print('loop')
            await asyncio.sleep(2)

    tasks = asyncio.gather(
        work(),
    )

    await asyncio.sleep(10)

    # Cancel tasks
    tasks.add_done_callback(lambda r: r.exception())
    tasks.cancel()

if __name__ == '__main__':
    asyncio.run(main())
The work function should read a file, do some line-by-line processing, and then wait 2 seconds.
What happens is, that the function does "nothing". It blocks, I never see loop printed.
Where is my error in understanding asyncio?
The code hides the exception because the callback installed with add_done_callback retrieves the exception, only to immediately discard it. This prevents the (effectively unhandled) exception from getting logged by asyncio, which happens if you comment out the line with add_done_callback.
Also:
the code calls gather without awaiting it, either immediately after the call or later.
it unnecessarily invokes gather with a single coroutine. If the idea is to run the coroutine in the background, the idiomatic way to do so is with asyncio.create_task(work()).
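For illustration, here's a sketch of how the original example could be restructured along those lines (same file path and aiofiles usage as the question; the 10-second run time is also kept from the original):

#!/usr/bin/env python3
import asyncio
import aiofiles

async def work():
    while True:
        async with aiofiles.open('../v2.rst', 'r') as f:
            async for line in f:
                pass  # real work will happen here
        print('loop')
        await asyncio.sleep(2)

async def main():
    task = asyncio.create_task(work())   # schedule work() in the background
    await asyncio.sleep(10)
    task.cancel()
    try:
        await task                       # re-raises any real exception from work()
    except asyncio.CancelledError:
        pass                             # normal shutdown path

if __name__ == '__main__':
    asyncio.run(main())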