I'm not using recursion myself, but I think some third-party code I'm using has too many nested function calls.
This is the code I used as an example to create my project
https://github.com/ehong-tl/micropySX126X
I could create a cut-down example, but I don't really see the point, as you would need two of the Pico-Lora-SX126X boards to execute it. (These are cool little gadgets; they can send text messages to each other over very large distances.)
The main difference between my code and the example is that I'm running this code in a second thread. If it's run in the primary thread it works, so I'm assuming fewer levels of nested function calls are available to a thread running on the second core.
Basically, the second thread waits for an incoming LoRa message while the main thread interacts with the user. When a LoRa message comes in, it triggers the error below.
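Just to show the shape of it (a rough sketch, not my actual code; the pin numbers are hypothetical and it won't do anything without the hardware, but the driver calls are the ones from the linked example):

import _thread
from sx1262 import SX1262

# Hypothetical wiring; the real initialisation follows the linked example.
sx = SX1262(spi_bus=1, clk=10, mosi=11, miso=12, cs=3, irq=20, rst=15, gpio=2)
# ... sx.begin(...) with the LoRa parameters from the example ...

def rx_callback(events):
    # Invoked from the driver's IRQ handler when a LoRa event fires.
    if events & SX1262.RX_DONE:
        msg, err = sx.recv()
        print("received:", msg)

def lora_listener():
    # Runs on the second core: register the callback, then sit idle while
    # the driver waits for incoming messages.
    sx.setBlockingCallback(False, rx_callback)
    while True:
        pass

_thread.start_new_thread(lora_listener, ())
# ...the main thread carries on interacting with the user here...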
Here are my hardware and MicroPython version:
MicroPython v1.19.1-746-gf2de289ef on 2022-12-13; Raspberry Pi Pico W with RP2040
Here is the error message
Traceback (most recent call last):
  File "sx1262.py", line 275, in _onIRQ
  File "subprocess.py", line 73, in cb
  File "sx1262.py", line 187, in recv
  File "sx1262.py", line 251, in _readData
  File "sx126x.py", line 483, in startReceive
  File "sx126x.py", line 540, in startReceiveCommon
  File "sx126x.py", line 1133, in setPacketParams
  File "sx126x.py", line 1228, in fixInvertedIQ
  File "sx126x.py", line 1034, in writeRegister
  File "sx126x.py", line 1274, in SPIwriteCommand
  File "sx126x.py", line 1291, in SPItransfer
RuntimeError: maximum recursion depth exceeded
The function SPItransfer appears to be at or around the 10th level.
I have not modified any of these functions.
I have tried adding garbage collection calls here and there, but I was just guessing and it made no difference.
Any ideas how I can increase this depth to allow for more nested function calls?
Thanks
David
Update
I found a little script that calls itself to test the possible recursion depth.
When run in the primary thread it allows 39 function calls, but only 17 when run in the second thread.
So this doesn't explain why my project hits this error after what looks like only 10 levels of function calls.
# Based on an example found here
# https://forum.micropython.org/viewtopic.php?t=3091
import _thread

a = 0
fail = False

def recursionTest():
    global a, fail
    a += 1
    print("loop count=" + str(a))
    if not fail:
        try:
            recursionTest()
        except Exception as errorMsg:
            print(errorMsg)
            fail = True

# Runs in the primary thread
#print("Main thread")
#recursionTest()

# Runs in the second thread
print("Sub thread")
_thread.start_new_thread(recursionTest, ())
Output
Sub thread
loop count=1
loop count=2
loop count=3
loop count=4
loop count=5
loop count=6
loop count=7
loop count=8
loop count=9
loop count=10
loop count=11
loop count=12
loop count=13
loop count=14
loop count=15
loop count=16
loop count=17
maximum recursion depth exceeded
I'm not sure why, but I needed to set the stack size immediately before the call that starts the second thread; otherwise it seemed to make no difference.
Like this:
_thread.stack_size(5*1024)
_thread.start_new_thread(recursionTest,())
I only needed to increase the second thread's stack by 1 KB from the default 4 KB for my program to succeed.
Hope this helps someone else.
I'll explain my question as best as I can. I really need your help, especially from people who are experts in Python multiprocessing; I love multiprocessing, and I'm just a beginner learning it.
def __handleDoubleClick(self, item):
    self.tmx_window.show()
    processes = []
    #self.tmx_window.fill_table(item.text(), self.language_code, self.xml_filepath.text())
    process_ft = Process(target=self.tmx_window.fill_table,
                         args=(item.text(), self.language_code, self.xml_filepath.text()))
    processes.append(process_ft)
    process_ft.start()
    for process in processes:
        process.join()
Now, this function (__handleDoubleClick) simply does something when you double-click a widget in my PyQt5 GUI. The line self.tmx_window.show() shows the second GUI that I have. If you are curious about the self.tmx_window object, this is its class; it simply inherits from QMainWindow and Ui_TmxWindow, where Ui_TmxWindow comes from the .py file generated by Qt Designer.
class TmxWindow(QMainWindow, Ui_TmxWindow):
    def __init__(self):
        super().__init__()
        # Set up the user interface from Designer.
        self.setupUi(self)
As you can also see, there is a function call, which I have commented out here:
#self.tmx_window.fill_table(item.text(),self.language_code,self.xml_filepath.text())
I commented it out because I want to turn it into a Process object: I want to apply multiprocessing, and I need it to run alongside other processes in the future. So, as you can see, I applied this:
process_ft = Process(target=self.tmx_window.fill_table,
                     args=(item.text(), self.language_code, self.xml_filepath.text()))
processes.append(process_ft)
process_ft.start()
for process in processes:
    process.join()
The value of target there is a function, self.tmx_window.fill_table, and that function belongs to another class, of which I created the object self.tmx_window. Without multiprocessing everything works fine, since I call the function directly... but when I apply multiprocessing, the error below comes up. By the way, you'll see "TmxWindow object" in the error; TmxWindow is the class I'm referring to, where the function belongs.
Traceback (most recent call last):
  File "main.py", line 127, in __handleDoubleClick
    process_ft.start()
  File "C:\Users\LENOVO\.conda\envs\USA24\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Users\LENOVO\.conda\envs\USA24\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\LENOVO\.conda\envs\USA24\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Users\LENOVO\.conda\envs\USA24\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\LENOVO\.conda\envs\USA24\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'TmxWindow' object
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\LENOVO\.conda\envs\USA24\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Users\LENOVO\.conda\envs\USA24\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
Then I thought of trying the same thing with threading instead of Process, and IT WORKED! I am familiar with the difference between threads and processes: based on what I've read, threads share memory while processes don't, since each process has its own (correct me if I'm wrong), and that's why I wanted to apply multiprocessing instead of multithreading.
So the question I am worried about is the error I've provided... and why does it work with threading but not with Process? I feel like there's still a lot about multiprocessing that I do not understand, and I am curious: I just provided a function to the Process object, and that function comes from a different class which I created an instance of... So can someone help me? Pleaaaseee. Thank you!
#self.tmx_window.fill_table(item.text(), self.language_code, self.xml_filepath.text())
thread_ft = threading.Thread(target=self.tmx_window.fill_table,
                             args=[item.text(), self.language_code, self.xml_filepath.text()])
threads.append(thread_ft)
thread_ft.start()
The important aspect to consider is that no access to GUI elements (including creating new ones) is allowed from any external thread besides the main Qt thread.
When using multiprocessing, each process has its own threads (I'm simplifying a lot, be aware), so if you're using multiprocessing you're basically also using different threads. Also, communication between processes in Python is done by pickling (which doesn't work very well with Qt objects).
While in some circumstances accessing UI elements from other threads might work, it's usually just by coincidence: the current hw/sw configuration happens to let those threads work synchronously, since they are probably very responsive and their communication queue isn't troubled by other events. But, again, that's mostly a coincidence. I'm pretty sure that under other conditions (slow CPU, other competing processes, etc.) your code won't work even when using threading.
So, the only safe solution is to use threading, but only along with Qt's signals and slots whenever you need to access UI elements in any way. That means you have to use a QThread subclass (since it inherits from QObject, it provides the signal/slot mechanism), do your "processing" there, and communicate with the main Qt thread through your custom signals, which are in turn connected to the function that actually updates your item view.
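A minimal sketch of that pattern (the names FillTableWorker, compute_rows and append_row are placeholders standing in for your actual fill_table logic and table-update slot):

from PyQt5.QtCore import QThread, pyqtSignal

class FillTableWorker(QThread):
    # Worker thread: does the heavy processing and reports each computed
    # row back to the main Qt thread through a signal.
    row_ready = pyqtSignal(list)

    def __init__(self, text, language_code, xml_filepath, parent=None):
        super().__init__(parent)
        self.text = text
        self.language_code = language_code
        self.xml_filepath = xml_filepath

    def run(self):
        # No widget access here -- only computation and signal emission.
        for row in compute_rows(self.text, self.language_code, self.xml_filepath):
            self.row_ready.emit(row)

# In __handleDoubleClick, instead of touching the table from the worker:
#     self.worker = FillTableWorker(item.text(), self.language_code,
#                                   self.xml_filepath.text())
#     self.worker.row_ready.connect(self.tmx_window.append_row)  # slot runs in main thread
#     self.worker.start()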
I wrote a method to tail a log file, e.g.:
import os

def getTailLog(self):
    with open(self.strFileName, 'rb') as fileObj:
        fileObj.seek(0, os.SEEK_END)  # start tailing from the end of the file
        try:
            while True:
                if self.booleanGetTailExit:
                    break
                strLineContent = fileObj.readline()
                if not strLineContent:
                    continue  # nothing new yet; check again immediately
                yield strLineContent.decode('utf-8').strip('\n')
        except KeyboardInterrupt:
            pass
This method can tail the log, but it lags or even gets stuck when massive amounts of data are written to the log file within one second.
So how can I fix it?
Thanks a lot
To be honest, I do not fully understand what you mean by it lagging or even getting stuck when massive data is written to the log file in one second.
Your code contains a while loop which can potentially run forever. It looks like your code waits for a line to be appended to the end of the file self.strFileName. The problem is that it does not just wait: it continuously re-checks the content of the file. This is a so-called CPU-bound operation, which may cause huge delays in reading/writing within the same process (up to 10 seconds for a 100 KB binary file, in my experience). Python has this behavior because of the GIL (global interpreter lock).
To solve your problem you should replace the while-loop implementation with another one: you could use a schedule (at the very least, pauses between consecutive checks) or an event-driven approach (if you know when new lines are added to the file).
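For instance, a minimal rework of your generator that pauses between checks instead of spinning (the 0.1 s pause is arbitrary; tune it to your latency needs):

import os
import time

def getTailLog(self):
    with open(self.strFileName, 'rb') as fileObj:
        fileObj.seek(0, os.SEEK_END)
        while not self.booleanGetTailExit:
            strLineContent = fileObj.readline()
            if not strLineContent:
                time.sleep(0.1)  # yield the CPU instead of busy-waiting
                continue
            yield strLineContent.decode('utf-8').strip('\n')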
I'm trying to use a concurrent.futures.ProcessPoolExecutor() to run some tasks in separate processes (on Windows, if that matters). I'm getting the error:
Traceback (most recent call last):
  File "C:\Users\Matthew\Anaconda3\lib\multiprocessing\queues.py", line 241, in _feed
    obj = _ForkingPickler.dumps(obj)
  File "C:\Users\Matthew\Anaconda3\lib\multiprocessing\reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
AttributeError: Can't pickle local object '_RunnerThread._make_new_task.<locals>.<lambda>'
I completely understand why this is happening, and I can fix it.
But how do I detect that it's happened?
Let me explain. I was initially testing this area of code using pytest and some unit tests, but the unit test would just deadlock, taking forever to complete. No error raised, no captured output that I could see. It was only when I ran the application that I saw the above printed to the terminal. But it appears to be an exception thrown in a different thread. The Future I get back from the submit method is well-formed, and continues to claim that it's "running".
(Update: This is slightly incorrect. If I submit my task, wait a little time, and then deliberately raise an Exception, then pytest does capture the stderr output above. But if I just try to wait for my Future to complete, there is no error, just an infinite wait...)
So, from the perspective of my application, everything worked fine, just the task is taking forever to complete. This is obviously rather annoying: I'd really like to detect that I've made this mistake from a unit test. How can I do that?
Edit: A self-contained example:
import concurrent.futures

def func():
    return 5

if __name__ == "__main__":
    with concurrent.futures.ProcessPoolExecutor() as executor:
        futures = [executor.submit(lambda: func())]
        for result in concurrent.futures.as_completed(futures):
            print(result.result())
If you run this, you get a stack-trace printed to stderr, but the script itself just runs forever. I'd like to make it error out...
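(For reference, the fix I mean is simply submitting something picklable; replacing the lambda with the module-level function makes the example complete normally. It's detecting the mistake that I'm after.)

import concurrent.futures

def func():
    return 5

if __name__ == "__main__":
    with concurrent.futures.ProcessPoolExecutor() as executor:
        futures = [executor.submit(func)]  # picklable: no lambda wrapper
        for result in concurrent.futures.as_completed(futures):
            print(result.result())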
I am new to Python threading and have done some research on it. I would like to implement the following functionality, written here as pseudocode:
while True:
    while (file size < 1 GB):
        sleep for 1 minute
    process(file)
    file = next file
I want process() to run in a daemon thread. The next time line 4 is reached in the code (the next file has hit 1 GB), it should create a new thread if the previous one is still running. There should be a maximum of 3 such threads, used in rotation, so at any time there will be at least 2 threads free. Basically, any free thread should be given the job once the code reaches line 4.
Are daemon threads and a thread queue the things I need to look at, or is there some other way to solve this?
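Something like this queue-plus-daemon-workers layout is what I have in mind (a sketch; process(), next_file() and the size check are placeholders for my real logic):

import os
import queue
import threading
import time

file_queue = queue.Queue()

def worker():
    # Daemon worker: pulls file paths off the queue and processes them.
    while True:
        path = file_queue.get()
        process(path)  # placeholder for the real processing
        file_queue.task_done()

# Three workers, so at most three files are processed concurrently.
for _ in range(3):
    threading.Thread(target=worker, daemon=True).start()

current_file = next_file()  # placeholder
while True:
    while os.path.getsize(current_file) < 1024 ** 3:  # wait until ~1 GB
        time.sleep(60)
    file_queue.put(current_file)  # a free worker picks it up
    current_file = next_file()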
I need to run a classifier on multiple cores. I am using scikit-learn and Python 2.7.
The GridSearchCV module from scikit-learn has a parameter called n_jobs that will allow you run the Grid search on multiple cores. When I set this parameter to 10, I get the memory allocation error shown below. Any thoughts? My machine has 32 cores.
Traceback (most recent call last):
  ...
    w.start()
  File "../anaconda/lib/python2.7/multiprocessing/process.py", line 130, in start
    self._popen = Popen(self)
  File "/home/nhailu/anaconda/lib/python2.7/multiprocessing/forking.py", line 121, in __init__
    self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
I believe the most likely culprit is that you have a very large grid to search through. Consider setting pre_dispatch to a multiple of your number of jobs.
Also, the way joblib works, you need to make sure your master script is guarded by the 'if main' idiom:
if __name__ == "__main__":
    # your script here; otherwise every core will execute all of your code
From the GridSearchCV documentation:
pre_dispatch : int, or string, optional
Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be:
None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs
An int, giving the exact number of total jobs that are spawned
A string, giving an expression as a function of n_jobs, as in ‘2*n_jobs’
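Put together, that would look something like this (a sketch: clf, param_grid, X and y stand in for your estimator, grid and data, and on older scikit-learn versions GridSearchCV lives in sklearn.grid_search rather than sklearn.model_selection):

from sklearn.model_selection import GridSearchCV

if __name__ == "__main__":
    # Guarded so the worker processes don't re-execute the whole script.
    search = GridSearchCV(
        clf,                       # your estimator
        param_grid,                # your parameter grid
        n_jobs=10,
        pre_dispatch='2*n_jobs',   # cap dispatched jobs to limit memory use
    )
    search.fit(X, y)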