Performance difference between Linux and Windows when using Python Process

I’m trying to speed up some code that should run fast on both Linux and Windows. However, the same code takes 131 seconds on Fedora 25, but only 90 seconds on Windows 7 (both computers have 8 GB of RAM, with an i7 and an i5 processor, respectively). I’m using Python 3.5 on Fedora and 3.6 on Windows.
The code is the following:
from math import ceil
from multiprocessing import Process, Queue, cpu_count

import numpy as np

nprocs = cpu_count()
chunksize = ceil(nrFrames / nprocs)
queue = Queue()
jobs = []
for i in range(nprocs):
    start = chunksize * i
    if i == nprocs - 1:
        end = nrFrames
    else:
        end = chunksize * (i + 1)
    trjCoordsProcess = DAH_Coords[start:end]
    p = Process(target=is_hbond, args=(queue, trjCoordsProcess, distCutOff,
                                       angleCutOff, AList, DList, HList))
    p.start()  # launch the worker
    jobs.append(p)

HbondFreqMatrix = queue.get()
for k in range(nprocs - 1):
    HbondFreqMatrix = np.add(HbondFreqMatrix, queue.get())

for proc in jobs:
    proc.join()

def is_hbond(queue, processCoords, distCutOff, angleCutOff,
             possibleAPosList, donorsList, HCovBoundPosList):
    for frame in range(len(processCoords)):
        # do stuff
    queue.put(HbondProcessFreqMatrix)
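For comparison, here is a minimal sketch of the same fan-out written with multiprocessing.Pool, which handles the chunk bookkeeping and result collection itself. It assumes a variant of is_hbond that returns its per-chunk matrix instead of putting it on a queue; the other names are the ones from the snippet above:

from math import ceil
from multiprocessing import Pool, cpu_count

import numpy as np

def worker(coords):
    # assumed variant of is_hbond that returns HbondProcessFreqMatrix
    return is_hbond(coords, distCutOff, angleCutOff, AList, DList, HList)

if __name__ == '__main__':
    nprocs = cpu_count()
    chunksize = ceil(nrFrames / nprocs)
    chunks = [DAH_Coords[i * chunksize:(i + 1) * chunksize]
              for i in range(nprocs)]
    with Pool(nprocs) as pool:
        # sum the per-chunk frequency matrices as they come back
        HbondFreqMatrix = np.sum(pool.map(worker, chunks), axis=0)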
Starting each process is actually considerably faster on Linux than on Windows. However, each iteration inside the is_hbond function takes 2.5 times longer on Linux (0.5 s vs. 0.2 s).
The profiler gives the following information:
Windows
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.167 0.167 84.139 84.139 calculateHbonds
4 0.000 0.000 52.039 13.010 \Python36\lib\multiprocessing\queues.py:91(get)
4 0.000 0.000 51.928 12.982 \Python36\lib\multiprocessing\connection.py:208(recv_bytes)
4 0.018 0.004 51.928 12.982 \Python36\lib\multiprocessing\connection.py:294(_recv_bytes)
4 51.713 12.928 51.713 12.928 {built-in method _winapi.WaitForMultipleObjects}
4 0.000 0.000 30.811 7.703 \Python36\lib\multiprocessing\process.py:95(start)
4 0.000 0.000 30.811 7.703 \Python36\lib\multiprocessing\context.py:221(_Popen)
4 0.000 0.000 30.811 7.703 \Python36\lib\multiprocessing\context.py:319(_Popen)
4 0.000 0.000 30.809 7.702 popen_spawn_win32.py:32(__init__)
8 1.958 0.245 30.804 3.851 \Python36\lib\multiprocessing\reduction.py:58(dump)
8 28.846 3.606 28.846 3.606 {method 'dump' of '_pickle.Pickler' objects}
Linux
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.203 0.203 123.169 123.169 calculateHbonds
4 0.000 0.000 121.450 30.362 /python3.5/multiprocessing/queues.py:91(get)
4 0.000 0.000 121.300 30.325 /python3.5/multiprocessing/connection.py:208(recv_bytes)
4 0.019 0.005 121.300 30.325 /python3.5/multiprocessing/connection.py:406(_recv_bytes)
8 0.000 0.000 121.281 15.160 /python3.5/multiprocessing/connection.py:374(_recv)
8 121.088 15.136 121.088 15.136 {built-in method posix.read}
1 0.000 0.000 0.082 0.082 /python3.5/multiprocessing/context.py:98(Queue)
17/4 0.000 0.000 0.082 0.021 <frozen importlib._bootstrap>:939(_find_and_load_unlocked)
16/4 0.000 0.000 0.082 0.020 <frozen importlib._bootstrap>:659(_load_unlocked)
4 0.000 0.000 0.052 0.013 /python3.5/multiprocessing/process.py:95(start)
4 0.000 0.000 0.052 0.013 /python3.5/multiprocessing/context.py:210(_Popen)
4 0.000 0.000 0.052 0.013 /python3.5/multiprocessing/context.py:264(_Popen)
4 0.000 0.000 0.051 0.013 /python3.5/multiprocessing/popen_fork.py:16(__init__)
4 0.000 0.000 0.051 0.013 /python3.5/multiprocessing/popen_fork.py:64(_launch)
4 0.050 0.013 0.050 0.013 {built-in method posix.fork}
Is there a reason why this might be the case? I know the multiprocessing module works differently on Linux and Windows because Windows lacks os.fork, but I expected Linux to be faster.
Any ideas on how to make it faster in Linux?
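One controlled experiment, as a minimal sketch rather than part of the code above: force the 'spawn' start method on Linux (available since Python 3.4), so that both platforms create the workers the same way and the fork/spawn difference drops out of the comparison:

import multiprocessing as mp

if __name__ == '__main__':
    # 'fork' is the Linux default; Windows always uses 'spawn'
    mp.set_start_method('spawn')
    calculateHbonds()  # the entry point that appears in the profiles above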
Thank you!

Related

How to interpret python cProfile output

I am running cProfile on my Python script to see where I can improve performance, and using SnakeViz to visualize the results. They are pretty vague, however; how do I interpret them? Here are the first few lines:
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
242059 0.626 0.000 0.914 0.000 pulp.py:585(__init__)
1 0.413 0.413 0.413 0.413 {built-in method _winapi.WaitForSingleObject}
343978/302 0.293 0.000 0.557 0.002 pulp.py:750(addInPlace)
4159617 0.288 0.000 0.288 0.000 pulp.py:165(__hash__)
112 0.282 0.003 0.282 0.003 {method 'read' of '_ssl._SSLSocket' objects}
1913398 0.172 0.000 0.245 0.000 {built-in method builtins.isinstance}
1 0.171 0.171 0.185 0.185 Betfair_Run_Sheet.pyx:243(betfairFinalArray)
377866 0.168 0.000 0.293 0.000 pulp.py:637(addterm)
2255 0.161 0.000 0.161 0.000 mps_lp.py:249(<listcomp>)
1 0.148 0.148 0.570 0.570 mps_lp.py:174(writeMPS)
117214 0.139 0.000 0.444 0.000 pulp.py:820(__mul__)
2 0.136 0.068 0.196 0.098 pulp.py:1465(variablesDict)
5 0.135 0.027 0.135 0.027 {method 'do_handshake' of '_ssl._SSLSocket' objects}
427 0.111 0.000 0.129 0.000 <frozen importlib._bootstrap_external>:914(get_data)
71 0.108 0.002 0.108 0.002 {built-in method _imp.create_dynamic}
2093 0.102 0.000 0.102 0.000 {built-in method nt.stat}
I am using PuLP, so I am aware that it takes the lion's share of the time, but the specifics of the setup are not clear from the above. For example, the first line of output seems to allude to a line 585 in my script, but that is not where I call or set up the PuLP part at all.
Same with the <listcomp> nine lines down: there is no list comprehension on that line of my script.
Other things, like {method 'do_handshake' of '_ssl._SSLSocket' objects}, I don't have a clue what they mean.
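One way to tie those rows back to your own code, sketched here under the assumption that the profile was saved to a file (e.g. with cProfile.run('main()', 'profile.out')): the filename:lineno(function) column names the file the code lives in, so pulp.py:585 points into PuLP's own source, and pstats can list which of your calls lead into it:

import pstats

stats = pstats.Stats('profile.out')  # load the saved profile
stats.sort_stats('cumulative')
# show which functions call into pulp.py:585
stats.print_callers('pulp.py:585')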

Reduce number of calls for the {method 'acquire' of '_thread.lock' objects} python

Hi there, I'm struggling to make my I/O-bound app fast enough for potential users.
I'm fetching some number of URLs, say 10 for example, using multithreading with one thread per URL, but that takes too long. I ran cProfile on my code and see that the bottleneck is in
{method 'acquire' of '_thread.lock' objects}
In the cProfile result I noticed that 'acquire' is called 9 times per thread.
Can anybody please shed some light on how I can reduce the number of calls per thread?
Here is some sample code:
import requests
from concurrent.futures import ThreadPoolExecutor

url_to_get = ["https://api.myip.com", "https://api.myip.com", "https://api.myip.com",
              "https://api.myip.com", "https://api.myip.com", "https://api.myip.com",
              "https://api.myip.com", "https://api.myip.com", "https://api.myip.com",
              "https://api.myip.com"]

def fetch(url):
    # `proxy` is defined elsewhere in the real script
    with requests.get(url, proxies=proxy) as response:
        print(response.text)

def main():
    with ThreadPoolExecutor(max_workers=10) as executor:
        executor.map(fetch, url_to_get)

if __name__ == '__main__':
    import cProfile, pstats
    profiler = cProfile.Profile()
    profiler.enable()
    main()
    profiler.disable()
    stats = pstats.Stats(profiler).sort_stats('tottime')
    stats.print_stats(10)
cProfile results:
ncalls tottime percall cumtime percall filename:lineno(function)
90 3.581 0.040 3.581 0.040 {method 'acquire' of '_thread.lock' objects}
10 0.001 0.000 0.001 0.000 C:\Users\MINOUSAT\AppData\Local\Programs\Python\Python38-32\lib\threading.py:1177(_make_invoke_excepthook)
10 0.001 0.000 0.001 0.000 {built-in method _thread.start_new_thread}
10 0.000 0.000 0.028 0.003 C:\Users\MINOUSAT\AppData\Local\Programs\Python\Python38-32\lib\concurrent\futures\thread.py:193(_adjust_thread_count)
20 0.000 0.000 0.025 0.001 C:\Users\MINOUSAT\AppData\Local\Programs\Python\Python38-32\lib\threading.py:270(wait)
21 0.000 0.000 0.000 0.000 C:\Users\MINOUSAT\AppData\Local\Programs\Python\Python38-32\lib\threading.py:222(__init__)
10 0.000 0.000 0.028 0.003 C:\Users\MINOUSAT\AppData\Local\Programs\Python\Python38-32\lib\concurrent\futures\thread.py:158(submit)
32 0.000 0.000 0.000 0.000 {built-in method _thread.allocate_lock}
10 0.000 0.000 0.001 0.000 C:\Users\MINOUSAT\AppData\Local\Programs\Python\Python38-32\lib\threading.py:761(__init__)
10 0.000 0.000 0.025 0.002 C:\Users\MINOUSAT\AppData\Local\Programs\Python\Python38-32\lib\threading.py:540(wait)
Thank you so much
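A note on reading that table: with a thread pool, a profile taken in the main thread mostly measures time blocked in lock.acquire while the worker threads do the actual I/O. A sketch that profiles inside each worker instead, wrapping the fetch function above (fetch_profiled is a hypothetical helper, not part of the original code):

import cProfile
import io
import pstats

def fetch_profiled(url):
    # profile one worker call inside its own thread, instead of
    # timing the main thread's waiting
    profiler = cProfile.Profile()
    profiler.enable()
    fetch(url)
    profiler.disable()
    buffer = io.StringIO()
    pstats.Stats(profiler, stream=buffer).sort_stats('tottime').print_stats(5)
    print(buffer.getvalue())

Mapping fetch_profiled instead of fetch in main() then shows where each request actually spends its time.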

WaitForSingleObject is decreasing my program performance

I have tested the same Python code on two different computers. On the first one the code runs 9 s longer, and on the second one (a more powerful machine, with 16 GB of RAM versus the first one's 8 GB) it runs 185 s longer. Profiling with cProfile, the most critical call in both cases is WaitForSingleObject. Analyzing a specific function, I can see that the critical part is the OCR with Tesseract. Why is the performance so different on these two machines?
The main lines from cProfile for this specific function are:
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.002 0.002 115.398 115.398 bpl-Redonda4.py:261(pega_stack_nome_jogadores)
18 0.000 0.000 0.001 0.000 pytesseract.py:106(prepare)
18 0.000 0.000 0.118 0.007 pytesseract.py:116(save_image)
18 0.000 0.000 0.000 0.000 pytesseract.py:140(subprocess_args)
18 0.000 0.000 115.186 6.399 pytesseract.py:162(run_tesseract)
18 0.001 0.000 115.373 6.410 pytesseract.py:199(run_and_get_output)
12 0.000 0.000 76.954 6.413 pytesseract.py:295(image_to_string)
12 0.000 0.000 76.954 6.413 pytesseract.py:308()
6 0.000 0.000 38.419 6.403 pytesseract.py:328(image_to_boxes)
6 0.000 0.000 38.419 6.403 pytesseract.py:345()
18 0.000 0.000 0.060 0.003 pytesseract.py:97(cleanup)
18 0.000 0.000 115.096 6.394 subprocess.py:979(wait)
18 115.096 6.394 115.096 6.394 {built-in method _winapi.WaitForSingleObject}
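A note on that table (not from the original post): each pytesseract call launches a separate tesseract process, and WaitForSingleObject is simply the parent waiting for that child to exit. A sketch that overlaps several of those waits with a thread pool, assuming images is a list of PIL.Image objects:

from concurrent.futures import ThreadPoolExecutor

import pytesseract

# each image_to_string call runs its own tesseract process, so
# threads are enough to run several OCR jobs concurrently
with ThreadPoolExecutor(max_workers=4) as executor:
    texts = list(executor.map(pytesseract.image_to_string, images))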

ANTLR4 very slow, the SLL trick didn't change anything

I have a grammar that is an extension of the Python grammar, and small programs take about 2 seconds to parse on a MacBook Pro. I applied the SLL trick:
from antlr4 import CommonTokenStream, InputStream
from antlr4.atn.PredictionMode import PredictionMode
from antlr4.error.ErrorStrategy import BailErrorStrategy

# CustomLexer, PythonQLParser and the error listener/strategy
# classes are project-local modules (see the GitHub link below)

# Set up the lexer
inputStream = InputStream(s)
lexer = CustomLexer(inputStream)
stream = CommonTokenStream(lexer)

# Set up the error handling stuff
error_handler = CustomErrorStrategy()
error_listener = CustomErrorListener()
buffered_errors = BufferedErrorListener()
error_listener.addDelegatee(buffered_errors)

# Set up the fast parser
parser = PythonQLParser(stream)
parser._interp.predictionMode = PredictionMode.SLL
parser.removeErrorListeners()
# note: the Python runtime keeps its strategy in parser._errHandler,
# so this assignment may never reach the parser
parser.errHandler = BailErrorStrategy()

try:
    tree = parser.file_input()
    return (tree, parser)
But it didn't do the trick; the time didn't change significantly. Any hints on what to do?
I'm using Python 3 with antlr4-python3-runtime-4.5.3.
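For reference, the SLL trick is usually written as a two-stage parse: try SLL with a bail strategy and reparse with full LL only when SLL gives up. A sketch against the question's parser, relying on the underscore attributes the Python runtime actually reads (an assumption to verify against your runtime version):

from antlr4.atn.PredictionMode import PredictionMode
from antlr4.error.ErrorStrategy import BailErrorStrategy, DefaultErrorStrategy
from antlr4.error.Errors import ParseCancellationException

def parse_two_stage(parser):
    # stage 1: fast SLL prediction, bail out on the first error
    parser._interp.predictionMode = PredictionMode.SLL
    parser._errHandler = BailErrorStrategy()
    try:
        return parser.file_input()
    except ParseCancellationException:
        # stage 2: rewind the token stream and reparse with full LL
        parser.reset()
        parser._interp.predictionMode = PredictionMode.LL
        parser._errHandler = DefaultErrorStrategy()
        return parser.file_input()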
The grammar file is here: Grammar File
And the project github page is here: Github
I have also run a profiler; here are the significant entries from the parser:
ncalls tottime percall cumtime percall filename:lineno(function)
21 0.000 0.000 0.094 0.004 PythonQLParser.py:7483(argument)
8 0.000 0.000 0.195 0.024 PythonQLParser.py:7379(arglist)
9 0.000 0.000 0.196 0.022 PythonQLParser.py:6836(trailer)
5/3 0.000 0.000 0.132 0.044 PythonQLParser.py:6765(testlist_comp)
1 0.000 0.000 0.012 0.012 PythonQLParser.py:6154(window_end_cond)
1 0.000 0.000 0.057 0.057 PythonQLParser.py:6058(sliding_window)
1 0.000 0.000 0.057 0.057 PythonQLParser.py:5941(window_clause)
1 0.000 0.000 0.004 0.004 PythonQLParser.py:5807(for_clause_entry)
1 0.000 0.000 0.020 0.020 PythonQLParser.py:5752(for_clause)
2/1 0.000 0.000 0.068 0.068 PythonQLParser.py:5553(query_expression)
48/10 0.000 0.000 0.133 0.013 PythonQLParser.py:5370(atom)
48/7 0.000 0.000 0.315 0.045 PythonQLParser.py:5283(power)
48/7 0.000 0.000 0.315 0.045 PythonQLParser.py:5212(factor)
48/7 0.000 0.000 0.331 0.047 PythonQLParser.py:5132(term)
47/7 0.000 0.000 0.346 0.049 PythonQLParser.py:5071(arith_expr)
47/7 0.000 0.000 0.361 0.052 PythonQLParser.py:5010(shift_expr)
47/7 0.000 0.000 0.376 0.054 PythonQLParser.py:4962(and_expr)
47/7 0.000 0.000 0.390 0.056 PythonQLParser.py:4914(xor_expr)
47/7 0.000 0.000 0.405 0.058 PythonQLParser.py:4866(expr)
44/7 0.000 0.000 0.405 0.058 PythonQLParser.py:4823(star_expr)
43/7 0.000 0.000 0.422 0.060 PythonQLParser.py:4615(not_test)
43/7 0.000 0.000 0.438 0.063 PythonQLParser.py:4563(and_test)
43/7 0.000 0.000 0.453 0.065 PythonQLParser.py:4509(or_test)
43/7 0.000 0.000 0.467 0.067 PythonQLParser.py:4293(old_test)
43/7 0.000 0.000 0.467 0.067 PythonQLParser.py:4179(try_catch_expr)
43/7 0.000 0.000 0.482 0.069 PythonQLParser.py:3978(test)
1 0.000 0.000 0.048 0.048 PythonQLParser.py:2793(import_from)
1 0.000 0.000 0.048 0.048 PythonQLParser.py:2702(import_stmt)
7 0.000 0.000 1.728 0.247 PythonQLParser.py:2251(testlist_star_expr)
4 0.000 0.000 1.770 0.443 PythonQLParser.py:2161(expr_stmt)
5 0.000 0.000 1.822 0.364 PythonQLParser.py:2063(small_stmt)
5 0.000 0.000 1.855 0.371 PythonQLParser.py:1980(simple_stmt)
5 0.000 0.000 1.859 0.372 PythonQLParser.py:1930(stmt)
1 0.000 0.000 1.898 1.898 PythonQLParser.py:1085(file_input)
176 0.002 0.000 0.993 0.006 Lexer.py:127(nextToken)
420 0.000 0.000 0.535 0.001 ParserATNSimulator.py:1120(closure)
705 0.003 0.000 1.642 0.002 ParserATNSimulator.py:315(adaptivePredict)
The PythonQL program that I was parsing is this one:
# This example illustrates the window query in PythonQL
from collections import namedtuple

trade = namedtuple('Trade', ['day', 'ammount', 'stock_id'])

trades = [trade(1, 15.34, 'APPL'),
          trade(2, 13.45, 'APPL'),
          trade(3, 8.34, 'APPL'),
          trade(4, 9.87, 'APPL'),
          trade(5, 10.99, 'APPL'),
          trade(6, 76.16, 'APPL')]

# Maximum 3-day sum
res = (select win
       for sliding window win in (select t.ammount for t in trades)
       start at s when True
       only end at e when (e - s == 2))
print(res)

Pygame simple loop runs very slowly on Mac

Edit: After testing the same code on OS X and Linux, I can confirm that the following only happens on OS X. On Linux it literally runs at a thousand fps, just as I had wondered. Any explanation? I would much prefer developing on the Mac, thanks to TextMate.
Here's a simple loop that does almost nothing, and it still runs very slowly. Can anyone explain why? FPS averages a little over 30; each pass over the loop takes a little over 30 ms. Window size does not seem to affect this at all: even a tiny window size like (50, 50) gives the same fps.
I find this weird; I would expect any contemporary hardware to manage a thousand fps for such a simple loop, even when we update every pixel every time. From the profile I can see that {built-in method get} and {built-in method update} combined take around 30 ms per call. Is that really the best we can get without using dirty rects? (See the sketch after the loop below.)
import pygame

pygame.init()
clock = pygame.time.Clock()
fps = 1000

# milliseconds from last frame
new_time, old_time = None, None

done = False
while not done:
    clock.tick(fps)
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            done = True
    # show fps and milliseconds
    if new_time:
        old_time = new_time
    new_time = pygame.time.get_ticks()
    if new_time and old_time:
        pygame.display.set_caption("fps: " + str(int(clock.get_fps())) + " ms: " + str(new_time - old_time))
    pygame.display.update()
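Regarding the dirty-rect question above, the change would be small: pygame.display.update also accepts a list of rectangles and repaints only those. A sketch, where dirty_rects is a hypothetical list of the pygame.Rect areas touched this frame:

# repaint only the areas that changed, instead of the whole window
pygame.display.update(dirty_rects)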
Here's the beginning of a cProfile of the main function.
94503 function calls (92211 primitive calls) in 21.011 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.026 0.026 21.011 21.011 new_main.py:34(main)
652 14.048 0.022 14.048 0.022 {built-in method get}
652 5.864 0.009 5.864 0.009 {built-in method update}
1 0.444 0.444 0.634 0.634 {built-in method init}
651 0.278 0.000 0.278 0.000 {built-in method set_caption}
72/1 0.000 0.000 0.151 0.151 <frozen importlib._bootstrap>:2234(_find_and_load)
72/1 0.000 0.000 0.151 0.151 <frozen importlib._bootstrap>:2207(_find_and_load_unlocked)
71/1 0.000 0.000 0.151 0.151 <frozen importlib._bootstrap>:1186(_load_unlocked)
46/1 0.000 0.000 0.151 0.151 <frozen importlib._bootstrap>:1122(_exec)
46/1 0.000 0.000 0.151 0.151 <frozen importlib._bootstrap>:1465(exec_module)
74/1 0.000 0.000 0.151 0.151 <frozen importlib._bootstrap>:313(_call_with_frames_removed)
54/1 0.004 0.000 0.151 0.151 {built-in method exec}
1 0.000 0.000 0.151 0.151 macosx.py:1(<module>)
1 0.000 0.000 0.150 0.150 pkgdata.py:18(<module>)
25/3 0.000 0.000 0.122 0.041 <frozen importlib._bootstrap>:1156(_load_backward_compatible)
8/1 0.026 0.003 0.121 0.121 {method 'load_module' of 'zipimport.zipimporter' objects}
1 0.000 0.000 0.101 0.101 __init__.py:15(<module>)
1 0.000 0.000 0.079 0.079 config_reader.py:115(build_from_config)
2 0.000 0.000 0.056 0.028 common.py:43(reset_screen)
2 0.055 0.027 0.055 0.027 {built-in method set_mode}
72/71 0.001 0.000 0.045 0.001 <frozen importlib._bootstrap>:2147(_find_spec)
70/69 0.000 0.000 0.043 0.001 <frozen importlib._bootstrap>:1934(find_spec)
70/69 0.001 0.000 0.043 0.001 <frozen importlib._bootstrap>:1902(_get_spec)
92 0.041 0.000 0.041 0.000 {built-in method load_extended}
6 0.000 0.000 0.041 0.007 new_map.py:74(add_character)
6 0.000 0.000 0.041 0.007 new_character.py:32(added_to_map)
6 0.001 0.000 0.041 0.007 new_character.py:265(__init__)
1 0.000 0.000 0.038 0.038 macosx.py:14(Video_AutoInit)
1 0.038 0.038 0.038 0.038 {built-in method InstallNSApplication}
1 0.036 0.036 0.036 0.036 {built-in method quit}
65 0.001 0.000 0.036 0.001 re.py:277(_compile)
49 0.000 0.000 0.036 0.001 re.py:221(compile)
The answer to this ended up being that the Retina display under OS X is the differentiating factor. Running it on an external display on the same Mac works fine, but moving the window to the Retina display makes it sluggish, with or without an external monitor connected.
On the other hand, it runs just fine on the same Retina display under Linux. It is unclear what difference in the display managers / rendering causes this, but I doubt there is anything one could do about it.
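One more knob worth trying on newer stacks, though it postdates this question: pygame 2 added a SCALED display flag that renders at the logical resolution and leaves the HiDPI scaling to SDL. A sketch:

import pygame

pygame.init()
# pygame 2 only: draw at 800x600 and let SDL scale it to the
# Retina backing store
screen = pygame.display.set_mode((800, 600), pygame.SCALED)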
Changing the game resolution to fullscreen helped me.
Try this:
window = pygame.display.set_mode((0, 0), pygame.FULLSCREEN)
instead of:
window = pygame.display.set_mode((winx, winy))
