cProfile not showing any (real) results--all 0.000's--in Python - python-3.x

When running cProfile on anything (for example a mergeSort), I'm getting all 0.000s in the runtimes, and key lines/variables/methods are not listed or measured in the process. It only seems to measure built-in methods and internals. Please advise.
Below are my results for a mergeSort. I've tried running
python -m cProfile [mergeSort(lst)] with and without the brackets--I saw both in documentation.
The only version I can get to work is (note that cProfile.run takes the code as a string):
import cProfile
cProfile.run('mergeSort(lst)')
or the enable()/disable() method shown.
The formatting doesn't turn out well as text, so I attached an image (cProfile Results). The results:
'''
[17, 20, 26, 31, 44, 54, 55, 77, 93]
127 function calls (111 primitive calls) in 0.000 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
17/1 0.000 0.000 0.000 0.000 :1(mergeSort)
1 0.000 0.000 0.000 0.000 :36(<module>)
1 0.000 0.000 0.000 0.000 :37(<module>)
2 0.000 0.000 0.000 0.000 codeop.py:132(__call__)
2 0.000 0.000 0.000 0.000 hooks.py:142(__call__)
2 0.000 0.000 0.000 0.000 hooks.py:207(pre_run_code_hook)
2 0.000 0.000 0.000 0.000 interactiveshell.py:1104(user_global_ns)
2 0.000 0.000 0.000 0.000 interactiveshell.py:2933(run_code)
2 0.000 0.000 0.000 0.000 ipstruct.py:125(__getattr__)
2 0.000 0.000 0.000 0.000 {built-in method builtins.compile}
2 0.000 0.000 0.000 0.000 {built-in method builtins.exec}
91 0.000 0.000 0.000 0.000 {built-in method builtins.len}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
'''
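For reference, cProfile.run expects the code to profile as a string (it compiles and execs it), and the command-line form takes a script path rather than an expression. The all-0.000 columns above are simply timer granularity: 127 calls on a nine-element list finish in well under a millisecond. A minimal runnable sketch with a stand-in mergeSort (the question's own implementation isn't shown) that produces visible times by profiling a larger input:

import cProfile

def mergeSort(lst):
    # stand-in recursive merge sort; returns a new sorted list
    if len(lst) <= 1:
        return lst
    mid = len(lst) // 2
    left, right = mergeSort(lst[:mid]), mergeSort(lst[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

lst = [54, 26, 93, 17, 77, 31, 44, 55, 20]
cProfile.run('mergeSort(lst * 1000)')  # the command must be a string; the bigger input yields non-zero times
# From a shell, profile a whole script instead:
#   python -m cProfile -s cumtime myscript.py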

Related

Python3 list comprehension from two generators processes only the 1st inner loop

I'm trying to make a list comprehension from two generators
g1 = (a for a in range(3))
g2 = (b for b in range(5))
l = [(i, j) for i in g1 for j in g2]
print(l)
but instead of a list of 15 tuples, it only returns the results of the first run of the inner loop:
[(0, 0), (0, 1), (0, 2), (0, 3), (0, 4)]
However, if I put the generators into the comprehension itself, the result is as expected:
l = [(i, j)
for i in (a for a in range(3))
for j in (b for b in range(5))]
print(l)
Am I missing something, or is this just a kind of "feature"?
Here are the cProfile outputs for both versions. First version:
ncalls tottime percall cumtime percall filename:lineno(function)
4 0.000 0.000 0.000 0.000 t.py:1(<genexpr>)
1 0.000 0.000 0.000 0.000 t.py:1(<module>)
6 0.000 0.000 0.000 0.000 t.py:2(<genexpr>)
1 0.000 0.000 0.000 0.000 t.py:3(<listcomp>)
1 0.000 0.000 0.000 0.000 {built-in method builtins.exec}
1 0.000 0.000 0.000 0.000 {built-in method builtins.print}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
Second version:
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.000 0.000 t.py:1(<listcomp>)
1 0.000 0.000 0.000 0.000 t.py:1(<module>)
4 0.000 0.000 0.000 0.000 t.py:2(<genexpr>)
18 0.000 0.000 0.000 0.000 t.py:3(<genexpr>)
1 0.000 0.000 0.000 0.000 {built-in method builtins.exec}
1 0.000 0.000 0.000 0.000 {built-in method builtins.print}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
A generator can only be consumed once. After that it is empty.
In your first example, g2 is fully consumed during the first item of g1. For each further item of g1 it is already "used up" and yields nothing.
In your second example, a fresh inner generator is created for each iteration of the outer loop.
Further reading: Resetting generator object in Python
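A minimal sketch of the difference; to pair every element when the inputs are pre-built, either materialize them as sequences or use itertools.product:

import itertools

g1 = (a for a in range(3))
g2 = (b for b in range(5))
# g2 is exhausted during the first outer item, so only the i == 0 pairs appear:
print([(i, j) for i in g1 for j in g2])  # [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4)]

# Materialized sequences can be iterated repeatedly:
l1, l2 = list(range(3)), list(range(5))
print([(i, j) for i in l1 for j in l2])  # all 15 tuples

# itertools.product gives the same pairing (it stores its inputs internally):
print(list(itertools.product(range(3), range(5))))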

Reduce the number of calls to {method 'acquire' of '_thread.lock' objects} in Python

Hi there, I'm struggling to make my I/O-bound app fast enough for potential users.
I'm fetching some number of URLs (say 10, for example) using multithreading, with one thread per URL,
but it takes too long. I ran cProfile on my code and I see that the bottleneck is in
{method 'acquire' of '_thread.lock' objects}
In the cProfile result I noticed that the 'acquire' method is called 9 times per thread.
Can anybody please shed some light on how I can reduce the number of calls per thread?
Here is a sample code:
import requests
from concurrent.futures import ThreadPoolExecutor

proxy = None  # the real code presumably defines a proxies dict here

url_to_get = ["https://api.myip.com", "https://api.myip.com", "https://api.myip.com", "https://api.myip.com",
              "https://api.myip.com", "https://api.myip.com", "https://api.myip.com", "https://api.myip.com",
              "https://api.myip.com", "https://api.myip.com"]

def fetch(url):
    with requests.get(url, proxies=proxy) as response:
        print(response.text)

def main():
    with ThreadPoolExecutor(max_workers=10) as executor:
        executor.map(fetch, url_to_get)

if __name__ == '__main__':
    import cProfile, pstats
    profiler = cProfile.Profile()
    profiler.enable()
    main()
    profiler.disable()
    stats = pstats.Stats(profiler).sort_stats('tottime')
    stats.print_stats(10)
cProfile results:
ncalls tottime percall cumtime percall filename:lineno(function)
90 3.581 0.040 3.581 0.040 {method 'acquire' of '_thread.lock' objects}
10 0.001 0.000 0.001 0.000 C:\Users\MINOUSAT\AppData\Local\Programs\Python\Python38-32\lib\threading.py:1177(_make_invoke_excepthook)
10 0.001 0.000 0.001 0.000 {built-in method _thread.start_new_thread}
10 0.000 0.000 0.028 0.003 C:\Users\MINOUSAT\AppData\Local\Programs\Python\Python38-32\lib\concurrent\futures\thread.py:193(_adjust_thread_count)
20 0.000 0.000 0.025 0.001 C:\Users\MINOUSAT\AppData\Local\Programs\Python\Python38-32\lib\threading.py:270(wait)
21 0.000 0.000 0.000 0.000 C:\Users\MINOUSAT\AppData\Local\Programs\Python\Python38-32\lib\threading.py:222(__init__)
10 0.000 0.000 0.028 0.003 C:\Users\MINOUSAT\AppData\Local\Programs\Python\Python38-32\lib\concurrent\futures\thread.py:158(submit)
32 0.000 0.000 0.000 0.000 {built-in method _thread.allocate_lock}
10 0.000 0.000 0.001 0.000 C:\Users\MINOUSAT\AppData\Local\Programs\Python\Python38-32\lib\threading.py:761(__init__)
10 0.000 0.000 0.025 0.002 C:\Users\MINOUSAT\AppData\Local\Programs\Python\Python38-32\lib\threading.py:540(wait)
Thank you so much
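Worth noting: cProfile only instruments the thread it was enabled in, so the 3.581s recorded under lock acquire is mostly the main thread blocking while executor.map waits on the workers' network I/O; it isn't lock contention you can remove. A sketch of profiling inside a worker instead, to see where a thread's time really goes (the wrapper is made up for illustration, reusing the fetch above):

import cProfile, pstats

def fetch_profiled(url):
    # profile a single worker's fetch in isolation
    pr = cProfile.Profile()
    pr.enable()
    fetch(url)
    pr.disable()
    pstats.Stats(pr).sort_stats('tottime').print_stats(5)

# then submit the wrapper instead of fetch:
#   executor.map(fetch_profiled, url_to_get)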

WaitForSingleObject is decreasing my program performance

I have tested the same code in Python on two different computers. On the first one the code takes 9s longer, and on the second one (a more powerful machine, with 16M of RAM vs the first one's 8M) it takes 185s longer. Analyzing it in cProfile, the most critical process in both cases is WaitForSingleObject. Analyzing one specific function, I can see that the critical part is the OCR with tesseract. Why such different performance on these two machines?
The main lines from cProfile for this specific function are:
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.002 0.002 115.398 115.398 bpl-Redonda4.py:261(pega_stack_nome_jogadores)
18 0.000 0.000 0.001 0.000 pytesseract.py:106(prepare)
18 0.000 0.000 0.118 0.007 pytesseract.py:116(save_image)
18 0.000 0.000 0.000 0.000 pytesseract.py:140(subprocess_args)
18 0.000 0.000 115.186 6.399 pytesseract.py:162(run_tesseract)
18 0.001 0.000 115.373 6.410 pytesseract.py:199(run_and_get_output)
12 0.000 0.000 76.954 6.413 pytesseract.py:295(image_to_string)
12 0.000 0.000 76.954 6.413 pytesseract.py:308(<lambda>)
6 0.000 0.000 38.419 6.403 pytesseract.py:328(image_to_boxes)
6 0.000 0.000 38.419 6.403 pytesseract.py:345(<lambda>)
18 0.000 0.000 0.060 0.003 pytesseract.py:97(cleanup)
18 0.000 0.000 115.096 6.394 subprocess.py:979(wait)
18 115.096 6.394 115.096 6.394 {built-in method _winapi.WaitForSingleObject}
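For context, the listing itself localizes the cost: the 18 run_tesseract calls spend 115s inside subprocess.py wait, i.e. almost all the time goes to the external tesseract binary rather than to Python. A quick way to compare the two machines is to time one OCR call in isolation (the image path here is hypothetical):

import time
import pytesseract
from PIL import Image

img = Image.open('frame.png')  # hypothetical test image
t0 = time.perf_counter()
text = pytesseract.image_to_string(img)
print(f"one tesseract run: {time.perf_counter() - t0:.2f}s")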

Vectorizing a loop over a single numpy array

Hello, I have a numpy optimization problem.
Below I have written a piece of code that's quite common for my type of calculations.
The calculation always takes some time that I think should be shorter.
I think the problem is the loop. I have looked at the linalg part of numpy but I can't find a solution there. I also searched for a way to vectorize the data, but since I haven't much experience with that, I can't find any solution.
I hope somebody can help me.
import numpy as np
from scipy import signal
from scipy.fftpack import fft

fs = 44100  # sample frequency
T = 5  # max time
t = np.arange(0, T*fs)/fs  # time array
x = np.sin(2 * np.pi * 100 * t) + 0.7 * np.sin(2 * np.pi * 880 * t) + 0.2 * np.sin(2 * np.pi * 2400 * t)

# Define window length and window:
wl = 4  # window length
overlap = 0.5
W = signal.get_window('hanning', wl)  # window
Wx = np.zeros(len(x))
ul = wl

# loop added for window
if (len(x) / wl) % wl == 0:
    while ul <= len(Wx):
        Wx[ul-wl:ul] += x[ul-wl:ul] * W
        ul += wl * overlap
else:
    dsample = (len(x)/wl) % wl  # delta in samples between mod (x / window length)
    x = np.append(x, np.zeros(int(wl - dsample)))
    while ul <= len(Wx):
        Wx[ul-wl:ul] += x[ul-wl:ul] * W
        ul += wl * overlap

NFFT = np.int(2 ** np.ceil(np.log2(len(x))))
NFFW = np.int(2 ** np.ceil(np.log2(len(Wx))))

# Frequency spectrums
X = fft(x, NFFT)
WX = fft(Wx, NFFW)
Profiler:
%run -p example.py
110367 function calls (110366 primitive calls) in 19.998 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
1 19.561 19.561 19.994 19.994 example.py:6(<module>)
110258 0.233 0.000 0.233 0.000 {built-in method len}
2 0.181 0.091 0.189 0.095 basic.py:169(fft)
2 0.008 0.004 0.008 0.004 basic.py:131(_fix_shape)
2 0.008 0.004 0.008 0.004 {built-in method concatenate}
1 0.003 0.003 0.003 0.003 {built-in method compile}
2 0.002 0.001 0.002 0.001 {built-in method arange}
2 0.001 0.000 0.001 0.000 {built-in method open}
4 0.000 0.000 0.000 0.000 {built-in method zeros}
1 0.000 0.000 19.998 19.998 interactiveshell.py:2496(safe_execfile)
2/1 0.000 0.000 19.998 19.998 {built-in method exec}
1 0.000 0.000 0.000 0.000 windows.py:615(hann)
1 0.000 0.000 19.997 19.997 py3compat.py:108(execfile)
1 0.000 0.000 0.000 0.000 {method 'read' of '_io.BufferedReader' objects}
2 0.000 0.000 0.008 0.004 function_base.py:3503(append)
1 0.000 0.000 0.000 0.000 posixpath.py:318(normpath)
1 0.000 0.000 0.000 0.000 windows.py:1380(get_window)
1 0.000 0.000 0.000 0.000 posixpath.py:145(dirname)
4 0.000 0.000 0.000 0.000 {built-in method array}
2 0.000 0.000 0.000 0.000 {built-in method round}
1 0.000 0.000 0.000 0.000 {built-in method getcwd}
2 0.000 0.000 0.000 0.000 <frozen importlib._bootstrap>:2264(_handle_fromlist)
2 0.000 0.000 0.000 0.000 basic.py:116(_asfarray)
4 0.000 0.000 0.000 0.000 basic.py:24(istype)
2 0.000 0.000 0.000 0.000 fromnumeric.py:1281(ravel)
8 0.000 0.000 0.000 0.000 {built-in method isinstance}
1 0.000 0.000 0.000 0.000 posixpath.py:70(join)
2 0.000 0.000 0.000 0.000 numeric.py:462(asanyarray)
1 0.000 0.000 0.000 0.000 posixpath.py:355(abspath)
8 0.000 0.000 0.000 0.000 {built-in method hasattr}
1 0.000 0.000 19.998 19.998 <string>:1(<module>)
1 0.000 0.000 0.000 0.000 syspathcontext.py:64(__exit__)
1 0.000 0.000 0.000 0.000 posixpath.py:221(expanduser)
1 0.000 0.000 0.000 0.000 _bootlocale.py:23(getpreferredencoding)
1 0.000 0.000 0.000 0.000 syspathcontext.py:57(__enter__)
1 0.000 0.000 0.000 0.000 syspathcontext.py:54(__init__)
4 0.000 0.000 0.000 0.000 {built-in method issubclass}
3 0.000 0.000 0.000 0.000 posixpath.py:38(_get_sep)
2 0.000 0.000 0.000 0.000 {method 'ravel' of 'numpy.ndarray' objects}
2 0.000 0.000 0.000 0.000 numeric.py:392(asarray)
7 0.000 0.000 0.000 0.000 {method 'append' of 'list' objects}
1 0.000 0.000 0.000 0.000 {built-in method nl_langinfo}
5 0.000 0.000 0.000 0.000 {method 'startswith' of 'str' objects}
1 0.000 0.000 0.000 0.000 codecs.py:306(__init__)
1 0.000 0.000 0.000 0.000 posixpath.py:60(isabs)
1 0.000 0.000 0.000 0.000 {method 'split' of 'str' objects}
1 0.000 0.000 0.000 0.000 codecs.py:257(__init__)
2 0.000 0.000 0.000 0.000 {method 'setdefault' of 'dict' objects}
1 0.000 0.000 0.000 0.000 {method 'rfind' of 'str' objects}
1 0.000 0.000 0.000 0.000 {method 'remove' of 'list' objects}
1 0.000 0.000 0.000 0.000 {method 'join' of 'str' objects}
1 0.000 0.000 0.000 0.000 {method 'rstrip' of 'str' objects}
1 0.000 0.000 0.000 0.000 {method 'endswith' of 'str' objects}
1 0.000 0.000 0.000 0.000 {method 'insert' of 'list' objects}
1 0.000 0.000 0.000 0.000 {built-in method getdefaultencoding}
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
1 0.000 0.000 0.000 0.000 py3compat.py:13(no_code)
Precalculating static values shortens my loop from ~4s to 0.7s execution time:
nEntries = len(Wx)
step = int(wl * overlap)
while ul <= nEntries:
    Wx[ul-wl:ul] += x[ul-wl:ul] * W
    ul += step
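Going a step further, the loop can be removed entirely: build a 2-D array of window indices and accumulate with np.add.at, which (unlike a plain fancy-indexed +=) adds correctly even though the windows overlap. A sketch assuming the padded x and the variables above:

import numpy as np

step = int(wl * overlap)
starts = np.arange(0, len(Wx) - wl + 1, step)   # start index of every window
idx = starts[:, None] + np.arange(wl)[None, :]  # shape: (n_windows, wl)
Wx_vec = np.zeros_like(Wx)
np.add.at(Wx_vec, idx, x[idx] * W)              # unbuffered +=, safe with repeated indices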

ANTLR4 very slow, the SLL trick didn't change anything

I have a grammar that is an extension of the Python grammar, and small programs take about 2 seconds to parse on a MacBook Pro. I have applied the SLL trick:
# Set up the lexer
inputStream = InputStream(s)
lexer = CustomLexer(inputStream)
stream = CommonTokenStream(lexer)

# Set up the error handling stuff
error_handler = CustomErrorStrategy()
error_listener = CustomErrorListener()
buffered_errors = BufferedErrorListener()
error_listener.addDelegatee(buffered_errors)

# Set up the fast parser
parser = PythonQLParser(stream)
parser._interp.predictionMode = PredictionMode.SLL
parser.removeErrorListeners()
parser.errHandler = BailErrorStrategy()
try:
    tree = parser.file_input()
    return (tree, parser)
except ParseCancellationException:
    # standard two-stage fallback (cut off in the original snippet): retry with full LL
    stream.seek(0)
    parser.reset()
    parser.errHandler = DefaultErrorStrategy()
    parser._interp.predictionMode = PredictionMode.LL
    tree = parser.file_input()
    return (tree, parser)
But it didn't do the trick, the time didn't change significantly. Any hints on what to do?
I'm using Python3 with antlr4-python3-runtime-4.5.3
The grammar file is here: Grammar File
And the project github page is here: Github
I have also run a profiler; here are the significant entries from the parser:
ncalls tottime percall cumtime percall filename:lineno(function)
21 0.000 0.000 0.094 0.004 PythonQLParser.py:7483(argument)
8 0.000 0.000 0.195 0.024 PythonQLParser.py:7379(arglist)
9 0.000 0.000 0.196 0.022 PythonQLParser.py:6836(trailer)
5/3 0.000 0.000 0.132 0.044 PythonQLParser.py:6765(testlist_comp)
1 0.000 0.000 0.012 0.012 PythonQLParser.py:6154(window_end_cond)
1 0.000 0.000 0.057 0.057 PythonQLParser.py:6058(sliding_window)
1 0.000 0.000 0.057 0.057 PythonQLParser.py:5941(window_clause)
1 0.000 0.000 0.004 0.004 PythonQLParser.py:5807(for_clause_entry)
1 0.000 0.000 0.020 0.020 PythonQLParser.py:5752(for_clause)
2/1 0.000 0.000 0.068 0.068 PythonQLParser.py:5553(query_expression)
48/10 0.000 0.000 0.133 0.013 PythonQLParser.py:5370(atom)
48/7 0.000 0.000 0.315 0.045 PythonQLParser.py:5283(power)
48/7 0.000 0.000 0.315 0.045 PythonQLParser.py:5212(factor)
48/7 0.000 0.000 0.331 0.047 PythonQLParser.py:5132(term)
47/7 0.000 0.000 0.346 0.049 PythonQLParser.py:5071(arith_expr)
47/7 0.000 0.000 0.361 0.052 PythonQLParser.py:5010(shift_expr)
47/7 0.000 0.000 0.376 0.054 PythonQLParser.py:4962(and_expr)
47/7 0.000 0.000 0.390 0.056 PythonQLParser.py:4914(xor_expr)
47/7 0.000 0.000 0.405 0.058 PythonQLParser.py:4866(expr)
44/7 0.000 0.000 0.405 0.058 PythonQLParser.py:4823(star_expr)
43/7 0.000 0.000 0.422 0.060 PythonQLParser.py:4615(not_test)
43/7 0.000 0.000 0.438 0.063 PythonQLParser.py:4563(and_test)
43/7 0.000 0.000 0.453 0.065 PythonQLParser.py:4509(or_test)
43/7 0.000 0.000 0.467 0.067 PythonQLParser.py:4293(old_test)
43/7 0.000 0.000 0.467 0.067 PythonQLParser.py:4179(try_catch_expr)
43/7 0.000 0.000 0.482 0.069 PythonQLParser.py:3978(test)
1 0.000 0.000 0.048 0.048 PythonQLParser.py:2793(import_from)
1 0.000 0.000 0.048 0.048 PythonQLParser.py:2702(import_stmt)
7 0.000 0.000 1.728 0.247 PythonQLParser.py:2251(testlist_star_expr)
4 0.000 0.000 1.770 0.443 PythonQLParser.py:2161(expr_stmt)
5 0.000 0.000 1.822 0.364 PythonQLParser.py:2063(small_stmt)
5 0.000 0.000 1.855 0.371 PythonQLParser.py:1980(simple_stmt)
5 0.000 0.000 1.859 0.372 PythonQLParser.py:1930(stmt)
1 0.000 0.000 1.898 1.898 PythonQLParser.py:1085(file_input)
176 0.002 0.000 0.993 0.006 Lexer.py:127(nextToken)
420 0.000 0.000 0.535 0.001 ParserATNSimulator.py:1120(closure)
705 0.003 0.000 1.642 0.002 ParserATNSimulator.py:315(adaptivePredict)
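One thing this listing already shows: Lexer.py nextToken has about 1.0s cumulative time of the 1.9s total (some of it likely nested inside adaptivePredict's 1.64s, since the token stream lexes lazily), so a large share of the time may be lexing rather than parse prediction. A sketch to check that split by timing the two stages separately, using the objects from the snippet above; CommonTokenStream.fill() forces the lexer to run to EOF:

import time

t0 = time.perf_counter()
stream.fill()  # lex the whole input up front
t1 = time.perf_counter()
tree = parser.file_input()
t2 = time.perf_counter()
print(f"lexing: {t1 - t0:.3f}s, parsing: {t2 - t1:.3f}s")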
The PythonQL program that I was parsing is this one:
# This example illustrates the window query in PythonQL
from collections import namedtuple

trade = namedtuple('Trade', ['day', 'ammount', 'stock_id'])

trades = [ trade(1, 15.34, 'APPL'),
           trade(2, 13.45, 'APPL'),
           trade(3, 8.34,  'APPL'),
           trade(4, 9.87,  'APPL'),
           trade(5, 10.99, 'APPL'),
           trade(6, 76.16, 'APPL') ]

# Maximum 3-day sum
res = (select win
       for sliding window win in ( select t.ammount for t in trades )
       start at s when True
       only end at e when (e-s == 2))

print (res)
