Counting number of operations in Python - python-3.x

What am I doing wrong in the following code?
import dis
import numpy as np

def count_operations(f):
    operations = 0
    for op in dis.get_instructions(f):
        if op.opname in ('ADD', 'SUB', 'MULT', 'DIV', 'MOD'):
            operations += 1
    return operations

def solve_system(A, b):
    x = np.linalg.solve(A, b)
    return x

A = np.array([[2, 3],
              [3, 4]])
b = np.array([8, 11])

operations = count_operations(solve_system)
print(f'Number of operations: {operations}')
I wrote two functions, one for counting operations and one for solving a system.

NumPy does a lot of the heavy lifting in (compiled) library routines written in C or Fortran. You won't see those in the dis output:
In [1]: import numpy as np

In [2]: import dis

In [3]: def solve_system(A, b):
   ...:     x = np.linalg.solve(A, b)
   ...:     return x
   ...:

In [4]: dis.dis(solve_system)
  2           0 LOAD_GLOBAL              0 (np)
              2 LOAD_ATTR                1 (linalg)
              4 LOAD_METHOD              2 (solve)
              6 LOAD_FAST                0 (A)
              8 LOAD_FAST                1 (b)
             10 CALL_METHOD              2
             12 STORE_FAST               2 (x)

  3          14 LOAD_FAST                2 (x)
             16 RETURN_VALUE

In [5]: dis.dis(np.linalg.solve)
179           0 LOAD_DEREF               0 (dispatcher)
              2 LOAD_FAST                0 (args)
              4 BUILD_MAP                0
              6 LOAD_FAST                1 (kwargs)
              8 DICT_MERGE               1
             10 CALL_FUNCTION_EX         1
             12 STORE_FAST               2 (relevant_args)

180          14 LOAD_GLOBAL              0 (implement_array_function)

181          16 LOAD_DEREF               1 (implementation)
             18 LOAD_DEREF               2 (public_api)
             20 LOAD_FAST                2 (relevant_args)
             22 LOAD_FAST                0 (args)
             24 LOAD_FAST                1 (kwargs)

180          26 CALL_FUNCTION            5
             28 RETURN_VALUE
From the numpy.linalg.solve documentation:
The solutions are computed using LAPACK routine _gesv.
Those routines (sgesv and dgesv for single and double precision) are written in Fortran.
See e.g. the documentation for dgesv.
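The contrast is easy to see with a pure-Python version of the same computation. The Cramer's-rule helper below is purely illustrative and not part of the original question:

import dis

def solve_2x2(A, b):
    # Hand-written 2x2 solve: the arithmetic stays in Python bytecode,
    # so dis can actually see it.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x0 = (b[0] * A[1][1] - b[1] * A[0][1]) / det
    x1 = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    return x0, x1

dis.dis(solve_2x2)   # shows BINARY_MULTIPLY / BINARY_SUBTRACT / BINARY_TRUE_DIVIDE
                     # (folded into a single BINARY_OP opcode on Python 3.11+)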

This is actually an interesting question. You are writing a wrapper function to count how many of a function's bytecode instructions have names in a given tuple.
On inspection, the if statement never matches, and you can verify this by including an else.
So I made this modification to verify:
import dis
import numpy as np

def count_operations(f):
    operations = 0
    not_operations = 0  # <--- added this
    for op in dis.get_instructions(f):
        if op.opname in ('ADD', 'SUB', 'MULT', 'DIV', 'MOD'):
            operations += 1
        else:
            not_operations += 1
    return operations, not_operations

def solve_system(A, b):
    x = np.linalg.solve(A, b)
    return x

A = np.array([[2, 3],
              [3, 4]])
b = np.array([8, 11])

operations = count_operations(solve_system)
print(f'Number of operations: {operations}')
The above code now returns operations as a tuple (of ops and not-ops).
When I run the modified code I get this:
Number of operations: (0, 9)
You can also add a print statement to inspect what you get in each iteration. The first instruction in the list, for example, is:
Instruction(opname='LOAD_GLOBAL', opcode=116, arg=0, argval='np', argrepr='np', offset=0, starts_line=18, is_jump_target=False)
which is a <class 'dis.Instruction'>.
So you could inspect that...
Finally, I can show that the code does in fact work by adding LOAD_FAST to the tuple, which then returns (3, 6), since LOAD_FAST occurs 3 times...
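For reference, here is a sketch of what a working check could look like with the real CPython instruction names (these names are valid up to Python 3.10; from 3.11 on, arithmetic is emitted as a single BINARY_OP instruction):

import dis

ARITH_OPS = ('BINARY_ADD', 'BINARY_SUBTRACT', 'BINARY_MULTIPLY',
             'BINARY_TRUE_DIVIDE', 'BINARY_MODULO')

def count_operations(f):
    # Count instructions whose opname is one of the arithmetic opcodes above.
    return sum(op.opname in ARITH_OPS for op in dis.get_instructions(f))

def pure_python_math(a, b):
    return a * b + a - b    # arithmetic done in bytecode, unlike np.linalg.solve

print(count_operations(pure_python_math))   # 3 on CPython 3.10 and earlier

It will still report 0 for solve_system, of course, because the actual floating-point work happens inside LAPACK, as explained above.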

Related

How many ways to get the change?

While practicing the following dynamic programming question on HackerRank, I got a 'timeout' error. The code runs successfully on some test examples but gets 'timeout' errors on others. I'm wondering how I can further improve the code.
The question is:
Given an amount and the denominations of coins available, determine how many ways change can be made for amount. There is a limitless supply of each coin type.
Example:
n = 3
c = [8, 3, 1, 2]
There are 3 ways to make change for n=3 : {1, 1, 1}, {1, 2}, and {3}.
My current code is
import math
import os
import random
import re
import sys
from functools import lru_cache

#
# Complete the 'getWays' function below.
#
# The function is expected to return a LONG_INTEGER.
# The function accepts following parameters:
#  1. INTEGER n
#  2. LONG_INTEGER_ARRAY c
#

def getWays(n, c):
    # Write your code here
    #c = sorted(c)
    #lru_cache
    def get_ways_recursive(n, cur_idx):
        cur_denom = c[cur_idx]
        n_ways = 0
        if n == 0:
            return 1
        if cur_idx == 0:
            return 1 if n % cur_denom == 0 else 0
        for k in range(n // cur_denom + 1):
            n_ways += get_ways_recursive(n - k * cur_denom,
                                         cur_idx - 1)
        return n_ways

    return get_ways_recursive(n, len(c) - 1)

if __name__ == '__main__':
    fptr = open(os.environ['OUTPUT_PATH'], 'w')

    first_multiple_input = input().rstrip().split()
    n = int(first_multiple_input[0])
    m = int(first_multiple_input[1])

    c = list(map(int, input().rstrip().split()))

    # Print the number of ways of making change for 'n' units using coins having the values given by 'c'
    ways = getWays(n, c)

    fptr.write(str(ways) + '\n')
    fptr.close()
It timed out on the following test example
166 23 # 23 is the number of coins below.
5 37 8 39 33 17 22 32 13 7 10 35 40 2 43 49 46 19 41 1 12 11 28
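One common way around the timeout, sketched here purely as an illustration (the helper name ways and the use of functools.lru_cache are my own choices, not part of the question's code), is to memoize on the pair (amount, index) so that each subproblem is computed only once:

from functools import lru_cache

def getWays(n, c):
    @lru_cache(maxsize=None)
    def ways(amount, idx):
        # Number of ways to make `amount` using denominations c[0..idx].
        if amount == 0:
            return 1
        if amount < 0 or idx < 0:
            return 0
        # Either skip denomination c[idx], or use it (at least) once more.
        return ways(amount, idx - 1) + ways(amount - c[idx], idx)
    return ways(n, len(c) - 1)

coins = [5, 37, 8, 39, 33, 17, 22, 32, 13, 7, 10, 35, 40, 2, 43,
         49, 46, 19, 41, 1, 12, 11, 28]
print(getWays(166, coins))   # the test case that previously timed out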

How to get sum of probabilities of rolling 2 dice where one is fair and the other one is unfair?

I am writing a little program and wanted to ask how I can add the logic of having an unfair die in the game. Right now, my code produces the sum of probabilities of rolling 2 dice with 6 faces i times. However, it treats both dice as having a 1/6 probability of rolling a given number. How do I tweak it so that the unfair die ONLY shows numbers in the range 2-5, but never 1 or 6? The output should be the sum of probabilities for all numbers in the range 2-12, given the fair and the unfair die.
import random
from collections import defaultdict

def main():
    dice = 2
    sides = 6
    rolls = int(input("Enter the number of rolls to simulate: "))
    result = roll(dice, sides, rolls)
    maxH = 0
    for i in range(dice, dice * sides + 1):
        if result[i] / rolls > maxH: maxH = result[i] / rolls
    for i in range(dice, dice * sides + 1):
        print('{:2d}{:10d}{:8.2%} {}'.format(i, result[i], result[i] / rolls, '#' * int(result[i] / rolls / maxH * 40)))

def roll(dice, sides, rolls):
    d = defaultdict(int)
    for _ in range(rolls):
        d[sum(random.randint(1, sides) for _ in range(dice))] += 1
    return d

main()
Output
Enter the number of rolls to simulate: 10000
2 265 2.65% ######
3 567 5.67% #############
4 846 8.46% ####################
5 1166 11.66% ############################
6 1346 13.46% ################################
7 1635 16.35% ########################################
8 1397 13.97% ##################################
9 1130 11.30% ###########################
10 849 8.49% ####################
11 520 5.20% ############
12 279 2.79% ######
Given that the logic of which results are possible is currently being controlled by the line
random.randint(1, sides)
that's the line to change if you want to roll with different bounds. For example, to get 2-5, you could generalize the function:
def main():
    dice = 2
    sides = 6
    unfair_min = 2
    unfair_max = 5
    rolls = int(input("Enter the number of rolls to simulate: "))
    result_unfair = roll_biased(dice, sides, rolls, min_roll=unfair_min, max_roll=unfair_max)
    maxH = max(result_unfair.values()) / rolls
    for i in range(dice, dice * sides + 1):
        print('{:2d}{:10d}{:8.2%} {}'.format(i, result_unfair[i], result_unfair[i] / rolls,
                                             '#' * int(result_unfair[i] / rolls / maxH * 40)))

def roll_biased(dice, sides, rolls, min_roll=1, max_roll=None):
    if max_roll is None:
        max_roll = sides
    d = defaultdict(int)
    for _ in range(rolls):
        d[sum(random.randint(min_roll, max_roll) for _ in range(dice))] += 1
    return d
Which could print:
Enter the number of rolls to simulate: 10000
2 0 0.00%
3 0 0.00%
4 632 6.32% ##########
5 1231 12.31% ###################
6 1851 18.51% #############################
7 2480 24.80% ########################################
8 1873 18.73% ##############################
9 1296 12.96% ####################
10 637 6.37% ##########
11 0 0.00%
12 0 0.00%
You could also generalize this to arbitrary choices (or arbitrary weights) using random.choices() as such:
def roll_from_choices(dice, sides, rolls, allowed_rolls=None):
    if allowed_rolls is None:
        allowed_rolls = list(range(1, sides + 1))
    d = defaultdict(int)
    for _ in range(rolls):
        d[sum(random.choices(allowed_rolls, k=dice))] += 1
    return d
which you can call as:
result_unfair = roll_from_choices(dice, sides, rolls, allowed_rolls=[2, 3, 4, 5])
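random.choices() also takes a weights argument, so the same idea extends to a die where every face is possible but not equally likely. The function and weights below are only an illustrative sketch (and, like roll_from_choices above, it applies the same bias to every die):

import random
from collections import defaultdict

def roll_weighted(dice, sides, rolls, weights=None):
    # weights=None means a fair die; otherwise pass one relative weight per face.
    faces = list(range(1, sides + 1))
    d = defaultdict(int)
    for _ in range(rolls):
        d[sum(random.choices(faces, weights=weights, k=dice))] += 1
    return d

# e.g. dice that never show 1 or 6 and favour 3 and 4 twice as much:
result_unfair = roll_weighted(2, 6, 10000, weights=[0, 1, 2, 2, 1, 0])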
I would start with a single function that returns the result of a die(a) roll, where the options available can be tailored to exclude the impossible, something like:
import random

def throw_die(options):
    return random.choice(options)
Then I would code for the generalised case where you can have any number of dice, each of varying abilities (to be passed as the options when throwing the die). In your particular case, that would be two dice with the second excluding 1 and 6(b):
dice = [
    [1, 2, 3, 4, 5, 6],
    [   2, 3, 4, 5   ]
]
Then allocate enough storage for the results (I've wasted a small amount of space here to ensure code for collecting data is much simpler):
min_res = sum([min(die) for die in dice])  # Smallest possible result,
max_res = sum([max(die) for die in dice])  # and largest.
count = [0] * (max_res + 1)                # Allocate space + extra.
Your data collection is then relatively simple (I've hard-coded the roll count here rather than use input, but you can put that back in):
rolls = 10000  # rolls = int(input("How many rolls? "))
for _ in range(rolls):
    # Throw each die, sum the results, then increment correct count.
    result = sum([throw_die(die) for die in dice])
    count[result] += 1
And the data output can be done as follows (rounding rather than truncating so that the highest count gets forty hashes - that's just my CDO(c) nature kicking in):
hash_mult = 40 / max(count)
for i in range(min_res, max_res + 1):
    per_unit = count[i] / rolls
    hashes = "#" * int(count[i] * hash_mult + 0.5)
    print(f"{i:2d}{count[i]:10d}{per_unit:8.2%} {hashes}")
The complete program then becomes:
import random

# Throw a single die.
def throw_die(options):
    return random.choice(options)

# Define all dice here as list of lists.
# No zero/negative number allowed, will
# probably break code :-)
dice = [
    [1, 2, 3, 4, 5, 6],
    [   2, 3, 4, 5   ]
]

# Get smallest/largest possible result.
min_res = sum([min(die) for die in dice])
max_res = sum([max(die) for die in dice])

# Some elements wasted (always zero) to simplify later code.
# Example: throwing three normal dice cannot give 0, 1, or 2.
count = [0] * (max_res + 1)

# Do the rolls and collect results.
rolls = 10000
for _ in range(rolls):
    result = sum([throw_die(die) for die in dice])
    count[result] += 1

# Output all possible results.
hash_mult = 40 / max(count)
for i in range(min_res, max_res + 1):
    per_unit = count[i] / rolls
    hashes = "#" * int(count[i] * hash_mult + 0.5)
    print(f"{i:2d}{count[i]:10d}{per_unit:8.2%} {hashes}")
and a few sample runs to see it in action:
pax:/mnt/c/Users/Pax/Documents/wsl> python prog.py
3 418 4.18% #########
4 851 8.51% ####################
5 1266 12.66% ##############################
6 1681 16.81% ########################################
7 1606 16.06% ######################################
8 1669 16.69% #######################################
9 1228 12.28% #############################
10 867 8.67% ####################
11 414 4.14% #########
pax:/mnt/c/Users/Pax/Documents/wsl> python prog.py
3 450 4.50% ##########
4 825 8.25% ###################
5 1206 12.06% ############################
6 1655 16.55% #######################################
7 1679 16.79% ########################################
8 1657 16.57% #######################################
9 1304 13.04% ###############################
10 826 8.26% ###################
11 398 3.98% #########
pax:/mnt/c/Users/Pax/Documents/wsl> python prog.py
3 394 3.94% #########
4 838 8.38% ####################
5 1271 12.71% ##############################
6 1617 16.17% ######################################
7 1656 16.56% #######################################
8 1669 16.69% ########################################
9 1255 12.55% ##############################
10 835 8.35% ####################
11 465 4.65% ###########
Footnotes:
(a) Using the correct nomenclature for singular die and multiple dice, in case any non-English speakers are confused.
(b) You could also handle cases like [1, 2, 3, 4, 4, 5, 6] where you're twice as likely to get a 4 as any other number. Anything much more complex than that would probably be better handled with a tuple representing each possible result and its relative likelihood. That's probably a little too complex to put in a footnote (given it's not a requirement of the question), but you can always ask about it in a separate question if you're interested.
(c) Just like OCD but in the Correct Damn Order :-)

Recursive memoization solution to solve "counting change"

I am trying to solve the "Counting Change" problem with memoization.
Consider the following problem: How many different ways can we make change of $1.00, given half-dollars, quarters, dimes, nickels, and pennies? More generally, can we write a function to compute the number of ways to change any given amount of money using any set of currency denominations?
And the intuitive solution with recursion:
The number of ways to change an amount a using n kinds of coins equals
the number of ways to change a using all but the first kind of coin, plus
the number of ways to change the smaller amount a - d using all n kinds of coins, where d is the denomination of the first kind of coin.
#+BEGIN_SRC python :results output
# cache = {} # add cache
def count_change(a, kinds=(50, 25, 10, 5, 1)):
    """Return the number of ways to change amount a using coin kinds."""
    if a == 0:
        return 1
    if a < 0 or len(kinds) == 0:
        return 0
    d = kinds[0]  # d for denomination
    return count_change(a, kinds[1:]) + count_change(a - d, kinds)

print(count_change(100))
#+END_SRC
#+RESULTS:
: 292
I tried to take advantage of memoization:
Signature: count_change(a, kinds=(50, 25, 10, 5, 1))
Source:
def count_change(a, kinds=(50, 25, 10, 5, 1)):
    """Return the number of ways to change amount a using coin kinds."""
    if a == 0:
        return 1
    if a < 0 or len(kinds) == 0:
        return 0
    d = kinds[0]
    cache[a] = count_change(a, kinds[1:]) + count_change(a - d, kinds)
    return cache[a]
It works properly for small numbers like
In [17]: count_change(120)
Out[17]: 494
but it fails on big numbers:
In [18]: count_change(11000)
---------------------------------------------------------------------------
RecursionError Traceback (most recent call last)
<ipython-input-18-52ba30c71509> in <module>
----> 1 count_change(11000)
/tmp/ipython_edit_h0rppahk/ipython_edit_uxh2u429.py in count_change(a, kinds)
9 return 0
10 d = kinds[0]
---> 11 cache[a] = count_change(a, kinds[1:]) + count_change(a - d, kinds)
12 return cache[a]
... last 1 frames repeated, from the frame below ...
/tmp/ipython_edit_h0rppahk/ipython_edit_uxh2u429.py in count_change(a, kinds)
9 return 0
10 d = kinds[0]
---> 11 cache[a] = count_change(a, kinds[1:]) + count_change(a - d, kinds)
12 return cache[a]
RecursionError: maximum recursion depth exceeded in comparison
What's the problem with the memoization solution?
In the memoized version, the count_change function has to take into account the highest index of coin you can use when you make the recursive call, so that you can use the already calculated values ...
def count_change(n, k, kinds):
    if n < 0:
        return 0
    if (n, k) in cache:
        return cache[n, k]
    if k == 0:
        v = 1
    else:
        v = count_change(n - kinds[k], k, kinds) + count_change(n, k - 1, kinds)
    cache[n, k] = v
    return v
You can try:
cache = {}
count_change(120, 4, [1, 5, 10, 25, 50])
which gives 494, while:
cache = {}
count_change(11000, 4, [1, 5, 10, 25, 50])
outputs: 9930221951
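For comparison, here is a sketch of the same (amount, index) memoization with functools.lru_cache doing the bookkeeping instead of a hand-rolled cache dict; the helper name ways is my own, and the expected outputs are the ones quoted above:

from functools import lru_cache

def count_change(a, kinds=(1, 5, 10, 25, 50)):
    @lru_cache(maxsize=None)
    def ways(n, k):
        if n < 0:
            return 0
        if k == 0:              # only the smallest coin (1) remains
            return 1
        return ways(n - kinds[k], k) + ways(n, k - 1)
    return ways(a, len(kinds) - 1)

print(count_change(120))    # 494
print(count_change(11000))  # 9930221951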

Iterate over list in random sequential multiples of n Python

I have the following example code:
from random import randint

l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

def batch_gen(data, batch_size):
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

batch_size = randint(1, 4)
n = 0
print("batch_size is {}".format(batch_size))
for i in batch_gen(l, batch_size):
    for x in i:
        print(x)
    print("loop {}".format(n))
    n += 1
    batch_size = randint(1, 4)
Which gives me the output of:
batch_size is 3
1
2
3
loop 0
4
5
6
loop 1
7
8
9
loop 2
10
loop 3
This is the output I'm looking for; however, batch_size is always the single value set by batch_size = randint(1, 4) outside of the for loop.
I'm looking to have a random batch_size for each iteration of the loop, with each loop picking up where the last one left off, until a list of arbitrary length is exhausted.
Any help would be greatly appreciated!
Example code taken from Iterate over a python sequence in multiples of n?
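A minimal sketch of one possible approach, assuming the goal is that each yielded chunk draws its own random size and continues where the previous chunk stopped (the generator name random_batch_gen is made up for illustration):

from random import randint

def random_batch_gen(data, low=1, high=4):
    i = 0
    while i < len(data):
        size = randint(low, high)      # a fresh random size for every chunk
        yield data[i:i + size]
        i += size

l = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
for n, batch in enumerate(random_batch_gen(l)):
    for x in batch:
        print(x)
    print("loop {} (size {})".format(n, len(batch)))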

Keras aggregated objective function

How do I add an aggregated error to a Keras model?
Given the table:
   g  x  y
0  1  1  1
1  1  2  2
2  1  3  3
3  2  1  2
4  2  2  1
I would like to be able to minimize sum((y - y_pred) ** 2) error along with
sum((sum(y) - sum(y_pred)) ** 2) per group.
I'm fine with having bigger individual sample errors, but it is crucial for me to get the totals right.
SciPy example:
import pandas as pd
from scipy.optimize import differential_evolution

df = pd.DataFrame({'g': [1, 1, 1, 2, 2], 'x': [1, 2, 3, 1, 2], 'y': [1, 2, 3, 2, 1]})
g = df.groupby('g')

def linear(pars, fit=False):
    a, b = pars
    df['y_pred'] = a + b * df['x']
    if fit:
        sample_errors = sum((df['y'] - df['y_pred']) ** 2)
        group_errors = sum((g['y'].sum() - g['y_pred'].sum()) ** 2)
        total_error = sum(df['y'] - df['y_pred']) ** 2
        return sample_errors + group_errors + total_error
    else:
        return df['y_pred']

pars = differential_evolution(linear, [[0, 10]] * 2, args=[('fit', True)])['x']
print('SAMPLES:\n', df, '\nGROUPS:\n', g.sum(), '\nTOTALS:\n', df.sum())
Output:
SAMPLES:
    g  x  y  y_pred
0   1  1  1   1.232
1   1  2  2   1.947
2   1  3  3   2.662
3   2  1  2   1.232
4   2  2  1   1.947
GROUPS:
    x  y  y_pred
g
1   6  6   5.841
2   3  3   3.179
TOTALS:
g          7.000
x          9.000
y          9.000
y_pred     9.020
For grouping, as long as you keep the same groups throughout training, your loss function will not have problems with being non-differentiable.
As a naive form of grouping, you can simply separate the batches.
I suggest a generator for that.
# suppose you have these three numpy arrays:
gTrain
xTrain
yTrain

# create this generator
def grouper(g, x, y):
    while True:
        for gr in range(1, g.max() + 1):
            indices = g == gr
            yield (x[indices], y[indices])
For the loss function, you can make your own:
import keras.backend as K

def customLoss(yTrue, yPred):
    return K.sum(K.square(yTrue - yPred)) + K.sum(K.sum(yTrue) - K.sum(yPred))

model.compile(loss=customLoss, ....)
Just be careful with the second term if you have negative values.
Now you train using the method fit_generator:
model.fit_generator(grouper(gTrain,xTrain, yTrain), steps_per_epoch=gTrain.max(), epochs=...)
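Putting the pieces together, here is a minimal, illustrative sketch rather than a definitive implementation: the tiny linear model, the optimizer, the epoch count, and squaring the group-sum term (to sidestep the sign issue noted above) are all my own assumptions, and fit_generator is the older Keras 2 API used in this answer:

import numpy as np
import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense

# Data from the question, reshaped to (samples, 1) so the Dense layer accepts it.
gTrain = np.array([1, 1, 1, 2, 2])
xTrain = np.array([1, 2, 3, 1, 2], dtype=float).reshape(-1, 1)
yTrain = np.array([1, 2, 3, 2, 1], dtype=float).reshape(-1, 1)

def grouper(g, x, y):
    # One batch per group, repeated forever (as in the answer above).
    while True:
        for gr in range(1, g.max() + 1):
            indices = g == gr
            yield (x[indices], y[indices])

def customLoss(yTrue, yPred):
    # Per-sample squared error plus the squared group-sum error.
    return K.sum(K.square(yTrue - yPred)) + K.square(K.sum(yTrue) - K.sum(yPred))

model = Sequential([Dense(1, input_shape=(1,))])   # a tiny linear model
model.compile(optimizer='adam', loss=customLoss)
model.fit_generator(grouper(gTrain, xTrain, yTrain),
                    steps_per_epoch=gTrain.max(), epochs=200, verbose=0)

print(model.predict(xTrain).ravel())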
