Python: Why doesn't my module instantiate an object via an exec() statement? - python-3.x

I created my own version of the GridSearchCV module from the sklearn.model_selection library. My version iterates through the parameters one at a time instead of searching all possible combinations. For example, for an SVR model, suppose we have three parameters defined as follows:
{
    'gamma': np.arange(0.0, 1.0, 0.1),
    'C': np.arange(1, 10, 1),
    'epsilon': np.arange(0.0, 1.0, 0.1)
}
In the first pass the algorithm finds the single best gamma coefficient (out of ten candidates). It then moves on to C, tuning it with that gamma value fixed. After those ten iterations it moves to epsilon and picks the optimal epsilon given the chosen [gamma, C] pair. This gives us 30 combinations to check in total instead of 1000 (10*10*10).
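For illustration, here is a minimal sketch of this coordinate-wise search. It is not the author's implementation (that lives in the linked repo); the name greedy_grid_search and the use of test-set MSE as the selection score are assumptions for the example.
from sklearn.base import clone
from sklearn.metrics import mean_squared_error

def greedy_grid_search(estimator, param_grid, X_train, y_train, X_test, y_test):
    """Tune one parameter at a time, freezing each best value before moving on."""
    best_params = {}
    for name, candidates in param_grid.items():
        best_score, best_value = None, None
        for value in candidates:
            model = clone(estimator).set_params(**best_params, **{name: value})
            model.fit(X_train, y_train)
            score = mean_squared_error(y_test, model.predict(X_test))
            if best_score is None or score < best_score:
                best_score, best_value = score, value
        best_params[name] = best_value  # freeze this parameter and move on
    return best_params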
I'd like to import my opt_grid_search object into my projects, like below:
from own_udf_functions import show_description, opt_grid_search
The function begins with a dynamically built statement that creates the object to be optimized:
exec_string = 'opt_object = ' + object_name + '(' + def_params + ')'
which produces, for example:
opt_object = SVR(kernel = 'rbf')
However, when I try to use the code in another script as below:
opt_grid_search(object_name, params_set, X_train, y_train, X_test, y_test,
                cross_val=2, def_params=def_params)
the following error appears:
*File "C:\Users\Marek\Desktop\Python\Github\Kernele\Kaggle Competitions\own-udf-
functions\own_udf_functions.py", line 40, in opt_grid_search
opt_object.fit(X_train,y_train)
NameError: name 'opt_object' is not defined*
It seems that the opt_grid_search function didn't execute the following line of code:
opt_object = SVR(kernel = 'rbf')
and the object named opt_object wasn't actually created.
I think it has to do with classes, but I would like help understanding what actually happened in this error. That seems like crucial knowledge that would help me write more 'Pythonic' code instead of redefining all of my functions in every single script.
Secondly, please let me know whether this kind of optimization makes sense at all, or whether GridSearch really needs to go through all possible combinations.
I tried to keep this description as short as possible; if you need the full code for reference, it is available here:
https://github.com/markoo26/own-udf-functions

The issue here is the exec function and the namespace/scope in which it operates. Inside a function body, exec() writes its assignments to a temporary dictionary copy of the local namespace, so opt_object never becomes a real local variable. The easiest workaround is to use eval() instead, which explicitly returns an object. So you end up with something like:
exec_string = object_name + '(' + def_params + ')'
opt_object = eval(exec_string)
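To see why, here is a small self-contained demonstration, using SVR purely as an example; passing an explicit namespace dict to exec() is another common workaround:
from sklearn.svm import SVR

def broken():
    exec("opt_object = SVR(kernel='rbf')")
    return opt_object  # NameError: exec() assigned into a temporary dict, not the real locals

def with_eval(object_name, def_params):
    # eval() evaluates an expression and hands the result back to a normal assignment
    return eval(object_name + '(' + def_params + ')')

def with_namespace(object_name, def_params):
    # alternative: give exec() an explicit dictionary to assign into
    ns = {'SVR': SVR}
    exec('opt_object = ' + object_name + '(' + def_params + ')', ns)
    return ns['opt_object']

print(with_eval('SVR', "kernel='rbf'"))       # an SVR instance
print(with_namespace('SVR', "kernel='rbf'"))  # an SVR instance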

Related

Absolute value function not recognized as Disciplined Convex Program (CVXPY)

I am trying to run the following optimization using CVXPY:
import cvxpy as cp
import numpy as np
weights_vec = cp.Variable(10)
er_vec = cp.Parameter(10, value=np.random.randn(10))
prev_h_vec = cp.Parameter(10, value=np.ones(10))
tcost_vec = cp.Parameter(10, value=[0.03]*10)
objective = cp.Maximize(weights_vec @ er_vec - tcost_vec @ cp.abs(weights_vec - prev_h_vec))
prob = cp.Problem(objective)
prob.solve()
However, I get the following error:
cvxpy.error.DCPError: Problem does not follow DCP rules. Specifically:
The objective is not DCP. Its following subexpressions are not:
param516 @ abs(var513 + -param515)
The absolute value function is convex, so I am not quite sure why CVXPY is throwing an error for it in the objective.
DCP-ness depends on the sign of tcost_vec.
As this is an unconstrained parameter, its sign is unknown, so the product with the convex cp.abs(...) term cannot be certified.
Both of the following will work:
# we promise it's nonnegative
tcost_vec = cp.Parameter(10, value=[0.03]*10, nonneg=True)
# it's a fixed constant and can be analyzed directly
tcost_vec = np.array([-0.03]*10)
Given the code as posted, there is no reason to use parameters (yet).
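For completeness, a minimal runnable version with the one-line fix applied (the toy problem may well be unbounded for random data; the point is only that DCP verification now passes):
import cvxpy as cp
import numpy as np

weights_vec = cp.Variable(10)
er_vec = cp.Parameter(10, value=np.random.randn(10))
prev_h_vec = cp.Parameter(10, value=np.ones(10))
# declaring the sign lets CVXPY verify concavity of the objective
tcost_vec = cp.Parameter(10, value=[0.03]*10, nonneg=True)

objective = cp.Maximize(weights_vec @ er_vec
                        - tcost_vec @ cp.abs(weights_vec - prev_h_vec))
prob = cp.Problem(objective)
prob.solve()
print(prob.status)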

Python Surprise package gives different predictions from the predict method vs manual computation using latent factors

I am using the surprise package for matrix factorization. Below is the code from the tutorial:
from surprise import SVD
from surprise import Dataset
from surprise import accuracy
from surprise.model_selection import train_test_split
# Load the movielens-100k dataset (download it if needed),
data = Dataset.load_builtin('ml-100k')
trainset = data.build_full_trainset()
algo = SVD()
algo.fit(trainset)
algo.predict(str(196), str(301))
Out:
Prediction(uid='196', iid='301', r_ui=4, est=3.0740854315737174, details={'was_impossible': False})
However, when I use the SVD equation from its documentation and source code to manually compute r_hat (the predicted rating):
algo.trainset.global_mean + algo.bi[301] + algo.bu[196] + np.dot(algo.qi[301], algo.pu[196])
Out:
2.817335384596893
The predictions do not match at all. Am I doing something wrong or missing something?
I managed to figure it out. There's a difference between raw users/items and inner users/items. The former are the actual names of the users and items (e.g., a user called John or a number like 10; an item called Avengers or a number like 20), while the latter are label-encoded values assigned to the original users/items.
The trainset keeps four hidden attributes, _inner2raw_id_items, _inner2raw_id_users, _raw2inner_id_items and _raw2inner_id_users, which are dicts mapping one set of ids to the other.
If we call trainset._raw2inner_id_users and trainset._raw2inner_id_items, we get:
_raw2inner_id_users
{'196': 0,
'186': 1,
'22': 2, ...}
_raw2inner_id_items
{'242': 0,
'302': 1,
'377': 2, ...
'301': 404, ...}
Therefore, when we call:
algo.predict(str(196), str(301))
Out:
# different from original post as the prediction changes from run to run
Prediction(uid='196', iid='301', r_ui=None, est=3.2072618383879736, details={'was_impossible': False})
We are actually referring to inner user 0 and inner item 404. So when we use the manual computation with the latent factors, biases, and global mean according to the SVD equation, we should use these inner ids instead:
algo.trainset.global_mean + algo.bi[404] + algo.bu[0] + np.dot(algo.qi[404], algo.pu[0])
Output:
3.2072618383879736
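As a side note, the trainset also exposes public helpers, to_inner_uid and to_inner_iid, so the private dicts don't have to be touched directly. A short sketch of the manual computation built on them, assuming the fitted algo and trainset from above:
import numpy as np

# convert raw ids to inner ids with the trainset's public helpers
inner_uid = trainset.to_inner_uid('196')
inner_iid = trainset.to_inner_iid('301')

# r_hat = global mean + item bias + user bias + q_i . p_u
r_hat = (algo.trainset.global_mean
         + algo.bi[inner_iid]
         + algo.bu[inner_uid]
         + np.dot(algo.qi[inner_iid], algo.pu[inner_uid]))
print(r_hat)  # should match algo.predict('196', '301').est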

What initializer does 'uniform' use?

Keras offers a variety of initializers for weights and biases. Which one does 'uniform' use?
I would think it would be RandomUniform, but this is not confirmed in the documentation, and I reached a dead end in the source code: the key 'uniform' is looked up as a global variable within the module, and I cannot find where the variable uniform is set.
One other way to confirm this is to look at the initializers source code:
# Compatibility aliases
zero = zeros = Zeros
one = ones = Ones
constant = Constant
uniform = random_uniform = RandomUniform
normal = random_normal = RandomNormal
truncated_normal = TruncatedNormal
identity = Identity
orthogonal = Orthogonal
I think the answer below is better, though.
Simpler solution:
From the interactive prompt,
import keras
keras.initializers.normal
# Out[3]: keras.initializers.RandomNormal
keras.initializers.uniform
# Out[4]: keras.initializers.RandomUniform
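You can also resolve the alias the same way Keras does internally, via keras.initializers.get (a small check; the exact printed form may differ between Keras versions):
import keras

# get() turns a string identifier into an initializer instance
init = keras.initializers.get('uniform')
print(type(init))  # <class 'keras.initializers.RandomUniform'>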
Original post:
Running the debugger into the deserialize method in initializers.py
and examining
globals()['uniform']
shows that the value is indeed
<class 'keras.initializers.RandomUniform'>
Similarly, 'normal' is shown in the debugger to be <class 'keras.initializers.RandomNormal'>.
Note that uniform often works better than normal in practice, and the theoretical advantages of one over the other are not clear.

Fitting multiple Gaussians using the curve_fit function from scipy in Python 3.x

I am trying to fit trimodal Gaussian functions using scipy and Python 3.x. I think I'm almost there, but I'm scratching my head because I can't quite figure out what is going wrong.
import numpy as np
from scipy.optimize import curve_fit

data = np.loadtxt('mock.txt')
my_x = data[:, 0]
my_y = data[:, 1]

def gauss(x, mu, sigma, A):
    return A*np.exp(-(x-mu)**2/2/sigma**2)

def trimodal_gauss(x, mu1, sigma1, A1, mu2, sigma2, A2, mu3, sigma3, A3):
    return gauss(x, mu1, sigma1, A1) + gauss(x, mu2, sigma2, A2) + gauss(x, mu3, sigma3, A3)

"""
Gaussian fitting parameters recognized in each file
"""
first_centroid = (10180.4*2 + 9)/9
second_centroid = (10180.4*2 + (58.6934*1) + 7)/9
third_centroid = (10180.4*2 + (58.6934*2) + 5)/9
centroid = [first_centroid, second_centroid, third_centroid]

apparent_resolving_power = 1200
sigma = []
for i in range(len(centroid)):
    sigma.append(centroid[i]/(apparent_resolving_power*2.355))

height = [1, 1, 1]
p = [list(t) for t in zip(centroid, sigma, height)]

for i in range(9):
    popt, pcov = curve_fit(trimodal_gauss, my_x, my_y, p0=p[i])
Using this code, I get the following error.
TypeError: trimodal_gauss() missing 6 required positional arguments: 'mu2', 'sigma2', 'A2', 'mu3', 'sigma3', and 'A3'
I understand what the error message is saying, but I don't understand how I'm failing to provide the six remaining initial guesses.
I appreciate your input!
It looks like you are trying to call curve_fit nine separate times and give it a different initial parameter guess each time by specifying p0=p[i] (which is not what your code actually does, because p is a nested list of three three-element lists, so each p[i] supplies only three of the nine parameters).
You should make sure that p is a one-dimensional array with 9 elements, and call curve_fit only once. Something like
p = np.array([list(t) for t in zip(centroid, sigma, height)]).flatten()
popt, pcov = curve_fit(trimodal_gauss, my_x, my_y, p0=p)
might work.
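For reference, here is a self-contained sketch of the single-call fit on synthetic data; mock.txt and the centroid formulas above are specific to the original data, so the peak positions and noise level below are made up for the example:
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, mu, sigma, A):
    return A*np.exp(-(x-mu)**2/2/sigma**2)

def trimodal_gauss(x, mu1, sigma1, A1, mu2, sigma2, A2, mu3, sigma3, A3):
    return gauss(x, mu1, sigma1, A1) + gauss(x, mu2, sigma2, A2) + gauss(x, mu3, sigma3, A3)

# synthetic data: three overlapping peaks plus a little noise
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 500)
true_params = [2.0, 0.4, 1.0, 5.0, 0.5, 0.8, 8.0, 0.3, 1.2]
y = trimodal_gauss(x, *true_params) + 0.02*rng.standard_normal(x.size)

# one flat vector of nine initial guesses: (mu, sigma, A) per peak
p0 = [2.2, 0.5, 1.0, 4.8, 0.5, 1.0, 7.9, 0.5, 1.0]
popt, pcov = curve_fit(trimodal_gauss, x, y, p0=p0)
print(popt.round(2))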

Calling separate theano functions with the same inputs?

I have something like this:
x=T.matrix('x')
params = [self.W, self.b1, self.b2]
hidden = self.activation_function(T.dot(x, self.W)+self.b1)
output = T.dot(hidden,T.transpose(self.W))+self.b2
output = self.output_function(output)
L = -T.sum(x*T.log(output) + (1-x)*T.log(1-output), axis=1)
cost=L.mean()
th_train = th.function(inputs=[index], outputs=[cost], updates=updates,
                       givens={x: self.X[index:index+mini_batch_size, :]})
This is working fine. I would now like to see what the mean of the hidden units is. I tried adding this before the line where L = -T.sum... is declared:
hm = T.mean(hidden)
hidden_mean_func = th.function(inputs=[hm], outputs=[hm], name="hidden_mean_function_printer")
print hidden_mean_func(hm)
I get the following error:
TypeError: ('Bad input argument to theano function with name "hidden_mean_function_printer" at index 0(0-based)', 'Expected an array-like object, but found a Variable: maybe you are trying to call a function on a (possibly shared) variable instead of a numeric array?')
I really have two questions: 1) Why can't I do this? 2) What is the correct way to achieve what I want?
Thank you
As far as I can see, you are giving the compiled function its own output variable as input. If you use your array/matrix of hidden units instead, the code should work.
hidden_mean_func = th.function(inputs=[hidden], outputs=[hm], name="hidden_mean_function_printer")
print hidden_mean_func(hidden)
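Note also that hidden is itself a symbolic expression in x, so you can compile a function that takes the numeric input matrix directly; a sketch under the question's setup, assuming self.X is a Theano shared variable (hence the get_value() call):
# hidden depends on x, so compile a function of x itself
hidden_mean_func = th.function(inputs=[x], outputs=T.mean(hidden),
                               name="hidden_mean_function_printer")

# call it with a real numpy array, e.g. one mini-batch of the data
print hidden_mean_func(self.X.get_value()[0:mini_batch_size, :])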
