Suppose I've got three tensors T, r, E of shape (n,1), and I want to implement a function that calculates sum over (i,j) of (T[j] < T[i]) * (r[j] > r[i]) * E[j].
How can I proceed?
This is what I've got:
# tensor examples
import tensorflow as tf
import keras.backend as K

T = K.constant([1, 4, 5])
r = K.constant([.8, .3, .7])
E = K.constant([1, 0, 1])

# cartesian product of T to compare element-wise
c_T = tf.stack(tf.meshgrid(T, T, indexing='ij'), axis=-1)
cartesian_T = tf.reshape(c_T, (-1, 2))

# cartesian product of r to compare element-wise (a new name, so r is not overwritten)
c_r = tf.stack(tf.meshgrid(r, r, indexing='ij'), axis=-1)
cartesian_r = tf.reshape(c_r, (-1, 2))

# TO DO:
# compare the two columns in cartesian_T and cast to integer 1/0 if the
# second column of T is less/greater than the first column of T => return tensor
# compare the two columns in cartesian_r and cast to integer 1/0 if the
# second column of r is greater/less than the first column of r => return tensor
# multiply the previous tensors by a broadcasted version of E, then do K.sum()
Do you think I'm on the right track? What would you suggest to implement this?
Try:
import keras.backend as K
import tensorflow as tf

T = K.constant([1, 4, 5])
r = K.constant([.8, .3, .7])
E = K.constant([1, 0, 1])

# Broadcast the comparisons: T_new[i, j] == (T[j] < T[i]), r_new[i, j] == (r[j] > r[i])
T_new = tf.less(T[tf.newaxis, :], T[:, tf.newaxis])
r_new = tf.greater(r[tf.newaxis, :], r[:, tf.newaxis])
# E_row[i, j] == E[j], so masking it picks out E[j] for every qualifying pair
E_row, _ = tf.meshgrid(E, E)
result = tf.reduce_sum(tf.boolean_mask(E_row, tf.logical_and(T_new, r_new)))

with tf.Session() as sess:
    print(sess.run(result))
# prints:
# 2.0
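The boolean mask works here because tf.boolean_mask flattens everything before the sum. Equivalently, you can cast the comparison grids to floats and multiply; a minimal sketch reusing T_new, r_new and E_row from above, which mirrors the double sum directly:

# mask[i, j] == 1 exactly when T[j] < T[i] and r[j] > r[i]
mask = tf.cast(T_new, tf.float32) * tf.cast(r_new, tf.float32)
# sum(i,j) mask[i, j] * E[j]
result_alt = tf.reduce_sum(mask * E_row)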
I tried using NumPy, SciPy and scikit-learn, but couldn't find what I need in any of them. Basically, I need to fit a curve to a dataset while restricting some of the coefficients to known values. I found how to do it in MATLAB, using fittype, but couldn't do it in Python.
In my case I have a dataset of X and Y and I need to find the best-fitting curve. I know it's a polynomial of second degree (ax^2 + bx + c), and I know the values of b and c, so I just need it to find the value of a.
The solution I found in MATLAB was https://www.mathworks.com/matlabcentral/answers/216688-constraining-polyfit-with-known-coefficients which is the same problem as mine, except that their polynomial was of degree 5. How could I do something similar in Python?
To add some info: I need to fit a curve to a dataset, so things like scipy.optimize.curve_fit that expect a function won't work (at least as far as I tried).
The tools you have available usually expect functions that take only their parameters (a being the only unknown in your case), or that take their parameters plus some data (a, x, and y in your case).
SciPy's curve_fit handles that use case just fine, as long as we hand it a function it understands. It expects x first and all your parameters as the remaining arguments:
from scipy.optimize import curve_fit
import numpy as np

b = 0
c = 0

def f(x, a):
    return c + x * (b + x * a)

x = np.linspace(-5, 5)
y = x**2

params, _ = curve_fit(f, x, y)
# params == [1.]
Alternatively, you can reach for your favorite minimization routine. The difference here is that you manually construct the error function so that it takes only the parameters you care about, and then you don't need to pass the data to SciPy.
from scipy.optimize import minimize
import numpy as np

b = 0
c = 0
x = np.linspace(-5, 5)
y = x**2

def error(a):
    # root-mean-square error between the model and the data
    prediction = c + x * (b + x * a)
    return np.linalg.norm(prediction - y) / len(prediction)**.5

result = minimize(error, np.array([42.]))
assert result.success
params = result.x
# params == [1.]
I don't think scipy has a partially applied polynomial fit function built-in, but you could use either of the above ideas to easily build one yourself if you do that kind of thing a lot.
from scipy.optimize import curve_fit
import numpy as np

def polyfit(coefs, x, y):
    # build a mapping from null coefficient locations to locations in the
    # function coefficients we're passing to curve_fit
    #
    # idx[j] == i means that unknown_coefs[i] belongs in coefs[j]
    _tmp = [i for i, c in enumerate(coefs) if c is None]
    idx = {j: i for i, j in enumerate(_tmp)}

    def f(x, *unknown_coefs):
        # create the entire polynomial's coefficients by filling in the unknown
        # values in the right places, using the aforementioned mapping
        p = [(unknown_coefs[idx[i]] if c is None else c) for i, c in enumerate(coefs)]
        return np.polyval(p, x)

    # we're passing an initial value just so that scipy knows how many
    # parameters to use
    params, _ = curve_fit(f, x, y, np.zeros((sum(c is None for c in coefs),)))
    # return all the polynomial's coefficients, not just the few we just discovered
    return np.array([(params[idx[i]] if c is None else c) for i, c in enumerate(coefs)])
x = np.linspace(-5, 5)
y = x**2

# (unknown)x^2 + 0x + 0
params = polyfit([None, 0, 0], x, y)
# params == [1., 0., 0.]
Similar features exist in nearly every mainstream scientific library; you just might need to reshape your problem a bit to frame it in terms of the available primitives.
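For instance, since b and c are known in the question, the residual y - (bx + c) is linear in the single unknown a, and even a plain least-squares solve recovers it. A minimal sketch, assuming the same x, y, b and c as above:

import numpy as np

b, c = 0, 0
x = np.linspace(-5, 5)
y = x**2

# y - (b*x + c) ~= a * x**2 is linear in a, so solve for a directly
a, *_ = np.linalg.lstsq(np.column_stack([x**2]), y - (b*x + c), rcond=None)
# a == [1.]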
I created an activation function class Threshold that should operate on one-hot-encoded image tensors.
The function performs min-max feature scaling on each channel followed by thresholding.
class Threshold(nn.Module):
    def __init__(self, threshold=.5):
        super().__init__()
        if threshold < 0.0 or threshold > 1.0:
            raise ValueError("Threshold value must be in [0,1]")
        else:
            self.threshold = threshold

    def min_max_fscale(self, input):
        r"""
        Applies min-max feature scaling to input. Each channel is treated individually.
        input is assumed to be N x C x H x W (one-hot-encoded prediction).
        """
        for i in range(input.shape[0]):      # N
            for j in range(input.shape[1]):  # C
                min = torch.min(input[i][j])
                max = torch.max(input[i][j])
                input[i][j] = (input[i][j] - min) / (max - min)
        return input

    def forward(self, input):
        assert (len(input.shape) == 4), f"input has wrong number of dims. Must have dim = 4 but has dim {input.shape}"
        input = self.min_max_fscale(input)
        return (input >= self.threshold) * 1.0
When I use the function I get the following error; I assume this is because the gradients are not calculated automatically.
Variable._execution_engine.run_backward(RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I already had a look at How to properly update the weights in PyTorch? but could not get a clue how to apply it to my case.
How is it possible to calculate the gradients for this function?
Thanks for your help.
The issue is that you are manipulating and overwriting elements; this type of operation can't be tracked by autograd. Instead, you should stick with built-in functions. Your example is not that tricky to tackle: you are looking to retrieve the minimum and maximum values over each of the input.shape[0] x input.shape[1] channels. Then you can scale your whole tensor in one go, i.e. in vectorized form. No for loops involved!
One way to compute min/max along multiple axes is to flatten those:
>>> x_f = x.flatten(2)
Then find the min and max along that flattened axis, keeping the reduced dimension so the results broadcast against x_f:
>>> x_min = x_f.min(axis=-1, keepdim=True).values
>>> x_max = x_f.max(axis=-1, keepdim=True).values
The resulting min_max_fscale function would look something like:
class Threshold(nn.Module):
    def min_max_fscale(self, x):
        r"""
        Applies min-max feature scaling to input. Each channel is treated individually.
        Input is assumed to be N x C x H x W (one-hot-encoded prediction).
        """
        x_f = x.flatten(2)
        x_min, x_max = x_f.min(-1, True).values, x_f.max(-1, True).values
        x_f = (x_f - x_min) / (x_max - x_min)
        return x_f.reshape_as(x)
Important note:
You will notice that you can now backpropagate through min_max_fscale... but not through forward. This is because the thresholding applies a boolean condition, which is not a differentiable operation.
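A quick way to see both halves of that claim, as a minimal sketch assuming the vectorized class above (temperature below is just an illustrative smoothing parameter, not part of the original code):

import torch
import torch.nn as nn

x = torch.rand(2, 3, 4, 4, requires_grad=True)
t = Threshold()

# gradients flow through the vectorized min-max scaling...
t.min_max_fscale(x).sum().backward()
print(x.grad is not None)  # True

# ...but (x >= threshold) * 1.0 detaches the result from the graph (no
# grad_fn), which is exactly what raises the RuntimeError in the question.
# One common workaround is a soft surrogate such as:
# torch.sigmoid((x - threshold) / temperature)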
I have a tensor a, and I'd like to loop over its rows and index values based on another tensor l, i.e. l gives the length of the vector I need from each row.
sess = tf.InteractiveSession()
a = tf.constant(np.random.rand(3,4)) # shape=(3,4)
a.eval()
Out:
array([[0.35879311, 0.35347166, 0.31525201, 0.24089784],
[0.47296348, 0.96773956, 0.61336239, 0.6093023 ],
[0.42492552, 0.2556728 , 0.86135674, 0.86679779]])
l = tf.constant(np.array([3,2,4])) # shape=(3,)
l.eval()
Out:
array([3, 2, 4])
Expected output:
[array([0.35879311, 0.35347166, 0.31525201]),
array([0.47296348, 0.96773956]),
array([0.42492552, 0.2556728 , 0.86135674, 0.86679779])]
The tricky part is the fact that a could have None as its first dimension, since that is what is usually defined as the batch size through a placeholder.
I cannot just use mask and condition as below, since I need to compute the variance of each row individually.
condition = tf.sequence_mask(l, tf.reduce_max(l))
a_true = tf.boolean_mask(a, condition)
a_true
Out:
array([0.35879311, 0.35347166, 0.31525201, 0.47296348, 0.96773956,
0.42492552, 0.2556728 , 0.86135674, 0.86679779])
I also tried to use tf.map_fn but can't get it to work.
elems = (a, l)
tf.map_fn(lambda x: x[0][:x[1]], elems)
Any help will be highly appreciated!
A TensorArray object can store tensors of different shapes. However, it is still not that simple. Take a look at this example, which does what you want using tf.while_loop() with a tf.TensorArray and the tf.slice() function:
import tensorflow as tf
import numpy as np

batch_data = np.array([[0.35879311, 0.35347166, 0.31525201, 0.24089784],
                       [0.47296348, 0.96773956, 0.61336239, 0.6093023 ],
                       [0.42492552, 0.2556728 , 0.86135674, 0.86679779]])
batch_idx = np.array([3, 2, 4]).reshape(-1, 1)

x = tf.placeholder(tf.float32, shape=(None, 4))
idx = tf.placeholder(tf.int32, shape=(None, 1))

n_items = tf.shape(x)[0]
init_ary = tf.TensorArray(dtype=tf.float32,
                          size=n_items,
                          infer_shape=False)

def _first_n(i, ta):
    ta = ta.write(i, tf.slice(input_=x[i],
                              begin=tf.convert_to_tensor([0], tf.int32),
                              size=idx[i]))
    return i + 1, ta

_, first_n = tf.while_loop(lambda i, ta: i < n_items,
                           _first_n,
                           [0, init_ary])

first_n = [first_n.read(i)                       # <-- extracts the tensors
           for i in range(batch_data.shape[0])]  # that you're looking for

with tf.Session() as sess:
    res = sess.run(first_n, feed_dict={x: batch_data, idx: batch_idx})
    print(res)
# [array([0.3587931 , 0.35347167, 0.315252  ], dtype=float32),
#  array([0.47296348, 0.9677396 ], dtype=float32),
#  array([0.4249255 , 0.2556728 , 0.86135674, 0.8667978 ], dtype=float32)]
Note:
We still had to use batch_size to extract the elements one by one from the first_n TensorArray using the read() method. We can't use any other method that returns a Tensor, because we have rows of different sizes (except the TensorArray.concat method, but it returns all elements stacked along one dimension).
If the TensorArray has fewer elements than the index you pass to TensorArray.read(index), you will get an InvalidArgumentError.
You can't use tf.map_fn, because it returns a tensor whose elements must all have the same shape.
The task is simpler if you only need to compute the variances of the first n elements of each row (without actually gathering the different-sized slices together). In that case we can directly compute the variance of each sliced tensor, write it to a TensorArray, and then stack the array into a tensor:
n_items = tf.shape(x)[0]
init_ary = tf.TensorArray(dtype=tf.float32,
                          size=n_items,
                          infer_shape=False)

def _variances(i, ta, begin=tf.convert_to_tensor([0], tf.int32)):
    mean, varian = tf.nn.moments(
        tf.slice(input_=x[i], begin=begin, size=idx[i]),
        axes=[0])             # <-- compute variance
    ta = ta.write(i, varian)  # <-- write the variance of each row to the TensorArray
    return i + 1, ta

_, variances = tf.while_loop(lambda i, ta: i < n_items,
                             _variances,
                             [0, init_ary])
variances = variances.stack()  # <-- read from the TensorArray into a Tensor

with tf.Session() as sess:
    res = sess.run(variances, feed_dict={x: batch_data, idx: batch_idx})
    print(res)  # [0.0003761  0.06120085 0.07217039]
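If you'd rather avoid the while loop entirely, the same per-row variances can be computed in vectorized form with the sequence mask from the question. A minimal sketch, assuming the same x placeholder and a lengths placeholder l of shape (None,):

l = tf.placeholder(tf.int32, shape=(None,))

# 0/1 mask marking the first l[i] entries of each row
mask = tf.cast(tf.sequence_mask(l, tf.shape(x)[1]), tf.float32)
counts = tf.cast(l, tf.float32)

# masked mean and (population) variance per row; padded entries contribute nothing
mean = tf.reduce_sum(x * mask, axis=1) / counts
var = tf.reduce_sum(tf.square(x - mean[:, None]) * mask, axis=1) / counts

with tf.Session() as sess:
    print(sess.run(var, feed_dict={x: batch_data, l: batch_idx.ravel()}))
    # ~ [0.0003761  0.06120085 0.07217039], matching the while_loop version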
I have a custom Keras layer, I use Theano as the backend, and I want to do the following operation:
Suppose we have a tensor with shape (N,). I want to set the K first values to a fixed value x (3 or whatever...). How do I do that? I assume I have to use argsort, but I don't know how to implement it using Theano.
For example, in a simple FF layer, how can I set the first N values of the tensor a to a value x?
def call(self, x, mask=None):
    a = K.dot(x, self.W)
    if self.bias:
        a += self.b
    return a
For example, using NumPy I can set the 10 first values of a to 1 like this:
a[a.argsort() <= 10] = 1
I want to do the same using Keras backend functions or Theano functions.
For a[a.argsort() <= 10] = 1 the equivalent code will be:
import numpy as np
from keras import backend as K

a = np.asarray([[3, 4, 2, 44, 22, 4, 5, 6, 77, 86, 3, 2, 3, 23, 44, 21],
                [3, 4, 22, 44, 2, 4, 54, 6, 77, 8, 3, 2, 36, 23, 4, 2]], dtype=np.float)
a_t = K.variable(a)

# NumPy version, for reference
a[a.argsort() <= 10] = 1
print(a)

# Keras/Theano version of the same operation
arg_sort = K.T.argsort(a_t)
_cond = K.lesser_equal(arg_sort, 10)
a_new = K.switch(_cond, 1, a_t)
print(K.eval(a_new))
In a single statement it will be:
a_new = K.switch(K.lesser_equal(K.T.argsort(a_t), 10), 1, a_t)
print(K.eval(a_new))
I am not sure if this is exactly what you need.
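One caveat, since the original NumPy line may or may not match the stated intent: a.argsort() <= 10 tests element positions, not ranks. If the goal is literally "set the K smallest values to x", the rank of each element comes from a double argsort. A small NumPy sketch of that variant (K_first is a hypothetical name; the same pattern would translate to K.switch with K.T.argsort applied twice):

import numpy as np

a = np.array([5., 1., 4., 2., 3.])
K_first = 2  # number of smallest values to overwrite

# rank[i] == position of a[i] in sorted order, so rank < K_first marks
# the K_first smallest entries
rank = a.argsort().argsort()
a[rank < K_first] = 1
print(a)  # [5. 1. 4. 1. 3.]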
I would like to get the normalized dot product of 2 matrices using Theano. By the normalized dot product of 2 matrices, I mean the normalized inner product of 2 vectors: take a row vector v_a from matrix A and a column vector v_b from matrix B; then AB_dot_norm = v_a . v_b / (|v_a| |v_b|).
I can get the norms of v_a and v_b with the following code. I am not sure how to normalize the dot_product matrix with the normalized vectors.
import theano
from theano import tensor

dot_product = tensor.dot(in_tensor, w_tensor)
in_normalized = in_tensor / in_tensor.norm(2, axis=1).reshape((in_tensor.shape[0], 1))
w_normalized = w_tensor / w_tensor.norm(2, axis=0).reshape((1, w_tensor.shape[1]))
I have figured out a solution; I thought it might be useful to post it here.
in_norm = in_tensor / in_tensor.norm(2, axis=1).reshape((in_tensor.shape[0], 1))
w_norm = w_tensor / w_tensor.norm(2, axis=0).reshape((1, w_tensor.shape[1]))
output = theano.tensor.dot(in_norm, w_norm)
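For completeness, here is a minimal sketch of how the result could be checked numerically; the symbolic matrix declarations and the NumPy cosine-similarity reference are assumptions added for the demo:

import numpy as np
import theano
from theano import tensor

in_tensor = tensor.matrix('in_tensor')  # rows are the v_a vectors
w_tensor = tensor.matrix('w_tensor')    # columns are the v_b vectors

in_norm = in_tensor / in_tensor.norm(2, axis=1).reshape((in_tensor.shape[0], 1))
w_norm = w_tensor / w_tensor.norm(2, axis=0).reshape((1, w_tensor.shape[1]))
output = tensor.dot(in_norm, w_norm)

f = theano.function([in_tensor, w_tensor], output)

A = np.random.rand(3, 4).astype(theano.config.floatX)
B = np.random.rand(4, 5).astype(theano.config.floatX)

# NumPy reference: cosine similarity between rows of A and columns of B
ref = (A @ B) / np.outer(np.linalg.norm(A, axis=1), np.linalg.norm(B, axis=0))
assert np.allclose(f(A, B), ref)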