I recently asked about one part of this question. I am building a chatbot, and there is a function that is causing problems. The function is given below:
def variable_from_sentence(sentence):
    vec, length = indexes_from_sentence(sentence)
    inputs = [vec]
    lengths_inputs = [length]
    if hp.cuda:
        batch_inputs = Variable(torch.stack(torch.Tensor(inputs), 1).cuda())
    else:
        batch_inputs = Variable(torch.stack(torch.Tensor(inputs), 1))
    return batch_inputs, lengths_inputs
But when I try to run the chatbot code, it gives me this error:
stack(): argument 'tensors' (position 1) must be tuple of Tensors, not tensor
For that reason, I fixed the function like this:
def variable_from_sentence(sentence):
    vec, length = indexes_from_sentence(sentence)
    inputs = [vec]
    lengths_inputs = [length]
    if hp.cuda:
        batch_inputs = torch.stack(inputs, 1).cuda()
    else:
        batch_inputs = torch.stack(inputs, 1)
    return batch_inputs, lengths_inputs
But it still gives me an error, this time:
TypeError: expected Tensor as element 0 in argument 0, but got list
What should I do now in this situation?
Since vec and length are plain Python values rather than Tensors, you can use torch.tensor directly:
def variable_from_sentence(sentence):
    vec, length = indexes_from_sentence(sentence)
    inputs = [vec]
    lengths_inputs = [length]
    if hp.cuda:
        batch_inputs = torch.tensor(inputs, device='cuda')
    else:
        batch_inputs = torch.tensor(inputs)
    return batch_inputs, lengths_inputs
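For context, a minimal sketch of the difference (the vec value here is a hypothetical index list, not the real output of indexes_from_sentence): torch.stack joins a sequence of existing Tensors along a new dimension, while torch.tensor builds a Tensor from plain Python data.

import torch

vec = [3, 7, 12]  # hypothetical index list

# torch.stack needs a list/tuple of Tensors; here it adds the new axis at dim 1
stacked = torch.stack([torch.tensor(vec)], 1)  # shape (3, 1)

# torch.tensor accepts nested Python lists directly; the batch axis lands at dim 0
direct = torch.tensor([vec])  # shape (1, 3)

print(stacked.shape, direct.shape)  # torch.Size([3, 1]) torch.Size([1, 3])

Note the two layouts differ (sequence-major versus batch-major), so downstream code may expect the batch dimension in a specific position.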
full code: https://gist.github.com/QuantVI/79a1c164f3017c6a7a2d860e55cf5d5b
TLDR: sum(a3) gives a number like 770, when it should be more like 270 - as in 270 of 1000 trials where the result of drawing 4 balls contained (at least) 2 blue and 1 green ball.
I've rewritten both my way of creating the sample output and my way of comparing the results twice already. Python has the syntax `all(x in a for x in b)`, which I used initially, then changed to something more deliberate to see if there was a difference. I still get 750+ `True` evaluations of the trials. This is why I reassessed how I was selecting without replacement.
I've tested the draw function on its own with different Hats and was sure it worked.
The expected probability when drawing 4 balls, without replacement, from a hat containing (blue=3, red=2, green=6), and having the outcome contain (blue=2, green=1) or ['blue','blue','green'],
is around 27.2%. In my 1000 trials, I get higher than 700, repeatedly.
Is the error in Hat.draw() or is it in experiment()?
Note: certain things are commented out because I am debugging. Use sum(a3), as experiment is currently set up to return things other than the probability.
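(As a sanity check on that figure, the exact probability can be enumerated directly; a quick sketch using math.comb, Python 3.8+:)

from math import comb

total = comb(11, 4)  # hat holds blue=3, red=2, green=6 -> 11 balls, draw 4
favorable = sum(
    comb(3, b) * comb(6, g) * comb(2, 4 - b - g)  # choose blues, greens, reds
    for b in range(2, 4)       # at least 2 blue
    for g in range(1, 5 - b)   # at least 1 green
    if 0 <= 4 - b - g <= 2     # the rest must come from the 2 reds
)
print(favorable / total)  # 87/330, about 0.264, i.e. in the roughly-27% ballpark quoted above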
import copy
import random

# Consider using the modules imported above.

class Hat:
    def __init__(self, **kwargs):
        self.d = kwargs
        self.contents = [
            key for key, val in kwargs.items() for num in range(val)
        ]

    def draw(self, num: int) -> list:
        if num >= len(self.contents):
            return self.contents
        else:
            indices = random.sample(range(len(self.contents)), num)
            chosen = [self.contents[idx] for idx in indices]
            # new_contents = [v for i, v in enumerate(self.contents) if i not in indices]
            new_contents = [pair[1] for pair in enumerate(self.contents)
                            if pair[0] not in indices]
            self.contents = new_contents
            return chosen

    def __repr__(self):
        return str(self.contents)


def experiment(hat, expected_balls, num_balls_drawn, num_experiments):
    trials = []
    for n in range(num_experiments):
        copyn = copy.deepcopy(hat)
        result = copyn.draw(num_balls_drawn)
        trials.append(result)
    # trials = [copy.deepcopy(hat).draw(num_balls_drawn) for n in range(num_experiments)]
    expected_contents = [key for key, val in expected_balls.items() for num in range(val)]
    temp_eval = [[o for o in expected_contents if o in trial] for trial in trials]
    temp_compare = [evaled == expected_contents for evaled in temp_eval]
    return expected_contents, temp_eval, temp_compare, trials
    # evaluations = [all(x in trial for x in expected_contents) for trial in trials]
    # if evaluations: prob = sum(evaluations)/len(evaluations)
    # else: prob = 0
    # return prob, expected_contents


# hat3 = Hat(red=5, orange=4, black=1, blue=0, pink=2, striped=9)
# hat4 = Hat(red=1, orange=2, black=3, blue=2)
hat1 = Hat(blue=3, red=2, green=6)
a1, a2, a3, a4 = experiment(hat=hat1, expected_balls={"blue": 2, "green": 1},
                            num_balls_drawn=4, num_experiments=1000)
# actual = probability
# expected = 0.272
# self.assertAlmostEqual(actual, expected, delta=0.01, msg='Expected experiment method to return a different probability.')

hat2 = Hat(yellow=5, red=1, green=3, blue=9, test=1)
b1, b2, b3, b4 = experiment(hat=hat2, expected_balls={"yellow": 2, "blue": 3, "test": 1},
                            num_balls_drawn=20, num_experiments=100)
# actual = probability
# expected = 1.0
# self.assertAlmostEqual(actual, expected, delta=0.01, msg='Expected experiment method to return a different probability.')
The issue is temp_eval = [[o for o in expected_contents if o in trial] for trial in trials]. It will always add both 'blue' entries to the list even if only one blue exists in the result of a trial.
However, I couldn't fix the error in a straightforward way. My first fix produced a much lower answer, something less than 0.1, when around 0.27 (270 of 1000 trials) is what I need.
The roundabout solution was to convert lists like ['red', 'green', 'blue', 'green'] into dictionaries by calling dict on a collections.Counter of that list, then do a key-wise comparison of the values, such as [y[key] <= x.get(key, 0) for key in y.keys()]. In this comparison y is the expected_balls variable and x is the dict built from the Counter object. If x doesn't have one of the keys, we get 0, and zero will be less than the value of any key in expected_balls.
From here we use functools.reduce to turn the output into a single True or False value. Then we map that functionality (compare all keys and get one T/F value) across all trials.
import collections
from functools import reduce

def experiment(hat, expected_balls, num_balls_drawn, num_experiments):
    trials = [copy.deepcopy(hat).draw(num_balls_drawn)
              for n in range(num_experiments)]
    # count each colour drawn per trial, e.g. {'blue': 2, 'green': 1, 'red': 1}
    trials_kvpairs = [dict(collections.Counter(trial)) for trial in trials]

    def contains(contained: dict, container: dict):
        # True only if the trial holds at least the required count of every colour
        each = [container.get(key, 0) >= contained[key]
                for key in contained.keys()]
        return reduce(lambda item0, item1: item0 and item1, each)

    trials_success = list(map(lambda t: contains(expected_balls, t), trials_kvpairs))
    # expected_contents = [pair[0] for pair in expected_balls.items() for num in range(pair[1])]
    # temp_eval = [[o for o in trial if o in expected_contents] for trial in trials]
    # temp_compare = [evaled == expected_contents for evaled in temp_eval]
    # if temp_compare: prob = sum(temp_compare)/len(trials)
    # else: prob = 0
    return 'prob', trials_kvpairs, trials_success
When run with experiment(hat=hat1, expected_balls={"blue":2,"green":1}, num_balls_drawn=4, num_experiments=1000), the sum of the third element of the output was 276.
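For what it's worth, the per-trial check can also be written without reduce, since a Counter returns 0 for missing keys (a sketch; this mirrors the contains helper above):

import collections

def trial_contains(expected_balls, trial):
    # True when the trial holds at least the expected count of every colour
    counts = collections.Counter(trial)
    return all(counts[key] >= val for key, val in expected_balls.items())

# trial_contains({'blue': 2, 'green': 1}, ['blue', 'green', 'blue', 'red']) -> True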
I have a Python function which accepts a vector-like parameter, but I would like it so that if somebody calls the function with a non-iterable argument, the function accepts it and treats it like a one-element vector.
For example, a function which returns the size of a vector:
def longitud(v):
    return len(v)

y = [1, 2]
print(longitud(y))  # it will return 2, OK

x = 1
print(longitud(x))  # ERROR
It produces an error because x is not iterable. I would like the longitud function to accept both kinds of arguments without problems and, in the second case, treat x like a one-element vector. Is there an elegant way to do this?
Do you want this?
def longitud(*v):
    return len(v)

y = [1, 2]
print(longitud(*y))  # it will return 2, OK

x = 1
print(longitud(x))  # No ERROR
Alternative (check whether the parameter is iterable; if not, return 1):
from collections.abc import Iterable

def longitud(v):
    if isinstance(v, Iterable):
        return len(v)
    return 1

y = [1, 2]
print(longitud(y))  # it will return 2, OK

x = 1
print(longitud(x))  # NO ERROR
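If the function also needs to index into v, another option is to normalize the argument up front rather than special-casing the return value. A sketch (as_vector is a hypothetical helper name; strings are iterable, so they are treated as scalars here on the assumption that that is the desired behavior):

from collections.abc import Iterable

def as_vector(v):
    # wrap non-iterables (and strings) in a one-element list
    if isinstance(v, Iterable) and not isinstance(v, str):
        return list(v)
    return [v]

def longitud(v):
    return len(as_vector(v))

print(longitud([1, 2]))  # 2
print(longitud(1))       # 1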
I have a custom Keras layer that should return specific outputs for specific inputs. I don't want it to be trainable.
The layer should do the following
if input = [1,0] then output = 1
if input = [0,1] then output = 0
Instead, it always outputs -1, the value I set for when there's a problem.
I think the line that is not behaving as I expect it would is:
if(test_mask_1_result_count == 2)
Here's the custom layer:
class my_custom_layer(layers.Layer):
    def __init__(self, **kwargs):
        super(my_custom_layer, self).__init__(**kwargs)

    def call(self, inputs, training=None):
        def encode():
            # set up the test mask:
            test_mask_1 = np.array([0, 1], dtype=np.int32)
            k_test_mask_1 = backend.variable(value=test_mask_1)
            # test if the input is equal to the test mask
            test_mask_1_result = backend.equal(inputs, k_test_mask_1)
            # add up all the trues
            test_mask_1_result_count = tf.reduce_sum(tf.cast(test_mask_1_result, tf.int32))
            # return if we've found the right mask
            if(test_mask_1_result_count == 2):
                res = np.array([0]).reshape((1,))  # top left
                k_res = backend.variable(value=res)
                return k_res

            # set up to test the second mask
            test_mask_2 = np.array([1, 0], dtype=np.int32)
            k_test_mask_2 = backend.variable(value=test_mask_2)
            # test if the input is equal to the test mask
            test_mask_2_result = backend.equal(inputs, k_test_mask_2)
            # add up all the trues
            test_mask_2_result_count = tf.reduce_sum(tf.cast(test_mask_2_result, tf.int32))
            # return if we've found the right mask
            if(test_mask_2_result_count == 2):
                res = np.array([1]).reshape((1,))  # top left
                k_res = backend.variable(value=res)
                return k_res

            # if we've got here we're in trouble:
            res = np.array([-1]).reshape((1,))  # top left
            k_res = backend.variable(value=res)
            return k_res

        return encode()

    def compute_output_shape(self, input_shape):
        return (input_shape[0], 1)
Why doesn't the if ever trigger?
I also produced a MWE using keras outside of a network. This seems to work as intended:
mask_1 = np.array([1,0],dtype=np.int32)
k_mask_1 = backend.variable(value=mask_1)
input_1 = np.array([1,0],dtype=np.int32)
k_input_1 = backend.variable(value=input_1)
mask_eq = backend.equal(k_input_1,k_mask_1)
mask_eq_sum = tf.reduce_sum(tf.cast(mask_eq, tf.int32))
# keras backend
sess = backend.get_session()
print(sess.run(mask_eq_sum))
Outputs 2
I suspect there is something fundamental that I don't understand.
I'm not sure what the problem is with your code, but your layer seems to be much more complicated than necessary. For instance,
my_custom_layer = layers.Lambda(lambda x: x[0])
should meet your specs. If you want it to be more robust, you could use
my_custom_layer = layers.Lambda(lambda x: 1 if x == [1,0] else 0 if x == [0,1] else -1)
or
def mask_func(in_t):
    if in_t == [1, 0]:
        return 1
    elif in_t == [0, 1]:
        return 0
    else:
        return -1

my_custom_layer = layers.Lambda(mask_func)
instead. Assuming you're using TF2.0, custom layers are pretty lenient. Obviously, if you're using this to process batches, you'll need to modify it a little bit, but hopefully you get the point.
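For illustration, here is one way the Lambda version might be wired into a model (a sketch assuming TF 2.x; indexing the last axis with x[..., 0] rather than x[0] is my adjustment to keep the layer batch-safe, since for a one-hot pair the first component is already the desired output):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# [1, 0] -> 1 and [0, 1] -> 0 is just the first component of the pair
my_custom_layer = layers.Lambda(lambda x: x[..., 0])

inp = layers.Input(shape=(2,))
model = tf.keras.Model(inp, my_custom_layer(inp))

print(model.predict(np.array([[1, 0], [0, 1]], dtype=np.float32)))  # -> [1. 0.]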
I have an optimization problem and I'm solving it with scipy and its minimize function. I use SLSQP as the method because it is the only one that fits my problem. The function to optimize is a cost function, with 'x' as a list of percentages. I have some constraints which have to be respected:
First, the sum of the percentages should be 100 (PercentSum(x)). This constraint is added as 'eq' (equality), as you can see in the code.
The second constraint is about a physical value which must be less than 'proberty1Max'. This constraint is added as 'ineq' (inequality). So if 'proberty1 < proberty1Max', the function should be bigger than 0; otherwise the function should be 0. The function is differentiable.
Below you can see a model of my attempt. The problem is the 'constraint' function: I get solutions where 'prop' is bigger than 'proberty1Max'.
import numpy as np
from scipy.optimize import minimize

class objects:
    def __init__(self, percentOfInput, min, max, cost, proberty1, proberty2):
        self.percentOfInput = percentOfInput
        self.min = min
        self.max = max
        self.cost = cost
        self.proberty1 = proberty1
        self.proberty2 = proberty2

class data:
    def __init__(self):
        self.objectList = list()
        self.objectList.append(objects(10, 0, 20, 200, 2, 7))
        self.objectList.append(objects(20, 5, 30, 230, 4, 2))
        self.objectList.append(objects(30, 10, 40, 270, 5, 9))
        self.objectList.append(objects(15, 0, 30, 120, 2, 2))
        self.objectList.append(objects(25, 10, 40, 160, 3, 5))
        self.proberty1Max = 1
        self.proberty2Max = 6

D = data()

def optiFunction(x):
    for index, obj in enumerate(D.objectList):
        obj.percentOfInput = x[1]

    costSum = 0
    for obj in D.objectList:
        costSum += obj.cost * obj.percentOfInput

    return costSum

def PercentSum(x):
    y = np.sum(x) - 100
    return y

def constraint(x, val):
    for index, obj in enumerate(D.objectList):
        obj.percentOfInput = x[1]

    prop = 0
    if val == 1:
        for obj in D.objectList:
            prop += obj.proberty1 * obj.percentOfInput
        return D.proberty1Max - prop
    else:
        for obj in D.objectList:
            prop += obj.proberty2 * obj.percentOfInput
        return D.proberty2Max - prop

def checkConstrainOK(cons, x):
    for con in cons:
        y = con['fun'](x)
        if con['type'] == 'eq' and y != 0:
            print("eq constrain not respected y= ", y)
            return False
        elif con['type'] == 'ineq' and y < 0:
            print("ineq constrain not respected y= ", y)
            return False
    return True

initialGuess = []
b = []
for obj in D.objectList:
    initialGuess.append(obj.percentOfInput)
    b.append((obj.min, obj.max))
bnds = tuple(b)

cons = list()
cons.append({'type': 'eq', 'fun': PercentSum})
cons.append({'type': 'ineq', 'fun': lambda x, val=1: constraint(x, val)})
cons.append({'type': 'ineq', 'fun': lambda x, val=2: constraint(x, val)})

solution = minimize(optiFunction, initialGuess, method='SLSQP',
                    bounds=bnds, constraints=cons, options={'eps': 0.001, 'disp': True})

print('status ' + str(solution.status))
print('message ' + str(solution.message))
checkConstrainOK(cons, solution.x)
There is no way to find a solution, but the output is this:
Positive directional derivative for linesearch (Exit mode 8)
Current function value: 4900.000012746761
Iterations: 7
Function evaluations: 21
Gradient evaluations: 3
status 8
message Positive directional derivative for linesearch
Where is my fault? In this case it ends with mode 8 because the example is very small. With bigger data the algorithm ends with mode 0. But I think it should end with a hint that a constraint couldn't be satisfied.
It doesn't make a difference whether proberty1Max is set to 4 or to 1, but in the case where it is 1, there cannot be a valid solution.
PS: I changed a lot in this question... Now the code is executable.
EDIT:
1. Okay, I learned that an inequality constraint is accepted if the output is positive (> 0). In the past I thought < 0 would also be accepted. Because of this, the constraint function is now a little bit shorter.
2. About the constraint: in my real solution I add some constraints using a loop. In that case it is convenient to feed the function an index from the loop, and inside the function this index is used to choose an element of an array. In my example here, "val" decides whether the constraint is for proberty1 or proberty2. What the constraint means is how much of a property is in the whole mix: I calculate the property multiplied by percentOfInput, and "prop" is the sum of this over all objects.
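For reference, this is the loop pattern in question; the val=val default argument binds the loop value at definition time (a bare lambda x: constraint(x, val) would capture val by reference, so every constraint would use its final value):

cons = [{'type': 'eq', 'fun': PercentSum}]
for val in (1, 2):
    # the default argument freezes the current val for this lambda
    cons.append({'type': 'ineq', 'fun': lambda x, val=val: constraint(x, val)})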
I think there might be a connection to the issue tux007 mentioned in the comments. link to the issue
I think the optimizer doesn't work correctly if the initial guess is not a valid solution.
Linear programming is not good for overdetermined equations. My problem doesn't have a unique solution; it's an approximation.
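A quick way to see that, using the functions defined above, is to evaluate the constraints at the initial guess (a sketch):

for con in cons:
    print(con['type'], con['fun'](initialGuess))
# the 'eq' constraint evaluates to 0 (the percentages sum to 100), but with
# proberty1Max = 1 the first 'ineq' constraint is already strongly negative,
# so the starting point is infeasible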
As mentioned in the comment I think this is the problem:
Misleading output from....
If you have a look at the latest changes, the constraint is not satisfied, but the algorithm says: "Positive directional derivative for linesearch".
I am doing word-level language modelling with a vanilla RNN. I am able to train the model, but for some weird reason I am not able to get any samples/predictions from it; here is the relevant part of the code:
train_set_x, train_set_y, voc = load_data(dataset, vocab, vocab_enc)  # just load all data as shared variables

index = T.lscalar('index')
x = T.fmatrix('x')
y = T.ivector('y')
n_x = len(vocab)
n_h = 100
n_y = len(vocab)

rnn = Rnn(input=x, input_dim=n_x, hidden_dim=n_h, output_dim=n_y)
cost = rnn.negative_log_likelihood(y)
updates = get_optimizer(optimizer, cost, rnn.params, learning_rate)

train_model = theano.function(
    inputs=[index],
    outputs=cost,
    givens={
        x: train_set_x[index],
        y: train_set_y[index]
    },
    updates=updates
)

predict_model = theano.function(
    inputs=[index],
    outputs=rnn.y,
    givens={
        x: voc[index]
    }
)

sampling_freq = 2
sample_length = 10
n_train_examples = train_set_x.get_value(borrow=True).shape[0]

train_cost = 0.
for i in xrange(n_train_examples):
    train_cost += train_model(i)
    train_cost /= n_train_examples

    if i % sampling_freq == 0:
        # sample from the model
        seed = randint(0, len(vocab) - 1)
        idxes = []
        for j in xrange(sample_length):
            p = predict_model(seed)
            seed = p
            idxes.append(p)
        # sample = ''.join(ix_to_words[ix] for ix in idxes)
        # print(sample)
I get the error: "TypeError: ('Bad input argument to theano function with name "train.py:94" at index 0(0-based)', 'Wrong number of dimensions: expected 0, got 1 with shape (1,).')"
Now this corresponds to the following line (in the predict_model):
givens={ x: voc[index] }
Even after spending hours, I am not able to comprehend how there could be a dimension mismatch when:
train_set_x has shape: (42, 4, 109)
voc has shape: (109, 1, 109)
And when I do train_set_x[index], I get (4, 109), which the 'x' tensor of type fmatrix can hold (this is what happens in train_model). But when I do voc[index], I get (1, 109), which is also a matrix, yet 'x' cannot hold it. Why?
Any help will be much appreciated.
Thanks !
The error message refers to the definition of the whole Theano function named predict_model, not the specific line where the substitution with givens occurs.
The issue seems to be that predict_model gets called with an argument that is a vector of length 1 instead of a scalar. The initial seed sampled from randint is actually a scalar, but I would guess that the output p of predict_model(seed) is a vector and not a scalar.
In that case, you could either return rnn.y[0] in predict_model, or replace seed = p with seed = p[0] in the loop over j.
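For instance, the sampling loop could become (a sketch, assuming p comes back as a length-1 vector of indices):

idxes = []
for j in xrange(sample_length):
    p = predict_model(seed)
    seed = p[0]        # unwrap the length-1 output back to a scalar index
    idxes.append(p[0])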