RuntimeError: Type 'Tuple' when using Deep Markov Model - python-3.x

I am trying to use a Deep Markov Model on my dataset. However, when I use it I get the following runtime error:
[error traceback posted as an image]
My code closely follows:
https://github.com/pyro-ppl/pyro/blob/dev/examples/dmm/dmm.py
I don't even understand what this error means. Has anyone seen this error before? Insights would be appreciated. I think the error occurs in the guide function (line #225), but I don't know what is triggering it.

The error says that the types of the last two elements in your tuple are int and float, while everything must be a Tensor.
Probably the simplest way to fix this is to make the int and the float (mini_batch_seq_lengths and annealing_factor, I believe) Tensors: change the model to accept Tensors, and convert to/from scalars as needed.
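A minimal sketch of that conversion, assuming PyTorch (the variable names mirror the linked dmm.py; the values and call sites are illustrative):
import torch

seq_lengths = [12, 9, 15]    # hypothetical Python ints from batching
annealing_factor = 0.1       # hypothetical Python float

# wrap the scalars so every element passed through the tuple is a Tensor
mini_batch_seq_lengths = torch.tensor(seq_lengths, dtype=torch.long)
annealing_factor = torch.tensor(annealing_factor, dtype=torch.float32)

# pass the tensor versions into the model/guide; recover plain scalars
# with .item() wherever a Python number is required
print(mini_batch_seq_lengths.tolist(), annealing_factor.item())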

Related

Shape mismatch in tuple component 16. Expected [1,?,?,3], got [1,1,242,640,3]

I'm trying to train an object detection model from the TensorFlow 1 detection model zoo using Python 3.7, and when I execute it, it throws all these errors. I'm still learning, so I have no idea how to solve this problem. Has anyone had the same issue with mismatched dimensions?
I have checked other questions on this site, like looking for zero height or width in my CSV files, but that doesn't seem to be the problem.
tensorflow.python.framework.errors_impl.InvalidArgumentError: Shape mismatch in tuple component 16. Expected [1,?,?,3], got [1,1,242,640,3]
Finally I checked my dataset and my CSV file. The problem was that almost all of the images had the same shape, but three of them did not. I replaced those images and regenerated the CSV files and the TFRecord files, and now it runs. The shape mismatch message threw me off, but that was the problem.
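If you hit the same error, a quick way to flag the odd images is a sketch like this (assuming Pillow and an images/ directory, both illustrative):
import os
from collections import Counter
from PIL import Image

sizes = {}
for name in os.listdir("images"):
    if name.lower().endswith((".jpg", ".jpeg", ".png")):
        with Image.open(os.path.join("images", name)) as im:
            sizes[name] = im.size  # (width, height)

# assume the most common size is the intended one; report the rest
expected, _ = Counter(sizes.values()).most_common(1)[0]
for name, size in sizes.items():
    if size != expected:
        print(f"{name}: {size} (expected {expected})")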

Keras save_weights/load_weights round trip failing. How to save and load weights?

I am using the class API to subclass a model based on keras.models.Model.
Is there some trick to getting save_weights working?
I am seeing errors like
ValueError: Layer #0 (named "dense_S") expects 0 weight(s), but the saved weights have 2 element(s).
I have tried both by_name=True and by_name=False.
EDIT: it seems that calling predict once, with ANY data, is needed to build the layers for some reason. It would be interesting to hear a proper explanation from anyone who knows more.
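A minimal sketch of the round trip, assuming TF 2.x Keras; the layer name dense_S comes from the error message, everything else is illustrative. A subclassed model only knows its weight shapes after call() has run, so a forward pass on dummy data builds the layers on both sides:
import numpy as np
import tensorflow as tf

class MyModel(tf.keras.models.Model):
    def __init__(self):
        super().__init__()
        self.dense_s = tf.keras.layers.Dense(8, name="dense_S")

    def call(self, inputs):
        return self.dense_s(inputs)

model = MyModel()
model(np.zeros((1, 4), dtype="float32"))  # builds the layers
model.save_weights("my_weights")          # TF checkpoint format

fresh = MyModel()
fresh(np.zeros((1, 4), dtype="float32"))  # build before loading
fresh.load_weights("my_weights")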

Keras Bidirectional "RuntimeError: You must compile your model before using it." after compilation completed

I'm trying to create a small Bidirectional recurrent NN. The model itself compiles without error, but when trying to fit the model I get the error stating I should compile first. Please see the code snippet below:
# fourth recurrent model, bidirectional
bidirectional_recurrent = Sequential()
bidirectional_recurrent.add(Bidirectional(GRU(32, input_shape=(int(lookback/steps), data_scaled.shape[-1]))))
bidirectional_recurrent.add(Dense(1))
bidirectional_recurrent.compile(optimizer='rmsprop', loss='mae')
bidirectional_recurrent_history = bidirectional_recurrent.fit_generator(
    train_gen, steps_per_epoch=500, epochs=40,
    validation_data=val_gen, validation_steps=val_steps)
RuntimeError: You must compile your model before using it.
I've used the same setup to train unidirectional RNNs, which worked fine. Any tips to help solve the runtime error are appreciated (restarting the kernel did not help).
Maybe I did not instantiate Bidirectional correctly?
Please note: this question is different from the "Do I need to compile before X" type of questions.
Note 2: R examples of the same code can be found here.
Found it.
When using Bidirectional, it should be treated as a layer in its own right: moving the input_shape argument into Bidirectional() instead of the wrapped GRU() object solved the problem,
so
bidirectional_recurrent.add(Bidirectional(GRU(32, input_shape=(int(lookback/steps), data_scaled.shape[-1]))))
becomes
bidirectional_recurrent.add(Bidirectional(GRU(32), input_shape=(int(lookback/steps), data_scaled.shape[-1])))

How to stop training some specific weights in TensorFlow

I'm just beginning to learn TensorFlow and I have some problems with it. In the training loop I want to ignore the small weights and stop training them, so I've assigned these small weights to zero. I searched the TF API and found that tf.Variable(weight, trainable=False) can stop a weight from being trained, and I want to use it whenever a weight's value is zero. I tried .eval(), but it raised ValueError("Cannot evaluate tensor using eval(): No default ..."), and I have no idea how to get the value of a variable inside the training loop. Another way might be to modify tf.train.GradientDescentOptimizer(), but I don't know how to do that. Has anyone implemented this, or can you suggest another method? Thanks in advance!
Are you looking to apply regularization to the weights?
There is an apply_regularization method in the API that you can use to accomplish that.
See: How to exactly add L1 regularisation to tensorflow error function
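A hedged sketch of that approach, assuming TF 1.x with tf.contrib available (the variable, loss, and scale are illustrative):
import tensorflow as tf

w = tf.Variable(tf.truncated_normal([10, 1]), name="w")
data_loss = tf.reduce_sum(tf.square(tf.matmul(tf.ones([5, 10]), w)))

# the L1 penalty pushes small weights toward exactly zero
l1 = tf.contrib.layers.l1_regularizer(scale=0.01)
total_loss = data_loss + tf.contrib.layers.apply_regularization(l1, [w])

train_op = tf.train.GradientDescentOptimizer(0.1).minimize(total_loss)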
I don't know of any use case for stopping training of some variables; it's probably not what you should do.
Anyway, calling tf.Variable() (if I understood you correctly) is not going to help, because it's invoked just once, when the graph is defined. Its first argument is initial_value: as the name suggests, it's assigned only during initialization.
Instead, you can use tf.assign like this:
with tf.Session() as session:
    assign_op = var.assign(0)
    session.run(assign_op)
It will update the variable during the session, which is what you're asking for.
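If you really do want zeroed weights to stop updating, as the question describes, one option is to mask the gradients between compute_gradients and apply_gradients. A sketch, assuming TF 1.x and an illustrative variable:
import tensorflow as tf

w = tf.Variable([[0.0, 0.5], [0.3, 0.0]], name="w")
loss = tf.reduce_sum(tf.square(w - 1.0))

optimizer = tf.train.GradientDescentOptimizer(0.1)
grads_and_vars = optimizer.compute_gradients(loss, var_list=[w])

# zero the gradient wherever the weight is (near) zero, freezing it
masked = [(g * tf.cast(tf.abs(v) > 1e-8, g.dtype), v)
          for g, v in grads_and_vars]
train_op = optimizer.apply_gradients(masked)

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    session.run(train_op)
    print(session.run(w))  # the zero entries stay zero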

Does Theano support variable split?

In my Theano program, I want to split a tensor matrix into two parts, with each making a different contribution to the error function. Can anyone tell me whether automatic differentiation supports this?
For example, for a tensor matrix variable M, I want to split it into M1 = M[:300,] and M2 = M[300:,], and then define the cost function as 0.5*M1*w + 0.8*M2*w. Is it still possible to get the gradient with T.grad(cost, w)?
More specifically, I want to construct an autoencoder where different features carry different weights in the total cost.
Thanks to anyone who answers my question.
Theano supports this out of the box; you have nothing special to do. If Theano didn't support something in the graph, it would raise an error, but you won't get one for this as long as you call it correctly. Your pseudo-code should work.
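A minimal sketch of the question's setup, assuming Theano is installed (the sums make the cost a scalar so T.grad applies; shapes and coefficients follow the question):
import numpy as np
import theano
import theano.tensor as T

M = T.dmatrix("M")
w = theano.shared(np.ones((4,)), name="w")

M1, M2 = M[:300], M[300:]  # slicing is differentiable
cost = 0.5 * T.sum(T.dot(M1, w)) + 0.8 * T.sum(T.dot(M2, w))

grad_w = T.grad(cost, w)   # gradient flows through both slices
f = theano.function([M], grad_w)
print(f(np.random.randn(500, 4)))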
