Error: The tuning parameter grid should have columns

For training the GBM model I use a grid I defined with the tuning parameters. However, R constantly tells me that the parameters are not defined, even though I have defined them.
Does anyone know how to fix this? Any help is much appreciated!

Related

How to calculate impact of quantitative x variables on a y variable

I have been trying to share a contribution analysis with management using quantitative variables. However, I am struggling to reconcile my y% increase with my x's. I have tried linear regression, but I don't think that will help immensely here. Please help...
Here is the data and below that is the template I need to submit

RuntimeError: Type 'Tuple' when using Deep Markov Model

I am trying to use the Deep Markov Model on my dataset. However, when I use it I get the following runtime error:
My code closely follows:
https://github.com/pyro-ppl/pyro/blob/dev/examples/dmm/dmm.py
I don't even understand what this error means. Has anyone seen this error before? Insights would be appreciated. I think the error occurs in the guide function (line # 225), but I don't know what is triggering it.
The error says that the types of the last two elements in your Tuple are int and float, while everything must be a Tensor.
Probably the simplest fix is to make the int and the float (mini_batch_seq_lengths and annealing_factor, I believe) Tensors: change the model to accept Tensors, and convert to/from scalars as needed.
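A minimal sketch of that conversion (the variable values here are made up for illustration; only the two names come from the DMM example):

```python
import torch

# Plain Python scalars like these are what trip the Tensor type check.
mini_batch_seq_lengths = [5, 3, 7]   # ints
annealing_factor = 0.1               # float

# Wrap them as tensors before they enter the traced code path.
seq_lengths_t = torch.tensor(mini_batch_seq_lengths, dtype=torch.long)
annealing_t = torch.tensor(annealing_factor)

# Convert back to a Python scalar wherever a plain number is needed.
annealing_scalar = annealing_t.item()
```

The reverse conversion with `.item()` is lossless enough for an annealing weight, but note the default dtype is float32.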

Finding the overall contribution of each original descriptor in a PLS model

New to scikit-learn; I am using v0.20.2. I am developing PLS regression models, and I would like to know how important each of the original predictors/descriptors is in predicting the response. The different matrices returned by scikit-learn for the learned PLS model (x_loadings_, x_weights_, etc.) give descriptor-related values for each PLS component, but I am looking for a way to calculate/visualize the overall importance/contribution of each feature in the model. Can someone help me out here?
Also, which of the matrices shows the coefficient assigned to each PLS component in the final linear model?
Thanks,
Yannick
The coef_ attribute of the fitted model gives the contribution of each descriptor to the response variable.

How to stop training some specific weights in TensorFlow

I'm just beginning to learn TensorFlow and I have some problems with it. In the training loop I want to ignore the small weights and stop training them, so I have assigned these small weights to zero. I searched the TF API and found that tf.Variable(weight, trainable=False) can stop a weight from being trained, and I want to apply it when the value of a weight is zero. I tried to use .eval() to get the value, but got a ValueError("Cannot evaluate tensor using eval(): No default "... I have no idea how to get the value of a variable inside the training loop. Another way would be to modify tf.train.GradientDescentOptimizer(), but I don't know how to do that. Has anyone implemented this, or can you suggest another method? Thanks in advance!
Are you looking to apply regularization to the weights?
There is an apply_regularization method in the API that you can use to accomplish that.
See: How to exactly add L1 regularisation to tensorflow error function
I don't know of a use-case for stopping the training of some variables; it's probably not what you should do.
Anyway, calling tf.Variable() (if I understood you correctly) is not going to help, because it is called just once, when the graph is defined. The first argument is initial_value: as the name suggests, it is assigned only during initialization.
Instead, you can use tf.assign like this:
with tf.Session() as session:
    assign_op = var.assign(0)
    session.run(assign_op)
It will update the variable during the session, which is what you're asking for.
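If the goal really is to freeze the zeroed weights, the usual trick is to mask the gradient between computing and applying it, so frozen entries receive no update. Here is a framework-agnostic sketch of that idea in NumPy (function and variable names are illustrative, not from any TF API):

```python
import numpy as np

def masked_sgd_step(w, grad, lr=0.1):
    # Zero the gradient wherever the weight has already been set to 0,
    # so those entries are effectively frozen while the rest keep training.
    mask = (w != 0).astype(w.dtype)
    return w - lr * grad * mask

w = np.array([0.0, 0.5, -0.3])   # first weight was pruned to zero
g = np.ones_like(w)              # pretend gradient from one step
w_new = masked_sgd_step(w, g)
```

In TF 1.x the same masking can be done between optimizer.compute_gradients and optimizer.apply_gradients instead of using a plain minimize call.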

Does Theano support variable split?

In my Theano program, I want to split a tensor matrix into two parts, with each of them making a different contribution to the error function. Can anyone tell me whether automatic differentiation supports this?
For example, for a tensor matrix variable M, I want to split it into M1=M[:300,] and M2=M[300:,], then the cost function is defined as 0.5* M1 * w + 0.8*M2*w. Is it still possible to get the gradient with T.grad(cost,w)?
Or more specifically, I want to construct an Autoencoder with different features having different weights in contribution to the total cost.
Thanks to anyone who answers my question.
Theano supports this out of the box; you don't need to do anything special. If Theano didn't support an operation, it would raise an error, but you won't get one here unless there is a problem elsewhere in how you call it. Your pseudo-code should work as written.
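As a quick numerical sanity check of why this works (a NumPy stand-in with the split point scaled down from 300 to 6; the cost is summed to a scalar so the gradient is well defined, and the analytic gradient below is what T.grad(cost, w) would return for this cost):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(10, 4))
w = rng.normal(size=4)

M1, M2 = M[:6], M[6:]   # the question's M[:300,] / M[300:,]

def cost(w):
    return 0.5 * (M1 @ w).sum() + 0.8 * (M2 @ w).sum()

# Analytic gradient of the scalar cost w.r.t. w:
grad = 0.5 * M1.sum(axis=0) + 0.8 * M2.sum(axis=0)

# Finite-difference check that the slicing doesn't break the gradient.
eps = 1e-6
num = np.array([(cost(w + eps * e) - cost(w)) / eps for e in np.eye(4)])
```

Each slice simply routes its own gradient contribution back to w, which is exactly what Theano's autodiff does through indexing ops.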
