How to get a value out of a tensor? - PyTorch

My Python function is returning a structure like tensor([102], device='cuda:0'). How do I obtain the first value from it?

Found this on the PyTorch discussion forums:
a.data.cpu().numpy()[0]
where a is the tensor: it moves the tensor to the CPU, converts it to a NumPy array, and takes the first element.
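A minimal sketch of both routes, assuming a single-element tensor like the one in the question (falling back to CPU when no GPU is present):

import torch

device = "cuda:0" if torch.cuda.is_available() else "cpu"
a = torch.tensor([102], device=device)   # stand-in for the tensor in the question

first = a.data.cpu().numpy()[0]   # the forum answer: move to CPU, convert, index
first = a.item()                  # equivalent shortcut for a one-element tensor
print(first)                      # 102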

Related

PyTorch method for returning element with highest count in a 1D tensor?

Is there a built-in PyTorch method that takes a 1D tensor, and returns the element in the tensor with the highest count?
For example, if we input torch.tensor([2,2,2,3,4,5]), the method should return 2 as it occurs the most. In case of a tie in frequency, the element with the lower value should be returned; inputting torch.tensor([1,1,2,2,4,5]) should return 1.
Just to be clear, I only wish to know if there's an existing built-in PyTorch method that does exactly this. If there's no such method, please refrain from posting the solution, as I'd like to try solving it on my own.
Yes, torch.mode() is a built-in function (see the torch.mode documentation) that handles both of your conditions.
torch.mode(alpha, 0)   # alpha being the name of the tensor
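A quick sketch against the question's own examples; torch.mode returns a (values, indices) pair, and the tie behavior shown is what the answer above claims:

import torch

values, indices = torch.mode(torch.tensor([2, 2, 2, 3, 4, 5]))
print(values.item())   # 2, the most frequent element

values, indices = torch.mode(torch.tensor([1, 1, 2, 2, 4, 5]))
print(values.item())   # 1: per the answer, ties resolve to the lower value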

Computing values of feature importances

Where does scikit-learn compute the values of sklearn.ensemble.RandomForestClassifier.feature_importances_?
The corresponding code should be in https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/_forest.py, but I do not find sklearn.ensemble.RandomForestClassifier.feature_importances_ there; there is, however, a class called RandomForestClassifier.
The relevant section of the source code is here: https://github.com/scikit-learn/scikit-learn/blob/b194674c4/sklearn/ensemble/_forest.py#L415
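For intuition, the linked property averages the impurity-based importances of the individual trees and normalizes the result; a minimal sketch of that equivalence on a made-up dataset:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Each tree's importances already sum to 1, so their mean matches the property.
manual = np.mean([tree.feature_importances_ for tree in clf.estimators_], axis=0)
print(np.allclose(manual, clf.feature_importances_))  # True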

what is the equivalent of theano.tensor.clip in pytorch?

I want to clip my tensor values (not gradients) to some range. Is there a function in PyTorch like theano.tensor.clip() in Theano?
The function you are searching for is called torch.clamp. You can find it in the PyTorch documentation.
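A minimal sketch of the equivalence:

import torch

x = torch.tensor([-2.0, 0.5, 3.0])
# Equivalent of theano.tensor.clip(x, 0, 1): restrict values to [0, 1].
print(torch.clamp(x, min=0.0, max=1.0))   # tensor([0.0000, 0.5000, 1.0000])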

How to stop training some specific weights in TensorFlow

I'm just beginning to learn TensorFlow and I have a problem: in the training loop I want to ignore the small weights and stop training them. I've assigned these small weights to zero. I searched the tf API and found that tf.Variable(weight, trainable=False) can stop a weight from training, and I want to use it whenever a weight's value equals zero. I tried to read the value with .eval(), but got ValueError("Cannot evaluate tensor using eval(): No default session is registered."). I have no idea how to get the value of a variable inside the training loop. Another way would be to modify tf.train.GradientDescentOptimizer(), but I don't know how to do that. Has anyone implemented this, or can you suggest another method? Thanks in advance!
Are you looking to apply regularization to the weights?
There is an apply_regularization method in the API that you can use to accomplish that.
See: How to exactly add L1 regularisation to tensorflow error function
I don't know of any use case for stopping the training of some variables; it's probably not what you should do.
Anyway, calling tf.Variable() (if I understood you correctly) is not going to help, because it runs just once, when the graph is defined. Its first argument is initial_value: as the name suggests, it is used only during initialization.
Instead, you can use tf.assign like this:
with tf.Session() as session:
    assign_op = var.assign(0)   # op that writes 0 into the variable
    session.run(assign_op)      # executes the assignment in this session
It will update the variable during the session, which is what you're asking for.
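If the goal really is to freeze the zeroed weights, one sketch of a gradient-masking approach (TF 1.x style, matching the question; the weight variable and loss here are made-up stand-ins) is to zero their gradients between compute_gradients and apply_gradients:

import tensorflow as tf

w = tf.Variable([0.5, 0.0, -1.2, 0.0], name="w")   # hypothetical weights
loss = tf.reduce_sum(tf.square(w - 1.0))           # hypothetical loss

optimizer = tf.train.GradientDescentOptimizer(0.1)
grads_and_vars = optimizer.compute_gradients(loss, [w])

# Zero the gradient wherever the weight is exactly zero, so those
# entries never move; the other entries train normally.
mask = tf.cast(tf.not_equal(w, 0.0), tf.float32)
train_op = optimizer.apply_gradients([(g * mask, v) for g, v in grads_and_vars])

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    for _ in range(10):
        session.run(train_op)
    print(session.run(w))   # the zero entries are unchanged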

Nested mixed-model in Python?

Does anyone know how to do a nested random-effects model in Python? Using statsmodels MixedLM, it gives me a singular matrix error.
The issue typically arises when one of your columns is zero throughout, which makes the design matrix singular.
So check the variables you're using in the mixedlm model and find out which ones are all zero.
Hope it helps!
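A minimal sketch of that check, assuming the model variables live in a pandas DataFrame df (the column names here are made up):

import pandas as pd

df = pd.DataFrame({"x1": [0, 0, 0], "x2": [1.0, 2.0, 3.0]})   # toy stand-in
all_zero = df.columns[(df == 0).all()]   # columns that are zero throughout
print(list(all_zero))                    # ['x1']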
