This question already has answers here:
numpy: what is the logic of the argmin() and argmax() functions?
(5 answers)
Closed 2 years ago.
I am a beginner in Deep Learning and, while working through a practical assignment, came across the Keras documentation on keras.backend.
I went through the explanation a number of times; however, I cannot quite understand the difference between the max and argmax functions.
argmax returns the index of the maximum value in an array, while max returns the maximum value itself. Please check the example given below:
import tensorflow as tf
x = tf.constant([1,10,2,4,15])
print(tf.keras.backend.argmax(x, axis=-1).numpy()) # output: 4 (the index of the max value, 15)
print(tf.keras.backend.max(x, axis=-1).numpy()) # output: 15
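For completeness, here is a small sketch (my own example, not from the docs) showing how the axis argument behaves on a 2-D tensor; axis=-1 reduces over the last dimension, i.e. per row:

import tensorflow as tf

m = tf.constant([[1, 10, 2],
                 [4, 15, 3]])
print(tf.keras.backend.argmax(m, axis=-1).numpy()) # [1 1] (per-row index of the max)
print(tf.keras.backend.max(m, axis=-1).numpy())    # [10 15] (per-row max value)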
This question already has answers here:
sklearn: Found arrays with inconsistent numbers of samples when calling LinearRegression.fit()
(10 answers)
Closed 2 years ago.
Doing supervised learning on numeric data with a KNN classifier, I get an error when I call metrics.confusion_matrix(y_test, y_pred). The resulting error message is: Found input variables with inconsistent numbers of samples: [40000, 2000]. Thanks in advance for shedding some light.
The confusion matrix compares the values in both arrays and essentially tells you how many samples were labelled the same and how many differed between the two. For this, the number of elements in both arrays must be equal, which is exactly what the error says.
So make sure that both of them have the same number of elements.
Perhaps if you include the earlier code that produces y_test and y_pred, it would be easier to see why the sizes differ.
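For illustration, here is a minimal sketch of the shapes that must line up (the toy data stands in for your real features and labels):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics

# Toy data standing in for your real features/labels
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

# y_pred must be predicted from X_test, so len(y_pred) == len(y_test)
y_pred = knn.predict(X_test)
print(metrics.confusion_matrix(y_test, y_pred)) # both arrays have 200 entries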
This question already has answers here:
Tensorflow "map operation" for tensor?
(3 answers)
Closed 4 years ago.
I have a 4D tensor. When I run it, all values except for about 1/100 of them are quite close to what I need, so I want to apply a map function that sets the bad numbers to some fixed value. The question is: how can I apply a map function in terms of tensors, i.e. BEFORE the Session()'s run function? (I need to compute a loss function and backpropagate the result, and these operations are built before sess.run.)
You can't use Python's map function, because TensorFlow works with symbolic tensors, which are only filled with values the moment you call session.run(). However, TensorFlow does provide a map function for tensors, called tf.map_fn; see https://www.tensorflow.org/api_docs/python/tf/map_fn
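For example, here is a minimal sketch (the threshold and replacement value are made up for illustration) that fixes the bad values as a graph operation, so it can sit before the loss computation:

import tensorflow as tf

x = tf.constant([[1.0, 250.0], [3.0, 4.0]])

# tf.map_fn maps over the first dimension; here each row has its "bad"
# values (anything above an assumed threshold of 100) replaced with a
# fixed value of 0.
fix_row = lambda row: tf.where(row > 100.0, tf.fill(tf.shape(row), 0.0), row)
fixed = tf.map_fn(fix_row, x)
print(fixed.numpy()) # [[1. 0.] [3. 4.]]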
This question already has answers here:
PySpark & MLLib: Random Forest Feature Importances
(5 answers)
Closed 5 years ago.
How do I get the feature importance of every variable in a GBT Classifier model in PySpark?
From Spark 2.0+ the model has the attribute:
model.featureImportances
This gives a sparse vector with one feature-importance score per column/attribute.
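A minimal sketch (the training DataFrame train_df, with "features" and "label" columns, is assumed to exist already):

from pyspark.ml.classification import GBTClassifier

gbt = GBTClassifier(featuresCol="features", labelCol="label", maxIter=10)
model = gbt.fit(train_df)

# SparseVector with one importance score per feature in the vector column
print(model.featureImportances)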
This question already has an answer here:
Classifying sequences of different lengths [duplicate]
(1 answer)
Closed 5 years ago.
Suppose the input to the model is a series of vectors, each of equal length; however, the number of vectors in each input can vary. I want to build an LSTM model in Keras, but if I were to write
inputs = keras.layers.Input(shape=dims)
img_out = keras.layers.LSTM(16)(inputs)
Then what would I put for "dims"? Thanks so much.
You can fix an upper bound for dims (the number of timesteps). When an input is shorter than that bound, pad the remaining steps with zero vectors.
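A minimal sketch of that idea (the upper bound of 20 timesteps and vector length of 8 are assumptions for illustration); a Masking layer makes the LSTM skip the zero padding:

import numpy as np
from tensorflow import keras

max_timesteps, vec_len = 20, 8

inputs = keras.layers.Input(shape=(max_timesteps, vec_len))
masked = keras.layers.Masking(mask_value=0.0)(inputs) # skip zero-padded steps
lstm_out = keras.layers.LSTM(16)(masked)
model = keras.Model(inputs, lstm_out)

# Pad a batch of variable-length sequences up to the fixed bound
seqs = [np.ones((5, vec_len)), np.ones((12, vec_len))]
batch = keras.preprocessing.sequence.pad_sequences(
    seqs, maxlen=max_timesteps, dtype="float32", padding="post")
print(model.predict(batch).shape) # (2, 16)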
This question already has an answer here:
Spark ALS recommendation system have value prediction greater than 1
(1 answer)
Closed 3 years ago.
I have been experimenting with Spark MLlib's ALS (trainImplicit) for a while now, and would like to understand:
1. Why am I getting rating values greater than 1 in the predictions?
2. Is there any need to normalize the user-product input?
sample result:
[Rating(user=316017, product=114019, rating=3.1923),
Rating(user=316017, product=41930, rating=2.0146997092620897)
]
In the documentation, it is mentioned that the predicted rating values will be somewhere around 0-1.
I know that the rating values can still be used in recommendations, but it would be great to know the reason.
The cost function in ALS trainImplicit() doesn't impose any constraint on the predicted rating values; it only penalizes the magnitude of the difference from the 0/1 preference values. So you may also find some negative values there. That is why the documentation says the predicted values are around [0, 1], not necessarily inside it.
There is an option to force non-negative factorization, so that you never get a negative value in the predicted ratings or factor matrices, but that seemed to hurt performance in our case.
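If you want to try it, a minimal sketch (ratings_rdd of Rating(user, product, value) records and user_product_rdd of (user, product) pairs are assumed to exist already):

from pyspark.mllib.recommendation import ALS

# nonnegative=True constrains the factor matrices, so predictions can
# never be negative (at some cost to accuracy, as noted above)
model = ALS.trainImplicit(ratings_rdd, rank=10, iterations=10,
                          alpha=0.01, nonnegative=True)
predictions = model.predictAll(user_product_rdd)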