Decimal to binary tensor conversion - python-3.x

I'm trying to convert a one-hot tensor to an 8-bit binary tensor. I'm using tf.keras.backend.argmax to find the decimal index, which I'd like to convert to a binary representation.
When using a plain constant tensor, the following code works perfectly:
import tensorflow as tf
from keras import backend as K

sess = tf.Session()
with sess.as_default():
    a_dec = tf.constant([0, 1, 2], dtype=tf.int32)
    a_bin = tf.mod(tf.bitwise.right_shift(tf.expand_dims(a_dec, 1), tf.range(8)), 2)
    out = sess.run(a_bin)
    print(out)
output:
[[0 0 0 0 0 0 0 0]
[1 0 0 0 0 0 0 0]
[0 1 0 0 0 0 0 0]]
However, when I try something like the following:
sess = tf.Session()
with sess.as_default():
    a_dec = K.constant([0, 1, 2], dtype=tf.int32)
    index = K.constant([K.argmax(a_dec)], dtype=tf.int32)
    a_bin = tf.mod(tf.bitwise.right_shift(tf.expand_dims(index, 1), tf.range(8)), 2)
    out = sess.run(a_bin)
    print(out)
I get the following error:
TypeError: Expected float32, got list containing Tensors of type '_Message' instead.
So, how do I make an argmax-derived tensor work just like a constant tensor?
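The error comes from passing a tensor into K.constant, which expects concrete values rather than tensors. A minimal sketch of one way to fix it (my suggestion, not from the original post; assuming TF 1.x as in the question): cast the tensor returned by K.argmax instead of wrapping it in K.constant:

import tensorflow as tf
from keras import backend as K

sess = tf.Session()
with sess.as_default():
    a_dec = K.constant([0, 1, 2], dtype=tf.int32)
    # K.argmax already returns a tensor, so cast it rather than
    # wrapping it in K.constant (which expects concrete values)
    index = K.cast(K.argmax(a_dec), 'int32')
    a_bin = tf.mod(tf.bitwise.right_shift(tf.expand_dims(index, 0), tf.range(8)), 2)
    out = sess.run(a_bin)
    print(out)  # [0 1 0 0 0 0 0 0], i.e. 2 in binary, least significant bit first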

Change Values in One Array, Based On Value from Column of Second Array

I have a one-dimensional array called Y_train that contains a series of 1's and 0's. I have another array called sample_weight, an array of all 1's with the same shape as Y_train, defined as:
sample_weight = np.ones(Y_train.shape, dtype=int)
I'm trying to change the values in sample_weight to 2 wherever the corresponding value in Y_train == 0. So initially, side by side, it looks like:
Y_train sample_weight
0 1
0 1
1 1
1 1
0 1
1 1
and I'd like it to look like this after the transformation:
Y_train sample_weight
0 2
0 2
1 1
1 1
0 2
1 1
What I tried was a for loop (shown below), but none of the 1's are changing to 2's in sample_weight. I'd like to use the np.where() function if possible, but it's not crucial; I'd just like to avoid a for loop:
sample_weight = np.ones(Y_train.shape, dtype=int)
for num, i in enumerate(Y_train):
    if i == 0:
        sample_weight[num] == 2  # note: == compares, it does not assign
I tried using the solution shown here, but with no success on the second array. Any ideas? Thanks!
import numpy as np
Y_train = np.array([0,0,1,1,0,1])
sample_weight = np.where(Y_train == 0, 2, Y_train)
>> print(sample_weight)
[2 2 1 1 2 1]
np.where basically works just like Excel's "IF":
np.where(condition, then, else)
Works for transposed arrays, too:
Y_train = np.array([[0,0,1,1,0,1]]).T
sample_weight = np.where(Y_train == 0, 2, Y_train)
>> print(sample_weight)
[[2]
[2]
[1]
[1]
[2]
[1]]
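One subtlety worth noting (my addition, not part of the answer above): np.where(Y_train == 0, 2, Y_train) only yields the intended weights because every nonzero label here happens to equal 1. A sketch of a form that works for arbitrary label values, keeping the existing weight as the fallback:

import numpy as np

Y_train = np.array([0, 0, 1, 1, 0, 1])
sample_weight = np.ones(Y_train.shape, dtype=int)
# take 2 where the label is 0, otherwise keep the existing weight
sample_weight = np.where(Y_train == 0, 2, sample_weight)
print(sample_weight)  # [2 2 1 1 2 1]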

EDITED: Not learning the data correctly

I'm studying deep learning.
I'm making a figure classifier: circle, rectangle, triangle, pentagon, star, one-hot encoded as label2idx = dict(rectangle=0, circle=1, pentagon=2, star=3, triangle=4).
But the results per epoch stay much the same, and the model does not learn the images.
I built the network using the ReLU function as the activation function, an affine transform for each layer, softmax for the last layer, and Adam to optimize the gradients.
I have 234 RGB images in total to learn from, created with the Windows Paint 2D tool; each is 128 * 128, but the figure does not fill the whole canvas.
(Example image omitted.)
The training results are below; the left [] is the prediction and the right [] is the answer label (I picked random images to print the prediction and the answer label):
epoch: 0.49572649572649574
[ 0.3149641 -0.01454905 -0.23183 -0.2493432 0.11655246] [0 0 0 0 1]
epoch: 0.6837606837606838
[ 1.67341673 0.27887525 -1.09800398 -1.12649948 -0.39533065] [1 0 0 0 0]
epoch: 0.7094017094017094
[ 0.93106499 1.49599772 -0.98549052 -1.20471573 -0.24997779] [0 1 0 0 0]
epoch: 0.7905982905982906
[ 0.48447043 -0.05460748 -0.23526179 -0.22869489 0.05468969] [1 0 0 0 0]
...
epoch: 0.9230769230769231
[14.13835867 0.32432293 -5.01623202 -6.62469261 -3.21594355] [1 0 0 0 0]
epoch: 0.9529914529914529
[ 1.61248239 -0.47768294 -0.41580036 -0.71899219 -0.0901478 ] [1 0 0 0 0]
epoch: 0.9572649572649573
[ 5.93142154 -1.16719891 -1.3656573 -2.19785097 -1.31258801] [1 0 0 0 0]
epoch: 0.9700854700854701
[ 7.42198941 -0.85870225 -2.12027192 -2.81081263 -1.83810873] [1 0 0 0 0]
I think that the more it learns, the prediction should look like [ 0.00143 0.09357 0.352 0.3 0.253 ] [ 1 0 0 0 0 ], i.e. the value at the answer index should stand out, but it does not.
Yet the training accuracy sometimes goes to 1.0 (100%).
I'm loading and normalizing the images with the code below.
# data_list = glob('dataset\\training\\*\\*.jpg')
dataset['train_img'] = _load_img()

def _load_img():
    data = [np.array(Image.open(v)) for v in data_list]
    a = np.array(data)
    a = a.reshape(-1, img_size * 3)
    return a

# normalize
# note: the loop body always operates on 'train_img', regardless of v
for v in dataset:
    dataset['train_img'] = dataset['train_img'].astype(np.float32)
    dataset['train_img'] /= dataset['train_img'].max()
    dataset['train_img'] -= dataset['train_img'].mean(axis=1).reshape(len(dataset['train_img']), 1)
EDIT
I converted the images to grayscale with Image.open(v).convert('LA')
and checked my prediction values; for example:
[-3.98576886e-04 3.41216374e-05] [1 0]
[ 0.00698861 -0.01111879] [1 0]
[-0.42003415 0.42222863] [0 1]
It is still not learning the images. I removed 3 figures to test it, so I just have rectangles and triangles, 252 images in total (I drew more images).
The prediction values usually come out as near-opposites (e.g. 3.1323, -3.1323 or 3.1323, -3.1303), and I cannot figure out the reason.
It is not just slow numerical improvement, either: when I use SGD as the optimizer, the accuracy does not increase at all. It stays the same:
[ 0.02090227 -0.02085848] [1 0]
epoch: 0.5873015873015873
[ 0.03058879 -0.03086193] [0 1]
epoch: 0.5873015873015873
[ 0.04006064 -0.04004988] [1 0]
[ 0.04545139 -0.04547538] [1 0]
epoch: 0.5873015873015873
[ 0.05605123 -0.05595288] [0 1]
epoch: 0.5873015873015873
[ 0.06495255 -0.06500597] [1 0]
epoch: 0.5873015873015873
Yes, your model is performing pretty well. The problem is not related to normalization (it is not even a problem). The model's raw outputs fall outside [0, 1], which means the model is really confident.
The model will not try to push the raw outputs toward [1, 0, 0, 0, 0], because when it calculates the loss it first clips the values.
Hope this helps!
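To make this concrete, a small sketch (my addition): since the question's network ends in softmax, passing one of the logged raw vectors through softmax shows that essentially all of the probability mass already lands on class 0:

import numpy as np

def softmax(z):
    # subtract the max for numerical stability
    e = np.exp(z - np.max(z))
    return e / e.sum()

# a raw prediction vector from the training log above; its label is [1 0 0 0 0]
logits = np.array([14.13835867, 0.32432293, -5.01623202, -6.62469261, -3.21594355])
print(softmax(logits))
# ~[1.0e+00 1.0e-06 4.8e-09 9.6e-10 2.9e-08], i.e. class 0 gets nearly all the mass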

Size of objects in Numpy Array: Sympy Points example

Attached is some code that attempts to instantiate a NumPy array of SymPy Points from their coordinates.
from sympy.geometry import Point
import numpy as np

coordinates = np.array([2,0, 1,-1, 0,0, -2,3, -3,-2, 0.01,-0.01]).reshape(6,2)

v1 = np.empty((coordinates.shape[0],), dtype=np.dtype(Point))
for index, value in enumerate(coordinates):
    v1[index] = Point(value)
print(v1)
# [Point2D(2, 0) Point2D(1, -1) Point2D(0, 0) Point2D(-2, 3) Point2D(-3, -2) Point2D(1/100, -1/100)]

v2 = np.empty((coordinates.shape[0], 2), dtype=np.dtype(Point))
for index, value in enumerate(coordinates):
    v2[index] = Point(value)
print(v2)
# [[2 0]
#  [1 -1]
#  [0 0]
#  [-2 3]
#  [-3 -2]
#  [1/100 -1/100]]

v3 = np.empty((coordinates.shape[0], 1), dtype=np.dtype(Point))
for index, value in enumerate(coordinates):
    v3[index] = Point(value)
print(v3)
# ValueError: cannot copy sequence with size 2 to array axis with dimension 1
I would have expected a size of 1 for the second axis (v3) to be the correct way to do it, because a Point should be a single object, but it raises an error.
v1 is what I would have expected to get doing it the v3 way. Why does it have to be done the v1 way? Why is v3 wrong? Why is v2 not a vector of SymPy Points?
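A sketch of what is likely going on (my reading, based on NumPy object-array assignment behavior rather than anything SymPy-specific): a Point is iterable, so when the assignment target is a row of cells NumPy unpacks the coordinates into them, while a single cell stores the object whole:

import numpy as np
from sympy.geometry import Point

p = Point(2, 0)

a = np.empty(1, dtype=object)
a[0] = p   # single cell: the Point object is stored whole
print(a)   # [Point2D(2, 0)]

b = np.empty((1, 2), dtype=object)
b[0] = p   # row of two cells: the Point is iterated and unpacked
print(b)   # [[2 0]]

c = np.empty((1, 1), dtype=object)
# c[0] = p would try to fill a row of one cell with a length-2 sequence,
# which raises the ValueError seen above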

Ensemble model in H2O with fold_column argument

I am new to H2O in Python. I am trying to model my data with an ensemble model, following the example code from H2O's website (http://docs.h2o.ai/h2o/latest-stable/h2o-docs/data-science/stacked-ensembles.html).
I have applied GBM and RF as base models, and then tried to merge them into an ensemble model using stacking. In addition, in my training data I created one additional column named 'fold' to be used as fold_column = "fold".
I applied 10-fold CV and observed that I receive results from the first fold; however, the predictions from the other 9 folds are all empty. What am I missing here?
(Sample data omitted.)
My code:
from __future__ import print_function

import h2o
from h2o.estimators.random_forest import H2ORandomForestEstimator
from h2o.estimators.gbm import H2OGradientBoostingEstimator
from h2o.estimators.stackedensemble import H2OStackedEnsembleEstimator
from h2o.grid.grid_search import H2OGridSearch

h2o.init(port=23, nthreads=6)
train = h2o.H2OFrame(ens_df)
test = h2o.H2OFrame(test_ens_eq)
x = train.drop(['Date','EQUITY','fold'], axis=1).columns
y = 'EQUITY'
cat_cols = ['A','B','C','D']
train[cat_cols] = train[cat_cols].asfactor()
test[cat_cols] = test[cat_cols].asfactor()

my_gbm = H2OGradientBoostingEstimator(distribution="gaussian",
                                      ntrees=10,
                                      max_depth=3,
                                      min_rows=2,
                                      learn_rate=0.2,
                                      keep_cross_validation_predictions=True,
                                      seed=1)
my_gbm.train(x=x, y=y, training_frame=train, fold_column="fold")
Then I check the CV results with my_gbm.cross_validation_predictions() (output omitted).
Plus, when I try the ensemble on the test set, I get the warning below:
# Train a stacked ensemble using the GBM and RF above
ensemble = H2OStackedEnsembleEstimator(model_id="mlee_ensemble",
                                       base_models=[my_gbm, my_rf])
ensemble.train(x=x, y=y, training_frame=train)

# Eval ensemble performance on the test data
perf_stack_test = ensemble.model_performance(test)
pred = ensemble.predict(test)
pred

/mgmt/data/conda/envs/python3.6_4.4/lib/python3.6/site-packages/h2o/job.py:69: UserWarning: Test/Validation dataset is missing column 'fold': substituting in a column of NaN
  warnings.warn(w)
Am I missing something about fold_column?
Here is an example of how to use a custom fold column (created from a list). This is a modified version of the example Python code on the Stacked Ensemble page of the H2O User Guide.
from __future__ import print_function

import h2o
from h2o.estimators.random_forest import H2ORandomForestEstimator
from h2o.estimators.gbm import H2OGradientBoostingEstimator
from h2o.estimators.stackedensemble import H2OStackedEnsembleEstimator
from h2o.grid.grid_search import H2OGridSearch

h2o.init()

# Import a sample binary outcome training set into H2O
train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")

# Identify predictors and response
x = train.columns
y = "response"
x.remove(y)

# For binary classification, the response should be a factor
train[y] = train[y].asfactor()

# Add a fold column, generated from a list
# The list has 10 unique values, so there will be 10 folds
fold_list = list(range(10)) * 1000
train['fold_id'] = h2o.H2OFrame(fold_list)

# Train and cross-validate a GBM
my_gbm = H2OGradientBoostingEstimator(distribution="bernoulli",
                                      ntrees=10,
                                      keep_cross_validation_predictions=True,
                                      seed=1)
my_gbm.train(x=x, y=y, training_frame=train, fold_column="fold_id")

# Train and cross-validate a RF
my_rf = H2ORandomForestEstimator(ntrees=50,
                                 keep_cross_validation_predictions=True,
                                 seed=1)
my_rf.train(x=x, y=y, training_frame=train, fold_column="fold_id")

# Train a stacked ensemble using the GBM and RF above
ensemble = H2OStackedEnsembleEstimator(base_models=[my_gbm, my_rf])
ensemble.train(x=x, y=y, training_frame=train)
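To address the warning from the original question: the fold column is only used to assign training rows to folds, so it does not need to be present in a test frame. A sketch of evaluating the ensemble on held-out data (the test-file URL is assumed by analogy with the training file above):

# Import a matching test set (URL assumed, following the training file's naming)
test = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_test_5k.csv")
test[y] = test[y].asfactor()

# Evaluate the ensemble on the test data; no fold column is needed here
perf_stack_test = ensemble.model_performance(test)
pred = ensemble.predict(test)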
To answer your second question about how to view the cross-validated predictions of a model: they are stored in two places, but the method you probably want is .cross_validation_holdout_predictions(). This method returns a single H2OFrame of the cross-validated predictions, in the original order of the training observations:
In [11]: my_gbm.cross_validation_holdout_predictions()
Out[11]:
predict p0 p1
--------- -------- --------
1 0.323155 0.676845
1 0.248131 0.751869
1 0.288241 0.711759
1 0.407768 0.592232
1 0.507294 0.492706
0 0.6417 0.3583
1 0.253329 0.746671
1 0.289916 0.710084
1 0.524328 0.475672
1 0.252006 0.747994
[10000 rows x 3 columns]
The second method, .cross_validation_predictions(), returns a list which stores the predictions from each fold in an H2OFrame that has the same number of rows as the original training frame, where the rows that are not active in that fold have a value of zero. This is not usually the format that people find most useful, so I'd recommend using the other method instead.
In [13]: type(my_gbm.cross_validation_predictions())
Out[13]: list
In [14]: len(my_gbm.cross_validation_predictions())
Out[14]: 10
In [15]: my_gbm.cross_validation_predictions()[0]
Out[15]:
predict p0 p1
--------- -------- --------
1 0.323155 0.676845
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
[10000 rows x 3 columns]

calculate precision and recall in a confusion matrix

Suppose I have a confusion matrix like the one below. How can I calculate precision and recall?
First, your matrix is arranged upside down.
You want to arrange your labels so that the true positives sit on the diagonal [(0,0), (1,1), (2,2)]; this is the arrangement you're going to find in confusion matrices generated by sklearn and other packages.
Once we have things sorted in the right direction, we can take a page from this answer and say that:
True positives are on the diagonal.
False positives are the column-wise sums, without the diagonal.
False negatives are the row-wise sums, without the diagonal.
Then we take some formulas from the sklearn docs for precision and recall.
And put it all into code:
import numpy as np
cm = np.array([[2,1,0], [3,4,5], [6,7,8]])
true_pos = np.diag(cm)
false_pos = np.sum(cm, axis=0) - true_pos
false_neg = np.sum(cm, axis=1) - true_pos
precision = np.sum(true_pos / (true_pos + false_pos))
recall = np.sum(true_pos / (true_pos + false_neg))
Since we remove the true positives to define false_positives/negatives only to add them back... we can simplify further by skipping a couple of steps:
true_pos = np.diag(cm)
precision = np.sum(true_pos / np.sum(cm, axis=0))
recall = np.sum(true_pos / np.sum(cm, axis=1))
I don't think you need the summation at the end. Without the summation, your method is correct; it gives precision and recall for each class.
If you intend to calculate average precision and recall, then you have two options: micro- and macro-averaging.
Read more here: http://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html
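As a sketch of the difference (my addition, reusing the cm from above): macro-averaging takes the unweighted mean of the per-class scores, while micro-averaging pools the counts across classes first; for a single-label confusion matrix, micro-averaged precision reduces to overall accuracy:

import numpy as np

cm = np.array([[2,1,0], [3,4,5], [6,7,8]])
true_pos = np.diag(cm)

# macro average: unweighted mean of the per-class precisions
macro_precision = np.mean(true_pos / np.sum(cm, axis=0))

# micro average: pool all counts first (trace / total for a confusion matrix)
micro_precision = true_pos.sum() / cm.sum()

print(macro_precision, micro_precision)  # ~0.3768 ~0.3889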
For the sake of completeness, for future reference: given a list of ground truth (gt) and predictions (pd), the following code snippet computes the confusion matrix and then calculates precision and recall.
from sklearn.metrics import confusion_matrix
gt = [1,1,2,2,1,0]
pd = [1,1,1,1,2,0]
cm = confusion_matrix(gt, pd)
#rows = gt, col = pred
#compute tp, tp_and_fn and tp_and_fp w.r.t all classes
tp_and_fn = cm.sum(1)
tp_and_fp = cm.sum(0)
tp = cm.diagonal()
precision = tp / tp_and_fp
recall = tp / tp_and_fn
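As a quick cross-check (my addition), sklearn can produce the same per-class numbers directly; average=None returns one score per class:

from sklearn.metrics import precision_score, recall_score

print(precision_score(gt, pd, average=None))  # per-class precision
print(recall_score(gt, pd, average=None))     # per-class recall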
Given:
hypothetical confusion matrix (cm)
cm =
[[ 970 1 2 1 1 6 10 0 5 0]
[ 0 1105 7 3 1 6 0 3 16 0]
[ 9 14 924 19 18 3 13 12 24 4]
[ 3 10 35 875 2 34 2 14 19 19]
[ 0 3 6 0 903 0 9 5 4 32]
[ 9 6 4 28 10 751 17 5 24 9]
[ 7 2 6 0 9 13 944 1 7 0]
[ 3 11 17 3 16 3 0 975 2 34]
[ 5 38 10 16 7 28 5 4 830 20]
[ 5 3 5 13 39 10 2 34 5 853]]
Goal: precision and recall for each class, using map() to calculate list division.
from operator import truediv
import numpy as np
tp = np.diag(cm)
prec = list(map(truediv, tp, np.sum(cm, axis=0)))
rec = list(map(truediv, tp, np.sum(cm, axis=1)))
print ('Precision: {}\nRecall: {}'.format(prec, rec))
Result:
Precision: [0.959, 0.926, 0.909, 0.913, 0.896, 0.880, 0.941, 0.925, 0.886, 0.877]
Recall: [0.972, 0.968, 0.888, 0.863, 0.937, 0.870, 0.954, 0.916, 0.861, 0.880]
Please note: 10 classes, so 10 precisions and 10 recalls.
Agreeing with gruangly and EuWern, I modified PabTorre's solution accordingly to generate precision and recall per class.
Also, given my use case (NER), where a model could:
Never predict a class that is present in the input text (i.e. a column of zeros, i.e. TP: 0, FP: 0, FN: all), causing a nan in the precision array, or
Predict a class that is completely absent from the input text (i.e. a row of zeros, i.e. TP: 0, FN: 0, FP: all), causing a nan in the recall array,
I wrap the arrays with numpy.nan_to_num() to convert any nan to zero. This is not a mathematical decision, but a per-use-case, functional decision about how to handle never-predicted or never-occurring classes.
import numpy
confusion_matrix = numpy.array([
[ 5, 0, 0, 0, 0, 3],
[ 0, 2, 0, 1, 0, 5],
[ 0, 0, 0, 3, 5, 7],
[ 0, 0, 0, 9, 0, 0],
[ 0, 0, 0, 9, 32, 3],
[ 0, 0, 0, 0, 0, 0]
])
true_positives = numpy.diag(confusion_matrix)
false_positives = numpy.sum(confusion_matrix, axis=0) - true_positives
false_negatives = numpy.sum(confusion_matrix, axis=1) - true_positives
precision = numpy.nan_to_num(numpy.divide(true_positives, (true_positives + false_positives)))
recall = numpy.nan_to_num(numpy.divide(true_positives, (true_positives + false_negatives)))
print(true_positives) # [ 5 2 0 9 32 0 ]
print(false_positives) # [ 0 0 0 13 5 18 ]
print(false_negatives) # [ 3 6 15 0 12 0 ]
print(precision) # [1. 1. 0. 0.40909091 0.86486486 0. ]
print(recall) # [0.625 0.25 0. 1. 0.72727273 0. ]
import numpy as np

n_classes = 3
cm = np.array([[0, 1, 2],
               [5, 4, 3],
               [8, 7, 6]])

sp = []    # per-class specificity
f1 = []    # per-class F1 score
gm = []    # per-class geometric mean
sens = []  # per-class sensitivity (recall)
acc = []   # per-class true positives (summed later for overall accuracy)

for c in range(n_classes):
    tp = cm[c, c]
    fp = sum(cm[:, c]) - cm[c, c]
    fn = sum(cm[c, :]) - cm[c, c]
    tn = sum(np.delete(sum(cm) - cm[c, :], c))

    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)  # per-class accuracy
    specificity = tn / (tn + fp)
    f1_score = 2 * ((precision * recall) / (precision + recall))  # nan when precision + recall == 0
    g_mean = np.sqrt(recall * specificity)

    sp.append(specificity)
    f1.append(f1_score)
    gm.append(g_mean)
    sens.append(recall)
    acc.append(tp)

    print("for class {}: recall {}, specificity {}, precision {}, f1 {}, gmean {}".format(
        c, round(recall, 4), round(specificity, 4), round(precision, 4),
        round(f1_score, 4), round(g_mean, 4)))

print("sp: ", np.average(sp))
print("f1: ", np.average(f1))
print("gm: ", np.average(gm))
print("sens: ", np.average(sens))
print("accuracy: ", np.sum(acc) / np.sum(cm))  # overall accuracy = trace / total
