My model has a tensor of shape (?, 1, 60). How can I reduce this to (?, 60)? I'm not sure whether Reshape or Flatten can be applied with respect to a specific dimension. Any help?
Both layers will work, but in this case I prefer using keras.layers.Flatten. Here is an example:
from keras.layers import Input, Flatten
from keras.models import Model
import numpy as np
a = Input(shape=(1, 60))
b = Flatten()(a)
model = Model(inputs=a, outputs=b)
model.compile('sgd', 'mse')
pred = model.predict(x=np.ones(shape=(2, 1, 60)))
print(pred.shape)
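For completeness, the same squeeze can also be done with a Reshape layer; the snippet below is just an illustrative sketch, not part of the original example:
from keras.layers import Reshape
b_alt = Reshape((60,))(a)  # (?, 1, 60) -> (?, 60), equivalent to Flatten here
Either layer drops the singleton dimension; Flatten simply saves you from spelling out the target shape.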
I'm trying to develop a multitask deep neural network (MTDNN) to make predictions on small-molecule bioactivity against kinase targets, and something is definitely wrong with my model structure, but I can't figure out what.
For my training data (highly imbalanced, with 0 as inactive and 1 as active), I have 423 unique kinase targets (tasks) and over 400k unique compounds. I first calculate ECFP fingerprints from the SMILES, and then randomly split the input data into train, test, and valid sets in an 8:1:1 ratio using RandomStratifiedSplitter from the deepchem package. After training my model on the train set, I want to make predictions on the test set to check model performance.
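For reference, here is a minimal sketch of that fingerprint step (the radius=2 and nBits=1024 settings and the helper name are illustrative assumptions, not the exact values from my pipeline):
from rdkit import Chem, DataStructs
from rdkit.Chem import rdMolDescriptors
import numpy as np

def ecfp_from_smiles(smiles, radius=2, n_bits=1024):
    # Morgan/ECFP bit-vector fingerprint for a single compound
    mol = Chem.MolFromSmiles(smiles)
    fp = rdMolDescriptors.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr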
Here's what my data looks like (screenshot example):
(https://i.stack.imgur.com/8Hp36.png)
Here's my code:
# Import Packages
import numpy as np
import pandas as pd
import deepchem as dc
from sklearn.metrics import roc_auc_score, roc_curve, auc, confusion_matrix
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import initializers, regularizers
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, Input, Dropout, Reshape
from tensorflow.keras.optimizers import SGD
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

# Build Model
inputs = keras.Input(shape=(1024,))
x = keras.layers.Dense(2000, activation='relu', name="dense2000",
                       kernel_initializer=initializers.RandomNormal(stddev=0.02),
                       bias_initializer=initializers.Ones(),
                       kernel_regularizer=regularizers.L2(l2=.0001))(inputs)
x = keras.layers.Dropout(rate=0.25)(x)
x = keras.layers.Dense(500, activation='relu', name='dense500')(x)
x = keras.layers.Dropout(rate=0.25)(x)
x = keras.layers.Dense(846, activation='relu', name='output1')(x)
logits = Reshape([423, 2])(x)
outputs = keras.layers.Softmax(axis=2)(logits)
Model1 = keras.Model(inputs=inputs, outputs=outputs, name='MTDNN')
Model1.summary()

opt = keras.optimizers.SGD(learning_rate=.0003, momentum=0.9)

def loss_function(output, labels):
    loss = tf.nn.softmax_cross_entropy_with_logits(output, labels)
    return loss

loss_fn = loss_function

Model1.compile(loss=loss_fn, optimizer=opt,
               metrics=[keras.metrics.Accuracy(),
                        keras.metrics.AUC(),
                        keras.metrics.Precision(),
                        keras.metrics.Recall()])
for train, test, valid in split2:
    trainX = pd.DataFrame(train.X)
    trainy = pd.DataFrame(train.y)
    trainy2 = tf.one_hot(trainy, 2)
    testX = pd.DataFrame(test.X)
    testy = pd.DataFrame(test.y)
    testy2 = tf.one_hot(testy, 2)
    validX = pd.DataFrame(valid.X)
    validy = pd.DataFrame(valid.y)
    validy2 = tf.one_hot(validy, 2)

    history = Model1.fit(x=trainX, y=trainy2,
                         shuffle=True,
                         epochs=10,
                         verbose=1,
                         batch_size=100,
                         validation_data=(validX, validy2))

    y_pred = Model1.predict(testX)
    y_pred2 = y_pred[:, :, 1]
    y_pred3 = np.round(y_pred2)

    # Check the # of nonzero in assay
    (y_pred3 != 0).sum()  # all 0s
My questions are:
The ROC and precision-recall values are all extremely high (>0.99), but the prediction result on the test set contains all 0s, no actives at all. I also used a randomized dataset with the same active:inactive ratio for each task to check whether those values are too good to be true, and it turns out all the values are still above 0.99, including ROC, which would be expected to be around 0.5 on randomized labels.
Can anyone help me identify what is wrong with my model and how I should fix it?
Can I use the built-in functions in sklearn to calculate ROC/accuracy/precision-recall, or should I calculate the metrics from the confusion matrix myself for the multitask setting? Why or why not?
I created a Keras/TensorFlow model, much influenced by this guide, which looks like this:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import time
import numpy as np
import sys
from keras import losses

model = keras.Sequential()
model.add(layers.Dense(nodes, activation=tf.keras.activations.relu, input_shape=[len(data_initial.keys())]))
model.add(layers.Dense(64, activation=tf.keras.activations.relu))
model.add(layers.Dropout(0.1, noise_shape=None))
model.add(layers.Dense(1))
model.compile(loss='mse',  # <-------- Here we define the loss function
              optimizer=tf.keras.optimizers.Adam(lr=0.01,
                                                 beta_1=0.01,
                                                 beta_2=0.001,
                                                 epsilon=0.03),
              metrics=['mae', 'mse'])
model.fit(train_data, train_labels, epochs=200)
It is a regression model, and instead of loss='mse' I would like to use the tf.keras MSE loss together with an L2 regularization term. My questions are:
How can I add a predefined regularizer function (I think it is this one) into the model.compile statement?
How can I write a completely custom loss function and add it to model.compile?
You can add regularization either as a layer parameter or as a layer.
Used as a layer parameter, it looks like this:
model.add(layers.Dense(8,
                       kernel_regularizer=regularizers.l2(0.01),
                       activity_regularizer=regularizers.l1(0.01)))
Sample code with the first dense layer regularized and a custom loss function:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import time
import numpy as np
import sys
from tensorflow.keras import losses
from tensorflow.keras import regularizers
from tensorflow.keras import backend as K

model = keras.Sequential()
# First dense layer with L2 weight regularization and L1 activity regularization
model.add(layers.Dense(8, activation=tf.keras.activations.relu, input_shape=(8,),
                       kernel_regularizer=regularizers.l2(0.01),
                       activity_regularizer=regularizers.l1(0.01)))
model.add(layers.Dense(4, activation=tf.keras.activations.relu))
model.add(layers.Dropout(0.1, noise_shape=None))
model.add(layers.Dense(1))

# Custom loss: mean squared error written by hand
def custom_loss(y_true, y_pred):
    return K.mean(K.square(y_true - y_pred))

model.compile(loss=custom_loss,
              optimizer=tf.keras.optimizers.Adam(lr=0.01,
                                                 beta_1=0.01,
                                                 beta_2=0.001,
                                                 epsilon=0.03),
              metrics=['mae', 'mse'])
model.fit(np.random.randn(10, 8), np.random.randn(10, 1), epochs=1)
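If you want the L2 term inside the loss that you pass to model.compile, rather than attached to individual layers, one option is to add an explicit weight penalty inside a custom loss. A minimal sketch, assuming the model above (the make_mse_with_l2 name and the l2_weight factor are illustrative, not an official API):

def make_mse_with_l2(model, l2_weight=0.01):
    def loss(y_true, y_pred):
        mse = K.mean(K.square(y_true - y_pred))
        # explicit L2 penalty over the model's kernel weights
        l2 = sum(K.sum(K.square(w)) for w in model.trainable_weights if 'kernel' in w.name)
        return mse + l2_weight * l2
    return loss

model.compile(loss=make_mse_with_l2(model),
              optimizer=tf.keras.optimizers.Adam(lr=0.01),
              metrics=['mae', 'mse'])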
I am experimenting with an LSTM encoder-decoder. It is not clear to me how I should reshape the input data.
I used the following code:
import keras
import random
import numpy as np
from random import randint
from numpy import array
from numpy import argmax
from pandas import DataFrame
from pandas import concat
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
from keras.layers import TimeDistributed
from keras.layers import RepeatVector

cardinality = 10
n_steps = 10
n_steps_y = 3
n_features = 1

def getRandomInt():
    return getOneHotEncoded(random.randint(1, cardinality), cardinality)

def getOneHotEncoded(value, cardinality):
    encoded = [0 for _ in range(cardinality + 1)]
    encoded[value] = 1
    return encoded

def generateXY():
    X, y = list(), list()
    for q in range(100):
        x_temp = [getRandomInt() for _ in range(10)]
        y_temp = x_temp[-3:]
        X.append(x_temp)
        y.append(y_temp)
    return np.array(X), np.array(y)

def getModel(n_steps=n_steps, n_features=n_features):
    model = Sequential()
    model.add(LSTM(12, input_shape=(n_steps, n_features)))
    model.add(RepeatVector(n_steps_y))
    model.add(LSTM(5, return_sequences=True))
    model.add(TimeDistributed(Dense(1)))
    model.compile(loss='categorical_crossentropy', optimizer='adam')
    print(model.summary())
    return model

X, y = generateXY()
model = getModel()
model.fit(X, y, epochs=10, batch_size=10, verbose=1)
and got an error about the shape of the input:
ValueError: Error when checking input: expected lstm_1_input to have
shape (10, 1) but got array with shape (10, 11)
How should I reshape the input appropriately for this code?
I think what you are trying to do is pass sequences of one-hot-encoded random numbers. Your sequences are 10 steps long and your one-hot arrays are 11 long.
To represent that, you need to set n_steps = 10 and n_features = 11.
By the way: in encoded = [0 for _ in range(cardinality+1)], the +1 is only needed because random.randint(1, cardinality) returns values from 1 to 10, so index 10 has to exist. If you encode value - 1 instead and change it to encoded = [0 for _ in range(cardinality)], you can set n_features = 10.
I hope this helped.
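To make the shapes line up end to end, here is a minimal sketch of the fix described above; note that widening the output layer to 11 units with a softmax is my own assumption beyond the answer (the (100, 3, 11) targets have to fit the decoder output too):

n_features = 11  # 10 values (1..10) plus the unused index 0
model = Sequential()
model.add(LSTM(12, input_shape=(n_steps, n_features)))
model.add(RepeatVector(n_steps_y))
model.add(LSTM(5, return_sequences=True))
model.add(TimeDistributed(Dense(n_features, activation='softmax')))  # match the one-hot width
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(X, y, epochs=10, batch_size=10, verbose=1)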
I'm trying to convert the input tensor to a numpy array inside a custom Keras loss function, after following the instructions here.
The code from those instructions runs on my machine with no errors. Now, I want to extract a numpy array of values from the input tensor. However, I get the following error:
"tensorflow.python.framework.errors_impl.InvalidArgumentError: You
must feed a value for placeholder tensor 'input_1' with dtype float
[[Node: input_1 = Placeholderdtype=DT_FLOAT, shape=[],
_device="/job:localhost/replica:0/task:0/cpu:0"]]"
I need to convert to a numpy array because I have other Keras models that must operate on the input. I haven't shown those lines in joint_loss below, but even the code sample below doesn't run at all.
import numpy as np
from keras.models import Model, Sequential
from keras.layers import Dense, Activation, Input
import keras.backend as K

def joint_loss_wrapper(x):
    def joint_loss(y_true, y_pred):
        x_val = K.eval(x)  # <- this is the line that raises the placeholder error
        return y_true - y_pred
    return joint_loss

input_tensor = Input(shape=(6,))
hidden1 = Dense(30, activation='relu')(input_tensor)
hidden2 = Dense(40, activation='sigmoid')(hidden1)
out = Dense(1, activation='sigmoid')(hidden2)
model = Model(input_tensor, out)
model.compile(loss=joint_loss_wrapper(input_tensor), optimizer='adam')
I figured it out!
What you want to do is use the Functional API for Keras.
Then your submodels' outputs can be obtained as tensors via y_pred_submodel = submodel(x).
This is similar to how a Keras layer operates on a tensor.
Manipulate only tensors within the loss function. That should work fine.
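As a rough sketch of that pattern (the submodel below is a made-up stand-in; the point is only that you call it on the input tensor instead of using K.eval):

# some other Keras model that must operate on the input
submodel = Sequential([Dense(1, input_shape=(6,))])

def joint_loss_wrapper(x):
    y_from_submodel = submodel(x)  # stays a tensor, no K.eval needed
    def joint_loss(y_true, y_pred):
        # combine the main error with a term built from the submodel's output
        return K.mean(K.square(y_true - y_pred)) + K.mean(K.square(y_from_submodel))
    return joint_loss

model.compile(loss=joint_loss_wrapper(input_tensor), optimizer='adam')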
Using the accepted answer here, I'm trying to change a layer's weights with the set_weights() method, but it doesn't seem to work...
Here is the code I used:
from keras.layers import Input
from keras.layers.convolutional import Convolution2D
from keras.models import Model
import numpy as np

print("Building Model...")
inp = Input(shape=(20, 20, 1))
output = Convolution2D(1, (3, 3), padding='same', bias=False)(inp)
model_network = Model(inp, output)

print("Weights before change:")
print(model_network.layers[1].get_weights())

w = np.asarray([
    [[[
        [2, 2, 2],
        [2, 2, 2],
        [2, 2, 2]
    ]]]
])
w = np.reshape(w, np.shape(model_network.layers[1].get_weights()))
# print("W:", w)
model_network.layers[1].set_weights(w)

print("Weights after change:")
print(model_network.layers[1].get_weights())
But my weights remain the same (output in the comments).