Speed up a Keras sequential model in a for loop - python-3.x

I am trying to decrease the execution time of a Keras model that runs in a for loop, once per grid point.
My training data has shape (1,9526,32736,1), i.e. (1,ntimes,ngrid,1),
and my test data has shape (1,1059,32736,1).
The time dimension of the test data is variable, but ngrid is fixed.
I added a trailing dummy dimension so that when I slice a single grid point out of the training data inside the for loop, the shape becomes (1,ntimes,1).
This is what the model does for each grid point:
First, it convolves the input along the time axis for a single grid point.
Then it subtracts the output of the convolution from the input data.
Finally, it convolves the output of that subtraction along the time axis again.
These steps are repeated for all 32736 (ngrid) grid points.
Here is the code:
import numpy as np
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv1D, subtract
from tqdm import tqdm

print(tf.__version__)     # 2.4.1
print(keras.__version__)  # 2.4.0

no_epochs = 1000
validation_split = 0
verbosity = 0

# xtrain, ytrain, xtest and ngrid are assumed to be defined as described above
pred = np.ones(xtest.shape[1:3])
for i in tqdm(range(ngrid)):
    keras.backend.clear_session()
    inputs = Input(shape=(None, 1), batch_size=1, name='input_layer')
    smoth1 = Conv1D(1, kernel_size=90, padding='same', activation='linear')(inputs)
    diff = subtract([inputs, smoth1])
    smoth2 = Conv1D(1, kernel_size=30, padding='same', activation='linear')(diff)
    model = Model(inputs=inputs, outputs=smoth2)
    model.compile(optimizer='adam', loss='mse')
    model.fit(xtrain[:, :, i, :], ytrain[:, :, i, :], epochs=no_epochs,
              validation_split=validation_split, verbose=verbosity)
    pred[:, i] = model.predict(xtest[:, :, i, :]).squeeze()
    del model
I am looking for other alternatives that can speed up my code. Any suggestions are greatly appreciated.
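One direction worth trying, sketched below under a strong assumption: that a single set of convolution weights shared across all grid points is acceptable, whereas the loop above fits independent weights per grid point. If that holds, the grid dimension can be folded into the batch dimension and the model trained once:

import numpy as np
import tensorflow.keras as keras
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv1D, subtract

# (1, ntimes, ngrid, 1) -> (ngrid, ntimes, 1): each grid point becomes one sample
xtrain_b = np.transpose(xtrain[0], (1, 0, 2))
ytrain_b = np.transpose(ytrain[0], (1, 0, 2))
xtest_b = np.transpose(xtest[0], (1, 0, 2))

inputs = Input(shape=(None, 1))
smoth1 = Conv1D(1, kernel_size=90, padding='same', activation='linear')(inputs)
diff = subtract([inputs, smoth1])
smoth2 = Conv1D(1, kernel_size=30, padding='same', activation='linear')(diff)
model = Model(inputs=inputs, outputs=smoth2)
model.compile(optimizer='adam', loss='mse')
model.fit(xtrain_b, ytrain_b, epochs=no_epochs, batch_size=256, verbose=0)

pred = model.predict(xtest_b)[:, :, 0].T  # back to (ntimes_test, ngrid)

If independent weights per grid point really are required, at least build and compile the model once outside the loop and re-initialize its weights each iteration, so graph construction is not repeated 32736 times.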

Related

Questions about Multitask deep neural network modeling using Keras

I'm trying to develop a multitask deep neural network (MTDNN) to make predictions of small-molecule bioactivity against kinase targets, and something is definitely wrong with my model structure, but I can't figure out what.
For my training data (highly imbalanced, with 0 as inactive and 1 as active), I have 423 unique kinase targets (tasks) and over 400k unique compounds. I first calculate ECFP fingerprints from the SMILES strings, and then randomly split the input data into train, test, and validation sets in an 8:1:1 ratio using RandomStratifiedSplitter from the deepchem package. After training my model on the train set, I want to make predictions on the test set to check model performance.
Here's what my data looks like (screenshot example):
https://i.stack.imgur.com/8Hp36.png
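For reference, here is a hypothetical sketch of how the featurization and the split2 iterable that the code below assumes might be produced; the file name, column names, and the wrapping of the split into a one-element list are all assumptions:

import deepchem as dc

task_names = ['task_%d' % i for i in range(423)]     # stand-in for the 423 kinase column names
featurizer = dc.feat.CircularFingerprint(size=1024)  # ECFP-style fingerprints
loader = dc.data.CSVLoader(tasks=task_names, feature_field='smiles', featurizer=featurizer)
dataset = loader.create_dataset('kinase_data.csv')
splitter = dc.splits.RandomStratifiedSplitter()
train, valid, test = splitter.train_valid_test_split(dataset, frac_train=0.8,
                                                     frac_valid=0.1, frac_test=0.1)
split2 = [(train, test, valid)]  # matches the unpacking order in the loop below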
Here's my code:
# Import Packages
import numpy as np
import pandas as pd
import deepchem as dc
from sklearn.metrics import roc_auc_score, roc_curve, auc, confusion_matrix
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import initializers, regularizers
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, Input, Dropout, Reshape
from tensorflow.keras.optimizers import SGD
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors
# Build Model
inputs = keras.Input(shape=(1024,))
x = keras.layers.Dense(2000, activation='relu', name='dense2000',
                       kernel_initializer=initializers.RandomNormal(stddev=0.02),
                       bias_initializer=initializers.Ones(),
                       kernel_regularizer=regularizers.L2(l2=.0001))(inputs)
x = keras.layers.Dropout(rate=0.25)(x)
x = keras.layers.Dense(500, activation='relu', name='dense500')(x)
x = keras.layers.Dropout(rate=0.25)(x)
x = keras.layers.Dense(846, activation='relu', name='output1')(x)
logits = Reshape([423, 2])(x)
outputs = keras.layers.Softmax(axis=2)(logits)
Model1 = keras.Model(inputs=inputs, outputs=outputs, name='MTDNN')
Model1.summary()

opt = keras.optimizers.SGD(learning_rate=.0003, momentum=0.9)

def loss_function(output, labels):
    loss = tf.nn.softmax_cross_entropy_with_logits(output, labels)
    return loss

loss_fn = loss_function

Model1.compile(loss=loss_fn, optimizer=opt,
               metrics=[keras.metrics.Accuracy(),
                        keras.metrics.AUC(),
                        keras.metrics.Precision(),
                        keras.metrics.Recall()])

for train, test, valid in split2:
    trainX = pd.DataFrame(train.X)
    trainy = pd.DataFrame(train.y)
    trainy2 = tf.one_hot(trainy, 2)
    testX = pd.DataFrame(test.X)
    testy = pd.DataFrame(test.y)
    testy2 = tf.one_hot(testy, 2)
    validX = pd.DataFrame(valid.X)
    validy = pd.DataFrame(valid.y)
    validy2 = tf.one_hot(validy, 2)
    history = Model1.fit(x=trainX, y=trainy2,
                         shuffle=True,
                         epochs=10,
                         verbose=1,
                         batch_size=100,
                         validation_data=(validX, validy2))

y_pred = Model1.predict(testX)
y_pred2 = y_pred[:, :, 1]
y_pred3 = np.round(y_pred2)
# Check the number of nonzero predictions per assay
(y_pred3 != 0).sum()  # all 0s
My questions are:
The ROC and precision-recall values are all extremely high (>0.99), yet the predictions on the test set are all 0s, with no actives at all. I also ran a randomized dataset with the same active:inactive ratio for each task to test whether those values were too good to be true, and all values were still above 0.99, including the ROC, which should be about 0.5 on random labels.
Can anyone help me identify what is wrong with my model and how I should fix it?
Can I use the built-in functions in sklearn to calculate ROC/accuracy/precision-recall, or should I calculate the metrics from the confusion matrix myself for the multitask setting? Why or why not?
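One plausible culprit, offered as an assumption rather than a confirmed diagnosis: the model's output layer already applies a Softmax, but tf.nn.softmax_cross_entropy_with_logits expects raw logits, so the loss is computed on doubly-softmaxed values; keras.metrics.Accuracy() is also a strict-equality metric, not argmax agreement. A minimal sketch of one way to make the loss and outputs consistent:

# Sketch: output raw logits and let the built-in loss apply the softmax.
logits = Reshape([423, 2])(x)  # no Softmax layer on the model output
Model1 = keras.Model(inputs=inputs, outputs=logits, name='MTDNN')
loss_fn = keras.losses.CategoricalCrossentropy(from_logits=True)
Model1.compile(loss=loss_fn, optimizer=opt)
# apply the softmax only when probabilities are needed:
probs = tf.nn.softmax(Model1.predict(testX), axis=-1)

With heavily imbalanced labels, all-inactive predictions alongside high accuracy are also expected; computing per-task ROC-AUC with sklearn.metrics.roc_auc_score on held-out probabilities is a reasonable sanity check.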

Polynomial Regression using keras

Hi, I am new to Keras and I just wanted to know: are ANNs good for polynomial regression tasks, or should we just use sklearn? For example, I wrote this script:
import numpy as np
import keras
from keras.layers import Dense
from keras.models import Sequential

x = np.arange(1, 100)
y = x ** 2

model = Sequential()
model.add(Dense(units=200, activation='relu', input_dim=1))
model.add(Dense(units=200, activation='relu'))
model.add(Dense(units=1))
model.compile(loss='mean_squared_error', optimizer=keras.optimizers.SGD(learning_rate=0.001))
model.fit(x, y, epochs=2000)
but after testing it on some numbers I didn't get good results, for example:
model.predict([300])
array([[3360.9023]], dtype=float32)
Is there a problem in my code, or should I just not use ANNs for polynomial regression?
Thank you.
I'm not 100 percent sure, but I think the reason you are getting such bad predictions is that you did not scale your data. Neural networks are very sensitive to the scale of their inputs and targets, so scaling is a must. Scale your data as shown below:
import numpy as np
import keras
from keras.layers import Dense
from keras.models import Sequential
from sklearn.preprocessing import StandardScaler

# StandardScaler expects 2-D arrays, so reshape to column vectors first
x = np.arange(1, 100).reshape(-1, 1)
y = x ** 2

sc_x = StandardScaler()
x = sc_x.fit_transform(x)
sc_y = StandardScaler()
y = sc_y.fit_transform(y)

model = Sequential()
model.add(Dense(units=5, activation='relu', input_dim=1))
model.add(Dense(units=5, activation='relu'))
model.add(Dense(units=1))
model.compile(loss='mean_squared_error', optimizer=keras.optimizers.SGD(learning_rate=0.001))
model.fit(x, y, epochs=75, batch_size=10)

prediction = sc_y.inverse_transform(model.predict(sc_x.transform([[300]])))
print(prediction)
Note that I changed the number of epochs from 2000 to 75. This is because 2000 epochs is way too high for a neural network, and it requires lots of time to train. Your x dataset contains only 99 values, so the maximum number of epochs I would suggest is 75.
Furthermore, I also changed the number of neurons in each hidden layer from 200 to 5. This is because 200 neurons is far too many for most datasets, let alone a small dataset of 99 points.
These changes should ensure that your neural network produces more accurate predictions.
Hope that helped.

Question about enabling/disabling dropout with keras functional API

I am using the Keras functional API to build a classifier, and I am using the training flag in the dropout layer to keep dropout enabled when predicting new instances (in order to get an estimate of the uncertainty). To get the expected response one needs to repeat this prediction several times, with Keras randomly (de)activating links in the dense layer, which of course is computationally expensive. Therefore, I would also like the option of not using dropout at the prediction phase, i.e., using all the network links. Does anyone know how I can do this? Below is a sample of what I am doing. I looked for a relevant parameter on predict, but it does not seem to have one (?). I can technically train the same model without the training flag at the dropout layer, but I do not want to do that (or rather, I want a cleaner solution than keeping two different models).
from sklearn.datasets import make_circles
from keras.models import Sequential
from keras.utils import to_categorical
from keras.layers import Dense
from keras.layers import Dropout
import numpy as np
import keras
# generate a 2d classification sample dataset
X, y = make_circles(n_samples=100, noise=0.1, random_state=1)
n_train = 30
trainX, testX = X[:n_train, :], X[n_train:, :]
trainy, testy = y[:n_train], y[n_train:]
trainy = to_categorical(trainy)
testy = to_categorical(testy)

inputlayer = keras.layers.Input((2,))
d = keras.layers.Dense(500, activation='relu')(inputlayer)
d1 = keras.layers.Dropout(rate=0.3)(d, training=True)
out = keras.layers.Dense(2, activation='softmax')(d1)
model = keras.Model(inputs=inputlayer, outputs=out)
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')
model.fit(x=trainX, y=trainy, validation_data=(testX, testy), epochs=1000, verbose=1)

# a prediction on a specific sample
print(model.predict(testX[0:1, :]))
# another prediction on the same sample
print(model.predict(testX[0:1, :]))
Running the above example I get the following output:
[[0.9230819 0.07691813]]
[[0.8222245 0.17777553]]
which is as expected, different class probabilities for the same input, since there is a random (de)activation of the links from the dropout layer.
Any suggestions on how I can enable/disable dropout at the prediction phase with the functional API?
Sure: you do not need to set the training flag when building the Dropout layer. After training your model, you define this function:

from keras import backend as K

mc_func = K.function([model.input, K.learning_phase()],
                     [model.output])

Then you call mc_func with your input and flag 1 to enable dropout, or 0 to disable it:

stochastic_pred = mc_func([some_input, 1])
deterministic_pred = mc_func([some_input, 0])
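As an aside, on tf.keras in TF 2.x (an assumption about your environment), K.learning_phase() is deprecated; the same effect can be achieved by building the Dropout layer without the training flag and choosing the mode per call:

# dropout active (Monte Carlo dropout):
stochastic_pred = model(testX[0:1, :], training=True)
# dropout disabled (standard inference):
deterministic_pred = model(testX[0:1, :], training=False)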

Keras LSTM batch input shape

Hello, I cannot seem to figure out the relationship between the reshaping of X and Y and the batch input shape of Keras when dealing with an LSTM.
The current database is an 84119 x 190 pandas DataFrame I am bringing in; from there I break it out into X and Y, so there are 189 features. If you could point out where I am wrong as it relates to (sequence, timestep, dimensions), it would be appreciated.
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import LSTM
# load dataset
training_data_df = pd.read_csv("C:/Users/####/python_folders/stock_folder/XYstore/Big_data22.csv")
X = training_data_df.drop('Change Month End Stock Price', axis=1).values
Y = training_data_df[['Change Month End Stock Price']].values
data_dim = 189
timesteps = 4
numberofSequence = 1
X=X.reshape(numberofSequence,timesteps,data_dim)
Y=Y.reshape(numberofSequence,timesteps, 1)
model = Sequential()
model.add(LSTM(200, return_sequences=True,batch_input_shape=(timesteps,data_dim)))
model.add(LSTM(200, return_sequences=True))
model.add(LSTM(100,return_sequences=True))
model.add(LSTM(1,return_sequences=False, activation='linear'))
model.compile(loss='mse',
              optimizer='rmsprop',
              metrics=['accuracy'])
model.fit(X,Y,epochs=100)
Edit (to fix the issue):
Thanks to the help below; both answers helped me think through the problem. I still have some work to do to really understand it.
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import LSTM
training_data_df = pd.read_csv("C:/Users/TurnerJ/python_folders/stock_folder/XYstore/Big_data22.csv")
training_data_df.replace(np.nan,value=0,inplace=True)
training_data_df.replace(np.inf,value=0,inplace=True)
training_data_df = training_data_df.loc[279:,:]
X = training_data_df.drop('Change Month End Stock Price', axis=1).values
Y = training_data_df[['Change Month End Stock Price']].values
data_dim = 189
timesteps = 1
numberofSequence = 83840
X=X.reshape(numberofSequence,timesteps,data_dim)
Y=Y.reshape(numberofSequence,timesteps, 1)
model = Sequential()
model.add(LSTM(200, return_sequences=True,batch_input_shape=(32,timesteps,data_dim)))
model.add(LSTM(200, return_sequences=True))
model.add(LSTM(100,return_sequences=True))
model.add(LSTM(1,return_sequences=True, activation='linear'))
model.compile(loss='mse',
              optimizer='rmsprop',
              metrics=['accuracy'])
model.fit(X,Y,epochs=100)
batch_input_shape needs the full batch shape: (batch_size, timesteps, data_dim).
input_shape needs only the shape of a sample: (timesteps, data_dim).
Now, there is a problem: 84119 is not a multiple of 4, so how can we expect to reshape it in steps of 4?
There may also be other issues, such as:
If you have one single long sequence, why divide it into steps?
If you intend to use a sliding-window approach, you need to prepare your data like this: Sample 1 = [step1, step2, step3, step4]; Sample 2 = [step2, step3, step4, step5]; and so on. This implies numberofSequence > 1 (something close to the original 84119); see the sketch after this list.
If you intend to have a single sequence divided up because of memory/performance issues, you should be using stateful=True and call model.reset_states() at the beginning of every epoch.
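A minimal sketch of that sliding-window preparation (aligning each window's target with its last row is an assumption; adjust it to your labeling):

import numpy as np

def make_windows(X, Y, timesteps=4):
    # sample i = rows [i, i + timesteps); its target is the label of the window's last row
    n = X.shape[0] - timesteps + 1
    Xw = np.stack([X[i:i + timesteps] for i in range(n)])  # (n, timesteps, n_features)
    Yw = Y[timesteps - 1:]                                 # (n, 1)
    return Xw, Yw

Xw, Yw = make_windows(X, Y)  # about 84116 overlapping samples from 84119 rows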

predict with different time step in LSTM using keras

I am using Keras to predict a time series with an LSTM, and I realized that we can predict using data that does not have the same number of timesteps as the data we trained on. For example:
import numpy as np
import keras.optimizers
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, TimeDistributed
from keras.layers import LSTM

Xtrain = np.random.rand(10, 3, 2)  # here timestep is 3
Ytrain = np.random.rand(10, 1)

model = Sequential()
model.add(LSTM(input_dim=Xtrain.shape[2], output_dim=10, return_sequences=False))
model.add(Activation("sigmoid"))
model.add(Dense(1))
KerasOptimizer = keras.optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
model.compile(loss="mse", optimizer=KerasOptimizer)
model.fit(Xtrain, Ytrain, nb_epoch=1, batch_size=1)

XBis = np.random.rand(10, 4, 2)  # here timestep is 4
XTer = np.random.rand(10, 2, 2)  # here timestep is 2
model.predict(Xtrain)
model.predict(XBis)
model.predict(XTer)
So my question is: why does this work? If we train a model with n timesteps and use data with n+1 timesteps for prediction, maybe the model uses only the first n timesteps. But if we try to predict with n-1 timesteps, how does it work?
If you look at how the LSTM layer is defined in your example, you will note that you are not specifying the size of the time dimension, only the number of features present at each time point (input_dim) and the number of desired output features (output_dim). Also, since you have return_sequences=False, it only outputs the result at the last time point, so the tensor yielded by the layer will always have shape [batch size] x [output dim] (in this case, 10 x 10), discarding the time dimension.
So the size of the time dimension does not really affect the "applicability" of the model; the layer just goes through all the available time steps and gives you the last output.
Of course, that does not mean that the model will necessarily work well for any input. If all the examples in your training data have a time dimension of size N but you then try to predict using N+1, N-1, 100*N, or anything else, you may not get reliable results.
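The same behavior can be checked with modern tf.keras names (a small self-contained sketch, assuming TF 2.x; leaving the time dimension as None is what makes variable lengths legal):

import numpy as np
import tensorflow as tf

lstm_model = tf.keras.Sequential([
    tf.keras.layers.LSTM(10, input_shape=(None, 2)),  # time dimension unspecified
    tf.keras.layers.Dense(1),
])
print(lstm_model.predict(np.random.rand(10, 3, 2)).shape)  # (10, 1)
print(lstm_model.predict(np.random.rand(10, 4, 2)).shape)  # (10, 1): extra step consumed
print(lstm_model.predict(np.random.rand(10, 2, 2)).shape)  # (10, 1): fewer steps, still valid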
