First CNN and shapes error - keras

I just started to build my first CNN. I'm practicing with the MNIST dataset, this is the code I just wrote:
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Dropout, Flatten, Dense
from tensorflow.keras.losses import categorical_crossentropy
from tensorflow.keras.optimizers import Adam
from sklearn.preprocessing import RobustScaler
import os
import numpy as np
import matplotlib.pyplot as plt
# CONSTANTS
EPOCHS = 300
TIME_STEPS = 30000
NUM_CLASSES = 10
# Loading data
print('Loading data:')
(train_X, train_y), (test_X, test_y) = mnist.load_data()
print('X_train: ' + str(train_X.shape))
print('Y_train: ' + str(train_y.shape))
print('X_test: ' + str(test_X.shape))
print('Y_test: ' + str(test_y.shape))
print('------------------------------')
# Splitting train/val
print('Splitting training/validation set:')
X_train = train_X[0:TIME_STEPS, :]
X_val = train_X[TIME_STEPS:TIME_STEPS*2, :]
print('X_train: ' + str(X_train.shape))
print('X_val: ' + str(X_val.shape))
# Normalizing data
print('------------------------------')
print('Normalizing data:')
X_train = X_train/255
X_val = X_val/255
print('X_train: ' + str(X_train.shape))
print('X_val: ' + str(X_val.shape))
# Building model
model = Sequential()
model.add(Conv1D(filters=32, kernel_size=5, input_shape=(28, 28)))
model.add(Conv1D(filters=16, kernel_size=4, activation="relu"))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(NUM_CLASSES, activation='softmax'))
model.compile(optimizer=Adam(), loss=categorical_crossentropy, metrics=['accuracy'])
model.summary()
model.fit(x=X_train, y=X_train, batch_size=10, epochs=EPOCHS, shuffle=False)
I'm going to explain what I did; any correction would be helpful so I can learn more:
The first thing I did was split the training set into two parts: a training part and a validation part, which I want to train on before testing on the test set.
Then I normalized the data (is this standard practice when working with images?).
I then built my CNN with a simple structure: the first layer is the one that receives the inputs (with dimension 28x28), and I chose 32 filters, which should be enough to perform well on this dataset. The kernel size is the part I did not understand, since I thought the kernel was the equivalent of the filter; I selected a low number to avoid problems. The second layer is similar to the previous one, but now it has an activation function (relu, though I'm not convinced; I was thinking of using a softmax to pass a set of probabilities to the fully connected layers).
The last 3 layers are the fully connected layers that produce the output.
In the fit function I used a batch size of 10, and I think that this could be one of the reasons I get the error:
ValueError: Shapes (10, 28, 28) and (10, 10) are incompatible
Even after removing it, I still get the following error:
ValueError: Shapes (None, 28, 28) and (None, 10) are incompatible
Am I missing something important?

You are passing in the X_train variable twice, once as the x argument and once as the y argument. Instead of passing in X_train as the y argument in .fit(), you should pass in an array of the values you are trying to predict. Given that you are using MNIST, I assume you are trying to predict the written digit, so your y array should have shape (n_samples, 10) with the digit one-hot encoded.
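For example, something along these lines should give y the (n_samples, 10) shape the loss expects (a sketch based on the slicing already in the question):
from tensorflow.keras.utils import to_categorical
# One-hot encode the digit labels, sliced the same way as the images
y_train = to_categorical(train_y[0:TIME_STEPS], num_classes=NUM_CLASSES)            # (30000, 10)
y_val = to_categorical(train_y[TIME_STEPS:TIME_STEPS*2], num_classes=NUM_CLASSES)
model.fit(x=X_train, y=y_train, validation_data=(X_val, y_val),
          batch_size=10, epochs=EPOCHS, shuffle=False)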

Related

ValueError: Input arrays should have the same number of samples as target arrays LSTM Keras

Here is the data preparation part - I just want the data to be in the correct shape:
x_train , x_test, y_train, y_test = train_test_split(input_data, y , test_size = 0.2 , random_state = 33)
print(x_train.shape)
print(y_train.shape)
(200, 3)
(200, 1)
#Converting them into numpy arrays
input_x_train = x_train.as_matrix()
input_y_train = y_train.as_matrix()
print(input_x_train.shape)
print(input_y_train.shape)
(200, 3)
(200, 1)
input_x_test = x_test.as_matrix()
input_y_test = y_test.as_matrix()
print(input_x_test.shape)
print(input_y_test.shape)
(51, 3)
(51, 1)
#Reshaping into LSTM input format
input_x = input_x_train.reshape((1, input_x_train.shape[0], input_x_train.shape[1]))
print(input_x.shape)
(1, 200, 3)
Then I built my model like this:
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers.recurrent import LSTM
from keras.layers.normalization import BatchNormalization
model = Sequential()
model.add(LSTM(32, input_shape=(200, 3)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
model.fit(input_x, input_y_train, epochs=1, batch_size=16)
But I am getting this error
ValueError: Input arrays should have the same number of samples as
target arrays. Found 1 input samples and 200 target samples.
The input_shape parameter should not include your batch size. Since each sample of your dataset has three features, and an LSTM still expects a time axis per sample, input_shape should describe one sample as (timesteps, features).
In addition, you should not reshape your whole training set to (1, batch_size, 3): that collapses your 200 samples into a single sample, which is exactly why Keras reports 1 input sample versus 200 target samples. Remove that line entirely.
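As a sketch of shapes that line up (assuming it is acceptable to treat each row as a sequence of length 1; the (1, 3) input shape is an illustration, not from the original code):
# Keep the 200 samples on the first axis and give each one a single timestep
input_x = input_x_train.reshape((input_x_train.shape[0], 1, input_x_train.shape[1]))  # (200, 1, 3)
model = Sequential()
model.add(LSTM(32, input_shape=(1, 3)))   # (timesteps, features); no batch size here
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
model.fit(input_x, input_y_train, epochs=1, batch_size=16)  # 200 inputs, 200 targets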

how to reuse last layers' bias in next layers in Keras with tensorflow Backend

I'm new to Keras
My neural network structure is here:
[image: neural network structure]
My idea is:
import keras.backend as KBack
import tensorflow as tf
#...some code here
model = Sequential()
hidden_units = 4
layer1 = Dense(
    hidden_units,
    input_dim=len(InputIndex),
    activation='sigmoid'
)
model.add(layer1)
# layer1_bias = layer1.get_weights()[1][0]
layer2 = Dense(
    1, activation='sigmoid',
    use_bias=False
)
model.add(layer2)
# KBack.bias_add(model.output, layer1_bias[0])
I know this is not working because layer1_bias[0] is not a tensor, but I have no idea how to fix it. Or maybe somebody has another solution.
Thanks.
You get the error because bias_add expects a Tensor and you are passing it a float (the actual value of the bias). Also, be aware that your hidden layer actually has 3 biases (one for each node). If you want to add the bias of the first node to your output layer, this should work:
import keras.backend as K
from keras.layers import Dense, Activation
from keras.models import Sequential
model = Sequential()
layer1 = Dense(3, input_dim=2, activation='sigmoid')
layer2 = Dense(1, activation=None, use_bias=False)
activation = Activation('sigmoid')
model.add(layer1)
model.add(layer2)
K.bias_add(model.output, layer1.bias[0:1]) # slice like this to not lose a dimension
model.add(activation)
print(model.summary())
Note that, to be 'correct' (according to the definition of what a dense layer does), you should add the bias first, then the activation.
Also, your code is not really in line with the picture of your network. In the picture, one single shared bias is added to each of the nodes in the network. You can do this with the functional API. The idea is to disable the use of biases in the hidden layer and the output layers, and to manually add a bias variable that you define yourself and that will be shared by the layers. I'm using tensorflow for tf.add() since that supports broadcasting:
from keras.layers import Dense, Lambda, Input, Add
from keras.models import Model
import keras.backend as K
import tensorflow as tf
# Define the shared bias as a custom keras variable
shared_bias = K.variable(value=[0], name='shared_bias')
input_layer = Input(shape=(2,))
# Disable biases in the hidden layer
dense_1 = Dense(units=3, use_bias=False, activation=None)(input_layer)
# Manually add the shared bias
dense_1 = Lambda(lambda x: tf.add(x, shared_bias))(dense_1)
# Disable bias in output layer
output_layer = Dense(units=1, use_bias=False)(dense_1)
# Manually add the bias variable
output_layer = Lambda(lambda x: tf.add(x, shared_bias))(output_layer)
model = Model(inputs=input_layer, outputs=output_layer)
print(model.summary())
This assumes that your shared bias is not trainable though.
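If the shared bias does need to be trainable, one option (a sketch, not from the original answer; the SharedBias layer name is made up for illustration) is a tiny custom layer whose single weight is the bias, reused in both places so the weight is shared:
from keras.layers import Dense, Input, Layer
from keras.models import Model

class SharedBias(Layer):
    # Hypothetical helper: a layer with one trainable weight that is simply added to its input
    def build(self, input_shape):
        self.bias = self.add_weight(name='shared_bias', shape=(1,),
                                    initializer='zeros', trainable=True)
        super(SharedBias, self).build(input_shape)
    def call(self, inputs):
        return inputs + self.bias

add_shared_bias = SharedBias()                 # one instance, so both calls use the same weight
input_layer = Input(shape=(2,))
dense_1 = Dense(units=3, use_bias=False, activation=None)(input_layer)
dense_1 = add_shared_bias(dense_1)
output_layer = Dense(units=1, use_bias=False)(dense_1)
output_layer = add_shared_bias(output_layer)
model = Model(inputs=input_layer, outputs=output_layer)
print(model.summary())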

Conv2d input parameter mismatch

I am feeding variable-size images (278 images in total, 139 of each of the two categories) into my CNN model. Since a CNN requires fixed-size images, the solution I found for this is to set input_shape=(None, None, 1) (for the TensorFlow backend and grayscale). But this solution does not work with a Flatten layer, so the suggestion I found was to use GlobalMaxPooling or GlobalAveragePooling instead. Using these ideas, I am building a CNN model in Keras to train my network with the following code:
import os,cv2
import numpy as np
from sklearn.utils import shuffle
from keras import backend as K
from keras.utils import np_utils
from keras.models import Sequential
from keras.optimizers import SGD,RMSprop,adam
from keras.layers import Conv2D, MaxPooling2D,BatchNormalization,GlobalAveragePooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import regularizers
from keras import initializers
from skimage.io import imread_collection
from keras.preprocessing import image
from keras import Input
import keras
from keras import backend as K
#%%
PATH = os.getcwd()
# Define data path
data_path = PATH+'/current_exp'
data_dir_list = os.listdir(data_path)
img_rows=None
img_cols=None
num_channel=1
# Define the number of classes
num_classes = 2
img_data_list=[]
for dataset in data_dir_list:
    img_list=os.listdir(data_path+'/'+ dataset)
    print ('Loaded the images of dataset-'+'{}\n'.format(dataset))
    for img in img_list:
        input_img=cv2.imread(data_path + '/'+ dataset + '/'+ img,0)
        img_data_list.append(input_img)
img_data = np.array(img_data_list)
if num_channel==1:
    if K.image_dim_ordering()=='th':
        img_data= np.expand_dims(img_data, axis=1)
        print (img_data.shape)
    else:
        img_data= np.expand_dims(img_data, axis=4)
        print (img_data.shape)
else:
    if K.image_dim_ordering()=='th':
        img_data=np.rollaxis(img_data,3,1)
        print (img_data.shape)
#%%
num_classes = 2
#Total 278 sample, 139 for 0 category and 139 for category 1
num_of_samples = img_data.shape[0]
labels = np.ones((num_of_samples,),dtype='int64')
labels[0:138]=0
labels[138:]=1
x,y = shuffle(img_data,labels, random_state=2)
y = keras.utils.to_categorical(y, 2)
model = Sequential()
model.add(Conv2D(32,(2,2),input_shape=(None,None,1),activation='tanh',kernel_initializer=initializers.glorot_uniform(seed=100)))
model.add(Conv2D(32, (2,2),activation='tanh'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (2,2),activation='tanh'))
model.add(Conv2D(64, (2,2),activation='tanh'))
model.add(MaxPooling2D())
model.add(Dropout(0.25))
#model.add(Flatten())
model.add(GlobalAveragePooling2D())
model.add(Dense(256,activation='tanh'))
model.add(Dropout(0.25))
model.add(Dense(2,activation='softmax'))
model.compile(loss='categorical_crossentropy',optimizer='rmsprop',metrics=['accuracy'])
model.fit(x, y,batch_size=1,epochs=5,verbose=1)
But I am getting the following error:
ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (278, 1)
How can I solve it?
In the docs for Conv2D it says that the input tensor has to be 4-dimensional:
(samples, channels, rows, cols) with channels_first, or (samples, rows, cols, channels) with channels_last
I believe you can't have a variable input size unless your network is fully convolutional.
Maybe what you want to do is to keep it to a fixed input size, and just resize the image to that size before feeding it into your network?
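For example, a minimal sketch of that resize-first approach (the 64x64 target size is just an illustration, not something from the question):
import cv2
import numpy as np
TARGET_SIZE = (64, 64)   # example size; pick whatever resolution suits your images
# Resize every grayscale image to the same shape so they stack into one 4D array
img_data = np.array([cv2.resize(img, TARGET_SIZE) for img in img_data_list])
img_data = img_data.astype('float32') / 255.0
img_data = np.expand_dims(img_data, axis=-1)   # (278, 64, 64, 1) for channels_last
print(img_data.shape)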
Your array with input data cannot have variable dimensions (this is a numpy limitation).
So the array, instead of being a regular 4-dimensional array of numbers, is being created as an array of arrays.
You should fit each image individually because of this limitation.
for epoch in range(epochs):
    for img, label in zip(x, y):   # 'class' is a reserved word in Python, so use another name
        # expand the first dimension to act as a batch size of 1
        img = img.reshape((1,) + img.shape)        # print and check there are 4 dimensions, like (1, width, height, 1)
        label = label.reshape((1,) + label.shape)  # print and check there are 2 dimensions, like (1, classes)
        model.train_on_batch(img, label)           # plus any other arguments you need

how to build LSTM RNN network for binary classification?

I am trying to build a deep learning network for binary classification using LSTM based RNN.
Here is what I have tried using Python:
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import LSTM
import numpy as np
train = np.loadtxt("TrainDatasetFinal.txt", delimiter=",")
test = np.loadtxt("testDatasetFinal.txt", delimiter=",")
y_train = train[:,7]
y_test = test[:,7]
train_spec = train[:,6]
test_spec = test[:,6]
model = Sequential()
model.add(Embedding(8, 256, input_length=1))
model.add(LSTM(output_dim=128, activation='sigmoid',
               inner_activation='hard_sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop')
model.fit(train_spec, y_train, batch_size=2000, nb_epoch=11)
score = model.evaluate(test_spec, y_test, batch_size=2000)
Here is a sample from the dataset
(Patient Number, time in millisecond, accelerometer x-axis,y-axis,
z-axis,magnitude, spectrogram,label (0 or 1))
1,15,70,39,-970,947321,596768455815000,0
1,31,70,39,-970,947321,612882670787000,0
1,46,60,49,-960,927601,602179976392000,0
1,62,60,49,-960,927601,808020878060000,0
1,78,50,39,-960,925621,726154800929000,0
I believe that my problem is in these lines, but I cannot recognize the error:
model.add(Embedding(8, 256, input_length=1))
model.add(LSTM(output_dim=128, activation='sigmoid',
               inner_activation='hard_sigmoid'))
and this is the error I get:
InvalidArgumentError (see above for traceback): indices[0,0] = -2147483648 is not in [0, 8)
Is the sample from your dataset provided above the data you are trying to feed into the model? If so, there is a problem: your data is 2-dimensional, but an RNN needs a 3-dimensional input tensor with a batch dimension, a time dimension and a feature dimension. It looks like you are missing a proper time dimension. You should not have a column with 15, 31, 46, ... (time in milliseconds); that should be shaped into its own dimension, so that your input data looks like a "cube". Otherwise, you don't need a temporal model at all.
Furthermore, you should standardize your input, since your features have vastly different orders of magnitude. Moreover, a batch size of 2000 is almost certainly too large. Are you trying to express that your whole training set has 2000 samples? In that case, you may not have enough training data for the model you are building.
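As a rough illustration of that 3-dimensional shape (the 50-step window and the choice of feature columns are assumptions made for the example, not given in the question):
import numpy as np
window = 50                                    # assumed window length
features = train[:, 2:7]                       # x, y, z, magnitude, spectrogram
features = (features - features.mean(axis=0)) / features.std(axis=0)  # standardize
labels = train[:, 7]
n_windows = len(features) // window
X = features[:n_windows * window].reshape(n_windows, window, features.shape[1])
y = labels[:n_windows * window].reshape(n_windows, window)[:, -1]     # label of the last step in each window
print(X.shape, y.shape)                        # (n_windows, 50, 5) and (n_windows,)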

Python Keras LSTM input output shape issue

I am running Keras on top of TensorFlow, trying to implement a multi-dimensional LSTM network to predict a linear continuous target variable, a single value for each example (return_sequences=False).
My sequence length is 10 and number of features (dim) is 11.
This is what I run:
import pprint, pickle
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
# Input sequence
wholeSequence = [[0,0,0,0,0,0,0,0,0,2,1],
                 [0,0,0,0,0,0,0,0,2,1,0],
                 [0,0,0,0,0,0,0,2,1,0,0],
                 [0,0,0,0,0,0,2,1,0,0,0],
                 [0,0,0,0,0,2,1,0,0,0,0],
                 [0,0,0,0,2,1,0,0,0,0,0],
                 [0,0,0,2,1,0,0,0,0,0,0],
                 [0,0,2,1,0,0,0,0,0,0,0],
                 [0,2,1,0,0,0,0,0,0,0,0],
                 [2,1,0,0,0,0,0,0,0,0,0]]
# Preprocess Data:
wholeSequence = np.array(wholeSequence, dtype=float) # Convert to NP array.
data = wholeSequence
target = np.array([20])
# Reshape training data for Keras LSTM model
data = data.reshape(1, 10, 11)
target = target.reshape(1, 1, 1)
# Build Model
model = Sequential()
model.add(LSTM(11, input_shape=(10, 11), unroll=True, return_sequences=False))
model.add(Dense(11))
model.add(Activation('linear'))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(data, target, nb_epoch=1, batch_size=1, verbose=2)
and get the error ValueError: Error when checking target: expected activation_1 to have 2 dimensions, but got array with shape (1, 1, 1)
I am not sure what shape the activation layer should get.
Any help appreciated
thanks
If you just want a single linear output neuron, you can simply use a Dense layer with one unit and supply the activation there. Your output can then be a single value without the reshape. I adjusted your example code to make it work:
wholeSequence = np.array(wholeSequence, dtype=float) # Convert to NP array.
data = wholeSequence
target = np.array([20])
# Reshape training data for Keras LSTM model
data = data.reshape(1, 10, 11)
# Build Model
model = Sequential()
model.add(LSTM(11, input_shape=(10, 11), unroll=True, return_sequences=False))
model.add(Dense(1, activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(data, target, nb_epoch=1, batch_size=1, verbose=2)
