Input shape for 1D convolution network in Keras

I am quite new to Keras and I have a problem understanding shapes.
I wanted to create a 1D conv Keras model as follows, but I don't know whether this is correct:
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, GlobalAveragePooling1D, Dropout, Dense

TIME_PERIODS = 511
num_sensors = 2
num_classes = 4
BATCH_SIZE = 400
EPOCHS = 50
model_m = Sequential()
model_m.add(Conv1D(100, 10, activation='relu', input_shape=(TIME_PERIODS, num_sensors)))
model_m.add(Conv1D(100, 10, activation='relu'))
model_m.add(MaxPooling1D(3))
model_m.add(Conv1D(160, 10, activation='relu'))
model_m.add(Conv1D(160, 10, activation='relu'))
model_m.add(GlobalAveragePooling1D())
model_m.add(Dropout(0.5))
model_m.add(Dense(num_classes, activation='softmax'))
The input data I have is 888 different pandas data frames, each of shape (511, 3), where 511 is the number of signal points, the 0th column holds sensor-1 values, the 1st column holds sensor-2 values, and the 2nd column holds the labelled signals.
Now, how should I combine all 888 data frames so that I can get x_train and y_train from X and Y using sklearn's train_test_split?
Also, I think the input shape I am defining for the model is wrong, and I am not sure I actually have TIME_PERIODS, because for one time point I have 2 sensor input values (orange and blue lines) and 1 output label (green line).
The context of the problem I am trying to solve: the input is time-based values from 2 sensors, say for the 1 AM-2 AM hour from a user; the output is the ranges of times during which the user was doing activity 1, activity 2, ..., activity X, e.g. 1:10-1:15, 1:15-1:30, 1:30-2:00. The plot above shows a sample training input and output.
The problem is inspired from here, but in my case I don't have any time period; each time point has one output label.
Update 1:
I am almost certain that my TIME_PERIODS = 1, since for prediction I will give 511 inputs and expect to get 511 output values.

Is each dataframe an independent sequence?
import os
import numpy as np
import pandas as pd

fileNames = os.listdir(dataDir)  # dataDir: the folder holding your 888 files
allFrames = [pd.read_csv(os.path.join(dataDir, name)).values for name in fileNames]  # pass whatever read_csv options your files need
allData = np.stack(allFrames, axis=0)     # shape (888, 511, 3)
inputData = allData[:, :, :num_sensors]   # shape (888, 511, 2)
outputData = allData[:, :, -1:]           # shape (888, 511, 1)
You can now use train_test_split the way you want.
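For instance, a minimal sketch (the 0.2 test fraction is arbitrary) that splits whole sequences between train and test:
from sklearn.model_selection import train_test_split

# split along the first axis, i.e. whole sequences go to train or test
x_train, x_test, y_train, y_test = train_test_split(inputData, outputData, test_size=0.2)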
Your input shape is correct.
If you want to predict the whole sequence, then you have to remove the poolings. Every convolution should use padding='same'.
And maybe you should add a Bidirectional(LSTM(units, return_sequences=True)) layer somewhere to make your model stronger.
A simple example model (notice that models are totally open to creativity):
from keras.layers import Input, Conv1D, Bidirectional, LSTM
from keras.models import Model

inputs = Input((TIME_PERIODS, num_sensors))  # "TIME_PERIODS" should be called "time_steps" to be precise
outputs = Conv1D(filters, 3, padding='same', activation='tanh')(inputs)  # filters = any number you like
outputs = Bidirectional(LSTM(units, return_sequences=True))(outputs)     # units = any number you like
outputs = Conv1D(num_classes, 3, padding='same', activation='softmax')(outputs)  # a kernel_size is required
model = Model(inputs, outputs)
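To train this for per-time-step classification, a minimal sketch (assuming the labels column holds integer class ids and reusing inputData/outputData from the loading code above) could be:
from keras.utils import to_categorical

y = to_categorical(outputData[..., 0], num_classes=num_classes)  # one-hot per time step, shape (888, 511, 4)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(inputData, y, batch_size=BATCH_SIZE, epochs=EPOCHS, validation_split=0.2)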

To say the least, you're on the right path. The full solution would look something like:
df = pd.concat([pd.read_csv(fname, index_col=<int>, header=<int>) for fname in filenames], ignore_index=True, axis=0)
inputs = df.iloc[:, :-1]
labels = df.iloc[:, -1]
X_train, X_test, y_train, y_test = train_test_split(inputs, labels, test_size=<float>)
To add a bit more information, note how you are doing
model_m.add(Conv1D(100, 10, activation='relu', input_shape=(TIME_PERIODS, num_sensors)))
and not
model_m.add(Conv1D(100, 10, activation='relu', padding='same', input_shape=(TIME_PERIODS, num_sensors)))
Because you're not setting padding='same' for the convolution layers, the input becomes smaller and smaller as you go deeper into the model. If that's what you need, that's okay. Otherwise, set padding='same'.
For example, without same-padding the width has shrunk to 146 by the time you reach the GlobalAveragePooling1D layer, whereas with same-padding it would still be 170 after the pooling. That's not a major problem here, but it can easily lead to negative sizes in deeper models.
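A quick sketch of that arithmetic (each valid convolution shrinks the length by kernel_size - 1, and the pooling divides it by 3):
L = 511
L = L - 10 + 1        # Conv1D(100, 10), valid padding -> 502
L = L - 10 + 1        # Conv1D(100, 10) -> 493
L = (L - 3) // 3 + 1  # MaxPooling1D(3) -> 164
L = L - 10 + 1        # Conv1D(160, 10) -> 155
L = L - 10 + 1        # Conv1D(160, 10) -> 146
print(L)  # 146; with padding='same' only the pooling shrinks it: (511 - 3) // 3 + 1 = 170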

Related

dimension of the input layer for embeddings in Keras

It is not clear to me whether there is any difference between specifying the input dimension, Input(shape=(20,)), or leaving it unspecified, Input(shape=(None,)), in the following example:
from keras.layers import Input, Embedding, Bidirectional, LSTM, Dense
from keras.models import Model

input_layer = Input(shape=(None,))
emb = Embedding(86, 300) (input_layer)
lstm = Bidirectional(LSTM(300)) (emb)
output_layer = Dense(10, activation="softmax") (lstm)
model = Model(input_layer, output_layer)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy", metrics=["acc"])
history = model.fit(my_x, my_y, epochs=1, batch_size=632, validation_split=0.1)
my_x (shape: 2000, 20) contains integers referring to characters, while my_y contains the one-hot encoding of some labels. With Input(shape=(None,)), I see that I could use model.predict(my_x[:, 0:10]), i.e., I could give only 10 characters as an input instead of 20: how is that possible? I was assuming that all the 20 dimensions in my_x were needed to predict the corresponding y.
With shape=(20,) you say that the sequences you feed into the model have a strict length of 20; with None you leave the length unspecified. While a model usually needs a fixed length, recurrent neural networks (such as the LSTM you use there) do not need a fixed sequence length, as they simply loop over the time steps. So the LSTM does not care whether your sequence contains 20 or 100 timesteps. However, when you specify the number of timesteps as 20, the LSTM expects 20 and will raise an error if it does not get them.
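To see this concretely, here is a small sketch (random integers standing in for your character ids) showing that the shape=(None,) model accepts different sequence lengths:
import numpy as np
from keras.layers import Input, Embedding, Bidirectional, LSTM, Dense
from keras.models import Model

# same architecture as in the question, with the sequence length left as None
input_layer = Input(shape=(None,))
emb = Embedding(86, 300)(input_layer)
lstm = Bidirectional(LSTM(300))(emb)
output_layer = Dense(10, activation="softmax")(lstm)
model = Model(input_layer, output_layer)

print(model.predict(np.random.randint(0, 86, size=(2, 20))).shape)  # (2, 10)
print(model.predict(np.random.randint(0, 86, size=(2, 10))).shape)  # (2, 10) -- a 10-step input works too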
For more information, see this post by Tim.

create Dense layers in a loop

I need to create multiple dense layers in a for loop; the number of iterations depends on the number of labels. I want to create one dense layer for each label. Each label has a different set of features, so I want to predict each label separately, with the corresponding feature set going into its own dense layers. Is that possible? The following code is my attempt.
from keras.layers import Input, Dense, concatenate
from keras.models import Model

layers = []
for i in range(num_labels):
    h1 = Dense(num_genes_per + 10, kernel_initializer='normal', input_dim=num_genes_per, activation='relu')(inputs)
    h2 = Dense(int(num_genes_per / 2), kernel_initializer='normal', activation='relu')(h1)
    output = Dense(1, kernel_initializer='normal', activation='linear')(h2)
    layers.append(output)
merged_output = concatenate(layers, axis=1)
model = Model(inputs, merged_output)
The output of each final Dense(1) layer will have shape [batch, 1], and the merged_output will have shape [batch, num_labels]. Is there any error in the above code?
I know it is not efficient, but if I concatenated the different sets of features into one input tensor and used only one dense layer to predict all labels at the same time, would it harm the prediction accuracy?
It depends on how you defined the features and labels. If features 1, 2 and 3 are used to predict label 1 and they have no relation to label 2, it does not make sense to include them when inferring label 2.
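If each label really does have its own feature set, a hedged sketch of how that could be wired up (the num_labels and num_genes_per values below are hypothetical placeholders) is to give each label its own Input:
from keras.layers import Input, Dense, concatenate
from keras.models import Model

num_labels = 3       # hypothetical; use your own counts
num_genes_per = 20   # hypothetical feature count per label

feature_inputs, outputs = [], []
for i in range(num_labels):
    inp = Input(shape=(num_genes_per,))                      # separate feature set for label i
    h1 = Dense(num_genes_per + 10, activation='relu')(inp)
    h2 = Dense(num_genes_per // 2, activation='relu')(h1)
    outputs.append(Dense(1, activation='linear')(h2))
    feature_inputs.append(inp)

merged_output = concatenate(outputs, axis=1)   # shape (batch, num_labels)
model = Model(feature_inputs, merged_output)   # fit with a list of num_labels feature arrays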

Keras 3Dconvnet time-series issue

I have a time-series of data and am running some very basic tests to get a feel for TensorFlow, Keras, Python, etc.
To set up the problem: I have a large number of images, where 7 images of data (with Cartesian dimensions 33 x 33), when accumulated, should yield a single value. Therefore the amount of 'x' data should be y*7, where y is the 'truth' data being trained against.
All of the training data is in a matrix entitled 'alldatax': [420420 x 33 x 33 x 7 x 1], where the dimensions are the total number of single images, the x-dimension, the y-dimension, the number of images to be accumulated for a single 'truth' value, and a final dimension necessary for 3D convolving.
The 'truth' matrix, alldatay, is a 1D matrix of length 420420 / 7 = 60060.
When running a simple convnet:
from tensorflow.keras import layers, models

model = models.Sequential()
model.add(layers.InputLayer(input_shape=(33,33,7,1)))
model.add(layers.Conv3D(16,(3,3,1), activation = 'relu', input_shape = (33,33,7,1)))
model.add(layers.LeakyReLU(alpha=0.3))
model.add(layers.MaxPooling3D((2,2,1)))
model.add(layers.Conv3D(32,(3,3,1), activation = 'relu'))
model.add(layers.LeakyReLU(alpha=0.3))
model.add(layers.MaxPooling3D((2,2,1)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation = 'relu'))
model.add(layers.LeakyReLU(alpha=0.3))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(32, activation = 'relu'))
model.add(layers.LeakyReLU(alpha=0.3))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation = 'relu'))
model.compile(optimizer = 'adam', loss = 'mse')
model.fit(x = alldatax, y = alldatay, batch_size = 1000, epochs = 50, verbose = 1, shuffle = False)
I get an error: ValueError: Input arrays should have the same number of samples as target arrays. Found 420420 input samples and 60060 target samples.
What needs to change to get the convnet to realize it needs 7*x for every y value?
Something seems to be wrong in your calculations.
You state that the neural net should take seven 33x33 images as one input example, so you set the input shape of the first layer to (33,33,7,1), which is right. This means that for every 33x33x7x1 input there should be exactly one y value.
Since your data comprises 420420 33x33x7x1 images, there should be 420420 y values, not 60060.
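If the intent is that each truth value covers a group of 7 of those samples (an assumption; only you know how alldatax was built), a minimal sketch of matching the counts is to repeat each label 7 times:
import numpy as np

# assumption: every 7 consecutive x samples share one truth value
alldatay_repeated = np.repeat(alldatay, 7)              # (60060,) -> (420420,)
assert alldatay_repeated.shape[0] == alldatax.shape[0]  # sample counts now match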

Confusion about Keras RNN Input shape requirement

I have read plenty of posts on this point. They are inconsistent with each other, and every answer seems to have a different explanation, so I thought I would ask based on my analysis of all of them.
As the Keras RNN documentation states, the input shape is always of the form (batch_size, timesteps, input_dim). I am a bit confused about that, but I guess (not sure, though) that input_dim is always 1, while timesteps depends on your problem (it could be the data dimension as well). Is that roughly correct?
The reason for this question is that I always get an error when trying to change the value of input_dim to my dataset's dimension (as the name input_dim suggests!!), so I made an assumption that input_dim represents the shape of the input vector to the LSTM at a single time step. Am I wrong again?
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import LSTM, Dense

C = C.reshape((C.shape[0], C.shape[1], 1))
tr_C, ts_C, tr_r, ts_r = train_test_split(C, r, train_size=.8)
batch_size = 1000
print('Build model...')
model = Sequential()
model.add(LSTM(8, batch_input_shape=(batch_size, C.shape[1], 1), stateful=True, activation='relu'))
model.add(Dense(1, activation='relu'))
print('Training...')
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(tr_C, tr_r,
          batch_size=batch_size, epochs=1,
          shuffle=True, validation_data=(ts_C, ts_r))
Thanks!
Indeed, input_dim is the size of the input vector at a single time step. In other words, input_dim is the number of input features.
It's not necessarily 1, though. If you're working with more than one variable, it can be any number.
Suppose you have 10 sequences, each sequence has 200 time steps, and you're measuring just temperature. Then you have one feature:
input_shape = (200,1) -- notice that the batch size (number of sequences) is not included here
batch_input_shape = (10,200,1) -- only in specific cases, like stateful=True, will you need a batch input shape.
Now suppose you're measuring not only temperature, but also pressure and volume. Now you've got three input features:
input_shape = (200,3)
batch_input_shape = (10,200,3)
In other words, the first dimension is the number of different sequences. The second is the length of the sequence (how many measures along time). And the last is how many vars at each time.
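A minimal sketch of the three-feature case above (the layer sizes are arbitrary):
from keras.models import Sequential
from keras.layers import LSTM, Dense

# 200 time steps, 3 features (temperature, pressure, volume); batch size is left out
model = Sequential()
model.add(LSTM(16, input_shape=(200, 3)))
model.add(Dense(1))
model.summary()  # expects input of shape (batch, 200, 3)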

Keras conv1d layer parameters: filters and kernel_size

I am very confused by these two parameters in the conv1d layer from keras:
https://keras.io/layers/convolutional/#conv1d
the documentation says:
filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
kernel_size: An integer or tuple/list of a single integer, specifying the length of the 1D convolution window.
But that does not seem to relate to the standard terminologies I see on many tutorials such as https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/ and https://machinelearningmastery.com/sequence-classification-lstm-recurrent-neural-networks-python-keras/
Using the second tutorial link, which uses Keras, I'd imagine that 'kernel_size' in fact corresponds to the conventional 'filter' concept, which defines the sliding window on the input feature space. But what about the 'filters' parameter in conv1d? What does it do?
For example, in the following code snippet:
model.add(embedding_layer)
model.add(Dropout(0.2))
model.add(Conv1D(filters=100, kernel_size=4, padding='same', activation='relu'))
suppose the embedding layer outputs a matrix of dimension 50 (rows, each row is a word in a sentence) x 300 (columns, the word-vector dimension); how does the conv1d layer transform that matrix?
Many thanks
You're right to say that kernel_size defines the size of the sliding window.
The filters parameter is just how many different windows you will have (all of them with the same length, which is kernel_size), i.e. how many different results or channels you want to produce.
When you use filters=100 and kernel_size=4, you are creating 100 different filters, each of them with length 4. The result will be 100 different convolution outputs.
Also, each filter has enough parameters to consider all input channels.
The Conv1D layer expects these dimensions:
(batchSize, length, channels)
I suppose the best way to use it is to have the number of words in the length dimension (as if the words in order formed a sentence), and the channels be the output dimension of the embedding (numbers that define one word).
So:
batchSize = number of sentences
length = number of words in each sentence
channels = dimension of the embedding's output.
The convolutional layer will pass 100 different filters, each filter will slide along the length dimension (word by word, in groups of 4), considering all the channels that define the word.
The outputs are shaped as:
(number of sentences, 50 words, 100 output dimension or filters)
The filters are shaped as:
(4 = length, 300 = word vector dimension, 100 output dimension of the convolution)
The code below, built from the explanation above, can help illustrate this. I ran into a similar question and answered it myself.
from tensorflow.keras.layers import Conv1D, MaxPool1D
import tensorflow.keras.backend as K
import numpy as np
import tensorflow as tf

tf.random.set_seed(1)  # nowadays instead of tf.set_random_seed(1)
batch, rows, cols = 3, 8, 3
m, n, k = batch, rows, cols
input_shape = (batch, rows, cols)
np.random.seed(132)
data = np.random.randint(low=1, high=6, size=input_shape, dtype='int32')
data = np.float32(data)
data = tf.constant(data)
print("Data:")
print(K.eval(data))
print()
print(f'm,n,k:{input_shape}')
#############################
# Understanding filters and kernel_size
#############################
num_filters = 5
kernel_size = 3
'''
A few notes about kernel_size:
1. max kernel_size == max rows
2. since this is Conv1D, each filter is a matrix of ones with kernel_size rows:
if kernel_size = 1, [[1,1,1]]
if kernel_size = 2, [[1,1,1],[1,1,1]]
if kernel_size = 3, [[1,1,1],[1,1,1],[1,1,1]]
I have chosen tf.keras.initializers.constant(1) to create matrices of ones.
The number of rows in each matrix is kernel_size.
'''
y = Conv1D(filters=num_filters, kernel_size=kernel_size,
           kernel_initializer=tf.keras.initializers.constant(1),  # instead of e.g. glorot_uniform(seed=12)
           input_shape=(n, k)  # (length, channels); ignored here since the layer is called on a concrete tensor
           )(data)
#########################
# Checking the outcome
#########################
print(K.eval(y))
print(f' Resulting output_shape == (batch_size, num_rows-kernel_size+1, num_filters): {y.shape}')
# Verification: with all-ones kernels, each conv output equals the sum of
# kernel_size consecutive rows of the per-row channel sums below
channel_sums = K.eval(tf.math.reduce_sum(data, axis=2, keepdims=True))
print(channel_sums)
###########################################
# Understanding MaxPool and strides
###########################################
pool = MaxPool1D(pool_size=3, strides=3)(y)
print(K.eval(pool))
print(f'Shape of Pool: {pool.shape}')
