Getting a continuous prediction even though output layer is sigmoid - keras

Below is my code. I'm looking to get a direction prediction on price. I understand from various tutorials that it is the output layer that needs a sigmoid activation to obtain a binary prediction.
However, even though I have done so as below, why does my prediction still come out as a continuous value?
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

model = Sequential()
# First LSTM layer with 256 units and dropout regularisation; input shape taken from the data
model.add(LSTM(units=256, input_shape=(data.shape[1], data.shape[2]), return_sequences=True))
model.add(Dropout(0.4))
# Second LSTM layer with dropout
model.add(LSTM(units=128, return_sequences=False))
model.add(Dropout(0.4))
# Add a Dense layer
model.add(Dense(64, activation='relu'))
# Output layer: as we are predicting direction, we use the sigmoid activation function
model.add(Dense(1, activation='sigmoid'))
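Note that this behaviour is expected: a sigmoid outputs a continuous probability in [0, 1], not a hard label. To turn it into a binary up/down prediction, threshold the probability. A minimal sketch, assuming X_test holds your prepared input windows:

probs = model.predict(X_test)        # continuous probabilities in [0, 1]
labels = (probs > 0.5).astype(int)   # hard 0/1 direction labels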

Related

Why is this simple keras 3-class classifier predicting only one class?

I am trying to create a simple 3 class deep learning classifier using keras as follows:
clf = Sequential()
clf.add(Dense(20, activation='relu', input_dim=NUM_OF_FEATURES))
clf.add(Dense(10, activation='relu'))
clf.add(Dense(3, activation='relu'))
clf.add(Dense(1, activation='softmax'))
# Model Compilation
clf.compile(optimizer='adam',
            loss='categorical_crossentropy',
            metrics=['accuracy'])
# Training the model
clf.fit(X_train,
        y_train,
        epochs=10,
        batch_size=16,
        validation_data=(X_val, y_val))
However, after training, when predicting it ALWAYS predicts the same class (class 1) out of the 3 classes.
Is my network architecture incorrect?
I am new to deep learning and AI.
If you want a network to classify three classes, your last dense layer should have three output nodes. In the example, the last dense layer has one output node.
clf = Sequential()
clf.add(Dense(20, activation='relu', input_dim=NUM_OF_FEATURES))
clf.add(Dense(10, activation='relu'))
clf.add(Dense(3, activation='relu'))
clf.add(Dense(3, activation='softmax'))
For each input sample, the output will be three values, all of which sum to one. These represent the probabilities that the input belongs to each class.
Regarding the loss function, if you want to use cross entropy, you have a choice between sparse categorical cross entropy and categorical cross entropy. The latter expects ground truth labels to be one-hot encoded (you can use tf.one_hot for this). In other words, the shape of the labels is the same as the shape of the network's output. Sparse categorical cross entropy, on the other hand, expects labels of rank N-1, where N is the rank of the neural network's output. In other words, these are the integer labels before one-hot encoding.
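A minimal sketch of the two options, assuming y_train_int holds integer class labels in {0, 1, 2}:

import tensorflow as tf

# Option 1: categorical cross entropy expects one-hot labels
y_onehot = tf.one_hot(y_train_int, depth=3)   # shape: (num_samples, 3)
clf.compile(optimizer='adam',
            loss='categorical_crossentropy',
            metrics=['accuracy'])
clf.fit(X_train, y_onehot, epochs=10, batch_size=16)

# Option 2: sparse categorical cross entropy expects the integer labels directly
clf.compile(optimizer='adam',
            loss='sparse_categorical_crossentropy',
            metrics=['accuracy'])
clf.fit(X_train, y_train_int, epochs=10, batch_size=16)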
When the model is used for inference, the predicted class values can be retrieved with argmax of the last dimension of the predictions.
predictions = clf.predict(x)
classes = predictions.argmax(-1)

Can model.summary() in Keras (Sequential) - Multi Input (Numerical + n Embeddings) work?

I'm having difficulty printing the model.summary() after using the Sequential class in keras to build a structure like so:
embedding_inputs*    numerical_input
         \                /
          \              /
          -- CONCATENATE --
                  |
             DENSE (50) #1
             DENSE (50) #2
             DENSE (50) #3
             DENSE (50) #4
             DENSE (1)  #output

* embedding_inputs are a bunch of concatenated sequential models from
  categorical variables. For the sake of simplicity, let's pretend there
  is only one.
I know without the embedding layer(s), my model works and looks fine. But following my addition of an embedding layer and a concatenate layer, I'm told I need to build the model or that my Output tensors "must be the output of a Keras Layer."
I'm just utterly confused at this point. (I'm used to using the functional api but embarrassingly am having so much trouble with the Sequential one and would like to learn).
categorical = Sequential()
categorical.add(Embedding(
    input_dim=len(df_train['mon'].astype('category').cat.categories),
    output_dim=2,
    input_length=1))
categorical.add(Flatten())

numeric = Sequential()
numeric.add(InputLayer(input_shape=(1, len(numeric_column_names)), dtype='float32', name='numerical_in'))

model = Sequential()
model.add(Concatenate([numeric, categorical]))
model.add(Dense(50, input_dim=50, kernel_initializer='normal', activation='relu'))
model.add(Dense(50, input_dim=50, kernel_initializer='normal', activation='relu'))
model.add(Dense(50, input_dim=50, kernel_initializer='normal', activation='relu'))
model.add(Dense(50, input_dim=50, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))  # output layer (1 number)
If I attempt to use model.summary() without a build:
ValueError: This model has not yet been built. Build the model first by calling build() or calling fit() with some data. Or specify input_shape or batch_input_shape in the first layer for automatic build.
If I attempt to use model.build() first, I get a message like:
ValueError: Output tensors to a Sequential must be the output of a Keras `Layer` (thus holding past layer metadata). Found: None
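For what it's worth, a multi-input topology like the diagram above is what the functional API is for: a Sequential model is a single linear stack and cannot merge two input branches. A minimal functional sketch under the question's names (the month vocabulary size of 12 and the flat numeric input shape are assumptions):

from keras.layers import Input, Embedding, Flatten, Concatenate, Dense
from keras.models import Model

# Categorical branch: one integer-encoded month -> 2-d embedding
cat_in = Input(shape=(1,), name='categorical_in')
cat_emb = Flatten()(Embedding(input_dim=12, output_dim=2, input_length=1)(cat_in))

# Numerical branch: one flat vector of numeric features
num_in = Input(shape=(len(numeric_column_names),), dtype='float32', name='numerical_in')

# Merge the branches, then stack the dense layers from the diagram
x = Concatenate()([cat_emb, num_in])
for _ in range(4):
    x = Dense(50, kernel_initializer='normal', activation='relu')(x)
out = Dense(1, kernel_initializer='normal')(x)

model = Model(inputs=[cat_in, num_in], outputs=out)
model.summary()  # works: the graph is fully specified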

Why does my CNN only predict one class?

I have a model that needs to detect whether a plant is dead or alive. It is only predicting one class; the data is imbalanced, but I have used class weights to counter the imbalance.
I have looked at lots of questions about this problem, but none of the fixes seem to work. Apparently this problem occurs when overfitting, so I have used dropout. But the model still only predicts one class.
Here's the model:
model = Sequential()
# Convolutional layer / input layer
model.add(Conv2D(60, (5, 5), activation='relu', input_shape=np.shape(X[1])))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.8))
model.add(Flatten())
model.add(Dropout(0.7))
model.add(Dense(130, activation='relu'))
model.add(Dropout(0.6))
# Output layer
model.add(Dense(2, activation='softmax'))

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X, y, epochs=6, batch_size=32, class_weight=class_weight,
          validation_data=(X_test, y_test))
Usually it should predict both classes, with 1: a healthy plant and 0: an unhealthy plant.
Since your problem is a binary classification and your output layer has 2 units with a softmax activation, you should change your loss to categorical cross entropy (with one-hot encoded labels) so that the loss matches the output layer:
model.add(Dense(2, activation='softmax'))
However, if you want to keep binary cross entropy and a sigmoid, just change your output layer to a single unit; that unit then outputs how likely the input is to belong to the positive class, and the labels stay plain 0/1.
model.add(Dense(1, activation='sigmoid'))
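A minimal side-by-side sketch of the two consistent pairings, using dummy data so it runs standalone:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

X = np.random.rand(100, 8)         # dummy features
y = np.random.randint(0, 2, 100)   # dummy 0/1 labels

# Option A: 2-unit softmax + categorical cross entropy + one-hot labels
model_a = Sequential([Dense(16, activation='relu', input_dim=8),
                      Dense(2, activation='softmax')])
model_a.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model_a.fit(X, to_categorical(y, num_classes=2), epochs=2, batch_size=32)

# Option B: 1-unit sigmoid + binary cross entropy + plain 0/1 labels
model_b = Sequential([Dense(16, activation='relu', input_dim=8),
                      Dense(1, activation='sigmoid')])
model_b.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model_b.fit(X, y, epochs=2, batch_size=32)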

How does the input of an RNN model layer work?

I don't understand the input of my RNN model. Why does it show None before the node size in every layer? Why is it (None, 1) and (None, 12)?
This is my code.
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense, Dropout

K.clear_session()
model = Sequential()
model.add(Dense(12, input_dim=1, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()
This is not a RNN, it's just a fully connected network (FC or Dense).
The first dimension of every tensor in a Keras network is the batch_size, which represents the number of "samples" or "examples" you are passing to the model. The value is None because this dimension is not fixed, you can have batches of any size you want.
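A quick way to see this in action: the same model from above accepts batches of any size, and only that first dimension is left open.

import numpy as np

print(model.output_shape)                      # (None, 1): batch size is not fixed
print(model.predict(np.zeros((5, 1))).shape)   # (5, 1)
print(model.predict(np.zeros((32, 1))).shape)  # (32, 1)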

Is there any method to plot vector (matrix) values after each NN layer?

I modified an existing activation function and am using it in the convolutional layer of the neural network. I would like to know how it performs compared to the existing activation function. Is there any method/function to plot the results (matrix values) after each neural network layer, so that I could tune my activation function according to the values for better results?
model = Sequential()
e = Embedding(vocab_size, 100, weights=[embedding_matrix], input_length=max_length, trainable=False)
model.add(e)
model.add(Conv1D(64, kernel_size, padding='valid', activation=newactivation, strides=1))
model.add(MaxPooling1D(pool_size=pool_size))
model.add(Conv1D(256, kernel_size, padding='valid', activation=newactivation, strides=1))
model.add(MaxPooling1D(pool_size=pool_size))
# return_sequences=True so the following LSTM receives a 3-D sequence input
model.add(Bidirectional(GRU(gru_output_size, dropout=0.2, recurrent_dropout=0.2, return_sequences=True)))
model.add(Bidirectional(LSTM(lstm_output_size)))
model.add(Dense(nclass, activation='softmax'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
print(model.summary())
model.fit(padded_docs, y_train, epochs=epoch_size, verbose=0)
loss, accuracy = model.evaluate(tpadded_docs, y_test, verbose=0)
I cannot comment yet, so I post this as an answer:
Refer to the Keras FAQ: "How can I obtain the output of an intermediate layer?"
It shows how you can access the output of each layer. If you use the variant based on a Keras backend function, you can even access the output of the model in the learning phase (useful if your model contains layers that behave differently in training vs. testing).
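A minimal sketch of that FAQ approach applied to the model above (the layer name 'conv1d_1' is an assumption; check the names printed by model.summary()):

from keras.models import Model

# Helper model mapping the original input to an intermediate layer's output
intermediate_model = Model(inputs=model.input,
                           outputs=model.get_layer('conv1d_1').output)
activations = intermediate_model.predict(padded_docs)
print(activations.shape)  # matrix values after the first Conv1D layer; plot or inspect as needed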
