Can model.summary() in Keras (Sequential) - Multi Input (Numerical + n Embeddings) work? - python-3.x

I'm having difficulty printing the model.summary() after using the Sequential class in keras to build a structure like so:
embedding_inputs*    numerical_input
         \                /
          \              /
          -- CONCATENATE --
                 |
            DENSE (50)   #1
            DENSE (50)   #2
            DENSE (50)   #3
            DENSE (50)   #4
            DENSE (1)    #output

* embedding_inputs are a bunch of concatenated Sequential models built from
  categorical variables. For the sake of simplicity, let's pretend there is
  only one.
I know the model works and looks fine without the embedding layer(s). But after adding an embedding layer and a concatenate layer, I'm told I need to build the model, or that my output tensors "must be the output of a Keras Layer."
I'm utterly confused at this point. (I'm used to the functional API but, embarrassingly, am having so much trouble with the Sequential one and would like to learn.)
categorical = Sequential()
categorical.add(Embedding(
    input_dim=len(df_train['mon'].astype('category').cat.categories),
    output_dim=2,
    input_length=1))
categorical.add(Flatten())

numeric = Sequential()
numeric.add(InputLayer(input_shape=(1, len(numeric_column_names)),
                       dtype='float32', name='numerical_in'))

model = Sequential()
model.add(Concatenate([numeric, categorical]))
model.add(Dense(50, input_dim=50, kernel_initializer='normal', activation='relu'))
model.add(Dense(50, input_dim=50, kernel_initializer='normal', activation='relu'))
model.add(Dense(50, input_dim=50, kernel_initializer='normal', activation='relu'))
model.add(Dense(50, input_dim=50, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))  # output layer (1 number)
If I attempt to use model.summary() without a build:
ValueError: This model has not yet been built. Build the model first by calling build() or calling fit() with some data. Or specify input_shape or batch_input_shape in the first layer for automatic build.
If I attempt to use model.build() first, I get a message like:
ValueError: Output tensors to a Sequential must be the output of a Keras `Layer` (thus holding past layer metadata). Found: None
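For context, a Sequential model cannot express a two-branch, multi-input graph like the one sketched above, which is why build() has nothing complete to build; the usual workaround is the functional API. Below is a minimal sketch, where n_categories and n_numeric are placeholders for the expressions used in the code above:
from keras.layers import Concatenate, Dense, Embedding, Flatten, Input
from keras.models import Model

n_categories = 12  # placeholder for len(df_train['mon'].astype('category').cat.categories)
n_numeric = 5      # placeholder for len(numeric_column_names)

cat_in = Input(shape=(1,), name='categorical_in')
cat = Embedding(input_dim=n_categories, output_dim=2, input_length=1)(cat_in)
cat = Flatten()(cat)                              # (batch, 2)

num_in = Input(shape=(n_numeric,), dtype='float32', name='numerical_in')

x = Concatenate()([cat, num_in])                  # (batch, 2 + n_numeric)
for _ in range(4):
    x = Dense(50, kernel_initializer='normal', activation='relu')(x)
out = Dense(1, kernel_initializer='normal')(x)    # single-number output

model = Model(inputs=[cat_in, num_in], outputs=out)
model.summary()                                   # works without a manual build()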

Related

Getting a continuous prediction even though output layer is sigmoid

Below is my code. I'm looking to get a direction prediction on price. I understand from various tutorials that the output layer needs a sigmoid activation to obtain a binary prediction.
However, even though I have done so as below, my prediction still comes out as a continuous value. Why?
model = Sequential()
# First LSTM layer (256 units) with dropout regularisation; input shape from the data
model.add(LSTM(units=256, input_shape=(data.shape[1], data.shape[2]), return_sequences=True))
model.add(Dropout(0.4))
# Second LSTM layer with dropout
model.add(LSTM(units=128, return_sequences=False))
model.add(Dropout(0.4))
# Dense hidden layer
model.add(Dense(64, activation='relu'))
# Output layer: sigmoid, since we are predicting direction (binary)
model.add(Dense(1, activation='sigmoid'))
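That is expected: a sigmoid unit outputs a probability in (0, 1), not a hard label, so the raw predictions always look continuous. To get a 0/1 direction you threshold the probabilities yourself; a minimal sketch, with 0.5 as an assumed cutoff and X_test standing in for your evaluation data:
probs = model.predict(X_test)               # shape (n_samples, 1), values in (0, 1)
direction = (probs > 0.5).astype('int32')   # 1 = up, 0 = down under the assumed cutoff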

Why does my CNN only predict one class?

I have a model that needs to detect whether a plant is dead or alive. It only predicts one class; the data is imbalanced, but I have used class weights to counter the imbalance.
I have looked at lots of questions about this problem, but none of the fixes seem to work. Apparently it can occur when overfitting, so I have used dropout, but the model still only predicts one class.
Here's the model:
model = Sequential()
# Convolutional layer / input layer
model.add(Conv2D(60, (5, 5), activation='relu', input_shape=np.shape(X[1])))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.8))
model.add(Flatten())
model.add(Dropout(0.7))
model.add(Dense(130, activation='relu'))
model.add(Dropout(0.6))
# Output layer
model.add(Dense(2, activation='softmax'))

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X, y, epochs=6, batch_size=32, class_weight=class_weight,
          validation_data=(X_test, y_test))
It should predict both classes: 1 for a healthy plant and 0 for an unhealthy plant.
Since your problem is a binary classification and your output layer has two softmax units, your loss should be categorical_crossentropy (with one-hot encoded labels), not binary_crossentropy:
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
However, if you want to keep binary_crossentropy, change your output layer to a single unit with sigmoid; this way the model outputs how likely the input is to be the positive class with only one unit.
model.add(Dense(1, activation='sigmoid'))
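For completeness, a hedged sketch of that single-unit alternative end to end; note the labels must then be a flat 0/1 vector rather than one-hot pairs:
model.add(Dense(1, activation='sigmoid'))   # one unit: probability of the positive (healthy) class
model.compile(loss='binary_crossentropy',   # matches the single sigmoid unit
              optimizer='adam',
              metrics=['accuracy'])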

Input of an RNN model layer - how does it work?

I don't understand the input of an RNN model. Why does it show None before the node size in every layer? Why is it (None, 1) and (None, 12)?
This is my code.
K.clear_session()
model = Sequential()
model.add(Dense(12, input_dim=1, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()
This is not a RNN, it's just a fully connected network (FC or Dense).
The first dimension of every tensor in a Keras network is the batch_size, which represents the number of "samples" or "examples" you are passing to the model. The value is None because this dimension is not fixed, you can have batches of any size you want.
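To see this concretely, the same model accepts batches of any size; a small sketch with dummy inputs:
import numpy as np
print(model.predict(np.zeros((5, 1))).shape)    # (5, 1)  - a batch of 5 samples
print(model.predict(np.zeros((32, 1))).shape)   # (32, 1) - a batch of 32 samples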

deep learning data preparation

I have a text dataset that contains 6 classes. For each sample I have a percent value per class, and the sum of the 6 percent values is 100% (the features are related to each other). For example:
{A:16, B:35, C:7, D:0, E:3, F:40}
How can I feed a deep learning algorithm with this dataset? I want the prediction to be in exactly the same shape as the training labels.
Here is what you can do:
First, normalize all of your labels by scaling them to the range 0-1 (divide by 100).
Use a softmax layer for prediction.
Here is some Keras code for intuition:
model = Sequential()
model.add(Dense(100, input_dim=x.shape[1], activation='relu'))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
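Concretely, dividing the percentages by 100 turns each label row into a distribution that sums to 1, which categorical_crossentropy accepts as soft targets. A sketch, assuming y holds the raw percent values as a numeric array:
y = y / 100.0   # {A:16, B:35, C:7, D:0, E:3, F:40} -> [0.16, 0.35, 0.07, 0.00, 0.03, 0.40]
model.fit(x, y, epochs=10, batch_size=32)   # epochs and batch_size are illustrative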

keras - embedding layer, can I alter values of a trained embedding layer in the pipeline of a model?

I am following examples on this page: https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/
which trains a word embedding on the data using an Embedding layer, like below:
model = Sequential()
model.add(Embedding(100, 8, input_length=max_length))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
# summarize the model
print(model.summary())
The model starts by learning a word embedding from the data, creating an 8-dimensional vector for each word.
What I would like to do is, after this embedding is learned, alter the matrix (the vector of each word) by appending two more dimensions to the end of each vector. I will have another process that computes the values for these two dimensions.
Is there any way I can do this?
Many thanks in advance.
Yes - it's possible. Try the following procedure:
Extract the weight matrix:
weight_matrix = model.layers[0].get_weights()[0]  # matrix shape (100, 8)
Append your vectors:
new_weight_matrix = your_append(weight_matrix)
# Be sure that new_weight_matrix has shape (100, 10)
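For instance, with NumPy (extra_dims is a placeholder for whatever your other process computes):
import numpy as np
extra_dims = np.zeros((weight_matrix.shape[0], 2))            # placeholder values
new_weight_matrix = np.hstack([weight_matrix, extra_dims])    # shape (100, 10)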
Build an adjusted copy of your model:
new_model = Sequential()
new_model.add(Embedding(100, 10, input_length=max_length)) # Notice a change
new_model.add(Flatten())
new_model.add(Dense(1, activation='sigmoid'))
(Optional) Freeze layers: in case you want to freeze the embedding, set:
new_model = Sequential()
new_model.add(Embedding(100, 10, input_length=max_length,
                        trainable=False))  # Notice a change
new_model.add(Flatten())
new_model.add(Dense(1, activation='sigmoid'))
Compile a new model:
new_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
After compiling (and before training), replace the embedding weights with the new ones; set_weights expects a list of arrays:
new_model.layers[0].set_weights([new_weight_matrix])
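A quick sanity check that the new matrix was installed, under the shapes assumed above:
assert new_model.layers[0].get_weights()[0].shape == (100, 10)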
