This is my neural network:
model = Sequential()
act = 'relu'
model.add(Dense(430, input_shape=(3,)))
model.add(Activation(act))
model.add(Dense(256))
model.add(Activation(act))
model.add(Dropout(0.42))
model.add(Dense(148))
model.add(Activation(act))
model.add(Dropout(0.3))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=[myAccuracy])
Here, myAccuracy is a custom metric.
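A Keras custom metric is just a function of y_true and y_pred that returns a tensor; a hypothetical sketch of what such a metric could look like (the actual myAccuracy is not shown in the question):

import keras.backend as K

def myAccuracy(y_true, y_pred):
    # Hypothetical example: fraction of predictions within 0.5 of the target
    return K.mean(K.cast(K.abs(y_true - y_pred) < 0.5, 'float32'))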
I used this command to capture the output of the final layer (Dense(1)):
output = model.layers[8].output
print(output)
It gave me this output:
Tensor("dense_4/BiasAdd:0", shape=(?, 1), dtype=float32)
I want the output for every training example I have. How do I do this?
predictions = model.predict(allTrainingExamples)
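model.predict runs a forward pass and returns the final layer's output for every example you pass in. If you want the activations of some other layer instead, the same idea works for any layer index: wrap that layer's output in a new Model (a sketch, reusing the layer index from the question):

from keras.models import Model

# A model that maps the original inputs to the output of a chosen layer
# (index 8 is the final Dense(1) in the network above; use a smaller index for hidden layers)
intermediate_model = Model(inputs=model.input, outputs=model.layers[8].output)
layer_outputs = intermediate_model.predict(allTrainingExamples)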
Related
I'm trying to understand how to work with CNNs. I would like to be able to extract the "encoded" version of an image from this model; the encoded version should be the output of the Dense(4096) layer.
def get_siamese_model(input_shape):
    # Define the tensors for the two input images
    left_input = Input(input_shape)
    right_input = Input(input_shape)

    # Convolutional Neural Network
    model = Sequential()
    model.add(Conv2D(64, (10, 10), activation='relu', input_shape=input_shape,
                     kernel_initializer=initialize_weights, kernel_regularizer=l2(2e-4)))
    model.add(MaxPooling2D())
    model.add(Conv2D(128, (7, 7), activation='relu',
                     kernel_initializer=initialize_weights,
                     bias_initializer=initialize_bias, kernel_regularizer=l2(2e-4)))
    model.add(MaxPooling2D())
    model.add(Conv2D(128, (4, 4), activation='relu', kernel_initializer=initialize_weights,
                     bias_initializer=initialize_bias, kernel_regularizer=l2(2e-4)))
    model.add(MaxPooling2D())
    model.add(Conv2D(256, (4, 4), activation='relu', kernel_initializer=initialize_weights,
                     bias_initializer=initialize_bias, kernel_regularizer=l2(2e-4)))
    model.add(Flatten())
    model.add(Dense(4096, activation='sigmoid',
                    kernel_regularizer=l2(1e-3),
                    kernel_initializer=initialize_weights, bias_initializer=initialize_bias))

    # Generate the encodings (feature vectors) for the two images
    encoded_l = model(left_input)
    encoded_r = model(right_input)

    # Add a customized layer to compute the absolute difference between the encodings
    L1_layer = Lambda(lambda tensors: K.abs(tensors[0] - tensors[1]))
    L1_distance = L1_layer([encoded_l, encoded_r])

    # Add a dense layer with a sigmoid unit to generate the similarity score
    prediction = Dense(1, activation='sigmoid', bias_initializer=initialize_bias)(L1_distance)

    # Connect the inputs with the outputs
    siamese_net = Model(inputs=[left_input, right_input], outputs=prediction)

    # Return the model
    return siamese_net
Moreover, can I give only one image as input and get the encoding of that image?
Many thanks!
Edit: I have tried doing this:
layer_name = 'sequential_3'
model2= Model(inputs=model.input, outputs=model.get_layer(layer_name).output)
but I get this error:
ValueError: Graph disconnected: cannot obtain value for tensor KerasTensor(type_spec=TensorSpec(shape=(None, 105, 105, 1), dtype=tf.float32, name='conv2d_12_input'), name='conv2d_12_input', description="created by layer 'conv2d_12_input'") at layer "conv2d_12". The following previous layers were accessed without issue: []
and I don't know how to fix it.
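The Model(inputs=model.input, outputs=model.get_layer(layer_name).output) call fails because the inner Sequential network builds its own input tensor, which is not connected to the outer model's inputs. A sketch of a workaround, assuming the inner CNN is the sub-model named 'sequential_3' and the images are 105x105x1 as in the error message: retrieve the inner Sequential model itself and call predict on it directly.

import numpy as np

# Pull the encoder (the inner Sequential ending in Dense(4096)) out of the siamese model
encoder = siamese_net.get_layer('sequential_3')

# Encoding of a single image; add a batch dimension first
single_image = np.expand_dims(image, axis=0)   # image assumed to have shape (105, 105, 1)
encoding = encoder.predict(single_image)       # shape (1, 4096)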
For my NLP project I used CountVectorizer to extract features from a dataset, using vectorizer = CountVectorizer(stop_words='english') and all_features = vectorizer.fit_transform(data.Text). I also wrote a simple RNN model with Keras, but I am not sure how to do the padding and the tokenizer step and get the data trained on the model.
My code for the RNN is:
model.add(keras.layers.recurrent.SimpleRNN(units = 1000, activation='relu',
use_bias=True))
model.add(keras.layers.Dense(units=1000, input_dim = 2000, activation='sigmoid'))
model.add(keras.layers.Dense(units=500, input_dim=1000, activation='relu'))
model.add(keras.layers.Dense(units=2, input_dim=500,activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
Can someone please give me some advice on this?
Thank you
Use an Embedding layer - you don't need CountVectorizer here; encode and pad the text and let an Embedding layer learn the features:
https://github.com/dnishimoto/python-deep-learning/blob/master/UFO%20.ipynb
docs=ufo_df["summary"] #text
LABELS=['Egg', 'Cross','Sphere', 'Triangle','Disk','Oval','Rectangle','Teardrop']
#LABELS=['Triangle']
target=ufo_df[LABELS]
#print([len(d) for d in docs])
encoded_docs=[one_hot(d,vocab_size) for d in docs]
#print([np.max(d) for d in encoded_docs])
padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
#print([d for d in padded_docs])
model=Sequential()
model.add(Embedding(vocab_size, 8, input_length=max_length))
model.add(Flatten())
model.add(Dense(8, activation='softmax'))
#model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(padded_docs, target, epochs=50, verbose=0)
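If you want the Tokenizer and padding workflow asked about in the question rather than one_hot, a minimal sketch looks like this; vocab_size and max_length are assumptions and should match whatever the rest of the notebook uses:

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

vocab_size = 5000    # assumed vocabulary cap
max_length = 200     # assumed maximum sequence length

tokenizer = Tokenizer(num_words=vocab_size)
tokenizer.fit_on_texts(docs)
encoded_docs = tokenizer.texts_to_sequences(docs)
padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')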
I don't understand this error. Here is my model:
L_branch = Sequential()
L_branch.add(Embedding(vocab_size, output_dim=15, input_length=3000, trainable=True))
L_branch.add(Conv1D(50, activation='relu', kernel_size=70, input_shape=(3000, )))
L_branch.add(MaxPooling1D(15))
L_branch.add(Flatten())
# second model
R_branch = Sequential()
R_branch.add(Dense(14, input_shape=(14,), activation='relu'))
R_branch.add(Flatten())
merged = Concatenate()([L_branch.output, R_branch.output])
out = Dense(70, activation='softmax')(merged)
final_model = Model([L_branch.input, R_branch.input], out)
final_model.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
final_model.summary()
final_model.fit(
[input1, input2],
Y_train,
batch_size=200,
epochs=1,
verbose=1,
validation_split=0.1
)
where input1 has the shape (5039, 3000) and input2 has the shape (5039, 14).
So why is the Flatten layer asking for a third dimension, if Dense does not change the number of dimensions the way an Embedding or convolution layer does?
Remove the Flatten layer from the second branch; there is no need for it. The Dense output there is already 2-D (batch, 14), so Flatten has nothing to collapse and complains about the missing third dimension. Here is the full structure:
L_branch = Sequential()
L_branch.add(Embedding(vocab_size, output_dim=15, input_length=3000, trainable=True))
L_branch.add(Conv1D(50, activation='relu', kernel_size=70, input_shape=(3000, )))
L_branch.add(MaxPooling1D(15))
L_branch.add(Flatten())
# second model
R_branch = Sequential()
R_branch.add(Dense(14, input_shape=(14,), activation='relu'))
merged = Concatenate()([L_branch.output, R_branch.output])
out = Dense(70, activation='softmax')(merged)
final_model = Model([L_branch.input, R_branch.input], out)
final_model.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
final_model.summary()
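A quick way to confirm that both branch outputs are already 2-D, and therefore concatenate cleanly without a Flatten in the second branch, is to print their shapes:

print(L_branch.output_shape)   # (None, 9750): 50 filters * 195 pooled steps after Flatten
print(R_branch.output_shape)   # (None, 14)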
#10-Fold split
seed = 7
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
np.random.seed(seed)
cvscores = []
act = 'relu'
for train, test in kfold.split(X, Y):
    model = Sequential()
    model.add(Dense(43, input_shape=(8,)))
    model.add(Activation(act))
    model.add(Dense(500))
    model.add(Activation(act))
    #model.add(Dropout(0.4))
    model.add(Dense(1000))
    model.add(Activation(act))
    #model.add(Dropout(0.4))
    model.add(Dense(1500))
    model.add(Activation(act))
    #model.add(Dropout(0.4))
    model.add(Dense(2))
    model.add(Activation('softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    hist = model.fit(X[train], Y[train],
                     epochs=500,
                     shuffle=True,
                     batch_size=100,
                     validation_data=(X[test], Y[test]), verbose=2)
    #model.summary()
When I call model.fit, it reports the following error:
ValueError: Error when checking target: expected activation_5 to have shape (None, 2) but got array with shape (3869, 1)
I am using Keras with the TensorFlow backend. Please ask for any further clarification if needed.
The problem was solved when we used this statement:
y = to_categorical(Y[:])
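to_categorical comes from keras.utils and turns the integer labels into one-hot rows, so the target shape matches the Dense(2) softmax output:

from keras.utils import to_categorical

# Y has shape (n_samples,) with values 0/1; y gets shape (n_samples, 2)
y = to_categorical(Y[:])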
Model:
model = Sequential()
act = 'relu'
model.add(Dense(430, input_shape=(3,)))
model.add(Activation(act))
model.add(Dense(256))
model.add(Activation(act))
model.add(Dropout(0.4))
model.add(Dense(148))
model.add(Activation(act))
model.add(Dropout(0.3))
model.add(Dense(1))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
#model.summary()
Error:
Error when checking target: expected activation_4 to have shape (None, 1) but got array with shape (1715, 2)
The error is in the last layer of the neural network. The network is trying to classify whether two drugs are synergistic or not.
COMPLETE SOURCE CODE:
https://github.com/tanmay-edgelord/Drug-Synergy-Models/blob/master/Drug%20Synergy%20NN%20Classifier.ipynb
Data:
https://github.com/tanmay-edgelord/Drug-Synergy-Models/blob/master/train.csv
In your code you have the following lines:
y_binary = to_categorical(Y[:])
y_train = y_binary[:split]
y_test = y_binary[split:]
to_categorical transforms the label vector into one-hot vectors. Since you have two classes, it turns every label into a vector of length 2 (0 becomes [1, 0] and 1 becomes [0, 1]). So your last layer needs to be defined as follows:
model.add(Dense(2))
model.add(Activation('softmax'))
(note that I replaced the 1 with 2).
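An alternative, if you prefer to keep the labels as a single 0/1 column, is a sketch like the following: one sigmoid output unit with binary cross-entropy, trained directly on Y without to_categorical.

# Sketch: replace the final two layers of the model above with a single sigmoid unit
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# ...and fit on the original 0/1 labels (Y), no to_categorical needed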