I trained a neural network with 37 inputs and it reaches around 85% accuracy. Is it possible to find out which input has the most effect? I tried the following code, but I cannot figure out how to determine the most important input from it:
weights = model.layers[0].get_weights()[0]
biases = model.layers[0].get_weights()[1]
One possible solution is to wrap your model with keras.wrappers.scikit_learn and then use recursive feature elimination (RFE) from scikit-learn:
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.feature_selection import RFE
import matplotlib.pyplot as plt

def create_model():
    # create model
    model = Sequential()
    model.add(Dense(512, activation='relu'))
    model.add(Dense(512, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

model = KerasClassifier(build_fn=create_model, epochs=100, batch_size=128, verbose=0)
rfe = RFE(estimator=model, n_features_to_select=1, step=1)
rfe.fit(X, y)

# this reshape assumes the features are pixels of an image (scikit-learn's digits example)
ranking = rfe.ranking_.reshape(digits.images[0].shape)

# Plot pixel ranking
plt.matshow(ranking, cmap=plt.cm.Blues)
plt.colorbar()
plt.title("Ranking of pixels with RFE")
plt.show()
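For 37 tabular inputs rather than image pixels, you would skip the reshape and read rfe.ranking_ directly; rank 1 marks the feature kept longest, i.e. the most important one. A minimal sketch:

import numpy as np

# one rank per input feature; lower rank = more important
order = np.argsort(rfe.ranking_)
print("Input indices from most to least important:", order)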
If you need to visualize the weights, see here.
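Another rough heuristic, using the weights the question already extracts, is to rank inputs by the total absolute weight they send into the first layer. This is only a sketch and assumes the 37 inputs are on comparable scales (scale them first, otherwise the comparison is meaningless):

import numpy as np

# shape (37, units_in_first_layer): one row of outgoing weights per input
weights = model.layers[0].get_weights()[0]

# crude importance score: total absolute outgoing weight per input
importance = np.abs(weights).sum(axis=1)
ranking = np.argsort(importance)[::-1]
print("Most influential input index:", ranking[0])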
Related
For my NLP project I used CountVectorizer to extract features from a dataset, using vectorizer = CountVectorizer(stop_words='english') and all_features = vectorizer.fit_transform(data.Text). I also wrote a simple RNN model with Keras, but I am not sure how to do the padding and the tokenizer step and get the data trained on the model.
My code for the RNN is:
model.add(keras.layers.recurrent.SimpleRNN(units=1000, activation='relu',
                                           use_bias=True))
model.add(keras.layers.Dense(units=1000, input_dim = 2000, activation='sigmoid'))
model.add(keras.layers.Dense(units=500, input_dim=1000, activation='relu'))
model.add(keras.layers.Dense(units=2, input_dim=500,activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
Can someone please give me some advice on this?
Thank you.
Use an embedding instead of count vectorizing: one-hot encode the text, pad the sequences, and feed them through an Embedding layer, as in the notebook below.
https://github.com/dnishimoto/python-deep-learning/blob/master/UFO%20.ipynb
# one_hot, pad_sequences, Sequential, Embedding, Flatten and Dense come from Keras;
# vocab_size and max_length need to be defined beforehand
from keras.preprocessing.text import one_hot
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense

docs = ufo_df["summary"]  # text
LABELS = ['Egg', 'Cross', 'Sphere', 'Triangle', 'Disk', 'Oval', 'Rectangle', 'Teardrop']
#LABELS=['Triangle']
target = ufo_df[LABELS]
#print([len(d) for d in docs])
encoded_docs = [one_hot(d, vocab_size) for d in docs]
#print([np.max(d) for d in encoded_docs])
padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
#print([d for d in padded_docs])

model = Sequential()
model.add(Embedding(vocab_size, 8, input_length=max_length))
model.add(Flatten())
model.add(Dense(8, activation='softmax'))
#model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(padded_docs, target, epochs=50, verbose=0)
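If you prefer the Tokenizer API for the tokenizing and padding step the question asks about, the same preparation could be sketched like this. The texts variable, the num_words cap and max_length are assumptions standing in for your own data.Text column and corpus limits:

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

texts = data.Text.astype(str).tolist()   # the raw documents from the question

tokenizer = Tokenizer(num_words=2000)    # keep only the 2000 most frequent words
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)

max_length = 300                          # assumed cap on document length
padded = pad_sequences(sequences, maxlen=max_length, padding='post')
# padded (and one-hot encoded labels) can now be fed to an Embedding + SimpleRNN model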
I'm getting started with a simple NN, but my loss stays constant at each iteration. Can somebody point out what I'm doing wrong here?
This is from a Kaggle introductory course, and my modified training set contains shop id, category id, item id, month and revenue. I'm basically trying to predict revenue per shop per category for the following month.
I've scaled revenue and trained a simple NN with 2 hidden layers; however, it doesn't seem like the training is working, as the loss remains constant. I haven't done anything with the labels (i.e. shop ids, category ids), but I would still think the loss would change on each iteration.
If you have some comments on coding practice, I would be interested as well.
Thanks.
import pandas as pd
import keras
from keras.models import Sequential
from keras.layers import Dense
from sklearn.preprocessing import StandardScaler

X_train = grouped_train.drop('revenue', axis=1)
y_train = grouped_train['revenue']
print('X & y trains')
print(X_train.head())
print(y_train.head())
scaler = StandardScaler()
y_train = pd.DataFrame(scaler.fit_transform(y_train.values.reshape(-1,1)))
print('Scaled y train')
print(y_train.head())
keras.backend.clear_session()
model = Sequential()
model.add(Dense(30, activation='relu', input_shape=(4,)))
model.add(Dense(30, activation='relu'))
model.add(Dense(1, activation='relu'))
model.summary()
print('Compile & fit')
model.compile(loss='mean_squared_error', optimizer='RMSprop')
model.fit(X_train, scaled_data, batch_size=128, epochs=13)
predictions = pd.DataFrame(model.predict(test))
print('Scaled predictions')
print(predictions.head())
print('Unscaled predictions')
print(pd.DataFrame(scaler.inverse_transform(predictions)).head())
Looks like you are using the wrong activation for the final layer. You have a regression problem, so the standard choice for the final layer is activation='linear'. Replace
model.add(Dense(1, activation='relu'))
with
model.add(Dense(1, activation='linear'))
Edit:
Additionally, model.fit is using scaled_data; shouldn't scaled_data be replaced with y_train, the scaled target created earlier?
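Putting both fixes together, the end of the training code would look roughly like this (a sketch only, keeping the variable names from the question):

model.add(Dense(1, activation='linear'))   # linear output for regression

model.compile(loss='mean_squared_error', optimizer='RMSprop')
model.fit(X_train, y_train, batch_size=128, epochs=13)  # fit on the scaled y_train, not scaled_data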
I am trying to build a simple ANN to learn how to tell whether two images are similar, using two distance measures. Here is how I set things up. I took 3 images (1: an anchor, 2: a positive sample, 3: a negative sample) and computed two different distance measurements between them: one using ResNet features and another using HOG features. The two distance measurements are then saved along with the two picture paths and the correct label (0 = same, 1 = not the same).
Now I am trying to build out my ANN to learn the difference between the two values and see whether that lets me tell if two images are similar. But nothing happens when I train the ANN. I think there are two possibilities:
1: I didn't set up the ANN correctly.
2: There is no connection at all.
Please help me see what the issue is:
Here is my code:
# Load the Pandas libraries with alias 'pd'
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
# fix random seed for reproducibility
np.random.seed(7)
import csv
data = pd.read_csv("encoding.csv")
print(data.columns)
X = data[['resnet', 'hog','label']]
x = X[['resnet', 'hog']]
y = X[['label']]
model = Sequential()
#get number of columns in training data
n_cols = x.shape[1]
#add model layers
model.add(Dense(16, activation='relu', input_shape=(n_cols,)))
model.add(Dense(10, activation='relu'))
model.add(Dense(1, activation= 'softmax'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x, y,
          epochs=30,
          batch_size=32,
          validation_split=0.10)
Right now all it does is this over and over again:
167/167 [==============================] - 0s 3ms/step - loss: 8.0189 - acc: 0.4970 - val_loss: 7.5517 - val_acc: 0.5263
Here is the csv file that I am using:
EDIT
So I have changed the setup a bit, and now it does bounce up to 73% validation accuracy. But then it bounces around and ends at 40%; what does that mean?
Here is the new model:
from keras.layers import BatchNormalization, Dropout

model = Sequential()
#get number of columns in training data
n_cols = x.shape[1]
model.add(Dense(256, activation='relu', input_shape=(n_cols,)))
model.add(BatchNormalization())
model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(Dense(1, activation= 'sigmoid'))
#sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
#model.compile(loss = "binary_crossentropy", optimizer = sgd, metrics=['accuracy'])
model.compile(loss = "binary_crossentropy", optimizer = 'rmsprop', metrics=['accuracy'])
#model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x, y,
          epochs=100,
          batch_size=64,
          validation_split=0.10)
This makes no sense:
model.add(Dense(1, activation= 'softmax'))
Softmax with one neuron will produce a constant value of 1.0 due to the normalization. For binary classification with the binary_crossentropy loss, you should use one neuron with sigmoid activation.
model.add(Dense(1, activation= 'sigmoid'))
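To see why, note that softmax over a single logit always normalizes to exactly 1, whereas sigmoid moves with the logit. A quick standalone check:

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for logit in (-3.0, 0.0, 3.0):
    print(softmax(np.array([logit])), sigmoid(logit))
# softmax over one value always prints [1.], while sigmoid varies with the logit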
Two things to try:
First, add capacity to your network; it is pretty simple, so add more layers/neurons in order to capture more information from your data.
Start with something like this and see if it changes anything:
model.add(Dense(256, activation='relu', input_shape=(n_cols,)))
model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(1, activation= 'sigmoid'))
Second, consider adding more epochs; an ANN can take a long time to converge.
Update
More things to try:
Normalize and scale your data (see the sketch after this list)
Maybe the dataset is too small -> the more data you get, the better your model will be
Try different hyperparameters; maybe decrease your learning rate to something like 1e-4 or 1e-5, try different batch sizes, ..
Add more regularization: try dropout between each layer
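For example, scaling the two distance columns and lowering the RMSprop learning rate could look like the sketch below. It assumes x still holds the resnet and hog columns and model is the network defined above:

from sklearn.preprocessing import StandardScaler
from keras import optimizers

# scale the two distance features to zero mean / unit variance
scaler = StandardScaler()
x_scaled = scaler.fit_transform(x)

# recompile with a smaller learning rate, then retrain on the scaled features
rmsprop = optimizers.RMSprop(lr=1e-4)
model.compile(loss='binary_crossentropy', optimizer=rmsprop, metrics=['accuracy'])
model.fit(x_scaled, y, epochs=100, batch_size=64, validation_split=0.10)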
Are the feature labels extracted from a Keras model layer the same as the original labels?
model.add(Flatten())
model.add(Dense(380,name = 'dense_1'))
model.add(Activation('relu'))
model.add(Dropout(0.1))
model.add(Dense(classes_num ))
model.add(Activation('softmax'))
model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
              metrics=['accuracy', mean_pred, recall, precision, fmeasure,
                       matthews_correlation, kullback_leibler_divergence,
                       binary_crossentropy])
model.summary()
print('model compiled!!')
print('starting training....')
history = model.fit(X_train, Y_train, epochs=epochs, batch_size=64,validation_data=(X_test, Y_test))
extract = Model(model.input, [model.get_layer("dense_1").output, model.output])
test_feature, test_labels = extract.predict(X_test)
Are test_labels and y_test the same or not? If I want to use the layer features, what labels should I use?
test_labels holds decimals that give the probability of membership in each class, so it is not the same as y_test. If you take the index of the maximum value in the output of the softmax layer, you get the class that your network predicts for that input.
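In code, that conversion might look like the following sketch, reusing the variable names from the question and assuming Y_test is one-hot encoded:

import numpy as np

test_feature, test_labels = extract.predict(X_test)

# index of the highest-probability class for each sample
predicted_classes = np.argmax(test_labels, axis=1)

# if Y_test is one-hot encoded, recover its class indices the same way
true_classes = np.argmax(Y_test, axis=1)
print((predicted_classes == true_classes).mean())  # plain accuracy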
When I create a simple Keras Model
model = Sequential()
model.add(Dense(10, activation='tanh', input_dim=1))
model.add(Dense(1, activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mean_squared_error'])
and attach a TensorBoard callback
tensorboard = TensorBoard(log_dir='c:/temp/tensorboard/run1', histogram_freq=1, write_graph=True, write_images=False)
model.fit(x, y, epochs=1000, batch_size=1, callbacks=[tensorboard])
The output in Tensorboard looks like this:
In other words, it's a complete mess.
Is there anything I can do to make the graphs output look more structured?
How can I create histograms of weights with Keras and Tensorboard?
You can create a name scope to group layers in your model using with K.name_scope('name_scope').
Example:
from keras import backend as K
from keras.layers import GlobalAveragePooling2D, Dense

with K.name_scope('CustomLayer'):
    # add first layer in new scope
    x = GlobalAveragePooling2D()(x)
    # add a second fully connected layer
    x = Dense(1024, activation='relu')(x)
Thanks to
https://github.com/fchollet/keras/pull/4233#issuecomment-316954784
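For the Sequential model from the question, the same idea could look roughly like this (a sketch; the scope names are arbitrary):

from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
with K.name_scope('hidden'):
    model.add(Dense(10, activation='tanh', input_dim=1))
with K.name_scope('output'):
    model.add(Dense(1, activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam',
              metrics=['mean_squared_error'])

With the TensorFlow backend, the layers created inside each scope show up grouped under that name in the TensorBoard graph. As for the weight histograms, histogram_freq=1 in the TensorBoard callback is what writes them; in older Keras versions this requires passing validation data to model.fit.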