Data Normalization in Tensorflow Model - python-3.x

I have a TensorFlow regression model that I have been working with. The model is tuned well and gets good results while training. However, when I go to evaluate, the results are horrible. I did some research and found that I am not normalizing my test features and labels, so I suspect that is where the problem is. My thought is to normalize the whole dataset before splitting it into train and test sets, but I am getting an AttributeError that has me stumped.
Here is the code sample. Please help :)
# imports used below
import pandas as pd
import tensorflow as tf
#concatenate the surface data and single_downhole_col into a single dataframe
training_Data = pd.concat([surface_Data, single_downhole_col], axis=1)
#print('training data shape:',training_Data.shape)
#print(training_Data.head())
#normalize the data using keras
model_normalizer_layer = tf.keras.layers.Normalization(axis=-1)
model_normalizer_layer.adapt(training_Data)
normalized_training_Data = model_normalizer_layer(training_Data)
#convert the data frame to array
dataset = normalized_training_Data.copy()
dataset.tail()
#create a training and test set
train_dataset = dataset.sample(frac=0.8, random_state=0)
test_dataset = dataset.drop(train_dataset.index)
#check the data
train_dataset.describe().transpose()
#split features from labels
train_features = train_dataset.copy()
test_features = test_dataset.copy()
And if there is any interest in how the normalizer layer is used in the model, see below:
def build_and_compile_model(data):
    model = keras.Sequential([
        model_normalizer_layer,
        layers.Dense(260, input_dim=401, activation='relu'),
        layers.Dense(80, activation='relu'),
        #layers.Dense(40, activation='relu'),
        layers.Dense(1)
    ])

I found that Quasimodo's suggestion of normalizing the dataset before processing it in my model was the ideal solution. It scaled the data 0 to 1 for all columns as expected and allowed me to display the data prior to training to validate it was correct.
For whatever reason the tf.keras.layers.Normalization approach was not working in my case, most likely because calling the layer returns a tf.Tensor, which has no DataFrame methods such as .sample() or .tail(), hence the AttributeError.
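For completeness, a sketch of wrapping the layer's output back into a DataFrame, which would restore .sample() and .tail() (reusing the columns and index assumes training_Data is still a DataFrame at this point):

normalized_training_Data = pd.DataFrame(
    model_normalizer_layer(training_Data).numpy(),
    columns=training_Data.columns,
    index=training_Data.index
)

The MinMaxScaler version that actually worked for me: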
# MinMaxScaler comes from scikit-learn
from sklearn import preprocessing

x = training_Data.values
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
# rebuild the dataframe, keeping the original column names
training_Data = pd.DataFrame(x_scaled, columns=training_Data.columns)
# normalize the data using keras
model_normalizer_layer = tf.keras.layers.Normalization(axis=-1)
model_normalizer_layer.adapt(training_Data)
normalized_training_Data = model_normalizer_layer(training_Data)
The only part that I have yet to figure out is how to scale the predicted data from the model back to the original ranges of the column. I'm sure it's simple, but I'm stumped.
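For reference, MinMaxScaler keeps the fitted column ranges, so inverse_transform can map scaled values back. A minimal sketch, assuming a second scaler fitted on just the raw label column before the whole-frame scaling above (raw_training_Data and the 'label' column name are placeholders):

from sklearn import preprocessing

# fit a separate scaler on the raw label values so it can be inverted later
label_scaler = preprocessing.MinMaxScaler()
y_scaled = label_scaler.fit_transform(raw_training_Data[['label']])

# after training, map model outputs (shape (n, 1)) back to the original range
predictions_scaled = model.predict(test_features)
predictions_original = label_scaler.inverse_transform(predictions_scaled)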

Related

Viewing model coefficients for a single prediction

I have a logistic regression model housed in a scikit-learn pipeline using the following:
pipeline = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(
        solver='lbfgs',
        cv=10,
        scoring='roc_auc',
        class_weight='balanced'
    )
)
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
I can view the model's coefficients for predictions as a whole with this code ...
# Look at model's coefficients to see what features are most important
plt.rcParams['figure.dpi'] = 50
model = pipeline.named_steps['logisticregressioncv']
coefficients = pd.Series(model.coef_[0], X_train.columns)
plt.figure(figsize=(10,12))
coefficients.sort_values().plot.barh(color='grey');
Which returns a bar plot of the features and their coefficients.
What I'm trying to do is be able to see how different input values for a single observation impact its prediction. The idea is to be able to run predictions on a sample population and examine the group with "low" predictions ... for example if I run predictions for 10 observations, I'd like to see how different input values impacted each of those 10 predictions, individually.
I recalled that I can achieve this via SHAP values, using something along the following lines (but using LinearExplainer instead of TreeExplainer):
# Instantiate model and encoder outside of pipeline for use with shap
model = RandomForestClassifier(random_state=25)

# Fit on train, score on val
model.fit(X_train_encoded, y_train2)
y_pred_shap = model.predict(X_val_encoded)

# Get an individual observation to explain.
row = X_test_encoded.iloc[[-3]]

# Why did the model predict this?
# Look at a Shapley Values Force Plot
import shap
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(row)
shap.initjs()
shap.force_plot(
    base_value=explainer.expected_value[1],
    shap_values=shap_values[1],
    features=row
)
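A sketch of the LinearExplainer variant against the pipeline above (this assumes the legacy shap API, where the explainer takes the fitted model plus background data; the scaler has to be applied by hand because the explainer sees the model outside the pipeline):

# Pull the fitted steps out of the pipeline
scaler = pipeline.named_steps['standardscaler']
logreg = pipeline.named_steps['logisticregressioncv']

# Scale features exactly as the pipeline did during training
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)

import shap
explainer = shap.LinearExplainer(logreg, X_train_scaled)
shap_values = explainer.shap_values(X_test_scaled)

# Force plot for a single observation, mirroring the TreeExplainer example
shap.initjs()
shap.force_plot(
    base_value=explainer.expected_value,
    shap_values=shap_values[0],
    features=X_test.iloc[0]  # raw (unscaled) values shown for readability
)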

Keras training on big datasets separately - keras

I am working on a Keras denoising neural network that denoises high-dimension X-ray images. The idea is to train on some datasets (e.g. 1, 2, 3) and, after obtaining the weights, start a new training run on other datasets (e.g. 4, 5, 6) with weights initialized from the previous training. Implementation-wise it works; however, the weights resulting from the last rotation perform well only on the datasets used in that rotation. The same goes for the other rotations.
In other words, weights resulting from training on datasets 4, 5, 6 don't give results on an image from dataset 1 as good as the weights that were trained on datasets 1, 2, 3, which is not what I intend.
The idea is that the weights should be tweaked to work with all datasets effectively, since training on the whole dataset at once doesn't fit into memory.
I tried other solutions, such as a custom generator that reads images from disk and trains in batches, but that is very slow, as it depends on factors like the I/O operations happening on disk and the time complexity of the processing functions inside the custom Keras generator.
Below is code that shows what I am doing. I have 12 datasets, separated into 4 checkpoints. Data is loaded, training runs, the final model path is appended to an array, and the next training takes the weights from the previous rotation and continues.
EPOCHES = 150
NUM_CHKPTS = 4
weights = []
for chk in range(1, NUM_CHKPTS+1):
    log_dir = os.path.join(os.getcwd(), 'resnet_checkpts_' + str(EPOCHES) + "_tl2_chkpt" + str(chk))
    if not os.path.isdir(log_dir):
        os.makedirs(log_dir)
    else:
        print('Training log directory already exists # {}.'.format(log_dir))
    tb_output = TensorBoard(log_dir=log_dir, histogram_freq=1)
    print("Loading Data From CHKPT #" + str(chk))
    h5f = h5py.File('C:\\autoencoder\\datasets\\mix\\chk' + str(chk) + '.h5', 'r')
    org_patch = h5f['train_data'][:]
    noisy_patch = h5f['train_noisy'][:]
    h5f.close()
    input_patch, test_patch, noisy_patch, test_noisy_patch = train_test_split(org_patch, noisy_patch, train_size=0.8, shuffle=True)
    print("Reshaping")
    train_data = np.array([np.reshape(input_patch[i], (52, 52, 1)) for i in range(input_patch.shape[0])], dtype=np.float32)
    train_noisy_data = np.array([np.reshape(noisy_patch[i], (52, 52, 1)) for i in range(noisy_patch.shape[0])], dtype=np.float32)
    test_data = np.array([np.reshape(test_patch[i], (52, 52, 1)) for i in range(test_patch.shape[0])], dtype=np.float32)
    test_noisy_data = np.array([np.reshape(test_noisy_patch[i], (52, 52, 1)) for i in range(test_noisy_patch.shape[0])], dtype=np.float32)
    print('Number of training samples are:', train_data.shape[0])
    print('Number of test samples are:', test_data.shape[0])
    # IN = np.ones((len(XTRAINFILES), 52, 52, 1 ))
    if chk == 1:
        print("Generating the Model For The First Time..")
        autoencoder_model = model_autoencoder(train_noisy_data)
        print("Done!")
    else:
        autoencoder_model = load_model(weights[chk-2])
    checkpt_path = log_dir + r"\\cp-{epoch:04d}.ckpt"
    checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpt_path, verbose=0, save_weights_only=True, save_freq='epoch')
    optimizer = tf.keras.optimizers.Adam(lr=0.0001)
    autoencoder_model.compile(loss='mse', optimizer=optimizer)
    autoencoder_model.fit(train_noisy_data, train_data,
                          batch_size=128,
                          epochs=EPOCHES, shuffle=True, verbose=1,
                          validation_data=(test_noisy_data, test_data),
                          callbacks=[tb_output, checkpoint_callback])
    weight_dir = log_dir + '\\model_resnet_new_OL' + str(EPOCHES) + 'epochs.h5'
    weights.append(weight_dir)
    autoencoder_model.save(weight_dir)  # Saved model name includes the number of epochs.
TensorBoard graphs (rotations 1, 2, 3, 4 from top to bottom):
Your model will forget the previous datasets as you train on a new dataset; this is known as catastrophic forgetting.
In reinforcement learning, when games are used to train deep reinforcement learning (DRL) agents, a memory replay buffer is used: it collects data from different rounds of the game, because each round has different data, and then a random subset of that buffer is chosen to train the model. That way the DRL model can learn to play different rounds without forgetting previous ones.
You can try to create a single dataset by taking some random samples from each dataset, as in the sketch below.
When you train the model on a new dataset, make sure data from all previous rotations is included in the current rotation.
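A minimal sketch of that mixing step (sample_replay is a hypothetical helper; it assumes the same chk<N>.h5 layout used in the question):

import numpy as np
import h5py

def sample_replay(chk_ids, frac=0.2, seed=0):
    # Draw a random fraction of patches from each earlier checkpoint file
    rng = np.random.default_rng(seed)
    clean_parts, noisy_parts = [], []
    for c in chk_ids:
        with h5py.File('C:\\autoencoder\\datasets\\mix\\chk' + str(c) + '.h5', 'r') as f:
            n = f['train_data'].shape[0]
            idx = np.sort(rng.choice(n, size=int(n * frac), replace=False))
            clean_parts.append(f['train_data'][idx])
            noisy_parts.append(f['train_noisy'][idx])
    return np.concatenate(clean_parts), np.concatenate(noisy_parts)

# Inside the training loop, after loading the current checkpoint (chk > 1):
replay_clean, replay_noisy = sample_replay(range(1, chk))
org_patch = np.concatenate([org_patch, replay_clean])
noisy_patch = np.concatenate([noisy_patch, replay_noisy])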
Also, in transfer learning, when you train a model on a new dataset, you freeze the earlier layers so the model doesn't forget its previous training. You are not using transfer learning here, but the effect is similar: when you start training on the 2nd dataset, the 1st dataset will slowly be washed out of the weights.
You can try freezing the initial (encoder) layers so they are not updated while extracting features, assuming all of the datasets contain similar images; that way your model will not forget previous training, as in transfer learning. Even so, when you train on a new dataset, some of the previous training will still be forgotten.
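A sketch of that freezing step (assuming the autoencoder_model from the code above; the cut-off index N_FROZEN is illustrative and depends on your architecture):

# Freeze the first few (encoder) layers before continuing on a new rotation
N_FROZEN = 4  # illustrative cut-off; tune for your model
for layer in autoencoder_model.layers[:N_FROZEN]:
    layer.trainable = False

# Recompile so the new trainable flags take effect
autoencoder_model.compile(loss='mse', optimizer=tf.keras.optimizers.Adam(0.0001))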

SVM classification - Bad input shape Error

I'm getting a "bad input shape" error. I tried searching, but I can't understand it yet since I'm new to SVM.
train.csv
testing.csv
# importing required libraries
import numpy as np
import pandas as pd  # needed for read_csv below
# import support vector classifier
from sklearn.svm import SVC
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)

X = pd.read_csv("train.csv")
y = pd.read_csv("testing.csv")
clf = SVC()
clf.fit(X, y)
clf.decision_function(X)
print(clf.predict(X))
raise ValueError("bad input shape {0}".format(shape))
ValueError: bad input shape (1, 6)
The problem here is that you are inserting your entire table of training data (features plus labels) as the training input, and then trying to predict the table of testing data (features and labels) with the SVM.
It does not work that way.
What you need to do is train the SVM with your training data (the data points plus a label for each data point) and then test it against your testing data (testing data points plus their labels).
Your code should look like this instead:
# Load training and testing dataset from .csv files
training_dataset = pd.read_csv("train.csv")
testing_dataset = pd.read_csv("testing.csv")
# Load training data points with all relevant features
X_train = training_dataset[['feature1','feature2','feature3','feature4']]
# Load training labels from dataset
y_train = training_dataset['label']
# Load testing data points with all relevant features
X_test = testing_dataset[['feature1','feature2','feature3','feature4']]
# Load testing labels from dataset
y_test = testing_dataset['label']
clf = SVC()
# Train the SVC with the training data (data points and labels)
clf.fit(X_train, y_train)
# Evaluate the decision function with test samples
clf.decision_function(X_test)
# Predict the test samples
print(clf.predict(X_test))
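Since y_test is loaded above but not yet used, one way to score the predictions against it might be (sketch):

from sklearn.metrics import accuracy_score

# Fraction of test samples whose label was predicted correctly
print(accuracy_score(y_test, clf.predict(X_test)))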
I hope that helps and that this code runs for you. Let me know if I misunderstood something or you have more questions. :)

How to use Sklearn linear regression with doc2vec input

I have 250k text documents (tweets and newspaper articles) represented as vectors obtained with a doc2vec model. Now, I want to use a regressor (multiple linear regression) to predict continuous value outputs - in my case the UK Consumer Confidence Index.
My code has been running forever. What am I doing wrong?
I imported my data from Excel and split it into x_train and x_dev. The data consist of preprocessed text and continuous CCI values.
# Import doc2vec models
dbow = Doc2Vec.load('dbow_extended.d2v')
dmm = Doc2Vec.load('dmm_extended.d2v')
concat = ConcatenatedDoc2Vec([dbow, dmm])  # model uses vector_size 400

def get_vectors(model, input_docs):
    vectors = [model.infer_vector(doc.words) for doc in input_docs]
    return vectors

# Prepare X_train and y_train
train_text = x_train["preprocessed_text"].tolist()
train_tagged = [TaggedDocument(words=str(_d).split(), tags=[str(i)]) for i, _d in list(enumerate(train_text))]
X_train = get_vectors(concat, train_tagged)
y_train = x_train['CCI_UK']

# Fit regressor
from sklearn import linear_model
reg = linear_model.LinearRegression()
reg.fit(X_train, y_train)

# Predict and evaluate
prediction = reg.predict(X_dev)
print(classification_report(y_true=y_dev, y_pred=prediction), '\n')
Since the fitting never completed, I wonder whether I am using a wrong input. However, no error message is shown and the code simply runs forever. What am I doing wrong?
Thank you so much for your help!!
The variable X_train is a list of vectors (the function get_vectors() returns a list), whereas the input to sklearn's LinearRegression should be a 2-D array.
Try converting X_train to an array like this:
X_train = np.array(X_train)
This should help!
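A quick sanity-check sketch (the expected shape assumes the 400-dimensional concatenated vectors mentioned above; note also that classification_report expects discrete class labels, so a regression metric such as mean_squared_error is the better fit for continuous predictions):

import numpy as np
from sklearn.metrics import mean_squared_error

X_train = np.array(X_train)
print(X_train.shape)  # expect (n_samples, 400) for the concatenated model

reg.fit(X_train, y_train)

# X_dev here stands for the inferred dev-set vectors, converted the same way
prediction = reg.predict(np.array(X_dev))
print(mean_squared_error(y_dev, prediction))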

How can I improve number predictions?

I've got a number classification model; on test data it works OK, but when I want to classify other images, my model can't accurately predict which number it is. Please help me improve the model.predict() performance.
I've tried to train my model in many ways. In the code below there is a function that creates the classification model; I have trained it with anywhere from 1K to 60K input samples and from 3 to 50 epochs.
def load_data():
    (train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
    train_images = tf.keras.utils.normalize(train_images, axis=1)
    test_images = tf.keras.utils.normalize(test_images, axis=1)
    return (train_images, train_labels), (test_images, test_labels)

def create_model(n=60000, e=5):  # n: number of training samples, e: epochs
    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
    model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
    model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
    data = load_data()
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit(data[0][0][:n], data[0][1][:n], epochs=e)  # I've tried from 3-50 epochs
    model.save(config.model_name)

def load_model():
    return tf.keras.models.load_model(config.model_name)

def predict(images):
    try:
        model = load_model()
    except:
        create_model()
        model = load_model()
    images = tf.keras.utils.normalize(images, axis=0)  # note: axis=0 here, while training used axis=1
    d = load_data()
    plot_many_images([d[0][0][0].reshape((28, 28)), images[0]], ['data', 'image'])
    predictions = model.predict(images)
    return predictions
I think that my input data doesn't look like the data the model was trained on, but I've tried to make it as similar as I can. In this picture (https://imgur.com/FfLGMEK), on the LEFT is a training data image and on the RIGHT is my parsed image; they are both 28x28 pixels and both normalized with cv2.
For the test image predictions I've used this sudoku (https://imgur.com/RMfKtag); it's already formatted to be similar to the test data numbers, but when I test it with the model the result is not so nice (https://imgur.com/RQFvLNE).
As you can see, the predicted data leaves much to be desired.
P.S. The (' ') items in the predicted result were made by hand (I replaced the numbers at those positions with ' '), because after prediction they all have some value (1-9), which is not relevant here.
What do you mean by "on test data it works OK"? If you mean it works well on the training data but does not give good predictions on the test data, your model may have over-fit during training. I suggest using a train/validation/test approach to train your network, as in the sketch below.
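A minimal sketch of that setup, reusing load_data() and a model built as in create_model() above (validation_split is a standard Keras fit argument that holds out part of the training data):

# model: the compiled Sequential from create_model() above
(train_images, train_labels), (test_images, test_labels) = load_data()

history = model.fit(train_images, train_labels,
                    epochs=10,
                    validation_split=0.1)  # hold out 10% of training data for validation

# Final, unbiased check on data never seen during training
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('test accuracy:', test_acc)

# If val_accuracy in history.history flattens or drops while accuracy keeps
# rising, the model is over-fitting: stop earlier or regularize.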
