These are series of values taken each hour over 30 days. I gathered them into groups by hour, as shown for the two groups below:
{'date':
['2019-11-09','2019-11-10','2019-11-11','2019-11-12','2019-11-13','2019-11-14','2019-11-15','2019-11-16','2019-11-17','2019-11-18','2019-11-19','2019-11-20','2019-11-21','2019-11-22','2019-11-23','2019-11-24','2019-11-25','2019-11-26','2019-11-27','2019-11-28','2019-11-29','2019-11-30','2019-12-01','2019-12-02','2019-12-03','2019-12-04','2019-12-05','2019-12-06','2019-12-07','2019-12-08'],
'hora0':[111666.5,121672.91666666667,87669.33333333333,89035.58333333333,91707.91666666667,94449.33333333333,103476.91666666667,123271.5,133306.58333333334,103149.91666666667,106310.25,91830.25,77733.75,96823.25,102880.25,118383.33333333333,95076.66666666667,93561.83333333333,97651.58333333333,112180.0,118051.75,135456.0,149553.0,125797.25,126098.0,128603.75,84631.08333333333,85683.16666666667,96377.16666666667,113161.16666666667],
'hora2':[83768.83333333333,83319.58333333333,72922.75,71893.75,73933.0,76598.83333333333,81021.75,93588.83333333333,94514.08333333333,87147.66666666667,91464.08333333333,74022.41666666667,63709.166666666664,75939.33333333333,79904.16666666667,84435.33333333333,76736.0,85237.33333333333,79162.75,91729.58333333333,99081.58333333333,106440.41666666667,112064.66666666667,111635.58333333333,110168.58333333333,111241.25,62634.083333333336,68203.33333333333,71515.16666666667,80674.66666666667]}
The series have a similar distribution.
The AIC value is the Akaike information criterion; it compares forecasting models to each other. The code below tests a range of different ARIMA models to see which one has the lowest AIC value:
from warnings import filterwarnings
from itertools import product

import pandas as pd
from statsmodels.tsa.arima_model import ARIMA

def AIC_iteration_i(train):
    filterwarnings("ignore")
    history = [x for x in train.iloc[:, 0]]
    p = d = q = range(0, 6)
    pdq = list(product(p, d, q))
    aic_results = []
    parameter = []
    for param in pdq:
        try:
            model = ARIMA(history, order=param)
            results = model.fit(disp=0)
            # uncomment the line below to print each (p, d, q) combination
            # print('ARIMA{} - AIC:{}'.format(param, results.aic))
            aic_results.append(results.aic)
            parameter.append(param)
        except:
            continue
    d = dict(ARIMA=parameter, AIC=aic_results)
    results_table = pd.DataFrame(dict([(k, pd.Series(v)) for k, v in d.items()]))
    # return the (p, d, q) order with the minimum AIC value
    order = results_table.loc[results_table['AIC'].idxmin()][0]
    return order
It returns the same order, (0, 2, 1), for the (p, d, q) parameters with the lowest AIC value for each series.
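For reference, this is a minimal sketch of how the function above can be called per series; the DataFrame construction here is an assumption based on the dicts shown at the top:

# 'data' is the dict shown at the top of the question
hora0 = pd.DataFrame({'hora0': data['hora0']}, index=pd.to_datetime(data['date']))
order = AIC_iteration_i(hora0)  # e.g. (0, 2, 1)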
I got the prediction with the code below, but the results are not OK for hour 2:
import math

# time series hora0.iloc[:, 0] and hora1.iloc[:, 0] from the pandas DataFrames
trained = list(hora0.iloc[:, 0])
# order obtained above: (0, 2, 1)
orders = order
size = math.ceil(len(trained) * .8)
train, test = [trained[i] for i in range(size)], [trained[i] for i in range(size, len(trained))]
predictions = []
predictionslower = []
predictionsupper = []
for k in range(len(test)):
    # walk-forward validation: fit on the history seen so far, not on 'trained'
    # (which already contains the test observations)
    model = ARIMA(train, order=orders)
    model_fit = model.fit(disp=0)
    forecast, stderr, conf_int = model_fit.forecast()
    yhat = forecast[0]
    yhatlower = conf_int[0][0]
    yhatupper = conf_int[0][1]
    predictions.append(yhat)
    predictionslower.append(yhatlower)
    predictionsupper.append(yhatupper)
    obs = test[k]
    train.append(obs)
# error = mean_squared_error(test, predictions)
predictions
Prediction for hour0: [113815.15072419723, 128600.77967037176, 131580.85654685542, 83200.24743417211, 83167.65192576911, 95062.06180437957]
Prediction for hour1: [79564.70753715932, 112491.2694928094, 114410.34654966182, 60882.18766484651, nan, nan]
I also checked the AIC for series 2 with pmdarima, which returns the same order values for a SARIMAX model. Please shed some light on this.
The issue with the values in hour2 (and also in other hours) is that the data in the time series is non-stationary. To remove the non-stationarity we can apply either differencing or the natural logarithm to the raw data:
hora2 = np.log(hora2)
{'date':['2019-11-09','2019-11-10','2019-11-11','2019-11-12','2019-11-13','2019-11-14','2019-11-15','2019-11-16','2019-11-17','2019-11-18','2019-11-19','2019-11-20','2019-11-21','2019-11-22','2019-11-23','2019-11-24','2019-11-25','2019-11-26','2019-11-27','2019-11-28','2019-11-29','2019-11-30','2019-12-01','2019-12-02','2019-12-03','2019-12-04','2019-12-05','2019-12-06','2019-12-07','2019-12-08'],
'hora2':[11.3358163,11.33043889,11.19715594,11.18294461,11.21091456,11.24633712,11.30247292,11.44666635,11.45650413,11.37535928,11.42370164,11.21212325,11.06208373,11.23769005,11.28858328,11.34374123,11.24812624,11.3531948,11.27926114,11.42660022,11.50369886,11.57534064,11.62683136,11.62299513,11.60976705,11.61945655,11.04506487,11.13024872,11.17766483,11.29817989]}
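As a quick check (not part of the original answer's code), the augmented Dickey-Fuller test from statsmodels can confirm non-stationarity, and first-order differencing is the alternative transformation mentioned above; a minimal sketch, assuming hora2 is a pandas Series:

from statsmodels.tsa.stattools import adfuller

p_value = adfuller(hora2)[1]        # p > 0.05 suggests the series is non-stationary
hora2_diff = hora2.diff().dropna()  # first-order differencing, the alternative to np.log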
Once the order with the minimized AIC value (Akaike information criterion) is obtained for the model ARIMA(trained, order=orders) for each "horaX" series, the prediction can be run. Some series still return NaN values in the prediction, and for those I had to take the order with the second- or third-lowest AIC value. Once the prediction result was returned, I applied the exponential function to recover the original values:
{'hora2':[11.6948938,12.00191037,11.81401922,11.77476296,11.83965601,11.89443423]}
hora2 = np.exp(hora2)
{'hora2':[119957.62142129,163066.00981609,135133.60347713,129931.53854787,138642.78415756,146449.24980086]}
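The fallback to the second- or third-lowest AIC mentioned above is easier if the AIC function returns the whole sorted table instead of a single order; a sketch based on the results_table built in the function earlier:

# sort all (p, d, q) candidates by AIC; take the next row when the best yields NaNs
results_table = results_table.sort_values('AIC').reset_index(drop=True)
order = results_table.loc[0, 'ARIMA']     # lowest AIC
fallback = results_table.loc[1, 'ARIMA']  # second-lowest AIC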
The prediction result over the test data is depicted in the picture below.
I am trying to predict the cost of perfumes, but I get an error at the line answer = (clf.predict(result)):
from sklearn import tree
from sklearn.preprocessing import OneHotEncoder

# assumed setup: the encoder and the feature/target containers
enc = OneHotEncoder()
z = []  # features: (model, volume)
y = []  # target: cost

cursor.execute('SELECT * FROM info')
info = cursor.fetchall()
for line in info:
    z.append(line[0:2])
    y.append(line[2])
enc.fit(z)
x = enc.transform(z).toarray()
result = []
clf = tree.DecisionTreeClassifier()
clf = clf.fit(x, y)
new = input('enter the Model, Volume of your perfume to see the cost: example = Midnighto,50 ').split(',')
enc.fit([new])
result = enc.transform([new]).toarray()
answer = clf.predict(result)
print(answer)
You don't have to fit enc again with your new input; only transform the new sample with the encoder already fitted on your X data. (What you are doing is one-hot encoding the new sample considering only one possible value per feature, whereas you have to consider all the possible categories of each feature present in your X data.) So delete the following row:
enc.fit([new])
After that, please check that X and result have the same number of features; you can use the shape attribute for this.
Furthermore, I recommend splitting into training and test data to see whether your model is overfitted or not. Then you can run your own predictions, as in the sketch below.
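Putting it together, a minimal sketch of the corrected flow (the train/test split is an addition illustrating the recommendation above; enc, z, y are the objects from the question):

from sklearn.model_selection import train_test_split

# fit the encoder once, on all training features
enc.fit(z)
x = enc.transform(z).toarray()

# hold out a test set to check for overfitting
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2)
clf = tree.DecisionTreeClassifier().fit(x_train, y_train)
print(clf.score(x_test, y_test))

# transform the new sample with the SAME fitted encoder -- no second fit
new = input('enter the Model, Volume of your perfume: ').split(',')
result = enc.transform([new]).toarray()  # same number of columns as x
answer = clf.predict(result)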
I'd like to create a few random forest models within a for loop, in which I vary the number of estimators, train each of them on the same data sample, and measure the accuracy of each.
This is my beginning code:
r = range(0, 100)
for i in r:
    RF_model_%i = RandomForestClassifier(criterion="entropy", n_estimators=i, oob_score=True)
    RF_model_%i.fit(X_train, y_train)
    y_predict = RF_model_%i.predict(X_test)
    accuracy_%i = accuracy_score(y_test, y_predict)
What I'd like to understand is:
How can I put the i parameter into the name of each model (in order to recognise them)?
After training and calculating the accuracy score of each of the i models, how can I put the result into a list within the for loop?
You can do something like this:
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

results = []  # init
models = {}   # Python names can't be built with %i; keep the models in a dict instead
r = range(1, 100)  # n_estimators must be at least 1
for i in r:
    RF_model = RandomForestClassifier(criterion="entropy", n_estimators=i, oob_score=True)
    RF_model.id = i  # dynamically add fields to objects
    RF_model.fit(X_train, y_train)
    models[i] = RF_model  # recognisable later by its key / id
    y_predict = RF_model.predict(X_test)
    accuracy = accuracy_score(y_test, y_predict)
    results.append(accuracy)  # put the result in a list within the for loop
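Once the loop finishes, the best model can then be looked up by its accuracy; a small sketch following the names used above:

import numpy as np

best_i = int(np.argmax(results)) + 1  # +1 because r starts at 1
best_model = models[best_i]
print('best n_estimators:', best_i, 'accuracy:', results[best_i - 1])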
I am using a Keras neural net for identifying the category to which the data belongs.
self.model.compile(loss='categorical_crossentropy',
                   optimizer=keras.optimizers.Adam(lr=0.001, decay=0.0001),
                   metrics=[categorical_accuracy])
Fit function:
history = self.model.fit(self.X,
                         {'output': self.Y},
                         validation_split=0.3,
                         epochs=400,
                         batch_size=32)
I am interested in finding out which labels are getting categorized wrongly during the validation step. It seems like a good way to understand what is happening under the hood.
You can use model.predict_classes(validation_data) to get the predicted classes for your validation data, and compare these predictions with the actual labels to find out where the model was wrong. Something like this:
predictions = model.predict_classes(validation_data)
wrong = np.where(predictions != Y_validation)  # Y_validation as integer labels, not one-hot
If you are interested in looking 'under the hood', I'd suggest using
model.predict(validation_data_x)
to see the scores for each class, for each observation of the validation set.
This should shed some light on which categories the model is not so good at classifying. The way to predict the final class is
scores = model.predict(validation_data_x)
preds = np.argmax(scores, axis=1)
Be sure to use the proper axis for np.argmax (I'm assuming the class scores lie along axis 1). Use preds to then compare with the real classes.
Also, if you want to see the overall accuracy on this dataset as another exploration, use:
model.evaluate(x=validation_data_x, y=validation_data_y)
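Tying the two suggestions together, here is a minimal sketch (assuming validation_data_x / validation_data_y are a held-out split and the labels are one-hot encoded, as the categorical_crossentropy loss implies):

import numpy as np

scores = model.predict(validation_data_x)    # per-class probabilities
preds = np.argmax(scores, axis=1)            # predicted class per sample
true = np.argmax(validation_data_y, axis=1)  # decode the one-hot labels
wrong_idx = np.where(preds != true)[0]       # indices of misclassified samples
print(wrong_idx, preds[wrong_idx], true[wrong_idx])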
I ended up creating a metric which prints the "worst performing category id + score" on each iteration. Ideas taken from the link.
import tensorflow as tf
import numpy as np

class MaxIoU(object):
    def __init__(self, num_classes):
        super().__init__()
        self.num_classes = num_classes

    def max_iou(self, y_true, y_pred):
        # Wraps the np_max_iou method and uses it as a TensorFlow op.
        # Takes numpy arrays as its arguments and returns numpy arrays as
        # its outputs.
        return tf.py_func(self.np_max_iou, [y_true, y_pred], tf.float32)

    def np_max_iou(self, y_true, y_pred):
        # Compute the confusion matrix to get the number of true positives,
        # false positives, and false negatives.
        # Convert predictions and target from categorical to integer format.
        target = np.argmax(y_true, axis=-1).ravel()
        predicted = np.argmax(y_pred, axis=-1).ravel()
        # Trick from torchnet for bincounting 2 arrays together
        # https://github.com/pytorch/tnt/blob/master/torchnet/meter/confusionmeter.py
        x = predicted + self.num_classes * target
        bincount_2d = np.bincount(x.astype(np.int32), minlength=self.num_classes**2)
        assert bincount_2d.size == self.num_classes**2
        conf = bincount_2d.reshape((self.num_classes, self.num_classes))
        # Compute the per-class ratio from the confusion matrix.
        true_positive = np.diag(conf)
        false_positive = np.sum(conf, 0) - true_positive
        false_negative = np.sum(conf, 1) - true_positive
        # Just in case we get a division by 0, ignore/hide the error and set the value to 0.
        with np.errstate(divide='ignore', invalid='ignore'):
            # Note: false positives in the numerator (unlike standard IoU, which
            # uses true positives), so a higher value means a worse category.
            iou = false_positive / (true_positive + false_positive + false_negative)
        iou[np.isnan(iou)] = 0
        # Pack the worst score and its category id into a single float
        # (e.g. score 0.3 for category 5 -> 5.3).
        return np.max(iou).astype(np.float32) + np.argmax(iou).astype(np.float32)
Usage:
custom_metric = MaxIoU(len(categories))
self.model.compile(loss='categorical_crossentropy',
                   optimizer=keras.optimizers.Adam(lr=0.001, decay=0.0001),
                   metrics=[categorical_accuracy, custom_metric.max_iou])
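Since the metric packs the category id and the score into one float, decoding a reported value is a one-liner; a small sketch with a hypothetical value:

val = 5.3                        # hypothetical metric output
category_id = int(val)           # -> 5
worst_score = val - category_id  # -> 0.3 (approximately, due to float rounding)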
This question is an extension of this one, which focuses on LSTMs as opposed to CRFs. Unfortunately, I do not have any experience with CRFs, which is why I'm asking these questions.
Problem:
I would like to predict a sequence of binary signals for multiple, non-independent groups. My dataset is moderately small (~1000 records per group), so I would like to try a CRF model here.
Available data:
I have a dataset with the following variables:
Timestamps
Group
Binary signal representing activity
Using this dataset, I would like to forecast group_a_activity and group_b_activity, which are both 0 or 1.
Note that the groups are believed to be cross-correlated and additional features can be extracted from timestamps -- for simplicity we can assume that there is only 1 feature we extract from the timestamps.
What I have so far:
Here is the data setup that you can reproduce on your own machine.
# libraries
import re
import numpy as np
import pandas as pd

data_length = 18  # how long our data series will be
shift_length = 3  # how long of a sequence we want

df = (pd.DataFrame  # create a sample dataframe
      .from_records(np.random.randint(2, size=[data_length, 3]))
      .rename(columns={0: 'a', 1: 'b', 2: 'extra'}))
df.head()  # check it out

# shift (assuming data is sorted already)
colrange = df.columns
shift_range = [_ for _ in range(-shift_length, shift_length + 1) if _ != 0]
for c in colrange:
    for s in shift_range:
        if not (c == 'extra' and s > 0):
            charge = 'next' if s > 0 else 'last'  # 'next' variables are what we want to predict
            formatted_s = '{0:02d}'.format(abs(s))
            new_var = '{var}_{charge}_{n}'.format(var=c, charge=charge, n=formatted_s)
            df[new_var] = df[c].shift(s)

# drop unnecessary variables and trim missings generated by the shift operation
df.dropna(axis=0, inplace=True)
df.drop(colrange, axis=1, inplace=True)
df = df.astype(int)
df.head()  # check it out
#    a_last_03  a_last_02  ...  extra_last_02  extra_last_01
# 3          0          1  ...              0              1
# 4          1          0  ...              0              0
# 5          0          1  ...              1              0
# 6          0          0  ...              0              1
# 7          0          0  ...              1              0
# [5 rows x 15 columns]
Before we get to the CRF part: I suspect that I cannot approach this problem from a multi-task learning point of view (predicting patterns for both A and B via one model), and therefore I'm going to have to predict each of them individually.
Now the CRF part. I've found some relevant examples (here is one), but they all tend to predict a single class value based on a prior sequence.
Here is my attempt at using a CRF here:
import pycrfsuite

crf_features = []  # a container for features
crf_labels = []    # a container for the response

# let's focus on group A only for this one
current_response = [c for c in df.columns if c.startswith('a_next')]
# predictors are going to have to be nested, otherwise I'll run into problems with dimensions
current_predictors = [c for c in df.columns if not 'next' in c]
current_predictors = set([re.sub(r'_\d+$', '', v) for v in current_predictors])

for index, row in df.iterrows():
    # not sure if it's an effective way to iterate over a DF...
    iter_features = []
    for p in current_predictors:
        pred_feature = []
        # note that 0/1 values have to be converted into booleans
        for k in range(shift_length):
            iter_pred_feature = p + '_{0:02d}'.format(k + 1)
            pred_feature.append(p + "=" + str(bool(row[iter_pred_feature])))
        iter_features.append(pred_feature)
    iter_response = [row[current_response].apply(lambda z: str(bool(z))).tolist()]
    crf_labels.extend(iter_response)
    crf_features.append(iter_features)

trainer = pycrfsuite.Trainer(verbose=True)
for xseq, yseq in zip(crf_features, crf_labels):
    trainer.append(xseq, yseq)

trainer.set_params({
    'c1': 0.0,  # coefficient for L1 penalty
    'c2': 0.0,  # coefficient for L2 penalty
    'max_iterations': 10,  # stop earlier
    # include transitions that are possible, but not observed
    'feature.possible_transitions': True
})
trainer.train('testcrf.crfsuite')
tagger = pycrfsuite.Tagger()
tagger.open('testcrf.crfsuite')
tagger.tag(xseq)
# ['False', 'True', 'False']
It seems that I did manage to get it working, but I'm not sure whether I've approached it correctly. I'll formulate my questions in the Questions section, but first, here is an alternative approach using the keras_contrib package:
from keras import Sequential
from keras_contrib.layers import CRF
from keras_contrib.losses import crf_loss

# we are going to have to revisit the data prep stage again
# separate predictors and response
response_df_dict = {}
for g in ['a', 'b']:
    response_df_dict[g] = df[[c for c in df.columns if 'next' in c and g in c]]

# reformat for LSTM
# the response for every row is a matrix with a depth of 2 (the number of groups) and width = shift_length
# the predictors have the same dimensions, except the depth is not 2 but the number of predictors we have
response_array_list = []
col_prefix = set([re.sub(r'_\d+$', '', c) for c in df.columns if 'next' not in c])
for c in col_prefix:
    current_array = df[[z for z in df.columns if z.startswith(c)]].values
    response_array_list.append(current_array)

# reshape into samples (1), time stamps (2) and channels/variables (0)
response_array = np.array([response_df_dict['a'].values, response_df_dict['b'].values])
response_array = np.reshape(response_array, (response_array.shape[1], response_array.shape[2], response_array.shape[0]))
predictor_array = np.array(response_array_list)
predictor_array = np.reshape(predictor_array, (predictor_array.shape[1], predictor_array.shape[2], predictor_array.shape[0]))

model = Sequential()
model.add(CRF(2, input_shape=(predictor_array.shape[1], predictor_array.shape[2])))
model.summary()
model.compile(loss=crf_loss, optimizer='adam', metrics=['accuracy'])
model.fit(predictor_array, response_array, epochs=10, batch_size=1)
model_preds = model.predict(predictor_array)  # not going to worry about a train/test split here
Questions:
My main question is whether or not I've constructed both of my CRF models correctly. What worries me is that (1) there is not a lot of documentation out there on CRF models, (2) CRFs are mainly used for predicting a single label given a sequence, (3) the input features are nested, and (4) when they are used in a multi-task fashion, I'm not sure if it is valid.
I have a few extra questions as well:
Is a CRF appropriate for this problem?
How are the two approaches (one based on pycrfsuite and one based on keras_contrib) different, and what are their advantages/disadvantages?
In a more general sense, what is the advantage of combining CRF and LSTM models into one (like the one discussed here)?
Many thanks!
I've come across the following error:
AssertionError: dimension mismatch
I've trained a linear regression model using PySpark's LinearRegressionWithSGD.
However, when I try to make a prediction on the training set, I get a "dimension mismatch" error.
Worth mentioning:
Data was scaled using StandardScaler, but the predicted value was not.
As can be seen in code the features used for training were generated by PCA.
Some code:
pca_transformed = pca_model.transform(data_std)
X = pca_transformed.map(lambda x: (x[0], x[1]))
data = train_votes.zip(pca_transformed)
labeled_data = data.map(lambda x : LabeledPoint(x[0], x[1:]))
linear_regression_model = LinearRegressionWithSGD.train(labeled_data, iterations=10)
The prediction is the source of the error, and these are the variations I tried:
pred = linear_regression_model.predict(pca_transformed.collect())
pred = linear_regression_model.predict([pca_transformed.collect()])
pred = linear_regression_model.predict(X.collect())
pred = linear_regression_model.predict([X.collect()])
The regression weights:
DenseVector([1.8509, 81435.7615])
The vectors used:
pca_transformed.take(1)
[DenseVector([-0.1745, -1.8936])]
X.take(1)
[(-0.17449817243564397, -1.8935926689554488)]
labeled_data.take(1)
[LabeledPoint(22221.0, [-0.174498172436,-1.89359266896])]
This worked:
pred = linear_regression_model.predict(pca_transformed)
pca_transformed is of type RDD.
The function handles RDDs and arrays differently:
def predict(self, x):
    """
    Predict the value of the dependent variable given a vector or
    an RDD of vectors containing values for the independent variables.
    """
    if isinstance(x, RDD):
        return x.map(self.predict)
    x = _convert_to_vector(x)
    return self.weights.dot(x) + self.intercept
When a simple array is used, there can be a dimension mismatch issue (like the error in the question above). As can be seen, if x is not an RDD, it is converted to a vector. The problem is that the dot product will not work unless you take x[0].
Here is the error reproduced:
j = _convert_to_vector(pca_transformed.take(1))
linear_regression_model.weights.dot(j) + linear_regression_model.intercept
This works just fine:
j = _convert_to_vector(pca_transformed.take(1))
linear_regression_model.weights.dot(j[0]) + linear_regression_model.intercept
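So the safe patterns are either to pass the whole RDD (letting predict map itself over each vector) or to pass a single vector at a time; a short sketch using the objects from the question:

# RDD path: predict() maps itself over each vector
preds = linear_regression_model.predict(pca_transformed).collect()

# single-vector path: take(1) returns a list, so index into it first
single = linear_regression_model.predict(pca_transformed.take(1)[0])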