Log Transform when predicting

I am trying to predict a financial variable that is highly skewed. I thought I should log transform it, but that is causing difficulties, as I am also concerned with accurately modeling the overall mean of the data. When I exponentiate the predictions and take the mean, I significantly underestimate the true mean. I am pretty sure this is because if the data (y) is skewed to the right, then mean(y) > exp(mean(log(y))).
I am not sure if this is an issue only with linear models. I am also not sure how to go about fixing it. Below is some code illustrating the problem:
library(tidymodels)
data(ames)

ames <- mutate(ames, Sale_Price = log(Sale_Price))

ames_rec <-
  recipe(Sale_Price ~ Neighborhood + Gr_Liv_Area + Year_Built + Bldg_Type +
           Latitude + Longitude, data = ames) %>%
  step_log(Gr_Liv_Area, base = 10) %>%
  step_other(Neighborhood, threshold = 0.01) %>%
  step_dummy(all_nominal_predictors()) %>%
  step_interact( ~ Gr_Liv_Area:starts_with("Bldg_Type_") ) %>%
  step_ns(Latitude, Longitude, deg_free = 20)

lm_model <- linear_reg() %>% set_engine("lm")

lm_wflow <-
  workflow() %>%
  add_model(lm_model) %>%
  add_recipe(ames_rec)

lm_fit <- fit(lm_wflow, ames)

predictions <- predict(lm_fit, new_data = ames %>% select(-Sale_Price))

bind_cols(ames %>% select(Sale_Price), predictions) %>%
  mutate(Sale_Price_Orig = exp(Sale_Price), Predicted = exp(.pred)) %>%
  summarize(across(everything(), mean))

# A tibble: 1 x 4
  Sale_Price .pred Sale_Price_Orig Predicted
       <dbl> <dbl>           <dbl>     <dbl>
1       12.0  12.0         180796.   178206.
In this example, the true mean is $2,590, or about 1.5%, higher than the predicted mean. In my world, this would be bad!
Any help would be appreciated.
Thanks,
David
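This is the classic retransformation bias: for right-skewed y, exp(mean(log(y))) is smaller than mean(y) by Jensen's inequality, and the same gap appears when you exponentiate fitted values from a model of log(y). Below is a minimal numpy sketch, using simulated lognormal data rather than the Ames prices, that reproduces the gap and applies one common remedy, Duan's smearing estimator, which rescales the naive back-transform by the mean of exp(residuals):

import numpy as np

rng = np.random.default_rng(0)

# Simulated right-skewed outcome: y is lognormal, x is a noisy predictor of log(y).
x = rng.normal(size=5000)
log_y = 1.0 + 0.5 * x + rng.normal(scale=0.8, size=5000)
y = np.exp(log_y)

# Ordinary least squares on the log scale.
slope, intercept = np.polyfit(x, log_y, 1)
fitted_log = intercept + slope * x
resid = log_y - fitted_log

naive = np.exp(fitted_log)                 # back-transform without any correction
smeared = naive * np.mean(np.exp(resid))   # Duan's smearing estimator

print("true mean:                ", y.mean())
print("naive back-transform mean:", naive.mean())    # falls short of the true mean
print("smearing-corrected mean:  ", smeared.mean())  # much closer

The same idea carries over to the exponentiated .pred values in the tidymodels example: multiply them by the mean of exp(residuals) from the model fitted on the log scale.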

Related

sklearn.metrics r2_score negative

I can't understand r2_score in sklearn.metrics, which seems to return meaningless values. I followed all the "similar questions" proposed by Stack Overflow (some of which allude to a wrong argument order, which is why I include both orders below), but I'm still lost:
import pandas as pd
from sklearn import linear_model
from sklearn.metrics import r2_score
data = [[0.70940504, 0.81604095],
        [0.69506565, 0.78922145],
        [0.66527803, 0.72174502],
        [0.75251691, 0.74893098],
        [0.72517034, 0.73999503],
        [0.68269306, 0.72230534],
        [0.75251691, 0.77163700],
        [0.78954422, 0.81163350],
        [0.83077994, 0.94561242],
        [0.74107290, 0.75122162]]
df = pd.DataFrame(data)
x = df[0].to_numpy().reshape(-1,1)
y = df[1].to_numpy()
print("r2 = ", r2_score(y, x))
print("r2 (wrong order) = ", r2_score(x, y))
lreg = linear_model.LinearRegression()
lreg.fit(x, y)
y_pred = lreg.predict(x)
print("predicted values: ", y_pred)
print("slope = ", lreg.coef_)
print("intercept = ", lreg.intercept_)
print("score = ", lreg.score(x, y))
returns
r2 = 0.01488309898850404 # surprise!!
r2 (wrong order) = -0.7313385423077101 # even more of a surprise!!
predicted values: [0.75664194 0.74219177 0.71217403 0.80008687 0.77252903 0.7297236 0.80008687 0.83740023 0.87895451 0.78855445]
slope = [1.00772544]
intercept = 0.04175643677503682
score = 0.5778168671193278
Plotting the data and predicted values in Excel shows that the linear_model return values make sense (the orange dots fall on Excel's trend line), but the r2_score return values do not, with either argument order.
Your model explains nearly 60% of the target variance, which is much better than the average predictor (which would explain 0).
Why does your single feature explain less? Mainly because of the intercept in this case: r2_score(y, x + 0.042) would do nearly as well.
In a simplified way, you may think of R2 as 1 - (mean_squared_error(y, y_pred) / y.var()). A prediction that is not centered around the target mean inevitably inflates the sum of squared residuals, resulting in a poor R2.
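A quick way to check both points with the data from the question, using sklearn's r2_score and mean_squared_error and the intercept reported above:

import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

x = np.array([0.70940504, 0.69506565, 0.66527803, 0.75251691, 0.72517034,
              0.68269306, 0.75251691, 0.78954422, 0.83077994, 0.74107290])
y = np.array([0.81604095, 0.78922145, 0.72174502, 0.74893098, 0.73999503,
              0.72230534, 0.77163700, 0.81163350, 0.94561242, 0.75122162])

# Treating the raw feature as a "prediction" ignores the intercept ...
print(r2_score(y, x))

# ... while shifting it by (roughly) the fitted intercept recovers most of the R2.
print(r2_score(y, x + 0.042))

# R2 as 1 - MSE / Var(y): an off-center prediction inflates the MSE and so lowers R2.
print(1 - mean_squared_error(y, x) / y.var())
print(1 - mean_squared_error(y, x + 0.042) / y.var())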

How to measure sensitivity, specificity, PPV and NPV from a confusion matrix for multiclass classification

I wonder how to compute precision and recall using a confusion matrix for a multi-class classification problem. Specifically, an observation can only be assigned to its most probable class / label. I would like to compute:
Precision = TP / (TP+FP)
Recall = TP / (TP+FN)
for each class, and then compute the micro-averaged F-measure.
In a 2-hypothesis case, the confusion matrix is usually:

         Declare H1   Declare H0
Is H1    TP           FN
Is H0    FP           TN
where I've used something similar to your notation:
TP = true positive (declare H1 when, in truth, H1),
FN = false negative (declare H0 when, in truth, H1),
FP = false positive (declare H1 when, in truth, H0),
TN = true negative (declare H0 when, in truth, H0).
From the raw data, the values in the table would typically be the counts for each occurrence over the test data. From this, you should be able to compute the quantities you need.
Edit
The generalization to multi-class problems is to sum over the rows / columns of the confusion matrix. Given that the matrix is oriented as above, i.e., that
a given row of the matrix corresponds to a specific value of the "truth", we have:
$\text{Precision}_{~i} = \cfrac{M_{ii}}{\sum_j M_{ji}}$
$\text{Recall}_{~i} = \cfrac{M_{ii}}{\sum_j M_{ij}}$
That is, precision is the fraction of events where we correctly declared $i$
out of all instances where the algorithm declared $i$. Conversely, recall is the fraction of events where we correctly declared $i$ out of all of the cases where the true state of the world is $i$.
Good summary paper, looking at these metrics for multi-class problems:
Sokolova, M., & Lapalme, G. (2009). A systematic analysis of performance measures for classification tasks. Information Processing and Management, 45(4), 427–437.
The abstract reads:
This paper presents a systematic analysis of twenty-four performance measures used in the complete spectrum of Machine Learning classification tasks, i.e., binary, multi-class, multi-labelled, and hierarchical. For each classification task, the study relates a set of changes in a confusion matrix to specific characteristics of data. Then the analysis concentrates on the type of changes to a confusion matrix that do not change a measure, therefore, preserve a classifier’s evaluation (measure invariance). The result is the measure invariance taxonomy with respect to all relevant label distribution changes in a classification problem. This formal analysis is supported by examples of applications where invariance properties of measures lead to a more reliable evaluation of classifiers. Text classification supplements the discussion with several case studies.
Using sklearn or tensorflow and numpy:
from sklearn.metrics import confusion_matrix
# or:
# from tensorflow.math import confusion_matrix
import numpy as np
labels = ...
predictions = ...
cm = confusion_matrix(labels, predictions)
recall = np.diag(cm) / np.sum(cm, axis = 1)
precision = np.diag(cm) / np.sum(cm, axis = 0)
To get overall (macro-averaged) measures of precision and recall, you can then use
np.mean(recall)
np.mean(precision)
Cristian Garcia's code above can be reduced by using sklearn:
>>> from sklearn.metrics import precision_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> precision_score(y_true, y_pred, average='micro')
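The micro-averaged F-measure asked about in the question works the same way via sklearn's f1_score; note that for single-label multi-class data, micro-averaged precision, recall and F1 all reduce to the overall accuracy:

>>> from sklearn.metrics import f1_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> f1_score(y_true, y_pred, average='micro')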
Here is a different view from the other answers that I think will be helpful to others. The goal here is to allow you to compute these metrics using basic laws of probability.
First, it helps to understand what a confusion matrix is telling us in general. Let $Y$ represent a class label and $\hat Y$ represent a class prediction. In the binary case, let the two possible values for $Y$ and $\hat Y$ be $0$ and $1$, which represent the classes. Next, suppose that the confusion matrix for $Y$ and $\hat Y$ is:
           $\hat Y = 0$   $\hat Y = 1$
$Y = 0$    10             20
$Y = 1$    30             40
Now, let us normalize this confusion matrix so that the sum of all of its elements is $1$. Currently, the sum of all elements of the confusion matrix is $10 + 20 + 30 + 40 = 100$, which is our normalization factor. After dividing the elements of the confusion matrix by the normalization factor, we get the following normalized confusion matrix:
           $\hat Y = 0$      $\hat Y = 1$
$Y = 0$    $\frac{1}{10}$    $\frac{2}{10}$
$Y = 1$    $\frac{3}{10}$    $\frac{4}{10}$
With this formulation of the confusion matrix, we can interpret $Y$ and $\hat Y$ slightly differently. We can interpret them as jointly Bernoulli (binary) random variables, where their normalized confusion matrix represents their joint probability mass function. When we interpret $Y$ and $\hat Y$ this way, the definitions of precision and recall are much easier to remember using Bayes' rule and the law of total probability:
\begin{align}
\text{Precision} &= P(Y = 1 \mid \hat Y = 1) = \frac{P(Y = 1 , \hat Y = 1)}{P(Y = 1 , \hat Y = 1) + P(Y = 0 , \hat Y = 1)} \\
\text{Recall} &= P(\hat Y = 1 \mid Y = 1) = \frac{P(Y = 1 , \hat Y = 1)}{P(Y = 1 , \hat Y = 1) + P(Y = 1 , \hat Y = 0)}
\end{align}
How do we determine these probabilities? We can estimate them using the normalized confusion matrix. From the table above, we see that
\begin{align}
P(Y = 0 , \hat Y = 0) &\approx \frac{1}{10} \\
P(Y = 0 , \hat Y = 1) &\approx \frac{2}{10} \\
P(Y = 1 , \hat Y = 0) &\approx \frac{3}{10} \\
P(Y = 1 , \hat Y = 1) &\approx \frac{4}{10}
\end{align}
Therefore, the precision and recall for this specific example are
\begin{align}
\text{Precision} &= P(Y = 1 \mid \hat Y = 1) = \frac{\frac{4}{10}}{\frac{4}{10} + \frac{2}{10}} = \frac{4}{4 + 2} = \frac{2}{3} \\
\text{Recall} &= P(\hat Y = 1 \mid Y = 1) = \frac{\frac{4}{10}}{\frac{4}{10} + \frac{3}{10}} = \frac{4}{4 + 3} = \frac{4}{7}
\end{align}
Note that, from the calculations above, we didn't really need to normalize the confusion matrix before computing the precision and recall: because of Bayes' rule, we end up dividing one normalized value by another, so the normalization factor cancels out.
A nice thing about this interpretation is that it can be generalized to confusion matrices of any size. In the case where there are more than 2 classes, $Y$ and $\hat Y$ are no longer considered to be jointly Bernoulli, but rather jointly categorical. Moreover, we would need to specify which class we are computing the precision and recall for. In fact, the definitions above may be interpreted as the precision and recall for class $1$. We can also compute the precision and recall for class $0$, but these have different names in the literature.
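As a small numpy sketch of that generalization, take a hypothetical 3-class confusion matrix (rows = truth, columns = prediction, as above), treat the normalized matrix as the joint PMF, and read off per-class precision and recall; the sketch also confirms that the normalization factor cancels:

import numpy as np

# Hypothetical 3-class confusion matrix: rows are the true class, columns the prediction.
cm = np.array([[50,  3,  2],
               [ 6, 40,  4],
               [ 1,  5, 39]])

# Normalized matrix, interpreted as the joint PMF of (Y, Y_hat).
pmf = cm / cm.sum()

# Per-class precision P(Y = i | Y_hat = i) and recall P(Y_hat = i | Y = i).
precision = np.diag(pmf) / pmf.sum(axis=0)
recall    = np.diag(pmf) / pmf.sum(axis=1)

# The normalization factor cancels, so the raw counts give the same numbers.
assert np.allclose(precision, np.diag(cm) / cm.sum(axis=0))
assert np.allclose(recall,    np.diag(cm) / cm.sum(axis=1))

print("precision per class:", precision)
print("recall per class:   ", recall)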

How to Cluster Infrared Spectroscopy Data with Python

I have been looking at clustering infrared spectroscopy data with the sklearn clustering methods. I am having trouble getting the clustering to work with the data; since I'm new to this, I don't know whether my code or my overall approach is wrong.
My data, in Pandas DataFrame format, looks like this:
Index   Wavenumbers (cm-1)   %Transmission_i   ...
0       650                  100               ...
.       .                    .                 ...
.       .                    .                 ...
.       .                    .                 ...
n       4000                 95                ...
where the x-axis for all spectra is the Wavenumbers (cm-1) column and the subsequent columns (%Transmission_i) are the actual data. I want to cluster these columns according to which spectra are most similar to each other, so I am trying this code:
X = np.array([list(df[x].values) for x in df.set_index(x)])
clusters = DBSCAN().fit(X)
where df is my DataFrame and np is numpy. The problem is that when I print out the cluster labels, it spits out nothing but -1, which would mean all my data is noise. This isn't the case: when I plot my data I can clearly see that some spectra look very similar (as they should).
How can I get the similar spectra to be clustered properly?
EDIT:
Here is a minimum working example.
import numpy as np
import pandas as pd
import sklearn as sk
import sklearn.preprocessing  # makes sk.preprocessing available
import matplotlib.pyplot as plt
from sklearn.cluster import DBSCAN

x = 'x-vals'

def cluster_data(df):
    avg_list = []
    dif_list = []
    for col in df:
        if x == col:
            continue
        avg_list.append(np.mean(df[col].values))
        dif_list.append(np.mean(np.diff(df[col].values)))
    a = sk.preprocessing.normalize([avg_list], norm='max')[0]
    b = sk.preprocessing.normalize([dif_list], norm='max')[0]
    X = []
    for i, j in zip(a, b):
        X.append([i, j])
    X = np.array(X)
    clusters = DBSCAN(eps=0.2).fit(X)
    return clusters.labels_

def plot_clusters(df, clusters):
    colors = ['red', 'green', 'blue', 'black', 'pink']
    i = 0
    for col in df:
        if col == x:
            continue
        color = colors[clusters[i]]
        plt.plot(df[x], df[col], color=color)
        i += 1
    plt.show()

x1 = np.linspace(-np.pi, np.pi, 201)
y1 = np.sin(x1) + 1
y2 = np.cos(x1) + 1
y3 = np.zeros_like(x1) + 2
y4 = np.zeros_like(x1) + 1.9
y5 = np.zeros_like(x1) + 1.8
y6 = np.zeros_like(x1) + 1.7
y7 = np.zeros_like(x1) + 1
y8 = np.zeros_like(x1) + 0.9
y9 = np.zeros_like(x1) + 0.8
y10 = np.zeros_like(x1) + 0.7

df = pd.DataFrame({'x-vals': x1, 'y1': y1, 'y2': y2, 'y3': y3, 'y4': y4,
                   'y5': y5, 'y6': y6, 'y7': y7, 'y8': y8, 'y9': y9,
                   'y10': y10})

clusters = cluster_data(df)
plot_clusters(df, clusters)
This produces the following plot, where red is a cluster and pink is noise.
I was able to get a method working, but I'm not fully convinced this is the best method for clustering IR spectra.
First I run through all the spectra and compile a list of the mean and the mean of the first derivative of each spectrum. The mean is meant to represent the vertical location of the spectrum, while the mean of the first derivative is meant to represent its shape.
avg_list = []
dif_list = []
for col in df:
    if x == col:
        continue
    avg_list.append(np.mean(df[col].values))
    dif_list.append(np.mean(np.diff(df[col].values)))
Then I normalize each list so that I can pick an eps value based on percentage changes.
a = sk.preprocessing.normalize([avg_list], norm='max')[0]
b = sk.preprocessing.normalize([dif_list], norm='max')[0]
After that I make a 2D array for running DBSCAN in 2D mode.
X = []
for i, j in zip(a, b):
    X.append([i, j])
Then I run the DBSCAN clustering method with an arbitrary percent difference value for the eps parameter.
X = np.array(X)
clusters = DBSCAN(eps=0.2).fit(X)
Then clusters.labels_ returns an array whose length is the number of spectra in my DataFrame. It works fairly well, but it is rather exclusive and the clusters could be better; some more fine-tuning would be helpful.
First, transpose your dataframe, so that you have the datapoints as rows as is the standard. It should look like this:
Index   650   660   ...   4000
0       100   98    ...   95
1       .     .     ...   .
.       .     .     ...   .
n       .     .     ...   .
Then you get your X for the clustering like this:
X = df.values
Next, you cluster:
from sklearn.cluster import DBSCAN
cluster = DBSCAN().fit(X)
print(cluster.labels_)
As a recommendation for spectral data, k-means (disadvantage: you need to set the number of clusters beforehand) and self-organizing maps (disadvantage: soft clusters instead of hard clusters) work quite well; both have, for example, been used for clustering hyperspectral data.
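If you want to try the k-means suggestion, here is a minimal sketch on toy data shaped like the transposed spectra (one row per spectrum, one column per wavenumber); the number of clusters is an assumption you have to supply yourself:

import numpy as np
from sklearn.cluster import KMeans

# Toy stand-in for the transposed spectra: two groups of five nearly flat spectra.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.05, (5, 100)) + 1.0,
               rng.normal(0, 0.05, (5, 100)) + 2.0])

# k-means needs the number of clusters up front (the disadvantage mentioned above).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # two groups of five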

SparkR 2.0 classification: how to get performance metrics?

How can I get performance metrics for a SparkR classification model, e.g., the F1 score, precision, recall, and confusion matrix?
# Load training data
df <- read.df("data/mllib/sample_libsvm_data.txt", source = "libsvm")
training <- df
testing <- df
# Fit a random forest classification model with spark.randomForest
model <- spark.randomForest(training, label ~ features, "classification", numTrees = 10)
# Model summary
summary(model)
# Prediction
predictions <- predict(model, testing)
head(predictions)
# Performance evaluation
I've tried caret::confusionMatrix(testing$label, testing$prediction), but it shows this error:
Error in unique.default(x, nmax = nmax) : unique() applies only to vectors
Caret's confusionMatrix will not work, since it needs R dataframes while your data are in Spark dataframes.
One way (not recommended) to get your metrics is to "collect" your Spark dataframes locally to R using as.data.frame and then use caret etc.; but this only works if your data fit in the main memory of your driver machine, in which case, of course, you have no real reason to use Spark at all...
So, here is a way to get the accuracy in a distributed manner (i.e. without collecting data locally), using the iris data as an example:
sparkR.version()
# "2.1.1"
df <- as.DataFrame(iris)
model <- spark.randomForest(df, Species ~ ., "classification", numTrees = 10)
predictions <- predict(model, df)
summary(predictions)
# SparkDataFrame[summary:string, Sepal_Length:string, Sepal_Width:string, Petal_Length:string, Petal_Width:string, Species:string, prediction:string]
createOrReplaceTempView(predictions, "predictions")
correct <- sql("SELECT prediction, Species FROM predictions WHERE prediction=Species")
count(correct)
# 149
acc = count(correct)/count(predictions)
acc
# 0.9933333
(Regarding the 149 correct predictions out of 150 samples: if you do showDF(predictions, numRows=150), you will indeed see that a single virginica sample is misclassified as versicolor.)

Correlation and regression analysis

How should I analyze the correlation between an ordinal variable with four levels (0, 1, 2, 3) and a continuous variable with a wide range of values? The scatter plot looks like 4 parallel horizontal bands of dots.
You could run a Spearman rank correlation test. Using R,
require(pspearman)
x <- c(rep("a", 5), rep("b", 5), rep("c", 5), rep("d", 5))
x <- factor(x, levels=c("a", "b", "c", "d"), ordered=T)
y <- 1:20
spearman.test(x, y)
Spearman's rank correlation rho
data: x and y
S = 40.6203, p-value = 6.566e-06
alternative hypothesis: true rho is not equal to 0
sample estimates:
rho
0.9694584
Warning message:
In spearman.test(x, y) : Cannot compute exact p-values with ties
A non-significant correlation, for comparison:
set.seed(123)
y2 <- rnorm(20)
spearman.test(x, y2)
Spearman's rank correlation rho
data: x and y2
S = 1144.329, p-value = 0.5558
alternative hypothesis: true rho is not equal to 0
sample estimates:
rho
0.139602
Warning message:
In spearman.test(x, y2) : Cannot compute exact p-values with ties
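If you are working in Python rather than R, a minimal equivalent sketch uses scipy.stats.spearmanr, which likewise handles the ordinal coding through ranks (the toy data below just mirror the R example above):

import numpy as np
from scipy.stats import spearmanr

# Ordinal variable with four levels (coded 0-3) and a continuous variable.
x = np.repeat([0, 1, 2, 3], 5)
y = np.arange(1, 21)

rho, p = spearmanr(x, y)
print(rho, p)  # strong, significant monotonic association

# Same ordinal variable against pure noise.
rng = np.random.default_rng(123)
y2 = rng.normal(size=20)
rho2, p2 = spearmanr(x, y2)
print(rho2, p2)  # typically weak and non-significant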
