I read the following article
Anomaly detection with Principal Component Analysis (PCA)
In the article, the following is written:
• PCA algorithm basically transforms data readings from an existing coordinate system into a new coordinate system.
• The closer data readings are to the center of the new coordinate system, the closer these readings are to an optimum value.
• The anomaly score is calculated using the Mahalanobis distance between a reading and the mean of all readings, which is the center of the transformed coordinate system.
Can anyone describe in more detail how anomaly detection with PCA works (using the PCA scores and the Mahalanobis distance)? I'm confused because the definition of PCA is: "PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables". How can the Mahalanobis distance be used when there is no more correlation between the variables?
Can anybody explain how to do this in Spark? Does the pca.transform function return the scores from which I should calculate the Mahalanobis distance of every reading to the center?
Let's assume you have a dataset of 3-dimensional points.
Each point has coordinates (x, y, z).
Those (x, y, z) are the dimensions.
A point is represented by three values, e.g. (8, 7, 4). This is called an input vector.
When you apply the PCA algorithm, you basically transform your input vector into a new vector. This can be represented as a function that turns (x, y, z) => (v, w).
Example: (8, 7, 4) => (-4, 13)
Now you have received a shorter vector (you reduced the number of dimensions), but your point still has coordinates, namely (v, w). This means that you can compute the distance between points using the Mahalanobis measure. Points that lie far away from the mean coordinate are in fact anomalies.
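For reference, the squared Mahalanobis distance of a point x from the mean mu of the data, given the covariance matrix S of the data, is
D^2(x) = (x - mu)^T * S^-1 * (x - mu)
After mean-centering (the StandardScaler step below) and PCA, the transformed scores are centered around the origin, so the origin plays the role of mu.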
Example solution:
import breeze.linalg.{DenseVector, inv}
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{PCA, StandardScaler, VectorAssembler}
import org.apache.spark.ml.linalg.{Matrix, Vector}
import org.apache.spark.ml.stat.Correlation
import org.apache.spark.sql.{DataFrame, Row, SparkSession}
import org.apache.spark.sql.functions._
object SparkApp extends App {
val session = SparkSession.builder()
.appName("spark-app").master("local[*]").getOrCreate()
session.sparkContext.setLogLevel("ERROR")
import session.implicits._
val df = Seq(
(1, 4, 0),
(3, 4, 0),
(1, 3, 0),
(3, 3, 0),
(67, 37, 0) //outlier
).toDF("x", "y", "z")
val vectorAssembler = new VectorAssembler().setInputCols(Array("x", "y", "z")).setOutputCol("vector")
val standardScalar = new StandardScaler().setInputCol("vector").setOutputCol("normalized-vector").setWithMean(true)
.setWithStd(true)
val pca = new PCA().setInputCol("normalized-vector").setOutputCol("pca-features").setK(2)
val pipeline = new Pipeline().setStages(
Array(vectorAssembler, standardScalar, pca)
)
val pcaDF = pipeline.fit(df).transform(df)
// Squared Mahalanobis-style distance of each PCA score vector from the center
// of the transformed data (the origin, since the input was mean-centered),
// using the inverse correlation matrix of the PCA scores
def withMahalanobis(df: DataFrame, inputCol: String): DataFrame = {
  val Row(coeff1: Matrix) = Correlation.corr(df, inputCol).head
  val invCovariance = inv(new breeze.linalg.DenseMatrix(2, 2, coeff1.toArray))
  val mahalanobis = udf[Double, Vector] { v =>
    val vB = DenseVector(v.toArray)
    vB.t * invCovariance * vB
  }
  df.withColumn("mahalanobis", mahalanobis(df(inputCol)))
}
val dfWithMahalanobis: DataFrame = withMahalanobis(pcaDF, "pca-features")
session.close()
}
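In this example, the outlier row (67, 37, 0) should end up with a much larger value in the resulting Mahalanobis column than the other four points, so anomalies can be flagged simply by putting a threshold on that column.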
I'm trying to tune the parameters of an ALS matrix factorization model that uses implicit data. For this, I'm trying to use pyspark.ml.tuning.CrossValidator to run through a parameter grid and select the best model. I believe my problem is in the evaluator, but I can't figure it out.
I can get this to work for an explicit data model with a regression RMSE evaluator, as follows:
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext
from pyspark.ml.recommendation import ALS
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.sql.functions import rand
conf = SparkConf() \
.setAppName("MovieLensALS") \
.set("spark.executor.memory", "2g")
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)
dfRatings = sqlContext.createDataFrame([(0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 4.0), (2, 1, 1.0), (2, 2, 5.0)],
["user", "item", "rating"])
dfRatingsTest = sqlContext.createDataFrame([(0, 0), (0, 1), (1, 1), (1, 2), (2, 1), (2, 2)], ["user", "item"])
alsExplicit = ALS()
defaultModel = alsExplicit.fit(dfRatings)
paramMapExplicit = ParamGridBuilder() \
.addGrid(alsExplicit.rank, [8, 12]) \
.addGrid(alsExplicit.maxIter, [10, 15]) \
.addGrid(alsExplicit.regParam, [1.0, 10.0]) \
.build()
evaluatorR = RegressionEvaluator(metricName="rmse", labelCol="rating")
cvExplicit = CrossValidator(estimator=alsExplicit, estimatorParamMaps=paramMapExplicit, evaluator=evaluatorR)
cvModelExplicit = cvExplicit.fit(dfRatings)
predsExplicit = cvModelExplicit.bestModel.transform(dfRatingsTest)
predsExplicit.show()
When I try to do this for implicit data (let's say counts of views rather than ratings), I get an error that I can't quite figure out. Here's the code (very similar to the above):
dfCounts = sqlContext.createDataFrame([(0,0,0), (0,1,12), (0,2,3), (1,0,5), (1,1,9), (1,2,0), (2,0,0), (2,1,11), (2,2,25)],
["user", "item", "rating"])
dfCountsTest = sqlContext.createDataFrame([(0, 0), (0, 1), (1, 1), (1, 2), (2, 1), (2, 2)], ["user", "item"])
alsImplicit = ALS(implicitPrefs=True)
defaultModelImplicit = alsImplicit.fit(dfCounts)
paramMapImplicit = ParamGridBuilder() \
.addGrid(alsImplicit.rank, [8, 12]) \
.addGrid(alsImplicit.maxIter, [10, 15]) \
.addGrid(alsImplicit.regParam, [1.0, 10.0]) \
.addGrid(alsImplicit.alpha, [2.0,3.0]) \
.build()
evaluatorB = BinaryClassificationEvaluator(metricName="areaUnderROC", labelCol="rating")
evaluatorR = RegressionEvaluator(metricName="rmse", labelCol="rating")
cv = CrossValidator(estimator=alsImplicit, estimatorParamMaps=paramMapImplicit, evaluator=evaluatorR)
cvModel = cv.fit(dfCounts)
predsImplicit = cvModel.bestModel.transform(dfCountsTest)
predsImplicit.show()
I tried doing this with an RMSE evaluator and I get an error. As I understand it, I should also be able to use the AUC metric with the binary classification evaluator, because the predictions of the implicit matrix factorization are a confidence matrix c_ui for predictions of a binary preference matrix p_ui, per this paper, which the documentation for pyspark ALS cites.
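(For reference, that paper builds the binary preference matrix and the confidence matrix from the raw observations r_ui as p_ui = 1 if r_ui > 0 and p_ui = 0 otherwise, with confidence c_ui = 1 + alpha * r_ui, so the model's predictions approximate p_ui weighted by c_ui rather than the raw counts.)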
Using either evaluator gives me an error and I can't find any fruitful discussion about cross-validating implicit ALS models online. I'm looking through the CrossValidator source code to try to figure out what's wrong, but am having trouble. One of my thoughts is that after the process converts the implicit data matrix r_ui to the binary matrix p_ui and confidence matrix c_ui, I'm not sure what it's comparing the predicted c_ui matrix against during the evaluation stage.
Here is the error:
Traceback (most recent call last):
File "<ipython-input-16-6c43b997005e>", line 1, in <module>
cvModel = cv.fit(dfCounts)
File "C:/spark-1.6.1-bin-hadoop2.6/python\pyspark\ml\pipeline.py", line 69, in fit
return self._fit(dataset)
File "C:/spark-1.6.1-bin-hadoop2.6/python\pyspark\ml\tuning.py", line 239, in _fit
model = est.fit(train, epm[j])
File "C:/spark-1.6.1-bin-hadoop2.6/python\pyspark\ml\pipeline.py", line 67, in fit
return self.copy(params)._fit(dataset)
File "C:/spark-1.6.1-bin-hadoop2.6/python\pyspark\ml\wrapper.py", line 133, in _fit
java_model = self._fit_java(dataset)
File "C:/spark-1.6.1-bin-hadoop2.6/python\pyspark\ml\wrapper.py", line 130, in _fit_java
return self._java_obj.fit(dataset._jdf)
File "C:\spark-1.6.1-bin-hadoop2.6\python\lib\py4j-0.9-src.zip\py4j\java_gateway.py", line 813, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "C:/spark-1.6.1-bin-hadoop2.6/python\pyspark\sql\utils.py", line 45, in deco
return f(*a, **kw)
File "C:\spark-1.6.1-bin-hadoop2.6\python\lib\py4j-0.9-src.zip\py4j\protocol.py", line 308, in get_return_value
format(target_id, ".", name), value)
etc.......
UPDATE
I tried scaling the input so it's in the range of 0 to 1 and using an RMSE evaluator. It seems to work well until I try to insert it into the CrossValidator.
The following code works. I get predictions and I get an RMSE value from my evaluator.
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.types import FloatType
import pyspark.sql.functions as F
from pyspark.ml.recommendation import ALS
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import RegressionEvaluator
conf = SparkConf() \
.setAppName("ALSPractice") \
.set("spark.executor.memory", "2g")
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)
# Users 0, 1, 2, 3 - Items 0, 1, 2, 3, 4, 5 - Ratings 0.0-5.0
dfCounts2 = sqlContext.createDataFrame([(0,0,5.0), (0,1,5.0), (0,3,0.0), (0,4,0.0),
(1,0,5.0), (1,2,4.0), (1,3,0.0), (1,4,0.0),
(2,0,0.0), (2,2,0.0), (2,3,5.0), (2,4,5.0),
(3,0,0.0), (3,1,0.0), (3,3,4.0) ],
["user", "item", "rating"])
dfCountsTest2 = sqlContext.createDataFrame([(0,0), (0,1), (0,2), (0,3), (0,4),
(1,0), (1,1), (1,2), (1,3), (1,4),
(2,0), (2,1), (2,2), (2,3), (2,4),
(3,0), (3,1), (3,2), (3,3), (3,4)], ["user", "item"])
# Normalize rating data to [0,1] range based on max rating
colmax = dfCounts2.select(F.max('rating')).collect()[0][0]
normalize = F.udf(lambda x: x / colmax, FloatType())
dfCountsNorm = dfCounts2.withColumn('ratingNorm', normalize(F.col('rating')))
alsImplicit = ALS(implicitPrefs=True)
defaultModelImplicit = alsImplicit.fit(dfCountsNorm)
preds = defaultModelImplicit.transform(dfCountsTest2)
evaluatorR2 = RegressionEvaluator(metricName="rmse", labelCol="ratingNorm")
evaluatorR2.evaluate(defaultModelImplicit.transform(dfCountsNorm))
preds = defaultModelImplicit.transform(dfCountsTest2)
What I don't understand is why the following doesn't work. I'm using the same estimator, the same evaluator and fitting the same data. Why would these work above but not within the CrossValidator:
paramMapImplicit = ParamGridBuilder() \
.addGrid(alsImplicit.rank, [8, 12]) \
.addGrid(alsImplicit.maxIter, [10, 15]) \
.addGrid(alsImplicit.regParam, [1.0, 10.0]) \
.addGrid(alsImplicit.alpha, [2.0,3.0]) \
.build()
cv = CrossValidator(estimator=alsImplicit, estimatorParamMaps=paramMapImplicit, evaluator=evaluatorR2)
cvModel = cv.fit(dfCountsNorm)
Ignoring technical issues, strictly speaking neither method is correct given the input generated by ALS with implicit feedback.
• You cannot use RegressionEvaluator because, as you already know, the prediction can be interpreted as a confidence value and is represented as a floating point number in the range [0, 1], while the label column is just an unbounded integer. These values are clearly not comparable.
• You cannot use BinaryClassificationEvaluator because even if the prediction can be interpreted as a probability, the label doesn't represent a binary decision. Moreover, the prediction column has an invalid type and couldn't be used directly with BinaryClassificationEvaluator.
You can try to convert one of the columns so the input fits the requirements, but this is not really a justified approach from a theoretical perspective and introduces additional parameters which are hard to tune:
• map the label column to the [0, 1] range and use RMSE.
• convert the label column to a binary indicator with a fixed threshold and extend ALS / ALSModel to return the expected column type. Assuming a threshold value of 1, it could be something like this:
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.recommendation import *
from pyspark.sql.functions import udf, col
from pyspark.mllib.linalg import DenseVector, VectorUDT

class BinaryALS(ALS):
    def fit(self, df):
        assert self.getImplicitPrefs()
        model = super(BinaryALS, self).fit(df)
        return ALSBinaryModel(model._java_obj)

class ALSBinaryModel(ALSModel):
    def transform(self, df):
        transformed = super(ALSBinaryModel, self).transform(df)
        as_vector = udf(lambda x: DenseVector([1 - x, x]), VectorUDT())
        return transformed.withColumn(
            "rawPrediction", as_vector(col("prediction")))

# Add binary label column
with_binary = dfCounts.withColumn(
    "label_binary", (col("rating") > 0).cast("double"))

als_binary_model = BinaryALS(implicitPrefs=True).fit(with_binary)

evaluatorB = BinaryClassificationEvaluator(
    metricName="areaUnderROC", labelCol="label_binary")

evaluatorB.evaluate(als_binary_model.transform(with_binary))
## 1.0
Generally speaking, material about evaluating recommender systems with implicit feedback is kind of missing in textbooks; I suggest you read eliasah's answer about evaluating these kinds of recommenders.
With implicit feedback we don't have user reactions to our recommendations. Thus, we cannot use precision-based metrics.
In the already cited paper, the expected percentile ranking metric is used instead.
You can try to implement an Evaluator based on a similar metric in Spark ML and use it in your cross-validation pipeline.
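A rough sketch of such an evaluator follows. It is only illustrative (the class name and constructor arguments are not an existing Spark API, and a fuller implementation would expose the column names as Params): it subclasses pyspark.ml.evaluation.Evaluator and computes the expected percentile ranking rank_bar = sum(r_ui * rank_ui) / sum(r_ui), where rank_ui is the percentile position of item i in user u's list when sorted by descending predicted preference (0 = top recommendation, 1 = bottom).
from pyspark.ml.evaluation import Evaluator
from pyspark.sql import functions as F
from pyspark.sql.window import Window

class ExpectedPercentileRankingEvaluator(Evaluator):
    # Illustrative sketch: expected percentile ranking, lower is better
    def __init__(self, labelCol="rating", predictionCol="prediction", userCol="user"):
        super(ExpectedPercentileRankingEvaluator, self).__init__()
        self.labelCol = labelCol
        self.predictionCol = predictionCol
        self.userCol = userCol

    def _evaluate(self, dataset):
        # rank_ui: percentile position of each item within the user's list,
        # ordered by descending predicted preference (0.0 = top recommendation)
        w = Window.partitionBy(self.userCol).orderBy(F.col(self.predictionCol).desc())
        ranked = dataset.withColumn("rank_ui", F.percent_rank().over(w))
        # rank_bar = sum(r_ui * rank_ui) / sum(r_ui)
        row = ranked.agg(
            (F.sum(F.col(self.labelCol) * F.col("rank_ui")) /
             F.sum(F.col(self.labelCol))).alias("rank_bar")).collect()[0]
        return float(row["rank_bar"])

    def isLargerBetter(self):
        # a smaller expected percentile ranking means better recommendations
        return False
Note this is only a sketch: unseen users/items in a validation fold will produce NaN predictions (newer Spark versions can drop these via ALS's coldStartStrategy="drop"), and each fold needs some non-zero feedback so the denominator is not zero.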
Very late to the party here, but I'll post in case anyone stumbles upon this question like I did.
I was getting a similar error when trying to use CrossValidator with an ALS model. I resolved it by setting the coldStartStrategy parameter in ALS to "drop". That is:
alsImplicit = ALS(implicitPrefs=True, coldStartStrategy="drop")
and keep the rest of the code the same.
I expect what was happening in my example is that the cross-validation splits created scenarios where I had items in the validation set that did not appear in the training set, which results in NaN prediction values. The best solution is to drop the NaN values when evaluating, as described in the documentation.
I don't know if we were getting the same error, so I can't guarantee this would solve the OP's problem, but it's good practice to set coldStartStrategy="drop" for cross-validation anyway.
Note: my error message was "Params must be either a param map or a list/tuple of param maps", which didn't seem to imply an issue with the coldStartStrategy parameter or NaN values but this solution resolved the error.
In order to cross validate my ALS model with implicitPrefs=True, I needed to adapt #zero323's answer slightly for pyspark==2.3.0 where I was getting the following exception:
xspy4j.Py4JException: Target Object ID does not exist for this gateway :o2733
	at py4j.Gateway.invoke(Gateway.java...java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:214)
	at java.lang.Thread.run(Thread.java:748)
ALS extends JavaEstimator which provides the hooks necessary for fitting Estimators that wrap Java/Scala implementations. We need to override _create_model in BinaryALS so PySpark can keep all the Java object references straight:
import pyspark.sql.functions as F
from pyspark.ml.linalg import DenseVector, VectorUDT
from pyspark.ml.recommendation import ALS, ALSModel
from pyspark.sql.dataframe import DataFrame

class ALSBinaryModel(ALSModel):
    def transform(self, df: DataFrame) -> DataFrame:
        transformed = super().transform(df)
        as_vector = F.udf(lambda x: DenseVector([1 - x, x]), VectorUDT())
        return transformed.withColumn("rawPrediction", as_vector(F.col("prediction")))

class BinaryALS(ALS):
    def fit(self, df: DataFrame) -> ALSBinaryModel:
        assert self.getImplicitPrefs()
        return super().fit(df)

    def _create_model(self, java_model) -> ALSBinaryModel:
        return ALSBinaryModel(java_model=java_model)
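With those classes in place, a cross-validation run over the implicit model could be wired up roughly as follows. This is only a sketch: it reuses the with_binary DataFrame and its label_binary column from zero323's answer above, and it assumes BinaryALS.fit is given the usual optional params argument (i.e. def fit(self, df, params=None) delegating to super().fit(df, params)) so that CrossValidator can pass parameter maps through.
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

# drop cold-start rows so validation folds don't produce NaN predictions
als_binary = BinaryALS(implicitPrefs=True, coldStartStrategy="drop")

# illustrative grid; the values are just examples
param_grid = (ParamGridBuilder()
              .addGrid(als_binary.rank, [8, 12])
              .addGrid(als_binary.regParam, [0.01, 0.1])
              .addGrid(als_binary.alpha, [2.0, 40.0])
              .build())

# ALSBinaryModel.transform adds a rawPrediction column, which is what
# BinaryClassificationEvaluator expects by default
evaluator = BinaryClassificationEvaluator(metricName="areaUnderROC",
                                          labelCol="label_binary")

cv = CrossValidator(estimator=als_binary, estimatorParamMaps=param_grid,
                    evaluator=evaluator, numFolds=3)
cv_model = cv.fit(with_binary)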
Using MLlib LinearRegressionWithSGD on the dummy data set (y, x1, x2) for y = (2*x1) + (3*x2) + 4 produces the wrong intercept and weights. The actual data used is:
x1 x2 y
1 0.1 6.3
2 0.2 8.6
3 0.3 10.9
4 0.6 13.8
5 0.8 16.4
6 1.2 19.6
7 1.6 22.8
8 1.9 25.7
9 2.1 28.3
10 2.4 31.2
11 2.7 34.1
I set the following input parameters and got the model outputs below:
[numIterations, step, miniBatchFraction, regParam] [intercept, [weights]]
[5,9,0.6,5] = [2.36667135839938E13, weights:[1.708772545209758E14, 3.849548062850367E13] ]
[2,default,default,default] = [-2495.5635231554793, weights:[-19122.41357929275,-4308.224496146531]]
[5,default,default,default] = [2.875191315671051E8, weights: [2.2013802074495964E9,4.9593017130199933E8]]
[20,default,default,default] = [-8.896967235537095E29, weights: [-6.811932001659158E30,-1.5346020624812824E30]]
I need to know:
How do I get the correct intercept and weights [4, [2, 3]] for the above-mentioned dummy data?
Will tuning the step size help with convergence? I need to run this in an automated manner for several hundred variables, so I'm not keen to do that.
Should I scale the data? How will it help?
Below is the code used to generate these results.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.feature.StandardScaler
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.{LabeledPoint, LinearRegressionWithSGD}
object SciBenchTest {
def main(args: Array[String]): Unit = run
def run: Unit = {
val sparkConf = new SparkConf().setAppName("SparkBench")
val sc = new SparkContext(sparkConf)
// Load and parse the dummy data (y, x1, x2) for y = (2*x1) + (3*x2) + 4
// i.e. intercept should be 4, weights (2, 3)?
val data = sc.textFile("data/dummy.csv")
// LabeledPoint is (label, [features])
val parsedData = data.map { line =>
val parts = line.split(',')
val label = parts(2).toDouble
val features = Array(parts(0), parts(1)) map (_.toDouble)
LabeledPoint(label, Vectors.dense(features))
}
//parsedData.collect().foreach(x => println(x));
// Scale the features
/*val scaler = new StandardScaler(withMean = true, withStd = true)
.fit(parsedData.map(x => x.features))
val scaledData = parsedData
.map(x =>
LabeledPoint(x.label,
scaler.transform(Vectors.dense(x.features.toArray))))
scaledData.collect().foreach(x => println(x));*/
// Building the model: SGD = stochastic gradient descent
val numIterations = 20 //5
val step = 9.0 //9.0 //0.7
val miniBatchFraction = 0.6 //0.7 //0.65 //0.7
val regParam = 5.0 //3.0 //10.0
//val model = LinearRegressionWithSGD.train(parsedData, numIterations, step) //scaledData
val algorithm = new LinearRegressionWithSGD() //train(parsedData, numIterations)
algorithm.setIntercept(true)
algorithm.optimizer
//.setMiniBatchFraction(miniBatchFraction)
.setNumIterations(numIterations)
//.setStepSize(step)
//.setGradient(new LeastSquaresGradient())
//.setUpdater(new SquaredL2Updater()) //L1Updater //SimpleUpdater //SquaredL2Updater
//.setRegParam(regParam)
val model = algorithm.run(parsedData)
println(s">>>> Model intercept: ${model.intercept}, weights: ${model.weights}")
// Evaluate model on training examples
val valuesAndPreds = parsedData.map { point =>
val prediction = model.predict(point.features)
(point.label, point.features, prediction)
}
// Print out features, actual and predicted values...
valuesAndPreds.take(10).foreach({ case (v, f, p) =>
println(s"Features: ${f}, Predicted: ${p}, Actual: ${v}")
})
}
}
As described in the documentation
https://spark.apache.org/docs/1.0.2/mllib-optimization.html
selecting the best step size for SGD methods can often be delicate.
I would try with lower values, for example:
// Build linear regression model
var regression = new LinearRegressionWithSGD().setIntercept(true)
regression.optimizer.setStepSize(0.001)
val model = regression.run(parsedData)
Setting the step size did not help us much.
We used the following parameters to calculate the intercept/weights and the loss, and used the same to construct a linear regression model in order to predict our features. Thanks #selvinsource for pointing me in the correct direction.
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.optimization.{LBFGS, LeastSquaresGradient, SquaredL2Updater}
import org.apache.spark.mllib.regression.LinearRegressionModel
import org.apache.spark.mllib.util.MLUtils
val data = sc.textFile("data/dummy.csv")
// LabeledPoint is (label, [features])
val parsedData = data.map { line =>
val parts = line.split(',')
val label = parts(2).toDouble
val features = Array(parts(0), parts(1)) map (_.toDouble)
(label, MLUtils.appendBias(Vectors.dense(features)))
}.cache()
val numCorrections = 5 //10//5//3
val convergenceTol = 1e-4 //1e-4
val maxNumIterations = 20 //20//100
val regParam = 0.00001 //0.1//10.0
val (weightsWithIntercept, loss) = LBFGS.runLBFGS(
parsedData,
new LeastSquaresGradient(),//LeastSquaresGradient
new SquaredL2Updater(), //SquaredL2Updater(),SimpleUpdater(),L1Updater()
numCorrections,
convergenceTol,
maxNumIterations,
regParam,
Vectors.dense(0.0, 0.0, 0.0))//initialWeightsWithIntercept)
loss.foreach(println)
val model = new LinearRegressionModel(
Vectors.dense(weightsWithIntercept.toArray.slice(0, weightsWithIntercept.size - 1)),
weightsWithIntercept(weightsWithIntercept.size - 1))
println(s">>>> Model intercept: ${model.intercept}, weights: ${model.weights}")
// Evaluate model on training examples
val valuesAndPreds = parsedData.collect().map { point =>
var prediction = model.predict(Vectors.dense(point._2.apply(0), point._2.apply(1)))
(prediction, point._1)
}
// Print out predicted and actual values...
valuesAndPreds.take(10).foreach({ case (prediction, actual) =>
  println(s"Predicted: ${prediction}, Actual: ${actual}")
})