Using DataFrame with MLlib - apache-spark

Let's say I have a DataFrame (that I read in from a csv on HDFS) and I want to train some algorithms on it via MLlib. How do I convert the rows into LabeledPoints or otherwise utilize MLlib on this dataset?

Assuming you're using Scala:
Let's say you obtain the DataFrame as follows:
val results : DataFrame = sqlContext.sql(...)
Step 1: Call results.printSchema() -- this will show you not only the columns in the DataFrame and (this is important) their order, but also what Spark SQL thinks their types are. Once you see this output, things get a lot less mysterious.
Step 2: Get an RDD[Row] out of the DataFrame:
val rows: RDD[Row] = results.rdd
Step 3: Now it's just a matter of pulling whatever fields interest you out of the individual rows. For this you need to know the 0-based position of each field and its type, and luckily you obtained all that in Step 1 above. For example,
let's say you did a SELECT x, y, z, w FROM ... and printing the schema yielded
root
 |-- x: double (nullable = ...)
 |-- y: string (nullable = ...)
 |-- z: integer (nullable = ...)
 |-- w: binary (nullable = ...)
And let's say all you want to use is x and z. You can pull them out into an RDD[(Double, Int)] as follows:
rows.map(row => {
// x has position 0 and type double
// z has position 2 and type integer
(row.getDouble(0), row.getInt(2))
})
From here you just use Core Spark to create the relevant MLlib objects. Things could get a little more complicated if your SQL returns columns of array type, in which case you'll have to call getList(...) for that column.
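For instance, if you wanted to feed these two fields into one of the RDD-based algorithms, a minimal sketch of building LabeledPoints (arbitrarily treating x as the label and z as the only feature) could look like this:
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

val labeledPoints = rows.map { row =>
  // x (position 0, double) becomes the label; z (position 2, integer) the single feature
  LabeledPoint(row.getDouble(0), Vectors.dense(row.getInt(2).toDouble))
}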

Assuming you're using Java (Spark version 1.6.2):
Here is a simple example of Java code using DataFrame for machine learning.
It loads a JSON with the following structure,
[{"label":1,"att2":5.037089672359123,"att1":2.4100883023159456}, ... ]
splits the data into training and test sets,
trains the model on the training data,
applies the model to the test data, and
stores the results.
Moreover, according to the official documentation, the DataFrame-based API is the primary API for MLlib as of version 2.0.0, so you can find several examples that use DataFrame.
The code:
import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.ml.Pipeline;
import org.apache.spark.ml.PipelineModel;
import org.apache.spark.ml.PipelineStage;
import org.apache.spark.ml.classification.DecisionTreeClassifier;
import org.apache.spark.ml.feature.StringIndexer;
import org.apache.spark.ml.feature.VectorAssembler;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

SparkConf conf = new SparkConf().setAppName("MyApp").setMaster("local[2]");
SparkContext sc = new SparkContext(conf);
String path = "F:\\SparkApp\\test.json";
String outputPath = "F:\\SparkApp\\justTest";
System.setProperty("hadoop.home.dir", "C:\\winutils\\");
SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);
DataFrame df = sqlContext.read().json(path);
df.registerTempTable("tmp");
DataFrame newDF = df.sqlContext().sql("SELECT att1, att2, label FROM tmp");
DataFrame dataFixed = newDF.withColumn("label", newDF.col("label").cast("Double"));
VectorAssembler assembler = new VectorAssembler().setInputCols(new String[]{"att1", "att2"}).setOutputCol("features");
StringIndexer indexer = new StringIndexer().setInputCol("label").setOutputCol("labelIndexed");
// Split the data into training and test
DataFrame[] splits = dataFixed.randomSplit(new double[] {0.7, 0.3});
DataFrame trainingData = splits[0];
DataFrame testData = splits[1];
DecisionTreeClassifier dt = new DecisionTreeClassifier().setLabelCol("labelIndexed").setFeaturesCol("features");
Pipeline pipeline = new Pipeline().setStages(new PipelineStage[] {assembler, indexer, dt});
// Train model
PipelineModel model = pipeline.fit(trainingData);
// Make predictions
DataFrame predictions = model.transform(testData);
// Save the predictions as text, coalesced into a single partition
predictions.rdd().coalesce(1, true, null).saveAsTextFile("justPlay.txt" + System.currentTimeMillis());

The RDD-based MLlib API is on its way to being deprecated, so you should prefer the DataFrame-based MLlib API.
Generally the input to these MLlib APIs is a DataFrame containing two columns - label and features. There are various ways to build this DataFrame - low-level APIs like org.apache.spark.mllib.linalg.{Vector, Vectors}, org.apache.spark.mllib.regression.LabeledPoint, org.apache.spark.mllib.linalg.{Matrix, Matrices}, etc. They all take numeric values for the features and the label.
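For example, a minimal label/features DataFrame can be assembled directly from numeric values. This is only a sketch with made-up toy values; note that for the DataFrame-based API in Spark 2.x the vector classes live under org.apache.spark.ml.linalg rather than org.apache.spark.mllib.linalg:
import org.apache.spark.ml.linalg.Vectors
import spark.implicits._

// Two columns: a numeric label and a feature vector
val training = Seq(
  (1.0, Vectors.dense(0.0, 1.1, 0.1)),
  (0.0, Vectors.dense(2.0, 1.0, -1.0)),
  (1.0, Vectors.dense(2.0, 1.3, 1.0))
).toDF("label", "features")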
Words can be converted to vectors using org.apache.spark.ml.feature.{Word2Vec, Word2VecModel}. This documentation explains more: https://spark.apache.org/docs/latest/mllib-data-types.html
Once the input DataFrame with label and features is created, instantiate the MLlib API and pass the DataFrame to its 'fit' function to get the model, then call 'transform' or 'predict' on the model to get the results.
Example -
The training file looks like -
<numeric label> <a string separated by spaces>
import org.apache.spark.ml.classification.LinearSVC
import org.apache.spark.ml.feature.Word2Vec
import org.apache.spark.ml.linalg.Vector
import spark.implicits._

// Build word vectors
val trainingData = spark.read.parquet(<path to training file>)
val sampleDataDf = trainingData
  .map { r =>
    val s = r.getAs[String]("value").split(" ")
    val label = s.head.toDouble
    val feature = s.tail
    (label, feature)
  }.toDF("label", "feature_words")
val word2Vec = new Word2Vec()
  .setInputCol("feature_words")
  .setOutputCol("feature_vectors")
  .setMinCount(0)
  .setMaxIter(10)
// Build the Word2Vec model
val model = word2Vec.fit(sampleDataDf)
// Convert the training text data to vectors and labels
val wVectors = model.transform(sampleDataDf)
// Train the linear SVM model
val svmModel = new LinearSVC()
  .setFeaturesCol("feature_vectors")
  .setMaxIter(100)
  .setLabelCol("label")
  .setRegParam(0.01)
  .setThreshold(0.5)
  .fit(wVectors) // use the word vectors created above
// Predict new data, same format as training data containing words
val predictionData = spark.read.parquet(<path to prediction file>)
val pDataDf = predictionData
  .map { r =>
    val s = r.getAs[String]("value").split(" ")
    val label = s.head.toDouble
    val feature = s.tail
    (label, feature)
  }.toDF("label", "feature_words")
val pVectors = model.transform(pDataDf)
val predictionResult = pVectors.map { r =>
  val s = r.getAs[Seq[String]]("feature_words")
  val v = r.getAs[Vector]("feature_vectors")
  // predict using the trained SVM (svmModel.predict is public from Spark 3.0;
  // on earlier versions call svmModel.transform(pVectors) instead)
  val c = svmModel.predict(v)
  s"$c ${s.mkString(" ")}"
}

Related

Spark ML - StringType is not supported [duplicate]

I have a DataFrame with some categorical string values (e.g. uuid|url|browser).
I would like to convert them to doubles to execute an ML algorithm that accepts a double matrix.
As a conversion method I used StringIndexer (Spark 1.4), which maps my string values to double values, so I defined a function like this:
def str(arg: String, df: DataFrame): DataFrame = {
  val indexer = new StringIndexer().setInputCol(arg).setOutputCol(arg + "_index")
  val newDF = indexer.fit(df).transform(df)
  newDF
}
Now the issue is that I would like to iterate over each column of the DataFrame, call this function, and add (or convert) the original string column into the parsed double column, so the result would be:
Initial df:
[String: uuid|String: url| String: browser]
Final df:
[String: uuid|Double: uuid_index|String: url|Double: url_index|String: browser|Double: Browser_index]
Thanks in advance
You can simply foldLeft over the Array of columns:
val transformed: DataFrame = df.columns.foldLeft(df)((df, arg) => str(arg, df))
Still, I will argue that it is not a good approach. Since str discards the StringIndexerModel, it cannot be used when you get new data. Because of that I would recommend using a Pipeline:
import org.apache.spark.ml.Pipeline
val transformers: Array[org.apache.spark.ml.PipelineStage] = df.columns.map(
cname => new StringIndexer()
.setInputCol(cname)
.setOutputCol(s"${cname}_index")
)
// Add the rest of your pipeline like VectorAssembler and algorithm
val stages: Array[org.apache.spark.ml.PipelineStage] = transformers ++ ???
val pipeline = new Pipeline().setStages(stages)
val model = pipeline.fit(df)
model.transform(df)
VectorAssembler can be included like this:
val assembler = new VectorAssembler()
.setInputCols(df.columns.map(cname => s"${cname}_index"))
.setOutputCol("features")
val stages = transformers :+ assembler
You could also use RFormula, which is less customizable, but much more concise:
import org.apache.spark.ml.feature.RFormula
val rf = new RFormula().setFormula(" ~ uuid + url + browser - 1")
val rfModel = rf.fit(dataset)
rfModel.transform(dataset)

how to convert Dataset Row type to Dataset String type

I am using Spark 2.2 with Java 8. I have a dataset of Row type and I want to use this dataset in an ML model, so I want to convert the Dataset<Row> into a Dataset<String>. When I pass the Dataset<Row> into the model it shows the error below.
Type mismatch: cannot convert from Dataset<Row> to Dataset<String>
I found the below solution for Scala, but I want to do this in Java.
df.map(row => row.mkString())
val strings = df.map(row => row.mkString()).collect
First convert the Row dataset into a list, and then convert that list into a String dataset. Try this:
Dataset<Row> df= spark.read()...
List<String> list = df.as(Encoders.STRING()).collectAsList();
Dataset<String> df1 = spark.createDataset(list, Encoders.STRING());
If you are planning to read the dataset line by line, then you can use the iterator over the dataset:
Dataset<Row> csv = session.read().format("csv")
  .option("sep", ",")
  .option("inferSchema", true)
  .option("escape", "\"")
  .option("header", true)
  .option("multiline", true)
  .load("users/abc/....");
for (Iterator<Row> iter = csv.toLocalIterator(); iter.hasNext();) {
  String item = iter.next().toString();
  System.out.println(item);
}

Spark ALS with string labels - Conversion back to string

I have this code:
val userIndexer: StringIndexer = new StringIndexer()
.setInputCol("userKey")
.setOutputCol("user")
val userIndexerModel = userIndexer.fit(ratings)
val alsRatings = userIndexerModel.transform(ratings)
val matrixFactorizationModel = ALS.trainImplicit(alsRatings.rdd, rank = 10, iterations = 10)
val rec = matrixFactorizationModel.recommendProductsForUsers(20)
This gives me back recommendations with user ids. I want to get my user key strings back. What is the most efficient way to do it? Thanks.
PS: I certainly cannot understand why the ALS library developers don't accept string labels. It's extremely painful and expensive to deal with the conversions (string to int and then int back to string) from the outside. I hope there is an issue or something for this in their backlog.
I generally run the StringIndexer, collect the labels in the driver, and parallelize the labels with an index. Then, instead of calling transform using the StringIndexer, I join the DataFrames to get the same result as a StringIndexer.
val swidConverter = new StringIndexer()
.setInputCol("id")
.setOutputCol("idIndex").fit(df)
val idDf = spark.sparkContext.parallelize(
swidConverter.labels.zipWithIndex
).toDF("id", "idIndex").repartition(PARTITION_SIZE) // set the partition size depending on your data size.
// Joining the idDf(DataFrame) with the actual Data.
val indexedDF = df.join(idDf,idDf.col("id")===df.col("id")).select("idIndex","product_id","rating")
val als = new ALS()
.setMaxIter(5)
.setRegParam(0.01)
.setUserCol("idIndex")
.setItemCol("product_id")
.setRatingCol("rating")
val model = als.fit(indexedDF)
val resultRaw = model.recommendForAllUsers(4)
// Joining the idDf(DataFrame) with the Result to get the original ID from the indexed Id.
val resultDf = resultRaw.join(idDf,resultRaw.col("idIndex")===idDf.col("idIndex")).select("id","recommendations")

K-means clustering with the ML library using Apache Spark

I am trying to implement k-means clustering using the Apache Spark ML version in 2.0.2. After finding the cluster centers, I am facing the issue of how to identify which cluster each data point belongs to. Please help me. Thanks in advance. Please find my code:
val tokenFrameprocess = TokenizerProcessor.process("value", "tokenized")
val stopwordRemover = StopWordsProcessor.process("tokenized", "stopwords")
val stemmingProcess = StemmingProcessor.process("value", "stemmed")
val HashingProcess = HashingTFProcessor.process("stopwords" ,"features")
val pipeline = new Pipeline().setStages(Array(tokenFrameprocess,stopwordRemover,stemmingProcess,HashingProcess))
val finalFrameProcess = pipeline.fit(df)
val finalFramedata = finalFrameProcess.transform(df);
val kmeans = new KMeans().setK(4).setSeed(1L)
val model = kmeans.fit(finalFramedata)
val WSSSE = model.computeCost(finalFramedata)
// Shows the result.
println("Cluster Centers: ")
model.clusterCenters.foreach(println)
You have to set the features column that will be used for prediction when instantiating the KMeans, as in the following example:
val kmeans = new KMeans().setK(4).setSeed(1L).setFeaturesCol("features").setPredictionCol("prediction")
Once you call .fit() on the KMeans, it returns a KMeansModel, which is a Transformer. In the case of your example, the variable "model" is that transformer. You can call .transform() on it to get the desired predictions for the given data. For example, the lines below will give you the DataFrame with a prediction column.
val model = kmeans.fit(finalFramedata)
val transformedDF = model.transform(finalFramedata)
transformedDF.show(false)
The prediction column denotes the cluster assignment. If you give the k value as 3, the prediction column will have values like 0, 1, 2.
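For instance, a quick way to inspect the assignments could be the sketch below (assuming the input column is named "value", as in the question's pipeline):
// Show each row together with its assigned cluster
transformedDF.select("value", "prediction").show(false)
// Or count how many rows ended up in each cluster
transformedDF.groupBy("prediction").count().show()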

Spark MLlib: Including categorical features

What is the correct or best method for including categorical variables (both string and int) in a feature vector for an MLlib algorithm?
Is it correct to use OneHotEncoder on the categorical variables and then include the output columns with the other columns in a VectorAssembler, like in the code below?
The reason I ask is that I end up with a data frame with rows like the ones below, where it looks like feature3 and feature4 combined are on the same 'level' of importance as each of the two categorical features individually.
+------------------+-----------------------+---------------------------+
|prediction |actualVal |features |
+------------------+-----------------------+---------------------------+
|355416.44924898935|990000.0 |(17,[0,1,2,3,4,5,10,15],[1.0,206.0]) |
|358917.32988024893|210000.0 |(17,[0,1,2,3,4,5,10,15,16],[1.0,172.0]) |
|291313.84175674635|4600000.0 |(17,[0,1,2,3,4,5,12,15,16],[1.0,239.0]) |
Here is my code:
val indexer = new StringIndexer()
.setInputCol("stringFeatureCode")
.setOutputCol("stringFeatureCodeIndex")
.fit(data)
val indexed = indexer.transform(data)
val encoder = new OneHotEncoder()
.setInputCol("stringFeatureCodeIndex")
.setOutputCol("stringFeatureCodeVec")
var encoded = encoder.transform(indexed)
encoded = encoded.withColumn("intFeatureCodeTmp", encoded.col("intFeatureCode")
.cast(DoubleType))
.drop("intFeatureCode")
.withColumnRenamed("intFeatureCodeTmp", "intFeatureCode")
val intFeatureCodeEncoder = new OneHotEncoder()
.setInputCol("intFeatureCode")
.setOutputCol("intFeatureCodeVec")
encoded = intFeatureCodeEncoder.transform(encoded)
val assemblerDeparture =
new VectorAssembler()
.setInputCols(
Array("stringFeatureCodeVec", "intFeatureCodeVec", "feature3", "feature4"))
.setOutputCol("features")
var data2 = assemblerDeparture.transform(encoded)
val Array(trainingData, testData) = data2.randomSplit(Array(0.7, 0.3))
val rf = new RandomForestRegressor()
.setLabelCol("actualVal")
.setFeaturesCol("features")
.setNumTrees(100)
In general this is a recommended method. When working with tree models it is unnecessary and should be avoided; you can use StringIndexer only.
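A minimal sketch of that simpler path for a tree model, using the question's column names and no OneHotEncoder (StringIndexer attaches nominal metadata that tree models can use directly; keep in mind that maxBins must be at least as large as the number of categories):
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}
import org.apache.spark.ml.regression.RandomForestRegressor

val stringIndexer = new StringIndexer()
  .setInputCol("stringFeatureCode")
  .setOutputCol("stringFeatureCodeIndex")
// StringIndexer also accepts numeric columns (they are cast to strings first)
val intIndexer = new StringIndexer()
  .setInputCol("intFeatureCode")
  .setOutputCol("intFeatureCodeIndex")
val assembler = new VectorAssembler()
  .setInputCols(Array("stringFeatureCodeIndex", "intFeatureCodeIndex", "feature3", "feature4"))
  .setOutputCol("features")
val rf = new RandomForestRegressor()
  .setLabelCol("actualVal")
  .setFeaturesCol("features")
  .setNumTrees(100)
val model = new Pipeline()
  .setStages(Array(stringIndexer, intIndexer, assembler, rf))
  .fit(data)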
