Transform RDD to valid input for kmeans - apache-spark

I am calculating TF and IDF with the Spark MLlib feature transformers on a directory that contains CSV files, using the following code:
import argparse
from os import system

### args parsing
parser = argparse.ArgumentParser(description='runs TF/IDF on a directory of text docs')
parser.add_argument("-i", "--input", help="the input in HDFS", required=True)
parser.add_argument("-o", '--output', help="the output in HDFS", required=True)
parser.add_argument("-mdf", '--min_document_frequency', default=1)
args = parser.parse_args()

docs_dir = args.input
d_out = "hdfs://master:54310/" + args.output
min_df = int(args.min_document_frequency)

# import spark-related stuff
from pyspark import SparkContext
from pyspark.mllib.feature import HashingTF
from pyspark.mllib.feature import IDF

sc = SparkContext(appName="TF-IDF")

# Load documents (one per line).
documents = sc.textFile(docs_dir).map(lambda title_text: title_text[1].split(" "))

hashingTF = HashingTF()
tf = hashingTF.transform(documents)

# IDF
idf = IDF().fit(tf)
tfidf = idf.transform(tf)

#print(tfidf.collect())

# save
tfidf.saveAsTextFile(d_out)
Using
print(tfidf.collect())
I get this output:
[SparseVector(1048576, {812399: 4.3307}),
 SparseVector(1048576, {411697: 0.0066}),
 SparseVector(1048576, {411697: 0.0066}),
 SparseVector(1048576, {411697: 0.0066}),
 SparseVector(1048576, {411697: 0.0066}), ...
I have also tested the MLlib KMeans algorithm:
from __future__ import print_function
import sys
import numpy as np
from pyspark import SparkContext
from pyspark.mllib.clustering import KMeans

runs = 4

def parseVector(line):
    return np.array([float(x) for x in line.split(' ')])

if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: kmeans <file> <k>", file=sys.stderr)
        exit(-1)
    sc = SparkContext(appName="KMeans")
    lines = sc.textFile(sys.argv[1])
    data = lines.map(parseVector)
    k = int(sys.argv[2])
    model = KMeans.train(data, k, runs)
    print("Final centers: " + str(model.clusterCenters))
    print("Total Cost: " + str(model.computeCost(data)))
    sc.stop()
with this sample test case
0.0 0.0 0.0
0.1 0.1 0.1
0.2 0.2 0.2
9.0 9.0 9.0
9.1 9.1 9.1
9.2 9.2 9.2
and it works fine.
Now I want to feed the tfidf RDD from above into the KMeans algorithm, but I don't know how to transform the RDD into something like the sample text above, or how to split the RDD properly so that KMeans works on it.
I really need some help with this one.
UPDATE
My real question is: how can I read input like the text file below and feed it to MLlib KMeans?
(1048576,[155412,857472,756332],[1.75642010278,2.41857747478,1.97365255252])
(1048576,[159196,323305,501636],[2.98856378408,1.63863706713,2.44956728334])
(1048576,[135312,847543,743411],[1.42412015238,1.58759872958,2.01237484818])
UPDATE2
I am not sure at all, but I think I need to go from the vectors above to the array below so that I can apply it directly to the MLlib KMeans algorithm:
1.75642010278 2.41857747478 1.97365255252
2.98856378408 1.63863706713 2.44956728334
1.42412015238 1.58759872958 2.01237484818

The output of IDF is an RDD of SparseVectors. KMeans accepts both sparse and dense vectors as input, so there should be no need for any transformation: you should be able to use the output of IDF directly as input to KMeans.
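For example, a minimal sketch that feeds the tfidf RDD from the question straight into MLlib KMeans (k and maxIterations are arbitrary example values, not recommendations):
from pyspark.mllib.clustering import KMeans

# tfidf is the RDD of SparseVectors produced by idf.transform(tf) above
clusters = KMeans.train(tfidf, k=5, maxIterations=20)
print("Within set sum of squared errors: " + str(clusters.computeCost(tfidf)))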
If you need to save the data to disk between running TF-IDF and KMeans, I would recommend saving it as a parquet file through the DataFrame API.
First convert to a dataframe using Row:
from pyspark.sql import Row
row = Row("features") # column name
df = tfidf.map(row).toDF()
An alternative way to convert without import:
df = tfidf.map(lambda x: (x, )).toDF(["features"])
After the conversion save the dataframe as a parquet file:
df.write.parquet('/path/to/save/file')
To read the data, simply use:
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.read.parquet('/path/to/file')
# converting from dataframe into an RDD[Vector]
data = df.rdd.map(list)
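From there you can feed the recovered vectors straight into MLlib KMeans; here is a minimal sketch (the value of k is an assumption) that pulls the vector out of each Row explicitly:
from pyspark.mllib.clustering import KMeans

# extract the SparseVector from each Row, giving an RDD of vectors
vectors = df.rdd.map(lambda row: row['features'])
model = KMeans.train(vectors, k=5, maxIterations=20)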
If you ever need to convert from a vector saved as a string, that is also possible. Here is some example code:
from pyspark.mllib.linalg import Vectors, VectorUDT
from pyspark.sql.functions import udf
df = sc.parallelize(["(7,[1,2,4],[1,1,1])"]).toDF(["features"])
parse = udf(lambda s: Vectors.parse(s), VectorUDT())
df.select(parse("features"))
First an example dataframe is created with the same formatting. Then a UDF is used to parse the string into a vector. If you want an RDD instead of the dataframe, use the conversion code from the "reading from parquet" part above.
However, the output from IDF is very sparse. The vectors have a length of 1048576 and only one of them has a value over 1. KMeans would not give you any interesting results.
I would recommend looking into word2vec instead. It will give you a more compact vector for each word, and clustering these vectors would make more sense. Using this method you get a map from words to their vector representations, which can be used for clustering.
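A minimal sketch with the MLlib Word2Vec API, assuming documents is an RDD where each element is a list of tokens (as built in the question):
from pyspark.mllib.feature import Word2Vec

# fit word2vec on the tokenized documents
word2vec = Word2Vec()
w2v_model = word2vec.fit(documents)

# map of word -> vector representation; these vectors can then be clustered
word_vectors = w2v_model.getVectors()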

Related

Pyspark: applying kmeans on different groups of a dataframe

Using PySpark I would like to apply KMeans separately to groups of a dataframe, not to the whole dataframe at once. For the moment I use a for loop which iterates over each group, applies KMeans and appends the result to another table. But having a lot of groups makes it time consuming. Could anyone help me, please?
Thanks a lot!
for customer in customer_list:
    temp_df = togroup.filter(col("customer_id") == customer)
    df = assembler.transform(temp_df)
    k = 1
    while (k < 5) and (mtric < width):
        k += 1
        kmeans = KMeans(k=k, seed=5, maxIter=20, initSteps=5)
        model = kmeans.fit(df)
        mtric = 1 - model.computeCost(df) / ttvar
    a = model.transform(df).select(cols)
    allcustomers = allcustomers.union(a)
I came up with a solution using pandas_udf. A pure Spark or Scala solution would be preferable but has yet to be offered.
Assume my data is:
import pandas as pd

df_pd = pd.DataFrame(
    [['cat1', 10.], ['cat1', 20.], ['cat1', 11.], ['cat1', 21.], ['cat1', 22.], ['cat1', 9.],
     ['cat2', 101.], ['cat2', 201.], ['cat2', 111.], ['cat2', 214.], ['cat2', 224.], ['cat2', 99.]],
    columns=['cat', 'val'])
df_spark = spark.createDataFrame(df_pd)
First solve the problem in pandas:
from sklearn.cluster import KMeans
import numpy as np

kmeans = KMeans(n_clusters=2, random_state=0)

def skmean(kmeans, x):
    X = np.array(x)
    kmeans.fit(X)
    return kmeans.predict(X)
You can apply skmean() to a pandas data frame (to make sure it works properly):
df_pd.groupby('cat').apply(lambda x:skmean(kmeans,x)).reset_index()
To apply the function to a pyspark data frame, we use pandas_udf. But first, define a schema for the output data frame:
from pyspark.sql.types import *

schema = StructType([
    StructField('cat', StringType(), True),
    StructField('clusters', ArrayType(IntegerType()))])
Convert the function above to a pandas_udf:
from pyspark.sql.functions import pandas_udf
from pyspark.sql.functions import PandasUDFType

@pandas_udf(schema, functionType=PandasUDFType.GROUPED_MAP)
def skmean_udf(df):
    result = pd.DataFrame(
        df.groupby('cat').apply(lambda x: skmean(kmeans, x)))
    result.reset_index(inplace=True, drop=False)
    return result
You can use the function as follows:
df_spark.groupby('cat').apply(skmean_udf).show()
I came up with a second solution, which I think is slightly better than the last one. The idea is to use groupby() together with collect_list() and to write a udf that takes a list as input and generates the clusters. Continuing with df_spark from the other solution, we write:
df_flat = df_spark.groupby('cat').agg(F.collect_list('val').alias('val_list'))
Now we write the udf function:
import numpy as np
import pyspark.sql.functions as F
from sklearn.cluster import KMeans
from pyspark.sql.types import *

def skmean(x):
    kmeans = KMeans(n_clusters=2, random_state=0)
    X = np.array(x).reshape(-1, 1)
    kmeans.fit(X)
    clusters = kmeans.predict(X).tolist()
    return clusters

clustering_udf = F.udf(lambda arr: skmean(arr), ArrayType(IntegerType()))
Then apply the udf to the flattened dataframe:
df = df_flat.withColumn('clusters', clustering_udf(F.col('val_list')))
Then you can use F.explode() to convert the list to a column.
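For example, a minimal sketch of that last step (the column names follow the code above):
# one output row per (cat, cluster) pair; 'clusters' is the array column produced by the udf
df_exploded = df.select('cat', F.explode('clusters').alias('cluster'))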

Spark matrix multiplication code takes a lot of time to execute

I have a simple PySpark environment set up with findspark.init() in Spyder, and I'm running the code on localhost. I am confused as to how a simple matrix multiplication can take hours using BlockMatrix in Spark, whereas the same code takes a few minutes with numpy.
Here's the code I'm using:
import numpy as np
import pandas as pd
from sklearn import cross_validation as cv
import itertools
import random
import findspark
import time

start = time.time()
findspark.init()

from pyspark.mllib.linalg.distributed import *
from pyspark.sql import SparkSession
from pyspark import SparkContext, SparkConf

conf = SparkConf().setAppName('myapp')
sc = SparkContext(conf=conf)
spark = SparkSession(sc)

from pyspark.mllib.linalg.distributed import *

def as_block_matrix(rdd, rowsPerBlock=1024, colsPerBlock=1024):
    return IndexedRowMatrix(
        rdd.zipWithIndex().map(lambda xi: IndexedRow(xi[1], xi[0]))
    ).toBlockMatrix(rowsPerBlock, colsPerBlock)

def prediction(P, Q):
    # np.r_[ pp,np.zeros(len(pp)) ].reshape(2,20)
    Pn = np.r_[P, np.zeros(len(P)), np.zeros(len(P)), np.zeros(len(P)), np.zeros(len(P))].reshape(5, len(P))
    Qn = np.r_[Q, np.zeros(len(Q)), np.zeros(len(Q)), np.zeros(len(Q)), np.zeros(len(Q))].reshape(5, len(Q))
    A = Pn[:1]
    B = Qn[:1].T
    distP = sc.parallelize(A)
    distQ = sc.parallelize(B)
    mat = as_block_matrix(distP).multiply(as_block_matrix(distQ))
    blocksRDD = mat.blocks
    m = (list(blocksRDD.collect())[0][1])
    # print(m)
    return m.toArray()[0, 0]

for epoch in range(1):
    for u, i in zip(users, items):
        e = R[u, i] - prediction(P[:, u], Q[:, i])
Not knowing the size of your matrices makes it more difficult to answer this question, but if you are working with high dimensional sparse matrices, one possible issue is inherent to the way pyspark does matrix multiplication. In order to multiply sparse matrices, pyspark converts the sparse matrices to dense matrices. This is noted in the documentation:
http://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.linalg.distributed.BlockMatrix.multiply
which states that:
multiply(other) Left multiplies this BlockMatrix by other, another BlockMatrix. The colsPerBlock of this matrix must equal the rowsPerBlock of other. If other contains any SparseMatrix blocks, they will have to be converted to DenseMatrix blocks. The output BlockMatrix will only consist of DenseMatrix blocks. This may cause some performance issues until support for multiplying two sparse matrices is added.
As far as I know, there isn't a good workaround for this if you intend to use the built-in matrix data types. One way to fix it is to abandon the matrix datatypes and hand-roll your own matrix multiplication using RDD or dataframe join operations. For example, if you can use dataframes, the following has been tested and works fairly well at scale:
from pyspark.sql.functions import sum

def multiply_df_matrices(A, B):
    return A.join(B, A['column'] == B['row']) \
        .groupBy(A['row'], B['column']) \
        .agg(sum(A['value'] * B['value']).alias('value'))
You can do something similar by joining two RDDs.
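Here is a minimal sketch of that RDD approach, under the assumption that each matrix is stored as an RDD of (row, col, value) triples:
def multiply_rdd_matrices(A, B):
    # key A by its column index and B by its row index, join on that shared dimension,
    # multiply the matching values, then sum the products for each output cell
    A_by_col = A.map(lambda t: (t[1], (t[0], t[2])))
    B_by_row = B.map(lambda t: (t[0], (t[1], t[2])))
    return (A_by_col.join(B_by_row)
            .map(lambda kv: ((kv[1][0][0], kv[1][1][0]), kv[1][0][1] * kv[1][1][1]))
            .reduceByKey(lambda x, y: x + y))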

Preparing data for LDA training with PySpark 1.6

I have a corpus of documents that I'm reading into a spark data frame.
I have tokenized and vectorized the text, and now I want to feed the vectorized data into an MLlib LDA model. The LDA API docs seem to require the data to be:
rdd – RDD of documents, which are tuples of document IDs and term (word) count vectors. The term count vectors are “bags of words” with a fixed-size vocabulary (where the vocabulary size is the length of the vector). Document IDs must be unique and >= 0.
How can I get from my data frame to a suitable rdd?
from pyspark.mllib.clustering import LDA
from pyspark.ml.feature import Tokenizer
from pyspark.ml.feature import CountVectorizer
#read the data
tf = sc.wholeTextFiles("20_newsgroups/*")
#transform into a data frame
df = tf.toDF(schema=['file','text'])
#tokenize
tokenizer = Tokenizer(inputCol="text", outputCol="words")
tokenized = tokenizer.transform(df)
#vectorize
cv = CountVectorizer(inputCol="words", outputCol="vectors")
model = cv.fit(tokenized)
result = model.transform(tokenized)
#transform into a suitable rdd
myrdd = ?
#LDA
model = LDA.train(myrdd, k=2, seed=1)
PS: I'm using Apache Spark 1.6.3.
Let's first organize the imports, read the data, do some simple special-character removal, and transform it into a DataFrame:
import re # needed to remove special character
from pyspark import Row
from pyspark.ml.feature import StopWordsRemover
from pyspark.ml.feature import Tokenizer, CountVectorizer
from pyspark.mllib.clustering import LDA
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, LongType
pattern = re.compile('[\W_]+')
rdd = sc.wholeTextFiles("./data/20news-bydate/*/*/*") \
    .mapValues(lambda x: pattern.sub(' ', x)).cache()  # ref. https://stackoverflow.com/a/1277047/3415409
df = rdd.toDF(schema=['file', 'text'])
We will need to add an index to each Row. The following code snippet is inspired by this question about adding primary keys with Apache Spark:
row_with_index = Row(*["id"] + df.columns)

def make_row(columns):
    def _make_row(row, uid):
        row_dict = row.asDict()
        return row_with_index(*[uid] + [row_dict.get(c) for c in columns])
    return _make_row

f = make_row(df.columns)

indexed = (df.rdd
           .zipWithUniqueId()
           .map(lambda x: f(*x))
           .toDF(StructType([StructField("id", LongType(), False)] + df.schema.fields)))
Once we have added the index, we can proceed to the feature cleansing, extraction and transformation:
# tokenize
tokenizer = Tokenizer(inputCol="text", outputCol="tokens")
tokenized = tokenizer.transform(indexed)
# remove stop words
remover = StopWordsRemover(inputCol="tokens", outputCol="words")
cleaned = remover.transform(tokenized)
# vectorize
cv = CountVectorizer(inputCol="words", outputCol="vectors")
count_vectorizer_model = cv.fit(cleaned)
result = count_vectorizer_model.transform(cleaned)
Now, let's transform the result dataframe back into an RDD:
corpus = result.select(F.col('id').cast("long"), 'vectors').rdd \
    .map(lambda x: [x[0], x[1]])
Our data is now ready for training:
# training data
lda_model = LDA.train(rdd=corpus, k=10, seed=12, maxIterations=50)
# extracting topics
topics = lda_model.describeTopics(maxTermsPerTopic=10)
# extraction vocabulary
vocabulary = count_vectorizer_model.vocabulary
We can now print the topic descriptions as follows:
for topic in range(len(topics)):
    print("topic {} : ".format(topic))
    words = topics[topic][0]
    scores = topics[topic][1]
    [print(vocabulary[words[word]], "->", scores[word]) for word in range(len(words))]
PS: The above code was tested with Spark 1.6.3.

How to convert type Row into Vector to feed to the KMeans

When I try to feed df2 to KMeans I get the following error:
clusters = KMeans.train(df2, 10, maxIterations=30,
                        runs=10, initializationMode="random")
The error I get:
Cannot convert type <class 'pyspark.sql.types.Row'> into Vector
df2 is a dataframe created as follows:
df = sqlContext.read.json("data/ALS3.json")
df2 = df.select('latitude','longitude')
df2.show()
+----------+----------+
|  latitude| longitude|
+----------+----------+
|60.1643075|24.9460844|
|60.4686748|22.2774728|
+----------+----------+
How can I convert these two columns to a Vector and feed it to KMeans?
ML
The problem is that you missed the documentation's example, and it's pretty clear that the method train requires a DataFrame with a Vector as features.
To modify your current data's structure you can use a VectorAssembler. In your case it could be something like:
from pyspark.ml.feature import VectorAssembler
from pyspark.sql.functions import *

vectorAssembler = VectorAssembler(inputCols=["latitude", "longitude"],
                                  outputCol="features")

# For your special case that has strings instead of doubles you should cast them first.
expr = [col(c).cast("Double").alias(c)
        for c in vectorAssembler.getInputCols()]
df2 = df2.select(*expr)
df = vectorAssembler.transform(df2)
Besides, you should also normalize your features using the class MinMaxScaler to obtain better results.
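A minimal sketch of that normalization step (the column names follow the assembler output above):
from pyspark.ml.feature import MinMaxScaler

# rescale every feature to the [0, 1] range
scaler = MinMaxScaler(inputCol="features", outputCol="scaledFeatures")
df_scaled = scaler.fit(df).transform(df)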
MLLib
In order to achieve this using MLlib, you first need to use a map function to convert all your string values into floats and merge them together into a DenseVector.
from pyspark.mllib.linalg import Vectors

rdd = df2.map(lambda data: Vectors.dense([float(c) for c in data]))
After this point you can train your MLlib's KMeans model using the rdd variable.
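For example, a minimal training sketch (k, maxIterations and initializationMode are arbitrary example values):
from pyspark.mllib.clustering import KMeans

clusters = KMeans.train(rdd, k=10, maxIterations=30, initializationMode="random")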
I got PySpark 2.3.1 to perform KMeans on a DataFrame as follows:
Write a list of the columns you want to include in the clustering analysis:
feat_cols = ['latitude', 'longitude']
You need all of the columns to be numeric values:
expr = [col(c).cast("Double").alias(c) for c in feat_cols]
df2 = df2.select(*expr)
Create your features vector with VectorAssembler:
from pyspark.ml.feature import VectorAssembler
assembler = VectorAssembler(inputCols=feat_cols, outputCol="features")
df3 = assembler.transform(df2).select('features')
Normalize your features; normalization is not always required, but it rarely hurts (more about this here):
from pyspark.ml.feature import StandardScaler

scaler = StandardScaler(
    inputCol="features",
    outputCol="scaledFeatures",
    withStd=True,
    withMean=False)
scalerModel = scaler.fit(df3)
df4 = scalerModel.transform(df3).drop('features') \
    .withColumnRenamed('scaledFeatures', 'features')
Turn your DataFrame object df4 into a dense vector RDD:
from pyspark.mllib.linalg import Vectors
data5 = df4.rdd.map(lambda row: Vectors.dense([x for x in row['features']]))
Use the obtained RDD object as input for KMeans training:
from pyspark.mllib.clustering import KMeans
model = KMeans.train(data5, k=3, maxIterations=10)
Example: classify a point p in your vector space:
prediction = model.predict(p)
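For instance, a minimal sketch; the coordinates below are made up and must live in the same scaled feature space the model was trained on:
p = Vectors.dense([0.9, 1.1])  # hypothetical, already-scaled point
prediction = model.predict(p)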

random forest with spark: get predicted values and R²

I am using Spark MLlib to perform a random forest regression.
I am using the python code here:
https://spark.apache.org/docs/1.2.0/mllib-ensembles.html#tab_python_1
It works, but now I would like to get the predicted values as well as the R or R² of the prediction model.
How can I get that?
Here is how to load a csv file into an RDD (Spark data format):
# Imports
import csv
try:
    from StringIO import StringIO
except ImportError:
    from io import StringIO
from collections import namedtuple
from operator import add, itemgetter
from pyspark import SparkConf, SparkContext
from pyspark.mllib.tree import RandomForest, RandomForestModel
from pyspark.mllib.util import MLUtils
from pyspark.mllib.linalg import SparseVector
from pyspark.mllib.regression import LabeledPoint
import shutil
import numpy

def parse(row):
    """
    Parses a row and returns a LabeledPoint.
    """
    row[0] = str(row[0])
    row[1] = float(row[1])
    row[2] = float(row[2])
    row[3] = float(row[3])
    row[4] = float(row[4])
    return LabeledPoint(row[4], row[:4])

def split(line):
    """
    Operator function for splitting a line with the csv module
    """
    reader = csv.reader(StringIO(line), delimiter=';')
    return next(reader)

# load the csv file into a Spark RDD
data = sc.textFile("datafile").map(split).map(parse)
Here is how to perform the random forest algorithm and how to get the predicted values:
def random_forest_regression(data):
    """
    Run the random forest (regression) algorithm on the data to perform the prediction
    """
    # Split the data into training and test sets (30% held out for testing)
    (trainingData, testData) = data.randomSplit([0.7, 0.3])
    # increase the number of trees to get a better prediction
    model = RandomForest.trainRegressor(trainingData, categoricalFeaturesInfo={},
                                        numTrees=100, featureSubsetStrategy="auto",
                                        impurity='variance', maxDepth=10, maxBins=32)
    # Evaluate the model on TEST instances and compute the test error
    predictions_test = model.predict(testData.map(lambda x: x.features))
    real_and_predicted_test = testData.map(lambda lp: lp.label).zip(predictions_test)
    # get the list of real and predicted values FOR ALL THE POINTS
    predictions = model.predict(data.map(lambda x: x.features))
    real_and_predicted = data.map(lambda lp: lp.label).zip(predictions)
    real_and_predicted = real_and_predicted.collect()
    print("real and predicted values")
    for value in real_and_predicted:
        print(value)
    return model, real_and_predicted
To get the correlation coefficient (R value), I used numpy:
def compute_correlation_coefficient(real_and_predicted):
    """
    Compute and display the correlation coefficient from a list of real and predicted values
    """
    list1 = []
    list2 = []
    for pair in real_and_predicted:
        list1.append(pair[0])
        list2.append(pair[1])
    print("correlation coefficient")
    print(numpy.corrcoef(list1, list2)[0, 1])
To get the R², take the square of the correlation coefficient.
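In code, that last step is just squaring the value returned by numpy.corrcoef:
r = numpy.corrcoef(list1, list2)[0, 1]  # correlation coefficient R
r_squared = r ** 2                      # coefficient of determination R²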
Voilà !
