Spark cosine distance between rows using Dataframe - apache-spark

I have to compute the cosine distance between each pair of rows, but I have no idea how to do it elegantly with Spark's DataFrame API. The idea is to compute similarities for each row (item) and take the top 10 by comparing the similarities between rows. This is needed for an item-item recommender system.
Everything I've read about this refers to computing similarity over columns: Apache Spark Python Cosine Similarity over DataFrames
Can someone say whether it is possible to compute cosine distance elegantly between rows using PySpark's DataFrame API or RDDs, or do I have to do it manually?
Here is some code to show what I intend to do:
from numpy import asarray
from numpy import linalg as LA

def cosineSimilarity(vec1, vec2):
    return vec1.dot(vec2) / (LA.norm(vec1) * LA.norm(vec2))

# p.s. the model is ALS
Pred_Factors = model.itemFactors.cache()  # DataFrame[id: int, features: array<float>]

sims = []
for _id, _feature in Pred_Factors.toLocalIterator():
    itemFactor = asarray(_feature)
    for id_, feature in Pred_Factors.toLocalIterator():
        sims.append((_id, cosineSimilarity(asarray(feature), itemFactor)))

sims = sc.parallelize(sims)
sortedSims = sims.takeOrdered(10, key=lambda x: -x[1])
Thanks in advance for all the help.

You can use the columnSimilarities() method of mllib's distributed IndexedRowMatrix (in pyspark.mllib.linalg.distributed). It uses cosine similarity as its metric. Since it computes similarities between columns, you have to transpose the matrix before applying the function:
from pyspark.mllib.linalg.distributed import IndexedRow, IndexedRowMatrix

pred = (IndexedRowMatrix(Pred_Factors.rdd.map(lambda x: IndexedRow(x[0], x[1])))
        .toBlockMatrix()
        .transpose()
        .toIndexedRowMatrix())
pred_sims = pred.columnSimilarities()
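The result is a CoordinateMatrix holding the upper triangle of the similarity matrix, one MatrixEntry per item pair. A minimal sketch (assuming pred_sims from above) of pulling the top 10 most similar pairs from its entries RDD:
top10 = pred_sims.entries.takeOrdered(10, key=lambda e: -e.value)
for e in top10:
    print(e.i, e.j, e.value)  # item index, item index, cosine similarity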

Related

How to predict the cluster label of a new observation using hierarchical clustering?

I want to study a population of 47532 individuals with 16230 features. Thus I created a matrix with 16230 lines and 47532 columns:
>>> import numpy as np
>>> import scipy.cluster.hierarchy as hcluster
>>> from scipy.spatial import distance
>>> from sklearn.cluster import AgglomerativeClustering
>>> matrix.shape
(16230, 47532)
# remove all duplicate observations in order to not waste computation time
# (transposed so that individuals are rows, matching the shapes below)
>>> uniq_vectors, row_index = np.unique(matrix.T, return_index=True, axis=0)
>>> uniq_vectors.shape
(22957, 16230)
# compute distance between each observations
>>> distance_matrix = distance.pdist(uniq_vectors, metric='jaccard')
>>> distance_matrix_2d = distance.squareform(distance_matrix, force='tomatrix')
>>> distance_matrix_2d.shape
(22957, 22957)
# Perform linkage
>>> linkage = hcluster.linkage(distance_matrix, method='complete')
So now I can use scikit-learn to perform the clustering:
>>> model = AgglomerativeClustering(n_clusters=40, affinity='precomputed', linkage='complete')
>>> cluster_label = model.fit_predict(distance_matrix_2d)
How can I predict future observations using this model?
Indeed, AgglomerativeClustering does not have a predict method, and it would take too long to recompute the distances for 16230 x (47532 + 1).
Is it possible to compute a distance between new observations and all the pre-computed clusters?
Indeed, pdist from scipy computes the full n x n distance matrix. In my case I would like to compute the distance from one observation o to the n samples, i.e. o x n.
Thanks for any insight.
The answer is simple: you cannot. Hierarchical clustering is not designed to predict cluster labels for new observations. It simply links data points according to their distances and does not define "regions" for each cluster.
I believe there are two options for you at this stage:
For new data points, find the nearest observation in your data set (using the same distance function as during training) and assign the same cluster label (see the sketch after this list). This requires a bit more coding and is obviously a bit of a hack. Keep in mind that the results might not make a lot of sense, since you will be extrapolating cluster labels using a methodology different from the training procedure.
Use another clustering algorithm! It seems you are using hierarchical clustering in a use case that does not match the model. KMeans could be a good choice, as it can explicitly assign new data points to the closest cluster.
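A minimal sketch of the first option, assuming the new observation is a binary feature vector shaped like the rows of uniq_vectors and that cluster_label comes from fit_predict above. Note that scipy's cdist also answers the o x n sub-question: it computes distances from one observation to all n samples without building the full n x n matrix.
import numpy as np
from scipy.spatial import distance

def predict_label(new_obs, train_vectors, train_labels):
    # 1 x n Jaccard distances from the new observation to every training vector
    dists = distance.cdist(new_obs.reshape(1, -1), train_vectors, metric='jaccard')
    # copy the cluster label of the nearest training observation
    return train_labels[np.argmin(dists)]

# e.g.: predict_label(new_vector, uniq_vectors, cluster_label)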

Is there any efficient way to extract the probability vector after running a PySpark ML algorithm, other than rdd.map(extract) in PySpark?

I want to extract the probabilities from the vector and turn them into rows. For that I have used rdd.map(extract):
def extract(row):
    return (row.prediction,) + tuple(row.probability.toArray().tolist())
I have 96 probabilities within that vector. After extracting them into rows I sorted them and selected the top 10 probabilities. This works well for a small dataset of around 1000 records (i.e. 96 * 1000 = 96000 rows), but for 100k records the function takes much more time. Is there any other way to extract those probabilities and turn them into rows?
One thing that can be improved in your code is to compute the top-N directly in the extract() function, instead of retrieving all 96 probabilities and then post-processing them to find the top-N. For example, use np.partition on the NumPy ndarray returned from the toArray() method:
from numpy import partition, arange

N = 10
# negating twice brings the N largest probabilities to the front, in descending order
extract = lambda row: (row.prediction,) + tuple(-partition(-row.probability.toArray(), arange(N))[:N])
my_model.summary.predictions.rdd.map(extract).take(20)
Note: if the order of the top-N probabilities does not matter, change arange(N) to N.
EDIT: Since Vector is not natively supported by Spark SQL (as of Spark 2.4.4), to use the DataFrame API and its optimizations you will first have to use a UDF to convert the Vector into an ArrayType.
For Spark 2.4+, use udf + sort_array + slice, which can rely on the JVM's optimized sorting, though without partial sorting:
from pyspark.sql.functions import udf, sort_array, slice

udf_extract_1 = udf(lambda v: v.toArray().tolist(), 'array<double>')

(my_model.summary.predictions
    .select('prediction', udf_extract_1('probability').alias('probability'))
    .withColumn('probability', slice(sort_array('probability', False), 1, N))
    .show(truncate=False))
Or use a udf plus Python's partial sorting:
from pyspark.sql.functions import udf
from numpy import partition, arange

udf_extract_2 = udf(lambda v: (-partition(-v.toArray(), arange(N))[:N]).tolist(), 'array<double>')

(my_model.summary.predictions
    .select('prediction', udf_extract_2('probability').alias('probability'))
    .show(truncate=False))

Precomputed distance matrix costs too much memory for DBSCAN clustering

There are 40 million records in my scenario. Can DBSCAN in sklearn support such a large dataset? Below is my code:
from sklearn.metrics import pairwise_distances
from sklearn.cluster import DBSCAN

result = []
for line in open("./raw_data1"):
    tagid_result = [0] * 10
    line = line.strip()
    fields = line.split("\t")
    if len(fields) < 6:
        continue
    tagid = fields[3]
    tagids = tagid.split(":")
    for i in tagids:
        tagid_result[tagid2idx[i]] = 1
    result.append(tagid_result)

distance_matrix = pairwise_distances(result, metric='jaccard')
dbscan = DBSCAN(eps=0.1, min_samples=1200, metric="precomputed", n_jobs=-1)
db = dbscan.fit(distance_matrix)
for label in db.labels_:
    print(label)
How can I improve my code to support such a large dataset?
DBSCAN itself never requires your data to be available as a matrix, and will only need linear memory.
Unfortunately for you, the sklearn authors decided to implement DBSCAN a bit differently than the original article, which causes their implementation to potentially use much more memory. In cases such as yours, that decision has real drawbacks.
For Jaccard distance, the neighborhood search of DBSCAN can be nicely accelerated, for example with inverted indexes. But even without that, you only need to compute one row of distances at a time if you implement the "textbook" version of DBSCAN yourself.
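If you stay with sklearn, one way to at least avoid materializing the n x n matrix is to let DBSCAN compute Jaccard distances on the fly through a tree index (this is sklearn's built-in index, not the inverted-index approach mentioned above). A sketch, assuming the binary tag vectors built in the question:
import numpy as np
from sklearn.cluster import DBSCAN

X = np.asarray(result, dtype=bool)  # the binary tag vectors from the question
# metric='jaccard' with a BallTree avoids the precomputed n x n matrix,
# though sklearn may still use a lot of memory for dense neighborhoods
dbscan = DBSCAN(eps=0.1, min_samples=1200, metric='jaccard', algorithm='ball_tree')
labels = dbscan.fit_predict(X)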

Filtering Spark DataFrame on new column

Context: I am training a Keras RNN on a dataset too large to fit in memory. I am using PySpark on an AWS EMR cluster to train the model in batches that are small enough to be stored in memory. I was not able to implement the model as distributed using elephas, and I suspect this is related to my model being stateful, though I'm not entirely sure.
The dataframe has a row for every user and for each day elapsed since the day of install, from 0 to 29. After querying the database I do a number of operations on the dataframe:
query = """WITH max_days_elapsed AS (
        SELECT user_id,
               max(days_elapsed) AS max_de
        FROM table
        GROUP BY user_id
    )
    SELECT table.*
    FROM table
    LEFT OUTER JOIN max_days_elapsed USING (user_id)
    WHERE max_de = 1
      AND days_elapsed < 1"""
df = read_from_db(query)  # a custom function that queries our database

#Create features vector column
from pyspark.ml.feature import VectorAssembler

assembler = VectorAssembler(inputCols=features_list, outputCol="features")
df_vectorized = assembler.transform(df)

#Split users into train and test and assign batch number
import numpy as np
from pyspark.sql.functions import udf, lit
from pyspark.sql.types import IntegerType

udf_randint = udf(lambda x: np.random.randint(0, x), IntegerType())
training_users, testing_users = df_vectorized.select("user_id").distinct().randomSplit([0.8, 0.2], 123)
training_users = training_users.withColumn("batch_number", udf_randint(lit(N_BATCHES)))

#Create and sort train and test dataframes
train = df_vectorized.join(training_users, ["user_id"], "inner") \
    .select(["user_id", "days_elapsed", "batch_number", "features", "kpi1", "kpi2", "kpi3"])
train = train.sort(["user_id", "days_elapsed"])
test = df_vectorized.join(testing_users, ["user_id"], "inner") \
    .select(["user_id", "days_elapsed", "features", "kpi1", "kpi2", "kpi3"])
test = test.sort(["user_id", "days_elapsed"])
The problem I am having is that I cannot filter on batch_number without caching train first. I can filter on any of the columns that exist in the original dataset in our database, but not on any column I have generated in PySpark after querying the database:
This: train.filter(train["days_elapsed"] == 0).select("days_elapsed").distinct().show() correctly returns only 0.
But all of these return every batch number between 0 and 9, without any filtering:
train.filter(train["batch_number"] == 0).select("batch_number").distinct().show()
train.filter(train.batch_number == 0).select("batch_number").distinct().show()
train.filter("batch_number = 0").select("batch_number").distinct().show()
train.filter(col("batch_number") == 0).select("batch_number").distinct().show()
This also does not work:
train.createOrReplaceTempView("train_table")
batch_df = spark.sql("SELECT * FROM train_table WHERE batch_number = 1")
batch_df.select("batch_number").distinct().show()
All of these work if I do train.cache() first. Is that absolutely necessary, or is there a way to do this without caching?
Spark >= 2.3 (depending on the progress of SPARK-22629)
It should be possible to disable certain optimizations using the asNondeterministic method.
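For example, a sketch against the asker's snippet (asNondeterministic is available on PySpark UDFs from 2.3 on):
# mark the UDF as non-deterministic so the optimizer will not collapse
# or duplicate its invocations
udf_randint = udf(lambda x: np.random.randint(0, x), IntegerType()).asNondeterministic()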
Spark < 2.3
Don't use a UDF to generate random numbers. First of all, to quote the docs:
The user-defined functions must be deterministic. Due to optimization, duplicate invocations may be eliminated or the function may even be invoked more times than it is present in the query.
Even if it weren't for the UDF issue, there are Spark subtleties that make it almost impossible to implement this correctly when processing single records.
Spark already provides rand:
Generates a random column with independent and identically distributed (i.i.d.) samples from U[0.0, 1.0].
and randn
Generates a column with independent and identically distributed (i.i.d.) samples from the standard normal distribution.
which can be used to build more complex generator functions.
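For instance, a sketch of the asker's batch assignment rebuilt on rand (N_BATCHES as defined in the question):
from pyspark.sql.functions import rand, floor

# floor(rand() * N_BATCHES) yields a uniform integer batch id in [0, N_BATCHES)
training_users = training_users.withColumn(
    "batch_number", floor(rand(seed=123) * N_BATCHES).cast("int"))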
Note:
There can be some other issues with your code, but this makes it unacceptable from the start (see Random numbers generation in PySpark and pyspark. Transformer that generates a random number generates always the same number).

K-Means clustering is biased to one center

I have a corpus of wiki pages (baseball, hockey, music, football) which I'm running through tf-idf and then through k-means. After a couple of issues to start (you can see my previous questions), I'm finally getting a KMeansModel... but when I try to predict, I keep getting the same center. Is this because of the small dataset, or because I'm comparing multi-word documents against a short query of only 1-20 words? Or is there something else I'm doing wrong? See the code below:
//Preprocessing of data includes splitting into words
//and removing words with only 1 or 2 characters
val corpus: RDD[Seq[String]]
val hashingTF = new HashingTF(100000)
val tf = hashingTF.transform(corpus)
val idf = new IDF().fit(tf)
val tfidf = idf.transform(tf).cache
val kMeansModel = KMeans.train(tfidf, 3, 10)
val queryTf = hashingTF.transform(List("music"))
val queryTfidf = idf.transform(queryTf)
kMeansModel.predict(queryTfidf) //Always the same, no matter the term supplied
This question seems somewhat related to this one
More a checklist than an answer:
A single-word query or a very short sentence is probably not a good choice, especially when combined with a large feature vector. I would start with significant fragments of documents from the corpus.
Manually check the similarity between the query and each cluster center. Is the query even remotely similar to any of them?
import breeze.linalg.{DenseVector => BDV, SparseVector => BSV, Vector => BV}
import breeze.linalg.functions.cosineDistance
import org.apache.spark.mllib.linalg.{Vector, SparseVector, DenseVector}

def toBreeze(v: Vector): BV[Double] = v match {
  case DenseVector(values) => new BDV[Double](values)
  case SparseVector(size, indices, values) => new BSV[Double](indices, values, size)
}

val centers = kMeansModel.clusterCenters.map(toBreeze(_))
val query = toBreeze(queryTfidf)
centers.map(c => cosineDistance(query, c))
Does K-Means converge? Depending on the dataset and the initial centroids, ten or twenty iterations may not be enough. Try increasing the number to a thousand or so and see if the problem persists.
Is your corpus diverse enough to form meaningful clusters? Try predicting the cluster for each document in your corpus. Do you get a relatively uniform distribution, or are almost all documents assigned to a single cluster?
Perform a visual inspection. Take your tfidf RDD, convert it to a matrix, apply PCA, plot it, color by cluster, and see if you get meaningful results.
Plot the centroids as well and check whether they cover the apparent clusters. If not, check convergence once again.
You can also check the similarities between the centroids themselves:
(0 until centers.size).toList.flatMap(i =>
  ((i + 1) until centers.size).map(j =>
    (i, j, 1 - cosineDistance(centers(i), centers(j)))))
Is your pre-processing thorough enough? Simply removing short words most likely won't suffice. I would at least extend it with stopword removal. Some stemming wouldn't hurt either.
K-Means results depend on the initial centroids. Try running the algorithm multiple times and see if the problem persists.
Try a more sophisticated algorithm like LDA.
