k-means clustering for mixed categorical and numeric values - apache-spark

Any help please.
I want to provide a simple framework for identifying and cleaning duplicate data in a big-data context. This pre-processing must be performed in real time (streaming).
We represent our database by a file.csv; this file contains patient (medical) records without duplicates.
We want to cluster file.csv into 4 clusters using an incremental parallel k-means clustering for mixed categorical and numeric values, where each cluster contains similar records.
Every time a structured record arrives on the data stream, we must compare it with the representatives of the clusters (M1, M2, M3, M4).
If the record is not a duplicate, we save it in file.csv; if it is a duplicate, it is not saved in file.csv.
1) So what is the more efficient tool in my case, Hadoop or Spark?
2) How can I implement clustering for mixed categorical and numeric values with MLlib (Spark) or Mahout (Hadoop)?
3) What does incremental clustering mean? Is it the same as streaming clustering?

As already noted a dozen times here on SO/CV:
k-means computes means.
Unless you can define a least-squares mean for categorical data (that is still useful in practice), using k-means on such data doesn't work.
Sure, you can do one-hot encoding and similar hacks, but they make the results next to meaningless. "Least-squares" is not a meaningful objective on binary input data.
KMeans dealing with categorical variable
Why am I not getting points around clusers in this kmeans implementation?
https://stats.stackexchange.com/questions/58910/kmeans-whether-to-standardise-can-you-use-categorical-variables-is-cluster-3
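One commonly suggested alternative for mixed data is k-prototypes rather than k-means; below is a minimal single-machine sketch using the third-party kmodes package. This is not an MLlib or Mahout implementation, and the toy columns are my own assumptions for illustration.

```python
# A minimal sketch of k-prototypes (a k-means variant for mixed numeric and
# categorical data) using the third-party `kmodes` package. Single-machine
# only; the toy medical-style columns are assumptions, not from the question.
import numpy as np
from kmodes.kprototypes import KPrototypes

# Columns: age (numeric), blood pressure (numeric), sex (categorical),
# diagnosis code (categorical).
X = np.array([
    [54, 130, "M", "I10"],
    [61, 145, "F", "I10"],
    [35, 118, "F", "E11"],
    [40, 122, "M", "E11"],
], dtype=object)

kp = KPrototypes(n_clusters=2, init="Cao", n_init=5)
labels = kp.fit_predict(X, categorical=[2, 3])  # indices of categorical columns
print(labels)
```

k-prototypes uses squared Euclidean distance on the numeric part and a matching dissimilarity on the categorical part, which sidesteps the "mean of categories" problem described above.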

Related

suggestion for clustering algorithm?

I have a dataset of 590,000 records after preprocessing and I want to find clusters in it. It contains string data (for now, assume I have only one column with 590,000 unique values). I am also using a custom-defined distance measure and need to compute a distance matrix of size 590,000 x 590,000. Using some partition logic I created the distance matrix, but I cannot merge those partitions into one big distance matrix due to memory constraints. Does anyone have any idea how to resolve this? I picked DBSCAN for it. Is there any way to use deep learning methodologies? Any other ideas?
Use a manageable sample first.
Because I doubt the results will be good enough to warrant any effort on scaling a method that does not work anyway.
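For instance, a single-machine sketch of that sampling idea with scikit-learn; the distance function below is a placeholder for your custom measure, and the data is a stand-in.

```python
# A sketch of "use a manageable sample first": sample a few thousand strings,
# build their (much smaller) distance matrix with your own measure, and run
# DBSCAN on it. `custom_distance` and the data are placeholders.
import random
import numpy as np
from sklearn.cluster import DBSCAN

def custom_distance(a, b):
    # Placeholder for the custom-defined string distance from the question.
    return abs(len(a) - len(b)) / max(len(a), len(b), 1)

all_strings = ["record_%d" % i for i in range(590000)]   # stand-in data
sample = random.sample(all_strings, 2000)

n = len(sample)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        d = custom_distance(sample[i], sample[j])
        dist[i, j] = dist[j, i] = d

# DBSCAN accepts a precomputed distance matrix directly.
labels = DBSCAN(eps=0.3, min_samples=5, metric="precomputed").fit_predict(dist)
```

If the sample results look useless, you have saved yourself the effort of scaling; if they look promising, you can then think about distributing the computation.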

Clustering of facebook-users with k-means

I got a Facebook list of user IDs from the following page:
Stanford Facebook-Data
If you look at the facebook_combined data, you can see that it is a list of user connections (edges). So, for instance, user 0 has something to do with users 1, 2, 3, and so on.
Now my task is to find clusters in the dataset.
In the first step I used node.js to read the file and save the data in an array like this:
array=[[0,1],[0,2], ...]
In the second step I used a k-means plugin for node.js to cluster the data:
Cluster-Plugin
But I don't know whether the result is right, because now I get clusters of edges and not clusters of users.
UPDATE:
I am trying out a Markov implementation for node.js. The Markov Plugin, however, needs an adjacency matrix to build clusters. I implemented an algorithm in Java to save the matrix in a file.
Maybe you have another suggestion for how I could get clusters out of edges.
K-means assumes your input data lies in an R^d vector space.
In fact, it requires the data to be this way, because it computes means as cluster centers, hence the name k-means.
So if you want to use k-means, then you need:
One row per data point (not an edge list)
A fixed-dimensionality data space where the mean is a useful center (usually you should have continuous attributes; on binary data the mean does not make much sense) and where least-squares is a meaningful optimization criterion (again, on binary data, least-squares does not have strong theoretical support)
On your Facebook data, you could try some embedding, but I'd have doubts about the trustworthiness.
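If you still want to feed this into k-means-style tooling, here is a tiny sketch of my own (with the binary-data caveat above) of going from the edge list to one row per user:

```python
# Turning the edge list into one row per user: a binary adjacency vector per
# node. Keep in mind the caveat above that means and least squares are
# questionable on binary data. The edge list is a toy version of the array
# from the question.
from collections import defaultdict

edges = [[0, 1], [0, 2], [1, 2], [3, 4]]

neighbors = defaultdict(set)
for a, b in edges:
    neighbors[a].add(b)
    neighbors[b].add(a)

nodes = sorted(neighbors)

# One row per user, one column per user: 1 if the two users are connected.
rows = [[1 if other in neighbors[node] else 0 for other in nodes]
        for node in nodes]

for node, row in zip(nodes, rows):
    print(node, row)
```

This gives you the "one row per data point" shape, but the dimensionality grows with the number of users, which is another reason graph-based methods (like the Markov clustering you are trying) are usually a better fit here.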

PCA in Spark MLlib and Spark ML

Spark now has two machine learning libraries - Spark MLlib and Spark ML. They do somewhat overlap in what is implemented, but as I understand (as a person new to the whole Spark ecosystem) Spark ML is the way to go and MLlib is still around mostly for backward compatibility.
My question is very concrete and related to PCA. In the MLlib implementation there seems to be a limitation on the number of columns:
spark.mllib supports PCA for tall-and-skinny matrices stored in row-oriented format and any Vectors.
Also, if you look at the Java code example there is also this
The number of columns should be small, e.g, less than 1000.
On the other hand, if you look at ML documentation, there are no limitations mentioned.
So, my question is: does this limitation also exist in Spark ML? And if so, why does the limitation exist, and is there any workaround to use this implementation even when the number of columns is large?
PCA consists in finding a set of decorrelated random variables that you can represent your data with, sorted in decreasing order with respect to the amount of variance they retain.
These variables can be found by projecting your data points onto a specific orthogonal subspace. If your (mean-centered) data matrix is X, this subspace is comprised of the eigenvectors of X^T X.
When X is large, say of dimensions n x d, you can compute X^T X by computing the outer product of each row of the matrix by itself, then adding all the results up. This is of course amenable to a simple map-reduce procedure if d is small, no matter how large n is. That's because the outer product of each row by itself is a d x d matrix, which will have to be manipulated in main memory by each worker. That's why you might run into trouble when handling many columns.
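As an illustration of that map-reduce Gramian computation, here is a minimal PySpark sketch; the toy rows and the explicit mean-centering step are my own, not Spark's internal code.

```python
# Compute X^T X as a sum of per-row outer products, as described above.
# d must be small enough that a d x d matrix fits in each worker's memory.
import numpy as np
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# Toy data: n rows of dimension d.
data = [np.array([1.0, 2.0, 3.0]),
        np.array([2.0, 0.0, 1.0]),
        np.array([0.0, 1.0, 4.0])]
rows = sc.parallelize(data)

# Mean-center the rows.
n = rows.count()
mean = rows.reduce(lambda a, b: a + b) / n
centered = rows.map(lambda r: r - mean)

# X^T X = sum over rows of the outer product of each row with itself.
gram = centered.map(lambda r: np.outer(r, r)).reduce(lambda a, b: a + b)

# The principal directions are the eigenvectors of the d x d Gramian.
eigvals, eigvecs = np.linalg.eigh(gram)
```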
If the number of columns is large (and the number of rows not so much so) you can indeed compute PCA. Just compute the SVD of your (mean-centered) transposed data matrix and multiply it by the resulting eigenvectors and the inverse of the diagonal matrix of eigenvalues. There's your orthogonal subspace.
Bottom line: if the spark.ml implementation follows the first approach every time, then the limitation should be the same. If they check the dimensions of the input dataset to decide whether they should go for the second approach, then you won't have problems dealing with large numbers of columns if the number of rows is small.
Regardless of that, the limit is imposed by how much memory your workers have, so perhaps they let users hit the ceiling by themselves, rather than suggesting a limitation that may not apply for some. That might be the reason why they decided not to mention the limitation in the new docs.
Update: The source code reveals that they do take the first approach every time, regardless of the dimensionality of the input. The actual limit is 65535, and at 10,000 they issue a warning.
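For reference, a minimal usage sketch of the DataFrame-based spark.ml PCA; the column names, toy data, and k are illustrative, not from the post.

```python
# PCA via the DataFrame-based spark.ml API.
from pyspark.sql import SparkSession
from pyspark.ml.feature import PCA
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [(Vectors.dense([1.0, 2.0, 3.0]),),
     (Vectors.dense([2.0, 0.0, 1.0]),),
     (Vectors.dense([0.0, 1.0, 4.0]),)],
    ["features"])

pca = PCA(k=2, inputCol="features", outputCol="pca_features")
model = pca.fit(df)
model.transform(df).show(truncate=False)
```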

how to calculate distance between any two elements in more than 10^8 data to Clustering them using spark?

I have more than 10^8 records stored in Elasticsearch. Now I want to cluster them by writing a hierarchical algorithm or by using PIC from Spark MLlib.
However, I can't use an efficient algorithm like k-means, because every record is stored in the form
{mainID:[subId1,subId2,subId3,...]}
which obviously is not in Euclidean space.
I need to calculate the distance between every pair of records, which I guess will take a very LONG time (10^8 * 10^8). I know about the Cartesian product in Spark for such computations, but it produces duplicated pairs like (mainID1, mainID2) and (mainID2, mainID1), which is not suitable for PIC.
Does anyone know a better way to cluster these records? Or any method to remove the duplicated pairs from the resulting RDD of the Cartesian product?
Thanks a lot!
First of all, don't take the full Cartesian product:
select where a.MainID > b.MainID
This doesn't reduce the complexity, but it does save about 2x in generation time.
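As a rough PySpark sketch of that filter; the record layout follows the question, and the distance function is a placeholder.

```python
# Drop the mirrored pairs from a Cartesian product by keeping only one
# orientation, i.e. the "select where a.MainID > b.MainID" idea above.
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

records = sc.parallelize([
    ("A", ["s1", "s2"]),
    ("B", ["s2", "s3"]),
    ("C", ["s1", "s3", "s4"]),
])

def jaccard_distance(a, b):
    # Placeholder distance between two subId lists.
    a, b = set(a), set(b)
    return 1.0 - len(a & b) / len(a | b)

pairs = (records.cartesian(records)
                .filter(lambda ab: ab[0][0] > ab[1][0])   # one orientation only
                .map(lambda ab: (ab[0][0], ab[1][0],
                                 jaccard_distance(ab[0][1], ab[1][1]))))

print(pairs.collect())
```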
That said, consider your data "shape" and select the clustering algorithm accordingly. K-means, HC, and PIC have three different applications. You know K-means already, I'm sure.
PIC basically finds gaps in the distribution of distances. It's great for well-defined sets (clear boundaries), even when those curl around each other or nest. However, if you have a tendril of connecting points (like a dumbbell with a long, thin bar), PIC will not separate the obvious clusters.
HC is great for such sets, and is a good algorithm in general. Most HC algorithms have an "understanding" of density, and tend to give clusterings that fit human cognition's interpretation. However, HC tends to be slow.
I strongly suggest that you consider a "seeded" algorithm: pick a random subset of your points, perhaps
sqrt(size) * dim
points, where size is the quantity of points (10^8) and dim is the number of dimensions. For instance, your example has 5 dimensions, so take 5*10^4 randomly selected points. Run the first iterations on those alone, which will identify centroids (K-means), eigenvectors (PIC), or initial hierarchy (HC). With those "seeded" values, you can now characterize each of the candidate clusters with 2-3 parameters. Classifying the remaining 10^8 - 5*10^4 points against 3 parameters is a lot faster, being O(size) time instead of O(size^2).
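As a hedged sketch of that seeding idea with MLlib k-means, assuming you can map each record to a numeric vector (which the question's {mainID:[subIds]} format does not directly allow); with PIC or HC the seed-then-classify idea is analogous, only the characterization step differs.

```python
# Train on a small random sample, then classify all points against the
# seeded centroids in O(size) time. The featurization is assumed and not shown.
import math
from pyspark import SparkContext
from pyspark.mllib.clustering import KMeans

sc = SparkContext.getOrCreate()

# Pretend `vectors` is the full RDD of featurized records.
vectors = sc.parallelize([[float(i % 7), float(i % 3)] for i in range(10000)])

size = vectors.count()
dim = 2
sample_size = int(math.sqrt(size)) * dim

# Seed phase: train only on a random sample.
sample = sc.parallelize(vectors.takeSample(False, sample_size, seed=42))
model = KMeans.train(sample, k=4, maxIterations=20)

# Classification phase: assign every point to a seeded centroid.
cluster_ids = model.predict(vectors)
print(cluster_ids.countByValue())
```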
Does that get you moving toward something useful?

Spark clustering - one RDD multiple clusters, optimal cluster size, controlling set sizes

What's the best way to achieve what follows:
(1)
My input data consists of three columns: Object, Category, Value. I need to cluster Objects based on Value, but the clusters need to be Category-specific, i.e. I need a cluster for every Category. It's impractical to split the file and load category-specific data individually.
Initially I thought it was simple (I was already able to cluster Objects for one specific Category) and loaded the data into a pair RDD where the key was the Category value. However, the KMeans train method accepts an RDD, and I got stuck trying to make an RDD of the values for each key of the original RDD.
(2)
Is there a clustering method that returns the optimal number of sets, other than starting with a low K and iterating training while K increases until the Within Set Sum of Squared Errors stabilizes?
(3)
Is there a clustering method where the size of the cluster sets can be controlled (the goal being to produce more balanced set sizes)?
Thank you.
Why is it impractical to split your data set?
this will not take longer than a single k-means iteration (1 pass over the data set)
it will untangle the multiple problems you have, so some subsets can converge earlier, thus speeding up the overall process.
Note that k-means is best on multivariate data. On 1-dimensional data it is much more efficient to sort the data and then do kernel density estimation (or even simply build histograms and let the user decide intuitively). Then you can easily do all these "extras" such as ensuring a minimum cluster size, etc.
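To address (1) concretely, here is a rough sketch of my own (not from the original answer) that trains one MLlib k-means model per Category by filtering a (Category, Value) pair RDD; the toy data and k are assumptions.

```python
# One k-means model per Category, obtained by filtering a (category, [value])
# pair RDD; Value is 1-dimensional, as in the question.
from pyspark import SparkContext
from pyspark.mllib.clustering import KMeans

sc = SparkContext.getOrCreate()

data = sc.parallelize([
    ("catA", [1.0]), ("catA", [1.2]), ("catA", [8.0]), ("catA", [8.3]),
    ("catB", [0.5]), ("catB", [0.4]), ("catB", [9.9]), ("catB", [10.1]),
]).cache()

models = {}
for category in data.keys().distinct().collect():
    subset = data.filter(lambda kv, c=category: kv[0] == c).values()
    models[category] = KMeans.train(subset, k=2, maxIterations=10)

# Each category now has its own centroids.
for category, model in models.items():
    print(category, model.clusterCenters)
```

For genuinely 1-dimensional values, sorting each category's values and using a histogram or kernel density estimate (as suggested above) will usually be both faster and easier to control than k-means.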
