Clustering on Market-1501 dataset - pytorch

I am trying to perform clustering on the Market-1501 dataset. The approach that I am using is as follows:
I train a person ReID model (using this repository: Reid-Strong-Baseline).
I use a version of depth-first search to cluster data (not part of the training set) into individual classes.
Although the Rank-1 and Rank-5 metrics of the ReID model are very good, the clustering results are rather disappointing. I am also struggling to find relevant literature that could help me.
Does anyone have any pointers on where I could at least find relevant literature (i.e. person re-identification followed by clustering)?
Thanks in advance.
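For concreteness, the depth-first-search grouping I describe above amounts to taking connected components of a thresholded similarity graph over the ReID embeddings. A minimal sketch of that idea (the embeddings array, the L2 normalisation, and the thresh value are placeholder assumptions, not part of my actual pipeline):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def cluster_by_threshold(embeddings: np.ndarray, thresh: float = 0.6) -> np.ndarray:
    """Group L2-normalised ReID embeddings by thresholded cosine similarity."""
    # Cosine similarity between every pair of (already normalised) embeddings.
    sim = embeddings @ embeddings.T
    # Build an adjacency matrix: an edge wherever similarity exceeds the cutoff.
    adj = csr_matrix(sim >= thresh)
    # Connected components of this graph are exactly what a DFS-based grouping finds.
    _, labels = connected_components(adj, directed=False)
    return labels

# labels = cluster_by_threshold(gallery_embeddings, thresh=0.6)  # gallery_embeddings: (N, D) array
```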

Related

Do I need a test-train split for K-means clustering even if I'm not looking to predict anything?

I have a set of 2000 points which are basically x,y coordinates of pass origins from association football. I want to run a k-means clustering algorithm on it (k=10) just to find which pass origins are the most common. However, I don't want to predict anything for future values; I simply want to work with the existing data. Do I still need to split it into training and test sets? I assume splits are only needed when we want to train a model on one set in order to predict future values (?)
I'm new to clustering (and Python as a whole) so any help would be appreciated.
No, in clustering (i.e. unsupervised learning) you do not need to split the data.
I disagree with the answer above. Clustering can still be evaluated with accuracy-style metrics, and if you do not split the data into train and test sets you will most likely overfit the model. See these similar questions: 1, 2, 3. Please note that splitting data into train/test sets is unrelated to whether the problem is supervised or unsupervised.
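For reference, a minimal sketch of the task described in the question, assuming scikit-learn (the passes array is a random stand-in for the real pass-origin coordinates); all 2000 points are used for fitting, with no split:

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for the real data: 2000 (x, y) pass origins.
passes = np.random.rand(2000, 2) * 100

# Fit k-means with k=10 on the full data set; no train/test split for this descriptive use.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(passes)

print(kmeans.cluster_centers_)      # centres of the 10 most common pass-origin regions
print(np.bincount(kmeans.labels_))  # how many passes fall into each cluster
```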

categorize non-functional requirements

I am developing a machine learning project which analyzes requirement specifications and categorizes the non-functional requirements into categories like database, web socket, backend technology, etc. From my research, Naive Bayes seems to be the better way to categorize, but due to the lack of a dataset I plan to go with seeded LDA for topic modeling. Would it be okay to use LDA, or should I use something else?
You can try either LDA or clustering.
In my experience, k-means clustering can give you a better sense of what you are doing and what is happening.
LDA could also work well; you might try it first, since k-means takes much more time.
I implemented an issue tracking system using k-means; you may want to take a look: issue tracker
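If you want to try plain LDA first, a rough sketch with gensim is below (the example requirement sentences and num_topics=5 are placeholders; properly seeded/guided LDA would need a dedicated package):

```python
from gensim import corpora, models

# Placeholder requirement sentences; in practice these come from the specification document.
requirements = [
    "system shall store user records in a relational database",
    "backend shall expose a web socket for real time updates",
    "service shall respond within 200 ms under normal load",
]
tokenized = [doc.lower().split() for doc in requirements]

dictionary = corpora.Dictionary(tokenized)
corpus = [dictionary.doc2bow(doc) for doc in tokenized]

lda = models.LdaModel(corpus, num_topics=5, id2word=dictionary, passes=10, random_state=0)
for topic in lda.print_topics():
    print(topic)
```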

Extrapolation of sample to population

How can I extrapolate a sample of 10,000 rows to the entire population (100,000 rows) in Python? I performed agglomerative clustering on the sample in Python, but I am stuck on extending the result to the entire population.
There is no general rule.
For hierarchical clustering, this very much depends on your linkage, and the clustering of a different sample or the whole population may be very different. (For a starter, try a different sample and compare!)
Generalizing a clustering result to new data is usually contradicting the very assumptions made for the clustering. This is not classification, but explorative data analysis.
However, if you have found good clustering results, and you have verified them to be desirable, then you can train a classifier on the cluster labels to predict the cluster label of new data.
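As a sketch of that last suggestion (the arrays, the number of clusters, and the choice of a k-nearest-neighbour classifier below are placeholder assumptions, not from the question):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import KNeighborsClassifier

sample = np.random.rand(10_000, 5)       # stand-in for the clustered sample
population = np.random.rand(100_000, 5)  # stand-in for the full population

# 1. Cluster the sample (as already done in the question).
labels = AgglomerativeClustering(n_clusters=8).fit_predict(sample)

# 2. Train a classifier on the sample's cluster labels ...
clf = KNeighborsClassifier(n_neighbors=15).fit(sample, labels)

# 3. ... and use it to assign the remaining rows to the existing clusters.
population_labels = clf.predict(population)
```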

how to determine the number of topics for LDA?

I am new to LDA and want to use it in my work. However, some problems have come up.
In order to get the best performance, I want to estimate the best number of topics. After reading "Finding Scientific Topics", I know that I can first calculate log P(w|z) and then use the harmonic mean of a series of P(w|z) values to estimate P(w|T).
My question is what does the "a series of" mean?
Unfortunately, there is no hard science yielding the correct answer to your question. To the best of my knowledge, the hierarchical Dirichlet process (HDP) is quite possibly the best way to arrive at the optimal number of topics.
If you are looking for deeper analyses, this paper on HDP reports the advantages of HDP in determining the number of groups.
A reliable way is to compute the topic coherence for different numbers of topics and choose the model that gives the highest coherence. But the highest coherence may not always fit the bill.
See this topic modeling example.
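A rough sketch of such a coherence sweep with gensim (the raw_documents list, the naive whitespace tokenisation, and the range of k are assumptions):

```python
from gensim import corpora
from gensim.models import CoherenceModel, LdaModel

# raw_documents is assumed to be a list of strings from your own corpus.
tokenized_docs = [doc.lower().split() for doc in raw_documents]
dictionary = corpora.Dictionary(tokenized_docs)
corpus = [dictionary.doc2bow(doc) for doc in tokenized_docs]

# Fit LDA for several candidate topic counts and score each with c_v coherence.
scores = {}
for k in range(2, 21):
    lda = LdaModel(corpus, num_topics=k, id2word=dictionary, passes=10, random_state=0)
    cm = CoherenceModel(model=lda, texts=tokenized_docs, dictionary=dictionary, coherence="c_v")
    scores[k] = cm.get_coherence()

best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])
```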
Some people use the harmonic mean for finding the optimal number of topics; I tried it as well, but the results were unsatisfactory. My suggestion: if you are using R, the "ldatuning" package will be useful. It provides four metrics for choosing the optimal number of topics. Perplexity and log-likelihood based V-fold cross-validation are also very good options for topic modeling, although V-fold cross-validation is somewhat time-consuming for large datasets. You can also see "A heuristic approach to determine an appropriate number of topics in topic modeling".
Important links:
https://cran.r-project.org/web/packages/ldatuning/vignettes/topics.html
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4597325/
Let k = number of topics
There is no single best way, and I am not even sure there is any standard practice for this.
Method 1:
Try out different values of k and select the one that has the largest likelihood.
Method 2:
Instead of LDA, see if you can use HDP-LDA.
Method 3:
If the HDP-LDA is infeasible on your corpus (because of corpus size), then take a uniform sample of your corpus and run HDP-LDA on that, take the value of k as given by HDP-LDA. For a small interval around this k, use Method 1.
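If your corpus allows it, a quick way to try Method 2 with gensim's HdpModel is sketched below (reusing the corpus and dictionary built in the coherence sketch above; the number of topics to print is arbitrary):

```python
from gensim.models import HdpModel

# HDP infers the number of topics from the data instead of taking k up front.
hdp = HdpModel(corpus, id2word=dictionary)

# Inspect the topics HDP actually uses; the count of topics with non-negligible weight
# is a reasonable starting value for k in a plain LDA run.
for topic in hdp.print_topics(num_topics=20, num_words=8):
    print(topic)
```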
Since I am working on this same problem, I just want to add the method proposed by Wang et al. (2019) in their paper "Optimization of Topic Recognition Model for News Texts Based on LDA". Besides giving a good overview, they suggest a new method: first you train a word2vec model (e.g. using the word2vec package), then you apply a clustering algorithm capable of finding density peaks (e.g. from the densityClust package), and finally you use the number of clusters found as the number of topics in the LDA algorithm.
If time permits, I will try this out. I also wonder if the word2vec model can make the LDA obsolete.

Suitable data mining technique for this dataset

I'm working on a data mining project and would like to mine this dataset Higher Education Enrolments for interesting patterns or knowledge. My problem is figuring out which technique would work best for the dataset.
I'm currently working on the dataset using RapidMiner 5.0 and I removed two columns (E550 - Reference year, E931 - Total Student EFTSL) from the data as they would not be relevant to the analysis. The rest of the attributes are nominal except StudentID (integer) which I have used as my id. I'm currently using classification on it (Naive Bayes) but would like to get the opinion of others, hopefully those who have had more experience in this area. Thanks.
The best technique depends on many factors: the type and distribution of the training and target attributes, the domain, the value ranges of attributes, etc. The best technique to use is the result of data analysis and understanding.
In this particular case, you should clarify which is the attribute to predict.
Unless you already know what you are looking for, and know about the quality of the data source, you should always start with some exploratory analysis:
look at some of the first- and second-order statistics of all the variables
generate histograms of each variable, to get an idea of its empirical distribution
take a look at pairwise scatter plots of variables that might have dependencies
try other visualizations that you might think of
These will give you a rough idea of what kinds of patterns might be present and discoverable given the noise level. Then, depending on what kind of pattern you are interested in, you can start trying various unsupervised pattern-learning methods, such as PCA/ICA/factor analysis and clustering, or supervised methods, such as regression and classification.
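A minimal sketch of those exploratory steps in Python, assuming the enrolment data has been exported from RapidMiner to a CSV file (the file name and the pandas/matplotlib tooling are assumptions):

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("higher_education_enrolments.csv")

# First- and second-order statistics for every column (counts, means, frequencies, ...).
print(df.describe(include="all"))

# Histograms to see each numeric variable's empirical distribution.
df.hist(figsize=(12, 8))
plt.tight_layout()
plt.show()

# Pairwise scatter plots of numeric variables that might have dependencies.
pd.plotting.scatter_matrix(df.select_dtypes("number"), figsize=(12, 8))
plt.show()
```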
