How do I run the Spark logistic regression with categorical features using python? - apache-spark

I have data with some categorical variables and I want to run a logistic regression using MLlib, but it seems like the model supports only continuous variables.
Does anyone know how to deal with this, please?

Logistic regression, like the other linear models, takes as input an RDD of LabeledPoint, where a LabeledPoint is a Double (the label) together with the associated Vector (an array of doubles).
Categorical values (Strings) are not supported; however, you can convert them to binary columns.
For example, if you have a column RAG taking the values Red, Amber and Green, you would add three binary columns isRed, isAmber and isGreen, of which exactly one is 1 (true) and the others are 0 (zero) for each sample.
See for further explanation: http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.DictVectorizer.html
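A minimal sketch of that idea with the RDD-based MLlib API; the column names and toy records are hypothetical, and newer Spark versions also offer OneHotEncoder in spark.ml:

    from pyspark import SparkContext
    from pyspark.mllib.regression import LabeledPoint
    from pyspark.mllib.classification import LogisticRegressionWithLBFGS

    sc = SparkContext(appName="rag-logreg")

    # Hypothetical toy records: (label, RAG value, one numeric feature)
    raw = sc.parallelize([
        (1.0, "Red",   3.2),
        (0.0, "Amber", 1.5),
        (0.0, "Green", 0.7),
    ])

    categories = ["Red", "Amber", "Green"]

    def to_labeled_point(row):
        label, rag, x = row
        # isRed, isAmber, isGreen: exactly one of them is 1.0 per sample
        one_hot = [1.0 if rag == c else 0.0 for c in categories]
        return LabeledPoint(label, one_hot + [x])

    training = raw.map(to_labeled_point)
    model = LogisticRegressionWithLBFGS.train(training)
    print(model.predict([1.0, 0.0, 0.0, 3.0]))  # predict for a "Red" sample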

Related

How to compute linear regression using the multivariate least squares method without using the scikit-learn library?

My question is about classifying the iris dataset using multivariate linear regression, without using the scikit-learn library.
I have this formula, which is needed to find the beta values for the dataset:
β̂ = (X′X)⁻¹ X′Y
This is the dataset in question: http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data
How do I compute the linear regression using this formula? I understand that linear regression is
Yi = β0 + β1X1i + ... + βkXki + ϵi
I have computed the beta values using the above formula with matrix multiplication. How do I find the linear regression equation now? I have taken the first 4 columns as the A matrix and the label column as the Y matrix, with the classes encoded as 1, 2, 3.
How do I compute the ϵi values? Do I assume them to be zero? Any help is appreciated. Thanks in advance.
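A rough NumPy sketch of that formula (not the asker's code; the toy rows below are a hypothetical stand-in for the four iris feature columns and the 1/2/3 label encoding). The ϵi are not assumed to be zero; they are the residuals left over after the fit, Y − Xβ̂:

    import numpy as np

    # Hypothetical stand-in for the 4 iris feature columns (the "A" matrix)
    # and the label column encoded as 1, 2, 3.
    A = np.array([[5.1, 3.5, 1.4, 0.2],
                  [7.0, 3.2, 4.7, 1.4],
                  [6.3, 3.3, 6.0, 2.5],
                  [4.9, 3.0, 1.4, 0.2],
                  [6.4, 3.2, 4.5, 1.5],
                  [5.8, 2.7, 5.1, 1.9]])
    Y = np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0])

    X = np.column_stack([np.ones(len(A)), A])      # intercept column for beta_0
    beta_hat = np.linalg.inv(X.T @ X) @ X.T @ Y    # (X'X)^-1 X'Y
    Y_pred = X @ beta_hat                          # the fitted regression equation
    residuals = Y - Y_pred                         # estimates of the epsilon_i
    print(beta_hat, residuals)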

Standardize or subtract a constant from data for regression

I am attempting to create a prediction model using multiple linear regression.
One of the predictor variables I want to use is a percentage, so it ranges from 0 to 100. I hypothesize that when it is below 50% there will be a negative effect on the target variable, and when it is above 50% a positive effect.
The mean of the predictor variable isn't exactly 50 in my data set, so I am unsure whether to centre or standardize this variable, or just subtract 50 from it to create the split I am looking for.
I am very new to statistics and teaching myself at the moment; any help is greatly appreciated.
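For illustration only (the variable name pct and the values are made up), the two transforms being compared are:

    import numpy as np

    pct = np.array([12.0, 35.0, 50.0, 68.0, 91.0])   # a 0-100 percentage predictor

    centered_at_50 = pct - 50.0                       # negative below 50%, positive above
    standardized = (pct - pct.mean()) / pct.std()     # mean 0, std 1, but the zero point
                                                      # is the sample mean, not 50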

Arbitrarily chosen values as mean/std for normalization. Why?

I have a question regarding the z-score normalization method.
This method uses the z-score to normalize the values of the dataset and needs a mean/std.
I know that you are normally supposed to use the mean/std of the dataset.
But I have seen multiple tutorials on pytorch.org and elsewhere that just use 0.5 for the mean/std, which seems completely arbitrary to me.
I was wondering why they didn't use the mean/std of the dataset.
Example Tutorials where they just use 0.5 as mean/std:
https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html
https://medium.com/ai-society/gans-from-scratch-1-a-deep-introduction-with-code-in-pytorch-and-tensorflow-cb03cdcdba0f
https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py
If you use the mean/std of your dataset to normalize that same dataset, the normalized data will have a mean of 0 and a std of 1, but its min/max values will not fall in any fixed range.
If you instead use 0.5 for both the mean and the std (assuming the input is already scaled to [0, 1], as ToTensor does for images), the normalized data will lie in the range -1 to 1; its mean will be close to zero and its std close to 0.5.
So, to answer my own question: you use 0.5 as the mean/std when you want your dataset in the range -1 to 1, which is beneficial when using, for example, a tanh activation function in a neural network.
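A short torchvision sketch of the two choices discussed above; the CIFAR-10-style per-channel statistics in the second transform are commonly quoted values, not ones verified here, so recompute them for your own data:

    import torchvision.transforms as transforms

    # Tutorial style: ToTensor() scales pixels to [0, 1], then (x - 0.5) / 0.5
    # maps them to [-1, 1], regardless of the dataset's true mean/std.
    to_minus1_1 = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ])

    # Dataset-statistics style: normalize with the dataset's own per-channel
    # mean/std so the result has mean ~0 and std ~1 (values are assumptions).
    to_zero_mean = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
    ])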

Why does k=1 in KNN give the best accuracy?

I am using Weka IBk for text classification. Each document is basically a short sentence. The training dataset contains 15,000 documents. While testing, I can see that k=1 gives the best accuracy. How can this be explained?
If you are querying your learner with the same dataset you trained on, then with k=1 the output values will be perfect, unless you have points with identical feature values but different outcomes. Do some reading on overfitting as it applies to KNN learners.
When you query with the same dataset you trained on, each query arrives with some given feature values. Because that exact point exists in the training data, the learner will match it as the closest point and output whatever Y value that training point had, which in this case is the same as the point you queried with.
The possibilities are:
The training data and the test data are the same data
The test data are highly similar to the training data
The boundaries between classes are very clear
The optimal value of K depends on the data. In general, a larger value of k reduces the effect of noise on the classification, but makes the boundaries between the classes more blurred.
If your result variable contains values of 0 or 1, make sure you are using as.factor; otherwise the data might be interpreted as continuous.
Accuracy is generally calculated on points that are not in the training dataset, i.e. unseen data points, because only the accuracy measured on unseen values tells you how well the model generalizes.
If you calculate accuracy on the training dataset with k=1, you get 100%, because each value has already been seen by the model and a rough decision boundary is formed for k=1. When you then calculate accuracy on unseen data it performs really badly: the training error is very low but the actual error is very high. So it is better to choose an optimal k. To do that, plot the error against the k value for unseen (test) data and choose the k where the error is lowest (a short sketch of this follows below).
To answer your question:
1) you might have taken the entire dataset as the training set and chosen a subset of it as the test set,
(or)
2) you might have measured accuracy on the training dataset itself.
If neither of these is the case, then check the accuracy values for higher k; you will get better accuracy for k > 1 on unseen (test) data.
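A hedged illustration (scikit-learn rather than Weka IBk, with a tiny made-up document set) of scoring several k values on data the learner was not trained on:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    # Hypothetical stand-in for the 15,000 short documents and their labels.
    docs = ["cheap flights to rome", "flight delayed again", "book a hotel in paris",
            "buy pills online now", "discount pharmacy offer", "claim your free prize",
            "meeting moved to noon", "please review the report", "lunch with the team"]
    labels = ["travel", "travel", "travel",
              "spam", "spam", "spam",
              "work", "work", "work"]

    X = TfidfVectorizer().fit_transform(docs)

    # Cross-validated accuracy avoids scoring on points the model has memorized,
    # which is what makes k=1 look perfect on the training set.
    for k in (1, 3, 5):
        scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, labels, cv=3)
        print(k, scores.mean())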

SVM for Text Mining using scikit

Can someone share a code snippet that shows how to use SVM for text mining with scikit-learn? I have seen an example of SVM on numerical data, but I'm not quite sure how to deal with text. I looked at http://scikit-learn.org/stable/auto_examples/document_classification_20newsgroups.html
but couldn't find SVM.
In text mining problems, text is represented by numeric values. Each feature represents a word, and the values are binary: this gives a matrix with lots of zeros and a few 1s, where a 1 means the corresponding word occurs in the text. Words can also be given weights according to their frequency or some other criterion, in which case you get real numbers instead of 0s and 1s.
After converting the dataset to numerical values you can use this example: http://scikit-learn.org/dev/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC
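A minimal sketch (the toy documents are made up) that follows the binary-feature idea above with scikit-learn: vectorize the text into a sparse 0/1 matrix, then fit a linear SVM. Swapping CountVectorizer for TfidfVectorizer gives the frequency-weighted variant.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    train_docs = ["free money offer", "win a prize today",
                  "project status update", "schedule for next sprint"]
    train_labels = ["spam", "spam", "work", "work"]

    # binary=True gives the 0/1 word-occurrence matrix described above.
    clf = make_pipeline(CountVectorizer(binary=True), LinearSVC())
    clf.fit(train_docs, train_labels)

    print(clf.predict(["claim your free prize", "sprint planning notes"]))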
