This is the CSV I'm using: https://gist.github.com/netj/8836201. I'm trying to predict the variety, which is categorical data, with linear regression, but the prediction is very inaccurate. The actual labels are just combinations of 0.0 and 1.0, but the predictions are arbitrary decimals, even negative numbers, which in my opinion is very inaccurate. Where did I make a mistake, and what is the solution for this inaccuracy? This is the assignment my teacher gave me; he said we could predict categorical data with linear regression, not only logistic regression.
import pandas as pd
from sklearn import model_selection
from sklearn.linear_model import LinearRegression
from sklearn import preprocessing

path = r"D:\python projects\iris.csv"
df = pd.read_csv(path)
array = df.values
X = array[:, 0:3]   # note: this keeps only three of the four feature columns
y = array[:, 4]     # the categorical "variety" column

le = preprocessing.LabelEncoder()
ohe = preprocessing.OneHotEncoder()  # the categorical_features argument was removed in newer scikit-learn
y = le.fit_transform(y)
y = y.reshape(-1, 1)
y = ohe.fit_transform(y).toarray()

X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.2, random_state=0)
sc = preprocessing.StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
y_train = sc.fit_transform(y_train)  # note: this refits the feature scaler on the one-hot targets; y_test stays unscaled

model = LinearRegression(n_jobs=-1).fit(X_train, y_train)
y_pred = model.predict(X_test)
df = pd.DataFrame({'Actual': y_test.flatten(), 'Predicted': y_pred.flatten()})
The output:

y_pred
Out[46]:
array([[-0.08676055,  0.43120144,  0.65555911],
       [ 0.11735424,  0.72384335,  0.1588024 ],
       [ 1.17081347, -0.24484483,  0.07403136],
       ...])

X_test
Out[61]:
array([[-0.09544771, -0.58900572,  0.72247648],
       [ 0.14071157, -1.98401928,  0.10361279],
       [-0.44968663,  2.66602591, -1.35915595],
       ...])
Linear regression is used to predict continuous output data. As you correctly said, you are trying to predict categorical (discrete) output data, so essentially you want to be doing classification instead of regression; plain linear regression is not appropriate for this, which is why its raw predictions are unbounded decimals, including negative values, rather than clean 0/1 indicators.
As you also said, logistic regression can and should be used instead, as it is designed for classification tasks.
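For completeness, here is a minimal sketch of the classification approach, assuming the same iris.csv from the gist; LogisticRegression accepts the string labels directly, so the one-hot encoding of y can be dropped entirely:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

df = pd.read_csv(r"D:\python projects\iris.csv")
X = df.iloc[:, 0:4].values   # all four feature columns
y = df["variety"].values     # string class labels are fine for a classifier

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

If the assignment insists on linear regression, you can still recover class predictions from your one-vs-all setup by taking np.argmax(y_pred, axis=1) row by row; the fractional and negative outputs are expected there, and only the argmax is interpreted as the class.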
I'm new to machine learning and wanted to understand how to evaluate the RMSE when the data is scaled.
I used the California housing dataset and trained an SVR on it:
from sklearn.datasets import fetch_california_housing
housing = fetch_california_housing()
X = housing["data"]
y = housing["target"]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
I then scaled the data for the SVR and trained the model:
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
from sklearn.svm import LinearSVR
lin_svr = LinearSVR(random_state=42)
lin_svr.fit(X_train_scaled, y_train)
When I evaluated the RMSE, the result seemed to be scaled, so it didn't make a lot of sense to me:
import numpy as np
from sklearn.metrics import mean_squared_error
y_pred = lin_svr.predict(X_train_scaled)
rmse = np.sqrt(mean_squared_error(y_train, y_pred))
rmse was 0.976993881287582
How do I make sense of the result? (the y column is in tens of thousands of dollars)
I tried unscaling y_pred, but the result did not make sense:
y_pred = lin_svr.predict(X_test_scaled)
mse = mean_squared_error(y_test, y_pred)  # note: nothing is actually unscaled here; this is just the test-set RMSE
np.sqrt(mse)
So the question is: how do I interpret the RMSE when the data is scaled, and is there a correct way to unscale it in order to make sense of it?
Thanks!
You don't scale the target variable here, only the features, so the RMSE is already in the same units as the target. The California housing target is in units of 100,000 dollars, and RMSE measures the typical difference between observed and predicted values, so rmse = 0.976993881287582 means an average error of about 97,699 dollars.
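If you had scaled the target as well, you would need to invert that scaling before the RMSE is interpretable. A minimal sketch, assuming a separate StandardScaler for y and reusing the question's variables:

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

# Scale the target with its own scaler, fit on the training targets only.
y_scaler = StandardScaler()
y_train_scaled = y_scaler.fit_transform(y_train.reshape(-1, 1)).ravel()
lin_svr.fit(X_train_scaled, y_train_scaled)

# Predictions come out in scaled units; invert them back to target units
# before computing the error.
y_pred_scaled = lin_svr.predict(X_test_scaled)
y_pred = y_scaler.inverse_transform(y_pred_scaled.reshape(-1, 1)).ravel()
rmse = np.sqrt(mean_squared_error(y_test, y_pred))  # now in units of $100,000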
I was trying to apply a regressor to predict IMDB ratings. This is what I tried:
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
data = pd.read_csv("D:/Code/imdb_project/movie_metadata.csv")
df = data[["duration","budget", "title_year","imdb_score"]]
df = df.dropna()
feature = np.array(df[["duration","budget","title_year"]])
rating = np.array(df["imdb_score"])
scaler = MinMaxScaler()
scaler.fit(feature)
X = scaler.transform(feature)
y = rating
x_train, x_test, y_train, y_test = train_test_split(X, y, train_size = 0.8, test_size = 0.2, random_state = 5)
regressor = DecisionTreeRegressor(criterion='squared_error')  # 'mse' was renamed to 'squared_error' in newer scikit-learn
regressor.fit(x_train, y_train)
regressor.score(x_test, y_test)
For clarification, my dataset contains 3 features: budget, release year, and duration; y is the IMDB rating.
When applying this regressor to the test data, I always receive a negative R squared (it works just fine with the train data). I understand that R squared can be negative, but I am still wondering if there is a way I can improve it. The only way I know is normalizing the data, which I did before fitting the model.
A negative R^2 score means your model fits the test data very poorly, worse than simply predicting the mean. Since it scores well on the training data but not on the test data, the tree is most likely overfitting rather than too simple; a single unconstrained decision tree will happily memorize the training set.
I would recommend tuning your model's hyperparameters (limiting max_depth or min_samples_leaf, for example) or choosing another model.
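A minimal sketch of that tuning, reusing x_train / y_train from the question; the parameter grid is only an illustration:

from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor

# Cross-validate tree depth and leaf size on the training data
# to rein in overfitting before touching the test set.
param_grid = {
    "max_depth": [2, 3, 5, 8, None],
    "min_samples_leaf": [1, 5, 20, 50],
}
search = GridSearchCV(DecisionTreeRegressor(random_state=5), param_grid, cv=5, scoring="r2")
search.fit(x_train, y_train)

print("best params:", search.best_params_)
print("test R^2:", search.best_estimator_.score(x_test, y_test))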
I have a simple linear regression model and I need to compute the variance and the covariance. How do I calculate variance and covariance using linear regression?
Variance, in the context of Machine Learning, is a type of error that occurs due to a model's sensitivity to small fluctuations in the training set.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
import numpy as np

X = np.array([2, 3, 4, 5]).reshape(-1, 1)  # sklearn expects a 2-D feature array
y = np.array([4, 3, 2, 9])

# train-test split
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

# Train the model using the training set
model = LinearRegression()
model.fit(x_train, y_train)
y_predict = model.predict(x_test)
Try this on the output vector to get the variance and the covariance:
y_variance = np.mean((y_predict - np.mean(y_predict))**2)
y_covariance = np.mean(y_predict - y_test)
Note: covariance here is the mean change of the predictions with respect to their true values.
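For comparison, NumPy also ships the textbook estimators; this aside is mine, not part of the original answer:

import numpy as np

# Population variance of the predictions (ddof=0, NumPy's default).
y_variance = np.var(y_predict)

# Sample covariance between predictions and true values; np.cov returns
# a 2x2 matrix whose off-diagonal entry is the covariance (ddof=1 by default).
y_covariance = np.cov(y_predict, y_test)[0, 1]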
I'm trying to predict stock prices with SVR in Python. Given below is the code that I have used:
import pandas as pd
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

train = pd.read_csv("ntrain1.csv")
X_train = train.drop("Close Now", axis=1)
Y_train = train["Close Now"]

clf = SVR(kernel='rbf', C=100000, gamma=0.2, epsilon=0.1)
clf.fit(X_train, Y_train)

test = pd.read_csv("ntestbri.csv")
X_test = test.drop("Close Now", axis=1)
Y_test = test["Close Now"]

y_prediksi = clf.predict(X_test)
pd.DataFrame({"y_prediksi": y_prediksi}).to_csv("npredksibri3.csv")

rmse = np.sqrt(mean_squared_error(Y_test, y_prediksi))
rmse
The problem is that this code produces the same predicted value, 4436.021668, for every test row, and an RMSE that reflects that constant prediction.
How do I fix this?
@maulita - I'm not sure what the specific question is, but if you are looking to improve your predictor, a best practice is to do a train/test split on the training set. This lets you assess the quality of your predictions and calibrate your predictor before loading the test dataset. Hope that helps - Sandeep
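A minimal sketch of that practice, reusing the file and column names from the question:

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

train = pd.read_csv("ntrain1.csv")
X = train.drop("Close Now", axis=1)
y = train["Close Now"]

# Hold out part of the training file as a validation set, so the
# predictor can be assessed and calibrated before the real test file.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

clf = SVR(kernel='rbf', C=100000, gamma=0.2, epsilon=0.1)
clf.fit(X_tr, y_tr)

val_rmse = np.sqrt(mean_squared_error(y_val, clf.predict(X_val)))
print("validation RMSE:", val_rmse)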
I have been using SVR to predict the values of a time series. My dataset is split into train and test sets, and I use SVR with an RBF kernel to predict the test set. SVR models the training set almost perfectly, but it always predicts roughly the average value on the test set.
I have tried StandardScaler, normalization, and so on, but it always fails.
Here is my code:
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# x and y are the raw time-series values (not shown in the question)
X = np.array(x).reshape(-1, 1)
Y = np.array(y).reshape(-1, 1)

sc_y = StandardScaler()
Y = sc_y.fit_transform(Y)
Y = np.array(Y).ravel()

# Fit regression model; shuffle=False keeps the time order, so the
# test set is the final 40% of the series.
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.4, random_state=0, shuffle=False)

svr_rbf = SVR(kernel='rbf', C=10, gamma=1e-7, epsilon=0.1)
print(X_train.shape)
svr_rbf.fit(X_train, Y_train)

y_rbf = svr_rbf.predict(X_train)   # in-sample predictions
y_rbf1 = svr_rbf.predict(X_test)   # out-of-sample predictions
And here is my result (plot omitted): the fit tracks the training portion closely, but the prediction at the end of the series is a constant value.
Do you know what I should do to make the prediction better?