I'm new to the machine learning domain and I have some doubts about linear regression.
1: While practicing with the sklearn linear regression model's predict method, I get the error below.
Code:
sklearn.linear_model.LinearRegression.predict(25)
Error:
"ValueError: Expected 2D array, got scalar array instead: array=25. Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample."
Do I need to pass a 2-D array? I checked the sklearn documentation page but haven't found anything about a version change.
(Running my code on Kaggle: https://www.kaggle.com/aman9d/bikesharingdemand-upx/)
2: Is the index of the dataset going to affect the model's score (weights)?
First of all, you should post your code as you actually use it:
# import, instantiate, fit
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
linreg.fit(X, y)
# use the predict method
linreg.predict(25)
What you posted in the question is not executable as written: predict is not a static method of the LinearRegression class, so it must be called on a fitted instance.
When you fit a model, the first step is to recognize what kind of data the input will be; in your case it will be shaped like X. That means that if you pass the model something with a shape different from X's, it will raise an error.
In your example, X seems to be a pd.DataFrame instance with only one column. This is interchangeable with a 2-dimensional array of shape (number of samples, number of features), so if you try:
linreg.predict([[25]])
should work.
For example, if you were running a regression with more than one feature (aka column), let's say temp and humidity, your input would look like this:
linreg.predict([[25, 56]])
I hope this helps; always keep in mind the shape of your data.
Documentation: LinearRegression fit
X : array-like or sparse matrix, shape (n_samples, n_features)
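For completeness, here is a minimal end-to-end sketch of that workflow; the data is made up, and the single feature is reshaped with reshape(-1, 1) exactly as the error message suggests:
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up single-feature data: temperature vs. bike rentals.
X = np.array([10, 15, 20, 25, 30]).reshape(-1, 1)  # shape (5, 1)
y = np.array([100, 150, 200, 250, 300])

linreg = LinearRegression()
linreg.fit(X, y)

# predict expects shape (n_samples, n_features), so wrap the scalar.
print(linreg.predict([[25]]))        # one sample, one feature
print(linreg.predict([[25], [30]]))  # two samples at once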
Related
I have a Gaussian mixture model defined in Pytorch as below:
mix = D.Categorical(torch.rand(2,5,1))
comp = D.Normal(torch.randn(1), torch.rand(1))
#print(mix.logits.shape,comp.batch_shape)
gmm2 = D.MixtureSameFamily(mix, comp)#multiv
What I want to achieve is to have a 5-component mixture of 2-component mixtures of univariate Gaussians. I need the Gaussians to be univariate, therefore I don't want to define them as bivariate. Essentially, I want to define a multivariate mixture of univariate Gaussians instead of a mixture of multivariate Gaussians.
Running the above code, the mixture model is constructed successfully, but when I want to sample from it I get the following error:
RuntimeError: Index tensor must have the same number of dimensions as input tensor
The error happens on this line in the mixture_same_family class: samples = torch.gather(comp_samples, gather_dim, mix_sample_r). I understand the problem with the gather function, but I am out of ideas on how to fix it. I'd appreciate any feedback on this.
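For reference, here is a minimal sketch (with made-up shapes) of how MixtureSameFamily expects batch shapes to line up: the mixture distribution's batch shape must equal the component distribution's batch shape without its last (component) dimension. It does not build the nested mixture-of-mixtures asked about above, only the shape contract that gather relies on:
import torch
import torch.distributions as D

# A batch of 2 mixtures, each with 5 univariate Gaussian components.
mix = D.Categorical(torch.rand(2, 5))                  # batch shape (2,), 5 components
comp = D.Normal(torch.randn(2, 5), torch.rand(2, 5))   # batch shape (2, 5)
gmm = D.MixtureSameFamily(mix, comp)

print(gmm.sample((3,)).shape)  # torch.Size([3, 2])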
I am working with a kNN model that I have built, and I would like to export it as an .mlmodel file. I have done so already, but it could use some work in terms of efficiency.
I have Python 3.6, sklearn 0.19.2, and the latest version of coremltools.
Initially, I trained my model with x_train and x_test as arrays of float64 and y_train and y_test as arrays of int8. The y values are either 0 or 1. Using:
import coremltools
coreml_model = coremltools.converters.sklearn.convert(model)
I get this error:
ValueError: Class labels must be all of type int or all of type string.
Fine. I changed the y values to int32 and it works. But the reason I wanted int8 was memory usage in my app. Any reason why int8 won't work?
The other issue is with the output. Currently, my labels are 0 or 1. However, is there a way to have the model output the strings go or stop instead of 1 or 0? It seems from the documentation that the input can be a dict, but not the outputs. Ideally, something like this would be great for the output, but I cannot get it to work: labels = {"stop": 0, "go": 1}
CoreML currently does not support int8 as an input data type.
If you want the model to predict strings, you should use labels that are strings. That said, it's possible to change the model so that it outputs strings instead of numbers.
You will have to edit the mlmodel file, grab the "spec" object, and fill in the stringClassLabels field of the spec.kNearestNeighborsClassifier.
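A rough sketch of that edit; stringClassLabels is the field named above, but the file path, label order, and surrounding spec handling here are assumptions to check against your own converted model:
import coremltools

# Load the converted model and grab its spec (a protobuf object).
model = coremltools.models.MLModel('knn.mlmodel')  # hypothetical path
spec = model.get_spec()

# Fill in string labels (assumed mapping: 0 -> 'stop', 1 -> 'go');
# this replaces the int64 labels, since the two fields are alternatives.
spec.kNearestNeighborsClassifier.stringClassLabels.vector.extend(['stop', 'go'])
# Note: the model's output feature type may also need updating to string.

# Rebuild and save the edited model.
coremltools.models.MLModel(spec).save('knn_strings.mlmodel')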
I have a question regarding the implementation of a custom loss function for my neural network.
I am currently trying to segment cells for a project, and I decided to use a unet as it seems to work quite well. In order to improve my current model, I decided to follow the idea of the original unet paper (https://arxiv.org/abs/1505.04597), where they implemented a weight map that assigns more weight to pixels located between tightly associated cells, as you can see in this picture: Example of a weight map.
I am currently using Keras for my unet, and my problem is that I do not know how to give the weights to my model without creating problems. My idea was to create a generator yielding the images together with a 2-channel array containing the labels in the first channel and the weights in the second channel; that way I can easily extract my weights and labels in my custom loss function.
My code looks like this:
def train_generator(image_generator, label_generator, weight_generator):
    for (img, label, weight) in zip(image_generator, label_generator, weight_generator):
        img, label = adjustData(img, True, label)
        # Stack labels and weights along the channel axis.
        label_weights = np.concatenate((label, weight), axis=3)
        # This is the final generator output
        yield (img, label_weights)
As you can see, I build train_generator from three previously constructed generators, adjust some things, and then yield my images together with the combined labels and weights.
Then, when I try to fit my model with fit_generator, I get this error: ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays.
I really do not know how to implement what I want correctly.
Thank you in advance for your answers.
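For what it's worth, here is a minimal sketch of a custom loss that splits the stacked label/weight tensor back apart, assuming binary segmentation, tf.keras (adjust the import for standalone Keras), and the channel layout from the generator above (channel 0 = labels, channel 1 = weights):
from tensorflow.keras import backend as K

def weighted_bce(y_true_with_weights, y_pred):
    # Channel layout matches np.concatenate((label, weight), axis=3).
    y_true = y_true_with_weights[..., 0:1]
    weights = y_true_with_weights[..., 1:2]
    bce = K.binary_crossentropy(y_true, y_pred)
    return K.mean(bce * weights)

# model.compile(optimizer='adam', loss=weighted_bce)
Note that the model's output still has a single channel; only y_true carries the extra weight channel, which is why the loss slices it apart before comparing.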
I was planning to use sklearn linear_model to plot a graph of the linear regression result, and statsmodels.api to get a detailed summary of the learning result. However, the two packages produce very different results on the same input.
For example, the constant term from sklearn is 7.8e-14, but the constant term from statsmodels is 48.6. (I added a column of 1's in x for the constant term when using both methods.) My code for both methods is succinct:
import statsmodels.api as sm
from sklearn import linear_model

# Use statsmodels linear regression to get a result (summary) for the model.
def reg_statsmodels(y, x):
    results = sm.OLS(y, x).fit()
    return results

# Use sklearn linear regression to compute the coefficients for the prediction.
def reg_sklearn(y, x):
    lr = linear_model.LinearRegression()
    lr.fit(x, y)
    return lr.coef_
The input is too complicated to post here. Is it possible that a singular input x caused this problem?
Judging by a 3-D plot made using PCA, the sklearn result does not seem to be a good approximation. What are some explanations? I still want to make a visualization, so it would be very helpful to fix the issue with the sklearn linear regression fit.
You say that
I added a column of 1's in x for constant term when using both methods
But the documentation of LinearRegression says:
LinearRegression(fit_intercept=True, [...])
It fits an intercept by default. With fit_intercept=True, sklearn handles the intercept itself (exposed as lr.intercept_), so the coefficient on your manually added column of 1's ends up near zero (your 7.8e-14), while statsmodels puts the full constant (48.6) on that column. This could explain the difference in the constant term.
Now for the other coefficients, differences can occur when two of the variables are highly correlated. Consider the most extreme case, where two of your columns are identical: reducing the coefficient in front of either one can be compensated by increasing the other, so the solution is not unique. This is the first thing I'd check.
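A quick sketch (with made-up data) of how to make the two agree: pass fit_intercept=False to sklearn when x already contains a column of 1's:
import numpy as np
import statsmodels.api as sm
from sklearn import linear_model

rng = np.random.default_rng(0)
x = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])  # constant column included
y = 48.6 + 2.0 * x[:, 1] - 0.5 * x[:, 2] + rng.normal(scale=0.1, size=50)

print(sm.OLS(y, x).fit().params)  # statsmodels coefficients

lr = linear_model.LinearRegression(fit_intercept=False)  # don't fit a second intercept
lr.fit(x, y)
print(lr.coef_)  # should now match statsmodels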
A silly question: after I train my SVM in scikit-learn, do I use the predict function, predict(X), to predict which class a sample belongs to? (http://scikit-learn.org/dev/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC.predict)
Is the X parameter the image feature vector?
If I give it an image it was not trained on (not trained because the SVM asks for at least 3 samples per class), what does it return?
First remark: "predict() returns image similarities with SVM in scikit learn" is not a question. Please put a question in the header of Stack Overflow entries.
Second remark: the predict method of the SVC class in sklearn does not return "image similarities" but a class assignment prediction. Read the http://scikit-learn.org documentation and tutorials to understand what we mean by classification and prediction in machine learning.
Is the X parameter the image feature vector?
No, X is not "the image feature vector": it is a set of image feature vectors with shape (n_samples, n_features), as explained in the documentation you refer to. In your case a sample is an image, hence the expected shape would be (n_images, n_features). The predict API was designed to compute many predictions at once for efficiency reasons. If you want to compute a single prediction, you will have to wrap your single feature vector in an array of shape (1, n_features).
For instance if you have a single feature vector (1D) called my_single_image_features with shape (n_features,) you can call predict with:
predictions = clf.predict([my_single_image_features])
my_single_prediction = predictions[0]
Please note the [] signs around the my_single_image_features variable to turn it into a 2D array.
my_single_prediction will be an integer whose meaning depends on the integer values provided by you when calling the clf.fit(X_train, y_train) method in the first place.
If I give it an image it was not trained on (not trained because the SVM asks for at least 3 samples per class), what does it return?
An image is not "trained". Only the model is trained. Of course you can pass samples / images that are not part of the training set to the predict method. This is the whole purpose of machine learning: making predictions on new, unseen data based on the statistical regularities learned from past training data.
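A minimal sketch (with made-up features) tying this together; the model is trained once, then predicts on a feature vector it has never seen:
import numpy as np
from sklearn.svm import SVC

# Made-up training data: 3 samples per class, 4 features each.
X_train = np.random.rand(6, 4)
y_train = np.array([0, 0, 0, 1, 1, 1])

clf = SVC()
clf.fit(X_train, y_train)

# An unseen feature vector, wrapped into shape (1, n_features).
new_features = np.random.rand(4)
prediction = clf.predict([new_features])[0]  # 0 or 1, per the labels in y_train
print(prediction)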