How to resolve bad input shape for Multilabel classification - python-3.x

I am trying to do multilabel classification on TfidfVectorizer-transformed data of shape (218, 1861), with tags of shape (218, 5).
I am getting a
ValueError: bad input shape (218, 5)
I pass my tags through the function pipeline below:
self.q_matrix = tf_idf.fit_transform(question_features)
y = MultiLabelBinarizer().fit_transform(tags)
clf.fit(self.q_matrix, y)
where clf is LinearSVC.

Looks like you are trying to fit 5 classifiers (as y has 5 columns). You can't do that with LinearSVC. As you can see in the documentation of the fit function, your target (y) should have just one column:
y : array-like, shape = [n_samples] Target vector relative to X
So if you would like to fit 5 classifiers (one for each column of y), you'll need to fit them separately, which scikit-learn's OneVsRestClassifier automates; see the sketch below.
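A minimal sketch of that approach, assuming question_features and tags are the raw inputs from the question; OneVsRestClassifier fits one LinearSVC per column of the binarized y:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

tf_idf = TfidfVectorizer()
q_matrix = tf_idf.fit_transform(question_features)  # shape (218, 1861)
y = MultiLabelBinarizer().fit_transform(tags)       # shape (218, 5)

clf = OneVsRestClassifier(LinearSVC())
clf.fit(q_matrix, y)  # accepts the multilabel indicator matrix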

Related

LSTM input shape through json file

I am working on an LSTM and, after pre-processing the data, I get data X as a list of 100 examples, each containing 3 feature lists, each of which is a sequence of 50 points:
X = [list:100 [list:3 [list:50]]]
Y = [list:100]
Since it is a multivariate LSTM, I am not sure how to give all 3 sequences as input to the Keras LSTM. Do I need to convert it to a Pandas DataFrame?
model = models.Sequential()
model.add(layers.Bidirectional(layers.LSTM(units=32,
                               input_shape=(?,?,?))))
You can do the following to convert the lists into NumPy arrays:
X = np.array(X)
Y = np.array(Y)
Calling the following after this conversion:
print(X.shape)
print(Y.shape)
should output: (100, 3, 50) and (100,), respectively. Finally, the input_shape of the LSTM layer can be (None, 50).
LSTM Call arguments Doc:
inputs: A 3D tensor with shape [batch, timesteps, feature].
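Putting this together, a minimal sketch (assuming TensorFlow's Keras; the Dense head and mse loss are placeholders for whatever your actual task needs):

import numpy as np
from tensorflow.keras import layers, models

X = np.array(X)  # (100, 3, 50): (batch, timesteps, features)
Y = np.array(Y)  # (100,)

model = models.Sequential()
model.add(layers.Bidirectional(layers.LSTM(units=32), input_shape=(None, 50)))
model.add(layers.Dense(1))
model.compile(optimizer='adam', loss='mse')
model.fit(X, Y, epochs=10)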
You would have to transform that list into a numpy array to work with Keras.
As per the shape of X you have provided, it should work in theory. However you do have to figure out what the 3 dimensions of your array actually contain.
The 1st dimension is the number of samples, i.e. how many sequences of data you have (Keras splits these into batches during training).
The 2nd dimension is your timestep data.
Ex: words in a sentence, "cat sat on dog" -> 'cat' is timestep 1, 'sat' is timestep 2 and 'on' is timestep 3 and so on.
The 3rd dimension represents the features of each timestep. For the sentence above, each word would be vectorized into a feature vector, as in the illustration below.
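To make those three dimensions concrete, here is a toy illustration (the 50-dimensional word vectors are an assumption):

import numpy as np

words = ["cat", "sat", "on", "dog"]
sentence = np.random.rand(len(words), 50)  # (timesteps, features): one 50-dim vector per word
batch = sentence[np.newaxis, ...]          # (1, 4, 50): (batch, timesteps, features)
print(batch.shape)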

Fit a Gaussian curve with a neural network using Pytorch

Suppose the following model:
import torch.nn as nn

class PGN(nn.Module):
    def __init__(self, input_size):
        super(PGN, self).__init__()
        self.linear = nn.Sequential(
            nn.Linear(in_features=input_size, out_features=128),
            nn.ReLU(),
            nn.Linear(in_features=128, out_features=1)
        )

    def forward(self, x):
        return self.linear(x)
I figure I have to modify the model to fit a 2-dimensional curve.
Is there a way to fit a Gaussian curve with mu=0 and sigma=0 using Pytorch? If so, can you show me?
A neural network can approximate an arbitrary function of any number of parameters to a space of any dimension.
To fit a 2-dimensional curve, your network should be fed vectors of size 2, that is, the x and y coordinates. The output is a single value of size 1.
For training you must generate ground truth data, that is a mapping between coordinates (x and y) and the value (z). The loss function should compare this ground truth value with the estimate of your network.
If it is just a tutorial to learn PyTorch and not a real application, you can define a function that, for a given x and y, outputs the Gaussian value according to your parameters.
Then, during training, you randomly choose an x and y, feed them to the network, and backpropagate against the true value, as in the sketch below.
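A minimal training sketch along those lines, reusing the PGN model above with input_size=2; the target Gaussian (mu = 0, sigma = 1) and the sampling range are assumptions:

import torch
import torch.nn as nn

def gaussian_2d(xy, sigma=1.0):
    # value of a centered 2D Gaussian at each (x, y) point
    return torch.exp(-(xy ** 2).sum(dim=1, keepdim=True) / (2 * sigma ** 2))

model = PGN(input_size=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

for step in range(5000):
    xy = torch.rand(64, 2) * 10 - 5  # random (x, y) points in [-5, 5]^2
    z_true = gaussian_2d(xy)         # ground-truth values
    loss = criterion(model(xy), z_true)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()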
For a function y = a*exp(-((x-b)^2)/(2*c^2)),
Evaluate this equation for some values of x (and fixed a, b, c) to get the outputs y. This will be your training set, with x values as inputs and y values as output labels. Since this is not a linear equation, you will have to experiment with the number of layers/neurons and other hyperparameters, but it will give you a good enough approximation. For different values of a, b, c, generate the corresponding data, and perhaps try adding them as inputs alongside x.
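For example, generating such a training set with NumPy (the parameter values and sample count are assumptions):

import numpy as np

a, b, c = 1.0, 0.0, 1.0  # assumed parameters
x = np.random.uniform(-5, 5, size=(1000, 1)).astype('float32')
y = a * np.exp(-((x - b) ** 2) / (2 * c ** 2))  # training labels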

Error while fitting the data in KNN in python

This is my code
clf = KNN(n_neighbors=1)
clf.fit(train_x, train_y)
train_predict = clf.predict(train_x)
k = f1_score(train_predict, train_y)
print("Training F1 Score:", k)
test_predict = clf.predict(test_x)
k = f1_score(test_predict, test_y)
print("Test F1 score:", k)
I am getting the error
Found input variables with inconsistent numbers of samples: [668, 223]
The shape of the data is
train_x=(668, 24),
train_y=(223,24)
Please help me. Thanks in advance.
If you look at the documentation of the fit function:
Fit the model using X as training data and y as target values
Parameters:
X : {array-like, sparse matrix, BallTree, KDTree}
Training data. If array or matrix, shape [n_samples, n_features], or [n_samples, n_samples] if metric='precomputed'.
y : {array-like, sparse matrix}
Target values of shape = [n_samples] or [n_samples, n_outputs]
It is clear that the shapes of train_x and train_y don't match the requirements of the fit function. I am guessing that the dimensions of train_x and train_y are flipped in your case.
Therefore try:
clf.fit(train_x.T, train_y.T)
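As a quick sanity check before fitting (assuming NumPy arrays):

print(train_x.shape, train_y.shape)  # first dimensions must agree
assert train_x.shape[0] == train_y.shape[0], "X and y need the same number of samples"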

Keras custom loss function that depends on the input features

I have a multilabel classification problem with K labels, and also a function, call it f, that for each example in the dataset takes in two matrices, H and P. Both matrices are part of the input data.
For each example's vector of labels y, which has dimension (K × 1), I compute a scalar value f_out = f(H, P, y).
I want to define a loss function that minimizes the mean absolute percentage error between the two vectors formed by the values f_out_true = f(H, P, y_true) and f_out_pred = f(H, P, y_pred) for all examples.
From the Keras documentation, I know that a custom loss function has the form customLoss(y_true, y_pred); however, the loss function I want to define depends on the input data, and the values f_out_true and f_out_pred need to be computed example by example to form the two vectors between which I want to minimize the mean absolute percentage error.
As far as I am aware, there is no way to make a loss function that takes anything other than the model output and the corresponding ground truth. So, the only way to do what you want is to make the input part of your model's output. To do this, simply build your model with the functional API, and then add the input tensor to the list of outputs:
input = Input(input_shape)
# build the rest of your model with the standard functional API here
# this example model was taken from the Keras docs
x = Dense(100, activation='relu')(input)
x = Dense(100, activation='relu')(x)
x = Dense(100, activation='relu')(x)
output = Dense(10, activation='softmax')(x)
model = Model(inputs=[input], outputs=[output, input])
Then, make y_true a combination of your input data and the original ground truth.
I don't have a whole lot of experience with the functional API so it's hard to be more specific, but hopefully this points you in the right direction.
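A minimal sketch of that idea, using a Concatenate layer to expose the input to the loss rather than a second output; the sizes and the placeholder f below are assumptions standing in for the real f(H, P, y):

import tensorflow as tf
from tensorflow.keras.layers import Concatenate, Dense, Input
from tensorflow.keras.models import Model

K = 10        # number of labels (assumption)
n_feats = 64  # input feature size (assumption)

inp = Input((n_feats,))
x = Dense(100, activation='relu')(inp)
out = Dense(K, activation='sigmoid')(x)
combined = Concatenate()([out, inp])  # predictions and input features, side by side
model = Model(inputs=inp, outputs=combined)

def f(feats, y):
    # placeholder for the real f(H, P, y); returns one scalar per example
    return tf.reduce_sum(y, axis=-1) * tf.reduce_mean(feats, axis=-1)

def custom_loss(y_true, y_pred):
    y_hat, feats = y_pred[:, :K], y_pred[:, K:]
    f_true = f(feats, y_true[:, :K])
    f_pred = f(feats, y_hat)
    # mean absolute percentage error between f_out_true and f_out_pred
    return 100.0 * tf.reduce_mean(tf.abs((f_true - f_pred) / (f_true + 1e-7)))

model.compile(optimizer='adam', loss=custom_loss)
# when calling fit, pad the labels with zeros so y_true matches y_pred's width:
# y_padded = np.concatenate([y, np.zeros((len(y), n_feats))], axis=1)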

Linear regression with pytorch

I tried to run linear regression on the ForestFires dataset.
Dataset is available on Kaggle and gist of my attempt is here:
https://gist.github.com/Chandrak1907/747b1a6045bb64898d5f9140f4cf9a37
I am facing two problems:
Output from prediction is of shape 32x1 and target data shape is 32.
input and target shapes do not match: input [32 x 1], target [32]
Using view, I reshaped the predictions tensor:
y_pred = y_pred.view(inputs.shape[0])
Why is there a mismatch between the shapes of the predicted tensor and the actual tensor?
SGD in PyTorch never converges. I tried to compute the MSE manually using
print(torch.mean((y_pred - labels)**2))
This value does not match
loss = criterion(y_pred,labels)
Can someone highlight where the mistake in my code is?
Thank you.
Problem 1
This is the reference for MSELoss from the PyTorch docs: https://pytorch.org/docs/stable/nn.html#torch.nn.MSELoss
Shape:
- Input: (N, *) where * means any number of additional dimensions
- Target: (N, *), same shape as the input
So, you need to expand the dims of labels from (32,) to (32, 1) by using torch.unsqueeze(labels, 1) or labels.view(-1, 1).
https://pytorch.org/docs/stable/torch.html#torch.unsqueeze
torch.unsqueeze(input, dim, out=None) → Tensor
Returns a new tensor with a dimension of size one inserted at the specified position.
The returned tensor shares the same underlying data with this tensor.
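For instance:

import torch

labels = torch.randn(32)             # shape: (32,)
labels = torch.unsqueeze(labels, 1)  # shape: (32, 1); labels.view(-1, 1) is equivalent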
Problem 2
After reviewing your code, I realized that you have passed the size_average param to MSELoss:
criterion = torch.nn.MSELoss(size_average=False)
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
That's why the two computed values don't match. Here is sample code:
import torch
import torch.nn as nn
loss1 = nn.MSELoss()
loss2 = nn.MSELoss(size_average=False)
inputs = torch.randn(32, 1, requires_grad=True)
targets = torch.randn(32, 1)
output1 = loss1(inputs, targets)
output2 = loss2(inputs, targets)
output3 = torch.mean((inputs - targets) ** 2)
print(output1) # tensor(1.0907)
print(output2) # tensor(34.9021)
print(output3) # tensor(1.0907)
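Since size_average is deprecated, the same summed behaviour is requested with the reduction argument in current PyTorch; dividing the summed loss by the number of elements (32 here) recovers the mean:

loss2 = nn.MSELoss(reduction='sum')  # modern equivalent of size_average=False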
