Train a CNN model with a 2D matrix input and a scalar output - conv-neural-network

I would like to clear up some doubts. I have a labelled dataset consisting of a number of 2D matrices as inputs and one scalar value as the output. I am thinking of applying a convolutional neural network architecture to build a regression model for prediction.
My question is: is it possible to train a CNN model with a 2D matrix as input and a scalar value as the target?
My expected output is also a scalar value.
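Yes, a CNN can be trained as a regressor: end the network with a single linear unit and use a regression loss such as MSE instead of cross-entropy. A minimal Keras sketch (the matrix size 32x32 and the layer sizes are assumptions, not taken from the question):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Assumed input: 2D matrices of shape 32x32, given a channel axis like 1-channel images
model = keras.Sequential([
    layers.Input(shape=(32, 32, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1),                          # single linear unit -> one scalar per sample
])
model.compile(optimizer="adam", loss="mse")   # regression loss, not cross-entropy

# Dummy data: N matrices in, N scalars out
X = np.random.rand(8, 32, 32, 1).astype("float32")
y = np.random.rand(8).astype("float32")
model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X, verbose=0).shape)      # (8, 1): one scalar prediction per matrix
```

The only structural differences from a CNN classifier are the single linear output unit and the MSE loss.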


My confusion matrix shows 16x16 instead of 8x8

from sklearn.metrics import confusion_matrix
import seaborn as sns

cm = confusion_matrix(test_labels, prediction_RF)
print(cm)
sns.heatmap(cm, annot=True)
I'm using a CNN as a feature extractor and then feeding the extracted features into a Random Forest. Previously I used the same procedure on a dummy CNN model, and the output confusion matrix was 8x8 (since I have 8 classes). When I try to view the confusion matrix for the VGG16 model, I get a 16x16 matrix, and I also get 0.0 accuracy on VGG16, even though the predictions themselves look decent. The matrix I get on VGG16 is given below.
[Image: confusion matrix on VGG16]
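A common cause of a doubled confusion matrix with 0.0 accuracy is that the two arguments use different label encodings (e.g. one side one-hot or string labels, the other integer indices), so the metric sees twice as many distinct label values. A sketch of the usual fix, converting both sides to integer class indices with np.argmax before building the matrix (the arrays here are made-up stand-ins for test_labels and the model output):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Assumed setup: 8-class problem, ground truth one-hot encoded, model outputs
# per-class scores (here hard one-hot rows for simplicity)
test_labels = np.eye(8)[[0, 1, 2, 2]]   # one-hot ground truth for 4 samples
probs = np.eye(8)[[0, 1, 2, 3]]         # e.g. softmax/probability output

# Convert BOTH sides to integer class indices before building the matrix
y_true = np.argmax(test_labels, axis=1)
y_pred = np.argmax(probs, axis=1)

cm = confusion_matrix(y_true, y_pred, labels=np.arange(8))
print(cm.shape)                         # (8, 8), as expected for 8 classes
```

Checking `type(test_labels[0])` against `type(prediction_RF[0])` is a quick way to spot a mixed-encoding problem.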

How to do backpropagation only on a select few labels instead of all labels in a multilabel classification?

I am using a pretrained neural network (ResNet) on multiple datasets.
This neural network has in its output all the labels present across all the datasets, that is, the union of all labels.
For example:
If dataset A has labels x, y, z, w
Dataset B has labels m, l, n, o, x, y
Dataset C has labels w, z, m, o
Then the neural network has all the labels in its final layer: l, m, n, o, w, x, y, z.
Now, depending on which dataset I have, I want the model to train only on that dataset's own labels and not do backpropagation on the other labels.
How can this be achieved?
I am working in PyTorch.
Maybe use three loss functions, each with a different value for the pos_weight argument, with zeros corresponding to the classes not included in that dataset.
Why do you care if the network does backpropagation on other labels? That is how it is supposed to work.
If the idea is to reduce the number of output features from what they have in the pretrained network, just remove the last layer of the network and add in your own with the desired output features. Then train as you would.
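Another common pattern for this, sketched below under an assumed label layout (not the asker's actual code): compute the loss with reduction='none' and multiply it by a per-dataset mask before reducing, so excluded labels contribute zero loss and therefore receive zero gradient.

```python
import torch
import torch.nn as nn

# Union of labels across datasets: l, m, n, o, w, x, y, z  (8 outputs)
# Dataset B covers l, m, n, o, x, y -> mask out w (index 4) and z (index 7)
mask_B = torch.tensor([1., 1., 1., 1., 0., 1., 1., 0.])

logits = torch.randn(5, 8, requires_grad=True)   # stand-in for the model's output
targets = torch.randint(0, 2, (5, 8)).float()    # multilabel targets

per_label = nn.BCEWithLogitsLoss(reduction="none")(logits, targets)
loss = (per_label * mask_B).sum() / mask_B.sum() # reduce over unmasked labels only
loss.backward()

# Gradients for the masked-out labels are exactly zero: no backprop through them
print(logits.grad[:, [4, 7]].abs().max().item())  # 0.0
```

Keep one mask per dataset and select the right one for each batch; the shared backbone still trains on every batch, but the excluded output units are untouched.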

Training features with different dimensions

I have two types of features that should be concatenated and trained together:
Images with dimensions (277, 217, 3)
A vector of real numbers with shape (40, 1)
What I did: after training the model on all the images, I extracted the output of the predict() function for all samples, which gives a vector, and then concatenated this vector with the real-number features. But the problem is that when I trained on the predict() values alone, the result was worse than the base model's result.
So I am trying instead to add the extra features after the convolutional layers while the model is training.
Thanks in advance!!
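The usual way to inject extra features after the convolutional layers is the Keras functional API: one branch processes the image, its pooled convolutional features are concatenated with the auxiliary vector, and dense layers sit on top. A minimal sketch (the input shapes follow the question; the layer sizes and the single-output regression head are assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers

image_in = keras.Input(shape=(277, 217, 3))
vector_in = keras.Input(shape=(40,))              # the 40 real-valued features

x = layers.Conv2D(16, 3, activation="relu")(image_in)
x = layers.MaxPooling2D(4)(x)
x = layers.GlobalAveragePooling2D()(x)            # image branch -> flat feature vector

merged = layers.Concatenate()([x, vector_in])     # join CNN features with the vector
out = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(1)(out)                        # adjust the head to your task

model = keras.Model([image_in, vector_in], out)
model.compile(optimizer="adam", loss="mse")
```

Training then takes both inputs at once, e.g. model.fit([images, vectors], targets), so both branches are optimized jointly instead of in two separate stages.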

Initializing the weights of a model from the output of another model in keras for transfer learning

I trained a LeNet architecture on a first dataset. I want to train a VGG architecture on another dataset, initializing the weights of VGG with the weights obtained from LeNet.
All initialization functions in Keras are predefined, and I cannot find how to customize them. For example:
keras.initializers.Zeros()
Any idea how I can set the weights?
https://keras.io/layers/about-keras-layers/
According to the Keras documentation above:
layer.set_weights(weights) sets the weights of the layer from a list of Numpy arrays
layer.get_weights() returns the weights of the layer as a list of Numpy arrays
So, you can do this as follows:
model = Sequential()
model.add(Dense(32))
# ... build the rest of the model's layers ...
# access any nth layer by calling model.layers[n]
model.layers[0].set_weights(your_weights_here)
Of course, you'll need to make sure the weights you set on each layer have the shapes that layer expects.
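Since LeNet and VGG layers will generally not have matching shapes, a cautious pattern is to walk both models layer-by-layer and copy weights only where every array agrees in shape. A sketch with two toy Sequential models standing in for the trained LeNet and the fresh VGG (the architectures here are assumptions for illustration):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build(n_hidden):
    return keras.Sequential([
        layers.Input(shape=(10,)),
        layers.Dense(n_hidden, activation="relu"),
        layers.Dense(1),
    ])

source = build(32)   # stands in for the trained LeNet
target = build(32)   # stands in for the new VGG

# Copy weights only where every array in the layer pair has a matching shape
for src, dst in zip(source.layers, target.layers):
    sw, dw = src.get_weights(), dst.get_weights()
    if len(sw) == len(dw) and all(a.shape == b.shape for a, b in zip(sw, dw)):
        dst.set_weights(sw)
```

Layers whose shapes do not match are simply left with their default initialization, which is usually what you want for a partial transfer.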

Keras LSTM error: Input from layer reshape is incompatible with layer lstm

Using RapidMiner, I want to implement an LSTM to classify patterns in a time series. The input data is a flat table. The first layer in my Keras operator is a core Reshape from exampleset_length x nr_of_attributes to batch x time-steps x features. In the reshape parameter I enter three numbers, because I want a specific number of features and time-steps; the only way to achieve this is to also specify the batch size, so three numbers in total. But when I add an RNN LSTM layer, an error is returned: Input is incompatible with layer lstm: expected ndim=n, found ndim=n+1. What's wrong?
When specifying 'input_shape' for the LSTM layer, you do not include the batch size.
So your 'input_shape' value should be (timesteps, input_dim).
Source: Keras RNN Layer, the parent layer for LSTM
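In plain Keras code the same rule looks like this: Reshape and the LSTM are given (timesteps, features) only, and the batch dimension is left implicit rather than written into the shape (the sizes 10, 4, and 3 here are assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers

timesteps, features, n_classes = 10, 4, 3   # assumed sizes

model = keras.Sequential([
    # One flat row of 40 values -> (timesteps, features); batch size is NOT specified
    layers.Reshape((timesteps, features), input_shape=(timesteps * features,)),
    layers.LSTM(32),                        # expects ndim=3: (batch, timesteps, features)
    layers.Dense(n_classes, activation="softmax"),
])
print(model.output_shape)   # (None, 3)
```

Writing the batch size into the reshape target adds a fourth dimension once Keras prepends the implicit batch axis, which is exactly the ndim=n+1 the error complains about.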
