I'm trying to extract VGG16 features of images as part of a project. However, when extracting the features, I am met with a RuntimeError: mat1 and mat2 shapes cannot be multiplied (512x49 and 25088x4096). The error is triggered at line 69 of vgg.py, at the x = self.classifier(x) instruction.
The simplest bit of code that I have found to reproduce the bug is the following:
import torchvision, torch
feature_extractor = torchvision.models.vgg16()
im_size = 224
a = torch.rand([3, im_size, im_size])
feature_extractor(a)
I don't think that the problem is the shape of the input tensor, since the error is raised pretty late in the forward function of VGG16. I can't think of a way to solve this. Does anybody know what I'm missing?
Not sure why the error appears so late, nor why the documentation doesn't cover it, but the problem is indeed the tensor shape. The model expects an extra leading dimension, before all the others, representing the mini-batch. Therefore, the following code does not throw an error:
import torchvision, torch
feature_extractor = torchvision.models.vgg16()
im_size = 224
a = torch.rand([1, 3, im_size, im_size])
feature_extractor(a)
If you want to apply the model to a single image, use unsqueeze to add the extra leading dimension:
a = torch.rand([3, im_size, im_size])
a = torch.unsqueeze(a, 0)
feature_extractor(a)
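As a side note (not strictly part of the fix above): if the goal is only to extract convolutional features rather than classifier outputs, torchvision's VGG16 exposes a features sub-module that can be called directly, which sidesteps the classifier entirely. A minimal sketch:

a = torch.rand([1, 3, im_size, im_size])
conv_features = feature_extractor.features(a)  # feature maps of shape (1, 512, 7, 7) for a 224x224 input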
I'm trying to build a transformer classification model in PyTorch, and the input is a torch.FloatTensor (float data). But I'm having a hard time dealing with the embedding layer: since the data is a float tensor, it's hard to choose a vocab size. Moreover, even if I pass the input to the embedding layer as long type, during training a CUDA error: device-side assert triggered occurs, which seems to come from the input being out of range. Is there any way to build a transformer classifier that can take float data as input?
I'm not sure the problem is the tensor type; you can try some simple code like:
import torch
from torch.nn import Transformer
model = Transformer().cuda()
src = torch.rand((10, 32, 512)).float().cuda()
tgt = torch.rand((20, 32, 512)).float().cuda()
print(model(src, tgt).shape)
and see that the output shape is correct: torch.Size([20, 32, 512]). Regarding the CUDA device-side assert, please also check for:
- an inconsistency between the number of labels/classes and the number of output units;
- an incorrect input to the loss function.
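If the inputs really are continuous features, one common workaround (a sketch under my own assumptions, not tied to your exact data; the feature size 300 and 5 classes are placeholders) is to drop the embedding lookup and project the float features to the model dimension with a linear layer, then add an encoder and a classification head:

import torch
import torch.nn as nn

class FloatTransformerClassifier(nn.Module):
    def __init__(self, n_features, n_classes, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        # a linear projection replaces the Embedding lookup for float inputs
        self.proj = nn.Linear(n_features, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):  # x: (seq_len, batch, n_features), float
        h = self.encoder(self.proj(x))
        return self.head(h.mean(dim=0))  # pool over the sequence, return logits

model = FloatTransformerClassifier(n_features=300, n_classes=5)
print(model(torch.rand(10, 32, 300)).shape)  # torch.Size([32, 5])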
Apologies in advance.
I am attempting to recreate this CNN (from the Keras Code Examples), with another dataset.
https://keras.io/examples/vision/image_classification_from_scratch/
The dataset I am using is one for retinal scans, and it classifies images on a scale from 0-4, so it's a multi-class image classification problem.
The Keras example used is binary classification (cats vs. dogs), though I would have hoped that wouldn't make much difference (maybe this is a big assumption on my part).
I skipped the 'image augmentation' part of the walkthrough. So, I have not created the
data_augmentation = keras.Sequential(
    [
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
    ]
)
part. So, instead of:
def make_model(input_shape, num_classes):
    inputs = keras.Input(shape=input_shape)

    # Image augmentation block
    x = data_augmentation(inputs)

    # Entry block
    x = layers.Rescaling(1.0 / 255)(x)
    .......
at the beginning of the model, I have:
def make_model(input_shape, num_classes):
    inputs = keras.Input(shape=input_shape)

    # Image augmentation block
    x = keras.Sequential(inputs)

    # Entry block
    x = layers.Rescaling(1.0 / 255)(x)
    .......
However, I keep getting different errors no matter how much I change things around, such as "TypeError: Keras symbolic inputs/outputs do not implement __len__." or "ValueError: Exception encountered when calling layer "rescaling_3" (type Rescaling)".
What am I missing here?
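For what it's worth, since the augmentation block is skipped, my understanding is that the symbolic input can be passed straight into the entry block instead of being wrapped in keras.Sequential (which expects a list of layers, not a tensor). A minimal sketch of that change, assuming the rest of the walkthrough's make_model stays the same:

def make_model(input_shape, num_classes):
    inputs = keras.Input(shape=input_shape)

    # No augmentation block: feed the input straight into the entry block
    x = layers.Rescaling(1.0 / 255)(inputs)
    .......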
I have two sequential models that both do a pretty good job of classifying audio. One uses MFCCs and the other waveforms. I am now trying to combine them into a third functional API model, using one of the later Dense layers from each of the MFCC and waveform models. The example in the Keras FAQ about how to get the output of an intermediate layer is not working for me (https://keras.io/getting-started/faq/#how-can-i-obtain-the-output-of-an-intermediate-layer).
Here is my code:
mfcc_model = load_model(S01_model_local_loc)
waveform_model = load_model(T01_model_local_loc)

mfcc_input = Input(shape=(79, 30, 1))
mfcc_model_as_layer = Model(inputs=mfcc_model.input,
                            outputs=mfcc_model.get_layer(name='dense_9').output)

waveform_input = Input(shape=(40000, 1))
waveform_model_as_layer = Model(inputs=waveform_model.input,
                                outputs=waveform_model.get_layer(name='dense_2').output)

concatenated_1024 = concatenate([mfcc_model_as_layer, waveform_model_as_layer])
model_pred = layers.Dense(2, activation='sigmoid')(concatenated_1024)
uber_model = Model(inputs=[mfcc_input, waveform_input], outputs=model_pred)
This throws the error:
AttributeError: Layer sequential_5 has multiple inbound nodes, hence the notion of "layer input" is ill-defined. Use get_input_at(node_index) instead.
Changing the inputs of the first two Model statements to inputs=mfcc_model.get_input_at(1) and inputs=waveform_model.get_input_at(1) gets rid of that error, but I then get this one:
ValueError: Graph disconnected: cannot obtain value for tensor Tensor("dropout_21_input:0", shape=(?, 79, 30, 1), dtype=float32) at layer "dropout_21_input". The following previous layers were accessed without issue: []
If I remove the .get_layer statements and just take the final output of the model the graph connects nicely.
What do I need to do to just get the output of the Dense layers that I want?
Update: I found a really hacky way of getting what I want. I popped off the layers of the MFCC and waveform models until the output layers were the ones I wanted. Then the code below seems to work. I'd love to know the right way to do this!
mfcc_input = Input(shape=(79, 30, 1))
waveform_input = Input(shape=(40000, 1))

mfcc_model_as_layer = mfcc_model(mfcc_input)
waveform_model_as_layer = waveform_model(waveform_input)

concatenated_1024 = concatenate([mfcc_model_as_layer, waveform_model_as_layer])
model_pred = layers.Dense(2, activation='sigmoid')(concatenated_1024)
test_model = Model(inputs=[mfcc_input, waveform_input], outputs=model_pred)
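For the record, a less hacky sketch (assuming the loaded models each expose a single input, and reusing the layer names 'dense_9' and 'dense_2' from above): build the intermediate sub-models first, then call them on the new Input tensors, so the final graph is built from those inputs and stays connected:

mfcc_input = Input(shape=(79, 30, 1))
waveform_input = Input(shape=(40000, 1))

# sub-models that stop at the desired Dense layers
mfcc_submodel = Model(inputs=mfcc_model.input,
                      outputs=mfcc_model.get_layer('dense_9').output)
waveform_submodel = Model(inputs=waveform_model.input,
                          outputs=waveform_model.get_layer('dense_2').output)

# calling the sub-models on the new inputs keeps the graph connected
mfcc_features = mfcc_submodel(mfcc_input)
waveform_features = waveform_submodel(waveform_input)

concatenated_1024 = concatenate([mfcc_features, waveform_features])
model_pred = layers.Dense(2, activation='sigmoid')(concatenated_1024)
uber_model = Model(inputs=[mfcc_input, waveform_input], outputs=model_pred)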
I would like to know if Keras can be used as an interface to TensorFlow for only doing computation on my GPU.
I tested TF directly on my GPU. But for ML purposes, I started using Keras, including the backend. I would find it 'comfortable' to do all my stuff in Keras instead of using two tools.
This is also a matter of curiosity.
I found some examples like this one:
http://christopher5106.github.io/deep/learning/2018/10/28/understand-batch-matrix-multiplication.html
However, this example does not actually run the calculation, and it does not take any input data.
I duplicate the snippet here:
from keras import backend as K

a = K.ones((3, 4))
b = K.ones((4, 5))
c = K.dot(a, b)
print(c.shape)
I would simply like to know if I can get the actual result numbers from the snippet above, and how?
Thanks,
Michel
Keras doesn't have an eager mode like TensorFlow; it depends on models or functions with "placeholders" to receive and output data.
So, basic calculations like this are a little more complicated than in TensorFlow.
The most user-friendly solution would be creating a dummy model with one Lambda layer. (And be careful with the first dimension: Keras will insist on interpreting it as a batch dimension and will require that input and output have the same batch size.)
def your_function_here(inputs):
    # if you have more than one input tensor, they arrive as a list:
    input1, input2, input3 = inputs

    # if you don't have a batch, you should probably have a first dimension = 1
    # and drop it here:
    input1 = input1[0]

    # do your calculations here

    # if you used the batch_size=1 workaround above, add this dimension back:
    output = K.expand_dims(output, 0)
    return output
Create your model:
inputs = Input(input_shape)
#maybe inputs2 ....
outputs = Lambda(your_function_here)(list_of_inputs)
#maybe outputs2
model = Model(inputs, outputs)
And use it to predict the result:
print(model.predict(input_data))
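Applied to the snippet from the question, a sketch of that dummy-model idea could look like this (using K.batch_dot inside the Lambda, with an explicit batch dimension of 1):

import numpy as np
from keras import backend as K
from keras.layers import Input, Lambda
from keras.models import Model

# the matrix product lives inside a Lambda layer; shapes exclude the batch dimension
a_in = Input(shape=(3, 4))
b_in = Input(shape=(4, 5))
out = Lambda(lambda t: K.batch_dot(t[0], t[1]))([a_in, b_in])
model = Model([a_in, b_in], out)

# feed concrete numbers (batch of 1) and read the result back as a NumPy array
a = np.ones((1, 3, 4))
b = np.ones((1, 4, 5))
print(model.predict([a, b]))  # shape (1, 3, 5), every entry equals 4.0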
I am new to Keras, and I created my own tf-idf sentence embeddings with shape (no_sentences, embedding_dim). I am trying to feed this matrix as input to an LSTM layer. My network looks something like this:
q1_tfidf = Input(name='q1_tfidf', shape=(max_sent, 300))
q2_tfidf = Input(name='q2_tfidf', shape=(max_sent, 300))

q1_tfidf = LSTM(100)(q1_tfidf)
q2_tfidf = LSTM(100)(q2_tfidf)

distance2 = Lambda(preprocessing.exponent_neg_manhattan_distance,
                   output_shape=preprocessing.get_shape)([q1_tfidf, q2_tfidf])
I'm struggling with how the matrix should be shaped. I am getting this error:
ValueError: Error when checking input: expected q1_tfidf to have 3 dimensions, but got array with shape (384348, 300)
I already checked this post: Sentence Embedding Keras, but I still can't figure it out. It seems like I'm missing something obvious.
Any idea how to do this?
OK, as far as I understand, you want to predict the difference between two sentences.
What about reusing the LSTM layer (the language model should be the same), learning a single sentence embedding, and using it twice:
q1_tfidf = Input(name='q1_tfidf', shape=(max_sent, 300))
q2_tfidf = Input(name='q2_tfidf', shape=(max_sent, 300))

lstm = LSTM(100)
lstm_out_q1 = lstm(q1_tfidf)
lstm_out_q2 = lstm(q2_tfidf)

predict = concatenate([lstm_out_q1, lstm_out_q2])
model = Model(inputs=[q1_tfidf, q2_tfidf], outputs=predict)
You could also introduce your custom distance in an additional Lambda layer, but then you would need a different combination step instead of the plain concatenation.
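A sketch of what that could look like, reusing the shared LSTM outputs from above and a hypothetical exponent-of-negative-Manhattan-distance function (modelled on the name in the question's preprocessing module):

from keras import backend as K
from keras.layers import Lambda

def exponent_neg_manhattan_distance(tensors):
    left, right = tensors
    # exp(-L1 distance) per sentence pair, output shape (batch, 1)
    return K.exp(-K.sum(K.abs(left - right), axis=1, keepdims=True))

distance = Lambda(exponent_neg_manhattan_distance)([lstm_out_q1, lstm_out_q2])
model = Model(inputs=[q1_tfidf, q2_tfidf], outputs=distance)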