I have the following problem:
Input: a set of 6 images
Output: for each of the 6 images, the probability that it is the correct one
I know how to create a CNN with Keras, but not how to take multiple images as input.
How would one solve this problem?
One way I can think of is to use a pre-trained model (VGG16, etc.), extract the vectors from some intermediate layer, concatenate the 6 vectors, and feed them into a neural network (or some other classification model), training it as a multi-class classification task.
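A rough Keras sketch of that idea, assuming 224x224 RGB inputs and a small dense head (both are my own choices, not part of the question):

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Frozen VGG16 backbone used as a shared feature extractor (global-average-pooled to a vector).
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")
backbone.trainable = False

# Six image inputs (assumed already preprocessed for VGG16), each run through the same backbone.
inputs = [layers.Input(shape=(224, 224, 3)) for _ in range(6)]
features = [backbone(x) for x in inputs]
merged = layers.Concatenate()(features)

# Small classification head: 6-way softmax, i.e. one probability per candidate image.
hidden = layers.Dense(256, activation="relu")(merged)
output = layers.Dense(6, activation="softmax")(hidden)

model = models.Model(inputs=inputs, outputs=output)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# Train with model.fit(list_of_six_image_batches, one_hot_index_of_correct_image).
```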
You could also use an autoencoder and take an anomaly-detection approach.
Related
I have to pre-train a model for multi-label classification. I'm pre-training with the CIFAR-10 dataset, and I wonder whether I should use 'categorical_crossentropy' (softmax) or 'binary_crossentropy' (sigmoid) for the pre-training, since in the first case I have a multi-class classification problem.
You should use softmax because it gives you the probabilities for every class, no matter how many of them there are. Sigmoid, as you have written, is used with binary_crossentropy and is used in binary classification (hence the 'binary' in the name). I hope it's clearer now.
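To make the pairing concrete, a minimal sketch (the layer sizes are placeholders):

```python
from tensorflow.keras import layers

# Multi-class, single label per image (e.g. CIFAR-10 pre-training):
# softmax output paired with categorical_crossentropy.
multiclass_head = layers.Dense(10, activation="softmax")
# -> model.compile(loss="categorical_crossentropy", optimizer="adam")

# Multi-label, several labels can be active at once:
# sigmoid output paired with binary_crossentropy, one independent probability per label.
multilabel_head = layers.Dense(10, activation="sigmoid")
# -> model.compile(loss="binary_crossentropy", optimizer="adam")
```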
I've implemented a neural network using Keras. Once it has been trained and tested for final test accuracy, using a matrix whose rows contain the features (plus the corresponding labels), I have a model which I should be able to use for prediction.
How can I feed a single unseen example, meaning a feature vector to the model, to obtain a class prediction?
I've looked at their documentation here but could not find a method for it.
What you want is the predict method. It takes a batch of input samples and produces predictions, which are the outputs computed by your network. To feed a single example you can just wrap it in a NumPy array with a leading batch dimension.
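For example (here model stands for your trained Keras model, and the feature values are made up):

```python
import numpy as np

x = np.array([0.2, 1.5, -0.3, 4.1])               # a single feature vector, shape (num_features,)

# predict expects a batch, so add a batch dimension: shape (1, num_features).
probabilities = model.predict(x[np.newaxis, :])   # equivalently np.expand_dims(x, axis=0)
predicted_class = np.argmax(probabilities, axis=-1)[0]
```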
When I read the paper "Convolutional Neural Networks for Sentence Classification" by Yoon Kim (New York University), I noticed that it implements the "CNN-non-static" model: a model with pre-trained vectors from word2vec, where all words (including the unknown ones, which are randomly initialized) and the pre-trained vectors are fine-tuned for each task.
I just do not understand how the pre-trained vectors are fine-tuned for each task. As far as I know, the input vectors, which are converted from strings by the pre-trained word2vec.bin, are just like an image matrix, which cannot change while training the CNN. So, if they can change, how? Please help me out. Thanks a lot in advance!
The word embeddings are weights of the neural network, and can therefore be updated during backpropagation.
E.g., from http://sebastianruder.com/word-embeddings-1/:
Naturally, every feed-forward neural network that takes words from a vocabulary as input and embeds them as vectors into a lower dimensional space, which it then fine-tunes through back-propagation, necessarily yields word embeddings as the weights of the first layer, which is usually referred to as Embedding Layer.
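In Keras terms, a sketch of such a fine-tuned (non-static) embedding layer; the vocabulary size, dimensions, sequence length, and the random matrix standing in for the word2vec vectors are all placeholders:

```python
import numpy as np
from tensorflow.keras import layers, models, initializers

vocab_size, embed_dim, seq_len = 20000, 300, 100             # placeholder sizes
pretrained_matrix = np.random.rand(vocab_size, embed_dim)    # stands in for the word2vec vectors

model = models.Sequential([
    layers.Input(shape=(seq_len,)),                          # sequences of token ids
    # Initialized from the pre-trained vectors; trainable=True lets backpropagation
    # update them, which is the "non-static" fine-tuning described in the paper.
    layers.Embedding(vocab_size, embed_dim,
                     embeddings_initializer=initializers.Constant(pretrained_matrix),
                     trainable=True),
    layers.Conv1D(128, 5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```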
I am trying to implement CNN on text classification task. I understand that CNN can extract and abstract features from pure text.
What if I have some additional, very useful features that are not in the text? How should I add those features to the CNN?
Currently, what I am doing is concatenating the convolution layer results with an additional feature vector and then feeding them to the hidden layers. Is this the right way to do it?
Thanks!
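For what it's worth, a minimal Keras functional-API sketch of that kind of concatenation (all sizes and names here are made up):

```python
from tensorflow.keras import layers, models

vocab_size, embed_dim, seq_len, n_extra = 20000, 128, 100, 10   # placeholder sizes

# Text branch: embedding -> convolution -> global max pooling.
text_in = layers.Input(shape=(seq_len,), name="tokens")
x = layers.Embedding(vocab_size, embed_dim)(text_in)
x = layers.Conv1D(128, 5, activation="relu")(x)
x = layers.GlobalMaxPooling1D()(x)

# The additional hand-crafted features enter here, after the convolutional part.
extra_in = layers.Input(shape=(n_extra,), name="extra_features")
merged = layers.Concatenate()([x, extra_in])

# The hidden layers then see both the learned text features and the extra ones.
h = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(1, activation="sigmoid")(h)

model = models.Model(inputs=[text_in, extra_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy")
```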
I have implemented character recognition using a library
but I still don't get how SVM theory works in the training and prediction process; I only understand that an SVM finds a hyperplane.
E.g., suppose I have a training image as follows
[image from Google: the digit zero]
How do we find a hyperplane for each training example like the one above?
How is the prediction process done?
How can the SVM classify the data based on those hyperplanes?
Thank you very much if you can help me
You can use OpenCV and Python. OpenCV has SVM implemented, and you can use it with a function call.
An SVM is a machine learning model for data classification. We can use an SVM to classify images. The steps are:
you must have a training dataset (a dataset of images whose labels are known)
extract features (color, shape, HOG, SURF, SIFT, etc.) from those images and store them, along with the associated labels
then train the SVM on this data
now you can use the SVM to predict the labels of unknown images
this link will help you
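A minimal end-to-end sketch of those steps with scikit-learn (using the built-in digits dataset, with raw pixels standing in for HOG/SIFT-style features; OpenCV's SVM offers the same workflow):

```python
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

# 1. A training dataset of images whose labels are known (8x8 digit images here).
digits = datasets.load_digits()
X = digits.images.reshape(len(digits.images), -1)   # flattened pixels as simple feature vectors
y = digits.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# 2.-3. Train the SVM on the extracted features and their associated labels.
clf = svm.SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)

# 4. Predict the labels of unseen images.
print(clf.predict(X_test[:5]))
print("accuracy:", clf.score(X_test, y_test))
```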
First, this is not a linearly separable problem, so you have to use a kernel SVM, which projects the data into a higher-dimensional space where it becomes linearly separable. You can use the sklearn library to achieve this.
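A small illustration of why the kernel matters, on a toy dataset that is not linearly separable (the dataset choice and parameters are my own):

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: not separable by any straight line in the original 2-D space.
X, y = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=0)

# A linear SVM struggles, while an RBF-kernel SVM implicitly maps the points into a
# higher-dimensional space where a separating hyperplane exists.
linear_acc = SVC(kernel="linear").fit(X, y).score(X, y)
rbf_acc = SVC(kernel="rbf", gamma="scale").fit(X, y).score(X, y)

print("linear kernel accuracy:", linear_acc)   # noticeably lower
print("RBF kernel accuracy:", rbf_acc)         # close to 1.0
```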