ImageNet test dataset for Keras Applications models

I have a model pretrained on ImageNet like this:
from keras.applications import resnet50
model = resnet50.ResNet50(weights='imagenet')
Is there any way to get the ImageNet test dataset and its labels (that is, data not used for training the above model)?

Original
Sadly, ImageNet (from ILSVRC2012) never released the labels for the 100k images in the test dataset. You can instead use the 50k images from the validation dataset, whose labels can be found in the Development Kit (tasks 1 and 2).
You can download both the Validation images and the validation labels from http://www.image-net.org/challenges/LSVRC/2012/downloads
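As a sketch of how the devkit labels line up with the Keras class indices (the paths are hypothetical, and the scipy struct access may need tweaking for your scipy version): the ground-truth file lists one ILSVRC2012_ID per validation image, meta.mat maps those IDs to WNIDs, and Keras Applications order the 1000 classes by WNID sorted alphabetically.
import numpy as np
import scipy.io
# meta.mat and the ground-truth file ship with the ILSVRC2012 Development Kit;
# the directory name below is hypothetical.
meta = scipy.io.loadmat('ILSVRC2012_devkit_t12/data/meta.mat', squeeze_me=True)
synsets = meta['synsets']
# The first 1000 entries are the leaf classes with ILSVRC2012_ID 1..1000.
id_to_wnid = {int(s['ILSVRC2012_ID']): str(s['WNID']) for s in synsets[:1000]}
# One integer label (1..1000) per validation image, in file-name order.
gt = np.loadtxt('ILSVRC2012_devkit_t12/data/ILSVRC2012_validation_ground_truth.txt', dtype=int)
# Keras class index = rank of the WNID in alphabetical order (index 0 is n01440764).
wnid_to_keras = {w: i for i, w in enumerate(sorted(id_to_wnid.values()))}
y_val = np.array([wnid_to_keras[id_to_wnid[i]] for i in gt])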
Updated!
There is an ImageNet test set with 100k images available here, and although you cannot access the labels, you can instead predict the labels for all the images and submit your predictions for evaluation (top-1, top-5) here.
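A minimal sketch of that workflow with the model from the question (the test_images directory name is hypothetical):
import os
import numpy as np
from keras.applications import resnet50
from keras.preprocessing import image
model = resnet50.ResNet50(weights='imagenet')
for fname in sorted(os.listdir('test_images')):
    img = image.load_img(os.path.join('test_images', fname), target_size=(224, 224))
    x = resnet50.preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    top5 = np.argsort(model.predict(x)[0])[::-1][:5]  # class indices for the submission file
    print(fname, top5)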

Alternatively, you can load the model with weights=None and download the ImageNet dataset yourself. You can then split the dataset into train and test sets, train your model on the train set, and use the test set for your purpose.
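A minimal sketch of this approach (x_train/y_train and x_test/y_test are hypothetical placeholders for your own split; for the full ImageNet you would stream batches from disk rather than hold arrays in memory):
from keras.applications import resnet50
# weights=None gives a randomly initialized ResNet50 with no pretrained weights
model = resnet50.ResNet50(weights=None, classes=1000)
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))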

Related

Image Classification model overfit

I have an image classification dataset with 1020 training images, 1020 validation images, and a test set of more than 4000 images. I built a custom model to classify them. I use augmentation and dropout to reduce overfitting, but this is not working; I have also tried different regularization methods, but it still overfits. The model has almost 440 layers with several skip connections. What would be the suggestion in this case?

How to get the ImageNet dataset on which the PyTorch models are trained

Can anyone please tell me how to download the complete ImageNet dataset on which the PyTorch torchvision models are trained and their Top-1 error is reported?
I have downloaded Tiny-ImageNet from the ImageNet website and used the pretrained ResNet-101 model, which gives only 18% Top-1 accuracy.
Download the ImageNet dataset from http://www.image-net.org/ (you have to sign in)
Then, you should move validation images to labeled subfolders, which could be done automatically using the following shell script:
https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh
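Once the validation images are in per-class subfolders, a minimal sketch of checking a pretrained model's Top-1 accuracy could look like this (the imagenet/val path is hypothetical; ImageFolder's alphabetical class ordering matches the WNID folders that valprep.sh creates):
import torch
from torchvision import datasets, models, transforms
tfm = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
val = datasets.ImageFolder('imagenet/val', transform=tfm)
loader = torch.utils.data.DataLoader(val, batch_size=64)
model = models.resnet101(pretrained=True).eval()
correct = total = 0
with torch.no_grad():
    for x, y in loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
print('Top-1 accuracy:', correct / total)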

Is there a way to create and train a model without transfer learning using tensorflow object-detection api?

I'm using faster_rcnn_resnet50 to train a model that will detect corrosion in images, and I want to train the model from scratch instead of using transfer learning.
I don't know if this is right, but my reasoning is that the already existing weights (which are trained on COCO) will affect my model trained on corrosion images.
One way I would like to do this is to randomize or unfreeze the weights of the feature extractor of the ResNet50 and then train the model on my images, but there is no function or option in the resnet50 config file to randomize or unfreeze the weights.
I've made a new label map with a single label and tried it with transfer learning. It's working, but I would like to have a model that is trained just on my images, where the previous weights don't affect my predictions.
This is the first time I'm working with object detection and transfer learning. Will the weights of the model pre-trained on COCO affect my model trained on custom images of corrosion? How do you use the TensorFlow Object Detection API without transfer learning?
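As a hedged sketch (not confirmed by this thread): with the TF1 Object Detection API, training from scratch is usually done by leaving the fine-tune checkpoint out of the pipeline config, rather than through any ResNet50-specific unfreeze option; the checkpoint path shown is hypothetical:
train_config {
  batch_size: 1
  num_steps: 200000
  # Delete or comment out these two lines to start from randomly
  # initialized weights instead of the COCO checkpoint:
  # fine_tune_checkpoint: "faster_rcnn_resnet50_coco/model.ckpt"
  # from_detection_checkpoint: true
}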

LSTM model weights to train data for text classification

I built an LSTM model for text classification using Keras. Now I have new data to train on. Instead of appending it to the original data and retraining the model, I thought of continuing training from the existing model weights, i.e. letting the weights be updated with the new data.
However, irrespective of how much data I train on, the model does not predict the correct classification (even if I give the same sentence for prediction). What could be the reason?
Kindly help me.
Are you using the following to save the trained model?
model.save('model.h5')
model.save_weights('model_weights.h5')
And the following to load it?
from keras.models import load_model
model = load_model('model.h5') # Load the architecture together with the weights
model.load_weights('model_weights.h5') # Optional: load_model has already restored the weights
# train on new data
model.compile(...) # same compile arguments as before
model.fit(...) # fit on the new data
The model loaded this way is exactly the same as the model that was saved. If you are doing this, then there must be something different in the new data (in comparison with what the model was trained on).

VGG16 trained on grayscale imagenet

I have found the VGG16 network pre-trained on the (color) imagenet database (as .npy). Is there a VGG16 network pre-trained on a gray-scale version of the imagenet database available?
(The usual 'tricks' for using the 3-channel filters of the conv1.1 layer on the gray 1-channel input are not enough for me. I am looking at incremental improvements of the network performance, so I need to see how the transfer learning behaves when the pre-trained model was 'looking' at gray-scale input).
Thanks!
Yes, there's this one:
https://github.com/DaveRichmond-/grayscale-imagenet
It's a greyscale-ImageNet-trained model, along with a version of it that's fine-tuned on X-rays. They showed that the ImageNet performance barely drops, by the way.
@GrimSqueaker gave you the code from this paper: https://openaccess.thecvf.com/content_eccv_2018_workshops/w33/html/Xie_Pre-training_on_Grayscale_ImageNet_Improves_Medical_Image_Classification_ECCVW_2018_paper.html
However, the model trained in it is Inception v3, not VGG16.
You have two options:
Use a colour pre-trained VGG16 model and duplicate the grayscale channel across the three input channels (see the sketch below)
Train your own VGG16 model on a grayscaled version of ImageNet.
You may find this link useful:
https://github.com/zzangho/VGG16_grayscale
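A minimal sketch of the first option, feeding a grey image to the colour-pretrained network by repeating the channel (the file name is hypothetical, and color_mode='grayscale' assumes Keras >= 2.2):
import numpy as np
from keras.applications import vgg16
from keras.preprocessing import image
model = vgg16.VGG16(weights='imagenet')
img = image.load_img('xray.png', color_mode='grayscale', target_size=(224, 224))
x = image.img_to_array(img)   # shape (224, 224, 1)
x = np.repeat(x, 3, axis=-1)  # duplicate the grey channel -> (224, 224, 3)
x = vgg16.preprocess_input(np.expand_dims(x, axis=0))
print(vgg16.decode_predictions(model.predict(x), top=5))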
