Using 1-channel images with Xception - python-3.x

I am trying to run an image classification model on radiographic images, which have only 1 channel. I was interested in using transfer learning to evaluate how models pre-trained on the ImageNet dataset perform on radiographic images.
However, when I try to use the keras.applications.Xception() class, it states that it only allows 3-channel images.
I looked for other models, but it seems like all the pre-trained models available in Keras require 3-channel images.
Do you have any suggestions on how to use these models?
Thanks in advance.
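One common workaround, sketched below as a minimal illustration (not from the original thread): replicate the grayscale channel three times so the input matches the (height, width, 3) shape the ImageNet weights expect. The array shapes and random data are placeholders for your own images.

import numpy as np
import tensorflow as tf

# Hypothetical batch of 1-channel radiographs, shape (N, 299, 299, 1)
gray_batch = np.random.rand(4, 299, 299, 1).astype("float32")

# Repeat the single channel three times -> shape (N, 299, 299, 3)
rgb_batch = np.repeat(gray_batch, 3, axis=-1)

# Xception now accepts the input and the ImageNet weights load normally
model = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                       input_shape=(299, 299, 3))
features = model.predict(rgb_batch)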

Related

How can I configure data in different ways with Keras

I am a beginner learning deep learning with Keras.
The ImageDataGenerator class in Keras and its flow_from_directory method make it easy to label images.
But as I configure the directories, there seems to be a limitation in how the dataset can be organized.
For example, if I want to train an ensemble by dividing the data into 5 folds,
do I need to create a directory for each fold?
Or, to bootstrap the data and use the out-of-bag samples as validation data, do I have to implement that myself?
I'd appreciate it if you could give me a little advice!
As a reference, I uploaded a screenshot of the labeling process with the flow_from_directory method (an 88-class classification model).
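A hedged sketch of one way around the per-fold directory problem (my own addition, not from the thread): list the file paths and labels in a DataFrame and let scikit-learn's KFold define the splits, so no extra directories are needed. The paths and labels below are placeholders; "filename" and "class" are the default column names flow_from_dataframe expects.

import pandas as pd
from sklearn.model_selection import KFold
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical table of image paths and labels; replace with your real file list
df = pd.DataFrame({
    "filename": [f"images/img_{i:04d}.png" for i in range(100)],
    "class": ["class_a" if i % 2 == 0 else "class_b" for i in range(100)],
})

datagen = ImageDataGenerator(rescale=1.0 / 255)
kfold = KFold(n_splits=5, shuffle=True, random_state=42)

for train_idx, val_idx in kfold.split(df):
    # Each fold gets its own generators drawn from the same flat file list
    train_gen = datagen.flow_from_dataframe(df.iloc[train_idx], target_size=(224, 224))
    val_gen = datagen.flow_from_dataframe(df.iloc[val_idx], target_size=(224, 224))
    # model.fit(train_gen, validation_data=val_gen, epochs=...)  # one model per fold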

PyTorch image segmentation transfer learning

I am new to PyTorch. My question is: how do I apply transfer learning to a custom dataset? I am doing image segmentation on brain tumors. I can find examples that use the U-Net structure, but I could not find examples that use the weights of pre-trained models for U-Net image segmentation.
You could obtain pre-trained models in two ways:
Model weights or complete models shared in formats such as .pt or .pth:
In this case, Saving and Loading Models is a good starting point. Copying from the tutorial there, you could load a model as
import torch

model = TheModelClass(*args, **kwargs)      # instantiate the architecture first
model.load_state_dict(torch.load(PATH))    # then load the saved weights into it
The other way is to load the model from torchvision. A list of available models is at Torchvision Models. U-Net is not available yet. However, it is possible to load a pre-trained model as the encoder and write a separate decoder to form a U-Net with a pre-trained encoder.
In this case, the model objects returned from the function calls shown in the API are already loaded with pretrained weights when pretrained=True.
For writing a custom dataloader, PyTorch data loaders may be a useful guide.
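A minimal sketch of that encoder idea (my own illustration, assuming a torchvision ResNet; a real U-Net decoder would also need skip connections from the intermediate encoder stages):

import torch
import torch.nn as nn
import torchvision.models as models

# Load an ImageNet-pretrained ResNet-18 and drop its avgpool + fc head,
# keeping only the convolutional stages as a U-Net-style encoder
resnet = models.resnet18(pretrained=True)
encoder = nn.Sequential(*list(resnet.children())[:-2])

x = torch.randn(1, 3, 224, 224)   # stand-in for a real image batch
features = encoder(x)             # shape (1, 512, 7, 7); feed this into your decoder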

Is there a possibility to visualize intermediate layers in Keras?

I am using the DenseNet121 CNN from the Keras library and I would like to visualize the feature maps when I predict images. I know this is possible with a CNN we have built ourselves.
Is it the same for models available in Keras, like DenseNet?
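Yes, the same approach should work, since the built-in models are ordinary Keras models. A hedged sketch (to my knowledge "conv1/relu" is an early activation layer in Keras's DenseNet121, but check model.summary() for the exact layer names):

import numpy as np
import tensorflow as tf

base = tf.keras.applications.DenseNet121(weights="imagenet")

# Build a model whose output is an intermediate layer's activations
feature_model = tf.keras.Model(inputs=base.input,
                               outputs=base.get_layer("conv1/relu").output)

img = np.random.rand(1, 224, 224, 3).astype("float32")  # stand-in for a real image
feature_maps = feature_model.predict(img)                # e.g. shape (1, 112, 112, 64)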

Is there a way to create and train a model without transfer learning using tensorflow object-detection api?

I'm using faster_rcnn_resnet50 to train a model that will detect corrosion in images, and I want to train the model from scratch instead of using transfer learning.
I don't know if this is right, but my reasoning is that the existing weights (which were trained on COCO) will affect my model when it is trained on corrosion images.
One way I would like to do this is to randomize or unfreeze the weights of the ResNet-50 feature extractor and then train the model on my images,
but there's no function or option in the faster_rcnn_resnet50 config file to randomize or unfreeze weights.
I've made a new label map with a single label and tried it with transfer learning. It works, but I would like a model that is trained only on my images, so that the previous weights don't affect my predictions.
This is the first time I'm working with object detection and transfer learning. Will the weights of the model pre-trained on COCO affect my model trained on custom images of corrosion? How do you use the TensorFlow Object Detection API without transfer learning?
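A hedged sketch of the usual answer (my own addition, with placeholder paths and values): the COCO weights are only loaded because the pipeline config points at a fine-tune checkpoint. Removing or commenting out that line makes the API initialize the feature extractor randomly, i.e. training starts from scratch; no separate "randomize weights" option is needed.

train_config {
  batch_size: 8
  num_steps: 200000
  # fine_tune_checkpoint: "path/to/model.ckpt"   # remove/comment out to train from scratch
  # from_detection_checkpoint: true              # only meaningful when a checkpoint is set
}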

While implementing yolo in darknet, should we train on image net data set?

I have installed darknet on Ubuntu and am now trying to implement object detection using YOLOv2 on my custom dataset. In the YOLO paper, the authors say they pretrained the network on the ImageNet dataset. So my question is: should we also pretrain the network?
Sorry if I'm being blunt.
Can someone help me?
For most cases, if your dataset contains lots of features similar to those in the pre-trained weights (e.g. person, car), you should use pre-trained weights such as darknet53.conv.74 or darknet19_448.conv.23.
But you can also train the network without those pre-trained weights (training from scratch), for example by removing the weights file from the command:
./darknet detector train data/obj.data yolo-obj.cfg
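For comparison, a sketch of the transfer learning variant (assuming the darknet19_448.conv.23 file mentioned above, which matches YOLOv2, has been downloaded next to the binary):

./darknet detector train data/obj.data yolo-obj.cfg darknet19_448.conv.23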
