I'm working on signature verification, and there are a bunch of things I wanted to do using Keras/OpenCV/PIL but couldn't find relevant information for. I have loaded the dataset folder using keras.preprocessing.image_dataset_from_directory and now need to:
Crop the signature from each image in the dataset. There may be rectangular borders (or just one side of a border), and the border pixels aren't the same in all images.
Resize the images and also take care of augmentation of the signatures.
Example Images:
Since I'm working in Keras, I thought of using its functions, but couldn't find any. How can I auto-crop/extract a signature from the dataset I've loaded? As for image augmentation, should I do it in this image preprocessing stage, or implement it in the CNN model I am using? I am new to image processing and Keras.
Also, because I loaded the entire training folder as one dataset, the labels are just "Genuine" and "Forged". However, there are multiple genuine and forged signatures per person, and there are multiple people. How do I divide the data?
Organize your directories as follows:
main_dir
-train_dir
--person1_fake_dir
---person1 fake image
---person1 fake image
---etc
--person1_real_dir
---person1 real image
---person1 real image
---etc
--person2_fake_dir
---person2 fake image
---person2 fake image
---etc
--person2_real_dir
---person2 real image
---person2 real image
---etc
.
.
.
--personN_fake_dir
---personN fake image
---personN fake image
---etc
--personN_real_dir
---personN real image
---personN real image
---etc
-test_dir
same structure as train_dir but put test images here
-valid_dir
same structure as train_dir but put validation images here
If you have N persons, then you will have 2 x N classes.
You can then use tf.keras.preprocessing.image.ImageDataGenerator().flow_from_directory() to input your data; see the TensorFlow documentation for details. You don't have to worry about cropping the images; just set the target size in flow_from_directory to something like (256, 256).
The code below shows the rest of the code you need:
import tensorflow as tf

# train_dir, valid_dir, test_dir are the directory paths described above
data_gen=tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)   # rescale pixels to [0,1]
train_gen=data_gen.flow_from_directory(train_dir, target_size=(224,224), color_mode='grayscale')
valid_gen=data_gen.flow_from_directory(valid_dir, target_size=(224,224), color_mode='grayscale', shuffle=False)
test_gen=data_gen.flow_from_directory(test_dir, target_size=(224,224), color_mode='grayscale', shuffle=False)
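# NOTE: 'model' is not defined in the original snippet. A minimal CNN sketch follows;
# this architecture is an assumption, not part of the original answer, so tune it for your data.
N=5                 # hypothetical number of persons
num_classes=2*N     # 2 x N classes, as noted above
model=tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224,224,1)),                  # 224x224 grayscale input
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(num_classes, activation='softmax')   # one output unit per class
])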
model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.CategoricalCrossentropy(), metrics=['accuracy'])
history=model.fit(train_gen, validation_data=valid_gen, epochs=20, verbose=1)
accuracy=model.evaluate(test_gen)[1]*100   # index 1 is the accuracy metric
print('Model accuracy is ', accuracy)
Note that your model will not be able to tell fake from real in the general case; it should only work for persons 1 through N. You could try putting all the fake images in one class directory and all the real images in another class directory and training on that, but I suspect it will not work well at telling real from fake in the general case.
I'm currently working on object detection using YOLOv5. I trained a model with a custom dataset which has 3 classes = ['Car', 'Motorcycle', 'Person'].
I have many questions related to YOLOv5.
All the custom images are labelled using Roboflow.
Question 1: As you can see from the table, my dataset has a mix of images with different sizes. Will this be a problem in training? Also, assuming I've trained the model and got 'best.pt', will that model work efficiently on images/videos of any dimensions?
Question 2:
Is this directory layout correct for training? I have a 'test' directory too, but it seems that directory is not used at all, and the images in the 'test' folder are useless. (I know that I'm asking dumb questions, please bear with me.)
Is it OK if I place all my images like this?
And do I need a 'test' folder?
Question 3: What is 'imgsz' in detect.py? Is it downsampling the input source?
I've spent more than 3 weeks on YOLO. I love it, but I find some parts difficult to grasp. Kindly provide suggestions for these questions. Thanks in advance.
"question1 : As you can see from the table that my dataset has mix of images with different sizes. Will this be a problem in training? And also assume that i’ve trained the model and got ‘best.pt’. Will that model work efficiently in any dimensions of images/videos."
As long as you've resized/normalized all of your images to the same square size, you should be fine; YOLO trains on square images. You can use a platform like Roboflow to process your images so that they not only come out in the right structure (for your images and annotation files) but are also all resized to the same size while your dataset is generated. http://roboflow.com/ - you just need to create a public workspace to upload your images to, and then you can use the platform for free. Here's a video that covers custom training with YOLOv5: https://www.youtube.com/watch?v=x0ThXHbtqCQ
Roboflow's Python package can also be used to extract your images programmatically: https://docs.roboflow.com/python
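If you prefer to resize offline yourself, here is a minimal sketch (an assumption of mine, not Roboflow's code) that letterboxes an image to a fixed square with OpenCV. Note that YOLOv5 also letterboxes internally at train/inference time, and if you resize images offline, your annotations must be adjusted to match:
import cv2
import numpy as np

def letterbox(path, size=640):
    # Resize to fit inside a size x size square, padding to preserve aspect ratio.
    img = cv2.imread(path)
    h, w = img.shape[:2]
    scale = size / max(h, w)
    resized = cv2.resize(img, (int(w * scale), int(h * scale)))
    canvas = np.full((size, size, 3), 114, dtype=np.uint8)   # gray padding
    top = (size - resized.shape[0]) // 2
    left = (size - resized.shape[1]) // 2
    canvas[top:top + resized.shape[0], left:left + resized.shape[1]] = resized
    return canvas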
"Is this directory model correct for training. Even i have ‘test’ directory but it seems that the directory is not at all used. The images in the ‘test’ folder is useless. ( I know that i’m asking dumb questions, please bare with me.)"
Yes, that directory layout is correct for training. It's what I have whenever I run YOLOv5 training too.
You do need a test folder if you want to run inference against the test images to learn more about your model's performance.
The 'imgsz' parameter in detect.py sets the height/width of the images for inference. Set it to the value you used for --img when you ran train.py.
For example: did you resize images to 640 by 640 when generating your dataset for training? Then use 640 for the 'imgsz' parameter (that is the default value), which also means you set --img to 640 when you ran train.py.
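As a concrete example (the paths and the data.yaml name here are placeholders, and depending on your YOLOv5 version the detect.py flag may be spelled --img-size instead of --imgsz):
python train.py --img 640 --batch 16 --epochs 100 --data data.yaml --weights yolov5s.pt
python detect.py --weights runs/train/exp/weights/best.pt --imgsz 640 --source path/to/images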
detect.py parameters (YOLOv5 Github repo)
train.py parameters (YOLOv5 Github repo)
YOLOv5's Github: Tips for Best Training Results https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results
Roboflow's Model Production Tips: https://docs.roboflow.com/model-tips
I have been working on a dataset in which the goal is to determine which type of orientation an image shows. It is a classification problem in which for each record (for most of them) I have 4 images: front-facing, left-facing, right-facing, and back-facing product images.
I want to classify these images into the above 4 categories.
The dataset looks like this:
I have downloaded the images and put them in different folders according to their classes.
Methods I have applied:
So far I have applied two methods to classify these images.
1) I tried VGG16 directly to classify the images, but it did not give me even 50% accuracy.
2) I converted those images into edge images with a black background:
This was done using Canny edge detection. It was done because in the results I was getting images with similar-color dresses, similar-design dresses, etc.
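For reference, a minimal OpenCV sketch of this kind of conversion (the file name and the 100/200 thresholds are placeholders to tune):
import cv2

img = cv2.imread('dress.jpg', cv2.IMREAD_GRAYSCALE)   # hypothetical input image
edges = cv2.Canny(img, 100, 200)                      # white edges on a black background
cv2.imwrite('dress_edges.png', edges)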
On top of these edge images I again applied VGG16, ResNet50, and Inception models, but nothing seemed to work.
Can you suggest some ideas that could work in my case and classify the images in a better way?
First of all, your dataset has to be split properly, for instance 80% train and 20% test. After that, you have to balance these sets so they have the same class proportions: if the train set is 60% class A images and 40% class B images, the test set should have exactly the same proportions (a stratified split).
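A minimal sketch of such a stratified split using scikit-learn (image_paths and labels are hypothetical lists of file paths and class names):
from sklearn.model_selection import train_test_split

# image_paths: assumed list of image file paths; labels: assumed list of class names, one per path
train_paths, test_paths, train_labels, test_labels = train_test_split(
    image_paths, labels, test_size=0.2, stratify=labels, random_state=42)   # 80/20, same class mix in both sets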
I have a hyperspectral image with 186 bands. What is an appropriate way to generate ground truth so that I can use it to create training classes for a machine learning model? The image is below:
You need to manually create masks, i.e., assign classes to regions of interest on any 2D image serving as the ground-truth data (you may need to convert it into the same type as the hyperspectral image data, containing only single-band information).
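A minimal sketch of that idea with NumPy (the array, the region coordinates, and the class IDs are all assumptions):
import numpy as np

cube = np.zeros((512, 512, 186), dtype=np.float32)   # placeholder for your (height, width, 186) hyperspectral data
mask = np.zeros(cube.shape[:2], dtype=np.uint8)      # single-band ground-truth mask, 0 = unlabeled
mask[100:150, 200:260] = 1                           # assumed region of interest for class 1
mask[300:340, 50:120] = 2                            # assumed region of interest for class 2
np.save('ground_truth.npy', mask)                    # pair each labeled pixel with its spectrum for training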
Goal: I want to build a model which can detect the number of faces present in a picture.
What I have:
24533 images to train, and one CSV file which includes
7 columns: [Image_ID, width, height, xmin, ymin, xmax, ymax]
I want to build my own model by fitting this data to Keras Dense layers, so that when passing any image or batch of images it gives me a result with the Image_ID and the number of faces present in each photo.
I have gone through a lot of documents, including how to use YOLO, ImageAI, and the Haar cascade XML library, but in every case it is only addressed how to use these libraries, not how to build your own model.
Please guide me on how to use the information I already have to build my own model without using an existing one.
I have 10 different types of images in a folder. After running prediction on the images in the folder using the VGG16 model, I got some labels for those images. How can I match those labels to the images in my folder, and how can I gather each type of image into its own folder?
I'm not getting anywhere with this.
('n04536866', 'violin', 0.98542005),
('n03028079', 'church', 0.35847503),
('n02690373', 'airliner', 0.945028),
('n03642806', 'laptop', 0.52074945),
I'm getting predictions like this; now I want to match these labels with my images and collect each kind of image into its own folder.
Please read some basics about neural networks and image classification. The result of your prediction is an n-dimensional vector, where n is the number of ground-truth labels, and the components of the vector are the probabilities for each class. So from the example above, the neural network assumes that the input image used for this prediction shows a violin with a probability of 98.54%.
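To act on that, here is a minimal sketch that maps each image to its top-1 predicted label and copies it into a per-label folder (the 'images' and 'sorted' directory names are assumptions):
import os, shutil
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

model = VGG16(weights='imagenet')
src_dir, dst_dir = 'images', 'sorted'                  # hypothetical source and destination paths

for fname in os.listdir(src_dir):
    img = image.load_img(os.path.join(src_dir, fname), target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    preds = model.predict(x)
    label = decode_predictions(preds, top=1)[0][0][1]  # human-readable class name, e.g. 'violin'
    os.makedirs(os.path.join(dst_dir, label), exist_ok=True)
    shutil.copy(os.path.join(src_dir, fname), os.path.join(dst_dir, label, fname))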