I'm looking for examples of PyTorch being used to classify non-MNIST digits. After hours of searching, it seems the algorithms are against me. Does anyone have a good example? Thanks.
I am posting this as an answer since I do not have the rep to comment.
Please have a look at the Google Street View House Numbers dataset (SVHN). It is like MNIST, but with much more noise in the data. Another option could be to use GANs to generate additional images that didn't exist before. You could also try your hand at non-English MNIST-like datasets (though that moves away from your original goal).
Link to an SVHN classifier in PyTorch: https://github.com/potterhsu/SVHNClassifier-PyTorch
Link to the SVHN dataset in torchvision: https://pytorch.org/docs/stable/torchvision/datasets.html#svhn
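In case a concrete starting point helps, here is a minimal sketch of loading SVHN through torchvision in PyTorch; the ./data directory and the transform are just placeholder choices:

```python
# Minimal sketch: loading SVHN with torchvision (path and transform are placeholders).
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.ToTensor(),  # converts HxWxC uint8 images to CxHxW float tensors in [0, 1]
])

# download=True fetches the dataset into ./data on first use
train_set = datasets.SVHN(root="./data", split="train", download=True, transform=transform)
test_set = datasets.SVHN(root="./data", split="test", download=True, transform=transform)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

images, labels = next(iter(train_loader))
print(images.shape, labels.shape)  # e.g. torch.Size([64, 3, 32, 32]) torch.Size([64])
```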
P.S. You could also try making a dataset on your own! This is quite fun to do.
I would like to know a couple of things to clear up my confusion. I want to work on a medical neuroimaging MRI dataset from the ADNI database.
Each Alzheimer's Disease (AD) MRI scan has multiple slices.
Do I have to separate each slice and label each one as AD, or can I combine all slices into a single 3D scan and label that for classification?
Most medical neuroimages come in DICOM, NIfTI (.nii), and similar formats. Is it mandatory to convert them to PNG or JPG for a CNN model, or can I keep them in NIfTI/.nii format?
I have read several existing papers on neuroimaging for Alzheimer's disease but did not find answers to the above questions. I even emailed one of the papers' authors; the reply was that they cannot help with this because they are very busy, with their sincere apologies.
It would be very helpful if anyone could answer these questions and clear up my confusion.
Thank you.
You can train directly with NIfTI files using, for example, TorchIO. There's no need to separate each slice; you can use the 3D image as is.
You can find some examples in the documentation.
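For instance, a minimal sketch of a 3D pipeline with TorchIO could look like the following; the file names, the diagnosis labels, and the 128³ crop size are hypothetical placeholders:

```python
# Sketch of 3D training data with TorchIO; paths, labels, and crop size are placeholders.
import torch
import torchio as tio

# Each subject wraps a whole 3D volume plus its label; no slice extraction needed.
subjects = [
    tio.Subject(mri=tio.ScalarImage("sub-01_T1w.nii.gz"), diagnosis=1),  # 1 = AD
    tio.Subject(mri=tio.ScalarImage("sub-02_T1w.nii.gz"), diagnosis=0),  # 0 = control
]

transform = tio.Compose([
    tio.ToCanonical(),               # reorient to a canonical orientation
    tio.Resample(1),                 # resample to 1 mm isotropic spacing
    tio.CropOrPad((128, 128, 128)),  # fixed shape so volumes can be batched together
    tio.ZNormalization(),            # intensity normalization
])

dataset = tio.SubjectsDataset(subjects, transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=2)

batch = next(iter(loader))
volumes = batch["mri"][tio.DATA]     # shape: (batch, channels, 128, 128, 128)
labels = batch["diagnosis"]
print(volumes.shape, labels)
```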
Disclaimer: I'm the main developer of TorchIO.
I am building a license plate recognition system using Python. Browsing the web, I found that many people have done recognition of the characters on the license plate using the kNN algorithm.
Can anyone explain how we predict the characters on the license plate using kNN?
Is there any other algorithm or method that can do the prediction better?
I am referring to this Git repo https://github.com/MicrocontrollersAndMore/OpenCV_3_License_Plate_Recognition_Python
Well, I did this 5 years ago. I would suggest that nowadays it is much better to do this with machine learning classifier models, but if you want to use OpenCV, it has a pretty cool way to do ANPR with an OCR step.
When I did it, I used a Raspberry Pi to capture and process the images and ran OpenCV with C++ on another computer. I recommend you check this repo and, if you're interested, look up the book referenced there. I hope my answer helps you find your solution.
https://github.com/MasteringOpenCV/code.
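To address the kNN part of the question directly: the usual approach (and roughly what the repo you linked does) is to flatten each segmented character image into a feature vector, train a kNN model on labelled examples, and classify new characters by their nearest neighbours. A rough sketch with OpenCV's cv2.ml.KNearest, where the crop size and the training data are made-up placeholders:

```python
# Rough sketch of kNN character classification with OpenCV; sizes and data are placeholders.
import cv2
import numpy as np

CHAR_W, CHAR_H = 20, 30  # arbitrary fixed size every character crop is resized to

def to_feature(char_img):
    """Resize a cropped character image and flatten it into a 1-D float vector."""
    resized = cv2.resize(char_img, (CHAR_W, CHAR_H))
    return resized.reshape(1, CHAR_W * CHAR_H).astype(np.float32)

def train_knn(train_images, train_labels):
    """train_images: cropped, thresholded character images; train_labels: e.g. ['A', '7', ...]."""
    samples = np.vstack([to_feature(img) for img in train_images])
    labels = np.array([ord(c) for c in train_labels], dtype=np.float32).reshape(-1, 1)
    knn = cv2.ml.KNearest_create()
    knn.train(samples, cv2.ml.ROW_SAMPLE, labels)
    return knn

def predict_char(knn, char_img, k=3):
    """Return the character whose training examples are nearest to char_img."""
    _, results, _, _ = knn.findNearest(to_feature(char_img), k=k)
    return chr(int(results[0][0]))
```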
I have a task to compare two images and check whether they are of the same class (using a Siamese CNN). Because I have a really small dataset, I want to use Keras's ImageDataGenerator.
I have read through the documentation and understood the basic idea. However, I am not quite sure how to apply it to my use case, i.e. how to generate pairs of images together with a label saying whether they are of the same class or not.
Any help would be greatly appreciated.
P.S. I can think of a much more convoluted process using sklearn's extract_patches_2d but I feel there is an elegant solution to this.
Edit: It looks like creating my own data generator may be the way to go. I will try this approach.
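In case it is useful, here is a rough sketch of such a custom generator built on top of ImageDataGenerator; the array names, augmentation settings, and batch size are assumptions, not anything from the Keras docs:

```python
# Rough sketch of a pair generator for a Siamese network, wrapping Keras's
# ImageDataGenerator; array names and settings are assumptions.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation applied independently to each image of a pair.
augmenter = ImageDataGenerator(rotation_range=10,
                               width_shift_range=0.1,
                               height_shift_range=0.1,
                               horizontal_flip=True)

def pair_generator(x, y, batch_size=32):
    """Yield ([batch_a, batch_b], same_class) batches with random augmentation."""
    while True:
        idx_a = np.random.randint(0, len(x), size=batch_size)
        idx_b = np.random.randint(0, len(x), size=batch_size)
        batch_a = np.stack([augmenter.random_transform(x[i]) for i in idx_a])
        batch_b = np.stack([augmenter.random_transform(x[i]) for i in idx_b])
        same = (y[idx_a] == y[idx_b]).astype("float32")
        yield [batch_a, batch_b], same

# Usage with a two-input Siamese model (x_train: (N, H, W, C), y_train: (N,)):
# model.fit(pair_generator(x_train, y_train), steps_per_epoch=100, epochs=10)
```

In practice you would usually balance the positive and negative pairs rather than sampling them uniformly at random, otherwise most pairs end up being from different classes.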
How can I use the Inception V3 TensorFlow model to train on my own dataset of images? Say, for example, I want to train the Inception V3 model on different cool-drink company brands such as Pepsi, Sprite, etc. How can this be achieved?
In the link https://github.com/tensorflow/models/tree/master/inception they explain it with ImageNet. I am a bit confused by that. Please explain.
I suggest you look into transfer learning, which consists of retraining only the last layers on the new categories:
How to Retrain Inception's Final Layer for New Categories
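As an illustration of the same idea (not the exact retraining script from that link), a Keras-style sketch might look like the following; the data directory, image size, and two-class setup are placeholders:

```python
# Sketch of transfer learning with Inception V3 in Keras; the data directory,
# image size, and class count below are placeholders.
import tensorflow as tf

NUM_CLASSES = 2  # e.g. Pepsi vs. Sprite

# Load Inception V3 pretrained on ImageNet, without its final classification layer.
base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                         input_shape=(299, 299, 3))
base.trainable = False  # freeze the pretrained layers; only the new head is trained

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Assumes images are organized as data/<class_name>/<image>.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(299, 299), batch_size=32)
train_ds = train_ds.map(
    lambda x, y: (tf.keras.applications.inception_v3.preprocess_input(x), y))

model.fit(train_ds, epochs=5)
```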
Baptiste's answer linking to the TensorFlow site is good. This is a very broad question and his link is a good start.
If you'd like something a little more step-by-step, the TensorFlow for Poets tutorial is basically the same but doesn't require the use of Bazel commands. It initially uses flowers, but you can use whatever dataset you want.
There are many other examples and tutorials on the web. I found some more with a quick search including this page and this video.
Good Luck!
I have a dataset where a lot of names are written with digits in place of letters, e.g. man1sh instead of manish and v1kas instead of vikas.
How can one correct these names using NLP?
Any help is appreciated.
Try the deep-neural-network-based spelling correction described at https://medium.com/#majortal/deep-spelling-9ffef96a24f6; this method is the state of the art at the moment. The code is at https://github.com/MajorTal/DeepSpell, and someone has already made an improvement over it: https://hackernoon.com/improving-deepspell-code-bdaab1c5fb7e. I am not able to find it, but there is also a published paper that uses a character-level deep neural network for edit distance, with good results and a public dataset.
For the above methods, as for all machine learning solutions, you need data for training. If you don't have data for your case, then the old, simple edit-distance methods (http://norvig.com/spell-correct.html) are the only way.
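A stripped-down sketch of that edit-distance idea, adapted to checking against a list of valid names (the KNOWN_NAMES set below is only a placeholder), could look like this:

```python
# Stripped-down edit-distance correction against a known list of valid names;
# the KNOWN_NAMES set is only a placeholder.
KNOWN_NAMES = {"manish", "vikas", "rahul", "priya"}

def edits1(word):
    """All strings one edit (delete, transpose, replace, insert) away from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(name):
    """Return the name itself if known, else a known name one edit away, else the input."""
    name = name.lower()
    if name in KNOWN_NAMES:
        return name
    candidates = edits1(name) & KNOWN_NAMES
    return min(candidates) if candidates else name  # min() is just a deterministic tie-break

print(correct("man1sh"))  # -> manish
print(correct("v1kas"))   # -> vikas
```

Norvig's version additionally ranks candidate corrections by word frequency learned from a corpus; with a fixed list of valid names, a simple membership check like this is often enough.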