I would like to clear up a couple of points of confusion. I want to work with a medical neuroimaging MRI dataset from the ADNI database.
Each Alzheimer's disease (AD) MRI scan has multiple slices.
Do I have to separate each scan into its slices and label each of them as AD, or combine all the slices as one scan and label that for classification?
Most medical neuroimages come in DICOM, NIfTI (.nii), or similar formats. Is it mandatory to convert them to PNG or JPG for a CNN model, or can I keep them in NIfTI/.nii format?
I have read several existing papers on neuroimaging for Alzheimer's disease but did not find an answer to the questions above. I even emailed one of the papers' authors; the reply was that they could not help with this because they were very busy, with their sincere apologies.
It would be very helpful if anyone could answer these questions and clear up my confusion.
Thank you.
You can train with NIfTI, using, for example, TorchIO. There is no need to separate each slice; you can use the 3D image as is.
You can find some examples in the documentation.
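As a minimal sketch of what that looks like (the file paths, labels, and target shape here are placeholders, not part of ADNI):

```python
import torch
import torchio as tio

# Placeholder paths and labels (1 = AD, 0 = control).
subjects = [
    tio.Subject(mri=tio.ScalarImage("sub-01_T1w.nii.gz"), label=1),
    tio.Subject(mri=tio.ScalarImage("sub-02_T1w.nii.gz"), label=0),
]

# Typical preprocessing: reorient, resample to 1 mm, crop/pad, rescale.
transform = tio.Compose([
    tio.ToCanonical(),
    tio.Resample(1),
    tio.CropOrPad((160, 192, 160)),
    tio.RescaleIntensity(out_min_max=(0, 1)),
])

dataset = tio.SubjectsDataset(subjects, transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=2)

for batch in loader:
    volumes = batch["mri"][tio.DATA]  # (B, 1, D, H, W): whole 3D scans
    labels = batch["label"]           # one label per scan, not per slice
    # pass `volumes` to a 3D CNN here
```

The key point is that each subject is a single labeled 3D volume, so no slice-level labeling or PNG/JPG conversion is needed.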
Disclaimer: I'm the main developer of TorchIO.
Related
I am using the COBRE brain MRI dataset, which contains NIfTI files. I can visualize them, but I cannot figure out how to use them in deep learning in the correct format. I read the Nilearn documentation, but it only shows an example with one .nii file for one subject. The question is: how do I feed 100 .nii files to a CNN?
The second thing is how to determine which slice of the file should be used. Should it be the middle one? Each subject's NIfTI file consists of 150 slices.
The third thing is how to provide the model with labels. The dataset doesn't contain any masks. How do I give the model a specific label for a specific file? Should I create a CSV file with the paths of the .nii files and their associated labels?
Please explain this to me or suggest some resources.
Hi, I recently got into processing .nii files for one of my projects. I have made some progress at the preprocessing level, but not yet at the model level.
For your second question: usually an expert visualizes the NIfTI volumes and provides the location(s) of the ROI (region of interest).
I am currently parsing the .nii files into CSV format with labels. So, for your third question: we label the coordinates (x, y, z, c, t) according to the ROI locations. (I may need to correct this understanding as I advance, but for now this is the approach I am going to follow to feed the dataset to the model.)
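To make the CSV idea from your third question concrete, here is a rough sketch (the column names, normalization, and class encoding are my assumptions):

```python
import nibabel as nib
import numpy as np
import pandas as pd
import torch
from torch.utils.data import Dataset

class NiftiCSVDataset(Dataset):
    """Yields (volume, label) pairs from a CSV with columns: path,label."""

    def __init__(self, csv_path):
        self.table = pd.read_csv(csv_path)

    def __len__(self):
        return len(self.table)

    def __getitem__(self, idx):
        row = self.table.iloc[idx]
        # Load the full 3D volume; no need to pick a single slice.
        volume = nib.load(row["path"]).get_fdata().astype(np.float32)
        volume = (volume - volume.mean()) / (volume.std() + 1e-8)  # z-score
        tensor = torch.from_numpy(volume).unsqueeze(0)  # (1, D, H, W)
        return tensor, int(row["label"])
```

Wrapped in a standard DataLoader, this also answers your first question: the 100 files simply become 100 rows in the CSV.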
Disclaimer: Complete beginner with neural networks & audio representation. Please bear with me.
I have this idea for my bachelor's thesis (MIR) that involves applying a beat-like time-based pattern to constrain where a CNN-based acoustic model finds onsets/offsets. The problem is that I'm having a hard time figuring out how to implement this concept.
The initial plan was to feed both the spectrogram and the pattern into the CNN and hope it learns from them, but I don't know what format the pattern should be in. I know CNNs are best at processing images, but the initial format of the pattern is time-based (beats per minute/second). Can this number be represented as an image that can be compared to the spectrogram? If so, in what format? Or should I handle this problem in a different way? Thank you in advance!
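For concreteness, this is the kind of representation I have been considering (a sketch using librosa; the file name and BPM value are placeholders): render the tempo as a pulse train with one value per spectrogram frame, then stack it as a second input channel.

```python
import numpy as np
import librosa

y, sr = librosa.load("track.wav", sr=22050)  # placeholder file
hop = 512
spec = librosa.power_to_db(
    librosa.feature.melspectrogram(y=y, sr=sr, hop_length=hop)
)

# Render the tempo as a pulse train, one value per spectrogram frame.
bpm = 120.0
frames_per_beat = (60.0 / bpm) * sr / hop
pulses = np.zeros(spec.shape[1], dtype=np.float32)
pulses[np.arange(0, spec.shape[1], frames_per_beat).astype(int)] = 1.0

# Broadcast the 1D pattern to the spectrogram's height and stack it as
# a second channel, so the CNN sees two time-aligned "images".
beat_image = np.tile(pulses, (spec.shape[0], 1))
model_input = np.stack([spec, beat_image])  # (2, n_mels, n_frames)
```

Is stacking a channel like this a reasonable way to let the network relate the two, or is there a better-established approach?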
I have a project where I need to analyze a text to determine whether the user who posted it needs help with something or not. I tried sentiment analysis, but it didn't work as expected. My idea was to take the negative posts, extract the main words from each post, and suggest some articles about that subject to the user. If there is another way that could help, please post it below. Thanks.
As for the dataset: the one I used was built for sentiment analysis, but I have now found that it doesn't work for this, and I need a dataset made for this task.
Please apply NLP methods before running the sentiment analysis. Use TF-IDF or Word2Vec to create vectors from the given dataset, and then try the sentiment analysis. You may also want GloVe vectors for the analysis.
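For example, a minimal TF-IDF pipeline with scikit-learn might look like this (the posts and labels are made up; 1 = asking for help):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy data: 1 = the user is asking for help, 0 = not.
posts = [
    "Can anyone help me fix this error?",
    "I really need assistance with my account.",
    "Had a great day today!",
    "Just sharing some photos from my trip.",
]
labels = [1, 1, 0, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
    LogisticRegression(max_iter=1000),
)
model.fit(posts, labels)
print(model.predict(["Please help, my order never arrived"]))
```

With a real labeled dataset, the same pipeline can be swapped to Word2Vec or GloVe features.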
On this topic, I found that this area of machine learning is called "natural language questions": models are trained to detect questions in text and suggest answers for them based on the dataset you are working with. Check this article for more detail.
I have a dataset where a lot of names are written with digits in place of letters, such as man1sh instead of manish, or v1kas instead of vikas.
How can one correct these names with NLP?
Any help is appreciated.
Try deep neural network based spell correction (https://medium.com/@majortal/deep-spelling-9ffef96a24f6); this method is the state of the art at the moment. The code is here: https://github.com/MajorTal/DeepSpell, and someone has already made an improvement over it: https://hackernoon.com/improving-deepspell-code-bdaab1c5fb7e. I am not able to find it, but there is also a published paper that uses a character-level deep neural network for edit distance, with good results and a public dataset.
For the above methods, as for all machine learning solutions, you need data for training. If you don't have data for your case, then the old, simple edit-distance methods (http://norvig.com/spell-correct.html) are the only way.
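As a concrete baseline in that spirit (my own sketch, assuming you have a list of valid names to match against): first map look-alike digits back to letters, then fall back to a closest-match dictionary lookup.

```python
from difflib import get_close_matches

# Digits that commonly stand in for letters; extend as needed.
LEET_MAP = str.maketrans(
    {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t"}
)
VALID_NAMES = ["manish", "vikas", "rahul"]  # hypothetical dictionary

def correct_name(noisy: str) -> str:
    candidate = noisy.lower().translate(LEET_MAP)
    # Pick the nearest valid name by string similarity, if any is close.
    matches = get_close_matches(candidate, VALID_NAMES, n=1, cutoff=0.6)
    return matches[0] if matches else candidate

print(correct_name("man1sh"))  # -> manish
print(correct_name("v1kas"))   # -> vikas
```

Note that "1" maps to "i" here, but in your data it could also stand for "l"; generating both candidates and scoring each against the dictionary would handle that.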
I am new to computer vision, and I am now doing some research on object detection. I have read the Faster R-CNN and R-FCN papers, and also the YOLO paper. It seems the biggest problem is speed? And all of them use image data only. Are there any models that combine text and image data? That is, could we use information from text to help detection when the training data is small? For example, with little training data the model cannot tell dogs and cats apart clearly, but it can tell there is a bone near the object; if the model learns from text that an object near a bone is most likely a dog, it can now tell what the object is. Does this kind of algorithm exist? I haven't found any; I hope you can help me. Thanks a lot.
It seems you have mostly looked at research on deep networks for object detection. Prior to the success of deep networks, researchers were looking into the possibility of using text together with image features to implement ideas similar to yours. You might want to refer to papers from ACM Multimedia and IEEE TMM, especially those before 2014.
The problem was that those approaches could not perform as well as even the simplest deep networks that use only images. There is some work on combining images and text, such as this paper. I am sure at least some researchers are already working on this.