Creating a whole-brain mask from two NIfTI images added with fslmaths -add

In my analysis I want to create a whole-brain mask covering all my subjects summed together and extract fractional anisotropy from it. I started by segmenting each subject's structural image into white and gray matter, then added the two NIfTI images (gray matter + white matter) of the same subject together with fslmaths -add, and now want to create a binarized mask from the result with fslmaths (so that I can later add all the masks together into a general mask across all subjects).
I also set a threshold as an option, but the result of this command is a black NIfTI image no matter how I set the threshold level. Can you give me some advice, please? I suspect the NIfTI image from the first step (the addition) is the problem, but I can't get any further.
thanks!
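The intended add-then-binarize step can be sketched in plain numpy (a toy sketch; real volumes would be loaded with a tool like nibabel, and the 0.5 threshold is only a placeholder). One thing worth checking: segmentation outputs such as FAST's partial volume maps have voxel values in [0, 1], so an fslmaths -thr value much above 1 will zero out the entire volume and produce exactly an all-black image.

```python
import numpy as np

# Stand-ins for the gray- and white-matter partial volume maps produced by
# segmentation -- voxel values lie in [0, 1], so the summed image lies in [0, 2].
gm = np.array([[0.0, 0.6],
               [0.3, 0.9]])
wm = np.array([[0.0, 0.2],
               [0.5, 0.05]])

summed = gm + wm                          # fslmaths gm -add wm summed
mask = (summed > 0.5).astype(np.uint8)    # fslmaths summed -thr 0.5 -bin mask
```

Inspecting the intensity range of the summed image (e.g. with fslstats -R) before choosing the threshold tells you immediately whether the threshold is outside the data range.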

Related

Get a single TIFF using multiple TIFFs and masks

I'm working with satellite imagery (from Sentinel-2), in particular with cloud detection and cloud cleaning.
I have a batch of images of the same area, but from different periods:
From these images, you can see that the position of the clouds is always different.
I also have the mask for each image, where the black areas represent clouds:
These masks are not perfect, but this is not a problem.
What I want to do is use the mask to keep the white portions (the land) and cut out the clouds, and then fill these cuts with the corresponding portion of another image (fill each "hole" in the image with a part of another image that has no clouds there).
Imagery is in TIFF format, while masks are in JPG format.
I'm using Python with libraries like Rasterio, numpy and scikit-image, so a Pythonic solution would be appreciated.
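Assuming the mask has been read in as a boolean array (True where the land is visible), the cut-and-fill step can be sketched with plain numpy; `fill_clouds` and the array names are illustrative, and reading the TIFF bands with Rasterio (plus thresholding the JPG mask to booleans) is omitted here:

```python
import numpy as np

def fill_clouds(img, clear_mask, fallback):
    """Replace cloudy pixels of `img` with the same pixels from `fallback`.

    `clear_mask` is True where `img` is cloud-free (the white areas of the
    JPG mask); `fallback` is a co-registered image of the same area taken
    at another time, ideally cloud-free at those positions.
    """
    out = img.copy()
    # A 2-D boolean mask indexes all bands of an (H, W, C) array at once.
    out[~clear_mask] = fallback[~clear_mask]
    return out
```

Since the masks are not perfect, you may want to dilate the cloud region slightly before filling so that thin cloud fringes are replaced as well.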

Apply masking on selective image regions using opencv python

I have an ML model that I am using to create a mask to separate background and object. The problem is that the model is not that accurate and we still get regions of background around the edges.
The background could be any color but it is not uniform as you can see in the image.
This is the model output.
I was wondering if there is a way I could apply masking only around the edges so it doesn't affect the other parts of the object which have been extracted properly. Basically, I only want to trim down these edges that contain the background, so any solutions using Python are appreciated.
I'm really sorry for not being at the liberty to share the code but I'm only looking for ideas that I can implement to solve this problem.
You can use binary erosion or dilation to "grow" or shrink the mask so that it covers the edge:
https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.morphology.binary_dilation.html
https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.morphology.binary_erosion.html
As for applying masking only around the edges (this is not the same mask I was writing about above): you can flag pixels that are close to the edge by iterating over the mask and finding every 0 that has a neighbouring 1, or vice versa.
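A minimal sketch of the erosion idea with scipy.ndimage (the toy mask, the 2-pixel amount, and the variable names here are illustrative assumptions, not tuned values):

```python
import numpy as np
from scipy.ndimage import binary_erosion

# Toy stand-in for the model's object mask: a 5x5 blob.
mask = np.zeros((7, 7), dtype=bool)
mask[1:6, 1:6] = True

# Erode by 2 px to trim the background fringe off the object's edge;
# binary_dilation would instead grow the mask outward.
trimmed = binary_erosion(mask, iterations=2)

# The band between the original and eroded masks is the "only around
# the edges" region where corrections would be applied.
edge_band = mask & ~trimmed
```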

How to create your own dataset with Python for deep learning (Keras / TensorFlow) road line detection

I am given the task to find road lines on an image for a class project.
I want to start writing Convolutional Neural Network to do the task, but I am not sure how to create a dataset.
Let's say I have to find lines on this image (originally I have been given around 1000 images of traffic where road lines could be detected):
To be able to do that I have to create a dataset. What to do? Should I take some random images and cut regions where I can see the road lines? What size should the training images be? How would I label the line to stand out from the background?
Also, I presume cutting lines from an image is an okay way when the line is segmented, but I cannot do that for a full line, can I?
It depends a lot on the assignment details. What does "find road lines on an image" mean?
Depending on the answer to the above question, you could divide the image in a 4x4 or 5x5 grid and try to find the cells on that grid that contain road lines.
To accomplish that you could manually label some of the cells (you might want to create a small GUI to facilitate this part) and train your CNN with the labeled data.
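The grid idea above can be sketched like this (a toy numpy helper; `grid_cells` and the 4x4 default are illustrative assumptions, and each returned cell would get a manual has-line/no-line label):

```python
import numpy as np

def grid_cells(img, rows=4, cols=4):
    """Split an image into a rows x cols grid of cells for per-cell labeling.

    Integer division keeps the cells covering the whole image even when the
    dimensions are not exact multiples of the grid size.
    """
    h, w = img.shape[:2]
    return [img[r * h // rows:(r + 1) * h // rows,
                c * w // cols:(c + 1) * w // cols]
            for r in range(rows) for c in range(cols)]
```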

Reducing / Enhancing known features in an image

I am a microbiology student who is new to computer vision, so any help will be extremely appreciated.
This question involves microscope images that I am trying to analyze. The goal I am trying to accomplish is to count bacteria in an image but I need to pre-process the image first to enhance any bacteria that are not fluorescing very brightly. I have thought about using several different techniques like enhancing the contrast or sharpening the image but it isn't exactly what I need.
I want to reduce the noise (black spaces) to 0's on the RGB scale and enhance the green spaces. I originally was writing a for loop in OpenCV with threshold limits to change each pixel, but I know that there is a better way.
Here is an example that I did in Photoshop of the original image vs. what I want.
Original Image and enhanced Image.
I need to learn to do this in a Python environment so that I can automate this process. As I said, I am new, but I am familiar with Python's OpenCV, mahotas, numpy, etc., so I am not exactly attached to a particular package. I am also very new to these techniques, so I would appreciate it even if you just point me in the right direction.
Thanks!
You can have a look at histogram equalization. This would emphasize the green and reduce the black range. There is an OpenCV tutorial here. Afterwards you can experiment with different thresholding mechanisms to see which best isolates the bacteria.
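For reference, a numpy-only sketch of what histogram equalization does to a grayscale uint8 image (roughly what cv2.equalizeHist computes; this version assumes the image is not a single constant value):

```python
import numpy as np

def equalize_hist(img):
    """Histogram-equalize a uint8 grayscale image: spread the occupied gray
    levels over the full 0-255 range via the cumulative distribution."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map each gray level through the normalized cumulative distribution,
    # so the darkest occupied level becomes 0 and the brightest 255.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

On a fluorescence image you would equalize the green channel (or a grayscale conversion) and then threshold.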
Use TensorFlow:
Create your own dataset with images of bacteria and their positions stored in accompanying text files (the bigger the dataset, the better).
Create a positive and a negative set of images.
Update the default TensorFlow example with your images.
Make sure you have a bunch of convolution layers.
Train and test.
TensorFlow is perfect for such tasks and you don't need to worry about different intensity levels.
I initially tried histogram equalization but did not get the desired results. So I used adaptive threshold using the mean filter:
th = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 3, 2)
Then I applied the median filter:
median = cv2.medianBlur(th, 5)
Finally I applied morphological closing with an ellipse kernel (note that the iteration count must be passed as a keyword, since the fourth positional argument of cv2.morphologyEx is the destination array):
k1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(median, cv2.MORPH_CLOSE, k1, iterations=3)
THIS PAGE will help you modify this result however you want.

Turn an image into lines and circles

I need to be able to turn a black-and-white image into a series of lines (start and end points) and circles (center point, radius). I have a "pen width" that's constant.
(I'm working with a screen that can only work with this kind of graphics).
Problem is, I don't want to overcomplicate things: I could represent any image with loads of small lines, but it would take a lot of time to draw, so I basically want to "approximate" the image using those lines and circles.
I've tried several approaches (guessing lines, working area by area, etc.) but none had reasonable results without using a lot of lines and circles.
Any idea on how to approach this problem?
Thanks in advance!
You don't specify what language you are working in here, but I'd suggest OpenCV if possible. If not, most decent CV libraries ought to support the features I'm about to describe.
You don't say if the input is already composed of simple shapes (lines and polygons) or not. Assuming it's not, i.e. it's a photo or a frame from a video, you'll need to do some edge extraction to find the lines you are going to model. Use a Canny or another edge detector to convert the image into an edge map.
I suggest you then extract circles first, as they are the richest feature you can model directly. Consider using a Hough circle transform to locate circles in your edge image. Once you've located them, remove them from the edge image (to avoid duplicating them in the line-processing step below).
Now, for each pixel in the edge image that's 'on', you want to find the longest line segment that it's a part of. There are a number of algorithms for doing this; the simplest would be the probabilistic Hough transform (also available in OpenCV) to extract line segments, which gives you control over the minimum length, allowed gaps, etc. You may also want to examine alternatives like LSWMS, which has OpenCV source code freely available.
Once you have extracted the lines and circles you can plot them into a new image or save the coordinates for your output device.
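To make the Hough idea concrete, here is a toy numpy accumulator for the line case (the circle transform works the same way with a 3-D accumulator over center and radius); in practice you would call cv2.HoughCircles / cv2.HoughLinesP rather than roll your own:

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Vote every edge pixel into a (rho, theta) accumulator.

    Peaks in the accumulator correspond to straight lines
    rho = x*cos(theta) + y*sin(theta); rho is offset by the image diagonal
    so negative values index into the array.
    """
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for i, t in enumerate(thetas):
        rhos = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + diag
        np.add.at(acc, (rhos, i), 1)  # accumulate votes, even for repeats
    return acc, thetas, diag
```

The peak height equals the number of edge pixels on the line, which is what lets you rank candidates and keep only the few strongest segments for the pen plot.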
