I am working on breast region segmentation using Huang thresholding. The original image and the result are provided here:
As you can see, the mask edges are not smooth, but that is acceptable in this sample. Next is another sample where the mask edges are pretty jagged:
In the attached picture I already implemented some preprocessing to smooth the edges by using a close operation with the following code (I also tried a median filter, but it had little effect):
import cv2

# elliptical kernel, closing applied three times to fill gaps along the mask edge
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
closed = cv2.morphologyEx(canvas.copy(), cv2.MORPH_CLOSE, kernel, iterations=3)
I also tried the solution provided here:
but it was not satisfying enough in this case.
Can anybody help me or suggest a method to smooth the edges of the mask? Here I have provided the mask images for sample1:
and for sample2:
FYI, I plan to bitwise_and the image with the mask so I can remove the background. The background in mammograms is sometimes not really black and contains a lot of noise, which you don't want when enhancing the image afterwards.
Related
I have an ML model that I am using to create a mask to separate background and object. The problem is that the model is not that accurate and we still get regions of background around the edges.
The background could be any color but it is not uniform as you can see in the image.
This is the model output.
I was wondering if there is a way I could apply masking only around the edges so it doesn't affect other parts of the object, which have been extracted properly. Basically, I only want to trim these edges that contain background, so any solutions using Python are appreciated.
I'm really sorry for not being at liberty to share the code, but I'm only looking for ideas that I can implement to solve this problem.
You can use binary erosion or dilation to "grow" the mask so that it covers the edge:
https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.morphology.binary_dilation.html
https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.morphology.binary_erosion.html
As for "apply masking only around the edges" (this is not the same mask I was writing about above), you can flag pixels that are close to the edge by iterating over the mask and finding each 0 that has a neighbouring 1, or vice versa.
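A minimal sketch of both ideas with scipy.ndimage (the small boolean mask here is just for illustration):

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

# Toy boolean mask: a 3x3 block of True inside a 9x9 field.
mask = np.zeros((9, 9), dtype=bool)
mask[3:6, 3:6] = True

grown = binary_dilation(mask, iterations=2)  # "grow" the mask outward
shrunk = binary_erosion(mask)                # shrink it inward

# Pixels "close to the edge": any 0 with a neighbouring 1 or vice versa,
# which is exactly the dilation minus the erosion of the mask.
edge_band = binary_dilation(mask) & ~binary_erosion(mask)
```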
I am here because I am in need of some advice...
I am working with face detection. I already tried some methods like the DLIB detector, HoG, among others...
For now, I started to use the OpenCV DNN detection based on the ResNet .caffemodel, but after a lot of attempts I realized that this model is not very good for images larger than 300x300 (HxW).
Note that my images are 1520x2592 (HxW). When I apply the resize, almost all facial information is lost: faces in the original image are about 150x150 pixels, but after resizing for DNN detection they are only about 30x20 (approx.).
Some approaches I already tried:
- Split figure in sub-figures
- Background subtraction
What I need to reach:
- Fast detection
- Reduce the number of lost faces (not detected)
Challenge:
- Big image with small faces in it
- A lot of area in the image not being used (but I can't change the location of the camera)
SSD-based networks are fully convolutional, which means you can vary the input size. Try passing inputs of different sizes and choose the one that gives satisfying performance and accuracy. There is an example here: http://answers.opencv.org/question/202995/dnn-module-face-detection-poor-results-open-cv-343/
input = blobFromImage(img, 1.0, Size(1296, 760)); // x0.5
or
input = blobFromImage(img, 1.0, Size(648, 380)); // x0.25
I'm trying to threshold an image using Otsu's method in Opencv:
However, when I threshold it, some parts of the picture are completely surrounded by white, which ends up with OpenCV not detecting all the contours in the image. This is what I get when I do Otsu's method thresholding using ret,thresh = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU):
EDIT:
Some people have asked for the code I am using so here it is:
import cv2
import numpy as np

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow('Input Image', image)
cv2.waitKey(0)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
thresh = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 81, 2)
#ret, thresh = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
kernel = np.ones((5, 5), np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
closing = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
#thresh_value = 70
#ret, thresh = cv2.threshold(blurred, thresh_value, 255, cv2.THRESH_BINARY)
Now it makes some checkered noise:
You do not need to manually find a sweet spot! Let OpenCV do it for you!
OpenCV has an adaptive thresholding algorithm exactly for problems like this, called adaptiveThreshold.
This function divides the image into multiple sub-images and thresholds each one individually. This means that it will find a nice threshold value for each part of the image and give you a nicely, uniformly thresholded result. See this example.
Try this:
th3 = cv.adaptiveThreshold(blurred, 255, cv.ADAPTIVE_THRESH_GAUSSIAN_C,
                           cv.THRESH_BINARY, 11, 2)
Update:
Functions like these do not work perfectly out of the box. If it still creates artefacts like salt-and-pepper noise, you can try:
- Significantly increasing the blockSize. This can ensure that each block has a letter inside, which will hopefully mean the threshold is chosen better. (E.g. dividing the image into 25 blocks instead of 100; a blockSize of 11 pixels is very small.)
- First applying a blurring filter to ease out the bad spots creating the seasoning noise. (With the image named blurred, I imagine you've done this already.)
- First applying a simple threshold to remove some noise, for example setting all pixels above 5 and below 100 to zero, and then applying adaptiveThreshold afterwards.
- Following @Mark's advice by subtracting a blurred image from the original image. (See this thread.)
I hope this helps!
Instead of using Otsu's method, try a global thresholding method:
thresh_value = 50
ret,thresh = cv2.threshold(blurred, thresh_value, 255, cv2.THRESH_BINARY)
Change the thresh_value parameter until you get the result you want.
To learn more about thresholding techniques, please refer to the documentation.
I am microbiology student new to computer vision, so any help will be extremely appreciated.
This question involves microscope images that I am trying to analyze. The goal I am trying to accomplish is to count bacteria in an image but I need to pre-process the image first to enhance any bacteria that are not fluorescing very brightly. I have thought about using several different techniques like enhancing the contrast or sharpening the image but it isn't exactly what I need.
I want to reduce the noise (black spaces) to 0 on the RGB scale and enhance the green spaces. I originally wrote a for loop in OpenCV with threshold limits to change each pixel, but I know there is a better way.
Here is an example that I made in Photoshop of the original image vs. what I want: Original Image and enhanced Image.
I need to learn to do this in a python environment so that I can automate this process. As I said I am new but I am familiar with python's OpenCV, mahotas, numpy etc. so I am not exactly attached to a particular package. I am also very new to these techniques so I am open to even if you just point me in the right direction.
Thanks!
You can have a look at histogram equalization. This would emphasize the green and reduce the black range. There is an OpenCV tutorial here. Afterwards you can experiment with different thresholding mechanisms to see which best isolates the bacteria.
Use TensorFlow:
- Create your own dataset with images of bacteria and their positions stored in accompanying text files (the bigger the dataset, the better).
- Create a positive and a negative set of images.
- Update the default TensorFlow example with your images.
- Make sure you have a bunch of convolution layers.
- Train and test.
TensorFlow is perfect for such tasks, and you don't need to worry about different intensity levels.
I initially tried histogram equalization but did not get the desired results. So I used adaptive threshold using the mean filter:
th = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 3, 2)
Then I applied the median filter:
median = cv2.medianBlur(th, 5)
Finally I applied morphological closing with the ellipse kernel:
k1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
# iterations must be passed by keyword, or the bare 3 is taken as the dst argument
closed = cv2.morphologyEx(median, cv2.MORPH_CLOSE, k1, iterations=3)
THIS PAGE will help you modify this result however you want.
I am currently working on a program to detect coordinates of pool balls in an image of a pool table taken from an arbitrary point.
I first calculated the table corners and warped the perspective of the image to obtain a bird's eye view. Unfortunately, this made the spherical balls appear to be slightly elliptical as shown below.
In an attempt to detect the ellipses, I extracted all but the green felt area and used a Hough transform algorithm (HoughCircles) on the resulting image shown below. Unfortunately, none of the ellipses were detected (I can only assume because they are not circles).
Is there any better method of detecting the balls in this image? I am technically using JavaCV, but OpenCV solutions should be suitable. Thank you so much for reading.
The extracted BW image is good, but it needs some morphological filtering to eliminate noise. Then you can extract the external contours of each object (with cvFindContours) and fit the best ellipse to them (with cvFitEllipse2).