Mask generation for segmentation - python-3.x

I have 255 images of this kind:
[Image: example of delineation]
I have to generate a mask for each of them automatically: for example, set all pixels in the blue outline to zero and set the others to one.
Does anyone know a useful Python tool for this task? Note that I know the exact position of the blue outline in the image above.
Thank you for your help!
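Since the outline coordinates are known, a minimal sketch with OpenCV and NumPy could look like this (the image size and the outline_points array are hypothetical placeholders):

import numpy as np
import cv2

# Hypothetical Nx2 array of (x, y) pixel coordinates of the blue
# outline, which the question says is known exactly.
outline_points = np.array([[10, 10], [200, 15], [210, 180], [20, 190]],
                          dtype=np.int32)

h, w = 256, 256  # assumed image size
mask = np.ones((h, w), dtype=np.uint8)  # everything starts at one

# Set the pixels on the outline itself to zero.
cv2.polylines(mask, [outline_points], isClosed=True, color=0, thickness=1)
# If the enclosed region should be zero instead, use:
# cv2.fillPoly(mask, [outline_points], 0)

cv2.imwrite("mask.png", mask * 255)  # scaled for visibility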

Related

Apply masking on selective image regions using opencv python

I have an ML model that I am using to create a mask to separate background and object. The problem is that the model is not that accurate and we still get regions of background around the edges.
The background could be any color but it is not uniform as you can see in the image.
This is the model output.
I was wondering if there is a way I could apply masking only around the edges so it doesn't affect the parts of the object that were extracted properly. Basically, I only want to trim away these edges that contain background, so any solutions using Python are appreciated.
I'm really sorry for not being at liberty to share the code, but I'm only looking for ideas that I can implement to solve this problem.
You can use binary erosion or dilation to "grow" the mask so that it covers the edge:
https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.morphology.binary_dilation.html
https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.morphology.binary_erosion.html
as for "apply masking only around the edges" (this is not the same mask I was writing about above), you can flag pixels that are close to the edge by iteration over a mask and finding where there is a 0 that has a neighbouring 1 or vice versa.

How to find bright areas inside an image (and also how to find shaded areas in an image)

I have an image:
How do I find areas of different intensity in an image? That is, how do I find all the bright areas that differ from the overall brightness, and conversely, how do I find the dark areas, which in this case come from shadows?
The human eye notices the change in brightness, but how would a program do that?
Find bright and dark spots in one picture:
There are multiple approaches to this; I will suggest a couple of them here.
You can compute the mean of the image's RGB values, then treat the 10% of pixels that deviate furthest below the mean as dark pixels and the 10% that deviate furthest above it as bright pixels.
You can set predefined thresholds, let's say RGB=[220,220,220] for a bright pixel and RGB=[30,30,30] for a dark pixel, then iterate through the image and classify the pixels accordingly.
You can also look into dynamic thresholding for the second method, and your approach to the problem can be optimised accordingly.
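A rough sketch of both suggestions, assuming a BGR image at a hypothetical path:

import numpy as np
import cv2

img = cv2.imread("scene.png")  # hypothetical input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)

# Method 1: deviation from the mean; take the extreme 10% each way.
deviation = gray - gray.mean()
dark = deviation <= np.percentile(deviation, 10)
bright = deviation >= np.percentile(deviation, 90)

# Method 2: fixed per-channel thresholds (the RGB=[220,220,220] /
# RGB=[30,30,30] idea).
bright_fixed = np.all(img >= 220, axis=2)
dark_fixed = np.all(img <= 30, axis=2)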
Find changes in bright and dark spots:
There are multiple ways to handle this as well. One approach is the mean-value subtraction technique. The human eye responds to change relative to the previously perceived image; the program needs to do the same by comparing each frame to the previously captured frame(s). Look into temporal filtering to take this idea further.
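A minimal sketch of the frame-comparison idea, assuming frames come from a hypothetical capture source; the running average acts as a simple temporal filter:

import cv2

cap = cv2.VideoCapture(0)  # hypothetical source
background = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype("float32")

    if background is None:
        background = gray.copy()

    # Running average over previous frames (simple temporal filter).
    cv2.accumulateWeighted(gray, background, 0.05)

    # Pixels that became brighter or darker relative to the average.
    diff = gray - background
    brighter = diff > 25
    darker = diff < -25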
Hope this helped

Reducing / Enhancing known features in an image

I am microbiology student new to computer vision, so any help will be extremely appreciated.
This question involves microscope images that I am trying to analyze. The goal I am trying to accomplish is to count bacteria in an image but I need to pre-process the image first to enhance any bacteria that are not fluorescing very brightly. I have thought about using several different techniques like enhancing the contrast or sharpening the image but it isn't exactly what I need.
I want to reduce the noise (black spaces) to zeros on the RGB scale and enhance the green spaces. I originally was writing a for loop in OpenCV with threshold limits to change each pixel, but I know that there is a better way.
Here is an example that I made in Photoshop of the original image vs. what I want:
[Images: original vs. enhanced]
I need to learn to do this in a Python environment so that I can automate this process. As I said, I am new, but I am familiar with Python's OpenCV, mahotas, numpy, etc., so I am not attached to any particular package. I am also very new to these techniques, so even just pointing me in the right direction would help.
Thanks!
You can have a look at histogram equalization. This would emphasize the green and reduce the black range. There is an OpenCV tutorial here. Afterwards you can experiment with different thresholding mechanisms to best isolate the bacteria.
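A minimal sketch of that suggestion, assuming a grayscale microscope image at a hypothetical path (the Otsu threshold at the end is just one option to experiment with):

import cv2

img = cv2.imread("bacteria.png", cv2.IMREAD_GRAYSCALE)

# Spread out the intensity histogram so dim bacteria become brighter
# relative to the dark background.
equalized = cv2.equalizeHist(img)

# One thresholding mechanism to try: Otsu picks a cutoff automatically.
_, binary = cv2.threshold(equalized, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)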
Use TensorFlow:
Create your own dataset with images of bacteria and their positions stored in accompanying text files (the bigger the dataset, the better).
Create a positive and a negative set of images.
Update the default TensorFlow example with your images.
Make sure you have several convolution layers.
Train and test.
TensorFlow is perfect for such tasks and you don't need to worry about different intensity levels.
I initially tried histogram equalization but did not get the desired results. So I used adaptive threshold using the mean filter:
th = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 3, 2)
Then I applied the median filter:
median = cv2.medianBlur(th, 5)
Finally I applied morphological closing with the ellipse kernel:
k1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(median, cv2.MORPH_CLOSE, k1, iterations=3)
THIS PAGE will help you modify this result however you want.
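Put together, a runnable version of the whole pipeline (assuming a grayscale input image at a hypothetical path) would look like:

import cv2

img = cv2.imread("bacteria.png", cv2.IMREAD_GRAYSCALE)

# Adaptive mean threshold, inverted so bacteria come out white.
th = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                           cv2.THRESH_BINARY_INV, 3, 2)

# Median filter to remove salt-and-pepper noise.
median = cv2.medianBlur(th, 5)

# Morphological closing with an elliptical kernel to fill small gaps.
k1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(median, cv2.MORPH_CLOSE, k1, iterations=3)

cv2.imwrite("enhanced.png", closed)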

Detect red square in image with OpenGL-ES

I need to write a program that will detect a red square in an image. I would like to do this on my GPU using OpenGl-ES. I have no experience with GPU programming, and haven't found the answer through Google so far.
Is it possible to do this using OpenGL? Does OpenGL-ES give access to the whole matrix of pixels and their locations, allowing a program to go through the pixels and check the color value of each one?
Thank you.
First of all, you are confusing a few terms: there is no 'matrix of pixels'.
If what you meant by that is convolution, then yes, you can run a convolution in a fragment shader to detect edges. However, a fragment shader returns no data, and there is no way to access each pixel from it to read its color value. Convolution would work if you just want the shader to draw the square's edges, but if you want to know whether a red square exists in the camera frame, that must be calculated on the CPU, not the GPU.
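For the CPU side, a minimal sketch in Python with OpenCV (the other examples on this page use OpenCV), assuming the camera frame is already available as a BGR array; the path and threshold values here are made up:

import cv2

frame = cv2.imread("frame.png")  # hypothetical stand-in for the camera frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Red wraps around hue 0 in HSV, so combine two ranges.
mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))

# Look for 4-sided contours in the red mask (OpenCV 4 return values).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    approx = cv2.approxPolyDP(c, 0.04 * cv2.arcLength(c, True), True)
    if len(approx) == 4 and cv2.contourArea(c) > 100:
        print("red square candidate at", cv2.boundingRect(approx))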

detect color space with openCV

How can I determine the color space of my image with OpenCV?
I would like to be sure it is RGB before converting it to another color space using the cvCvtColor() function.
Thanks!
Unfortunately, OpenCV doesn't provide any indication of the color space in the IplImage structure, so if you blindly pick up an IplImage from somewhere, there is just no way to know how it was encoded. Furthermore, no algorithm can definitively tell you whether an image should be interpreted as HSV or RGB - it's all just a bunch of bytes to the machine (should this be HSV or RGB?). I recommend you wrap your IplImages in another struct (or even a C++ class with templates!) to help you keep track of this information. If you're really desperate and you're dealing only with a certain type of image (outdoor scenes, offices, faces, etc.), you could try computing some statistics on your images (e.g., build histogram statistics for natural RGB images and for natural HSV images) and then try to classify your totally unknown image by which color space it is closer to.
txandi makes an interesting point. OpenCV has a BGR colorspace which is used by default. This is similar to the RGB colorspace except that the B and R channels are physically switched in the image. If the physical channel ordering is important to you, you will need to convert your image with this function: cvCvtColor(defaultBGR, imageRGB, CV_BGR2RGB).
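With the modern Python bindings, the same conversion is a single call; a trivial sketch assuming a hypothetical file:

import cv2

img_bgr = cv2.imread("photo.png")  # OpenCV loads images as BGR by default
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)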
As rcv said, there is no method to programmatically detect the color space by inspecting the three color channels, unless you have a priori knowledge of the image content (e.g., there is a marker in the image whose color is known). If you will be accepting images from unknown sources, you must allow the user to specify the color space of their image. A good default would be to assume RGB.
If you modify any of the pixel colors before display, and you are using a non-OpenCV viewer, you should probably use cvCvtColor(src,dst,CV_BGR2RGB) after you have finished running all of your color filters. If you are using OpenCV for the viewer or will be saving the images out to file, you should make sure they are in BGR color space.
The IplImage struct has a field named colorModel consisting of 4 chars. Unfortunately, OpenCV ignores this field. But you can use this field to keep track of different color models.
I basically split the channels and display each one to figure out the color space of the image I'm using. It may not be the best way, but it works for me.
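That approach is easy to try; a sketch assuming a hypothetical input image:

import cv2

img = cv2.imread("unknown.png")
c0, c1, c2 = cv2.split(img)

# In BGR order the first channel should look brightest wherever the
# image is blue; in RGB it would be brightest wherever the image is red.
cv2.imshow("channel 0", c0)
cv2.imshow("channel 1", c1)
cv2.imshow("channel 2", c2)
cv2.waitKey(0)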
For a detailed explanation, you can refer to the link below.
https://dryrungarage.wordpress.com/2018/03/11/image-processing-basics/
