Can skimage's RAG support grayscale images? For example, the RAG merging example states: "This example constructs a Region Adjacency Graph (RAG) and progressively merges regions that are similar in color." But can we merge regions in terms of pixel intensity for grayscale images?
Yes, RAGs are very flexible. You can see the source code for rag_mean_color here. Unfortunately, three channels are assumed in this line. It would not be hard to change that to vary according to the number of channels in the input image. In the meantime, though, you can convert a grayscale image to a "color" image (the grayscale channel repeated three times) using skimage.color.gray2rgb, and then use that example directly.
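For example, a minimal sketch; the SLIC parameters are illustrative, and depending on your scikit-image version the RAG functions live in skimage.future.graph or skimage.graph:

from skimage import color, data, segmentation
from skimage.future import graph  # skimage.graph in newer releases

gray = data.camera()  # stand-in grayscale image
rgb = color.gray2rgb(gray)  # repeat the single channel three times
labels = segmentation.slic(rgb, compactness=30, n_segments=400)
rag = graph.rag_mean_color(rgb, labels)  # now works on the "color" image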
Note also other RAG examples in the documentation, such as those based on region boundary values.
I am a microbiology student new to computer vision, so any help will be greatly appreciated.
This question involves microscope images that I am trying to analyze. My goal is to count the bacteria in an image, but I first need to pre-process the image to enhance any bacteria that are not fluorescing very brightly. I have thought about using several techniques, like enhancing the contrast or sharpening the image, but they aren't exactly what I need.
I want to reduce the noise (black spaces) to 0 on the RGB scale and enhance the green spaces. I originally started writing a for loop in OpenCV with threshold limits to change each pixel, but I know there is a better way.
Here is an example that I made in Photoshop of the original image vs. what I want:
Original image and enhanced image.
I need to learn to do this in a Python environment so that I can automate the process. As I said, I am new, but I am familiar with Python's OpenCV, mahotas, NumPy, etc., so I am not attached to any particular package. I am also very new to these techniques, so I would appreciate it even if you just point me in the right direction.
Thanks!
You can have a look at histogram equalization. This would emphasize the green and reduce the black range. There is an OpenCV tutorial on it here. Afterwards, you can experiment with different thresholding mechanisms to find the one that best isolates the bacteria.
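A minimal sketch of that approach with OpenCV's Python API; the file name, CLAHE parameters, and the Otsu threshold at the end are illustrative choices of mine, not part of the original answer:

import cv2

img = cv2.imread("bacteria.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
eq = cv2.equalizeHist(img)  # global equalization stretches the intensity range
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
eq_adaptive = clahe.apply(img)  # adaptive equalization handles uneven lighting
ret, mask = cv2.threshold(eq_adaptive, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)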
Use TensorFlow:
Create your own dataset with images of bacteria and their positions stored in accompanying text files (the bigger the dataset, the better).
Create a positive and a negative set of images.
Update the default TensorFlow example with your images.
Make sure you have a bunch of convolution layers.
Train and test.
TensorFlow is well suited to such tasks, and you don't need to worry about different intensity levels.
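For illustration only, here is a minimal Keras-style sketch of such a network; the 64x64 patch size, layer widths, and dataset variable names are all assumptions, not something this answer prescribes:

import tensorflow as tf

# Minimal binary classifier: "bacteria" vs. "background" patches.
# Input shape and layer sizes are illustrative assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# train_patches / train_labels would come from your positive and negative sets:
# model.fit(train_patches, train_labels, epochs=10, validation_split=0.2)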
I initially tried histogram equalization but did not get the desired results, so I used adaptive thresholding with the mean method:
# img must be a single-channel (grayscale) image
th = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 3, 2)
Then I applied the median filter:
median = cv2.medianBlur(th, 5)
Finally, I applied morphological closing with an elliptical kernel:
k1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
# iterations must be passed by keyword; a positional 3 would be taken as the dst argument
closed = cv2.morphologyEx(median, cv2.MORPH_CLOSE, k1, iterations=3)
This page will help you modify this result however you want.
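Since the stated goal was a count, one possible final step (my addition, not part of the original answer) is connected-component labeling on the cleaned-up mask from above:

import cv2

# closed is the binary mask produced by the morphological closing step
n_labels, labels = cv2.connectedComponents(closed)
print("bacteria count (label 0 is the background):", n_labels - 1)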
This example allows the classification of images with scikit-learn:
http://scikit-learn.org/stable/auto_examples/classification/plot_digits_classification.html
However, it is important that all the images have the same size (width and height), as written in the comments.
How can I modify this code to allow classification of images with different sizes?
You will need to define your own feature extraction.
In the example above, every pixel represents a feature. If your images are of different sizes, the most trivial (but certainly not the best) thing you can do is pad all images to the size of the largest image with, for example, white pixels.
Here is an example of how to add borders to an image.
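As a sketch of the padding idea (the target size and helper name are mine; cv2.copyMakeBorder is one of several ways to do it):

import cv2

def pad_to_size(img, target_h, target_w, value=255):
    # Center the image inside a white canvas of the target size.
    # Assumes img is no larger than the target in either dimension.
    h, w = img.shape[:2]
    top = (target_h - h) // 2
    bottom = target_h - h - top
    left = (target_w - w) // 2
    right = target_w - w - left
    return cv2.copyMakeBorder(img, top, bottom, left, right,
                              cv2.BORDER_CONSTANT, value=value)

Every padded image can then be flattened into a fixed-length feature vector, exactly as the digits example does with its 8x8 images.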
I have stitched two images, but in the final image there is a visible seam, and I am trying to use alpha blending to remove it. I know alpha blending is applied using the cvAddWeighted() function, but its parameters are two source images, alpha, beta, gamma, and a destination. I am taking gamma = 0, alpha = 0.6, beta = 0.4. What will be my two input source images and the destination image? The last part of my code is this:
IplImage* WarpImg = cvCreateImage(
    cvSize(T1Img->width * 2, T1Img->height * 2), T1Img->depth, T1Img->nChannels);
cvWarpPerspective(T1Img, WarpImg, &mxH);
cvSetImageROI(WarpImg, cvRect(0, 0, T2Img->width, T2Img->height));
cvCopy(T2Img, WarpImg);
cvResetImageROI(WarpImg);
cvNamedWindow("WarpImg Img",1);
cvShowImage("WarpImg Img", WarpImg);
cvSaveImage("save.jpg",WarpImg);
My final image is:
I have to admit, I don't think alpha blending is your answer. The seam is there due to the difference in lighting/exposure. Alpha blending is a way of essentially making one image visible through another by weighted-averaging the two images' colors together. Your right and left images are backed by black, so if you simply alpha blend, you are essentially going to weight your images against a black background. The resulting effect will simply be a darkening of both images.
Two other potential methods: one is to look at the average color of both images at the seam and adjust one up and the other down, each by 50% of the difference in brightness, so that the overall brightness jump on either side is only half the difference.
The other is a more complex histogram technique, where you widen or shrink the histogram of one side's image to match the other's, align them, and re-assign each color (in this case, grayscale value) via the new histograms.
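As a rough sketch of that histogram idea, scikit-image's match_histograms can remap one side's intensities to the other's (the function choice and the left_img/right_img names are my own illustration, not the answerer's):

from skimage import exposure

# Match the left image's intensity distribution to the right image's
# before stitching, so the seam is less visible.
left_matched = exposure.match_histograms(left_img, right_img)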
Pyramid/multiband blending should do a good enough job for your scenario. Try Enblend: http://enblend.sourceforge.net
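If you would rather experiment in code than use Enblend, here is a hedged sketch of Laplacian-pyramid (multiband) blending with OpenCV, assuming two pre-aligned single-channel images and a float mask in [0, 1] that is 1 where image a should dominate:

import cv2
import numpy as np

def multiband_blend(a, b, mask, levels=5):
    # Gaussian pyramids of both images and of the blend mask
    ga = [a.astype(np.float32)]
    gb = [b.astype(np.float32)]
    gm = [mask.astype(np.float32)]
    for _ in range(levels):
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))
    # Laplacian pyramids: band-pass detail at each scale, plus the base level
    la = [ga[i] - cv2.pyrUp(ga[i + 1], dstsize=ga[i].shape[1::-1]) for i in range(levels)]
    lb = [gb[i] - cv2.pyrUp(gb[i + 1], dstsize=gb[i].shape[1::-1]) for i in range(levels)]
    la.append(ga[-1])
    lb.append(gb[-1])
    # Blend each band with the matching mask resolution, then collapse the pyramid
    blended = la[-1] * gm[-1] + lb[-1] * (1 - gm[-1])
    for i in range(levels - 1, -1, -1):
        blended = cv2.pyrUp(blended, dstsize=la[i].shape[1::-1])
        blended += la[i] * gm[i] + lb[i] * (1 - gm[i])
    return np.clip(blended, 0, 255).astype(np.uint8)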
I need to write some code (for a web.py webapp with a straight HTML/JS client) that will generate a visual representation of a set of point values. Each point has an X and Y coordinate, and the value is an integer. If I can use SVG to do this, then I can scale the image client-side with no extra code. Can I actually do this? I am concerned about a few things:
The points don't necessarily have any relation to each other. They aren't necessarily in a grid, nor can we say how many points are nearby, etc.
Gradients are primarily one-directional, and multiple gradients on the same shape seem to be a foreign concept.
Fills require an existing image, at which point I'd be better off generating the entire image server-side anyway.
Objects always have a layering, even if it isn't specified, which can change how the image is rendered.
If it helps, consider a situation where we have a point surrounded by five others, one of which is a bit closer than the rest (exact distances and sizes can be adjusted). All six points have different colors (red, green, blue, cyan, magenta, and yellow, with red in the center and yellow slightly closer), and the outer five points are arranged roughly in a pentagon. Note that this is not the only possible situation, just one that is theoretically possible.
Can I do this with SVG, or should I render an image server-side?
EDIT: The main difficulty isn't in drawing the points; it is in filling the space between them so that there is no whitespace and the color transitions aren't harsh or unpredictable given the data.
I don't entirely understand the issues you are having with using SVG. I am currently using the setup you describe to render X-Y scatter plots and Gaussian curves, and I have found that it works great.
Regarding the last point about object layering: you have to be particularly careful when layering objects of different colors with less than 100% opacity. The way the colors "add" depends on the order in which you add the objects to your SVG drawing.
Thankfully, you can use filters to overlay the colors without blending them. Specifically, I am using the feComposite filter element. There is a good example of its usage here:
http://www.w3.org/TR/SVG/filters.html#feCompositeElement
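As one way around the one-direction gradient limitation from the question, each point can get its own radial gradient that fades to transparent, so overlapping points blend smoothly. A minimal server-side Python sketch; the point data, radius, and canvas size are made up for illustration:

points = [(100, 100, "red"), (180, 60, "yellow"), (40, 40, "green")]  # (x, y, color)
defs, shapes = [], []
for i, (x, y, c) in enumerate(points):
    defs.append('<radialGradient id="g%d">'
                '<stop offset="0%%" stop-color="%s" stop-opacity="1"/>'
                '<stop offset="100%%" stop-color="%s" stop-opacity="0"/>'
                '</radialGradient>' % (i, c, c))
    shapes.append('<circle cx="%d" cy="%d" r="60" fill="url(#g%d)"/>' % (x, y, i))
svg = ('<svg xmlns="http://www.w3.org/2000/svg" width="240" height="200">'
       '<defs>' + ''.join(defs) + '</defs>' + ''.join(shapes) + '</svg>')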
How can I determine the color space of my image with OpenCV?
I would like to be sure it is RGB before converting it to another color space using the cvCvtColor() function.
Thanks!
Unfortunately, OpenCV doesn't provide any sort of indication as to the color space in the IplImage structure, so if you blindly pick up an IplImage from somewhere, there is just no way to know how it was encoded. Furthermore, no algorithm can definitively tell you if an image should be interpreted as HSV vs. RGB - it's all just a bunch of bytes to the machine (should this be HSV or RGB?). I recommend you wrap your IplImages in another struct (or even a C++ class with templates!) to help you keep track of this information.

If you're really desperate and you're dealing only with a certain type of images (outdoor scenes, offices, faces, etc.), you could try computing some statistics on your images (e.g., build histogram statistics for natural RGB images and some for natural HSV images), and then try to classify your totally unknown image by comparing which color space your image is closer to.
txandi makes an interesting point. OpenCV uses a BGR color space by default. This is similar to the RGB color space, except that the B and R channels are physically swapped in the image. If the physical channel ordering is important to you, you will need to convert your image with this function: cvCvtColor(defaultBGR, imageRGB, CV_BGR2RGB).
As rcv said, there is no method to programmatically detect the color space by inspecting the three color channels, unless you have a priori knowledge of the image content (e.g., there is a marker in the image whose color is known). If you will be accepting images from unknown sources, you must allow the user to specify the color space of their image. A good default would be to assume RGB.
If you modify any of the pixel colors before display and you are using a non-OpenCV viewer, you should probably use cvCvtColor(src, dst, CV_BGR2RGB) after you have finished running all of your color filters. If you are using OpenCV for the viewer, or will be saving the images out to file, you should make sure they are in BGR color space.
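For reference, the modern Python equivalent of that conversion (the file name and the matplotlib viewer are illustrative choices of mine):

import cv2
import matplotlib.pyplot as plt

img_bgr = cv2.imread("photo.jpg")  # OpenCV loads images as BGR
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
plt.imshow(img_rgb)  # matplotlib expects RGB
plt.show()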
The IplImage struct has a field named colorModel consisting of four chars. Unfortunately, OpenCV ignores this field, but you can use it yourself to keep track of different color models.
I basically split the channels and display each one to figure out the color space of the image I'm using. It may not be the best way, but it works for me.
For a detailed explanation, you can refer to the link below:
https://dryrungarage.wordpress.com/2018/03/11/image-processing-basics/
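A minimal sketch of the channel-splitting check described above (the path and window names are illustrative):

import cv2

img = cv2.imread("photo.jpg")
c1, c2, c3 = cv2.split(img)  # for a BGR image: blue, green, red
cv2.imshow("channel 1", c1)
cv2.imshow("channel 2", c2)
cv2.imshow("channel 3", c3)
cv2.waitKey(0)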