opencv2: Circle detection not detecting the obvious ones - python-3.x

Problem
I'm trying to use opencv2 to detect PlayStation Move Motion Controllers in still images. To increase the contrast between the orbs and the background, I modify the input image by rescaling each channel's brightness from the range between the channel's mean and 96 above it to the full range, and when converting to grayscale I take the maximum of the three channels instead of the default weighted transform, since some orbs are saturated but not "bright".
However, my best attempts at tuning the parameters do not work well: the detector finds circles that aren't there and ranks them above the obvious ones.
What can I do to improve the accuracy of the detection? What other improvements or algorithms do you think I could use?
Samples
In order of best to worst:
2 Wands, 1 Wand detected (showing all 2 detected circles)
2 Wands, 1 Wand detected with many nonexistent circles (showing top 4 circles)
1 Wand (against a dark background), 6 total circles, the lowest-ranked of which is the correct one (showing all 6 circles)
1 Wand (against a dark background), 44 total circles detected, none of which are that Wand (showing all 44 circles)
I am using this function call:
circles = cv2.HoughCircles(img_gray, cv2.HOUGH_GRADIENT,
                           dp=1, minDist=24, param1=90, param2=25,
                           minRadius=2, maxRadius=48)
All images are resized and cropped to 640x480 (the resolution of the PS3 Eye). No blur is performed.
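For reference, a rough sketch of the preprocessing described above, which produces the img_gray that is fed into the HoughCircles call; the exact rescaling arithmetic is an assumption based on the description:

import cv2
import numpy as np

def preprocess(img_bgr):
    # Stretch each channel so that [channel mean, channel mean + 96] maps onto [0, 255].
    out = np.zeros_like(img_bgr, dtype=np.float32)
    for c in range(3):
        mean = img_bgr[..., c].mean()
        out[..., c] = (img_bgr[..., c].astype(np.float32) - mean) * (255.0 / 96.0)
    out = np.clip(out, 0, 255).astype(np.uint8)
    # "Grayscale" as the per-pixel maximum over the channels instead of a weighted sum.
    return out.max(axis=2)

img = cv2.imread("wands.png")   # hypothetical file name
img_gray = preprocess(img)      # this is what goes into cv2.HoughCircles above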

I think Hough circles are the wrong approach for you, as you are not really looking for circles; you are looking for circular areas with strong intensity. Use e.g. blob detection instead; here is a guide:
https://www.learnopencv.com/blob-detection-using-opencv-python-c/
For the blob detection, you need to set the parameters so that it picks out a proper high-intensity circular area.
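A minimal sketch of that approach with OpenCV's SimpleBlobDetector; all thresholds and filter values below are placeholders that would need tuning, and img / img_gray are assumed to come from the question's preprocessing:

import cv2

params = cv2.SimpleBlobDetector_Params()
# Look for bright blobs; the detector defaults to dark blobs on a light background.
params.filterByColor = True
params.blobColor = 255
# Restrict size and shape to roughly circular, orb-sized regions (placeholder values).
params.filterByArea = True
params.minArea = 20
params.maxArea = 8000
params.filterByCircularity = True
params.minCircularity = 0.6
params.filterByConvexity = True
params.minConvexity = 0.8

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img_gray)
out = cv2.drawKeypoints(img, keypoints, None, (0, 0, 255),
                        cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)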

As the other user said, Hough circles aren't the best approach here, because Hough circles look for perfect circles only, whereas your target is "circular" but not a circle (due to motion blur, light bleed/reflection, noise, etc.).
I suggest converting the image to HSV and then filtering by hue/color and intensity to get a binary mask, instead of using grayscale directly; that will help remove background and noise and limit the search area.
Then, using findContours() (faster than blob detection), check for contours of high circularity within the expected size/area range, and maybe even check solidity.
# contour comes from cv2.findContours on the binary mask
area = cv2.contourArea(contour)
perimeter = cv2.arcLength(contour, True)
circularity = 4*np.pi*area / (perimeter**2)   # 1.0 for a perfect circle
solidity = area / cv2.contourArea(cv2.convexHull(contour))
Your biggest problem will be the orb contour merging with the background due to low contrast, so maybe some adaptive thresholding could help. A rough sketch of the whole pipeline is below.
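A sketch of that pipeline, assuming OpenCV 4; the HSV bounds, area cut-off, and circularity/solidity thresholds are placeholders for whatever range your orbs actually occupy:

import cv2
import numpy as np

img = cv2.imread("wands.png")   # hypothetical file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Keep only bright, saturated pixels in the expected hue range (placeholder bounds).
mask = cv2.inRange(hsv, (90, 60, 120), (130, 255, 255))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = []
for contour in contours:
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    if area < 20 or perimeter == 0:
        continue
    circularity = 4 * np.pi * area / (perimeter ** 2)
    solidity = area / cv2.contourArea(cv2.convexHull(contour))
    if circularity > 0.6 and solidity > 0.8:
        (x, y), radius = cv2.minEnclosingCircle(contour)
        candidates.append((x, y, radius))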

Related

Direct3D with sRGB - Gamma Corrected Colors

I'm coming from a Direct3D 9 background, and recently switched my custom game engine over to Direct3D12. After some research, it looked like using one of the *_SRGB formats was the way to go, because it corrected the gamma level.
Immediately, I noticed that everything nearly doubled in brightness, which was unexpected. When correcting a curve, I would expect some values to be brighter and some to be darker, but everything just appears brighter. However, I just accepted it and moved on. But now I'm noticing some other strange issues, and I'm not sure what's going on. Maybe someone could help me understand what I'm missing?
When I draw a primitive with a color value in either HLSL or C++, such as color(128,128,128,255) or float4(0.5,0.5,0.5,1.0), the resulting color I see on the screen is actually RGB 188,188,188. Is this to be expected? I'm reading the values of these colors in Adobe Photoshop 2022, which is in sRGB mode. Should the values not match up if both applications are using sRGB?
128 to 188 is really strange, but 0.5 to 0.73 is even stranger. How do I manually construct a color that comes out the way I constructed it? For example, one might use 0.5 to scale by "half brightness", but 0.73 is definitely not half brightness. It's almost white.
If our textures are painted on a PC, such as in Substance Painter or Photoshop, what is the point of converting all of these colors? If the artist can see the same color space that will be used to render, why tell the display to show us something else?
Before I switched to sRGB, I modeled in Blender, and my textures always looked the same between Blender and my game engine. If I start using sRGB, I'm worried that will not be the case. How are artists making that work?
Images that I've seen that were gamma corrected are often brighter and washed out, and images that were not gamma corrected are usually dark and rich. Does gamma correction cause some type of saturation loss in darker colors?
I appreciate any guidance. I've done research on this topic, but most of the information goes on endlessly about linear color space. Linear is nice, because it makes the math easier, but half of the stuff we deal with in a 3D app is non-linear. At this point, I'm not sure it's worth it.
1&2: Gamma correction is designed to convert between light intensity as it exists in the real world, i.e. the number of photons that hit a camera sensor, and how human beings perceive light. So if a light source is emitting 50% fewer photons, we see it as around 74% of the light (individual curves may vary), so 128 should become roughly 188.
3&4: The point of linear color is to allow us to process images in a space where an increase in the number of photons is linearly related to the increase in the intensity values. Then the linear colors are gamma corrected before presenting them to the user. When you work in those programs, you are looking at gamma corrected images.
Basically, people don't look at linear color spaces; they look wrong to us. They only exist to allow the computer to do some processing. If you have shaders that work in linear color space, saving your images in a linear format can have performance benefits, since the shaders then don't have to remove the gamma, do the processing, and reapply the gamma.
The problem may be that you are gamma correcting images that are already gamma corrected. If the images look right to you, they are probably already gamma corrected; if they look dark, with the lighter areas seemingly emphasized, they are probably linear. If you are adding colors/images that look right to you before gamma is applied, you will have to put those colors/images through inverse gamma correction.
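To make the numbers concrete, here is a sketch of the standard sRGB encode/decode curves (written in Python just to show the arithmetic); plugging in 0.5 reproduces the roughly 0.73 / 188 value from the question:

def srgb_encode(linear):
    # Linear light -> sRGB-encoded value, both in [0, 1].
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded):
    # sRGB-encoded value -> linear light, both in [0, 1].
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

print(round(srgb_encode(0.5), 3))            # ~0.735, i.e. about 188 out of 255
print(round(255 * srgb_decode(127 / 255)))   # ~54: the linear value behind an sRGB 127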
How Applications Display/Convert Color Spaces: (Edit)
Photoshop interprets what the numbers in an image mean through the currently applied color space. It is possible both to "assign" a color space, which changes how Photoshop interprets the numbers, and to "convert" to a color space, which changes the numbers so that they look the same (or as close as possible) when interpreted through the new color space.
This first image is in the sRGB color space. I've painted a gray dot with the values of (127,127,127).
In this second image I have converted the image to a linear color space. It looks almost the same, because Photoshop always applies gamma correction so that it looks right to you, but the first dot now has the value (54,54,54). I've added a second dot with the values (127,127,127) in this color space.
In this third image, I have assigned the sRGB color space. Now Photoshop thinks the numbers are in the sRGB color space, so it thinks the image already has gamma correction, and is showing us something like the way linear color space looks.
For the final image, I did everything in the opposite direction, drawing a dot with a value of (127,127,127), then converting back. The last dot now has a value of (187,187,187).

Godot pixelated image

In a Godot 2D project, I created a 640x640 PNG using GIMP.
I imported those PNGs into Godot as Texture2D.
After setting the scale to 0.1, the images are resized to 64 x 64 in the Godot scene.
When I instance this image in my main scene, I get this disgusting pixelated result.
Edit: Don't be confused by the rotated red wings, I did that at runtime. It's not part of the question.
My window size is 1270 x 780
Stretch Mode is viewport.
I tried changing import settings etc.
I wonder, is it not possible to have a crisp image at these sizes?
Disclaimer: I haven’t bothered to fire up Godot to reproduce your problem.
I suspect you are shooting yourself in the foot by that scale 0.1 bit. Remember, every time you resample (scale) an image there is loss.
There are three things to do:
Prepare your image(s) to have the same size (resolution) as your intended target display. In other words, if your image is going to display at 64×64 pixels, then your source image should be 64×64 pixels.
When importing images, make sure that Filter is set to ☑ On. If your image contains alpha, you may also wish to check the Fix Alpha Border flag.
Perform as few resampling operations as possible to display the image. In your particular case, you are resampling it to a tenth of its size before again resampling it up to the displayed size. Don’t do that. (This is probably the main cause of your problem.) Just make your sprite have the natural size of the image, and scale the sprite only if necessary.
It is also possible that you are applying some other filter or that your renderer has a bad resampler attached to it, but your problem description does not lead me to think either of these are likely.
A warning ahead: I'm not into godot at all, but I have some experience with image resizing.
Your problem looks entirely related to plain image resizing. If you scale down an image in one go by any factor smaller than 0.5 (i.e. make it smaller than half its original size), you will face this effect of an ugly small image.
To get a nice and smooth result, you have to resize it multiple times:
Always reduce the image size by half, until the next necessary step is bigger than scaling by 0.5.
For your case (640x640 to 64x64) it would need these steps:
640 * 0.5 => 320
320 * 0.5 => 160
160 * 0.5 => 80
80 * 0.8 => 64
You can either start with a much smaller image (if you never need it at that big size in your code), add multiple scaled resolutions to your resources, or precalculate the halved steps before you start using them and then just pick the right image to scale down by the final factor.
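A sketch of that stepwise reduction in Python with OpenCV (Godot itself is not involved here; this just illustrates the resizing idea, and the INTER_AREA filter is one reasonable choice):

import cv2

def downscale_stepwise(img, target_w, target_h):
    # Halve the image while the next halving still stays at or above the target size,
    # then do one final resize for the remaining factor (between 0.5 and 1.0).
    h, w = img.shape[:2]
    while w // 2 >= target_w and h // 2 >= target_h:
        w, h = w // 2, h // 2
        img = cv2.resize(img, (w, h), interpolation=cv2.INTER_AREA)
    return cv2.resize(img, (target_w, target_h), interpolation=cv2.INTER_AREA)

# 640x640 -> 320 -> 160 -> 80 -> 64, matching the steps listed above.
small = downscale_stepwise(cv2.imread("wings_640.png"), 64, 64)   # hypothetical file name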
I'm not very experienced with Godot and don't have it open right now, but I could imagine that you shouldn't scale the image to 0.1, because then there will be a loss in image quality.

Detecting random patterns in OpenCV

I want to apply unsupervised learning to images through OpenCV and Python to detect and categorise some special patterns in an image and form different clusters.
For example, in this image, how can I detect the yellow pattern?
Very interesting problem. If circle detection is showing good matches, you can consider the difference of color histograms in patches inside and outside the circles. Also worth investigating is the difference in edge histograms in small windows on the image.
To check the inside of the circles, you can take a square that is about 1.4 times as wide as the circle radius, with the same center as the circle's center. For the outside, take a few squares of about this size that are located further than the radius away in the x and y directions. I think approximate values like these should do.
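A minimal sketch of that patch-histogram comparison in Python with OpenCV; the patch sizes, offsets, and bin counts are illustrative assumptions:

import cv2
import numpy as np

def patch_hist(img, cx, cy, half):
    # Color histogram of a square patch centred at (cx, cy), 8 bins per channel.
    patch = img[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
    hist = cv2.calcHist([patch], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    return cv2.normalize(hist, hist).flatten()

def inside_outside_difference(img, cx, cy, r):
    cx, cy, r = int(cx), int(cy), int(r)
    half = int(0.7 * r)                      # inner square roughly 1.4 * r wide, as suggested
    inner = patch_hist(img, cx, cy, half)
    offsets = [(2 * r, 0), (-2 * r, 0), (0, 2 * r), (0, -2 * r)]   # patches outside the circle
    scores = [cv2.compareHist(inner, patch_hist(img, cx + dx, cy + dy, half),
                              cv2.HISTCMP_BHATTACHARYYA) for dx, dy in offsets]
    return np.mean(scores)                   # larger means the inside differs more from its surroundings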

Detecting center and area of shapes in an image

I am working with the GD library, and I'm looking for a way to detect the pixel nearest to the center of each shape, as well as the total area used by each shape, in a monochrome black-and-white image.
I'm having difficulty coming up with an efficient algorithm to do this. If you have done something similar to this in the past, I'd be grateful for any solution that would help.
Check out the binary image library
Essentially, Otsu threshold to separate out foreground from background, then label connected components. That particular image looks very clean but you might need morph ops to clean it up a bit and get rid of small holes and other artifacts.
Then you have the area trivially (count the pixels in each component) or almost as trivially (use the weighted area function that penalises edge pixels). The centre is just the mean of the pixel coordinates.
http://malcolmmclean.github.io/binaryimagelibrary/
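The linked library is in C, but the same steps are easy to sketch with OpenCV in Python (a swapped-in tool, not the library referenced above):

import cv2

gray = cv2.imread("dots.png", cv2.IMREAD_GRAYSCALE)          # hypothetical file name

# Otsu threshold; use THRESH_BINARY_INV instead if the shapes are darker than the background.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Optional clean-up: close small holes before labelling.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

# Label connected components; stats hold the pixel-count area, centroids the mean positions.
count, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
for i in range(1, count):                                     # label 0 is the background
    area = stats[i, cv2.CC_STAT_AREA]
    cx, cy = centroids[i]
    print(f"component {i}: area={area}, centre=({cx:.1f}, {cy:.1f})")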
@MalcolmMcLean is right, but there are remaining difficulties (if you are after maximum accuracy).
If you threshold with Otsu, there are a few pairs of "kissing" dots which will form a single blob using connected component analysis.
In addition, Otsu thresholding will discard some of the partially filled edge pixels, so the weighted averages will be inaccurate. A cure would be to increase the threshold (up to 254 is possible), but that worsens the problem of the kissing dots.
A workaround is to keep a low threshold and dilate the blobs individually to obtain suitable masks that cover all edge pixels. Even so, slight inaccuracies will result in the vicinity of the kissing points.
Blob splitting by the watershed transform is also possible, but more care is required to handle the common pixels. I doubt that a perfect solution is possible.
An alternative is the use of subpixel edge detection and least-squares circle fitting (after blob detection with a very low threshold to separate the dots). By avoiding the edge pixels common to two circles, you can probably achieve excellent results.
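For the circle-fitting part, a least-squares (Kåsa-style) fit to a set of edge points can be sketched like this; the subpixel edge extraction itself is not shown:

import numpy as np

def fit_circle(xs, ys):
    # Algebraic least-squares fit: write x^2 + y^2 = 2*a*x + 2*b*y + c and solve for
    # the centre (a, b); the radius is then sqrt(c + a^2 + b^2).
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs ** 2 + ys ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

# Example: noisy points on a circle of radius 5 centred at (10, 20).
t = np.linspace(0, 2 * np.pi, 50)
xs = 10 + 5 * np.cos(t) + np.random.normal(0, 0.05, t.size)
ys = 20 + 5 * np.sin(t) + np.random.normal(0, 0.05, t.size)
print(fit_circle(xs, ys))      # approximately (10.0, 20.0, 5.0)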

A better Greyscale algorithm

I'm trying to create a spectral image with a constant grey-scale value for every row. I've written some fantastically slow code that basically tries 1000 different variations between black and white for a given hue and finds the one whose grey-scale value most closely approximates the target value, resulting in the following image:
On my laptop screen (HP) there is a very noticeable 'dip' near the blue peak, where blue pixels near the bottom of the image appear much brighter than the neighbouring purple and cyan pixels. On my second screen (Acer, which has far superior colour display) the dip is smaller, but still there.
I use the following function to compute the grey-scale approximation of a colour:
Math.Abs(targetGrey - (0.2989 * R + 0.5870 * G + 0.1140 * B))
When I convert the image to grey-scale using Paint.NET, I get a perfect black-to-white gradient, so that part of the code at least works.
So, question: Is this purely an artefact of the display qualities of my screens? Or can the above mentioned grey-scale algorithm be improved upon to give a visually more consistent result?
EDIT: The problem seems to be mostly monitor calibration. Not, I repeat not, a problem with the code.
I'm wondering if it's more to do with the way our eyes interpret the colors, rather than screen artifacts.
That said, I am using a very high-quality screen (Dell Ultrasharp, IPS) that has incredible color reproduction, and I'm not sure what you mean by a "dip" in the blue peak. So either I'm just not noticing it, or my screen doesn't show the same picture because it is more color-accurate.
The output looks correct given the greyscale conversion you have used (which I believe is the standard one for sRGB colour spaces).
However - there are lots of tradeoffs in colour models and one of these is that you can get results which aren't visually quite what you want. In your case, the fact that there is a very low blue weight means that a greater amount of blue is needed to get any given greyscale value, hence the blue seems to start lower, at least in terms of how the human eye perceives it.
If your objective is to get a visually appealing spectral image, then I'd suggest altering your function to make the R,G,B weights more equal, and see if you like what you get.
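A tiny numeric illustration of that tradeoff (Python, using the same BT.601-style weights as in the question): to reach a given grey value with a single pure channel, blue has to be driven far harder than green, which is why the blue region ends up looking brighter to the eye.

WEIGHTS = {"R": 0.2989, "G": 0.5870, "B": 0.1140}

target_grey = 0.35
for channel, weight in WEIGHTS.items():
    needed = target_grey / weight
    note = " (clips at 1.0, so the target is unreachable with this channel alone)" if needed > 1 else ""
    print(f"pure {channel}: intensity {needed:.2f} needed for grey {target_grey}{note}")
# pure R: 1.17 (clips), pure G: 0.60, pure B: 3.07 (clips)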
