I was looking for an algorithm to calculate the perceived brightness of colors (I was using the naïve simple mean). I found interesting answers here, but I ran into a problem: I'm dealing with RGBA colors. When A = 255, an RGBA color is equivalent to RGB and that thread works perfectly. However, consider RGBA(255, 255, 255, 0): those formulas report maximum brightness, yet the color is fully transparent, so there is no color there at all.
I would be thankful if anyone could point me to any theory I could rely on to do this.
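For illustration, here is a rough sketch of the direction I'm considering: composite the RGBA color over a known background first, then apply a brightness formula to the opaque result. The black background and the Rec. 601 luma weights are just one arbitrary choice, not a recommendation.

def perceived_brightness(r, g, b, a, bg=(0, 0, 0)):
    alpha = a / 255.0
    # Standard "over" compositing against the chosen background.
    r = r * alpha + bg[0] * (1 - alpha)
    g = g * alpha + bg[1] * (1 - alpha)
    b = b * alpha + bg[2] * (1 - alpha)
    # Rec. 601 luma as one possible perceived-brightness formula.
    return 0.299 * r + 0.587 * g + 0.114 * b

print(perceived_brightness(255, 255, 255, 0))    # 0.0, not maximum
print(perceived_brightness(255, 255, 255, 255))  # 255.0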
I am writing a ray tracer. So far, I have diffuse and specular lighting, and I am planning to implement reflection and refraction, too.
So far I have used white lights, where I calculated the surface color as surface_color * light_intensity, divided by the appropriate distance^2 value, since I am using point light sources. For specular reflection, it's light_color * light_intensity. AFAIK, specular reflection doesn't change the light's color, so this should work with differently colored light sources, too.
How would I calculate the color reflected from a diffuse surface when the light source is not white? For example, a (0.7, 0.2, 0) light hits a (0.5, 0.5, 0.5) surface. Also, does distance factor in differently in this case?
Also, how would I add light contributions at a single point from light sources of different colors? For example, a (1, 0.5, 1) surface is lit by a (0.5, 0.5, 1) light and a (1, 0.7, 0.2) light. Do I simply calculate both contributions (distances included) and add them together?
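For concreteness, here is a minimal sketch of the componentwise arithmetic I have in mind (the function names and the inverse-square attenuation are my own assumptions):

def diffuse(surface, light_color, intensity, distance):
    # Each channel of the light filters the matching channel of the
    # surface; attenuate by 1 / distance^2 for a point light.
    scale = intensity / distance ** 2
    return tuple(s * l * scale for s, l in zip(surface, light_color))

# Two lights on one surface point: compute each contribution
# (attenuation included) and add them channel by channel.
c1 = diffuse((1, 0.5, 1), (0.5, 0.5, 1), 1.0, 2.0)
c2 = diffuse((1, 0.5, 1), (1, 0.7, 0.2), 1.0, 3.0)
total = tuple(min(1.0, a + b) for a, b in zip(c1, c2))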
I've found that RGB is a poor color space to do lighting calculations in because you have to consider a bunch of special cases to get anything that looks realistic or behaves the way you would expect it to.
With that said, it may be conceptually easier to do your lighting calculations in HSL rather than RGB. Depending on the language and toolkit you're using, a conversion should be part of the standard library or available in a third-party toolkit.
A more physically accurate alternative would be to implement spectral rendering, where instead of your tracing functions returning RGB values, they return a sampled spectral power distribution. SPDs are more accurate and easier to work with than keeping track of a whole bunch of RGB blending special cases, at the cost of a slight but noticeable performance hit (especially if left unoptimized). Specular highlights and colored lights are a natural consequence of this model and don't require any special handling in the general case.
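As a rough illustration of the idea (the sample count, wavelengths, and spectra here are arbitrary placeholders):

import numpy as np

# Represent light and reflectance as spectra sampled at N wavelengths
# instead of three RGB primaries; reflection off a diffuse surface is
# then a plain elementwise product, with no RGB special cases.
wavelengths = np.linspace(380, 730, 16)   # nm, arbitrary sampling
light_spd = np.ones(16)                   # flat "white" illuminant
surface_reflectance = np.full(16, 0.5)    # 50% reflective at every band

reflected_spd = light_spd * surface_reflectance
# Only at the very end is the SPD converted to RGB for display, via the
# CIE color-matching functions (omitted here).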
Both lighten and tint seem to make a color lighter (closer to white). Why does LESS define both?
From the LESS documentation:
lighten(#color, 10%); // return a color 10% points *lighter*
tint(#color, 10%); // return a color mixed 10% with white
How one site defines tint (note the use of the word “lighter”):
If you tinted a color, you've been adding white to the original color.
A tint is lighter than the original color.
From a thread asking for a tint function comes this comment:
Tint/shade is not the same thing as lighten/darken. Tint and shade are effectively mixing with white and black respectively, whereas lighten/darken are manipulating the luminance channel independently of hue and saturation. The former can produce hue shifts, whereas the latter does not. That's not to say it's not useful, just that it's not the same thing. Mathematically it's that linear changes in RGB space do not necessarily correspond with linear changes in HSL space, though in practice they will produce fairly similar results.
There is a slight difference in the math behind the two.
Both functions produce a 'lighter' color, but they use different methods to do so.
Take a look at the source to see how they work:
tint: function (color, amount) {
    // Mix the color with pure white by the given amount.
    return this.mix(this.rgb(255, 255, 255), color, amount);
},
lighten: function (color, amount) {
    // Convert to HSL, raise the lightness channel, and clamp it to [0, 1].
    var hsl = color.toHSL();
    hsl.l += amount.value / 100;
    hsl.l = clamp(hsl.l);
    return hsla(hsl);
},
So tint mixes in white (as stated by the documentation), while lighten increases the lightness channel in the HSL color model.
Here’s a demonstration of both functions.
It seems lighten and darken reach white and black, respectively, much faster than tint and shade.
To my untrained eye, it also appears that lighten and darken can alter the hue, whereas tint and shade do not.
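For anyone who wants to reproduce the comparison outside of LESS, here is a sketch of both operations in Python (colorsys is in the standard library; the example color is arbitrary):

import colorsys

def tint(rgb, amount):
    # Mix with white: move each channel a fraction of the way to 255.
    return tuple(round(c + (255 - c) * amount) for c in rgb)

def lighten(rgb, amount):
    # Raise only the L channel in HSL; hue and saturation are untouched.
    h, l, s = colorsys.rgb_to_hls(*(c / 255 for c in rgb))
    l = min(1.0, l + amount)
    return tuple(round(c * 255) for c in colorsys.hls_to_rgb(h, l, s))

print(tint((34, 139, 34), 0.10))     # 10% of the way toward white
print(lighten((34, 139, 34), 0.10))  # +10 percentage points of lightness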
I've heard that if you need to do color segmentation in your software (create a binary image from a color image by setting pixels to 1 if they meet certain threshold rules, like R < 100, G > 100, 10 < B < 123), it is better to first convert your image to HSV. Is this really true? And why?
The big reason is that HSV separates color information (chroma) from intensity or lighting (luma). Because value is separated out, you can construct a histogram or thresholding rules using only saturation and hue. In theory this will work regardless of lighting changes in the value channel; in practice it is simply a nice improvement. Even singling out only the hue still gives you a very meaningful representation of the base color that will likely work much better than RGB. The end result is more robust color thresholding over simpler parameters.
Hue is a circular representation of color, so 0 and 360 are the same hue, which gives you more flexibility with the buckets you use in a histogram. Geometrically, you can picture the HSV color space as a cone or cylinder with H as the angle, S as the radius, and V as the height. See the HSV Wikipedia page.
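As a concrete sketch with OpenCV's Python bindings (the file name and threshold values are made up; note that OpenCV's hue channel runs 0-179):

import cv2
import numpy as np

img = cv2.imread("scene.png")               # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Threshold mainly on hue and saturation, leaving value wide open so
# the rule tolerates changes in lighting.
lower = np.array([100, 80, 0])              # illustrative blue-ish range
upper = np.array([130, 255, 255])
mask = cv2.inRange(hsv, lower, upper)       # 255 where in range, else 0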
I have a car detection project in OpenCV 2.3.1 and Visual C++. In the foreground segmentation there are reflections due to illumination, and these reflections become part of the foreground after the background has been removed.
I need suggestions or ideas on how to remove this noise, since it causes some foreground objects to be connected into a single object, as can be seen when using the findContours and drawContours functions. See the image parts highlighted in red in the attached image. I think removing it will simplify the blob detection stage.
*Note: I am not allowed to use the built-in cvBlobLib in OpenCV.
The issue here is that part of a glare can belong either to the background or to the corresponding car.
Here is what I would do.
I believe you would not have a big problem identifying the glare regions by binarizing and thresholding, or in a similar way.
Once you have identified all the glare pixels, I would replace each of them with the nearest non-glare pixel in the same row of the image. That way, a glare is filled in with car and background pixels, and you should then be able to detect the cars without much trouble.
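A sketch of that row-wise fill in Python (function and array names are mine; it assumes a boolean glare mask has already been computed):

import numpy as np

def fill_glare_rows(img, glare_mask):
    # img: H x W x 3 uint8 image; glare_mask: H x W bool (True = glare).
    # Replace each glare pixel with the nearest non-glare pixel in the
    # same row; assumes every row has at least one non-glare pixel.
    out = img.copy()
    for y in range(glare_mask.shape[0]):
        good = np.flatnonzero(~glare_mask[y])   # non-glare column indices
        if good.size == 0:
            continue
        for x in np.flatnonzero(glare_mask[y]):
            out[y, x] = img[y, good[np.abs(good - x).argmin()]]
    return out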
Maybe try converting the image to HSV and then filtering out pixels with high V (brightness) values:
// Convert the input to HSV, then threshold on the V (brightness) channel;
// H and S are left unconstrained so only brightness is tested.
IplImage* imgHSV = cvCreateImage(cvGetSize(imgInput), 8, 3);
IplImage* imgThreshold = cvCreateImage(cvGetSize(imgHSV), 8, 1);
cvCvtColor(imgInput, imgHSV, CV_BGR2HSV);
cvInRangeS(imgHSV, cvScalar(0, 0, 90, 0), cvScalar(180, 255, 255, 0), imgThreshold);
(Adjust the scalars as needed to remove the glare.)
How can I convert a grayscale value (0-255) to an RGB value/representation?
It is for using in an SVG image, which doesn't seem to come with a grayscale support, only RGB...
Note: this is not RGB -> grayscale, which is already answered in another question (e.g. Converting RGB to grayscale/intensity).
The quick and dirty approach is to repeat the grayscale intensity for each component of RGB. So, if you have grayscale 120, it translates to RGB (120, 120, 120).
This is quick and dirty because the effective luminance you get depends on the actual luminance of the R, G and B subpixels of the device that you're using.
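In code, that is just a repeat (the helper names are illustrative; the second one targets the SVG use case from the question):

def gray_to_rgb(gray):
    # Repeat the gray intensity in each channel: 120 -> (120, 120, 120).
    return (gray, gray, gray)

def gray_to_svg_fill(gray):
    # SVG has no grayscale fill, but rgb() with equal channels works.
    return f"rgb({gray},{gray},{gray})"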
If you have the greyscale value in the range 0..255 and want to produce a new value in the form 0x00RRGGBB, then a quick way to do this is:
int rgb = grey * 0x00010101;  // replicates the grey byte into the R, G and B positions
or equivalent in your chosen language.
Conversion of a grayscale to RGB is simple. Simply use R = G = B = gray value. The basic idea is that color (as viewed on a monitor in terms of RGB) is an additive system.
http://en.wikipedia.org/wiki/Additive_color
Thus adding red to green yields yellow. Add some blue to that mix in equal amounts and you get a neutral color. Full-on [red, green, blue] = [255, 255, 255] yields white; [0, 0, 0] yields monitor black. Intermediate values, where R, G, and B are all equal, yield nominally neutral grays of the given level.
A minor problem: depending on how you view the color, it may not be perfectly neutral. This will depend on how your monitor (or printer) is calibrated. There are interesting depths of color science we could go into from this point, but I'll stop here.
Greyscale means that all channels have the same intensity. Set all channels (in RGB) equal to the grey value and you will have an RGB black-and-white image.
Wouldn't setting R, G, and B to the same value (the greyscale value) for each pixel get you the correct shade of grey?
You may also take a look at my solution: Faster assembly optimized way to convert RGB8 image to RGB32 image. The grey channel is simply repeated in all the other channels.
The purpose was to find the fastest possible solution for the conversion using x86/SSE.
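Not the SSE version, but the same channel-repeat idea sketched in plain numpy (the layout and names are assumptions):

import numpy as np

def gray8_to_rgb32(gray):
    # gray: H x W uint8. Build H x W x 4 (R, G, B, padding) by repeating
    # the gray channel, mirroring the RGB8 -> RGB32 expansion.
    h, w = gray.shape
    out = np.empty((h, w, 4), dtype=np.uint8)
    out[..., 0] = out[..., 1] = out[..., 2] = gray
    out[..., 3] = 0
    return out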