This may be a very simple matter, but I'm stuck on it. I have split a binary image (28 x 28) into 4 x 4 samples. Now I want to calculate the pixel density of each sample (I use those density values as features in an OCR application). As I understand it, density is the number of pixels in a particular area, like 7 pixels per square inch. Is it the same here? All of my samples have 4 pixels. Is there a relationship between Moment->m00 and pixel density? Can someone explain this?
A "density" depicts how much of "a thing" corresponds to a "small fraction of space".
In terms of images, that might be i.e. "amount of colour" that a "fragment of image" holds.
For black and white or grayscale images that are held as pixel arrays that could simply mean an average pixel value.
For example, if your image is black and white (that is, pixels have the value 0 or 1): if your sample is a 4x4 square, then its area is 16. In this area you can have from 0 to 16 set pixels, corresponding to densities of 0.0 and 1.0 respectively. Here, 4 black pixels and 12 white ones could indicate a density of 4/16 = 0.25 (or 12/16 = 0.75, depending on which pixels you treat as "empty": black or white?).
For example, if your image is grayscale (that is, pixels have values in the range 0..255 describing how white they are): if your sample is a 4x4 square, then its area is 16. In this area the total can range from 0 to 16 fully-white pixels, corresponding to 0% and 100% respectively. All but four pixels "empty", with those four having values 100, 100, 50, 50, gives you a density of (100+100+50+50)/255/16 ≈ 0.073. Mind that pixels here have min=0 and max=255 values. If your pixels have a different value range, adjust accordingly.
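For a concrete sketch of those two calculations (my own illustration, assuming the image is a NumPy array split into non-overlapping blocks; the function name is hypothetical):

    import numpy as np

    def block_densities(img, block=4, max_val=1):
        # Density of each block: sum of pixel values / (block area * max value).
        # Use max_val=1 for a binary image, max_val=255 for 8-bit grayscale.
        h, w = img.shape
        feats = []
        for y in range(0, h, block):
            for x in range(0, w, block):
                patch = img[y:y + block, x:x + block]
                feats.append(patch.sum() / (block * block * max_val))
        return np.array(feats)  # 49 features for a 28x28 image with 4x4 blocks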
In terms of OpenCV, I'd assume that moment->m00 is the "spatial image moment with m=0, n=0". So, you might want to review, e.g., http://software.intel.com/sites/products/documentation/hpc/ipp/ippi/ippi_ch11/ch11_image_moments.html
Looking at that document and the formulas: since m and n are 0, the x^m * y^n weight is 1 for every pixel, so m00 is simply the sum of all pixel values. For a binary image that is the count of set pixels, which means m00 divided by the sample's area is exactly the density described above.
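A quick way to verify that relationship with OpenCV (the toy 4x4 patch below is my own example):

    import numpy as np
    import cv2

    patch = np.zeros((4, 4), dtype=np.uint8)
    patch[0, 0] = patch[1, 1] = patch[2, 2] = patch[3, 3] = 1  # 4 "on" pixels
    m = cv2.moments(patch, binaryImage=True)
    print(m['m00'] / patch.size)  # m00 = 4 set pixels, so 4 / 16 = 0.25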
Could someone explain Gouraud shading to me? I can go ahead and Google "gouraud shading", but it doesn't make much sense to me. I have 3 vertices with an (x, y) position and an int[r,g,b] color. I want to linearly interpolate (not sure what this means) the colors of the vertices to shade in the triangle. What is the logic for doing so?
You will perform a bilinear interpolation.
Scan the triangle from top to bottom, following the rows of pixels. Every row will intersect the triangle twice, along two distinct edges.
You will first perform two linear interpolations along these edges, computing a mixture of the RGB components at the vertices, weighted by the distances to them (weight Db/(Da+Db) for color a and Da/(Da+Db) for color b, where Da and Db are the distances from the intersection point to vertices a and b).
Then you will scan the pixels between the intersections, performing another linear interpolation between the two colors you just computed.
This way you will fill the triangle with a smooth gradient, in a way that will make it continuous with neighboring triangles, if any.
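As a sketch of the idea (my own illustration: instead of walking the two edges explicitly, it computes barycentric weights per pixel, which yields the same linear color gradient as the edge-then-span interpolation described above):

    def shade_triangle(v, set_pixel):
        # v: three ((x, y), (r, g, b)) vertices; set_pixel(x, y, color) draws.
        (x0, y0), c0 = v[0]
        (x1, y1), c1 = v[1]
        (x2, y2), c2 = v[2]
        area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
        if area == 0:
            return  # degenerate triangle
        min_x, max_x = min(x0, x1, x2), max(x0, x1, x2)
        min_y, max_y = min(y0, y1, y2), max(y0, y1, y2)
        for y in range(min_y, max_y + 1):      # scan rows top to bottom
            for x in range(min_x, max_x + 1):
                # barycentric weights: each vertex's share of the pixel's color
                w0 = ((x1 - x) * (y2 - y) - (x2 - x) * (y1 - y)) / area
                w1 = ((x2 - x) * (y0 - y) - (x0 - x) * (y2 - y)) / area
                w2 = 1 - w0 - w1
                if w0 < 0 or w1 < 0 or w2 < 0:
                    continue  # pixel lies outside the triangle
                color = tuple(int(w0 * a + w1 * b + w2 * c)
                              for a, b, c in zip(c0, c1, c2))
                set_pixel(x, y, color)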
I have a graphics format where each pixel starts from the previous pixel's RGB value (for the first pixel on a line, black is used); then either the red, green, or blue channel can be modified, or the pixel can be set to any gray value (in which case the previous pixel's value isn't used). All of this has been implemented (the easy part).
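To make the format concrete, here is a hypothetical sketch of a greedy per-row encoder (assuming "modified" means a channel may be set to any new value; all names here are mine):

    def encode_row(row):
        # row: list of (r, g, b) target pixels; returns the chosen operations
        # and the colors actually produced by them.
        prev = (0, 0, 0)  # the first pixel on a line starts from black
        ops, out = [], []
        for (r, g, b) in row:
            candidates = [
                (('R', r), (r, prev[1], prev[2])),   # replace the red channel
                (('G', g), (prev[0], g, prev[2])),   # replace the green channel
                (('B', b), (prev[0], prev[1], b)),   # replace the blue channel
            ]
            gray = round((r + g + b) / 3)            # closest flat gray
            candidates.append((('K', gray), (gray, gray, gray)))
            # greedily pick the operation closest to the target (squared error)
            op, best = min(candidates,
                           key=lambda c: sum((a - t) ** 2
                                             for a, t in zip(c[1], (r, g, b))))
            ops.append(op)
            out.append(best)
            prev = best
        return ops, out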
What would be the best way to convert 24 bit images to this format in the highest possible quality?
Any thoughts are appreciated.
I have spent more than a week reading about selective color change in an image. It means selecting a color from a color picker, then selecting a part of the image whose color I want to change, and applying the change from the original color to the color chosen in the picker.
E.g. if I select a blue color in the color picker and also select a red part of the image, I should be able to change red to blue everywhere in the image.
Another example: if I have an image with red apples and oranges, and I select an apple in the image and a blue color in the color picker, then all apples should change color from red to blue.
I have some ideas but of course I need something more concrete on how to do this
Thank you for reading
As a starting point, consider clustering the colors of your image. If you don't know how many clusters you want, you will need a method to decide whether or not to merge two given clusters. For the moment, let us suppose that we know that number. For example, given the following image at left, I mapped its colors to 3 clusters, whose mean colors are shown in the middle, and representing each cluster by its mean color gives the figure at right.
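A minimal sketch of that baseline with OpenCV's k-means on raw pixel values (k=3 as in the example; the function name is mine):

    import numpy as np
    import cv2

    def cluster_colors(image, k=3):
        # Quantize the image so every pixel takes its cluster's mean color.
        pixels = image.reshape(-1, 3).astype(np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
        _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5,
                                        cv2.KMEANS_RANDOM_CENTERS)
        quantized = centers[labels.flatten()].astype(np.uint8)
        return quantized.reshape(image.shape), labels.reshape(image.shape[:2])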
With the output at right, what you now need is a method to replace colors. Suppose the user clicks (a single point) somewhere in your image; then you know the positions in the original image that you will need to modify. For the next image, the user (me) clicked on a point contained in the "orange" cluster, then clicked on some blue hue. From that, you build a mask representing the points in the "orange" cluster and play with it. I used a simple gaussian filter followed by a flat 3x5 dilation. Then you replace the hues in the original image according to the produced mask (after the low-pass filtering, its values are also used as an alpha value for compositing the images).
Not perfect at all, but you could use better clustering than mine and a much less primitive color-replacement method. I intentionally skipped the details about the clustering method, color space, and so on, because I used only basic k-means on RGB without any pre-processing of the input. So you can consider the results above as a baseline for anything else you might do.
Given the image, a selected color, and a target new color - you can't do much that isn't ugly. You also need a range, some amount of variation in color, so you can say one pixel's color is "close enough" while another is clearly "different".
First step of processing: create a mask image, which is grayscale, varying from 0.0 to 1.0 (or from zero to some maximum value we'll treat as 1.0), and the same size as the input image. For each input pixel, test whether its color is sufficiently near the selected color. If it's "the same" or "close enough", put 1.0 in the mask. If it's different, put 0.0. If it's borderline, put an in-between value. Exactly how to do this depends on the details of the image.
This might work best in LAB space, testing for sameness according to the angle of the (A, B) coordinates relative to their origin.
Once you have the mask, put it aside. Now color-transform the whole image. This might be best done in HSV space. Don't touch the V channel. Add a constant to H, modulo 360° (or mod 256, if H is stored as a byte), and multiply S by a constant, choosing both so that the HSV coordinates of the selected color are moved to the HSV coordinates of the target color. Convert the transformed H and S, with the unchanged V, back to RGB.
Finally, use the mask to blend the original image with the color-transformed one. Apply this to each channel - red, green, blue:
output = (1-mask)*original + mask*transformed
If you're doing it all in byte arrays, 0 is 0.0 and 255 is 1.0, and be careful of overflow and signed/unsigned problems.
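A hedged sketch of that whole recipe with OpenCV and NumPy (the soft hue-distance mask and the tolerance value are my assumptions; note that OpenCV stores H in 0..179):

    import numpy as np
    import cv2

    def replace_color(img_bgr, selected_hsv, target_hsv, hue_tol=15):
        hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
        # 1. Soft mask: 1.0 near the selected hue, fading linearly to 0.0.
        dh = np.abs(hsv[..., 0] - selected_hsv[0])
        dh = np.minimum(dh, 180 - dh)  # hue wraps around
        mask = np.clip(1.0 - dh / hue_tol, 0.0, 1.0)
        # 2. Transform the whole image: rotate H, scale S, leave V untouched.
        hsv[..., 0] = (hsv[..., 0] + (target_hsv[0] - selected_hsv[0])) % 180
        if selected_hsv[1] > 0:
            hsv[..., 1] = np.clip(hsv[..., 1] * target_hsv[1] / selected_hsv[1],
                                  0, 255)
        transformed = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
        # 3. Blend per channel: output = (1-mask)*original + mask*transformed.
        m = mask[..., None]
        return ((1 - m) * img_bgr + m * transformed).astype(np.uint8)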
I have a list of several different "random" color values (no fewer than 1 and no more than 8 colors). ("Random" means that there is no telling their mutual "contrast".)
Colors are given as RGB values (possible simplification: as H values in HSL model, or in some other color system of choice — I have some degree of control of how original colors are generated).
I need to compute a single color value that is the most "contrasting" (i.e. visually distinguishable) with respect to all colors in the list.
A practical criterion for the contrast, for the case with 8 colors:
If we draw 9 squares, filled with our colors as follows:
[1][2][3]
[4][X][5]
[6][7][8]
Color of square X must be clearly distinguishable from all adjacent colors.
Possible simplification: reduce maximum number of colors from 8 to 4 (squares 2, 4, 5, 7 in the example, ignore diagonals).
I think the best solution could be:
maximize the hue difference with all the colors (simple linear optimization)
maximize the lightness difference
maximize the saturation difference
http://www.colorsontheweb.com/colorcontrasts.asp
Edit: with linear programming, you could give lower significance to the diagonal colors.
Edit 2: What maximization means:
You want to maximize the hue contrast, meaning the sum of all |Hi - result|, where Hi stands for the hue of color i, is to be maximized. You can even add constraints for a minimum difference, e.g. |Hi - result| > Hmin. The actual calculation can be done by feeding the equations to a linear-optimization algorithm, or you can try all hue values between 0.0 and 1.0 in steps of 0.05 and keep the best result.
http://en.wikipedia.org/wiki/Linear_programming.
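A sketch of the brute-force variant (hue on a circular 0..1 scale; accounting for the wrap-around in the distance is my addition):

    def most_contrasting_hue(hues, step=0.05):
        # Distance between two hues on the circular 0..1 scale.
        def circ_dist(a, b):
            d = abs(a - b) % 1.0
            return min(d, 1.0 - d)
        candidates = [i * step for i in range(int(round(1.0 / step)))]
        # Pick the hue maximizing the summed distance to all input hues.
        return max(candidates,
                   key=lambda h: sum(circ_dist(h, hi) for hi in hues))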
I know it's not a programming question, but I think understanding color models is always part of a programmer's life.
So we were having an argument about a certain color in the office. I was saying that the color was closer to pink, and a colleague said it was closer to purple.
The question is: how can I measure the distance of one color from another?
Example:
Pink=(255, 192, 203) -->A
Purple=(128, 0, 128) -->B
Color in question=(232,143,253)-->C
Is A or B closer to C?
A simple method is to calculate the Euclidean distance in the RGB cube using the formula:
√((r2 - r1)² + (g2 - g1)² + (b2 - b1)²)
However, this won't accurately measure the human perception of closeness. For example, the human eye is more sensitive to some colours than others. To take this into account you will need to look at some research on the topic of human perception of colour. This Wikipedia page has some good starting points: Color difference
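To settle the office argument numerically, a quick check of the example with plain Euclidean RGB distance:

    import math

    def rgb_distance(c1, c2):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

    pink   = (255, 192, 203)  # A
    purple = (128,   0, 128)  # B
    c      = (232, 143, 253)  # C
    print(rgb_distance(pink, c))    # ~73.7
    print(rgb_distance(purple, c))  # ~216.5 -- so C is closer to pink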