Object detection of wood piles in an image (OpenCV, C++)

I want to detect wood piles in an image. Secondly, I want to remove the other areas around the wood piles, e.g. grass, sky and trees.
I have used Sobel on the grayscale image, horizontally and vertically. I can see the wood piles in the circle, but I have failed to remove the other areas, such as the grass.
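For reference, the Sobel step the question describes can be sketched without OpenCV; in real code you would use cv2.Sobel on the grayscale image. This is a minimal illustration only, with a made-up 4x4 test image:

```python
# Minimal pure-Python Sobel sketch (illustration only; with OpenCV you
# would call cv2.Sobel(gray, CV_64F, 1, 0) and cv2.Sobel(gray, CV_64F, 0, 1)).
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel

def sobel(img):
    """Return the gradient magnitude of a 2D list of grayscale values."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: the response is strong along the boundary columns.
img = [[0, 0, 255, 255]] * 4
mag = sobel(img)
```

Thresholding this magnitude image is what separates strong wood-pile edges from smoother regions like sky; texture like grass will still respond, which is the asker's remaining problem.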

Related

Phong illumination produces black

I guess I am somehow stuck with a basic question where I just don't get the correct answer.
The Phong illumination model contains an ambient, diffuse and specular part.
Each part contains a multiplication of the color of light (ambient or source) with a coefficient (ambient, diffuse, specular): I * coe
The light and the coefficients consist of the r,g,b color channels:
I_r * coe_r
I_g * coe_g
I_b * coe_b
Assuming a light would be green (0,1,0) and the coefficient (doesn't matter which) is blue (0,0,1) the result would be black (0,0,0).
How does this make any sense?
A blue object only reflects blue light. If you light it using white light, which contains all colors, it reflects only the blue light, so that is why it appears blue to the viewer. If you shine a light that has no blue component on a blue object, no light will be reflected.
In real life, lights and pigments are never "pure", and an object will not appear completely black in these situations. However, in the world of computer graphics, this can happen easily.
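The component-wise multiplication the answer describes can be shown in a few lines (the helper name is made up for illustration):

```python
# Component-wise RGB modulation, as in the Phong terms I * coe.
def modulate(light, coeff):
    return tuple(l * c for l, c in zip(light, coeff))

green_light = (0.0, 1.0, 0.0)   # pure green source
blue_coeff  = (0.0, 0.0, 1.0)   # purely blue material coefficient

result = modulate(green_light, blue_coeff)
# result is (0.0, 0.0, 0.0): the blue surface reflects no green light.
```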

Proper way of calculating lighting

I am implementing lighting in a 3D engine, and have discovered a flaw in the way I calculate lighting. In the combine shader, I get the diffuse color and the lighting calculations.
return diffuseColor * (diffuseLight + ambient);
As far as I can tell, this is the standard way of doing things.
But, for example, what if the color is 0,1,0, and the light is 1,0,0. (Ignore ambient in this example)
The result would be 0,0,0. But in real life, if I get a pure green bit of paper, and shine a tinted red light onto it, it should turn out yellow. Or at least not black.
How do other games solve this problem?
A green paper will absorb red light and appear black, not yellow.
Additive mixing means you shine red and green light on a white paper; then the paper appears yellow.
http://www.physics.wisc.edu/museum/Exhibits-2/Light_Optics/ColorObj/Color_index.html
It should be (for each RGB component)
return (diffuseColor * diffuseLight) + ambient;
The ambient component is constant and is added after all of the other lighting calculations have been done, and doesn't depend on the light direction or viewing direction.
As Julio says, a pure green paper will look black under pure red light.
When something looks green, it's because it absorbs all other colours from the spectrum and what's left (i.e. green) is reflected. If you shine a red light on it there's no green to get reflected.
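The corrected ordering from the answer, (diffuseColor * diffuseLight) + ambient, can be sketched per component (the helper and values are made up for illustration):

```python
def combine(diffuse_color, diffuse_light, ambient):
    # Per-component: modulate diffuse by the light first, then add the
    # constant ambient term, clamping to 1.0.
    return tuple(min(c * l + a, 1.0)
                 for c, l, a in zip(diffuse_color, diffuse_light, ambient))

green_paper = (0.0, 1.0, 0.0)
red_light   = (1.0, 0.0, 0.0)
ambient     = (0.1, 0.1, 0.1)

lit = combine(green_paper, red_light, ambient)
# lit is (0.1, 0.1, 0.1): near-black, as the answers predict for
# green paper under red light; only ambient keeps it visible.
```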

Banding with alpha transparency in XNA WP7

Having some issues with smooth alpha gradients in texture files resulting in bad banding issues.
I have a 2D XNA WP7 game and I've come up with a fairly simple lighting system. I draw the areas that would be lit by the light in a separate RenderTarget2D, apply a sprite to dim the edges as you get further away from the light, then blend that final lighting image with the main image to make certain areas darker and lighter.
Here's what I've got so far:
As you can see, the banding is pretty bad. The alpha transparency is quite smooth in the source image, but whenever I draw the sprite, it gets these huge ugly steps between colors. Just to check, I drew the spotlight mask straight onto the scene with normal alpha blending and I still got the banding.
Is there any way to preserve smooth alpha gradients when drawing sprites?
Is there any way to preserve smooth alpha gradients when drawing sprites?
No, you cannot. WP7 phones currently use a 16-bit color system: each pixel gets 5 red bits, 6 green bits, and 5 blue bits (human vision is more sensitive to green).
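The 5-6-5 quantisation described above is exactly what produces the banding: each 8-bit channel loses its low bits, so a smooth gradient collapses into visible steps. A small sketch (the helper name is made up):

```python
# RGB565 quantisation sketch: keep only the top 5/6/5 bits per channel.
def quantize565(r, g, b):
    return (r >> 3) << 3, (g >> 2) << 2, (b >> 3) << 3

# A smooth 0..255 red gradient collapses to 32 distinct levels.
steps = sorted({quantize565(v, v, v)[0] for v in range(256)})
# len(steps) is 32 instead of 256 -> visible bands between steps of 8.
```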
Found out that with Mango, apps can now specify that they support 32bpp, and it will work on devices that support it!
For XNA, put this line at the top of OnNavigatedTo:
SharedGraphicsDeviceManager.Current.PreferredBackBufferFormat = SurfaceFormat.Color;
For Silverlight add BitsPerPixel="32" to the App element in WMAppManifest.xml.

Most "stable" color representation: RGB? HSV? CIELAB?

There are several color representations in computer science: the standard RGB, but also HSV, HSL, CIE XYZ, YCC, CIELAB, CIELUV, ... It seems to me that most of the time, these representations try to approximate human vision (colors that are perceptually identical should have similar representations).
But what I want to know is which representation is the most "stable" when it comes to pictures. I have an object, let's say a bottle of Coke, and I have thousands of pictures of this bottle, taken under very different circumstances (the main difference would be how light or dark the picture is, but there is also orientation, etc.).
My question is: what color representation will empirically give me the most stable representation of the colors of the bottle? The "red" color of the label should not vary too much. Well, I know it will vary, but I would like to know the most "stable" representation.
I've been taught that HSV is better than RGB for this kind of thing, but I have no clue about the rest.
Edit (technical details): I take a particular point of the bottle. I pick the corresponding pixels in a thousand pictures of this point. I now have a cloud of points that depends on the representation. I want the representation that minimizes the "size" of this cloud, for example the one that minimizes the mean distance from the points of the cloud to its barycenter.
You might want to check out http://www.cs.harvard.edu/~sjg/papers/cspace.pdf, which proposes a new colorspace apparently designed to address this precise question.
I'm not aware of a colourspace that does what you want, but I do have some remarks:
RGB closely matches the way colours are displayed to us on monitors. It is one of the worst colourspaces available in terms of approximating human perception.
As for the other colourspaces: Some try to make sure colours that are perceptually close together are also close together in the colourspace. Others also try to ensure that perceptually similar differences in colour also produce similar differences in the colourspace, regardless of where in the colourspace you are.
The first means that if you think the difference in colour between blue A, and blue B is similar to the difference in colour between the blue A and blue C, then in the colourspace the distance between blue A and blue B will be similar to the distance between blue A and blue C, and they will all three be close together in the colourspace. I think this is called a perceptually smooth colourspace. CIE XYZ is an example of this.
The second means that if you think the difference in colour between blue A and blue B is similar to the difference in colour between red A and red B then in the colourspace the distance between blue A and blue B will be similar to the difference between red A and red B. This is called a perceptually uniform colourspace. CIE Lab is an example of this.
[edit 2011-07-29] As for your problem: Any of HSV, HSL, CIE XYZ, YCC, CIELAB, CIELUV, YUV separate out the illumination from the colour info in some way, so those are the better options. They provide some immunity from illumination changes, but won't help you when the colour temperature changes drastically or coloured light is used. XYZ and YUV are computationally less expensive to get to from RGB (which is what most cameras give you) but also less "good" than HSV, HSL, or CIELAB (the latter is often considered one of the best, but it is also one of the most difficult).
Depending on what you are searching for, you could calibrate the color balance of the images. For example, suppose you are matching Coca-Cola logos: you know that the letters in the logo are always white. So if they are not white in your image, you can use the colour they do have to correct the image, which gives you information about the other colours.
Our perception of the color of something is mostly determined by its hue; a colorspace such as HSV which gives a single value representing hue will work best.
The eye is a remarkable instrument though, and knowing the color of a single point is not enough. If the entire scene has a yellow or blue tint to it, the eye will compensate and your perception will be of a purer color - the orange Coke bottle will appear to be redder than it is. Likewise with darkness and brightness. If possible, you should try to compensate the image before taking the color sample.
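The evaluation the asker describes in the edit can be sketched with the standard library's colorsys module. Here brightness changes are simulated by scaling a made-up "label red"; the hue channel of HSV stays put while the RGB coordinates spread out, which is the kind of stability the answers point to:

```python
import colorsys

# Simulate lighter/darker shots of one "label red" and compare the
# spread of the resulting point cloud in RGB versus the HSV hue channel.
base = (0.8, 0.1, 0.1)                      # illustrative label colour
cloud = [tuple(min(c * k, 1.0) for c in base)
         for k in (0.5, 0.75, 1.0, 1.25)]   # brightness variations

hues = [colorsys.rgb_to_hsv(*p)[0] for p in cloud]
reds = [p[0] for p in cloud]
# Uniformly scaled samples keep the same hue, while their red
# coordinates in RGB spread widely -> HSV hue is the "smaller cloud".
```

This only models brightness changes; as the answer notes, a drastic colour-temperature shift or coloured lighting would move the hue too.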

How would you store complex NES sprites, such as from the original Final Fantasy?

I know that NES had 4-color sprites (with 1 usually being transparent. Edit: according to zneak, 1 color is always transparent). How then did the original Final Fantasy have so many sprites with 4 colors + transparent? (Example sprite sheet -- especially look at the large ones near the bottom.)
I understand that you can layer sprites to achieve additional colors (For example: Megaman's layering gives him 6 colors: body=3+trans, face=3+trans). It's odd that these FF ones are all exactly 4 colors + transparent. If FF used similar layering, why would they stop at 4+1 instead of taking advantage of 6+1?
Is there another method of displaying sprites that gives you an additional color?
Also interesting is the fact that the big sprites are 18x26. Sprites are 8x8 (and I think I read somewhere that they're sometimes 8x16), but both 18 and 26 are [multiple of 8] + 2. Very strange.
As far as I know, 1 isn't usually transparent: it always is.
As you noted, sprites are either 8x8 or 8x16 (this depends on bit 6 of the PPU control register mapped to memory address 0x2000 in the CPU's address space). Character sizes not being a multiple of 8 simply means there are wasted pixels in one or more of the constituting sprites.
For the colors, I beg to differ: the last sprite at the bottom, with the sword raised, has these 8 colors:
Final Fantasy sprite 8 colors: black, brown, beige, sky blue, navy, dark turquoise, turquoise, cyan http://img844.imageshack.us/img844/2334/spritecolors.png
I believe this is more an artistic choice, because each 8x8 block is limited to 3 opaque colors; maybe it just was more consistent to use fewer colors.
I found the answer. I finally broke down and downloaded the ROM and extracted the bitmaps with NAPIT. (btw: staring at extracted ROM bitmaps is really bloody hard on your eyes!)
I matched a few bitmaps and end-results here.
Each character has a color that is mostly relegated to the top part of the sprite, so I chased that idea for a while. It turns out that's a red herring. Comparing the in-game sprites vs. the color masks, you can see that black and transparent use the same color mask. Therefore, IF a black outline is shown, then it must be on a separate layer. However, despite the black outlines on the sprite sheet, I can't find any real examples of black outlines in the game.
Here's a video on YouTube with lots of good examples. When you are on the blue background screen (at 0:27), the outlines and the black mage's face are the blue of the background (i.e. there is no black outline, it's transparent). In combat, the background is black. At 1:46 a spell is cast that makes the background flash grey. All black areas, including outlines and black eyes, flash grey. Other spells are also cast around this part of the video with different colors of flashes. The results are the same.
The real answer is that the black outlines on the sprite sheet don't seem to exist in the game. Whoever made the sprite sheet took the screenshots with a black background and scrubbed the background away.
You might want to check out Game Development StackExchange instead of here.
I've just had a quick glance at the sprite sheet, but it looks to me that sprites with more than 3 colors + 1 transparent either have weapons or use 3 colors + a black outline. Also, if you could show that sprite sheet with a grid separating tiles...
Maybe the extra 2 colors were reserved for the weapons.
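The layering idea mentioned in the question (Megaman's body + face) amounts to compositing two 3-colour-plus-transparent layers, with the back layer showing through wherever the front layer is transparent. A sketch with made-up palette indices:

```python
# NES-style sprite layering sketch: palette index 0 is transparent,
# and each layer contributes up to 3 opaque colours, so two layers
# can yield up to 6 opaque colours (as with Megaman).
T = 0  # transparent

def composite(front, back):
    """Front pixel wins unless it is transparent."""
    return [[f if f != T else b for f, b in zip(fr, br)]
            for fr, br in zip(front, back)]

face = [[T, 1, 2], [3, T, T]]   # opaque colours 1, 2, 3
body = [[4, 5, 6], [4, 6, 5]]   # opaque colours 4, 5, 6
merged = composite(face, body)

opaque = {c for row in merged for c in row if c != T}
# merged == [[4, 1, 2], [3, 6, 5]] and len(opaque) == 6
```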