Fill 2D area bound by vertices in XNA - colors

I'm learning XNA by doing and, as the title states, I'm trying to see if there's a way to fill a 2D area that is defined by a collection of vertices on a plane. I want to fill with a color, not a file-based texture.
For example, take a rounded rectangle whose corners are defined by four quarter-circle triangle fans. The shape is built up from a collection of triangles, but the triangles may not be adjacent.
Additionally, I would like to fill it with more than a single color -- i.e. divide the bounded area into four vertical bands and give each a different color. You don't have to provide the code; pointing me towards resources will help a great deal. I can be handy with Google (which I did try first, but failed miserably).
This is as much an exploration into "what's appropriate for XNA" as it is the implementation of it. Being pretty new to XNA, I also want to learn what should and shouldn't be done on top of what can and can't be done.

Not too much, but here's a start:
The color fill is accomplished by using a shader. Riemer's XNA Tutorials on pixel shaders are a great resource on the topic.
You need to calculate the geometry and build up vertex buffers to hold it. Note that all vector geometry in XNA is in 3D, but using a camera fixed to a plane will simulate 2D.
To give different triangles different colors, you basically need to group the geometry into separate vertex buffers. Then, using a shader with a color parameter, set the appropriate color for each buffer before passing it to the graphics device. Alternatively, you can use a vertex format containing color information, which basically lets you assign a color to each vertex.
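If you go the per-vertex color route, a minimal sketch needs nothing beyond XNA's built-in VertexPositionColor format and BasicEffect (the coordinates, colors and viewport size here are purely illustrative):

// using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics;
var effect = new BasicEffect(device)
{
    VertexColorEnabled = true,   // use the color stored in each vertex
    // y-down orthographic camera fixed to the z=0 plane simulates 2D
    Projection = Matrix.CreateOrthographicOffCenter(0, 800, 600, 0, 0, 1)
};

// One triangle of a fill; use the same color on all three vertices for
// a flat fill, or vary colors per band for the vertical-band effect.
var vertices = new[]
{
    new VertexPositionColor(new Vector3(100, 100, 0), Color.Red),
    new VertexPositionColor(new Vector3(300, 100, 0), Color.Red),
    new VertexPositionColor(new Vector3(200, 300, 0), Color.Red),
};

foreach (var pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    device.DrawUserPrimitives(PrimitiveType.TriangleList, vertices, 0, 1);
}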

Related

How can I create an image morpher inside a graphics shader?

Image morphing is mostly a graphic design SFX to adapt one picture into another using some points decided by the artist, who has to match the eyes and other key zones of one portrait with another; some kind of algorithm then adapts the entire picture to change from one to the other.
I would like to do something a bit similar with a shader, which can load any 2 graphics, automatically choose zones of the most similar colors in the same kinds of zones of each picture, and morph the two pictures in real time. Perhaps a shader-based version would logically be a lot faster at the task? Except I don't even understand how it works at all.
If you know, please don't worry about a complete reply about the process; it would be great if you gave some vague background concepts and keywords for how to attempt a 2D texture morph in a graphics shader.
There are more morphing methods out there; the one you are describing is based on geometry.
1. morph by interpolation
You have 2 data sets with similar properties (for example, 2 images that are both 2D) and interpolate between them by some parameter. In the case of 2D images you can use linear interpolation if both images are the same resolution, or trilinear interpolation if not.
So you just pick corresponding pixels from each image and interpolate the actual color for some parameter t=<0,1>. For the same resolution, something like this:
// crossfade per pixel: out = (1-t)*img1 + t*img2 (apply per color channel)
for (int y = 0; y < img1.height; y++)
 for (int x = 0; x < img1.width; x++)
  img.pixel[x][y] = (1.0 - t) * img1.pixel[x][y] + t * img2.pixel[x][y];
where img1, img2 are the input images and img is the output. Beware that t is a float, so you need to cast to avoid integer rounding problems, or use a scaled integer t=<0,256> and correct the result with a right shift by 8 bits (or /256). For different sizes you need to bilinearly interpolate the corresponding (x,y) position in both source images first.
All of this can be done very easily in a fragment shader. Just bind img1, img2 to texture units 0 and 1, pick the texel from each, interpolate, and output the final color. The bilinear coordinate interpolation is done automatically by GLSL because texture coordinates are normalized to <0,1> regardless of resolution. In the vertex shader you just pass the texture and vertex coordinates through, and on the main program side you just draw a single quad covering the final image output...
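Since this page is XNA-flavored: the host-side code for such a crossfade shader is tiny. A sketch, assuming a custom Effect named crossfade whose pixel shader lerps the sprite texture (sampler 0) with a second texture on sampler 1 by a float parameter 't' (the effect and parameter names are assumptions, not a real library API):

// Host-side crossfade sketch (XNA); the actual lerp lives in the effect.
crossfade.Parameters["t"].SetValue(t);     // blend factor in <0,1>
device.Textures[1] = img2;                 // second image on sampler 1
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                  null, null, null, crossfade);
spriteBatch.Draw(img1, Vector2.Zero, Color.White);  // img1 on sampler 0
spriteBatch.End();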
2. morph by geometry
You have 2 polygons (or matching point sets) and interpolate their positions between the two. For example, something like this: Morph a cube to coil. This is suited for vector graphics. You just need point correspondence, and then the interpolation is similar to #1:
// move each point along the line between its two source positions
for (int i = 0; i < points; i++)
{
 p[i].x = (1.0 - t) * p1[i].x + t * p2[i].x;
 p[i].y = (1.0 - t) * p1[i].y + t * p2[i].y;
}
where p1[i], p2[i] are the i-th points from each input geometry set and p[i] is the corresponding point in the final result...
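In XNA terms the same per-point interpolation is a single built-in call; a sketch, assuming p, p1 and p2 are Vector2 arrays of the same length:

// Same interpolation using XNA's built-in lerp: p = (1-t)*p1 + t*p2.
for (int i = 0; i < points; i++)
    p[i] = Vector2.Lerp(p1[i], p2[i], t);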
To enhance the visual appearance, the linear interpolation can be exchanged for a specific trajectory (like Bezier curves) so the morph looks cooler. For example, see:
Path generation for non-intersecting disc movement on a plane
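For instance, swapping the straight-line lerp above for a quadratic Bezier gives each point a curved trajectory. A sketch, where c is a hypothetical array of control points you design:

// Quadratic Bezier trajectory through control point c[i]:
// p(t) = (1-t)^2 * p1 + 2(1-t)t * c + t^2 * p2
for (int i = 0; i < points; i++)
{
    float u = 1.0f - t;
    p[i] = u * u * p1[i] + 2.0f * u * t * c[i] + t * t * p2[i];
}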
To accomplish this on the GPU you could use a geometry shader (or maybe even a tessellation shader). You would need to pass both polygons as a single primitive; the geometry shader would then interpolate the actual polygon and emit it for rasterization.
3. morph by particle swarms
In this case you find corresponding pixels in the source images by matching colors. Then handle each pixel as a particle and create a path for it from its position in img1 to its position in img2, parameterized by t. It is the same as #2, but instead of polygon areas you have just points. Each particle has a color and a position, and you interpolate both, because there is only a very slim chance you will get exact color matches with matching counts (the histograms would have to be identical), which is improbable.
4. hybrid morphing
It is any combination of #1, #2 and #3.
I am sure there are more methods for morphing; these are just the ones I know of. Also, morphing can be done not only in the spatial domain...

DirectX 11 spheres

I'm looking for an efficient way to display lots of spheres using DirectX 11. Each sphere is defined by (x,y,z,r), where (x,y,z) is its position in space and r its radius. I want to display only the spheres that can be seen: spheres outside the field of view and spheres too small to be seen wouldn't be drawn. However, if a group of sub-pixel spheres together covers at least one pixel, I want to display the predominant color. Spheres have only one color and different levels of transparency. Any help would be appreciated, and incomplete answers are acceptable.
You need several things: first, an indexed unit-sphere geometry; second, a buffer to store the per-sphere instance properties (position, radius and color); and third, a small buffer for the indirect draw arguments. The three combine in a single 'ID3D11DeviceContext::DrawIndexedInstancedIndirect' call.
The remaining question is "how to feed the instance buffer?". The CPU version is easy: just apply frustum culling, sort back to front because of the transparency, apply a merge based on the screen projection, update the buffer, and use 'ID3D11DeviceContext::DrawIndexedInstanced'.
The GPU version will do the same thing with compute shaders but will be harder to implement. The advantage: zero CPU/GPU synchronization, and it should support far more instances.
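A rough sketch of the CPU-side feed, written with XNA's bounding types for brevity since the logic itself is API-agnostic (the SphereInstance struct is an assumption, and the screen-projection merge step is omitted):

// using System.Collections.Generic; using Microsoft.Xna.Framework;
struct SphereInstance
{
    public Vector3 Position;
    public float Radius;
    public Color Color;
}

static List<SphereInstance> CullAndSort(List<SphereInstance> all,
                                        BoundingFrustum frustum, Vector3 eye)
{
    var visible = new List<SphereInstance>();
    foreach (var s in all)
    {
        var bounds = new BoundingSphere(s.Position, s.Radius);
        if (frustum.Intersects(bounds))   // skip spheres outside the view
            visible.Add(s);
    }
    // Back to front: the sphere farthest from the eye is drawn first,
    // which is what the transparency requires.
    visible.Sort((a, b) =>
        Vector3.DistanceSquared(b.Position, eye)
            .CompareTo(Vector3.DistanceSquared(a.Position, eye)));
    return visible;   // upload this to the instance buffer each frame
}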

How to override color interpolation in XNA?

When you draw a triangle with 3 different colors for its 3 vertices, XNA automatically interpolates pixel colors between these vertices. I would like to disable this behavior and supply my own algorithm that determines the color of in-between pixels (for example, use the average of the 3 colors). How should this be done in XNA?
The interpolation is the basic behaviour of the rasterizer; you cannot avoid it.
If you send vertex data to a pixel shader, the data of the three vertices that form a triangle will be interpolated.
So if you want to use the average of the three colors, one option may be to precalculate it on the CPU... and send it to the GPU through a different vertex buffer. This way you can change it whenever you want without touching the vertex buffer that contains the vertex positions....
Of course, if a vertex is shared between two triangles and has different colors, you have to duplicate it.
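A sketch of that precalculation over a plain (unshared) triangle list of VertexPositionColor vertices; the averaging rule is just the example from the question, any function of the three colors would work:

// For each triangle, average its three vertex colors and write the flat
// color back into all three vertices. Requires duplicated vertices so
// flat colors never bleed across triangles that used to share a vertex.
for (int i = 0; i < vertices.Length; i += 3)
{
    Color flat = new Color(
        (vertices[i].Color.R + vertices[i + 1].Color.R + vertices[i + 2].Color.R) / 3,
        (vertices[i].Color.G + vertices[i + 1].Color.G + vertices[i + 2].Color.G) / 3,
        (vertices[i].Color.B + vertices[i + 1].Color.B + vertices[i + 2].Color.B) / 3);
    vertices[i].Color = vertices[i + 1].Color = vertices[i + 2].Color = flat;
}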

How to draw the heightmap onto the screen?

I'm using DirectX 10 to simulate a water surface, and I now have a height map, which is a 2D array of the heights (y) at the points (x,z). But to draw it on the screen, I must turn it into a mesh, or have an index buffer to draw with a triangle topology.
But the data is too large to do this manually. Are there any methods for me to draw it on the screen? I hope it's easy to implement. If there is a function included in DirectX 10 which can do it, that would be best for me.
Create a mesh that forms a grid of squares (each made of two triangles) and set all vertices' y = 0. In the vertex shader, sample the heightmap and add the value stored in the heightmap to the y of the vertex.
P.S.: If the area you want it to cover is very big, you should take a look at terrain LOD techniques (they should work the same for water).
I'm sure you can make a mesh out of it. I doubt you can generate the heightmap for a water surface that is too large to "meshify".
Why are you looking at diamond-square? For a 512x512 heightmap, all you need to do is define a set of points and then generate the triangles for it. It's really very simple.
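A sketch of that generation, C# flavored to match the XNA questions on this page even though the asker is on DirectX 10 (width and height are the heightmap dimensions; flip the triangle winding if your cull mode rejects it):

// using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics;
// Grid of width x height vertices at y = 0; the texture coordinate is
// what the vertex shader uses to sample the heightmap.
var vertices = new VertexPositionTexture[width * height];
for (int z = 0; z < height; z++)
    for (int x = 0; x < width; x++)
        vertices[z * width + x] = new VertexPositionTexture(
            new Vector3(x, 0, z),
            new Vector2(x / (float)(width - 1), z / (float)(height - 1)));

// Two triangles per grid square, as an indexed triangle list.
var indices = new int[(width - 1) * (height - 1) * 6];
int n = 0;
for (int z = 0; z < height - 1; z++)
    for (int x = 0; x < width - 1; x++)
    {
        int i = z * width + x;   // top-left corner of this square
        indices[n++] = i;     indices[n++] = i + 1;         indices[n++] = i + width;
        indices[n++] = i + 1; indices[n++] = i + width + 1; indices[n++] = i + width;
    }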

Direct3D: Wireframe without Diagonals

When using wireframe fill mode in Direct3D, all rectangular faces display a diagonal running across them because each face is split into two triangles. How do I eliminate this line? I also want to remove hidden surfaces, which wireframe mode doesn't do.
I need to display a Direct3D model in isometric wireframe view. The rendered scene must display the boundaries of the model's faces but must exclude the diagonals.
Getting rid of the diagonals is tricky, as the hardware is likely to only draw triangles and it would be difficult to determine which edge is the diagonal. Alternatively, you could apply a wireframe texture (or a shader that generates a suitable texture). That would solve the hidden-line issues, but would look odd as the thickness of the lines would be dependent on z distance.
Using line primitives is not trivial: although surfaces facing away from the camera can be easily removed, partially obscured surfaces would require manual clipping. As a final thought, try a two-pass approach: the first pass draws the filled polygons but writes only to the z buffer, then the second draws the lines over the top with a suitable z bias. That would handle the partially obscured surface problem.
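A sketch of that two-pass idea, written against XNA's render states for brevity (raw Direct3D exposes the same states; DrawTriangles and DrawEdgeLines stand in for your own draw calls, and the bias sign/magnitude will need tuning for your projection):

// Pass 1: fill the depth buffer only, writing no color.
device.BlendState = new BlendState { ColorWriteChannels = ColorWriteChannels.None };
DrawTriangles();   // hypothetical: draws the solid faces

// Pass 2: draw the edge line list, biased slightly toward the camera so
// visible edges pass the depth test while hidden edges fail it.
device.BlendState = BlendState.Opaque;
device.RasterizerState = new RasterizerState { DepthBias = -0.0001f };
DrawEdgeLines();   // hypothetical: draws the precomputed line list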
The built-in wireframe mode renders edges of the primitives. As in D3D the primitives are triangles (or lines, or points - but not arbitrary polygons), that means the built-in way won't cut it.
I guess you have to look up some sort of "edge detection" algorithm. These could operate in image space, where you render the model into a texture, assigning a unique color to each logical polygon, and then do a postprocessing pass using a pixel shader to detect any changes in color (a color change = output black, otherwise output something else).
Alternatively, you could construct a line list that only has the edges you need and just render them.
Yet another alternative could be using geometry shaders in Direct3D 10. Anyhow, lots of different options here.
I think you'll need to draw those lines manually, as wireframe mode is a built-in mode, so I don't think you can modify it. You can get the list of vertices in your mesh and process them into a list of lines that you need to draw.
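A sketch of that processing for the case where the original quad faces are still available (quads is a hypothetical index array with four corner indices per face, stored in outline order):

// Emit the 4 outer edges of each quad as line-list indices, never the
// diagonal that triangulation would add.
var lines = new List<int>();
for (int q = 0; q < quads.Length; q += 4)
    for (int e = 0; e < 4; e++)
    {
        lines.Add(quads[q + e]);             // edge start
        lines.Add(quads[q + (e + 1) % 4]);   // edge end (wraps to first corner)
    }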
