I am implementing displacement mapping using DirectX 11 with its new tessellation stages.
The diffuse map and displacement map are generated by xNormal.
The result after applying displacement mapping is badly cracked:
http://imgur.com/a/OT2tt#0
Then I realized that the values in the texture along the seams are not the same/continuous, so I just used the diffuse texture as the displacement map; its color is all red.
http://imgur.com/a/OT2tt#1
The result is better, but there is still a one-pixel gap along the seams.
http://imgur.com/a/OT2tt#2
http://imgur.com/a/OT2tt#3
http://imgur.com/a/OT2tt#4
I was confused by the little gap, so I enlarged the colored part of the texture using MS Paint, and then the gap disappeared!
http://imgur.com/a/OT2tt#6
http://imgur.com/a/OT2tt#7
Now I just don't understand where the problem is.
Even if the values along the seams from different parts of the texture are the same (red in this case),
there are still gaps in the resulting model.
I tried every sampler filter listed here (MSDN), but nothing helps.
What causes the gap? It would be better if the problem could be solved by just modifying the texture instead of changing my code.
You must implement watertight seam filtering :D
Otherwise those gaps appear because the normals differ across UV seams: vertices duplicated along a seam are displaced in different directions, so even identical displacement values pull them apart.
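A common CPU-side way to get that watertightness is to weld the displacement direction across seams before uploading the mesh: every vertex that shares a position with a seam duplicate gets the same averaged normal, so identical displacement values move the duplicates to the same place. Here is a minimal sketch of that idea; the Float3/Vertex layout and the quantization epsilon are assumptions about your data, not taken from your code.

#include <array>
#include <cmath>
#include <map>
#include <vector>

struct Float3 { float x, y, z; };

struct Vertex {
    Float3 position;
    Float3 normal;   // direction the domain shader displaces along (assumed layout)
    float  u, v;
};

// Quantize a position so vertices duplicated along UV seams map to one key.
static std::array<int, 3> PositionKey(const Float3& p, float eps = 1e-4f) {
    return { int(std::round(p.x / eps)),
             int(std::round(p.y / eps)),
             int(std::round(p.z / eps)) };
}

// Average the normals of all vertices sharing a position, so seam duplicates
// are displaced along the same direction and no crack can open.
void WeldSeamNormals(std::vector<Vertex>& verts) {
    std::map<std::array<int, 3>, Float3> accum;
    for (const Vertex& v : verts) {
        Float3& n = accum[PositionKey(v.position)];
        n.x += v.normal.x; n.y += v.normal.y; n.z += v.normal.z;
    }
    for (Vertex& v : verts) {
        Float3 n = accum[PositionKey(v.position)];
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) v.normal = { n.x / len, n.y / len, n.z / len };
    }
}

The same averaging trick can be applied to the sampled displacement value itself if the texture content cannot be made continuous across the seam.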
So a polygon mesh is defined as follows:
class Triangle {
    int vertices[3];   // vertex indices
    float nx, ny, nz;  // face-plane normal
};
1. Is this a convenient way to represent a mesh used with flat shading? Explain.
2. Suggest an object for which this is a good mesh format when used with Gouraud shading. Explain.
3. Suggest an object for which this is a bad mesh format when used with Gouraud shading. Explain.
So for 1, I said yes, because the face-plane normal can easily be placed at a point in the middle of the face. But I read somewhere that normals don't have positions?
For 2 I said a ball, because of its gentler angles.
And for 3 a box, because of its steeper angles.
I don't know; I don't think I really understand what the normal vector is.
1. Mostly yes. For geometry computations this is fine; from a rendering point of view, however, having the triangles only in indexed form can sometimes be problematic (it depends on the rendering engine, hardware, etc.). It is usually faster to store the triangle points directly as vectors instead of just indices; sometimes a triangle stores both, but that wastes space (see the sketch below).
2. It depends on how you classify what is OK and what is not. Smooth objects like a sphere will look visibly faceted, while flat-sided meshes like a cube will be rendered without visible distortion in shape (but with flat-shaded colors only, so the lighting will be corrupted). So the answer depends on what you want to achieve: less lighting error, better shape recognition, and so on. Basically, using one normal per face turns Gouraud shading into flat shading. Lighting can be improved by dividing big flat surfaces into more triangles.
3. This is unanswerable, for exactly the same reasons as #2.
So if you want to answer #2 and #3, you need to clarify what "good" and "bad" mean...
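For reference, a small sketch of the two layouts mentioned in point 1 (the names are placeholders, not from any particular engine): the compact indexed, per-face-normal form from the question versus an expanded per-vertex-normal form that Gouraud shading needs.

#include <vector>

struct Vec3 { float x, y, z; };

// Indexed form from the question: one normal per face -> effectively flat shading.
struct FaceTriangle {
    int  vertices[3];   // indices into a shared position array
    Vec3 faceNormal;    // single normal for the whole triangle
};

// Expanded form: a normal per vertex -> what Gouraud shading interpolates,
// at the cost of duplicated data.
struct SmoothVertex {
    Vec3 position;
    Vec3 normal;        // e.g. averaged from all faces sharing this vertex
};

struct Mesh {
    std::vector<Vec3>         positions;  // shared by the indexed faces
    std::vector<FaceTriangle> faces;      // compact, geometry-friendly
    std::vector<SmoothVertex> expanded;   // what a renderer often prefers
};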
I have a PIXI.Graphics inside a PIXI.Container (along with some other stuff, including a mask and a border). The graphics object is being used to draw various polygons. The alpha property of the Container is set to 0.5. Here is the result:
The bright square is the overlap between two polygons. Even though both polygons were drawn to the same opaque graphics object, it's as though they are separate objects with their own alpha channels.
Is there any way to merge all of the polygons together so that the resulting graphics will have uniform alpha despite some overlapping polygons?
Pixi version is 4.7.3.
You can easily use an AlphaFilter to achieve this. See this thread: https://github.com/pixijs/pixi.js/issues/4334
// Apply the alpha once to the container's rendered output instead of setting
// container.alpha, so overlapping shapes blend as a single layer.
const colorMatrix = new PIXI.filters.AlphaFilter();
colorMatrix.alpha = 0.5;
container.filters = [colorMatrix];
One solution to this problem in general is to draw all the necessary geometry, then set cacheAsBitmap to true on the Graphics object.
cacheAsBitmap is great for graphics that don't change often, and another benefit to using it is that it speeds up rendering.
Unfortunately, there appears to be a bug when using cacheAsBitmap with objects that use parent layers or masks: all the graphics disappear if either is set.
In my particular situation, this does not help me because I need masking. Hopefully it helps someone else though.
Edit
The above solution works if you put the graphics inside a container and apply the mask to the container. I found this out by complete accident while messing around.
I am attempting to apply one solid texture to a quadtree, but I am having a problem. My quadtree works by creating a new mesh each time there is a subdivision: the tree starts as one mesh, then when it splits it becomes 4 meshes, and so on.
Now I am trying to apply a consistent texture to the quadtree, where each split still draws the same texture fully. The pictures below give a good example.
Before Split:
After Split:
What I want is for the texture to look like the before-split picture even after the split. I can't seem to figure out the UV mapping for it, though. Is there a simple way to do this?
I have tried taking the location and modifying its value based on the scale of the new mesh. This has proven unfruitful though, and I'm really not sure what to do.
Any help or advice is greatly appreciated, thanks.
Stumbled on this... so it might be too late to help you. But if you are still thinking about this:
I think your problem is that you are getting a little confused about what a quadtree is. A quadtree is a spatial partition of a space. Think of it as a two-dimensional B-tree. You don't texture a quadtree; you just use it to quickly figure out what lies within an arbitrary bounded region.
I suppose that you could use it to determine texture offsets for texture alignment, but that sounds like an odd use of a quadtree, and I suspect that there is probably a much easier way to solve your problem. (Perhaps use the world-space coords % texture size to get the offset needed to seamlessly render the texture across multiple triangles?)
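To make that concrete, here is a rough sketch of the world-space mapping (the parameter names are my own): UVs are derived from the vertex's world position, so every quadtree node samples the same continuous texture no matter how far it has been subdivided. Passing the root node's extent as the size gives a single, non-repeating stretch over the whole tree, which matches the "before split" picture.

#include <cmath>

struct Vec2 { float u, v; };
struct Vec3 { float x, y, z; };

// textureWorldSize: how many world units one repetition of the texture covers.
// Use the root node's width here if you want the texture stretched exactly
// once over the whole quadtree instead of tiled.
Vec2 WorldToUV(const Vec3& worldPos, float textureWorldSize) {
    // The mapping depends only on world position, never on which
    // quadtree node (mesh) the vertex currently belongs to.
    float u = worldPos.x / textureWorldSize;
    float v = worldPos.z / textureWorldSize;
    // Wrap into [0,1) -- this is the "world coords % texture size" idea.
    return { u - std::floor(u), v - std::floor(v) };
}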
Let's say I've got an RGBA texture and a polygon class whose constructor takes a vector array of vertex coordinates.
Is there some way to create a polygon from this texture, for example using the alpha channel of the texture...?
This is in 2D.
Absolutely, yes, it can be done. Is it easy? No. I haven't seen any game/geometry engines that would help you out much either. Doing it yourself, the biggest problem you're going to have is generating a simplified mesh: one quad per pixel is going to generate a lot of geometry very quickly, and holes in the geometry may be an issue if you're tracing the edges and triangulating afterwards. Then there's the issue of determining what's in and what's out. Alpha is the obvious candidate, but unless you're looking at either full-on or full-off alpha, you may be hoping for nice smooth edges; that's going to be hard to get right and would probably involve some kind of marching squares over the interpolated alpha. So while it's not impossible, it's a lot of work.
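For what it's worth, the brute-force starting point described above might look something like this (the function name, threshold and pixel layout are assumptions for illustration; the output still needs merging/tracing to become a usable polygon):

#include <cstdint>
#include <vector>

struct Quad { float x, y, w, h; };

// Emit one unit quad per pixel whose alpha passes the threshold.
std::vector<Quad> QuadsFromAlpha(const std::uint8_t* rgba,
                                 int width, int height,
                                 std::uint8_t threshold = 128) {
    std::vector<Quad> quads;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            std::uint8_t alpha = rgba[(y * width + x) * 4 + 3];
            if (alpha >= threshold)                  // "inside" the shape
                quads.push_back({ float(x), float(y), 1.0f, 1.0f });
        }
    }
    return quads;  // lots of geometry: simplification comes next
}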
Edit: As pointed out below, Unity does provide a method of generating a polygon from the alpha of a sprite: a PolygonCollider2D. The script reference for it mentions the pathCount variable, which describes the number of polygons the collider contains and, in turn, which indices are valid for the GetPath method. So this method could be used to generate polygons from alpha. It does rely on using Unity, however. But with the combination of the sprite alpha controlling what is drawn and the collider controlling intersections with other objects, it covers a lot of use cases. This doesn't mean it's appropriate for your application.
I'm having a major issue which has been bugging me for a while now.
My problem is that my game uses a deferred rendering engine, which makes it very difficult to do alpha blending.
The only way I can think of solving this is to render the scene (including the depth map, normal map and diffuse map) without any objects that have alpha.
Then, for each polygon that has a texture with an alpha component, disable the z-buffer and render it out, including normals, depth and colour, and wherever alpha is 0 output nothing to the depth, normal and colour buffers. Perform the lighting calculations and other deferred effects on these two separate sets of textures, then combine the colour buffers, using the depth maps to check which pixel is visible.
This idea would be extremely costly (not to mention that it has some severe shortcomings), so it should obviously be reserved for as few cases as possible, which makes rendering forest areas out of the question. However, if there is no better solution, I have one question.
When doing alpha blending with DirectX, is there a shader/device state I can set that lets me avoid writing to the depth/normal/colour buffer when I want to? The issue is that the pixel shader has to output to all of the render targets specified, so if it is set to output to the three render targets it must do so, which will override the previous colour value for that texel in the texture.
If there is no blend state that allows me to do this, it would mean I would have to copy the normal, texture and depth maps to keep the scene, then render to a new texture, depth and normal map, and then combine the two sets based on the alpha and depth values.
I guess all I really want to know is whether there is a simple, sure-fire and possibly cheap way to render alphas in a deferred renderer?
The usual approach to drawing transparent geometry in a deferred renderer is simply to draw it in a separate pass, but using ordinary forward rendering rather than the deferred path.
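Assuming Direct3D 11, setting up that forward pass is mostly a matter of state: enable standard alpha blending and keep depth testing against the opaque scene's depth buffer while disabling depth writes. As for the question about skipping individual render targets, the blend state's per-target RenderTargetWriteMask (with IndependentBlendEnable) can statically disable writes to a given G-buffer target, and clip()/discard in the pixel shader skips all writes for a pixel, but neither gives a per-pixel "write colour but not depth" switch. A rough sketch, not a drop-in implementation:

#include <d3d11.h>

// Create the states needed for a forward transparency pass that runs after
// the opaque deferred pass. Assumes an existing ID3D11Device* device.
void CreateTransparencyStates(ID3D11Device* device,
                              ID3D11BlendState** blendState,
                              ID3D11DepthStencilState** depthState) {
    // Standard "over" alpha blending on render target 0.
    D3D11_BLEND_DESC bd = {};
    bd.RenderTarget[0].BlendEnable           = TRUE;
    bd.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
    bd.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
    bd.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    bd.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    bd.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
    bd.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    bd.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
    // With IndependentBlendEnable = TRUE you could also set
    // RenderTarget[i].RenderTargetWriteMask = 0 to block writes to a
    // particular G-buffer target entirely (static, not per-pixel).
    device->CreateBlendState(&bd, blendState);

    // Test against the opaque scene's depth, but do not write depth.
    D3D11_DEPTH_STENCIL_DESC dsd = {};
    dsd.DepthEnable    = TRUE;
    dsd.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
    dsd.DepthFunc      = D3D11_COMPARISON_LESS_EQUAL;
    device->CreateDepthStencilState(&dsd, depthState);
}

// Usage before drawing the transparent geometry (context is your
// ID3D11DeviceContext*):
//   float blendFactor[4] = { 0, 0, 0, 0 };
//   context->OMSetBlendState(blendState, blendFactor, 0xffffffff);
//   context->OMSetDepthStencilState(depthState, 0);

Sorting the transparent draws back to front before this pass is still required for correct blending, since depth writes are off.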