Why is GLKit scaling causing color/lighting to darken? - ios4

I'm seeing a strange behavior that I can't seem to grasp.
For some reason, when I apply a scale (on the z axis), my texture-mapped polygon mesh becomes darkened, as if lighting were suddenly disabled or the diffuse color were set to 0. If I reverse the scale (so that it goes back to its original value), the color becomes bright and vibrant again.
Update with additional info:
The brightness seems fine when the z values on my vertices are at their original/initialized values. But when I scale along the z axis, the color goes dim (not completely dark, but a noticeable change in brightness).
I am using an index buffer to render.
What on earth could cause this "glitch"?
The code related to this is here:
float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(60.0f), aspect, 0.1f, 100.0f);
self.effect.transform.projectionMatrix = projectionMatrix;
// Compute the model view matrix for the object rendered with GLKit
GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 15.0f, -90.0f);
modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, rotationAngle, 0, 1, 0);
// where g_depthScale is a value that increases based on a slider control.
modelViewMatrix = GLKMatrix4Scale(modelViewMatrix, 0.1f, 0.1f, g_depthScale);
self.effect.transform.modelviewMatrix = modelViewMatrix;

As Matic Oblak says, you might be scaling the normals as you scale the model.
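If that is the case and you are driving your own shader, the usual fix is to transform normals with the inverse transpose of the modelview's upper 3x3 and re-normalize in the shader. A minimal sketch with GLKit's C math functions (GLKBaseEffect does not expose a separate normal matrix, so this assumes a custom shader):
bool invertible = false;
// The inverse transpose of the upper-left 3x3 undoes the non-uniform scale.
GLKMatrix3 normalMatrix =
    GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix),
                                 &invertible);
// Hand normalMatrix to the shader and call normalize() on the transformed
// normal there; alternatively, keep the scale uniform or bake it into the
// vertex data instead.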
A cheap and easy way of getting a whole scene to scale is to change the view angle of the projection matrix; see OpenGL ES 2.0 Pinch and Zoom.
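For example, a sketch of zooming through the field of view instead; zoomFactor is a hypothetical value (e.g. driven by the same slider), where 1.0 is the original view and larger values zoom in:
// Narrowing the FOV scales the whole scene on screen while leaving the
// modelview matrix (and therefore the normals) untouched.
float fovDegrees = 60.0f / zoomFactor;
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(
    GLKMathDegreesToRadians(fovDegrees), aspect, 0.1f, 100.0f);
self.effect.transform.projectionMatrix = projectionMatrix;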

Related

threejs - creating "cel-shading" for objects that are close by

So I'm trying to "outline" 3D objects. It's a standard problem, for which the usual answer is that you copy the mesh, color it the outline color, scale it up, and then set it to render only faces that are "pointed in the wrong direction" - for us that means setting side:THREE.BackSide in the material. E.g. here https://stemkoski.github.io/Three.js/Outline.html
But see what happens for me
Here's what I'd like to make
I have a bunch of objects that are close together - they get "inside" one another's outline.
Any advice on what I should do? What I want is: everywhere on the rendered frame where these shapes touch the background or each other, there should be an outline.
What do you want to happen? Is that one mesh in your example, or is it a bunch of intersecting meshes? If it's a bunch of intersecting meshes, do you want them to have one outline? What about other meshes? My point is, you need some way to define which "groups" of meshes get a single outline if you're using multiple meshes.
For multiple meshes and one outline, a common solution is to draw all the meshes in a single group to a render target to generate a silhouette, then post-process the silhouette to expand it. Finally, apply the silhouette to the scene. I don't know of a three.js example, but the concept is explained here and there are also many references here.
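As a plain CPU reference for the expansion step (a real implementation would do this in a post-process shader; expandSilhouette and radius are illustrative names):
#include <cstdint>
#include <vector>

// Given a binary silhouette mask of the whole group, mark as outline every
// background pixel within `radius` pixels of a silhouette pixel.
std::vector<std::uint8_t> expandSilhouette(const std::vector<std::uint8_t>& mask,
                                           int w, int h, int radius) {
    std::vector<std::uint8_t> outline(mask.size(), 0);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            if (mask[y * w + x]) continue; // inside the group: no outline here
            bool nearSilhouette = false;
            for (int dy = -radius; dy <= radius && !nearSilhouette; ++dy) {
                for (int dx = -radius; dx <= radius && !nearSilhouette; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
                        mask[ny * w + nx])
                        nearSilhouette = true;
                }
            }
            outline[y * w + x] = nearSilhouette ? 1 : 0;
        }
    }
    return outline;
}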
Another solution that might work: it should be possible to move the outline shell back in Z so it doesn't intersect - either all the way back (Z = 1 in clip space) or back by some settable amount. Drawing with groups, so that a collection of objects in front has an outline that blocks a group behind, would be harder.
For example if I take this sample that prisoner849 linked to
And change the vertexShaderChunk in OutlineEffect.js to this
var vertexShaderChunk = `
#include <fog_pars_vertex>
uniform float outlineThickness;
vec4 calculateOutline( vec4 pos, vec3 objectNormal, vec4 skinned ) {
float thickness = outlineThickness;
const float ratio = 1.0; // TODO: support outline thickness ratio for each vertex
vec4 pos2 = projectionMatrix * modelViewMatrix * vec4( skinned.xyz + objectNormal, 1.0 );
// NOTE: subtract pos2 from pos because BackSide objectNormal is negative
vec4 norm = normalize( pos - pos2 );
// ----[ added ] ----
// compute a clipspace value
vec4 pos3 = pos + norm * thickness * pos.w * ratio;
// do the perspective divide in the shader
pos3.xyz /= pos3.w;
// just return screen 2d values at the back of the clips space
return vec4(pos3.xy, 1, 1);
}
`;
It's easier to see if you remove all references to reflectionCube and set the clear color to white: renderer.setClearColor( 0xFFFFFF );
Original:
After:

Merging overlapping transparent shapes in directx

This is the problem I am facing, simplified:
Using DirectX, I need to draw two (or more) exactly overlapping triangles (in the same 2D plane). The triangles are semi-transparent, but the effect I want to achieve is that together they have the transparency of a single triangle. The picture below might depict the problem better.
Is there a way to do this?
I use this to keep overlapping transparent triangles from "accumulating". You need to create a blend state and set it on the output merger.
blendStateDescription.AlphaToCoverageEnable = false;
blendStateDescription.RenderTarget[0].IsBlendEnabled = true;
blendStateDescription.RenderTarget[0].SourceBlend = D3D11.BlendOption.SourceAlpha;
blendStateDescription.RenderTarget[0].DestinationBlend = D3D11.BlendOption.One;
blendStateDescription.RenderTarget[0].BlendOperation = D3D11.BlendOperation.Maximum;
blendStateDescription.RenderTarget[0].SourceAlphaBlend = D3D11.BlendOption.SourceAlpha; //Zero
blendStateDescription.RenderTarget[0].DestinationAlphaBlend = D3D11.BlendOption.DestinationAlpha;
blendStateDescription.RenderTarget[0].AlphaBlendOperation = D3D11.BlendOperation.Maximum;
blendStateDescription.RenderTarget[0].RenderTargetWriteMask = D3D11.ColorWriteMaskFlags.All;
Hope this helps. The code is in C# but it works the same in C++ etc. Basically, it takes the alpha of both source and destination, compares them, and takes the max, which will always be the same (as long as you use the same alpha on both triangles); otherwise it will render with the higher of the two alphas.
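For reference, a sketch of the same blend state using the raw D3D11 C++ API (device and context stand in for your ID3D11Device and immediate context):
#include <d3d11.h>

// Same max-alpha blend state, expressed with the raw D3D11 structures.
void setMaxBlendState(ID3D11Device* device, ID3D11DeviceContext* context)
{
    D3D11_BLEND_DESC bd = {};
    bd.AlphaToCoverageEnable                 = FALSE;
    bd.RenderTarget[0].BlendEnable           = TRUE;
    bd.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
    bd.RenderTarget[0].DestBlend             = D3D11_BLEND_ONE;
    bd.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_MAX;
    bd.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_SRC_ALPHA;
    bd.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_DEST_ALPHA;
    bd.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_MAX;
    bd.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* blendState = nullptr;
    if (SUCCEEDED(device->CreateBlendState(&bd, &blendState))) {
        const float blendFactor[4] = { 0, 0, 0, 0 };
        context->OMSetBlendState(blendState, blendFactor, 0xffffffff);
        blendState->Release(); // the context holds its own reference
    }
}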
Edit: I've added a sample of what the blending does in my project. The roads here overlap. Overlap Sample
My pixel shader is as follows.
I pass the UV coords in a float4: xy = UV coords, w = alpha value.
Pixel shader code
float4 pixelColourBlend;
// Sample the diffuse texture; xy of the packed float4 are the UVs.
pixelColourBlend = primaryTexture.Sample(textureSamplerStandard, input.uv.xy);
// w of the packed float4 carries the per-vertex alpha.
pixelColourBlend.w = input.uv.w;
// Discard nearly transparent pixels so they take no part in blending.
clip(pixelColourBlend.w - 0.05f);
return pixelColourBlend;
Enabling the depth stencil prevents this problem
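A sketch of that idea in C++ D3D11 terms (my assumption of the mechanism: with a standard LESS depth test, the second coplanar triangle fails the test at equal depth, so each pixel is blended only once):
#include <d3d11.h>

// Default-style depth-stencil state: with D3D11_COMPARISON_LESS, a second
// triangle drawn at exactly the same depth fails the test and is skipped.
void setDepthTestState(ID3D11Device* device, ID3D11DeviceContext* context)
{
    D3D11_DEPTH_STENCIL_DESC dsDesc = {};
    dsDesc.DepthEnable    = TRUE;
    dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
    dsDesc.DepthFunc      = D3D11_COMPARISON_LESS;

    ID3D11DepthStencilState* dsState = nullptr;
    if (SUCCEEDED(device->CreateDepthStencilState(&dsDesc, &dsState))) {
        context->OMSetDepthStencilState(dsState, 0);
        dsState->Release(); // the context holds its own reference
    }
}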

HLSL beginner needs some directions

Is there any example out there of an HLSL .fx file that splats a tiled texture with different tiles? Like this: http://messy-mind.net/blog/wp-content/uploads/2007/10/transitions.jpg - you can see there's a different tile type in each square, and there's a little blurring between them to make a smoother transition, but right now I just need to find a way to draw the tiles onto a texture. I have a 2D array of integers, each integer corresponding to a tile type (0 = grass, 1 = stone, 2 = sand). I opened up a few HLSL examples and they were really confusing. Everything is running fine on the C++ side, but HLSL is proving to be difficult.
You can use a technique called 'texture splatting'. It mixes several textures (color maps) using another texture which contains alpha values for each color map. The texture with alpha values is an equivalent of your 2D array. You can create a 3-channel RGB texture and use each channel for a different color map (in your case: R - grass, G - stone, B - sand). Every pixel of this texture tells us how to mix the color maps (for example R=0 means 'no grass', G=1 means 'full stone', B=0.5 means 'sand, half intensity').
Let's say you have four RGB textures: tex1 - grass, tex2 - stone, tex3 - sand, alpha - mixing texture. In your .fx file, you create a simple vertex shader which just calculates the position and passes the texture coordinate on. The whole thing is done in pixel shader, which should look like this:
sampler alpha_sampler; // mixing texture (the equivalent of your 2D array)
sampler tex1_sampler;  // grass
sampler tex2_sampler;  // stone
sampler tex3_sampler;  // sand
float tiling_factor = 10; // number of texture repetitions; you could also
                          // specify a separate factor for each texture
float4 PS_TexSplatting(float2 tex_coord : TEXCOORD0)
{
float3 color = float3(0, 0, 0);
float3 mix = tex2D(alpha_sampler, tex_coord).rgb;
color += tex2D(tex1_sampler, tex_coord * tiling_factor).rgb * mix.r;
color += tex2D(tex2_sampler, tex_coord * tiling_factor).rgb * mix.g;
color += tex2D(tex3_sampler, tex_coord * tiling_factor).rgb * mix.b;
return float4(color, 1);
}
If your application supports multi-pass rendering you should use it.
You should use a multi-pass shader approach, where you render the base object with the tiled stone texture in the first pass, and on top render the decal passes with different shaders and different detail textures with separate transparency alpha maps.
(The transparency map could also be stored in your detail texture, but keeping it separate allows different tile levels and more flexibility in reusing it.)
Additionally, you can use different texture coordinate channels for each decal pass, so that you do not need to hardcode your tile level.
So at minimum you need two shaders, where Shader 2 is used once per decal you need:
Shader 1 renders the tiled base texture.
Shader 2 renders one tiled detail texture using a separate transparency map.
If you have multiple decals, z-fighting can occur, and you should offset your polygons a little (very similar to basic fur rendering).
Otherwise you need a single shader which takes multiple textures and lays them on top of the base tiled texture. This solution is less flexible, but you can use one texture for the mix between the textures (the equivalent of your 2D array).
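A sketch of that pass ordering in C++; Mesh, Effect, Texture and Decal are hypothetical stand-ins for whatever your engine provides:
#include <vector>

// Hypothetical engine types, just to make the pass ordering concrete.
struct Texture {};
struct Decal { Texture detail; Texture mask; };
struct Effect {
    void setTexture(const char* /*slot*/, const Texture& /*t*/) {}
};
struct Mesh { void drawWith(Effect& /*fx*/) {} };

// Multi-pass approach: one opaque base pass, then one alpha-blended pass
// per decal, each with its own detail tile and transparency map.
void drawTiledGround(Mesh& ground, Effect& basePass, Effect& decalPass,
                     const Texture& baseTile, const std::vector<Decal>& decals)
{
    basePass.setTexture("tileMap", baseTile);
    ground.drawWith(basePass);                 // pass 1: tiled base texture

    for (const Decal& d : decals) {            // passes 2..n
        decalPass.setTexture("tileMap", d.detail);
        decalPass.setTexture("alphaMap", d.mask);
        ground.drawWith(decalPass);            // blend: src*a + dst*(1-a)
    }
}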

shade border of 2D polygon differently

We are programming a 2D game in XNA. We have polygons which define our level elements. They are triangulated so that we can easily render them. Now I would like to write a shader which renders the polygons as outlined textures: in the middle of the polygon one would see the texture, and on the border it should somehow glow.
My first idea was to walk along the polygon and draw a quad on each line segment with a specific texture. This works, but looks strange at small corners, where the textures are forced to overlap.
My second approach was to mark all border vertices with some kind of normal pointing out of the polygon. Passing this to the shader would interpolate the normals across the edges of the triangulation, and I could use the interpolated "normal" as a value for shading. I have not tested it yet - would that work? A special property of the triangulation is that all vertices are on the border, so there are no vertices inside the polygon.
Do you guys have a better idea for what I want to achieve?
Here is a picture of what it looks like right now with the quad solution:
You could render your object twice: a bigger, stretched version behind the first one. Not ideal, since a complex object cannot be stretched uniformly to create a border.
If you have access to your screen buffer, you can render your glow components into a render target, align a full-screen quad to your viewport, and apply a full-screen 2D silhouette filter to it.
This way you gain perfect control over the edge by defining its radius, colour, and blur. With additional output values, such as the RGB values from the object render pass, you can even have different advanced glows.
I think RenderMonkey had some examples in its shader editor. It's definitely a good starting point to work with and try things out.
Probably you want to calculate a new list of border vertices (the fill is then easy, e.g. a triangle strip between the new vertices and the originals). If you use a constant border width and a convex polygon, it is just:
B_new = B - (BtoA.normalised() + BtoC.normalised()).normalised() * width;
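Spelled out as a self-contained C++ sketch (Vec2 and its operators are minimal stand-ins for your math library):
#include <cmath>

struct Vec2 { float x, y; };

static Vec2 operator-(Vec2 a, Vec2 b) { return { a.x - b.x, a.y - b.y }; }
static Vec2 operator+(Vec2 a, Vec2 b) { return { a.x + b.x, a.y + b.y }; }
static Vec2 operator*(Vec2 a, float s) { return { a.x * s, a.y * s }; }

static Vec2 normalised(Vec2 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y);
    return { v.x / len, v.y / len };
}

// Offset vertex B along the angle bisector of its two edges, where A and C
// are its neighbours on a convex, consistently wound polygon. The minus
// moves B away from the interior; flip the sign for an inward offset.
Vec2 borderVertex(Vec2 A, Vec2 B, Vec2 C, float width) {
    Vec2 bisector = normalised(normalised(A - B) + normalised(C - B));
    return B - bisector * width;
}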
If not, it gets more complicated; here is my old but pretty universal solution:
//Helper function. To work right, v1 must come before v2 in the vertex list
//and the vertices must be consistently wound (clockwise or anticlockwise).
//Returns a signed angle in radians: negative when the turn is to the other
//side, which with consistent winding corresponds to an angle over 180 degrees.
float vectorAngle(Vector2 v1, Vector2 v2){
    if (!v1.isNormalised())
        v1.normalise();
    if (!v2.isNormalised())
        v2.normalise();
    float alpha = v1.dotProduct(v2);
    //Rotate v1 by 90 degrees to test which side v2 lies on.
    float help = v1.x;
    v1.x = v1.y;
    v1.y = -help;
    float angle = Math::ACos(alpha);
    if (v1.dotProduct(v2) < 0){
        angle = -angle;
    }
    return angle;
}
//Normally don't use this directly!
Vector2 calculateBorderPoint(Vector2 vec1, Vector2 vec2, float width1, float width2){
    vec1.normalise();
    vec2.normalise();
    float cos = vec1.dotProduct(vec2); //Cosine of the angle between the two (normalised) vectors
    float csc = 1.0f / Math::sqrt(1.0f - cos*cos); //Cosecant of that angle. This returns NaN if the angle is 180!
    //And the rest of the magic
    Vector2 difference = (vec1 * csc * width2) + (vec2 * csc * width1);
    //If you use only convex polygons (all angles < 180; exactly 180 is not
    //allowed in this case) just return the value; if not, more magic is needed.
    //Both of the next fixes need ordered vertex lists!
    //The output vector always points to the inside of the angle, so flip it
    //when the angle is over 180 degrees. Note that the function can only
    //detect this if you use ordered vertices (consistent winding throughout).
    if (vectorAngle(vec1, vec2) < 0.0f)
        difference = -difference;
    //Ok, and if the angle was exactly 180... (again, detectable only with
    //ordered vertices)
    if (difference.isNaN()){
        float width = (width1 + width2) / 2.0f; //If the angle is 180 and the border widths differ, there is no perfect answer ;)
        difference = vec1 * width;
        //Just turn the vector -90 degrees
        float swapHelp = difference.y;
        difference.y = -difference.x;
        difference.x = swapHelp;
    }
    //If you want the output outside the old polygon instead of inside, just "return -difference;"
    return difference;
}
//Use this =)
Vector2 calculateBorderPoint(Vector2 A, Vector2 B, Vector2 C, float widthA, float widthB){
    return B + calculateBorderPoint(A-B, C-B, widthA, widthB);
}
Your second approach should be possible:
Mark the outer vertices (on the border) with 1 and the inner vertices (inside) with 0.
In the pixel shader you can then highlight those fragments whose interpolated value is greater than, say, 0.9 or 0.8.
It should work.

DirectX Sphere Texture Coordinates

I have a sphere with per-vertex normals and I'm trying to derive the texture coordinates for the object using the algorithm:
U = Asin(Norm.X) / PI + 0.5
V = Asin(Norm.Y) / PI + 0.5
With a polka dot texture, I get:
Here's the same object without the texture applied:
The issue I'm particularly looking at (I know there are a few) is the misalignment of the textures.
I am inclined to believe the issue resides in my use of those algorithms, as the specular highlighting (which doesn't utilise any textures but does rely on the normals being correct) appears to have no artifacts.
Any ideas?
Can't you just set your UVs while you are building the sphere?
Then:
u = theta / (2 * PI);
v = phi / PI;
Edit: I might also point out that there is probably something wrong with your normals, given the black dot on top ... There also appear to be highlighted lines along polygon edges. This again points to probably dodgy normals ...
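For reference, a sketch in C++ of deriving UVs from the unit normal with the full spherical mapping - atan2 for the longitude rather than asin, which is likely the source of the misalignment:
#include <cmath>

// Spherical mapping from a unit normal: theta is the angle around the Y
// axis (longitude), phi the angle from the equator (latitude).
void sphereUV(float nx, float ny, float nz, float& u, float& v) {
    const float PI = 3.14159265358979f;
    u = 0.5f + std::atan2(nz, nx) / (2.0f * PI);
    v = 0.5f - std::asin(ny) / PI;
}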
