I can't tint my sprite with a color shader. Here is the problem:
The image on the left is from XNA, and the right one is from GIMP. The colored sprite still has some blue pixels. The blue color is used to pass the tint. I think scaling or anti-aliasing messes it up. Here is my code:
sampler s0; // sprite texture (bound by SpriteBatch)

float4 PixelShaderFunction(float2 coords : TEXCOORD0) : COLOR0
{
    float4 color = tex2D(s0, coords);
    float3 white = { 1, 1, 1 };
    float3 colorTint = { 0.3f, 0, 0 };
    // Only touch visible pixels that match the blue "key" color
    if (color.a)
    {
        if (color.r < 0.1 && color.g < 0.1 && color.b > 0.1)
        {
            // Darker blues map toward the tint, lighter ones toward white
            color.rgb = lerp(white, colorTint.rgb, 1 - color.b);
            color.a = 1;
        }
    }
    return color;
}
Edit:
It looks like setting the sampler state to "SamplerState.PointClamp" is my solution, but maybe someone can tell me what's going on here?
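What's going on: with the default linear sampler, scaling makes the GPU average each keyed blue texel with its neighbours before the shader sees it, so the sampled value no longer passes the exact threshold test; PointClamp returns the nearest texel unfiltered. A quick sketch of the effect in plain Python (hypothetical texel values, reproducing the shader's threshold):

```python
# A bilinear sample halfway between a "key" blue texel and a white one
# is their average, which the shader's threshold test no longer matches.
blue  = (0.0, 0.0, 1.0)   # tint-key colour in the source image
white = (1.0, 1.0, 1.0)   # neighbouring sprite pixel
sample = tuple((a + b) / 2 for a, b in zip(blue, white))

def is_key(r, g, b):
    # same test as the pixel shader
    return r < 0.1 and g < 0.1 and b > 0.1

print(is_key(*blue), is_key(*sample))  # True False
```

Point sampling never produces these in-between values, which is why PointClamp fixes it.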
I'm applying the concept of metaballs to a game I'm making in order to show that the player has selected a few ships, like so: http://prntscr.com/klgktf
However, my goal is to keep a constant thickness for this outline, and that's not what I'm getting with the current code.
I'm using a GLSL shader to do this, and I pass the fragment shader a uniform array of positions for the ships (u_metaballs).
Vertex shader:
#version 120
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Fragment shader:
#version 120
uniform vec2 u_metaballs[128];
void main() {
    float intensity = 0.0;
    // stop at the first empty slot (x == 0 marks the end of the list)
    for (int i = 0; i < 128 && u_metaballs[i].x != 0.0; i++) {
        float r = length(u_metaballs[i] - gl_FragCoord.xy);
        intensity += 1.0 / r;
    }
    gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
    // keep only a thin intensity band -> the outline
    if (intensity > 0.2 && intensity < 0.21)
        gl_FragColor = vec4(0.5, 1.0, 0.7, 0.2);
}
I've tried playing around with the intensity ranges, and even changing 1 / r to 10000 / (r ^ 4), which (although it makes no sense) helps a bit, though it does not fix the problem.
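The varying thickness falls out of the math: the band between the two iso-levels is wide where the summed field changes slowly and narrow where it changes fast, and the field's gradient depends on how many balls are nearby. A numeric sketch in plain Python (hypothetical ball spacing) along a ray from one ball:

```python
# One ball: intensity i(r) = 1/r crosses the [0.2, 0.21] window between
# r = 1/0.21 and r = 1/0.2, a band about 0.24 units wide.
single = 1 / 0.2 - 1 / 0.21

# Add a second ball 8 units away along the same ray: i(r) = 1/r + 1/(8+r).
# The iso-levels move outward to where the field changes more slowly,
# so the same window spans a wider band (about 0.42 units).
def field(r):
    return 1 / r + 1 / (8 + r)

def crossing(level, r=0.01, step=1e-4):
    # walk outward until the field drops to the given iso-level
    while field(r) > level:
        r += step
    return r

two = crossing(0.2) - crossing(0.21)
print(round(single, 2), round(two, 2))  # 0.24 0.42
```

This is why thresholding a distance instead of the summed intensity, as the answer below does, keeps the outline width constant.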
Any help or suggestions would be greatly appreciated.
After some more thought, it is doable even in a single pass: you just compute the distance to the nearest metaball and, if it falls within the border band, render the fragment; otherwise discard it. Here is an example (assuming a single quad <-1,+1> is rendered covering the whole screen):
Vertex:
// Vertex
varying vec2 pos; // fragment position in world space
void main()
{
    pos = gl_Vertex.xy;
    gl_Position = ftransform();
}
Fragment:
// Fragment
#version 120
varying vec2 pos;
const float r=0.3; // metaball radius
const float w=0.02; // border line thickness
const vec2 u_metaballs[5] = vec2[5]
    (
    vec2(-0.25,-0.25),
    vec2(+0.25,-0.25),
    vec2( 0.00,+0.05),
    vec2(+0.30,+0.35),
    vec2(-1000.1,-1000.1) // end-of-metaballs sentinel
    );
// (const so the list can be initialized in-shader; GLSL 1.20 does not
// allow initializers on uniforms -- in a real program keep this a
// uniform and fill it from the application.)
void main()
{
    int i;
    float d;
    // d = min distance to any metaball
    for (d=r+r+w+w,i=0;u_metaballs[i].x>-1000.0;i++)
        d=min(d,length(pos-u_metaballs[i].xy));
    // if outside the border band, ignore the fragment
    if ((d<r)||(d>r+w)) discard;
    // otherwise render it
    gl_FragColor=vec4(1.0,1.0,1.0,1.0);
}
Preview:
(Sorry for my bad English.)
I'm new to Stack Overflow and am writing a 3D game application with the MS Visual C++ 2015 compiler, Direct3D 9 and HLSL (shader model 3.0).
I've implemented a deferred rendering logic with 4 render target textures.
I stored the depth values of pixels in a render target texture and created a shadow map. Here are the results. (All meshes are black because they are small and close to the camera. The far plane distance is 1000.0f.)
The depth texture and the shadow map.
I rendered a full-screen quad with the shadow mapping shaders and output the shadows in red to confirm the shader is working correctly.
But it seems that the shaders output wrong results: the shadow map texture repeats across the mesh surfaces.
https://www.youtube.com/watch?v=1URGgoCR6Zc
Here is the shadow mapping vertex shader to draw the quad.
struct VsInput {
    float4 position : POSITION0;
};
struct VsOutput {
    float4 position : POSITION0;
    float4 cameraViewRay : TEXCOORD0;
};
float4x4 matInverseCameraViewProjection;
float4 cameraWorldPosition;
float farDistance;
VsOutput vs_main(VsInput input) {
    VsOutput output = (VsOutput)0;
    output.position = input.position;
    // Build a world-space ray from the camera through this vertex at the far plane
    output.cameraViewRay = mul(float4(input.position.xy, 1.0f, 1.0f) * farDistance, matInverseCameraViewProjection);
    output.cameraViewRay /= output.cameraViewRay.w;
    output.cameraViewRay.xyz -= cameraWorldPosition.xyz;
    return output;
}
And here is the shadow mapping pixel shader to draw the quad.
struct PsInput {
    float2 screenPosition : VPOS;
    float4 viewRay : TEXCOORD0;
};
struct PsOutput {
    float4 color : COLOR0;
};
texture depthMap;
texture shadowMap;
sampler depthMapSampler = sampler_state {
    Texture = (depthMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = POINT;
    MinFilter = POINT;
    MipFilter = POINT;
};
sampler shadowMapSampler = sampler_state {
    Texture = (shadowMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = POINT;
    MinFilter = POINT;
    MipFilter = POINT;
};
//float4x4 matCameraView;
float4x4 matLightView;
float4x4 matLightProjection;
float4 cameraWorldPosition;
float4 lightWorldPosition;
float2 halfPixel;
float epsilon;
float farDistance;
PsOutput ps_main(PsInput input) {
    PsOutput output = (PsOutput)0;
    output.color.a = 1.0f;
    //Reconstruct the world position using the view-space linear depth value.
    float2 textureUv = input.screenPosition * halfPixel * 2.0f - halfPixel;
    float viewDepth = tex2D(depthMapSampler, textureUv).r;
    float3 eye = input.viewRay.xyz * viewDepth;
    float4 worldPosition = float4((eye + cameraWorldPosition.xyz), 1.0f);
    //Test if the reconstructed world position has right coordinate values.
    //output.color = mul(worldPosition, matCameraView).z / farDistance;
    float4 positionInLightView = mul(worldPosition, matLightView);
    float lightDepth = positionInLightView.z / farDistance;
    float4 positionInLightProjection = mul(positionInLightView, matLightProjection);
    positionInLightProjection /= positionInLightProjection.w;
    //If-statement doesn't work???
    float condition = positionInLightProjection.x >= -1.0f;
    condition *= positionInLightProjection.x <= 1.0f;
    condition *= positionInLightProjection.y >= -1.0f;
    condition *= positionInLightProjection.y <= 1.0f;
    condition *= positionInLightProjection.z >= 0.0f;
    condition *= positionInLightProjection.z <= 1.0f;
    condition *= positionInLightProjection.w > 0.0f;
    float2 shadowMapUv = float2(
        positionInLightProjection.x * 0.5f + 0.5f,
        -positionInLightProjection.y * 0.5f + 0.5f
    );
    //If-statement doesn't work???
    float condition2 = shadowMapUv.x >= 0.0f;
    condition2 *= shadowMapUv.x <= 1.0f;
    condition2 *= shadowMapUv.y >= 0.0f;
    condition2 *= shadowMapUv.y <= 1.0f;
    float viewDepthInShadowMap = tex2D(shadowMapSampler, shadowMapUv).r;
    output.color.r = lightDepth > viewDepthInShadowMap + epsilon;
    output.color.r *= condition;
    output.color.r *= condition2;
    return output;
}
It seems that the uv for the shadow map has some wrong values, but I can't figure out what the real problem is.
Many thanks for any help.
EDIT: I've updated the shader code. I decided to use view-space linear depth and confirmed that the world position has the right value. I really don't understand why the shadow map coordinate values are wrong...
It really looks like you are using a wrong bias. Google "shadow acne" and you should find the answer to your problem. The resolution of the shadow map could also be an issue.
I found the solution.
The first problem was that the render target texture had the wrong texture format. I should have used D3DFMT_R32F. (I had used D3DFMT_A8R8G8B8.)
And I added these lines to my shadow mapping pixel shader.
//Reconstruct the world position using the view-space linear depth value.
float2 textureUv = input.screenPosition * halfPixel * 2.0f - halfPixel;
float4 viewPosition = float4(input.viewRay.xyz * tex2D(depthMapSampler, textureUv).r, 1.0f);
float4 worldPosition = mul(viewPosition, matInverseCameraView);
...
//If-statement doesn't work???
float condition = positionInLightProjection.x >= -1.0f;
condition *= positionInLightProjection.x <= 1.0f;
condition *= positionInLightProjection.y >= -1.0f;
condition *= positionInLightProjection.y <= 1.0f;
condition *= positionInLightProjection.z >= 0.0f;
condition *= positionInLightProjection.z <= 1.0f;
condition *= viewPosition.z < farDistance;
The last line was the key and solved my second problem. The 'farDistance' is the far plane distance of the camera frustum. I'm still trying to understand why it is needed.
You can use saturate to clamp positionInLightProjection and compare the result against the unsaturated variable. This way you can verify that positionInLightProjection is within 0..1.
if ((saturate(positionInLightProjection.x) == positionInLightProjection.x) &&
    (saturate(positionInLightProjection.y) == positionInLightProjection.y)) {
    // we are in the view of the light
    // todo: compare depth values from the shadow map and the current scene depth
} else {
    // this is shadow for sure!
}
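The identity this relies on: saturate clamps to [0, 1], so saturate(x) == x holds exactly when x is already in range. A quick check of the equivalence in plain Python (saturate modeled with min/max):

```python
def saturate(x):
    # HLSL saturate: clamp to [0, 1]
    return min(max(x, 0.0), 1.0)

def inside_unit_range(x):
    # saturate(x) == x is true exactly for x already in [0, 1]
    return saturate(x) == x

print([inside_unit_range(v) for v in (-0.5, 0.0, 0.7, 1.0, 1.2)])
# [False, True, True, True, False]
```

Note that after the perspective divide the xy components span [-1, 1], so the equivalent check on shadowMapUv (already remapped to [0, 1]) may be the more direct fit.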
I am working on a shader for Unity in which I want to change the color based on a mask image. In this mask image the RGB channels each stand for a color that can be chosen in the shader. The idea behind the shader is that it is easy to change the look of an object without having to change the texture by hand.
Shader "Custom/MultiColor" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _MaskTex ("Mask area (RGB)", 2D) = "black" {}
        _ColorR ("Red Color", Color) = (1,1,1,1)
        _ColorG ("Green Color", Color) = (1,1,1,1)
        _ColorB ("Blue Color", Color) = (1,1,1,1)
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 200

        CGPROGRAM
        #pragma surface surf Lambert

        sampler2D _MainTex;
        sampler2D _MaskTex;
        half4 _ColorR;
        half4 _ColorG;
        half4 _ColorB;
        half4 _MaskMult;

        struct Input {
            float2 uv_MainTex;
        };

        void surf (Input IN, inout SurfaceOutput o) {
            half4 main = tex2D (_MainTex, IN.uv_MainTex);
            half4 mask = tex2D (_MaskTex, IN.uv_MainTex);
            half3 cr = main.rgb * _ColorR.rgb;
            half3 cg = main.rgb * _ColorG.rgb;
            half3 cb = main.rgb * _ColorB.rgb;
            half r = mask.r;
            half g = mask.g;
            half b = mask.b;
            half minv = min(r + g + b, 1);
            half3 cf = lerp(lerp(cr, cg, g*(r+g)), cb, b*(r+g+b));
            half3 c = lerp(main.rgb, cf, minv);
            o.Albedo = c.rgb;
            o.Alpha = main.a;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
The problem with the shader is the blending between the masked colors based on the green and blue channels. Between those colors, the color that is supposed to come from the red region shows through. A sample is visible below.
The red color is created by the red mask, the green by the green mask, and the desert yellow by the blue region. I do not know why this happens or how to solve it.
Best guess: anti-aliasing or image compression. Aliasing (on the brush you're using) will cause an overlap in the color channels, causing them to mix. Compression usually works by averaging each pixel's color info based on the colors around it (JPEG is especially notorious for this).
Troubleshoot by using a straight pixel-based brush (no aliasing, no rounded edges) in Photoshop (or whatever image suite you're using) and/or try changing the colors through your shader and see how they mix; doing either should give you a better idea of what's going on under the hood. Combining this with a lossless/uncompressed image type, such as .tga, should help, though it may use more memory.
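If the channels do overlap after aliasing or compression, the nested lerps make the result depend on evaluation order. One alternative is a weight-normalized blend, where overlapping channels mix proportionally; a sketch of the arithmetic in plain Python (hypothetical tint colors and one anti-aliased mask texel):

```python
# Three tints and a mask texel where red and green overlap 50/50.
color_r, color_g, color_b = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.9, 0.8, 0.3)
mask = (0.5, 0.5, 0.0)

total = sum(mask)
# Weighted average of the tints; max() guards against division by zero
# on fully unmasked texels.
tint = tuple(
    sum(c[i] * w for c, w in zip((color_r, color_g, color_b), mask)) / max(total, 1e-5)
    for i in range(3)
)
print(tint)  # (0.5, 0.5, 0.0): an even red/green mix, no third color bleeding in
```

The same formula ports directly to the surface shader: multiply the result by main.rgb and lerp with saturate(total) as the final mask weight.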
Is it possible to get GLSL to produce this:
This is my fragment shader:
#version 120
uniform sampler2D diffuse;
varying vec3 shared_colors;
varying vec2 shared_texCoords;
void main() {
    vec4 color = vec4(shared_colors, 1);
    vec4 texture = texture2D(diffuse, shared_texCoords);
    vec4 finalColor = vec4(mix(color.rgb, texture.rgb, texture.a), 1);
    vec4 fCol = color * texture;
    gl_FragColor = fCol;
}
My results are:
finalColor = red color, no texture
fCol = no color (black), red texture
I want to set the color of the object and have that show through wherever the alpha of the texture is less than 1...
Apparently, my texture was not loaded correctly after all: it gave me a constant alpha value of 0, so when I called mix() in the GLSL it ended up canceling the color out completely.
I loaded the texture like this:
SDL_CreateRGBSurface(NULL, rawImage->w, rawImage->h, 32, 0, 0, 0, 0);
It was solved by setting it to this, so it actually considers the alpha channel correctly:
SDL_CreateRGBSurface(NULL, rawImage->w, rawImage->h, 32, 0x00ff0000, 0x0000ff00, 0x000000ff, 0xff000000);
Also, after that I found out my texture was loaded with reversed RGB channels (meaning it used BGR), so I also changed how the texture is merged in the shader.
From:
vec4 finalColor = vec4(mix(color.rgb, texture.rgb, texture.a), 1);
To:
vec4 finalColor = vec4(mix(color.rgb, texture.bgr, texture.a), 1);
XNA doesn't have any methods that support circle drawing.
Normally, when I had to draw a circle, always in the same color, I just made an image of that circle and displayed it as a sprite.
But now the color of the circle is specified at runtime. Any ideas how to deal with that?
You can simply make an image of a circle with a transparent background and the coloured part of the circle as white. Then, when it comes to drawing the circles in the Draw() method, select the tint you want:
Texture2D circle = CreateCircle(100);
// Change Color.Red to the colour you want
spriteBatch.Draw(circle, new Vector2(30, 30), Color.Red);
Just for fun, here is the CreateCircle method:
public Texture2D CreateCircle(int radius)
{
    int outerRadius = radius * 2 + 2; // So circle doesn't go out of bounds
    Texture2D texture = new Texture2D(GraphicsDevice, outerRadius, outerRadius);
    Color[] data = new Color[outerRadius * outerRadius];

    // Colour the entire texture transparent first.
    for (int i = 0; i < data.Length; i++)
        data[i] = Color.TransparentWhite;

    // Work out the minimum step necessary using trigonometry + sine approximation.
    double angleStep = 1f / radius;
    for (double angle = 0; angle < Math.PI * 2; angle += angleStep)
    {
        // Use the parametric definition of a circle: http://en.wikipedia.org/wiki/Circle#Cartesian_coordinates
        int x = (int)Math.Round(radius + radius * Math.Cos(angle));
        int y = (int)Math.Round(radius + radius * Math.Sin(angle));
        data[y * outerRadius + x + 1] = Color.White;
    }
    texture.SetData(data);
    return texture;
}