Shader for counting number of pixels

I'm looking for a shader, in Cg or HLSL, that can count the number of red pixels, or pixels of any other color I choose.

You could do this with an atomic counter in a fragment shader. Test the output color to see whether it's within a certain tolerance of red and, if so, increment the counter. After the draw call you can read the counter's value back on the CPU and do whatever you like with it.
Edit: added a very simple example fragment shader:
// Atomic counters require 4.2 or higher according to
// https://www.opengl.org/wiki/Atomic_Counter
#version 440
// Since this is a full-screen quad rendering,
// the only input we care about is texture coordinate.
in vec2 texCoord;
// Screen resolution
uniform vec2 screenRes;
// Texture info in case we use it for some reason
uniform sampler2D tex;
// Atomic counters! INCREDIBLE POWER
layout(binding = 0, offset = 0) uniform atomic_uint ac1;
// Output variable!
out vec4 colorOut;
bool isRed(vec4 c)
{
return c.r > c.g && c.r > c.b;
}
void main()
{
vec4 result = texture(tex, texCoord);
if (isRed(result))
{
atomicCounterIncrement(ac1); // returns the pre-increment value, unused here
}
colorOut = result;
}
You would also need to set up the atomic counter in your code:
GLuint acBuffer = 0;
glGenBuffers(1, &acBuffer);
glBindBuffer(GL_ATOMIC_COUNTER_BUFFER, acBuffer);
glBufferData(GL_ATOMIC_COUNTER_BUFFER, sizeof(GLuint), NULL, GL_DYNAMIC_DRAW);
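To get the result back, bind the buffer to binding point 0 (matching the layout in the shader) before drawing, then read it after the draw call. A minimal sketch; the barrier makes the shader's atomic writes visible to the buffer read:
glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 0, acBuffer);
// ... issue the draw call ...
glMemoryBarrier(GL_BUFFER_UPDATE_BARRIER_BIT);
GLuint count = 0;
glGetBufferSubData(GL_ATOMIC_COUNTER_BUFFER, 0, sizeof(GLuint), &count);
Remember to reset the counter to zero (e.g. with glBufferSubData) before the next draw, or the value will keep accumulating across frames.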

How to use multiple textures in Vulkan LLVMpipe (in Docker on CPU)

I'm developing an offscreen Vulkan-based render server that performs 2D scene drawing per request.
Target platform: Ubuntu 18.04 in a Docker container
Physical device: llvmpipe (LLVM 11.0.1, 256 bits)
The scene consists of meshes of the same type and textures of different sizes. Each mesh is bound to its own texture. The maximum number of scene elements is 200.
I have just one material (vertex + fragment shaders), so I use just one pipeline.
High-level description of my workflow:
1) Setup framebuffer and readback image
2) Load all meshes (VBOs and IBOs)
3) Load all textures (images, views, samplers)
4) Create a descriptor set for what the material exposes (mesh transform and texture sampler)
5) Put per-mesh parameters (transform matrices) into a storage buffer
6) Update the fixed array of texture samplers
7) Draw each mesh
8) Send the readback image in the response.
That works great on a dedicated GPU, but llvmpipe supports neither VK_EXT_descriptor_indexing nor the shaderSampledImageArrayDynamicIndexing feature. This means I can't index (in shaders) into the texture sampler array with a value from push constants.
#version 450
layout(location = 0) in vec2 uv;
layout(location = 0) out vec4 outColor;
layout(set = 0, binding = 2) uniform sampler2D textures[200];
layout(push_constant) uniform Constants
{
uint id;
} meta;
void main()
{
// ...
vec4 t = texture(textures[meta.id], uv); // fails on llvmpipe: index comes from a push constant
// ...
}
To use only one sampler I need:
clear(framebuffer)
for mesh in meshes
{
bind(mesh.vbo)
bind(mesh.ibo)
bind(descriptorset)
update(sampler) // write current mesh texture
submit()
}
read(readback)
...
I don't understand how to set up the render pass to perform these steps; the submit() in the middle of this approach confuses me.
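Presumably the way to avoid the mid-loop submit() is to record everything into a single command buffer inside one render pass, binding a different pre-written descriptor set per mesh instead of updating a single sampler between draws. A rough sketch of what I mean (mesh.descriptorSet and the other names are hypothetical):
vkCmdBeginRenderPass(cmd, &renderPassInfo, VK_SUBPASS_CONTENTS_INLINE);
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
for (const Mesh& mesh : meshes)
{
VkDeviceSize offset = 0;
vkCmdBindVertexBuffers(cmd, 0, 1, &mesh.vbo, &offset);
vkCmdBindIndexBuffer(cmd, mesh.ibo, 0, VK_INDEX_TYPE_UINT32);
// one descriptor set per mesh, written up front; nothing is updated mid-frame
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS,
pipelineLayout, 0, 1, &mesh.descriptorSet, 0, nullptr);
vkCmdDrawIndexed(cmd, mesh.indexCount, 1, 0, 0, 0);
}
vkCmdEndRenderPass(cmd);
// one submit for the whole scene, then read back the image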
Could you help me?
I tried another approach, based on storage texel buffers:
1. Get the max size of a texel buffer from the device limits (maxTexelBufferElements)
2. Split the scene data into chunks limited by maxTexelBufferElements
3. Set up the framebuffer and clear it
4. Draw chunk[i]
5. Read back the result
In this case no samplers are required.
I put N images into a 1D array and pass it to the fragment shader. In the shader I calculate the index of the specific texel and fetch it with imageLoad(...):
layout(location = 0) in vec2 uv;
layout(location = 0) out vec4 outColor;
layout(set = 0, binding = 2, rgba32f) uniform imageBuffer texels;
layout(push_constant) uniform Constants
{
uint id;
uint textureStart;
uint textureWidth;
uint textureHeight;
} meta;
void main()
{
// calculate the texel's coordinates within its own texture
uint s = uint(uv.x * float(meta.textureWidth));
uint t = uint(uv.y * float(meta.textureHeight));
// calculate the texel's index in the global array
int index = int(meta.textureStart + s + t * meta.textureWidth);
outColor = imageLoad(texels, index);
}
The start offset of each texture is passed in push constants.

Tween the texture on a TextureButton / TextureRect: fade out Image1 while simultaneously fading in Image2

Character portrait selection: clicking next loads the next image in an array, clicking back loads the previous one. Instead of a sharp change from one image to another, I want a variable-speed fade-out of the current image and fade-in of the new one. Dissolve/render effects would be nice, but even an opacity tween 100 -> 0 / 0 -> 100 over x seconds would do.
I'd really prefer not to stack multiple objects on top of each other and alternate between them as the "current texture".
Is this possible?
We can do fade-in and fade-out by animating modulate, which is the simple solution.
For dissolve we can use shaders, and there is a lot we can do with shaders. There are plenty of dissolve shaders you can find online; I'll explain some useful variations, favoring ones that are easy to tinker with.
Fade-in and Fade-out
We can do this with a Tween object and either the modulate or self-modulate properties.
I would go ahead and create a Tween in code:
var tween:Tween
func _ready():
tween = Tween.new()
add_child(tween)
Then we can use interpolate_property to manipulate modulate:
var duration_seconds = 2
tween.interpolate_property(self, "modulate",
Color.white, Color.transparent, duration_seconds)
Don't forget to call start:
tween.start()
We can take advantage of yield, to add code that will execute when the tween is completed:
yield(tween, "tween_completed")
Then we change the texture:
self.texture = target_texture
And then interpolate modulate in the opposite direction:
tween.interpolate_property(self, "modulate",
Color.transparent, Color.white, duration_seconds)
tween.start()
Note that I'm using self, but you could be manipulating another node. Also, target_texture is whatever texture you want to transition to, loaded beforehand.
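Putting those steps together, a minimal crossfade helper might look like this (a sketch, assuming the tween created above and a target_texture loaded beforehand):
func crossfade_to(target_texture: Texture, duration_seconds: float = 2.0):
    # fade out the current texture
    tween.interpolate_property(self, "modulate",
        Color.white, Color.transparent, duration_seconds)
    tween.start()
    yield(tween, "tween_completed")
    # swap textures while fully transparent
    self.texture = target_texture
    # fade the new texture back in
    tween.interpolate_property(self, "modulate",
        Color.transparent, Color.white, duration_seconds)
    tween.start()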
Dissolve Texture
For any effect that requires both textures to be partially visible, use a custom shader. Go ahead and add a ShaderMaterial to your TextureRect (or similar), and give it a new Shader file.
This will be our starting point:
shader_type canvas_item;
void fragment()
{
COLOR = texture(TEXTURE, UV);
}
That is a shader that simply shows the texture; your TextureRect should look the same as it does without the shader material. Let us add the second texture with a uniform:
shader_type canvas_item;
uniform sampler2D target_texture;
void fragment()
{
COLOR = texture(TEXTURE, UV);
}
You should see a new entry under Shader Param in the Inspector panel for the new texture.
We also need another parameter to interpolate. It will be 0 to display the original Texture, and 1 for the alternative texture. In Godot we can add a hint for the range:
shader_type canvas_item;
uniform sampler2D target_texture;
uniform float weight: hint_range(0, 1);
void fragment()
{
COLOR = texture(TEXTURE, UV);
}
Under Shader Param in the Inspector panel you should now see the new float, with a slider that goes from 0 to 1.
It does nothing, of course. We still need the code to mix the textures:
shader_type canvas_item;
uniform sampler2D target_texture;
uniform float weight: hint_range(0, 1);
void fragment()
{
vec4 color_a = texture(TEXTURE, UV);
vec4 color_b = texture(target_texture, UV);
COLOR = mix(color_a, color_b, weight);
}
That will do. However, I'll do a little refactoring to ease modification later in this answer:
shader_type canvas_item;
uniform sampler2D target_texture;
uniform float weight: hint_range(0, 1);
float adjust_weight(float input, vec2 uv)
{
return input;
}
void fragment()
{
vec4 color_a = texture(TEXTURE, UV);
vec4 color_b = texture(target_texture, UV);
float adjusted_weight = adjust_weight(weight, UV);
COLOR = mix(color_a, color_b, adjusted_weight);
}
And now we manipulate it, again with a Tween. I'll assume you have a Tween created the same way as before, and that you already have your target_texture loaded.
We will start by setting the weight to 0, and target_texture:
self.material.set("shader_param/weight", 0)
self.material.set("shader_param/target_texture", target_texture)
We can tween weight:
var duration_seconds = 4
tween.interpolate_property(self.material, "shader_param/weight",
0, 1, duration_seconds)
tween.start()
yield(tween, "tween_completed")
And then change the texture:
self.texture = target_texture
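For reference, the whole dissolve sequence as one function (a sketch, same assumptions as before; note the weight reset at the end, which leaves the material ready for the next transition):
func dissolve_to(target_texture: Texture, duration_seconds: float = 4.0):
    # start fully on the current texture
    self.material.set("shader_param/weight", 0)
    self.material.set("shader_param/target_texture", target_texture)
    tween.interpolate_property(self.material, "shader_param/weight",
        0, 1, duration_seconds)
    tween.start()
    yield(tween, "tween_completed")
    # commit the new texture and reset the weight
    self.texture = target_texture
    self.material.set("shader_param/weight", 0)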
Making Dissolve Fancy
We can get fancy with our dissolve effect. For example, we can add another texture to control how fast different parts transition from one texture to the other:
uniform sampler2D transition_texture;
Set it to a new NoiseTexture (and don't forget to set the Noise property of the NoiseTexture). I'll be using the red channel of the texture.
A simple solution looks like this:
float adjust_weight(float input, vec2 uv)
{
float transition = texture(transition_texture, uv).r;
return min(1.0, input * (transition + 1.0));
}
Where the interpolation is always linear, and the transition controls the slope.
We can also do something like this:
float adjust_weight(float input, vec2 uv)
{
float transition = texture(transition_texture, uv).r;
float input_2 = input * input;
return input_2 + (input - input_2) * transition;
}
Which ensures that an input of 0 returns 0 and an input of 1 returns 1, while transition controls the curve in between.
If you plot x * x + (x - x * x) * y over the range 0 to 1 on both axes, you will see that when y (transition) is 1 the expression reduces to x (a straight line), but when y is 0 it reduces to x * x (a parabola).
Alternatively, we can change adjusted_weight into a step function:
float adjust_weight(float input, vec2 uv)
{
float transition = texture(transition_texture, uv).r;
return smoothstep(transition - 0.001, transition + 0.001, input);
}
Using smoothstep instead of a hard step avoids artifacts near 0. Note the small gap between the two edges: smoothstep with identical edges divides by zero, which is undefined.
This will not interpolate between the textures; instead, each pixel will change from one texture to the other at a different instant. If your noise texture is continuous, you will see the dissolve advance through the gradient.
Ah, but it does not have to be a noise texture! Any gradient will do. You can create a texture defining exactly how you want the dissolve to happen.
You probably can come up with other versions for that function.
Making Dissolve Edgy
We also could add an edge color. We need, of course, to add a color parameter:
uniform vec4 edge_color: hint_color;
And we will add that color at an offset of where we transition. We need to define that offset:
uniform float edge_weight_offset: hint_range(0, 1);
Now you can add this code:
float adjusted_weight = adjust_weight(max(0.0, weight - edge_weight_offset * (1.0 - step(1.0, weight))), UV);
float edge_weight = adjust_weight(weight, UV);
color_a = mix(color_a, edge_color, edge_weight);
Here the factor (1.0 - step(1.0, weight)) makes sure that when weight is 1 the offset is dropped, so we pass exactly 1, and the max makes sure the subtraction does not go negative, so a weight of 0 still passes 0. There must be another way to do it… How about this:
float weight_2 = weight * weight;
float adjusted_weight = adjust_weight(weight_2, UV);
float edge_weight = adjust_weight(weight_2 + (weight - weight_2) * edge_weight_offset, UV);
color_a = mix(color_a, edge_color, edge_weight);
OK, feel free to inline adjust_weight, whichever version you are using (this makes edges with the smoothstep version; with the others it blends a color into the transition).
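Assembled, the fragment function for this last variant might look like this (a sketch; adjust_weight is whichever version you picked above):
void fragment()
{
vec4 color_a = texture(TEXTURE, UV);
vec4 color_b = texture(target_texture, UV);
float weight_2 = weight * weight;
float adjusted_weight = adjust_weight(weight_2, UV);
float edge_weight = adjust_weight(weight_2 + (weight - weight_2) * edge_weight_offset, UV);
color_a = mix(color_a, edge_color, edge_weight);
COLOR = mix(color_a, color_b, adjusted_weight);
}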
Dissolve Alpha
It is not hard to modify the above shader to dissolve to alpha instead of dissolving to another texture. First of all, remove target_texture, and also remove color_b, which we no longer need. Then, instead of mix, we can do this:
COLOR = vec4(color_a.rgb, 1.0 - adjusted_weight);
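So the full shader for this variant might look like this (a sketch, using the transition-texture version of adjust_weight):
shader_type canvas_item;
uniform float weight: hint_range(0, 1);
uniform sampler2D transition_texture;
float adjust_weight(float input, vec2 uv)
{
float transition = texture(transition_texture, uv).r;
return smoothstep(transition - 0.001, transition + 0.001, input);
}
void fragment()
{
vec4 color_a = texture(TEXTURE, UV);
float adjusted_weight = adjust_weight(weight, UV);
COLOR = vec4(color_a.rgb, 1.0 - adjusted_weight);
}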
And to use it, do the same as before to transition out:
self.material.set("shader_param/weight", 0)
var duration_seconds = 2
tween.interpolate_property(self.material, "shader_param/weight",
0, 1, duration_seconds)
tween.start()
yield(tween, "tween_completed")
This will make the node transparent, so you can change the texture:
self.texture = target_texture
And transition in (with the new texture):
tween.interpolate_property(self.material, "shader_param/weight",
1, 0, duration_seconds)
tween.start()

How to draw lines and circles in a shader efficiently

I have used this website to create a shader that displays a snowman and some snowflakes:
http://glslsandbox.com/e#54840.8
In case the link doesn't work, here's the code:
#ifdef GL_ES
precision mediump float;
#endif
#extension GL_OES_standard_derivatives : enable
uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;
uniform sampler2D backbuffer;
#define PI 3.14159265
vec2 p;
float bt;
float seed=0.1;
float rand(){
seed+=fract(sin(seed)*seed*1000.0)+.123;
return mod(seed,1.0);
}
// No, I don't know why he looks so creepy
float thicc=.003;
vec3 color=vec3(1.);
vec3 border=vec3(.4);
void diff(float p){
if( (p)<thicc)
gl_FragColor.rgb=color;
}
void line(vec2 a, vec2 b){
vec2 q=p-a;
vec2 r=normalize(b-a);
if(dot(r,q)<0.){
diff(length(q));
return;
}
if(dot(r,q)>length(b-a)){
diff(length(p-b));
return;
}
vec2 rr=vec2(r.y,-r.x);
diff(abs(dot(rr,q)));
}
void circle(vec2 m,float r){
vec2 q=p-m;
vec3 c=color;
diff(length(q)-r);
color=border;
diff(abs(length(q)-r));
color=c;
}
void main() {
p=gl_FragCoord.xy/resolution.y;
bt=mod(time,4.*PI);
gl_FragColor.rgb=vec3(0.);
vec2 last;
//Body
circle(vec2(1.,.250),.230);
circle(vec2(1.,.520),.180);
circle(vec2(1.,.75),.13);
//Nose
color=vec3(1.,.4,.0);
line(vec2(1,.720),vec2(1.020,.740));
line(vec2(1,.720),vec2(.980,.740));
line(vec2(1.020,.740),vec2(.980,.740));
border=vec3(0);
color=vec3(1);
thicc=.006;
//Eyes
circle(vec2(.930,.800),.014);
circle(vec2(1.060,.800),.014);
color=vec3(.0);
thicc=0.;
//mouth
for(float x=0.;x<.1300;x+=.010)
circle(vec2(.930+x,.680+cos(x*40.0+.5)*.014),.005);
//buttons
for(float x=0.02;x<.450;x+=.070)
circle(vec2(1.000,.150+x),0.01);
color=vec3(0.9);
thicc=0.;
//snowflakes
for(int i=0;i<99;i++){
circle(vec2(rand()*2.0,mod(rand()-time,1.0)),0.01);
}
gl_FragColor.a=1.0;
}
The way it works is that, for each pixel on the screen, the shader checks for each element (button, body, head, eyes, mouth, carrot, snowflake) whether the pixel is inside its area, in which case it replaces the current color at that position with the current draw color.
So we have a complexity of O(pixel_width * pixel_height * elements), which leads to the shader slowing down when too many snowflakes are on screen.
So now I was wondering: how can this code be optimized? I already thought about using bounding boxes, or even a 3D octree (in 2D, I guess that would be a quadtree), to quickly discard elements that are far from a given pixel (or fragment).
Does anyone have another idea how to optimize this shader code? Keep in mind that every shader execution is completely independent of all the others, and I can't use any overarching structure.
You would need to break up your screen into regions ("tiles") and compute the snowflakes per tile. Tiles would have the same number of snowflakes and share the same seed, so that a particle leaving one tile's boundary has an identical particle entering the next tile, making it look seamless. The repeating pattern might still be apparent depending on your settings, but you could consider adding an extra per-tile transformation, potentially based on the final screen position.
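A sketch of that idea on top of the shader's existing helpers (tileSize is a new constant or uniform you would add; every tile runs the same loop with the same seed, so the vertical wrap-around is seamless):
float tileSize = 0.25;
vec2 pGlobal = p;
p = mod(pGlobal, tileSize); // draw in tile-local coordinates
seed = 0.1; // identical seed in every tile
for (int i = 0; i < 8; i++) // a few flakes per tile instead of 99 per screen
circle(vec2(rand(), mod(rand() - time, 1.0)) * tileSize, 0.01);
p = pGlobal;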
On a side note, your method for drawing circles could be more efficient by removing all conditional branching (and it would look anti-aliased in the process), and it could get rid of the square root generated by length().
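To illustrate: circle coverage can be computed branch-free from the squared distance, with smoothstep providing the anti-aliasing (a sketch; fwidth needs the standard-derivatives extension, which this shader already enables):
float circleMask(vec2 m, float r){
vec2 q = p - m;
float d2 = dot(q, q); // squared distance: no sqrt
float w = fwidth(d2); // roughly one pixel of transition band
return 1.0 - smoothstep(r * r - w, r * r + w, d2);
}
// then blend instead of branching:
// gl_FragColor.rgb = mix(gl_FragColor.rgb, color, circleMask(m, r));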

Calculating UV Coordinates in domain shader

I was trying to implement the terrain tutorial from Introduction to 3D Game Programming with DirectX 11 by Frank Luna. I succeeded in implementing it using the effect file.
When I tried to separate the vertex, hull, domain and pixel shaders, I got very strange behavior in the terrain textures. After debugging, I found that the problem is in how the UV texture coordinates are calculated in the domain shader.
Here is how I calculate the UV coordinates:
[domain("quad")]
DomainOut main(PatchTess patchTess,
float2 uv : SV_DomainLocation,
const OutputPatch<HullOut, 4> quad)
{
DomainOut dout;
// Bilinear interpolation.
dout.PosW = lerp(
lerp(quad[0].PosW, quad[1].PosW, uv.x),
lerp(quad[2].PosW, quad[3].PosW, uv.x),
uv.y);
dout.Tex = lerp(
lerp(quad[0].Tex, quad[1].Tex, uv.x),
lerp(quad[2].Tex, quad[3].Tex, uv.x),
uv.y);
// Tile layer textures over terrain.
dout.TiledTex = dout.Tex * 50.0f;
// Displacement mapping
dout.PosW.y = gHeightMap.SampleLevel(samHeightmap, dout.Tex, 0).r;
// NOTE: We tried computing the normal in the shader using finite difference,
// but the vertices move continuously with fractional_even which creates
// noticable light shimmering artifacts as the normal changes. Therefore,
// we moved the calculation to the pixel shader.
// Project to homogeneous clip space.
dout.PosH = mul(float4(dout.PosW, 1.0f), gViewProj);
return dout;
}
I am using quads for the domain shader.
Debugging with a graphics analyzer, I found that the data entering my separated domain shader is different from the data entering the effect-file version, although the same code is used in both files.
What can be the problem?
Update: the data stream that enters the domain shader differs between the effect file and the separated files; the interpolation equation itself is not the problem.
What makes the data stream different? Is there any way to change the order in which patches enter the domain shader from the hull shader?
This is the pixel shader code:
Texture2DArray gLayerMapArray : register(t3);
Texture2D gBlendMap : register(t1);
SamplerState samLinear
{
Filter = MIN_MAG_MIP_LINEAR;
AddressU = WRAP;
AddressV = WRAP;
AddressW = WRAP;
};
struct DomainOut
{
float4 PosH : SV_POSITION;
float3 PosW : POSITION;
float2 Tex : TEXCOORD0;
float2 TiledTex : TEXCOORD1;
};
float4 main(DomainOut pin) : SV_Target
{
//
// Texturing
//
float4 c0 = gLayerMapArray.Sample(samLinear, float3(pin.TiledTex, 0.0f));
float4 c1 = gLayerMapArray.Sample(samLinear, float3(pin.TiledTex, 1.0f));
float4 c2 = gLayerMapArray.Sample(samLinear, float3(pin.TiledTex, 2.0f));
float4 c3 = gLayerMapArray.Sample(samLinear, float3(pin.TiledTex, 3.0f));
// Sample the blend map.
float4 t = gBlendMap.Sample(samLinear, pin.Tex);
// Blend the layers on top of each other.
float4 texColor = c0;
texColor = lerp(texColor, c1, t.r);
texColor = lerp(texColor, c2, t.g);
texColor = lerp(texColor, c3, t.b);
return texColor;
}
Finally, the solution: I had to set the sampler state from the C++ side, even though a sampler state is declared in the shader. I don't know why, but this solved the problem.
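A likely explanation: the state-assignment block inside SamplerState (Filter = ...; AddressU = ...;) is an effects-framework feature, so when the shaders are compiled separately those assignments are ignored and the application has to create and bind the sampler itself. A minimal D3D11 sketch (binding to slot s0 is an assumption, since the shader doesn't declare a register for samLinear):
D3D11_SAMPLER_DESC sd = {};
sd.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sd.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
sd.MaxLOD = D3D11_FLOAT32_MAX;
ID3D11SamplerState* samplerState = nullptr;
device->CreateSamplerState(&sd, &samplerState);
// bind to every stage that samples: the domain shader (heightmap) and the pixel shader
context->DSSetSamplers(0, 1, &samplerState);
context->PSSetSamplers(0, 1, &samplerState);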

GLSL won't accept my implicit cast

I'm learning OpenGL 3.3, using some tutorials (http://opengl-tutorial.org). In the tutorial I'm using, there is a vertex shader which does the following:
Tutorial Shader source
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
// Values that stay constant for the whole mesh.
uniform mat4 MVP;
void main(){
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
}
Yet, when I try to emulate the same behavior in my application, I get the following:
error: implicit cast from "vec4" to "vec3".
After seeing this, I wasn't sure whether it was because I was using version 4.2 shaders as opposed to 3.3, so I changed everything to match what the author had been using, but still received the same error.
So, I changed my shader to do this:
My (latest) Source
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
uniform mat4 MVP;
void main()
{
vec4 a = vec4(vertexPosition_modelspace, 1);
gl_Position.xyz = MVP * a;
}
Which, of course, still produces the same error.
Does anyone know why this is the case, and what a solution might be? I'm not sure if it could be my calling code (which I've posted, just in case).
Calling Code
static const GLfloat T_VERTEX_BUF_DATA[] =
{
// x, y z
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f
};
static const GLushort T_ELEMENT_BUF_DATA[] =
{ 0, 1, 2 };
void TriangleDemo::Run(void)
{
glClear(GL_COLOR_BUFFER_BIT);
GLuint matrixID = glGetUniformLocation(mProgramID, "MVP");
glUseProgram(mProgramID);
glUniformMatrix4fv(matrixID, 1, GL_FALSE, &mMVP[0][0]); // This sends our transformation to the MVP uniform matrix, in the currently bound vertex shader
const GLuint vertexShaderID = 0;
glEnableVertexAttribArray(vertexShaderID);
glBindBuffer(GL_ARRAY_BUFFER, mVertexBuffer);
glVertexAttribPointer(
vertexShaderID, // Attribute index; matches layout(location = 0) in the vertex shader
3, // Number of components per vertex (x, y, z)
GL_FLOAT, // Type of each component in the vertex buffer
GL_FALSE, // Normalized?
0, // Stride (0 = tightly packed)
(void*)0 ); // Offset within the array buffer
glDrawArrays(GL_TRIANGLES, 0, 3); //0 => start index of the buffer, 3 => number of vertices
glDisableVertexAttribArray(vertexShaderID);
}
void TriangleDemo::Initialize(void)
{
glGenVertexArrays(1, &mVertexArrayID);
glBindVertexArray(mVertexArrayID);
glGenBuffers(1, &mVertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, mVertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(T_VERTEX_BUF_DATA), T_VERTEX_BUF_DATA, GL_STATIC_DRAW );
mProgramID = LoadShaders("v_Triangle", "f_Triangle");
glm::mat4 projection = glm::perspective(45.0f, 4.0f / 3.0f, 0.1f, 100.0f); // field of view, aspect ratio (4:3), 0.1 units near, to 100 units far
glm::mat4 view = glm::lookAt(
glm::vec3(4, 3, 3), // Camera is at (4, 3, 3) in world space
glm::vec3(0, 0, 0), // and looks at the origin
glm::vec3(0, 1, 0) // this is the up vector - the head of the camera is facing upwards. We'd use (0, -1, 0) to look upside down
);
glm::mat4 model = glm::mat4(1.0f); // set model matrix to identity matrix, meaning the model will be at the origin
mMVP = projection * view * model;
}
Notes
I'm in Visual Studio 2012
I'm using Shader Maker for the GLSL editing
I can't say what's wrong with the tutorial code.
In "My latest source" though, there's
gl_Position.xyz = MVP * a;
which looks weird because you're assigning a vec4 to a vec3.
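The fix for that line is to assign the full vec4, since gl_Position is a vec4:
gl_Position = MVP * a;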
EDIT
I can't reproduce your problem.
I have used a trivial fragment shader for testing...
#version 330 core
void main()
{
}
Testing "Tutorial Shader source":
3.3.11762 Core Profile Context
Log: Vertex shader was successfully compiled to run on hardware.
Log: Fragment shader was successfully compiled to run on hardware.
Log: Vertex shader(s) linked, fragment shader(s) linked.
Testing "My latest source":
3.3.11762 Core Profile Context
Log: Vertex shader was successfully compiled to run on hardware.
WARNING: 0:11: warning(#402) Implicit truncation of vector from size 4 to size 3.
Log: Fragment shader was successfully compiled to run on hardware.
Log: Vertex shader(s) linked, fragment shader(s) linked.
And the warning goes away after replacing gl_Position.xyz with gl_Position.
What's your setup? Do you have a correct version of OpenGL context? Is glGetError() silent?
Finally, are your GPU drivers up-to-date?
I've had problems with some GPUs (ATI ones, I believe) not liking integer literals where a float is expected. Try changing
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
To
gl_Position = MVP * vec4(vertexPosition_modelspace, 1.0);
I just came across this error message on an ATI Radeon HD 7900 with the latest drivers installed, while compiling some sample code associated with the book "3D Engine Design for Virtual Globes" (http://www.virtualglobebook.com).
Here is the original fragment shader line:
fragmentColor = mix(vec3(0.0, intensity, 0.0), vec3(intensity, 0.0, 0.0), (distanceToContour < dF));
The solution is to cast the offending Boolean expression into float, as in:
fragmentColor = mix(vec3(0.0, intensity, 0.0), vec3(intensity, 0.0, 0.0), float(distanceToContour < dF));
The manual for mix (http://www.opengl.org/sdk/docs/manglsl) states:
"For the variants of mix where a is genBType, elements for which a[i] is false, the result for that element is taken from x, and where a[i] is true, it will be taken from y."
So, since a Boolean blend value should be accepted by the compiler without comment, I think this should go down as an AMD/ATI driver issue.
