How to do Phong shading in POV-Ray

I am using the POV-Ray raytracer for rendering. I have a mesh of triangles; when I render it using:
mesh {
  triangle {
    <corner_1>, <corner_2>, <corner_3>
  }
}
I do not get smooth shading. POV-Ray does provide smooth shading, using:
smooth_triangle {
  <Corner_1>, <Normal_1>, <Corner_2>,
  <Normal_2>, <Corner_3>, <Normal_3>
  [OBJECT_MODIFIER...]
}
But the problem is that it requires the normal at each corner of the triangle (it uses Phong shading). How do I calculate the normals at the corners of a triangle? How do I get smooth shading in POV-Ray?
NOTE: triangles sharing a common vertex should have the same normal at that vertex.
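The question leaves open how to compute these normals; a standard approach (an assumption here, not something POV-Ray mandates) is to average the face normals of all triangles sharing a vertex. A minimal C++ sketch, assuming an indexed triangle list with consistent winding:

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// For each vertex, sum the normals of all incident faces, then normalize.
// Shared vertices automatically end up with a shared normal.
std::vector<Vec3> vertexNormals(const std::vector<Vec3>& verts,
                                const std::vector<unsigned>& tris) {
    std::vector<Vec3> normals(verts.size(), Vec3{0, 0, 0});
    for (size_t i = 0; i + 2 < tris.size(); i += 3) {
        unsigned a = tris[i], b = tris[i + 1], c = tris[i + 2];
        // Unnormalized cross product: larger faces weigh more in the average.
        Vec3 n = cross(sub(verts[b], verts[a]), sub(verts[c], verts[a]));
        for (unsigned v : {a, b, c}) {
            normals[v].x += n.x; normals[v].y += n.y; normals[v].z += n.z;
        }
    }
    for (Vec3& n : normals) {
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}

Each triangle can then be written out as a smooth_triangle using the normals at its three vertex indices.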

Related

Lego-style split-screen camera in Godot

I want to have a Lego-style split-screen camera with seamless transitions.
Does anyone have experience creating something like this? I thought of creating one normal camera, and another camera for the second player that by default wouldn't be visible. Then, when I wanted to show it, I would draw a triangle to split the screen and set its texture to camera #2's view.
I found this Unity implementation but I couldn't reproduce it in Godot. I've managed to create a second viewport with its own camera, but for some reason the view of the second camera is not showing anything. I'm thinking that the problem is that the world of the second viewport is different from the main viewport's.
Source code can be found here.
I just set up a toy project to test this out, and it turned out to be simpler than expected.
Here is an overview of the process; code examples follow below.
Add one main camera
And a secondary camera with tree: Control > Viewport > Camera
Draw the shape of the split screen with Control using the draw_* API
Add a shader to Control that takes a texture and draws it at SCREEN_UV
Get the viewport texture from Viewport
Pass the viewport texture to the Control shader every frame.
Animate the split by animating and redrawing the Control shape.
I'm not sure how to do the border.
To make the split join, you'll probably have to shift the Control shape by the thickness of the border, and then shrink that border as the cameras move toward each other. Use the distance between the players to calculate the border width.
The split border is also at an angle between the two players, so when animating the shape you'll want to use that angle. This will make the joining of the viewports look smoother.
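As a rough illustration of that border idea (a hypothetical helper with made-up tuning constants, written here in C++ for concreteness; it translates directly to GDScript):

#include <algorithm>
#include <cmath>

struct Vec2 { float x = 0.0f, y = 0.0f; };

// Derive the split line's angle and a border width that shrinks as the
// players approach each other, so the two views can join seamlessly.
void splitParams(Vec2 p1, Vec2 p2, float& angle, float& borderWidth)
{
    float dx = p2.x - p1.x, dy = p2.y - p1.y;
    float dist = std::sqrt(dx * dx + dy * dy);
    // The split line runs perpendicular to the player-to-player direction.
    angle = std::atan2(dy, dx) + 3.14159265f / 2.0f;
    const float maxBorder = 8.0f;  // made-up: maximum border thickness in pixels
    const float falloff = 100.0f;  // made-up: distance at which the border is thickest
    borderWidth = maxBorder * std::min(dist / falloff, 1.0f);
}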
Control code:
extends Control

func _draw():
    # in this case animate tl and bml to get the
    # rotating split-like effect in the Lego game
    var tl = Vector2()
    var tr = rect_size
    tr.y = 0
    var br = rect_size
    var bml = rect_size
    bml.x /= 2.0
    draw_polygon([tl, tr, br, bml, tl], [Color(), Color(), Color(), Color(), Color()], [])

func _process(d):
    material.set_shader_param('viewport', $Viewport.get_texture())
Shader Code:
shader_type canvas_item;

uniform sampler2D viewport;

void fragment() {
    COLOR = texture(viewport, SCREEN_UV);
}
I hope this helps get you started!
It is a complex effect with many parts so be warned.
I found that the easiest way is to create a separate scene with your viewports and cameras, make it your main scene, and then add your game scene under it like this:
Spatial
  Viewport1
    Camera1
  Viewport2
    Camera2
  GameScene
You should then be able to make a ColorRect with a shader material and send in the textures from each viewport:
shader_type canvas_item;
render_mode unshaded, cull_disabled;

uniform sampler2D viewport1;
uniform sampler2D viewport2;

void fragment() {
    vec3 view1 = texture(viewport1, UV).rgb;
    vec3 view2 = texture(viewport2, UV).rgb;
    vec3 col = vec3(0.0);
    // mix them in a satisfying way depending on distance and angle between cameras
    // float mixVal = <your formula here>;
    // col = mix(view1, view2, mixVal);
    COLOR = vec4(col, 1.0); // full alpha, otherwise the result is invisible
}
This is a great guide to get you started:
https://docs.godotengine.org/en/3.1/tutorials/viewports/using_viewport_as_texture.html

Understanding What a TextureBlitter is in this Haskell Graphics Program

In a private window manager/compositor Haskell repository I have come across the following datatype which I am trying to understand:
data TextureBlitter = TextureBlitter {
_textureBlitterProgram :: Program, -- OpenGL Type
_textureBlitterVertexCoordEntry :: AttribLocation, -- OpenGL Type
_textureBlitterTextureCoordEntry :: AttribLocation, -- OpenGL Type
_textureBlitterMatrixLocation :: UniformLocation -- OpenGL Type
} deriving Eq
The types Program, AttribLocation, and UniformLocation are from this OpenGL library.
The Problem: I cannot find good information online about what the concept of a "texture blitter" is. So I'm hoping that people with more expertise might immediately have a good guess as to what this type is (probably) used for.
I'm assuming that the field _textureBlitterProgram :: Program is an OpenGL shader program. But what about the other entries? And what is a TextureBlitter as a whole supposed to represent?
EDIT: I have discovered shaders with the same name in my repo:
//textureblitter.vert
#version 300 es
precision highp float;
uniform highp mat4 matrix;
in highp vec3 vertexCoordEntry;
in highp vec2 textureCoordEntry;
out highp vec2 textureCoord;
void main() {
    textureCoord = textureCoordEntry;
    gl_Position = matrix * vec4(vertexCoordEntry, 1.);
}
and
//textureblitter.frag
#version 300 es
precision highp float;
uniform sampler2D uTexSampler;
in highp vec2 textureCoord;
out highp vec4 fragmentColor;
void main() {
    fragmentColor = texture(uTexSampler, textureCoord);
}
I don't use Haskell or its OpenGL package. But the names and shaders you show are pretty descriptive. I'll try to explain what a texture is in OpenGL parlance.
Let's say you have a picture of size width x height, and suppose it's stored in a two-dimensional matrix of size [w, h].
Instead of accessing a pixel in that matrix by its a, b coordinates, let's use normalized coordinates (i.e. in the [0, 1] range): u = a/w and v = b/h. These formulas need u and v to be of type float, so no rounding to integer is done.
Using u, v coordinates allows us to access any pixel in a "generic" matrix.
Now you want to show that picture on the screen. Its rectangle can be scaled, rotated, or even deformed by a perspective projection. Somehow you know the final four coordinates of that rectangle.
If you also use normalized coordinates (again in the [0, 1] range), then a mapping between picture coordinates and rectangle coordinates makes the picture adjust to the (likely deformed) rectangle.
This is how OpenGL works. You pass the vertices of the rectangle and compute their normalized final coordinates with some matrix. You also pass the picture matrix (called a texture) and map it to those final coordinates.
The program where all of this computing and mapping is done is a shader, which is usually composed of two sub-shaders: a Vertex Shader that works vertex by vertex (the VS runs exactly once per vertex), and a Fragment Shader that works with fragments (interpolated points between vertices).
A TextureBlitter is then "an object that blits a picture onto the screen":
You set the program (shader) to use. You can have several shaders with different effects (e.g. modifying the colors of the picture); just select one.
You set the vertices. The AttribLocation represents the point of connection between your vertices and the shader that uses them (an attribute in shader parlance).
The same goes for the "picture" (texture) coordinates.
You set the matrix that transforms the vertices. Because it is the same for all vertices, another type of connection is used: UniformLocation (a uniform in shader parlance).
I suppose you can find a good tutorial with examples of how to set up and use this "texture blitter".
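To make those steps concrete, here is a rough sketch of how such a blitter is typically driven, written against the raw C OpenGL ES API that the Haskell package wraps. The function, buffer layout, and names are assumptions for illustration; the attribute and uniform names match the shaders quoted above.

#include <GLES3/gl3.h>

// Assumed to exist already: a linked program built from textureblitter.vert +
// textureblitter.frag, a VBO holding four interleaved vertices of
// position (3 floats) + texcoord (2 floats), and the texture to blit.
void blitTexture(GLuint prog, GLuint vbo, GLuint tex, const GLfloat* matrix4x4)
{
    glUseProgram(prog); // _textureBlitterProgram

    // _textureBlitterVertexCoordEntry / _textureBlitterTextureCoordEntry:
    // AttribLocations queried from the linked program.
    GLint posLoc = glGetAttribLocation(prog, "vertexCoordEntry");
    GLint uvLoc  = glGetAttribLocation(prog, "textureCoordEntry");

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(posLoc);
    glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE,
                          5 * sizeof(GLfloat), (void*)0);
    glEnableVertexAttribArray(uvLoc);
    glVertexAttribPointer(uvLoc, 2, GL_FLOAT, GL_FALSE,
                          5 * sizeof(GLfloat), (void*)(3 * sizeof(GLfloat)));

    // _textureBlitterMatrixLocation: one matrix for all vertices, so a uniform.
    GLint matLoc = glGetUniformLocation(prog, "matrix");
    glUniformMatrix4fv(matLoc, 1, GL_FALSE, matrix4x4);

    glBindTexture(GL_TEXTURE_2D, tex);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); // one textured quad
}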

What is the simplest way to implement transparent objects in SharpDX?

I am currently trying to implement semi-transparent polygons in SharpDX.
At the moment I am using GraphicsDevice and BasicEffect to draw my objects.
// Setup the vertices
game.GraphicsDevice.SetVertexBuffer(myModel.vertices);
game.GraphicsDevice.SetVertexInputLayout(myModel.inputLayout);
// Apply the basic effect technique and draw the object
basicEffect.CurrentTechnique.Passes[0].Apply();
game.GraphicsDevice.Draw(PrimitiveType.TriangleList, myModel.vertices.ElementCount);
This works fine for opaque objects; however, I would like to make some of them partially transparent. I've set the alpha value of these objects' colors to 50, but they are still rendered as opaque. What do I need to do to achieve this effect?
Transparency in SharpDX requires alpha blending, with alpha values in the 0..1 range for float colors. The comment Nico Schertler provided above solved the question and can be regarded as the answer.
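For reference, a minimal sketch of that blend setup in raw Direct3D 11 terms (SharpDX wraps these same structures nearly one-to-one; the device and context are assumed to exist):

#include <d3d11.h>

// Create and bind a standard source-over alpha blend state.
ID3D11BlendState* enableAlphaBlending(ID3D11Device* device,
                                      ID3D11DeviceContext* context)
{
    D3D11_BLEND_DESC desc = {};
    desc.RenderTarget[0].BlendEnable           = TRUE;
    desc.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
    desc.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
    desc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    desc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
    desc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* blendState = nullptr;
    device->CreateBlendState(&desc, &blendState);

    float blendFactor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
    context->OMSetBlendState(blendState, blendFactor, 0xffffffff);
    return blendState; // draw semi-transparent geometry after this, back to front
}

Semi-transparent geometry should then be drawn after the opaque geometry, sorted back to front, or the depth buffer will hide whatever lies behind it.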
Without alpha blending, there are two options you can use in the HLSL shader file.
In the pixel shader, use the clip() function, dependent on the input color. You could define black as your transparent color and not show any black triangles, like so:
float4 PS( PS_IN input ) : SV_Target
{
    clip(input.color[3] < 0.1f ? -1 : 1);
    return input.color;
}
ref: https://learn.microsoft.com/en-us/windows/win32/direct3dhlsl/dx-graphics-hlsl-clip
Modify the vertex shader to project these vertices to (0,0,0), dependent on the input color. Again, you could define black as your transparent color and not show any black triangles, like so:
PS_IN VS( VS_IN input )
{
    PS_IN output = (PS_IN)0;
    if ((input.color[0] != 0) || (input.color[1] != 0) || (input.color[2] != 0))
    {
        output.position = mul(worldViewProj, input.position);
    }
    output.color = input.color;
    return output;
}
See below the effect on the edges of my HeightField mesh; on the left is the unchanged version.
NOTE: The latter solution gives sharper edges, but it only works when (0,0,0) is behind the object.

Use Perlin noise to render wood

I'm reading Shaders for Game Programmers and Artists. In Chapter 13, "Building Materials from Scratch", the author introduces techniques to simulate complex materials such as marble or wood using Perlin noise. But I'm puzzled by the wood rendering.
To simulate wood, we need a function that gives a circular value along a specific plane, so that we can create the rings in the wood. As the author puts it, "take the dot product of two axes along a plane, creating the circular value on that plane":
Circle = dot(noisetxr.xy, noisetxr.xy);
noisetxr is a float3; it's the texture coordinate used to sample the noise texture. I can't understand why the dot product gives a circular value.
Here is the complete code (a pixel shader in HLSL):
float persistance;
float4 wood_color; // a predefined value
sampler Texture0;  // noise texture

float4 ps_main(float3 txr : TEXCOORD0) : COLOR
{
    // Determine two sets of coordinates, one for the noise
    // and one for the wood rings
    float3 noisetxr = txr;
    txr = txr / 8;

    // Combine 2 octaves of noise together.
    float final_noise = 0;
    for (int i = 0; i < 2; i++)
        final_noise += ((1.0 / pow(persistance, i)) *
                        ((tex3D(Texture0, txr * pow(2, i)) * 2) - 1));

    // The wood is defined by a set of concentric rings in the XY
    // plane. Those rings are perturbed by the computed noise.
    final_noise = abs(final_noise);
    float grain = cos(dot(noisetxr.xy, noisetxr.xy) + final_noise * 4); // what is this??
    return wood_color - pow(grain, 8) / 2; // raising the cosine to a higher power
}
I know that raising the cosine function to a higher power creates sharper rings, but what does the dot product mean? Why does it produce a circular value?
A dot product of a vector with itself is simply the squared length of the vector. So for each point in the xy-plane, dot(noisetxr.xy, noisetxr.xy) returns the squared distance of the point from the origin. You then apply a cosine function to this distance, which produces the same output value for all points on the plane that have the same distance from the origin => a circle of equal values around the origin.
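In symbols, with r the distance from the origin in the ring plane:

\mathbf{v} \cdot \mathbf{v} = x^2 + y^2 = r^2,
\qquad
\mathrm{grain} = \cos\bigl(r^2 + 4\,\mathrm{noise}\bigr)

Without the noise term, grain is constant on every circle x^2 + y^2 = const, so raising it to a high power yields sharp concentric rings. Because the cosine's argument grows with r^2 rather than r, the rings get closer together farther from the center, and the noise term perturbs them into a more natural wood pattern.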

HLSL beginner needs some directions

Is there any example out there of an HLSL .fx file that splats a tiled texture with different tiles? Like this: http://messy-mind.net/blog/wp-content/uploads/2007/10/transitions.jpg You can see there's a different tile type in each square, and there's a little blurring between them to make a smoother transition, but right now I just need to find a way to draw the tiles on a texture. I have a 2D array of integers, each integer corresponding to a tile type (0 = grass, 1 = stone, 2 = sand). I opened up a few HLSL examples and they were really confusing. Everything is running fine on the C++ side, but HLSL is proving to be difficult.
You can use a technique called 'texture splatting'. It mixes several textures (color maps) using another texture that contains alpha values for each color map. The texture with alpha values is the equivalent of your 2D array. You can create a 3-channel RGB texture and use each channel for a different color map (in your case: R - grass, G - stone, B - sand). Every pixel of this texture tells us how to mix the color maps (for example, R=0 means 'no grass', G=1 means 'full stone', B=0.5 means 'sand at half intensity').
Let's say you have four RGB textures: tex1 - grass, tex2 - stone, tex3 - sand, alpha - the mixing texture. In your .fx file, you create a simple vertex shader that just calculates the position and passes the texture coordinate on. The whole thing is done in the pixel shader, which should look like this:
sampler alpha_sampler; // mixing texture (R = grass, G = stone, B = sand)
sampler tex1_sampler;  // grass
sampler tex2_sampler;  // stone
sampler tex3_sampler;  // sand

float tiling_factor = 10; // number of texture repetitions; you can also
                          // specify a separate factor for each texture

float4 PS_TexSplatting(float2 tex_coord : TEXCOORD0) : COLOR0
{
    float3 color = float3(0, 0, 0);
    float3 mix = tex2D(alpha_sampler, tex_coord).rgb;
    color += tex2D(tex1_sampler, tex_coord * tiling_factor).rgb * mix.r;
    color += tex2D(tex2_sampler, tex_coord * tiling_factor).rgb * mix.g;
    color += tex2D(tex3_sampler, tex_coord * tiling_factor).rgb * mix.b;
    return float4(color, 1);
}
If your application supports multi-pass rendering, you should use it.
Use a multi-pass shader approach: render the base object with the tiled stone texture in the first pass, and on top render the decal passes with different shaders and different detail textures with separate transparency (alpha) maps.
(The transparency map could also be stored in your detail texture, but keeping it separate allows different tile levels and more flexibility in reusing it.)
Additionally, you can use a different texture coordinate channel for each decal pass so that you do not need to hardcode your tile level.
So at minimum you need two shaders, where shader 2 is used once per decal:
Shader 1 renders the tiled base texture.
Shader 2 renders one tiled detail texture using a separate transparency map.
If you have multiple decals, z-fighting can occur, and you should offset your polygons a little (very similar to basic fur rendering).
Otherwise you need a single shader that takes multiple textures and lays them on top of the base tiled texture; this solution is less flexible, but you can use one texture for the mix between the textures (the equivalent of your 2D array).
