In my scene I have 2 objects, a Water Well and a Quad. The Well has 2 textures, a baseColorTexture (index 0) and a normalMapTexture (index 1). The quad has no textures applied to it. When rendering the scene I get something that looks like this.
THE QUAD IS USING THE WELL'S IMAGE. Yet when I look in the debugger, I find that there is no image bound to either index 0 or 1 for the quad, as the picture below shows.
My shaders use the following pattern when using a texture: if (!is_null_texture(texture)) { ... }
Can anyone please give me an idea as to what may be occurring?
You can see all the code here:
github.com/twohyjr/Metal-Game-Engine-Tutorial/blob/master/….
but the relevant part looks like this:
float4 color = material.color;
if (!is_null_texture(baseColorMap)) {
    color = baseColorMap.sample(sampler2d, texCoord);
}
I want to have a Lego-style split-screen camera with seamless transitions.
Does anyone have any experience creating something like this? I thought of creating one normal camera and then another camera for the second player that by default wouldn't be visible. Then, when I wanted to show it, I would draw a triangle to split the screen and set its texture to camera #2's view.
I found this Unity implementation but I couldn't get it working in Godot. I've managed to create a second viewport with its own camera, but for some reason the view of the second camera is not showing anything. I'm thinking the problem is that the world of the second viewport is different from the main viewport's.
Source code can be found here.
I just set up a toy project to test this out, and it turned out to be simpler than expected.
Here is an overview of the process; code examples follow.
Add one main camera
And a secondary camera with tree: Control > Viewport > Camera
Draw the shape of the split screen with Control using the draw_* API
Add a shader to Control that takes a texture and draws it at SCREEN_UV
Get the viewport texture from Viewport
Pass the viewport texture to the Control shader every frame.
Animate the split by animating and redrawing the Control shape.
I'm not sure how to do the border.
To make the split join, you'll probably have to shift the Control shape by the thickness of the border and then shrink that border as the cameras move towards each other. Use the distance between the players to calculate the border width.
The split border is also at an angle between the two players, so when animating the shape you'll want to use that angle; this will make the joining of the viewports look smoother. A rough sketch of both calculations follows.
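Something like this could work for deriving those two values; player1/player2, their node paths and the constants are placeholders for whatever your game uses:

const MAX_BORDER = 16.0
const MERGE_DISTANCE = 5.0

# placeholders for your player nodes
onready var player1 = get_node("../Player1")
onready var player2 = get_node("../Player2")

func get_split_params():
    var p1 = player1.global_transform.origin
    var p2 = player2.global_transform.origin
    # border shrinks to zero as the players close in on each other
    var border_width = clamp(p1.distance_to(p2) - MERGE_DISTANCE, 0.0, MAX_BORDER)
    # the split line follows the direction between the players (top-down x/z plane)
    var split_angle = Vector2(p2.x - p1.x, p2.z - p1.z).angle()
    return {"border_width": border_width, "angle": split_angle}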
Control code:
extends Control

func _draw():
    # in this case animate tl and bml to get the
    # rotating split-like effect in the lego game
    var tl = Vector2()
    var tr = rect_size
    tr.y = 0
    var br = rect_size
    var bml = rect_size
    bml.x /= 2.0
    draw_polygon([tl, tr, br, bml, tl], [Color(), Color(), Color(), Color(), Color()], [])

func _process(d):
    material.set_shader_param('viewport', $Viewport.get_texture())
Shader Code:
shader_type canvas_item;

uniform sampler2D viewport;

void fragment() {
    COLOR = texture(viewport, SCREEN_UV);
}
I hope this helps get you started!
It is a complex effect with many parts so be warned.
I found that the easiest way is to create a separate scene with your viewports and cameras, which will be your main scene, and then add your game scene under it like this:
Spatial
    Viewport1
        Camera1
    Viewport2
        Camera2
    GameScene
You should then be able to make a ColorRect with a shader material and send in the textures from each viewport:
shader_type canvas_item;
render_mode unshaded, cull_disabled;

uniform sampler2D viewport1;
uniform sampler2D viewport2;

void fragment() {
    vec3 view1 = texture(viewport1, UV).rgb;
    vec3 view2 = texture(viewport2, UV).rgb;
    vec3 col = vec3(0.0);
    // mix them in a satisfying way depending on distance and angle between cameras
    // float mixVal = <your formula here>;
    // col = mix(view1, view2, mixVal);
    COLOR = vec4(col, 1.0); // alpha must be 1.0 or the result is fully transparent
}
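To feed the shader, something like this on the ColorRect should work, mirroring the set_shader_param approach from the other answer; the node paths are assumptions based on the tree above, so adjust them to your scene:

extends ColorRect

func _process(delta):
    # pass each viewport's texture to the shader every frame
    material.set_shader_param("viewport1", get_node("../Viewport1").get_texture())
    material.set_shader_param("viewport2", get_node("../Viewport2").get_texture())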
This is a great guide to get you started:
https://docs.godotengine.org/en/3.1/tutorials/viewports/using_viewport_as_texture.html
So I am writing a volume ray caster (for the first time ever) in Java, learning from the code of the great VTK toolkit, which is written in C++.
Everything works almost exactly like VTK, except I get these strange artifacts that look like elevation lines on the volume. I've noticed that VTK also shows them while the image is being manipulated, but they disappear once the image is static.
I've looked through the code multiple times and can't find the source of the artifacts. Maybe it is something simple that a computer graphics expert knows off the top of their head? :)
More info on my implementation
I am using the gradient method for normal calculations (a standard approach, from what I've found on the internet)
I am using trilinear interpolation for ray point values (see the sketch after this list)
These "elevation line" artifacts look like value rounding errors, but I can't find any in my code
Increasing the resolution of the render does not solve the problem
The artifacts do not seem to be "facing" any fixed direction, such as the camera position
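To be concrete about the interpolation, it boils down to something like this (a stripped-down sketch with simplified names; the real code reads from a flat voxel buffer and does bounds checking):

static float interpolate(float x, float y, float z, float[][][] voxels) {
    int x0 = (int) x, y0 = (int) y, z0 = (int) z;
    float fx = x - x0, fy = y - y0, fz = z - z0;
    // interpolate along x on the four edges of the surrounding cell
    float c00 = voxels[z0][y0][x0] * (1 - fx) + voxels[z0][y0][x0 + 1] * fx;
    float c10 = voxels[z0][y0 + 1][x0] * (1 - fx) + voxels[z0][y0 + 1][x0 + 1] * fx;
    float c01 = voxels[z0 + 1][y0][x0] * (1 - fx) + voxels[z0 + 1][y0][x0 + 1] * fx;
    float c11 = voxels[z0 + 1][y0 + 1][x0] * (1 - fx) + voxels[z0 + 1][y0 + 1][x0 + 1] * fx;
    // then along y, then z
    float c0 = c00 * (1 - fy) + c10 * fy;
    float c1 = c01 * (1 - fy) + c11 * fy;
    return c0 * (1 - fz) + c1 * fz;
}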
I'm not attaching the full code since it is huge :)
EDIT (ray composite loop)
// March until the ray leaves the volume or the remaining transparency (result.a) is used up.
while (Geometry.pointInsideCuboid(cuboid, position) && result.a > MINIMAL_OPACITY) {
    // Only re-sample the transfer functions when the ray enters a new voxel.
    if (currentVoxel.notEquals(previousVoxel)) {
        final float value = VoxelUtils.interpolate(position, voxels, buffer);
        color = colorLUT.getColor(value);
        opacity = opacityLUT.getOpacityFromLut(value);
        if (enableShading) {
            // Diffuse shading: gradient-based normal against a fixed light direction.
            final Vector3D normal = VoxelUtils.getNormal(position, voxels, buffer);
            final float cos = normal.dot(light.fixedDirection);
            final float gradientOpacity = cos < 0 ? 0 : cos;
            opacity *= gradientOpacity;
            if (cos > 0)
                color = color.clone().shade(cos, colorLUT.diffuse, colorLUT.specular);
        }
        previousVoxel.setTo(currentVoxel);
    }
    if (opacity > 0)
        result.accumulate(color, opacity);
    position.add(rayStep);
    currentVoxel.fromVector(position);
}
This is the problem I am facing, simplified:
Using DirectX I need to draw two (or more) exactly overlapping triangles (in the same 2D plane). The triangles are semi-transparent, but the effect I want to achieve is that together they produce the transparency of a single triangle. The picture below might depict the problem better.
Is there a way to do this?
I use this to get overlapping transparent triangles to not "accumulate". You need to create a blend state and set it on the output merger (a creation/binding sketch follows the description below).
blendStateDescription.AlphaToCoverageEnable = false;
blendStateDescription.RenderTarget[0].IsBlendEnabled = true;
blendStateDescription.RenderTarget[0].SourceBlend = D3D11.BlendOption.SourceAlpha;
blendStateDescription.RenderTarget[0].DestinationBlend = D3D11.BlendOption.One; //
blendStateDescription.RenderTarget[0].BlendOperation = D3D11.BlendOperation.Maximum;
blendStateDescription.RenderTarget[0].SourceAlphaBlend = D3D11.BlendOption.SourceAlpha; //Zero
blendStateDescription.RenderTarget[0].DestinationAlphaBlend = D3D11.BlendOption.DestinationAlpha;
blendStateDescription.RenderTarget[0].AlphaBlendOperation = D3D11.BlendOperation.Maximum;
blendStateDescription.RenderTarget[0].RenderTargetWriteMask = D3D11.ColorWriteMaskFlags.All;
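Creating and binding the state then looks roughly like this; device and context are assumed to be your D3D11 device and immediate context:

// create the state from the description above and bind it on the output merger
var blendState = new D3D11.BlendState(device, blendStateDescription);
context.OutputMerger.SetBlendState(blendState, null, -1);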
Hope this helps. The code is in C# but it works the same in C++ etc. Basically, it takes the alpha of both source and destination, compares them and takes the max, which will always be the same as long as you use the same alpha on both triangles; otherwise it will render the one with the higher alpha.
Edit: I've added a sample of what the blending does in my project. The roads here overlap: Overlap Sample
My pixel shader is as follows. I pass the UV coords in a float4: xy = the UV coords, w = the alpha value.
Pixel shader code
float4 pixelColourBlend;
pixelColourBlend = primaryTexture.Sample(textureSamplerStandard, input.uv.xy, 0);
pixelColourBlend.w = input.uv.w;
clip(pixelColourBlend.w - 0.05f);
return pixelColourBlend;
Enabling the depth stencil also prevents this problem: with depth testing and depth writes on, the second coplanar triangle fails the depth test and is never blended.
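A minimal SharpDX sketch of such a depth-stencil state (device and context assumed as in the answer above):

// depth writes on, so a second coplanar triangle fails the depth test
var depthState = new D3D11.DepthStencilState(device, new D3D11.DepthStencilStateDescription
{
    IsDepthEnabled = true,
    DepthWriteMask = D3D11.DepthWriteMask.All,
    DepthComparison = D3D11.Comparison.Less
});
context.OutputMerger.SetDepthStencilState(depthState);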
I am currently trying to implement semi-transparent polygons in SharpDX.
At the moment I am using GraphicsDevice and BasicEffect to draw my objects.
// Setup the vertices
game.GraphicsDevice.SetVertexBuffer(myModel.vertices);
game.GraphicsDevice.SetVertexInputLayout(myModel.inputLayout);
// Apply the basic effect technique and draw the object
basicEffect.CurrentTechnique.Passes[0].Apply();
game.GraphicsDevice.Draw(PrimitiveType.TriangleList, myModel.vertices.ElementCount);
This works fine for normal objects; however, I would like to make some of the objects partially transparent. I've set the alpha value of these objects' colors to 50, yet they are still rendered as opaque. What do I need to do to achieve this effect?
Transparency in SharpDX requires an alpha blending value in the range 0..1 for float colors (not 0..255). The comment Nico Schertler provided above solved the question and can be regarded as the answer; a short illustration follows.
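For illustration: with SharpDX's float-based colors, an alpha of 50 saturates to fully opaque, while 0.5f gives 50% transparency. The BlendStates line assumes the SharpDX Toolkit GraphicsDevice used in the question:

// alpha is a float in 0..1, so use 0.5f rather than 50 for half transparency
var semiTransparent = new SharpDX.Color4(1.0f, 0.0f, 0.0f, 0.5f);

// blending must also be enabled; the Toolkit ships predefined states for this
game.GraphicsDevice.SetBlendState(game.GraphicsDevice.BlendStates.AlphaBlend);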
Without alpha blending, there are two options you can use in the HLSL shader file.
First, in the pixel shader, use the clip() function, dependent on the input color. You could define black as your transparent color and not show any black triangles, like so:
float4 PS( PS_IN input ) : SV_Target
{
    clip(input.color[3] < 0.1f ? -1 : 1);
    return input.color;
}
ref: https://learn.microsoft.com/en-us/windows/win32/direct3dhlsl/dx-graphics-hlsl-clip
Second, modify the vertex shader to project these vertices to (0,0,0), dependent on the input color. Again you could define black as your transparent color and not show any black triangles, like so:
PS_IN VS( VS_IN input )
{
    PS_IN output = (PS_IN)0;
    if ((input.color[0] != 0) || (input.color[1] != 0) || (input.color[2] != 0))
    {
        output.position = mul(worldViewProj, input.position);
    }
    output.color = input.color;
    return output;
}
See the effect below on the edges of my HeightField mesh; on the left is the unchanged version.
NOTE: The latter solution gives sharper edges, but it only works when (0,0,0) is behind the object.
Is there any example out there of an .fx file written in HLSL that splats a tiled texture with different tiles? Like this: http://messy-mind.net/blog/wp-content/uploads/2007/10/transitions.jpg
You can see there's a different tile type in each square, and there's a little blurring between them to make a smoother transition, but right now I just need to find a way to draw the tiles onto a texture. I have a 2D array of integers, and each integer equals a corresponding tile type (0 = grass, 1 = stone, 2 = sand). I opened up a few HLSL examples and they were really confusing. Everything is running fine on the C++ side, but HLSL is proving to be difficult.
You can use a technique called 'texture splatting'. It mixes several textures (color maps) using another texture which contains alpha values for each color map. The texture with alpha values is an equivalent of your 2D array. You can create a 3-channel RGB texture and use each channel for a different color map (in your case: R - grass, G - stone, B - sand). Every pixel of this texture tells us how to mix the color maps (for example R=0 means 'no grass', G=1 means 'full stone', B=0.5 means 'sand, half intensity').
Let's say you have four RGB textures: tex1 - grass, tex2 - stone, tex3 - sand, and alpha - the mixing texture. In your .fx file, you create a simple vertex shader which just calculates the position and passes the texture coordinate on. The whole thing is done in the pixel shader, which should look like this:
sampler alpha_sampler; // mixing texture (R = grass, G = stone, B = sand)
sampler tex1_sampler;  // grass color map
sampler tex2_sampler;  // stone color map
sampler tex3_sampler;  // sand color map

float tiling_factor = 10; // number of texture repetitions; you can also
                          // specify a separate factor for each texture

float4 PS_TexSplatting(float2 tex_coord : TEXCOORD0) : COLOR0
{
    float3 color = float3(0, 0, 0);
    float3 mix = tex2D(alpha_sampler, tex_coord).rgb;
    color += tex2D(tex1_sampler, tex_coord * tiling_factor).rgb * mix.r;
    color += tex2D(tex2_sampler, tex_coord * tiling_factor).rgb * mix.g;
    color += tex2D(tex3_sampler, tex_coord * tiling_factor).rgb * mix.b;
    return float4(color, 1);
}
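The matching vertex shader and technique could look like this (a DX9-style sketch; world_view_proj and the shader model are assumptions):

float4x4 world_view_proj;

// pass-through vertex shader: transform the position, forward the UV
void VS_TexSplatting(float4 pos : POSITION, float2 tex : TEXCOORD0,
                     out float4 out_pos : POSITION, out float2 out_tex : TEXCOORD0)
{
    out_pos = mul(pos, world_view_proj);
    out_tex = tex;
}

technique TexSplatting
{
    pass P0
    {
        VertexShader = compile vs_2_0 VS_TexSplatting();
        PixelShader = compile ps_2_0 PS_TexSplatting();
    }
}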
If your application supports multi-pass rendering, you should use it.
Use a multi-pass shader approach: render the base object with the tiled stone texture in the first pass, and on top render the decal passes with different shaders and different detail textures with separate transparent alpha maps.
(The transparency map could also be stored in your detail texture, but keeping it separate allows different tile levels and more flexibility in reusing it.)
Additionally, you can use a different texture coordinate channel for each decal pass, so that you do not need to hardcode your tile level.
So at minimum you need two shaders, where Shader 2 is used once per decal:
Shader to render tiled base texture
Shader to render one tiled detail texture using a separate transparency map (a sketch follows this list).
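A minimal sketch of what Shader 2's pixel shader could look like (the sampler names and the untiled transparency lookup are assumptions):

sampler detail_sampler;       // tiled detail texture for this decal
sampler transparency_sampler; // its separate transparency map

float tile_factor = 10;

float4 PS_DecalPass(float2 tex_coord : TEXCOORD0) : COLOR0
{
    float4 detail = tex2D(detail_sampler, tex_coord * tile_factor);
    // the transparency map is sampled untiled, so it controls where
    // this decal shows over the base texture
    detail.a = tex2D(transparency_sampler, tex_coord).r;
    return detail;
}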
If you have multiple decals, z-fighting can occur, so you should offset your polygons a little. (Very similar to basic fur rendering.)
Otherwise, you need a single shader which takes multiple textures and lays them on top of the base tiled texture. This solution is less flexible, but you can use one texture for the mix between the textures (the equivalent of your 2D array).