How to isolate a Phaser shader to a specific object/shape?

I'm using the Phaser framework. Here is the jsfiddle:
http://jsfiddle.net/Dillybob/u3mGL/13/
Here is where the filter is getting populated:
background = game.add.sprite(0, 0);
background.width = 800;
background.height = 600;
filter = game.add.filter('Fire', 800, 600);
filter.alpha = 0.0;
background.filters = [filter];
My line object is assigned to the variable drawnObject
So I assign the filter to that object like so:
drawnObject.filters = [filter];
But my line is now a red fiery square instead of being a line with a fiery background, why?

Firstly, be aware that drawnObject is actually a bitmap, which is rectangular. It consists of the white pixels that form your line and the transparent pixels that fill the rest of the bitmap.
The filter you use is a pixel shader. A pixel shader is a set of instructions the GPU runs for every pixel of the bitmap it is given. This particular shader builds a fire effect from noise functions and doesn't take the original bitmap into account at all: the original pixel colors are not preserved and contribute nothing to the final result.
To achieve the result you expect, you have to amend fragmentSrc in Fire.js so that the shader samples the original color and mixes/blends it into the final pixel color, and/or leaves the pixel's transparency unchanged.
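For example, a minimal sketch of such a change (assuming the standard Phaser/PIXI filter inputs uSampler and vTextureCoord, with a placeholder fireEffect() standing in for the noise code already in Fire.js; here only the original alpha is kept, but you could also blend original.rgb into the fire color):
precision mediump float;
varying vec2 vTextureCoord;
uniform sampler2D uSampler;

vec3 fireEffect(vec2 uv)
{
    return vec3(1.0, 0.3, 0.0); // placeholder for the existing noise-based fire colour
}

void main(void)
{
    vec4 original = texture2D(uSampler, vTextureCoord); // the line's own pixels
    vec3 fire = fireEffect(vTextureCoord);
    // keep the original alpha so the transparent part of the bitmap stays transparent
    gl_FragColor = vec4(fire, original.a);
}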

Related

Merging overlapping transparent shapes in directx

This is the problem I am facing simplified:
Using directx I need to draw two(or more) exactly (in the same 2d plane) overlapping triangles. The triangles are semi transparent but the effect I want to release is that they clip to transparency of a single triangle. The picture below might depict the problem better.
Is there a way to do this?
I use this to keep overlapping transparent triangles from "accumulating". You need to create a blend state and set it on the output merger.
blendStateDescription.AlphaToCoverageEnable = false;
blendStateDescription.RenderTarget[0].IsBlendEnabled = true;
// colour channels: with BlendOperation.Maximum the result is the larger of source and destination
blendStateDescription.RenderTarget[0].SourceBlend = D3D11.BlendOption.SourceAlpha;
blendStateDescription.RenderTarget[0].DestinationBlend = D3D11.BlendOption.One;
blendStateDescription.RenderTarget[0].BlendOperation = D3D11.BlendOperation.Maximum;
// alpha channel: likewise take the maximum of the source and destination alpha
blendStateDescription.RenderTarget[0].SourceAlphaBlend = D3D11.BlendOption.SourceAlpha;
blendStateDescription.RenderTarget[0].DestinationAlphaBlend = D3D11.BlendOption.DestinationAlpha;
blendStateDescription.RenderTarget[0].AlphaBlendOperation = D3D11.BlendOperation.Maximum;
blendStateDescription.RenderTarget[0].RenderTargetWriteMask = D3D11.ColorWriteMaskFlags.All;
Hope this helps. The code is in C#, but it works the same in C++ etc. Basically, it takes the alpha of both source and destination, compares them and takes the max, which will always be the same (as long as you use the same alpha on both triangles); otherwise it will render the one with the higher alpha.
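For completeness, a rough sketch of creating the state from that description and binding it (assuming SharpDX.Direct3D11, with device and deviceContext standing in for your existing device and immediate context):
var blendStateDescription = new D3D11.BlendStateDescription();
// ...fill in the RenderTarget[0] fields exactly as shown above...
var blendState = new D3D11.BlendState(device, blendStateDescription);
// bind it on the output merger before drawing the transparent triangles
deviceContext.OutputMerger.SetBlendState(blendState);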
Edit: I've added a sample of what the blending does in my project; the roads there overlap (see the "Overlap Sample" image).
My pixel shader is as follows. I pass the UV co-ords in a float4: xy are the uv coords and w is the alpha value.
Pixel shader code:
float4 pixelColourBlend;
// sample the texture with the uv coords passed from the vertex shader
pixelColourBlend = primaryTexture.Sample(textureSamplerStandard, input.uv.xy, 0);
// w carries the per-vertex alpha value
pixelColourBlend.w = input.uv.w;
// discard pixels that are almost fully transparent
clip(pixelColourBlend.w - 0.05f);
return pixelColourBlend;
Enabling the depth stencil also prevents this problem.
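If you go that route, a minimal sketch of what that could look like (assuming SharpDX.Direct3D11 as above; device and deviceContext are illustrative names). With depth testing on and the default Less comparison, a second triangle drawn at the same depth fails the depth test, so the overlap is never blended twice:
// Default() gives a description with the depth test enabled (Comparison.Less) and the stencil disabled
var depthDescription = D3D11.DepthStencilStateDescription.Default();
var depthState = new D3D11.DepthStencilState(device, depthDescription);
// bind it on the output merger, next to the blend state
deviceContext.OutputMerger.SetDepthStencilState(depthState);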

What is the simplest way to implement transparent objects in sharpdx?

I am currently trying to implement semi-transparent polygons in sharpdx.
At the moment I am using GraphicsDevice and BasicEffect to draw my objects.
// Setup the vertices
game.GraphicsDevice.SetVertexBuffer(myModel.vertices);
game.GraphicsDevice.SetVertexInputLayout(myModel.inputLayout);
// Apply the basic effect technique and draw the object
basicEffect.CurrentTechnique.Passes[0].Apply();
game.GraphicsDevice.Draw(PrimitiveType.TriangleList, myModel.vertices.ElementCount);
This is working fine for normal objects; however, I would like to make some of the objects partially transparent. I've set the alpha value of these objects' colors to 50, but they are still being rendered as opaque. What do I need to do to achieve this effect?
Transparency in SharpDX requires an alpha blending value in the range 0..1 for float colors. The comment Nico Schertler provided above solved the question and can be regarded as the answer.
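In other words, a sketch of the idea (names are illustrative; it assumes the SharpDX Toolkit GraphicsDevice used in the question exposes the XNA-style predefined blend states):
// use a float alpha in the 0..1 range instead of 50 (50 out of 255 is roughly 0.2)
var semiTransparentRed = new Color4(1.0f, 0.0f, 0.0f, 50f / 255f);
// alpha blending also has to be enabled before drawing the transparent objects
game.GraphicsDevice.SetBlendState(game.GraphicsDevice.BlendStates.AlphaBlend);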
Without alpha blending, there are two options you can use in the HLSL shader file:
1. In the pixel shader, use the clip() function, dependent on the input color. You could define your transparent black and not show any black triangles. Like so:
float4 PS( PS_IN input ) : SV_Target
{
    // discard the pixel when its alpha is below 0.1
    clip(input.color[3] < 0.1f ? -1 : 1);
    return input.color;
}
ref: https://learn.microsoft.com/en-us/windows/win32/direct3dhlsl/dx-graphics-hlsl-clip
See the effect:
2. Modify the vertex shader to project these vertices to (0,0,0), dependent on the input color. You could define your transparent black and not show any black triangles. Like so:
PS_IN VS( VS_IN input )
{
    PS_IN output = (PS_IN)0;
    // only transform vertices whose color is not black; black vertices stay at (0,0,0)
    if ((input.color[0] != 0) || (input.color[1] != 0) || (input.color[2] != 0))
    {
        output.position = mul(worldViewProj, input.position);
    }
    output.color = input.color;
    return output;
}
See below the effect on the edges of my HeightField mesh; on the left is the unchanged version.
NOTE: The latter solution gives sharper edges, but it only works when (0,0,0) is behind the object.

androidplot background image shift

I'm trying to separate the background of the graph grid into 3 areas using this code:
int[] data = {0xff000000, 0x80008000, 0xff000000};
bgBitmap = Bitmap.createBitmap(data, 1, 3, Bitmap.Config.ARGB_8888);
RectF rect = plot.getGraphWidget().getGridRect();
BitmapShader myShader = new BitmapShader(
Bitmap.createScaledBitmap(bgBitmap, 1, (int) rect.height(), false),
Shader.TileMode.REPEAT,
Shader.TileMode.REPEAT);
plot.getGraphWidget().getGridBackgroundPaint().setShader(myShader);
So scaling a 3 pixel bitmap to the graph height and repeating it over the whole domain area.
However, the resulting graph shows that the background seems to be shifted up a bit.
It looks like the shift size is about equal to the domain label height.
How can I fix this?
Link to the example graph: http://marcel.mesa.nl/androidplot.png
I think you're running into the issue mentioned near the end of this thread. Essentially, the origin of the shader is the top-left corner of the screen, not the top-left corner of the component whose background is being drawn with the shader. The solution is to translate to the top-left point of the graphWidget like this:
RectF rect = plot.getGraphWidget().getGridRect();
Matrix m = new Matrix();
m.setTranslate(rect.left, rect.top);
shader.setLocalMatrix(m); // where shader is your shader instance

Texture appears to be shifted when drawn on screen (XNA)

Why do my texture's edges contain unwanted colored lines? The texture looks shifted by a fraction of a pixel.
A Texture2D can appear shifted or misplaced when you're not drawing the whole texture but only a part of it via the source rectangle parameter, and the texture's position (a Vector2) has non-integral coordinates. It can look like unwanted texels showing up at its edges.
If you have a texture with a 1 px purple border, for example, the actual image can appear with slightly purple edges. You can avoid that by making the coordinates integral.
If this code causes trouble…
Texture.Position.X = 4.9876f; // 4.9876f is an example of actual value
Texture.Position.Y = 5.1234f;
…try adding a cast:
Texture.Position.X = (int)4.9876f; // truncates to 4, a whole-pixel coordinate
Texture.Position.Y = (int)5.1234f; // truncates to 5

HLSL beginner needs some directions

Is there any example out there of an HLSL .fx file that splats a tiled texture with different tiles? Like this: http://messy-mind.net/blog/wp-content/uploads/2007/10/transitions.jpg. You can see there's a different tile type in each square, and there's a little blurring between them to make a smoother transition, but right now I just need to find a way to draw the tiles onto a texture. I have a 2D array of integers, each integer corresponding to a tile type (0 = grass, 1 = stone, 2 = sand). I opened up a few HLSL examples and they were really confusing. Everything is running fine on the C++ side, but HLSL is proving to be difficult.
You can use a technique called 'texture splatting'. It mixes several textures (color maps) using another texture which contains alpha values for each color map. The texture with alpha values is an equivalent of your 2D array. You can create a 3-channel RGB texture and use each channel for a different color map (in your case: R - grass, G - stone, B - sand). Every pixel of this texture tells us how to mix the color maps (for example R=0 means 'no grass', G=1 means 'full stone', B=0.5 means 'sand, half intensity').
Let's say you have four RGB textures: tex1 - grass, tex2 - stone, tex3 - sand, alpha - mixing texture. In your .fx file, you create a simple vertex shader which just calculates the position and passes the texture coordinate on. The whole thing is done in pixel shader, which should look like this:
float tiling_factor = 10; // number of texture repetitions; you can also
                          // specify a separate factor for each texture
float4 PS_TexSplatting(float2 tex_coord : TEXCOORD0)
{
    float3 color = float3(0, 0, 0);
    float3 mix = tex2D(alpha_sampler, tex_coord).rgb;
    color += tex2D(tex1_sampler, tex_coord * tiling_factor).rgb * mix.r;
    color += tex2D(tex2_sampler, tex_coord * tiling_factor).rgb * mix.g;
    color += tex2D(tex3_sampler, tex_coord * tiling_factor).rgb * mix.b;
    return float4(color, 1);
}
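The simple vertex shader mentioned above could look roughly like this (a sketch for a D3D9-style .fx effect; the matrix name and struct layout are assumptions):
float4x4 world_view_proj; // assumed world-view-projection matrix

struct VS_OUTPUT
{
    float4 pos : POSITION;
    float2 tex_coord : TEXCOORD0;
};

VS_OUTPUT VS_TexSplatting(float4 pos : POSITION, float2 tex_coord : TEXCOORD0)
{
    VS_OUTPUT output;
    output.pos = mul(pos, world_view_proj); // just transform the vertex
    output.tex_coord = tex_coord;           // pass the texture coordinate on unchanged
    return output;
}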
If your application supports multi-pass rendering you should use it.
You should use a multi-pass shader approach: render the base object with the tiled stone texture in the first pass, and on top of it render the decal passes with different shaders and different detail textures, each with its own separate transparent alpha map.
(The transparency map could also be stored in your detail texture, but keeping it separate allows different tile levels and more flexibility in reusing it.)
Additionally, you can use a different texture coordinate channel for each decal pass so that you do not need to hardcode your tile level.
So at minimum you need two shaders, where Shader 2 is used once per decal (a sketch of such a decal pass follows below):
1. A shader to render the tiled base texture.
2. A shader to render one tiled detail texture using a separate transparency map.
If you have multiple decals z-fighting can occur and you should offset your polygons a little. (Very similar to basic simple fur rendering.)
Otherwise you need a single shader which takes multiple textures and lays them over the base tiled texture. This solution is less flexible, but you can use one texture for the mix between the textures (the equivalent of your 2D array).
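As a rough sketch of the decal pass (Shader 2 above), with made-up sampler and variable names and alpha blending assumed to be enabled for this pass:
sampler detail_sampler;       // the tiled detail texture for this decal (e.g. grass)
sampler decal_alpha_sampler;  // its separate transparency map, sampled untiled

float detail_tiling = 10;

float4 PS_DecalPass(float2 tex_coord : TEXCOORD0) : COLOR0
{
    float3 detail = tex2D(detail_sampler, tex_coord * detail_tiling).rgb;
    // the transparency map decides how much of this decal covers the base pass
    float alpha = tex2D(decal_alpha_sampler, tex_coord).r;
    return float4(detail, alpha);
}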
