I'm writing a program that takes many sprites and puts them together into a final .png output. The thing is, each sprite has an RGBA color associated with it to tint it.
I found someone's repo that does something very similar to what I'm doing, but instead of exporting a png file he is using Bevy to render it, and the result is what I'm expecting.
He is just passing the RGBA color to Bevy's Sprite struct: https://docs.rs/bevy/0.8.1/bevy/prelude/struct.Sprite.html.
As the documentation says, the color is "The sprite's color tint".
I'm working with the image crate, and colorops does not provide any function for what I want. I have to write the function manually, which is OK, but I have no idea what the algorithm is: how do I take one RGBA pixel of my sprite and mix it with the tint RGBA to get the result I want, the result Bevy gets?
Despite all my efforts looking through the Bevy repo, I couldn't find the algorithm.
So is there maybe another crate that provides such a function, or what's the algorithm for it?
Thank you
The usual algorithm for tinting an image that you might find in a game engine (I don't know if it's what Bevy uses) is to multiply each color component of each pixel of the image by the corresponding color component of the tint. This creates an effect similar to looking at a physical object through a color filter or which is lit by a colored light source.
With image you can use ImageBuffer::pixels_mut to get at each pixel of the image and make a change.
use image::{Rgb, Rgba, RgbaImage};

pub fn tint_image(image: &mut RgbaImage, tint: Rgb<f32>) {
    let Rgb([tint_r, tint_g, tint_b]) = tint;
    // Multiply each color channel by the matching tint component;
    // `as u8` saturates, so results above 255.0 clamp to 255.
    for Rgba([r, g, b, _]) in image.pixels_mut() {
        *r = (*r as f32 * tint_r) as u8;
        *g = (*g as f32 * tint_g) as u8;
        *b = (*b as f32 * tint_b) as u8;
    }
}
I am using f32 for the tint color components so that I don't have to worry about scaling by 255, and so that the tint can brighten and not only darken the color, if desired. (Also, I haven't tested this code.)
You should feel free to try using different arithmetic to get the exact effect you want; experimentation will build up your understanding of how algorithms relate to the image on screen.
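As a usage sketch (untested; the file names and tint value are placeholders), loading a sprite, tinting it with the function above, and writing the result back out as a .png might look like:
use image::Rgb;

fn main() -> Result<(), image::ImageError> {
    // Load the sprite and convert it to an 8-bit RGBA buffer.
    let mut sprite = image::open("sprite.png")?.to_rgba8();
    // Apply a reddish tint; the components are 0.0..=1.0 scale factors.
    tint_image(&mut sprite, Rgb([1.0, 0.6, 0.6]));
    sprite.save("sprite_tinted.png")?;
    Ok(())
}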
This is the problem I am facing, simplified:
Using DirectX I need to draw two (or more) exactly overlapping triangles in the same 2D plane. The triangles are semi-transparent, but the effect I want to achieve is that together they blend to the transparency of a single triangle.
Is there a way to do this?
I use this to get overlapping transparent triangles to not "accumulate". You need to create a blend state and set it on the output merger.
blendStateDescription.AlphaToCoverageEnable = false;
blendStateDescription.RenderTarget[0].IsBlendEnabled = true;
blendStateDescription.RenderTarget[0].SourceBlend = D3D11.BlendOption.SourceAlpha;
blendStateDescription.RenderTarget[0].DestinationBlend = D3D11.BlendOption.One;
blendStateDescription.RenderTarget[0].BlendOperation = D3D11.BlendOperation.Maximum;
blendStateDescription.RenderTarget[0].SourceAlphaBlend = D3D11.BlendOption.SourceAlpha; //Zero
blendStateDescription.RenderTarget[0].DestinationAlphaBlend = D3D11.BlendOption.DestinationAlpha;
blendStateDescription.RenderTarget[0].AlphaBlendOperation = D3D11.BlendOperation.Maximum;
blendStateDescription.RenderTarget[0].RenderTargetWriteMask = D3D11.ColorWriteMaskFlags.All;
Hope this helps. The code is in C#, but it works the same in C++ etc. Basically, it takes the alpha of both source and destination, compares them, and takes the max, which will always be the same as long as you use the same alpha on both triangles; otherwise it will render the one with the higher alpha.
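To actually use the description, you create a blend state from it and bind it on the output-merger stage, roughly like this (a sketch assuming SharpDX with an existing device and deviceContext; untested):
var blendState = new D3D11.BlendState(device, blendStateDescription);
// Bind it on the output merger: no blend factor, default sample mask.
deviceContext.OutputMerger.SetBlendState(blendState, null, -1);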
Edit: I've added a sample of what the blending does in my project; the roads here overlap.
My pixel shader is as follows. I pass the UV coords in a float4: xy are the UV coords, and w is the alpha value.
Pixel shader code:
float4 pixelColourBlend;
// Sample the texture colour, then override its alpha with the value passed in uv.w.
pixelColourBlend = primaryTexture.Sample(textureSamplerStandard, input.uv.xy, 0);
pixelColourBlend.w = input.uv.w;
// Discard nearly transparent pixels.
clip(pixelColourBlend.w - 0.05f);
return pixelColourBlend;
Enabling the depth stencil also prevents this problem: with a standard less-than depth test, once the first triangle has written its depth, a second coplanar triangle fails the test and is never blended on top.
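A minimal sketch of such a depth-stencil state in SharpDX (the names and values here are assumptions, untested):
var depthStencilDescription = D3D11.DepthStencilStateDescription.Default();
depthStencilDescription.IsDepthEnabled = true;
depthStencilDescription.DepthComparison = D3D11.Comparison.Less;
var depthStencilState = new D3D11.DepthStencilState(device, depthStencilDescription);
// The second argument is the stencil reference value.
deviceContext.OutputMerger.SetDepthStencilState(depthStencilState, 0);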
Can anyone tell me how to go about converting an RGB Image object to grayscale? I know there is a lot of information on how to do this in Java already, but I just wanted to get an answer specific to Codename One so others can benefit.
I am trying to implement image binarization using Otsu's algorithm.
You can use Image.getRGB() then modify the array as explained in this answer:
Convert Image to Grayscale with array matrix RGB java
Notice that the answer above is a bit oversimplified, as it doesn't take into account the correct weight per color channel for a proper grayscale effect, but this depends on your nitpicking level.
Then use this version of createImage with the resulting array.
For anyone looking for a simplified way (not using matrices) of doing what Shai is hinting at, here is some sample code:
int[] rgb = image.getRGB();
for (int k = 0; k < rgb.length; k++) {
    // Unpack the ARGB pixel with shifts and masks (plain division would
    // leak the alpha byte into the red value).
    int a = (rgb[k] >> 24) & 0xff;
    int r = (rgb[k] >> 16) & 0xff;
    int g = (rgb[k] >> 8) & 0xff;
    int b = rgb[k] & 0xff;
    // Simple average of the three channels; the original alpha is preserved.
    int intensity = (r + g + b) / 3;
    rgb[k] = (a << 24) | (intensity << 16) | (intensity << 8) | intensity;
}
Image grayImage = Image.createImage(rgb, image.getWidth(), image.getHeight());
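If you want the per-channel weighting Shai mentions, you can swap the intensity line above for the standard Rec. 601 luma weights (a sketch, untested):
// Weighted average: green contributes most to perceived brightness.
int intensity = (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);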
I am currently trying to implement semi-transparent polygons in SharpDX.
At the moment I am using GraphicsDevice and BasicEffect to draw my objects.
// Setup the vertices
game.GraphicsDevice.SetVertexBuffer(myModel.vertices);
game.GraphicsDevice.SetVertexInputLayout(myModel.inputLayout);
// Apply the basic effect technique and draw the object
basicEffect.CurrentTechnique.Passes[0].Apply();
game.GraphicsDevice.Draw(PrimitiveType.TriangleList, myModel.vertices.ElementCount);
This is working fine for normal objects; however, I would like to make some of the objects partially transparent. I've set the alpha value of these objects' colors to 50, however they are still being rendered as opaque. What do I need to do to achieve this effect?
Transparency in SharpDX requires alpha blending values in the range 0..1 for float colors. The comment Nico Schertler provided above solved the question and can be regarded as the answer.
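In other words, a half-transparent red vertex color would use a float alpha of 0.5, not 50 (illustrative only):
var semiTransparentRed = new SharpDX.Color4(1.0f, 0.0f, 0.0f, 0.5f); // r, g, b, a in 0..1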
Without alpha blending, there are two options that you can use in the HLSL shader file:
1. In the pixel shader, use the clip() function, depending on the input color. You could define black as your transparent color and not draw any black pixels, like so:
float4 PS( PS_IN input ) : SV_Target
{
    // clip() discards the pixel whenever its argument is negative.
    clip(input.color[3] < 0.1f ? -1 : 1);
    return input.color;
}
ref: https://learn.microsoft.com/en-us/windows/win32/direct3dhlsl/dx-graphics-hlsl-clip
2. Modify the vertex shader to project these vertices to (0,0,0), depending on the input color. You could define black as your transparent color and collapse any black triangles, like so:
PS_IN VS( VS_IN input)
{
    PS_IN output = (PS_IN)0;
    // Only transform non-black vertices; black ones stay collapsed at the origin.
    if ((input.color[0] != 0) || (input.color[1] != 0) || (input.color[2] != 0))
    {
        output.position = mul(worldViewProj, input.position);
    }
    output.color = input.color;
    return output;
}
The effect is visible on the edges of my HeightField mesh; on the left is the unchanged version.
NOTE: The latter solution gives sharper edges, but it only works when (0,0,0) is behind the object.
Is there any example out there of an HLSL .fx file that splats a tiled texture with different tiles? Like this: http://messy-mind.net/blog/wp-content/uploads/2007/10/transitions.jpg. You can see there's a different tile type in each square, and there's a little blurring between them to make a smoother transition, but right now I just need to find a way to draw the tiles on a texture. I have a 2D array of integers, each integer equal to a corresponding tile type (0 = grass, 1 = stone, 2 = sand). I opened up a few HLSL examples and they were really confusing. Everything is running fine on the C++ side, but HLSL is proving to be difficult.
You can use a technique called 'texture splatting'. It mixes several textures (color maps) using another texture which contains alpha values for each color map. The texture with alpha values is an equivalent of your 2D array. You can create a 3-channel RGB texture and use each channel for a different color map (in your case: R - grass, G - stone, B - sand). Every pixel of this texture tells us how to mix the color maps (for example R=0 means 'no grass', G=1 means 'full stone', B=0.5 means 'sand, half intensity').
Let's say you have four RGB textures: tex1 - grass, tex2 - stone, tex3 - sand, alpha - mixing texture. In your .fx file, you create a simple vertex shader which just calculates the position and passes the texture coordinate on. The whole thing is done in pixel shader, which should look like this:
float tiling_factor = 10; // number of texture repetitions; you can also
                          // specify a separate factor for each texture
float4 PS_TexSplatting(float2 tex_coord : TEXCOORD0)
{
    float3 color = float3(0, 0, 0);
    // The mixing texture is sampled with untiled coordinates and supplies
    // a per-pixel weight for each color map.
    float3 mix = tex2D(alpha_sampler, tex_coord).rgb;
    color += tex2D(tex1_sampler, tex_coord * tiling_factor).rgb * mix.r; // grass
    color += tex2D(tex2_sampler, tex_coord * tiling_factor).rgb * mix.g; // stone
    color += tex2D(tex3_sampler, tex_coord * tiling_factor).rgb * mix.b; // sand
    return float4(color, 1);
}
If your application supports multi-pass rendering, you should use it.
You should use a multi-pass shader approach where you render the base object with the tiled stone texture in the first pass and on top render the decal passes with different shaders and different detail textures with separate transparent alpha maps.
(The transparency map could also be stored in your detail texture, but keeping it separate allows different tile levels and more flexibility in reusing it.)
Additionally, you can use a different texture coordinate channel for each decal pass so that you do not need to hardcode your tile level.
So at minimum you need two shaders, where shader 2 is used once per decal (a sketch of it follows below):
1. A shader to render the tiled base texture.
2. A shader to render one tiled detail texture using a separate transparency map.
If you have multiple decals, z-fighting can occur and you should offset your polygons a little. (Very similar to basic fur rendering.)
Otherwise you need a single shader which takes multiple textures and lays them on top of the base tiled texture. This solution is less flexible, but you can use one texture for the mix between the textures (the equivalent of your 2D array).
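For the multi-pass route, the pixel shader for a decal pass (shader 2 above) might look roughly like this; the sampler names, tiling factor, and use of the mask's red channel are assumptions:
float4 PS_Decal(float2 tex_coord : TEXCOORD0) : COLOR0
{
    // The untiled transparency map decides where this decal shows through.
    float alpha = tex2D(mask_sampler, tex_coord).r;
    // The tiled detail texture provides the surface color.
    float3 detail = tex2D(detail_sampler, tex_coord * tiling_factor).rgb;
    // Alpha blending with the previous pass does the compositing.
    return float4(detail, alpha);
}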
After doing some research and reading information about OpenCV object detection, I am still not sure how I can detect a stick in a video frame. What would be the best way, so that I can detect it even if the user moves it around? I'll be using the stick as a sword and make a lightsaber out of it. Any pointers on where I can start? Thanks!
The go-to answer for this would usually be the Hough line transform. The Hough transform is designed to find straight lines (or other contours) in the scene, and OpenCV can parameterize these lines so you get the endpoints coordinates. But, word to the wise, if you are doing lightsaber effects, you don't need to go that far - just paint the stick orange and do a chroma key. Standard feature of Adobe Premiere, Final Cut Pro, Sony Vegas, etc. The OpenCV version of this is to convert your frame to HSV color mode, and isolate regions of the picture that lie in your desired hue and saturation region.
http://opencv.itseez.com/doc/tutorials/imgproc/imgtrans/hough_lines/hough_lines.html?highlight=hough
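If you do go the Hough route, a rough sketch with OpenCV's modern C++ API (the threshold, length, and gap parameters are assumptions you would need to tune):
#include <opencv2/opencv.hpp>

// Returns candidate line segments (x1, y1, x2, y2) found in one video frame.
std::vector<cv::Vec4i> findStickCandidates(const cv::Mat& frame) {
    cv::Mat gray, edges;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 50, 150); // the edge map feeds the Hough transform
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 30, 10);
    return lines;
}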
Here is an old routine I wrote as an example of the color isolation approach:
//Photoshop-style color range selection with hue and saturation parameters.
//Expects input image to be in Hue-Lightness-Saturation colorspace.
//Returns a binary mask image. Hue and saturation bounds expect values from 0 to 255.
IplImage* selectColorRange(IplImage *image, double lowerHueBound, double upperHueBound,
                           double lowerSaturationBound, double upperSaturationBound) {
    cvSetImageCOI(image, 1); //select hue channel
    IplImage* hue1 = cvCreateImage(cvSize(image->width, image->height), IPL_DEPTH_8U, 1);
    cvCopy(image, hue1); //copy hue channel to hue1
    cvFlip(hue1, hue1); //vertical-flip
    IplImage* hue2 = cvCloneImage(hue1); //clone hue image
    cvThreshold(hue1, hue1, lowerHueBound, 255, CV_THRESH_BINARY); //threshold lower bound
    cvThreshold(hue2, hue2, upperHueBound, 255, CV_THRESH_BINARY_INV); //threshold inverse upper bound
    cvAnd(hue1, hue2, hue1); //intersect the threshold pair, save into hue1

    cvSetImageCOI(image, 3); //select saturation channel
    IplImage* saturation1 = cvCreateImage(cvSize(image->width, image->height), IPL_DEPTH_8U, 1);
    cvCopy(image, saturation1); //copy saturation channel to saturation1
    cvFlip(saturation1, saturation1); //vertical-flip
    IplImage* saturation2 = cvCloneImage(saturation1); //clone saturation image
    cvThreshold(saturation1, saturation1, lowerSaturationBound, 255, CV_THRESH_BINARY); //threshold lower bound
    cvThreshold(saturation2, saturation2, upperSaturationBound, 255, CV_THRESH_BINARY_INV); //threshold inverse upper bound
    cvAnd(saturation1, saturation2, saturation1); //intersect the threshold pair, save into saturation1

    cvAnd(saturation1, hue1, hue1); //intersect the matched hue and matched saturation regions
    cvSetImageCOI(image, 0); //reset the channel of interest so the caller's image is unchanged
    cvReleaseImage(&saturation1);
    cvReleaseImage(&saturation2);
    cvReleaseImage(&hue2);
    return hue1;
}
A little verbose, but you get the idea!
My old professor always said that the first law of computer vision is to do whatever you can to the image to make your job easier.
If you have control over the stick's appearance, then you might have the best luck painting the stick a very specific color --- neon pink or something that isn't likely to appear in the background --- and then using color segmentation combined with connected component labeling. That would be very fast.
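A sketch of that idea with the modern C++ API (the HSV bounds for "neon pink" are assumptions you would calibrate against your footage):
#include <opencv2/opencv.hpp>

// Returns a binary mask of the largest pink blob in the frame.
cv::Mat segmentStick(const cv::Mat& frame) {
    cv::Mat hsv, mask;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    // Hue ~145-175 covers magenta/pink in OpenCV's 0-179 hue range.
    cv::inRange(hsv, cv::Scalar(145, 100, 100), cv::Scalar(175, 255, 255), mask);
    cv::Mat labels, stats, centroids;
    int count = cv::connectedComponentsWithStats(mask, labels, stats, centroids);
    // Keep only the largest non-background component.
    int best = 0, bestArea = 0;
    for (int i = 1; i < count; i++) {
        int area = stats.at<int>(i, cv::CC_STAT_AREA);
        if (area > bestArea) { bestArea = area; best = i; }
    }
    return labels == best; // element-wise comparison yields a 0/255 mask
}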
You can start by following the face-recognition (training & detection) techniques written for OpenCV.
If you are looking for specific steps, let me know.