Core Problem:
I want to render text in WebGL.
I don't want to do this via an overlaid HTML DOM element.
I want the actual text rendering to happen inside WebGL.
Initial Solution 1
Render each character as a high number of quads.
This does not scale, as I need to render many characters.
Initial Solution 2 (implemented and tried this one).
Using Canvas, render all characters into an "atlas/map".
Convert Canvas into a WebGL Texture.
When I need to render a character, just pull it from the Texture.
Problem: even if the canvas renders the font at size 80 and WebGL renders it at size 20, the result is still blurry due to antialiasing, interpolation, and other post-processing. (A rough sketch of this atlas approach is below.)
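For illustration only (this is my own sketch, not code from the post; it assumes an existing WebGL context gl, and all names are made up), the atlas approach described above looks roughly like this:

var atlas = document.createElement('canvas');
atlas.width = atlas.height = 1024;
var ctx = atlas.getContext('2d');
var CELL = 96;                                   // one glyph per 96x96 cell
ctx.font = '80px sans-serif';
ctx.fillStyle = 'white';
var chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ';
for (var i = 0; i < chars.length; i++) {
    var col = i % 10, row = Math.floor(i / 10);
    ctx.fillText(chars[i], col * CELL, row * CELL + 80);   // x = cell left, y = baseline
}

// Upload the whole canvas once; each character is later drawn as a quad
// whose texture coordinates select its cell in this atlas.
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, atlas);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);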
What I think is the right solution. (Not sure, may be wrong on this).
Signed Distance Field: https://www.youtube.com/watch?v=CGZRHJvJYIg
For every pixel, store the distance to the nearest glyph border (a naive construction is sketched below).
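To make the idea concrete, here is a deliberately naive sketch (my own illustration, not from the linked video; buildSDF is a made-up name, and real tools use much faster algorithms) that builds an SDF from a glyph's alpha coverage, e.g. the output of canvas getImageData:

function buildSDF(alpha, w, h, spread) {
    // alpha: Uint8ClampedArray of coverage values (0..255), w x h pixels.
    // spread: maximum distance, in pixels, encoded into the field.
    var sdf = new Uint8ClampedArray(w * h);
    function inside(x, y) { return alpha[y * w + x] > 127; }
    for (var y = 0; y < h; y++) {
        for (var x = 0; x < w; x++) {
            var self = inside(x, y);
            var best = spread;
            // Brute-force search for the nearest pixel on the other side of the edge.
            for (var oy = Math.max(0, y - spread); oy <= Math.min(h - 1, y + spread); oy++) {
                for (var ox = Math.max(0, x - spread); ox <= Math.min(w - 1, x + spread); ox++) {
                    if (inside(ox, oy) !== self) {
                        best = Math.min(best, Math.hypot(ox - x, oy - y));
                    }
                }
            }
            var signed = self ? best : -best;                 // positive inside, negative outside
            sdf[y * w + x] = 128 + (signed / spread) * 127;   // remap to 0..255 for a texture
        }
    }
    return sdf;
}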
Question
I am having trouble finding any WebGL implementation of Signed Distance Fields.
Can SDFs work with WebGL, or is there some limitation of WebGL that prevents SDFs from working?
If they can, is there some library that will take care of:
the actual shader for rendering SDFs, AND
taking a font and producing the SDFs necessary for rendering?
EDIT: Can someone please verify that the following is a COMPLETE SDF shader?
(copied from https://www.mapbox.com/blog/text-signed-distance-fields/ )
precision mediump float;
uniform sampler2D u_texture;
uniform vec4 u_color;
uniform float u_buffer;
uniform float u_gamma;
varying vec2 v_texcoord;
void main() {
    float dist = texture2D(u_texture, v_texcoord).r;
    float alpha = smoothstep(u_buffer - u_gamma, u_buffer + u_gamma, dist);
    gl_FragColor = vec4(u_color.rgb, alpha * u_color.a);
}
Yes, SDFs are perfectly suitable for a WebGL application. For example, Mapbox uses them. The post you linked actually contains the complete SDF shader, since it's incredibly simple.
To the second part of your question: it's better to prepare the SDF texture for a font beforehand, and there are tools to do exactly that. This one, for example.
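As a purely illustrative sketch (the names sdfProgram, sdfAtlasTexture, glyphVertexCount and the numeric values are my own, not from the Mapbox post), the fragment shader above would be driven from JavaScript roughly like this; u_buffer is the normalized distance treated as the glyph edge and u_gamma the smoothing width around it:

gl.useProgram(sdfProgram);   // program compiled from the shader above plus a matching vertex shader
gl.uniform4fv(gl.getUniformLocation(sdfProgram, 'u_color'), [0, 0, 0, 1]);
gl.uniform1f(gl.getUniformLocation(sdfProgram, 'u_buffer'), 0.75);   // illustrative edge threshold
gl.uniform1f(gl.getUniformLocation(sdfProgram, 'u_gamma'), 0.1);     // illustrative smoothing width
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, sdfAtlasTexture);                      // the prebuilt SDF atlas
gl.uniform1i(gl.getUniformLocation(sdfProgram, 'u_texture'), 0);
gl.drawArrays(gl.TRIANGLES, 0, glyphVertexCount);                    // one textured quad per glyph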
Related
Our project uses images with bit depths higher than 8 bits, typically 10-bit. These are stored as 16-bit PNGs with the P3 color space (so 1024 levels per channel).
I am trying to show these images in a browser using WebGL2, so far with no luck. I know Chrome can do it, as I have some test images which reveal an extended colour range on my MacBook's Retina screen (but not on an 8-bit external monitor).
Here's the test image: https://webkit.org/blog-files/color-gamut/Webkit-logo-P3.png (Source: https://webkit.org/blog/6682/improving-color-on-the-web/)
If you're using an 8 bit screen and hardware, the test image will look entirely red. If you have a high bit depth monitor, you'll see a faint webkit logo. Despite my high bit depth monitor showing the logo detail in Chrome, a WebGL quad with this texture applied looks flat red.
My research has shown that WebGL/OpenGL does offer support for floating point textures and high bit depth, at least when drawing to a render target.
What I want to achieve is simple, use a high bit depth texture in WebGL, applied to an on-screen quad. Here's how I am loading the texture:
var texture = gl.createTexture();
gl.activeTexture(gl.TEXTURE0 + 0);
gl.bindTexture(gl.TEXTURE_2D, texture);
// Store as a 16 bit float
var texInternalFormat = gl.RGBA16F;
var texFormat = gl.RGBA16F;
var texType = gl.FLOAT;
var image = new Image();
image.src = "10bitTest.png";
image.addEventListener('load', function() {
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texImage2D(
        gl.TEXTURE_2D,
        0,
        texInternalFormat,
        texFormat,
        texType,
        image
    );
    gl.generateMipmap(gl.TEXTURE_2D);
});
This fails with
WebGL: INVALID_ENUM: texImage2D: invalid format
If I change texFormat to gl.RGBA, it renders the quad, but plain red, without the extended colours.
I'm wondering if it's possible at all, although Chrome can do it, so I am still holding out hope.
AFAIK:
1. You cannot (as of June 2020) create a canvas that is more than 8 bits per channel in any browser. There are proposals, but none have shipped.
2. You cannot load >8-bit-per-channel images into WebGL via img or ImageBitmap. There are no tests that data beyond 8 bits makes it into the textures.
3. You can load a >8-bit-per-channel image into a texture if you parse and load the image yourself in JavaScript, but then you fall back to problem #1, which is that you cannot display it except by drawing it into an 8-bit-per-channel canvas. You could pull the data back out into JavaScript, generate a 16-bit image blob, get a URL for the blob, add an img tag using that URL, and pray the browser supports drawing it with more than 8 bits per channel. (A rough sketch of the manual upload step is below.)
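To illustrate that manual route only (this is my own sketch, not from the answer; decodePng16 is a hypothetical decoder you would have to provide yourself), the WebGL2 upload itself could look like this:

// decodePng16 is assumed to return 16-bit-per-channel RGBA data from the raw PNG bytes.
var decoded = decodePng16(arrayBuffer);                      // hypothetical decoder
var floats = Float32Array.from(decoded.data16, function(v) { return v / 65535; });

var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
// RGBA16F keeps more than 8 bits internally; format/type describe the uploaded data.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA16F, decoded.width, decoded.height, 0,
    gl.RGBA, gl.FLOAT, floats);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
// Rendering this still ends up in an 8-bit canvas, which is exactly the limitation above.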
For the record, this is not currently possible. Recent activity looks promising:
https://bugs.chromium.org/p/chromium/issues/detail?id=1083693&can=2&q=component%3ABlink%3ECanvas%20colorspace
I have very little experience programming with graphics objects. I am currently tasked with exporting a document (a .tiff image) with redaction annotations. The redaction annotation is just a black rectangle object. I am able to get the x coordinate, y coordinate, width and height properties through the XMP data. There is also a property called rotation. This is where I am getting stuck: applying the rotation.
So, imagine a document with a redaction on it blacking out the first paragraph. Then, using a tool in the editor, the user rotates the document so that it is now lying on its side. The client is able to render the redaction correctly because we are using the Atalasoft controls to get and display annotations. Now we have a web service that will go and retrieve that image with redactions. We are not able to use the Atalasoft controls in this service due to licensing issues, so we just extract the XMP data from the .tiff image and manually draw the redactions. The problem is, if the user rotates the document when the redaction is already on it, I have a hard time getting the redaction to rotate correctly (due to my lack of knowledge of graphics programming). If I do not apply any rotation, the redaction is displayed where it was BEFORE the document had been rotated, thus redacting the wrong area of the document.
Here is what I have tried:
Dim rectangle As New Rectangle(xCoordinate, yCoordinate, width, height)
graphics.RotateTransform(rotation)
graphics.FillRectangle(Brushes.Black, rectangle)
When I do this, the redaction does not show up at all on the final document. I have read that I may need to call the following before applying the rotation:
graphics.TranslateTransform(x,y)
But I have no idea what I should be passing in as x and y. It seems like I just need to get the rotation to apply from the upper left corner of the rectangle, but I have yet to figure out a way to properly do this.
Thank you so much for any help or pushes in the right direction!
EDIT 1:
I have also tried this (taken from How can I rotate an RectangleF at a specific degree using Graphics object?).
Dim rectangle As New Rectangle(xCoordinate, yCoordinate, width, height)
Using rotationMatrix As New Matrix
    rotationMatrix.RotateAt(rotation, New PointF(rectangle.Left + (rectangle.Width / 2), rectangle.Top + (rectangle.Height / 2)))
    graphics.Transform = rotationMatrix
    graphics.FillRectangle(Brushes.Black, rectangle)
    graphics.ResetTransform()
End Using
This does rotate the rectangle, but it ends up in the wrong spot, so it is not redacting the correct portion of the document. Once again, when I display the document without any rotation transform, it looks like the redaction simply needs to be rotated using the upper-left corner as an axis, but I'm not quite sure how to accomplish that.
Figured it out. Here is how I am rotating a rectangle using the upper left-hand corner as the axis:
Dim rectangle As New Rectangle(xCoordinate, yCoordinate, width, height)
Using rotationMatrix As New Matrix
    rotationMatrix.RotateAt(rotation, New PointF(rectangle.Left, rectangle.Top))
    graphics.Transform = rotationMatrix
    graphics.FillRectangle(Brushes.Black, rectangle)
    graphics.ResetTransform()
End Using
Hi,
I have a rendering issue with text in WebGL.
Here is the problem:
The first rendering is crappy, the second is OK.
The difference is in my DOM (nothing related to the HTML DOM):
The difference between the views V2 and V3 is:
V2 is just a green rectangle (composed of 2 GL triangles) and contains a DOM child V4, which is a text view (meaning text rendered into a canvas, then copied into a texture).
The blend is done by the GPU.
V3 is a text view with a green background. The text is rendered into a canvas and then into a texture (like V4), and a shader fills the rectangle and samples the texture to generate the final view => no blend (it is actually done by the shader).
It seems to be a problem of blend and texture configuration, but I cannot find the right configuration.
Here my default configuration for the blend:
gl_ctx.disable (gl_ctx.DEPTH_TEST);
gl_ctx.enable (gl_ctx.BLEND);
gl_ctx.blendFunc (gl_ctx.SRC_ALPHA, gl_ctx.ONE_MINUS_SRC_ALPHA);
gl_ctx.clearDepth (1.0);
gl_ctx.clearColor (1, 1, 1, 1);
Thanks in advance for your help.
NOTE 1: A view (Vn) is the basic document object in my WebGL toolkit. Internally it's called a Sprite; it's basically composed of 4 vertices, and a vertex and a fragment shader are associated with it for rendering purposes.
NOTE 2: If I use this blend configuration:
gl_ctx.blendFunc (gl_ctx.ONE, gl_ctx.ONE_MINUS_SRC_ALPHA);
The text rendering works well, but the rest of the rendering, especially images, has incorrect alpha.
NOTE 3: sorry, I don't have enough reputation(!!!) to include an image in my post :-(
Canvas 2D always uses pre-multiplied alpha, so you pretty much have to use the second blendFunc. Can you set that blendFunc just when you render text?
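For illustration only (a sketch under the assumption that your toolkit lets you change GL state per draw; drawSprites and drawTextViews are placeholder names, not real APIs), switching the blend function just for canvas-sourced text could look like this:

function drawScene(gl_ctx) {
    // Regular views and images: straight-alpha blending, as in the default configuration above.
    gl_ctx.blendFunc(gl_ctx.SRC_ALPHA, gl_ctx.ONE_MINUS_SRC_ALPHA);
    drawSprites(gl_ctx);      // placeholder for the normal rendering pass

    // Text textures produced by a 2D canvas are premultiplied, so switch just for them.
    gl_ctx.blendFunc(gl_ctx.ONE, gl_ctx.ONE_MINUS_SRC_ALPHA);
    drawTextViews(gl_ctx);    // placeholder for the text-view pass
}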
I'm trying to create a game, and the images, in both PNG and SVG formats, seem to get pixelated no matter how I export them. I can't get anywhere near HD images like Angry Birds has, for example.
I tried exporting from Illustrator as a large PNG (512x512) or as SVG Basic 1.1, and still nothing seems to get near the high quality I need, especially when you move to tablets; there you can see even the SVG image being ruined. I guess the engine creates a bitmap out of the SVG and then scales it to fit the screen resolution (!?).
Is there any reference or tutorial on the right way to create images for games, to get a good result and prevent images from being pixelated?
I think you may be loading in your images incorrectly, as SVGs should be "Pixel Perfect".
final PictureBitmapTextureAtlasSource textureSource = new SVGAssetBitmapTextureAtlasSource(activity, assetPath, width, height);
Bitmap pBitmap = null;
try {
    pBitmap = textureSource.onLoadBitmap(Config.ARGB_8888);
} catch (Exception e) {
    e.printStackTrace();
}
The key is to pay attention to the "width" and "height" parameters; these should be the original size of your graphic multiplied by your screen scaling.
For example, say you use 800x480 as a default resolution and you design a sprite of 100x100 that works perfectly at that resolution. When your game comes to creating the SVG on a device with dimensions 800x480, your sprite will be rasterized with a width and height of 100x100. However, if the device were 1000x700, the scaleX would be 1000/800, which is 1.25, and the scaleY would be 700/480, which is about 1.46; this would yield a sprite with dimensions 125x146 that is pixel perfect for this device.
When creating the TextureRegion later on, make sure you specify a width and height of 100x100, otherwise your sprite will change size based on the resulting SVG Bitmap.
I'm using AndEngine to create a physics simulation via Box2D.
The bodies are created through PhysicsFactory using Sprites.
My idea is to procedurally position these sprites, following this pattern:
basically one central sprite, which represents my world coordinate center, and a series of cloned sprites created by rotating the base sprite around the world center (the "X" inside the circle).
I've tried to use the OpenGL way inside AndEngine (translate, rotate, translate back):
super(stamiRadious, 0, image); // stamiRadious is the distance from the radix (world center) to the "petal" attach point
this.setRotationCenter(0, 0);
this.setRotation((float) Math.toDegrees(angleRad));
this.setPosition(this.getX()+radixX, this.getY()+radixY);
but I failed: the results are not right (wrong final position, and wrong Box2D body properties, as if the sprite were much larger than the image).
I believe part of the problem lies in my interpretation of setRotation and setRotationCenter, and in general in my understanding of the AndEngine coordinate system plus the Box2D coordinate system.
Any thoughts/links to doc/explanation?
Once you have created a physics representation (Body) of a Sprite, you should be very careful about how you modify the Sprite! Usually you don't modify the Sprite at all anymore, but instead modify the Body, by calling:
someBody.setTransform(x / PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT, y / PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT, angle); // x, y in pixels must be divided by the ratio; angle in radians
Hope that helped :)