WebGL: Text rendering and blend issue

Hi,
I have a rendering issue with text in WebGL.
Here is the problem:
The first rendering is bad; the second is OK.
The difference is in my DOM (nothing related to the HTML DOM):
The difference between views V2 and V3 is:
V2 is just a green rectangle (composed of 2 GL triangles) and contains a DOM child V4, which is a Text View (meaning the text is rendered into a Canvas and then copied into a texture).
The blending is done by the GPU.
V3 is a TextView with a green background. The text is rendered into a Canvas and then into a texture (like V4), and a shader fills the rectangle and samples the texture to generate the final view => no blending (it is actually done by the shader).
It seems to be a problem with the blend and texture configuration, but I cannot find the right configuration.
Here is my default blend configuration:
gl_ctx.disable(gl_ctx.DEPTH_TEST);  // 2D UI, no depth test needed
gl_ctx.enable(gl_ctx.BLEND);
gl_ctx.blendFunc(gl_ctx.SRC_ALPHA, gl_ctx.ONE_MINUS_SRC_ALPHA);  // classic (non-premultiplied) alpha blending
gl_ctx.clearDepth(1.0);
gl_ctx.clearColor(1, 1, 1, 1);  // opaque white background
Thanks in advance for your help.
NOTE 1: A view (Vn) is the basic document object in my WebGL toolkit. Internally it's called a Sprite; it's basically composed of 4 vertices, with a vertex shader and a fragment shader associated with it for rendering.
NOTE 2: If I use this blend configuration:
gl_ctx.blendFunc(gl_ctx.ONE, gl_ctx.ONE_MINUS_SRC_ALPHA);
the text rendering works well, but the rest of the rendering, especially images, has incorrect alpha.
NOTE 3: Sorry, I don't have enough reputation(!!!) to include an image in my post :-(

Canvas 2D always uses premultiplied alpha, so you pretty much have to use the second blend function. Can you set that blend function just when you render text?
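A minimal sketch of that idea, assuming drawImages() and drawText() are hypothetical stand-ins for your own render passes:

// Regular, non-premultiplied image textures:
gl_ctx.blendFunc(gl_ctx.SRC_ALPHA, gl_ctx.ONE_MINUS_SRC_ALPHA);
drawImages();
// Canvas-2D-sourced text textures are premultiplied, so switch first:
gl_ctx.blendFunc(gl_ctx.ONE, gl_ctx.ONE_MINUS_SRC_ALPHA);
drawText();

Alternatively, you can premultiply your image textures at upload time with gl_ctx.pixelStorei(gl_ctx.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true) and then use the ONE, ONE_MINUS_SRC_ALPHA blend function everywhere.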

Related

Why are the color and texture of a 3D file missing in an ArcGIS scene?

I am trying to build an ArcGIS scene with my custom 3D models using ModelSceneSymbol, but when the model finally loads (.dae format) it appears pure white, even though I explicitly designed it red. Also, most of the other formats like .fbx, .obj, etc. don't seem to get rendered on the scene. Can anyone give me a solution for the color part alone? I am using Vectary for 3D model creation, by the way.

Using webgl to render high bit-depth textures to the screen

Our project uses images with bit depths higher than 8 bits, typically 10-bit. These are stored as 16-bit PNGs in the P3 color space (so 1024 levels per channel).
I am trying to show these images in a browser using WebGL2. So far I'm having no luck. I know Chrome can do it, as I have some test images which reveal an extended color range on my MacBook's Retina screen (but not on an 8-bit external monitor).
Here's the test image: https://webkit.org/blog-files/color-gamut/Webkit-logo-P3.png (Source: https://webkit.org/blog/6682/improving-color-on-the-web/)
If you're using an 8-bit screen and hardware, the test image will look entirely red. If you have a high-bit-depth monitor, you'll see a faint WebKit logo. Despite my high-bit-depth monitor showing the logo detail in Chrome, a WebGL quad with this texture applied looks flat red.
My research has shown that WebGL/OpenGL does offer support for floating point textures and high bit depth, at least when drawing to a render target.
What I want to achieve is simple: use a high-bit-depth texture in WebGL, applied to an on-screen quad. Here's how I am loading the texture:
var texture = gl.createTexture();
gl.activeTexture(gl.TEXTURE0 + 0);
gl.bindTexture(gl.TEXTURE_2D, texture);
// Store as a 16 bit float
var texInternalFormat = gl.RGBA16F;
var texFormat = gl.RGBA16F;
var texType = gl.FLOAT;
var image = new Image();
image.src = "10bitTest.png";
image.addEventListener('load', function() {
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(
    gl.TEXTURE_2D,
    0,
    texInternalFormat,
    texFormat,
    texType,
    image
  );
  gl.generateMipmap(gl.TEXTURE_2D);
});
This fails with
WebGL: INVALID_ENUM: texImage2D: invalid format
If I change texFormat to gl.RGBA, it renders the quad, but plain red, without the extended colors.
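As far as I can tell, the format argument never takes a sized enum like RGBA16F; the combination that matches an RGBA16F internal format in WebGL2 is format RGBA with type FLOAT (or HALF_FLOAT), i.e.:

gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA16F, gl.RGBA, gl.FLOAT, image);

(which is exactly the change that makes it render, just without the extra bits).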
I'm wondering if it's possible at all, although Chrome can do it, so I am still holding out hope.
AFAIK
You can not (as of June 2020) create a canvas that is more than 8 bits per channel in any browser. There are proposals, but none have shipped.
You can not load > 8-bit-per-channel images into WebGL via img or ImageBitmap. There are no tests that data > 8 bits makes it into the textures.
You can load a > 8-bit-per-channel image into a texture if you parse and load the image yourself in JavaScript (see the sketch below), but then you fall back to problem #1: you can not display it except by drawing it into an 8-bit-per-channel canvas. You could pull the data back out into JavaScript, generate a 16-bit image blob, get a URL for the blob, add an img tag using that URL, and pray the browser supports drawing it with > 8 bits per channel.
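A minimal sketch of that parse-it-yourself route (WebGL2 assumed; decodePng16() is a hypothetical decoder you would have to supply, returning RGBA values in a Float32Array):

const gl = someCanvas.getContext('webgl2');
gl.getExtension('EXT_color_buffer_float'); // only needed if you also render INTO float textures
const { width, height, pixels } = decodePng16('10bitTest.png'); // hypothetical 16-bit PNG decoder
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(
  gl.TEXTURE_2D, 0,
  gl.RGBA16F,   // sized internal format: 16-bit float per channel
  width, height, 0,
  gl.RGBA,      // format of the supplied data
  gl.FLOAT,     // pixels is a Float32Array
  pixels
);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

Even then, the final canvas is still 8 bits per channel, so the extra precision survives inside WebGL but not on screen.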
For the record, this is not currently possible. Recent activity looks promising:
https://bugs.chromium.org/p/chromium/issues/detail?id=1083693&can=2&q=component%3ABlink%3ECanvas%20colorspace

Rendering Text with Signed Distance Fields in WebGL

Core Problem:
I want to render Text in WebGL.
I don't want to do this via an "overlayed" HTML DOM Element.
I want the actual text rendering to happen inside WebGL.
Initial Solution 1
Render each character as a high number of quads.
This is not good as I need to render many characters.
Initial Solution 2 (implemented + tried this one).
Using Canvas, render all characters into an "atlas/map".
Convert Canvas into a WebGL Texture.
When I need to render a character, just pull it from the Texture.
Problem: Even if the Canvas renders the font at font size 80 and WebGL renders the font at font size 20, it's still blurry due to various forms of antialiasing, interpolation, and other post-processing.
What I think is the right solution (not sure, I may be wrong on this):
Signed Distance Field: https://www.youtube.com/watch?v=CGZRHJvJYIg
For every pixel, store distance to nearest border.
Question
I am having trouble finding any WebGL implementation of Signed Distance Fields.
Can SDFs work with WebGL, or is there some limitation of WebGL that prevents SDFs from working?
If so, is there some library that will take care of:
the actual shader for rendering the SDF, AND
taking a font and producing the SDFs necessary for rendering?
EDIT: Can someone please verify that the following is a COMPLETE SDF shader?
(copied from https://www.mapbox.com/blog/text-signed-distance-fields/)
precision mediump float;

uniform sampler2D u_texture; // single-channel SDF glyph atlas
uniform vec4 u_color;
uniform float u_buffer;      // distance value treated as the glyph edge
uniform float u_gamma;       // half-width of the antialiasing band

varying vec2 v_texcoord;

void main() {
  float dist = texture2D(u_texture, v_texcoord).r;
  float alpha = smoothstep(u_buffer - u_gamma, u_buffer + u_gamma, dist);
  gl_FragColor = vec4(u_color.rgb, alpha * u_color.a);
}
Yes, SDFs are perfectly suitable for a WebGL application. For example, Mapbox uses them. The post you linked actually contains the whole SDF fragment shader, since it's incredibly simple.
To the second part of your question: it's better to prepare the SDF texture for a font beforehand, and there are tools that do exactly that. This one, for example.
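For completeness, a minimal sketch of driving that shader from JavaScript; sdfProgram and glyphVertexCount are assumed names, and the uniform values are typical starting points, not canonical constants:

gl.useProgram(sdfProgram); // program compiled from the fragment shader above
gl.uniform4f(gl.getUniformLocation(sdfProgram, 'u_color'), 0.0, 0.0, 0.0, 1.0);
// u_buffer: the normalized distance treated as the glyph edge (depends on how the atlas was baked)
gl.uniform1f(gl.getUniformLocation(sdfProgram, 'u_buffer'), 0.75);
// u_gamma: width of the smoothing band; shrink it as the rendered glyph size grows
gl.uniform1f(gl.getUniformLocation(sdfProgram, 'u_gamma'), 0.1);
gl.drawArrays(gl.TRIANGLES, 0, glyphVertexCount);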

What manager is suggested for Gallery Manager?

I am working on implementing a gallery. I tried GridFieldManager for this, but the thumbnail images are not all the same size. I looked through the GridFieldManager class, but there are no methods for making the cell size of each image constant.
Is it worth using FlowFieldManager? When I tried overriding the sublayout method for the above two managers, it did not give the desired results.
Is it possible to sublayout FlowFieldManager?
Device: BlackBerry 9780, OS 6.0
The image below is the desired result I am trying to get:
I advise you to use a simple FlowFieldManager. But instead of a BitmapField inside it, extend Field to do the following:
setExtent to 1/4 of the display width in the sublayout method
draw your own focus on the border of the image
draw your own borders and draw the image in the center of the field's extent

Coordinate system and sprite transformation

I'm using AndEngine to create a physics simulation via Box2D.
The bodies are created through PhysicsFactory using Sprites.
My idea is to procedurally position these sprites, following this pattern:
basically one central sprite which represents my world coordinate center, and a series of cloned sprites that are created by rotating the base sprite around the world center (the "X" inside the circle).
I've tried to use the OpenGL way inside AndEngine (translate, rotate, translate back):
super(stamiRadious, 0, image); // stamiRadious is the distance from the radix (world center) to the "petal" attach point
this.setRotationCenter(0, 0);
this.setRotation((float) Math.toDegrees(angleRad));
this.setPosition(this.getX() + radixX, this.getY() + radixY);
but I failed: the results are not right (wrong final position, and wrong Box2D body properties, as if the sprite were much larger than the image).
I believe part of the problem lies in my interpretation of setRotation and setRotationCenter, and in general in my understanding of the AndEngine coordinate system plus the Box2D coordinate system.
Any thoughts/links to doc/explanation?
Once you have created a physics representation (Body) of a Sprite, you should be very careful about how you modify the Sprite! Usually you don't modify the Sprite at all anymore, but instead modify the Body, by calling
someBody.setTransform(xMeters, yMeters, angleRadians); // pixel positions must first be divided by PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT!
Hope that helped :)
