Using WebGL to render high bit-depth textures to the screen - colors

Our project uses images with bit depths higher than 8 bits, typically 10-bit. These are stored as 16-bit PNGs in the P3 colour space (so 1024 values per channel).
I am trying to show these images in a browser using WebGL2, so far with no luck. I know Chrome can do it, as I have some test images that reveal an extended colour range on my MacBook's Retina screen (but not on an 8-bit external monitor).
Here's the test image: https://webkit.org/blog-files/color-gamut/Webkit-logo-P3.png (Source: https://webkit.org/blog/6682/improving-color-on-the-web/)
If you're using an 8-bit screen and hardware, the test image will look entirely red. If you have a high bit-depth monitor, you'll see a faint WebKit logo. Despite my high bit-depth monitor showing the logo detail in Chrome, a WebGL quad with this texture applied looks flat red.
My research has shown that WebGL/OpenGL does offer support for floating point textures and high bit depth, at least when drawing to a render target.
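For example, the render-target case from that research does seem to work. Here is a minimal sketch (my own code, assuming a WebGL2 context named gl and the EXT_color_buffer_float extension) of rendering into a 16-bit float texture; the limitation is that the result still has to be drawn into the 8-bit canvas to reach the screen:
// Sketch: off-screen 16-bit float render target in WebGL2 (assumes EXT_color_buffer_float).
var ext = gl.getExtension("EXT_color_buffer_float");
if (!ext) {
  console.warn("EXT_color_buffer_float unavailable; RGBA16F is not renderable here");
}

var fboTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, fboTex);
gl.texStorage2D(gl.TEXTURE_2D, 1, gl.RGBA16F, 512, 512); // immutable 16F storage
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

var fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, fboTex, 0);
// Anything drawn while fbo is bound is kept at 16 bits per channel, but it still
// has to be composited into the (8-bit) canvas to become visible.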
What I want to achieve is simple: use a high bit-depth texture in WebGL, applied to an on-screen quad. Here's how I am loading the texture:
var texture = gl.createTexture();
gl.activeTexture(gl.TEXTURE0 + 0);
gl.bindTexture(gl.TEXTURE_2D, texture);
// Store as a 16 bit float
var texInternalFormat = gl.RGBA16F;
var texFormat = gl.RGBA16F;
var texType = gl.FLOAT;
var image = new Image();
image.src = "10bitTest.png";
image.addEventListener('load', function() {
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(
    gl.TEXTURE_2D,
    0,
    texInternalFormat,
    texFormat,
    texType,
    image
  );
  gl.generateMipmap(gl.TEXTURE_2D);
});
This fails with
WebGL: INVALID_ENUM: texImage2D: invalid format
If I change texFormat to gl.RGBA, it renders the quad, but plain red, without the extended colours.
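For reference, the nearest thing to a valid call I have found (going by the WebGL2 internal-format tables) pairs the sized RGBA16F internal format with the unsized gl.RGBA format and a float type. It uploads without errors, but still looks flat red:
// Valid WebGL2 combination: sized internal format, unsized format, float type.
var texInternalFormat = gl.RGBA16F;
var texFormat = gl.RGBA;       // the format argument must be unsized, not RGBA16F
var texType = gl.HALF_FLOAT;   // gl.FLOAT is also accepted for RGBA16F
gl.texImage2D(gl.TEXTURE_2D, 0, texInternalFormat, texFormat, texType, image);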
I'm wondering if it's possible at all, although Chrome can clearly do it, so I am still holding out hope.

AFAIK:
You can not (as of June 2020) create a canvas that is more than 8 bits per channel in any browser. There are proposals but none have shipped.
You can not load > 8-bit-per-channel images into WebGL via img or ImageBitmap. There are no tests that data above 8 bits makes it into the textures.
You can load a > 8-bit-per-channel image into a texture if you parse and decode the image yourself in JavaScript, but then you fall back to problem #1: you can not display it except by drawing it into an 8-bit-per-channel canvas. You could pull the data back out into JavaScript, generate a 16-bit image blob, get a URL for the blob, add an img tag using that URL, and pray the browser supports drawing it with more than 8 bits per channel.
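A sketch of what that parse-it-yourself route looks like, assuming a hypothetical decodePNG16() helper from some PNG library (nothing built into the browser exposes the 16-bit samples):
// decodePNG16() is a hypothetical 16-bit PNG decoder returning { width, height, data },
// where data is a Uint16Array of RGBA samples in the 0..65535 range.
fetch("10bitTest.png").then(r => r.arrayBuffer()).then(buf => {
  const { width, height, data } = decodePNG16(buf);

  // Normalize to floats so the texture keeps the extra precision.
  const floats = new Float32Array(data.length);
  for (let i = 0; i < data.length; ++i) {
    floats[i] = data[i] / 65535;
  }

  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA16F, width, height, 0, gl.RGBA, gl.FLOAT, floats);
  // No mipmaps in this sketch, so move MIN_FILTER off the mipmap default.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

  // The texture now holds >8-bit data, but sampling it in a shader and drawing to
  // the canvas still quantizes the output to 8 bits per channel.
});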

For the record, this is not currently possible. Recent activity looks promising:
https://bugs.chromium.org/p/chromium/issues/detail?id=1083693&can=2&q=component%3ABlink%3ECanvas%20colorspace

Related

How can I increase the size of the background image in an Adaptive Card for the Bot Emulator?

I am creating an Adaptive Card with a background image, and I want to increase the size of the background image to 400x400.
AdaptiveCard card = new AdaptiveCard();
card.BackgroundImage = "https://www.w3schools.com/html/img_girl.jpg";

// Body content
// Add text to the card.
card.Body.Add(new TextBlock()
{
    Text = "Hiya, I am testing Adaptive card background image. <a>https://www.google.co.in</a>",
    Size = TextSize.Large,
    Weight = TextWeight.Bolder
});
I am testing with the Bot Emulator.
AFAIK, this feature is not supported for now. The background image automatically covers the Adaptive Card from the card's top-left corner: it is first scaled to fit the available width while keeping its original aspect ratio. The card's rendering also depends on the client; some clients limit the height of the card and some don't. The Bot Emulator does not limit the card's height, so the card's height grows with the content of the Adaptive Card.
So if your background image ends up shorter than the card, it cannot cover the whole card, and if it ends up taller, the bottom part of the image is clipped.
This is how the background image works for now; there is no way to control its size from the bot.

Rendering Text with Signed Distance Fields in WebGL

Core Problem:
I want to render Text in WebGL.
I don't want to do this via an "overlayed" HTML DOM Element.
I want the actual text rendering to happen inside WebGL.
Initial Solution 1
Render each character as a high number of quads.
This is not good as I need to render many characters.
Initial Solution 2 (implemented + tried this one).
Using Canvas, render all characters into an "atlas/map".
Convert Canvas into a WebGL Texture.
When I need to render a character, just pull it from the Texture.
Problem: Even if the canvas renders the font at size 80 and WebGL draws it at size 20, it's still blurry due to various forms of antialiasing, interpolation, and whatever other post-processing.
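Roughly what that atlas approach looks like, as a simplified sketch of my own code (the real version packs many glyphs and tracks their UV rectangles):
// Draw a glyph into a 2D canvas at a large size, then upload the canvas as a texture.
const atlas = document.createElement("canvas");
atlas.width = 1024;
atlas.height = 1024;
const ctx2d = atlas.getContext("2d");
ctx2d.font = "80px sans-serif";
ctx2d.fillStyle = "white";
ctx2d.fillText("A", 0, 80); // one cell per character in the real atlas

const fontTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, fontTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, atlas);
gl.generateMipmap(gl.TEXTURE_2D);
// Each character is then a quad whose UVs select its cell; the blur shows up when
// this bitmap is minified and filtered down to the on-screen font size.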
What I think is the right solution (not sure, I may be wrong on this):
Signed Distance Field: https://www.youtube.com/watch?v=CGZRHJvJYIg
For every pixel, store distance to nearest border.
Question
I am having trouble finding any WebGL implementation of Signed Distance Fields.
Can SDFs work with WebGL, or is there some limitation of WebGL which prevents SDFs from working?
If so, is there some library that will take care of:
actual shader for rendering SDF AND
can take a font and produce the SDFs necessary for rendering?
EDIT: Can someone please verify that the following is a COMPLETE SDF shader?
(copied from https://www.mapbox.com/blog/text-signed-distance-fields/ )
precision mediump float;
uniform sampler2D u_texture;
uniform vec4 u_color;
uniform float u_buffer;
uniform float u_gamma;
varying vec2 v_texcoord;
void main() {
  float dist = texture2D(u_texture, v_texcoord).r;
  float alpha = smoothstep(u_buffer - u_gamma, u_buffer + u_gamma, dist);
  gl_FragColor = vec4(u_color.rgb, alpha * u_color.a);
}
Yes, SDFs are perfectly suitable for a WebGL application; Mapbox uses them, for example. The post you linked actually contains the complete SDF fragment shader, since it's incredibly simple.
As for the second part of your question: it's better to prepare the SDF texture for a font beforehand, and there are tools that do exactly that. This one, for example.
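For completeness, here is a rough sketch of how that shader's uniforms are typically driven from JavaScript. The names match the shader above; the specific values are my assumptions (a common convention encodes the glyph edge around 192/255, hence a buffer of about 0.75), not something prescribed by WebGL:
// Assumes `program` is the linked SDF program and `sdfTexture` is the glyph atlas.
gl.useProgram(program);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, sdfTexture);
gl.uniform1i(gl.getUniformLocation(program, "u_texture"), 0);
gl.uniform4f(gl.getUniformLocation(program, "u_color"), 0, 0, 0, 1); // opaque black

// u_buffer: distance value treated as the glyph edge; u_gamma: half-width of the
// smoothstep anti-aliasing band, usually scaled with the on-screen font size.
gl.uniform1f(gl.getUniformLocation(program, "u_buffer"), 0.75);
gl.uniform1f(gl.getUniformLocation(program, "u_gamma"), 0.1);

// ...then draw the glyph quads with v_texcoord pointing into the atlas.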

WebGL: Text rendering and blend issue

Hi,
I have a rendering issue with text in WebGL.
Here's the problem:
The first rendering is crappy, the second is OK.
The difference is in my DOM (nothing related to the HTML DOM):
The difference between views V2 and V3 is:
V2 is just a green rectangle (composed of 2 GL triangles) and contains a DOM child V4, which is a text view (i.e. text rendered into a canvas and then copied into a texture).
The blending is done by the GPU.
V3 is a text view with a green background. The text is rendered into a canvas and then into a texture (like V4), and a shader fills the rectangle and samples the texture to generate the final view, so there is no blending (it's done in the shader).
It looks like a problem with the blend and texture configuration, but I cannot find the right settings.
Here is my default configuration for blending:
gl_ctx.disable (gl_ctx.DEPTH_TEST);
gl_ctx.enable (gl_ctx.BLEND);
gl_ctx.blendFunc (gl_ctx.SRC_ALPHA, gl_ctx.ONE_MINUS_SRC_ALPHA);
gl_ctx.clearDepth (1.0);
gl_ctx.clearColor (1, 1, 1, 1);
Thanks in advance for your help.
NOTE 1: A view (Vn) is the basic document object in my WebGL toolkit. Internally it's called a Sprite; it's basically composed of 4 vertices, with a vertex shader and a fragment shader associated with it for rendering.
NOTE 2: If I use this blend configuration:
gl_ctx.blendFunc (gl_ctx.ONE, gl_ctx.ONE_MINUS_SRC_ALPHA);
The text rendering works well, but the rest of the rendering, especially images, has incorrect alpha.
NOTE 3: sorry, I don't have enough reputation(!!!) to include images in my post :-(
Canvas 2D always uses pre-multiplied alpha so you pretty much have to use the second blendfunc. Can you set that blendfunc just when you render text?
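To make that concrete, here is a sketch using the gl_ctx naming from the question; drawSprites() and drawTextViews() are hypothetical stand-ins for the toolkit's own draw calls:
// Default blending for ordinary (non-premultiplied) textures.
gl_ctx.blendFunc(gl_ctx.SRC_ALPHA, gl_ctx.ONE_MINUS_SRC_ALPHA);
drawSprites();    // hypothetical: regular views and images

// Canvas-2D-sourced textures are premultiplied, so blend them accordingly.
gl_ctx.blendFunc(gl_ctx.ONE, gl_ctx.ONE_MINUS_SRC_ALPHA);
drawTextViews();  // hypothetical: text textures copied from a canvas

// Restore the default for whatever is drawn next.
gl_ctx.blendFunc(gl_ctx.SRC_ALPHA, gl_ctx.ONE_MINUS_SRC_ALPHA);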

AndEngine image quality issue

I'm trying to create a game, and the images, in both PNG and SVG formats, seem to get pixelated no matter how I export them; I can't get HD images like Angry Birds has, for example.
I tried exporting from Illustrator as a large PNG (512x512) or as SVG Basic 1.1, and still nothing comes close to the quality I need, especially on tablets, where you can see even the SVG being ruined. I guess the engine creates a bitmap out of the SVG and then scales it to fit the screen resolution (!?).
Is there any reference or tutorial on the right way to create images for games, to get a good result and prevent images from being pixelated?
I think you may be loading your SVG images incorrectly, as they should be "pixel perfect":
final PictureBitmapTextureAtlasSource textureSource =
        new SVGAssetBitmapTextureAtlasSource(activity, assetPath, width, height);

Bitmap pBitmap = null;
try {
    pBitmap = textureSource.onLoadBitmap(Config.ARGB_8888);
} catch (Exception e) {
    e.printStackTrace();
}
The key is to pay attention to the "width" and "height" parameters; these should be the original size of your graphic multiplied by your screen scaling.
For example, suppose you use 800x480 as your default resolution and design a 100x100 sprite that works perfectly at this resolution. When your game creates the SVG on a device with dimensions 800x480, your sprite will be rasterised with a width and height of 100x100. However, if the device were 1000x700, "scaleX" would be 1000/800 = 1.25 and scaleY would be roughly 1.46, which yields a sprite of 125x146 pixels that is pixel perfect for this device.
When creating the TextureRegion later on, make sure you specify a width and height of 100x100; otherwise your sprite will change size based on the resulting SVG bitmap.
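The sizing rule from that example, written out as a small calculation (plain JavaScript just to show the arithmetic; in AndEngine this would live in your Java texture-loading code):
// Design resolution vs. an assumed larger device, per the numbers above.
const designW = 800, designH = 480;
const deviceW = 1000, deviceH = 700;
const scaleX = deviceW / designW;            // 1.25
const scaleY = deviceH / designH;            // ~1.46
// A 100x100 sprite should be rasterised from the SVG at:
const rasterW = Math.round(100 * scaleX);    // 125
const rasterH = Math.round(100 * scaleY);    // ~146
// ...while the TextureRegion is still created at 100x100.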

How to display high resolution images in iOS4 using UIImageView

I wanted to know how I should use high-res images in the iOS 4 SDK using UIImageView.
blackBox = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"alert_bg.png"]];
blackBox.frame = CGRectMake(98.0f, 310.0f, 573.0f, 177.0f);
When I use this code I get strange results... the image does not come out at the correct size. It looks very big on the iPhone 4 screen.
Should I use 326 ppi images?
I have read http://developer.apple.com/library/ios/#documentation/iphone/conceptual/iphoneosprogrammingguide/SupportingResolutionIndependence/SupportingResolutionIndependence.html but I am very confused.
Thanks
Saurabh
The key thing to understand about supporting the Retina display is that, in your code, the screen is always 320x480. You don't need to double the resolution of anything but your image resources themselves. In this case, you just need to put two resources in your app bundle: an alert_bg.png that fits on a 320x480 screen (in this case, I'd guess that'd be 286x88) and an alert_bg@2x.png, exactly double the size of the other, that fits on a 640x960 one. If you ask UIKit for [UIImage imageNamed:@"alert_bg"], it'll automatically pick the correct-resolution resource for the current screen.
You should provide a 480x320-pixel image named "alert_bg.png" for the 3G, 3GS and original iPhone, and another 960x640 px one named "alert_bg@2x.png" for the iPhone 4.
The "@2x" suffix in the name is detected automatically by iOS, which loads that image instead of the standard-resolution one if it finds it.
