How to read pixel data efficiently with WebGL?

Is there a faster method to read pixel data than getImageData()? I'm using THREE.js, and so far, in order to get a pixel's colors from an image and use them in my scene, I have to read the pixel data directly from a canvas element. Is there a faster way?
cheers

getImageData() is an HTML5 canvas method. If you only want to manipulate the pixel values, look further into WebGL fragment shaders instead. If you need the pixel data on the CPU for something else, then I don't think there's a faster method.
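For reference, here is a minimal sketch of the two reads discussed above; the image path and variable names are illustrative, not from the question. The first part reads colors from an image through a 2D canvas with getImageData(); the second reads back pixels from a WebGL drawing buffer with gl.readPixels(), which avoids the 2D canvas but is still a CPU read and will stall the GPU pipeline.

```typescript
// Reading pixel colors from an image via a 2D canvas (the getImageData() route).
const canvas = document.createElement("canvas");
const ctx = canvas.getContext("2d")!;
const image = new Image();
image.onload = () => {
  canvas.width = image.width;
  canvas.height = image.height;
  ctx.drawImage(image, 0, 0);
  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  // imageData.data is a flat RGBA array: pixel (x, y) starts at (y * imageData.width + x) * 4.
  const [r, g, b, a] = imageData.data.slice(0, 4);
  console.log(r, g, b, a);
};
image.src = "texture.png"; // hypothetical image path

// Reading pixels that are already in a WebGL drawing buffer, without a 2D canvas.
function readWebGLPixels(gl: WebGLRenderingContext): Uint8Array {
  const pixels = new Uint8Array(gl.drawingBufferWidth * gl.drawingBufferHeight * 4);
  gl.readPixels(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight,
                gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  return pixels;
}
```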

Related

Render scalable textures in WebGL from SVG

I am working on a 2D fantasy map displayed in the browser via WebGL.
It is procedurally generated, so you can move wherever you want, but you can also zoom in and out without losing quality. I would like to add assets in some places, especially mountains where the altitude is high. I have those assets as vector images (.svg) so that you can still zoom in without losing quality. The thing is, I have no idea how I could draw them on screen. I think I would need to convert those vectors to vertices of triangles, but I am wondering if there is an automatic way to do it. I heard about something called SVGLoader, but I think that is only for three.js, and I am using WebGL alone. What would you advise me to do?
Edit: I just found https://github.com/MoeYc/svg-webgl-loader, which looks interesting.
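One common option, not suggested in the thread itself, is to skip triangulation and instead rasterize the SVG at whatever resolution the current zoom level needs, then upload the result as an ordinary texture. A minimal sketch follows; the function name and asset path are hypothetical, and the texture would have to be re-rasterized when the zoom level changes to stay sharp.

```typescript
// Rasterize an SVG to a canvas at the requested size, then upload it as a WebGL texture.
function svgToTexture(gl: WebGLRenderingContext, url: string,
                      width: number, height: number): Promise<WebGLTexture> {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => {
      const canvas = document.createElement("canvas");
      canvas.width = width;
      canvas.height = height;
      canvas.getContext("2d")!.drawImage(img, 0, 0, width, height);
      const tex = gl.createTexture()!;
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, canvas);
      // Non-power-of-two safe settings for WebGL 1.
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
      resolve(tex);
    };
    img.onerror = reject;
    img.src = url; // e.g. "mountain.svg" (hypothetical asset)
  });
}
```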

Warping through SVG

Is it possible to apply a warp transformation to an image in an SVG?
The goal is to "bend" an image, as if it were stuck on a cylinder.
No. Not easily.
SVG only supports affine transformations.
If your SVG were pure vectors, you could achieve the effect by manipulating the path points using your own non-affine transformation code. But that wouldn't work for bitmap images.
However, you can warp bitmaps with a Canvas element, or perhaps with WebGL.
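A rough sketch of the canvas approach, assuming the image is already loaded; the strip count and the circular falloff are arbitrary choices, and a true cylindrical projection would need a proper mapping. The idea is simply to draw the image in thin vertical strips and squash each strip vertically toward the edges to suggest curvature.

```typescript
// Fake a cylindrical "bend" with the 2D canvas API by drawing the image in vertical
// strips and scaling each strip's height with a circular profile.
function drawCylinderWarp(ctx: CanvasRenderingContext2D, img: HTMLImageElement,
                          x: number, y: number, w: number, h: number): void {
  const strips = 100;                          // more strips -> smoother warp
  const stripW = w / strips;
  for (let i = 0; i < strips; i++) {
    const t = ((i + 0.5) / strips) * 2 - 1;    // -1 at the left edge, +1 at the right
    const scale = Math.sqrt(1 - 0.5 * t * t);  // gentle falloff toward the edges
    const stripH = h * scale;
    ctx.drawImage(img,
      (i / strips) * img.width, 0, img.width / strips, img.height,   // source strip
      x + i * stripW, y + (h - stripH) / 2, stripW, stripH);         // squashed destination strip
  }
}
```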

What is a deep frame buffer?

In a real-time graphics application, I believe a frame buffer is the memory that holds the final rasterised image that will be displayed for a single frame.
References to deep frame buffers seem to imply there's some caching going on (vertex and material info), but it's not clear what this data is used for, or how.
What specifically is a deep frame buffer in relation to a standard frame buffer, and what are its uses?
Thank you.
Google is your friend.
It can mean two things:
You're storing more than just RGBA per pixel. For example, you might be storing normals or other lighting information so you can do re-lighting later.
See, for example, "Interactive Cinematic Relighting with Global Illumination" and "Deep Image Compositing"; a small sketch of this first meaning is given just after the next point.
You're storing more than one color and depth value per pixel. This is useful, for example, to support order-independent transparency.
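To make the first meaning concrete, here is a rough sketch of what one pixel of such a buffer might carry; the field names are illustrative, not taken from any particular renderer. (The second meaning, multiple depth and opacity samples per pixel, is illustrated further down in the deep-compositing sketch.)

```typescript
// First meaning: each pixel stores more than a final RGBA value, e.g. the attributes
// needed to re-light the image after rendering.
interface DeepPixelAttributes {
  albedo: [number, number, number];  // surface color before lighting
  normal: [number, number, number];  // shading normal
  depth: number;                     // distance from the camera
  materialId: number;                // index into a material table
}
```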
A z-buffer is similar to a color buffer, which is usually used to store the "image" of a 3D scene, but instead of storing color information (in the form of a 2D array of RGB pixels), it stores the distance from the camera to the object visible through each pixel of the framebuffer.
Traditionally, a z-buffer only stores the distance from the camera to the nearest object in the 3D scene for any given pixel in the frame. The good thing about this technique is that if two images have been rendered along with their z-buffers, they can be re-composed in a 2D program: pixels from image A that are "in front of" the corresponding pixels from image B are composed on top in the resulting image. To decide which pixels are in front, we use the information stored in the images' respective z-buffers. For example, imagine we want to compose pixels from images A and B at pixel coordinates (100, 100). If the distance (z value) stored in the z-buffer at coordinates (100, 100) is 9.13 for image A and 5.64 for image B, then in the recomposed image C, at pixel coordinates (100, 100), we put the pixel from image B (because it corresponds to a surface in the 3D scene that is in front of the object visible through that pixel in image A).
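A minimal sketch of that per-pixel comparison, with illustrative types rather than any particular file format; at each pixel the sample with the smaller z (closer to the camera) wins, so in the (100, 100) example above, image B's pixel is kept.

```typescript
// Composite two opaque renders using their z-buffers: keep the closer sample per pixel.
interface Render {
  color: Uint8ClampedArray; // RGBA, 4 bytes per pixel
  depth: Float32Array;      // one z value per pixel
}

function zComposite(a: Render, b: Render, pixelCount: number): Render {
  const out: Render = {
    color: new Uint8ClampedArray(pixelCount * 4),
    depth: new Float32Array(pixelCount),
  };
  for (let i = 0; i < pixelCount; i++) {
    const src = a.depth[i] <= b.depth[i] ? a : b;   // smaller z = closer to the camera
    out.color.set(src.color.subarray(i * 4, i * 4 + 4), i * 4);
    out.depth[i] = src.depth[i];
  }
  return out;
}
```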
Now, this works great when objects are opaque, but not when they are transparent. When objects are transparent (such as when we render volumes, clouds, or layers of transparent surfaces), we need to store more than one z value per pixel. Also note that "opacity" changes as the density of the volumetric object or the number of transparent layers increases. In short, a deep image or deep buffer is technically just like a z-buffer, but rather than storing a single depth value per pixel, it stores several, along with the opacity of the object at each of those depths.
Once we have stored this information, it is possible in post-production to properly (that is, accurately) recompose two or more images together with transparency. For instance, if you render two clouds and these clouds overlap in depth, their visibility will be properly recomposed, as if they had been rendered together in the same scene.
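A rough sketch of that merge at a single pixel, with illustrative names; the deep samples from both images are combined, sorted front to back, and accumulated with the standard "over" operator so that overlapping transparent layers resolve correctly.

```typescript
// Merge the deep samples of two renders at one pixel and composite them front to back.
interface DeepSample {
  z: number;                        // distance from the camera
  color: [number, number, number];  // sample color
  alpha: number;                    // opacity of this sample
}

function compositeDeepPixel(a: DeepSample[], b: DeepSample[]): [number, number, number] {
  const samples = [...a, ...b].sort((s, t) => s.z - t.z); // nearest samples first
  const out: [number, number, number] = [0, 0, 0];
  let transmittance = 1; // fraction of light still passing through the layers in front
  for (const s of samples) {
    out[0] += transmittance * s.alpha * s.color[0];
    out[1] += transmittance * s.alpha * s.color[1];
    out[2] += transmittance * s.alpha * s.color[2];
    transmittance *= 1 - s.alpha;
  }
  return out;
}
```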
Why would we use such a technique at all? Often because rendering scenes containing volumetric elements is generally slow, so it's good to render them separately from the other objects in the scene; if you then need to tweak the solid objects, you do not need to re-render the volumetric elements again.
This technique was made popular mostly by Pixar, in the renderer they develop and sell (PRMan). Avatar (Weta Digital in NZ) was one of the first films to make heavy use of deep compositing.
See: http://renderman.pixar.com/resources/current/rps/deepCompositing.html
The cons of this technique: deep images are very heavy. They require storing many depth values per pixel (and these values are stored as floats). It's not uncommon for such images to run from a few hundred megabytes to a couple of gigabytes, depending on the image resolution and the scene's depth complexity. Also, while you can recompose volume objects properly, they won't cast shadows on each other, which you would get if the objects were rendered together in the same scene. This makes scene management slightly more complex than usual, but it is generally dealt with properly.
A lot of this information can be found on scratchapixel.com (for future reference).

SDL2 / Surface / Texture / Render

I'm trying to learn SDL2. The main difference (as I can see) between the old SDL and SDL2 is that the old SDL had the window represented by its surface, all pictures were surfaces, and all image operations and blits were surface-to-surface. In SDL2 we have surfaces and textures. If I got it right, surfaces are in RAM and textures are in graphics memory. Is that right?
My goal is to make an object-oriented wrapper for SDL2 because I had a similar thing for SDL. I want to have a window class and a picture class (with a private texture and surface). The window will have its contents represented by an instance of the picture class, and all blits will be picture-to-picture object blits. How should I organize these picture operations:
Should pixel manipulation be done at the surface level?
If I want to copy part of one picture to another without rendering it, should that be done at the surface level?
Should I blit a surface to a texture only when I want to render it on the screen?
Is it better to render everything to one surface and then render that to the window texture, or to render each picture to the window texture separately?
Generally, when should I use a surface and when should I use a texture?
Thank you for your time and all help and suggestions are welcome :)
First I need to clarify some misconceptions: texture-based rendering does not work the way the old surface rendering did. While you can use SDL_Surfaces as source or destination, SDL_Textures are meant to be used as the source for rendering, and the complementary SDL_Renderer is used as the destination. Generally you will have to choose between the old rendering framework, which is done entirely on the CPU, and the new one, which targets the GPU, but mixing is possible.
So, for your questions:
Textures do not provide direct access to their pixels, so pixel manipulation is better done on surfaces.
It depends. Copying between textures does not hurt if it is not done very often and you want to render the result accelerated later.
With textures you will always render to the SDL_Renderer, and it is always better to pre-load surfaces into textures.
As I explained in the first paragraph, there is no window texture. You can use either entirely surface-based rendering or entirely texture-based rendering. If you really need both (direct pixel access and accelerated rendering), it is better to do as you said: blit everything to one surface and then upload it to a texture.
Lastly, you should use textures whenever you can. Surfaces are the exception: use them when you either need intensive pixel manipulation or have to deal with legacy code.

How to get a pixel color in J2ME?

I have drawn on a Canvas, and I want to know how to get the color of a pixel on that canvas.
Create a mutable Image the same size as your Canvas. Then, any operations you perform on your Canvas's Graphics object, perform the same ones on your Image's Graphics object.
Finally, get the pixel data from the Image using getRGB(); it should be the same as the Canvas.
Unfortunately, you can't. The Graphics class, which is used to draw to a Canvas, is for painting only; it can't give you any information about the pixels.
If you are targeting a platform that supports the Nokia UI API, you can use DirectGraphics#getPixels to read pixel data. On mobile platforms with graphics accelerator hardware, reading pixels tends to be slow, so you should use this very sparingly.

Resources