get Vertices from Vertex Buffer Handle - graphics

Is it possible to somehow retrieve the vertices of an object from its vertex buffer handle?
I'm using OpenGL.
If it is possible, how is this done?

In a desktop OpenGL application you can use glMapBuffer to retrieve a pointer to the vertices stored in a VBO. However, this function is not required by the OpenGL ES 2.0 spec (section 2.9); on ES it is only available through the OES_mapbuffer extension.
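On desktop GL, the readback might look like the sketch below. This assumes a current GL context, a `vbo` known to hold at least `count` floats, and an `out` buffer of that size; error handling is minimal.

```c
#include <GL/gl.h>
#include <string.h>

/* Read back `count` floats from a VBO on desktop OpenGL.
 * Returns 1 on success, 0 if the buffer could not be mapped. */
int read_vbo(GLuint vbo, float *out, size_t count)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    /* Map the buffer's data store into client address space. */
    const float *src = (const float *)glMapBuffer(GL_ARRAY_BUFFER, GL_READ_ONLY);
    if (!src) {
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        return 0;
    }

    memcpy(out, src, count * sizeof(float));

    glUnmapBuffer(GL_ARRAY_BUFFER);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    return 1;
}
```

glGetBufferSubData is an alternative on desktop GL that avoids the map/unmap pair; neither is in core ES 2.0.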

Related

Can you export the entire spatial map from hololens?

Is there a way to export the entire scanned environment from a Hololens 2? Using the device portal, I can either export the closest 128 blocks as .obj, or what I assume is the entire dataset in .mapx format. I'd like to either export more than 128 blocks, export in sections but with a consistent alignment so I can stitch them together, or convert the .mapx to something else.
It is recommended to use the Scene Understanding SDK to query a static version of the spatial mapping data and save the serialized scene bytes to disk.
After scanning the room, invoke SceneObserver.ComputeSerializedAsync to serialize the scene as a byte array.
Microsoft.MixedReality.SceneUnderstanding.Samples is a Unity-based sample application that showcases Scene Understanding on HoloLens 2. It shows how to save a captured scene by writing the output of ComputeSerializedAsync to a file: Line1154
Besides, the SaveObjsToDiskAsync function shows how to save the Unity objects from Scene Understanding as .obj files: Line1206

How to update texture for every frame in vulkan?

As my question title says, I want to update a texture every frame.
My idea:
Create a VkImage as a texture buffer with the following configuration:
initialLayout = VK_IMAGE_LAYOUT_PREINITIALIZED
usage = VK_IMAGE_USAGE_SAMPLED_BIT
and its memory type is VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT
In the draw loop:
First frame:
map the texture data into the VkImage (using vkMapMemory),
change the VkImage layout from VK_IMAGE_LAYOUT_PREINITIALIZED to VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
use this VkImage as the texture buffer.
Second frame:
The layout will be VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL after the first frame. Can I map the next frame's texture data into this VkImage directly without changing its layout? If not, which layout should I change the VkImage to?
The vkspec, section 11.4, says:
The new layout used in a
transition must not be VK_IMAGE_LAYOUT_UNDEFINED or VK_IMAGE_LAYOUT_PREINITIALIZED
So I cannot change the layout back to _PREINITIALIZED.
Any help will be appreciated.
For your case you do not need LAYOUT_PREINITIALIZED. That would only complicate your code (forcing you to provide separate code for the first frame).
LAYOUT_PREINITIALIZED is indeed a very special layout intended only for the start of the life of the Image. It is more useful for static textures.
Start with LAYOUT_UNDEFINED and use LAYOUT_GENERAL when you need to write the Image from CPU side.
I propose this scheme:
Before the render loop:
1. Create your VkImage with UNDEFINED layout

1st to Nth frame (aka render loop):
1. Transition the image to GENERAL
2. Synchronize (likely with a VkFence)
3. Map the image, write it, unmap it (well, the mapping and unmapping can perhaps be moved outside the render loop)
4. Synchronize (potentially done implicitly)
5. Transition the image to whatever layout you need next
6. Do your rendering and whatnot
7. Start over at 1.
It is a naive implementation but should suffice for ordinary hobbyist uses.
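The transition to GENERAL in this scheme is recorded with a pipeline barrier. A minimal sketch for a single-queue, single-mip color image (stage and access masks chosen for host writes; a real renderer may need different masks):

```c
#include <vulkan/vulkan.h>
#include <stddef.h>

/* Record a layout transition of `image` to VK_IMAGE_LAYOUT_GENERAL so
 * the host can write it. `oldLayout` is UNDEFINED on the first frame
 * and whatever layout the image was left in on later frames. */
void transition_to_general(VkCommandBuffer cmd, VkImage image,
                           VkImageLayout oldLayout)
{
    VkImageMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        .srcAccessMask = 0,
        .dstAccessMask = VK_ACCESS_HOST_WRITE_BIT,
        .oldLayout = oldLayout,
        .newLayout = VK_IMAGE_LAYOUT_GENERAL,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .image = image,
        /* aspect, baseMip, mipCount, baseLayer, layerCount */
        .subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
    };
    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
                         VK_PIPELINE_STAGE_HOST_BIT,
                         0, 0, NULL, 0, NULL, 1, &barrier);
}
```

Contents of an UNDEFINED-sourced transition are discarded, which is fine here since the host rewrites the image immediately after.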
Double-buffered access can also be implemented: e.g. a VkBuffer for CPU access and a VkImage holding the same data for GPU access, with a vkCmdCopy* doing the data hand-off.
It is not much more complicated than the above approach, and there can be some performance benefits (if you need those at this stage of your project). You usually want your resources in device-local memory, which often is not also host-visible.
It would go something like:
Before the render loop:
1. Create your VkBuffer b backed by HOST_VISIBLE memory and map it (buffers have no image layouts)
2. Create your VkImage i with UNDEFINED layout backed by DEVICE_LOCAL memory
3. Prepare your synchronization primitives between i and b: e.g. two Semaphores, or Events could be used, or Barriers if the transfer is on the same queue

1st to Nth frame (aka render loop):
Operations on b and i can be pretty detached (they can even be on different queues), so:
For b:
1. Barrier b for host access (a buffer memory barrier; buffers have no layout transitions)
2. Synchronize to CPU (likely waiting on a VkFence or vkQueueWaitIdle)
3. Invalidate (if non-coherent), write the data, flush (if non-coherent)
4. Synchronize to GPU (done implicitly if 3. happens before queue submission)
5. Barrier b for transfer read access
6. Synchronize to make sure i is not in use (likely waiting on a VkSemaphore)
7. Transition i to TRANSFER_DST_OPTIMAL
8. Do vkCmdCopy* from b to i
9. Synchronize to make known that I am finished with i (likely signalling a VkSemaphore)
10. Start over at 1.
(The fence at 2. and the semaphore at 6. have to be pre-signalled or skipped for the first frame to work.)
For i:
1. Synchronize to make sure i is free to use (likely waiting on a VkSemaphore)
2. Transition i to whatever layout is needed
3. Do your rendering
4. Synchronize to make known that I am finished with i (likely signalling a VkSemaphore)
5. Start over at 1.
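The buffer-to-image hand-off in this scheme is a vkCmdCopyBufferToImage. A sketch for a tightly packed 2D color image (single mip, single layer; the image must already have been transitioned to TRANSFER_DST_OPTIMAL):

```c
#include <vulkan/vulkan.h>
#include <stdint.h>

/* Record a copy of a tightly packed staging buffer into a 2D color
 * image that is already in VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL. */
void record_upload(VkCommandBuffer cmd, VkBuffer staging, VkImage image,
                   uint32_t width, uint32_t height)
{
    VkBufferImageCopy region = {
        .bufferOffset = 0,
        .bufferRowLength = 0,      /* 0 = rows are tightly packed */
        .bufferImageHeight = 0,
        /* aspect, mipLevel, baseArrayLayer, layerCount */
        .imageSubresource = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 },
        .imageOffset = { 0, 0, 0 },
        .imageExtent = { width, height, 1 },
    };
    vkCmdCopyBufferToImage(cmd, staging, image,
                           VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &region);
}
```

After this copy, barrier the image into SHADER_READ_ONLY_OPTIMAL (or whatever the rendering needs) before sampling it.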
You have a number of problems here.
First:
create a VkImage as a texture buffer
There's no such thing. The equivalent of an OpenGL buffer texture is a Vulkan buffer view. This does not use a VkImage of any sort. VkBufferViews do not have an image layout.
Second, assuming that you are working with a VkImage of some sort, you have recognized the layout problem. You cannot modify the memory behind the texture unless the texture is in the GENERAL layout (among other things). So you have to force a transition to that, wait until the transition command has actually completed execution, then do your modifications.
Third, Vulkan is asynchronous in its execution, and unlike OpenGL, it will not hide this from you. The image in question may still be accessed by the shader when you want to change it. So usually, you need to double buffer these things.
On frame 1, you set the data for image 1, then render with it. On frame 2, you set the data for image 2, then render with it. On frame 3, you overwrite the data for image 1 (using events to ensure that the GPU has actually finished frame 1).
Alternatively, you can use double-buffering without possible CPU waiting, by using staging buffers. That is, instead of writing to images directly, you write to host-visible memory. Then you use a vkCmdCopyBufferToImage command to copy that data into the image. This way, the CPU doesn't have to wait on events or fences to make sure that the image is in the GENERAL layout before sending data.
And BTW, Vulkan is not OpenGL. Mapping of memory is always persistent; there's no reason to unmap a piece of memory if you're going to map it every frame.

Use of PBO and VBO

My application (Qt/OpenGL) needs to upload, at 25 fps, a bunch of video streams from IP cameras, and then process them, applying, for each video, a demosaic filter, a sharpening filter, a LUT, and distortion correction.
Then I need to render in OpenGL (texture projection, etc.), picking one or more of the frames processed earlier.
Then I need to show the result in some widgets (QGLWidget) and read the pixels back to write into a movie file.
I am trying to understand the pros and cons of PBOs and FBOs, and I picture the following architecture, which I would like to validate with your help:
I create one thread per video to capture frames into a buffer (an array of images). There is one buffer per video.
I create an upload-filter-render thread which aims to: a) upload the frames to the GPU, b) apply the filters on the GPU, c) apply the composition and render to a texture.
I let the GUI thread render the texture created in the previous step into my widget.
For the upload-frames-to-GPU step, I guess the best way is to use PBOs (maybe two PBOs) for each video, to load the frames asynchronously.
For the apply-filter-on-GPU step, I want to use an FBO, which seems best for render-to-texture. I will first bind the texture uploaded via the PBO, then render the filtered image to another texture. I am not sure whether to use only one FBO and change the input and target texture bindings for each video upload, or to use as many FBOs as there are videos.
Finally, to show the result in a widget, I use the final texture rendered by the FBO. For writing into a movie file, I use a PBO to asynchronously copy the pixels back from GPU to CPU.
Does it seem correct?
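For reference, the asynchronous readback with two PBOs mentioned in the last step is usually a two-frame ping-pong; a sketch, assuming both PBOs were pre-allocated with glBufferData(GL_PIXEL_PACK_BUFFER, w*h*4, NULL, GL_STREAM_READ) (the very first call maps a not-yet-filled PBO, which a real implementation would skip):

```c
#include <GL/gl.h>

/* Ping-pong readback: each frame starts an asynchronous glReadPixels
 * into one PBO while mapping the PBO that was filled on the previous
 * frame. `index` tracks which PBO is being filled this frame. */
void readback_frame(GLuint pbo[2], int *index, int w, int h)
{
    int cur = *index, prev = 1 - cur;

    /* Start the transfer into the current PBO; with a PACK buffer
     * bound, glReadPixels returns immediately. */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[cur]);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, 0);

    /* Map the PBO filled last frame; by now its transfer has
     * (hopefully) completed, so this map does not stall. */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[prev]);
    const void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (pixels) {
        /* hand `pixels` to the movie encoder here, before unmapping */
    }
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    *index = prev;
}
```

The same ping-pong shape works in the other direction (GL_PIXEL_UNPACK_BUFFER) for the camera-frame uploads.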

OpenGL (ES): Efficiently get vertex data from texture

I have an OpenGL texture that I generate through a couple of render passes in my FBO.
Now I want to use the texture as vertex data in a VBO and render stuff with it.
Since all data is GPU-sided, is there an efficient way to transfer (or re-interpret) texture data as vertex data in OpenGL?
Or do I have to go all the way through the CPU with
// ... generate texture on GPU.
glReadBuffer(..where the texture is..);
glReadPixels(..., mainMemoryBuffer);
glBufferSubData(GL_ARRAY_BUFFER, ..., mainMemoryBuffer);
? Or is there another way to achieve what I want?
OK, after some more research, it seems that PBOs (Pixel Buffer Objects) are the OpenGL feature that enables this:
glBindBuffer(GL_PIXEL_PACK_BUFFER, myVbo);
glReadPixels(..., 0);
Unfortunately, it seems they are unavailable on OpenGL ES 2.0 (see this thread on GameDevelopment); OpenGL ES 3.0 does add PBO support.
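On desktop GL, the full pack-into-VBO round trip stays on the GPU. A sketch, assuming the current read framebuffer is a w x h GL_RGBA32F FBO attachment, `buf` has already been sized to w*h*16 bytes, and attribute location 0 is the vertex position in the bound program:

```c
#include <GL/gl.h>

/* Copy the current read framebuffer into `buf` on the GPU, then
 * reinterpret the same buffer object as vec4 vertex data. No pixel
 * ever travels through client memory. */
void texture_to_vbo(GLuint buf, int w, int h)
{
    /* glReadPixels writes into the bound PACK buffer at offset 0. */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, buf);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_FLOAT, 0);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    /* Rebind the same buffer object as a vertex buffer and draw. */
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, (const void *)0);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_POINTS, 0, w * h);
}
```

Modern desktop GL also offers transform feedback and glCopyBufferSubData as alternative GPU-side paths between buffer objects.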

OpenGL ES - Is it possible to perform transformations on a VBO (scale,translate,rotate)

Is this something that can be done more efficiently than with a vertex array?
VBOs just let the driver stash your geometry in (probably) video memory, rather than uploading it each frame as with client-side vertex arrays.
glScalef(), glTranslatef(), and glRotatef() work the same either way.
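In fixed-function ES 1.x terms, the matrix calls are identical whichever way the vertices are sourced; a sketch, assuming a `vbo` of `count` tightly packed xyz floats and illustrative transform values:

```c
#include <GLES/gl.h>

/* Draw `count` vertices from a VBO with a model transform applied.
 * The glTranslatef/glRotatef/glScalef calls would be exactly the
 * same if the data came from a client-side vertex array instead. */
void draw_transformed(GLuint vbo, GLsizei count)
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glTranslatef(1.0f, 0.0f, 0.0f);
    glRotatef(45.0f, 0.0f, 0.0f, 1.0f);
    glScalef(2.0f, 2.0f, 2.0f);

    /* With a buffer bound, the pointer argument is a byte offset. */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexPointer(3, GL_FLOAT, 0, (const void *)0);
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawArrays(GL_TRIANGLES, 0, count);

    glPopMatrix();
}
```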
