Vertices and Lost Devices - Direct3D

In a managed Direct3D application, should I recreate the vertex buffer every time I lose my device?
The application I am developing is a Windows CAD application, not a game. I was thinking I could generate the vertex buffer when my 3D model changes. But should I redo it when I lose my device, or can I reuse the vertex buffer from the old device?

Whether you have to recreate your vertex buffers depends on which pool you created them in.
Vertex buffers that reside in the D3DPOOL_MANAGED pool will be automatically recreated by DirectX. Buffers in system memory don't get lost, so you don't have to recreate those either.
Only buffers that reside entirely in video memory need to be recreated, because the contents of video memory are lost each time you lose the device.
I suggest that you just use the managed pool for all your static objects. That increases the memory requirement a bit, but you don't have to care about pesky details such as running out of video memory, lost-buffer recreation, etc.
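For illustration, here is a minimal sketch against the native Direct3D 9 API (the pool semantics are the same in Managed Direct3D); the `Vertex` struct, `vertexCount`, and the FVF layout are assumptions standing in for your application's own:
```cpp
#include <d3d9.h>

// Hypothetical vertex layout for the example:
struct Vertex { float x, y, z, nx, ny, nz; };

IDirect3DVertexBuffer9* CreateManagedVB(IDirect3DDevice9* device,
                                        UINT vertexCount)
{
    IDirect3DVertexBuffer9* vb = nullptr;
    device->CreateVertexBuffer(
        vertexCount * sizeof(Vertex),
        D3DUSAGE_WRITEONLY,          // static geometry, written once
        D3DFVF_XYZ | D3DFVF_NORMAL,  // example layout; use your own FVF
        D3DPOOL_MANAGED,             // runtime keeps a system-memory copy and
                                     // restores the video-memory copy on reset
        &vb,
        nullptr);
    return vb;  // no need to recreate this after a lost device
}
```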

Related

What's the reason not to use a staging buffer?

Recently I was wondering about this for a game: if there are many models, and each of them needs a vertex buffer to draw, which of the three options below is most efficient?
1) Create several small vertex buffers and update them with vkMapMemory before drawing those models.
2) Create several small vertex buffers and use a staging buffer to update them.
3) Create several big enough vertex buffers and use them.
You're conflating two different issues... memory management and memory updates.
For memory management, conventional wisdom for both OpenGL and Vulkan is that you want to make few allocations in the underlying API and do memory management yourself, so that you can store multiple vertex data sets in a single buffer at different offsets. This is critical for being able to do indirect drawing, where you execute a single command to render many models. Because you can't change the bindings within that command you can only render models that are all within a single vertex buffer.
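As a sketch of that suballocation idea, here is how several models stored at different offsets in one large vertex buffer get drawn; `bigVertexBuffer`, `modelOffsets`, and `modelVertexCounts` are hypothetical names, and `cmd` is assumed to be a command buffer in the recording state:
```cpp
#include <vulkan/vulkan.h>
#include <vector>

// Draw several models suballocated at different offsets inside one large
// vertex buffer. One binding per model is shown for clarity; with indirect
// drawing you would bind once and issue a single indirect command.
void drawModels(VkCommandBuffer cmd, VkBuffer bigVertexBuffer,
                const std::vector<VkDeviceSize>& modelOffsets,
                const std::vector<uint32_t>& modelVertexCounts)
{
    for (size_t i = 0; i < modelOffsets.size(); ++i)
    {
        vkCmdBindVertexBuffers(cmd, 0, 1, &bigVertexBuffer, &modelOffsets[i]);
        vkCmdDraw(cmd, modelVertexCounts[i], 1, 0, 0);
    }
}
```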
For memory updates, you need to keep in mind that not all memory can necessarily be mapped using vkMapMemory. Often only system memory can be mapped, while GPU local memory cannot. The best performance is going to come when the vertex data is on the GPU, so the best practice is to use a staging buffer composed of mappable memory, and then use it to transfer to the real buffer of device local memory.
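A minimal sketch of that staging pattern, assuming the buffers and memory were created elsewhere with the appropriate flags (TRANSFER_SRC plus host-visible/host-coherent memory for the staging pair, TRANSFER_DST | VERTEX_BUFFER plus device-local memory for the destination), and that `cmd` is a command buffer being recorded:
```cpp
#include <vulkan/vulkan.h>
#include <cstring>

void uploadVertices(VkDevice device, VkCommandBuffer cmd,
                    VkBuffer stagingBuffer, VkDeviceMemory stagingMemory,
                    VkBuffer deviceLocalBuffer,
                    const void* vertexData, VkDeviceSize dataSize)
{
    // CPU write: map the host-visible staging memory and copy the data in.
    void* mapped = nullptr;
    vkMapMemory(device, stagingMemory, 0, dataSize, 0, &mapped);
    std::memcpy(mapped, vertexData, static_cast<size_t>(dataSize));
    vkUnmapMemory(device, stagingMemory);

    // GPU copy: transfer from staging into fast device-local memory.
    VkBufferCopy region{};
    region.size = dataSize;
    vkCmdCopyBuffer(cmd, stagingBuffer, deviceLocalBuffer, 1, &region);
}
```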

How to optimize memory with CListCtrl

I'm using a virtual list view with icon view to display a number of images from a folder on the hard disk. All the images are stored in a CImageList. There is provision to scale the image size in the UI using a scrollbar. For performance's sake, drawing of each item is done when the NM_CUSTOMDRAW notification comes.
The problem is that when there are lots of images, the application takes too much memory, and scaling of the images is not smooth. Is there any way to reduce the memory usage, say by keeping in memory only the images that are being viewed?
The solution for this is to enable virtual mode for your list view. In this mode, the list view control doesn't host any data itself; all it knows is how many rows it has. All data is requested on demand. That makes you (the application) responsible for managing the data it displays, but it also allows you to keep only a subset of the data items in memory at a time.
Instructions for setting this up on a CListCtrl are on MSDN.
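A minimal sketch of the on-demand pattern (MFC, C++), inside a CDialog-derived class with a CListCtrl member `m_list` created with the LVS_OWNERDATA style; `IDC_IMAGE_LIST`, `totalImageCount`, and `LoadThumbnailName` are hypothetical placeholders for your own control ID and loading code:
```cpp
// In OnInitDialog (or similar): tell the virtual control how many rows
// exist; no per-item data is stored in the control itself.
m_list.SetItemCount(totalImageCount);

// Message map: ON_NOTIFY(LVN_GETDISPINFO, IDC_IMAGE_LIST, OnGetDispInfo)
void CMyDlg::OnGetDispInfo(NMHDR* pNMHDR, LRESULT* pResult)
{
    NMLVDISPINFO* pDispInfo = reinterpret_cast<NMLVDISPINFO*>(pNMHDR);
    LVITEM& item = pDispInfo->item;
    if (item.mask & LVIF_TEXT)
    {
        // Fetch (or lazily load) data only for the row being displayed.
        CString name = LoadThumbnailName(item.iItem);
        _tcsncpy_s(item.pszText, item.cchTextMax, name, _TRUNCATE);
    }
    *pResult = 0;
}
```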
I found that using CreateBitmap() to create the HBITMAP solved my memory problem.
Initially I was using the CreateDIBitmap() function to create the HBITMAP. This was storing too much data.
Later I used CreateBitmap() to create my bitmaps in memory. This also stored data in memory, but it was negligible.

In DirectX, if I reuse a slot, does the GPU keep the previous resource in its memory? Also, can the original CPU-side resources be safely altered?

I was writing this question about DirectX and the following questions were part of it, but I realized I needed to separate them out.
If something isn't in a "slot" (register) on the GPU, will it have to be retransferred to the GPU to be used again? That is, if I put texture A in register t0, then later put texture B in register t0, is texture A no longer available on the GPU? Or is it still resident in GPU memory, but I will have to place a call to load it into a texture register to get at it? Or something else entirely?
In a similar vein, do calls to PSSetShaders, PSSetShaderResources, IASetVertexBuffers, etc. block and copy data to the GPU before returning, so that after the call one can alter or even free the resources they were based on, because the data is now resident on the GPU?
I guess this is more than one question, but I expect I'll get in trouble if I try asking too many DirectX questions in one day (though I think these are honestly decent questions about which the MSDN documentation remains pretty silent, even if they are all newbie questions).
if I put texture A in register t0, then later put texture B in register t0, is texture A no longer available on the GPU?
It is no longer bound to the texture register so will not get applied to any polygons. You will have to bind it to a texture register again to use it.
Or is it still resident in the GPU memory, but I will have to place a call to load it into a texture register to get at it?
Typically they will stay in video memory until enough other resources have been loaded that the driver needs to reclaim the memory. This was more obvious in DirectX 9, when you had to specify which memory pool to place a resource in. Now everything is effectively in what was the D3DPOOL_MANAGED memory pool in Direct3D 9. When you set the texture register to use the texture, it will be fast as long as the texture is still in video memory.
In a similar vein, do calls to PSSetShaders, PSSetShaderResources, IASetVertexBuffers, etc. block and copy data to the GPU before returning, so after the call one can alter or even free the resources they were based on because the data is now resident on the GPU?
DirectX manages the resources for you and tries to keep them in video memory as long as it can.
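To make the binding behavior concrete, here is a small Direct3D 11 sketch (assuming `context`, `srvA`, and `srvB` were created elsewhere); rebinding slot t0 does not destroy the previously bound texture:
```cpp
#include <d3d11.h>

void bindAndDraw(ID3D11DeviceContext* context,
                 ID3D11ShaderResourceView* srvA,
                 ID3D11ShaderResourceView* srvB)
{
    context->PSSetShaderResources(0, 1, &srvA);  // texture A visible as t0
    // ... draw calls sampling texture A ...

    context->PSSetShaderResources(0, 1, &srvB);  // B replaces A in slot t0;
                                                 // A is unbound, not evicted
    // ... draw calls sampling texture B ...

    context->PSSetShaderResources(0, 1, &srvA);  // A can be rebound without
                                                 // any re-upload from the CPU
}
```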

Changing existing shared memory segment size

I have some legacy code that uses shmget/shmat/shmdt to create, attach and manage shared memory segments.
The app with the code sometimes crashes, leaving the segments in memory. The code reuses the same segment key to reconnect to them, but the problem is that it uses a different shared memory size every time, and it is unable to connect because of this.
My question is:
1) Is it possible to change the shared memory size on connection?
2) If not, how I can connect to the shared memory segment (even if I might not know the size), in order to erase it (for later re-creation of a newer one)?
Thanks!
You can use shmctl to delete the existing segment and then create a new one of your own size. I presume the legacy code will try to use the existing shared memory if it is not able to shmget?
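A minimal sketch of that approach (System V shared memory, C++); passing size 0 to shmget matches an existing segment of any size, and shmctl(IPC_RMID) marks it for removal:
```cpp
#include <sys/ipc.h>
#include <sys/shm.h>

// Remove any stale segment under `key` (whatever its size), then create a
// fresh one of `new_size` bytes. Returns the new segment id, or -1 on error.
int recreate_segment(key_t key, size_t new_size)
{
    int id = shmget(key, 0, 0666);       // size 0: attach to whatever exists
    if (id != -1)
        shmctl(id, IPC_RMID, nullptr);   // delete the stale segment

    return shmget(key, new_size, IPC_CREAT | 0666);
}
```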

Store more than 3GB of video-frames in memory, on 32-bit OS

At work we have an application to play 2K (2048*1556 px) OpenEXR film sequences. It works well, apart from when sequences are over 3GB (quite common): then it has to unload old frames from memory, despite the fact that all machines have 8-16GB of memory (which is addressable via the Linux BIGMEM support).
The frames have to be cached in memory to play back in realtime. The OS is a several-year-old 32-bit Fedora distro (upgrading to 64-bit is not possible for the foreseeable future). The address-space limit is 3GB per process.
Basically, is it possible to cache more than 3GB of data in memory somehow? My initial idea was to spread the data between multiple processes, but I've no idea if this is possible.
One possibility may be to use mmap. You would map/unmap different parts of your data into the same virtual memory region. You could only have one set mapped at a time, but as long as there was enough physical memory, the data should stay resident.
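A minimal sketch of that windowing idea (Linux, C++); `map_window` and `unmap_window` are hypothetical helper names:
```cpp
#include <sys/mman.h>

// Map a `length`-byte window of the frame file at `offset` (which must be
// page-aligned). Only the mapped window consumes the process's limited
// address space; pages already read stay in the kernel page cache, so
// remapping a previously viewed window is cheap while physical RAM holds out.
void* map_window(int fd, off_t offset, size_t length)
{
    void* addr = mmap(nullptr, length, PROT_READ, MAP_PRIVATE, fd, offset);
    return addr == MAP_FAILED ? nullptr : addr;
}

void unmap_window(void* addr, size_t length)
{
    munmap(addr, length);
}
```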
How about creating a RAM drive and loading the file into that ... assuming the RAM drive supports the BIGMEM stuff for you.
You could use multiple processes: each process loads a view of the file as a shared memory segment, and the player process then maps the segments in turn as needed.
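A sketch of the player side of that idea (POSIX shared memory, C++); the segment naming scheme "/frames_N" is a hypothetical convention between the loaders and the player:
```cpp
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <string>

// Map the chunk published by loader process `index`. The player maps one
// chunk at a time, staying under its own 3GB address-space limit, while the
// chunks themselves remain resident in physical memory.
void* map_chunk(int index, size_t chunk_size)
{
    std::string name = "/frames_" + std::to_string(index);
    int fd = shm_open(name.c_str(), O_RDONLY, 0);
    if (fd == -1) return nullptr;
    void* addr = mmap(nullptr, chunk_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);  // the mapping stays valid after the descriptor is closed
    return addr == MAP_FAILED ? nullptr : addr;
}
```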
My, what an interesting problem :)
(EDIT: Oh, I just read Rob's ram drive post...I got all excited by the problem...but have a bit more to suggest, so I won't delete)
Would it be possible to...
set up a multi-gigabyte RAM disk, and then
modify the program to do all its reading from the "disk"?
I'd guess the RAM disk part is where all the problems would be, since the size of the RAM disk would be OS- and file-system-dependent. You might have to create multiple RAM disks and have your code jump between them. Or maybe you could set up a RAID-0 stripe set over multiple RAM disks. Or, if there are still OS limitations and you can afford to drop a couple grand (4k?), set up a hardware RAID-0 stripe set with some of those new blazing fast solid state drives. Or...
Fun, fun, fun.
Be sure to follow up!
I assume you can modify the application. If so, the easiest thing would be to start the application several times (once for each 3GB chunk of video), have each one hold a chunk of video, and use another program to synchronize them so they each take control of the framebuffer (or other video output) in turn.
The synchronization is going to be a little messy, perhaps, but it can be simplified if each app has its own framebuffer, and the sync program points the video controller to the correct framebuffer in between frames when switching to the next app.
#dbr said:
There is a review machine with an absurd fiber-channel-RAID-array that can play 2K files direct from the array easily. The issue is with the artist-workstations, so it wouldn't be one $4000 RAID array, it'd be hundreds..
Well, if you can accept a limit of ~30GB, then maybe a single 36GB SSD would be enough? Those go for ~US$1k each, I think, and the data rates might be enough. That may well be cheaper than a pure RAM approach. There are smaller sizes available, too. If ~60GB is enough, you could probably get away with a JBOD array of two for double the cost, and skip the RAID controller. Be sure to only look at the higher-end SSD options; the low end is filled with glorified memory sticks. :P