I'm using a virtual listview with icon view to display a number of images from a folder on the hard disk. All the images are stored in a CImageList. There is provision to scale the image size in the UI using a scrollbar. For performance's sake, the drawing of each item is done when the NM_CUSTOMDRAW notification arrives.
The problem is that when there are lots of images, the memory taken by the application is too high, and the scaling of images is not smooth. Is there any way to reduce the memory usage, say by keeping in memory only the images that are currently being viewed?
The solution for this is to enable virtual mode for your list view. In this mode, the list view control doesn't host any data itself; all it knows is how many rows it has. All data is requested on demand. That makes you (the application) responsible for managing the data it displays, but it also allows you to keep only a subset of the data items in memory at a time.
Instructions for setting this up on a CListCtrl are here on MSDN.
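For reference, a minimal MFC sketch of that setup, assuming the control was created with the LVS_OWNERDATA style; the names m_list, m_imagePaths, IDC_IMAGE_LIST, and GetOrLoadThumbnail are illustrative, not from the original question:

```cpp
// Assumed: the list control has LVS_OWNERDATA (Owner Data = TRUE), and
// m_imagePaths is a std::vector<CString> of file paths kept by the dialog.

// In OnInitDialog(): the control stores no items itself, only a count.
m_list.SetItemCountEx(static_cast<int>(m_imagePaths.size()),
                      LVSICF_NOINVALIDATEALL);

// Message map: ON_NOTIFY(LVN_GETDISPINFO, IDC_IMAGE_LIST, &CMyDlg::OnGetDispInfo)
void CMyDlg::OnGetDispInfo(NMHDR* pNMHDR, LRESULT* pResult)
{
    NMLVDISPINFO* pDispInfo = reinterpret_cast<NMLVDISPINFO*>(pNMHDR);
    LVITEM& item = pDispInfo->item;

    if (item.mask & LVIF_TEXT)
    {
        // Supply the label on demand; nothing is duplicated inside the control.
        _tcsncpy_s(item.pszText, item.cchTextMax,
                   m_imagePaths[item.iItem].GetString(), _TRUNCATE);
    }
    if (item.mask & LVIF_IMAGE)
    {
        // Map the row to an image-list index; thumbnails can be loaded lazily
        // here and evicted once they scroll out of view.
        item.iImage = GetOrLoadThumbnail(item.iItem);
    }
    *pResult = 0;
}
```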
I found that using CreateBitmap() to create the HBITMAP solved my memory problem.
Initially I was using the CreateDIBitmap() function to create the HBITMAP. This was storing too much data.
Later I used CreateBitmap() to create my bitmaps in memory. This also stores data in memory, but the amount is negligible.
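For illustration only, roughly what the switch looks like; width, height, and pixelData are placeholders, and the exact parameters depend on the source bitmap format:

```cpp
// The path described above as storing too much data:
//   HBITMAP hBmp = CreateDIBitmap(hdc, &bmi.bmiHeader, CBM_INIT,
//                                 pixelData, &bmi, DIB_RGB_COLORS);

// The replacement builds the bitmap directly from the raw pixel bits.
HBITMAP hBmp = CreateBitmap(width, height,
                            1,          // colour planes
                            32,         // bits per pixel
                            pixelData); // pointer to the raw bits

// The handle is then added to the CImageList as before, e.g.
// imageList.Add(CBitmap::FromHandle(hBmp), RGB(0, 0, 0));
```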
I have a visual presentation of a graphical scheme in SVG. It consists of a large number of elements (~2000). Every minute I load fresh data (there might be no changes) via a background worker, then remove the old SVG and all event listeners, and draw a new one. Several hours later the tab's memory usage has grown by 200-300 MB. And though the memory heap doesn't grow much, it grows anyway. One thing is confusing: most of the time I see the (system), (compiled code), and (array) objects growing in size and number (even though "Object" does not!), but I have never seen them decrease. How should I treat this situation?
link to screenshot
Recently I was wondering, for a game where there are many models and each one of them needs a vertex buffer to draw: of the three options below, which one is most efficient?
Create several small vertex buffers and update them with vkMapMemory before drawing those models.
Create several small vertex buffers and use a staging buffer to update them.
Create several big-enough vertex buffers and use them.
You're conflating two different issues... memory management and memory updates.
For memory management, conventional wisdom for both OpenGL and Vulkan is that you want to make few allocations in the underlying API and do memory management yourself, so that you can store multiple vertex data sets in a single buffer at different offsets. This is critical for being able to do indirect drawing, where you execute a single command to render many models. Because you can't change the bindings within that command you can only render models that are all within a single vertex buffer.
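As a sketch of that idea (not code from the answer), packing every model into one VkBuffer means a single bind covers all the draws; the ModelSlice bookkeeping below is an assumption:

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

// One shared vertex buffer holds every model; each model records where its
// vertices start inside it.
struct ModelSlice {
    uint32_t firstVertex;   // index of this model's first vertex in the shared buffer
    uint32_t vertexCount;
};

void recordDraws(VkCommandBuffer cmd, VkBuffer sharedVertexBuffer,
                 const std::vector<ModelSlice>& models)
{
    // Bind once; every subsequent draw reads from the same buffer at an offset.
    VkDeviceSize baseOffset = 0;
    vkCmdBindVertexBuffers(cmd, 0, 1, &sharedVertexBuffer, &baseOffset);

    // Because no rebinding happens between draws, this loop could be replaced
    // by a single vkCmdDrawIndirect call reading per-model parameters from an
    // indirect buffer.
    for (const ModelSlice& m : models)
        vkCmdDraw(cmd, m.vertexCount, 1, m.firstVertex, 0);
}
```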
For memory updates, you need to keep in mind that not all memory can necessarily be mapped using vkMapMemory. Often only system memory can be mapped, while GPU-local memory cannot. The best performance comes when the vertex data is on the GPU, so the best practice is to use a staging buffer composed of mappable memory and then use it to transfer the data into the real buffer in device-local memory.
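A rough sketch of that staging pattern, assuming the buffers, memory allocations, and command buffer have already been created elsewhere; all names are illustrative:

```cpp
#include <vulkan/vulkan.h>
#include <cstring>

// Assumed to exist: a HOST_VISIBLE staging buffer/memory pair, a DEVICE_LOCAL
// vertex buffer, and a command buffer in the recording state.
void uploadVertices(VkDevice device, VkDeviceMemory stagingMemory,
                    VkBuffer stagingBuffer, VkBuffer deviceLocalBuffer,
                    VkCommandBuffer cmd, const void* vertices, VkDeviceSize size)
{
    // 1. Write the vertex data into the mappable staging memory.
    void* mapped = nullptr;
    vkMapMemory(device, stagingMemory, 0, size, 0, &mapped);
    std::memcpy(mapped, vertices, static_cast<size_t>(size));
    vkUnmapMemory(device, stagingMemory);

    // 2. Record a transfer into the device-local buffer that rendering will use.
    VkBufferCopy region{};
    region.srcOffset = 0;
    region.dstOffset = 0;
    region.size      = size;
    vkCmdCopyBuffer(cmd, stagingBuffer, deviceLocalBuffer, 1, &region);

    // 3. Submit the command buffer to a transfer-capable queue and synchronize
    //    (fence or barrier) before the vertex buffer is used for drawing.
}
```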
Hi everyone. I am stuck on the following question.
I am working on a hybrid storage system which uses an SSD as a cache layer for a hard disk. To this end, data read from the hard disk should be written to the SSD to boost subsequent reads of that data. Since Linux caches data read from disk in the page cache, the writing of data to the SSD can be delayed; however, the pages caching the data may be freed, and accessing freed pages is not recommended. Here is the question: I have "struct page" pointers pointing to the pages to be written to the SSD. Is there any way to determine whether the page represented by such a pointer is valid or not (by valid I mean the cached page can be safely written to the SSD)? What will happen if a freed page is accessed via the pointer? Is the data of the freed page the same as it was before freeing?
Are you using the cleancache module? You should only get valid pages from it, and they should remain valid until your callback function finishes.
Isn't this a cleancache/frontswap reimplementation? (https://www.kernel.org/doc/Documentation/vm/cleancache.txt).
The benefit of the existing cleancache code is that it calls your code just before it frees a page, i.e. while the page still resides in RAM; when there is no space left in RAM for it, the kernel calls your code to back it up in tmem (transcendent memory).
While searching I also found an existing project that seems to do exactly this: http://bcache.evilpiepirate.org/:
Bcache is a Linux kernel block layer cache. It allows one or more fast disk drives such as flash-based solid state drives (SSDs) to act as a cache for one or more slower hard disk drives.
Bcache patches for the Linux kernel allow one to use SSDs to cache other block devices. It's analogous to L2Arc for ZFS, but Bcache also does writeback caching (besides just write through caching), and it's filesystem agnostic. It's designed to be switched on with a minimum of effort, and to work well without configuration on any setup. By default it won't cache sequential IO, just the random reads and writes that SSDs excel at. It's meant to be suitable for desktops, servers, high end storage arrays, and perhaps even embedded.
What you are trying to achieve looks like the following:
Before the page is evicted from the pagecache, you want to cache it. This, in concept, is called a Victim cache. You can look for papers around this.
What you need is a way to "pin" the pages targeted for eviction for the duration of the IO. Post IO, you can free the pagecache page.
But this will delay the eviction, which may be needed under memory pressure to free up more pages.
So, one possible solution is to start your caching algorithm a bit before the pagecache eviction starts.
A second possible solution is to set aside a bunch of free pages, exchange the page being evicted from the page cache with a page from the free pool, and cache the evicted page in the background. But you now need to synchronize with file block deletes, etc.
I have some legacy code that uses shmget/shmat/shmdt to create, attach and manage shared memory segments.
The app with this code sometimes crashes, leaving the segments in memory. The code re-uses the same segment key to reconnect to them, but the problem is that it uses a different shared memory size every time, and it is unable to connect because of this.
My question is:
1) Is it possible to change the shared memory size on connection?
2) If not, how can I connect to the shared memory segment (even if I might not know its size), in order to erase it (for later re-creation of a new one)?
Thanks!
You can use shmctl to delete the existing segment and then create one of your own size. I presume the legacy code will try to use the existing shared memory if it is not able to shmget a new one?
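A hedged sketch of that approach; key and newSize stand in for the legacy application's actual values:

```cpp
#include <sys/ipc.h>
#include <sys/shm.h>
#include <cstdio>

bool recreateSegment(key_t key, size_t newSize)
{
    // Look up the stale segment left behind by the crash; a size of 0 here is
    // only a lookup, so the size mismatch does not matter.
    int oldId = shmget(key, 0, 0);
    if (oldId != -1)
        shmctl(oldId, IPC_RMID, nullptr);   // destroyed once the last attachment goes away

    // Create a fresh segment with the size this run actually needs.
    int newId = shmget(key, newSize, IPC_CREAT | IPC_EXCL | 0600);
    if (newId == -1) {
        perror("shmget");
        return false;
    }
    return true;
}
```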
In a Direct3D application (managed), should I recreate the vertex buffer every time I lose my device?
The application I am developing is a Windows CAD application, not a game. I was thinking I could generate the vertex buffer when my 3D model changes. But should I redo it when I lose my device, or can I reuse the vertex buffer from the old device?
Whether you have to recreate your vertex buffers depends on which pool you created them in.
Vertex buffers that reside in the D3DPOOL_MANAGED pool will be automatically recreated by DirectX. Buffers in system memory don't get lost, so you don't have to recreate those either.
Only buffers that reside entirely in video memory need to be recreated, as the contents of video memory are lost each time you lose the device.
I suggest that you just use the managed pool for all your static objects. That increases the memory requirement a bit, but you don't have to care about pesky details such as running out of video memory, lost-buffer recreation, etc.
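For illustration, the native D3D9 form of that pool choice (the managed wrapper exposes the same option as Pool.Managed); device, vertexCount, and the vertex format here are placeholders:

```cpp
// Creating the buffer in D3DPOOL_MANAGED lets the runtime restore it after a
// lost device; a D3DPOOL_DEFAULT buffer would have to be released before
// Reset() and recreated afterwards.
IDirect3DVertexBuffer9* vb = nullptr;
HRESULT hr = device->CreateVertexBuffer(
    vertexCount * sizeof(CustomVertex),   // size in bytes
    D3DUSAGE_WRITEONLY,                   // static geometry, rewritten only when the model changes
    D3DFVF_XYZ | D3DFVF_NORMAL,           // illustrative vertex format
    D3DPOOL_MANAGED,                      // runtime keeps a system-memory copy and restores it
    &vb,
    nullptr);
```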