Using DirectX 9 I want to capture what is on the screen and display a smaller version of it in my program.
To capture it I found and am using GetFrontBufferData. However, it works by writing to a surface created in system memory (D3DPOOL_SYSTEMMEM), which means I then have to transfer the screenshot back into video memory before I can draw it.
As you can imagine, this needless round trip (video memory -> system memory -> video memory) causes quite a stutter in my program.
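For reference, the path I'm describing looks roughly like this (a simplified sketch; error handling is omitted, and the 1920x1080 size and A8R8G8B8 format are just placeholders):
#include <d3d9.h>

// 'device' is assumed to be an already-created IDirect3DDevice9*.
void CaptureScreen(IDirect3DDevice9* device)
{
    // GetFrontBufferData requires an offscreen plain surface in D3DPOOL_SYSTEMMEM,
    // so the screenshot lands in system memory first.
    IDirect3DSurface9* sysmemSurface = NULL;
    device->CreateOffscreenPlainSurface(1920, 1080, D3DFMT_A8R8G8B8,
                                        D3DPOOL_SYSTEMMEM, &sysmemSurface, NULL);
    device->GetFrontBufferData(0, sysmemSurface);

    // To draw it I then have to copy it back into a video-memory surface,
    // e.g. a D3DPOOL_DEFAULT surface via UpdateSurface.
    IDirect3DSurface9* videoSurface = NULL;
    device->CreateOffscreenPlainSurface(1920, 1080, D3DFMT_A8R8G8B8,
                                        D3DPOOL_DEFAULT, &videoSurface, NULL);
    device->UpdateSurface(sysmemSurface, NULL, videoSurface, NULL);

    // ... draw the smaller version from videoSurface, then Release() both surfaces.
}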
Is there a way I can get the image stored in the front buffer and put it onto a surface in video memory?
This question is a spin off of my recent question : Capture and Draw a ScreenShot using DirectX
I use Video Toolbox to decode H.264 data on iOS 8.x. When this code runs on iOS 9.x, I get a memory leak every time I call VTDecompressionSessionDecodeFrame, yet I can't see any leak in the Instruments tool!
Here is the code:
https://github.com/stevenyao/iOSHardwareDecoder
I was seeing a memory leak on device when processing the hardware-decoded frame on a background thread. I then copied the YUV frame to a BGRA frame via CoreImage and passed the BGRA frame back to the main thread so it could be turned into a UIImage; at that point, the leak went away.
While reading the VideoCoreIV-AG100-R spec of the Broadcom VC4 chip, I came across this paragraph:
All rendering by the 3D system is in tiles, requiring separate binning and rendering passes to render a frame. In normal operation the host processor creates a control list in memory defining all the operations and supplying all the data for rendering for a complete frame.
It says that rendering a frame requires a binning pass and a rendering pass. Could anybody explain in detail what roles these two passes play in the graphics pipeline? Thanks a lot.
For a tile-based rendering architecture the passes are (see the toy sketch after this list):
Binning pass - generates a stream/map between the frame's tiles and the geometry that should be rendered into each particular tile
Rendering pass - takes the tile-to-geometry map and renders the appropriate pixels for each tile
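A toy C++ sketch of the idea (the data structures here are my own assumptions; real hardware such as the VC4 encodes the per-tile bins into a control list in memory, not C++ vectors, and bins actual primitives rather than bounding boxes):
#include <vector>
#include <algorithm>

struct Triangle { float x[3], y[3]; };              // screen-space vertex positions

const int TILE = 64;                                // assumed tile size in pixels

// Binning pass: for each triangle, record it in every tile its bounding box overlaps.
std::vector<std::vector<int>> BinPass(const std::vector<Triangle>& tris,
                                      int tilesX, int tilesY)
{
    std::vector<std::vector<int>> bins(tilesX * tilesY);
    for (int i = 0; i < (int)tris.size(); ++i) {
        const Triangle& t = tris[i];
        int x0 = (int)(std::min({t.x[0], t.x[1], t.x[2]}) / TILE);
        int x1 = (int)(std::max({t.x[0], t.x[1], t.x[2]}) / TILE);
        int y0 = (int)(std::min({t.y[0], t.y[1], t.y[2]}) / TILE);
        int y1 = (int)(std::max({t.y[0], t.y[1], t.y[2]}) / TILE);
        for (int ty = std::max(y0, 0); ty <= std::min(y1, tilesY - 1); ++ty)
            for (int tx = std::max(x0, 0); tx <= std::min(x1, tilesX - 1); ++tx)
                bins[ty * tilesX + tx].push_back(i);
    }
    return bins;
}

// Rendering pass: walk the bins one tile at a time and rasterize only that tile's
// triangles, so the tile's color/depth buffer can live in fast on-chip memory.
void RenderPass(const std::vector<std::vector<int>>& bins)
{
    for (size_t tile = 0; tile < bins.size(); ++tile)
        for (int triIndex : bins[tile])
            (void)triIndex;                         // rasterize this triangle into the tile
}
The payoff is that each tile's framebuffer is written out to main memory only once, after all of its geometry has been processed.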
Mobile GPUs have many limitations compared to desktop GPUs (for example memory bandwidth, since on mobile devices memory is shared between the GPU and the CPU). To reduce overall memory bandwidth consumption, vendors split the work into small pieces - for example via tile-based rendering - to make efficient use of all available resources and achieve acceptable performance.
Details
The tile-based rendering approach is described on many GPU vendors' sites, for example:
A look at the PowerVR graphics architecture: Tile-based rendering
GPU Framebuffer Memory: Understanding Tiling
OK, I can find simulation designs for simple architectures (edit: definitely not something like x86). For example, use an int as the program counter, use a byte array as the memory, and so on. But how can I simulate a graphics card's functionality (the simplest graphics card imaginable)?
For example, use an array to represent each pixel and "paint" the pixels one by one.
But when should painting happen - synchronized with the CPU or asynchronously? Who stores the graphics data in that array? Is there an instruction for storing a pixel and another for painting a pixel?
Please note that all the question marks ('?') don't mean "you are asking a lot of questions"; they spell out the problem itself - how do you simulate a graphics card?
Edit : LINK to a basic implementation design for CPU+Memory simulation
Graphic cards typically carry a number of KBs or MBs of memory that stores colors of individual pixels that are then displayed on the screen. The card scans this memory a number of times per second turning the numeric representation of pixel colors into video signals (analog or digital) that the display understands and visualizes.
The CPU has access to this memory, and whenever it changes it, the card eventually translates the new color data into the appropriate video signals and the display shows the updated picture. The card does all the processing asynchronously and doesn't need much help from the CPU. From the CPU's point of view it's pretty much "write the new pixel color into the graphics card's memory at the location corresponding to the pixel's coordinates and forget about it". It may be a little more complex in reality (due to poor synchronization artifacts such as tearing, snow and the like), but that's the gist of it.
When you simulate a graphic card, you need to somehow mirror the memory of the simulated card in the physical graphic card's memory. If in the OS you can have direct access to the physical graphic card's memory, it's an easy task. Simply implement writing to the memory of your emulated computer something like this:
void MemoryWrite(unsigned long Address, unsigned char Value)
{
    // Writes that land in the simulated card's video memory range are mirrored
    // into the physical card's memory so they show up on the real screen.
    if ((Address >= SimulatedCardVideoMemoryStart) &&
        (Address - SimulatedCardVideoMemoryStart < SimulatedCardVideoMemorySize))
    {
        PhysicalCard[Address - SimulatedCardVideoMemoryStart] = Value;
    }
    // Every write also goes into the emulated computer's own memory.
    EmulatedComputerMemory[Address] = Value;
}
The above, of course, assumes that the simulated card has exactly the same resolution (say, 1024x768) and pixel representation (say, 3 bytes per pixel, first byte for red, second for green and third for blue) as the physical card. In real life things can be slightly more complex, but again, that's the general idea.
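For instance, with that assumed 1024x768, 3-bytes-per-pixel layout, the byte offset of a pixel inside PhysicalCard would be computed like this (a hypothetical helper, not part of the code above):
// Assumes the 1024x768, 3 bytes/pixel (R, G, B) layout described above.
unsigned long PixelOffset(unsigned int x, unsigned int y)
{
    return (y * 1024UL + x) * 3;    // offset of the red byte; +1 is green, +2 is blue
}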
You can access the physical card's memory directly in MSDOS or on a bare x86 PC without any OS if you make your code bootable by the PC BIOS and limit it to using only the BIOS service functions (interrupts) and direct hardware access for all the other PC devices.
Btw, it will probably be very easy to implement your emulator as a DOS program and run it either directly in Windows XP (Vista and 7 have extremely limited support for DOS apps in 32-bit editions and none in 64-bit editions; you may, however, install XP Mode, which is XP in a VM in 7) or better yet in something like DOSBox, which appears to be available for multiple OSes.
If you implement the thing as a Windows program, you will have to use either GDI or DirectX in order to draw something on the screen. Unless I'm mistaken, neither of these two options lets you access the physical card's memory directly such that changes in it would be automatically displayed.
Drawing individual pixels on the screen using GDI or DirectX may be expensive if there's a lot of rendering. Redrawing all simulated card's pixels every time when one of them gets changed amounts to the same performance problem. The best solution is probably to update the screen 25-50 times a second and update only the parts that have changed since the last redraw. Subdivide the simulated card's buffer into smaller buffers representing rectangular areas of, say, 64x64 pixels, mark these buffers as "dirty" whenever the emulator writes to them and mark them as "clean" when they've been drawn on the screen. You may set up a periodic timer driving screen redraws and do them in a separate thread. You should be able to do something similar to this in Linux, but I don't know much about graphics programming there.
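A minimal sketch of that dirty-region idea (the names such as DirtyFlags and the 64x64 block size are assumptions, and the actual GDI/DirectX drawing call is left abstract):
// Assumed simulated-card parameters (matching the 1024x768, 3 bytes/pixel example).
const int WIDTH = 1024, HEIGHT = 768, BPP = 3;
const int BLOCK = 64;                                   // dirty-block size in pixels
const int BLOCKS_X = WIDTH / BLOCK, BLOCKS_Y = HEIGHT / BLOCK;

unsigned char CardMemory[WIDTH * HEIGHT * BPP];         // simulated card's framebuffer
bool DirtyFlags[BLOCKS_X * BLOCKS_Y];                   // one flag per 64x64 block

// Called by the emulator whenever it writes a pixel.
void WritePixel(int x, int y, unsigned char r, unsigned char g, unsigned char b)
{
    unsigned char* p = &CardMemory[(y * WIDTH + x) * BPP];
    p[0] = r; p[1] = g; p[2] = b;
    DirtyFlags[(y / BLOCK) * BLOCKS_X + (x / BLOCK)] = true;
}

// Called 25-50 times a second from a timer or a separate thread:
// redraw only the blocks that changed since the last pass.
void RedrawDirtyBlocks()
{
    for (int by = 0; by < BLOCKS_Y; ++by)
        for (int bx = 0; bx < BLOCKS_X; ++bx)
            if (DirtyFlags[by * BLOCKS_X + bx]) {
                // BlitBlockToScreen(bx * BLOCK, by * BLOCK, BLOCK, BLOCK);  // via GDI/DirectX
                DirtyFlags[by * BLOCKS_X + bx] = false;
            }
}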
I am developing a tool that draws primitives with DX9 on 32-bit Windows XP.
When creating vertex and index buffers, creation can sometimes fail.
The return code can be D3DERR_OUTOFVIDEOMEMORY or E_OUTOFMEMORY.
I am not sure what the difference between them is.
I used the VideoMemory tool in the DX samples to check the memory, and it reports 1024 MB.
Does that mean that if I create more than 1024 MB of managed resources, it will report D3DERR_OUTOFVIDEOMEMORY?
And if there is no more free virtual address space in the process and malloc fails, will DX9 report E_OUTOFMEMORY?
E_OUTOFMEMORY means that DirectX was unable to allocate the block of memory you requested from system memory (i.e. through malloc or new).
D3DERR_OUTOFVIDEOMEMORY means that DirectX was unable to allocate the block of memory you requested from the video memory pool (either on the graphics card itself or reserved for it in main memory).
Caveat: Drivers might lie.
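A rough sketch of checking for the two codes when creating a vertex buffer (the size, FVF and usage flags here are just placeholders):
#include <d3d9.h>

// 'device' is assumed to be an already-created IDirect3DDevice9*.
IDirect3DVertexBuffer9* CreateBufferChecked(IDirect3DDevice9* device, UINT sizeInBytes)
{
    IDirect3DVertexBuffer9* vb = NULL;
    HRESULT hr = device->CreateVertexBuffer(sizeInBytes,
                                            D3DUSAGE_WRITEONLY,
                                            D3DFVF_XYZ,             // placeholder FVF
                                            D3DPOOL_DEFAULT,
                                            &vb, NULL);
    if (hr == D3DERR_OUTOFVIDEOMEMORY) {
        // The pool the buffer was requested from (video memory here) is exhausted.
    } else if (hr == E_OUTOFMEMORY) {
        // The system allocation backing the request failed (process heap / address space).
    }
    return SUCCEEDED(hr) ? vb : NULL;
}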
D3DERR_OUTOFVIDEOMEMORY is a DirectX memory error. It is not necessarily tied to dedicated video memory; it could be memory occupied for holding a scene or drawing an image. As you have found out, if there is not enough memory for your process you will get E_OUTOFMEMORY, which refers to the memory assigned to your process being exhausted. You did not say what operating system/hardware spec you have; your best bet would be to look into a system memory upgrade if you're running low on resources.
Edit: Some laptops/netbooks have a graphics adapter that is fed from system memory. These graphics adapters are not serious contenders for the likes of 'Beyond Call of Duty' and other top-end games; the graphics controller actually steals some memory from the main board to inflate the amount of RAM available to the graphics controller. They are fine if you are doing word processing/emailing and so on, but it comes at the cost of the system RAM gobbled up by the controller, a la 'integrated graphics controller'.
Hope this helps,
Best regards,
Tom.
In a Direct3D application (managed), should I recreate the vertex buffer every time I lose my device?
The application I am developing is a Windows CAD application, not a game. I was thinking I could regenerate the vertex buffer whenever my 3D model changes. But should I also redo it when I lose the device, or can I reuse the vertex buffer from the old device?
Whether you have to recreate your vertex buffers depends on which pool you created them in.
Vertex buffers that reside in the D3DPOOL_MANAGED pool will be recreated automatically by DirectX. Buffers in system memory don't get lost, so you don't have to recreate these either.
Only buffers that reside entirely in video memory need to be recreated, as the contents of video memory are lost each time you lose the device.
I suggest that you just use the managed pool for all your static objects. That increases the memory requirement a bit, but you don't have to care about pesky details such as running out of video memory, lost-buffer recreation, etc.
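A minimal sketch of that suggestion (the vertex format and size are placeholders; with D3DPOOL_MANAGED, DirectX keeps a system-memory copy and restores the buffer for you after IDirect3DDevice9::Reset):
#include <d3d9.h>

// 'device' is assumed to be your already-initialized IDirect3DDevice9*.
IDirect3DVertexBuffer9* CreateModelBuffer(IDirect3DDevice9* device, UINT sizeInBytes)
{
    IDirect3DVertexBuffer9* vb = NULL;
    // D3DPOOL_MANAGED survives a lost device, so no manual recreation is needed
    // after Reset(); only D3DPOOL_DEFAULT resources must be released and rebuilt.
    device->CreateVertexBuffer(sizeInBytes,
                               D3DUSAGE_WRITEONLY,
                               D3DFVF_XYZ | D3DFVF_DIFFUSE,   // placeholder vertex format
                               D3DPOOL_MANAGED,
                               &vb, NULL);
    return vb;
}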