What's the difference between screen size and desktop size?

I frequently see "screen size" and "desktop size" in graphics programming forums and in some documents. Isn't screen size equal to desktop size? Take the following as an example; what's the difference?
For windowed mode, the size of the destination surface should be the
size of the desktop. For full-screen mode, the size of the destination
surface should be the screen size.

That is because the desktop size does NOT always fill the screen. Change your resolution to, say, 800x600 or (if possible) 640x480, and you will see that on many screens it only occupies a small area of the display. Screen size is the full amount your screen can show; desktop size is the box I just described. In MOST cases they are the same, but not always.
Depending on the context, it may also describe what fits inside the current window, minus the title bar, toolbars, task bar, etc.
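If it helps to see the two numbers side by side, here is a minimal Win32 sketch (assuming a Windows build environment; nothing below comes from the quoted documentation):

#include <windows.h>
#include <cstdio>

int main()
{
    // Full size of the primary display, in pixels ("screen size").
    int screenW = GetSystemMetrics(SM_CXSCREEN);
    int screenH = GetSystemMetrics(SM_CYSCREEN);

    // Desktop work area: the screen minus the taskbar and any docked app bars.
    RECT work = {0};
    SystemParametersInfo(SPI_GETWORKAREA, 0, &work, 0);

    printf("Screen size: %d x %d\n", screenW, screenH);
    printf("Work area:   %ld x %ld\n", work.right - work.left, work.bottom - work.top);
    return 0;
}

With the taskbar hidden the two match; with a visible taskbar the work area is smaller, which is the second meaning described above.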

Related

How does Windows 10 render windows in a multi-display, multi-GPU environment?

I am working with four displays and both the external and the internal GPU enabled (IGFX enabled). Three of the displays are connected to the external GPU and the other is connected to the motherboard. In the graphics control panel, I see that each display is allocated to a specific GPU. A previous Q&A says that both the CPU and the GPU contribute to rendering the pixels that are displayed. However, I use two GPUs, and sometimes I place a window across multiple displays. In this case, I am wondering which GPU (or the CPU) actually works on which drawing region, including the hidden part of the window outside the displayed region.
I know that basic window classes in Windows 10 use GDI+ and that some apps or games use the DirectX or OpenGL libraries. Is there a difference between them in terms of the rendering unit? According to my friend, the reference point of rendering would be the top-left corner (I'm not sure, but I want to know the truth).
My configuration is as follows:
Display #1, #2, #3: connected to external GPU (NVIDIA GTX 1060)
Display #4: connected to motherboard graphics port (Intel i7-8700's internal GPU)
The following examples show several possibilities for the location of a window (red rectangle). In each case, I want to know which processor calculates which partition.
Case 1:
This case is obvious: the CPU and the external GPU (GTX 1060) work on the whole window region.
Cases 2 through 5: other placements of the window across the displays (images not included).

What exactly is a GPU binning pass

While reading the VideoCoreIV-AG100-R spec of the Broadcom VC4 chip, I came across a paragraph that says:
All rendering by the 3D system is in tiles, requiring separate binning and rendering passes to render a frame. In
normal operation the host processor creates a control list in memory defining all the operations and supplying
all the data for rendering for a complete frame.
It says that rendering a frame requires a binning pass and a rendering pass. Could anybody explain in detail how exactly these two passes play their roles in the graphics pipeline? Thanks a lot.
For a tile-based rendering architecture the passes are:
Binning pass - generates a stream/map between frame tiles and the corresponding geometry that should be rendered into each particular tile.
Rendering pass - takes the map between tiles and geometry and renders the appropriate pixels for each tile.
Mobile GPUs have many limitations compared to desktop GPUs (for example memory bandwidth, because in mobile devices memory is shared between the GPU and the CPU). To reduce overall memory bandwidth consumption, vendors split the work into small pieces - for example with tile-based rendering - so that all available resources are used efficiently and acceptable performance is reached.
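To make the two passes concrete, here is a rough CPU-side sketch of the idea (my own simplified C++, not the actual VideoCore control-list mechanism; the tile size, structures and function names are assumptions):

#include <vector>
#include <algorithm>

struct Triangle { float x[3], y[3]; };         // screen-space vertices only

const int TILE = 32;                            // tile size in pixels (assumed)
const int TILES_X = 1024 / TILE, TILES_Y = 768 / TILE;

// One bin per tile, holding the indices of the triangles that touch it.
std::vector<int> bins[TILES_Y][TILES_X];

// Binning pass: for every triangle, find the tiles its bounding box overlaps
// and record the triangle index in each of those tiles' bins.
void BinningPass(const std::vector<Triangle>& tris)
{
    for (int i = 0; i < (int)tris.size(); ++i) {
        const Triangle& t = tris[i];
        float minX = std::min({t.x[0], t.x[1], t.x[2]});
        float maxX = std::max({t.x[0], t.x[1], t.x[2]});
        float minY = std::min({t.y[0], t.y[1], t.y[2]});
        float maxY = std::max({t.y[0], t.y[1], t.y[2]});
        int tx0 = std::max(0, (int)(minX / TILE));
        int tx1 = std::min(TILES_X - 1, (int)(maxX / TILE));
        int ty0 = std::max(0, (int)(minY / TILE));
        int ty1 = std::min(TILES_Y - 1, (int)(maxY / TILE));
        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                bins[ty][tx].push_back(i);      // triangle i touches tile (tx, ty)
    }
}

// Rendering pass: each tile is processed on its own, using only the triangles
// binned into it, in a small on-chip tile buffer that is written out to the
// framebuffer in one burst when the tile is finished.
void RenderingPass(const std::vector<Triangle>& tris)
{
    for (int ty = 0; ty < TILES_Y; ++ty)
        for (int tx = 0; tx < TILES_X; ++tx) {
            // ClearTileBuffer();                      // on-chip memory
            for (int idx : bins[ty][tx]) {
                (void)tris[idx];                       // placeholder for per-pixel work
            }
            // FlushTileBufferToFramebuffer(tx, ty);   // one bulk DRAM write
        }
}

The point is that the per-pixel work in the rendering pass only touches the small on-chip buffer; external memory is read for the bin lists and written once per finished tile.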
Details
The tile-based rendering approach is described on many GPU vendors' sites, such as:
A look at the PowerVR graphics architecture: Tile-based rendering
GPU Framebuffer Memory: Understanding Tiling

Simulate a simple graphics card

OK, I can find simulation designs for simple architectures (Edit: definitely not x86-like) - for example, use an int as the program counter, use a byte array as the memory, and so on. But how can I simulate a graphics card's functionality (the simplest graphics card imaginable)?
For example, use an array to represent each pixel and "paint" each pixel one by one.
But when to paint - synchronized with the CPU or asynchronously? Who stores the graphics data in that array? Is there an instruction for storing a pixel and painting a pixel?
Please note that all the question marks ('?') don't mean "you are asking a lot of questions" - they express the problem itself: how do you simulate a graphics card?
Edit: LINK to a basic implementation design for CPU+memory simulation
Graphics cards typically carry a number of KBs or MBs of memory that stores the colors of individual pixels, which are then displayed on the screen. The card scans this memory a number of times per second, turning the numeric representation of pixel colors into video signals (analog or digital) that the display understands and visualizes.
The CPU has access to this memory, and whenever it changes, the card eventually translates the new color data into the appropriate video signals and the display shows the updated picture. The card does all this processing asynchronously and doesn't need much help from the CPU. From the CPU's point of view it's pretty much "write the new pixel color into the graphics card's memory at the location corresponding to the coordinates of the pixel, and forget about it". It may be a little more complex in reality (due to poor-synchronization artifacts such as tearing, snow and the like), but that's the gist of it.
When you simulate a graphics card, you need to somehow mirror the memory of the simulated card into the physical graphics card's memory. If the OS gives you direct access to the physical graphics card's memory, it's an easy task. Simply implement writing to the memory of your emulated computer along these lines:
void MemoryWrite(unsigned long Address, unsigned char Value)
{
    // If the write falls inside the simulated card's video memory window,
    // mirror it into the physical card's framebuffer so it shows up on screen.
    if ((Address >= SimulatedCardVideoMemoryStart) &&
        (Address - SimulatedCardVideoMemoryStart < SimulatedCardVideoMemorySize))
    {
        PhysicalCard[Address - SimulatedCardVideoMemoryStart] = Value;
    }

    // Every write also goes into the emulated computer's ordinary memory.
    EmulatedComputerMemory[Address] = Value;
}
The above, of course, assumes that the simulated card has exactly the same resolution (say, 1024x768) and pixel representation (say, 3 bytes per pixel: the first byte for red, the second for green and the third for blue) as the physical card. In real life things can be slightly more complex, but again, that's the general idea.
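For example, with that assumed 1024x768, 3-bytes-per-pixel layout, the byte address the emulated program has to write for a given pixel works out like this (a small sketch; the helper name is mine):

// Byte offset of pixel (x, y) in a packed 24-bit RGB framebuffer, 1024 pixels wide.
unsigned long PixelOffset(unsigned int x, unsigned int y)
{
    return ((unsigned long)y * 1024 + x) * 3;      // 3 bytes per pixel
}

// Writing one red pixel through the emulated memory interface:
//   unsigned long base = SimulatedCardVideoMemoryStart + PixelOffset(100, 50);
//   MemoryWrite(base + 0, 0xFF);   // red
//   MemoryWrite(base + 1, 0x00);   // green
//   MemoryWrite(base + 2, 0x00);   // blue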
You can access the physical card's memory directly in MSDOS or on a bare x86 PC without any OS if you make your code bootable by the PC BIOS and limit it to using only the BIOS service functions (interrupts) and direct hardware access for all the other PC devices.
By the way, it will probably be very easy to implement your emulator as a DOS program and run it either directly on Windows XP (Vista and 7 have extremely limited support for DOS apps in 32-bit editions and none in 64-bit editions; you may, however, install XP Mode, which is XP running in a VM on 7), or better yet in something like DOSBox, which is available for multiple OSes.
If you implement the thing as a Windows program, you will have to use either GDI or DirectX in order to draw something on the screen. Unless I'm mistaken, neither of these two options lets you access the physical card's memory directly such that changes in it would be automatically displayed.
Drawing individual pixels on the screen using GDI or DirectX may be expensive if there's a lot of rendering, and redrawing all of the simulated card's pixels every time one of them changes amounts to the same performance problem. The best solution is probably to update the screen 25-50 times a second and only update the parts that have changed since the last redraw: subdivide the simulated card's buffer into smaller buffers representing rectangular areas of, say, 64x64 pixels, mark these buffers as "dirty" whenever the emulator writes to them, and mark them as "clean" when they have been drawn on the screen. You may set up a periodic timer driving screen redraws and do them in a separate thread. You should be able to do something similar in Linux, but I don't know much about graphics programming there.
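A minimal sketch of that dirty-tile bookkeeping might look like this (the 64x64 tile size follows the description above; everything else, including the function names, is an assumption):

const int FB_W = 1024, FB_H = 768;               // simulated card resolution
const int TILE = 64;
const int TILES_X = FB_W / TILE, TILES_Y = FB_H / TILE;

bool dirty[TILES_Y][TILES_X];                    // one flag per 64x64 tile

// Called from the emulator's video-memory write path (e.g. from MemoryWrite).
void MarkPixelDirty(unsigned int x, unsigned int y)
{
    dirty[y / TILE][x / TILE] = true;
}

// Called 25-50 times per second from a timer or a dedicated thread.
void PresentDirtyTiles()
{
    for (int ty = 0; ty < TILES_Y; ++ty)
        for (int tx = 0; tx < TILES_X; ++tx)
            if (dirty[ty][tx]) {
                // Redraw just this 64x64 rectangle, e.g. with a GDI BitBlt /
                // StretchDIBits of the sub-rectangle, or by updating a
                // sub-rect of a DirectX texture.
                dirty[ty][tx] = false;           // tile is clean again
            }
}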

GetFrontBufferData to video memory and not system memory

Using DirectX 9 I want to capture what is on the screen and display a smaller version of it in my program.
To capture it I found and am using GetFrontBufferData. However, it works by writing to a surface allocated in system memory (D3DPOOL_SYSTEMMEM). This means I then have to transfer the screenshot back into video memory before I can draw it.
As you can imagine, this needless transfer (video memory -> system memory -> video memory) causes quite a stutter in my program.
Is there a way I can get the image stored in the front buffer and put it onto a surface in video memory?
This question is a spin off of my recent question : Capture and Draw a ScreenShot using DirectX
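For context, the capture path being described looks roughly like this in Direct3D 9 (a sketch only; screenW, screenH and the variable names are placeholders, and error checking is omitted):

#include <d3d9.h>

void CaptureFrontBuffer(IDirect3DDevice9* device, UINT screenW, UINT screenH)
{
    IDirect3DSurface9* sysmem = NULL;    // staging surface in system memory
    IDirect3DTexture9* tex    = NULL;    // video-memory texture to draw with
    IDirect3DSurface9* vidmem = NULL;

    // GetFrontBufferData requires an A8R8G8B8 surface in D3DPOOL_SYSTEMMEM.
    device->CreateOffscreenPlainSurface(screenW, screenH, D3DFMT_A8R8G8B8,
                                        D3DPOOL_SYSTEMMEM, &sysmem, NULL);
    device->CreateTexture(screenW, screenH, 1, 0, D3DFMT_A8R8G8B8,
                          D3DPOOL_DEFAULT, &tex, NULL);
    tex->GetSurfaceLevel(0, &vidmem);

    // 1) Front buffer -> system memory: the read-back the question wants to avoid.
    device->GetFrontBufferData(0, sysmem);

    // 2) System memory -> video memory, so the capture can be drawn as a texture.
    device->UpdateSurface(sysmem, NULL, vidmem, NULL);

    // ... draw 'tex' scaled down, then Release() the interfaces ...
}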

Browser maximum screen size

I am planning an advertisement display solution that might use a browser with multiple monitors. One of the questions is: is there a limit in the maximum screen size of a browser?
No, there isn't. You're generally safe deeming 2,000 pixels or so to be the 99.999%-of-users limit, but there's no hard technological limit preventing some nutter from having a 100,000 pixel browser window displaying on a Times Square-style billboard or something.
