OpenGL rendering - a window out of the screen - Linux

When I draw a triangle and part (or the whole) of that primitive is placed outside the viewing volume, OpenGL performs clipping (before rasterization). That is described, for example, here: link. What happens when a part of the window is placed outside the screen (monitor)? What happens if I use a compositing window manager (on Linux, for example Compiz) and the whole OpenGL window is placed on a virtual desktop (for example a wall of the cube) which is not visible? What happens in that OpenGL application? Is there any GPU usage? What about redirecting the content of that window to an offscreen pixmap?

When I draw a triangle and part (or the whole) of that primitive is placed outside the viewing volume, OpenGL performs clipping (before rasterization).
Clipping is a geometrical operation. When it comes to rasterization, everything happens on the pixel level.
It all comes down to Pixel Ownership.
In a plain, non-composited windowing system, all windows share the same screen framebuffer. Each window is, well, a window (offset + size) into the screen framebuffer. When things get drawn to a window, using OpenGL or not, every pixel is tested for whether it actually belongs to that window: if some other (visible) window is in front, the pixel ownership test fails for those pixels and nothing gets drawn there (so the window in front doesn't get overdrawn). That's why, in a plain, uncomposited environment, you can "screenshot" windows overlaying your OpenGL window with glReadPixels: that part of the framebuffer handed to OpenGL effectively belongs to another window, but OpenGL doesn't know this.
Similarly, if a window is moved partially or completely off screen, the off-screen pixels will fail the pixel ownership test and nothing gets drawn there.
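As a small illustration of that last point, here is a hedged sketch (not from the original answer) of reading back the window's region of the front buffer with glReadPixels; on a non-composited desktop the returned pixels are simply whatever the framebuffer holds in that region, overlapping windows included:

    #include <GL/gl.h>
    #include <vector>
    #include <cstdint>

    // Read back the window's on-screen contents. On a non-composited
    // desktop this returns the raw framebuffer region, so pixels owned
    // by overlapping windows show up too.
    std::vector<uint8_t> readBackWindow(int width, int height) {
        std::vector<uint8_t> rgba(width * height * 4);
        glReadBuffer(GL_FRONT);                 // the visible buffer
        glPixelStorei(GL_PACK_ALIGNMENT, 1);    // tightly packed rows
        glReadPixels(0, 0, width, height,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
        return rgba;                            // rows are bottom-up
    }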
What happens if I use a compositing window manager
Then every window has its very own framebuffer, which it completely owns. Pixel Ownership tests will never fail. You figure the rest.
What happens in that OpenGL application?
To OpenGL it looks like all the pixels pass the ownership test when rasterizing. That's it.
Is there any GPU usage?
The pixel ownership test is so important that even long before there were GPUs, the first generation of graphics cards had the functionality required to implement it. The function is easy to implement and hardwired, so there's no difference in that regard.
However, the more pixels fail the test, i.e. are not being touched, the less work the GPU has to do in the rasterization stage, so rendering throughput actually increases if the window is obscured or partially moved off-screen in a non-composited environment.

Related

Copy D3D11 Texture2D to D2D1 Bitmap

I have a D3D11 device created, Windows 10, latest edition, and an ID3D11Texture2D * created, in GPU memory. I want to get the contents of this Texture2D stretched and drawn onto a region of an HWND. I don't want to use vertex buffers, I want to use "something else". I don't want to copy the bits down to the CPU and then bring them back up to the GPU again. StretchDIBits or StretchBlt would be way too slow.
Let's pretend I want to use D2D1... I need a way to get my D3D11 Texture2D copied or shared over to D2D1. Then I want to use a D2D1 render target to stretch-blit it to the HWND.
I've read the MS documents, and they don't make a lot of sense.
Ideas?
If you already have an ID3D11Texture2D, why aren't you just using Direct3D to render it? That's what the hardware is designed to do, very fast and with high quality.
The DirectX Tool Kit SpriteBatch class is a good place to start for general sprite rendering, and it does indeed make use of VBs, shaders, etc. internally.
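For illustration, a minimal sketch of that with DirectXTK's SpriteBatch, assuming the texture was created with D3D11_BIND_SHADER_RESOURCE and a render target is already bound; in real code the SpriteBatch would be created once, not per call:

    #include <d3d11.h>
    #include <SpriteBatch.h>   // DirectX Tool Kit
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    // Stretch-draw an existing ID3D11Texture2D into a destination RECT.
    void DrawStretched(ID3D11Device* device, ID3D11DeviceContext* ctx,
                       ID3D11Texture2D* tex, const RECT& dest) {
        ComPtr<ID3D11ShaderResourceView> srv;
        device->CreateShaderResourceView(tex, nullptr, &srv); // default view

        DirectX::SpriteBatch batch(ctx);  // create once in a real app
        batch.Begin();
        batch.Draw(srv.Get(), dest);      // stretches the texture to 'dest'
        batch.End();
    }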
Direct2D is really best suited to scenarios where you are drawing classic vector/presentation graphics, like circles, ellipses, arcs, etc. It's also useful as a way to use DirectWrite for high-quality, highly scalable fonts. For blitting rectangles, just use Direct3D which is what Direct2D has to use under the covers anyhow.
Note that if you require Direct3D Hardware Feature Level 10.0 or better, you can use a common trick which relies on SV_VertexID in the vertex shader, so you can self-generate the geometry without any need for a VB or IB. See this code.
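A minimal sketch of that trick: the vertex shader synthesizes an oversized full-screen triangle purely from SV_VertexID, so the draw call needs no vertex or index buffer at all (the shader compile/bind plumbing is assumed to exist elsewhere):

    #include <d3d11.h>

    // HLSL, embedded as a string for brevity: positions and UVs are
    // derived from SV_VertexID (requires Feature Level 10.0+).
    static const char* kFullscreenVS = R"hlsl(
    struct VSOut { float4 pos : SV_Position; float2 uv : TEXCOORD0; };
    VSOut main(uint id : SV_VertexID)
    {
        VSOut o;
        o.uv  = float2((id << 1) & 2, id & 2);  // (0,0), (2,0), (0,2)
        o.pos = float4(o.uv * float2(2, -2) + float2(-1, 1), 0, 1);
        return o;
    }
    )hlsl";

    // At draw time: no vertex buffer, no input layout required.
    void DrawFullscreenTriangle(ID3D11DeviceContext* ctx)
    {
        ctx->IASetInputLayout(nullptr);
        ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        ctx->Draw(3, 0);   // SV_VertexID takes the values 0, 1, 2
    }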

XOrg server code that draws the mouse pointer

I'm writing an OpenGL application in Linux using Xlib and GLX. I would like to use the mouse pointer to draw and to drag objects in the window. But no matter what method I use to draw or move graphic objects, there is always a very noticeable lag between the actual mouse pointer position (as drawn by the X server) and the positions of the objects I draw with the pointer coordinates I get from Xlib (XQueryPointer or the X events) or reading directly from /dev/input/event*
So my question is: what code is used by the XOrg server to actually draw the mouse pointer on the screen? That way I could use the same kind of code to place the graphics on the screen and have the mouse pointer and the graphic objects always perfectly aligned.
Even a pointer to the relevant source file(s) of XOrg would be great.
So my question is: what code is used by the XOrg server to actually draw the mouse pointer on the screen?
If everything goes right, no code at all is drawing the mouse pointer. So-called "hardware cursor" support has been around for decades. Essentially it's what's known as a "sprite engine" in the hardware: it takes some small picture and a pair of values (x, y) for where it shall appear on the screen. In every frame the graphics hardware sends to the display, the cursor image is overlaid at that position.
The graphics system is constantly updating the position values based on the input device movements.
Of course there is also graphics hardware that does not have this kind of "sprite engine". The trick then is to update often, to update fast, and to update late.
But no matter what method I use to draw or move graphic objects, there is always a very noticeable lag between the actual mouse pointer position (as drawn by the X server) and the positions of the objects I draw with the pointer coordinates I get from Xlib (XQueryPointer or the X events) or reading directly from /dev/input/event*
Yes, that happens if you read the input and integrate it into your image at the wrong time. The key ingredient to minimizing latency is to draw as late as possible, integrating as much input as you can right up until you absolutely have to draw in order to meet the V-Sync deadline. And the most important trick is not to draw what happened in the past, but to draw what the state of affairs will be at the moment the picture appears on screen, i.e. you have to predict the input for the next couple of frames drawn and use that.
The Kalman filter has become the de-facto standard method for this.
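For illustration, a hedged sketch (not from the answer) of one constant-velocity Kalman filter per axis: feed it pointer samples as they arrive, and ask it for a position a frame or two ahead when drawing. The noise constants are tuning assumptions:

    // 1-D constant-velocity Kalman filter; use one instance per axis.
    struct KalmanAxis {
        double x = 0, v = 0;                 // state: position, velocity
        double P[2][2] = {{1, 0}, {0, 1}};   // state covariance
        double q = 50.0;                     // process noise (assumed tuning)
        double r = 2.0;                      // measurement noise (assumed tuning)

        void predict(double dt) {            // advance state by dt seconds
            x += v * dt;
            // P = F P F^T + Q with F = [[1, dt], [0, 1]]
            P[0][0] += dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q * dt;
            P[0][1] += dt * P[1][1];
            P[1][0] += dt * P[1][1];
            P[1][1] += q * dt;
        }
        void update(double z) {              // fold in a measured position
            double S = P[0][0] + r;
            double K0 = P[0][0] / S, K1 = P[1][0] / S;
            double y = z - x;
            x += K0 * y;  v += K1 * y;
            double p00 = P[0][0], p01 = P[0][1];
            P[0][0] -= K0 * p00;  P[0][1] -= K0 * p01;
            P[1][0] -= K1 * p00;  P[1][1] -= K1 * p01;
        }
        double lookAhead(double dt) const {  // predicted draw position
            return x + v * dt;
        }
    };

Typical per-frame use: call update() with the latest XQueryPointer sample, predict() with the frame delta, then draw at lookAhead(one or two frame times).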

SDL library: does SDL_UpdateRect() really need to be called for an SDL_HWSURFACE?

I'm a newbie to SDL and I've read about a dozen introductory tutorials so far. I'm a bit puzzled about the difference between hardware and software surfaces (i.e. the SDL_HWSURFACE and SDL_SWSURFACE flags in the call to SDL_SetVideoMode()), and how SDL_UpdateRect() behaves for each type of surface. My understanding is that for a hardware surface, there is no need to call SDL_UpdateRect() because one is drawing directly to the display screen. However, the example in this Wikibooks tutorial (http://en.wikibooks.org/wiki/SDL_%28Simple_DirectMedia_Layer%29) shows otherwise: it calls SDL_UpdateRect() on a hardware surface.
I'm a bit puzzled about the difference between hardware and software surfaces (i.e. the SDL_HWSURFACE and SDL_SWSURFACE flags in the call to SDL_SetVideoMode())
In the SDL documentation, AFAIK, they simply state that software surfaces are stored in system memory (your computer's RAM) and hardware surfaces in video memory (your GPU's RAM), and that hardware surfaces may take advantage of hardware acceleration (see below on SDL_Flip).
Though the documentation doesn't say much about it, there are differences; e.g., I know that when using alpha blending, software surfaces perform better than hardware ones. I have also heard that software surfaces are better for direct pixel access, but I can't confirm it.
and how SDL_UpdateRect() behaves for each type of surface
The behavior is the same for both: the function updates the given rectangle of the given surface. There may be implementation differences, but, again, the documentation doesn't state anything about them.
For SDL_Flip, though, it is a different story: when used with hardware surfaces, it attempts to swap the video buffers (only possible if the hardware supports double buffering; remember to pass the SDL_DOUBLEBUF flag to SDL_SetVideoMode). If the hardware does not support double buffering, or if it is a software surface, the SDL_Flip call is equivalent to updating the entire surface area with SDL_UpdateRect (SDL_UpdateRect(screen, 0, 0, 0, 0)).
My understanding is that for a hardware surface, there is no need to call SDL_UpdateRect() because one is drawing directly to the display screen.
No, you should call it just as with software surfaces.
Also note that the name chosen for the surface parameter of the SDL_UpdateRect and SDL_Flip functions can be a little misleading: the parameter name is screen, but it can be any SDL_Surface (not just the surface that represents the user's screen). Most of the time (at least for simple applications) you will be blitting only to the screen surface, but it makes sense (and is sometimes necessary) to blit to other surfaces that are not the user's screen.
SDL_UpdateRect() has nothing to do with the surface type (don't use it when using SDL with OpenGL). You should call it whenever you have to update (part of) an SDL_Surface.
In fact, every time you flip a surface, an SDL_UpdateRect(screen, 0, 0, 0, 0) is called for that surface.
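A minimal SDL 1.2 sketch of the setup discussed above: request a hardware double-buffered surface and present every frame with SDL_Flip (error checks omitted):

    #include <SDL/SDL.h>

    int main() {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Surface* screen =
            SDL_SetVideoMode(640, 480, 32, SDL_HWSURFACE | SDL_DOUBLEBUF);

        for (int frame = 0; frame < 600; ++frame) {
            // Draw the frame (here: a pulsing red fill)...
            SDL_FillRect(screen, NULL,
                         SDL_MapRGB(screen->format, frame % 256, 0, 0));
            // ...then present: swaps buffers if double buffering is
            // available, otherwise acts like a full SDL_UpdateRect.
            SDL_Flip(screen);
        }
        SDL_Quit();
        return 0;
    }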

Fast pixel drawing library

My application produces an "animation" in a per-pixel manner, so I need to draw the frames efficiently. I've tried different strategies/libraries with unsatisfactory results, especially at higher resolutions.
Here's what I've tried:
SDL: ok, but slow;
OpenGL: inefficient pixel operations;
xlib: better, but still too slow;
svgalib, directfb, (other frame buffer implementations): they seem perfect but definitely too tricky to set up for the end user.
(NOTE: I'm maybe wrong about these assertions, if it's so please correct me)
What I need is the following:
fast pixel drawing with performances comparable to OpenGL rendering;
it should work on Linux (cross-platform as a bonus feature);
it should support double buffering and vertical synchronization;
it should be portable as far as hardware is concerned;
it should be open source.
Can you please give me some enlightenment/ideas/suggestions?
Are your pixels sparse or dense (e.g. a bitmap)? If you are creating dense bitmaps out of pixels, then another option is to convert the bitmap into an OpenGL texture and use OpenGL APIs to render at some framerate.
The basic problem is that graphics hardware will be very different on different hardware platforms. Either you pick an abstraction layer, which slows things down, or code more closely to the type of graphics hardware present, which isn't portable.
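For the dense-bitmap case, a hedged sketch of the texture approach (legacy fixed-function GL for brevity): render each frame into a CPU-side buffer, upload it in one bulk glTexSubImage2D call, and draw a single textured quad:

    #include <GL/gl.h>
    #include <vector>
    #include <cstdint>

    const int W = 640, H = 480;
    std::vector<uint32_t> pixels(W * H);  // the CPU-side framebuffer
    GLuint tex;

    void initTexture() {
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, W, H, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);  // allocate only
    }

    void drawFrame() {
        // One bulk upload of this frame's pixels...
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
        // ...then one textured quad covering the viewport.
        glEnable(GL_TEXTURE_2D);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 1); glVertex2f(-1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1,  1);
        glTexCoord2f(0, 0); glVertex2f(-1,  1);
        glEnd();
    }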
I'm not totally sure what you're doing wrong, but it could be that you are writing pixels one at a time to the display surface.
Don't do that.
Instead, create a rendering surface in main memory in the same format as the display surface, render to it, and then copy the whole rendered image to the display in a single operation. Modern GPUs are very slow per transaction, but can move lots of data very quickly in a single operation.
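A hedged Xlib sketch of that pattern (the usual event loop and error handling are omitted): draw into a malloc'd buffer wrapped in an XImage, then push it to the window with a single XPutImage per frame:

    #include <X11/Xlib.h>
    #include <cstdlib>
    #include <cstdint>

    int main() {
        Display* dpy = XOpenDisplay(NULL);
        int scr = DefaultScreen(dpy);
        const int W = 640, H = 480;
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0,
                                         W, H, 0, 0, BlackPixel(dpy, scr));
        XMapWindow(dpy, win);
        // A real program would wait for the MapNotify/Expose event here.

        uint32_t* buf = (uint32_t*)std::malloc(W * H * 4);  // CPU framebuffer
        XImage* img = XCreateImage(dpy, DefaultVisual(dpy, scr),
                                   DefaultDepth(dpy, scr), ZPixmap, 0,
                                   (char*)buf, W, H, 32, 0);

        for (int frame = 0; frame < 600; ++frame) {
            for (int i = 0; i < W * H; ++i)          // per-pixel rendering
                buf[i] = 0xFF000000u | (frame & 0xFF);
            XPutImage(dpy, win, DefaultGC(dpy, scr), img,
                      0, 0, 0, 0, W, H);             // one bulk copy
            XFlush(dpy);
        }
        XDestroyImage(img);  // also frees buf
        XCloseDisplay(dpy);
        return 0;
    }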
Looks like you are confusing windowing libraries (SDL and Xlib) with the rendering library (OpenGL).
Just pick a windowing library (SDL, GLUT, or Xlib if you like a challenge), activate double-buffer mode, and make sure that you get direct rendering.
What kind of graphics card do you have? Most likely it can process the pixels on the GPU. Look up how to create pixel shaders in OpenGL; pixel shaders do their processing per pixel.
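For illustration, a tiny GLSL fragment shader (embedded as a C++ string; the usual glCreateShader/glCreateProgram plumbing is assumed) that computes every pixel's color on the GPU from its screen position and a time uniform:

    // GLSL 1.20 fragment shader: runs once per pixel on the GPU.
    static const char* kFragmentShader = R"glsl(
    #version 120
    uniform float time;  // animation parameter, updated each frame
    void main() {
        gl_FragColor = vec4(fract(gl_FragCoord.x / 255.0 + time),
                            fract(gl_FragCoord.y / 255.0),
                            0.5, 1.0);
    }
    )glsl";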

Drawing Vector Graphics Faster

In my application I want to draw polygons using the Windows CreateGraphics method, and later edit a polygon by letting the user select its points and re-position them.
I use the mouse-move event to get the new coordinates of the point being moved, and the Paint event to re-draw the polygon. The application works, but when a point is moved the movement is not smooth.
I don't know whether the mouse-move or the paint event is the performance hindrance.
Can anyone make a suggestion as to how to improve this?
Make sure that you don't repaint for every mouse move. The proper way to do this is to handle all your input events, modifying the polygon data and setting a flag that a repaint needs to occur (on Windows, possibly just calling InvalidateRect() without calling UpdateWindow()).
You might not have a real performance problem - it could be that you just need to draw to an off screen DC and then copy that to your window, which will reduce flicker and make the movement seem much smoother.
If you're coding against the Win32 API, look at this for reference.
...and of course, make sure you only invalidate the area that needs to be repainted. Since you're keeping track of the polygons, invalidate only the polygon area (the rectangular union of the before and after states).
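Putting those suggestions together, a hedged Win32 sketch (the polygon model and the exact dirty-rectangle computation are placeholders):

    #include <windows.h>

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
        switch (msg) {
        case WM_MOUSEMOVE: {
            // ...move the selected polygon point here (model update only)...
            RECT dirty;
            GetClientRect(hwnd, &dirty);  // placeholder: a real app would use
                                          // the union of old/new polygon bounds
            InvalidateRect(hwnd, &dirty, FALSE);  // no UpdateWindow():
            return 0;                             // paints coalesce
        }
        case WM_PAINT: {
            PAINTSTRUCT ps;
            HDC hdc = BeginPaint(hwnd, &ps);
            RECT rc;
            GetClientRect(hwnd, &rc);

            // Off-screen buffer: draw everything here, then copy once.
            HDC mem = CreateCompatibleDC(hdc);
            HBITMAP bmp = CreateCompatibleBitmap(hdc, rc.right, rc.bottom);
            HGDIOBJ old = SelectObject(mem, bmp);
            FillRect(mem, &rc, (HBRUSH)(COLOR_WINDOW + 1));
            // ...Polygon(mem, pts, n) with the current point positions...
            BitBlt(hdc, 0, 0, rc.right, rc.bottom, mem, 0, 0, SRCCOPY);

            SelectObject(mem, old);
            DeleteObject(bmp);
            DeleteDC(mem);
            EndPaint(hwnd, &ps);
            return 0;
        }
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }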
