How to implement high-speed animation?

I'm trying to write an application (winforms) that can demonstrate how two oscillating colors will result in a third color. For this I need to be able to switch between two colors very fast (at >50 fps). I'm really hoping to do this in managed code.
Right now I'm drawing two small rectangular bitmaps with solid colors on top of each other. Using GDI+ DrawImage with two in-memory bitmaps in a doublebuffering enabled control doesn't cut it and results in flickering/tearing at high speeds. A timer connected to a slider triggers the switching.
Is this a sensible approach?
Will GDI and BitBlt be better?
Does WPF perform better?
What about DirectX or other technologies?
I would really appreciate feedback, TIA!

I have never had good luck with GDI to do high speed graphics, so I used DirectX, but MS has dropped support for Managed DirectX, so you may need to do this in unmanaged C++.
Just write your controller in C#, then have a very thin layer of managed C++ that just calls to the unmanaged C++ DLL that has DirectX support.
You will need to get exclusive control of the computer, so that no other application can really use the CPU; otherwise you will find that your framerate can drop, or at least not be very consistent.
An older version of DirectX, such as DirectX 9.0c, may still have support for .NET; I used that to get a framerate of about 70 frames/second for a music program.

Flicker should be avoidable with a double-buffered approach (and by this I don't mean just setting the rendering control's DoubleBuffered property to True - ironically, this will have no effect on flicker).
Tearing can be dealt with via DirectX, but only if you synchronize your frame rate with your monitor's refresh rate. This may not be possible, especially if you need to achieve a specific frame rate (and it doesn't happen to be your monitor's refresh rate).
I don't think WPF gets around the fundamental tearing problem (but I could be wrong).

This will work with GDI, but you won't be able to control flicker, so it's kind of out of the question. DirectX may be a lot of extra fluff just for showing two non-flickering images. Perhaps SDL will work well enough? It's cross-platform and you can literally code this effect in less than 30 lines of code.
http://cs-sdl.sourceforge.net/index.php/SimpleExample
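For a sense of what that looks like, here's a rough sketch of the two-color flip against the classic SDL 1.2 C API (which is what the linked SDL.NET wrapper sits on); the window size and colors are arbitrary:

    #include <SDL.h>

    // Alternate two solid colors as fast as the display allows.
    int main(int argc, char* argv[]) {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Surface* screen = SDL_SetVideoMode(320, 240, 32,
                                               SDL_DOUBLEBUF | SDL_HWSURFACE);
        Uint32 colors[2] = {
            SDL_MapRGB(screen->format, 255, 0, 0),  // red
            SDL_MapRGB(screen->format, 0, 0, 255)   // blue
        };
        int frame = 0;
        SDL_Event ev;
        for (;;) {
            while (SDL_PollEvent(&ev))
                if (ev.type == SDL_QUIT) { SDL_Quit(); return 0; }
            SDL_FillRect(screen, NULL, colors[frame++ & 1]);
            SDL_Flip(screen);  // swap buffers; waits for vsync when the driver supports it
        }
    }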


Why do we still use fixed-function blending operations in D3D11 etc.?

I was looking around trying to understand why we are still using fixed-function blending modes in newer 3D APIs (like D3D11). In D3D10, fixed-function alpha clipping was removed in favor of doing it in the shaders. Why? Because it's a much more powerful approach for almost any situation.
So why then can we not calculate our own blending operations (i.e. sample the texture of the render target we are currently rendering into)? Is there some hardware design issue in the video card pipelines that makes this difficult to accomplish?
The reason this would be useful is that you could make things like refraction shaders run much faster, since you wouldn't have to swap back and forth between two render targets for each refractive object overlay. For example, a refractive windowing system for an OS or game UI.
Since this is not a discussion forum: where would be the best place to suggest an idea like this? I would love to see it in D3D12. Or is this already possible in D3D11?
So why then can we not calculate our own blending operations
Who says you can't? With shader_image_load_store (and the D3D11 equivalent), you can do pretty much anything you want with images. Provided that you follow the rules; that last part is generally what trips people up. Doing a full read/modify/write in a shader, such that later fragment shader invocations don't read the wrong value, is almost impossible in the most general case. You have to restrict it by saying that each rendered object will not overlap with itself, and you have to insert a memory barrier between rendered objects (which can overlap with other rendered objects). Or you use the linked-list approach.
But the point is this: with these mechanisms, not only have people implemented blending in shaders, but they've implemented order-independent transparency (via linked lists). Nothing is stopping you from doing what you want right now.
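To make the D3D11 side concrete: the equivalent of image load/store is an unordered access view (a UAV, seen as RWTexture2D from HLSL) bound at the output-merger stage. Here is a rough sketch of the binding call, with 'ctx' and 'uav' as hypothetical names for objects you would already have; note that D3D11 won't let you bind the same subresource as both render target and UAV at once, so the pixel shader writes through the UAV instead of a render target:

    #include <d3d11.h>

    // Bind a UAV in place of the render target so the pixel shader can
    // do its own read/modify/write (i.e. blending) on the texture.
    void BindForShaderBlending(ID3D11DeviceContext* ctx,
                               ID3D11UnorderedAccessView* uav) {
        ID3D11UnorderedAccessView* uavs[] = { uav };
        UINT counts[] = { (UINT)-1 };  // -1: keep any existing append/consume counter
        ctx->OMSetRenderTargetsAndUnorderedAccessViews(
            0, NULL, NULL,  // no RTVs, no depth-stencil
            0, 1, uavs,     // one UAV starting at slot 0
            counts);
    }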
Well, nothing except performance of course. The fixed-function blender will always be faster because it can run in parallel with the fragment shader operations. The blending units are separate hardware from the fragment shaders, so you can be doing blending operations while simultaneously doing fragment shader ops (obviously from later fragments, not the ones being blended).
The read/modify/write mechanism in the blend hardware is designed specifically for blending, while the image_load_store is a more generic mechanism. And while generic may beat specific in the long-term of hardware evolution, for the immediate and near-future, you can expect fixed-function blending to beat image_load_store blending performance-wise every time.
You should use it only when you must. And even then, decide if you really, really need it.
Is there some hardware design issue in the video card pipelines that makes this difficult to accomplish?
Yes, this is actually the case. If one could do blending in the fragment shader, this would introduce possible feedback loops, and this really complicates things. Blending is done in a separate hardwired stage for performance and parallelization reasons.

VC++ graphics performance

I want to build an object that draws a realtime graph, but I have performance limitations. The size of the graph is static. When repainting the graph, I can either redraw all the needed lines, or I can keep the graph in an in-memory bitmap and copy it to the screen each time. Which way is better? What is faster: copying a bitmap or drawing lines?
I guess it depends on what you are trying to display. Showing a few lines should not pose any performance problems (if done well), but doing anything more graphics-intensive can be more problematic.
It also depends on what you use for drawing. GDI is easy but slow; GDI+ is also easy, can be prettier (antialiasing, etc.) but is also quite slow (or used to be when I tried it); OpenGL is fast but a bit trickier.
So it's a question with no easy answer without knowing all the details of your needs. I think I would draw directly, and if that's not fast enough, then check other options. What you'll probably need anyway is a double-buffering system, to avoid flickering (check http://www.codeproject.com/KB/GDI/flickerfree.aspx).
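For reference, here is a minimal sketch of that double-buffering idea in raw Win32/GDI terms: draw the whole frame into a memory DC, then copy it to the window in one BitBlt (the single line drawn here is just a placeholder for your graph):

    #include <windows.h>

    // Flicker-free repaint: render off-screen, then blit once.
    // Call this from your WM_PAINT handler.
    void PaintDoubleBuffered(HWND hwnd) {
        PAINTSTRUCT ps;
        HDC screen = BeginPaint(hwnd, &ps);

        RECT rc;
        GetClientRect(hwnd, &rc);
        int width = rc.right, height = rc.bottom;

        HDC memDC = CreateCompatibleDC(screen);
        HBITMAP backBuffer = CreateCompatibleBitmap(screen, width, height);
        HGDIOBJ oldBmp = SelectObject(memDC, backBuffer);

        PatBlt(memDC, 0, 0, width, height, WHITENESS);  // clear the buffer
        MoveToEx(memDC, 0, 0, NULL);                    // placeholder drawing
        LineTo(memDC, width, height);

        // One blit to the screen; the user never sees a half-drawn frame.
        BitBlt(screen, 0, 0, width, height, memDC, 0, 0, SRCCOPY);

        SelectObject(memDC, oldBmp);
        DeleteObject(backBuffer);
        DeleteDC(memDC);
        EndPaint(hwnd, &ps);
    }

Since the graph is static, you could also create the back buffer once, draw the lines into it once, and let WM_PAINT reduce to the single BitBlt, which answers the bitmap-vs-lines question in favor of the bitmap.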
You can take a look at http://www.codeproject.com/KB/miscctrl/High-speedCharting.aspx. It's a charting control which seems to be quite fast.

How to avoid tearing with pygame on Linux/X11

I've been playing with pygame (on Debian/Lenny).
It seems to work nicely, except for annoying tearing of blits (fullscreen or windowed mode).
I'm using the default SDL X11 driver. Googling suggests that it's a known issue with SDL that X11 provides no vsync facility (even with a display created with FULLSCREEN|DOUBLEBUF|HWSURFACE flags), and I should use the "dga" driver instead.
However, running
SDL_VIDEODRIVER=dga ./mygame.py
throws during pygame initialisation with
pygame.error: No available video device
(despite xdpyinfo showing an XFree86-DGA extension present).
So: what's the trick to getting tear-free vsynced flips? Either by getting this dga thing working or some other mechanism?
The best way to keep tearing to a minimum is to keep your frame rate as close to the screen's frequency as possible. The SDL library doesn't have a vsync unless you're running OpenGL through it, so the only way is to approximate the frame rate yourself.
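The pacing idea itself is independent of SDL or pygame; a minimal sketch in C++, assuming a 60 Hz display:

    #include <chrono>
    #include <thread>

    int main() {
        using clock = std::chrono::steady_clock;
        const auto frame_time = std::chrono::microseconds(1000000 / 60);
        auto next = clock::now();
        for (int frame = 0; frame < 600; ++frame) {  // run ~10 seconds
            // ... update and draw the frame here ...
            next += frame_time;
            std::this_thread::sleep_until(next);  // sleep off the leftover frame time
        }
    }

In pygame, pygame.time.Clock().tick(60) does essentially this for you.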
The SDL hardware double buffer isn't guaranteed, and although it's nice when it works, I've seldom seen it in action.
In my experience with SDL you have to use OpenGL to completely eliminate tearing. It's a bit of an adjustment, but drawing simple 2D textures isn't all that complicated, and you get a few other bonuses along the way, like rotation, scaling and blending.
However, if you still want to use software rendering, I'd recommend dirty-rectangle updating. It's also a bit difficult to get used to, but it saves loads of processing, which may make it easier to keep the updates up to pace, and it avoids tearing across the whole screen (unless you're scrolling the whole play area or something). It also keeps the time spent drawing to the buffer to a minimum, which makes it less likely that a blit happens while the screen is updating, which is the cause of the tearing.
Well my eventual solution was to switch to Pyglet, which seems to support OpenGL much better than Pygame, and doesn't have any flicker problems.
Use the SCALED flag and vsync=True when calling set_mode and you should be all set (at least on any systems which actually support this; in some scenarios SDL still can't give you a VSync-capable surface but they are increasingly rare).

Antialiasing alternatives

I've seen antialiasing on Windows using GDI+, Java and also that provided by Photoshop and Gimp. Are there any other libraries out there which provide antialiasing facility without depending on support from the host OS?
Antigrain Geometry provides anti-aliased graphics in software.
As simon pointed out, the term anti-aliasing is misused/abused quite regularly so it's always helpful to know exactly what you're trying to do.
Since you mention GDI, I'll assume you're talking about maintaining nice crisp edges when you resize images, so that something like a character in a font looks clean and not pixelated when you resize it to 2x or 3x its original size. For these sorts of things I've used a technique in the past called alpha-tested magnification; you can read the whitepaper here:
http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf
When I implemented it, I used more than one plane so I could get better edges on all types of objects; the paper covers that briefly towards the end. Of all the approaches (that I've used) to maintain quality when scaling vector images, this was the easiest and highest quality. It also has the advantage of being easily implemented in hardware. From an existing API standpoint, your best bet is to use either OpenGL or Direct3D. That said, it really only requires bilinear filtering and texture mapping to accomplish what it does, so you could roll your own (I have in the past). If you are always dealing with rectangles and only need to do scaling it's pretty trivial, and adding rotation doesn't add that much complexity. If you do roll your own, pay particular attention to subpixel positioning (how you resolve pixel positions that do not fall on a full pixel), as this is critical to the quality and sometimes overlooked.
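To make that concrete, here is a rough CPU-side sketch of the core of the technique: bilinearly sample the alpha (or distance) map at an arbitrary position, then threshold at 0.5 to recover a crisp edge at any magnification. The AlphaMap type and names are made up for illustration:

    #include <cmath>
    #include <cstdint>

    struct AlphaMap {
        int w, h;
        const uint8_t* data;  // one alpha byte per texel
        float texel(int x, int y) const {  // clamped fetch
            if (x < 0) x = 0; if (x >= w) x = w - 1;
            if (y < 0) y = 0; if (y >= h) y = h - 1;
            return data[y * w + x] / 255.0f;
        }
    };

    // u,v are in texel units; returns whether the pixel is inside the shape.
    bool covered(const AlphaMap& m, float u, float v) {
        int x = (int)std::floor(u), y = (int)std::floor(v);
        float fx = u - x, fy = v - y;
        // Bilinear blend of the four surrounding texels...
        float a = m.texel(x,     y)     * (1 - fx) * (1 - fy)
                + m.texel(x + 1, y)     * fx       * (1 - fy)
                + m.texel(x,     y + 1) * (1 - fx) * fy
                + m.texel(x + 1, y + 1) * fx       * fy;
        // ...then the alpha test: the 0.5 iso-contour is the shape's edge.
        return a >= 0.5f;
    }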
Hope that helps!
There are (often misnamed, btw, but that's a dead horse) many anti-aliasing approaches that can be used. Depending on what you know about the original signal and what the intended use is, different things are most likely to give you the desired result.
"Support from the host OS" is probably most sensible if the output is through the OS display facilities, since they have the most information about what is being done to the image.
I suppose that's a long way of asking: what are you actually trying to do? Many graphics libraries will provide some form of antialiasing; whether it'll be appropriate depends a lot on what you're trying to achieve.

Learning about low-level graphics programming

I'm interested in learning about the different layers of abstraction available for making graphical applications.
I see a lot of terms thrown around: At the highest level of abstraction, I hear about things like C#, .NET, pyglet and pygame. Further down, I hear about DirectX and OpenGL. Then there's DirectDraw, SDL, the Win32 API, and still other multi-platform libraries like WxWidgets.
How can I get a good sense of where one of these layers ends and where the next one begins? What is the "lowest possible level" way of creating a window in Windows, in C? What about C++? (A code sample would be divine.) What about in X11? Are the Windows implementations of OpenGL and DirectX built on top of the Win32 API? Where can I begin to learn about these things?
There's another question on SO where Programming Windows is suggested. What about for Linux? Is there an equivalent book?
I'm aware that this is very low-level, and that there are many friendlier tools available, but I would like to at least learn the basics of what's going on beneath the surface. As much as I'd like to begin slinging windows and vectors right off the bat, starting with something like pygame is too high-level for me; I really need to make the full conceptual circuit of how you draw stuff on a computer.
I will certainly appreciate suggestions for books and resources, but I think it would be stupendously cool if the answers to this question filled up with lots of different ways to get to "Hello world" with different approaches to graphics programming. C? C++? Using OpenGL? Using DirectX? On Windows XP? On Ubuntu? Maybe I ask for too much.
The lowest level would be the graphics card's video RAM. When the computer first starts, the graphics card is typically set to the 80x25 character legacy mode.
You can write text with a BIOS-provided interrupt at this point. You can also change the foreground and background color from a palette of 16 distinctive colors. You can access ports/registers to change the display mode. At this point you could, say, load a different font into the display memory and still use the 80x25 mode (OS installations usually do this), or you can go ahead and enable VGA/SVGA. It's quite complicated; that's what drivers are for.
Once the card's in the 'higher' mode, you'd change what's on screen by accessing the memory mapped to the video card. It's stored horizontally, pixel by pixel, with some 'dirty regions' of pixels that aren't mapped to the screen at the end of each line, which you have to compensate for. But yeah, you can copy the pixels of an image in memory directly to the screen.
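If you want to poke at this yourself, here's a hedged sketch of the classic approach, assuming a 16-bit DOS compiler (e.g. Turbo C) running under dosbox: set VGA mode 13h (320x200, 256 colors) through BIOS interrupt 10h, then write bytes straight into video memory at segment A000:

    #include <dos.h>
    #include <conio.h>

    unsigned char far* vga = (unsigned char far*)0xA0000000L;  // video memory

    void set_mode(unsigned char mode) {
        union REGS regs;
        regs.h.ah = 0x00;  // BIOS function: set video mode
        regs.h.al = mode;
        int86(0x10, &regs, &regs);
    }

    int main(void) {
        int x, y;
        set_mode(0x13);  // 320x200, 256 colors
        for (y = 0; y < 200; ++y)
            for (x = 0; x < 320; ++x)
                vga[y * 320 + x] = (unsigned char)(x ^ y);  // one byte = one pixel
        while (!kbhit()) {}  // wait for a keypress
        set_mode(0x03);      // back to 80x25 text mode
        return 0;
    }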
For things like DirectX and OpenGL, rather than writing directly to the screen, you send commands to the graphics card and it updates its screen automatically. Commands like "Hey you, draw this image I've loaded into the VRAM here, here and here" or "Draw these triangles with this transformation matrix..." take a fraction of the time compared to plotting pixel by pixel. The CPU will thank you.
DirectX/OpenGL is a programmer friendly library for sending those commands to the card with all the supporting functions to help you get it done smoothly. A more direct approach would only be unproductive.
SDL is an abstraction layer, so without bothering to read up on it I'd guess it has different ways of working on each system. On one it might use semi-direct screen writing, on another Direct3D, etc. Whatever's fastest, as long as the code stays cross-platform.
Then there are GDI/GDI+ and the X Window System. They're designed specifically to draw windows. Originally they drew using the pixel-by-pixel method (which was good enough, because they'd only have to redraw when a button was pressed or a window moved, etc.), but now they use Direct3D/OpenGL for accelerated drawing (and special effects). Optimizations depend on the versions and implementations of these libraries.
So if you want the most power and speed, DirectX/OpenGL is the way to go. SDL is certainly useful for getting the most from a cross-platform environment, and it integrates with OpenGL anyway. The windowing system comes last, but don't underestimate it. Especially with the stuff Microsoft's been coming up with lately.
Michael Abrash's Graphics Programming 'Black Book' is a great place to start. Plus you can download it for free!
If you really want to start at the bottom then drawing a line is the most basic operation. Computer graphics is simply about filling in pixels on a grid (screen), so you need to work out which pixels to fill in to get a line that goes from (x0,y0) to (x1,y1).
Check out Bresenham's algorithm to get a feel for what is involved.
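For a feel of how small it is, here is the all-octant integer form in C++; plot() stands in for whatever puts a pixel on your grid:

    #include <cstdlib>

    void draw_line(int x0, int y0, int x1, int y1, void (*plot)(int, int)) {
        int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;  // running error term decides which pixel comes next
        for (;;) {
            plot(x0, y0);
            if (x0 == x1 && y0 == y1) break;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }  // step in x
            if (e2 <= dx) { err += dx; y0 += sy; }  // step in y
        }
    }

The whole algorithm is integer adds and compares, which is why it was fast enough even on machines with no hardware multiplier.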
To be a good graphics and image processing programmer doesn't require this low-level knowledge, but I do hate to be clueless about the insides of what I'm using. I see two ways to chase this: high-level down, or bottom-level up.
Top-down is a matter of following how the action traces from a high-level graphics operation, such as drawing a circle, down to the hardware. Get to know OpenGL well. Then the source to Mesa (free!) provides a peek at how OpenGL can be implemented in software. The source to Xorg would be next, first to see how the action goes from API calls through the client side to the X server. Finally you dive into a device driver that interfaces with hardware.
Bottom up: build your own graphics hardware. Think of ways it could connect to a computer - how to handle massive numbers of pixels through a few byte-size registers, how DMA would work. Write a device driver, and try designing a graphics library that might be useful for app programmers.
The bottom-up way is how I learned, years ago when it was a possibility with the slow 8-bit microprocessors. The direct experience with circuitry and hardware-software interfacing gave me a good appreciation of the difficult design decisions, e.g. whether to paint rectangles with clever hardware, in the device driver, or at a higher level. None of this is of practical everyday value, but it provided a foundation of knowledge to understand newer technology.
see Open GPU Documentation section:
http://developer.amd.com/documentation/guides/Pages/default.aspx
HTH
On MS Windows it is easy: you use what the API provides, whether it is the standard Windows programming API or the DirectX family of APIs, and they are well documented.
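Since the question asks for the lowest-level way to get a window up in C, here is a minimal sketch against the raw Win32 API (error checking omitted; compiles as C99 or C++):

    #include <windows.h>

    // Window procedure: handles messages sent to the window.
    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
        switch (msg) {
        case WM_DESTROY:
            PostQuitMessage(0);  // ends the message loop below
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmdLine, int nShow) {
        WNDCLASS wc = {0};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInst;
        wc.lpszClassName = TEXT("MinimalWindow");
        RegisterClass(&wc);

        HWND hwnd = CreateWindow(TEXT("MinimalWindow"), TEXT("Hello"),
                                 WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                                 640, 480, NULL, NULL, hInst, NULL);
        ShowWindow(hwnd, nShow);

        // The message pump: every Win32 GUI program has one of these.
        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        return (int)msg.wParam;
    }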
In an X Window environment you use whatever X11 libraries are provided. If you want to understand the principles behind windowing on X, I suggest that you do this, never mind that many others tell you not to: it will really help you to understand graphics and windowing under X. You can read the documentation on X programming (google for it). (After this exercise you would appreciate the higher-level libraries!)
Apart from the above, the absolute lowest level you can go (excluding chip level) is to call the interrupts that switch to the various graphics modes available (there are several) and then write to the screen buffers, but for this you would have to use assembler; anything else would be too slow. Going this way will not be portable at all.
Another post mentions Abrash's Black Book - an excellent resource.
Edit: As for books on programming Linux: it is a community thing, there are many howto's around; also find a forum, join it, and as long as you act civilized you will get all the help you can ever need.
Right off the bat, I'd say "you're asking too much." From what little experience I've had, I would recommend reading some tutorials or getting a book on either directX or OpenGL to start out. To go any lower than that would be pretty complex. Most of the books I've seen in OGL or DX have pretty good introductions that explain what the functions/classes do.
Once you get the hang of one of these, you could always dig in to the libraries to see what exactly they're doing to go lower.
Or, if you really, absolutely MUST learn the LOWEST level... read the book in the above post.
libX11 is the lowest-level library for X11. I believe OpenGL/DirectX talk to the driver/hardware directly (or emulate unsupported operations), so they would be the lowest-level libraries.
If you want to start with very low level programming, look for x86 assembly code for VGA and fire up a copy of dosbox or similar.
The Vulkan API gives you very low-level access to most if not all features of the GPU, computational and graphical, and it works on AMD and Nvidia GPUs (though not all of them).
You can also use CUDA, but it only works on Nvidia GPUs and gives access to the computational features only, with no video output.
