VC++ graphics performance

I want to build an object that draws a realtime graph, but I have performance limitations.
The size of the graph is static.
When repainting the graph,
I can redraw all the needed lines.
The other way is to save the graph in an off-screen bitmap
and copy it to the screen each time.
Which way is better?
What is faster: copying a bitmap or drawing lines?

I guess it depends on what you are trying to display. Showing a few lines should not pose any performance problems (if done well), but doing anything more graphics-intensive can be more problematic.
It also depends on what you use for drawing. GDI is easy but slow; GDI+ is also easy, can be prettier (antialiasing, etc.) but is also quite slow (or used to be when I tried it); OpenGL is fast but a bit trickier.
So it's a question with no easy answer, not knowing all the details of your needs. I think I would draw directly, and if it's not fast enough then check other options. What you'll probably need anyway is a double-buffering system, to avoid flickering (check http://www.codeproject.com/KB/GDI/flickerfree.aspx)
You can take a look at http://www.codeproject.com/KB/miscctrl/High-speedCharting.aspx. It's a charting control which seems to be quite fast.
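To make the two options concrete, here is a portable sketch of the cached-bitmap approach, in plain C++ with no GDI (the `Buffer`, `drawLine`, and `blit` names are made up for illustration): since the graph is static, draw the lines once into an off-screen buffer, and every subsequent repaint becomes a single memory copy.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <vector>

// An off-screen "bitmap": a flat ARGB pixel buffer of fixed size.
struct Buffer {
    int w, h;
    std::vector<std::uint32_t> px;
    Buffer(int w_, int h_) : w(w_), h(h_), px(w_ * h_, 0xFF000000u) {}
};

// Naive Bresenham line drawing; stands in for the per-frame redraw path.
void drawLine(Buffer& b, int x0, int y0, int x1, int y1, std::uint32_t c) {
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    for (;;) {
        if (x0 >= 0 && x0 < b.w && y0 >= 0 && y0 < b.h) b.px[y0 * b.w + x0] = c;
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}

// The cached-bitmap path: repainting is a single bulk copy, no per-line work.
void blit(const Buffer& src, Buffer& dst) {
    std::copy(src.px.begin(), src.px.end(), dst.px.begin());
}
```

In a real MFC app the off-screen buffer would be a memory DC with a compatible bitmap and the copy a BitBlt, but the trade-off is the same: one bulk copy per paint versus re-running every line-drawing call.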


Advanced Text Rendering with Direct3D

Let me describe the "battlefield" of my task:
Multi-room audio/video chat with more than 1M users;
Custom Direct3D renderer;
What I need to implement is a TextOverVideo feature. The text itself goes over the network and is rendered on the recipient side with the Direct3D renderer. AFAIK, it is common in game development to create your own texture with letters/numbers and draw those items. Because our application must support many languages, we ought to use a standard approach. That's why I've been working with the ID3DXFont interface, but I've run into some unsatisfying limitations.
What I've faced is a lack of scalability. E.g. if the user resizes the video window, I have to re-create the D3DXFont with a new D3DXFONT_DESC while he's doing that. I think it is unacceptable.
That is why the ONLY solution I see (given my skills) is to somehow render the text to a texture and then draw it as a sprite with scaling, translation, etc.
So, I'm not sure if I go into the correct direction. Please help with advice, experience, literature, sources...
Your question is a bit unclear. As I understand it, you want an easily scalable font.
I think it is unacceptable
As far as I know, this is standard behavior for fonts - even for system fonts. They aren't supposed to be easily scalable.
Possible solutions:
Use ID3DXRenderTarget for rendering text onto texture. Font will be filtered when you scale it up too much. Some people will think that it looks ugly.
Write a custom library that supports vector fonts, i.e. one that can extract the outline from a font and build text from it. It will be MUCH slower than ID3DXFont (which is already slower than traditional "texture" fonts), but the text will be easily scalable. With this approach you are very likely to get visible artifacts ("noise") for small text, so I wouldn't use it unless you want huge letters (40+ pixels). The FreeType library may have functions for processing font outlines.
Or you could try using D3DXCreateText. This will create 3D text for ONE string. Won't be fast at all.
I'd forget about it. As long as user is happy about overall performance, improving font rendering routines (so their behavior looks nice to you) is not worth the effort.
--EDIT--
About ID3DXRenderTarget.
Even if you use ID3DXRenderTarget, you'll still need ID3DXFont. I.e. you use ID3DXFont to render the text onto a texture, and then use the texture to blit the text onto the screen.
Because you said that performance is critical, you can delay the creation of the new ID3DXFont until the user stops resizing the video. I.e. while the user is resizing, you keep using the old font but upscale it via its texture. There will be filtering, of course. Once the user stops resizing, you create the new font when you have time; you can probably do that in a separate thread, but I'm not sure about it. Or you could simply always render text at the same resolution as the video. This way you won't have to worry about resizing it (it will still be filtered, along with the video). Some video players work this way.
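The idea of delaying font re-creation until the user stops resizing is essentially a debounce. A minimal, framework-free sketch (the class and its names are hypothetical, not a D3DX API; the caller keeps using the old font, scaled via its texture, until the gate fires):

```cpp
#include <cstdint>

// Debounce helper: rebuild an expensive resource (e.g. a D3DXFont) only
// after the requested size has been stable for `quietMs` milliseconds.
class FontRebuildGate {
public:
    explicit FontRebuildGate(std::uint64_t quietMs) : quietMs_(quietMs) {}

    // Call on every resize event.
    void onResize(int newHeight, std::uint64_t nowMs) {
        pendingHeight_ = newHeight;
        lastResizeMs_ = nowMs;
        dirty_ = true;
    }

    // Call once per frame; returns true when it is time to recreate the font.
    bool shouldRebuild(std::uint64_t nowMs) {
        if (dirty_ && nowMs - lastResizeMs_ >= quietMs_) {
            dirty_ = false;
            builtHeight_ = pendingHeight_;
            return true;
        }
        return false;
    }

    int builtHeight() const { return builtHeight_; }

private:
    std::uint64_t quietMs_, lastResizeMs_ = 0;
    int pendingHeight_ = 0, builtHeight_ = 0;
    bool dirty_ = false;
};
```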
A few more things about ID3DXFont. There is one problem with it: it is slow in situations where you need a lot of text (but you still need it, because it supports Unicode, and writing a texture font with Unicode support is a pain). The last time I worked with it, I optimized things by caching commonly used strings in textures. I.e. any string that was drawn for more than 3 frames in a row was rendered onto a D3DFMT_A8R8G8B8 texture/render target, and then I copied that string from the texture instead of using ID3DXFont. Strings that weren't rendered for a while were removed from the texture. That gave a serious boost. This solution, however, is tricky: monitoring empty space in the texture, removing unused strings, and defragmenting the texture isn't exactly trivial (nothing exceptionally complicated, but it is easy to make a mistake). You won't need such a complicated system unless your screen is literally covered with text.
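The caching policy described above (promote strings drawn 3+ frames in a row, evict stale ones) can be sketched without any Direct3D, tracking only the bookkeeping. The actual texture management is omitted and the names are illustrative:

```cpp
#include <string>
#include <unordered_map>

// Frame-based cache policy for rendered strings (rendering itself omitted):
// a string drawn on 3+ consecutive frames is promoted to the texture cache;
// a string unused for more than `evictAfter` frames is dropped again.
class StringCache {
public:
    explicit StringCache(int evictAfter) : evictAfter_(evictAfter) {}

    // Report that `s` was drawn this frame; returns true if it should be
    // blitted from the cache texture instead of going through ID3DXFont.
    bool draw(const std::string& s, int frame) {
        Entry& e = entries_[s];
        e.streak = (frame == e.lastFrame + 1) ? e.streak + 1 : 1;
        e.lastFrame = frame;
        if (e.streak >= 3) e.cached = true;
        return e.cached;
    }

    // Call once per frame to drop stale strings and free texture space.
    void evict(int frame) {
        for (auto it = entries_.begin(); it != entries_.end();) {
            if (frame - it->second.lastFrame > evictAfter_) it = entries_.erase(it);
            else ++it;
        }
    }

    bool contains(const std::string& s) const { return entries_.count(s) != 0; }

private:
    struct Entry { int streak = 0, lastFrame = -2; bool cached = false; };
    std::unordered_map<std::string, Entry> entries_;
    int evictAfter_;
};
```

The hard parts the answer warns about (packing strings into the texture and defragmenting it) are deliberately left out here; this only captures the promote/evict decisions.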
ID3DXFont fonts are flat, always parallel to the screen. D3DXCreateText are meshes that can be scaled and rotated.
Texture fonts are fuzzy and don't look very clear. Not good for an app that uses lots of small text.
I am writing an app that can create 500 text meshes, each mesh averaging 3,000-5,000 vertices. The text meshes are created once, then are static. I get 700 fps on a GeForce 8800.

How to implement high speed animation?

I'm trying to write an application (winforms) that can demonstrate how two oscillating colors will result in a third color. For this I need to be able to switch between two colors very fast (at >50 fps). I'm really hoping to do this in managed code.
Right now I'm drawing two small rectangular bitmaps with solid colors on top of each other. Using GDI+ DrawImage with two in-memory bitmaps in a doublebuffering enabled control doesn't cut it and results in flickering/tearing at high speeds. A timer connected to a slider triggers the switching.
Is this a sensible approach?
Will GDI and BitBLT be better?
Does WPF perform better?
What about DirectX or other technologies?
I would really appreciate feedback, TIA!
I have never had good luck with GDI for high-speed graphics, so I used DirectX. But MS has dropped support for Managed DirectX, so you may need to do this in unmanaged C++.
Just write your controller in C#, then have a very thin layer of managed C++ that just calls to the unmanaged C++ DLL that has DirectX support.
You will need to get exclusive control of the computer, so that no other application can really use the CPU; otherwise you will find that your framerate can drop, or at least not be very consistent.
If you use an older version of DirectX, such as DirectX 9.0c, that may still have support for .NET, and I used that to get a framerate for a music program of about 70 frames/second.
Flicker should be avoidable with a double-buffered approach (and by this I don't mean just setting the rendering control's DoubleBuffered property to True - ironically, this will have no effect on flicker).
Tearing can be dealt with via DirectX, but only if you synchronize your frame rate with your monitor's refresh rate. This may not be possible, especially if you need to achieve a specific frame rate (and it doesn't happen to be your monitor's refresh rate).
I don't think WPF gets around the fundamental tearing problem (but I could be wrong).
This will work with GDI, but you won't be able to control flicker, so it's kind of out of the question. DirectX may be a lot of extra fluff just for showing two non-flickering images. Perhaps SDL will work well enough? It's cross-platform and you can literally code this effect in less than 30 lines of code.
http://cs-sdl.sourceforge.net/index.php/SimpleExample

How to avoid tearing with pygame on Linux/X11

I've been playing with pygame (on Debian/Lenny).
It seems to work nicely, except for annoying tearing of blits (fullscreen or windowed mode).
I'm using the default SDL X11 driver. Googling suggests that it's a known issue with SDL that X11 provides no vsync facility (even with a display created with FULLSCREEN|DOUBLEBUF|HWSURFACE flags), and I should use the "dga" driver instead.
However, running
SDL_VIDEODRIVER=dga ./mygame.py
throws in pygame initialisation with
pygame.error: No available video device
(despite xdpyinfo showing an XFree86-DGA extension present).
So: what's the trick to getting tear-free vsynced flips? Either by getting this DGA thing working, or by some other mechanism?
The best way to keep tearing to a minimum is to keep your frame rate as close to the screen's frequency as possible. The SDL library doesn't have a vsync unless you're running OpenGL through it, so the only way is to approximate the frame rate yourself.
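Approximating the frame rate yourself usually means computing fixed deadlines and sleeping the remainder of each frame. A small sketch of that, shown in C++ since the technique is language-independent (the class name is made up; this is not an SDL or pygame API):

```cpp
#include <chrono>
#include <thread>

// Approximate a fixed frame rate in software: derive each frame's deadline
// from the previous one so timing errors don't accumulate, and sleep until it.
class FramePacer {
public:
    using clock = std::chrono::steady_clock;

    explicit FramePacer(double fps)
        : period_(std::chrono::duration_cast<clock::duration>(
              std::chrono::duration<double>(1.0 / fps))),
          next_(clock::now() + period_) {}

    // Block until the next frame deadline (returns immediately if late).
    void waitForNextFrame() {
        std::this_thread::sleep_until(next_);
        auto now = clock::now();
        next_ += period_;
        // If we overran badly, resynchronise instead of racing to catch up.
        if (next_ < now) next_ = now + period_;
    }

    clock::duration period() const { return period_; }

private:
    clock::duration period_;
    clock::time_point next_;
};
```

This only matches the average rate to the screen's frequency; without a real vsync the flip can still land mid-scanout, which is why the answers below point at OpenGL or dirty rectangles to reduce visible tearing.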
The SDL hardware double buffer isn't guaranteed, although it's nice when it works. I've seldom seen it in action.
In my experience with SDL you have to use OpenGL to completely eliminate tearing. It's a bit of an adjustment, but drawing simple 2D textures isn't all that complicated and you get a few other added bonuses that you're able to implement like rotation, scaling, blending and so on.
However, if you still want to use software rendering, I'd recommend dirty rectangle updating. It's also a bit difficult to get used to, but it saves loads of processing, which may make it easier to keep the updates up to pace, and it avoids the whole screen being torn (unless you're scrolling the whole play area or something). It also keeps the time spent drawing to the buffer to a minimum, which helps avoid blitting while the screen is updating (the cause of the tearing).
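A minimal illustration of the dirty-rectangle bookkeeping, coalescing everything into one bounding box, which is the simplest policy (names are made up; real trackers often keep a list of rectangles and merge overlaps instead):

```cpp
#include <algorithm>

struct Rect { int x, y, w, h; };

// Minimal dirty-rectangle tracker: instead of redrawing the full screen,
// collect the rectangles touched this frame and update only their union.
class DirtyRegion {
public:
    void mark(const Rect& r) {
        if (empty_) { box_ = r; empty_ = false; return; }
        int x1 = std::min(box_.x, r.x);
        int y1 = std::min(box_.y, r.y);
        int x2 = std::max(box_.x + box_.w, r.x + r.w);
        int y2 = std::max(box_.y + box_.h, r.y + r.h);
        box_ = {x1, y1, x2 - x1, y2 - y1};
    }

    // The area to pass to the blit/update call; resets for the next frame.
    Rect flush() { empty_ = true; return box_; }
    bool empty() const { return empty_; }

private:
    Rect box_{0, 0, 0, 0};
    bool empty_ = true;
};
```

In pygame terms, the flushed rectangle is what you would hand to `pygame.display.update(rect)` instead of flipping the whole surface.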
Well my eventual solution was to switch to Pyglet, which seems to support OpenGL much better than Pygame, and doesn't have any flicker problems.
Use the SCALED flag and vsync=True when calling set_mode and you should be all set (at least on any systems which actually support this; in some scenarios SDL still can't give you a VSync-capable surface but they are increasingly rare).

Manage rotated moving raster map

My application presents a (raster) moving map.
I need to be able to show the map rotated by any given angle.
The program is currently in VC++/MFC, but the problem is generic.
I have a source bitmap (CBitmap or HBITMAP) and draw it to the device context (CDC) using StretchBlt.
While this works fast and smooth for angle=0 (the user can grab the map smoothly with the mouse), this is not the case if I try to rotate the bitmap and then present it (rotating the bitmap using SetWorldTransform() or similar takes hundreds of milliseconds, which is too slow).
I think the solution is to work only with the pixels that are currently on the screen and not rotate the whole original source bitmap; this is the key.
If someone has experience with similar implementation then it might save me lots of trial and error efforts.
Thanks!
Avi.
It looks like SetWorldTransform is extremely slow:
http://www.codeguru.com/Cpp/G-M/bitmap/specialeffects/article.php/c1743
And while the other options presented in that article are faster, there are of course other better solutions like this:
http://www.codeguru.com/cpp/g-m/gdi/article.php/c3693/ (check the comments for fixes and improvements as well)
Also here are some non-Windows centric fast rotation algorithms:
http://www.ddj.com/windows/184416337?pgno=11
Note that if you guarantee power of 2 dimensions you can get significant speed improvements.
As a follow-up to my question and the answers provided, let me summarize:
I used the algorithm mentioned at http://www.codeguru.com/cpp/g-m/gdi/article.php/c3693/.
It works and provides pretty good performance and a smooth display.
There were some bugs in it that I needed to fix, and I also simplified the formulas and code in some places.
I will examine the algorithm mentioned at http://www.ddj.com/windows/184416337?pgno=11 to see if it provides some breakthrough performance that is worth adapting.
My implementation required a large source bitmap, so I needed to modify the code so that I don't rotate the whole bitmap each time, but only the relevant portion that will be displayed on the screen (otherwise performance would be unacceptable).
Avi.
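For reference, the "rotate only the visible portion" idea boils down to inverse mapping: walk the destination (screen) pixels and rotate each one back into the source bitmap, so the cost is proportional to the viewport size rather than the map size. A simplified nearest-neighbour sketch (not the codeguru code; names are illustrative):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Rotate only the pixels that end up on screen: for every destination pixel,
// walk backwards through the inverse rotation to find its source pixel.
// src/dst are row-major w*h buffers; (cx, cy) is the rotation centre in
// source coordinates that maps to the centre of the destination viewport.
void rotateViewport(const std::vector<std::uint32_t>& src, int sw, int sh,
                    std::vector<std::uint32_t>& dst, int dw, int dh,
                    double cx, double cy, double angleRad,
                    std::uint32_t background) {
    const double c = std::cos(angleRad), s = std::sin(angleRad);
    for (int y = 0; y < dh; ++y) {
        for (int x = 0; x < dw; ++x) {
            // Destination pixel relative to the viewport centre.
            double dx = x - (dw - 1) / 2.0, dy = y - (dh - 1) / 2.0;
            // Inverse rotation (by -angle) back into source coordinates.
            int sx = (int)std::lround(cx + dx * c + dy * s);
            int sy = (int)std::lround(cy - dx * s + dy * c);
            dst[y * dw + x] = (sx >= 0 && sx < sw && sy >= 0 && sy < sh)
                                  ? src[sy * sw + sx]
                                  : background;
        }
    }
}
```

A production version would use fixed-point stepping along each scanline instead of per-pixel trigonometry, and bilinear sampling instead of nearest neighbour, but the viewport-only principle is the same.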

Antialiasing alternatives

I've seen antialiasing on Windows using GDI+, Java, and also that provided by Photoshop and GIMP. Are there any other libraries out there which provide an antialiasing facility without depending on support from the host OS?
Anti-Grain Geometry (AGG) provides anti-aliased graphics in software.
As simon pointed out, the term anti-aliasing is misused/abused quite regularly so it's always helpful to know exactly what you're trying to do.
Since you mention GDI, I'll assume you're talking about maintaining nice crisp edges when you resize images, so that something like a character in a font looks clean and not pixelated when you resize it to 2x or 3x its original size. For these sorts of things I've used a technique in the past called alpha-tested magnification; you can read the whitepaper here:
http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf
When I implemented it, I used more than one plane so I could get better edges on all types of objects; the paper covers that briefly towards the end. Of all the approaches (that I've used) to maintain quality when scaling vector images, this was the easiest and highest quality. It also has the advantage of being easily implemented in hardware. From an existing API standpoint, your best bet is to use either OpenGL or Direct3D. That being said, it really only requires bilinear filtering and texture mapping to accomplish what it does, so you could roll your own (I have in the past). If you are always dealing with rectangles and only need to do scaling, it's pretty trivial, and adding rotation doesn't add that much complexity. If you do roll your own, pay particular attention to subpixel positioning (how you resolve pixel positions that do not fall on a full pixel), as this is critical to the quality and sometimes overlooked.
Hope that helps!
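The core of alpha-tested magnification is cheap to sketch: bilinearly sample a low-resolution alpha field at sub-pixel positions, then threshold at 0.5. This toy version (not Valve's implementation; names are illustrative) shows the two steps:

```cpp
#include <algorithm>
#include <vector>

// Bilinearly sample a low-res alpha field at an arbitrary sub-pixel position.
// Bilinear filtering keeps the 0.5 contour smooth between texels, which is
// what preserves a crisp edge under large magnification.
double sampleBilinear(const std::vector<double>& a, int w, int h,
                      double u, double v) {  // u, v in texel coordinates
    int x0 = (int)u, y0 = (int)v;
    int x1 = std::min(x0 + 1, w - 1), y1 = std::min(y0 + 1, h - 1);
    double fx = u - x0, fy = v - y0;
    double top = a[y0 * w + x0] * (1 - fx) + a[y0 * w + x1] * fx;
    double bot = a[y1 * w + x0] * (1 - fx) + a[y1 * w + x1] * fx;
    return top * (1 - fy) + bot * fy;
}

// The "alpha test": inside the shape iff the filtered alpha clears 0.5.
bool alphaTest(const std::vector<double>& a, int w, int h,
               double u, double v, double threshold = 0.5) {
    return sampleBilinear(a, w, h, u, v) >= threshold;
}
```

In hardware this is just a texture fetch with bilinear filtering plus an alpha-test (or discard in a shader); the paper's refinement is to store a signed-distance-like field in the alpha channel rather than raw coverage.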
There are many (often misnamed, btw, but that's a dead horse) anti-aliasing approaches that can be used. Depending on what you know about the original signal and what the intended use is, different things are most likely to give you the desired result.
"Support from the host OS" is probably most sensible if the output is through the OS display facilities, since they have the most information about what is being done to the image.
I suppose that's a long way of asking what are you actually trying to do? Many graphics libraries will provide some form of antialiasing, whether or not they'll be appropriate depends a lot on what you're trying to achieve.