I have a D3D11 device created on Windows 10 (latest build) and an ID3D11Texture2D* in GPU memory. I want to get the contents of this Texture2D stretched and drawn onto a region of an HWND. I don't want to use vertex buffers; I want to use "something else". I don't want to copy the bits down to the CPU and then bring them back up to the GPU again. StretchDIBits or StretchBlt would be way too slow.
Let's pretend I want to use D2D1... I need a way to get my D3D11 texture2D copied or shared over to D2D1. Then, I want to use a D2D1 render target to stretch blit it to the HWND.
I've read the MS documents, and they don't make a lot of sense.
ideas?
If you already have an ID3D11Texture2D, why aren't you just using Direct3D to render it? That's what the hardware is designed to do very fast with high quality.
The DirectX Tool Kit SpriteBatch class is a good place to start for general sprite rendering, and it does indeed make use of VBs, shaders, etc. internally.
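For reference, a minimal sketch of what that looks like with the DirectX Tool Kit SpriteBatch (a sketch only: it assumes you already have 'device', 'context' and 'texture2D', that the swap chain's render target and viewport for the HWND are bound, and 'regionWidth'/'regionHeight' are placeholders):

// Sketch: create a shader resource view for the existing texture once,
// then stretch-draw it into a destination rectangle every frame.
#include <SpriteBatch.h>   // DirectX Tool Kit
#include <wrl/client.h>
#include <memory>

Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> srv;
device->CreateShaderResourceView(texture2D, nullptr, srv.GetAddressOf());

auto spriteBatch = std::make_unique<DirectX::SpriteBatch>(context);

// Per frame:
RECT dest = { 0, 0, regionWidth, regionHeight };   // destination region of the HWND, in pixels
spriteBatch->Begin();
spriteBatch->Draw(srv.Get(), dest);                // GPU stretch-blit into 'dest'
spriteBatch->End();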
Direct2D is really best suited to scenarios where you are drawing classic vector/presentation graphics, like circles, ellipses, arcs, etc. It's also useful as a way to use DirectWrite for high-quality, highly scalable fonts. For blitting rectangles, just use Direct3D which is what Direct2D has to use under the covers anyhow.
Note that if you require Direct3D Hardware Feature Level 10.0 or better, you can use a common trick that relies on SV_VertexID in the vertex shader, so you can self-generate the geometry without any need for a VB or IB. See this code.
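As a rough illustration of that trick (a sketch only: shader compilation, the pixel shader that samples the texture, and render-state setup are omitted, and the variable names are placeholders):

// Sketch of the SV_VertexID trick: no vertex or index buffer is bound;
// the vertex shader derives positions and UVs from the vertex id alone.
const char* kVertexShader = R"(
    struct VSOut { float4 pos : SV_Position; float2 uv : TEXCOORD0; };
    VSOut main(uint id : SV_VertexID)
    {
        VSOut o;
        float2 uv = float2((id << 1) & 2, id & 2);       // (0,0), (2,0), (0,2)
        o.pos = float4(uv * float2(2, -2) + float2(-1, 1), 0, 1);
        o.uv  = uv;
        return o;
    }
)";

// At draw time (set the viewport to the destination region of the window
// to stretch into just that part of it):
context->IASetInputLayout(nullptr);                            // no input layout needed
context->IASetVertexBuffers(0, 0, nullptr, nullptr, nullptr);  // no VB
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
context->Draw(3, 0);                                           // one triangle covering the viewport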
Note: Yes, I know there are other ways of doing buttons in Android, but this is just an example to demonstrate my issue (the actual buttons are far, far more complex). So please don't reply offering other solutions for buttons in Android; I am looking for a solution with PaintCode...
I have been using PaintCode for drawing custom buttons for years in iOS, it works brilliantly. I want to do the same for android and have the following issue:
In PaintCode I draw a button which is basically a rounded rectangle with a radius of 20 points.
I draw a frame around it and then set the correct resizing behaviour using the springs (see screenshot).
The result is that whatever size the button is going to be (= the frame), the corners will always be nicely rounded with 20 points. Basically a nicely resizable button.
This works very well on iOS, but on Android the radius is 20 pixels, not points, resulting in a far too small radius (now with the high-res devices).
Or in general, all drawings that I make in PaintCode are too small when drawn using the draw method generated by PaintCode.
It seems that the generated drawing code does not take into account the scale of the device (as it does on iOS).
Looking at https://www.paintcodeapp.com/documentation/android section "scale", PaintCode suggests playing with the density metric in Android to perform scaling.
This does work, but it makes the generated drawing fuzzy. I guess this is because we are drawing at a lower resolution due to the scaling, so it's not a viable solution.
class Button1 @JvmOverloads constructor(
    context: Context, attrs: AttributeSet? = null, defStyleAttr: Int = 0
) : Button(context, attrs, defStyleAttr) {
    var frame = RectF()
    override fun onDraw(canvas: Canvas?) {
        super.onDraw(canvas)
        val displayDensity = resources.displayMetrics.density
        canvas?.scale(displayDensity, displayDensity)
        frame.set(0f, 0f, width.toFloat() / displayDensity, height.toFloat() / displayDensity)
        StyleKitName.drawButton1(canvas, frame)
    }
}
Any suggestions to solve this? Is this a bug in PaintCode?
I'm the developer. Sorry about the long answer.
TLDR: handle the scaling yourself, for example the way you already do. Switch the layerType of your View to software to avoid the blurry results of scaling.
First, I totally understand the confusion: on iOS it just works, and on Android you have to fiddle around with some scales. It would make much more sense if it just worked the same; I would love that, and so would other PaintCode users. Yet it's not a bug. The problem is the difference between UIKit and android.graphics.
In UIKit the distances are measured in points. That means if you draw a circle with a diameter of 40 points, it should be more or less the same size on various iOS devices. PaintCode adopted this convention, and all the numbers you see in PaintCode's user interface, like position of shapes, stroke width or radius - everything is in points. The drawing code generated by PaintCode is not only resolution independent (i.e. you can resize/scale it and it keeps its sharpness), but also display-density independent (it renders at about the same size on a retina display, a regular display and a retina HD display). And there isn't anything special about the code. It looks like this:
NSBezierPath* rectanglePath = [NSBezierPath bezierPathWithRect: NSMakeRect(0, 0, 100, 50)];
[NSColor.grayColor setFill];
[rectanglePath fill];
So the display scaling is handled by UIKit. Also the implicit scale depends on the context. If you call the drawing code within drawRect: of some UIView subclass, it takes the display-density, but if you are drawing inside a custom UIImage, it takes the density of that image. Magic.
Then we added support for Android. All the measures in android.graphics are represented in pixels. Android doesn't do any of UIKit's "display density" magic. Also, there isn't a good way to find out what the density is in the scope of the drawing code; you need access to Resources for that. So we could add that as a parameter to all the drawing methods. But what if you are not going to publish the drawing to the display, but are rather creating an image (that you are going to send to your friend or whatever)? Then you don't want the display density, but the image density.
OK, so if we were adding a parameter, it shouldn't be Resources, but the density itself as a float, and we would generate the scaling inside every drawing method. Now what if you don't really care about the density? What if all you care about is that your drawing fills some rectangle and has the best resolution possible? Actually, I think that is usually the case. Having so many different display resolutions and display densities makes the "element of one physical size fits all" approach pretty minor in my opinion. So in most cases the density parameter would be extraneous. We decided to leave the decision of how the scale should be handled to the user.
Now for the fuzziness of the scaled drawing. That's another difference between UIKit and android.graphics. All developers should understand that CoreGraphics isn't very fast when it comes to rendering large scenes with multiple objects. If you are programming performance-sensitive apps, you should probably consider using SpriteKit or Metal. The benefit of this software rendering is that you are not restricted in what you can do in CoreGraphics and you will almost always get very accurate results. Scaling is one such example: you can apply an enormous scale and the result is still crisp. If you want more HW acceleration, use a different API and handle the restrictions yourself (like how large a texture you can fit in your GPU).
Android took the other path. Its android.graphics API can work in two modes - without HW acceleration (they call it software) or with HW acceleration (they call it hardware). It's still the same API, but the hardware mode has some significant restrictions. This includes scale, blur (hence shadows), some blend modes and more.
https://developer.android.com/guide/topics/graphics/hardware-accel.html#unsupported
And they decided that every view will use the hardware mode by default if the target API level is >= 14. You can of course turn it off, and magically your scaled button will be nice and sharp.
We mention that you need to turn off hardware acceleration in the "Type of Layer" section of our documentation page: https://www.paintcodeapp.com/documentation/android
And it's also in the Android documentation: https://developer.android.com/guide/topics/graphics/hardware-accel.html#controlling
I remember having a similar issue and, after talking to the PaintCode support guys, we came up with a function to convert DP to PX. So, say, in Android those 20 points will be converted to "whatever" pixels, considering the different display densities.
Long story short, this is the summary of PaintCode support answer:
The fact that Android doesn't support device independent pixels in its drawing API is the reason for the different behaviour of iOS and Android. When using UIKit, the coordinates entered are in points, so it takes care of display density itself. When generating a bitmap in Android, the display density needs to be taken into account.
This is the function I created:
private static PointF convertDPtoPX(float widthDP, float heightDP) {
    Context context = getAppContext();
    DisplayMetrics metrics = context.getResources().getDisplayMetrics();
    float widthPX = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, widthDP, metrics);
    float heightPX = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, heightDP, metrics);
    return new PointF(widthPX, heightPX);
}
Here is an example on how to call it:
Bitmap myButton = StyleKit.imageOfMyButton(convertDPtoPX(widthDP, heightDP));
Of course in this example I'm using a bitmap of myButton (StyleKit.imageOfMyButton) due to the specifics of the project. You'd need to adjust the example to fit your needs.
You could, for instance, make that radius an external parameter in PaintCode, making each platform responsible for providing its value. Like:
// In Android, convert 20DP to 20PX based on the above function.
// In iOS, just pass 20.
StyleKitName.drawButton1(canvas, frame, radius)
I hope this helps.
In D2D, is there a way to create a gradient brush which uses a custom path geometry as its start/stop points? I can do it the trivial way, creating a different brush for each step of the path and rendering each step as a separate path with that brush, but I am looking for something that won't kill performance.
Thanks!
What you want is an equivalent to GDI+'s PathGradientBrush, which simply doesn't exist in Direct2D.
As a workaround, you may try using GDI+ to render what you need into a bitmap, and then draw that with Direct2D. This won't be hardware accelerated, and the bitmap sharing between GDI+ and Direct2D is a little clumsy, but it would at least work. You would create an ID2D1Bitmap with ID2D1RenderTarget::CreateBitmap(), then lock the GDI+ Bitmap, then use ID2D1Bitmap::CopyFromMemory() with the values from the GDI+ BitmapData.
If you are using a software render target, you can also use ID2D1RenderTarget::CreateSharedBitmap(), which would let you skip the memory copying. It would require you to first wrap the GDI+ BitmapData (aka "the locked GDI+ Bitmap") with an IWICBitmapLock implementation of your own (it's not difficult, but certainly clumsy).
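For reference, a rough sketch of the lock-and-copy path from the first workaround (names such as 'renderTarget', 'width' and 'height' are placeholders; GDI+ startup and error handling are omitted):

// Sketch: render the path gradient with GDI+ into a 32bpp premultiplied bitmap,
// then copy those pixels into an ID2D1Bitmap that Direct2D can draw (and stretch).
// Needs <gdiplus.h> and <d2d1.h>, and GdiplusStartup() must have been called.
Gdiplus::Bitmap gdiBitmap(width, height, PixelFormat32bppPARGB);
{
    Gdiplus::Graphics g(&gdiBitmap);
    // ... draw here with Gdiplus::PathGradientBrush ...
}

Gdiplus::Rect lockRect(0, 0, width, height);
Gdiplus::BitmapData data = {};
gdiBitmap.LockBits(&lockRect, Gdiplus::ImageLockModeRead, PixelFormat32bppPARGB, &data);

ID2D1Bitmap* d2dBitmap = nullptr;
renderTarget->CreateBitmap(
    D2D1::SizeU(width, height),
    D2D1::BitmapProperties(D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM,
                                             D2D1_ALPHA_MODE_PREMULTIPLIED)),
    &d2dBitmap);
d2dBitmap->CopyFromMemory(nullptr, data.Scan0, (UINT32)data.Stride);  // whole-bitmap copy

gdiBitmap.UnlockBits(&data);

// Inside BeginDraw()/EndDraw(), draw it (stretched if needed):
renderTarget->DrawBitmap(d2dBitmap, D2D1::RectF(0, 0, (float)width, (float)height));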
My application produces an "animation" in a per-pixel manner, so I need to draw the frames efficiently. I've tried different strategies/libraries with unsatisfactory results, especially at higher resolutions.
Here's what I've tried:
SDL: OK, but slow;
OpenGL: inefficient pixel operations;
Xlib: better, but still too slow;
svgalib, directfb (and other framebuffer implementations): they seem perfect, but definitely too tricky to set up for the end user.
(NOTE: I may be wrong about these assertions; if so, please correct me.)
What I need is the following:
fast pixel drawing with performance comparable to OpenGL rendering;
it should work on Linux (cross-platform as a bonus feature);
it should support double buffering and vertical synchronization;
it should be portable as far as the hardware is concerned;
it should be open source.
Can you please give me some enlightenment/ideas/suggestions?
Are your pixels sparse or dense (e.g. a bitmap)? If you are creating dense bitmaps out of pixels, then another option is to convert the bitmap into an OpenGL texture and use OpenGL APIs to render at some framerate.
The basic problem is that graphics hardware will be very different on different hardware platforms. Either you pick an abstraction layer, which slows things down, or code more closely to the type of graphics hardware present, which isn't portable.
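If the dense-bitmap route fits, here is a minimal sketch of the upload-and-draw idea with plain OpenGL (legacy immediate mode for brevity; 'width', 'height' and 'pixels' are placeholders for your per-frame RGBA buffer, and context/window creation is omitted):

// Sketch: upload the CPU-generated frame into a texture each frame
// and let the GPU stretch it over the window.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);        // allocate once

// Every frame:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);      // one bulk upload of new pixels
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);                                       // full-window quad
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
// swap buffers via your windowing library (SDL, GLUT, ...), ideally with vsync enabled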
I'm not totally sure what you're doing wrong, but it could be that you are writing pixels one at a time to the display surface.
Don't do that.
Instead, create a rendering surface in main memory, in the same format as the display surface, render to it, and then copy the whole rendered image to the display in a single operation. Modern GPUs are very slow per transaction, but can move lots of data very quickly in a single operation.
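With SDL2, for example, that pattern looks roughly like the following (a sketch, assuming 'renderer' was created with SDL_CreateRenderer and 'pixels' holds the finished ARGB frame):

// Sketch: render the frame into a main-memory buffer, then hand the whole
// buffer to the GPU once per frame via a streaming texture.
SDL_Texture* tex = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
                                     SDL_TEXTUREACCESS_STREAMING, width, height);

// Every frame:
SDL_UpdateTexture(tex, nullptr, pixels, width * 4);   // one bulk upload, pitch in bytes
SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, tex, nullptr, nullptr);      // stretch to the whole window
SDL_RenderPresent(renderer);                          // flips; can be vsynced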
It looks like you are confusing the windowing layer (SDL and Xlib) with the rendering library (OpenGL).
Just pick a windowing layer (SDL, GLUT, or Xlib if you like a challenge), activate double-buffered mode, and make sure that you get direct rendering.
What kind of graphics card do you have? Most likely it can process the pixels on the GPU. Look up how to create pixel shaders in OpenGL; pixel shaders do per-pixel processing.
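As a tiny illustration of that idea, a GLSL fragment shader that computes every pixel on the GPU (a sketch; the uniform names are placeholders, and compiling/linking it with glCreateShader/glShaderSource/glCompileShader is omitted):

// Sketch: per-pixel work done entirely in the fragment shader.
const char* kFragmentShader = R"(
    #version 120
    uniform float time;          // animation time, fed from the application
    uniform vec2  resolution;    // window size in pixels
    void main()
    {
        vec2 p = gl_FragCoord.xy / resolution;              // 0..1 across the window
        gl_FragColor = vec4(p.x, p.y, 0.5 + 0.5 * sin(time), 1.0);
    }
)";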
I used DirectDraw in C and C++ years back to draw some simple 2D graphics. I was used to the steps of creating a surface, writing to it using pointers, flipping the back-buffer, storing sprites on off-screen surfaces, and so on. So today if I want write some 2D graphics programs in C or C++, what is the way to go?
Will this same method of programming still apply or do I have to have a different understanding of the video hardware abstraction?
What libraries and tools are available on Windows and Linux?
What libraries and tools are available on Windows and Linux?
SDL, OpenGL, and Qt 4 (it is a GUI library, but it is fast/flexible enough for 2D rendering).
Will this same method of programming still apply or do I have to have a different understanding of the video hardware abstraction?
Normally you don't write data into a surface "using pointers" every frame; instead you manipulate/draw surfaces using the methods provided by the API. This is because the driver works faster with video memory than if you transfer data from system memory into video memory every frame. You can still write data into a hardware surface/texture (even every frame) if you have to, but those surfaces may need to be treated in a special way to get optimal performance. For example, in DirectX you would need to tell the driver that the surface data is going to change frequently and that you're only going to write data into the surface, never read it back. Also, in the 3D-oriented APIs (OpenGL/DirectX), rendering one surface onto another surface is somewhat of a "special case", and you may need to use "render targets" (DirectX) or "framebuffer objects" (OpenGL). This is different from DirectDraw (where, AFAIK, you could blit anything onto anything). The good thing is that with a 3D API you get an incredibly flexible way of dealing with surfaces/textures - stretching, rotating, tinting them with color, blending them together, and processing them with shaders can all be done in hardware.
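In Direct3D 9 terms, the "changes frequently, write-only" hint mentioned above looks roughly like this (a sketch; 'device', 'width' and 'height' are placeholders, error handling is omitted, and dynamic textures require driver support):

// Sketch: a dynamic, write-only texture that the CPU fills every frame.
IDirect3DTexture9* tex = nullptr;
device->CreateTexture(width, height, 1,
                      D3DUSAGE_DYNAMIC,            // "contents change frequently"
                      D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &tex, nullptr);

// Every frame:
D3DLOCKED_RECT lr;
tex->LockRect(0, &lr, nullptr, D3DLOCK_DISCARD);   // write-only, discard old contents
// ... write pixels into lr.pBits; one row is lr.Pitch bytes ...
tex->UnlockRect(0);
// ... then draw a textured quad (or use ID3DXSprite) with 'tex' ...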
Another thing is that modern 3D APIs with hardware support frequently don't operate on 8-bit paletted textures and prefer ARGB images. 8-bit surfaces with a palette can be emulated when needed, and low-level 2D APIs (SDL, DirectDraw) provide them. You can also emulate an 8-bit texture in hardware using fragment/pixel shaders.
Anyway, if you want "old school" cross-platform way of using surfaces (i.e. "write data every frame using pointers" - i.e. you need software renderer or something), SDL easily allows that. If you want higher-level, more flexible operations - Qt 4 and OpenGL are for you.
On Linux you could use OpenGL; it is not only used for 3D support but also supports 2D.
SDL is also pretty easy to use out of the box. It is also cross-platform and has a lot of plugins available to handle your needs. It interfaces nicely with OpenGL as well, should you need 3D support.
Direct2D on Windows.
EGLOutput/EGLDevice or GEM depending on the GPU driver for Linux.
Let me describe the "battlefield" of my task:
Multi-room audio/video chat with more than 1M users;
Custom Direct3D renderer;
What I need to implement is a TextOverVideo feature. The text itself goes over the network and is to be rendered on the recipient side with the Direct3D renderer. AFAIK, it is common in game development to create your own texture with letters/numbers and draw these items. Because our application must support many languages, we ought to use a standard approach. That's why I've been working with the ID3DXFont interface, but I've found it has some unsatisfying limitations.
What I've faced is a lack of scalability. E.g. if the user resizes the video window, I have to re-create the D3DXFont with a new D3DXFONT_DESC while they're doing that. I think it is unacceptable.
That is why the ONLY solution I see (given my skills) is to somehow render the text to a texture and then draw a sprite with scaling, translation, etc.
So, I'm not sure if I go into the correct direction. Please help with advice, experience, literature, sources...
Your question is a bit unclear. As I understand it, you want an easily scalable font.
I think it is unacceptable
As far as I know, this is standard behavior for fonts - even for system fonts. They aren't supposed to be easily scalable.
Possible solutions:
Use a render-target texture for rendering the text onto a texture. The font will be filtered when you scale it up too much; some people will think that it looks ugly.
Write a custom library that supports vector fonts, i.e. one that can extract the outline from a font and build text geometry from it. It will be MUCH slower than ID3DXFont (which is already slower than traditional "texture" fonts). The text will be easily scalable. With this approach you are very likely to get visible artifacts ("noise") for small text; I wouldn't use it unless you want huge letters (40+ pixels). The FreeType library may have functions for processing font outlines.
Or you could try using D3DXCreateText. This will create a 3D text mesh for ONE string. It won't be fast at all.
I'd forget about it. As long as the user is happy with overall performance, improving the font rendering routines (so their behavior looks nice to you) is not worth the effort.
--EDIT--
About the render-to-texture approach.
Even if you render to a texture, you'll still need ID3DXFont. I.e. you use ID3DXFont to render the text onto a texture, and then use the texture to blit the text onto the screen.
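A rough sketch of that flow with D3DX9 (a sketch only: it assumes ID3DXRenderToSurface for the render-to-texture step, an existing ID3DXSprite in 'sprite' and an ID3DXFont in 'font'; sizes are placeholders and error handling/cleanup is omitted):

// Sketch: draw the string once into a render-target texture,
// then blit/scale that texture with ID3DXSprite each frame.
IDirect3DTexture9* texture = nullptr;
D3DXCreateTexture(device, texWidth, texHeight, 1, D3DUSAGE_RENDERTARGET,
                  D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &texture);

ID3DXRenderToSurface* rts = nullptr;
D3DXCreateRenderToSurface(device, texWidth, texHeight, D3DFMT_A8R8G8B8,
                          FALSE, D3DFMT_UNKNOWN, &rts);

IDirect3DSurface9* surface = nullptr;
texture->GetSurfaceLevel(0, &surface);

// Render the text into the texture (only when the text changes):
rts->BeginScene(surface, nullptr);
device->Clear(0, nullptr, D3DCLEAR_TARGET, D3DCOLOR_ARGB(0, 0, 0, 0), 1.0f, 0);
RECT rc = { 0, 0, (LONG)texWidth, (LONG)texHeight };
font->DrawTextW(nullptr, L"overlay text", -1, &rc, DT_LEFT | DT_TOP,
                D3DCOLOR_ARGB(255, 255, 255, 255));
rts->EndScene(D3DX_FILTER_NONE);

// Per frame: draw the texture scaled to the current video size.
D3DXMATRIX scale;
D3DXMatrixScaling(&scale, videoWidth / (float)texWidth,
                  videoHeight / (float)texHeight, 1.0f);
sprite->Begin(D3DXSPRITE_ALPHABLEND);
sprite->SetTransform(&scale);
sprite->Draw(texture, nullptr, nullptr, nullptr, D3DCOLOR_ARGB(255, 255, 255, 255));
sprite->End();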
Because you said that performance is critical, you can delay creation of the new ID3DXFont until the user stops resizing the video. I.e. while the user is resizing, you keep using the old font but upscale it via a texture. There will be filtering, of course. Once the user stops resizing, you create the new font when you have time. You can probably do that in a separate thread, but I'm not sure about it. Or you could simply always render the text at the same resolution as the video. This way you won't have to worry about resizing it (it will still be filtered, along with the video). Some video players work this way.
A few more things about ID3DXFont. There is one problem with ID3DXFont: it is slow in situations where you need a lot of text (but you still need it, because it supports Unicode, and writing a texture font with Unicode support is a pain). Last time I worked with it I optimized things by caching commonly used strings in textures. I.e. any string that was drawn for more than 3 frames in a row was rendered onto a D3DFMT_A8R8G8B8 texture/render target, and then I copied that string from the texture instead of using ID3DXFont. Strings that weren't rendered for a while were removed from the texture. That gave a serious boost. This solution, however, is tricky: monitoring empty space in the texture, removing unused strings, and defragmenting the texture isn't exactly trivial (there is nothing exceptionally complicated, but it is easy to make a mistake). You won't need such a complicated system unless your screen is literally covered by text.
ID3DXFont fonts are flat, always parallel to the screen. D3DXCreateText creates meshes that can be scaled and rotated.
Texture fonts are fuzzy and don't look very clear. Not good for an app that uses lots of small text.
I am writing an app that can create 500 text meshes, each mesh averaging 3,000-5,000 vertices. The text meshes are created once, then are static. I get 700 fps on a GeForce 8800.