Direct2D gradient along geometry path - graphics

In D2D, is there a way to create a gradient brush that uses a custom path geometry for its start/stop points? I could do it the trivial way, creating a different brush for each step of the path and rendering each step as a separate path with its own brush, but I'm looking for something that won't kill performance.
Thanks!

What you want is an equivalent to GDI+'s PathGradient, which simply doesn't exist in Direct2D.
As a workaround, you may try using GDI+ to render what you need into a bitmap, and then draw that with Direct2D. This won't be hardware accelerated, and sharing bitmaps between GDI+ and Direct2D is a little clumsy, but it would at least work. You would create an ID2D1Bitmap with ID2D1RenderTarget::CreateBitmap(), then lock the GDI+ Bitmap, then use ID2D1Bitmap::CopyFromMemory() with the values from the GDI+ BitmapData.
If you are using a software render target, you can also use ID2D1RenderTarget::CreateSharedBitmap(), which lets you skip the memory copy. It requires you to first wrap the GDI+ BitmapData (i.e., the locked GDI+ Bitmap) in an IWICBitmapLock implementation of your own (it's not difficult, but certainly clumsy).

Related

What is the most efficient way to manipulate raw pixels in an SDL2 window in Rust?

I would like to efficiently manipulate (on the CPU) the raw pixels that are displayed in an SDL2 window. There is a lot of information on SDL2 out there, but it's mostly for C++ and the relationship to the Rust bindings is not always clear.
I found out that from the window, I can create a canvas, which can create a texture, whose raw data can be manipulated as expected. That texture must be copied into the canvas to display it.
That works, but it seems like it should be possible to directly access a pixel buffer holding the window's contents and mutate that. Is that the case? That would perhaps be a bit faster and would automatically resize with the window. Some other answers led me to believe that a Surface already exposes these raw pixels, but:
These answers always cautioned that using a Surface is inefficient – is that true even for my use case?
I could not find out how to get the right Surface. There is Window's surface method, but if that holds a reference to an EventPump, how can I use the EventPump in the main loop afterwards?

Copy D3D11 Texture2D to D2D1 Bitmap

I have a D3D11 device created on Windows 10 (latest build) and an ID3D11Texture2D* in GPU memory. I want to get the contents of this Texture2D stretched and drawn onto a region of an HWND. I don't want to use vertex buffers; I want to use "something else". I don't want to copy the bits down to the CPU and then bring them back up to the GPU again; StretchDIBits or StretchBlt would be way too slow.
Let's pretend I want to use D2D1... I need a way to get my D3D11 texture2D copied or shared over to D2D1. Then, I want to use a D2D1 render target to stretch blit it to the HWND.
I've read the MS documents, and they don't make a lot of sense.
Any ideas?
If you already have an ID3D11Texture2D, why aren't you just using Direct3D to render it? That's what the hardware is designed to do very fast, with high quality.
The DirectX Tool Kit SpriteBatch class is a good place to start for general sprite rendering, and it does indeed make use of VBs, shaders, etc. internally.
Direct2D is really best suited to scenarios where you are drawing classic vector/presentation graphics, like circles, ellipses, arcs, etc. It's also useful as a way to use DirectWrite for high-quality, highly scalable fonts. For blitting rectangles, just use Direct3D, which is what Direct2D has to use under the covers anyhow.
Note that if you require Direct3D hardware feature level 10.0 or better, you can use a common trick that relies on SV_VertexID in the vertex shader, so you can self-generate the geometry without any need for a VB or IB. See this code.
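A sketch of that SV_VertexID trick: the shader below synthesizes one oversized triangle covering the whole viewport from nothing but the vertex index, so the draw call needs no vertex buffer, index buffer, or input layout. The shader source is illustrative; the draw-time calls are shown as comments.

```cpp
#include <cstring>

// HLSL vertex shader: 3 vertices, no VB/IB/input layout bound.
static const char kFullscreenVS[] = R"(
struct VSOut { float4 pos : SV_Position; float2 uv : TEXCOORD0; };

VSOut main(uint id : SV_VertexID)
{
    VSOut o;
    // ids 0,1,2 -> uv (0,0), (2,0), (0,2): one triangle covering the screen
    o.uv  = float2((id << 1) & 2, id & 2);
    o.pos = float4(o.uv * float2(2, -2) + float2(-1, 1), 0, 1);
    return o;
}
)";

// At draw time (Direct3D 11):
//   ctx->IASetInputLayout(nullptr);
//   ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
//   ctx->Draw(3, 0);
// The pixel shader then just samples the source texture at `uv`.
```

Because the triangle overshoots the viewport, the rasterizer clips it to exactly the render-target rectangle, which is why this needs no geometry at all.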

Why are there aliasing drawings with gdi?

Why are drawings aliased with GDI, even when I don't scale them?
If I don't scale the drawing, I thought it wouldn't be aliased.
And a circle drawn as SVG is not aliased.
I guess by "sawtooth" you mean aliasing. GDI is about 30 years old; since antialiasing requires quite a lot of computing power, support for it was never added. It is technically possible to draw smooth images using GDI and some additional code, but it is better to use a newer API that supports antialiasing out of the box, such as Direct2D or at least GDI+.
Also, SVG is just an XML-based file format. You don't "draw" anything with SVG; you describe an image with it, and then it gets rendered by some rendering engine, such as Cairo. If you rendered the SVG using plain GDI, you would still get an aliased image.

OpenGLUT: Texture for a sphere: what's the correct way to do it?

I need to apply a texture to a sphere, and I want to learn a good way to do it. The methods I've read don't seem to take advantage of GLUT; they only use plain GL features. Maybe I'm trying something unavailable, but I want to call glutSolidSphere and have the texture coordinates generated. I have loaded the .bmp, but the image won't rotate when the solid sphere rotates; instead, the image (a face) always faces the screen, as if it were "floating" on the surface. How can I stick it to the solid? Thanks a bundle.
OK, I got it: instead of calling glutSolidSphere, use gluSphere, and call gluQuadricTexture. Then the face stays in its position, and we can rotate the camera view (with gluLookAt) around it and see the texture wrapping the solid.

Fast pixel drawing library

My application produces an "animation" in a per-pixel manner, so I need to draw the frames efficiently. I've tried different strategies/libraries with unsatisfactory results, especially at higher resolutions.
Here's what I've tried:
SDL: ok, but slow;
OpenGL: inefficient pixel operations;
xlib: better, but still too slow;
svgalib, directfb, (other frame buffer implementations): they seem perfect but definitely too tricky to set up for the end user.
(NOTE: I'm maybe wrong about these assertions, if it's so please correct me)
What I need is the following:
fast pixel drawing with performances comparable to OpenGL rendering;
it should work on Linux (cross-platform as a bonus feature);
it should support double buffering and vertical synchronization;
it should be portable across hardware;
it should be open source.
Can you please give me some enlightenment/ideas/suggestions?
Are your pixels sparse or dense (e.g. a bitmap)? If you are creating dense bitmaps out of pixels, then another option is to convert the bitmap into an OpenGL texture and use OpenGL APIs to render at some framerate.
The basic problem is that graphics hardware will be very different on different hardware platforms. Either you pick an abstraction layer, which slows things down, or code more closely to the type of graphics hardware present, which isn't portable.
I'm not totally sure what you're doing wrong, but it could be that you are writing pixels one at a time to the display surface.
Don't do that.
Instead, create a rendering surface in main memory, in the same format as the display surface, render into that, and then copy the whole rendered image to the display in a single operation. Modern GPUs are very slow per transaction but can move lots of data very quickly in a single operation.
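In outline, that looks like the following. The `Frame` type and the `memcpy` "present" are illustrative stand-ins for whatever back buffer and bulk-copy call (SDL_UpdateTexture, XPutImage, etc.) the chosen library provides.

```cpp
// Sketch: draw per pixel into an off-screen buffer, then hand the whole
// frame to the display in one bulk copy instead of per-pixel writes.
#include <cstdint>
#include <cstring>
#include <vector>

struct Frame {
    int w, h;
    std::vector<uint32_t> px;           // 0xAARRGGBB, row-major
    Frame(int w, int h) : w(w), h(h), px(size_t(w) * h) {}
    void set(int x, int y, uint32_t c) { px[size_t(y) * w + x] = c; }
};

// "Present": a single bulk transfer of the finished frame.
void present(const Frame& back, uint32_t* display)
{
    std::memcpy(display, back.px.data(), back.px.size() * sizeof(uint32_t));
}
```

All the cheap per-pixel work happens in ordinary CPU memory; only the finished frame crosses the bus to the display surface, once per frame.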
It looks like you are confusing the windowing layer (SDL and Xlib) with the rendering library (OpenGL).
Just pick a windowing library (SDL, GLUT, or Xlib if you like a challenge), activate double-buffered mode, and make sure you have direct rendering.
What kind of graphics card do you have? Most likely it can process the pixels on the GPU. Look up how to create pixel shaders in OpenGL; pixel shaders run per pixel.