I need to delete an arc inside another arc (to make that area transparent), but I can't find a function for this operation.
Codename One doesn't have an erase draw mode, but you can just draw the arc shape you want in the first place. You can also use image masking if that's what you're actually trying to accomplish.
Masking reference: http://www.codenameone.com/blog/round-at-codemotion
Graphics shape API reference:
http://www.codenameone.com/blog/codename-one-graphics
http://www.codenameone.com/blog/codename-one-graphics-part-2-drawing-an-analog-clock
Related
I would like to efficiently manipulate (on the CPU) the raw pixels that are displayed in an SDL2 window. There is a lot of information on SDL2 out there, but it's mostly for C++ and the relationship to the Rust bindings is not always clear.
I found out that from the window, I can create a canvas, which can create a texture, whose raw data can be manipulated as expected. That texture must be copied into the canvas to display it.
That works, but it seems like it should be possible to directly access a pixel buffer holding the contents of the window and mutate that. Is that the case? It would perhaps be a bit faster and would automatically resize with the window. Some other answers led me to believe that a Surface already exposes these raw pixels, but:
- Those answers always cautioned that using a Surface is inefficient – is that true even for my use case?
- I could not find out how to get the right Surface. There is Window's surface() method, but if that holds a reference to the EventPump, how can I still use the EventPump in the main loop afterwards?
I have a D3D11 device created on Windows 10 (latest edition) and an ID3D11Texture2D* in GPU memory. I want to get the contents of this Texture2D stretched and drawn onto a region of an HWND. I don't want to use vertex buffers; I want to use "something else". I don't want to copy the bits down to the CPU and then bring them back up to the GPU again. StretchDIBits or StretchBlt would be way too slow.
Let's pretend I want to use D2D1... I need a way to get my D3D11 Texture2D copied or shared over to D2D1. Then I want to use a D2D1 render target to stretch-blit it to the HWND.
I've read the MS documents, and they don't make a lot of sense.
ideas?
If you already have an ID3D11Texture2D, why aren't you just using Direct3D to render it as a texture? That's exactly what the hardware is designed to do, very fast and with high quality.
The DirectX Tool Kit SpriteBatch class is a good place to start for general sprite rendering, and it does indeed make use of vertex buffers, shaders, etc. internally.
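For instance, a minimal sketch with SpriteBatch might look like this (assuming DirectXTK is linked and that you've created an ID3D11ShaderResourceView over your existing texture; the context, SRV, and destination rectangle names here are illustrative):

```cpp
// Minimal sketch only: assumes DirectX Tool Kit (DirectXTK) is available.
#include <SpriteBatch.h>
#include <memory>

std::unique_ptr<DirectX::SpriteBatch> g_spriteBatch;

void InitSprites(ID3D11DeviceContext* context)
{
    g_spriteBatch = std::make_unique<DirectX::SpriteBatch>(context);
}

void DrawTextureToRegion(ID3D11ShaderResourceView* textureSRV, const RECT& destRect)
{
    // SpriteBatch manages the vertex buffer, shaders and render states internally;
    // the RECT overload of Draw() stretches the texture to fill that region.
    g_spriteBatch->Begin();
    g_spriteBatch->Draw(textureSRV, destRect);
    g_spriteBatch->End();
}
```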
Direct2D is really best suited to scenarios where you are drawing classic vector/presentation graphics, like circles, ellipses, arcs, etc. It's also useful as a way to use DirectWrite for high-quality, highly scalable fonts. For blitting rectangles, just use Direct3D, which is what Direct2D has to use under the covers anyhow.
Note that if you require Direct3D Hardware Feature Level 10.0 or better, you can use a common trick that relies on SV_VertexID in the vertex shader, so you can self-generate the geometry without any need for a VB or IB. See this code.
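A rough sketch of that trick (Feature Level 10.0+; it assumes you compile the HLSL below into a vertex shader and pair it with a pixel shader that samples your texture, so the names here are illustrative):

```cpp
#include <d3d11.h>

// HLSL: a fullscreen triangle generated entirely from SV_VertexID, no VB/IB.
static const char kFullscreenVS[] = R"(
struct VSOut { float4 pos : SV_Position; float2 uv : TEXCOORD0; };

VSOut main(uint id : SV_VertexID)
{
    // ids 0,1,2 map to one triangle that covers the whole viewport
    VSOut o;
    o.uv  = float2((id << 1) & 2, id & 2);
    o.pos = float4(o.uv * float2(2, -2) + float2(-1, 1), 0, 1);
    return o;
}
)";

void DrawFullscreen(ID3D11DeviceContext* context)
{
    // No input layout and no vertex/index buffers bound:
    context->IASetInputLayout(nullptr);
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    context->Draw(3, 0); // three vertices, positions generated from SV_VertexID
}
```

The destination region on the HWND is then controlled by the viewport you set before the draw call.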
I'm writing an OpenGL application on Linux using Xlib and GLX. I would like to use the mouse pointer to draw and to drag objects in the window. But no matter what method I use to draw or move graphic objects, there is always a very noticeable lag between the actual mouse pointer position (as drawn by the X server) and the positions of the objects I draw with the pointer coordinates I get from Xlib (XQueryPointer or the X events) or by reading directly from /dev/input/event*.
So my question is: what code is used by the XOrg server to actually draw the mouse pointer on the screen? That way I could use the same kind of code to place my graphics on the screen and keep the mouse pointer and the drawn objects perfectly aligned.
Even a pointer to the relevant source file(s) of XOrg would be great.
So my question is: what code is used by the XOrg server to actually draw the mouse pointer on the screen?
If everything goes right, no code at all is drawing the mouse pointer. So-called "hardware cursor" support has been around for decades. Essentially it's what used to be known as a "sprite engine": the hardware takes a small image and a pair of values (x, y) telling it where the image should appear on the screen. In every frame the graphics hardware sends to the display, the cursor image is overlaid at that position.
The graphics system is constantly updating the position values based on the input device movements.
Of course there is also graphics hardware that does not have this kind of "sprite engine". But the trick there is to update often, to update fast, and to update late.
But no matter what method I use to draw or move graphic objects, there is always a very noticeable lag between the actual mouse pointer position (as drawn by the X server) and the positions of the objects I draw with the pointer coordinates I get from Xlib (XQueryPointer or the X events) or by reading directly from /dev/input/event*.
Yes, that happens if you read the input and integrate it into your image at the wrong time. The key ingredient to minimizing latency is to draw as late as possible and to keep integrating input for as long as possible before you absolutely have to draw things to meet the V-Sync deadline. And the most important trick is not to draw what has already happened, but to draw what the state of affairs will be at the moment the picture appears on screen. I.e. you have to predict the input for the next couple of frames drawn and use that.
The Kalman filter has become the de-facto standard method for this.
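As a very rough illustration of the prediction idea (deliberately simpler than a Kalman filter: just constant-velocity extrapolation with a smoothed velocity estimate; the names and the latency figure are assumptions, not part of any X11 API):

```cpp
// Sketch: predict where the cursor will be when the next frame actually
// reaches the screen, instead of drawing where it was when last sampled.
struct CursorPredictor {
    double x = 0, y = 0;    // last sampled position (pixels)
    double vx = 0, vy = 0;  // estimated velocity (pixels/second)

    void addSample(double nx, double ny, double dt) {
        if (dt > 0) {
            // exponential smoothing of the velocity estimate
            const double a = 0.5;
            vx = a * ((nx - x) / dt) + (1 - a) * vx;
            vy = a * ((ny - y) / dt) + (1 - a) * vy;
        }
        x = nx; y = ny;
    }

    // 'latency' is your estimate of the sample-to-photon delay in seconds,
    // e.g. one or two frame times on a typical double-buffered setup.
    void predict(double latency, double& px, double& py) const {
        px = x + vx * latency;
        py = y + vy * latency;
    }
};
```

A Kalman filter improves on this by explicitly modelling measurement and process noise, so the velocity estimate jitters less on noisy input.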
In D2D, is there a way to create a gradient brush that uses a custom path geometry as its start/stop points? I can do it the trivial way, creating a different brush for each step of the path and rendering each step as a separate path with its own brush, but I'm looking for something that won't kill performance.
Thanks!
What you want is an equivalent to GDI+'s PathGradientBrush, which simply doesn't exist in Direct2D.
As a workaround, you may try using GDI+ to render what you need into a bitmap, and then draw that with Direct2D. This won't be hardware accelerated, and the bitmap sharing between GDI+ and Direct2D is a little clumsy, but it would at least work. You would create an ID2D1Bitmap with ID2D1RenderTarget::CreateBitmap(), then lock the GDI+ Bitmap, then use ID2D1Bitmap::CopyFromMemory() with the values from the GDI+ BitmapData.
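A rough sketch of that sequence (assuming a 32-bpp premultiplied-alpha pipeline and a render target named rt; error handling omitted and names illustrative):

```cpp
// Sketch: copy pixels from a locked GDI+ Bitmap into an ID2D1Bitmap.
#include <windows.h>
#include <gdiplus.h>
#include <d2d1.h>
#include <d2d1helper.h>

ID2D1Bitmap* GdiplusToD2D(Gdiplus::Bitmap& src, ID2D1RenderTarget* rt)
{
    // Lock the GDI+ bitmap as premultiplied BGRA so the memory layout
    // matches the D2D bitmap created below.
    Gdiplus::Rect rect(0, 0, src.GetWidth(), src.GetHeight());
    Gdiplus::BitmapData data = {};
    src.LockBits(&rect, Gdiplus::ImageLockModeRead, PixelFormat32bppPARGB, &data);

    ID2D1Bitmap* bitmap = nullptr;
    D2D1_BITMAP_PROPERTIES props = D2D1::BitmapProperties(
        D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
    rt->CreateBitmap(D2D1::SizeU(data.Width, data.Height), nullptr, 0, &props, &bitmap);

    // Copy the whole surface (a null destination rect means "everything").
    bitmap->CopyFromMemory(nullptr, data.Scan0, static_cast<UINT32>(data.Stride));

    src.UnlockBits(&data);
    return bitmap; // draw it with rt->DrawBitmap(...), Release() when done
}
```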
If you are using a software render target, you can also use ID2D1RenderTarget::CreateSharedBitmap(), which would let you skip the memory copying. It would require you to first wrap the GDI+ BitmapData (aka "the locked GDI+ Bitmap") with an IWICBitmapLock implementation of your own (it's not difficult, but certainly clumsy).
In my application I want to draw polygons using the Windows CreateGraphics method, and later edit a polygon by allowing the user to select its points and re-position them.
I use the mouse move event to get the new coordinates of the point being moved, and the Paint event to re-draw the polygon. The application works, but when a point is moved the movement is not smooth.
I don't know whether the mouse move or the paint event is the performance hindrance.
Can anyone make a suggestion as to how to improve this?
Make sure that you don't repaint for every mouse move. The proper way to do this is to handle all your input events, modifying the polygon data and setting a flag that a repaint needs to occur (on Windows, possibly just calling InvalidateRect() without calling UpdateWindow()).
You might not have a real performance problem - it could be that you just need to draw to an off screen DC and then copy that to your window, which will reduce flicker and make the movement seem much smoother.
If you're coding against the Win32 API, look at this for reference.
...and of course, make sure you only invalidate the area that needs to be repainted. Since you're keeping track of the polygons, invalidate only the polygon area (the rectangular union of the before and after states).
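Putting those pieces together, a rough Win32 sketch of the two handlers inside your window procedure might look like this (purely illustrative: g_points as a std::vector<POINT>, g_dragging, g_dragIndex and the PolygonBounds() helper are assumed to exist in your code; needs <windows.h> and <windowsx.h>):

```cpp
case WM_MOUSEMOVE:
    if (g_dragging) {
        RECT before = PolygonBounds(g_points);   // assumed helper: bounding rect of all points
        g_points[g_dragIndex].x = GET_X_LPARAM(lParam);
        g_points[g_dragIndex].y = GET_Y_LPARAM(lParam);
        RECT after = PolygonBounds(g_points);

        RECT dirty;
        UnionRect(&dirty, &before, &after);      // union of the before and after states
        InvalidateRect(hwnd, &dirty, FALSE);     // no UpdateWindow(): lets paints coalesce
    }
    return 0;

case WM_PAINT: {
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);

    RECT client;
    GetClientRect(hwnd, &client);

    // Draw into an off-screen DC, then copy to the window in one BitBlt
    // to reduce flicker.
    HDC memDC = CreateCompatibleDC(hdc);
    HBITMAP memBmp = CreateCompatibleBitmap(hdc, client.right, client.bottom);
    HGDIOBJ oldBmp = SelectObject(memDC, memBmp);

    FillRect(memDC, &client, (HBRUSH)(COLOR_WINDOW + 1));
    Polygon(memDC, g_points.data(), static_cast<int>(g_points.size()));

    BitBlt(hdc, 0, 0, client.right, client.bottom, memDC, 0, 0, SRCCOPY);

    SelectObject(memDC, oldBmp);
    DeleteObject(memBmp);
    DeleteDC(memDC);
    EndPaint(hwnd, &ps);
    return 0;
}
```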