Drawing Vector Graphics Faster

In my application I want to draw polygons using the Windows Forms CreateGraphics method and later edit a polygon by letting the user select its points and reposition them.
I use the mouse move event to get the new coordinates of the point being moved, and the Paint event to redraw the polygon. The application works, but when a point is moved the movement is not smooth.
I don't know whether the mouse move or the paint event is the performance bottleneck.
Can anyone make a suggestion as to how to improve this?

Make sure that you don't repaint for every mouse move. The proper way to do this is to handle all your input events, modifying the polygon data and setting a flag that a repaint needs to occur (on Windows, possibly just calling InvalidateRect() without calling UpdateWindow()).

You might not have a real performance problem - it could be that you just need to draw to an off-screen DC and then copy that to your window, which will reduce flicker and make the movement seem much smoother.
If you're coding against the Win32 API, look at this for reference.
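In Windows Forms specifically, the off-screen buffer is built in; a minimal sketch, assuming a custom control class (here called PolygonEditor):

    public PolygonEditor()
    {
        // Paint to a back buffer and blit it in one step, which removes flicker.
        SetStyle(ControlStyles.AllPaintingInWmPaint |
                 ControlStyles.UserPaint |
                 ControlStyles.OptimizedDoubleBuffer, true);
    }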

...and of course, make sure you only invalidate the area that needs to be repainted. Since you're keeping track of the polygons, invalidate only the polygon area (the rectangular union of the before and after states).
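Putting both suggestions together, here is a minimal WinForms sketch, assuming the polygon is stored in a Point[] field and dragIndex marks the point being dragged (both hypothetical names):

    Point[] points;        // the polygon being edited
    int dragIndex = -1;    // index of the point being dragged, -1 if none

    protected override void OnMouseMove(MouseEventArgs e)
    {
        if (dragIndex < 0) return;

        Rectangle before = PolygonBounds();
        points[dragIndex] = e.Location;      // only update the data here
        Rectangle after = PolygonBounds();

        // Schedule a repaint of just the union of the old and new areas;
        // never draw from the mouse handler itself.
        Rectangle dirty = Rectangle.Union(before, after);
        dirty.Inflate(2, 2);                 // allow for the pen width
        Invalidate(dirty);
    }

    Rectangle PolygonBounds()
    {
        using (var path = new System.Drawing.Drawing2D.GraphicsPath())
        {
            path.AddPolygon(points);
            return Rectangle.Ceiling(path.GetBounds());
        }
    }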

Related

Two canvases on top of each with both 'visible' at the same time

To be quite specific in what I'm wanting to do...
I have a scrollable map on a canvas, with zoom in/out capabilities. I want to place a cross hair/bullseye on top of the map so I know exactly where I am going to be zooming in/out. I realize I could be more flexible and just zoom in/out based on where the mouse pointer is, but given the magnitude of the project and the way it keeps evolving, I think I had better plan otherwise.
I'm thinking I would have to have two canvases on the screen to pull off what I'm wanting to do. That shouldn't be a problem. The problem... is it possible to make the top canvas trans-whatever (is that parent or lucent... aka see-through? I can never remember which is which, LOL :)) while still being able to see the cross hairs placed at the center of the top canvas? I don't think it can be done with only one canvas, but I might be wrong.
Yes, this is a bit of a tricky question.
No, it is not possible to have a transparent canvas.
If all you need is a crosshair, just draw one on the canvas. You can hide it temporarily while you zoom so that it doesn't get zoomed along with everything else.
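For example, in a WinForms-style canvas the crosshair can simply be drawn last in the paint handler and skipped while a zoom is in progress (DrawMap and isZooming are hypothetical placeholders):

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        DrawMap(e.Graphics);              // whatever draws the scrollable map
        if (!isZooming)                   // hide the crosshair during zooms
        {
            int cx = ClientSize.Width / 2, cy = ClientSize.Height / 2;
            using (var pen = new Pen(Color.Red))
            {
                e.Graphics.DrawLine(pen, cx - 10, cy, cx + 10, cy);
                e.Graphics.DrawLine(pen, cx, cy - 10, cx, cy + 10);
            }
        }
    }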

XOrg server code that draws the mouse pointer

I'm writing an OpenGL application in Linux using Xlib and GLX. I would like to use the mouse pointer to draw and to drag objects in the window. But no matter what method I use to draw or move graphic objects, there is always a very noticeable lag between the actual mouse pointer position (as drawn by the X server) and the positions of the objects I draw with the pointer coordinates I get from Xlib (XQueryPointer or the X events) or by reading directly from /dev/input/event*
So my question is: what code is used by the XOrg server to actually draw the mouse pointer on the screen? So I could use the same type of code to place the graphics on the screen and have the mouse pointer and the graphic objects positions always perfectly aligned.
Even a pointer to the relevant source file(s) of XOrg would be great.
So my question is: what code is used by the XOrg server to actually draw the mouse pointer on the screen?
If everything goes right, no code at all is drawing the mouse pointer. So-called "hardware cursor" support has been around for decades. Essentially it's what's known as a "sprite engine" in the hardware: it takes some small picture and a pair of values (x, y) for where it shall appear on the screen. In every frame the graphics hardware sends to the display, the cursor image is overlaid at that position.
The graphics system is constantly updating the position values based on the input device movements.
Of course there is also graphics hardware that does not have this kind of "sprite engine". The trick there is to update often, to update fast, and to update late.
But no matter what method I use to draw or move graphic objects, there is always a very noticeable lag between the actual mouse pointer position (as drawn by the X server) and the positions of the objects I draw with the pointer coordinates I get from Xlib (XQueryPointer or the X events) or by reading directly from /dev/input/event*
Yes, that happens if you read the input and integrate it into your image at the wrong time. The key ingredient to minimizing latency is to draw as late as possible, integrating as much input as possible, before you absolutely have to draw to meet the V-Sync deadline. And the most important trick is not to draw what happened in the past, but to draw what the state of affairs will be at the moment the picture appears on screen. I.e., you have to predict the input for the next couple of frames drawn and use that.
The Kalman filter has become the de-facto standard method for this.
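Even a much simpler stand-in shows the idea: estimate the pointer's velocity from the last two samples and extrapolate a frame or two ahead, drawing the predicted position instead of the last measured one. A minimal constant-velocity sketch (all names illustrative; written here in C#, but the math is the same anywhere):

    struct PointerSample
    {
        public double X, Y;   // position in pixels
        public double T;      // timestamp in seconds
    }

    static class PointerPrediction
    {
        // Extrapolate the pointer position `lead` seconds into the future,
        // e.g. lead = 2.0 / 60.0 to predict two frames ahead at 60 Hz.
        public static (double X, double Y) Predict(PointerSample prev, PointerSample curr, double lead)
        {
            double dt = curr.T - prev.T;
            if (dt <= 0) return (curr.X, curr.Y);   // no usable velocity estimate
            double vx = (curr.X - prev.X) / dt;
            double vy = (curr.Y - prev.Y) / dt;
            return (curr.X + vx * lead, curr.Y + vy * lead);
        }
    }

A Kalman filter improves on this by weighting the prediction against measurement noise, so the cursor doesn't overshoot on jittery input.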

mouse movements for background thread created windows forms

I am wondering if it is possible to
1) Host a Windows Form created on a background thread inside a Windows Form created on the application's main thread?
or 2) Separate the thread which processes mouse movements from the thread which paints a Windows Forms application?
I know these questions sound very crazy, but I have run into a peculiar problem and would welcome advice on how to side-step it, though I do not think we can easily part from Windows Forms and move to a different UI technology.
Our software team writes plugins for a third-party Windows Forms application. We provide a root main form which hosts a bunch of our user controls, using their API, and they host that form in their own Windows Form. Currently all user interfaces are created on the application's main thread, so they play nicely with each other. Many of the user controls we provide contain System.Windows.Forms.DataVisualization charts, which are painted live from data the application is sourcing.
The problem is that when the user moves the mouse around erratically, the display stops updating, because all the painting of the charts (GDI+) happens on the main thread (we use background threads and the TPL for sourcing and computing data, but painting is done on the main thread). So I am wondering if it is possible to make all the GDI+ painting of these charts occur on a thread other than the application's main thread, so that painting continues and we still receive mouse movements and clicks for user interaction, but erratic mouse movements cannot flood the message queue and stall the GDI+ painting of the user controls.
Any help, especially pointers to relevant APIs or articles demonstrating techniques, would be much appreciated.
Thank you.
It depends on the cause of the slowdown when drawing.
First, you should optimize your drawing routines: maybe you are painting the entire form/control on each call to OnPaint? If that's the case, redraw only the invalidated area; that will speed things up a lot.
Otherwise, if you still need to do the drawing on another thread, you can't do it directly (the UI can only be modified on the main thread), but you can use a bitmap as an off-screen buffer and redraw from it.
To do that, when your control is created, also create a bitmap of your control's size (and take care of resizes). Then, whenever your control's appearance must change, draw it to the bitmap, which you can do on another thread, and invalidate the control's updated area.
Finally, in your control's OnPaint, just blit from that bitmap to your control, and only the invalidated area. This call will still be made on the main thread, but I assure you any machine today can blit very large images in milliseconds.
As an example, let's suppose you have a control where you draw a gradient background and a circle.
Usually, you paint directly to the control's surface through a Graphics object each time you want to update something, let's say the background; so, to change it, you draw the full background and then your circle over it.
To do that in the background, having created a bitmap as an off-screen buffer, you don't draw to the surface; you draw to the bitmap on another thread (obtaining a Graphics object from it) and then invalidate the updated area of the control. When you invalidate the area, OnPaint will be called, and there you can blit from the bitmap to the control's surface.
In this step you gained little or no speed, but let's expand our example. Suppose you are drawing a very complex control, with lots of calls to DrawImage, DrawEllipse, etc.
When the mouse moves over the control, small areas get invalidated, and on each OnPaint call you would draw all the "layers" composing the invalidated area; that can be a lot of drawing if, as we said, the control is very complex.
But if you drew your control's appearance to a bitmap, each OnPaint call just blits the corresponding area from the bitmap to the control; as you can see, you have reduced lots of draw calls to a single blit.
Hope it clarifies the idea.
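A rough sketch of that gradient-and-circle control, with the expensive GDI+ work done off the UI thread (resize and error handling omitted; all names are illustrative):

    using System;
    using System.Drawing;
    using System.Drawing.Drawing2D;
    using System.Windows.Forms;

    class BufferedChart : Control
    {
        Bitmap backBuffer;
        readonly object bufferLock = new object();

        public BufferedChart()
        {
            SetStyle(ControlStyles.AllPaintingInWmPaint | ControlStyles.UserPaint, true);
            backBuffer = new Bitmap(Math.Max(1, Width), Math.Max(1, Height));
        }

        // Call this from a background thread whenever the data changes.
        public void RenderToBuffer()
        {
            var area = new Rectangle(Point.Empty, backBuffer.Size);
            lock (bufferLock)                   // GDI+ bitmaps are not thread-safe
            using (var g = Graphics.FromImage(backBuffer))
            using (var gradient = new LinearGradientBrush(area, Color.White, Color.LightBlue, 90f))
            {
                g.FillRectangle(gradient, area);              // the gradient background...
                g.FillEllipse(Brushes.Red, 10, 10, 50, 50);   // ...and the circle over it
            }
            if (IsHandleCreated)                // marshal the invalidate to the UI thread
                BeginInvoke((Action)(() => Invalidate()));
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            // Still runs on the UI thread, but only blits the invalidated
            // area of the already-rendered bitmap, which is very cheap.
            lock (bufferLock)
                e.Graphics.DrawImage(backBuffer, e.ClipRectangle, e.ClipRectangle, GraphicsUnit.Pixel);
        }
    }

The background thread (for example, Task.Run(() => chart.RenderToBuffer())) now absorbs the drawing cost, and a flooded message queue only delays the cheap blit, not the rendering itself.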

Is a drag-and-drop interface simply infeasible in Pygame?

I'm building a simple RPG using Pygame and would like to implement a drag-and-drop inventory. However, even with the consideration of blitting a separate surface, it seems that the entire screen will need to be recalculated every single time the user drags an item around. Would it be best to allow a limited range of motion, or is it simply not feasible to implement such an interface?
Redrawing most or all of the screen is a very normal thing, across all windowing systems. It is rarely an issue, since most objects on screen can be drawn quickly.
To make this practical, it's necessary to organize all of the game objects that have to be drawn in such a way that they can be quickly found and drawn in the right order. This often means that objects of a particular type are grouped into some sort of layer. The drawing code can go through each layer and, for each object in each layer, ask the object to draw itself. If a particular layer is costly to draw, because it's got a lot of objects, it can store a prerendered surface and blit that instead.
A really simple hack to get a similar effect is to capture the screen at the start of a drag to a surface, and then blit that every frame instead of the whole game. This obviously only makes sense in a game where dragging also means that the rest of the game is effectively paused.
There are many GUI examples on pygame.org, as well as libraries for GUIs.

How do I project lines dynamically on to 3D terrain?

I'm working on a game in XNA for Xbox 360. The game has 3D terrain with a collection of static objects that are connected by a graph of links. I want to draw the links connecting the objects as lines projected on to the terrain. I also want to be able to change the colors etc. of links as players move their selection around, though I don't need the links to move. However, I'm running into issues making this work correctly and efficiently.
Some ideas I've had are:
1) Render quads to a separate render target, and use the texture as an overlay on top of the terrain. I currently have this working, generating the texture only for the area currently visible to the camera to minimize aliasing. However, I'm still getting aliasing issues -- the lines look jaggy, and the game chugs frequently when moving the camera. EDIT: it chugs all the time; I just don't have a frame-rate counter on Xbox, so I only notice it when things move.
2) Bake the lines into a texture ahead of time. This could increase performance, but makes the aliasing issue worse. Also, it doesn't let me dynamically change the properties of the lines without much munging.
3) Make geometry that matches the shape of the terrain by tessellating the line-quads over the terrain. This option seems like it could help, but I'm unsure if I should spend time trying it out if there's an easier way.
Is there some magical way to do this that I haven't thought of? Is one of these paths the best when done correctly?
Your 1) is a fairly good solution. You can reduce the jagginess by filtering -- first, make sure to use bilinear sampling when using the overlay. Then, try blurring the overlay after drawing it but before using it; if you choose a proper filter, it will remove the aliasing.
If it's taking too much time to render the overlay, try reducing its resolution. Without the antialiasing filter, that would just make it jaggier, but with a good filter, it might even look better.
I don't know why the game would chug only when moving the camera. Remember, you should have a separate camera for the overlay -- orthogonal, and pointing down onto the terrain.
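For illustration, a hypothetical XNA 4.0 fragment of approach 1) with a reduced-resolution target and bilinear sampling; DrawLinkQuads is a placeholder, and in the real game you would sample overlayTarget from the terrain shader rather than drawing it with SpriteBatch:

    RenderTarget2D overlayTarget;

    void CreateOverlayTarget()
    {
        overlayTarget = new RenderTarget2D(GraphicsDevice,
            GraphicsDevice.Viewport.Width / 2,    // half resolution cuts fill cost
            GraphicsDevice.Viewport.Height / 2);
    }

    void DrawOverlay(SpriteBatch spriteBatch)
    {
        GraphicsDevice.SetRenderTarget(overlayTarget);
        GraphicsDevice.Clear(Color.Transparent);
        DrawLinkQuads();                          // placeholder: render the line quads
        GraphicsDevice.SetRenderTarget(null);

        // LinearClamp gives the bilinear sampling suggested above.
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                          SamplerState.LinearClamp, DepthStencilState.None,
                          RasterizerState.CullNone);
        spriteBatch.Draw(overlayTarget, GraphicsDevice.Viewport.Bounds, Color.White);
        spriteBatch.End();
    }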
Does XNA have a shadowing library? If so, you could just pretend the lines are shadows.
