Multiple oddities with Scheme graphics

Perhaps against my better judgement, I decided to try to get graphics working in MIT/GNU Scheme.
To get it started, I wrote
(define graphics-types (enumerate-graphics-types))
(define graphics (make-graphics-device (car graphics-types)))
which popped up a white window. Calling
(graphics-draw-point graphics .5 0)
gave the expected result: a little black pixel appeared 3/4 of the way across the window, vertically centered. However, calling
(graphics-erase-point graphics .5 0)
did nothing. Minimizing and restoring the window did erase the point, but experimentation showed that minimizing always clears the entire window.
Does anyone know what's going on?

The graphics-erase-point procedure in MIT Scheme works by changing the drawing mode to 0, calling graphics-draw-point, and then changing the drawing mode back to whatever it was before. More information on the mechanics of MIT Scheme drawing modes can be found in MIT's drawing mode documentation.
The bug appears to be in the graphics-bind-drawing-mode procedure, which graphics-erase-point uses to change the drawing mode. The easiest solution is to redefine graphics-erase-point to use graphics-set-drawing-mode instead. The resulting code looks like this:
(define (graphics-erase-point device x y)
  ;; Switch to drawing mode 0 (erase), draw the point,
  ;; then restore the default drawing mode.
  (graphics-set-drawing-mode device 0)
  (graphics-draw-point device x y)
  (graphics-set-drawing-mode device 15))
15 is the default drawing mode, which I've used here for simplicity, but it is certainly possible to revert the drawing mode intelligently (I leave that as an exercise for you).

Related

Moving a graphic over a background image

I have a new project but I'm really not sure where to start, other than I think GNAT Ada would be good. I taught myself Ada three years ago and I have managed some static graphics with GNAT but this is different.
Please bear with me, all I need is a pointer or two towards where I might start, I'm not asking for a solution. My history is in back-end languages that are now mostly obsolete, so graphics is still a bit of a challenge.
So, the project:
With a static background image (a photograph) and a moveable line with an adjustable cursor somewhere between the ends of the line. I need to rotate the line and adjust its length, as well as move it around the screen and slide the cursor along the line; I have no problem calculating the positions of each element of the line. Once in place, I need to report the position of the cursor relative to the overall length of the line. I can probably handle the reporting with what I already know, but I have no clue how to create a graphic that I can slide around over another image. In the past I have failed to detect mouse events in GNAT Ada and I am sure I will need to get on top of that; in fact, if I could, I would probably manage to control the line, but doing it over an existing image is beyond me.
If I am wrong to choose GNAT Ada for this, please suggest an alternative.
I have looked on Stack Overflow for anything similar but have found answers only for Blackberry and Java, neither of which seems relevant.
For background, this will be a useful means of measuring relative lengths of the features of insect bodies from photographs, hopefully to set up some definitive identification guides for closely-related species.
With a static background image (a photograph)
So first you need a window to put your interface in. You can get this from any GUI framework (link as given by trashgod in the comments).
and a moveable line with an adjustable cursor somewhere between the ends of the line. I need to rotate the line and adjust its length, as well as move it around the screen and slide the cursor along the line; I have no problem calculating the positions of each element of the line.
These are affine transformations. They are commonly employed in low-level graphics rendering. You can, as Zerte suggested, employ OpenGL; however, modern OpenGL has a steep learning curve for beginners.
GtkAda includes a binding to the Cairo graphics library, which supports such transformations, so you can create a window with GtkAda, give it a Cairo surface, and then render your image and line on it. Cairo has a learning curve of its own, and I have never used the Ada binding, so I cannot really judge how complex this will be.
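For a feel of what this involves at the Cairo level, here is a minimal sketch in C (the GtkAda binding mirrors these calls under Ada names). It paints a background photograph, then uses an affine transform to draw a rotated line with a cursor mark on it. The file name, coordinates, angle, and cursor position are all placeholder assumptions.

#include <cairo.h>

/* Sketch: paint a photo, then a rotated line with a cursor mark.
   "photo.png", the angle, and all coordinates are placeholders. */
static void draw_scene(cairo_t *cr, double angle_rad)
{
    /* Background photograph (assumed to exist as a PNG file). */
    cairo_surface_t *photo = cairo_image_surface_create_from_png("photo.png");
    cairo_set_source_surface(cr, photo, 0, 0);
    cairo_paint(cr);
    cairo_surface_destroy(photo);

    /* Affine transform: move the origin to the line's midpoint,
       then rotate; the line is drawn in its own local frame. */
    cairo_save(cr);
    cairo_translate(cr, 320, 240);      /* line midpoint */
    cairo_rotate(cr, angle_rad);

    cairo_set_source_rgb(cr, 1, 0, 0);  /* red line */
    cairo_set_line_width(cr, 2);
    cairo_move_to(cr, -100, 0);         /* half-length = 100 px */
    cairo_line_to(cr,  100, 0);
    cairo_stroke(cr);

    /* Cursor on the line: a dot at parameter t in [0, 1]. */
    double t = 0.3;                     /* placeholder cursor position */
    cairo_arc(cr, -100 + t * 200, 0, 4, 0, 2 * 3.14159265358979);
    cairo_fill(cr);
    cairo_restore(cr);
}

Reporting the cursor position relative to the line's length is then just the parameter t, independent of the transform.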
Another library that fully supports what you want to do is SDL, which has Ada bindings here. The difference from GtkAda is that SDL is a pure graphics drawing library, so you need to "draw" any interactive controls yourself. On the other hand, setting up a window and drawing things via SDL will be somewhat simpler than doing it via GtkAda and Cairo.
SFML, which has also been mentioned in the comments, is on the same level as SDL. I do not know it well enough to give a more informed opinion, but what I said about SDL will most probably also apply to SFML.
In the past I have failed to detect mouse events in GNAT Ada and I am sure I will need to get on top of that; in fact, if I could, I would probably manage to control the line, but doing it over an existing image is beyond me.
HID event processing is handled by whatever GUI library you use: if you use SDL, you'll get mouse events from SDL; if you use GTK, you'll get them from GTK; and so on.
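As an illustration, here is a minimal C sketch of an SDL2 event loop handling mouse input; the Ada bindings expose the same event types under Ada names. The window title, sizes, and delay are placeholder assumptions.

#include <SDL.h>

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("measure", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    int running = 1;
    while (running) {
        SDL_Event ev;
        while (SDL_PollEvent(&ev)) {
            switch (ev.type) {
            case SDL_MOUSEBUTTONDOWN:
                /* e.g. grab the line endpoint nearest the click */
                SDL_Log("click at %d,%d", ev.button.x, ev.button.y);
                break;
            case SDL_MOUSEMOTION:
                /* e.g. move the grabbed endpoint or slide the cursor */
                break;
            case SDL_QUIT:
                running = 0;
                break;
            }
        }
        SDL_Delay(16);  /* ~60 Hz; real code would redraw here */
    }
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}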

XOrg server code that draws the mouse pointer

I'm writing an OpenGL application in Linux using Xlib and GLX. I would like to use the mouse pointer to draw and to drag objects in the window. But no matter what method I use to draw or move graphic objects, there is always a very noticeable lag between the actual mouse pointer position (as drawn by the X server) and the positions of the objects I draw using the pointer coordinates I get from Xlib (XQueryPointer or the X events) or by reading directly from /dev/input/event*.
So my question is: what code is used by the XOrg server to actually draw the mouse pointer on the screen? Then I could use the same type of code to place my graphics on the screen and keep the mouse pointer and the graphic objects perfectly aligned.
Even a pointer to the relevant source file(s) of XOrg would be great.
So my question is: what code is used by the XOrg server to actually draw the mouse pointer on the screen?
If everything goes right, no code at all is drawing the mouse pointer. So-called "hardware cursor" support has been around for decades. Essentially it is what is known as a "sprite engine" in the hardware: it takes a small picture and a pair of values (x, y) for where that picture shall appear on the screen. In every frame the graphics hardware sends to the display, the cursor image is overlaid at that position.
The graphics system is constantly updating the position values based on the input device movements.
Of course there is also graphics hardware that lacks this kind of "sprite engine". The trick then is to update often, to update fast, and to update late.
But no matter what method I use to draw or move graphic objects, there is always a very noticeable lag between the actual mouse pointer position (as drawn by the X server) and the positions of the objects I draw using the pointer coordinates I get from Xlib (XQueryPointer or the X events) or by reading directly from /dev/input/event*.
Yes, that happens if you read the input and integrate it into your image at the wrong time. The key ingredient to minimizing latency is to draw as late as possible, integrating as much input as you can right up until you absolutely have to draw in order to meet the V-Sync deadline. And the most important trick is not to draw what has already happened, but to draw what the state of affairs will be at the moment the picture appears on screen. I.e. you have to predict the input for the next couple of frames drawn and use that.
The Kalman filter has become the de facto standard method for this kind of prediction.
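A full Kalman filter is beyond the scope of this answer, but the following C sketch shows the idea with a simpler constant-velocity extrapolator: estimate the pointer's velocity from recent events and draw it where it will be at display time. The smoothing factor is a tuning assumption, not a recommendation.

/* Not a Kalman filter: a minimal constant-velocity predictor that
   illustrates drawing where the pointer WILL be, not where it was. */
typedef struct {
    double x, y;    /* last known position */
    double vx, vy;  /* estimated velocity (px/s) */
    double t;       /* timestamp of last sample (s) */
} PointerState;

/* Feed every motion event into the estimator. */
static void pointer_update(PointerState *s, double x, double y, double t)
{
    double dt = t - s->t;
    if (dt > 0) {
        const double alpha = 0.5;  /* smoothing factor, to be tuned */
        s->vx = alpha * ((x - s->x) / dt) + (1 - alpha) * s->vx;
        s->vy = alpha * ((y - s->y) / dt) + (1 - alpha) * s->vy;
    }
    s->x = x; s->y = y; s->t = t;
}

/* Extrapolate to t_display, e.g. the next V-Sync plus scanout time. */
static void pointer_predict(const PointerState *s, double t_display,
                            double *px, double *py)
{
    double dt = t_display - s->t;
    *px = s->x + s->vx * dt;
    *py = s->y + s->vy * dt;
}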

OpenGL rendering - a window out of the screen

When I draw a triangle and part (or all) of that primitive lies outside the viewing volume, OpenGL performs clipping (before rasterization); that is described, for example, here: link. What happens when part of the window is placed outside the screen (monitor)? What happens if I use a compositing window manager (on Linux, for example Compiz) and the whole OpenGL window is placed on a virtual desktop (for example a wall of the cube) which is not visible? What happens in that OpenGL application? Is there any GPU usage? What about redirecting the content of that window to an offscreen pixmap?
When I draw a triangle and part (or all) of that primitive lies outside the viewing volume, OpenGL performs clipping (before rasterization).
Clipping is a geometrical operation. When it comes to rasterization, everything happens on the pixel level.
It all comes down to Pixel Ownership.
In a plain, non-composited windowing system, all windows share the same screen framebuffer. Each window is, well, a window (offset + size) into the screen framebuffer. When things get drawn to a window, with OpenGL or otherwise, every pixel is tested for whether it actually belongs to that window; if some other (visible) window is logically in front, the pixel ownership test fails for those pixels and nothing gets drawn there (so that the window in front doesn't get overdrawn). That's why, in a plain, uncomposited environment, you can "screenshot" windows overlaying your OpenGL window with glReadPixels: that part of the framebuffer handed to OpenGL effectively belongs to another window, but OpenGL doesn't know this.
Similarly, if a window is moved partially or completely off screen, the off-screen pixels will fail the pixel ownership test and nothing gets drawn there.
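As a small illustration of the read-back just mentioned, here is a hedged C sketch that reads the window's on-screen pixels with glReadPixels; the dimensions are placeholders, and a real program would need a current GL context.

#include <GL/gl.h>
#include <stdlib.h>

/* Read back the window's front buffer. In a non-composited setup,
   pixels covered by an overlapping window fail the ownership test,
   so the buffer may come back containing that window's content. */
unsigned char *read_back(int width, int height)
{
    unsigned char *buf = malloc((size_t)width * height * 4);
    glReadBuffer(GL_FRONT);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buf);
    return buf;
}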
What happens if I use a compositing window manager
Then every window has its very own framebuffer, which it completely owns, and the pixel ownership tests never fail. The window keeps rendering into its private buffer regardless of visibility, and the compositor decides whether that buffer ever reaches the screen.
What happens in that OpenGL application?
To OpenGL it looks like all the pixels pass the ownership test when rasterizing. That's it.
Is there any GPU usage?
Pixel ownership testing is so important that even long before there were GPUs, the first generation of graphics cards had the functionality required to implement it. The function is simple and hardwired, so there is no difference in that regard.
However, the more pixels fail the test, i.e. are not being touched, the less work the GPU has to do in the rasterization stage, so rendering throughput actually increases if the window is obscured or partially moved off-screen in a non-composited environment.

SDL library: does SDL_UpdateRect() really need to be called for an SDL_HWSURFACE?

I'm a newbie to SDL and I've read about a dozen introductory tutorials so far. I'm a bit puzzled about the difference between hardware and software surfaces (i.e. the SDL_HWSURFACE and SDL_SWSURFACE flags in the call to SDL_SetVideoMode()), and how SDL_UpdateRect() behaves for each type of surface. My understanding is that for a hardware surface, there is no need to call SDL_UpdateRect() because one is drawing directly to the display screen. However, the example in this Wikibooks tutorial (http://en.wikibooks.org/wiki/SDL_%28Simple_DirectMedia_Layer%29) shows otherwise: it calls SDL_UpdateRect() on a hardware surface.
I'm a bit puzzled about the difference between hardware and software surfaces (i.e. the SDL_HWSURFACE and SDL_SWSURFACE flags in the call to SDL_SetVideoMode())
In the SDL documentation, AFAIK, they simply state that software surfaces are stored in system memory (your computer's RAM) and hardware surfaces are stored in video memory (your GPU's RAM), and that hardware surfaces may take advantage of hardware acceleration (see below on SDL_Flip).
Though the documentation doesn't say much about it, there are differences; e.g., I know that for alpha blending, software surfaces perform better than hardware ones. I have also heard that software surfaces are better for direct pixel access, but I can't confirm it.
and how SDL_UpdateRect() behaves for each type of surface
The behavior is the same for both: the function updates the given rectangle of the given surface. Implementation-wise there may be differences, but, again, the documentation does not say anything about that.
For SDL_Flip, though, it is a different story: when used with a hardware surface, it attempts to swap the video buffers (only possible if the hardware supports double buffering; remember to pass the SDL_DOUBLEBUF flag to SDL_SetVideoMode). If the hardware does not support double buffering, or if it is a software surface, the SDL_Flip call is equivalent to updating the entire surface area with SDL_UpdateRect (SDL_UpdateRect(screen, 0, 0, 0, 0)).
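To make that concrete, here is a minimal SDL 1.2 sketch in C that requests a double-buffered hardware surface and presents a frame with SDL_Flip; the resolution, color, and delay are placeholders.

#include <SDL/SDL.h>

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32,
                                           SDL_HWSURFACE | SDL_DOUBLEBUF);
    SDL_Rect box = { 100, 100, 64, 64 };
    SDL_FillRect(screen, &box, SDL_MapRGB(screen->format, 255, 0, 0));

    /* With SDL_DOUBLEBUF this swaps the buffers; without it, SDL
       falls back to SDL_UpdateRect(screen, 0, 0, 0, 0). */
    SDL_Flip(screen);

    SDL_Delay(2000);
    SDL_Quit();
    return 0;
}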
My understanding is that for a hardware surface, there is no need to call SDL_UpdateRect() because one is drawing directly to the display screen.
No, you shall call it just as with software surfaces.
Also note that the name chosen for the surface parameter of the SDL_UpdateRect and SDL_Flip functions can be a little misleading: the parameter is named screen, but it can be any SDL_Surface, not just the surface that represents the user's screen. Most of the time (at least in simple applications) you will be blitting only to the screen surface, but it makes sense (and is sometimes necessary) to blit to other surfaces that are not the user's screen.
SDL_UpdateRect() has nothing to do with the surface type (don't use it when using SDL with OpenGL). You should call it whenever you have to update a part (or all) of an SDL_Surface.
In fact, every time you flip a surface, an SDL_UpdateRect(screen, 0, 0, 0, 0) is performed for that surface.

Get mouse deltas under linux (xorg)

Is there a convenient way to get mouse deltas (e.g. mickeys) under X/linux? I know that I could read from /dev/input/mice, but that requires root access and seems a bit too low-level for me.
If this is for a game, i.e. an application with an actual X window, the typical approach used to be:
1. Grab the mouse, so all mouse input goes to your window.
2. Warp the mouse pointer to the center of your window, to give the maximum amount of space to move.
3. On each mouse motion event, subtract the window center from the reported position; this gives you a "delta event".
4. Go to 2.
I write "used to be" because there might be better ways to solve this now, haven't looked into it for a while.
This of course won't give you a resolution higher than what X reports to applications, i.e. pixels. If you're after sub-pixel reporting, I think you need to go lower, perhaps reading the device directly as you suggest.
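For reference, here is a minimal C sketch of the grab-and-warp loop described above, using plain Xlib. Error handling is omitted and the window size is a placeholder; note that the re-centering warp itself generates a motion event, which the zero-delta check skips.

#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0,
                                     640, 480, 0, 0, BlackPixel(dpy, scr));
    XSelectInput(dpy, win, PointerMotionMask);
    XMapWindow(dpy, win);
    XFlush(dpy);

    int cx = 320, cy = 240;  /* window center */
    XGrabPointer(dpy, win, True, PointerMotionMask, GrabModeAsync,
                 GrabModeAsync, win, None, CurrentTime);
    XWarpPointer(dpy, None, win, 0, 0, 0, 0, cx, cy);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == MotionNotify) {
            int dx = ev.xmotion.x - cx;
            int dy = ev.xmotion.y - cy;
            if (dx != 0 || dy != 0) {
                printf("delta: %d %d\n", dx, dy);
                /* Re-center so there is always room to move. */
                XWarpPointer(dpy, None, win, 0, 0, 0, 0, cx, cy);
            }
        }
    }
}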
