XOrg server code that draws the mouse pointer - linux

I'm writing an OpenGL application in Linux using Xlib and GLX. I would like to use the mouse pointer to draw and to drag objects in the window. But no matter what method I use to draw or move graphic objects, there is always a very noticeable lag between the actual mouse pointer position (as drawn by the X server) and the positions of the objects I draw with the pointer coordinates I get from Xlib (XQueryPointer or the X events) or by reading directly from /dev/input/event*.
So my question is: what code is used by the XOrg server to actually draw the mouse pointer on the screen? That way I could use the same type of code to place the graphics on the screen and have the mouse pointer and the graphic objects always perfectly aligned.
Even a pointer to the relevant source file(s) of XOrg would be great.

So my question is: what code is used by the XOrg server to actually draw the mouse pointer on the screen?
If everything goes right, no code at all draws the mouse pointer. So-called "hardware cursor" support has been around for decades. Essentially it's what's known as a "sprite engine" in the hardware: it takes a small image and a pair of values (x, y) saying where it shall appear on the screen. In every frame the graphics hardware sends to the display, the cursor image is overlaid at that position.
The graphics system is constantly updating the position values based on the input device movements.
Of course there is also graphics hardware that does not have this kind of "sprite engine". In that case the trick is to update often, to update fast, and to update late.
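You can actually see that sprite from a client: the XFixes extension will hand you the current cursor image, its hotspot and its position. A minimal sketch, assuming libXfixes is available (build with something like cc cursor_info.c -lX11 -lXfixes):

/* Sketch: fetch the cursor sprite the server/hardware is compositing. */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xfixes.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int ev_base, err_base;
    if (!dpy || !XFixesQueryExtension(dpy, &ev_base, &err_base))
        return 1;

    XFixesCursorImage *cur = XFixesGetCursorImage(dpy);
    if (cur) {
        /* The cursor really is just a small image plus a position. */
        printf("cursor %dx%d, hotspot (%d,%d), on screen at (%d,%d)\n",
               cur->width, cur->height, cur->xhot, cur->yhot, cur->x, cur->y);
        XFree(cur);
    }
    XCloseDisplay(dpy);
    return 0;
}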
But no matter what method I use to draw or move graphic objects, there is always a very noticeable lag between the actual mouse pointer position (as drawn by the X server) and the positions of the objects I draw with the pointer coordinates I get from Xlib (XQueryPointer or the X events) or by reading directly from /dev/input/event*.
Yes, that happens if you read it and integrate it into your image at the wrong time. The key ingredient to minimizing latency is to draw as late as possible and to integrate as much input for as long as possible before you absolutely have to draw things to meet the V-Sync deadline. And the most important trick is not to draw what happened in the past, but what the state of affairs will be at the moment the picture appears on the screen. I.e. you have to predict the input for the next couple of frames drawn and use that.
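As a rough sketch of the "draw late" part (dpy, win and draw_scene are assumptions standing in for your own Xlib/GLX setup and drawing code):

/* Sketch: sample the pointer as late as possible, immediately before
   drawing and swapping. dpy/win are an existing Xlib/GLX display and
   window; draw_scene() stands for your own rendering code. */
#include <X11/Xlib.h>
#include <GL/glx.h>

extern void draw_scene(int x, int y);   /* hypothetical: your drawing code */

void render_frame(Display *dpy, Window win)
{
    XEvent ev;
    Window root_ret, child_ret;
    int root_x, root_y, win_x, win_y;
    unsigned int mask;

    /* Drain queued events first so stale motion events don't pile up. */
    while (XPending(dpy))
        XNextEvent(dpy, &ev);

    /* Read the freshest pointer position the server knows about... */
    XQueryPointer(dpy, win, &root_ret, &child_ret,
                  &root_x, &root_y, &win_x, &win_y, &mask);

    /* ...and use it immediately, so as little time as possible passes
       between sampling the input and the buffer swap. */
    draw_scene(win_x, win_y);
    glXSwapBuffers(dpy, win);
}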
The Kalman filter has become the de-facto standard method for this.
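For illustration, here is a minimal per-axis constant-velocity Kalman filter in C (my own sketch, not code from X or any particular library); feed it the measured pointer coordinate every frame and extrapolate a frame or two ahead when drawing:

/* Sketch: 1D constant-velocity Kalman filter for predicting a pointer
   coordinate a short time into the future. Run one instance per axis. */
typedef struct {
    double x[2];     /* state: position, velocity */
    double P[2][2];  /* state covariance */
    double q;        /* process noise */
    double r;        /* measurement noise (pixels^2) */
} Kalman1D;

void kalman_init(Kalman1D *k, double pos, double q, double r)
{
    k->x[0] = pos;  k->x[1] = 0.0;
    k->P[0][0] = k->P[1][1] = 1.0;
    k->P[0][1] = k->P[1][0] = 0.0;
    k->q = q;  k->r = r;
}

/* One predict/update cycle: dt is the frame time, z the measured position. */
void kalman_step(Kalman1D *k, double dt, double z)
{
    /* Predict: x = F x, P = F P F^T + Q, with F = [[1, dt], [0, 1]]. */
    double x0  = k->x[0] + dt * k->x[1];
    double x1  = k->x[1];
    double P00 = k->P[0][0] + dt * (k->P[0][1] + k->P[1][0])
               + dt * dt * k->P[1][1] + k->q * dt;
    double P01 = k->P[0][1] + dt * k->P[1][1];
    double P10 = k->P[1][0] + dt * k->P[1][1];
    double P11 = k->P[1][1] + k->q;

    /* Update with measurement z (H = [1, 0]). */
    double y  = z - x0;           /* innovation */
    double S  = P00 + k->r;       /* innovation covariance */
    double K0 = P00 / S;
    double K1 = P10 / S;

    k->x[0] = x0 + K0 * y;
    k->x[1] = x1 + K1 * y;
    k->P[0][0] = (1.0 - K0) * P00;
    k->P[0][1] = (1.0 - K0) * P01;
    k->P[1][0] = P10 - K1 * P00;
    k->P[1][1] = P11 - K1 * P01;
}

/* Extrapolate the filtered state 'lookahead' seconds into the future. */
double kalman_predict(const Kalman1D *k, double lookahead)
{
    return k->x[0] + lookahead * k->x[1];
}

Typical use: one filter per axis, kalman_step(&kx, dt, measured_x) once per frame, then draw at kalman_predict(&kx, latency), where latency is your estimate of the time until the frame actually hits the screen.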

Related

Is it possible to make camera tracking without moving the whole world?

I don't really want to move the whole world in my game but I do want the screen to follow my character.
So, for example, pygame would normally render starting at position (0, 0), and the window's width and height determine how much of that area you can see. But I want to move the starting position so that I can view something at coordinates (1000, 1000) even if my screen is only 500x500 big.
Is it possible to make camera tracking without moving the whole world?
No.
By relativity, moving the player within the world is the same as moving the world around the player. Since your camera is fixed on the player, by definition you will see the world moving when the player moves within it. Therefore, you must draw your world in a different place.
It is more explicit in 3D graphics; we represent the scene's motion as one matrix, and the camera's as another. The renderer uses their product. It doesn't care which contributed to the motion.
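In 2D the same idea boils down to subtracting a camera offset from every world coordinate before drawing; nothing in the world data itself moves. A tiny sketch (the names are made up for illustration):

/* Sketch: a 2D "camera" is just an offset subtracted from world coordinates. */
typedef struct { int x, y; } Vec2;

static Vec2 camera;   /* top-left corner of the view, in world space */

/* Keep the camera centred on the player for a given screen size. */
void camera_follow(Vec2 player, int screen_w, int screen_h)
{
    camera.x = player.x - screen_w / 2;
    camera.y = player.y - screen_h / 2;
}

/* Convert a world-space position into the screen-space position to draw at. */
Vec2 world_to_screen(Vec2 world)
{
    Vec2 s = { world.x - camera.x, world.y - camera.y };
    return s;
}

With the player at (1000, 1000) and a 500x500 window, the camera sits at (750, 750) and everything gets drawn 750 pixels up and to the left of its world position, which is exactly "moving the world around the player".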

OpenGL rendering - a window out of the screen

When I draw a triangle and part (or the whole) of that primitive is placed outside the viewing volume, OpenGL performs clipping (before rasterization). That is described for example here: link. What happens when a part of the window is placed outside the screen (monitor)? What happens if I use a compositing window manager (on Linux, for example Compiz) and the whole OpenGL window is placed on a virtual desktop (for example a wall of the cube) which is not visible? What happens in that OpenGL application? Is there any GPU usage? What about redirecting the content of that window to an off-screen pixmap?
When I draw a triangle and part (or the whole) of that primitive is placed outside the viewing volume, OpenGL performs clipping (before rasterization).
Clipping is a geometrical operation. When it comes to rasterization, everything happens on the pixel level.
It all comes down to Pixel Ownership.
In a plain, not composited windowing system, all windows share the same screen framebuffer. Each window is, well, a window (offset + size) into the screen framebuffer. When things get drawn to a window, using OpenGL or not, for every pixel it is tested if this pixel of the framebuffer actually belongs to the window; if some other, logical (and visible) window is in front, the pixel ownership tests will fail for these pixels and nothing gets drawn there (so that the window in front doesn't get overdrawn). That's why in a plain, uncomposited environment you can "screenshot" windows overlaying your OpenGL window with glReadPixels, because effectively that part of the framebuffer handed to OpenGL actually belongs to another window, but OpenGL doesn't know this.
Similarly, if a window is moved partially or completely off screen, the off-screen pixels will fail the pixel ownership test and nothing gets drawn there.
What happens if I use compositing window manager
Then every window has its very own framebuffer, which it completely owns. Pixel Ownership tests will never fail. You figure the rest.
What happens in that OpenGL application?
To OpenGL it looks like all the pixels pass the ownership test when rasterizing. That's it.
Is there any GPU usage?
The pixel ownership test is so important that even long before there were GPUs, the first generation of graphics cards had the functionality required to implement it. The function is simple to implement and hardwired, so there's no difference in that regard.
However, the more pixels fail the test, i.e. are not being touched, the less work the GPU has to do in the rasterization stage, so rendering throughput actually increases if the window is obscured or partially moved off-screen in a non-composited environment.

What is the "side scrolling hack" from old games?

I have heard that old arcade side scrolling games used a specific programming hack to enable performant side scrolling.
I understand that years ago the machines weren't powerful enough to repaint the whole screen every frame as is done nowadays. There are techniques, such as dirty rectangles, which make it possible to minimise the screen area that needs repainting when the background is stationary and only the sprites move.
The above approach only works when the background doesn't change (and hence most of the screen pixels remain stationary).
Vertical scrolling games, like old-school shoot'em ups, have it a bit harder, with the background changing every frame because of the scroll. However, one could take advantage of the way pixels are fed to the display (line by line). I imagine that one could use a bigger buffer and shift the data pointer a few lines "down" every frame, so that the screen is redrawn starting from another position, giving the impression of a smooth scroll (see the sketch below). Still, only the sprites (and a small strip of background at the edge of the screen) would need to be redrawn, which is a serious optimisation.
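A sketch of that bookkeeping in C, just to make the idea concrete (buffer sizes and names are assumptions; on hardware with a display-start register you would only change that register instead of indexing by hand):

/* Sketch: vertical scrolling by moving the "first visible row" pointer
   through a ring buffer of scan lines instead of redrawing them all. */
#define SCREEN_W  320
#define SCREEN_H  200
#define BUFFER_H  256                          /* spare rows to scroll into */

static unsigned char rows[BUFFER_H][SCREEN_W]; /* background scan lines */
static int top_row = 0;                        /* index of the first visible row */

/* Row of pixels the display should fetch for screen line y. */
const unsigned char *visible_row(int y)
{
    return rows[(top_row + y) % BUFFER_H];
}

/* Scroll down one line: advance the pointer and draw only the row that has
   just become visible at the bottom edge; everything else stays untouched. */
void scroll_down_one_line(void (*draw_row)(unsigned char *row, int world_y),
                          int world_y_of_new_row)
{
    top_row = (top_row + 1) % BUFFER_H;
    draw_row(rows[(top_row + SCREEN_H - 1) % BUFFER_H], world_y_of_new_row);
}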
However, for side scrolling games, things are not that simple and obvious. Still, I'm aware that somebody, somewhere in the past, has thought of an optimisation which (with some limitations) allowed the old machines to scroll the background horizontally without redrawing it every frame.
IIRC it was used in many old games, mostly 80's beat'em ups, as well as in demoscene productions.
Can you describe this technique and name its author?
I have written games for the good old C64 doing exactly this. And there are basically two things to be aware of:
These games were NOT using bitmapped graphics, but instead used "remapped" character fonts, which means that chunks of 8x8 pixels were actually moved around as just one byte.
The next thing to note is that there was hardware support for displacing the whole screen by up to seven pixels. Note that this didn't in any way affect any graphics - it just made everything sent to the TV a little bit displaced.
So 2) made it possible to smooth scroll up to 7 pixels away. Then you moved every character around - which for a full screen was exactly 1000 bytes, which the computer could cope with - while at the same time you moved the scrolling register back 7 pixels. 8 - 7 = 1 means that it looked like you had scrolled yet another single pixel... and then it just continued that way. So 1) and 2) combined made the illusion of true smooth scrolling!
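The per-frame logic looks roughly like this, written as C for clarity (the original would be 6502 assembly run from the raster interrupt; on the C64 the horizontal fine-scroll value lives in the low three bits of the VIC-II register at $D016, and shift_chars_left() stands in for the 1000-byte copy of the character matrix):

/* Sketch: one pixel of apparent scrolling per frame, combining the 0..7
   pixel hardware fine scroll with a coarse one-character shift. */
#define VIC_CONTROL2 (*(volatile unsigned char *)0xD016)

extern void shift_chars_left(void);   /* move every character one column over */

static unsigned char fine_scroll = 7; /* current 0..7 pixel displacement */

void scroll_one_pixel(void)
{
    if (fine_scroll == 0) {
        /* Hardware offset exhausted: do the coarse 8-pixel step by moving
           the characters, and jump the fine scroll back to 7 at the same
           time - 8 - 7 = 1 pixel of apparent movement. */
        shift_chars_left();
        fine_scroll = 7;
    } else {
        fine_scroll--;
    }
    /* Keep the upper bits of the control register, update the scroll bits. */
    VIC_CONTROL2 = (VIC_CONTROL2 & 0xF8) | fine_scroll;
}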
After that a third thing came into play: raster interrupts. This means that the CPU got an interrupt when the TV/monitor was about to begin drawing a scan line at a specified location. That technique made it possible to create a split screen, so that you weren't required to scroll the entire screen as in my first description.
And to go even more into detail: even if you didn't want a split screen, the raster interrupt was very important anyway, because it was just as important then as it is today (though today the framework hides this from you) to update the screen at the right time. Modifying the "scroll register" while the TV/monitor was updating anywhere in the visible area would cause an effect called "tearing" - where you clearly notice that the two parts of the screen are one pixel out of sync with each other.
What more is there to say? Well, the technique with remapped character sets made it possible to do some animations very easily. For example, conveyors, cog wheels and the like could be animated by constantly changing the appearance of the "characters" representing them on screen. So a conveyor spanning the entire screen width could look as if it was moving everywhere just by changing a single byte in the character map.
I did something similar way back in the 90s, using two different approaches.
The first one involved "windowing," which was supported by the VESA SVGA standard. Some cards implemented it correctly. Basically, if you had a frame buffer/video RAM larger than the displayable area, you could draw a large bitmap and give the system coordinates for a window within that area that you wanted to display. By changing those coordinates, you could scroll around without having to re-fill the frame buffer.
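For reference, setting that "window" through VBE looked roughly like this (a sketch assuming a 16-bit DOS compiler that provides dos.h/int86() and a VESA BIOS supporting function 4F07h, "Set/Get Display Start"):

/* Sketch: ask the VESA BIOS to start displaying the frame buffer at a given
   pixel/scan-line offset, which pans the view without copying any pixels. */
#include <dos.h>

int set_display_start(unsigned int first_pixel, unsigned int first_scanline)
{
    union REGS r;

    r.x.ax = 0x4F07;          /* VBE: Set/Get Display Start Control   */
    r.x.bx = 0x0000;          /* BL = 00h: set display start          */
    r.x.cx = first_pixel;     /* first displayed pixel in scan line   */
    r.x.dx = first_scanline;  /* first displayed scan line            */
    int86(0x10, &r, &r);

    return r.x.ax == 0x004F;  /* AX = 004Fh means supported + success */
}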
The other method relied on manipulating the blit used to get a completed frame into the frame buffer. Blitting a page that was the same size as the screen into the frame buffer is easy and efficient.
I found this old 286 assembler code (on a still-functioning 17-year-old floppy!) that copied a 64000-byte (320x200) screen from an off-screen page to the video buffer:
Procedure flip; assembler;
{ Copies the entire 64000-byte screen at segment "Source" to segment "Dest" }
asm
  push ds              { preserve the data segment register }
  mov  ax, [Dest]
  mov  es, ax          { ES = destination segment (video memory) }
  mov  ax, [Source]
  mov  ds, ax          { DS = source segment (off-screen page) }
  xor  si, si          { SI = 0: start of source }
  xor  di, di          { DI = 0: start of destination }
  mov  cx, 32000       { 32000 words = 64000 bytes }
  rep  movsw           { copy CX words from DS:SI to ES:DI }
  pop  ds              { restore the data segment register }
end;
The rep movsw moved CX words (where a word is two bytes in this case). This was very efficient since it's basically a single instruction that tells the CPU to move the whole thing as quickly as possible.
However, if you had a larger buffer (say, 1024x200 for a side scroller), you could just as easily use a nested loop and copy a single row of pixels per iteration. In the 1024-pixel-wide buffer, for instance, you would copy the bytes:
start              count
0*1024 + left      320
1*1024 + left      320
...
199*1024 + left    320
where left is the x coordinate within the large background image that you want to start at (the left edge of the screen).
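That nested copy, sketched in C with flat addressing (the 16-bit segment handling mentioned below is left out):

/* Sketch: blit a 320x200 view out of a wider (1024-pixel) background
   buffer, one row per iteration. */
#include <string.h>

#define SCREEN_W  320
#define SCREEN_H  200
#define BUFFER_W  1024

void blit_view(const unsigned char *background,  /* BUFFER_W * SCREEN_H bytes */
               unsigned char *screen,            /* SCREEN_W * SCREEN_H bytes */
               int left)                         /* x offset of the view */
{
    int y;
    for (y = 0; y < SCREEN_H; ++y)
        memcpy(screen + y * SCREEN_W,
               background + y * BUFFER_W + left,
               SCREEN_W);
}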
Of course, in 16-bit mode, some magic and manipulation of segment pointers (ES, DS) was required to get a buffer larger than 64KB (in reality, multiple adjacent 64k buffers), but it worked pretty well.
There were probably better solutions to this problem (and definitely better ones to use today), but it worked for me.
Arcade games frequently featured customized video chips or discrete logic to allow scrolling without the CPU having to do (much) work. The approach would be similar to what danbystrom was describing on the C-64.
Basically the graphics hardware took care of fine-scrolling the characters (or tiles), and the CPU then handled replacing all tiles once the scroll registers had reached their limit. I am currently looking at the Irem m-52 board, which deals with multiple scrolling backgrounds in hardware. Schematics can be found online.
For right scrolling on the Commodore Amiga we used the Copper to right-shift the screen by up to 16 pixels. When the screen had shifted, we added 2 bytes to the start address of the screen buffer, while on the right side we used the Blitter to copy graphics from main memory to the screen buffer. We would set the screen buffer slightly larger than the screen view so that we could copy the graphics without you seeing a flickering effect from the copying on the right side of the viewport.

Get mouse deltas under linux (xorg)

Is there a convenient way to get mouse deltas (e.g. mickeys) under X/linux? I know that I could read from /dev/input/mice, but that requires root access and seems a bit too low-level for me.
If this is for a game, i.e. an application with an actual X window, the typical approach used to be:
1. Grab the mouse, so all mouse input goes to your window
2. Warp the mouse pointer to the center of your window, to give maximum amount of space to move
3. On each mouse movement event, subtract the center of the window from the reported position; this gives you a "delta event"
4. Goto 2
I write "used to be" because there might be better ways to solve this now, haven't looked into it for a while.
This of course won't give you a resolution that is higher than what X is reporting to applications, i.e. pixels. If you're after sub-pixel reporting, I think you need to go lower, perhaps read the device directly as you suggest.
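A rough Xlib sketch of those steps (dpy and win are assumed to be an already-created display and window with PointerMotionMask selected; error handling omitted):

/* Sketch: relative mouse movement via grab + warp-to-centre. */
#include <X11/Xlib.h>

void mouse_delta_loop(Display *dpy, Window win, int width, int height)
{
    int cx = width / 2, cy = height / 2;

    /* 1. Grab the pointer so all mouse input goes to our window. */
    XGrabPointer(dpy, win, True, PointerMotionMask,
                 GrabModeAsync, GrabModeAsync, win, None, CurrentTime);

    /* 2. Park the pointer in the centre of the window. */
    XWarpPointer(dpy, None, win, 0, 0, 0, 0, cx, cy);
    XSync(dpy, False);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type != MotionNotify)
            continue;

        /* 3. The delta is the distance from the window centre. */
        int dx = ev.xmotion.x - cx;
        int dy = ev.xmotion.y - cy;
        if (dx == 0 && dy == 0)
            continue;   /* this is the event generated by our own warp */

        /* ...feed dx/dy to the application here... */

        /* 4. Warp back to the centre, ready for the next movement. */
        XWarpPointer(dpy, None, win, 0, 0, 0, 0, cx, cy);
    }
}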

Drawing Vector Graphics Faster

In my application I want to draw polygons using the Windows CreateGraphics method, and later edit a polygon by letting the user select its points and re-position them.
I use the mouse-move event to get the new coordinates of the point being moved, and the Paint event to re-draw the polygon. The application is working, but when a point is moved the movement is not smooth.
I don't know whether the mouse-move or the paint event is the performance hindrance.
Can anyone make a suggestion as to how to improve this?
Make sure that you don't repaint for every mouse move. The proper way to do this is to handle all your input events, modifying the polygon data and setting a flag that a repaint needs to occur (on Windows, possibly just calling InvalidateRect() without calling UpdateWindow()).
You might not have a real performance problem - it could be that you just need to draw to an off screen DC and then copy that to your window, which will reduce flicker and make the movement seem much smoother.
If you're coding using the Win32 api, look at this for reference.
...and of course, make sure you only invalidate the area that needs to be repainted. Since you're keeping track of the polygons, invalidate only the polygon area (the rectangular union of the before and after states).
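Putting those pieces together in raw Win32 terms, a sketch might look like the following (poly_points, dragged_index and poly_bounding_rect() are hypothetical names for state that would live elsewhere in your program):

/* Sketch: move a polygon point on WM_MOUSEMOVE, invalidate only the affected
   rectangle, and paint through a memory DC to avoid flicker. */
#include <windows.h>
#include <windowsx.h>

extern POINT poly_points[];            /* hypothetical polygon data        */
extern int   dragged_index;            /* index of the point being dragged */
extern RECT  poly_bounding_rect(void); /* hypothetical: current bounds     */

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_MOUSEMOVE:
        if (dragged_index >= 0) {
            RECT before = poly_bounding_rect();
            poly_points[dragged_index].x = GET_X_LPARAM(lParam);
            poly_points[dragged_index].y = GET_Y_LPARAM(lParam);
            RECT after = poly_bounding_rect();
            RECT dirty;
            UnionRect(&dirty, &before, &after);
            /* Mark the area dirty; don't force an immediate repaint. */
            InvalidateRect(hwnd, &dirty, FALSE);
        }
        return 0;

    case WM_PAINT: {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        RECT rc;
        GetClientRect(hwnd, &rc);

        /* Draw into an off-screen bitmap, then copy it in one BitBlt. */
        HDC memdc = CreateCompatibleDC(hdc);
        HBITMAP bmp = CreateCompatibleBitmap(hdc, rc.right, rc.bottom);
        HBITMAP old = (HBITMAP)SelectObject(memdc, bmp);

        FillRect(memdc, &rc, (HBRUSH)(COLOR_WINDOW + 1));
        /* ...draw the polygon into memdc here... */
        BitBlt(hdc, 0, 0, rc.right, rc.bottom, memdc, 0, 0, SRCCOPY);

        SelectObject(memdc, old);
        DeleteObject(bmp);
        DeleteDC(memdc);
        EndPaint(hwnd, &ps);
        return 0;
    }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}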
