Is there a convenient way to get mouse deltas (e.g. mickeys) under X/Linux? I know that I could read from /dev/input/mice, but that requires root access and seems a bit too low-level for me.
If this is for a game, i.e. an application with an actual X window, the typical approach used to be:
1. Grab the mouse, so all mouse input goes to your window
2. Warp the mouse pointer to the center of your window, to give the maximum amount of space to move
3. On each mouse movement event, subtract the center of the window from the reported position; this gives you a "delta event"
4. Goto 2
I write "used to be" because there might be better ways to solve this now; I haven't looked into it for a while.
This of course won't give you a resolution that is higher than what X is reporting to applications, i.e. pixels. If you're after sub-pixel reporting, I think you need to go lower, perhaps read the device directly as you suggest.
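Here is a minimal Xlib sketch of that grab/warp/delta loop, assuming an already created and mapped window win of the given size; window setup, error handling and ungrabbing are omitted.

/* Grab the pointer, warp it to the window center, and report deltas. */
#include <X11/Xlib.h>
#include <stdio.h>

void mouse_delta_loop(Display *dpy, Window win, int width, int height)
{
    int cx = width / 2, cy = height / 2;

    /* 1. Grab the mouse so all pointer events come to our window. */
    XGrabPointer(dpy, win, False, PointerMotionMask,
                 GrabModeAsync, GrabModeAsync, win, None, CurrentTime);

    /* 2. Warp the pointer to the window center. */
    XWarpPointer(dpy, None, win, 0, 0, 0, 0, cx, cy);
    XFlush(dpy);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == MotionNotify) {
            /* 3. Delta = reported position minus the window center. */
            int dx = ev.xmotion.x - cx;
            int dy = ev.xmotion.y - cy;
            if (dx != 0 || dy != 0) {   /* skip the event caused by the warp itself */
                printf("delta: %d %d\n", dx, dy);
                /* 4. Warp back to the center for the next movement. */
                XWarpPointer(dpy, None, win, 0, 0, 0, 0, cx, cy);
            }
        }
    }
}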
Related
I'm writing an OpenGL application in Linux using Xlib and GLX. I would like to use the mouse pointer to draw and to drag objects in the window. But no matter what method I use to draw or move graphic objects, there is always a very noticeable lag between the actual mouse pointer position (as drawn by the X server) and the positions of the objects I draw with the pointer coordinates I get from Xlib (XQueryPointer or the X events) or by reading directly from /dev/input/event*.
So my question is: what code is used by the XOrg server to actually draw the mouse pointer on the screen? That way I could use the same kind of code to place my graphics on the screen and keep the mouse pointer and the drawn objects perfectly aligned.
Even a pointer to the relevant source file(s) of XOrg would be great.
So my question is: what code is used by the XOrg server to actually draw the mouse pointer on the screen?
If everything goes right, no code at all is drawing the mouse pointer. So-called "hardware cursor" support has been around for decades. Essentially it's what's known as a "sprite engine" in the hardware: it takes a small picture and a pair of values (x, y) telling where it shall appear on the screen. On every frame the graphics hardware sends to the display, the cursor image is overlaid at that position.
The graphics system is constantly updating the position values based on the input device movements.
Of course there is also graphics hardware that does not have this kind of "sprite engine". But the trick there is to update often, to update fast, and to update late.
But no matter what method I use to draw or move graphic objects, there is always a very noticeable lag between the actual mouse pointer position (as drawn by the X server) and the positions of the objects I draw with the pointer coordinates I get from Xlib (XQueryPointer or the X events) or by reading directly from /dev/input/event*.
Yes, that happens if you read it and integrate it into your image at the wrong time. The key ingredient to minimizing latency is to draw as late as possible and to integrate as much input for as long as possible before you absolutely have to draw things to meet the V-Sync deadline. And the most important trick is not to draw what happened in the past, but to draw what the state of affairs will be at the moment the picture appears on screen. I.e. you have to predict the input for the next couple of frames drawn and use that.
The Kalman filter has become the de-facto standard method for this.
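As a much-simplified illustration of that idea: predict where the pointer will be at presentation time by extrapolating from recent samples. This is only a constant-velocity model, not the Kalman filter the answer recommends, and the names here are made up.

/* Predict the pointer position at the time the frame will actually appear,
   using constant-velocity extrapolation from the last two samples. */
typedef struct { double t, x, y; } PointerSample;   /* t in seconds */

void predict_pointer(PointerSample prev, PointerSample cur,
                     double present_time, double *px, double *py)
{
    double dt = cur.t - prev.t;
    double vx = (dt > 0.0) ? (cur.x - prev.x) / dt : 0.0;
    double vy = (dt > 0.0) ? (cur.y - prev.y) / dt : 0.0;
    double ahead = present_time - cur.t;   /* how far into the future we draw */
    *px = cur.x + vx * ahead;
    *py = cur.y + vy * ahead;
}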
I have heard that old arcade side scrolling games used a specific programming hack to enable performant side scrolling.
I understand that years ago the machines weren't powerful enough to repaint the whole screen every frame, as is done nowadays. There are techniques, such as dirty rectangles, which minimise the screen area that needs to be repainted when the background is stationary and only the sprites move.
The above approach only works when the background doesn't change (and hence most of the screen pixels remain stationary).
Vertical scrolling games, like old-school shoot'em ups, have it a bit harder, with the background changing every frame due to the scroll. However, one could take advantage of the way pixels are fed to the display (line by line). I imagine that one could use a bigger buffer and shift the data pointer some lines "down" every frame, so that scan-out starts from a different position, thus giving the impression of a smooth scroll. Still, only the sprites (and a bit of the background at the edge of the screen) would need to be redrawn, which is a serious optimisation.
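A tiny sketch of that idea, with made-up names (real hardware typically exposes this as a "display start address" register):

/* Keep a frame buffer a few lines taller than the screen and scroll
   vertically by moving the pointer the display is scanned out from,
   instead of copying pixels. */
#define SCREEN_W    320
#define SCREEN_H    200
#define EXTRA_LINES  64

static unsigned char framebuffer[(SCREEN_H + EXTRA_LINES) * SCREEN_W];
static int scroll_line = 0;   /* 0 .. EXTRA_LINES-1 */

const unsigned char *visible_start(void)
{
    /* Scan-out begins here; bumping scroll_line by one each frame scrolls
       the whole picture one line without redrawing it. */
    return framebuffer + scroll_line * SCREEN_W;
}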
However, for side-scrolling games, the thing is not that simple and obvious. Still, I'm aware that somebody, somewhere in the past, thought of an optimisation which (with some limitations) allowed the old machines to scroll the background horizontally without redrawing it every frame.
IIRC it was used in many old games, mostly 80's beat'em ups, as well as in demoscene productions.
Can you describe this technique and name its author?
I have written games for the good old C64 doing exactly this. And there are basically two things to be aware of:
These games were NOT using bitmapped graphics, but instead used "remapped" character fonts, which means that chunks of 8x8 pixels were actually hurled around as just one byte.
The next thing to note is that there was hardware support for displacing the whole screen by up to seven pixels. Note that this didn't in any way affect any graphics - it just made everything sent to the TV a little bit displaced.
So 2) made it possible to smooth-scroll across 7 pixels. Then you moved every character one step - which for a full screen was exactly 1000 bytes, something the computer could cope with - while at the same time you moved the scrolling register back 7 pixels. Since 8 - 7 = 1, it looked like you had scrolled yet another single pixel... and then it just continued that way. So 1) and 2) combined created the illusion of true smooth scrolling!
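The per-frame logic, sketched here in C purely for illustration (on the real C64 this was 6502 assembly writing the video chip's registers; the two helper functions are hypothetical):

/* One pixel of horizontal scrolling per call: use the hardware fine-scroll
   register while it lasts, then shift the whole 40x25 character matrix
   (1000 bytes) by one column and reset the register. */
#define SCREEN_COLS 40
#define SCREEN_ROWS 25

extern unsigned char next_map_column(int row);    /* hypothetical: fetch the newly exposed column */
extern void set_fine_scroll_register(int value);  /* hypothetical: write the hardware scroll bits */

static unsigned char screen[SCREEN_ROWS][SCREEN_COLS];   /* character matrix */
static int fine_scroll = 7;                              /* 0..7 pixel displacement */

void scroll_one_pixel(void)
{
    if (fine_scroll > 0) {
        fine_scroll--;   /* cheap: just nudge the whole display one pixel */
    } else {
        /* Hardware shift exhausted: move every character one column over
           and move the register back 7 pixels; 8 - 7 = 1 more pixel of scroll. */
        for (int row = 0; row < SCREEN_ROWS; row++) {
            for (int col = 0; col < SCREEN_COLS - 1; col++)
                screen[row][col] = screen[row][col + 1];
            screen[row][SCREEN_COLS - 1] = next_map_column(row);
        }
        fine_scroll = 7;
    }
    set_fine_scroll_register(fine_scroll);
}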
After that, a third thing came into play: raster interrupts. This means that the CPU got an interrupt when the TV/monitor was about to begin drawing a scan line at a specified location. That technique made it possible to create a split screen, so that, unlike in my first description, you weren't required to scroll the entire screen.
And to be even more into details: even if you didn't want a split screen, the raster interrupt was very important anyway: because it was just as important then as it is today (but today the framework hides this from you) to update the screen at the right time. Modifying the "scroll register" when the TV/monitor was updating anywhere on the visible area would cause an effect called "tearing" - where you clearly notice the two parts of the screen are one pixel off sync with each other.
What more is there to say? Well, the technique with remapped character sets made it possible to do some animations very easily. For example, conveyors and cog wheels and such could be animated by constantly changing the appearance of the "characters" representing them on screen. So a conveyor spanning the entire screen width could look as if it was spinning everywhere by just changing a single byte in the character map.
I did something similar way back in the 90s, using two different approaches.
The first one involved "windowing," which was supported by the VESA SVGA standard. Some cards implemented it correctly. Basically, if you had a frame buffer/video RAM larger than the displayable area, you could draw a large bitmap and give the system coordinates for a window within that area that you wanted to display. By changing those coordinates, you could scroll around without having to re-fill the frame buffer.
The other method relied on manipulating the BLT method used to get a completed frame into the frame buffer. Blitting a page that was the same size as the screen to the frame buffer was easy and efficient.
I found this old 286 assembler code (on a still-functioning 17-year-old floppy!) that copied a 64000-byte (320x200) screen from an off-screen page to the video buffer:
Procedure flip; assembler;
{ This copies the entire screen at "Source" to "Dest" }
asm
  push ds              { preserve the data segment register }
  mov  ax, [Dest]
  mov  es, ax          { ES = destination segment (video memory) }
  mov  ax, [Source]
  mov  ds, ax          { DS = source segment (off-screen page) }
  xor  si, si          { SI = 0, source offset }
  xor  di, di          { DI = 0, destination offset }
  mov  cx, 32000       { 32000 words = 64000 bytes = 320x200 pixels }
  rep  movsw           { copy CX words from DS:SI to ES:DI }
  pop  ds              { restore DS }
end;
The rep movsw moved CX words (where a word is two bytes in this case). This was very efficient since it's basically a single instruction that tells the CPU to move the whole thing as quickly as possible.
However, if you had a larger buffer (say, 1024*200 for a side scroller), you could just as easily use a nested loop, and copy a single row of pixels per loop. In the 1024-pixel wide buffer, for instance, you could copy bytes:
start            count
0*1024 + left    320
1*1024 + left    320
...
199*1024 + left  320
where left is the x coordinate within the large background image that you want to start at (left side of the screen).
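In flat-memory C (so none of the segment juggling mentioned below), that row-by-row copy looks roughly like this; buffer names and sizes are illustrative:

/* Copy a 320x200 window out of a wider (1024 bytes per row) off-screen
   buffer into the screen buffer, starting at horizontal offset "left". */
#include <string.h>

#define SCREEN_W   320
#define SCREEN_H   200
#define BUFFER_W  1024

void blit_window(const unsigned char *buffer, unsigned char *screen, int left)
{
    for (int row = 0; row < SCREEN_H; row++) {
        /* Each source row starts at row*BUFFER_W + left; copy 320 bytes of it. */
        memcpy(screen + row * SCREEN_W,
               buffer + (size_t)row * BUFFER_W + left,
               SCREEN_W);
    }
}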
Of course, in 16-bit mode, some magic and manipulation of segment pointers (ES, DS) was required to get a buffer larger than 64KB (in reality, multiple adjacent 64k buffers), but it worked pretty well.
There were probably better solutions to this problem (and definitely better ones to use today), but it worked for me.
Arcade games frequently featured customized video chips or discrete logic to allow scrolling without the CPU having to do (much) work. The approach would be similar to what danbystrom was describing on the C-64.
Basically the graphics hardware took care of fine-scrolling the characters (or tiles), and the CPU then handled replacing all tiles once the scrolling registers had reached their limit. I am currently looking at the Irem M-52 board, which deals with multiple scrolling backgrounds in hardware. Schematics can be found online.
For right scrolling on the Commodore Amiga we used the Copper to right-shift the screen by up to 16 pixels. When the screen had shifted, we added 2 bytes to the start address of the screen buffer, while on the right side we used the Blitter to copy graphics from main memory to the screen buffer. We set the screen buffer slightly larger than the visible view so that we could copy the graphics in without you seeing a flickering effect from the copying on the right side of the viewport.
Perhaps against my better judgement, I decided to try to get graphics working in Scheme.
(MIT/GNU)
To get it started, I wrote
(define graphics-types (enumerate-graphics-types))
(define graphics (make-graphics-device (car graphics-types)))
which popped up a white window. Calling
(graphics-draw-point graphics .5 0)
gave the expected result, which was that a little black pixel appeared 3/4 of the way to the right of the window (vertically in the center). However, calling
(graphics-erase-point graphics .5 0)
did nothing. Furthermore, minimizing and restoring the window erased the point, but experimentation showed that minimizing always cleared the entire window.
Does anyone know what's going on?
The graphics-erase-point procedure in MIT Scheme works by changing the drawing mode to 0, calling graphics-draw-point, then changing the drawing mode back to whatever it was before. More information on the mechanics of MIT Scheme drawing modes can be found in MIT's drawing-mode documentation.
The bug appears to be in the graphics-bind-drawing-mode procedure, which is used in graphics-erase-point to change the drawing mode. The easiest solution is to redefine graphics-erase-point to use graphics-set-drawing-mode instead. The resulting code looks like this:
(define (graphics-erase-point device x y)
  (graphics-set-drawing-mode device 0)    ; drawing mode 0 erases
  (graphics-draw-point device x y)
  (graphics-set-drawing-mode device 15))  ; 15 is the default drawing mode
15 is the default drawing mode, which I've used for the sake of simplicity, but it is certainly possible to intelligently revert drawing modes (I leave that exercise for you).
I made a program that uses the colors of pixels from the screen, but GetPixel is slow, so is there a better way to find the color of pixels? I don't care whether I get the full screen or one pixel at a time, but the main thing is speed.
This depends entirely on what you need to do. If you only need one pixel, then use GetPixel--it may be slow, but who cares when you only need one? If you are trying to capture a whole region of the screen, though, then there are screen capture APIs you can use. Once you have the screen capture, you can pick out whatever pixels you want.
BitBlt is pretty fast. It also allows you to specify a region to be copied, so you don't have to copy pixels you don't want.
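For what it's worth, here is a sketch of that approach in C with plain GDI calls; the functions used are standard Win32, but error handling is omitted, so treat the routine as an illustration rather than production code:

/* Capture a w x h screen region with one BitBlt, then read the pixels
   from a 32-bit DIB in memory instead of calling GetPixel per pixel.
   Returns a heap-allocated top-down array of w*h DWORDs (0x00RRGGBB);
   the caller frees it with HeapFree. */
#include <windows.h>

DWORD *capture_region(int x, int y, int w, int h)
{
    HDC screen  = GetDC(NULL);                    /* DC for the whole screen */
    HDC mem     = CreateCompatibleDC(screen);     /* off-screen DC */
    HBITMAP bmp = CreateCompatibleBitmap(screen, w, h);
    HBITMAP old = (HBITMAP)SelectObject(mem, bmp);

    /* One fast copy of the whole region. */
    BitBlt(mem, 0, 0, w, h, screen, x, y, SRCCOPY);
    SelectObject(mem, old);                       /* deselect before GetDIBits */

    BITMAPINFO bi;
    ZeroMemory(&bi, sizeof(bi));
    bi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bi.bmiHeader.biWidth       = w;
    bi.bmiHeader.biHeight      = -h;              /* negative height = top-down rows */
    bi.bmiHeader.biPlanes      = 1;
    bi.bmiHeader.biBitCount    = 32;
    bi.bmiHeader.biCompression = BI_RGB;

    DWORD *pixels = (DWORD *)HeapAlloc(GetProcessHeap(), 0, (SIZE_T)w * h * 4);
    GetDIBits(screen, bmp, 0, h, pixels, &bi, DIB_RGB_COLORS);

    /* pixels[row * w + col] now holds the color of one captured pixel. */

    DeleteObject(bmp);
    DeleteDC(mem);
    ReleaseDC(NULL, screen);
    return pixels;
}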
In my application I want to draw polygons using the Windows CreateGraphics method and later edit a polygon by letting the user select its points and re-position them.
I use the mouse move event to get the new coordinates of the point being moved, and the Paint event to re-draw the polygon. The application works, but when a point is moved the movement is not smooth.
I don't know whether the mouse move or the paint event is the performance hindrance.
Can anyone make a suggestion as to how to improve this?
Make sure that you don't repaint for every mouse move. The proper way to do this is to handle all your input events, modifying the polygon data and setting a flag that a repaint needs to occur (on Windows, possibly just calling InvalidateRect() without calling UpdateWindow()).
You might not have a real performance problem - it could be that you just need to draw to an off screen DC and then copy that to your window, which will reduce flicker and make the movement seem much smoother.
If you're coding using the Win32 API, look at this for reference.
...and of course, make sure you only invalidate the area that needs to be repainted. Since you're keeping track of the polygons, invalidate only the polygon area (the rectangular union of the before and after states).
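A sketch of those suggestions in Win32/C terms: the mouse-move handler updates the data and invalidates only the union of the old and new polygon bounds, and WM_PAINT draws into a memory DC before copying it to the window. The variables points, npoints and dragged are illustrative application state; hit-testing and button handling are omitted.

#include <windows.h>
#include <windowsx.h>   /* GET_X_LPARAM / GET_Y_LPARAM */

static POINT points[16];      /* polygon vertices (illustrative) */
static int   npoints = 4;
static int   dragged = -1;    /* index of the vertex being moved, -1 = none */

static RECT polygon_bounds(void)
{
    RECT r = { points[0].x, points[0].y, points[0].x, points[0].y };
    for (int i = 1; i < npoints; i++) {
        if (points[i].x < r.left)   r.left   = points[i].x;
        if (points[i].y < r.top)    r.top    = points[i].y;
        if (points[i].x > r.right)  r.right  = points[i].x;
        if (points[i].y > r.bottom) r.bottom = points[i].y;
    }
    InflateRect(&r, 4, 4);    /* slack for the pen width */
    return r;
}

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg) {
    case WM_MOUSEMOVE:
        if (dragged >= 0) {
            RECT before = polygon_bounds();
            points[dragged].x = GET_X_LPARAM(lp);
            points[dragged].y = GET_Y_LPARAM(lp);
            RECT after = polygon_bounds(), dirty;
            UnionRect(&dirty, &before, &after);
            /* Mark only the changed area dirty; don't force an immediate
               repaint, so queued mouse moves coalesce into one WM_PAINT. */
            InvalidateRect(hwnd, &dirty, FALSE);
        }
        return 0;

    case WM_PAINT: {
        PAINTSTRUCT ps;
        HDC dc = BeginPaint(hwnd, &ps);

        /* Draw into a memory DC, then copy it over in one BitBlt (no flicker). */
        RECT rc;
        GetClientRect(hwnd, &rc);
        HDC memdc = CreateCompatibleDC(dc);
        HBITMAP bmp = CreateCompatibleBitmap(dc, rc.right, rc.bottom);
        HBITMAP oldbmp = (HBITMAP)SelectObject(memdc, bmp);

        FillRect(memdc, &rc, (HBRUSH)(COLOR_WINDOW + 1));
        Polygon(memdc, points, npoints);

        BitBlt(dc, 0, 0, rc.right, rc.bottom, memdc, 0, 0, SRCCOPY);

        SelectObject(memdc, oldbmp);
        DeleteObject(bmp);
        DeleteDC(memdc);
        EndPaint(hwnd, &ps);
        return 0;
    }
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}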