Is it possible to programmatically set the cursor position outside the current screen resolution?
OS: Ubuntu 14
Window manager: Compiz
Resolution: 1920 * 1080
XWarpPointer(display, None, None, 0, 0, 0, 0, 0, 1090);
The code above can only move the cursor to the bottom edge of the screen.
XWarpPointer has at least one documented limitation (which may affect your program):
Note that you cannot use XWarpPointer() to move the pointer outside the confine_to window of an active pointer grab. An attempt to do so will only move the pointer as far as the closest edge of the confine_to window.
The likely reason for wanting to move the pointer off-screen is to hide it. An X application can define a cursor using XDefineCursor (which is used for displaying the pointer), and hide that. This is for a given window, of course.
xterm does that, for instance, since patch #230 ("hide the mouse pointer while user is typing").
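A minimal sketch of that technique, assuming plain Xlib with an already opened Display and an existing Window (the helper name is just illustrative): build a cursor from a blank 1x1 bitmap and attach it to the window.

// Hide the pointer over a window by defining a blank 1x1 cursor;
// restore the normal pointer later with XUndefineCursor(display, window).
static void hide_cursor(Display *display, Window window)
{
    static char no_data[] = { 0 };              /* 1x1 bitmap, all bits clear */
    XColor black;
    black.red = black.green = black.blue = 0;

    Pixmap blank = XCreateBitmapFromData(display, window, no_data, 1, 1);
    Cursor invisible = XCreatePixmapCursor(display, blank, blank,
                                           &black, &black, 0, 0);
    XDefineCursor(display, window, invisible);

    XFreeCursor(display, invisible);            /* the window keeps its reference */
    XFreePixmap(display, blank);
    XFlush(display);
}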
Here are a few links using or discussing the technique:
LinuxMouse.cpp, source-code
Platform_Linux.cpp, source-code
How to hide the mouse pointer?, discussion on comp.windows.x
The Cursor, a set of slides for a class
Basic Graphics Programming With The Xlib Library - Part II
I am trying to get the color of a pixel on my screen using node.js. I want it to be returned in RGB format, e.g. (255, 0, 0). My current solution is to use screenshot-desktop to screenshot my entire screen in JPG format, decode it to get the raw pixel data, and get the color of a given pixel. However, this lags out my entire computer for 1-2 seconds as it is taking the screenshot. This is unusable as I would like to do this multiple times per second. So my question is: How can I get the color of a given pixel on the screen, without taking a full screenshot?
I am using Linux with X11. There is an X11 library for node.js, so I assume I should use that to get the pixel color, I'm just not sure how. If you could show me how to do it in C then I can easily use node.js to do the same thing.
Thanks!
Oh my gosh I just figured it out after posting this. I was using robotjs for reading the mouse position and I totally forgot it can do screen stuff too! So, the solution would be to do
var robot = require('robotjs');
// getPixelColor returns the color as a hex string, e.g. "ff0000"
var color = robot.getPixelColor(x, y);
X11 solution using the x11 node library (I am the author):
Query the window tree with QueryTree, starting at the root window.
Get each child's geometry using the GetGeometry request.
If your point is not inside any child, use the current window id and get a 1x1 pixmap from the current image: GetImage(format, currentWindow, x, y, 1, 1, planeMask) (2 for format and 0xffffffff for planeMask should work). Make sure you calculate the relative x/y position as you traverse the window tree.
If a child window covers your point, query that window's children and repeat. Note that QueryTree returns windows in bottom-to-top stacking order, so make sure you pick the last one covering your point.
Once you have the 1x1 pixmap from the topmost window under your point, the buffer should contain only the color bytes for your pixel; the RGB order and bit masks may depend on red_mask, green_mask and blue_mask from display.screen[0].depths[visual].
If you cache the "topmost window" between requests and only start from the root when it no longer matches, the above solution might be much more performant than the one using robotjs (although much more low-level and complicated). Good luck!
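Since the question asked what this would look like in C, here is a rough standalone sketch that skips the window-tree walk and simply reads one pixel from the root window with XGetImage (the C-level counterpart of the GetImage request above); the coordinates and channel shifts are illustrative and assume a common 24/32-bit TrueColor visual.

// Read a single pixel from the root window and print it as (R, G, B).
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

int main(void)
{
    Display *display = XOpenDisplay(NULL);
    if (!display)
        return 1;

    int x = 100, y = 100;                       /* example screen coordinates */
    Window root = DefaultRootWindow(display);

    /* ZPixmap format, all planes: one pixel's worth of image data. */
    XImage *image = XGetImage(display, root, x, y, 1, 1, AllPlanes, ZPixmap);
    if (image) {
        unsigned long pixel = XGetPixel(image, 0, 0);
        /* In general, derive the shifts from image->red_mask and friends. */
        int r = (pixel & image->red_mask)   >> 16;
        int g = (pixel & image->green_mask) >> 8;
        int b = (pixel & image->blue_mask);
        printf("(%d, %d, %d)\n", r, g, b);
        XDestroyImage(image);
    }

    XCloseDisplay(display);
    return 0;
}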
I'm trying to figure out how to take a screenshot of a window that is currently not focused, so there is a good chance that the window will be partially or fully obscured by other windows.
I've found an example at this link: Get a screenshot of a window that is cover or not visible or minimized with Xcomposite extension for X11, but I can't make it work; any time I take a screenshot I get only strange output, mostly black, like I'm accessing the wrong buffer or something.
XID xid = windowID; // Checked and confirmed that the window ID is correct
XWindowAttributes attributes;
XGetWindowAttributes( display, windowID, &attributes );
XCompositeRedirectWindow (display, xid, CompositeRedirectAutomatic);
Pixmap pixmap = XCompositeNameWindowPixmap (display, xid);
// Extract the data
XRenderPictFormat *format = XRenderFindVisualFormat (display, attributes.visual);
XRenderPictureAttributes pa;
pa.subwindow_mode = IncludeInferiors;
Picture picture = XRenderCreatePicture (display, xid, format, CPSubwindowMode, &pa);
QPixmap finalPix (attributes.width, attributes.height);
XRenderComposite (display, PictOpSrc, picture, None, finalPix.x11PictureHandle(), 0,0, 0,0, 0,0, attributes.width, attributes.height);
XFreePixmap (display, pixmap);
XCompositeUnredirectWindow (display, xid, CompositeRedirectAutomatic);
return finalPix;
(Edit: this screenshot was taken of a fully visible window, not an obscured one, so I guess the current issue is not even that X11 doesn't draw it; my implementation itself doesn't seem to work and I can't figure out why.)
And this is how a screenshot of my konsole window looks:
First of all, Qt has this feature. You can use QScreen::grabWindow.
The problem is that the documentation says:
Note on X11 that if the given window doesn't have the same depth as the root window, and another window partially or entirely obscures the one you grab, you will not get pixels from the overlying window. The contents of the obscured areas in the pixmap will be undefined and uninitialized.
So this will simplify your code, but the obscured parts of the window will still remain a problem. It looks like the functionality of X11 won't let you resolve this issue.
There is a good example of how to use this feature.
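For reference, a minimal sketch of the Qt 5 call, assuming a running QGuiApplication/QApplication and an X11 window id (the helper name is illustrative):

// Grab the on-screen contents of a window identified by its X11 window id.
#include <QGuiApplication>
#include <QScreen>
#include <QPixmap>

QPixmap grabById(WId windowId)
{
    QScreen *screen = QGuiApplication::primaryScreen();
    // Obscured areas may come back undefined, as the documentation quoted above warns.
    return screen->grabWindow(windowId);
}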
I'm playing about with the most recent NSDocument in a Swift document-based app. One thing that's a bit odd is that the starting location for a new window is near the bottom of the screen.
Playing with the Storyboard a bit, it's not clear how to use the built-in settings to come up with a reasonable "near the top" position: the setting moves up from the bottom, not down from the top, so the position would change depending on the screen size?
I assume there's a position mechanism I can hook, but it's not obvious in the shell code that's supplied. Any hints?
The OS X coordinate system is flipped in contrast to iOS, so (0, 0) is the bottom-left corner.
You can calculate the position of your window in a similar manner (for any screen size):
CGFloat screenHeight = NSHeight([self.window screen].frame);
NSRect frame = [self.window frame];
// Keep the window's size and move its top edge to 100 points below the top of the screen.
frame.origin = NSMakePoint(100, screenHeight - NSHeight(frame) - 100);
[self.window setFrame:frame display:YES];
The easiest approach is to set the initial height to 900, forget about it, and enable window restoration: this causes the window to open where it previously was, which is where the user wants it.
Select your window in the Storyboard and fill in the initial position coordinates.
I would like to implement red lines moving on the H/V rulers, similar to what I see in Windows Paint (8.1), indicating the current mouse position. See the example (red line at 560):
What would be the best way to do it? Direct2D animation? Layers? Any other simple trick? The point here is of course doing it efficiently, without repainting the whole area on every mouse move.
I'm currently using MFC/Direct2D, so I paint the area with the field and rulers inside the view myself and have full control over the graphics here.
There are many ways to attack this problem. The simplest is to rely on your OnPaint function to paint the line in a location based on a member variable. In your OnMouseMove handler, call InvalidateRect on the current location of the line based on the saved variable, update the variable, and call InvalidateRect a second time for the new line position.
The BeginPaint call that is generated in the CPaintDC constructor will set a clipping region based on the invalidation rectangles you provided. Even if your OnPaint tries to paint the entire window, only those parts that have been invalidated will be redrawn. If this is too inefficient, you can cache the ruler in a bitmap and use GetClipBox to determine which part of the bitmap to blit to the screen.
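A rough sketch of the invalidate-twice approach described above, with hypothetical class and member names (CRulerView, m_markerX, m_clientHeight and DrawRulersAndField are placeholders, not from the question):

// Track the marker position in a member variable and invalidate only the
// two one-pixel-wide strips that actually change.
void CRulerView::OnMouseMove(UINT nFlags, CPoint point)
{
    CRect oldLine(m_markerX, 0, m_markerX + 1, m_clientHeight);
    CRect newLine(point.x, 0, point.x + 1, m_clientHeight);
    InvalidateRect(&oldLine);      // repaint where the line used to be
    m_markerX = point.x;
    InvalidateRect(&newLine);      // repaint where the line is now
    CView::OnMouseMove(nFlags, point);
}

void CRulerView::OnPaint()
{
    CPaintDC dc(this);             // BeginPaint clips to the invalidated region
    DrawRulersAndField(&dc);       // assumed existing drawing code, clipped automatically
    dc.FillSolidRect(m_markerX, 0, 1, m_clientHeight, RGB(255, 0, 0));
}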
I am developing an SDL OpenGL application on Ubuntu and have noticed a problem with the mouse range when a new window size is set. The initial size of my application is 600x400 and the mouse range (x, y) reflects this. However, when a user changes the screen to any other size (using given predefined sizes), the mouse range still only reflects a 600x400 screen size and causes issues with mouse-location functionality.
To set the new resolution, I call:
SDL_SetVideoMode(Width, Height, 32, SDL_OPENGL);
which to my understanding should handle the mouse range resizing, but it doesn't seem to do so on Linux. Can anyone give me a solution to this problem?
Note: a possible hack seems to be to exit SDL and re-initialize using SDL_Init(SDL_INIT_EVERYTHING);
After some digging, I found the problem was that I was calling SDL_GetMouseState(0,0) later, after the size change was made, which apparently was interfering with the recalculation of the mouse range. However, I've gone through the SDL source and I can't really determine how this would affect it. There seems to be some mouse-state switching that may be causing it.
Any time I resize the window, I execute the following to refresh my viewport:
m_ParentWindow = SDL_SetVideoMode( m_width, m_height, m_depth, m_SDL_Vid_Flags );
glViewport(0,0,m_width,m_height);
Clear();
Where Clear calls:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
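For what it's worth, here is a hedged sketch of driving that refresh from the SDL 1.2 event loop, assuming the window was created with SDL_RESIZABLE so SDL_VIDEORESIZE events are delivered (the member names follow the snippet above):

// Reapply the video mode on resize so SDL recomputes the mouse coordinate range.
SDL_Event event;
while (SDL_PollEvent(&event)) {
    if (event.type == SDL_VIDEORESIZE) {
        m_width  = event.resize.w;
        m_height = event.resize.h;
        m_ParentWindow = SDL_SetVideoMode(m_width, m_height, m_depth, m_SDL_Vid_Flags);
        glViewport(0, 0, m_width, m_height);
        Clear();
    }
}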