I'm trying to figure out how to take a screenshot of a window that is currently not focused, so there is a good chance that the window will be partially or fully obscured by other windows.
I've found an example here: "Get a screenshot of a window that is cover or not visible or minimized with Xcomposite extension for X11", but I can't make it work. Any time I take a screenshot I get only strange output, mostly black, as if I'm accessing the wrong buffer or something.
XID xid = windowID; // Checked and confirmed that the window ID is correct
XWindowAttributes attributes;
XGetWindowAttributes (display, xid, &attributes);

// Redirect the window off-screen and name its backing pixmap
XCompositeRedirectWindow (display, xid, CompositeRedirectAutomatic);
Pixmap pixmap = XCompositeNameWindowPixmap (display, xid);

// Extract the data
XRenderPictFormat *format = XRenderFindVisualFormat (display, attributes.visual);
XRenderPictureAttributes pa;
pa.subwindow_mode = IncludeInferiors;
Picture picture = XRenderCreatePicture (display, xid, format, CPSubwindowMode, &pa);

// Composite the window contents into the QPixmap's X11 picture
QPixmap finalPix (attributes.width, attributes.height);
XRenderComposite (display, PictOpSrc, picture, None, finalPix.x11PictureHandle(),
                  0, 0, 0, 0, 0, 0, attributes.width, attributes.height);

XFreePixmap (display, pixmap);
XCompositeUnredirectWindow (display, xid, CompositeRedirectAutomatic);
return finalPix;
(Edit: This screenshot was taken from a fully visible window, not an obscured one, so I guess the issue isn't even that X11 doesn't draw it; my implementation simply doesn't work and I can't figure out why.)
And this is how a screenshot of my konsole window looks:
First of all, Qt has this feature: you can use QScreen::grabWindow.
The problem is that the documentation says:
Note on X11 that if the given window doesn't have the same depth as
the root window, and another window partially or entirely obscures the
one you grab, you will not get pixels from the overlying window. The
contents of the obscured areas in the pixmap will be undefined and
uninitialized.
So this will simplify your code, but the obscured parts of the window will still remain a problem. It looks like X11's functionality won't let you resolve this issue.
There is a good example of how to use this feature.
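For reference, a minimal sketch of grabbing one window with QScreen::grabWindow (Qt 5) could look like the following; it assumes a QGuiApplication already exists and that winId is the X11 window id you already have:

#include <QGuiApplication>
#include <QScreen>
#include <QPixmap>

QPixmap grabOneWindow(WId winId)
{
    QScreen *screen = QGuiApplication::primaryScreen();
    if (!screen)
        return QPixmap();
    // Grabs the window's current on-screen contents; as the quoted
    // documentation warns, obscured areas are undefined.
    return screen->grabWindow(winId);
}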
I am trying to get the color of a pixel on my screen using node.js. I want it to be returned in RGB format, e.g. (255, 0, 0). My current solution is to use screenshot-desktop to screenshot my entire screen in JPG format, decode it to get the raw pixel data, and get the color of a given pixel. However, this lags out my entire computer for 1-2 seconds as it is taking the screenshot. This is unusable as I would like to do this multiple times per second. So my question is: How can I get the color of a given pixel on the screen, without taking a full screenshot?
I am using Linux with X11. There is an X11 library for node.js, so I assume I should use that to get the pixel color, I'm just not sure how. If you could show me how to do it in C then I can easily use node.js to do the same thing.
Thanks!
Oh my gosh I just figured it out after posting this. I was using robotjs for reading the mouse position and I totally forgot it can do screen stuff too! So, the solution would be to do
var robot = require('robotjs');
var color = robot.getPixelColor(x, y);
X11 solution using the x11 node library (I am the author):
query the windows tree with QueryTree, starting at the root window
get every child's geometry using the GetGeometry request
if your point is not inside any child, use the current window id and get a 1x1 pixmap from the current image: GetImage(format, currentWindow, x, y, 1, 1, planeMask) (2 for format and 0xffffffff for planeMask should work). Make sure you calculate the relative x, y position as you traverse the windows tree.
if a child window covers your point, query that window's children and repeat. Note that QueryTree returns windows in bottom-to-top stacking order, so make sure you pick the last one covering your point.
Once you have the 1x1 pixmap from the topmost window under your point, the buffer should contain only the color bytes for your image; the RGB order and bit masks might depend on red_mask, green_mask, blue_mask from display.screen[0].depths[visual].
If you cache the "topmost window" between requests and only start from the root when it no longer matches, the above solution should be much more performant than the one using robotjs (although much more low level and complicated). Good luck!
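Addendum: for anyone who prefers plain Xlib from C/C++, the same walk could look roughly like the sketch below (error handling trimmed; XGetWindowAttributes stands in for the GetGeometry request since it also returns the map state):

#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <stdio.h>

// Walk down from 'win', translating (x, y) into each child's coordinate
// space, and return the topmost viewable window containing the point.
static Window topmost_window_at (Display *dpy, Window win, int *x, int *y)
{
    Window root, parent, *children = NULL;
    unsigned int n = 0;
    if (!XQueryTree (dpy, win, &root, &parent, &children, &n))
        return win;

    Window hit = None;
    // Children come back in bottom-to-top stacking order: the last match wins.
    for (unsigned int i = 0; i < n; ++i) {
        XWindowAttributes a;
        if (!XGetWindowAttributes (dpy, children[i], &a) || a.map_state != IsViewable)
            continue;
        if (*x >= a.x && *x < a.x + a.width && *y >= a.y && *y < a.y + a.height)
            hit = children[i];
    }
    if (children)
        XFree (children);
    if (hit == None)
        return win;

    XWindowAttributes a;
    XGetWindowAttributes (dpy, hit, &a);
    *x -= a.x;  // make the point relative to the child before recursing
    *y -= a.y;
    return topmost_window_at (dpy, hit, x, y);
}

int main (void)
{
    Display *dpy = XOpenDisplay (NULL);
    int x = 100, y = 100;  // screen coordinates to sample
    Window target = topmost_window_at (dpy, DefaultRootWindow (dpy), &x, &y);

    // 1x1 GetImage; ZPixmap is format 2 in the protocol, AllPlanes the plane mask.
    XImage *img = XGetImage (dpy, target, x, y, 1, 1, AllPlanes, ZPixmap);
    unsigned long pixel = XGetPixel (img, 0, 0);
    printf ("pixel: 0x%06lx\n", pixel);  // decode with the visual's RGB masks

    XDestroyImage (img);
    XCloseDisplay (dpy);
    return 0;
}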
When using the color picker widget in GTK applications I often use a different color palette from the one given by default, as shown in the picture below. While the program is running I can change the default colors and they stay changed; however, when I close the program those modifications disappear.
I wonder how I can make those modifications persist on disk.
From the tags you chose, the application name seems to be Dia. In the application, nothing lets you set this option. So the short answer is: no.
The issue is that Dia uses the now deprecated GtkColorSelectionDialog (in favor of GtkColorChooserDialog). In the deprecated version, there is a flag to tell the widget to show/hide the color palette, but that's pretty much the only control you have (see gtk_color_selection_set_has_palette).
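For what it's worth, toggling that palette on the deprecated widget is a single call, roughly like this (where selection is the GtkColorSelection inside the dialog):

gtk_color_selection_set_has_palette (GTK_COLOR_SELECTION (selection), TRUE);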
In the new widget version (which, by the way, looks totally different), you have direct access to a gtk_color_chooser_add_palette:
void
gtk_color_chooser_add_palette (GtkColorChooser *chooser,
                               GtkOrientation   orientation,
                               gint             colors_per_line,
                               gint             n_colors,
                               GdkRGBA         *colors);
You can see you have many more options as far as customizing the palette is concerned. You even get to decide the colors. This means you could save your current selection in the palette: at application quit, save all the palette's colors in some sort of settings, and load them back at application start.
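As a rough illustration (not actual Dia code), that save/restore could be done with a GKeyFile; the file path, group name and key names below are made up:

#include <gtk/gtk.h>

// Save the palette colors to a key file ("path" is whatever settings
// location the application prefers).
static void save_palette (const GdkRGBA *colors, gint n_colors, const gchar *path)
{
    GKeyFile *kf = g_key_file_new ();
    for (gint i = 0; i < n_colors; i++) {
        gchar *key = g_strdup_printf ("color%d", i);
        gchar *value = gdk_rgba_to_string (&colors[i]);  // e.g. "rgb(255,0,0)"
        g_key_file_set_string (kf, "palette", key, value);
        g_free (value);
        g_free (key);
    }
    g_key_file_save_to_file (kf, path, NULL);
    g_key_file_unref (kf);
}

// Load the colors back and install them on the chooser at startup.
static void load_palette (GtkColorChooser *chooser, const gchar *path)
{
    GKeyFile *kf = g_key_file_new ();
    if (g_key_file_load_from_file (kf, path, G_KEY_FILE_NONE, NULL)) {
        gsize n = 0;
        gchar **keys = g_key_file_get_keys (kf, "palette", &n, NULL);
        GdkRGBA *colors = g_new0 (GdkRGBA, n);
        for (gsize i = 0; i < n; i++) {
            gchar *value = g_key_file_get_string (kf, "palette", keys[i], NULL);
            gdk_rgba_parse (&colors[i], value);
            g_free (value);
        }
        // Replace the default palette (9 colors per row, horizontal layout)
        gtk_color_chooser_add_palette (chooser, GTK_ORIENTATION_HORIZONTAL,
                                       9, (gint) n, colors);
        g_free (colors);
        g_strfreev (keys);
    }
    g_key_file_unref (kf);
}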
As a final note, I looked at the Dia source code and found that they seem to be looking to make the move to the new widget. Here is an excerpt:
// ...
window = self->color_select =
  /*gtk_color_chooser_dialog_new (self->edit_color == FOREGROUND ?
        _("Select foreground color") : _("Select background color"),
        GTK_WINDOW (gtk_widget_get_toplevel (GTK_WIDGET (self))));*/
  gtk_color_selection_dialog_new (self->edit_color == FOREGROUND ?
        _("Select foreground color") : _("Select background color"));
selection = gtk_color_selection_dialog_get_color_selection (GTK_COLOR_SELECTION_DIALOG (self->color_select));
self->color_select_active = 1;
//gtk_color_chooser_set_use_alpha (GTK_COLOR_CHOOSER (window), TRUE);
gtk_color_selection_set_has_opacity_control (GTK_COLOR_SELECTION (selection), TRUE);
// ...
From the commented code, it seems they are trying to make the move...
How come this happens to my QPixmap painted in a QGraphicsView?
It has 0.8 opacity applied in code.
It's a PNG image. The original looks like this:
I see this kind of strange flickering that stays the same depending on the height of the QMainWindow. When I resize it, some of the pixmaps get distorted.
I have the problem that the text on three UIButtons gets wrapped to several lines in Interface Builder, but when I run the code, the text is larger than the button and on one line only. I tried setting NSLineBreakMode and NSTextAlignment, but neither helped.
In Interface Builder it looks correct, like this: http://imgur.com/WplkqQV while on the simulator it looks like this: http://imgur.com/0YPpEfU. Any ideas?
Thanks in advance.
There is certainly something very odd about your example; I can't reproduce it. You must be doing something to the buttons that you have not described in your question. If you set the button's line break mode to Word Wrap (in the nib), and if your constraints are sensible so that the button can get wider in landscape and narrower in portrait, then it will wrap in portrait and not in landscape, which I believe is what you want. Here are screenshots of a button in the Simulator on my machine (ignore the actual widths of the buttons; it's only an example; what matters is the text wrapping):
The real problem, however, is the height. You'll notice that it isn't changing. This is because a round rect button has an intrinsic height. If you want the height to change, to make more vertical room for the wrapped text, you will probably need to subclass or intervene in the layout process after rotation. For example, I get pretty nice results like this:
- (void)viewDidLayoutSubviews {
    CGRect f = self.button.bounds;
    if (UIInterfaceOrientationIsLandscape(self.interfaceOrientation))
        f.size.height = 44;
    else
        f.size.height = 60;
    self.button.bounds = f;
}
I am developing an SDL OpenGL application on Ubuntu and have noticed a problem with the mouse range when a new window size is set. The initial size of my application is 600x400 and the mouse range (x, y) reflects this. However, when a user changes the window to any other size (using given predefined sizes), the mouse range still only reflects a 600x400 window and causes issues with mouse-location functionality.
To set the new resolution, I call:
SDL_SetVideoMode(Width, Height, 32, SDL_OPENGL);
To my understanding this should handle the mouse range resizing, but it doesn't seem to do so on Linux. Can anyone give me a solution to this problem?
Note: A possible hack seems to be to exit SDL and re-initialize using SDL_Init(SDL_INIT_EVERYTHING);
After some digging, I found the problem was that I was calling SDL_GetMouseState(0,0) after the size change was made, which apparently was interfering with the recalculation of the mouse range. However, I've gone through the SDL source and I can't really determine why this would affect it. There seems to be some mouse-state switching that may be causing it.
Any time I resize the window, I execute the following to refresh my viewport:
m_ParentWindow = SDL_SetVideoMode( m_width, m_height, m_depth, m_SDL_Vid_Flags );
glViewport(0,0,m_width,m_height);
Clear();
Where Clear calls:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
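In case it helps, here is a minimal sketch of how that flow could be wired together (SDL 1.2; the function name and flag choices are assumptions for illustration, not my original code):

#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>

// Apply one of the predefined sizes, refresh the GL viewport, then query
// the mouse with real int pointers.
void applyResolution (int width, int height)
{
    SDL_Surface *screen = SDL_SetVideoMode (width, height, 32, SDL_OPENGL);
    if (!screen)
        return;

    glViewport (0, 0, width, height);
    glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity ();

    // Query the mouse only after the mode has been reset; a stray
    // SDL_GetMouseState(0, 0) around the resize was what interfered with
    // the range recalculation in my case.
    int mouseX = 0, mouseY = 0;
    SDL_GetMouseState (&mouseX, &mouseY);
}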