I know it's possible to rotate video output in X server to display in portrait mode as well as landscape.
I'm curious if it's possible to rotate the video output that occurs pre-X server: the white text on black background shown as the machine boots (rc.sysinit, bringing up eth connections, etc.).
If you use the framebuffer console, you can pass the fbcon=rotate:n kernel parameter at boot time to rotate the console output
(n = 0: no rotation; n = 1: 90° clockwise; n = 2: upside down; n = 3: 90° counterclockwise).
The framebuffer options are documented in the kernel source in the file
linux-source-2.6.32/Documentation/fb/fbcon.txt
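As a minimal sketch on a GRUB 2 system (paths and exact values may differ per distro), the parameter goes on the kernel command line; the same knob is also exposed at runtime through sysfs on kernels with framebuffer console support:

# /etc/default/grub: append the parameter to the kernel command line,
# then run update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet fbcon=rotate:1"

# or rotate already-running consoles via sysfs
echo 1 > /sys/class/graphics/fbcon/rotate      # current console only
echo 1 > /sys/class/graphics/fbcon/rotate_all  # all consoles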
I am trying to get the color of a pixel on my screen using node.js. I want it to be returned in RGB format, e.g. (255, 0, 0). My current solution is to use screenshot-desktop to screenshot my entire screen in JPG format, decode it to get the raw pixel data, and get the color of a given pixel. However, this lags out my entire computer for 1-2 seconds as it is taking the screenshot. This is unusable as I would like to do this multiple times per second. So my question is: How can I get the color of a given pixel on the screen, without taking a full screenshot?
I am using Linux with X11. There is an X11 library for node.js, so I assume I should use that to get the pixel color, I'm just not sure how. If you could show me how to do it in C then I can easily use node.js to do the same thing.
Thanks!
Oh my gosh I just figured it out after posting this. I was using robotjs for reading the mouse position and I totally forgot it can do screen stuff too! So, the solution would be to do
var robot = require('robotjs');
var color = robot.getPixelColor(x, y);
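One caveat, if I remember the robotjs API correctly: getPixelColor returns the color as a hex string such as "ff0000" rather than an RGB triple, so a small conversion is needed to get the (255, 0, 0) format asked for:

// color is a hex string like "ff0000"; split it into [r, g, b] integers
var rgb = [0, 2, 4].map(function (i) {
    return parseInt(color.substr(i, 2), 16);
});
console.log(rgb); // e.g. [ 255, 0, 0 ]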
X11 solution using the x11 node library (I am the author):
query the windows tree with QueryTree, starting at the root window
get every child's geometry using the GetGeometry request
if your point is not inside any child, use the current window id and get a 1x1 pixmap from the current image: GetImage(format, currentWindow, x, y, 1, 1, planeMask) (2 for format, i.e. ZPixmap, and 0xffffffff for the plane mask should work). Make sure you calculate the relative x, y position as you traverse the windows tree.
if a child window covers your point, query that window's children and repeat. Note that QueryTree returns windows in bottom-to-top stacking order, so make sure you pick the last one covering your point
Once you have the 1x1 pixmap from the topmost window under your point, the buffer should contain only the color bytes for your image; the RGB order and bit masks might depend on red_mask, green_mask, blue_mask from display.screen[0].depths[visual].
If you cache the "topmost window" between requests and only start from the root when it no longer matches, the above solution might be much more performant than the one using robotjs (although much more low level and complicated). Good luck!
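Since you said a C example would be enough to port: here is a minimal C sketch of the same idea using plain Xlib rather than the raw protocol requests above. It grabs a 1x1 image from the root window (so it skips the window-tree walk, which is the part that makes the approach above faster) and lets the server decompose the pixel value; the coordinates are hypothetical:

/* Build with: cc pixel.c -o pixel -lX11 */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

int main(void)
{
    int x = 100, y = 100;               /* hypothetical screen coordinates */
    Display *dpy = XOpenDisplay(NULL);  /* connects using $DISPLAY */
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    /* Fetch a single pixel from the root window as a ZPixmap. */
    XImage *img = XGetImage(dpy, DefaultRootWindow(dpy),
                            x, y, 1, 1, AllPlanes, ZPixmap);
    if (!img) {
        fprintf(stderr, "XGetImage failed\n");
        return 1;
    }

    /* Ask the server to decompose the pixel value into 16-bit RGB. */
    XColor c;
    c.pixel = XGetPixel(img, 0, 0);
    XQueryColor(dpy, DefaultColormap(dpy, DefaultScreen(dpy)), &c);
    printf("(%d, %d, %d)\n", c.red >> 8, c.green >> 8, c.blue >> 8);

    XDestroyImage(img);
    XCloseDisplay(dpy);
    return 0;
}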
I don't know if I'm misunderstanding something fundamental in how screen resolutions work, but I'm getting stuck on an issue with the Kindle Fire HD (7").
I have a responsively designed page that, as normal, uses CSS media queries to change the presentation of certain elements. This works as expected on all browsers and devices tested, except for when browsing with the Kindle Fire HD (7"). According to specs (http://en.wikipedia.org/wiki/Kindle_Fire_HD) it has a screen resolution of 1280 x 800 px. This is also verified when I check the device using WURFL's test tool at tools.scientiamobile.com.
So... I have breakpoint screen widths set for
'mobile' - 767px and below
'tablet' - 768 - 989 px
'desktop' - 990px and above
... so I'd expect the Kindle Fire to display my page in 'tablet' mode in portrait orientation, or 'desktop' mode in landscape. However instead it shows it in unexpectedly smaller breakpoints: 'mobile' mode in portrait, and 'tablet' mode in landscape.
On closer inspection, I'm not sure this is actually much to do with my webpage, or its CSS. When using this device, I also seem to be seeing 'smaller' breakpoint views of other RWD sites (e.g. in portrait mode, I get the 'tiny' breakpoint view of getbootstrap.com, which is aimed at 767px and below).
What's then strange is that, when detecting the screen size using JavaScript, I get 534 x 854px (and have also tested this again on other sites, like supportdetails.com, and got the same results).
I haven't found any similar issues reported re this device, so I'm wondering a) if anyone's encountered similar issues, or b) if I'm just misunderstanding something crucial with how screen resolutions are detected by different devices.
Thanks!
When doing media queries you need to take into account the device's CSS pixel ratio.
The value to use in the media query = (advertised number of physical pixels) / (CSS pixel ratio).
This wikipedia page is a good source of CSS pixel ratios to use for this:
http://en.wikipedia.org/wiki/List_of_displays_by_pixel_density
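Worked example for this device: the 7" Kindle Fire HD advertises 1280 x 800 physical pixels and is commonly listed with a CSS pixel ratio of 1.5, so the browser reports roughly 1280 / 1.5 ≈ 853 by 800 / 1.5 ≈ 533 CSS pixels. That matches the 534 x 854 px your JavaScript detection returns, and explains the breakpoints you see: about 533px wide in portrait (the 767px-and-below 'mobile' range) and about 853px in landscape (the 768-989px 'tablet' range).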
Good Luck
I have a question whose answer is hard to search for (I always end up with answers about monitor manipulation). I am writing a bash shell script to help me with my code dev and I have two monitors.
When I run the executable that I have compiled, I want to tell it to run on a particular monitor (i.e. a different monitor from the one my terminal is open on, so I can view the debug output on one screen and have the app on another).
How would I go about doing this? Something like:
./myProject > but run on monitor 2
Where myProject is my binary executable.
Thanks all.
If you run separate displays on each monitor (less likely these days), the DISPLAY environment variable is what you want.
If you use Xinerama (spreading one logical display across multiple monitors), however:
Aside, some X11 vocabulary:
A "display" is one or more "screens" plus input devices (e.g. keyboard and mouse), a.k.a. a "seat."
A "screen" is a logical canvas that is partially or completely shown on one or more "monitors." When multiple monitors serve one "screen," windows can be partially displayed on each monitor but share the same X11 DISPLAY identifier; this is called Xinerama.
The DISPLAY format is host : display-number . screen-id, so e.g. on my Xinerama set-up both monitors are part of screen 0, on a display number that counts up from 0 with each user logged in on the same host.
"Seats" are logical groups of monitor + input that use different hardware; multiple "displays" can also occur through "virtual console" switching, which is how Gnome and KDE allow multiple users to sign in on a single-"seat" machine.
Most GUI toolkits allow you to specify the window's geometry using the --geometry or -geometry switch.
Qt uses the older MIT-style -geometry form. GTK+/Gnome uses the GNU-style --geometry.
This assumes that you're allowing Qt to post-process your command line, e.g. passing argv into QApplication or similar.
The "logical display" will have a resolution which is the sum of the resolutions of your monitors in each direction of the arrangement. For example, I have two 1920×1080 monitors hooked up right now. xrandr reports:
Screen 0: minimum 320 x 200, current 3840 x 1080, maximum 8192 x 8192
To display a window on the right-hand monitor, I can give a geometry string that has its x co-ordinates between 1920…3839 (inclusive).
The usual format is width x height ± x-offset ± y-offset, but the width and height are optional if you prefer to take the defaults. The ± is + to count relative to the top/left, or - to count relative to the bottom/right.
So, for example:
gedit --geometry 800x600+1920+0 # set size at top-left of right screen
gedit --geometry +1920+100 # default size near the top-left of right screen
gedit --geometry -0+0 # default size at top-right of entire display
Unfortunately, the only programmatic way I know of to determine the area of the display on each monitor from the shell would be to parse the output from xrandr; e.g.
$ xrandr
Screen 0: minimum 320 x 200, current 3840 x 1080, maximum 8192 x 8192
LVDS1 connected (normal left inverted right x axis y axis)
1366x768 60.0 +
1024x768 60.0
800x600 60.3 56.2
640x480 59.9
VGA1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 510mm x 287mm
1920x1080 60.0*+
1680x1050 60.0
1280x1024 60.0
1440x900 59.9
1280x720 60.0
1024x768 60.0
800x600 60.3
640x480 60.0
720x400 70.1
HDMI1 connected 1920x1080+1920+0 (normal left inverted right x axis y axis) 510mm x 287mm
1920x1080 60.0*+
1680x1050 59.9
1280x1024 60.0
1440x900 59.9
1280x720 60.0
1024x768 60.0
800x600 60.3
640x480 60.0
720x400 70.1
DP1 disconnected (normal left inverted right x axis y axis)
$ xrandr | perl -ne 'if (/(\d+)x(\d+)\+(\d+)\+(\d+)/) '\
> ' { print "$3,$4 - ", $3 + $1 - 1, ",", $4 + $2 - 1, "\n" }'
0,0 - 1919,1079
1920,0 - 3839,1079
(You'd normally want to avoid splitting the Perl one-liner across two lines in the shell, but the '\…' trick there is to make it legible on SO.)
The --geometry answer given above and accepted simply won't work in many cases...
There are a lot of near-identical questions like this floating around the various StackExchange sites and AskUbuntu; the answer I eventually found (on a Linux Mint distro based on Ubuntu 14.04) is to use wmctrl. I'm leaving an answer just since no one else has mentioned it on this thread.
(There are other tools called Devil's Pie and Compiz; if you search for those too, you'll find the Q&As I'm talking about.)
wmctrl is the sort of simple Unix tool you're probably looking for if you're writing Bash scripts. I also saw someone suggest using xdotool; it depends what the specific goal is.
wmctrl offers window matching by window title or pid (not compatible with all types of X-managed windows)
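For the original question, a rough sketch (the window title and coordinates here are hypothetical; wmctrl's -e argument is gravity,x,y,width,height, where -1 leaves a value unchanged):

./myProject &                             # start the app in the background
sleep 1                                   # crude: give its window time to map
wmctrl -r "myProject" -e 0,1920,0,-1,-1   # move the matching window to x=1920

wmctrl -r matches on a substring of the window title; run wmctrl -l first to see the titles currently available.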
Some helpful resources:
wmctrl man page
user doc 1
user doc 2
"How to shift applications from workspace 1 to 2 using command?"
More specific answer RE: specifying the dimensions
“Both wmctrl and xdotool work fine if you set a window maximized.”
I connect a second monitor on the left or on the right depending on where I'm working each day, and I think the solution for me will involve
finding the dimensions from xrandr (as shown in BRPocock's answer),
parsing that to tell which is the external connected monitor (VGA/HDMI etc.) rather than the inbuilt one,
specifying a dimension to give to a maximised window on the connected screen (i.e. the left/right/top/bottom offset, which will change according to side of the screen being used)
Leaving my notes and [eventually] some code produced here in case it's useful to anyone else.
Use a fifo
open a terminal window on the monitor you want the output to appear on and do
mkfifo /tmp/myfifo
cat /tmp/myfifo
then on the source terminal do
./myProject >/tmp/myfifo
This assumes it is a console app. If it is graphical then you will need another approach, which will be dependent on what windowing manager + toolkit you are using.
All you need to do is set the DISPLAY environment variable prior to running your application.
To find out what you need to set it to, run the following on the monitor you want it to show up on:
echo $DISPLAY
You should see, for example :0.1 or :0.0.
Then you can specify that you want your app to run on that display like so:
DISPLAY=:0.1 ./my_app
As your application uses Qt, you are probably using KDE. In System Settings > Window Behavior > Advanced, set Placement to Under Mouse. Click the desired monitor, Alt+Tab to switch to your terminal, and start the program.
I'm using a Raspberry Pi running Raspbian Wheezy as a digital photo frame. The Pi is configured to autologin on boot and execute a bash script that starts fbi as a slideshow, like so:
fbi -noverbose -a -t 10 /home/pi/Pictures/*.jpg /home/pi/Pictures/*.png
I've noticed that any portrait photos (i.e. photos that are taller than they are wide) are automatically rotated 90 degrees so that they appear as landscape.
If I remove the -noverbose switch, the dimensions are displayed underneath each image, and what was once a 480x640 pixel image is displayed as 640x480. Removing the -a autozoom switch doesn't help either.
Can anyone help get my photos displaying in their original orientation regardless of aspect ratio?
I know this issue is a little old, but I've been running into it as well and think I found the solution this morning. I think it has to do with the EXIF orientation (rotate) flag. From what I understand, programs can handle this flag differently, or not acknowledge it at all. So I believe the solution is to rotate the images and save them that way, ignoring the EXIF data.
I plan on doing it using a windows program I found located here: http://www.makeuseof.com/tag/are-your-iphone-photos-refusing-to-rotate-in-windows-explorer-here-is-the-solution/
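Since the photo frame is a Raspberry Pi, it may be more convenient to fix the files on the Pi itself. Assuming the images carry an EXIF orientation flag, a tool such as exiftran (from the same fbida family as fbi) can losslessly apply the rotation to the JPEG data and reset the flag; jhead offers the same via -autorot:

# rotate the pixels to match the EXIF orientation flag, then clear the flag
exiftran -ai /home/pi/Pictures/*.jpg

# alternative, using jhead (JPEG only, needs jpegtran installed)
jhead -autorot /home/pi/Pictures/*.jpg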
I'm writing a program in Haskell with SDL. When I do this:
screen <- trySetVideoMode width height depth [HWSurface,OpenGL]
the program behaves correctly. Now, if I do this:
screen <- trySetVideoMode width height depth [HWSurface,OpenGL,Fullscreen]
the program starts with a black fullscreen mode, then comes back windowed and stays that way. I should add that the resolution used in the application is 1920x1080, which is also my screen resolution.
Does anyone know why? How can I make it fullscreen?