The question
Is there a way, from code, to prevent an X session from starting the screensaver, going into power-save mode, or blanking the screen?
What I'm working with
Language: C/C++
GUI framework: GTK3
Hardware: Raspberry Pi 3B
Software: Raspbian 10 - Buster
My program needs to run on screen for long periods (up to 12 hours) with the GUI running and no user interaction. The GUI acts as a status monitor for systems in the field (if the screen goes black, something went wrong).
What I know
GTK3 can determine if the screensaver is active
GTK3 has a boolean property that reports whether the system's screensaver is active (see here), but the documentation makes no other mention of it.
Raspbian uses screen blanking
Raspbian does not come with xscreensaver or any other package installed to control the screen-off time. Instead, it relies mostly on X to "blank screen". This can be managed with the xset command as a superuser. The canonical way to do this is described on the hardware-specific Stack Exchange (here).
End-users cannot be trusted
In my case, the program will be used by folks who are barely computer literate. The result must be user-friendly and not expect the user to ever touch a terminal, let alone to make permanent changes to the startup config of X. While one option would be to distribute the program as a customized Raspbian disk image, I would like to explore other options.
I need to see an example
While this question offered some starting points, implementing them has been problematic. When I run the following MWE, with or without the commented line, nothing happens, and I cannot simulate the screen-blanking function.
#include <X11/extensions/scrnsaver.h>

int main() {
    // XScreenSaverSuspend;
    XForceScreenSaver;
    usleep(1000000);
    return 0;
}
You have to pass parameters to the function:
void XScreenSaverSuspend(Display *dpy, Bool suspend);
#include <X11/Xlib.h>                 /* XOpenDisplay(), XFlush() */
#include <X11/extensions/scrnsaver.h> /* XScreenSaverSuspend() */
#include <unistd.h>                   /* usleep() */

int main() {
    Display *display = XOpenDisplay(NULL); /* or get it from GTK, see below */
    XScreenSaverSuspend(display, True);
    XFlush(display); /* make sure the request reaches the server */
    usleep(1000000);
    return 0;
}
But you hardly have time to see the result with this program, and when the program ends the screensaver goes back to its previous state.
For your GTK framework, you can obtain the Display using:
Display *
gdk_x11_display_get_xdisplay (GdkDisplay *display);
Docs here.
For X:
/* use the information from the environment variable DISPLAY
to create the X connection:
*/
Display * dis = XOpenDisplay((char *)0); // or ":0.0"
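Putting those pieces together, here is a minimal sketch (untested, and assuming the GDK X11 backend is in use and that you link against gtk+-3.0, X11 and Xss) of a GTK3 program that suspends the screensaver for its whole lifetime:
#include <gtk/gtk.h>
#include <gdk/gdkx.h>                 /* gdk_x11_display_get_xdisplay() */
#include <X11/extensions/scrnsaver.h> /* XScreenSaverSuspend() */

int main(int argc, char *argv[]) {
    gtk_init(&argc, &argv);

    GtkWidget *win = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    g_signal_connect(win, "destroy", G_CALLBACK(gtk_main_quit), NULL);
    gtk_widget_show_all(win);

    /* Grab the Xlib Display behind the default GDK display and ask the
       server not to blank while this client stays connected. */
    Display *xdpy = gdk_x11_display_get_xdisplay(gdk_display_get_default());
    XScreenSaverSuspend(xdpy, True);
    XFlush(xdpy);

    gtk_main(); /* the suspension lasts until the program exits */
    return 0;
}
The suspension is tied to the X connection, so it is lifted again automatically when the program (and its display connection) goes away.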
A hacky, OS-specific solution:
Raspbian does not appear to require superuser elevation to change these settings with xset. Adding these lines to the code:
system("xset -dpms");
system("xset s off");
is sufficient to turn off the power management settings and the screensaver.
This is obviously sloppy, and it potentially leaves the OS in an undesirable state if the program breaks before these have a chance to be reset to default values. More elegant answers are encouraged.
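One way to narrow that window (a sketch only, and it still will not help on a crash or SIGKILL) is to register a cleanup handler with atexit() that puts the settings back on normal termination:
#include <stdlib.h> /* system(), atexit() */

static void restore_blanking(void) {
    system("xset +dpms"); /* re-enable display power management */
    system("xset s on");  /* re-enable the screensaver/blanking */
}

int main(void) {
    system("xset -dpms");
    system("xset s off");
    atexit(restore_blanking);
    /* ... run the GUI as usual ... */
    return 0;
}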
Related
I am having trouble using the Python curses module on Windows. I have used the wheel found here to get a script I had written on my Mac to run on my desktop. For now, the script just displays a border around the window using the screen.border() method and another line across the whole width of the display. I displayed the bar across the screen using this:
dimensions = screen.getmaxyx()
screen.addstr(dimensions[0]/2, 0, "-"*dimensions[1])
I ran this in a loop, resetting dimensions each time and using getch() to check for curses.KEY_RESIZE, then calling screen.erase(), so that I can resize the window and the script still works. When I ran this on Windows after installing the wheel for Python 3.7 (win32, because the amd64 one gave an error), I found that screen.getmaxyx() always returned the same value (the initial screen size) and never changed when I resized the window. I would appreciate any help if anyone knows a way to fix this issue, or, if I simply cannot use curses on Windows, an alternative library for Windows. Thank you!
Call resize_term(0, 0) after you get a KEY_RESIZE. (I’m not sure of the exact Python mapping.)
That ("on Windows") is probably using PDCurses, which doesn't have a way to automatically update the screen size (e.g., as POSIX-based ncurses does with its SIGWINCH handler). Rather, it detects the window-size change, and the application can call is_termresized to decide whether to tell the library to update its data structures to match using resize_term.
The Python wrapper doesn't use that.
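For reference, the pattern described above looks roughly like this against the PDCurses C API directly (a sketch; is_termresized() and resize_term() are the PDCurses calls mentioned above, which the Python wrapper does not expose):
#include <curses.h>

int main(void) {
    initscr();
    cbreak();
    noecho();
    keypad(stdscr, TRUE);
    timeout(100); /* poll getch() every 100 ms instead of blocking */

    int ch;
    while ((ch = getch()) != 'q') {
        if (is_termresized())  /* PDCurses: has the window size changed? */
            resize_term(0, 0); /* adopt the new size */

        int rows, cols;
        getmaxyx(stdscr, rows, cols);
        erase();
        box(stdscr, 0, 0);                   /* border around the window */
        mvhline(rows / 2, 1, '-', cols - 2); /* line across the width */
        refresh();
    }

    endwin();
    return 0;
}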
Would anyone know how to disable the virtual terminals in Linux? I am using Yocto (Morty) on an i.MX6 processor. Even though our base distribution is Yocto, unfortunately we have diverged from building it with recipes, so this is more of a straight Linux question than a Yocto one…
To give some detail about my problem: it is for an embedded device that has an HDMI port. When I attach a monitor to the HDMI port it shows the Linux penguin logo and a getty login prompt, and it blanks out after 600 seconds. I just want to use the HDMI port as an output with nothing displayed on it, and I want it to stay on all the time.
I have found that the HDMI port maps to /dev/tty1 – when I type echo "asdfasdf" > /dev/tty1 I see the characters output on the monitor.
Here are a few things I have tried to no avail – a lot of these are not needed if I can figure out how to disable it as a virtual terminal…
• I figured out how to disable the getty service but a cursor still blinks. I don’t even want a cursor to show
• I have tried to disable the display of the penguins by disabling the LOGO in the kernel config parameters - I commented anything with LOGO out:
CONFIG_LOGO=y
CONFIG_LOGO_LINUX_MONO=y
CONFIG_LOGO_LINUX_VGA16=y
CONFIG_LOGO_LINUX_CLUT224=y
To no avail; the logo still shows.
• The fact that it blanks after 600 seconds is console blanking – I can see it set to 600 in the file: /sys/module/kernel/parameters/consoleblank. When I issue the command: echo -e '\033[9;0]'>/dev/tty1
It sets the console blanking to 0 and wakes the terminal. Being able to wake the console up is a limited success, but I would like to disable the virtual terminal altogether…
• I tried commenting out any virtual terminal defines in the config file to no avail:
CONFIG_VT=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
Everything I have read suggests that /dev/tty1 is a virtual terminal or console. From what I have read about the VT option, disabling CONFIG_VT should do it:
VT — Virtual terminal
Say yes here to get support for terminal devices
with display and keyboard devices. These are called "virtual" because
you can run several virtual terminals (also called virtual consoles)
on one physical terminal. You need at least one virtual terminal
device in order to make use of your keyboard and monitor. Therefore,
only people configuring an embedded system would want to say no here
in order to save some memory; the only way to log into such a system
is then via a serial or network connection. Virtual terminals are
useful because, for example, one virtual terminal can display system
messages and warnings, another one can be used for a text-mode user
session, and a third could run an X session, all in parallel.
Switching between virtual terminals is done with certain key
combinations, usually Alt-function key. If you are unsure, say yes, or
else you won't be able to do much with your Linux system.
But for some reason it doesn’t do anything!
• I found this thread: https://askubuntu.com/questions/357039/how-do-i-disable-virtual-consoles-tty1-6 among others, but none are much help, since my distribution does not have any of the directories used in the solutions offered in that thread or any others I have found. For instance, I do not have an /etc/events.d directory, nor an /etc/default/console-setup file, nor an /etc/init directory… I imagine the reason for this is that my distribution uses systemd while the solutions are based on SysV init?
Disabling the logo or console blanking would not be necessary if I could just figure out how to disable that port as a terminal…
So does anyone have pointers or things I could try? I am relatively new to Linux (returning after 10 years - I worked with DNX on v2.6 kernels, and it seems everything I knew about init is fairly obsolete), so I am sure I am missing a lot…
Thanks,
- Chuck
I think I found the answer to my question. This is actually a framebuffer console, documented in Documentation/fb/fbcon.txt. From the documentation:
The framebuffer console (fbcon), as its name implies, is a text
console running on top of the framebuffer device. It has the
functionality of any standard text console driver, such as the VGA
console, with the added features that can be attributed to the
graphical nature of the framebuffer.
Commenting out the line
CONFIG_FRAMEBUFFER_CONSOLE=y
in the configuration file located in /arch/arm/configs will disable it.
Also this part of the documentation shows you how to disable it at runtime:
So, how do we unbind fbcon from the console? Part of the answer is in
Documentation/console/console.txt. To summarize:
Echo a value to the bind file that represents the framebuffer console
driver. So assuming vtcon1 represents fbcon, then:
echo 1 > sys/class/vtconsole/vtcon1/bind - attach framebuffer console to console layer
echo 0 > sys/class/vtconsole/vtcon1/bind - detach framebuffer console from console layer
When I issue the echo 0 command, the cursor stops blinking and starts blinking again when I issue the echo 1 command.
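If you need to perform the same unbind from a program rather than a shell (for example during startup on the embedded device), it is just a write to that sysfs file. A minimal sketch, assuming vtcon1 is the framebuffer console as in the quoted documentation (check the name files under /sys/class/vtconsole/ to be sure) and that the program runs as root:
#include <stdio.h>

int main(void) {
    /* Write 0 to detach the framebuffer console, 1 to re-attach it. */
    FILE *f = fopen("/sys/class/vtconsole/vtcon1/bind", "w");
    if (f == NULL) {
        perror("open vtcon1/bind");
        return 1;
    }
    fputs("0\n", f);
    fclose(f);
    return 0;
}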
I think there is another way of doing it as well: modifying the Yocto build environment by putting USE_VT="0" in the OpenEmbedded machine config file. The USE_VT variable is referenced by the sysvinit-inittab recipe. This answer was given to me on the Yocto Linux mailing list, but I have not tested it since we have diverged from Yocto...
I'm using NixOS with XMonad as the window manager, which was enabled via configuration.nix. This works fine.
After booting, the initial login is done via the NixOS login gui.
On a Debian system, for instance, systemd can be configured to boot only to the terminal and not directly into a desktop environment. One can then set up an .xinitrc file to start the chosen window manager or desktop environment without using any display manager (like lightdm, kdm...); it is then started by calling startx.
How would the described effect be achieved in NixOS? I guess there's a declarative way to do so.
Another question, partly related to this, is: after changing X server settings in configuration.nix (e.g. in services.xserver.synaptics) and rebuilding via nixos-rebuild switch/test, what do I have to do to make them take effect?
Those are two separate questions, so I believe you'd be much better off splitting them into two Stack Overflow questions (it's much harder now to answer only one of them, for example). That said:
AFAIK, the people building NixOS are not aware of a way to do this with systemd. If you know of such a method, I believe they might be interested to learn about it!
I suppose you want:
$ systemctl restart display-manager.service # CAUTION: see NOTE below!!!
NOTE: this will kill any open X session! (I guess that this might be the reason why it's not done automatically on nixos-rebuild switch...)
By the way, you may have noticed that after nixos-rebuild switch, a message is shown, something like: "display-manager.service is not restarted". That's what led me to find the answer to this question when I needed it myself.
One way to do it is to enable startx, which will be treated as a display manager:
services.xserver.displayManager.startx.enable = true;
Another way to accomplish this is to bypass the display manager by logging in automatically from the TTY login prompt. NixOS's default display manager being lightdm, you do that by adding the following lines to your configuration:
lightdm = {
enable = true;
autoLogin.enable = true;
autoLogin.user = "username";
};
By default you can't get terminal input in Unix without waiting for the user to press Enter.
How can I get the input instantly? I am using gdc on Debian Linux, so I can't use ncurses.
Thanks.
ncurses is a good solution that should work on almost any Linux installation with any compiler...
But if you don't want to use ncurses, there's a few other options:
My terminal.d offers it and works on most terminals, though not as many as ncurses (I'd say I cover 98% of typical setups, but there are a LOT of variations out there and I didn't try to be as comprehensive as ncurses): https://github.com/adamdruppe/misc-stuff-including-D-programming-language-web-stuff/blob/master/terminal.d
Look near the bottom of the file for a version(Demo) void main(). RealTimeConsoleInput gives you an event loop with instant input and other info if you want it (mouse, resize, etc.).
You can also just change the terminal mode with the proper tcgetattr and tcsetattr calls and then do everything else normally. You'll want to import core.sys.posix.termios; and import core.sys.posix.unistd; for the functions, then the rest is done the same as in C.
Here's how to do that:
termios old;
tcgetattr(1, &old);
scope(exit) tcsetattr(1, TCSANOW, &old); // put the terminal back to how it was
auto n = old;
n.c_lflag &= ~ICANON; // turn off canonical mode
tcsetattr(1, TCSANOW, &n); // do the change
Then you can use the input instantly.
Problem:
I have an extra set of top and bottom gnome-panels for a second monitor. When I undock my Lenovo ThinkPad (T510), the extra top and bottom panels remain, so I have two on top and two on the bottom. I am currently running a RHEL6/Fedora (x86_64) GNOME (2.28.2) instance with xmonad (0.9.1-6.1.el6) set as the window manager, using the xmonad extensions to work within GNOME.
Tried:
I've used acpi and found an event code for docking and undocking, but when I try to use a script I found in this blog post, it gets zero for the call to xrandr. The script works when called on its own from the terminal. I've tried calling a separate looping script in its own thread, and it keeps getting zero for the value, long after the screen(s) update.
I have figured out how to have a script loop every X seconds and check for a file that is touched into existence whenever the acpi script gets a zero, then perform the necessary change, but I don't like that approach.
Question:
I'm hoping someone knows a place I can drop a call to the referenced script and have my panels come and go as I would expect without needing to initiate the script manually.
Thanks!
Update: I have added a bounty of 50 (max I can do) for an answer.
Ben
I guess one of the problems listed below occurs (or both):
1) It looks like your xrandr snippet doesn't return proper values because the $DISPLAY environment variable is not set correctly. The acpi handler script normally runs as a user that is not the user running your current X session, so xrandr just does not know which $DISPLAY to access.
2) If $DISPLAY is set correctly, the acpid user might still not be able to access your running X session. You can check whether the script works from the acpi handler by executing xhost + as the user who is currently running the X session, with $DISPLAY specified in your script. This will disable access control for X; you can re-enable it with xhost - again.
Check it out; I hope it helps, or will at least point you in the right direction to dig.