Detect the power status of a display monitor dynamically - linux

Is there a way to find the status of a display monitor in a Linux environment?
Pointers to any standard C libraries / Unix calls would be helpful. I found many interesting articles on how this can be achieved on Win32, but none of them points to a solution for a Linux environment.
I tried using xrandr, but it fails to detect the status dynamically.
Any pointers?

Here is a simple program using LRMI (Linux Real Mode Interface), which queries the monitor power state through the VESA BIOS:
#include <sys/io.h>
#include "lrmi.h"

int main(void)
{
    struct LRMI_regs r = {0};

    /* VBE function 0x4F10 (Power Management), subfunction 0x02:
     * get the current display power state */
    r.eax = 0x4F10;
    r.ebx = 0x02;

    /* Real-mode BIOS calls need I/O port access (run as root) */
    ioperm(0, 1024, 1);
    iopl(3);

    if (!LRMI_init() || !LRMI_int(0x10, &r))
    {
        return -1;
    }

    /* The power state is returned in BH */
    return (r.ebx >> 8) & 0xFF;
}
Some possible return values: 0 (on), 1 (standby), 2 (suspend), 4 (off), 8 (reduced on).
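If you want the state as text rather than an exit code, a small sketch (the helper name dpms_state_name is my own, not part of LRMI) that maps those codes to labels:
/* Hypothetical helper: translate the VBE power state from BH to a label. */
const char *dpms_state_name(int state)
{
    switch (state) {
    case 0:  return "on";
    case 1:  return "standby";
    case 2:  return "suspend";
    case 4:  return "off";
    case 8:  return "reduced on";
    default: return "unknown";
    }
}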

Related

Is there a standard format for /sys/devices/system/cpu/cpu0/topology/thread_siblings_list?

Consider the following command.
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
When I run this command on my laptop running Ubuntu 16.04, I get the following output.
0,1
However, when I run it on a server running Debian 8, I get the following output.
0-1
Is the standard format or set of standard formats for this pseudo-file documented somewhere?
I searched in the Documentation directory under the kernel source and did not find a description.
There doesn't seem to be a documented way, but turbostat, which is part of linux-tools and lives in the kernel source tree, expects the format to be:
a number, followed by any single character as a separator, ..., the last number.
The current version is here.
/*
 * get_cpu_position_in_core(cpu)
 * return the position of the CPU among its HT siblings in the core
 * return -1 if the sibling is not in list
 */
int get_cpu_position_in_core(int cpu)
{
    char path[64];
    FILE *filep;
    int this_cpu;
    char character;
    int i;

    sprintf(path,
        "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
        cpu);
    filep = fopen(path, "r");
    if (filep == NULL) {
        perror(path);
        exit(1);
    }
    for (i = 0; i < topo.num_threads_per_core; i++) {
        fscanf(filep, "%d", &this_cpu);
        if (this_cpu == cpu) {
            fclose(filep);
            return i;
        }
        /* Account for no separator after last thread */
        if (i != (topo.num_threads_per_core - 1))
            fscanf(filep, "%c", &character);
    }
    fclose(filep);
    return -1;
}
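If you need to consume the file yourself, a defensive sketch (my own, not taken from turbostat) is to accept both observed forms explicitly: split entries on commas, and treat a dash inside an entry as an inclusive range:
#include <stdio.h>
#include <string.h>

/* Expand a cpulist string such as "0,1", "0-1" or "0,2-3" into
 * individual CPU numbers, one per line. */
static void print_cpulist(const char *list)
{
    char buf[256];
    strncpy(buf, list, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    for (char *tok = strtok(buf, ","); tok; tok = strtok(NULL, ",")) {
        int lo, hi;
        if (sscanf(tok, "%d-%d", &lo, &hi) == 2) {
            for (int c = lo; c <= hi; c++)   /* "a-b" is an inclusive range */
                printf("cpu%d\n", c);
        } else if (sscanf(tok, "%d", &lo) == 1) {
            printf("cpu%d\n", lo);
        }
    }
}

int main(void)
{
    print_cpulist("0-1");   /* same CPUs as "0,1" */
    print_cpulist("0,2-3");
    return 0;
}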

Linux DRM (DRI) Cannot Screen Scrape /dev/fb0 as before with FBDEV

On other Linux machines using the FBDEV drivers (Raspberry Pi, etc.), I could mmap the /dev/fb0 device and directly create a BMP file that saved what was on the screen.
Now, I am trying to do the same thing with DRM on a TI Sitara AM57XX (Beagleboard X-15). The code that used to work with FBDEV is shown below.
This mmap no longer seems to work with DRM. I'm using a very simple Qt5 application with the Qt platform linuxfb plugin. It draws just fine into /dev/fb0 and shows on the screen properly; however, I cannot read back from /dev/fb0 with a memory-mapped pointer and have an image of the screen saved to file. The captured image comes out garbled.
Code:
#ifdef FRAMEBUFFER_CAPTURE
    repaint();
    QCoreApplication::processEvents();

    // Setup framebuffer to desired format
    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo finfo;
    memset(&finfo, 0, sizeof(finfo));
    memset(&var, 0, sizeof(var));

    /* Get variable screen information. Variable screen information
     * gives information like size of the image, bits per pixel,
     * virtual size of the image etc. */
    int fbFd = open("/dev/fb0", O_RDWR);
    int fdRet = ioctl(fbFd, FBIOGET_VSCREENINFO, &var);
    if (fdRet < 0) {
        qDebug() << "Error reading variable screen information!";
        close(fbFd);
        return -1;
    }

    if (ioctl(fbFd, FBIOPUT_VSCREENINFO, &var) < 0) {
        qDebug() << "Error setting up framebuffer!";
        close(fbFd);
        return -1;
    } else {
        qDebug() << "Success setting up framebuffer!";
    }

    // Get fixed screen information
    if (ioctl(fbFd, FBIOGET_FSCREENINFO, &finfo) < 0) {
        qDebug() << "Error getting fixed screen information!";
        close(fbFd);
        return -1;
    } else {
        qDebug() << "Success getting fixed screen information!";
    }

    //int screensize = var.xres * var.yres * var.bits_per_pixel / 8;
    //int screensize = var.yres_virtual * finfo.line_length;
    //int screensize = finfo.smem_len;
    int screensize = finfo.line_length * var.yres_virtual;
    qDebug() << "Framebuffer size is: " << var.xres << var.yres << var.bits_per_pixel << screensize;

    int linuxFbWidth = var.xres;
    int linuxFbHeight = var.yres;
    int location = (var.xoffset) * (var.bits_per_pixel / 8) +
                   (var.yoffset) * finfo.line_length;

    // Perform memory mapping of linux framebuffer
    char* frameBufferMmapPixels = (char *)mmap(0, screensize, PROT_READ | PROT_WRITE, MAP_SHARED, fbFd, 0);
    assert(frameBufferMmapPixels != MAP_FAILED);

    QImage toSave((uchar*)frameBufferMmapPixels, linuxFbWidth, linuxFbHeight, QImage::Format_ARGB32);
    toSave.save("/usr/bin/test.bmp");
    sync();
#endif
Here is the output of the code when it runs:
Success setting up framebuffer!
Success getting fixed screen information!
Framebuffer size is: 800 480 32 1966080
Here is the output of fbset showing the pixel format:
mode "800x480"
geometry 800 480 800 480 32
timings 0 0 0 0 0 0 0
accel true
rgba 8/16,8/8,8/0,8/24
endmode
finfo.line_length gives the stride of the actual physical scan line in bytes. It is not necessarily the same as the screen width multiplied by the pixel size, because scan lines may be padded.
However, the QImage constructor you are using assumes no padding.
If xoffset is zero, it should be possible to construct a QImage directly from the framebuffer data using the constructor that takes a bytesPerLine argument (see the sketch below). Otherwise there are two options:
allocate a separate buffer and copy only the visible portion of each scan line into it, or
create an image from the entire buffer (including the padding) and then crop it.
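A minimal sketch of the first case, reusing var, finfo, and frameBufferMmapPixels from the question's code and assuming xoffset and yoffset are zero:
// Pass finfo.line_length as bytesPerLine so QImage honours the real stride.
QImage toSave((const uchar *)frameBufferMmapPixels,
              var.xres, var.yres,
              finfo.line_length,        // bytes per scan line, padding included
              QImage::Format_ARGB32);
toSave.save("/usr/bin/test.bmp");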
If you're using DRM, then /dev/fb0 might point to an entirely different buffer (not the currently visible one) or have a different format.
fbdev is really just for old legacy code that hasn't been ported to DRM/KMS yet, and it only has very limited modesetting capabilities.
BTW: which kernel are you using? Hopefully not that ancient and broken TI vendor kernel ...

CPU hotplug - is there a system call to disable a CPU in Linux?

Linux has a 'CPU hotplug' feature for enabling/disabling a CPU.
I want to disable one of the computer's CPUs from a C program, so my question is: how? Is it possible?
Here I found the following:
Q: How do I logically offline a CPU?
A: Do the following: # echo 0 > /sys/devices/system/cpu/cpuX/online
I couldn't find anything about system calls in this document, so hopefully someone can shed some light on this. Thanks!
There is no syscall for disabling a CPU in Linux. The sysfs interface you found is the only method, but you can replace the shell command with something like this (note: the value written must be the character "1" to bring the CPU online, or "0" to take it offline):
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void set_cpu_online(int cpu, int online)
{
    int fd;
    int ret;
    char path[256];

    snprintf(path, sizeof(path) - 1,
             "/sys/devices/system/cpu/cpu%d/online", cpu);
    fd = open(path, O_RDWR);
    assert(fd >= 0);

    ret = write(fd, online ? "1" : "0", 1);
    assert(ret == 1);

    close(fd);
}
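A hypothetical usage example (must run as root; the CPU number is illustrative):
/* Take CPU 1 offline, wait a moment, then bring it back online. */
int main(void)
{
    set_cpu_online(1, 0);
    sleep(1);
    set_cpu_online(1, 1);
    return 0;
}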

Why does my process take too long to die?

Basically I'm using Linux 2.6.34 on PowerPC (Freescale e500mc). I have a process (a kind of VM that was developed in-house) that uses about 2.25 GB of mlocked VM. When I kill it, I notice that it takes upwards of 2 minutes to terminate.
I investigated a little. First, I closed all open file descriptors, but that didn't seem to make a difference. Then I added some printk statements in the kernel, and through them I found that all the delay comes from the kernel unlocking my VMAs. The delay is uniform across pages, which I verified by repeatedly checking the locked page count in /proc/meminfo. I've checked with programs that allocate that much memory, and they all die as soon as I signal them.
What do you think I should check now? Thanks for your replies.
Edit: I had to find a way to share more information about the problem, so I wrote the program below:
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <string.h>
#include <errno.h>
#include <signal.h>
#include <sys/time.h>

#define MAP_PERM_1 (PROT_WRITE | PROT_READ | PROT_EXEC)
#define MAP_PERM_2 (PROT_WRITE | PROT_READ)
#define MAP_FLAGS (MAP_ANONYMOUS | MAP_FIXED | MAP_PRIVATE)
#define PG_LEN 4096
#define align_pg_32(addr) (addr & 0xFFFFF000)
#define num_pg_in_range(start, end) ((end - start + 1) >> 12)

static inline void __force_pgtbl_alloc(unsigned int start)
{
    volatile int *s = (int *) start;
    *s = *s;
}

int __map_a_page_at(unsigned int start, int whichperm)
{
    int perm = whichperm ? MAP_PERM_1 : MAP_PERM_2;
    if (MAP_FAILED == mmap((void *)start, PG_LEN, perm, MAP_FLAGS, 0, 0)){
        fprintf(stderr,
                "mmap failed at 0x%x: %s.\n",
                start, strerror(errno));
        return 0;
    }
    return 1;
}

int __mlock_page(unsigned int addr)
{
    if (mlock((void *)addr, (size_t)PG_LEN) < 0){
        fprintf(stderr,
                "mlock failed on page: 0x%x: %s.\n",
                addr, strerror(errno));
        return 0;
    }
    return 1;
}

void sigint_handler(int p)
{
    struct timeval start = {0, 0}, end = {0, 0}, diff = {0, 0};
    gettimeofday(&start, NULL);
    munlockall();
    gettimeofday(&end, NULL);
    timersub(&end, &start, &diff);
    /* tv_sec/tv_usec are signed; cast for a portable format string */
    printf("Munlock'd entire VM in %ld secs %ld usecs.\n",
           (long)diff.tv_sec, (long)diff.tv_usec);
    exit(0);
}

int make_vma_map(unsigned int start, unsigned int end)
{
    int num_pg = num_pg_in_range(start, end);
    if (end < start){
        fprintf(stderr,
                "Bad range: start: 0x%x end: 0x%x.\n",
                start, end);
        return 0;
    }
    for (; num_pg; num_pg--, start += PG_LEN){
        if (__map_a_page_at(start, num_pg % 2) && __mlock_page(start))
            __force_pgtbl_alloc(start);
        else
            return 0;
    }
    return 1;
}

void display_banner()
{
    printf("-----------------------------------------\n");
    printf("Virtual memory allocator. Ctrl+C to exit.\n");
    printf("-----------------------------------------\n");
}

int main()
{
    unsigned int vma_start, vma_end, input = 0;
    int start_end = 0; // 0: start; 1: end;

    display_banner();
    // Bind SIGINT handler.
    signal(SIGINT, sigint_handler);
    while (1){
        if (!start_end)
            printf("start:\t");
        else
            printf("end:\t");
        scanf("%i", &input);
        if (start_end){
            vma_end = align_pg_32(input);
            make_vma_map(vma_start, vma_end);
        }
        else{
            vma_start = align_pg_32(input);
        }
        start_end = !start_end;
    }
    return 0;
}
As you can see, the program accepts ranges of virtual addresses, each range being defined by a start and an end. Each range is then further subdivided into page-sized VMAs by giving different permissions to adjacent pages. Interrupting the program (with SIGINT) triggers a call to munlockall(), and the time for that call to complete is duly noted.
Now, when I run it on the Freescale e500mc with Linux 2.6.34 over the range 0x30000000-0x35000000, I get a total munlockall() time of almost 45 seconds. However, if I do the same thing with smaller start-end ranges in random order (that is, not necessarily increasing addresses) such that the total number of pages (and locked VMAs) is roughly the same, I observe the total munlockall() time to be no more than 4 seconds.
I tried the same thing on x86_64 with Linux 2.6.34 and my program compiled with the -m32 flag, and it seems the variations, though not as pronounced as on ppc, are still 8 seconds for the first case and under a second for the second case.
I tried the program on Linux 2.6.10 on the one end and on 3.19 on the other, and these monumental differences don't seem to exist there. What's more, munlockall() always completes in under a second.
So it seems that the problem, whatever it is, exists only around the 2.6.34 version of the Linux kernel.
You said the VM was developed in-house. Does this mean you have access to the source? I would start by checking whether it has anything that stops it from terminating immediately in order to avoid data loss.
Otherwise, could you potentially provide more information? You may also want to check out https://unix.stackexchange.com/, as they would be better suited to help with any issues the Linux kernel may be having.

In KDE, how can I automatically tell which "Desktop" a Konsole terminal is in?

I have multiple "desktops" that I switch between for different tasks in my KDE Linux environment. How can I automagically determine which desktop my Konsole (KDE console) window is being displayed in?
EDIT:
I'm using KDE 3.4 in a corporate environment.
This is programming related. I need to programmatically (a.k.a. automagically) determine which desktop a user is on and then interact with X windows on that desktop from a Python script.
Should I go around and nuke all Microsoft IDE questions as not programming related? How about Win32 "programming" questions? Should I try to close those too?
Actually, EWMH's _NET_CURRENT_DESKTOP gives you the current desktop for X, not the desktop of a particular application. Here's a C snippet to get the _NET_WM_DESKTOP of an application. If run from the Konsole in question, it will tell you which desktop it is on, even if it is not the active desktop or not in focus.
#include <X11/Xlib.h>
#include <X11/Xatom.h>  /* for XA_CARDINAL */
#include <stdio.h>
...
/* display and window are assumed to be set up already */
Atom net_wm_desktop;
Atom type_ret;
int fmt_ret;
unsigned long nitems_ret, bytes_after_ret;
unsigned char *data = NULL;
long desktop;

/* see if we've got a desktop atom; only_if_exists = True, so None
 * means the WM does not provide _NET_WM_DESKTOP */
net_wm_desktop = XInternAtom(display, "_NET_WM_DESKTOP", True);
if (net_wm_desktop == None) {
    return;
}

/* find out what desktop the window is on */
if (XGetWindowProperty(display, window, net_wm_desktop, 0, 1,
                       False, XA_CARDINAL, &type_ret, &fmt_ret,
                       &nitems_ret, &bytes_after_ret,
                       &data) != Success || data == NULL) {
    fprintf(stderr, "XGetWindowProperty() failed\n");
    if (data == NULL) {
        fprintf(stderr, "No data returned from XGetWindowProperty()\n");
    }
    return;
}
desktop = *(long *)data;
XFree(data);
and desktop should be the index of the virtual desktop the Konsole is currently on. That is not the same as which head of a multi-headed display it is on. If you want to determine which head, you need XineramaQueryScreens (the Xinerama extension; I'm not sure whether there is an XRandR equivalent, and it does not work with nVidia's TwinView).
Here's an excerpt from some code I wrote that, given an x and y, calculates the screen boundaries (sx and sy for the origin, sw for screen width and sh for screen height). You can easily adapt it to simply return which "screen" or head x and y are on. (Screen has a special meaning in X11.)
#include <X11/X.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xinerama.h>
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

Bool xy2bounds(Display* d, int x, int y, int* sx, int* sy, int* sw, int* sh)
{
    *sx = *sy = *sw = *sh = -1; /* Set to invalid, for error condition */

    XineramaScreenInfo *XinInfo;
    int xin_screens = -1;
    int i;
    int x_origin, y_origin, width, height;
    Bool found = False;

    if (d == NULL)
        return False;
    if ((x < 0) || (y < 0))
        return False;

    if (True == XineramaIsActive(d)) {
        XinInfo = XineramaQueryScreens(d, &xin_screens);
        if ((NULL == XinInfo) || (0 == xin_screens)) {
            return False;
        }
    } else {
        /* Xinerama is not active, so return the usual width/height values */
        *sx = 0;
        *sy = 0;
        *sw = DisplayWidth(d, XDefaultScreen(d));
        *sh = DisplayHeight(d, XDefaultScreen(d));
        return True;
    }

    for (i = 0; i < xin_screens; i++) {
        x_origin = XinInfo[i].x_org;
        y_origin = XinInfo[i].y_org;
        width = XinInfo[i].width;
        height = XinInfo[i].height;
        printf("Screens: (%d) %dx%d - %dx%d\n", i,
               x_origin, y_origin, width, height);
        if ((x >= x_origin) && (y >= y_origin)) {
            if ((x <= x_origin + width) && (y <= y_origin + height)) {
                printf("Found Screen[%d] %dx%d - %dx%d\n",
                       i, x_origin, y_origin, width, height);
                *sx = x_origin;
                *sy = y_origin;
                *sw = width;
                *sh = height;
                found = True;
                break;
            }
        }
    }
    assert(found == True);
    return found;
}
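A hypothetical usage sketch (the coordinates are illustrative; link with -lX11 -lXinerama):
int main(void)
{
    Display *d = XOpenDisplay(NULL);
    int sx, sy, sw, sh;

    /* Report the bounds of the head containing the point (100, 100). */
    if (d != NULL && xy2bounds(d, 100, 100, &sx, &sy, &sw, &sh))
        printf("Head bounds: origin (%d,%d), size %dx%d\n", sx, sy, sw, sh);
    if (d != NULL)
        XCloseDisplay(d);
    return 0;
}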
Referring to the accepted answer: dcop is now out of date; instead of dcop you might want to use D-Bus (qdbus is a command-line tool for D-Bus).
qdbus org.kde.kwin /KWin currentDesktop
The KDE window manager, as well as GNOME and all WMs that follow the freedesktop standards, supports the Extended Window Manager Hints (EWMH).
These hints allow developers to access several window manager functions programmatically, like maximize, minimize, set window title, virtual desktop, etc.
I have never worked with KDE, but GNOME provides such functionality, so I assume that KDE has it too.
It is also possible to access a subset of these hints with pure Xlib functions. This subset is the ICCCM hints. If memory serves me correctly, virtual desktop access is only in EWMH.
Update: Found it! (_NET_CURRENT_DESKTOP)
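A minimal sketch of my own (untested) for reading _NET_CURRENT_DESKTOP from the root window with pure Xlib; compile with gcc file.c -lX11:
#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <stdio.h>

int main(void)
{
    Display *d = XOpenDisplay(NULL);
    if (!d)
        return 1;

    /* only_if_exists = True: None means the WM doesn't publish the hint */
    Atom prop = XInternAtom(d, "_NET_CURRENT_DESKTOP", True);
    Atom type;
    int fmt;
    unsigned long nitems, bytes_after;
    unsigned char *data = NULL;

    if (prop != None &&
        XGetWindowProperty(d, DefaultRootWindow(d), prop, 0, 1, False,
                           XA_CARDINAL, &type, &fmt, &nitems,
                           &bytes_after, &data) == Success && data) {
        printf("current desktop: %ld\n", *(long *)data);
        XFree(data);
    }
    XCloseDisplay(d);
    return 0;
}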
With dcop, the KDE Desktop COmmunication Protocol, you can easily get the current desktop by executing the
dcop kwin KWinInterface currentDesktop
command. If you are working with the new KDE 4.x, dcop is no longer used, and you can translate the command to a D-Bus call. It should be quite simple to send/get D-Bus messages with the Python APIs.
Sorry for my bad English,
Emilio
A new answer, because MOST of the answers here get the current desktop, not the one the terminal is in (they will break if the user changes desktops while the script is running):
xprop -id $WINDOWID | sed -rn -e 's/_NET_WM_DESKTOP\(CARDINAL\) = ([^)]+)/\1/pg'
I tested this in a loop while changing desktops, and it works OK (test script below; you have to check the output manually after the run).
while true
do
xprop -id $WINDOWID | sed -rn -e 's/_NET_WM_DESKTOP\(CARDINAL\) = ([^)]+)/\1/pg'
sleep 1
done
Thanks for the other answers and comments, for getting me half way there.
I was looking for the same thing, but with one more restriction: I don't want to run a shell command to achieve the result. Starting from Kimball Robinson's answer, this is what I got.
Tested working in Python 3.7, Debian 10.3, KDE Plasma 5.14.5.
import dbus

bus = dbus.SessionBus()
proxy = bus.get_object("org.kde.KWin", "/KWin")
print(int(proxy.currentDesktop(dbus_interface="org.kde.KWin")))
