Setting color brightness on Linux/Xorg

Is there any command (or API) to set X.Org/Linux color brightness?
In other words, I need something as handy as the xgamma command but for changing RGB brightness in real time.
Is this possible?

Use the XF86VidMode* family of functions.
#include <X11/Xlib.h>
#include <X11/extensions/xf86vmode.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main() {
    Display *display;
    int screen;
    int major, minor;
    int i;
    XF86VidModeGamma orig;

    display = XOpenDisplay(NULL);
    if (!display) return -1;
    screen = DefaultScreen(display);

    // Require XF86VidMode >= 2.0 and save the current gamma for restoring.
    if (!XF86VidModeQueryVersion(display, &major, &minor)
            || major < 2 || (major == 2 && minor < 0)
            || !XF86VidModeGetGamma(display, screen, &orig)) {
        XCloseDisplay(display);
        return -1;
    }

    // Sweep the gamma exponent from -2 up to +2 and back down,
    // i.e. gamma from 0.25 to 4.0 and back, one step per second.
    for (i = 0; i <= 32; i++) {
        XF86VidModeGamma gamma;
        gamma.red = exp2f(2 - fabs(i - 16) / 4);
        gamma.green = gamma.red;
        gamma.blue = gamma.red;
        if (!XF86VidModeSetGamma(display, screen, &gamma)) break;
        printf("gamma: %f %f %f", gamma.red, gamma.green, gamma.blue);
        // Read the value back; the server may round it.
        if (!XF86VidModeGetGamma(display, screen, &gamma)) break;
        printf(" -> %f %f %f\n", gamma.red, gamma.green, gamma.blue);
        sleep(1);
    }

    // Restore the original gamma; the extra GetGamma forces a round trip
    // so the request is processed before the display is closed.
    XF86VidModeSetGamma(display, screen, &orig);
    XF86VidModeGetGamma(display, screen, &orig);
    XCloseDisplay(display);
    return 0;
}
This brings the gamma from 0.25 to 4.0 and back, and then restores the original gamma.
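This should build with something like gcc gamma.c -o gamma -lXxf86vm -lX11 -lm (exact library names may vary by distribution).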
Or you could just repeatedly call system() with an "xgamma -gamma <value>" command string, with pretty much the same results.
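A minimal sketch of that shell-out approach, assuming the xgamma binary is installed (note that system() does not do printf-style formatting itself, so the command string has to be built with snprintf first):
#include <stdio.h>
#include <stdlib.h>

// Build the xgamma command string and run it; returns the shell's exit status.
static int set_gamma(float g) {
    char cmd[64];
    snprintf(cmd, sizeof cmd, "xgamma -gamma %f", g);
    return system(cmd);
}

int main(void) {
    set_gamma(0.8f); // e.g. dim all three channels to 0.8
    return 0;
}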

xbacklight -set 80
You may have to install this tool from your distribution's repository first. Works well on most laptops, at least on ThinkPads :-)
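xbacklight can also step the level relative to its current value and report it back:
xbacklight -inc 10
xbacklight -dec 10
xbacklight -get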

To control LCD brightness:
echo 4 > /proc/acpi/video/GFX0/LCD/brightness
The range is 1 to 8.
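Note that the exact /proc/acpi path varies between machines, and newer kernels expose the same control through sysfs instead; the device name below (acpi_video0) is just an example and differs per system:
cat /sys/class/backlight/acpi_video0/max_brightness
echo 4 > /sys/class/backlight/acpi_video0/brightness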

Maybe you need XRandR?
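If so, xrandr can apply a software brightness per output (implemented via the gamma ramps); the output name here is just an example, yours will differ:
xrandr --output LVDS1 --brightness 0.8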

Related

Wrong output with scanf function

This is supposedly not a difficult question, but I've run into this problem a few times when running my code in VS Code. I am trying to separate the letters and numbers from the input, using the method taught in my book. However, even though the program runs, the output is wrong.
Here is my code:
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <string.h>

int main(){
    int weight = 0;
    int height = 0;
    char wunit[] = "";
    char hunit[] = "";
    printf("Enter the body weight: ");
    scanf("%d%s", &weight, wunit);
    printf("Enter the height: ");
    scanf("%d%s", &height, hunit);
    printf("%d,%s,%d,%s", weight, wunit, height, hunit);
    return 0;
}
The thing is, if I type in 20lb for weight and 30mt for height, it gives the output 20,t,30,mt. It prints this weird 't' instead of lb, and I have no idea why.
Similarly, when I type 30kg for weight and 20cm for height, it generates this weird output: 30,m,0,cm. The kg becomes an 'm' and the 20 is now a '0'!? Why is that? The expected output would be 30,kg,20,cm.
I tried simply replacing the strings, but that doesn't solve the problem fundamentally. For instance (assuming the user enters sensible units like lb or kg for weight), I tried the substitution below, and it appears to work, but it doesn't fix the issue of 20 becoming 0:
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <string.h>

int main(){
    int weight = 0;
    int height = 0;
    char wunit[] = "";
    char hunit[] = "";
    char wunit2[] = "lb";
    char dummy[] = "t";
    printf("Enter the body weight: ");
    scanf("%d%s", &weight, wunit);
    printf("Enter the height: ");
    scanf("%d%s", &height, hunit);
    if (strcmp(wunit, dummy) == 0){
        printf("%d,%s,%d,%s\n", weight, wunit2, height, hunit);
    }
    //printf("%d,%s,%d,%s", weight, wunit, height, hunit);
    return 0;
}
I've also tried running it in CodeCollab, and it shows a "stack smashing detected" error after I run it a few times, which confused me even more. What does that have to do with this?
Thanks in advance.
wunit is an array of size 1 (it is initialized to "", which in chars looks like {'\0'}). What happens when you try to put lots of characters (say, "lb", which is {'l', 'b', '\0'}) into a memory location that is smaller than it should be?
scanf happily writes as many bytes as needed, smashing anything in its way ("stack-smashing", because wunit and all those local variables are stored on the stack). Try to give scanf more space, say using
char wunit[10] = "";
And never ever use "%s" directly. Limit the maximum number of characters that you allow scanf to store, for example using "%9s" to ensure that at most 9 characters + terminator (10 total) will be read.
This works for me:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(){
    int weight = 0;
    int height = 0;
    char wunit[10] = "";
    char hunit[10] = "";
    printf("Enter the body weight: ");
    scanf("%d%9s", &weight, wunit);
    printf("Enter the height: ");
    scanf("%d%9s", &height, hunit);
    printf("%d,%s,%d,%s", weight, wunit, height, hunit);
    return 0;
}
Note: scanf with %s is rightfully considered very dangerous. See https://stackoverflow.com/a/2430310/15472
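Beyond bounding the width, a common safer pattern (a minimal sketch, shown here for the weight prompt only) is to read a whole line with fgets and parse it with sscanf, so a malformed entry cannot desynchronize later reads:
#include <stdio.h>

int main(void){
    int weight = 0;
    char wunit[10] = "";
    char line[64];

    printf("Enter the body weight: ");
    // Read one full line, then parse it; check that both fields matched.
    if (fgets(line, sizeof line, stdin) != NULL &&
        sscanf(line, "%d%9s", &weight, wunit) == 2)
        printf("%d,%s\n", weight, wunit);
    else
        printf("bad input\n");
    return 0;
}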

How to monitor changes to pseudo-filesystem on Linux?

As neither dnotify nor inotify is able to monitor changes to pseudo-filesystem content, is there an automated way to discover file/directory creation/deletion inside (for example) the /sys/block directory?
Of course, I can scan the directory periodically on my own but hope that there is a smarter way.
I decided to use a somewhat naive workaround and monitor the /dev directory (which is supported by inotify) instead of /sys/block. Fortunately, each /sys/block entry has a counterpart inside /dev (but not vice versa), so I just check whether an entry that appeared in /dev is also present inside /sys/block.
Not very elegant but sufficient for me.
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>
#include <assert.h>
#include <linux/limits.h>
#include <sys/stat.h>

int main(void)
{
    int fd = inotify_init();
    assert(fd >= 0);
    int wd = inotify_add_watch(fd, "/dev", IN_CREATE);
    assert(wd >= 0);
    for (;;) {
        // Room for one event plus the largest possible file name.
        // (A single read may return several events; this handles the first.)
        char _event[sizeof(struct inotify_event) + NAME_MAX + 1];
        int res = read(fd, _event, sizeof(_event));
        assert(res > 0);
        struct inotify_event *event = (struct inotify_event *) _event;
        if (event->len > 0 && (event->mask & IN_CREATE) && !(event->mask & IN_ISDIR)) {
            // Check whether the new /dev entry has a /sys/block counterpart.
            char dev_name[PATH_MAX];
            snprintf(dev_name, sizeof(dev_name), "/sys/block/%s/stat", event->name);
            struct stat statbuf;
            if (0 == stat(dev_name, &statbuf))
                printf("new entry appeared: %s\n", event->name);
        }
    }
}
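If depending on libudev is an option, a netlink monitor sidesteps the /dev-to-/sys mapping entirely. This is only a sketch of that alternative (assuming the libudev API; link with -ludev), not part of the original workaround:
#include <stdio.h>
#include <poll.h>
#include <libudev.h>

int main(void)
{
    struct udev *udev = udev_new();
    struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");

    // Only deliver uevents for block devices.
    udev_monitor_filter_add_match_subsystem_devtype(mon, "block", NULL);
    udev_monitor_enable_receiving(mon);

    // The monitor fd is non-blocking, so wait for readiness with poll().
    struct pollfd pfd = { .fd = udev_monitor_get_fd(mon), .events = POLLIN };
    for (;;) {
        if (poll(&pfd, 1, -1) <= 0)
            continue;
        struct udev_device *dev = udev_monitor_receive_device(mon);
        if (!dev)
            continue;
        printf("%s: %s\n", udev_device_get_action(dev),
               udev_device_get_sysname(dev));
        udev_device_unref(dev);
    }
}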

Why does my process take too long to die?

Basically I'm using Linux 2.6.34 on PowerPC (Freescale e500mc). I have a process (a kind of VM that was developed in-house) that uses about 2.25 G of mlocked VM. When I kill it, I notice that it takes upwards of 2 minutes to terminate.
I investigated a little. First, I closed all open file descriptors, but that didn't seem to make a difference. Then I added some printk calls to the kernel and through them found that all the delay comes from the kernel unlocking my VMAs. The delay is uniform across pages, which I verified by repeatedly checking the locked page count in /proc/meminfo. I've checked with other programs that allocate that much memory, and they all die as soon as I signal them.
What do you think I should check now? Thanks for your replies.
Edit: I had to find a way to share more information about the problem, so I wrote the program below:
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <string.h>
#include <errno.h>
#include <signal.h>
#include <sys/time.h>

#define MAP_PERM_1 (PROT_WRITE | PROT_READ | PROT_EXEC)
#define MAP_PERM_2 (PROT_WRITE | PROT_READ)
#define MAP_FLAGS (MAP_ANONYMOUS | MAP_FIXED | MAP_PRIVATE)
#define PG_LEN 4096
#define align_pg_32(addr) ((addr) & 0xFFFFF000)
#define num_pg_in_range(start, end) (((end) - (start) + 1) >> 12)

// Touch the page so the kernel actually allocates page tables for it.
static inline void __force_pgtbl_alloc(unsigned int start)
{
    volatile int *s = (int *) start;
    *s = *s;
}

int __map_a_page_at(unsigned int start, int whichperm)
{
    int perm = whichperm ? MAP_PERM_1 : MAP_PERM_2;
    if (MAP_FAILED == mmap((void *)start, PG_LEN, perm, MAP_FLAGS, -1, 0)){
        fprintf(stderr,
                "mmap failed at 0x%x: %s.\n",
                start, strerror(errno));
        return 0;
    }
    return 1;
}

int __mlock_page(unsigned int addr)
{
    if (mlock((void *)addr, (size_t)PG_LEN) < 0){
        fprintf(stderr,
                "mlock failed on page: 0x%x: %s.\n",
                addr, strerror(errno));
        return 0;
    }
    return 1;
}

void sigint_handler(int p)
{
    struct timeval start = {0, 0}, end = {0, 0}, diff = {0, 0};
    gettimeofday(&start, NULL);
    munlockall();
    gettimeofday(&end, NULL);
    timersub(&end, &start, &diff);
    printf("Munlock'd entire VM in %ld secs %ld usecs.\n",
           (long)diff.tv_sec, (long)diff.tv_usec);
    exit(0);
}

int make_vma_map(unsigned int start, unsigned int end)
{
    int num_pg = num_pg_in_range(start, end);
    if (end < start){
        fprintf(stderr,
                "Bad range: start: 0x%x end: 0x%x.\n",
                start, end);
        return 0;
    }
    // Map, lock and touch the range one page at a time, alternating
    // permissions so adjacent pages end up in separate VMAs.
    for (; num_pg; num_pg--, start += PG_LEN){
        if (__map_a_page_at(start, num_pg % 2) && __mlock_page(start))
            __force_pgtbl_alloc(start);
        else
            return 0;
    }
    return 1;
}

void display_banner()
{
    printf("-----------------------------------------\n");
    printf("Virtual memory allocator. Ctrl+C to exit.\n");
    printf("-----------------------------------------\n");
}

int main()
{
    unsigned int vma_start = 0, vma_end, input = 0;
    int start_end = 0; // 0: start; 1: end;
    display_banner();
    // Bind SIGINT handler.
    signal(SIGINT, sigint_handler);
    while (1){
        if (!start_end)
            printf("start:\t");
        else
            printf("end:\t");
        scanf("%i", &input);
        if (start_end){
            vma_end = align_pg_32(input);
            make_vma_map(vma_start, vma_end);
        }
        else{
            vma_start = align_pg_32(input);
        }
        start_end = !start_end;
    }
    return 0;
}
As you can see, the program accepts ranges of virtual addresses, each range being defined by a start and an end. Each range is then further subdivided into page-sized VMAs by giving different permissions to adjacent pages. Interrupting the program (with SIGINT) triggers a call to munlockall(), and the time that call takes is duly noted.
Now, when I run it on the Freescale e500mc with Linux 2.6.34 over the range 0x30000000-0x35000000, I get a total munlockall() time of almost 45 seconds. However, if I do the same thing with smaller start-end ranges in random order (that is, not necessarily increasing addresses) such that the total number of pages (and locked VMAs) is roughly the same, I observe a total munlockall() time of no more than 4 seconds.
I tried the same thing on x86_64 with Linux 2.6.34 and my program compiled with the -m32 flag, and the variations, though not as pronounced as on ppc, are still 8 seconds for the first case and under a second for the second.
I tried the program on Linux 2.6.10 on the one end and on 3.19 on the other, and these monumental differences don't exist there. What's more, munlockall() always completes in under a second.
So, it seems that the problem, whatever it is, exists only around the 2.6.34 version of the Linux kernel.
You said the VM was developed in-house. Does this mean you have access to the source? I would start by checking to see if it has anything to stop it from immediately terminating to avoid data loss.
Otherwise, could you potentially provide more information? You may also want to check out https://unix.stackexchange.com/, as they would be better suited to help with any issues the Linux kernel may be having.

Why "ls" is not colored after forkpty()

Why is the output of ls executed here not colored?
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <pty.h>
#include <sys/wait.h>

// Build with -lutil for forkpty().
int main(int argc, char **argv) {
    int amaster;
    char name[128];
    // NULL for the termios/winsize arguments lets the new pty
    // start with default settings.
    if (forkpty(&amaster, name, NULL, NULL) == 0) {
        system("ls"); // "ls --color" will work here!
        return 0;
    }
    // Note: waiting before reading only works while the child's output
    // fits into the pty buffer; for larger output, read first.
    wait(0);
    char buf[128];
    int size;
    while (1) {
        size = read(amaster, buf, 127);
        if (size <= 0) break;
        buf[size] = 0;
        printf("%s", buf);
    }
    return 0;
}
According to the man page (and the ls.c source that I am inspecting), it should be colored if isatty() returns true, and after forkpty() it must be true. Besides, ls DOES produce columnized output in this example, which means it believes it has a tty as output.
Of course I do not want only ls to output color; I want an arbitrary program to believe it has a real color-capable tty behind it.
I just wrote a simple test:
#include <stdio.h>
#include <unistd.h>

int main() {
    printf("%i%i%i%i%i\n", isatty(0), isatty(1), isatty(2), isatty(3), isatty(4));
}
and called it in the child branch of forkpty; it displays 11100, which means ls should be colored!
OK, so it seems the fact that ls produces no color output has nothing to do with forkpty(); ls just doesn't enable color by default (GNU ls colors only when invoked with --color, which interactive shells typically supply through an alias that isn't in effect under system()). But now, maybe that's another question: why is there no color if it just checks isatty()?

How do I get the pixel color under the cursor?

I need a fast command line app to return the color of the pixel under the mouse cursor.
How can I build this in VC++? I need something similar to this, but ideally not in .NET, so it can be run many times per second.
Off the top of my head, the straightforward way:
#include <stdio.h>
#include <Windows.h>

int main(void) {
    POINT p;
    COLORREF color;
    HDC hDC;

    // Get the current cursor position
    if (!GetCursorPos(&p))
        return 2;

    // Get the device context for the screen
    hDC = GetDC(NULL);
    if (hDC == NULL)
        return 3;

    // Retrieve the color at that position
    color = GetPixel(hDC, p.x, p.y);

    // Release the device context again; NULL matches the GetDC(NULL) above
    ReleaseDC(NULL, hDC);

    if (color == CLR_INVALID)
        return 1;

    printf("%i %i %i", GetRValue(color), GetGValue(color), GetBValue(color));
    return 0;
}
ETA: Appears to work, at least for me.
ETA2: Added some error checking
ETA3: Commented code, compiled executable and a Visual Studio Solution can be found in my SVN repository.
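For reference, this should build from a Visual Studio command prompt with something like cl getpixel.c user32.lib gdi32.lib (the source file name here is just an example).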
