Where the "g_debug" output in GDM source code? - gnome

I want to know how GDM works, so I am reading the gdm source code. I see a lot of g_debug calls in the source, like this:
case SIGUSR1:
        g_debug ("Got USR1 signal");
        /* FIXME:
         * Play with log levels or something
         */
        ret = TRUE;
        gdm_log_toggle_debug ();
        break;
But where can I find the g_debug output?

You have to set the G_MESSAGES_DEBUG environment variable (e.g. G_MESSAGES_DEBUG=all) to make GLib print debug-level messages such as g_debug output; the G_DEBUG variable controls other behaviour, such as making warnings fatal.
Check the values in the documentation for Running and Debugging GLib Applications.
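As a minimal sketch (assuming GLib 2.32 or later, where debug messages are gated by G_MESSAGES_DEBUG), you can see the effect with a tiny GLib program:
/* demo.c - a g_debug() message is silent by default.
 * Build (sketch): gcc demo.c $(pkg-config --cflags --libs glib-2.0) -o demo
 * Run:            ./demo                        -> prints nothing
 *                 G_MESSAGES_DEBUG=all ./demo   -> prints the debug message
 */
#include <glib.h>

int main(void)
{
    g_debug("Got USR1 signal");   /* mirrors the call in the GDM source */
    return 0;
}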

In Fedora 20, you can modify the GDM configuration file, /etc/gdm/custom.conf, and add the following:
[debug]
Enable=True
Gesture=True
Then you can view the logs by typing journalctl -lr in the terminal.

Related

How to get cwd for relative paths?

How can I get the current working directory in strace output for system calls that are called with relative paths? I'm trying to debug a complex application that spawns multiple processes and fails to open a particular file.
stat("some_file", 0x7fff6b313df0) = -1 ENOENT (No such file or directory)
Since some_file exists, I believe it is being looked up in the wrong directory. I tried tracing chdir calls too, but since the output is interleaved it's hard to deduce the working directory that way. Is there a better way?
You can use the -y option and it will print the full path. Another useful flag in this situation is -P which only traces syscalls relating to a specific path, e.g.
strace -y -P "some_file"
Unfortunately -y only prints the path for file descriptors, and since your call fails, it never gets one. A possible workaround is to interrupt the process in a debugger when that syscall runs; then you can get its working directory by inspecting /proc/<PID>/cwd. Something like this (totally untested!):
gdb --args strace -P "some_file" -e inject=open:signal=SIGSEGV
Or you may be able to use a conditional breakpoint. Something like this should work, but I had difficulty getting GDB to follow child processes after a fork. If you only have one process it should be fine, I think.
gdb your_program
break open if $_streq((char*)$rdi, "some_file")
run
print getpid()
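Once you have the PID, reading /proc/<pid>/cwd tells you the working directory. Here is a hedged C sketch of that (it requires permission to inspect the target process; the helper itself is illustrative):
/* printcwd.c - print a process's working directory by reading the
 * /proc/<pid>/cwd symlink. Usage: ./printcwd <pid> */
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    char link[64], buf[PATH_MAX];
    ssize_t n;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    snprintf(link, sizeof(link), "/proc/%s/cwd", argv[1]);
    n = readlink(link, buf, sizeof(buf) - 1);   /* target is not NUL-terminated */
    if (n < 0) {
        perror("readlink");
        return 1;
    }
    buf[n] = '\0';
    printf("%s\n", buf);
    return 0;
}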
It is quite easy: use the function char *realpath(const char *path, char *resolved_path) to resolve the current directory.
This is my example:
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* With a NULL second argument, realpath() allocates the buffer. */
    char *abs = realpath(".", NULL);
    if (abs == NULL) {
        perror("realpath");
        return 1;
    }
    printf("%s\n", abs);
    free(abs);
    return 0;
}
Output:
root@ubuntu1504:~/patches_power_spec# pwd
/root/patches_power_spec
root@ubuntu1504:~/patches_power_spec# ./a.out
/root/patches_power_spec
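For completeness, the dedicated call for the current working directory is getcwd(); a minimal sketch, relying on the glibc extension that allocates the buffer when you pass NULL:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char *cwd = getcwd(NULL, 0);   /* glibc allocates a buffer of the right size */
    if (cwd == NULL) {
        perror("getcwd");
        return 1;
    }
    printf("%s\n", cwd);
    free(cwd);
    return 0;
}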

Create custom ./configure command line arguments

I'm updating a project to use autotools, and to maintain backwards compatibility with previous versions, I would like the user to be able to run ./configure --foo=bar to set a build option.
Based on reading the docs, it looks like I could set up ./configure --enable-foo, ./configure --with-foo, or ./configure foo=bar without any problem, but I'm not seeing anything that allows the desired behavior (specifically, a bare --foo=bar option, with a double dash but no enable/with prefix).
Any suggestions?
There's no way I know of to do this in configure.ac. You'll have to patch configure itself. This can be done by running a patching script from a bootstrap.sh after running autoreconf. You'll have to add your option to the ac_option processing loop. The case for --x looks like a promising one to copy or replace to inject your new option, something like:
  --foo=*)
    my_foo=$ac_optarg ;;
There's also some code that strips out command-line arguments when configure sometimes needs to be re-invoked. It'll be up to you to determine whether --foo should be stripped or not. I think this is probably why they don't allow this in the first place.
If it were me, I'd try and lobby for AC_ARG_WITH (e.g. --with-foo=bar). It seems like a lot less work.
In order to do that, you have to add something like this to your configure.ac:
# Enable debugging mode
AC_ARG_ENABLE([debug],
  [AS_HELP_STRING([--enable-debug], [Show a lot of extra information when running])],
  [AM_CPPFLAGS="$AM_CPPFLAGS -DDEBUG"
   debug_messages=yes],
  [debug_messages=no])
AC_SUBST(AM_CPPFLAGS)
AC_SUBST(AM_CXXFLAGS)
echo -e "\n--------- build environment -----------
Debug Mode : $debug_messages"
That is just a simple example that adds an --enable-debug option; it defines the DEBUG preprocessor symbol (here via -DDEBUG in AM_CPPFLAGS rather than through config.h).
Then you have to write something like this:
#include "config.h"
#ifdef DEBUG
// do debug
#else
// no debug
#endif
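Since the option just adds -DDEBUG to the preprocessor flags, you can try the same code path by hand without re-running configure (illustrative command; your real flags come from Makefile.am):
gcc -DDEBUG -c yourfile.c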

printk() doesn't print in /var/log/messages

My OS is Ubuntu 12.04. I wrote this kernel module and I load and unload it with the insmod and rmmod commands, but nothing appears in /var/log/messages. How can I fix this problem?
/*
 * hello-1.c - The simplest kernel module.
 */
#include <linux/module.h>   /* Needed by all modules */
#include <linux/kernel.h>   /* Needed for KERN_INFO */

int init_module(void)
{
        printk(KERN_INFO "Hello world 1.\n");

        /*
         * A non 0 return means init_module failed; module can't be loaded.
         */
        return 0;
}

void cleanup_module(void)
{
        printk(KERN_INFO "Goodbye world 1.\n");
}
Check whether a syslog daemon is running, since that is the process which copies printk messages from the kernel ring buffer to /var/log/messages. printk messages can always be seen with the dmesg command, and depending on configuration they may end up in /var/log/messages or /var/log/syslog. If a sufficiently high console log level is set, printk messages are also displayed on the console right away, with no need for dmesg or the log files.
Many modern Linux distributions don't run rsyslog (or any other syslog daemon) anymore. They rely on journald, which is part of systemd, so the /var/log/messages file is missing and you have to use the journalctl command to read the system log.
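For kernel messages specifically, journalctl can show just the kernel ring buffer (the equivalent of dmesg):
journalctl -k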
First, check whether your module is properly loaded, using this command:
lsmod | grep hello_1   # hello-1.ko is loaded under the name hello_1 (dashes become underscores)
You wrote a kernel module that prints a message. Messages from the kernel and its modules can be found in /var/log/syslog, or you can view them with the dmesg command.
As your module prints "Hello world 1.", use the following command to see its message (note that grep is case-sensitive, so the pattern must match what the module actually prints):
dmesg | grep "Hello world 1."
Look for this in /etc/syslog.conf, the *.info... lines. These seem to control what gets logged via printk.
*.=info;*.=notice;*.=warn;\
auth,authpriv.none;\
cron,daemon.none;\
mail,news.none -/var/log/messages
I found that /proc/sys/kernel/printk only really controls the console log levels, not the logging to files. And I guess check that syslog is running too ;) We had exactly the same issue, KERN_INFO not going to the log files, and this fixed it. HTH
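For reference, the file contains four values; a typical reading looks like this (the numbers vary per system):
$ cat /proc/sys/kernel/printk
4       4       1       7
The fields are the current console log level, the default level for messages without an explicit level, the minimum allowed console level, and the boot-time default.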
I printed the kernel ring buffer by typing the following command:
dmesg
This will print the messages written by printk.

linux - export output from apachetop to file

Is it possible to export the output from apachetop to a file? Something like "apachetop > file", but because apachetop runs "forever", that command also runs forever. I just need to grab the program's current output and handle it in my GTK# application.
Every answer will be much appreciated.
Matej.
This might work:
{ apachetop > file 2>&1 & sleep 1; kill $! ; }
but no guarantees :)
Another way on Linux is to find the /dev/vcsN device in use when the program runs and read from that file directly. It contains a copy of the screen data for a given VT; I'm not sure whether there is an applicable device for a pty.
Well, indirectly apachetop uses the access.log file to get its data.
Look at
/var/log/apache2/access.log
You'll simply have to parse the file to get the info you're looking for!
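For example, a rough sketch of counting hits per URL (assuming the default common/combined log format, where the request path is the seventh whitespace-separated field):
awk '{print $7}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head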

on-the-fly output redirection, seeing the file redirection output while the program is still running

If I use a command like this one:
./program >> a.txt &
, and the program is long-running, then I can only see the output once the program has ended. That means I have no way of knowing whether the computation is going well until it actually stops computing. I want to be able to read the redirected output in the file while the program is running.
What I want is similar to opening a file, appending to it, then closing it again after every write. If the file is only closed at the end of the program, then no data can be read from it until the program ends. The only redirection I know of behaves like the latter, closing the file at the end of the program.
You can test it with this little python script. The language doesn't matter. Any program that writes to standard output has the same problem.
l = range(0,100000)
for i in l:
if i%1000==0:
print i
for j in l:
s = i + j
One can run this with:
python program.py >> a.txt &
Then cat a.txt: you will only get results once the script has finished computing.
From the stdout manual page:
The stream stderr is unbuffered. The stream stdout is line-buffered when it points to a terminal. Partial lines will not appear until fflush(3) or exit(3) is called, or a newline is printed.
Bottom line: Unless the output is a terminal, your program will have its standard output in fully buffered mode by default. This essentially means that it will output data in large-ish blocks, rather than line-by-line, let alone character-by-character.
Ways to work around this:
Fix your program: If you need real-time output, you need to fix your program. In C you can use fflush(stdout) after each output statement, or setvbuf() to change the buffering mode of the standard output (see the C sketch after this list). For Python there is sys.stdout.flush(), or even some of the suggestions here.
Use a utility that can record from a PTY, rather than outright stdout redirections. GNU Screen can do this for you:
screen -d -m -L python test.py
would be a start. This will log the output of your program to a file called screenlog.0 (or similar) in your current directory with a default delay of 10 seconds, and you can use screen to connect to the session where your command is running to provide input or terminate it. The delay and the name of the logfile can be changed in a configuration file or manually once you connect to the background session.
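To illustrate the first workaround, here is a minimal C sketch that forces line-buffered stdout before any output, so redirected output appears line by line (it mirrors the Python test script above):
#include <stdio.h>

int main(void)
{
    int i;

    /* Must be called before the first output to the stream. */
    setvbuf(stdout, NULL, _IOLBF, 0);   /* line-buffered, like a terminal */

    for (i = 0; i < 100000; i++) {
        if (i % 1000 == 0)
            printf("%d\n", i);          /* now flushed at each newline */
    }
    return 0;
}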
EDIT:
On most Linux systems there is a third workaround: you can use the LD_PRELOAD variable and a preloaded library to override selected functions of the C library, and use them to set the stdout buffering mode when those functions are called by your program. This method may work, but it has a number of disadvantages:
It won't work at all on static executables
It's fragile and rather ugly.
It won't work at all with SUID executables - the dynamic loader will refuse to read the LD_PRELOAD variable when loading such executables for security reasons.
It's fragile and rather ugly.
It requires that you find and override a library function that is called by your program after it initially sets the stdout buffering mode and preferably before any output. getenv() is a good choice for many programs, but not all. You may have to override common I/O functions such as printf() or fwrite() - if push comes to shove you may just have to override all functions that control the buffering mode and introduce a special condition for stdout.
It's fragile and rather ugly.
It's hard to ensure that there are no unwelcome side-effects. To do this right you'd have to ensure that only stdout is affected and that your overrides will not crash the rest of the program if e.g. stdout is closed.
Did I mention that it's fragile and rather ugly?
That said, the process is relatively simple. You put the replacement functions in a C file, e.g. linebufferedstdout.c:
#define _GNU_SOURCE
#include <stdlib.h>
#include <stdio.h>
#include <dlfcn.h>

/* Override getenv(): the first time the program calls it, look up the
 * real getenv() with dlsym() and switch stdout to line-buffered mode. */
char *getenv(const char *s) {
    static char *(*getenv_real)(const char *s) = NULL;
    if (getenv_real == NULL) {
        getenv_real = (char *(*)(const char *)) dlsym(RTLD_NEXT, "getenv");
        setlinebuf(stdout);
    }
    return getenv_real(s);
}
Then you compile that file as a shared object:
gcc -O2 -o linebufferedstdout.so -fpic -shared linebufferedstdout.c -ldl -lc
Then you set the LD_PRELOAD variable to load it along with your program:
$ LD_PRELOAD=./linebufferedstdout.so python test.py | tee -a test.out
0
1000
2000
3000
4000
If you are lucky, your problem will be solved with no unfortunate side-effects.
You can set the LD_PRELOAD library in the shell, if necessary, or even specify that library system-wide (definitely NOT recommended) in /etc/ld.so.preload.
If you're trying to modify the behavior of an existing program, try stdbuf (part of coreutils starting with version 7.5, apparently).
This buffers stdout up to a line:
stdbuf -oL command > output
This disables stdout buffering altogether:
stdbuf -o0 command > output
Have you considered piping to tee?
./program | tee a.txt
However, even tee won't work if "program" doesn't write anything to stdout until it is done. So, the effectiveness depends a lot on how your program behaves.
If the program writes to a file, you can read it while it is being written using tail -f a.txt.
Your problem is that most programs check whether the output is a terminal. If the output is a terminal, then output is buffered one line at a time (so each line is written out as it is generated), but if the output is not a terminal, then the output is buffered in larger chunks (4096 bytes at a time is typical). This is the normal behaviour of the C library (when using printf, for example) and of the C++ library (when using cout, for example), so any program written in C or C++ will do this.
Most other scripting languages (Perl, Python, etc.) have interpreters written in C or C++, and so they have exactly the same buffering behaviour.
The answer above (using LD_PRELOAD) can be made to work on perl or python scripts, since the interpreters are themselves written in C.
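For Python specifically there is an even simpler route: run the interpreter with -u (or set PYTHONUNBUFFERED=1) to disable stdout buffering, e.g.:
python -u program.py >> a.txt &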
The unbuffer command from the expect package does exactly what you are looking for.
$ sudo apt-get install expect
$ unbuffer python program.py | cat -
<watch output immediately show up here>
