keeping the whole gdb output log in a file - linux

Is there some way to keep the gdb log in a file?
I found:
$gdb run.o
(gdb) set logging file mylog.txt
(gdb) set logging on
(gdb) r
But it only keeps a log of the commands.
Is there any option to have all the logs in a file?

Yes, and no.
GDB cannot do it because output from the program being debugged is not visible to it.
The poor man's way to do it is to use tee:
gdb <blah> | tee logfile
That will work, but you'll find that the interactive features of GDB are missing (autocomplete, paging, etc.).
My preferred method is to use the logging feature in my terminal. I use "Terminator" with its "logger" plugin enabled, but I'm sure there are other options.
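That said, for scripted, non-interactive sessions where the interactive features don't matter, a sketch combining GDB's batch mode with tee (the file names cmds.gdb and full-session.log are my assumptions):
# run a prepared command file non-interactively and capture everything,
# including GDB's own messages on stderr and the debuggee's output
gdb --batch -x cmds.gdb ./run.o 2>&1 | tee full-session.log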

is there any option to have all the logs in a file
From man script:
script makes a typescript of everything printed on your terminal. It is useful for students who need a hardcopy record of an interactive session as proof of an assignment, as the typescript file can be printed out later with lpr(1).
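A minimal usage sketch (the log file name is my own choice):
$ script gdb-session.log
$ gdb run.o
(gdb) r
(gdb) quit
$ exit
Everything printed during that session, including the program's output, ends up in gdb-session.log.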

Related

centos 7 replace systemd with a script

I want to do the following thing: I want to replace the systemd program on my CentOS 7 installation with a script that will eventually start a shell where the user can input commands as root on the system console.
For this, I used the init=/sbin/myInit parameter. In this script I do some initialization, and in the end I call bash to enable the user to input commands. The interactive bash always appears on the screen.
Unfortunately, there is a problem with the input that I haven't been able to solve: the input doesn't work correctly. When I press Enter, only the second press registers, and no new line is printed on the screen. And when I type some characters, not all of the characters are read. It seems like the console is in some Unicode mode (stty -a shows the iutf8 flag) and bash cannot read the input correctly. The output is fine; all the echo commands in my script are correctly printed on the screen (I only print ASCII characters).
I've tried all sorts of combinations: different LC_ALL settings (LC_ALL=C, LC_ALL=C.utf-8, no LC_ALL or LANG variables at all), redirecting stdin/stdout/stderr to /dev/console, /dev/tty0 or /dev/tty1, initializing the TERM variable to linux, but nothing works. I tried the unicode_start and unicode_stop commands, but unicode_stop doesn't work and I get the error message "stty: standard input: unable to perform all requested operations". If I run the showkey command and press the Enter key, the correct keycode 28 is detected, same as with a normal boot.
Even stranger is the fact that if I use the init=/bin/bash kernel parameter, the input works fine, and if I then manually run my script, it also works fine. If I copy the bash executable to something like /bin/mys and use init=/bin/mys, then the same input problem occurs. It seems like the initrd image treats /bin/bash differently; maybe it performs some initialization that enables bash to read correctly from the terminal. Why is there this difference between init=/bin/bash and init=/bin/mys? It's the same executable in the end.
What am I doing wrong?
As a general question about the init program, can somebody explain to me what the init program should do to work correctly? Are there any particular signals it has to respond to, or any console initialization to be performed?
I've been trying to solve this for days and cannot find any solution. I couldn't find any article about this on the web, so any suggestion would be greatly appreciated. Thank you.
I managed to pull a solution out for this one.
It turns out that at boot the console is not initialized. I compared the output of stty -a between a normal boot and the script boot (init=/sbin/myInit), and there are some differences: on a normal boot the console had the following flags active: brkint ignpar ixon imaxbel isig icanon iexten echo, while on the script boot those flags were cleared (stty reports them with a leading -). I'm not sure what they all mean, except the echo flag, which I quickly noticed because on each key press no character was printed on the screen.
So I decided to set those flags on /dev/console from my init script. After digging through the stty manual page, I found the sane option, which sets the most important flags for normal console usage. So I added the line stty -F /dev/console sane to my init script.
But there is a small problem: it seems that some console options cannot be changed by the stty command if the console is already opened by a process. So the first thing I do in my script is redirect stdin/stdout/stderr to /dev/null (with exec redirections), then run the stty sane line, then redirect stdin/stdout/stderr back to /dev/console.
I also noticed a process called plymouth, probably started by the CentOS initrd before the root was mounted and my init script executed. I'm not sure whether that process had the system console open, but I decided to eliminate it by rebuilding a customized initrd with the plymouth module omitted from dracut (option -o plymouth).
After doing this, the script worked fine, and the bash process started by the script could read correctly from the console.
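For illustration, a minimal sketch of the relevant part of such an init script (the path /sbin/myInit, the -i flag and the exact redirections are my assumptions based on the description above):
#!/bin/bash
# hypothetical /sbin/myInit: reset the console before handing it to bash
# point our std fds away from the console so stty can change its settings
exec 0</dev/null 1>/dev/null 2>&1
# restore sane terminal settings on the system console
stty -F /dev/console sane
# reattach stdin/stdout/stderr to the console and start an interactive shell
exec 0</dev/console 1>/dev/console 2>&1
exec /bin/bash -i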
Hope this helps somebody someday. Cheers

Get bash autocompletion printed by stdin write

I want to write a program that will print out the autocompletions of bash.
Basically I'm writing something into bash's stdin with
childProc.stdin.write("./myfi")
and I would like to receive the autocompletion for it, like "./myfile.txt".
But childProc.stdout is empty after childProc.stdin.write("\t"), so there has to be some other way to trigger autocompletion.
Any ideas?
Command completion is only enabled in interactive shells. Bash is interactive if:
neither a script filename nor the -c option was specified when invoking bash, and
both stdin and stderr are attached to terminals (as determined by isatty()), or
bash was started with the -i flag.
In your case, stdin is clearly a pipe, which is not a terminal, so command completion has probably been disabled. But even if it were enabled, you'd see the result on stderr, not stdout.
So you could try supplying the -i command line option when starting your bash shell, or you could attach its stdin, stdout and stderr file descriptors to a pseudo-tty. In either case, what you will see coming back from bash will be intermingled with terminal control codes, so you'll probably want to set TERM to something basic (like dumb).
If you want to see the completions which bash might generate, you can use the compgen built-in. compgen does not know about the customized completion settings installed by the complete command, and it is not easy to set the environment up correctly for the -F and -C options, but other than that you can probably get it to generate whatever completion lists you would like. See the Bash manual for detailed option documentation.
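For instance, a quick sketch of using compgen for the filename prefix from the question (the matches obviously depend on what is in the directory):
# ask a child bash for the filename completions of "./myfi"
bash -c 'compgen -f ./myfi'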
I've found an answer.
The thing that made it work was a pseudoterminal, via this module:
https://www.npmjs.com/package/pty.js-dl

gdb in backtrack

I've just tried using gdb on BackTrack Linux and I must say that it's awesome. I wonder how gdb in BackTrack is configured to act this way.
When I set a breakpoint, all the register values, a part of the stack, a part of the data section and the next 10-15 instructions to be executed are printed. The same happens when I step or next through the instructions.
I find this amazing and would love to have this on my Ubuntu machine too; how could I go about doing this?
They seem to be using this .gdbinit file:
https://github.com/gdbinit/Gdbinit/blob/master/gdbinit
I'm guessing that this is done using a post-command hook:
http://sourceware.org/gdb/current/onlinedocs/gdb/Hooks.html#Hooks
inside of a system wide gdbinit:
http://sourceware.org/gdb/onlinedocs/gdb/System_002dwide-configuration.html
which may or may not reference shell commands and/or use gdb python scripts.
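As a rough illustration (not the actual BackTrack configuration, which is far more elaborate), a hook-stop block like this in a gdbinit would dump the registers, part of the stack and the next instructions every time the program stops:
define hook-stop
    info registers
    x/16xw $sp
    x/10i $pc
end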
To see which gdbinit files gdb actually reads, try:
strace gdb /bin/echo 2>&1 | grep gdbinit

How do I capture all of my compiler's output to a file?

I'm building an open-source project from source (C++) on Linux. This is the sequence:
$CFLAGS="-g -Wall" CXXFLAGS="-g -Wall" ../trunk/configure --prefix=/somepath/ --host=i386-pc --target=i386-pc
$make
While compiling I'm getting a lot of compiler warnings and I want to start fixing them. My question is: how do I capture all the compiler output in a file?
$make > file is not doing the job; it just saves the compiler command lines, like g++ -someoptions /asdf/xyz.cpp. I want the output of these command executions.
The compiler warnings happen on stderr, not stdout, which is why you don't see them when you just redirect make somewhere else. Instead, try this if you're using Bash:
$ make &> results.txt
The & means "redirect stdout and stderr to this location". Other shells often have similar constructs.
In a Bourne shell:
make > my.log 2>&1
I.e. > redirects stdout, and 2>&1 redirects stderr to the same place as stdout.
Lots of good answers so far. Here's a frill:
$ make 2>&1 | tee filetokeepitin.txt
will let you watch the output scroll past.
The output went to stderr. Use 2> to capture that.
$make 2> file
Assuming you want to highlight warnings and errors from the build output:
make |& grep -E "warning|error"
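If you also want to keep the complete log while highlighting, a possible variant (the log file name is arbitrary):
make 2>&1 | tee build.log | grep -E "warning|error"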
Based on an earlier reply by @dmckee:
make | tee makelog.txt
This gives you real-time scrolling output while compiling and simultaneously writes to the makelog.txt file.
Try make 2> file. Compiler warnings come out on the standard error stream, not the standard output stream. If my suggestion doesn't work, check your shell manual for how to divert standard error.
From http://www.oreillynet.com/linux/cmd/cmd.csp?path=g/gcc
The > character does not redirect the standard error. It's useful when you want to save legitimate output without mucking up a file with error messages. But what if the error messages are what you want to save? This is quite common during troubleshooting. The solution is to use a greater-than sign followed by an ampersand. (This construct works in almost every modern UNIX shell.) It redirects both the standard output and the standard error. For instance:
$ gcc invinitjig.c >& error-msg
Have a look there and see if it helps:
another forum
In C shell, the ampersand goes after the greater-than symbol:
make >& filename
This is typically not what you want to do. You want to run your compilation in an editor that has support for reading the compiler's output and jumping to the file/line/column that has the problem. This works in all editors worth considering. Here is the Emacs setup:
https://www.gnu.org/software/emacs/manual/html_node/emacs/Compilation.html

How do I get gdb to ignore my shell window's size?

This is part of our unit test flow. I run gdb with the --command option to have it execute commands from a text file. The output of gdb is then directed into a file, and that file is compared to a reference file. But the problem is that gdb uses the current shell window's size to decide where to place newlines in its output: if the window is smaller, it adds more newlines to make the output more readable.
Is there an option in gdb to disable this, so that my test's output is always the same regardless of the shell window I run it in?
Edit: found it, I just use this as the first gdb command:
set width 80
Sometimes things are easy.
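For reference, a sketch of how this fits into the scripted flow described in the question (the file names are placeholders):
# commands.gdb (hypothetical command file)
set width 80
break main
run
backtrace
quit
This would be invoked as gdb --command=commands.gdb ./prog > actual.log 2>&1 and the result diffed against the reference file.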
