systemd's journald supports kernel-style log levels: a service can write messages to stderr prefixed with "<6>" and they will be parsed as info, "<4>" as warning, and so on.
But while the service is being developed it is launched outside of systemd. Are there any ready-to-use utilities to convert these numeric prefixes into readable, colored strings? (It would be nice if that didn't complicate the gdb workflow.)
I don't want to roll my own.
There is no ready-made tool to convert the output, but a simple sed run will do the magic.
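For example, a filter along these lines (only a sketch: the level-to-label mapping and the colors are illustrative, the \x1b escapes assume GNU sed, and my_service stands in for your binary):
./my_service 2>&1 | sed \
    -e 's/^<3>/\x1b[31m[ERROR]\x1b[0m/' \
    -e 's/^<4>/\x1b[33m[WARN]\x1b[0m/' \
    -e 's/^<6>/\x1b[32m[INFO]\x1b[0m/'
Since this is just a pipe on the terminal side, the binary itself can still run under gdb unchanged.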
As you said, journald strips the <x> token from the beginning of your log message and converts it to a log level. What I would do is check for an environment variable in the code. For example:
if (getenv("COLOR_OUTPUT"))           /* any variable of your choosing, set when run outside systemd */
    printf("[ WARNING ] - Oh, snap\n");
else
    printf("<4> Oh, snap\n");
I've set up Sendmail so that all messages are delivered to /dev/null instead of actually being stored anywhere else. I'm trying to reduce the number of unnecessary disk writes, and since those messages are essentially discarded I want, if possible, to skip writing them to the mqueue. Is there any way to do that?
The closest I could think of is mounting a nullfs filesystem on the mqueue directory, but I'd like a "cleaner" approach using sendmail only. Is this possible?
Thanks!
Most likely you have chosen the wrong way to solve your problem, but anyway:
You can select the discard mailer for all recipients in the check_rcpt (Local_check_rcpt) rule set. It will act as the equivalent of DISCARD in the access table.
Add the following lines to the sendmail.mc file, generate a new sendmail.cf file, and restart or HUP the sendmail daemon.
LOCAL_RULESETS
SLocal_check_rcpt
# PUT TAB (\t) BEFORE $# !!!
R$* $#discard $: discard
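On many systems the rebuild then looks something like this (a sketch only: paths and the restart command vary by distribution):
cd /etc/mail
m4 sendmail.mc > sendmail.cf
systemctl restart sendmail    # or send SIGHUP to the running daemon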
Is it possible to dump only trace_printk() output in the trace file? I mean, filter out all the functions from the function tracer (or any other tracer).
In general, you can switch options on and off inside the options directory, /sys/kernel/debug/tracing/options. Use ls to display all the toggleable options.
# ls
annotate context-info funcgraph-abstime funcgraph-overhead func_stack_trace hex overwrite record-cmd sym-offset trace_printk
bin disable_on_free funcgraph-cpu funcgraph-overrun function-fork irq-info printk-msg-only sleep-time sym-userobj userstacktrace
blk_classic display-graph funcgraph-duration funcgraph-proc function-trace latency-format print-parent stacktrace test_nop_accept verbose
block event-fork funcgraph-irqs
Toggle options via echo, e.g.,
echo -n "1" > /sys/kernel/debug/tracing/options/trace_printk
If you are trying to filter out any output that was not produced by trace_printk(), you would likely need to ensure that trace_printk() is the only option set.
It's always good to check the kernel documentation when in doubt. There's also a great LWN article that helped me when I was first learning ftrace, called Secrets of the Ftrace function tracer, which includes some sections about filtering in general.
For anyone who couldn't get enough information from this thread...
Actually, if you use nop as the current_tracer, everything except trace_printk() output is ignored.
Do
echo nop > current_tracer
and run your kernel module, or anything else that calls trace_printk().
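Put together, the whole sequence might look like this (assuming debugfs is mounted at /sys/kernel/debug):
cd /sys/kernel/debug/tracing
echo nop > current_tracer    # no function tracing, but trace_printk() output is still recorded
echo 1 > tracing_on
cat trace                    # or trace_pipe, to consume the output as it arrives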
I am somewhat familiar with various ways of calling a script from another one. I don't really need an overview of each, but I do have a few questions. Before that, though, I should tell you what my goal is.
I am working on a perl/tk program that: a) gathers information and puts it in a hash, and b) fires off other scripts that use the info hash, plus some command line args. Each of these other scripts is available on the command line (using another command-line script) and needs to stay that way, so I can't just put all of it into a module and call it good. I do have the authority to alter the scripts, but, again, they must also remain usable on the command line.
The current way of calling the other script is with 'do', which means I can pass in the hash and use the same version of perl (I think). But all the STDOUT (and STDERR too, I think) goes to the terminal.
Here's a simple example to demonstrate the output:
this_thing.pl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use Tk;
my $mw = MainWindow->new;
my $button = $mw->Button(
    -text    => 'start other thing',
    -command => \&start,
)->pack;
my $text = $mw->Text()->pack;

MainLoop;

sub start {
    my $script_path = 'this_other_thing.pl';
    if (not my $read = do $script_path) {
        warn "couldn't parse $script_path: $@" if $@;
        warn "couldn't do $script_path: $!" unless defined $read;
        warn "couldn't run $script_path" unless $read;
    }
}
this_other_thing.pl
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
print "Hello World!\n";
How can I redirect the STDOUT and STDIN (for interactive scripts that need input) to the text box using the 'do' method? Is that even possible?
If I can't use the 'do' method, what method can redirect the STDIN and STDOUT, as well as enable passing the hash in and using the same version of perl?
Edit: I posted this same question at Perlmonks, at the link in the first comment. So far, the best response seems to be to use modules and have the child script just be a wrapper for the module. Other possible solutions are: IPC::Run and IPC in general, Capture::Tiny and associated modules, and Tk::Filehandle. A solution was presented that redirects the output and error streams, but it seems not to affect the input stream. It's also a bit kludgy and not recommended.
Edit 2: I'm posting this here because I can't answer my own question yet.
Thanks for your suggestions and advice. I went with a suggestion on Perlmonks. The suggestion was to turn the child scripts into modules, and use wrapper scripts around them for normal use. I would then simply be able to use the modules, and all the code is in one spot. This also ensures that I am not using different perls, I can route the output from the module anywhere I want, and passing that hash in is now very easy.
To have both STDIN & STDOUT of a subprocess redirected, you should read the "Bidirectional Communication with Another Process" section of the perlipc man page: http://search.cpan.org/~rjbs/perl-5.18.1/pod/perlipc.pod#Bidirectional_Communication_with_Another_Process
Using the same version of perl works by finding out the name of your perl interpreter, and calling it explicitly. $^X is probably what you want. It may or may not work on different operating systems.
Passing a hash into a subprocess does not work easily. You can print the contents of the hash into a file and have the subprocess read and parse it. You might get away without a file by using the STDIN channel between the two processes, or you could open a separate pipe() for this purpose. Either way, printing the data and parsing it back cannot be avoided when using subprocesses, because the two processes use two perl interpreters, each with its own memory space, unable to see each other's variables.
You might avoid using a subprocess by using fork() + eval() + require(). In that case no separate perl interpreter is involved: the forked interpreter inherits the whole memory of your program, with all variables, open file descriptors, sockets, etc. in it, including the hash to be passed. However, I don't see where your second perl script could get its hash from when started from the CLI.
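Putting the pieces above together, here is a minimal sketch using the core IPC::Open2 module (the key=value wire format and the hash contents are just illustrative):
use strict;
use warnings;
use IPC::Open2;

my %info = (host => 'example', port => 42);

# Reader handle first, writer handle second; $^X is the perl running this script.
my $pid = open2(my $child_out, my $child_in, $^X, 'this_other_thing.pl');

# Serialize the hash in a trivial key=value form for the child to parse back.
print {$child_in} "$_=$info{$_}\n" for keys %info;
close $child_in;

while (my $line = <$child_out>) {
    # Route the child's output anywhere you like, e.g. into the Tk text widget.
    print "child: $line";
}
waitpid $pid, 0;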
I have a perl script that creates a report based on an xml definition. Currently these definitions all exist as .xml files.
So I have the script run-report.pl, which can take a path to a definition file and create the report.
Now I want to create run-reports-from-db.pl, which will generate the report definition based on some database entries. I don't want to create temp files to pass to run-report.pl; I would just like to pass the definition in somehow.
So instead of saying:
run-report.pl -def=./path/to/def.xml
I want to be able to say:
run-report.pl --stream
And have the report definition available in <STDIN>
I am sure there is a pretty trivial way to do this?
If I understand your question correctly, all you need is one | (pipe).
./generate-xml-from-db.pl | ./run-report.pl --stream
Anything the first process in the pipeline prints to stdout will appear in the second process's stdin.
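On the receiving end, the --stream branch of run-report.pl could then slurp the whole definition from STDIN, roughly like this (the option handling is only a sketch):
use strict;
use warnings;

my $definition;
if (grep { $_ eq '--stream' } @ARGV) {
    local $/;                  # slurp mode: read everything at once
    $definition = <STDIN>;     # the XML definition arriving through the pipe
}
# ... otherwise fall back to loading the file named by -def=... as before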
As long as you read from STDIN, you have it available. Notice what happens when you take the code below, name it something like echo.pl, run it at the command line, and paste in reams of text.
#!/usr/bin/perl
use 5.010;
use strict;
use warnings;

while (<>) {
    say;
}
<> is the Perl shorthand for "read from the files named on the command line, or from STDIN if none are given".
As long as the method you're using to launch the process gives you a way to get hold of its standard input and output, you can just write to that handle. You have to use whatever means are available to you: in Java, for example, you'd have to get the input stream of the process; in a batch script you'd have to pipe it; at a GUI terminal you can cut and paste.
While examining the console output and log messages of different software, it is sometimes difficult to keep an overview. It would be much easier if the output were colorful, with the currently important text phrases highlighted.
Is there a program for the Linux/UNIX shell that can be used as a filter, via Unix pipes, to colorize console output according to predefined patterns and colors?
For example, a pattern definition like:
INFO=green
WARN=yellow
ERROR=red
\d+=lightgreen
to highlight the severity of the message and also numbers.
usage:
$ chatty_software | color_filter
11:41:21.000 [green:INFO] runtime.busevents - SensorA state updated to [lightgreen:17]
11:41:21.004 [green:INFO] runtime.busevents - SensorB state updated to [lightgreen:20]
original output:
11:41:21.000 INFO runtime.busevents - SensorA state updated to 17
11:41:21.004 INFO runtime.busevents - SensorB state updated to 20
We use a sed script along these lines (each ^[ below is a literal ESC character, typed as Ctrl-V Esc):
s/.* error .*/^[[31m&^[[0m/
t done
s/.* warning .*/^[[33m&^[[0m/
t done
:done
and invoke it by
sed -f log_color.sed
I guess you could do something similar?
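Adapted to the patterns from the question, such a script might look like this (a sketch assuming GNU sed, which lets you write the escape as \x1b instead of a literal ESC; the digit rule runs first so it cannot match the numbers inside the color codes inserted by the other rules):
s/\b[0-9]\+\b/\x1b[92m&\x1b[0m/g
s/\bINFO\b/\x1b[32m&\x1b[0m/
s/\bWARN\b/\x1b[33m&\x1b[0m/
s/\bERROR\b/\x1b[31m&\x1b[0m/
Saved as color_filter.sed, it is used exactly as in the question:
chatty_software | sed -f color_filter.sed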