Continuously monitor Linux console logs using a C/C++ application

I have a third-party library application which runs continuously and prints to the console when some event occurs.
I want to take some action when a specific event occurs, so I need to monitor the console output continuously to trigger my action.
Is it possible to write an application which can continuously monitor the strings dumped on the console (stdout) and do some processing when a particular line is detected?
I have tried to use the popen function, but it keeps waiting until the library application stops execution.
Here is my sample code using popen:
#include <stdio.h>

int main()
{
    FILE *fd = NULL;
    char buf[512] = {0};

    fd = popen("./monitor", "r");
    if (fd == NULL)
        return 1;

    while (fgets(buf, sizeof(buf), fd) != NULL)
    {
        /* buf already contains the newline from fgets */
        printf("%s : message : %s", __FILE__, buf);
    }
    printf("EOF detected!\n");
    pclose(fd);
    return 0;
}
Can anyone please let me know the proper way to monitor console logs and take an action?
Thanks in advance.
Pratik

Here is an example piece of code I've written recently that reads from stdin and prints to stdout.
#include <stdio.h>

void echo(int bufferSize) {
    // Disable output buffering so each line is printed immediately.
    setbuf(stdout, NULL);
    char buffer[bufferSize];
    while (fgets(buffer, sizeof(buffer), stdin)) {
        printf("%s", buffer);
    }
}
As I understand it, you have an issue similar to the one I had initially: delayed output, because I didn't use:
setbuf(stdout, NULL);
You can also read from stdin (that's what my example code does): just pipe your command to your C code, or if you just want to filter the output, pipe it to grep. If it's a standardized syslog log you could also use tail on the log file:
tail -f <logfile> | <your C program>
or
for just filtering
tail -f <logfile> | grep "<your string here>"
or, if there is no log file, pipe the stdout logs this way:
<your app> | <your C program>
or
<your app> | grep "<your string here>"
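Regarding the popen attempt from the question: it usually blocks like that because the monitored program's stdout becomes fully buffered once it writes into a pipe instead of a terminal, so nothing reaches fgets until the buffer fills or the program exits. If that program uses stdio and does not force its own buffering, wrapping it in stdbuf -oL often helps. A minimal sketch along those lines, reusing ./monitor from the question and a placeholder marker EVENT for the line you want to react to:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* stdbuf -oL asks the child to keep its stdout line-buffered even when
       writing into a pipe; this only works for programs that use stdio and
       do not set their own buffering. */
    FILE *fp = popen("stdbuf -oL ./monitor 2>&1", "r");
    if (fp == NULL)
        return 1;

    char buf[512];
    while (fgets(buf, sizeof(buf), fp) != NULL) {
        /* "EVENT" is a placeholder for whatever string marks your event. */
        if (strstr(buf, "EVENT") != NULL) {
            printf("event detected: %s", buf);
            /* trigger your action here */
        }
    }

    pclose(fp);
    return 0;
}

If the third-party program writes with raw write() calls or forces its own buffering, stdbuf will not help; in that case a pseudo-terminal (for example via script or unbuffer) is the usual workaround.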

Here is the 3rd-party program, simulated by a shell script (dosomething.sh) that writes to stdout:
#!/bin/bash
while true; do
    echo "something"
    sleep 2
done
You want to write something like this (call it act.sh) to capture the output from the 3rd-party program and then act on the information:
#!/bin/bash
while read line; do
    if [[ $line == "something" ]]; then
        echo "do action here"
    fi
done
Then combine them with a pipe operator:
./dosomething.sh | ./act.sh
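The act.sh side can just as well be a small C program, which matches the original C/C++ question. A minimal sketch that reads lines from stdin and reacts when one matches; the marker "something" follows the shell example above and the action is only a placeholder:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[512];

    /* Read lines as they arrive, e.g. from:  ./dosomething.sh | ./act */
    while (fgets(line, sizeof(line), stdin) != NULL) {
        if (strstr(line, "something") != NULL) {
            /* placeholder: run whatever action the event requires */
            printf("do action here\n");
        }
    }
    return 0;
}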

Related

How to read stdout from a sub process in bash in real time

I have a simple C++ program that counts from 0 to 10, incrementing the value every second. When the value is incremented, it is written to stdout. This program intentionally uses printf rather than std::cout.
I want to call this program from a bash script and perform some function (e.g. echo) on the value when it is written to stdout.
However, my script waits for the program to terminate and then processes all the values at once.
C++ prog:
#include <stdio.h>
#include <unistd.h>
int main(int argc, char **argv)
{
int ctr = 0;
for (int i = 0; i < 10; ++i)
{
printf("%i\n", ctr++);
sleep(1);
}
return 0;
}
Bash script:
#!/bin/bash
for c in $(./script-test)
do
    echo $c
done
Is there another way to read the output of my program that will access it in real time, rather than waiting for the process to terminate?
Note: the C++ program is a demo sample - the actual program I am using also uses printf, but I am not able to make changes to this code, hence the solution needs to be in the bash script.
Many thanks,
Stuart
As you correctly observed, $(command) waits for the entire output of command, splits that output, and only after that does the for loop start.
To read output as soon as it is available, use while read:
./script-test | while IFS= read -r line; do
    echo "do stuff with $line"
done
or, if you need to access variables from inside the loop afterwards, and your system supports <()
while IFS= read -r line; do
    echo "do stuff with $line"
done < <(./script-test)
# do more stuff that depends on variables set inside the loop
You might have more luck using a pipe:
#!/bin/bash
./script-test | while IFS= read -r c; do
    echo "$c"
done
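One caveat worth adding: even with while read, the output of the sample program above may still arrive all at once, because printf output is block-buffered when stdout is a pipe rather than a terminal. The question says the real program cannot be changed (in that case stdbuf -oL or a pseudo-terminal in front of it are the usual workarounds), but for illustration, if the producer could be modified, flushing after each line would be enough:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int ctr = 0;
    for (int i = 0; i < 10; ++i)
    {
        printf("%i\n", ctr++);
        fflush(stdout);   /* push the line into the pipe immediately */
        sleep(1);
    }
    return 0;
}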

perl multithreading: capturing stdio of subthread children with "mixed" results

I wrote a massively multithreaded application in Perl which basically scans a file or directory structure for changes (using either inotify or polling). When it detects changes, it launches subthreads that execute programs with the changed files as an argument, according to a configuration.
This works quite nicely so far, except that my application also tries to capture stdout and stderr of the externally executed programs and write them to log files in a structured manner.
I am, however, experiencing an occasional but serious mixup of output here: every now and then (usually under heavy workload, of course, so the normal tests always run fine) stdout from a program on thread A ends up in the stdout pipe FH of another program running at the very same time on thread B.
My in-thread code to run the externally executed programs and capture the output from them looks like this:
my $out;
$pid = open($out, "( stdbuf -oL ".$cmd." | stdbuf -oL sed -e 's/^/:::LOG:::/' ) 2>&1 |") or xlog('async execution failed for: '.$cmd, LOG_LEVEL_NORMAL, LOG_ERROR);
# catch all worker output here
while(<$out>)
{
    if( $_ =~ /^:::LOG:::/ )
    {
        push(@log, $wname.':::'.$acnt.':::'.time().$_);
    } else {
        push(@log, $wname.':::'.$acnt.':::'.time().':::ERR:::'.$_);
    }
    if (time() - $last > 1)
    {
        mlog($acnt, @log);
        $last = time();
        @log = ();
    }
}
close($out);
waitpid($pid, 0);
push(@log, $wname.':::'.$acnt.':::'.time().':::LOG:::--- thread finished ---');
stdbuf is used here to suppress buffering delays wherever possible, and the sed pipe is used to avoid having to handle multiple FDs in the reader while still being able to separate normal output from errors.
Captured log lines are stuffed into a local array by the while loop, and every second or so the contents of that array are handed over to a thread-safe global logging method that uses semaphores to make sure nothing gets mixed up.
To avoid unnecessary feedback: I have made sure (using debug output) that the output really is mixed up at the thread level already, and is not the result of locking mistakes later in the output chain!
My question is: how can the thread-locally defined $out FH from thread A receive output that definitely comes from a totally different program running in thread B, and therefore should end up in the separately defined thread-local $out FH of thread B? Did I make a grave mistake at some point here, or is it just that Perl threading is a mess? And, finally, what would be a recommended way to separate the data properly (and preferably in some elegant way)?
Update: due to popular demand I have added the full thread method here:
sub async_command {
    my $wname = shift;
    my $cmd = shift;
    my $acnt = shift;
    my $delay = shift;
    my $errlog = shift;
    my $last = time();
    my $pid = 0;
    my @log;
    my $out;
    push(@log, $wname.':::'.$acnt.':::'.$last.':::LOG:::--- thread started ---'.($delay ? ' (sleeping for '.$delay.' seconds)':''));
    push(@log, $wname.':::'.$acnt.':::'.$last.':::ERR:::--- thread started ---') if ($errlog);
    if ($delay) { sleep($delay); }
    # Start worker with output pipe. stdbuf prevents unwanted buffering
    # sed tags stdout vs stderr
    $pid = open($out, "( stdbuf -oL ".$cmd." | stdbuf -oL sed -e 's/^/:::LOG:::/' ) 2>&1 |") or xlog('async execution failed for: '.$cmd, LOG_LEVEL_NORMAL, LOG_ERROR);
    # catch all worker output here
    while(<$out>)
    {
        if( $_ =~ /^:::LOG:::/ )
        {
            push(@log, $wname.':::'.$acnt.':::'.time().$_);
        } else {
            push(@log, $wname.':::'.$acnt.':::'.time().':::ERR:::'.$_);
        }
        if (time() - $last > 1)
        {
            mlog($acnt, @log);
            $last = time();
            @log = ();
        }
    }
    close($out);
    waitpid($pid, 0);
    push(@log, $wname.':::'.$acnt.':::'.time().':::LOG:::--- thread finished ---');
    push(@log, $wname.':::'.$acnt.':::'.time().':::ERR:::--- thread finished ---') if ($errlog);
    mlog($acnt, @log);
    byebye();
}
So... here you can see that @log as well as $out are thread-local variables. The xlog (global log) and mlog (worker logs) methods actually use Thread::Queue for further processing. I just don't want to use it more than once a second per thread, to avoid too much locking overhead.
I have duplicated the push(@log, ...) statements into xlog() calls for debugging. Since the worker name $wname is somewhat tied to the $cmd executed, and $acnt is a number unique to each thread, I could see clearly that log output is being read from the $out FH that definitely comes from a different $cmd than the one executed in this thread, while $acnt and $wname stay the ones that actually belong to the thread. I can also see that these log lines then do NOT appear on the $out FH of the other thread, where they should be.

Get the content on the command line with an external program

I would like to write a small program which will analyze my current input on the command line and generate some suggestions, like search engines do.
The problem is: how can an external program get the content of the command line? For example:
# an external program started and got passed in the PID of the shell below.
# the user typed something in the shell like this...
<PROMPT> $ echo "grab this command"
# the external program now gets 'echo "grab this command"'
# and ideally this could be done in real time.
Moreover, can I also modify the content of the current command line?
EDIT
bash uses libreadline to manage the command line, but I still cannot imagine how to make use of this.
You could write your own shell wrapper in C: open bash in a process using popen and use fgetc and fputc to move the data between the process and the output file.
A quick and dirty hack could look like this (bash isn't started in interactive mode, so there is no prompt, but otherwise it should work fine):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>

pid_t pid;

void kill_ch(int sig) {
    kill(pid, SIGKILL);
}

int main(int argc, char** argv) {
    int b;
    FILE *cmd = NULL;
    FILE *log = NULL;

    signal(SIGALRM, kill_ch);

    /* Note: the bidirectional "r+" mode is a BSD/macOS extension;
       glibc's popen() only accepts "r" or "w", so on Linux this call
       would have to be replaced by a pipe()/fork()/exec() pair. */
    cmd = popen("/bin/bash -s", "r+");
    if (cmd == NULL) {
        fprintf(stderr, "Error: Failed to open process");
        return EXIT_FAILURE;
    }
    setvbuf(cmd, NULL, _IOLBF, 0);

    log = fopen("out.txt", "a");
    if (log == NULL) {
        fprintf(stderr, "Error: Failed to open logfile");
        return EXIT_FAILURE;
    }
    setvbuf(log, NULL, _IONBF, 0);

    pid = fork();
    if (pid != 0)
        goto EXEC_WRITE;
    else
        goto EXEC_READ;

EXEC_READ:
    /* child: copy the user's typing to bash and to the log file */
    while (1) {
        b = fgetc(stdin);
        if (b != EOF) {
            fputc((char) b, cmd);
            fputc((char) b, log);
        }
    }

EXEC_WRITE:
    /* parent: copy bash's output to the terminal and to the log file */
    while (1) {
        b = fgetc(cmd);
        if (b == EOF) {
            return EXIT_SUCCESS;
        }
        fputc(b, stdout);
        fputc(b, log);
    }

    return EXIT_SUCCESS;
}
I might not fully understand your question but I think you'd basically have two options.
The first option would be to explicitly call your "magic" program by prefixing your call with it like so
<PROMPT> $ magic echo "grab this command"
(magic analyzes $* and says...)
Your input would print "grab this command" to stdout
<PROMPT> $
In this case the arguments to "magic" would be handled as positional parameters ($*, $1 ...)
The second option would be to wrap something interpreter-like around your typing. The Python interpreter, for example, does this when called without arguments: you start the interpreter, which basically reads anything you type (stdin) in an endless loop, interprets it, and produces some output (typically on stdout). A minimal C sketch of such a loop follows the example session below.
<PROMPT> $ magic
<MAGIC_PROMPT> $ echo "grab this command"
(your magic interpreter processes the input and says...)
Your input would print "grab this command" to stdout
<MAGIC_PROMPT> $
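Here is that sketch of an interpreter-like wrapper in C, where suggest() is a made-up placeholder for whatever analysis the real program would do:

#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for the real analysis/suggestion logic. */
static void suggest(const char *input) {
    printf("(suggestion for: %s)\n", input);
}

int main(void)
{
    char line[1024];

    /* Prompt, read a line, react to it; repeat until EOF. */
    for (;;) {
        fputs("MAGIC> ", stdout);
        fflush(stdout);
        if (fgets(line, sizeof(line), stdin) == NULL)
            break;
        line[strcspn(line, "\n")] = '\0';   /* strip the trailing newline */
        suggest(line);
    }
    return 0;
}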

How to prefill command line input

I'm running a bash script and I'd like to prefill a command line with some command after executing the script. The only condition is that the script mustn't be running at that time.
What I need is to ...
run the script
have prefilled text in my command line AFTER the script has been stopped
Is it even possible? All I have tried so far is to simulate a bash prompt using
read -e -i "$comm" -p "[$USER@$HOSTNAME $PWD]$ " input
command $input
But I'm looking for something more straightforward.
You need to use the TIOCSTI ioctl. Here's an example C program that shows how it works:
#include <sys/ioctl.h>

int main(void)
{
    char buf[] = "date";
    int i;
    for (i = 0; i < sizeof buf - 1; i++)
        ioctl(0, TIOCSTI, &buf[i]);
    return 0;
}
Compile this and run it and "date" will be buffered as input on stdin, which your shell will read after the program exits. You can roll this up into a command that lets you stuff anything into the input stream and use that command in your bash script.
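For instance, the "roll this up into a command" idea could look like the sketch below, which takes the text to inject from argv[1]; the program name inject is just a placeholder, and fd 0 must be a terminal:

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>

/* Hypothetical usage:  ./inject "ls -l"
   Pushes the given string into the terminal's input queue so the shell
   reads it as if it had been typed. */
int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <text>\n", argv[0]);
        return 1;
    }
    for (size_t i = 0; i < strlen(argv[1]); i++) {
        if (ioctl(0, TIOCSTI, &argv[1][i]) < 0) {
            perror("ioctl(TIOCSTI)");
            return 1;
        }
    }
    return 0;
}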

TAP::Harness perl tests tee output

I am running my tests using TAP::Harness. When I run the tests from the command line on a Linux system, I get the test results on STDOUT as they run, but when I try to capture the output to a file as well as STDOUT using perl harness.pl | tee out.tap, the results are buffered and displayed only at the end. I tried passing a file handle to new, but the results are still buffered before being written to the file. Is there a way not to buffer the output? I have a long-running suite and would like to look at the results while the tests are running, as well as capture the output.
TAP::Harness version 3.22 and perl version 5.8.8
Here is the sample code.
harness.pl
#!/usr/bin/perl
use strict;
use warnings;
use TAP::Harness;
$|++;
my @tests = ('del.t',);
my $harness = TAP::Harness->new( {
    verbosity => 1,
} );
$harness->runtests(@tests);
and the test del.t
use Test::More qw /no_plan/;
$|++;
my $count = 1;
for (1 .. 20) {
    ok( $count++ == $_, "Pass $_" );
    sleep 1 if ( $count % 5 == 0 );
}
Using script instead of tee does what you want:
script -c 'perl harness.pl' file
Found a simple change to make tee work as well: Specify a formatter_class:
my $harness = TAP::Harness->new( {
    verbosity => 1,
    formatter_class => 'TAP::Formatter::Console',
} );
This is because TAP::Harness normally picks a different default formatter when the output is not a tty, and that default is what causes the buffering you're seeing.
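The tty check behind this is one that many programs perform: call isatty() on stdout and switch formatting or buffering when it reports a pipe or file instead of a terminal. A tiny C illustration of that general mechanism (not of TAP::Harness itself):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* isatty() reports whether fd 1 is connected to a terminal. */
    if (isatty(STDOUT_FILENO))
        printf("stdout is a terminal: interactive formatting, line buffering\n");
    else
        printf("stdout is a pipe or file: plain output, typically block-buffered\n");
    return 0;
}

Run it directly and then through | cat to see the two behaviors.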
