Perl: tail file in background while running another system command in loop - linux

I'm trying to write a Perl script to capture system log output while a loop runs a system command at intervals. I want the script to do the equivalent of something I often do on the (unix) command line: taking Java process thread dumps by tailing /var/log/jbossas/default/console.log into a new file in the background, while running kill -QUIT [PID] an arbitrary number of times at intervals, in the foreground. I do not need to examine or process the log file output while it's being tailed, I just want it to go to a new file while my loop runs; once the loop exits, the background task should exit too.
# basic loop
# $process is the PID given as an argument
my $duration  = 6;
my $frequency = 30;
my $dumps     = 0;
until ($dumps == $duration) {
    system "kill -QUIT $process";
    $dumps++;
    print STDOUT "$dumps of $duration thread dumps sent to log.\n";
    print STDOUT "sleeping for $frequency seconds\n";
    sleep $frequency;
}
Somehow I need to wrap this loop in another loop that will know when this one exits, and then exit the background log-tailing task. I realize that this should be trivial in Perl, but I am not sure how to proceed, and other questions or examples I've found are not doing quite what I'm trying to do here. Using Perl's system blocks before I can proceed into the inner loop; exec forks off the tail job, so I'm not sure how I'd exit it after my inner loop runs. I'd strongly prefer to use only core Perl modules, and not File::Tail or any additional CPAN modules.
Thanks in advance for any feedback, and feel free to mock my Perlessness. I've looked for similar questions answered here, but if I've missed one that seems to address my problem, I'd appreciate your linking me to it.

This is probably best suited with an event loop. Read up on the answer to Making a Perl daemon that runs 24/7 and reads from named pipes, that'll give you an intro on reading a filehandle in an event loop. Just open a pipe to the tail output, print it off to the file, run the kill on a timer event, then once the timer events are done just signal an exit.
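If an event loop feels like overkill, here is a minimal core-Perl sketch of a simpler alternative: fork a child that execs tail -f with its output redirected, run the signal loop in the parent, then kill and reap the child when the loop finishes. The output file path is a placeholder; the log path comes from the question.

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $process = $ARGV[0] or die "usage: $0 PID\n";
my $log     = '/var/log/jbossas/default/console.log';
my $outfile = '/tmp/threaddumps.log';   # placeholder output path

# Start the tail in a background child process.
my $tail_pid = fork();
die "fork failed: $!" unless defined $tail_pid;
if ($tail_pid == 0) {
    # Child: redirect stdout to the capture file, then become tail.
    open STDOUT, '>', $outfile or die "open $outfile: $!";
    exec 'tail', '-f', $log or die "exec tail: $!";
}

# Foreground loop: send QUIT signals at intervals.
my $duration  = 6;
my $frequency = 30;
for my $dumps (1 .. $duration) {
    kill 'QUIT', $process or warn "kill: $!";
    print "$dumps of $duration thread dumps sent to log.\n";
    sleep $frequency unless $dumps == $duration;
}

# Loop done: stop the background tail and reap it.
kill 'TERM', $tail_pid;
waitpid $tail_pid, 0;
```

Because the parent knows the child's PID from fork, stopping the background task when the loop exits is just a kill plus a waitpid.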

Related

In Linux, how can I wait until a process I didn't start finishes?

I have a monitoring program that I'd like to have check on various processes in the system and know when they terminate. I'd also like to know their exit code, in case they crash. However, my program is not a parent of the processes to be monitored.
In Windows, this is easy: OpenProcess for SYNCHRONIZE rights, WaitForMultipleObjectsEx to wait for any of them to terminate, then GetExitCodeProcess to find out why it terminated (with NTSTATUS error codes if the reason was an exception).
But in Linux, the equivalent of these, waitpid, only works on your own child processes, not unrelated processes. We tried ptrace, but this caused its own issues, such as greatly slowing down signal processing.
This program is intended to run as root.
Is there a way to implement this, other than just polling /proc/12345 until it disappears?
I can't think of an easy way to collect the termination statuses, but for simple death events you can, as root, inject an open() call for a FIFO into the target process, keep the other end of it yourself, and then select() on your end of the file descriptor.
When the target dies, its end is closed, which generates an event on the file descriptor you hold.
A (very ugly) example:
mkfifo /tmp/fifo #A channel to communicate death events
sleep 1000 & #Simulate your victim process
thePid=$! #Make note of the pid you want
#In another terminal
sudo gdb -ex "attach $thePid" -ex ' call open("/tmp/fifo",0,0)' -ex 'quit'
exec 3>/tmp/fifo
ruby -e 'fd = IO.select([IO.for_fd(3)]); puts "died" '
#In yet another terminal
kill $thePid #the previous terminal will print `died` immediately
#even though it's not the parent of $thePid

How to use tcl thread as inter process communication method?

I am trying to find out whether inter-process communication can be done with Tcl threads. I am a beginner on this topic, so right now I am just collecting information. I understand that a sender and receiver mechanism has to be coded to pass data between processes, and that the Tcl Thread package provides a send command. A thread can also be used as a timer for a process spawned inside it.
Is it possible to receive data from one thread in another thread?
Thank you.
# contents of test.tcl
puts stdout "hello from wish"
# end of file
# set cmd
set exe {wish85.exe}
set exepath [list $exe test.tcl]
# This next line is slightly magical
set f [open |$exepath r+]
# Use the next line or you'll regret it!
puts $f {fconfigure stdout -buffering line}
fileevent $f readable "getline $f"
proc getline f {
    if {[gets $f line] < 0} {
        close $f
        return
    }
    puts "line=$line"
}
You need to be much clearer in your mind about what you are looking for. Threads are not processes! With Tcl, every Tcl interpreter context (the thing you make commands and variables in) is bound to a single thread, and every thread is coupled to a single process.
Tcl has a Thread package for managing threads (it should be shipped with any proper distribution of Tcl 8.6) and that provides a mechanism for sending messages between threads, thread::send. Those messages? They're executable scripts, which means that they are really flexible.
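For completeness, a minimal sketch of thread::send (the greet proc and its message are made up for the example; assumes the Thread package is installed):

```tcl
package require Thread

# Create a worker thread with its own interpreter; it defines a
# proc and then sits in its event loop waiting for messages.
set worker [thread::create {
    proc greet {name} { return "hello, $name" }
    thread::wait
}]

# Synchronous send: runs the script in the worker thread and
# returns its result to this thread.
set reply [thread::send $worker {greet world}]
puts $reply

# Tell the worker it can go away.
thread::release $worker
```

thread::send also has an -async mode if you don't want the sender to block while the worker runs the script.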
For communication between processes, things are much more complicated because you have to consider both discovery of the other processes and security (because processes are a security boundary by design). Here are some of the options:
Tcl is very good at running subprocesses and talking with them via pipes. For example, you can run a subordinate interpreter in just a couple of lines using open:
# This next line is slightly magical
set mypipeline [open |[list [info nameofexecutable]] r+]
# Use the next line or you'll regret it!
puts $mypipeline {fconfigure stdout -buffering line}
It even works well with the fileevent command, so you can do asynchronous processing within each interpreter. (That's really quite uncommon in language runtimes, alas.)
The send command in Tk lets you send scripts to other processes using the same display (I'm not sure if this works on Windows) much as thread::send does with threads in the same process.
The comm package in Tcllib does something very similar, but uses general sockets as a communication fabric.
On Windows, you can use the dde command in Tcl to communicate with other processes. I don't think Tcl registers a DDE server by default, but it's pretty easy to do (provided you are running the event loop, but that's a common requirement for most of the IPC mechanisms to work at their best).
More generally, you can think in terms of running webservices and so on, but that's getting quite complicated!

Linux schedule task when another is done

I have a task/process currently running. I would like to schedule another task to start when the first one finishes.
How can I do that in Linux?
(I can't stop the first one to wrap both in a script that runs one task after the other.)
Somewhat meager spec, but something along the lines of
watch -n 1 'pgrep task1 || task2'
might do the job.
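Note that watch will keep re-running task2 every interval once task1 is gone. A plainer polling loop runs the follow-up exactly once; in this sketch, sleep 2 stands in for the already-running task and echo for the task to schedule:

```shell
#!/bin/sh
# Demo stand-ins: "sleep 2" plays the already-running task,
# the echo plays the task to schedule. Substitute the real PID/command.
sleep 2 &
watched=$!

# Poll once a second until the watched process is gone.
# kill -0 sends no signal; it only tests whether the PID is still alive.
while kill -0 "$watched" 2>/dev/null; do
    sleep 1
done
echo "first task finished, starting second task"
```

Like the watch approach, this is still polling; it just stops the moment the first task disappears.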
You want wait.
Either the system call in section 2 of the manual, one of its variants like waitpid, or the shell builtin, which is designed explicitly for this purpose.
The shell builtin is a little more natural because both processes are children of the shell, so you write a script like:
#!/bin/sh
command1 arguments &
wait
command2 args
To use the system calls you will have to write a program that forks, launches the first command in the child then waits before execing the second program.
The manpage for wait (2) says:
wait() and waitpid()
The wait() system call suspends execution of the current process until one of its children terminates. The call wait(&status) is equivalent to:
waitpid(-1, &status, 0);
The waitpid() system call suspends execution of the current process until a child specified by the pid argument has changed state.

Timing traps in a shell script

I have a shell script background process that runs nohupped. This process is meant to receive signals in a trap, but when playing around with some code, I noticed that some signals are ignored if the interval between them is too small. The execution of the trap function takes too much time, and therefore the subsequent signal goes unserved. Unfortunately, the trap command doesn't have any kind of signal queue, which is why I am asking: what is the best way to solve this problem?
A simple example:
receive_signal()
{
    local TIMESTAMP=`date '+%Y%m%d%H%M%S'`
    echo "some text" > "$TIMESTAMP"
}

trap receive_signal USR1

while :
do
    sleep 5
done
The easiest change, without redesigning your approach, is to use realtime signals, which queue.
This is not portable. Realtime signals themselves are an optional extension, and shell and utility support for them are not required by the extension in any case. However, it so happens that the relevant GNU utilities on Linux — bash(1) and kill(1) — do support realtime signals in a commonsense way. So, you can say:
trap sahandler RTMIN+1
and, elsewhere:
$ kill -RTMIN+1 $pid_of_my_process
Did you consider multiple one line trap statements? One for each signal you want to block or process?
trap dosomething 15
trap segfault SEGV
Also you want to have the least possible code in a signal handler for the reason you just encountered.
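The "least possible code in a signal handler" advice usually means: have the trap do nothing but record that a signal arrived, and do the real work in the main loop. A minimal bash sketch (the loop sends itself USR1 to simulate incoming signals):

```shell
#!/bin/bash
# Handler does the bare minimum: record that a signal arrived.
pending=0
trap 'pending=$((pending + 1))' USR1

# The main loop does the real work, outside the handler.
for i in 1 2 3; do
    kill -USR1 $$          # simulate an incoming signal
    if [ "$pending" -gt 0 ]; then
        echo "handling $pending signal(s)"
        pending=0
    fi
done
```

Keeping the handler to a single assignment shortens the window in which a second signal can arrive while the first is still being processed.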
Edit - for bash you can code your own error handling / signal handling in C, or anything else using modern signal semantics if you want with dynamically loadable modules:
http://cfajohnson.com/shell/articles/dynamically-loadable/

input output file redirection and shell pipes in a simple shell program implemented with c

I have written a program that gets command line arguments such as ls, cat and executes them. Now I need to extend this program for i/o redirection and do shell pipes as well.
Here is my program for the simple shell.
if ((pid = fork()) == -1) { /* error exit - fork failed */
    perror("Fork failed");
    exit(-1);
}
if (pid == 0) { /* this is the child */
    printf("This is the child ready to execute: %s\n", argv[1]);
    execvp(argv[1], &argv[1]);
    perror("Exec returned");
    exit(-1);
} else {
    waitpid(pid, NULL, 0); /* wait(2) takes one pointer argument; waitpid is clearer */
    printf("The parent is exiting now\n");
    ...
}
I don't know how to add pipes and redirection in this same program!
dup2(pipeID[0], STDIN_FILENO); /* make the pipe's read end our stdin */
close(pipeID[0]);
close(pipeID[1]);
execlp(argv[3], argv[3], argv[4], (char *)NULL);
I know that I have to use dup() or dup2() for redirection and pipe() too, but how do I do it all together in the same program?
There are many SO questions that address some or all of these issues. The most relevant search terms in the SO search box are [c] [shell] (tags shell and C). Questions include:
Writing my own shell in C: how do I run Unix executables?
Redirecting the output of a child process?
How can I implement my own basic Unix shell in C?
Writing Linux shell?
Shell program and pipes in C?
You can probably come up with a better selection if you try harder.
There are a number of issues you'll need to address:
Pipes need to be set up before the fork that creates the two processes connected by the pipe.
I/O Redirection can be done in the child (only).
You need to parse the command line to split it into command names, arguments, and I/O redirections.
You need to be careful to ensure that enough of the pipe file descriptors are closed (that's usually all of them by the time you're done).
Given a pipe line sort file1 file2 | uniq -c | sort -n, which process or processes is the parent shell going to wait for? All of them? Just the first? Just the last? Why?
The decisions on the previous point will affect the number of pipes opened in the parent vs the number opened by the children. You could always set up all the needed pipes in the parent, and let everyone do lots of closing. If you're only going to wait for the last process, you can optimize so that a given process in the chain has at most its input pipe and its output pipe open. Not trivial, but doable.
