How do I send the output of two commands to standard out in parallel? - linux

I want to use xinput to monitor the number of keystrokes and the number of mouse movements/button presses. For simplicity, let's say what I want is for these two commands:
xinput test 0
xinput test 1
to write to the screen at the same time.
I am using this in a Perl script like:
open(my $fh, '-|', 'xinput test 0') or die $!;
while (my $line = <$fh>) {
    # ...stuff to keep count instead of logging directly to a file
}
EDIT:
something like:
open(my $fh, '-|', 'xinput test 0 & xinput test 1') or die $!;
doesn't work.

I'm not sure what you want to do with the output, but it sounds like you want to run the commands simultaneously. In that case, my first thought would be to fork the Perl process once per command and then exec each child to run the command you care about.
foreach my $command ( @commands ) { # filter @commands for taint, etc.
    if ( fork ) { ... } # parent
    else {              # child
        exec $command or die "Could not exec [$command]! $!";
    }
}
The forked processes share the same standard filehandles. If you need their data in the parent process, you'd have to set up some sort of communication between the two.
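If you do need the lines back in the parent (as in the question's counting loop), one alternative to plain fork/exec is to open each command as a read pipe and multiplex them with IO::Select. A minimal sketch, with the two xinput commands taken from the question:
#!/usr/bin/perl
use strict;
use warnings;
use IO::Select;

# Open one read pipe per command; each child runs one xinput monitor.
my @handles;
for my $cmd ( 'xinput test 0', 'xinput test 1' ) {
    open my $fh, '-|', $cmd or die "Can't start [$cmd]: $!";
    push @handles, $fh;
}

# Handle whichever pipe has output, as it arrives.
my $sel = IO::Select->new(@handles);
while ( $sel->count ) {
    for my $fh ( $sel->can_read ) {
        if ( defined( my $line = <$fh> ) ) {
            print $line; # ...or update your keystroke/mouse counters here
        }
        else {
            $sel->remove($fh); # that command exited
            close $fh;
        }
    }
}
Caveat: mixing select() with buffered readline can block on a partially received line; a robust version would use sysread and split the lines itself.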
There are also several Perl frameworks on CPAN for handling asynchronous multi-process stuff, such as POE, AnyEvent, and so on. They'd handle all these details for you.

If you want both commands to write to the console simultaneously, simply run them in the background:
xinput test 0 &
xinput test 1 &
But first you have to make sure the terminal is configured to allow that; otherwise the background processes will be stopped when they try to write to the terminal. This command switches off the stty tostop option:
stty -tostop

Related

Can I capture the output of another process that I started?

I'm currently using > /dev/null & to have Perl script A run Perl script B totally independently, and it works fine. Script B runs without throwing back any output and stays alive when Script A ends, even when my terminal session ends.
Not saying I need it, but is there a way to recapture its output if I wanted to?
Thanks
Your code framework may look like this:
#!/usr/bin/perl
# I'm a.pl
#...
system "b.pl > ~/b.out &";
while (1)
{
    my $time = localtime;
    my ($fsize, $mtime) = (stat "/var/log/syslog")[7,9];
    print "syslog: size=$fsize, mtime=$mtime at $time\n";
    sleep 60;
}
while the b.pl may look like:
#!/usr/bin/perl
# I'm b.pl
while (1)
{
    my $time = localtime;
    my $fsize_a = (stat "/var/log/auth.log")[7];
    my $fsize_s = (stat "/var/log/syslog")[7];
    print "fsize: syslog=$fsize_s auth.log=$fsize_a at $time\n";
    sleep 60;
}
a.pl and b.pl do their job independently.
b.pl is called by a.pl as a background job, and sends its output to b.out (so it won't mess up the screen of a.pl).
You can read b.out from some other terminal, or after a.pl is finished (or while a.pl is temporarily put into the background).
About terminating the two scripts:
`ctrl-c` for a.pl
`killall b.pl` for b.pl
Note:
b.pl will keep running even after you close your terminal (assuming your terminal runs as a desktop application), so you don't need the `nohup` command here. (nohup is perhaps only useful on a console.)
If your b.pl may spit out error messages from time to time, then you still need to deal with its stderr. It's left as your homework.
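One starting point for that homework (a sketch; ~/b.err is an illustrative path):
# in a.pl: send b.pl's stderr to its own file instead of the terminal
system "b.pl > ~/b.out 2> ~/b.err &";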

How to get PID of perl daemon in init script?

I have the following perl script:
#!/usr/bin/perl
use strict;
use warnings;
use Proc::Daemon;
Proc::Daemon::Init;
my $continue = 1;
$SIG{TERM} = sub { $continue = 0 };
while ($continue) {
    # stuff
}
I have the following in my init script:
DAEMON='/path/to/perl/script.pl'
start() {
    PID=`$DAEMON > /dev/null 2>&1 & echo $!`
    echo $PID > /var/run/mem-monitor.pid
}
The problem is, this returns the wrong PID! This returns the PID of the parent process which is started when the daemon is run, but that process is immediately killed off. I need to get the PID of the child process!
The Proc::Daemon documentation says
Proc::Daemon does the following:
...
9. The first child transfers the PID of the second child (daemon) to the parent. Additionally the PID of the daemon process can be written into a file if 'pid_file' is defined. Then the first child exits.
and then later, under new ( %ARGS )
pid_file
Defines the path to a file (owned by the parent user) where the PID of the daemon process will be stored. Defaults to undef (= write no file).
Also look at the Init() method description. All of this implies that you may want to call new first.
The point is that it is the grand-child process that is the daemon. However, the child passes the PID along, so it is available to the parent. If pid_file => $file_name is set in the constructor, the daemon's PID is written to that file.
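A minimal sketch of that approach, assuming only that the pid-file path matches the one your init script reads:
#!/usr/bin/perl
use strict;
use warnings;
use Proc::Daemon;

# Let Proc::Daemon write the daemon's (grand-child's) PID to the file itself.
my $daemon = Proc::Daemon->new( pid_file => '/var/run/mem-monitor.pid' );
my $pid    = $daemon->Init; # daemon's PID in the parent, 0 in the daemon

exit 0 if $pid; # parent is done; the init script can now read the pid file

my $continue = 1;
$SIG{TERM} = sub { $continue = 0 };
while ($continue) {
    # stuff
}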
A comment asks not to have the shell script rely on a file written by another script.
I can see two ways to do that.
Have the parent print the PID returned by $daemon->Init() and pick it up in the shell. This is defeated by the redirects in the question, but I don't know why they are needed. The parent and child exit as soon as everything is set up, while the daemon is detached from everything.
The shell script can start the Perl script with the desired file name as an argument, letting it write the daemon's PID to that file by the above process. The file is still written by Perl, but what matters about it is decided by the shell script.
I'd like to include a statement from my comment below. I consider these superior to two other things that come to mind: picking the filename from a config-style file kept by the shell is more complicated, while parsing the process table may be unreliable.
I've seen this before and had to resort to using STDERR to send back the child's PID to the calling shell script. I've always assumed it was due to the mentioned unreliability of exit codes - but the details were not clear in the documentation. Please try something like this:
#!/usr/bin/perl
use strict;
use warnings;
use Proc::Daemon;
if ( my $pid = Proc::Daemon::Init() ) {
    # parent: Init() returns the daemon's PID here
    print STDERR $pid;
    exit;
}
my $continue = 1;
$SIG{TERM} = sub { $continue = 0 };
while ($continue) {
    sleep(20);
    exit; # demo only: quit after one pass; drop this line in real code
}
With a calling script like this:
#!/bin/bash
DAEMON='./script.pl'
start() {
    PID=$($DAEMON 2>&1 >/dev/null)
    echo $PID > ./mem-monitor.pid
}
start;
When the bash script is run, it will capture the STDERR output (containing the correct PID) and store it in the file. Any STDOUT the Perl script produces is sent to /dev/null - though this is unlikely, as the first-level Perl script (in this case) exits fairly early on.
Thanks for the suggestions from zdim and Hakon. They are certainly workable, and got me on the right track, but ultimately I went a different route. Rather than relying on $!, I used ps and awk to get the PID, as follows:
DAEMON='/path/to/perl/script.pl'
start() {
    $DAEMON > /dev/null 2>&1
    PID=`ps aux | grep -v 'grep' | grep "$DAEMON" | awk '{print $2}'`
    echo $PID > /var/run/mem-monitor.pid
}
This works and satisfies my OCD! Note the double quotes around "$DAEMON" in grep "$DAEMON".

Run a Perl script in the background and send it commands

I want to make a script that will prevent requests to certain domains for a certain amount of time,
and kill specific processes during the same amount of time.
I would like to have something like a daemon, to which I could send commands.
For example, to see how much time is left, with some_script timeleft,
to start the daemon which would be created by something like some_script start,
or to add a new domain/process, etc.
What I'm stuck on is:
How to create a daemon? I have seen this
I don't know how to send a command to a daemon from the command line
I hope I have been clear enough in my explanation.
I would probably use the bones of the answer you refer to, but add:
a handler for SIGHUP which re-reads the config file of IPs to suppress, and,
a handler for SIGUSR1 which reports how much time there is remaining.
So, it would look like this roughly:
#!/usr/bin/perl
use strict;
use warnings;
use Proc::Daemon;
Proc::Daemon::Init;
my $continue = 1;
################################################################################
# Exit on SIGTERM
################################################################################
$SIG{TERM} = sub { $continue = 0 };
################################################################################
# Re-read config file on SIGHUP
################################################################################
$SIG{HUP} = sub {
    # Re-read some config file - probably using the same sub we used at startup
    open(my $fh, '>', '/tmp/status.txt');
    print $fh "Re-read config file\n";
    close $fh;
};
################################################################################
# Report remaining time on SIGUSR1
################################################################################
$SIG{USR1} = sub {
    # Subtract something from something and report the difference
    open(my $fh, '>', '/tmp/status.txt');
    print $fh "Time remaining = 42\n";
    close $fh;
};
################################################################################
# Main loop
################################################################################
while ($continue) {
    sleep 1;
}
You would then send the HUP signal or USR1 signal with:
pkill -HUP daemon.pl
or
pkill -USR1 daemon.pl
and look in /tmp/status.txt for the output from the daemon. The above commands assume you stored the Perl script as daemon.pl - adjust if you used a different name.
Or you could have the daemon write its own PID to a file on startup and use the -F option to pkill.
There are a few ways to communicate with a daemon, but I would think UNIX domain sockets would make the most sense. In Perl, IO::Socket::UNIX would be the thing to look at, for example:
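A minimal, untested sketch of the daemon side (the socket path and the command names 'timeleft'/'stop' are illustrative, not an existing protocol):
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::UNIX;

my $path = '/tmp/some_script.sock';
unlink $path; # remove a stale socket from a previous run

my $server = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM(),
    Local  => $path,
    Listen => 1,
) or die "Can't listen on $path: $!";

while ( my $client = $server->accept ) {
    chomp( my $cmd = <$client> // '' );
    if    ( $cmd eq 'timeleft' ) { print $client "42 seconds left\n" }
    elsif ( $cmd eq 'stop' )     { last }
    close $client;
}
unlink $path;
A client connects to the same path with IO::Socket::UNIX->new(Type => SOCK_STREAM(), Peer => $path), prints one command line, and reads the reply.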

How to handle updates from a continuous process pipe in Perl

I am trying to follow log files in Perl on Fedora but unfortunately, Fedora uses journalctl to read binary log files that I cannot parse directly. This, according to my understanding, means I can only read Fedora's log files by calling journalctl.
I tried using IO::Pipe to do this, but the problem is that $p->reader(..) waits until journalctl --follow is done writing output (which will never happen, since --follow behaves like tail -F) and only then lets me print everything out, which is not what I want. I would like to be able to set a callback function that is called each time a new line is printed to the process pipe, so that I can parse/handle each new log event.
use IO::Pipe;
my $p = IO::Pipe->new();
$p->reader("journalctl --follow"); # waits for the process to exit
while (<$p>) {
    print;
}
I assume that journalctl is working like tail -f. If this is correct, a simple open should do the job:
use Fcntl qw(SEEK_CUR); # import SEEK_CUR
my $pid = open my $fh, '-|', 'journalctl --follow'
    or die "Error $! starting journalctl";
while (kill 0, $pid) {
    while (<$fh>) {
        print $_; # print log line
    }
    sleep 1; # wait some time for new lines to appear
    seek($fh, 0, SEEK_CUR); # reset EOF
}
open opens a filehandle for reading the output of the called command: http://perldoc.perl.org/functions/open.html
seek is used to reset the EOF marker: http://perldoc.perl.org/functions/seek.html Without reset, all subsequent <$fh> calls will just return EOF even if the called script issued additional output in the meantime.
kill 0,$pid will be true as long as the child process started by open is alive.
You may replace sleep 1 by usleep from Time::HiRes or select undef,undef,undef,$fractional_seconds; to wait less than a second depending on the frequency of incoming lines.
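For example, with Time::HiRes (note that usleep takes microseconds):
use Time::HiRes qw(usleep);
usleep(250_000); # wait a quarter of a second between reads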
AnyEvent should also be able to do the job via its AnyEvent::Handle; see the sketch after the update below.
Update:
Adding use POSIX ":sys_wait_h"; at the beginning and waitpid($pid, WNOHANG) to the outer loop would also detect (and reap) a zombie journalctl process:
while (kill(0, $pid) and waitpid($pid, WNOHANG) != $pid) {
A daemon might also want to check if $pid is still a child of the current process ($$) and if it's still the original journalctl process.
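As for the AnyEvent route mentioned above, a minimal, untested sketch (same journalctl command; the line callback is where each log event would be parsed):
use strict;
use warnings;
use AnyEvent;
use AnyEvent::Handle;

open my $fh, '-|', 'journalctl --follow' or die "Can't start journalctl: $!";

my $done   = AnyEvent->condvar;
my $handle = AnyEvent::Handle->new(
    fh      => $fh,
    on_read => sub {
        my ($h) = @_;
        $h->push_read( line => sub {
            my ( undef, $line ) = @_;
            print "$line\n"; # parse/handle each new log event here
        } );
    },
    on_eof   => sub { $done->send },
    on_error => sub { $done->send },
);
$done->recv;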
I have no access to journalctl, but if you avoid IO::Pipe and open the piped output directly, then the data will not be buffered:
use strict;
use warnings 'all';
open my $follow_fh, '-|', 'journalctl --follow' or die $!;
print while <$follow_fh>;

How can I change the current directory in a thread-safe manner in Perl?

I'm using Thread::Pool::Simple to create a few worker threads. Each worker thread does some stuff, including a call to chdir followed by an execution of an external Perl script (from the jbrowse genome browser, if it matters). I use capturex to call the external script and die on its failure.
I discovered that when I use more than one thread, things start to get messy. After some research, it seems that the current directory of some threads is not the correct one.
Perhaps chdir propagates between threads (i.e. isn't thread-safe)?
Or perhaps it's something with capturex?
So, how can I safely set the working directory for each thread?
** UPDATE **
Following the suggestions to change directory while executing: how exactly should I pass these two commands to capturex?
Currently I have:
my @args = ( "bin/flatfile-to-json.pl", "--gff=$gff_file", "--tracklabel=$track_label", "--key=$key", @optional_args );
capturex( [0], @args );
How do I add another command to @args?
Will capturex continue to die on errors of any of the commands?
I think that you can solve your "how do I chdir in the child before running the command" problem pretty easily by abandoning IPC::System::Simple as not the right tool for the job.
Instead of doing
my $output = capturex($cmd, @args);
do something like:
use autodie qw(open close);
my $pid = open my $fh, '-|';
unless ($pid) { # this is the child
    chdir($wherever);
    exec($cmd, @args) or exit 255;
}
my $output = do { local $/; <$fh> };
# If child exited with error or couldn't be run, the exception will
# be raised here (via autodie; feel free to replace it with
# your own handling)
close ($fh);
If you were getting a list of lines instead of scalar output from capturex, the only thing that needs to change is the second-to-last line (to my @output = <$fh>;).
More info on forking-open is in perldoc perlipc.
The good thing about this in preference to capture("chdir wherever ; $cmd @args") is that it doesn't give the shell a chance to do bad things to your @args.
Updated code (doesn't capture output)
my $pid = fork;
die "Couldn't fork: $!" unless defined $pid;
unless ($pid) { # this is the child
    chdir($wherever);
    open STDOUT, '>', '/dev/null'; # optional: silence subprocess output
    open STDERR, '>', '/dev/null'; # even more optional
    exec($cmd, @args) or exit 255;
}
wait;
die "Child error $?" if $?;
I don't think "current working directory" is a per-thread property. I'd expect it to be a property of the process.
It's not clear exactly why you need to use chdir at all though. Can you not launch the external script setting the new process's working directory appropriately instead? That sounds like a more feasible approach.
