ZSH - wrap all commands with a function - linux

I'd like to print, right at the start, the PID of any command I run (even if it's not a background process).
I came up with this:
fn() {
    eval "$1"
}
Now whenever I run a command, I want it to be processed as
fn "actual_command_goes_here"&; wait
So that this will trigger a background process (which will print the PID) and then run the actual command. Example:
$ fn "sleep 5; date +%s; sleep 5; date +%s; sleep 2"&; wait
[1] 29901
1479143885
1479143890
[1] + 29901 done fn "..."
My question is: is there any way to create a wrapper function for all commands in Bash/zsh? That is, whenever I run ls, it should actually run fn "ls"&; wait.
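For reference, the mechanism the question describes can be sketched in plain POSIX shell. Note that the `[1] 29901` job line in the example comes from an interactive zsh's job control; a non-interactive script won't print it, so this sketch echoes `$!` explicitly instead:

```shell
#!/bin/sh
# Sketch of the wrapper from the question: run the command string
# through eval in a background job, print its PID, then wait.
fn() {
    eval "$1"
}

fn "echo hello; echo world" &
echo "background PID: $!"
wait
```

In an interactive zsh the shell prints the job/PID line itself as soon as `fn "..." &` starts, which is what the question's example output shows.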

Related

Run while loop in background and execute the entire script - Bash

I am trying to run a condition at a specific interval in a while loop, but only the test function ever executes. I want the test function to keep running at that interval in the background while the script moves on to the next function. Any help would be appreciated.
test(){
    while true; do
        echo "Hello"
        sleep 5
    done
}
function2(){
    echo "I am inside function2"
}
test
function2
You can tell bash to run a command in a separate subshell, asynchronously, with a single trailing ampersand &. So you can make test run separately like this:
test(){
    while true; do
        echo "Hello"
        sleep 5
    done
}
function2(){
    echo "I am inside function2"
}
test &
function2
However, this will cause the script to terminate before test is finished running! You can make the program wait for test & to finish at a later point in the program by using wait:
test(){
    while true; do
        echo "Hello"
        sleep 5
    done
}
function2(){
    echo "I am inside function2"
}
test & p1=$!
function2
wait $p1
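One caveat with the snippet above: because test loops forever, wait $p1 will block indefinitely. If the background loop is only needed while the foreground work runs, a sketch of the alternative is to kill it when the foreground is done (the function is renamed test_loop here to avoid shadowing the shell's test builtin):

```shell
#!/bin/sh
# Background loop is killed after the foreground work, not waited on.
test_loop() {
    while true; do
        echo "Hello"
        sleep 5
    done
}
function2() {
    echo "I am inside function2"
}

test_loop & p1=$!
function2
kill "$p1"                  # stop the infinite background loop
wait "$p1" 2>/dev/null || true   # reap it; ignore the signal exit status
```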

How do I get stdout to the terminal when executing std::process::Command in Rust?

use std::process::Command;

fn main() {
    let output = Command::new("/bin/bash")
        .args(&["-c", "docker", "build", "-t", "postgres:latest", "-", "<>", "dockers/PostgreSql"])
        .output()
        .expect("failed to execute process");
    println!("{:?}", output);
}
The code above runs fine, but it prints the output only after the docker build has completely finished. I want to see each command's output in my Linux terminal as it happens. I tried all the combinations given in the documentation and read it many times, but I still don't understand how to redirect stdout to my terminal window.
According to the documentation stdout has default behavior depending on how you launch your subprocess:
Defaults to inherit when used with spawn or status, and defaults to piped when used with output.
So stdout is piped when you call output(). What does piped mean? It means the child process's output is captured by the parent process (our Rust program in this case), and std::process::Command hands it back to us in output.stdout:
use std::process::{Command, Stdio};

let output = Command::new("echo")
    .arg("Hello, world!")
    .stdout(Stdio::piped())
    .output()
    .expect("Failed to execute command");

assert_eq!(String::from_utf8_lossy(&output.stdout), "Hello, world!\n");
// Nothing echoed to console
Now that we understand where the stdout is currently going, if you wish to have the console get streamed output, call the process using spawn():
use std::process::Command;

fn main() {
    let output = Command::new("/bin/bash")
        .args(&["-c", "echo hello world"])
        .spawn()
        .expect("failed to execute process");
    println!("{:?}", output);
}
Notice also that in this later example I pass the full echo hello world command as one string. This is because bash -c takes only the next argument as the script to run and parses that single string as shell; any further arguments become positional parameters, not part of the command. If you were in your console executing a docker command through a bash shell you would say:
bash -c "docker run ..."
The quotes above tell the shell to keep the whole command together as one argument rather than splitting it on spaces. The equivalent in our Rust array is to pass the full command as a single string (assuming you wish to call it through bash -c, of course).
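This argument handling can be checked directly in a shell: only the string right after -c is parsed as the script, and any further arguments become $0, $1, … inside it, which is why the question's split argument list never ran `build` at all:

```shell
#!/bin/sh
# Only the argument immediately after -c is executed as a script;
# extra arguments are bound to $0, $1, ... inside that script.
bash -c 'echo "script got: $0 $1"' docker build
# -> script got: docker build

# Passing the whole command as one string makes bash parse and run it:
bash -c 'echo docker build -t postgres:latest'
# -> docker build -t postgres:latest
```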
I had luck with the other answer, but I had to modify it a little. For me, I
needed to add the wait method:
use std::{io, process::Command};

fn main() -> io::Result<()> {
    let mut o = Command::new("rustc").arg("-V").spawn()?;
    o.wait()?;
    Ok(())
}
Otherwise the parent program may end before the child does.
https://doc.rust-lang.org/std/process/struct.Child.html#method.wait

Why bash background task ignores SIGINT?

I noticed that sleep can't be killed by SIGINT when spawned by:
(sleep 1000 &)
I wonder why is so.
All of the following are killed by SIGINT:
sleep 1000
sleep 1000 &
(sleep 1000)
( ( (sleep 1000) ) )
( ( (sleep 1000)& ) )
so I figure it must have something to do with non-interactive bash (the parentheses are required to enter a subshell) and with the task having to run in the background.
I wrote a short C program to test the behaviour and found that sa_handler is set to SIG_IGN, which explains the phenomenon. But why exactly is that so?
I haven't found any information on whether it is an intended feature (though considering the length of the manual I may have simply missed it) and, if so, what the reason behind it was.
I include the C code for those interested:
#include <stdlib.h>
#include <stdio.h>
#include <signal.h>

int main() {
    struct sigaction oldact;

    if (sigaction(SIGINT, NULL, &oldact) != 0) {
        printf("Error in sigaction\n");
        exit(1);
    }
    if (oldact.sa_flags & SA_SIGINFO) {
        printf("Using sa_sigaction\n");
    } else {
        if (oldact.sa_handler == SIG_DFL) {
            printf("Default action\n");
        } else if (oldact.sa_handler == SIG_IGN) {
            printf("Ignore signal\n");
        } else {
            printf("Other action\n");
        }
    }
    return 0;
}
EDIT:
pilcrow's answer is great and I accepted it. I wanted to add, as to why POSIX says so: according to signal(7), both SIGINT and SIGQUIT come from the keyboard, so it makes sense to ignore them in processes detached from one (and not job-controlled by bash).
EDIT2:
Check out Mark Plotnick's comment for the real explanation of why.
This construct, (sleep 1000 &) puts your sleep command in a grandchild subshell with no job control:
your-shell
\
a compound-list in a subshell
\
an asynchronous list in a subshell
The first subshell, ( compound-list ) (a grouping command construct), simply runs the backgrounded command & (an asynchronous list) and then exits. The asynchronous list is run in its own subshell.
That final subshell is too far removed from your initial shell for job control to be meaningful.
Per POSIX, "[i]f job control is disabled ... when the shell executes an asynchronous list, the commands in the list shall inherit from the shell a signal action of ignored (SIG_IGN) for the SIGINT and SIGQUIT signals."
Thus your sleep is run with SIGINT set to ignore.
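The same POSIX rule can be observed from any non-interactive script, where job control is also disabled, so the extra subshell isn't even required. A minimal sketch (assuming a POSIX sh where `kill -0` probes for process existence):

```shell
#!/bin/sh
# In a non-interactive shell, job control is off, so a background job
# inherits SIGINT set to SIG_IGN (per POSIX) and survives the signal.
sleep 5 &
pid=$!
kill -INT "$pid"            # SIGINT is ignored by the background job
sleep 1                     # give the kernel time to deliver the signal
if kill -0 "$pid" 2>/dev/null; then
    echo "sleep survived SIGINT"
else
    echo "sleep was killed"
fi
kill "$pid" 2>/dev/null     # SIGTERM is not ignored; clean up
```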

How to avoid buffering on stdin and stdout?

When reading from std::io::stdin(), input is buffered until EOF is encountered. I'd like to process lines as they arrive, rather than waiting for everything to be buffered.
Given a shell function bar that runs echo bar every second forever, I'm testing this with bar | thingy. It won't print anything until I ^C.
Here's what I'm currently working with:
use std::io;
use std::io::timer;
use std::time::Duration;

fn main() {
    let mut reader = io::stdin();
    let interval = Duration::milliseconds(1000);
    loop {
        match reader.read_line() {
            Ok(l) => print!("{}", l),
            Err(_) => timer::sleep(interval),
        }
    }
}
When reading from std::io::stdin(), input is buffered until EOF is encountered
Why do you say this? Your code appears to work as you want. If I compile it and run it:
$ ./i
hello
hello
goodbye
goodbye
yeah!
yeah!
The first of each pair of lines is me typing into the terminal and hitting enter (which is what read_line looks for). The second is what your program outputs.
Err(_) => timer::sleep(interval)
This is a bad idea - when input is closed (such as by using ^D), your program does not terminate.
Edit
I created a script bar:
#!/bin/bash
set -eu
i=0
while true; do
    echo $i
    sleep 1
done
And then ran it:
./bar | ./i
0
0
0
Your program continues to work.
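The line-at-a-time behavior demonstrated above is the same thing a shell while read loop gives you over a pipe: the consumer sees each line as soon as the producer writes it, without waiting for EOF. A shell sketch of the bar | thingy pipeline:

```shell
#!/bin/sh
# Producer writes a line, pauses, writes another; the consumer's
# `read` returns once per newline, not at end-of-file.
producer() {
    echo "first"
    sleep 1
    echo "second"
}

producer | while IFS= read -r line; do
    echo "got: $line"
done
```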

Perl 5.8: possible to get any return code from backticks when SIGCHLD in use

When a CHLD signal handler is used in Perl, even uses of system and backticks will send the CHLD signal. But for the system and backticks sub-processes, neither wait nor waitpid seem to set $? within the signal handler on SuSE 11 linux. Is there any way to determine the return code of a backtick command when a CHLD signal handler is active?
Why do I want this? Because I want to fork(?) and start a medium-length command, then call a Perl package that takes a long time to produce an answer (and which executes external commands with backticks and checks their return code in $?), and know when my command is finished so I can take action, such as starting a second command. (Suggestions for how to accomplish this without using SIGCHLD are also welcome.) Since the signal handler destroys the backtick $? value, that package fails.
Example:
use warnings;
use strict;
use POSIX ":sys_wait_h";

sub reaper {
    my $signame = shift @_;
    while (1) {
        my $pid = waitpid(-1, WNOHANG);
        last if $pid <= 0;
        my $rc = $?;
        print "wait()=$pid, rc=$rc\n";
    }
}
$SIG{CHLD} = \&reaper;

# system can be made to work by not using $?, instead using system's return value
my $rc = system("echo hello 1");
print "hello \$?=$?\n";
print "hello rc=$rc\n";

# But backticks, for when you need the output, cannot be made to work??
my @IO = `echo hello 2`;
print "hello \$?=$?\n";

exit 0;
Yields a -1 return code in all places I might try to access it:
hello 1
wait()=-1, rc=-1
hello $?=-1
hello rc=0
wait()=-1, rc=-1
hello $?=-1
So I cannot find anywhere to access the backticks return value.
This same issue has been bugging me for a few days now. I believe there are 2 solutions required depending on where you have your backticks.
If you have your backticks inside the child code:
The solution was to put the line below inside the child fork. I think the statement above, "if I completely turn off the CHLD handler around the backticks then I might not get the signal if the child ends", is incorrect: you will still get a callback in the parent when the child exits, because the signal is only disabled inside the child. It's just that the child doesn't get a signal when the child's child (the part in backticks) exits.
local $SIG{'CHLD'} = 'DEFAULT';
I'm no Perl expert. I have read that you should set the CHLD signal to the string 'IGNORE', but this did not work in my case. In fact, I believe it may have been causing the problem. Leaving it out completely also appears to solve the problem, which I guess is the same as setting it to 'DEFAULT'.
If you have backticks inside the parent code:
Add this line to your reaper function:
local ($!, $?);
What is happening is that the reaper is called when the code inside the backticks completes, and the reaper overwrites $?. By declaring $? local inside the handler, the global $? is left intact for the code that ran the backticks.
So, building on MikeKull's answer, here is a working example where the fork'd child uses backticks and still gets the proper return code. This example is a better representation of what I was doing, while the original example did not use forks and could not convey the entire issue.
use warnings;
use strict;
use POSIX ":sys_wait_h";

# simple child script which exits with code 5
open F, ">", "exit5.sh" or die "$!";
print F <<EOF;
#!/bin/bash
echo exit5 pid=\$\$
exit 5
EOF
close F;
chmod 0755, "exit5.sh";    # make it executable so ./exit5.sh can run

sub reaper {
    my $signame = shift @_;
    while (1) {
        my $pid = waitpid(-1, WNOHANG);
        print "no child waiting\n" if $pid < 0;
        last if $pid <= 0;
        my $rc = $? >> 8;
        print "wait()=$pid, rc=$rc\n";
    }
}
$SIG{CHLD} = \&reaper;

if (!fork) {
    print "child pid=$$\n";
    { local $SIG{CHLD} = 'DEFAULT'; print `./exit5.sh`; }
    print "\$?=" . ($? >> 8) . "\n";
    exit 3;
}

# SIGCHLD will interrupt sleep, so do several
sleep 2; sleep 2; sleep 2;
exit 0;
The output is:
child pid=32307
exit5 pid=32308
$?=5
wait()=32307, rc=3
no child waiting
So the expected return code 5 was received in the child when the parent's reaper was disabled before calling the child, but as indicated by ikegami the parent still gets the CHLD signal and a proper return code when the child exits.
