I would like to execute any bash command from Rust. I found Command::new, but I'm unable to execute "complex" commands such as ls ; sleep 1; ls. Moreover, even if I put this in a bash script and execute it, I only get the result at the end of the script (as explained in the process documentation). I would like to get the output as soon as the command prints it (and to be able to write to its input as well), the same way we can do it in bash.
Command::new is indeed the way to go, but it is meant to execute a program. ls ; sleep 1; ls is not a program, it's instructions for some shell. If you want to execute something like that, you would need to ask a shell to interpret that for you:
Command::new("/usr/bin/sh").args(&["-c", "ls ; sleep 1; ls"])
// your complex command is just an argument for the shell
To get the output, there are two ways:
the output method is blocking and returns the outputs and the exit status of the command.
the spawn method is non-blocking and returns a handle to the child process, containing its stdin, stdout and stderr so you can communicate with the child, as well as a wait method to wait for it to exit cleanly. Note that by default the child inherits its parent's file descriptors, and you might want to set up pipes instead:
You should use something like:
use std::io::{BufRead, BufReader, Write};
use std::process::{Command, Stdio};

let mut child = Command::new("/usr/bin/sh")
    .args(&["-c", "ls ; sleep 1 ; ls"])
    .stderr(Stdio::null())  // don't care about stderr
    .stdout(Stdio::piped()) // set up stdout so we can read it
    .stdin(Stdio::piped())  // set up stdin so we can write on it
    .spawn()
    .expect("Could not run the command"); // finally run the command

// write on the child's stdin (this particular command ignores it), then
// read its stdout line by line, as soon as each line is printed
child.stdin.take().unwrap().write_all(b"some input\n").unwrap();
for line in BufReader::new(child.stdout.take().unwrap()).lines() {
    println!("{}", line.unwrap());
}
From what I've learned, also stated in the answer to this thread, redirection of stdout works as follows:
When we do something like: ls > dirlist
bash does the following:
forks a process, which still runs bash
in the subprocess, opens the file dirlist for writing on file descriptor 1
calls exec, passing it the ls executable
This way, when ls writes to FD 1, it actually writes to the file.
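You can reproduce those steps by hand in bash: the parentheses provide the fork, and exec performs both the redirection and the final program call:
( exec > dirlist  # open dirlist for writing on file descriptor 1
  exec ls )       # replace the subshell with the ls executable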
With this in mind, I wonder about the following:
$ foo() { echo "hello" ; }
$ foo > file
$ cat file
hello
As far as I know, functions run in the same shell process, so how does redirection work in that case?
Redirection itself is just a shell construct, so the shell can make it work however it wants. Every command, whether external processes or shell builtins, has its own idea of standard output, and standard output is inherited just as it is by child processes from parent processes. In this case, the command foo either inherits its standard output from the shell or takes whatever file a shell redirection specifies. Once inside the function, echo writes to whatever file it inherits from foo.
Put another way, for its own built-in commands (which include functions and compound statements like while, if, etc.) the shell effectively simulates exec without actually calling exec.
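One way to see that no new process is involved is to compare BASHPID inside and outside the function; a redirected function call prints the same PID (the value below is illustrative):
$ foo() { echo "$BASHPID" ; }
$ echo "$BASHPID"
12345
$ foo > file ; cat file
12345
By contrast, a command substitution like $(foo) would print a different BASHPID, because that does fork.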
I want to prevent a bash command that has been chained using ; from running, while the previous command is still running.
e.g. I write and submit command a; command b, but while command a is running I change my mind and want to prevent command b from running.
I cannot use kill because the subsequent command is not actually executing. Does bash have a queue of commands that can be manipulated?
To clarify, I am sure it is possible to make a new script or something that would allow me to create a queue, but that is not what this question is about. I specifically want to know if bash can prevent commands after a semicolon from running after I've 'submitted' them.
Consider these two scripts:
runner.sh
#!/bin/bash
while true
do
    next_command=$(head -1 next_commands.list)
    sed -i '1d' next_commands.list  # consume the line we just took off the queue
    $next_command
    sleep 60  # added to simulate processing time
done
next_commands.list
id
ls
echo hello
You can modify the content of the next_commands.list file to maintain a queue of the commands to be executed next.
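For example, from another terminal (the commands are illustrative):
echo "command_b" >> next_commands.list  # append a command to the end of the queue
sed -i '1d' next_commands.list          # changed your mind: drop the next queued command
> next_commands.list                    # or clear the queue entirely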
I am trying to call a script from my bash script which needs a Ctrl+C signal to stop. I need to send that Ctrl+C only when I see repeated behavior in the output of the called script, and then continue with the rest of my script.
FLOW of Script A.sh:
1. Environment setup for A.sh
2. Call script B.sh
3. If you see repeated behavior in the output pattern of the called script B.sh, send Ctrl+C
4. Continue with the rest of the script
AFAIK, Ctrl+C sends the SIGINT signal. You should be able to use the pkill command to send an interrupt signal:
pkill -SIGINT B.sh
I won't give you the full code for it (you will remember it better if you produce it yourself), but I'll give you the idea:
1. Do your env setup for a.sh.
2. Run b.sh and pipe both stdout and stderr to awk.
3. In awk, add every line to an associative array and increment a counter per insert.
3.1. Check whether any element in the array has a value greater than one.
3.2. If a value is greater than one, use a system call to do pkill -SIGINT b.sh.
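A minimal sketch of that idea, assuming b.sh can be matched by name with pkill and that any exactly repeated output line counts as "repeated behavior":
./b.sh 2>&1 | awk '{
    seen[$0]++                        # count how often each output line has appeared
    if (seen[$0] > 1) {               # a repeated line: assume the script is looping
        system("pkill -SIGINT b.sh")  # the equivalent of pressing Ctrl+C
        exit
    }
}'
# ...continue with the rest of a.sh here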
I am running a shell script, something like sh script.sh, in bash. The script contains many lines, some of which take seconds and others take days to execute. How can I kill the sh command, but not the command it is currently running (the current line of the script)?
You haven't specified exactly what should happen when you 'kill' your script, but I'm assuming that you'd like the currently executing line to complete and then exit before doing any more work.
This is probably best achieved by coding your script to receive such a kill request and respond in an appropriate way - I don't think there is any magic to do this on Linux.
For example:
You could trap a signal and then set a variable
You could check for the existence of a file (e.g. touch /var/tmp/trigger)
Then after each line in your script, you'd need to check whether the trap had been called (or your trigger file created) and exit if so. If the trigger has not been set, you continue on and do the next piece of work, as in the sketch below.
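A minimal sketch of the trigger-file variant (the long_running_step names are hypothetical placeholders for your script's lines):
#!/bin/bash
stop_requested() {
    [ -e /var/tmp/trigger ]  # set from outside with: touch /var/tmp/trigger
}

long_running_step_1
stop_requested && exit 0
long_running_step_2
stop_requested && exit 0
long_running_step_3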
To the best of my knowledge, you can't trap a SIGKILL (-9) - if someone sends that to your process, then it will die.
The only way I can think of to achieve this is for the parent process to trap the kill signal, set a flag, and then check for this flag before executing each further command in your script. The subprocesses also need to be immune to the kill signal; bash seems to behave differently from ksh in this regard, and the script below seems to work fine.
#!/bin/bash

QUIT=0
trap "QUIT=1; echo 'term'" TERM

function terminated {
    if ((QUIT == 1))
    then
        echo "Terminated"
        exit
    fi
}

function subprocess {
    typeset -i N
    while ((N++ < 3))
    do
        echo $N
        sleep 1
    done
}

while true
do
    subprocess
    terminated
    sleep 3
done
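To try it out, run the script and send it a TERM signal from another terminal (the PID is illustrative):
kill -TERM 12345  # the trap prints 'term'; the current subprocess finishes,
                  # and the script exits at its next 'terminated' check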
I assume you have your script running for days and you don't want to kill it without knowing whether the child it is currently running has finished.
Find the PID of your script using ps, then:
child=$(pgrep -P "$pid")              # PID of the child currently being run
while kill -s 0 "$child" 2>/dev/null  # signal 0 only tests whether the child is still alive
do
    sleep 1
done
kill "$pid"                           # the child has finished; now kill the parent script
I have an issue when using this command:
system("konsole --new-tab --workdir<dir here> -e perlprogram.pl &");
It opens perlprogram.pl which has:
system("mpg321 song.mp3");
I want to do this because mpg321 stalls the main Perl script, so I thought that opening it in another terminal window would be OK. But when I run the first script, all it does is open a new tab and do nothing.
Am I using konsole correctly?
Am I using konsole correctly?
Likely, no. But that depends. This question can be decomposed into two issues:
1. How do I achieve concurrency, so that my program doesn't halt while an external command executes?
2. How do I use konsole?
1. Concurrency
There are multiple ways to do that, from fork || exec('new-program'), through system 'new-program &', to open.
system will invoke the standard shell of your OS and execute the command you provide. If you provide multiple arguments, no shell escaping is done, and the specified program is executed directly (the exec function has the same interface so far). system returns a number that indicates whether the command ran correctly:
system("my-command", "arg1") == 0
or die "failed my-command: $?";
See perldoc -f system for the full info on what this return value signifies…
exec never returns if successful, but morphs your process into executing the new program.
fork splits your process in two; the parent and the child continue as equal copies, differing only in the return value of fork: the parent gets the PID of the child, while the child gets zero. So the following executes a command asynchronously and lets your main script continue without further delay:
my @command = ("mpg321", "song.mp3");
fork or do {
    # here we are in the child
    local $SIG{CHLD} = 'IGNORE';  # don't pester us with zombies
    # set up the environment, especially: silence the child. Skip if the program is well-behaved.
    open STDIN,  "<", "/dev/null" or die "Can't redirect STDIN";
    open STDOUT, ">", "/dev/null" or die "Can't redirect STDOUT";
    exec {$command[0]} @command;
    # won't ever be executed on success
    die qq[Couldn't execute "@command"];
};
The above process effectively daemonizes the child (runs without a tty).
2. konsole
The command line interface of that program is awful, and it produces errors half the time when I run it with any parameters.
However, your command (plus a working directory) should actually work. The trailing ampersand isn't necessary, as the konsole command returns immediately. Something like
# because I `say` hello, I can be certain that this actually does something.
konsole --workdir ~/wherever/ --new-tab -e perl -E 'say "hello"; <>'
works fine for me (opens a new tab, displays "hello", and closes when I hit enter; the final readline is what keeps the tab open until then). You can also keep the tab open after the -e command has finished via --hold, which lets you see any error messages that would otherwise vanish.
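Applied to your case, something like the following should work; the working directory is a placeholder you'll need to substitute, and --hold keeps the tab open so you can see why perlprogram.pl might be failing:
konsole --workdir "$HOME/music" --new-tab --hold -e perl perlprogram.pl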