How can I run and end the Linux script command from Perl?

#!/usr/bin/perl
$sim = "multiq";
`make SCHED=$sim`;     # build with the chosen scheduler
`script > scripter`;   # hangs here: script(1) starts an interactive shell
`echo hi`;
print pack("c", 04);   # attempt to send an EOT (Ctrl-D) to end script
This script hangs when script is called. I'm not sure how to get the Perl script to keep running.

Note that backticks (``) run a command and return its output. If you're going to ignore the output, use system instead, as in
system("make SCHED=$sim") == 0 or die "$0: make exited " . ($? >> 8);
If you want to fire-and-forget a program (that is, start it in the background without worrying about when it completes), you can use
system("script >scripter &");

You'll have to run it all in one child process if you want it all to interact. See perlipc for various ways to handle that, one of which is sketched below.
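A minimal sketch of the pipe-open approach: feed every command to a single shell child so they share one session (the -c option to script here is the util-linux one).

# All commands run in one /bin/sh child, so they can interact.
open(my $sh, '|-', '/bin/sh') or die "$0: cannot fork shell: $!";
print $sh "make SCHED=multiq\n";
print $sh "script -c 'echo hi' scripter\n";   # run one command under script(1)
close($sh) or warn "$0: shell exited with status ", $? >> 8, "\n";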

You might want to look at the Expect module to control an interactive session.
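A minimal sketch with the CPAN Expect module; the prompt regex below is an assumption, so adjust it to whatever shell script(1) starts on your system.

use strict;
use warnings;
use Expect;

my $exp = Expect->spawn('script', 'scripter')
    or die "cannot spawn script: $!";
$exp->expect(10, '-re', '\$\s*$');   # wait for something that looks like a prompt
$exp->send("echo hi\n");
$exp->send("exit\n");                # end the script(1) session cleanly
$exp->soft_close();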

Backticks (``) run a command and return its output, so you can store and manipulate it further. If you want to ignore the output, use system(). And if you want to run a process in the background, append & to the command. For example, to start GIMP from your script, simply say
system("gimp &");

Related

Can I prevent a subsequent chained bash command from running?

I want to prevent a bash command that has been chained using ; from running while the previous command is still running.
e.g. I write and submit command a; command b, but while command a is running I change my mind and want to prevent command b from running.
I cannot use kill because the subsequent command is not actually executing. Does bash have a queue of commands that can be manipulated?
To clarify, I am sure it is possible to make a new script or something that would allow me to create a queue, but that is not what this question is about. I specifically want to know if bash can prevent commands after a semicolon from running after I've 'submitted' them.
Consider these two scripts:
runner.sh
#!/bin/bash
while true
do
    next_command=$(head -1 next_commands.list)   # read the next queued command
    sed -i '1d' next_commands.list               # pop it off the queue
    $next_command
    sleep 60                                     # added to simulate processing time
done
next_commands.list
id
ls
echo hello
You can modify the content of the next_commands.list file to maintain a queue of which commands should be executed next.
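For example (a sketch, assuming GNU sed's -i for in-place editing):

sed -i '1d' next_commands.list              # cancel the next queued command
echo 'echo goodbye' >> next_commands.list   # append a new command to the queue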

Can I react on entered command in bash?

I would like to configure my bash so that I can react to the event of the user entering a command. The moment they press Enter, I would like my bash to first run a script I installed (analogous to PROMPT_COMMAND, which is run each time a prompt is printed). This script should be able to
see what was entered,
maybe change it,
maybe even make the shell ignore it (i.e. make it not execute the line),
decide on whether the text shall be inserted in the history or not,
and maybe similar things.
I have not found a proper way to do this. My current implementations are all flawed: they use things like DEBUG traps to intervene before a command executes, or (HISTTIMEFORMAT='%s '; history 1) to ask the history, after the command execution is complete, when the command was started, etc. (but that is only hindsight, which is not really what I want).
I'd expect something like a COMMAND_INTERCEPTION variable which would work similar to PROMPT_COMMAND but I'm not able to find anything like it.
I also considered using command-line completion to achieve my goal, but I wasn't able to find anything there about reacting to a finished command being submitted; maybe I just didn't find it.
Any help appreciated :)
You can use the DEBUG trap and the extdebug feature, and peek into BASH_COMMAND from the trap handler to see the running command. (Though as noted in comments, the debug trap is sprung on every simple command, not every command line. Also subshells elude it.)
The debug handler can prevent the command from running, but can't change it directly. Though of course you could run any command inside the handler, possibly using BASH_COMMAND and eval to build it, and then tell the shell to ignore the original command (a sketch of that follows the example below).
This would prevent running anything starting with ls:
$ preventls() { case "$BASH_COMMAND" in ls*) echo "no!"; return 1 ;; esac; }
$ shopt -s extdebug
$ trap preventls DEBUG
$ ls -l
no!
Use trap - DEBUG to remove the trap. Tested on Bash 4.3.30.
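Building on that, a sketch of the "change it" case: run a substitute built from BASH_COMMAND via eval, then return 1 so the original command is skipped. Exact behavior may vary across bash versions.

# A sketch: transparently rewrite "ls" into "ls -la".
rewritels() {
    case "$BASH_COMMAND" in
        ls*) eval "${BASH_COMMAND/#ls/ls -la}"   # run the substitute instead
             return 1 ;;                         # ...and skip the original
    esac
}
shopt -s extdebug
trap rewritels DEBUG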

Is there a way to get an output from already running program?

I am trying to write a script that starts rtmpsrv and waits for some output from it. rtmpsrv prints the desired output and continues running, but the script waits for rtmpsrv to terminate. How do I gain access to the output of rtmpsrv without stopping it?
Well, I'm not familiar with rtmpsrv, but ordinarily you would wait for it to finish. However, you can redirect its output to a file, and then grep the file to see if it contains the string you are looking for.
(Fictional code; expect rough syntax, it is just to give you the idea.)
nohup rtmpsrv >log.rtmpsrv 2>&1 &
...
while :; do
    if result=$(grep "your desired line" log.rtmpsrv); then
        echo "success: found $result"
        break
    fi
    sleep 1   # avoid busy-looping the grep
done
Note: the if construct should work as per http://www.tldp.org/LDP/Bash-Beginners-Guide/html/sect_07_01.html; it is just there to have nicer code, as @Charles Duffy suggested.
The simplest way is this:
rtmpsrv > logfile &
Then you can search logfile for the text that you're looking for. Meanwhile, rtmpsrv will do its thing, completely unaware of your script.
This question contains examples of how to make your script wait for a certain pattern to appear in the logfile (so you don't have to search it again and again): Do a tail -F until matching a pattern
Note: If you start the rtmpsrv process in the script and the script terminates, it will probably kill the rtmpsrv process. To avoid that use nohup.
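Putting those pieces together, a minimal sketch (the log line you wait for is an assumption):

nohup rtmpsrv > logfile 2>&1 &

# Block until the pattern appears; grep -q exits on the first match,
# which ends the tail via SIGPIPE on its next write.
tail -F logfile | grep -q "your desired line"
echo "pattern seen; rtmpsrv is still running in the background"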
Just attach to the process using gdb -p <pid>, where <pid> is the process ID of your script.
You can find the PID of your running script with something like ps ax | grep <My Script's Name>
http://etbe.coker.com.au/2008/02/27/redirecting-output-from-a-running-process/

Run a shell command through Perl in a specific terminal

First off, I am pretty new to Perl so I may be missing something obvious. This is not the typical "I want to run a shell command through Perl" question.
I don't want to capture all of the shell output. I have a program/script that intelligently writes to the terminal. I didn't write it and don't know how it all works, but it seems to move the view to the appropriate place after printing some initialization, then erase previous terminal output and write over it (updates) until it finally completes. I would like to call this from my Perl script rather than printing everything to a file and grabbing it afterwards, since printing to a file does not keep the intelligence of the printout.
All I need to do is:
open an xterm in my perl script
make a system call in that terminal
have that terminal stay up until I manually exit it
Can I do this in perl?
Thanks.
system 'xterm', '-hold', '-e', $program;
where $program is the terminal-aware program you want to run.
-hold causes xterm to stay open after the program exits, waiting for you to close it manually.
-e specifies the program or command line to run. It and its argument must appear last on the xterm command line.
Alternatively, try this example:
#!/usr/bin/env perl
use strict; use warnings;
use autodie;

# The shell expands $(</dev/stdin) to everything we print on the pipe
# (this relies on a shell that supports $(<file), e.g. bash), then
# hands it to xterm -e as the command to run.
open my $term, '| xterm -hold -e $(</dev/stdin)';
foreach my $dir (qw|/etc /usr /home|) {
    print $term "ls $dir\n";   # do anything you'd like here instead of "ls $dir"
}
close $term;

What if we close the terminal before the command finishes?

Let me explain better: what is going to happen if I run a command in Linux and close the terminal before it's done? Would the command still complete or not?
Generally, you must expect that closing your terminal will hang up your command. But fear not! Linux has a solution for that too!
To ensure that your command completes, use the nohup command. Simply place it before whatever you are trying to do:
nohup ./some_program
nohup ./do_a_thing -frx -file input_file.txt
nohup grep "something" giant_list_of_files/* > temp_file.txt
The nohup command stands for "no hangup" and it will ensure that the command you execute continues to run, even if you close your terminal.
It depends on the process and your environment (job control shell options, VNC, etc). But typically, no. The process will get a "hangup" signal (message) from the operating system, and upon receiving that, will quit.
The nohup command, for example, arranges for processes to ignore the hangup signal from the OS. There are many ways to achieve the same result.
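For instance, two common alternatives (a sketch; disown is a bash builtin, setsid ships with util-linux):

# Background the job, then make the shell forget it so no SIGHUP is sent:
./some_program &
disown

# Or start it in a new session, detached from the terminal:
setsid ./some_program >/dev/null 2>&1 &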
I would say it will abort at whatever point it has reached just before the session closes.
If you want to be sure the job completes, you will need to use the nohup command.
http://en.wikipedia.org/wiki/Nohup
Read about nohup and daemons (-d)...
A good link is What's the difference between nohup and a daemon?
It's worth looking at the screen command: screen offers the ability to detach a long-running process (or program, or shell script) from a session and then reattach it at a later time.
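A typical screen workflow looks like this (the session name is arbitrary):

screen -S mysession     # start a named session and run your job inside it
# ... press Ctrl-A d to detach; the job keeps running ...
screen -r mysession     # reattach later from any terminal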
