Using konsole to emulate a terminal through Perl - linux

I have an issue when using this command
system("konsole --new-tab --workdir<dir here> -e perlprogram.pl &");
It opens perlprogram.pl which has:
system("mpg321 song.mp3");
I want to do this because mpg321 stalls the main Perl script, so I thought that by opening it in another terminal window it would be OK. But when I run the first script, all it does is open a new tab and do nothing.
Am I using konsole correctly?

Am I using konsole correctly?
Likely, no. But that depends. This question can be decomposed into two issues:
How do I achieve concurrency, so that my program doesn't halt while I execute an external command?
How do I use konsole?
1. Concurrency
There are multiple ways to do that, ranging from fork||exec('new-program'), to system 'new-program &', or even a piped open.
system invokes the standard shell of your OS when you give it a single string, and executes the command you provided. If you provide multiple arguments, no shell escaping is done, and the specified program is execed directly (the exec function has the same interface so far). system returns a number that indicates whether the command ran correctly:
system("my-command", "arg1") == 0
or die "failed my-command: $?";
See perldoc -f system for the full info on what this return value signifies…
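For instance, a sketch of the full decoding that perldoc describes (mpg321 here is just the example command):
my $rc = system("mpg321", "song.mp3");
if ($rc == -1) {
    warn "failed to execute: $!";
}
elsif ($? & 127) {
    warn sprintf "child died with signal %d", $? & 127;
}
elsif ($? >> 8) {
    warn sprintf "child exited with value %d", $? >> 8;
}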
The exec never returns if successful, but morphs your process into executing the new program.
fork splits your process in two, and executes parent and child as equal copies. They only differ in the return value of fork: the parent gets the PID of the child; the child gets zero. So the following executes a command asynchronously, and lets your main script continue without further delay.
my @command = ("mpg321", "song.mp3");
fork or do {
    # here we are in the child
    local $SIG{CHLD} = 'IGNORE'; # don't pester us with zombies
    # set up environment, especially: silence the child. Skip if program is well-behaved.
    open STDIN,  "<", "/dev/null" or die "Can't redirect STDIN";
    open STDOUT, ">", "/dev/null" or die "Can't redirect STDOUT";
    exec {$command[0]} @command;
    # won't ever be executed on success
    die qq[Couldn't execute "@command"];
};
The above process effectively daemonizes the child (runs without a tty).
2. konsole
The command line interface of that program is awful, and it produces errors half the time when I run it with any parameters.
However, your command (plus a working directory) should actually work. The trailing ampersand isn't necessary, as the konsole command returns immediately. Something like
# because I `say` hello, I can be certain that this actually does something.
konsole --workdir ~/wherever/ --new-tab -e perl -E 'say "hello"; <>'
works fine for me (opens a new tab, displays "hello", and closes when I hit enter). The final readline there keeps the tab open until I close it. You can keep the tab open until after the execution of the -e command via --hold. This allows you to see any error messages that would vanish otherwise.
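Tying the two halves together for the original question, a minimal sketch (the directory and file names are hypothetical):
use strict;
use warnings;

# konsole returns immediately, so the main script keeps running;
# --hold keeps the tab open so mpg321's error messages stay visible
my @cmd = ('konsole', '--workdir', '/home/me/music',   # hypothetical path
           '--new-tab', '--hold', '-e', 'mpg321', 'song.mp3');
system(@cmd) == 0 or warn "konsole failed: $?";
# ... the rest of the main script continues without waiting ...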

Related

how to write a shell command to execute a command in an ALREADY open terminal window

I have a process that depends on the internet, which dies randomly due to a spotty connection.
I am writing a cron script, so that it checks every minute if the process is running, and restarts it...
the process doesn't kill the terminal window it's in.
I don't want to kill the terminal and then spawn a new one.
I want the shell script I'm writing to execute in the window that's already open...
I'm using i3-sensible-terminal right now, but any terminal would do.
if ! ps -a | grep x123abc > /dev/null ; then
    $CMD
fi
I have not yet located the information I need to have that run in a specific terminal.
changing the CMD to include a terminal seems to only open up a new window...
I suggest a different design that separates running your script from observing its output.
Write the script named worker "that depends on the internet, which dies randomly due to a spotty connection" so that it appends ALL its output to the log file /home/$USER/worker.log.
Or just redirect ALL output from the worker script to the log file /home/$USER/worker.log:
worker > /home/$USER/worker.log 2>&1
Run the worker script as a restartable service with a systemd service unit.
Here is a good article explaining this practice: https://dev.to/setevoy/linux-systemd-unit-files-edit-restart-on-failure-and-email-notifications-5h3k
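A minimal sketch of such a unit file (all paths and names here are assumptions, not from the article):
# saved as e.g. /etc/systemd/system/worker.service (hypothetical)
[Unit]
Description=Worker that dies randomly due to a spotty connection
After=network-online.target

[Service]
# hypothetical script location; both streams go to the log
ExecStart=/home/me/bin/worker
StandardOutput=append:/home/me/worker.log
StandardError=append:/home/me/worker.log
# restart automatically whenever it dies
Restart=on-failure
RestartSec=60

[Install]
WantedBy=multi-user.target
Then systemctl daemon-reload followed by systemctl enable --now worker.service starts it and keeps it restarting (the append: targets need a reasonably recent systemd).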
Continue to observe the log file /home/$USER/worker.log with the tail -f command:
tail -f /home/$USER/worker.log

Bash completion sometimes messes up my terminal when the completion function reads a file

So I've been having a problem with some cli programs. Sometimes when I kill the running process with Ctrl+C, it leaves the terminal in a weird state (e.g. echo is turned off). Now that is to be expected for many cases, as killing a process does not give it a chance to restore the terminal's state. But I've discovered that for many other cases, bash completion is the culprit. As an example, try the following:
Start a new bash session as follows: bash --norc to ensure that no completions are loaded.
Define a completion function: _completion_test() { grep -q foo /dev/null; return 1; }.
Define a completion that uses the above function: complete -F _completion_test rlwrap.
Type exactly the following: r l w r a p Space c a t Tab Enter (i.e. rlwrap cat followed by a Tab and then by an Enter).
Wait for a second and then kill the process with Ctrl+C.
The echo of the terminal should now have been turned off: if you type any character, it will not be echoed by the terminal.
What is really weird is that if I remove the seemingly harmless grep -q foo /dev/null from the completion function, everything works correctly. In fact, adding a grep -q foo /dev/null (or even something simpler such as cat /dev/null) to any completion function installed on my system causes the same issue. I have also reproduced the problem with programs that don't use readline, and without Ctrl+C (e.g. find /var followed by a Tab and then | head, with the above completion defined for find).
Why does this happen?
Edit: Just to clarify, the above is a contrived example. In reality, what I am trying to do, is more like this:
_completion_test() {
    if grep -q "$1" /some/file; then
        : # do something
    else
        : # do something else
    fi
}
For a more concrete example, try the following:
_completion_test() {
    if grep -q foo /dev/null; then
        COMPREPLY=(cats)
    else
        return 1
    fi
}
But the mere fact that I am calling grep, causes the problem. I don't see why I can't call grep in this case.
Well, the answer to this is very simple; it's a bug:
This happens when a programmable completion function calls an external command during the execution of a completion function. Bash saves the tty state after every successful job completes, so it can restore it if a job is killed by a signal and leaves the terminal in an undesired state. In this case, we need to suppress that if the job that completes is run during programmable completion, since the terminal settings at that time are as readline sets them for line editing. This fix will be in the release version of bash-4.4.
You're simply implementing the completion function incorrectly. See the manual:
-F function
The shell function function is executed in the current shell environment. When it is executed, $1 is the name of the command whose arguments are being completed, $2 is the word being completed, and $3 is the word preceding the word being completed, as described above (see Programmable Completion). When it finishes, the possible completions are retrieved from the value of the COMPREPLY array variable.
For example, the following implementation:
_completion_test() { COMPREPLY=($(cat /dev/null)); return 1; }
doesn't break the terminal.
Regarding your original question of why your completion function breaks the terminal: I played a little with strace and saw that there are ioctl calls with an -echo argument. I assume that when you terminate it with Ctrl+C, the matching ioctl with the echo argument just isn't called to restore the original state. Typing stty echo will bring the echo back.

bash how to close /dev/tty?

I want my interactive bash to run a program that will ultimately do things like:
echo Error: foobar >/dev/tty
and, in another (Python) component, tries to prompt for and read a password from /dev/tty.
I want such reads and writes to fail, but not block.
Is there some way to close /dev/tty in the parent script and then run the program?
I tried
foo >&/tmp/outfile
which does not work.
What does sort of work is the 'at' command:
at now
at> foobar >&/tmp/outfile
/dev/tty is not open in your parent script. /dev/tty isn't a file descriptor but a path in the filesystem.
A line of script such as:
echo foobar > /dev/tty
opens a new descriptor for itself. To make that fail, we would have to remove /dev/tty, or otherwise make it not work: change the permissions, or replace it with a nonexistent device. Needless to say, these are bad ideas.
If we want to run a script which does some useful things for us, but also does I/O to and from /dev/tty that we don't want (and we cannot change the script), the solutions range from creating an environment for that script in which the controlling terminal is some pseudo-tty (whose master side just throws data away), to doing a chroot to an environment in which /dev/tty is the same device as /dev/null.
Regarding the first option, there are utilities which create a pseudo tty, such as Expect.
For instance:
$ expect -c "spawn ./badscript ; expect"
will run badscript in an environment where its /dev/tty is a pseudo-tty that is connected to the expect interpreter. An echo foo > /dev/tty issued by badscript will still show up on your terminal, but now how it gets to your terminal is that expect reads it from badscript via the pseudo-tty device, and then repeats it; badscript is blocked from writing to your tty directly. Of course, with some scripting in expect's language, you can prevent this.
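For instance, a sketch of an expect script that swallows the output instead of repeating it (badscript as above):
# run badscript on a pseudo-tty and discard what it writes there
log_user 0              ;# don't copy spawned output to our own stdout
spawn ./badscript
expect eof              ;# consume output until the program exits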
/dev/tty refers to the controlling terminal of the process. If by "closing" you mean getting rid of it, you must invoke setsid(), which makes the process controlling-TTY-less; but be aware to always pass O_NOCTTY when opening files afterwards, or else an opened file that happens to be a TTY can become the new controlling TTY.
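A minimal Perl sketch of that advice, assuming the core POSIX and Fcntl modules (the device path is hypothetical):
#!/usr/bin/env perl
# a sketch: detach from the controlling terminal, then open devices safely
use strict;
use warnings;
use POSIX qw(setsid);
use Fcntl qw(O_RDWR O_NOCTTY);

defined(my $pid = fork) or die "fork failed: $!";
exit 0 if $pid;                 # parent exits; the child continues
setsid() or die "Can't start a new session: $!";

# with no controlling terminal, always open tty-like devices with
# O_NOCTTY so they cannot become the new controlling terminal
sysopen my $tty, '/dev/ttyS0', O_RDWR | O_NOCTTY   # hypothetical device
    or die "open failed: $!";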

Run a shell command through Perl in a specific terminal

First off, I am pretty new to Perl so I may be missing something obvious. This is not the typical "I want to run a shell command through Perl" question.
I don't want to capture all of the shell output. I have a program/script that intelligently writes to the terminal. I didn't write it and don't know how it all works, but it seems to move the view to the appropriate place after printing some initialization, then erase previous terminal output and write over it (updates) until it finally completes. I would like to call this from my Perl script rather than printing everything to a file and grabbing it afterwards, since printing to a file does not keep the intelligence of the printout.
All I need to do is:
open an xterm in my perl script
make a system call in that terminal
have that terminal stay up until I manually exit it
Can I do this in perl?
Thanks.
system 'xterm', '-hold', '-e', $program;
where $program is the terminal-aware program you want to run.
-hold causes xterm to stay open after the program exits, waiting for you to close it manually.
-e specifies the program or command line to run. It and its argument must appear last on the xterm command line.
Or try doing it this way, by example:
#!/usr/bin/env perl
use strict; use warnings;
use autodie;
open my $term, '| xterm -hold -e $(</dev/stdin)';
foreach my $dir (qw|/etc /usr /home|) {
    print $term "ls $dir\n"; # run anything you'd like here instead of "ls $dir"
}
close $term;

TCL - open a new terminal, do some operations in the opened terminal and close it

How can I open a new terminal from TCL code, do some operations (e.g. ls -l), get the results of those operations and close that terminal?
Does the exec command open a new terminal, with all the operations invoked in that terminal? Or, when I call for example "cd .." with exec, does that command have nothing to do with the Linux terminal and Linux commands? Are those just pure Tcl commands that have the same names as the standard Linux commands?
Sounds like you want Expect.
Any command you pass to exec will be sent to the system to be executed. exec does not open a terminal window to do this: it does not need to open a GUI window like a terminal just to interact with the underlying system.
A couple of specific notes about your example commands:
parsing the output of ls or ls -l is not recommended. Suppose you have an odd but valid filename like "foo\nbar". You're better off iterating over the results of Tcl's glob command; see the sketch after these notes.
cd happens to be a Tcl command.
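A minimal sketch of the glob approach (the pattern and output format are assumptions):
# list regular files in the current directory, odd names included
foreach f [glob -nocomplain -types f -- *] {
    puts "[file size $f] bytes: $f"
}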
I have done my task with this:
set cvsUpdStr [exec $pathToCvsInYourSystem -qn upd]
It does not open a terminal, but it does the task:
executes a command
stores the result in cvsUpdStr so it can be used later
It is also possible to combine it with catch, to learn whether the command executed correctly and to avoid errors:
if {[catch {exec $pathToCvsInYourSystem -qn upd} result]} {puts $result}
