bash how to close /dev/tty? - linux

I want my interactive bash to run a program that will ultimately do things like:
echo Error: foobar >/dev/tty
and, in another (Python) component, try to prompt for and read a password from /dev/tty.
I want such reads and writes to fail, but not block.
Is there some way to close /dev/tty in the parent script and then run the program?
I tried
foo >&/tmp/outfile
which does not work.
What does sort of work is the 'at' command:
at now
at> foobar >&/tmp/outfile

/dev/tty is not open in your parent script. /dev/tty isn't a file descriptor but a path in the filesystem.
A line of script such as:
echo foobar > /dev/tty
opens a new descriptor for itself. To make that fail, we would have to remove /dev/tty, or otherwise make it not work: change its permissions, or replace it with a nonexistent device. Needless to say, these are bad ideas.
If we want to run a script which does some useful things for us, but also does I/O to and from /dev/tty that we don't want (and we cannot change the script), the solutions range from creating an environment for that script in which the controlling terminal is some pseudo-tty (whose master side just throws data away), to doing a chroot to an environment in which /dev/tty is the same device as /dev/null.
Regarding the first option, there are utilities which create a pseudo tty, such as Expect.
For instance:
$ expect -c "spawn ./badscript ; expect"
will run badscript in an environment where its /dev/tty is a pseudo-tty that is connected to the expect interpreter. An echo foo > /dev/tty issued by badscript will still show up on your terminal, but now how it gets to your terminal is that expect reads it from badscript via the pseudo-tty device, and then repeats it; badscript is blocked from writing to your tty directly. Of course, with some scripting in expect's language, you can prevent this.
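If the goal is that badscript's /dev/tty output should never reach your screen at all, one minimal sketch (assuming Expect is installed; ./badscript is the placeholder name used above) is to turn off spawn logging before running it:
$ expect -c "log_user 0; spawn ./badscript; expect eof"
With log_user 0, whatever badscript writes to its pseudo-tty is still read by expect, but it is simply discarded instead of being echoed to your terminal.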

/dev/tty refers to the controlling terminal of the process. If by "closing" it you mean getting rid of it, you must invoke setsid(), which leaves the process with no controlling TTY; be aware that you must then always open files with O_NOCTTY, or an opened file that happens to be a TTY can become the new controlling TTY.[2]
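At the shell level, and assuming the util-linux setsid(1) utility is available, a minimal sketch of the same idea (reusing the question's foo and /tmp/outfile) is:
# run foo with no controlling terminal
setsid foo </dev/null >/tmp/outfile 2>&1
Any echo ... >/dev/tty inside foo then fails immediately with "No such device or address" instead of blocking, because the process has no controlling terminal for /dev/tty to resolve to.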

Related

Why can't I write to the standard input of my terminal device from another terminal

I have two pts terminals open in my GNOME desktop on Ubuntu.
What I am trying to do is to write something to the terminal /dev/pts/0 using the terminal /dev/pts/1 using redirection like:
##in pts/1
echo date > /dev/pts/0
But in pts/0, "date" is simply printed, and pressing Enter doesn't execute it. So I guessed the command is not going to the standard input of pts/0. I then tried piping the output of echo date to /dev/pts/0, which gave me a permission denied error, so I became root and changed its permissions, but I still couldn't get the date command to run in pts/0.
I am trying these things for learning purposes, so I am really confused about how it all works here and what I should do to get it done.
Writing to a terminal device just prints output on the terminal. If it stuffed the text back into the input buffer, then everything you printed to stdout would loop back into stdin, since they're both connected to the same terminal device.
In order to put data into a pseudo-tty's input buffer, you have to write to its master device. Unfortunately, master devices don't have distinct names in the filesystem on Linux. There's a single /dev/ptmx device: a process opens it to get a master descriptor, and the matching slave appears as /dev/pts/N once the master side has set it up with grantpt()/unlockpt(), before that process spawns the child that uses the slave as its controlling terminal. So there's nothing in the filesystem that you can write to that will feed into the pty's input buffer.
You can do it with these commands (from /dev/pts/1 or another tty):
exec 1>/dev/pts/0
To undo it:
exec 1>/dev/pts/1 # or whatever your original tty actually is
Basically you are redirecting this shell's stdout to the other terminal.
Edited for more details.
"exec" in this case starts a new bash and you can feed this with a new set of environment variables that normally you can not change on the fly. For more details please do "man exec".
"1>/dev/pts/0" here we are saying, "whatever I type on this new bash, write it to this another one, and indeed it will do it, but all the stdout will be displayed at the original tty.
Good luck learning Linux, I hope you enjoy it.

Linux shell wrap a program's stdin and stdout using pipes

So, I have this interactive program that is running on an embedded Linux ARM platform with no screen and that I cannot modify. To interact with it I have to ssh into the embedded Linux distro and run the program, which is some sort of custom command line with builtin commands; it does not exit, only SIGINT will quit the program.
I'm trying to automate it by letting it run in the background and communicating with it using pipes, by sending SSH commands like ssh user@host echo "command" > stdinpipe. This part works. I've been provided with an example like this in a shell script (I cannot use bash, I only have ash installed on the machine):
#!/bin/sh
mkfifo somePipe
/proc/<PID>/exe < somePipe 2>&1 &
I can now easily command the program by writing to the pipe like
echo "command" > somePipe
and it outputs everything inside the terminal. The problem is that while it works if I have an SSH session open, it won't if I only send commands one by one as I said earlier (I'm using paramiko in Python with the exec_command() method, just in case, but I don't think that is relevant; I could use invoke_session() but I don't want to have to deal with recv()).
So I figured I'd redirect the output of the program to a pipe. That's where the problems arise. My first attempt was this one (please ignore the fact that everything is run as root and stored in the root home folder; that's how I got it and I don't have the time to make it cleaner now, plus I'm not the one managing the software):
cd /root/binary
mkfifo outpipe
mkfifo inpipe
./command_bin &
# find PID automatically
command_pid=$(ps -a | egrep ' * \.\/command_bin *' | grep -v grep | awk '{print $1}')
/proc/${command_pid}/exe < inpipe 2>&1 &
echo "process ./command_bin running on PID ${command_pid}"
That alone works within the terminal itself. Now if I leave the SSH session open, open another terminal and type ssh root@host "echo command > /root/binary/inpipe", the code gets executed, but then it outputs the command I just typed and its result into the other terminal that stayed open. So it is obviously not an option; I have to capture the output somehow.
If I change ./command_bin & to ./command_bin >outpipe &, the program never starts, and I have no idea why. I know that because $command_pid is empty and I cannot find the process with ps -A.
Now if instead I replace /proc/${command_pid}/exe < inpipe 2>&1 & with /proc/${command_pid}/exe < inpipe &>outpipe &, the program starts and I can write to inpipe just fine with echo "command" > inpipe once the script has finished running. However, if I try cat < outpipe or tail outpipe, it just hangs and does nothing. I've tried using nohup when starting the command, but it doesn't really help. I've also tried redirecting the output to a normal file instead of a fifo, with exactly the same results.
I've spent the entire day on this thing and I cannot get it to work. Why is this not working? Also, I am probably just using an awful way to do this; is there any other way? The only thing that's mandatory here is that I have to connect through ssh to the board, and the command-line utility has to stay open because it is communicating with onboard devices (using I2C, OneWire protocols, etc.).
To keep it simple: I want to be able to write to the program's stdin whenever I want, and have its stdout go somewhere else (some file, buffer, I do not care) that I can easily retrieve later, after an arbitrary amount of time, with cat, tail or some other command over ssh.
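One sketch of an approach that matches this requirement (written for a BusyBox-style ash; the file name outlog and the 86400-second dummy sleep are illustrative choices, not part of the original setup): use the fifo only for input, keep one long-lived writer attached to it so the program never sees EOF when an ssh command exits, and send the output to a plain file, which can be read at any time without blocking. The hang on cat < outpipe, and the program "never starting" with >outpipe, are likely the same fifo behaviour: opening a fifo blocks until both a reader and a writer are present.
#!/bin/sh
cd /root/binary
mkfifo inpipe                 # fifo used only for the program's stdin
: > outlog                    # plain file for the output; reading it never blocks

# keep a writer attached to the fifo so the program doesn't get EOF
# every time an ssh command that wrote to it exits
sleep 86400 > inpipe &

# stdin from the fifo, stdout+stderr appended to the log file
./command_bin < inpipe >> outlog 2>&1 &
Commands can then be sent and results read back over ssh, e.g. ssh root@host 'echo "command" > /root/binary/inpipe' followed later by ssh root@host 'tail -n 50 /root/binary/outlog'.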

Linux All Output to a File

Is there any way to tell a Linux system to put all output (stdout, stderr) to a file?
Without using redirection, pipes, or modifying how the scripts get called.
Just tell Linux to use a file for output.
for example:
script test1.sh:
#!/bin/bash
echo "Testing 123 "
If I run it like "./test1.sh" (without redirection or a pipe)
I'd like to see "Testing 123" in a file (/tmp/linux_output)
Problem: in the system, a binary makes a call to a script and this script calls many other scripts. It is not possible to modify each call, so if I can make Linux put the output into a file, I can review the logs.
#!/bin/bash
exec >file 2>&1   # from here on, this script's stdout and stderr (and those of its children) go to the file named "file"
echo "Testing 123 "
You can read more about exec in the bash documentation (help exec).
If you are running the program from a terminal, you can use the command script.
It will open up a sub-shell. Do what you need to do.
It will copy all output to the terminal into a file. When you are done, exit the shell. ^D, or exit.
This does not use redirection or pipes.
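A minimal sketch of that workflow (using the /tmp/linux_output path from the question; without an argument, script writes to ./typescript):
script /tmp/linux_output   # opens a sub-shell; everything shown on the terminal is also logged
./test1.sh                 # run whatever you need
exit                       # or ^D; the log stays in /tmp/linux_output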
You could set your terminal's scrollback buffer to a large number of lines and then see all the output from your commands in the buffer; depending on your terminal and its menus, there may also be an option to capture terminal I/O to a file.
Your requirement, taken literally, is impractical, because it is based on a slight misunderstanding. Fundamentally, to get the output to go into a file, you will have to change something to direct it there, which would violate your literal constraint.
But the practical problem is solvable, because unless explicitly counteracted in the child, the output redirections configured in a parent process will be inherited. So you only have to set up the redirection once, using either a shell, or a custom launcher program or intermediary. After that it will be inherited.
So, for example:
cat > test.sh
#!/bin/sh
echo "hello on stdout"
rm nosuchfile
./test2.sh
And a child script for it to call
cat > test2.sh
#!/bin/sh
echo "hello on stdout from script 2"
rm thisfileisnteither
./nonexistantscript.sh
Run the first script redirecting both stdout and stderr (bash version; you can do this in many ways, such as by writing a C program that redirects its outputs and then exec()'s your real program):
./test.sh &> logfile
Now examine the file and see results from stdout and stderr of both parent and child.
cat logfile
hello on stdout
rm: nosuchfile: No such file or directory
hello on stdout from script 2
rm: thisfileisnteither: No such file or directory
./test2.sh: line 4: ./nonexistantscript.sh: No such file or directory
Of course if you really dislike this, you can always modify the kernel - but again, that is changing something (and a very ungainly solution too).

Using konsole to emulate a terminal through Perl

I have an issue when using this command
system("konsole --new-tab --workdir<dir here> -e perlprogram.pl &");
It opens perlprogram.pl which has:
system("mpg321 song.mp3");
I want to do this because mpg321 stalls the main Perl script, so I thought that by opening it in another terminal window it would be OK. But when I run the first script, all it does is open a new tab and do nothing.
Am I using konsole correctly?
Am I using konsole correctly?
Likely, no. But that depends. This question can be decomposed into two issues:
1. How do I achieve concurrency, so that my program doesn't halt while I execute an external command?
2. How do I use konsole?
1. Concurrency
There are multiple ways to do that, ranging from fork||exec('new-program'), to system 'new-program &', or even open.
system will invoke the standard shell of your OS and execute the command you provided. If you provide multiple arguments, no shell escaping is done, and the specified program is exec'd directly. (The exec function has the same interface in this respect.) system returns a number that indicates whether the command ran correctly:
system("my-command", "arg1") == 0
or die "failed my-command: $?";
See perldoc -f system for the full info on what this return value signifies…
The exec never returns if successful, but morphs your process into executing the new program.
fork splits your process in two, and runs the parent and the child as equal copies. They only differ in the return value of fork: the parent gets the PID of the child; the child gets zero. So the following executes a command asynchronously and lets your main script continue without further delay.
my @command = ("mpg321", "song.mp3");
fork or do {
    # here we are in the child
    local $SIG{CHLD} = 'IGNORE'; # don't pester us with zombies
    # set up environment, especially: silence the child. Skip if program is well-behaved.
    open STDIN,  "<", "/dev/null" or die "Can't redirect STDIN";
    open STDOUT, ">", "/dev/null" or die "Can't redirect STDOUT";
    exec {$command[0]} @command;
    # won't ever be executed on success
    die qq[Couldn't execute "@command"];
};
The above process effectively daemonizes the child (runs without a tty).
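For comparison only, and not part of the answer's Perl approach: a rough shell-level sketch of the same "silenced background child" idea, which the original script could also reach via a single system call, might look like:
nohup mpg321 song.mp3 </dev/null >/dev/null 2>&1 &
This backgrounds mpg321 with its standard streams pointed at /dev/null, which is usually enough to keep it from stalling or cluttering the calling script, though unlike the fork/exec version it still goes through the shell.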
2. konsole
The command line interface of that program is awful, and it produces errors half the time when I run it with any parameters.
However, your command (plus a working directory) should actually work. The trailing ampersand isn't necessary, as the konsole command returns immediately. Something like
# because I `say` hello, I can be certain that this actually does something.
konsole --workdir ~/wherever/ --new-tab -e perl -E 'say "hello"; <>'
works fine for me (opens a new tab, displays "hello", and closes when I hit enter). The final readline there keeps the tab open until I close it. You can keep the tab open until after the execution of the -e command via --hold. This allows you to see any error messages that would vanish otherwise.

Linux: using the tee command via ssh

I have written a Fortran program (let's call it program.exe) which does some simulation for me. Via ssh I'm logging into some far-away computers to start runs there, whose results I collect after a few days. To stay up to date on how the program is proceeding, I also want to write the shell output into a text file output.txt (since I can't be logged into the far-away computers all the time). The command should be something like
nohup program.exe | tee output.txt > /dev/null &
This enables me to have a look at output.txt to see the current status even though the program hasn't ended its run yet. The above command works fine on my local machine. I first tried plain redirection with '>', but the problem there was that nothing was written into the text file until the whole program had finished (maybe related to the pipe buffer?). So I used the workaround with 'tee'.
The problem now is that when I log into the computer via ssh (ssh -X user@machine), execute the above command and look at output.txt with the vi editor, nothing appears until the program has finished. If I omit the 'nohup' and '&' I will not even get any shell output until it has finished. My thought was that it might have something to do with data being buffered by ssh, but I'm rather a Linux newbie. I would be very grateful for any ideas or workarounds!
I would use the screen utility (http://www.oreillynet.com/linux/cmd/cmd.csp?path=s/screen) instead of nohup. That way I would be able to put my program into a detached state (^A ^D), reconnect to the host, retrieve my screen session (screen -r), and monitor my output as if I had never logged out.
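A minimal sketch of that workflow (the session name "sim" is just an illustrative choice):
ssh user@machine
screen -S sim                  # start a named screen session on the remote host
program.exe | tee output.txt   # run the simulation; output goes to the screen and the file
# detach with ^A d, log out; days later:
ssh user@machine
screen -r sim                  # reattach and watch the live output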
