For this little script:
package require Tcl 8.4
package require Expect 5.40
spawn gnome-terminal
while {1} {
    puts -nonewline "Enter your name: "
    flush stdout
    set name [gets stdin]
    puts "Hello $name"
}
how can I write to the spawned gnome-terminal so that user input is echoed to both terminals?
You run Expect inside the gnome-terminal, not the other way round. Expect is fundamentally a command-line program, and gnome-terminal is not (it's a graphical terminal emulator). In particular, gnome-terminal ignores its own stdin and stdout entirely; its job is to create those streams for other programs to use. Meanwhile, Expect controls other programs by talking to their stdin and stdout (with some trickery involving extra pseudo-terminals), so the interface Expect uses to drive its subprocesses is exactly the one gnome-terminal ignores from the outside.
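If you want the prompt to appear in a fresh terminal window, the arrangement is the reverse of the script above: start the terminal and run the Expect script inside it. A minimal sketch, assuming a recent gnome-terminal (older versions take -e or -x instead of the -- separator) and a made-up script name ask_name.exp:
#!/bin/sh
# Open a new terminal window and run the Expect script *inside* it,
# so the script's stdin/stdout are that window's tty.
gnome-terminal -- expect ask_name.exp
The ask_name.exp script would then contain just the prompt loop, with no spawn gnome-terminal line.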
Though in this case, why not use Tk to pop up a GUI that asks for the input instead? Rather than putting up a proxy terminal to ask the question, you can ask it directly. This can make for a much richer interface if you desire…
I am writing automation scripts (Perl/Bash). Many of them benefit from a basic terminal GUI. I figured I'd use standard ANSI escape sequences for basic drawing. Before drawing in the terminal I do clear, but doing that I lose some terminal command history. I want to be able to restore the terminal when my program exits. Many terminal programs (e.g. less, man, vim, htop, nmon, whiptail, dialog, etc.) do exactly that: they restore the terminal window, bringing the user back to where he was prior to calling the program, with all the history of commands previously executed.
To be honest, I don't even know where to start searching. Is it a command from the curses library? Is it an ANSI escape sequence? Should I mess with the tty? I am stuck, and any pointers would be really helpful.
EDIT: I'd like to clarify that I am not really asking "how to use the alternative screen". I am looking for a way to preserve terminal command history. One possible answer to my question could be "use alternative screen". The question "what is alternative screen and how to use it" is a different question which in turn already has answers posted elsewhere. Thanks :)
You should use the alternate screen terminal capability. See
Using the "alternate screen" in a bash script
An answer to "how to use the alternate screen":
This example should illustrate:
#!/bin/sh
: <<desc
Shows the top of /etc/passwd on the terminal for 1 second
and then restores the terminal to exactly how it was
desc
tput smcup                            # save previous state
head -n "$(tput lines)" /etc/passwd   # get a screenful of lines
sleep 1
tput rmcup                            # restore previous state
This will only work on a terminal that has the smcup and rmcup capabilities (the Linux console, i.e. a virtual console, does not).
Terminal capabilities can be inspected with infocmp.
On a terminal that doesn't support them, my tput smcup simply returns an exit status of 1 without outputting any escape sequence.
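For example, you might check for the capability before relying on it (a sketch; infocmp -1 prints one capability per line):
# Does the current terminal advertise smcup/rmcup?
infocmp -1 | grep -E 'smcup|rmcup'

# Or rely on tput's exit status, as noted above:
if tput smcup >/dev/null 2>&1; then
    echo "alternate screen supported"
fi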
Note:
If you intend to redirect the output, you might want to write the escape sequences directly to /dev/tty so as to not dirty your stdout with them:
exec 3>&1 #save old stdout
exec 1>/dev/tty #write directly to terminal by default
#...
cat /etc/passwd >&3 #write actual intended output to the original stdout
#...
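Putting the two snippets together, a minimal sketch (again assuming the terminal has smcup/rmcup) might look like:
#!/bin/sh
exec 3>&1            # save the original stdout on fd 3
exec 1>/dev/tty      # plain output now goes straight to the terminal

tput smcup                            # switch to the alternate screen
head -n "$(tput lines)" /etc/passwd   # draw a screenful
sleep 1
tput rmcup                            # restore whatever was on screen before

echo "intended output" >&3   # this is what lands in a file if you redirect
exec 1>&3 3>&-               # put stdout back and close the spare descriptor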
In Ubuntu 13.04 running under VMware, I have two terminals (PID 1000 on /dev/pts/0, PID 2000 on /dev/pts/2).
If I do this from terminal 2 (/dev/pts/2) ...
echo 'ls -al' > /proc/1000/fd/0
... I can see that 'ls -al' shows up in terminal 0 (/dev/pts/0).
However, this is just a visual result, not real command input for terminal 0.
What I want is to redirect actual command input from terminal 2 to terminal 0 via /proc/<terminal 0's PID>/fd/0 and have the command execute in terminal 0.
Is this possible? If it is, how can I do it?
Thank you in advance.
This is not possible, because bash does two things when the <ENTER> keyboard event happens:
1. Printing a newline.
2. Executing the entered command, if the command is complete.
The logic for deciding when a command is complete is not simple; it depends on conditional statements, backslash continuations and so on.
Writing a '\n' character to that stdin only reproduces the first step. I suspect this is impossible by design, because a shell that could be controlled by another shell would be a nightmare for every security engineer.
On a multi-user Linux system, you would be able to write to, and execute commands in, shells being run by other users (e.g. root). You would be able to do nasty things (e.g. frame other users for doing forbidden things).
If you still need a solution:
You could write a script that reads commands from a pipe and executes them under a different user, but beware: this isn't secure.
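A rough sketch of that idea using a named pipe (the path /tmp/cmdpipe is made up for the example, and this is deliberately not secure):
#!/bin/bash
# Run this loop in the "receiving" terminal; from any other terminal you can
# then do:  echo 'ls -al' > /tmp/cmdpipe
pipe=/tmp/cmdpipe
[ -p "$pipe" ] || mkfifo "$pipe"

while true; do
    if IFS= read -r cmd < "$pipe"; then
        printf '>> %s\n' "$cmd"   # show what arrived
        eval "$cmd"               # and execute it here
    fi
done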
There is a difference between a terminal and a shell. When you see a pts window, there are both a terminal (the pts device, backed by a terminal emulator) and a shell (bash) involved. Bash reads lines from the pts device and executes commands, and it writes its stdout/stderr back into the pts device; the programs started by bash do so too. But the pts itself is just a glorified serial terminal: it displays characters written into it, and you (or bash) can read characters typed into it. Usually it also echoes (displays) the typed characters.
When you write into a pts device from another terminal, it displays the characters, but these characters cannot be read from the pts device. You (bash) can only read from the pts what a user types.
The confusing thing is that displaying characters written into the pts device (this is what you tried) and echoing the typed characters look exactly the same.
Edit: This question was originally bash specific. I'd still rather have a bash solution, but if there's a good way to do this in another shell then that would be useful to know as well!
Okay, top-level description of the problem. I would like to be able to add a hook to bash such that, when a user enters, for example, $ cat foo | sort -n | less, this is intercepted and translated into wrapper 'cat foo | sort -n | less'. I've seen ways to run commands before and after each command (using DEBUG traps or PROMPT_COMMAND or similar), but nothing about how to intercept each command and allow it to be handled by another process. Is there a way to do this?
For an explanation of why I'd like to do this, in case people have other suggestions of ways to approach it:
Tools like script let you log everything you do in a terminal to a log (as, to an extent, does bash history). However, they don't do it very well - script mixes input with output into one big string and gets confused with applications such as vi which take over the screen, history only gives you the raw commands being typed in, and neither of them work well if you have commands being entered into multiple terminals at the same time. What I would like to do is capture much richer information - as an example, the command, the time it executed, the time it completed, the exit status, the first few lines of stdin and stdout. I'd also prefer to send this to a listening daemon somewhere which could happily multiplex multiple terminals. The easy way to do this is to pass the command to another program which can exec a shell to handle the command as a subprocess whilst getting handles to stdin, stdout, exit status etc. One could write a shell to do this, but you'd lose much of the functionality already in bash, which would be annoying.
The motivation for this comes from trying to make sense of exploratory-data-analysis-style procedures after the fact. With richer information like this, it would be possible to generate decent reporting on what happened: squashing multiple invocations of one command into one where the first few gave non-zero exit statuses, asking where files came from by searching for everything that touched a file, and so on.
Run this bash script:
#!/bin/bash
while read -e line
do
    wrapper "$line"
done
In its simplest form, wrapper could consist of eval "$line". You mentioned wanting to have timings, so maybe instead have time eval "$line". You wanted to capture the exit status, so this should be followed by the line save=$?. And you wanted to capture the first few lines of stdout, so some redirecting is in order. And so on.
MORE: Jo So suggests that handling for multiple-line bash commands be included. In its simplest form, if eval returns with "syntax error: unexpected end of file", then you want to prompt for another line of input before proceeding. Better yet, to check for proper bash syntax, run bash -n <<<"$line" before you do the eval. If bash -n reports the end-of-file error, then prompt for more input to append to "$line". And so on.
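A sketch pulling those pieces together (wrapper and ~/command.log are made-up names; this only illustrates the eval/time/$?/bash -n ideas, not a full logging daemon):
#!/bin/bash
wrapper() {
    time eval "$1"                          # "time eval" as suggested above
    save=$?                                 # capture the exit status
    echo "exit=$save: $1" >> ~/command.log  # record it somewhere
}

buffer=""
while IFS= read -e -r line; do
    buffer+="$line"$'\n'
    # bash -n only checks syntax; it fails on an incomplete command,
    # in which case we keep reading more lines into the buffer.
    if bash -n <<<"$buffer" 2>/dev/null; then
        wrapper "$buffer"
        buffer=""
    fi
done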
Binfmt_misc comes to mind. The Linux kernel can recognize arbitrary executable file formats and pass them to a user-space application.
You could use this capability to register your wrapper, but instead of handling one particular executable format it would have to handle all executables.
What happens when I type perl and push enter in terminal?
I just did, and nothing happened, but what is going on behind the scenes?
If I type python I enter some Tron-like world, but not when typing perl. Or maybe soon I will be surrounded by centaurs and satyrs; I will update if I am.
The shell you are in interprets the line you've entered (performs substitutions etc.) and then executes the resulting command. To find out what it will execute, try type perl. This will show you whether the shell interprets it as an alias, a shell function, or a direct command somewhere in your $PATH.
In your case, I assume it will execute /usr/bin/perl.
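For example (the exact wording of the output varies):
$ type perl
perl is /usr/bin/perl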
This program will then wait (quite silently) for input. perl isn't as talkative as python here because it doesn't start an interactive prompt by default; it just reads a program from standard input.
But you can then type print 5, press Enter and then Ctrl-d (the last one means "end of file"). Then you probably will see a 5 being printed, and perl will terminate (due to the EOF).
The perl interpreter is expecting a program on standard input:
$ perl
print 11;    # type this, then hit <Ctrl+D>
11           # the program is executed and prints 11
I am fairly new to Perl programming, but I have a fair amount of experience with Linux. Let’s say I have the following code:
while (1) {
    my $text  = <STDIN>;
    my $text1 = <STDIN>;
    my $text2 = <STDIN>;
}
Now, the main question is: Does STDIN in Perl read directly from /dev/stdin on a Linux machine or do I have to pipe /dev/stdin to the Perl script?
If you don't feed anything to the script, it will sit there waiting for you to enter something. When you do, it will be put into $text and then the script will continue to wait for you to enter something. When you do, that will go into $text1. Subsequently, the script will once again wait for you to enter something. Once that is done, the input will go into $text2. Then, the whole thing will repeat indefinitely.
If you invoke the script as
$ script < input
where input is a file, the script will read lines from the file similar to above, then, when the stream runs out, will start assigning undef to each variable for an infinite period of time.
AFAIK, there is no programming language where reading from the predefined STDIN (or stdin) file handle requires you to invoke your program as:
$ script < /dev/stdin
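In other words, all of the following feed the script's standard input in the same way (reusing the script and input names from above):
$ script                  # interactive: each <STDIN> read waits for a typed line
$ script < input          # lines come from the file input
$ cat input | script      # lines come from a pipe
$ script < /dev/stdin     # redundant: stdin is already stdin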
It reads directly from the standard-input file descriptor. If you run that script interactively it will just wait for input; if you pipe data to it, it will read until all the data is consumed and then spin forever, getting undef from each read.
You may want to change that to:
while (my $test = <STDIN>) {
# blah de blah
}
so an EOF will terminate your program.
Perl's STDIN is, by default, just hooked up to whatever the standard input file descriptor is. Beyond that, Perl doesn't really care how or where the data came from. It's the same to Perl if you're reading the output from a pipe, redirecting a file, or typing interactively at the terminal.
If you care about each of those situations and want to handle them differently, then you might check how STDIN is connected; for example, Perl's -t STDIN file test tells you whether standard input is attached to a terminal.