How does a Linux Expect script work?

I once tried to supply a password via I/O redirection, like echo <password> | ssh <user>@<host>, but of course it didn't work. I then learned that ssh actually reads the password directly from /dev/tty instead of stdin, so I/O redirection has no effect on it.
As far as I know, an Expect script is the standard way to do this kind of job. I'm curious how Expect works. I guess it runs the target program in a child process and changes the child's /dev/tty to refer to another place, but I don't know how.

It uses something called a pseudo-TTY (pty), which looks to the called program like a real TTY but allows programmatic control. See e.g. Don Libes' Exploring Expect, p. 498ff.
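To make that concrete, here is a minimal sketch of the pseudo-TTY mechanism using Python's pty module (Expect itself is Tcl, but the underlying system calls are the same). A shell's test -t 0 stands in for a program, like ssh, that insists on talking to a terminal:

```python
import os
import pty
import subprocess

# Allocate a pseudo-terminal pair: the child gets the slave end as its
# stdio and believes it is on a real TTY, while we read/write the
# master end programmatically -- which is essentially what Expect does.
master, slave = pty.openpty()

proc = subprocess.Popen(
    ["sh", "-c", "test -t 0 && echo is-a-tty || echo not-a-tty"],
    stdin=slave, stdout=slave, stderr=slave, close_fds=True,
)
os.close(slave)                      # only the child holds the slave now
output = os.read(master, 1024).decode()
proc.wait()
os.close(master)
print(output.strip())                # the child saw its stdin as a TTY
```

For a program that opens /dev/tty explicitly (as ssh does), the child must additionally make the slave its controlling terminal; pty.fork() in the same module takes care of that step.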

Related

Linux cpu-to-cpu inter-process communications

I have written a Linux C program, which runs on an embedded processor, and which behaves like a shell -- either interactively (giving a prompt, parsing user commands, and executing them in an indefinite loop) or non-interactively (reading and parsing a command off the invocation command line). I want to run the same program concurrently on another embedded processor, which is reachable over Ethernet (e.g. via ssh), and use it for some commands because the second processor has access to some hardware that the first processor does not. I usually need to capture and process the output from that remote command. Currently, I invoke the program on the second processor for each command -- e.g.
system("ssh other-cpu \"my_program do this command > /vtmp/out_capt\"");
system("scp other-cpu:/vtmp/out_capt .");
This works, but it is slow. Is there a better way, perhaps using pipes? If someone could point me to their pick of a best way to do this kind of IPC I would appreciate it.
You could get rid of scp and just save the output from ssh on the local machine. Something like this:
ssh other-cpu '( my_program command )' > file.log
Or if you want to run multiple commands:
ssh other-cpu > file.log << EOF
my_program command
my_program other_command
EOF
There are a few ways to do this, with varying speed and complexity. [Naturally :-)], the fastest requires the most setup.
(1) You can replace your two command sequence with an output pipe.
You create a single pipe via pipe(2). You do a fork(2). The child attaches the output fildes of the pipe to stdout, then does an execvp of something like "ssh other-cpu my_program blah". The parent reads the results from the input fildes of the pipe. No need for a temp file.
This is similar to what you're currently doing in that you do an ssh for each remote command you want to execute, but eliminating the temp file and the scp.
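The pipe/fork/exec sequence from step (1) can be sketched like this (Python rather than C, and with a local /bin/echo standing in for the real "ssh other-cpu my_program ..." invocation, but the system calls are the same):

```python
import os

# pipe(2), fork(2), dup2, execvp -- exactly the steps described above.
r, w = os.pipe()
pid = os.fork()
if pid == 0:                             # child
    os.dup2(w, 1)                        # pipe's output fildes -> stdout
    os.close(r)
    os.close(w)
    # stand-in for: os.execvp("ssh", ["ssh", "other-cpu", "my_program", ...])
    os.execvp("echo", ["echo", "remote-output"])
else:                                    # parent
    os.close(w)
    with os.fdopen(r) as f:              # read the results; no temp file
        result = f.read()
    os.waitpid(pid, 0)
    print(result.strip())
```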
(2) You can modify my_program to accept various commands from stdin. I believe you already do this in the program's "shell" mode.
You create two pipes via pipe(2). Again, fork a child. Attach the output fildes of the "from_remote" pipe as before to stdout. But, now, attach the input fildes of the "to_remote" pipe to stdin.
In the parent, using the fildes for output of the "to_remote" pipe, send a command line. The remote reads this line [via the input fildes of the "to_remote" pipe], parses it the way a shell would, and fork/execs the resulting command.
After the child program on the remote has terminated, my_program can output a separator line.
The parent reads the data as before until it sees this separator line.
Now, any time the local wants to do something on the remote, the pipes are already set up. It can just write subsequent commands to the output fildes of the "to_remote" pipe and repeat the process.
Thus, no teardown and recreation required. Only one ssh needs to be set up. This is similar to setting up a server with a socket, but we're using ssh and pipes.
If the local wishes to close the connection, it can close the pipe on its end [or send a (e.g.) "!!!stop" command]
If your remote target commands are text based, the separator is relatively easy (i.e. some string that none of your programs would ever output, like: _jsdfl2_werou_tW__987_).
If you've got raw binary data, my_program may have to filter/encapsulate the data in some way (e.g. similar to what the PPP protocol does with its flag character)
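Here is a sketch of the persistent two-pipe setup from step (2). A plain local sh stands in for "ssh other-cpu my_program" running in shell mode, and the child is asked to echo the separator after each command so the parent knows when the output is complete:

```python
import subprocess

SEP = "_jsdfl2_werou_tW__987_"           # separator line, as suggested above

# One long-lived child with both stdin and stdout piped -- set up once,
# then reused for every command, with no teardown/recreate.
child = subprocess.Popen(["sh"], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, text=True, bufsize=1)

def remote(cmd):
    child.stdin.write(cmd + "\n")
    child.stdin.write("echo " + SEP + "\n")   # mark end of this command
    child.stdin.flush()
    lines = []
    while True:
        line = child.stdout.readline()
        if not line or line.strip() == SEP:
            break
        lines.append(line)
    return "".join(lines)

first = remote("echo hello-from-remote")
second = remote("echo and-again")
child.stdin.close()
child.wait()
print(first.strip(), second.strip())
```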
(3) You can create a version of my_program (e.g. my_program -server) that acts like a server that listens on a socket [in shell mode].
The "protocol" is similar to case (2) above, but may be a bit easier to set up because a network socket is inherently bidirectional (vs. the need for two pipe(2) calls above).
One advantage here is that you're communicating directly over a TCP socket, bypassing the overhead of the encryption layer.
You can either start the remote server at boot time or you can use a [one-time] ssh invocation to kick it off into the background.
There is one additional advantage. Instead of the "separator line" above, the local could make a separate socket connection to the server for each command. This is still slower than above, but faster than creating a new ssh on each invocation.
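A sketch of step (3), one-connection-per-command variant: a hypothetical "my_program -server" listening on a TCP socket, where end-of-output is simply EOF on the connection. The server here merely echoes the command back where the real my_program would parse and execute it:

```python
import socket
import threading

srv = socket.socket()
srv.bind(("127.0.0.1", 0))               # ephemeral port for the sketch
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    # One connection per command: read a line, reply, close (EOF = done).
    while True:
        conn, _ = srv.accept()
        with conn:
            cmd = conn.makefile("r").readline().strip()
            if cmd == "quit":
                return
            conn.sendall(("ran: " + cmd + "\n").encode())

t = threading.Thread(target=serve)
t.start()

def send(cmd):
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall((cmd + "\n").encode())
        c.shutdown(socket.SHUT_WR)       # done sending
        return c.makefile("r").read()    # read until the server closes

reply = send("do this command")
send("quit")
t.join()
srv.close()
print(reply.strip())
```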

Restricting pipes and redirects (Python3)

I have a program that takes standard input from the user and runs through the command line. Is there some way to make a program ignore pipes and redirects?
For example: python program.py < input.txt > output.txt would just act as if you put in python program.py
There is no simple way to find the terminal the user launched you with in the general case. There are some techniques you can use, but they will not always work.
You can use os.isatty() to detect whether a file (such as sys.stdin or sys.stdout) appears to be an interactive terminal session. It is possible you are hooked up to a terminal session other than the one the user used to launch your program, so this is not foolproof. Such a terminal session might even be under the control of a program rather than a human.
Under Unix, processes have a notion of a "controlling terminal." You may be able to talk to that via os.ctermid(). But the user can manipulate this value before launching your process. You also may not have a controlling terminal at all, e.g. if running as a daemon.
You can inspect the parent process and see if any of its file descriptors are hooked up to terminal sessions. Unfortunately, I'm not aware of any cross-platform way to do that. On Linux, I'd start with os.getppid() and the /proc filesystem (see proc(5)). If the parent process has exited (e.g. the user ran your_program.py & disown; exit under bash), this will not work. But in that case, there isn't much you can do anyway.

how to log just the output of commands with expect

I'm using Expect to execute a bunch of commands on a remote machine, and I'm calling the Expect script from a shell script.
I don't want the Expect script to log the sent commands to stdout, but I do want it to log the output of the commands, so my shell script can do other things depending on those results.
log_user 0
hides both the commands and the results, so it doesn't fit my needs. How can I tell Expect to log only the results?
Hmm... I'm not sure you can do that, since the reason for seeing the commands you send is because the remote device echoes them back to you. This is standard procedure, and is done so that a user sees what he or she types when interacting with the device.
What I'm trying to say is that both the device output to issued commands, and the echoed-back commands, are part of the spawned process's stdout, therefore I don't believe you can separate one from the other.
Now that I think of it, I think you can configure a terminal to not display echoed commands... but not sure how you would go about doing that with a spawned process that is not using an interactive terminal.
Let us know if you find a way; I'd be interested to know if there is one.
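For what it's worth, the "configure the terminal not to echo" idea can be done at the termios level: clear the ECHO flag on the slave side of the pseudo-terminal before the process starts. This Python sketch shows the mechanism (cat stands in for the spawned session); whether your Expect version lets you adjust the spawned pty's modes the same way is a separate question:

```python
import os
import pty
import subprocess
import termios

master, slave = pty.openpty()
attrs = termios.tcgetattr(slave)
attrs[3] &= ~termios.ECHO                # lflags: stop the tty echoing input
termios.tcsetattr(slave, termios.TCSANOW, attrs)

proc = subprocess.Popen(["cat"], stdin=slave, stdout=slave, close_fds=True)
os.close(slave)
os.write(master, b"hello\n")             # "send" a command
output = os.read(master, 1024).decode()  # only cat's reply, no echoed input
proc.terminate()
proc.wait()
os.close(master)
print(output.count("hello"))             # 1: the result, not the echo too
```

With ECHO left on, the same read would see "hello" twice: once echoed back by the tty, once written by cat.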

Take user input from the background

What I'm trying to accomplish is to have a process running in the background from a Linux terminal which takes user input and acts on it even when the terminal window is not focused, so I can work with other GUI applications; then when I press some predefined keys, the program's state changes without my current window losing focus. Just as simple as that (not that simple for me, though).
I'm not asking for a specific kind of implementation. I'm fine with anything that may work: C, C++, Java, a Linux Bash script... The only requirement is that it works under Linux.
Thank you very much
Well you can have your server read a FIFO or a unix domain socket (or even a message queue). Then write a client that takes command line input and writes it to the pipe/queue from some other terminal session. With FIFOs you can just echo input from the command line itself to the pipe but FIFOs come with their own headaches. The "push the button and magic happens" is a lot trickier but maybe that was badly phrased?
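The FIFO variant can be sketched in a few lines: a background server loops reading commands from a named pipe, and any other terminal can then poke it with something like echo pause > ctl.fifo. (The "pause" command and the path here are made up for the example.)

```python
import os
import tempfile
import threading

fifo = os.path.join(tempfile.mkdtemp(), "ctl.fifo")
os.mkfifo(fifo)

received = []

def server():
    # Reopen after each writer disconnects; open() blocks until one appears.
    while True:
        with open(fifo) as f:
            for line in f:
                cmd = line.strip()
                if cmd == "stop":
                    return
                received.append(cmd)     # act on the command here

t = threading.Thread(target=server)
t.start()

with open(fifo, "w") as f:               # stand-in for: echo pause > ctl.fifo
    f.write("pause\nstop\n")
t.join()
os.unlink(fifo)
print(received)
```

One of the headaches alluded to above is visible in the loop: a FIFO delivers EOF whenever the last writer closes, so a long-running server has to reopen it rather than read once.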

user command line not using stdin and stdout

My program is invoked by another process, and communicates with it via stdin and stdout. I want to interact with my program via a command line interface, but obviously the usual method of just running it in a terminal doesn't work. I'm looking for the simplest possible way of achieving this.
My program is currently written in Lua, but might become C or something else. Running under GNU/Linux, but something simple that works under Windows as well would be great.
