Run bash commands and simulate keystrokes remotely - node.js

In Node.js, I can spawn a process and listen to its stdout using spawn().stdout. I'm trying to create an online terminal which will interact with this shell, but I can't figure out how to send keystrokes to the shell process.
I've tried this:
echo -e "ls\n" > /proc/{bash pid}/fd/0
This doesn't really do anything other than output ls and a line break. And when I run tail -f /proc/{bash pid}/fd/0, I can no longer even send keystrokes to the open bash terminal.
I'm really just messing around trying to understand how the bash process will interpret ENTER keys. I don't know if that is done through stdin or not since line breaks don't work.

I don't believe you can "remote control" an already started normal Bash session in any meaningful way. What you can do is start a new shell which reads from a named pipe; you can then write to that pipe to run commands:
$ cd "$(mktemp --directory)"
$ mkfifo pipe1
$ bash < pipe1 &
$ echo touch foo > pipe1
[1]+ Done bash < pipe1
$ ls
foo pipe1
See "How to write several times to a fifo without having to reopen it?" for more details.
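The key detail from that link: bash exits as soon as the last writer closes the pipe, because the read end then sees EOF. One way around this, sketched here on a fresh pipe (pipe2 is just an illustrative name), is to hold a write end open yourself:
$ mkfifo pipe2
$ bash < pipe2 &
$ exec 3> pipe2        # hold a write end open so bash never sees EOF
$ echo touch bar >&3   # bash runs this and keeps waiting for more commands
$ echo touch baz >&3
$ exec 3>&-            # closing fd 3 finally delivers EOF; the background bash exits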

Related

Record Bash Interaction, saving STDIN, STDOUT separately

So I want to record my bash interaction, which I know I can do with script or ttyrec. Except I want one feature more than they have: save input (i.e. STDIN) and output (i.e. STDOUT) separately.
So something like (where I typed the first "Hello World!"), except of course script takes one [file] arg, not two:
user@pc:~$ script input.txt output.txt
Script started
user@pc:~$ paste > textfile.txt
Hello World!
user@pc:~$ cat textfile.txt
Hello World!
user@pc:~$ exit
Script done
So input.txt looks like:
user@pc:~$ paste > textfile.txt
Hello World!
user@pc:~$ cat textfile.txt
user@pc:~$ exit
And output.txt looks like:
Hello World!
exit
So I want a program like script which saves STDIN & STDOUT separately. Since currently, this would be the normal output of script (which I do not want, and need separated):
Script started
user@pc:~$ paste > textfile.txt
Hello World!
user@pc:~$ cat textfile.txt
Hello World!
user@pc:~$ exit
exit
Script done
Does this exist, or is this possible?
Note the usage of the paste command, since I had considered filtering the output file based on user@pc:~$, but in my case (as with paste) this would not work.
empty
empty is packaged for various Linux distributions (it is empty-expect on Ubuntu).
open two terminals
terminal 1: run empty -f -i in.fifo -o out.fifo bash
terminal 1: run tee stdout.log <out.fifo
terminal 2: run stty -icanon -isig eol \001; tee stdin.log >in.fifo
type commands into terminal 2, watch the output in terminal 1
fix terminal settings with stty icanon isig -echo
log stderr separately from stdout with exec 2>stderr.log
when finished, exit the bash shell; both tee commands will quit
stdout.log and stdin.log contain the logs
Some other options:
peekfd
You could try peekfd (part of the psmisc package). It probably needs to be run as root:
peekfd -c pid fd fd ... > logfile
where pid is the process to attach to, -c says to attach to children too, and the fds are the file descriptors to watch (basically 0, 1, and 2). There are various other options to tweak the output.
The logfile will need postprocessing to match your requirements.
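For example, to watch the standard file descriptors of a shell whose PID is 1234 (an illustrative PID, not from the question):
sudo peekfd -c 1234 0 1 2 > logfile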
SystemTap and similar
Over on Unix & Linux Stack Exchange, use of the SystemTap tool has been proposed.
However, it is not trivial to configure and you'll still have to write a module that separates stdin and stdout.
sysdig and bpftrace also look interesting.
LD_PRELOAD / strace / ltrace
Using LD_PRELOAD, you can wrap low-level calls such as write(2).
You can run your shell under strace or ltrace and record data passed to system and library functions (such as write). Lots of postprocessing needed:
ltrace -f -o ltrace.log -s 10000000000 -e write bash
patch ttyrec
ttyrec.c is only about 500 lines of fairly straightforward code and looks like it would be easy to patch to use multiple logfiles.

Why does running gradlew in a non-interactive bash session close the session's stdin?

I've noticed this strange issue of scripts exiting early, yet successfully, in a CI system when using gradlew. The following steps outline it:
Create a file called script with the contents:
./gradlew
echo DONE
Get a random gradlew from somewhere
Run cat script | bash
Notice that DONE never appears
AFAICT, running bash non-interactively causes the exec java blah at the end of gradlew to somehow let java close stdin, so the echo DONE is never read from the script being fed in via stdin from cat. Supporting facts:
Changing the script to ./gradlew; echo DONE will print DONE
Replacing ./gradlew with ./gradlew < /dev/null will print DONE
If you have an exec something somewhere (within gradlew in your case), you are replacing the current process image (bash) with something else (java).
From help exec:
exec [-cl] [-a name] [command [arguments ...]] [redirection ...]
Replace the shell with the given command.
So the problem is not that stdin is getting closed; what is happening is that the new process (java) will be the one reading that input ("echo DONE"), probably doing nothing with it.
Explanation with example
Consider this script.sh:
#!/bin/bash
echo Hello
exec cat
echo World
If you execute it providing some input for cat:
$ ./script.sh <<< "Nice"
Hello
Nice
You might expect the word World to be printed on the screen as well... WRONG!
Here World never appears, because nothing after the exec command is executed.
Now, if you pipe the script to bash:
$ cat script.sh | bash
Hello <- bash interpreted "echo Hello" and printed Hello
echo World <- cat read "echo World" and printed it (no interpretation occurred)
Here you can clearly see the process image replacement in action.
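Given that, the workaround the question already found is exactly right: redirect gradlew's stdin so that the exec'd java reads /dev/null instead of the remainder of the piped-in script. The script becomes:
./gradlew < /dev/null   # java inherits /dev/null as stdin, not the rest of this script
echo DONE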

Linux server: How do I use nohup and make sure the job is working?

What I know and what I've tried: I have a script in R (called GAM.R) that I want to run in the background that outputs .rdata, .pdf, and .jpg files. Running this from the command line is relatively simple:
$ Rscript GAM.R
However, this code takes a very long time to run so I would love to send it to the background and let it work even after I have logged out and turned the computer off. I understand this is pretty easy, as well, and my code would look like this:
$ nohup Rscript GAM.R >/dev/null 2>&1 &
I used this to see if it was working:
$ fg
nohup Rscript GAM.R > /dev/null 2>&1
The problem: I don't know how to check if the code is working (is there a way I can see its progress?) and I don't know where the outputs are going. I can see the progress and output with the first code so I must not be too far off. It doesn't seem that the second code's outputs are going where the first code's outputs went.
Your command line is diverting all output to /dev/null, a.k.a. the bit bucket.
Consider diverting it to a temporary file:
$ nohup Rscript GAM.R >/tmp/GAM.R.output 2>&1 &
Then you can tail /tmp/GAM.R.output to see the results; by default tail shows the last 10 lines of the file. You can use tail -f to show the end of the file plus new output in real time.
Note that the /tmp/ filesystem is not guaranteed to persist between reboots. You can put the file somewhere else (like ~/GAM.R.output) if you need to be sure.
Note, however, that if you turn your computer off, then all processing gets aborted. For this to work, your machine must continue to run and not go to sleep or shut down.
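Putting it together, a typical launch-and-check session might look like this (using the log path suggested above):
$ nohup Rscript GAM.R >/tmp/GAM.R.output 2>&1 &
$ jobs -l                     # confirm the job is running and note its PID
$ tail -f /tmp/GAM.R.output   # follow the script's output in real time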
With > you are redirecting the output of your script to /dev/null, and with 2>&1 you are redirecting the error output to the same place. Finally, nohup executes your process and detaches it from the current terminal.
So, to sum up, you are creating a process and redirecting its output and error output to a file called null that is stored under /dev.
To answer your question, I suggest you redirect your outputs to a folder that you can access as a normal user rather than as the superuser. Then, to make sure everything is OK, you can print this file.
For example you can do:
nohup Rscript GAM.R >/home/username/documents/output_file 2>&1 &
and then to see the file from a terminal you can do:
cat /home/username/documents/output_file
Lastly, I don't think your program will keep running if you turn off your PC, and I don't think there is a way to do that.
If you want to run your program in the background and access the output of the program you can easily do that by writing
exec 3< <(Rscript GAM.R)
And then when you wish to check the output of the program run
cat <&3 # or you can use 'cat /dev/fd/3'
Excellent! Thanks everyone for your helpful answers, particularly @Greg Tarsa. Ultimately I needed to use:
$ nohup Rscript GAM.R >/usr/emily/gams/2017_03_14 2>&1 &
The above is used to run the script and save the screen output to emily/gams (called "2017_03_14", a file to be made by the command, not a folder as I had originally thought). This also outputs my .rdata, .pdf, and .jpg output files from the script to usr/emily.
Then I can see progress and running programs using:
$ tail -f 2017_03_14 #shows the last 10 lines of the log and follows new output
$ ps #shows your running projects
$ ps -fu emily #see running projects regardless of session, where username==emily
In the spirit of completeness, I can also note here that to cancel a process, you can use:
$ kill -HUP processid #https://kb.iu.edu/d/adqw

Linux shell wrap a program's stdin and stdout using pipes

So, I have this interactive program running on an embedded Linux ARM platform with no screen, which I cannot modify. To interact with it I have to ssh into the embedded Linux distro and run the program, which is some sort of custom command line with builtin commands; it does not exit, and only SIGINT will quit it.
I'm trying to automate it by letting it run in the background and communicating with it using pipes, sending SSH commands like ssh user@host 'echo "command" > stdinpipe'. This part works; I've been provided with an example like this in a shell script (I cannot use bash, I only have ash installed on the machine):
#!/bin/sh
mkfifo somePipe
/proc/<PID>/exe < somePipe 2>&1 &
I can now easily command the program by writing to the pipe like
echo "command" > somePipe
and it outputs everything inside the terminal. The problem is that while it works if I have an SSH session open, it won't if I only send commands one by one as I said earlier. (I'm using paramiko in Python with the exec_command() method, just in case, but I don't think that is relevant; I could use invoke_session() but I don't want to have to deal with recv().)
So I figured I'd redirect the output of the program to a pipe. That's where problems arise. My first attempt was this one (please ignore the fact that everything is run as root and stored in the root home folder; that's how I got it and I don't have time to make it cleaner now, plus I'm not the one managing the software):
cd /root/binary
mkfifo outpipe
mkfifo inpipe
./command_bin &
# find PID automatically
command_pid=$(ps -a | egrep ' * \.\/command_bin *' | grep -v grep | awk '{print $1}')
/proc/${command_pid}/exe < inpipe 2>&1 &
echo "process ./command_bin running on PID ${command_pid}"
That alone works within the terminal itself. Now if I leave the SSH session open, open another terminal, and type ssh root@host "echo command > /root/binary/inpipe", the code gets executed, but then it outputs the command I just typed and its result into the other terminal that stayed open. So it is obviously not an option; I have to capture the output somehow.
If I change ./command_bin & to ./command_bin >outpipe &, the program never starts. I have no idea why; I know that because $command_pid is empty and I cannot find the process with ps -A.
Now if instead I replace /proc/${command_pid}/exe < inpipe 2>&1 & with /proc/${command_pid}/exe < inpipe &>outpipe &, the program starts and I can write to inpipe just fine with echo "command" > inpipe once the script has finished running. However, if I try cat < outpipe or tail outpipe, it just hangs and does nothing. I've tried using nohup when starting the command, but it doesn't really help. I've also tried using a normal file for redirecting the output instead of a fifo, with the exact same results.
I've spent the entire day on this thing and I cannot get it to work. Why is this not working? Also, I am probably just using an awful way to do this; is there any other way? The only thing that's mandatory here is that I have to connect through ssh to the board, and the command line utility has to stay open because it is communicating with onboard devices (using I2C, OneWire protocols, etc.).
To keep it simple I want to be able to write to the program's stdin whenever I want, get its stdout to go somewhere else (some file, buffer, I do not care) that I can easily retrieve later after an arbitrary amount of time with cat, tail or some other command with ssh.
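Two FIFO behaviours are worth knowing here: opening a named pipe for writing blocks until a reader opens the other end (which can make a program look like it never starts), and a reader sees EOF as soon as its last writer closes. A minimal sketch that sidesteps both by logging to a regular file and keeping one long-lived writer on the input pipe (names reused from the question; an untested sketch, not verified on the actual board):
cd /root/binary
mkfifo inpipe
./command_bin < inpipe > output.log 2>&1 &
# keep one writer attached so the program never sees EOF on stdin,
# even between one-shot ssh commands
tail -f /dev/null > inpipe &
echo "command" > inpipe   # e.g. via: ssh root@host "echo command > /root/binary/inpipe"
tail -f output.log        # retrieve the captured output at any later time
One caveat: many programs switch to block-buffered stdout when it is not a terminal, so output.log may fill in bursts rather than line by line.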

process exits after getting command from pipe

On Ubuntu, I start a command line program (gnu backgammon) and let it get its commands from a pipe (commandpipe), like so
$ gnubg -t < commandpipe
from another terminal, I do
$ echo "new game" > commandpipe
This works fine: a new game is started, but after the program has finished processing that command, the process exits.
How can I prevent the backgammon process from exiting? I would like to continue sending commands to it via the commandpipe.
This is only because you used echo, which quits immediately after echoing, and when a program quits its file descriptors are closed. (OK, echo is not an actual program in bash, it's a builtin, but I don't think this matters.) If you wrote an actual interactive program, e.g. with a GUI (and remember, Stack Overflow is for programming questions, not Unix questions, which belong over there) and redirected its stdout to the named pipe, you would not have this problem.
The reader gets EOF once the writer closes the pipe, so you need a loop, like this:
$ while true; do cat myfifo; done | ts
jan 05 23:01:56 a
jan 05 23:01:58 b
And in another terminal:
$ echo a > myfifo
$ echo b > myfifo
Substitute ts with gnubg -t.
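Applied to the question, that becomes:
$ while true; do cat commandpipe; done | gnubg -t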
The problem is that the write file descriptor is closed, and when the last write file descriptor on a pipe is closed, the reading process sees end-of-file.
As a quick hack, you can do this:
cat < z # read process in one terminal
cat > z & # Keep write File Descriptor open. Put in background (or run in new terminal)
echo hi > z # This will close the FD, but not signal the end of input
But you should really be writing in a real programming language where you can control your file descriptors.
To avoid EOF, you could use tail:
tail -f commandpipe | gnubg -t
