I'm trying to pipe a Node.js eval to an ssh2 stream, and I'm running into some interesting issues, mainly in how either ssh2 or the ssh client interprets the data. Let's take a look.
When you run the node process, you get this nice prompt.
This is how my code looks,
where sshStream is an ssh2 stream. Now, because I'm piping the stream, this is how it looks on the other side:
Basically, the cursor keeps moving down instead of staying where it was. If I type s, it doesn't get fixed and the terminal gets even more messed up. Does anyone know the cause? How should I fix this?
fixNewLines is basically a pipe that replaces '\n' with '\r\n' (apparently that matters for the ssh protocol; otherwise you don't get the desired behaviour).
This happens because processes behave differently when they're attached to a PTY than when they're called directly. Since there is no PTY here, you see that behaviour. You would have to allocate your own pseudo-PTY in order to receive and interpret the characters correctly, as if you were on a real shell.
I am using Python 3.8.10 and fabric 2.7.0.
I have a Connection to a remote host. I am executing a command such as follows:
resObj = connection.run("cat /usr/bin/binaryFile")
So in theory the bytes of /usr/bin/binaryFile are getting pumped into stdout, but I cannot figure out what wizardry is required to get them out of resObj.stdout and written into a local file with a matching checksum (as in, get all the bytes out of stdout). For starters, len(resObj.stdout) != binaryFile.size. Visually glancing at what is present in resObj.stdout and comparing it to /usr/bin/binaryFile via hexdump or similar makes them look roughly similar, but something is going wrong.
May the record show, I am aware that this particular example would be better accomplished with...
connection.get('/usr/bin/binaryFile')
The point though is that I'd like to be able to get arbitrary binary data out of stdout.
Any help would be greatly appreciated!!!
I eventually gave up on doing this with the fabric library and reverted to straight-up Paramiko. People give Paramiko a hard time for being "too low level", but the truth is that it offers a higher-level API which is pretty intuitive to use. I ended up with something like this:
from paramiko import SSHClient, AutoAddPolicy

with SSHClient() as client:
    client.set_missing_host_key_policy(AutoAddPolicy())
    client.connect(hostname, **connectKwargs)
    stdin, stdout, stderr = client.exec_command("cat /usr/bin/binaryFile")
In this setup, I can get the raw bytes via stdout.read() (or similarly, stderr.read()).
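For completeness, a minimal sketch of pulling those bytes into a local file and checking them (the local filename and the md5 comparison are just illustrative):

import hashlib

# stdin, stdout, stderr from the exec_command above
data = stdout.read()                       # raw bytes of the remote file
with open("binaryFile.local", "wb") as f:
    f.write(data)

# compare against md5sum /usr/bin/binaryFile run on the remote host
print(len(data), hashlib.md5(data).hexdigest())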
To do other things that fabric exposes, like put and get, it is easy enough to do:
# client from above
with client.open_sftp() as sftpClient:
    sftpClient.put(...)
    sftpClient.get(...)
I was also able to get the exit code, per this SO answer, by doing:
# stdout from above
stdout.channel.recv_exit_status()
The docs for recv_exit_status list a few gotchas that are worth being aware of too: https://docs.paramiko.org/en/latest/api/channel.html#paramiko.channel.Channel.recv_exit_status
The moral of the story for me is that fabric ends up feeling like an over-abstraction, while Paramiko has an easy-to-use higher-level API as well as the low-level primitives when appropriate.
I have a small issue with my code, running in Python 3. I'm trying to fool Raspbian into believing a tty is an external device.
However, I can't read a single word I wrote previously with os.write(slave, text.encode()), using something like os.read(slave, 512).
I open the tty as follows: master, slave = os.openpty()
I think I'm missing a parameter or something, but I can't find out what.
I tried accessing the tty in another terminal, with a cat <, and with a subprocess, but the program still blocks when it has to read.
Please explain what the problem is.
Regards.
I think your mistake here is that you are trying to read the slave. If you read the master instead you should get your output.
Quote from: http://www.rkoucha.fr/tech_corner/pty_pdip.html
A pseudo-terminal is a pair of character mode devices also called pty. One is master and the other is slave and they are connected with a bidirectional channel. Any data written on the slave side is forwarded to the output of the master side. Conversely, any data written on the master side is forwarded to the output of the slave side as depicted in figure 2.
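A minimal sketch of that data flow, using os.openpty() as in the question (the strings are just illustrative):

import os

master, slave = os.openpty()

# whatever is written to the slave end...
os.write(slave, "hello pty\n".encode())

# ...comes back out of the master end
print(os.read(master, 512))   # typically b'hello pty\r\n': the line discipline maps \n to \r\n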
I'm looking for a way to get resize events on an xterm as an alternative to the WINCH signal. I need a signal for the xterm resize that is remote-compatible, that is, one that could be used over a serial line/telnet/ssh/whatever. The WINCH signal only works for local machine tasks.
I know that vi/curses can do this, because I have tried ssh'ing in and using vi to edit a file, and it responds to resizing of the window.
So after a bit of research, it does seem there is no in-band method for this. I don't think it would be difficult to add. You have two sides to a connection (ssh, serial, or whatever), let's say A and B, where A is on the host and B is the remote. To get behavior equivalent to a local windowed task, I would say at least two things are essential:
Mouse movement/button messaging.
Window resize messaging.
The second can be a two-step process. If the B task gets an alert that the window size has changed, it can use in-band queries to determine what the new size is, using the (perhaps infamous) method of setting the cursor to the end of an impossibly large screen, reading the actual location of the cursor that results, and then restoring the location, all done with standard in-band controls.
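As a rough illustration of that query (a sketch only, and the function name is mine; it assumes the terminal on the other end answers the standard DSR cursor-position report, and real code would want timeouts and error handling):

import os, re, sys, termios, tty

def query_window_size():
    fd_in, fd_out = sys.stdin.fileno(), sys.stdout.fileno()
    old = termios.tcgetattr(fd_in)
    try:
        tty.setraw(fd_in)                    # read the reply byte-by-byte, unbuffered
        os.write(fd_out, b"\x1b7"            # save the cursor position
                         b"\x1b[999;999H"    # try to move far past the bottom-right corner
                         b"\x1b[6n")         # DSR: ask the terminal where the cursor ended up
        reply = b""
        while not reply.endswith(b"R"):      # reply looks like ESC [ rows ; cols R
            reply += os.read(fd_in, 1)
        os.write(fd_out, b"\x1b8")           # put the cursor back
    finally:
        termios.tcsetattr(fd_in, termios.TCSADRAIN, old)
    m = re.search(r"\[(\d+);(\d+)R", reply.decode())
    return int(m.group(1)), int(m.group(2))

Since this is nothing but ordinary bytes travelling down the line, it behaves the same over a serial port, telnet, or ssh.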
Why care about in-band vs. out-of-band? Well, nobody cares now :-), but serial lines used to be quite popular, and in-band signalling, that is, actual escape sequences sent down the line, is going to work anywhere, whereas telnet or ssh "out of band" communication is not.
It would not be difficult to add: a local program that shims ssh or telnet, or even xterm itself, could catch the WINCH message and translate it into an escape alert to the B side. Here the mouse messaging is instructive. The scheme consists of two actions:
The B side must enable such messages, so that the A side won't send unknown escapes to it.
The A side must have an escape to signify WINCH.
Thus a new escape from B to A, and one from A to B.
I'm actually fairly satisfied with WINCH. What I wanted was to have the same capabilities as vi/curses (based on my observation). This kind of support is already far better than Windows, which (as far as I know) implements none of this remote-side support.
Let's take, for example, the "top" application, which displays system information and periodically updates it.
I want to run it using node.js and display that information (and updates!).
Code I've come up with:
#!/usr/bin/env node
var spawn = require('child_process').spawn;
var top = spawn('top', []);
top.stdout.on('readable', function () {
  console.log("readable");
  console.log('stdout: ' + top.stdout.read());
});
It doesn't behave the way I expected. In fact it produces nothing:
readable
stdout: null
readable
stdout:
readable
stdout: null
And then exits (that is also unexpected).
The top application is taken just as an example. The goal is to proxy those updates through node and display them on the screen (the same way as when running top directly from the command line).
My initial goal was to write a script to send a file using scp. I did that, and then noticed that I was missing the progress information which scp itself displays. I looked around at scp node modules and they don't proxy it either. So I backtracked to a common application like top.
top is an interactive console program designed to be run against a live pseudo-terminal.
As to your stdout reads, top is seeing that its stdin is not a tty and is exiting with an error, hence no output on stdout. You can see this happen in the shell: if you do echo | top, it will exit because stdin is not a tty.
Even if it were actually running, though, its output data is going to contain control characters for manipulating a fixed-dimension console (like "move the cursor to the beginning of line 2"). It is an interactive user interface and a poor choice as a programmatic data source. "Screen scraping" and interpreting this data to extract meaningful information is going to be quite difficult and fragile. Have you considered a cleaner approach, such as getting the data you need out of the /proc/meminfo file and the other special files the kernel exposes for this purpose? Ultimately top is getting all this data from readily-available special files and system calls, so you should be able to tap into data sources that are convenient for programmatic access instead of trying to screen-scrape top.
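For example, a few lines of parsing get you the same memory figures top displays (sketched here in Python for brevity; in node it would be an equally plain read of the same file):

# Parse /proc/meminfo into a dict of numeric values, e.g. meminfo["MemFree"]
meminfo = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, value = line.split(":", 1)
        meminfo[key] = int(value.strip().split()[0])   # numeric value; most are in kB

print(meminfo["MemTotal"], meminfo["MemFree"])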
Now of course, top has analytics code to do averages and so forth that you may have to re-implement, so both screen-scraping and going through clean data sources have pros and cons, and aspects that are easy and difficult. But my $0.02 would be to focus on good data sources instead of trying to screen-scrape a console UI.
Other options/resources to consider:
The free command such as free -m
vmstat
and other commands described in this article
the expect program is designed to help automate console programs that expect a terminal
And just to be clear, yes it is certainly possible to run top as a child process, trick it into thinking there's a tty and all the associated environment settings, and get at the data it is writing. It's just extremely complicated and is analogous to trying to get the weather by taking a photo of the weather channel on a TV screen and running optical character recognition on it. Points for style, but there are easier ways. Look into the expect command if you need to research more about tricking console programs into running as subprocesses.
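If you do want to go that route anyway, here is a rough sketch of the trick using Python's standard pty module (the node-pty package plays the same role on the node side); note that what you read back is still full of cursor-movement escapes you would have to interpret:

import fcntl, os, pty, struct, subprocess, termios

# Allocate a pseudo-terminal and give it a size, since top will ask for one.
master, slave = pty.openpty()
fcntl.ioctl(slave, termios.TIOCSWINSZ, struct.pack("HHHH", 24, 80, 0, 0))

# Attach top to the slave end so it believes it is talking to a real terminal.
proc = subprocess.Popen(["top"], stdin=slave, stdout=slave, stderr=slave,
                        env=dict(os.environ, TERM="xterm"))
os.close(slave)

# Everything top draws on its "screen" arrives on the master end, escapes and all.
print(repr(os.read(master, 4096)[:200]))
proc.terminate()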
Let's say I have 10 programs (in terminals) working in tandem: {p1,p2,p3,...,p10}.
It's hard to keep track of all the STDOUT debug statements in their respective terminals. I plan to create a GUI to keep track of each STDOUT such that:
-- Clicking on p1 would "tail" program 1's output.
-- Clicking on p3 would "tail" program 3's output.
It's a decent approach, but are there better ideas out there? It's just overwhelming to have 10 terminals; I'd rather have 1 super terminal that keeps track of this.
And unfortunately, Linux "screen" is not an option. RESTRICTIONS: I only have the ability to either redirect STDOUT to a file or read directly from STDOUT.
If you are looking for a creative alternative, I would suggest that you look at sockets.
If each program writes to the socket (rather than STDOUT), then your master terminal can act as a server and organize the output.
Now, from what you described, it seems as though you are relatively constrained to STDOUT; however, it could be possible to do something like this:
# (use netcat (or nc on some systems) to write to a socket on the provided port)
./prog1 | netcat localhost 12312
I'm not sure if this fits in the requirements of what you are doing (and it might be more effort than it is worth!), but it could provide a very stable solution.
EDIT: As was pointed out in the comments, netcat does exactly what you would need to make this work.
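For the collector side, a minimal sketch of what the "master terminal" server could look like (it assumes the netcat pipeline above on port 12312 and simply tags each line with the connection it came from):

import socketserver

class TailHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One connection per program; prefix each line with where it came from.
        name = "%s:%d" % self.client_address
        for line in self.rfile:
            print("[%s] %s" % (name, line.decode(errors="replace").rstrip()))

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("localhost", 12312), TailHandler) as server:
        server.serve_forever()

Each program then just pipes into netcat as shown above, and all of the output ends up interleaved in one place, tagged by its source.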