I am using Python 3.8.10 and fabric 2.7.0.
I have a Connection to a remote host. I am executing a command such as follows:
resObj = connection.run("cat /usr/bin/binaryFile")
So in theory the bytes of /usr/bin/binaryFile are getting pumped into stdout, but I cannot figure out what wizardry is required to get them out of resObj.stdout and written into a local file with a matching checksum (as in, getting all the bytes out of stdout). For starters, len(resObj.stdout) != binaryFile.size. Visually comparing what is present in resObj.stdout to what is in /usr/bin/binaryFile via hexdump or similar makes them look roughly similar, but something is clearly going wrong.
May the record show, I am aware that this particular example would be better accomplished with...
connection.get('/usr/bin/binaryFile')
The point though is that I'd like to be able to get arbitrary binary data out of stdout.
Any help would be greatly appreciated!!!
I eventually gave up on doing this using the fabric library and reverted to straight up paramiko. People give paramiko a hard time for being "too low level" but the truth is that it offers a higher level API which is pretty intuitive to use. I ended up with something like this:
from paramiko import SSHClient, AutoAddPolicy

with SSHClient() as client:
    client.set_missing_host_key_policy(AutoAddPolicy())
    client.connect(hostname, **connectKwargs)
    stdin, stdout, stderr = client.exec_command("cat /usr/bin/binaryFile")
In this setup, I can get the raw bytes via stdout.read() (or similarly, stderr.read()).
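For example, a minimal sketch of writing those bytes out locally and checking them (the local filename and the sha256 check are just for illustration):

import hashlib

# Inside the with-block above: stdout comes from exec_command, and read() returns bytes.
data = stdout.read()

with open("binaryFile.local", "wb") as f:   # hypothetical local path
    f.write(data)

# Compare against e.g. `sha256sum /usr/bin/binaryFile` run on the remote host.
print(hashlib.sha256(data).hexdigest())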
To do other things that fabric exposes, like put and get, it is easy enough to do:
# client from above
with client.open_sftp() as sftpClient:
    sftpClient.put(...)
    sftpClient.get(...)
I was also able to get the exit code, per this SO answer, by doing:
# stdout from above
stdout.channel.recv_exit_status()
The docs for recv_exit_status list a few gotchas that are worth being aware of too: https://docs.paramiko.org/en/latest/api/channel.html#paramiko.channel.Channel.recv_exit_status
The moral of the story for me is that fabric ends up feeling like an over-abstraction, while paramiko has an easy-to-use higher-level API as well as the low-level primitives when appropriate.
I'm more of a HW engineer who's currently trying to use Python at work. What I want to accomplish with Python is to read the CAN-FD output from the DUT and use it for monitoring purposes in the measurement setup. But I don't think I'm getting the correct result, because it shows the same message (ID) even though there are many more on the bus. Based on my understanding of other examples, this should show the stream of all the messages since there are no filters. Is there anyone who can help me solve this issue or who has had a similar experience?
import can

def _get_message(msg):
    return msg

bus = can.interface.Bus(bustype='vector', app_name='app', channel=1, bitrate=500000)
buffer = can.BufferedReader()
can.Notifier(bus, [_get_message, buffer])

while True:
    msgs = bus.recv(None)
    print(msgs)
You don't say what you expected to see on your bus, but I assume there are a lot of different messages on it.
Simplify your problem to start with: set up a bus which has only a single message on it at a low rate. You might have to write some code for that (see the sketch below), or you might have some tools which can send periodic messages containing counters, for example. Make sure you can capture that correctly first. Then add a second message at a faster rate.
You will probably learn a selection of useful things from these exercises, which means that when you go back to your full system you will have more success, or at least have more ideas on what to debug :)
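For instance, a minimal python-can sketch of such a sender, putting one message with a counter on the bus every 500 ms (the bus settings mirror the question and are assumptions; adjust them to your hardware):

import time
import can

# Assumed interface settings; adjust bustype/channel/bitrate to your setup.
bus = can.interface.Bus(bustype='vector', app_name='app', channel=1, bitrate=500000)

counter = 0
while True:
    msg = can.Message(arbitration_id=0x123,      # arbitrary test ID
                      data=[counter & 0xFF],
                      is_extended_id=False)
    bus.send(msg)
    counter += 1
    time.sleep(0.5)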
First of all, thanks for the answers and comments. The root cause was an incorrect HW configuration for the interface. I thought that if the HW configuration was wrong in the first place, there would be no output at all, but it turned out that is not the case. After proper configuration, I could see the output stream I expected.
I'm trying to have a node.js eval piped to an ssh2 stream, and I'm having some interesting issues, mainly with how either ssh2 or ssh clients interpret the data. Let's take a look.
When you run the node process, you get this nice prompt:
So this is how my code looks:
where sshStream is an ssh2 stream. Now, because I'm piping the stream, this is how it looks on the other side:
Basically the cursor moves down a line instead of staying where it used to; if I type s, it doesn't get fixed and the terminal gets even more messed up. Does anyone know the cause? How should I fix this?
fixNewLines is basically a pipe that replaces '\n' with '\r\n' (because apparently that's important over ssh; otherwise you don't get the desired behaviour).
The reason this happens is that processes behave differently when they're attached to a PTY than when they're called directly. Because there is no PTY here, you see that behaviour. You would have to create your own pseudo-PTY in order to correctly receive and interpret the characters as if you were in a real shell.
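As a small illustration of the PTY/no-PTY difference (shown with Python's standard pty module, since the idea itself is language-agnostic; the node-side fix would be whatever PTY support your stack offers):

import subprocess
import pty

# Run `tty` with stdin on something that is not a terminal: it reports "not a tty".
piped = subprocess.run(["tty"], stdin=subprocess.DEVNULL,
                       capture_output=True, text=True)
print(piped.stdout.strip())   # typically "not a tty"

# Run the same program under a pseudo-terminal: it now behaves as if interactive.
pty.spawn(["tty"])            # prints something like /dev/pts/3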
TL;DR
I'm browsing through a number of solutions on npm and github looking for something that would allow me to read and write to the same file in two different places at the same time. So far I'm having trouble actually finding anything like this. Is there a module of some sort that will allow that?
Background
In essence my requirement is that in a large file I need to, in the following order:
read
transform
write
Ideally the usage would be something like:
const fd = fs.open(file, "r+");
const read = createReadStreamSomehowFrom(fd);
const write = createWriteStreamSomehowFrom(fd);
read
.pipe(new Transform({ transform(chunk, encoding, callback) {...} }))
.pipe(write);
I could do that with the standard fs.create[Read/Write]Stream, but there's no way to control the flow of both streams, and if my write position goes beyond the read position then I'm reading something I just wrote...
The use case is the same as perl -p -i -e, read and write to the same file (meaning the same inode) asynchronously and replace the contents without loading everything into memory.
I would expect this to be a real-world use case, yet all the implementations I found actually load the whole file into memory and then save it. Am I missing a known module here, or is there a need to actually write something like this?
Hmm... a tough one it seems. :)
So, for the record: I found no such module and actually discussed this with some people responsible for a nice in-file replacing module. Seeing no way to solve this otherwise, I decided to write it from scratch, and here it is:
signicode/rw-stream repo on github
rw-stream at npm
The module works on a simple principle: no byte can be written until it has been consumed by the readable stream. It's fairly simple underneath (a couple of fs.read/write ops, while keeping an eye on the read and write positions).
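As a rough illustration of that principle (a Python sketch of the idea, not rw-stream's actual API): keep the read offset ahead of the write offset on the same file descriptor, and never write a byte that hasn't been read yet.

import os

def transform_in_place(path, transform, chunk_size=64 * 1024):
    # Rewrite `path` chunk by chunk without loading it all into memory.
    # NOTE: this sketch assumes `transform` preserves each chunk's length;
    # handling output that grows or shrinks needs extra buffering, which is
    # exactly what makes the general problem (and rw-stream) interesting.
    fd = os.open(path, os.O_RDWR)
    read_pos = 0
    write_pos = 0
    try:
        while True:
            chunk = os.pread(fd, chunk_size, read_pos)
            if not chunk:
                break
            read_pos += len(chunk)            # reading always stays ahead of writing
            out = transform(chunk)
            os.pwrite(fd, out, write_pos)
            write_pos += len(out)
    finally:
        os.close(fd)

# Usage: uppercase a file in place (same number of bytes in and out).
# transform_in_place("big.txt", bytes.upper)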
If you find this useful then I'm happy. :)
Let's take, for example, the "top" application, which displays system information and periodically updates it.
I want to run it using node.js and display that information (and updates!).
Code I've come up with:
#!/usr/bin/env node
var spawn = require('child_process').spawn;
var top = spawn('top', []);
top.stdout.on('readable', function () {
    console.log("readable");
    console.log('stdout: ' + top.stdout.read());
});
It doesn't behave the way I expected. In fact it produces nothing useful:
readable
stdout: null
readable
stdout:
readable
stdout: null
And then exits (that is also unexpected).
The top application is taken just as an example. The goal is to proxy those updates through node and display them on the screen (the same way as running top directly from the command line).
My initial goal was to write a script to send a file using scp. I did that, and then noticed that I was missing the progress information which scp itself displays. I looked around at the scp node modules and they don't proxy it either. So I backtracked to a common application like top.
top is an interactive console program designed to be run against a live pseudo-terminal.
As to your stdout reads, top is seeing that its stdin is not a tty and is exiting with an error, thus no output on stdout. You can see this happen in the shell: if you do echo | top, it will exit because stdin is not a tty.
Even if it was actually running, though, its output data is going to contain control characters for manipulating a fixed-dimension console (like "move the cursor to the beginning of line 2"). It is an interactive user interface and a poor choice as a programmatic data source. "Screen scraping" and interpreting this data to extract meaningful information is going to be quite difficult and fragile. Have you considered a cleaner approach, such as getting the data you need out of the /proc/meminfo file and the other special files the kernel exposes for this purpose? Ultimately top is getting all this data from readily-available special files and system calls, so you should be able to tap into data sources that are convenient for programmatic access instead of trying to screen scrape top.
Now of course, top has analytics code to do averages and so forth that you may have to re-implement, so both screen-scraping and going through clean data sources have pros and cons, and aspects that are easy and difficult. But my $0.02 would be to focus on good data sources instead of trying to screen scrape a console UI.
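For example, a minimal sketch of reading memory figures straight from /proc/meminfo (shown in Python for brevity; from node it's the same idea of reading and parsing that file):

def read_meminfo():
    # Parse /proc/meminfo into a dict of integer kB values (Linux only).
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key] = int(rest.split()[0])   # values are reported in kB
    return info

mem = read_meminfo()
# MemAvailable exists on reasonably recent kernels (3.14+).
print("used kB:", mem["MemTotal"] - mem["MemAvailable"])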
Other options/resources to consider:
The free command such as free -m
vmstat
and other commands described in this article
the expect program is designed to help automate console programs that expect a terminal
And just to be clear, yes it is certainly possible to run top as a child process, trick it into thinking there's a tty and all the associated environment settings, and get at the data it is writing. It's just extremely complicated and is analogous to trying to get the weather by taking a photo of the weather channel on a TV screen and running optical character recognition on it. Points for style, but there are easier ways. Look into the expect command if you need to research more about tricking console programs into running as subprocesses.
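If you do want to go that route, a tiny sketch with the pexpect package (a Python cousin of expect; the package and the "load average" marker in top's header are assumptions about your environment) shows both the trick and why the result is awkward to consume:

import pexpect

# pexpect runs the child under a pseudo-terminal, so top believes it has a console.
child = pexpect.spawn("top", dimensions=(24, 80), encoding="utf-8")
child.expect("load average")              # wait until the first screen starts drawing
print(repr(child.before + child.after))   # raw screen data, full of control codes
child.terminate()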
Let's say I have 10 programs (in terminals) working in tandem: {p1,p2,p3,...,p10}.
It's hard to keep track of all the STDOUT debug statements in their respective terminals. I plan to create a GUI to keep track of each STDOUT such that, if I do:
-- Clicking on p1 would "tail" program 1's output.
-- Clicking on p3 would "tail" program 3's output.
It's a decent approach, but there may be better ideas out there. It's just overwhelming to have 10 terminals; I'd rather have 1 super terminal that keeps track of all this.
And unfortunately, Linux "screen" is not an option. RESTRICTIONS: I only have the ability to either redirect STDOUT to a file, or read directly from STDOUT.
If you are looking for a creative alternative, I would suggest that you look at sockets.
If each program writes to the socket (rather than STDOUT), then your master terminal can act as a server and organize the output.
Now, from what you described, it seems as though you are relatively constrained to STDOUT; however, it could be possible to do something like this:
# (use netcat (or nc on some systems) to write to a socket on the provided port)
./prog1 | netcat localhost 12312
I'm not sure if this fits in the requirements of what you are doing (and it might be more effort than it is worth!), but it could provide a very stable solution.
EDIT: As was pointed out in the comments, netcat does exactly what you would need to make this work.
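For what it's worth, here is a minimal sketch of what the "master terminal" side could look like (Python; the one-port-per-program scheme and the port numbers are assumptions matching the netcat example above):

import socketserver
import threading

class LineHandler(socketserver.StreamRequestHandler):
    # Prefix every received line with the port it arrived on and print it.
    def handle(self):
        port = self.server.server_address[1]
        for line in self.rfile:
            print("[p:%d] %s" % (port, line.decode(errors="replace").rstrip()))

# One listening port per program; ports are arbitrary (the example above used 12312).
for port in (12311, 12312, 12313):
    server = socketserver.ThreadingTCPServer(("localhost", port), LineHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

threading.Event().wait()   # keep the main thread alive while the listeners run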