Nonblocking read from a pipe in Linux

I would like to read /sys/kernel/debug/tracing/trace_pipe in a non-blocking way using Linux command-line tools. For instance, cat cannot be used, because it will block. This is similar to an existing question, with the difference that I don't want to use Python.

The concept of ‘non-blocking’ doesn't apply to command-line tools. However, you could run an instance of cat in the background by appending an ampersand to the invocation, like so:
cat /sys/kernel/debug/tracing/trace_pipe &
Now, the command returns immediately, and every time a line is readable from the file, it gets printed to the terminal (and messes up whatever you were typing).
You could also use tail -F if the file itself doesn't block.
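If the background copy scribbling over your prompt is a problem, a variation on the same idea (my own sketch; /tmp/trace.log is just a placeholder path) is to redirect the background cat into a regular file and inspect that file on demand:
cat /sys/kernel/debug/tracing/trace_pipe > /tmp/trace.log &
tail /tmp/trace.log
Unlike reading the pipe directly, tail on the regular file returns immediately, and tail -f /tmp/trace.log gives a live view that you can interrupt at any time.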

Related

Why isn't this command running properly?

So I'm making a better command-line frontend for APT, and I'm putting on some finishing touches. When the code below runs:
Command::new("unbuffer")
.arg("apt")
.arg("list")
.arg("|")
.arg("less")
.arg("-r")
.status()
.expect("Something went wrong.");
it spits out:
E: Command line option 'r' [from -r] is not understood in combination with the other options.
but when I just run unbuffer apt list | less -r manually in my terminal it works perfectly. How do I get it to run properly when calling it in Rust?
Spawning a process via Command uses the system's native functionality to create a process. This is a low-level feature and has little to do with the shell/terminal you are used to. In particular, your shell (e.g. bash or zsh, running inside your terminal) offers many more features. For example, piping via | is such a feature. Command does not support these features, because the low-level system API doesn't either.
Luckily, the low-level interface offers other means of achieving most of this. Piping, for example, is mostly just redirecting the standard inputs and outputs. You can do that with Command::{stdin, stdout, stderr}. Please see this part of the documentation for more information.
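If you just want the pipeline to behave exactly as it does in your terminal, one rough workaround (my suggestion, not something the answer above prescribes) is to hand the whole command line to a shell and let the shell do the piping:
sh -c 'unbuffer apt list | less -r'
From Rust that is a single spawn, e.g. Command::new("sh").arg("-c").arg("unbuffer apt list | less -r"), with the usual shell-quoting caveats.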
There are a few very similar questions, which are not similar enough to warrant closing this as a dupe, though:
Execute a shell command
Why does the compgen command work in the Linux terminal but not with process::Command?: mentions shell built-in commands that do not work with Command.
Executing find using std::process::Command on cygwin does not work

Sensitive Data in Command Line Interfaces

I know it's frowned upon to use passwords in command line interfaces like in this example:
./commandforsomething -u username -p plaintextpassword
My understanding is that the reason for that (on Unix systems, at least) is that the password can be read in the scrollback as well as in the .bash_history file (or whatever your flavor of shell uses).
HOWEVER, I was wondering if it is safe to use that sort of interface with sensitive data programmatically. For example, in Perl, you can execute a command using backticks, exec, or system (I'm not 100% sure of the differences between these, apart from backticks returning the output of the executed command rather than just a return value... but that's a question for another post, I guess).
So, my question is this: Is it safe to do things LIKE
system("command", "userarg", "passwordarg");
as it essentially does the same thing, just without being recorded in scrollback or history? (Note that I only use Perl as an example; I don't care about the Perl-specific answer, but about the generally accepted principle.)
It's not only about shell history.
ps shows all arguments that were passed to a program. The reason passing secrets like this is bad is that you could potentially see other users' passwords just by running ps in a loop. The cited code won't change much, as it essentially does the same thing.
You can try to pass some secrets via the environment instead: if a user doesn't have access to the given process, its environment won't be shown to them. This is better, but still a pretty bad solution (e.g. if the program fails and dumps core, all the passwords get written to disk).
If you use environment variables, check them with ps e (or ps -E on some systems), which will show you the environment variables of a process. Run it as a different user than the one executing the program: basically, simulate the "attacker" and see if you can snoop the password. On a properly configured system you shouldn't be able to.
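To make that attacker simulation concrete (the program name, user name, and password below are placeholders), start the program and look at what any user on the machine can see:
$ ./commandforsomething -u alice -p hunter2 &
$ ps -eo args | grep '[c]ommandforsomething'
./commandforsomething -u alice -p hunter2
(The brackets just keep grep from matching its own command line.) The password is plainly visible in the process list. The environment, in contrast, is only readable through /proc/<pid>/environ by the process owner and root, which is why passing secrets that way raises the bar without being a real fix.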

Linux: How to monitor an incoming ssh session on an attached monitor

I am ssh'ing into my Raspberry Pi and running a Python script. I need to leave the program running overnight and show the output of the program on a monitor that is attached to the Pi. How can I accomplish this?
You could background your process so that it keeps running when your ssh session is gone: end the command with '&' (and consider nohup so it isn't killed at logout). Then you can use the system command 'wall' to send messages to other users. That should display messages from your process on the console.
The tee command allows you to take the standard output of a program, append it to a file (with -a), and also pass it through to standard output. For example:
$ echo "Hello world" | tee -a teetest.txt
Hello world
$ cat teetest.txt
Hello world
$
Using this method, your Python script's output is still displayed on a terminal, but it is also captured to a file.
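If the attached monitor shows a text console, a rough way to combine the two (a sketch on my part; it assumes the console is /dev/tty1 and that you have permission to write to it, which may require root) is to let tee keep the log file while the pass-through copy goes to the console:
./myscript.py 2>&1 | tee -a output.log > /dev/tty1 &
Consider prefixing this with nohup, or running it under script as described below, if it must survive the end of the ssh session that started it.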
In addition, your operating system may have a program installed called script. Its purpose is pretty much exactly what you're looking for: capture the output of (and optionally the input to) a program that you're running. Only it would be used to "wrap" your Python script rather than merely process its output after the fact.
Usage varies between unices, but in FreeBSD (which I use on my R-Pi), you could do this:
script output.txt ./myscript.py
If you're using something other than FreeBSD, try reading the man page included with your OS in order to learn usage and options.

Reading the console output of a process through SSH

I have a process running on Slackware 13.37 that outputs data to its terminal window. Is it possible to read/redirect this output to ssh window/PuTTY without terminating/restarting the process?
You can capture the output using shell redirection or via a program such as script -- provided that your program was started "in the usual way". (It is possible to write via special devices to other terminals, given appropriate permissions).
One assumes that you already know about redirecting output, e.g.,
foo >bar
but have some case that plain redirection does not cover, e.g., a GUI program which continues to write to the terminal.
Without worrying about interfering with a program by redirecting its output, I would run script and start the program within the shell that script starts. Then anything written from that shell would go to the typescript file (by default).
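For that simple case, the whole workflow looks like this (session.log is a name of my choosing; script writes to a file called typescript by default):
$ script session.log
$ foo
$ exit
Everything the shell and foo print between script and exit ends up in session.log.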
On the other hand, GUI programs which start a terminal window may/may not be configurable to allow customizing them with a startup script that can capture output.
As noted in How to open process again in linux terminal?, it is possible to attach to a running process with strace, given its process-ID. Using the -e option as described in 7 Strace Examples to Debug the Execution of a Program in Linux, you could just extract write calls.
Keep in mind with strace that nonprinting characters from the writes are converted to printable text, and that strace displays function arguments up to a fixed limit (which you can adjust using the -s option). The output of strace can be redirected (and it need not be run on the same terminal as the original process).
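A minimal invocation along those lines (1234 stands in for the target's process-ID) might be:
strace -p 1234 -e trace=write -s 4096 -o /tmp/writes.log
Here -s raises the truncation limit on the logged data and -o sends strace's own output to a file, so you can run it from the ssh session without mixing it into anyone's terminal.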

Modifying a file that is being used as an output redirection by another program

If I have a file to which some output is redirected, what will happen if I modify that file from another program? Will both changes be recorded in the file?
To illustrate:
Terminal 1 (a file is used to store the output of a program, using either tee or the redirection operator >):
$ ./program | tee output.log
Terminal 2 (at the same time, the log file is being modified by another program, e.g. vim):
$ vim output.log
It depends on the programs involved and the system calls they make.
vim, for example, will not write to the file until you issue the ":w" or ":x" command. It will then detect that the file has changed on disk and make you confirm the overwrite.
If the program does open(2) on the file with the O_APPEND flag, before each write(2) the file offset is positioned at the end of the file, as if with lseek(2).
So if you have two commands that append, like tee -a, they will take turns appending.
However, with NFS you still may get corrupted files if more than one process appends data to a file at once, because NFS doesn't support appending to a file and the kernel has to simulate it.
The effect of two or more processes modifying the data of the same file (the same inode, in tech lingo) is otherwise undefined. This is a classic race condition: the result depends on the particular order in which the writing processes are scheduled.
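A quick way to watch the appending behaviour (the file name shared.log is mine; run it on a local filesystem, not NFS, per the caveat above):
( for i in 1 2 3; do echo "A$i"; sleep 0.1; done ) >> shared.log &
( for i in 1 2 3; do echo "B$i"; sleep 0.1; done ) >> shared.log &
wait; cat shared.log
The A and B lines interleave in a scheduler-dependent order, but each individual line arrives intact, because every write lands at the then-current end of the file.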
