Reading the console output of a process through SSH on Linux

I have a process running on Slackware 13.37 that outputs data to its terminal window. Is it possible to read or redirect this output to an SSH session (e.g., in PuTTY) without terminating or restarting the process?

You can capture the output using shell redirection or via a program such as script -- provided that your program was started "in the usual way". (It is possible to write via special devices to other terminals, given appropriate permissions).
One assumes that you already know about redirecting output, e.g.,
foo >bar
but have some case, e.g., a GUI program which continues to write to the terminal.
Without worrying about interfering with a program by redirecting its output, I would run script and start the program within the shell that script starts. Then anything written from that shell would go to the typescript file (by default).
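For instance, a minimal sketch (the log path and program name are placeholders):
script /tmp/program.log   # starts a new shell, recording everything written to its terminal
./your_program            # run the program inside that recording shell
exit                      # leave the shell to end the recording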
On the other hand, GUI programs which start a terminal window may or may not allow customizing them with a startup script that can capture output.
As noted in How to open process again in linux terminal?, it is possible to attach to a running process with strace, given its process-ID. Using the -e option as described in 7 Strace Examples to Debug the Execution of a Program in Linux, you could just extract write calls.
Keep in mind with strace that nonprinting characters in the written data are shown as escape sequences, and that strace truncates string arguments at a fixed length (which you can adjust using the -s option). The output of strace can be redirected, and it need not be run on the same terminal as the original process.
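For example, to capture just the write calls of a process that is already running (the PID 1234 and the output path are placeholders):
strace -p 1234 -e trace=write -s 1000 -o /tmp/writes.log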

Related

Why isn't this command running properly?

So I'm making a better command-line frontend for APT. I'm putting on some finishing touches, and when the code below runs:
Command::new("unbuffer")
.arg("apt")
.arg("list")
.arg("|")
.arg("less")
.arg("-r")
.status()
.expect("Something went wrong.");
it spits out:
E: Command line option 'r' [from -r] is not understood in combination with the other options.
but when I just run unbuffer apt list | less -r manually in my terminal it works perfectly. How do I get it to run properly when calling it in Rust?
Spawning a process via Command uses the system's native functionality to create a process. This is a low-level feature and has little to do with the shell/terminal that you are used to. In particular, your shell (e.g. bash or zsh, running inside your terminal) offers many more features; piping via | is one of them. Command does not support such features, because the low-level system API it wraps doesn't.
Luckily, the low-level interface offers other means of achieving much of the same. Piping, for example, is mostly just redirecting the standard inputs and outputs. You can do that with Command::{stdin, stdout, stderr}. Please see this part of the documentation for more information.
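As a rough sketch, the pipeline from the question can be rebuilt without a shell like this (following the piping pattern shown in the std::process documentation; the command names are taken from the question):
use std::process::{Command, Stdio};

fn main() {
    // Run `unbuffer apt list` with its stdout captured in a pipe.
    let producer = Command::new("unbuffer")
        .args(["apt", "list"])
        .stdout(Stdio::piped())
        .spawn()
        .expect("failed to start unbuffer");

    // Hand that pipe to `less -r` as its stdin: the moral
    // equivalent of `unbuffer apt list | less -r`.
    Command::new("less")
        .arg("-r")
        .stdin(producer.stdout.expect("child stdout was not captured"))
        .status()
        .expect("failed to run less");
}
Alternatively, you can ask a shell to do the plumbing for you: Command::new("sh").arg("-c").arg("unbuffer apt list | less -r").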
There are a few very similar questions, which are not similar enough to warrant closing this as a dupe, though:
Execute a shell command
Why does the compgen command work in the Linux terminal but not with process::Command?: mentions shell built-in commands that do not work with Command.
Executing find using std::process::Command on cygwin does not work

ulimit Linux connection limit

I have a question about ulimit:
ulimit -u unlimited
ulimit -n 60000
If I execute these in a screen, will they be kept as a setting for that screen until I kill it, or do I have to run them every time I run the program?
What I want to do is irrelevant, I just want to know if they will be kept as a setting within the screen.
ulimit is a bash builtin. It invokes the setrlimit(2) system call.
That syscall modifies some limit in the shell process itself (likewise, the cd builtin calls chdir(2) and modifies the working directory of your shell process).
In a bash shell, $$ expands to the PID of that shell process. So you can use ps $$ (and even compose it, e.g. in touch /tmp/foo$$ or cat /proc/$$/status).
So the ulimit applies to your shell and stays the same until you run another ulimit command (or until your shell terminates).
The limits of your shell process (and also its working directory) are inherited by every process started by fork(2) from your shell, which includes the processes running your commands in that same shell. Notice that changing the limit (or the working directory) of some process doesn't affect those of the parent process, and that execve(2) doesn't change limits or working directories.
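A quick way to see this from a shell (assuming the hard limit allows raising -n to 60000):
ulimit -n 60000       # set the open-files limit in this shell
ulimit -n             # prints 60000 for the rest of this shell's life
bash -c 'ulimit -n'   # a forked child inherits it: also prints 60000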
Limits (and the working directory) are properties of processes (not of terminals, screens, windows, etc.). Each process has its own limits, working directory, virtual address space, file descriptor table, and so on. You could use proc(5) to query them (try running cat /proc/self/limits, cat /proc/$$/maps, and ls -l /proc/self/cwd /proc/self/fd/ in some shell). See also this. Limits (and the working directory) are inherited by a child process started with fork(2), which gets its own copy of them (so limits are not shared but copied, by fork).
But if you start another terminal window, it is running another shell process (which has its own limits and working directory).
See also credentials(7). Be sure to understand how fork(2) and execve(2) work, and how your shell uses them (practically every command that isn't a builtin starts a new process).
You mention kill(1) in some comments. Be sure to read its man page (and every man page mentioned here!). Read also kill(2) and signal(7).
A program can itself call setrlimit(2) (or chdir(2)), but that won't affect the limits (or working directory) of its parent process (often your shell). Of course, it would affect the future fork-ed child processes of the process running that program.
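This is also visible from a shell; the child below stands in for any program calling setrlimit(2) on itself:
bash -c 'ulimit -n 1024; ulimit -n'   # the child lowers and prints its own limit: 1024
ulimit -n                             # the parent shell's limit is unchanged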
I recommend reading ALP (a freely downloadable book about Linux programming), which has several chapters explaining all of that; you need several books to cover the details. After ALP, read intro(2), be aware of the existing syscalls(2), play with strace(1) and your own programs (writing a small shell is very instructive, or study the code of some existing one), and perhaps read Operating Systems: Three Easy Pieces.
NB. The screen(1) utility manages several terminals, each typically running its own shell process. I don't know if you are referring to that utility. Read also about terminal emulators and the tty demystified page.
The only way to really kill some screen is with a hammer, like this:
(image: a hammer smashing a laptop)
Don't do that, you'll be sorry.
Apparently, you are talking about sending a signal (with kill(1), killall(1), or pkill(1)) to some process running the screen(1) program, or to its process group. That is not the same thing.

Linux: How to monitor an incoming ssh session on an attached monitor

I am ssh'ing into my Raspberry Pi and running a Python script. I need to leave the program running overnight and capture the output of the program on a monitor that is attached to the Pi. How can I accomplish this?
You could background your process so it will keep running when your ssh session is no longer active: end the command with '&'. Then you can use the system command wall to send messages to other users; that should display messages from your process on the console.
The tee command allows you to take standard out from a program, append it to a file, and also send it to standard out. For example:
$ echo "Hello world" | tee -a teetest.txt
Hello world
$ cat teetest.txt
Hello world
$
Using this method, your Python script's output can still be sent to the monitor connected to your R-Pi, but it will also be captured to a file.
In addition, your operating system may have a program installed called script. Its purpose is to do pretty much exactly what you're looking for: capture the output (and optionally the input) of a program that you're running. Only it would be used to "wrap" your Python script rather than merely process its output after the fact.
Usage varies between unices, but in FreeBSD (which I use on my R-Pi), you could do this:
script output.txt ./myscript.py
If you're using something other than FreeBSD, try reading the man page included with your OS in order to learn usage and options.

Nonblocking read from a pipe in Linux

I would like to read /sys/kernel/debug/tracing/trace_pipe in non-blocking way using Linux command-line tools. For instance, cat cannot be used, because it will be blocked. This is similar to this, with the difference that I don't want to use Python.
The concept of 'non-blocking' doesn't apply to command-line tools. However, you could run an instance of cat in the background by appending an ampersand to the invocation, like so:
cat /sys/kernel/debug/tracing/trace_pipe &
Now, the command returns immediately, and every time a line is readable from the file, it gets printed to the terminal (and messes up whatever you were typing).
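If that is too disruptive, redirect the background cat to a file and watch the file instead (the output path is just an example):
cat /sys/kernel/debug/tracing/trace_pipe > /tmp/trace.out &
tail -f /tmp/trace.out   # interrupting tail stops the watching, not the capture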
You could also use tail -F if the file itself doesn't block.

What is the difference between the jobs and ps commands in linux?

Please tell me the differences in the information displayed by the two commands jobs and ps on a Unix operating system.
jobs is a shell builtin. It tells you about the jobs that the current shell is managing. It can give you information that is internal to the shell, like the job numbers (which you can use in shortcuts like fg %2) and the original command line as it appeared before variable expansions.
ps is an external command which can tell you about all the processes running on the system. (By default it only shows a small subset, but there are options to select larger sets of processes to display.) It doesn't know about the shell-internal stuff.
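A small illustration (sleep stands in for any long-running command):
sleep 300 &    # start a background job in this shell
jobs           # e.g. "[1]+  Running    sleep 300 &"
ps -e | head   # every process on the system, not just this shell's jobs
fg %1          # job numbers like %1 only mean something to the shell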
jobs shows the current jobs living in this terminal, for example:
gedit &
jobs
This will show you that gedit is currently running.
If you close the terminal, gedit dies too; you can use disown so it won't die.
ps is a totally different thing: it's a process-table display tool.
