Redirect new xterm's output back to the original terminal - linux

For example, I have a very simple script, ping.sh:
#!/bin/bash
/usr/bin/xterm -e ping localhost
Right now, the output of the ping only shows up in the new xterm. I would like the output to show in both the original terminal (stdout of ping.sh) as well as in the new xterm. Is there a way to do this?
PS: I'm struggling with a title for this.

Seems like a weird thing to do, but this might work:
#!/bin/bash
f=$(mktemp)        # mktemp already creates the file, so a separate touch is not needed
tail -f "$f" &     # mirror everything appended to the file in this terminal
/usr/bin/xterm -e "sh -c 'ping localhost 2>&1 | tee -a $f'"
kill %1            # stop the background tail once xterm exits
rm -f "$f"

Alternatively, it's possible to get the file name of the terminal connected to standard input using the command tty, then use tee in the new terminal to copy the output to the old terminal.
/usr/bin/xterm -e "ping localhost | tee $(tty)"
Of course, this only works if the script is not called with redirected stdin.
If the script is called with redirected stdin, the solutions from "How to get the real name of the controlling terminal?" on Unix & Linux Stack Exchange can be used instead: readlink /proc/self/fd/1, or ps (which requires some output parsing).
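Putting the two together, a minimal sketch (assuming that when stdin is redirected, stdout still points at the controlling terminal):
#!/bin/bash
# Prefer tty; fall back to the stdout symlink if stdin is redirected.
if ! term=$(tty); then
    term=$(readlink /proc/self/fd/1)
fi
/usr/bin/xterm -e "sh -c 'ping localhost | tee $term'"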

Related

Use bash in conky.config

You can use bash code, and call bash scripts, in conky.text. Is there any way to use it in conky.config?
The reason I want this is to have window specifications depending on whether I have an external monitor connected or not.
So I want logic similar to this:
if xrandr -q | grep -oP 'HDMI2\sconnected' > /dev/null ; then
    x=-900
else
    x=0
fi
gap_x=$x
I personally do not encourage the following solution, but if all else fails, this will at least work very well.
Make a copy of your .conkyrc file, let's call it .conkyrc_dual, and make the bash file below:
#!/bin/bash
pkill conky    # stop any running conky instance first
if xrandr -q | grep -qP 'HDMI2\sconnected' ; then
    conky -c ~/.conkyrc_dual        # external monitor attached: dual config
    notify-send 'Conky' 'Dual monitors'
else
    conky                           # default single-monitor config
    notify-send 'Conky' 'Single monitor'
fi
Now run this file when you want to start conky.
You could also have a bash script use sed to edit the gap_x variable in your .conkyrc file as needed before starting conky. That way, you'd only need a single config file. Keep a backup of .conkyrc, of course, just in case something goes terribly awry.
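For instance, a minimal sketch of that sed approach (it assumes the Lua-style conky.config syntax, i.e. a line like gap_x = 0,; adjust the pattern and replacement for the old gap_x 0 syntax):
#!/bin/bash
# Patch gap_x in the single config file, then (re)start conky.
if xrandr -q | grep -qP 'HDMI2\sconnected' ; then
    x=-900
else
    x=0
fi
sed -i "s/^\s*gap_x.*/    gap_x = $x,/" ~/.conkyrc
pkill conky
conky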

using sed and pstree to display the type of terminal being used

I've been trying to display the type of terminal being used as the name only. For instance if I was using konsole it would display konsole. So far I've been using this command.
pstree -A -s $$
That outputs this.
systemd---konsole---bash---pstree
I have the following that can extract konsole from that line
pstree -A -s $$ | sed 's/systemd---//g;s/---.*//g' | head -1
and that outputs konsole properly. But some people have output from just the pstree command that can look like this.
systemd---kdeinit4---terminator---bash---pstree
or this
systemd---kdeinit4---lxterminal---bash---pstree
and then when I add the sed command it extracts kdeinit4 instead of terminator. I can think of a couple of scenarios for extracting the terminal type, but none that don't rely on conditional statements checking for specific terminals. The problem is that I can't accurately predict how many unrelated entries may appear before or after the terminal name, what they will be, or what the terminal name itself will be. Does anyone have any ideas on a solution to this?
You could use
ps -p "$PPID" -o comm=
Or
ps -p "$PPID" -o fname=
If your shell does not have the PPID variable set, you can get it with
ps -p "$(ps -p "$$" -o ppid= | sed 's|\s\+||')" -o fname=
Another approach: the nearest ancestor of the current shell that doesn't belong to the same tty as the shell is likely the one that provides the virtual terminal, so we could find it by walking up the process tree:
#!/bin/bash
shopt -s extglob
SHELLTTY=$(exec ps -p "$$" -o tty=)
P=$$
while read P < <(exec ps -p "$P" -o ppid=) && [[ $P == +([[:digit:]]) ]]; do
    if read T < <(exec ps -p "$P" -o tty=) && [[ $T != "$SHELLTTY" ]]; then
        ps -p "$P" -o comm=
        break
    fi
done
I don't know how to isolate the terminal name on your system, but as a parsing exercise, and assuming the terminal is directly running your bash, you could pipe the pstree output through:
awk -F"---bash---" 'NF == 2 { count = split($1, arr, "---"); print arr[count]; }'
This will find the word prior to the "---bash---" which in your examples is
konsole
terminator
lxterminal
If you want different shell types, you could expand the field separator to include them like:
awk -F"---(bash|csh)---" ' NF == 2 { count = split( $1, arr, "---" ); print arr[count]; }'
Considering an imaginary line like:
systemd---imaginary---monkey---csh---pstree
the awk would find "monkey" as the terminal name as well as anything from your test set.
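A quick way to test this, feeding awk the imaginary line with echo in place of real pstree output:
echo 'systemd---imaginary---monkey---csh---pstree' | awk -F"---(bash|csh)---" 'NF == 2 { count = split($1, arr, "---"); print arr[count]; }'
This prints monkey.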
No guarantees here, but I think this will work most of the time, on linux:
ps -ocomm= $(lsof -tl /proc/$$/fd/0 | grep -Fxf <(lsof -t /dev/ptmx))
A little explanation is probably in order, but see man ps, man lsof and (especially) man pts for information.
/dev/ptmx is a pseudo-tty master (on modern linux systems, and some other unix(-like) systems). A program will have one of these open if it is a terminal emulator, a telnet/ssh daemon, or some other program which needs a captive terminal (screen, for example). The emulator writes to the pseudo-tty master what it wants to "type", and reads the result from the pseudo-tty slave.
/proc/$$/fd/0 is stdin of process $$ (i.e. the shell in which the command is executed). If stdin has not been redirected, this will be a symlink to some slave pseudo-tty, /dev/pts/#. That is the other side of the /dev/ptmx device, and consequently all of the programs listed above which have /dev/ptmx open will also have some /dev/pts/# slave open as well. (You might think that you could use /dev/stdin or /dev/fd/0 instead of /proc/$$/fd/0, but those would be opened by lsof itself, and consequently would be its stdin; because of the way lsof is implemented, that won't work.) The -l option to lsof causes it to follow symlinks, so that will cause it to show the processes which have the same pts open as the current shell.
The -t option to lsof causes it to produce "terse" output, consisting only of pids, one per line. The -Fx options to grep cause it to match strings, rather than regex, and to force a full line match; the -f FILE option causes it to accept the strings to match from FILE (which in this case is a process substitution), one per line.
Finally, ps -ocomm= prints out the "command" (chopped, by default, to 8 characters) corresponding to a pid.
In short, the command finds the list of terminal emulators and similar programs which hold a master pseudo-tty, and the list of processes which use the same pseudo-tty slave as the current shell; it takes the intersection of the two, and then looks up the command name for whatever results.
On systems that use the Debian alternatives mechanism, you can query which terminal emulator is configured as the default:
curTerm=$(update-alternatives --query x-terminal-emulator | grep '^Best:')
curTerm=${curTerm##*/}
printf "%s\n" "$curTerm"
And the result is
terminator
Of course it can be different.
Now you can use the $curTerm variable in your sed command.
But I am not sure if this is going to work properly with symlinks.

What is meant by 'output to stdout'

New to bash programming. I am not sure what is meant by 'output to stdout'. Does it mean print out to the command line?
If I have a simple bash script:
#!/bin/bash
wget -q http://192.168.0.1/test -O - | grep -m 1 'Hello'
it outputs a string to the terminal. Does this mean it's 'outputting to stdout' ?
Thanks
Yes, stdout is the terminal (unless it's redirected to a file using the > operator, or into the stdin of another process using |).
In your specific example, wget's output is piped through grep, and grep's output then goes to the terminal.
Every process on a Linux system (and most others) has at least 3 open file descriptors:
stdin (0)
stdout (1)
stderr (2)
Normally, each of these file descriptors points to the terminal from which the process was started. Like this:
cat file.txt # all file descriptors are pointing to the terminal where you type the command
However, bash allows to modify this behaviour using input / output redirection:
cat < file.txt # will use file.txt as stdin
cat file.txt > output.txt # redirects stdout to a file (will not appear on terminal anymore)
cat file.txt 2> /dev/null # redirects stderr to /dev/null (will not appear on terminal anymore)
The same is happening when you are using the pipe symbol like:
wget -q http://192.168.0.1/test -O - | grep -m 1 'Hello'
What is actually happening is that the stdout of the wget process (the process before the |) is redirected to the stdin of the grep process. So wget's stdout isn't a terminal anymore, while grep's stdout is still the current terminal. If you want to redirect grep's output to a file, for example, then use this:
wget -q http://192.168.0.1/test -O - | grep -m 1 'Hello' > output.txt
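On Linux you can check for yourself where a shell's three descriptors currently point (/proc is Linux-specific):
ls -l /proc/$$/fd/0 /proc/$$/fd/1 /proc/$$/fd/2
Each entry is a symlink, e.g. to a /dev/pts/# device, a regular file, or a pipe, depending on any redirections in effect.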
Unless redirected, standard output is the text terminal which initiated the program.
Here's a wikipedia article: http://en.wikipedia.org/wiki/Standard_streams#Standard_output_.28stdout.29

Limitation for piped redirection to file on shell?

I'm trying to do the following:
uname>>1.txt | echo #####>>1.txt | echo uname>>1.txt &
to get the following output:
uname
## ## ## ## ##
Linux (or whatever the uname is)
But instead all I get as output is:
uname
However if I try just:
uname>>1.txt | echo uname>>1.txt &
Then I do get the following output:
uname
Linux
Wondering if there is some limitation to this sort of piped redirection?
Update:
I'll be calling this shell command from within a tcl script. Well, actually there is a list of commands being executed from within the tcl script, and their outputs need to be formatted in the following way <------->
I wanted to run them in background to decrease the execution time, as the outputs of these commands are not related to each other.
I thought the commands in () would output the formatted output to 1.txt as a background process.
Would you suggest another way of doing this?
There are a number of problems here.
In general it's a bad idea to combine output redirection and pipes. Once redirected, there's nothing left to pipe.
Piping to echo doesn't make a bit of sense.
Use parentheses to put a suite of commands in the background.
You shouldn't be putting this in the background.
In general commands run from left to right, not right to left.
What you want is
(echo uname > 1.txt; echo ------ >>1.txt; uname >>1.txt)
Update (per comments and changes to the question)
You are continuing to invoke what is essentially undefined behavior with this command:
uname>>1.txt | echo uname>>1.txt &
The pipe from uname is invalid because there's nothing left to pipe once you have redirected its output. The pipe to echo is invalid because echo doesn't read from standard input. Which of the uname or echo commands prints its output to the file 1.txt first is up for grabs here. This is apparently what you want:
bash -c 'echo uname >> 1.txt; echo ------ >> 1.txt; uname >> 1.txt'
Note the -c option to bash. This tells bash that the argument following -c is a string that contains shell commands.
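If, despite the caveat above, you still want the whole sequence in the background (say, to let the tcl script continue), background the parenthesized group as a single unit; a sketch:
( echo uname; echo ------; uname ) >> 1.txt &
wait    # optional: block until the background group has finished writing
Grouping the commands keeps their output in order; only the group as a whole runs asynchronously.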

How to log output in bash and see it in the terminal at the same time?

I have some scripts where I need to see the output and log the result to a file, with the simplest example being:
$ update-client > my.log
I want to be able to see the output of the command while it's running, but also have it logged to the file. I also log stderr, so I would want to be able to log the error stream while seeing it as well.
update-client 2>&1 | tee my.log
2>&1 redirects standard error to standard output, and tee sends its standard input to standard output and the file.
Just use tail to watch the file as it's updated. Background your original process by adding & after the command above; then run:
$ tail -f my.log
It will update continuously. (Note that tail won't tell you when the command has finished, so you can have the script append a final line to the log to signal completion. Press Ctrl-C to exit tail.)
You can use the tee command for that:
command | tee /path/to/logfile
The equivalent without writing to the shell would be:
command > /path/to/logfile
If you want to append (>>) and show the output in the shell, use the -a option:
command | tee -a /path/to/logfile
Please note that the pipe will catch stdout only, errors to stderr are not processed by the pipe with tee. If you want to log errors (from stderr), use:
command 2>&1 | tee /path/to/logfile
This means: run command and redirect the stderr stream (2) to stdout (1). That will be passed to the pipe with the tee application.
Learn more about this on the Ask Ubuntu site.
Another option is to use block-based output capture from within the script (not sure whether that is the correct technical term).
Example
#!/bin/bash
{
    echo "I will be sent to screen and file"
    ls ~
} 2>&1 | tee -a /tmp/logfile.log
echo "I will be sent to just terminal"
I like to have more control and flexibility - so I prefer this way.
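Along the same lines, a bash-specific sketch using exec and process substitution: it routes everything the rest of the script writes through tee, so no per-command piping is needed:
#!/bin/bash
# From here on, all stdout and stderr is both shown and appended to the log.
exec > >(tee -a /tmp/logfile.log) 2>&1
echo "I am logged and shown"
ls ~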
