process substitution in bash, sometimes I have to press "Enter"

I'm just learning to use process substitution in bash. Here's the command:
echo TEXT > >(tee log)
This is a pointless command but the thing is I have to press Enter after I run it. Why is that?
Sometimes this happens with more useful commands like:
ls SOME_NON_EXISTING_FILE 2> >(tee log)

Actually, Enter is not really needed; you can just type the next command, e.g. date, and check. What is happening is that, because of the process substitution, your command exits first and the substituted process writes its output to the terminal afterwards, over the freshly printed prompt. That is what gives the false impression that you need to press Enter.
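If the stray output bothers you, one way to avoid it is to make the shell wait for the substituted process before printing the next prompt, for example by giving the whole command a pipe to drain. A minimal sketch:
# bash does not wait for >(...) processes on its own; piping the group
# through cat forces a wait, because cat only exits once tee has closed
# its end of the pipe
{ echo TEXT > >(tee log); } | cat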

Linux shell script executes but does not return to the command prompt

I have a script that runs a line like the following:
sudo -u $USER $SUDOCMD &>>stdout.log
The sudo command starts a real-time process that prints lots of output to the console.
Each time the script runs, it does not return to the command prompt; you have to press Enter or Ctrl+C to get back to it.
Is there a way to do this automatically, so that I can get a return value from the script and decide whether it ran OK or failed?
thanks.
What is probably happening here is that your script is printing binary data to the TTY rather than text to standard output/error, and this is hiding your prompt. You can for example try this:
$ PS1='\$ '
$ (printf "first line\nsecond line\r" > $(tty)) &>> output.log
The second command will result in two lines of output, the second one being "mixed in" with your prompt:
first line
$ cond line
As you can see the cursor is on the "c", but if you start typing the rest of the line is overwritten. What has happened here is the following:
You pressed Enter to run the command, so the cursor moved a line down.
The tty command prints the path to the terminal file, something like "/dev/pts/1". Writing to this file means that the output does not go to standard output (which is usually linked to the terminal) but directly to the terminal.
The subshell (similar to running the command in a shell script) ensures that the first redirect isn't overridden by the second one. So the printf output goes directly to the terminal, and nothing goes to the output log (see the sketch after this list).
The terminal now proceeds to print the printf output, which ends in a carriage return. Carriage return moves the cursor to the start of the line you've already written to, so that is where your prompt appears.
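You can see the role of the subshell by leaving it out; a sketch:
# without the subshell, the later &>> redirection wins, so the printf
# output lands in the log file instead of on the terminal
printf "first line\nsecond line\r" > $(tty) &>> output.log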
By the way:
&>> redirects both standard output and standard error, contrary to your filename.
Use More Quotes™
I would recommend reading up on how to put a command in a variable; the usual array-based pattern is sketched below.
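A sketch, with a hypothetical command standing in for $SUDOCMD: keep the command and its arguments in a bash array and expand it quoted.
sudocmd=(/usr/local/bin/myserver --verbose "an arg with spaces")  # hypothetical command and arguments
sudo -u "$USER" "${sudocmd[@]}" &>> stdout.log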

How to inject EOF for stdin on Git bash on windows 10?

I have tried the following keys: Ctrl+D, Ctrl+C, and Ctrl+Z. The program I am running in bash doesn't stop unless EOF is reached.
The following worked for me:
Ctrl+Z, then press Enter.
There is not exactly an EOF character, only a control character that signals end of input, and the CMD console wrapped around bash on Windows does not interpret Ctrl+Z correctly.
In Git Bash, that would be: Alt+Space, followed by C (the console window's Close command).
Done in a bash session with no program running, that would close the console.
Try it to see whether your program intercepts the control character and exits. But it might close the bash window altogether.
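If the goal is simply for the program to see end-of-file on its stdin, redirection avoids the keystroke problem entirely. A sketch, with ./your_program standing in for the real program:
./your_program < /dev/null   # empty input, immediate EOF
# or feed it fixed input; a here-document ends with a real EOF:
./your_program <<'EOF'
some input
EOF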

Process substitution >(cmd) doesn't output correctly

I am trying to learn about process substitution and when I execute this:
$ tee >(wc -l) <<< $'aaa\nbbb'
aaa
bbb
$ 2
bash prints the number after the next prompt and waits for me to press Enter.
I am using bash 4.4.12 and see the same issue with bash 4.3.48. There is no issue in bash 4.3.30, where the command correctly outputs:
$ tee >(wc -l) <<< $'aaa\nbbb'
aaa
bbb
2
What could be the possible issue?
It's a quirk / design flaw with process substitution; bash only waits for the main command to terminate. It doesn't wait for substituted processes to end. It creates a race condition when tee ends: does wc finish before bash prints its next prompt? Sometimes it does, sometimes it doesn't.
See this Unix.SE answer by Stéphane Chazelas for a detailed explanation and possible workaround.
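The workaround discussed there boils down to making the shell wait for the substituted process. A sketch:
# route everything through a pipe the shell has to wait on; cat only
# exits once both tee and wc have closed their ends of the pipe
tee >(wc -l) <<< $'aaa\nbbb' | cat
# or, in bash 4.4 and later (if I recall correctly), wait explicitly:
# $! holds the PID of the last process substitution
tee >(wc -l) <<< $'aaa\nbbb'; wait $!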
When you use process substitution, the process you're substituting runs in the background, and the shell doesn't wait for it to finish before displaying the prompt.
Sometimes it will display its output before the shell displays the next prompt; sometimes it will be a little slower. It has nothing to do with the version of bash; it is purely chance which process is faster.
The shell is waiting for you to press Enter because it has already displayed the prompt and is waiting for you to type another command. The shell doesn't know anything about the 2 that was displayed while you were at the command prompt -- that's output, not part of your input.
This is generally not an issue, because you don't usually use process substitution with programs that display output to the user interactively.

Bash completion sometimes messes up my terminal when the completion function reads a file

So I've been having a problem with some cli programs. Sometimes when I kill the running process with Ctrl+C, it leaves the terminal in a weird state (e.g. echo is turned off). Now that is to be expected for many cases, as killing a process does not give it a chance to restore the terminal's state. But I've discovered that for many other cases, bash completion is the culprit. As an example, try the following:
Start a new bash session as follows: bash --norc to ensure that no completions are loaded.
Define a completion function: _completion_test() { grep -q foo /dev/null; return 1; }.
Define a completion that uses the above function: complete -F _completion_test rlwrap.
Type exactly the following: r l w r a p Space c a t Tab Enter (i.e. rlwrap cat followed by a Tab and then by an Enter).
Wait for a second and then kill the process with Ctrl+C.
The terminal's echo should now have been turned off: if you type any character, it will not be echoed by the terminal.
What is really weird is that if I remove the seemingly harmless grep -q foo /dev/null from the completion function, everything works correctly. In fact, adding a grep -q foo /dev/null (or something even simpler, such as cat /dev/null) to any completion function installed on my system causes the same issue. I have also reproduced the problem with programs that don't use readline and without Ctrl+C (e.g. find /var followed by Tab, then | head, with the above completion defined for find).
Why does this happen?
Edit: Just to clarify, the above is a contrived example. In reality, what I am trying to do, is more like this:
_completion_test() {
    if grep -q "$1" /some/file; then
        : # do something
    else
        : # do something else
    fi
}
For a more concrete example, try the following:
_completion_test() {
    if grep -q foo /dev/null; then
        COMPREPLY=(cats)
    else
        return 1
    fi
}
But the mere fact that I am calling grep, causes the problem. I don't see why I can't call grep in this case.
Well, the answer to this is very simple; it's a bug:
This happens when a programmable completion function calls an external command during the execution of a completion function. Bash saves the tty state after every successful job completes, so it can restore it if a job is killed by a signal and leaves the terminal in an undesired state. In this case, we need to suppress that if the job that completes is run during programmable completion, since the terminal settings at that time are as readline sets them for line editing. This fix will be in the release version of bash-4.4.
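A quick way to check whether your shell already includes that fix:
echo "$BASH_VERSION"   # the fix shipped with the bash-4.4 release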
You're simply implementing the completion function incorrectly. See the manual:
-F function
    The shell function function is executed in the current shell environment.
    When it is executed, $1 is the name of the command whose arguments are
    being completed, $2 is the word being completed, and $3 is the word
    preceding the word being completed, as described above (see Programmable
    Completion). When it finishes, the possible completions are retrieved
    from the value of the COMPREPLY array variable.
For example, the following implementation:
_completion_test() { COMPREPLY=($(cat /dev/null)); return 1; }
doesn't break the terminal.
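A slightly fuller sketch that actually uses the completion arguments; mycmd and the word file are placeholders:
_file_based_completion() {
    local cur=$2    # $2 is the word currently being completed
    # take candidates from a word list, filtered by the current prefix
    COMPREPLY=( $(compgen -W "$(grep -- "^$cur" /usr/share/dict/words 2>/dev/null)" -- "$cur") )
}
complete -F _file_based_completion mycmd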
Regarding your original question of why your completion function breaks the terminal: I played a little with strace and saw that there are ioctl calls with an -echo argument. I assume that when you terminate with Ctrl+C, the ioctl with the echo argument just isn't called to restore the original state. Typing stty echo will bring the echo back.
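If your terminal is already in that state, you can restore it by hand:
stty echo   # turn echoing back on
stty sane   # or reset a larger set of terminal settings to sensible defaults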

Execute command when vim filter command is interrupted

I use filter commands in vim to call bash scripts that I sometimes need to interrupt. I do this by hitting Ctrl+C in the vim window. The bash script then terminates (at least vim stops the filter command), but the text I passed to the filter command (usually a visual selection) will be missing. I would like vim to return to the state before the filter command if I interrupt execution with Ctrl+C or the bash script finishes with an exit status other than 0. Note that I know I can press u to undo, but I would like to modify the filter command to do this, since I could forget to press u and lose the text without noticing.
You can set signal and/or exit handlers in bash. man bash, /trap.*sigspec
Something like:
trap "your_commands" SIGINT
my_program
To make it "preserve" the text, you probably need something like this:
TIN=$(mktemp)
TOUT=$(mktemp)
cat > "$TIN"                    # save the text being filtered first
trap 'cat "$TIN"; rm -f "$TIN" "$TOUT"' EXIT
do_something < "$TIN" > "$TOUT" || cp "$TIN" "$TOUT"
mv "$TOUT" "$TIN"               # the EXIT trap prints whichever version won
I checked it in my Vim and it seems to work (tested with sed and sleep as do_something).
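To use it from Vim, save the snippet as an executable script and call it as the filter; a sketch, with an illustrative path:
chmod +x ~/bin/safe_filter.sh   # the wrapper above, with your real command as do_something
# then, in Vim, filter a visual selection through it:
#   :'<,'>!~/bin/safe_filter.sh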
