bash script stdin not detected clarification - linux

This kind of got me confused.
This is my bash script:
filename: reader.sh
READ=$(cat)
echo "$READ"
So the first line reads stdin and the second line prints it.
I do get that when I open a terminal and press keys on my keyboard, the characters show up in the terminal because the shell's stdin and stdout are connected to e.g. /dev/pts/0, so that file acts as both stdin and stdout.
Then, when the tty driver sees a return, the shell takes the first word of the command line as the utility, parses the rest of the command line, and hands the program being called a list of arguments so it can use them.
Why can the above bash script print the contents of a file when stdin is redirected, e.g. ./reader.sh < otherfile, but not with just ./reader.sh? In the second case I would expect stdin to be read from /dev/pts/0, since that is also just stdin.
Is it because the /dev/pts/0 file gets emptied when the arguments are parsed into a list?

When you use
./reader.sh < otherfile
stdin in the script is connected to the file, not /dev/pts/0. cat inherits this stdin, so it reads from the file.
With just ./reader.sh, stdin stays connected to the terminal, so cat simply waits for you to type and keeps reading until it sees end-of-file (Ctrl-D); only then does the echo print what you typed. Nothing gets "emptied"; the terminal just has no data until you provide some.
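For example, a quick session might look like this (assuming reader.sh contains the corrected script above; the contents of otherfile are made up for illustration):
$ echo "hello from otherfile" > otherfile
$ ./reader.sh < otherfile
hello from otherfile
$ ./reader.sh
typed by hand
typed by hand
In the second run, the first "typed by hand" is what you type (finishing with Ctrl-D), and the second is the echo printing it back.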

Linux Shell script executed but not return to command prompt

I have a script that runs one of the following line
sudo -u $USER $SUDOCMD &>>stdout.log
The sudo command is a realtime process that prints out lots of stuff to the console.
Each time the script runs, it does not return to the command prompt; you have to press Enter or Ctrl+C to get back to it.
Is there a way to do this automatically, so that I can get a return value from the script to decide whether it ran OK or failed?
thanks.
What is probably happening here is that your script is printing binary data to the TTY rather than text to standard output/error, and this is hiding your prompt. You can for example try this:
$ PS1='\$ '
$ (printf "first line\nsecond line\r" > $(tty)) &>> output.log
The second command will result in two lines of output, the second one being "mixed in" with your prompt:
first line
$ cond line
As you can see the cursor is on the "c", but if you start typing the rest of the line is overwritten. What has happened here is the following:
You pressed Enter to run the command, so the cursor moved a line down.
The tty command prints the path to the terminal file, something like "/dev/pts/1". Writing to this file means that the output does not go to standard output (which is usually linked to the terminal) but directly to the terminal.
The subshell (similar to running the command in a shell script) ensures that the first redirect isn't overridden by the second one. So the printf output goes directly to the terminal, and nothing goes to the output log.
The terminal now proceeds to print the printf output, which ends in a carriage return. Carriage return moves the cursor to the start of the line you've already written to, so that is where your prompt appears.
By the way:
&>> redirects both standard output and standard error, contrary to your filename.
Use More Quotes™
I would recommend reading up on how to put a command in a variable; a common approach using a bash array is sketched below.
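A minimal sketch of the array approach, assuming you control how the command is assembled (the program name and options here are hypothetical):
#!/bin/bash
# build the command as an array so arguments containing spaces survive intact
sudocmd=(/usr/local/bin/myrealtimeapp --verbose --config "/etc/my app.conf")
sudo -u "$USER" "${sudocmd[@]}" &>> stdout.log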

Pass File Input and Stdin to gdb

So I want to run a program in gdb with the contents of a file fed to it as input. Then, when EOF is hit, I want to be able to enter user input again. For a normal program in a terminal I can do this with the following command:
(cat input.txt; cat) | ./program
In gdb I can redirect the file to stdin like this, but after the end of the file is reached it behaves as if newlines were being entered forever.
(gdb) run < input.txt
It is almost as if stdin was not passed back to the program, similar to what happens if I simply do
(cat input.txt) | ./program
without the second cat. Is this even possible to do in gdb?
You can run the program in one console and attach to it with gdb from another one while it is waiting for input. That way you can enter the program's input in the first console and debug it in the second.
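A rough sketch of that workflow (it assumes a single instance of ./program is running and that you are allowed to attach to it):
# console 1: feed the file first, then keep stdin open for interactive input
(cat input.txt; cat) | ./program

# console 2: attach gdb to the running process while it waits for input
gdb -p "$(pgrep -n -f ./program)"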

automation input to read command in ubuntu

I want to give input to one shell script from another shell script
#!/bin/bash
echo "enter y/n"
read r
echo $r
I am sending input using
echo -e 'y' > /proc/10840/fd/1
But it only shows up on the console; the read command does not take it as input.
The stdin of the script is bound to its terminal, so you cannot write to it from outside. You can use a FIFO for this. The general idea (sketched after the reference below) is:
Script starts and creates a FIFO (or the FIFO can be created beforehand from the command line).
Script opens the FIFO for reading and reads the data in a loop.
From outside you write to the FIFO, and the written content is read by the script in its loop.
Reference: man fifo : http://man7.org/linux/man-pages/man7/fifo.7.html
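A minimal sketch of that idea (the FIFO path /tmp/answer.fifo is made up for illustration):
#!/bin/bash
# reader side: create the FIFO if needed, then read answers from it in a loop
fifo=/tmp/answer.fifo
[ -p "$fifo" ] || mkfifo "$fifo"
while read -r r < "$fifo"; do
    echo "got: $r"
done
From another shell you can then feed it with:
echo y > /tmp/answer.fifo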

How to show full output on linux shell?

I have a program that runs and shows a GUI window. It also prints a lot of things on the shell. I need to view the first thing printed and the last thing printed. The problem is that when the program terminates, if I scroll to the top of the window, the stuff printed when it began is gone; the scrollback now starts with output from the middle of the run, so I can't view the first thing printed.
I also tried redirecting with > out.txt, but the file only gets closed and becomes readable when I manually close the GUI window. And if the output goes to a file, nothing gets printed on the screen, so I have no way to know whether the program has finished. I can't modify any of the code either.
Is there a way I can see the whole list of text printed on the shell?
Thanks
You can just use tee command to get output/error in a file as well on terminal:
your-command |& tee out.log
Just keep in mind that when a program's output goes to a pipe it is typically block-buffered (around 4 KiB) rather than line-buffered, so lines may appear in chunks.
When the output of a program goes to your terminal window, the program generally flushes its output after each newline. This is why you see the output interactively.
When you redirect the output of the program to out.txt, it only flushes its output when its internal buffer is full, which is probably after every 8KiB of output. This is why you don't see anything in the file right away, and you don't see the last things printed by the program until it exits (and flushes its last, partially-full buffer).
You can trick a program into thinking it's sending its output to a terminal using the script command:
script -q -f -c myprogram out.txt
This script command runs myprogram connected to a newly-allocated “pseudo-terminal” (or pty for short). This tricks myprogram into thinking it's talking to a terminal, so it flushes its output on every newline. The script command copies myprogram's output to your terminal window and to the file out.txt.
Note that script will write a header line to out.txt. I can't find a way to disable that on my test Linux system.
In the example above, I assumed your program takes no arguments. If it does, you either need to put the program and arguments in quotes:
script -q -f -c 'myprogram arg1 arg2 arg3' out.txt
Or put the program command line in a shell script and pass that shell script to the script command.
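For example, a hypothetical wrapper (myprogram and its arguments are placeholders):
#!/bin/sh
# wrapper.sh: runs the real program with its arguments
exec myprogram arg1 arg2 arg3
Make it executable (chmod +x wrapper.sh) and run:
script -q -f -c ./wrapper.sh out.txt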

Need explanations for Linux bash builtin exec command behavior

From Bash Reference Manual I get the following about exec bash builtin command:
If command is supplied, it replaces the shell without creating a new process.
Now I have the following bash script:
#!/bin/bash
exec ls;
echo 123;
exit 0
This executed, I got this:
cleanup.sh ex1.bash file.bash file.bash~ output.log
(files from the current directory)
Now, if I have this script:
#!/bin/bash
exec ls | cat
echo 123
exit 0
I get the following output:
cleanup.sh
ex1.bash
file.bash
file.bash~
output.log
123
My question is:
If exec replaces the shell without creating a new process, why is echo 123 printed when I add | cat, but not without it? I would be happy if someone could explain the logic of this behavior.
Thanks.
EDIT:
After @torek's response, I found an even harder to explain behavior:
1. exec ls>out creates the out file and puts the result of ls in it;
2. exec ls>out1 ls>out2 creates both files but puts no result in either. If the command works as suggested, I would expect command 2 to give the same result as command 1 (in fact, I would expect it not to create out2 at all).
In this particular case, you have the exec in a pipeline. In order to execute a series of pipeline commands, the shell must initially fork, making a sub-shell. (Specifically it has to create the pipe, then fork, so that everything run "on the left" of the pipe can have its output sent to whatever is "on the right" of the pipe.)
To see that this is in fact what is happening, compare:
{ ls; echo this too; } | cat
with:
{ exec ls; echo this too; } | cat
The former runs ls without leaving the sub-shell, so that this sub-shell is therefore still around to run the echo. The latter runs ls by leaving the sub-shell, which is therefore no longer there to do the echo, and this too is not printed.
(The use of curly-braces { cmd1; cmd2; } normally suppresses the sub-shell fork action that you get with parentheses (cmd1; cmd2), but in the case of a pipe, the fork is "forced", as it were.)
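A quick transcript showing the difference (the file names shown are whatever happens to be in the directory; demo.sh is just an example):
$ { ls; echo this too; } | cat
demo.sh
this too
$ { exec ls; echo this too; } | cat
demo.sh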
Redirection of the current shell happens only if there is "nothing to run", as it were, after the word exec. Thus, e.g., exec >stdout 4<input 5>>append modifies the current shell, but exec foo >stdout 4<input 5>>append tries to exec command foo. [Note: this is not strictly accurate; see addendum.]
Interestingly, in an interactive shell, after exec foo >output fails because there is no command foo, the shell sticks around, but stdout remains redirected to file output. (You can recover with exec >/dev/tty. In a script, the failure to exec foo terminates the script.)
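A small sketch of the redirection-only form inside a script (the file name out.log is arbitrary):
#!/bin/bash
exec > out.log               # no command: redirects the current shell's stdout
echo "this line ends up in out.log"
echo "and so does this one"  # the script keeps running; nothing appears on the terminal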
With a tip of the hat to @Pumbaa80, here's something even more illustrative:
#! /bin/bash
shopt -s execfail
exec ls | cat -E
echo this goes to stdout
echo this goes to stderr 1>&2
(note: cat -E is simplified down from my usual cat -vET, which is my handy go-to for "let me see non-printing characters in a recognizable way"). When this script is run, the output from ls has cat -E applied (on Linux this makes end-of-line visible as a $ sign), but the output sent to stdout and stderr (on the remaining two lines) is not redirected. Change the | cat -E to > out and, after the script runs, observe the contents of file out: the final two echos are not in there.
Now change the ls to foo (or some other command that will not be found) and run the script again. This time the output is:
$ ./demo.sh
./demo.sh: line 3: exec: foo: not found
this goes to stderr
and the file out now has the contents produced by the first echo line.
This makes what exec "really does" as obvious as possible (but no more obvious, as Albert Einstein did not put it :-) ).
Normally, when the shell goes to execute a "simple command" (see the manual page for the precise definition, but this specifically excludes the commands in a "pipeline"), it prepares any I/O redirection operations specified with <, >, and so on by opening the files needed. Then the shell invokes fork (or some equivalent but more-efficient variant like vfork or clone depending on underlying OS, configuration, etc), and, in the child process, rearranges the open file descriptors (using dup2 calls or equivalent) to achieve the desired final arrangements: > out moves the open descriptor to fd 1—stdout—while 6> out moves the open descriptor to fd 6.
If you specify the exec keyword, though, the shell suppresses the fork step. It does all the file opening and file-descriptor-rearranging as usual, but this time, it affects any and all subsequent commands. Finally, having done all the redirections, the shell attempts to execve() (in the system-call sense) the command, if there is one. If there is no command, or if the execve() call fails and the shell is supposed to continue running (is interactive or you have set execfail), the shell soldiers on. If the execve() succeeds, the shell no longer exists, having been replaced by the new command. If execfail is unset and the shell is not interactive, the shell exits.
(There's also the added complication of the command_not_found_handle shell function: bash's exec seems to suppress running it, based on test results. The exec keyword in general makes the shell not look at its own functions, i.e., if you have a shell function f, running f as a simple command runs the shell function, as does (f) which runs it in a sub-shell, but running (exec f) skips over it.)
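A short illustration of that last point (the function name f is arbitrary; the exact error message may vary by bash version):
$ f() { echo from the function; }
$ (f)                # the sub-shell still sees the shell function
from the function
$ (exec f)           # exec skips shell functions and looks for an external command
bash: exec: f: not found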
As for why ls>out1 ls>out2 creates two files (with or without an exec), this is simple enough: the shell opens each redirection, and then uses dup2 to move the file descriptors. If you have two ordinary > redirects, the shell opens both, moves the first one to fd 1 (stdout), then moves the second one to fd 1 (stdout again), closing the first in the process. Finally, it runs ls ls, because that's what's left after removing the >out1 >out2. As long as there is no file named ls, the ls command complains to stderr, and writes nothing to stdout.
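And a quick transcript of the two-redirect case (exact messages and wc formatting may vary by system):
$ rm -f out1 out2
$ ls>out1 ls>out2
ls: cannot access 'ls': No such file or directory
$ wc -c out1 out2
0 out1
0 out2
0 total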
