Automating input to the read command in Ubuntu / Linux

I want to give input to one shell script from another shell script
#!/bin/bash
echo "enter y/n"
read r
echo $r
I am sending input using
echo -e 'y' > /proc/10840/fd/1
But it only displays on the console; it is not taken as input by the read command.

The stdin of the script is bound to its terminal, so you cannot write to it from outside. You can use a FIFO for this. The general idea is:
Script starts and creates a FIFO (or the FIFO can be created beforehand from the command line).
Script opens the FIFO for reading and reads the data in a loop.
From outside you can write to the FIFO, and the written content will be read by the script in its loop.
Reference: man fifo : http://man7.org/linux/man-pages/man7/fifo.7.html
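For example, a minimal sketch of that idea (the path /tmp/answer_fifo is just an illustrative name, not something from the question):
#!/bin/bash
fifo=/tmp/answer_fifo
[ -p "$fifo" ] || mkfifo "$fifo"
echo "enter y/n"
read -r r < "$fifo"   # blocks until another process writes to the FIFO
echo "$r"
From another shell (or script) you would then send the input with, e.g., echo y > /tmp/answer_fifo.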

Related

bash script stdin not detected clarification

This kind of got me confused.
This is my bash script:
filename: reader.sh
READ=$(cat)
echo "$READ"
So the first line reads stdin and the second line prints it.
Now, I get that when I start my terminal and press keys on my keyboard, they pop up in the terminal because the shell connects stdin and stdout to e.g. /dev/pts/0, meaning that file is used as both stdin and stdout.
Afterwards the shell (when the tty driver sees a return) saves the first word of the command line, which is the utility, looks at the rest of the command line, and passes a sort of array or list of arguments to the program being called so it can use them.
Why is it that the above bash script can print the contents of a file through a redirected stdin, e.g. ./reader.sh < otherfile, but not with just ./reader.sh? I would expect in the second case that stdin would be read from /dev/pts/0, since that's also just stdin.
Is it because when the arguments are parsed into a list, the /dev/pts/0 file gets emptied?
When you use
./reader.sh < otherfile
stdin in the script is connected to the file, not /dev/pts/0. cat inherits this stdin, so it reads from the file. Without the redirection, stdin is still /dev/pts/0, so cat reads what you type at the terminal until you send end-of-file with Ctrl-D; nothing gets emptied.
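For instance, an illustrative session with the script above (assignment fixed to READ=$(cat)):
$ printf 'hello\n' > otherfile
$ ./reader.sh < otherfile     # stdin is otherfile; cat reads it until EOF
hello
$ ./reader.sh                 # stdin is still the terminal (/dev/pts/N)
hi there                      # <- typed by the user, ended with Ctrl-D
hi there                      # <- printed back by the script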

Simulating user input in bash script while executing bin file

I'd like to ask what the most common way is to pass user input to an executable program, in particular on Linux.
I tried to invoke a bash script that contains the following lines:
BIN_FILE=<filepath>
FLAG=<flag>\n
${BIN_FILE}
echo -ne ${FLAG}
[...]
but since the executed program runs as a separate process, the echo line of my script is not processed until the program terminates.
Thank you in advance for your answers! BR -M
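For what it's worth, a common pattern in this situation (not from this thread; the path and value below are hypothetical stand-ins for the placeholders above) is to connect the input to the program's stdin instead of echoing it afterwards, e.g. with a pipe or a here-string:
#!/bin/bash
BIN_FILE=/path/to/program   # hypothetical path
FLAG='y'                    # hypothetical input
# feed the input to the program's stdin rather than printing it after it exits
printf '%s\n' "${FLAG}" | "${BIN_FILE}"
# or, equivalently, with a bash here-string
"${BIN_FILE}" <<< "${FLAG}"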

Shell script hangs when it should read input only when run from node.js

I have a question. When I use the command line to execute a bash file, I can get it to work properly. It gets the input and exports it to an environment variable.
How can I make it not hang, and execute the block in the prompt file?
prompt file
#!/bin/bash
echo "Enter your "
read input
echo "You said: $input"
Node.js file:
This is my node file that calls the prompt file:
checkIntegration(result)
  .then(result => {
    console.log(result, '123');
    shell.exec('. prompt');
  });
When I run it in my shell, I can enter information which is then printed:
$ . prompt
Enter your
Hello
You said: Hello
However, when I run it from node, I see the prompt but it won't accept any input.
How can I make my program read user input from node's terminal?
Update: my folder structure is as follows.
checker.js
const {exec} = require('child_process');
var shell = require('shelljs');
function getPrompt() {
  shell.exec('. prompt');
}
getPrompt()
tokens.txt
GITLAB_URL
GITLAB_TOKEN
GITLAB_CHANNEL
GITLAB_SHOW_COMMITS_LIST
GITLAB_DEBUG
GITLAB_BRANCHES
GITLAB_SHOW_MERGE_DESCRIPTION
SLACK_TOKEN
prompt
#!/bin/bash
SOURCE="$PWD"
SETTINGS_FILE="$SOURCE/tokens.txt"
SETTINGS=`cat "$SETTINGS_FILE"`
for i in ${SETTINGS[@]}
do
    echo "Enter your " $i
    read input
    if [[ ! -z "$input" ]]; then
        export "$i"="$input"
    fi
done
Your read is trying to read from a pipe connected to node, not from the TTY; since Node never writes to that pipe, the read hangs. To read from the TTY instead, modify your script as follows:
#!/bin/bash
# IMPORTANT: Ignore the stdin we were passed, and read from the TTY instead
exec </dev/tty
# Use a BashFAQ #1 "while read" loop
settings_file="$PWD/tokens.txt"
while read -r setting <&3; do
printf 'Enter your %s: ' "$setting" >&2 # prompt w/o trailing newline
IFS= read -r "$setting" # read directly to named variable
export "$setting" # export that variable to the environment
done 3<"$settings_file"
echo "Loop is finished; printing current environment" >&2
env
Note that:
We're using exec </dev/tty at the top of the script to re-open the script's stdin so it reads directly from the TTY.
We're using FD 3 for the settings file to keep it distinct from stdin, so the read -r "$setting" still reads from the TTY (as reopened with the redirection above), whereas read -r setting <&3 reads from the file.
We're using a BashFAQ #1 while read loop to iterate over input. This is less buggy than trying to string-split a variable to get a list of names; see Why you don't read lines with for.
We're running env to provide output with evidence of the changes which we made to the environment -- which is important because of the below.
While this works to read input from the TTY even when your shell is launched from Node, all environment variable changes made with the above code will be lost as soon as the shell exits -- which it does before the shell.exec() call returns. If you want to change environment variables for the node process itself, you need to do that using node primitives.
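To illustrate that general point with plain bash (a minimal sketch, independent of Node):
$ bash -c 'export GITLAB_URL=http://example.com/'   # a child shell sets the variable...
$ echo "${GITLAB_URL:-unset}"                       # ...but the parent never sees the change
unset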
You don't need bash to do this at all:
# This is node.js code to set the GITLAB_URL environment variable for the Node process
# (and any future children it may launch).
process.env['GITLAB_URL']='http://example.com/'
I'm not familiar with node.js, but reading some documentation, it looks like the command should be child_process.exec('. prompt'). This will only work if you are in the same directory as the prompt script.
Alternatively, you can try using another shell by changing the shebang to #!/bin/sh.

Stream specific numbered Bash file descriptor into variable

I am trying to stream a specific numbered file descriptor into a variable in Bash. I can do this from normal standard input using the function below, but how do I do it from a specific file descriptor? I would need to direct the FD into the sub-shell if I used the same approach. I could always do it by reading line by line, but if I can do it in a continuous stream then that would be massively preferable.
The function I have is:
streamStdInTo ()
{
    local STORE_INvar="${1}" ; shift
    printf -v "${STORE_INvar}" '%s' "$( cat - )"
}
Yes, I know that this wouldn't work normally, as a variable set at the end of a pipeline would be lost (due to its execution in a sub-shell). However, either with the Bash 4 set +m ; shopt -s lastpipe method of executing the end of a pipeline in the same shell as the start, or by directing into this via a different file descriptor, I am hoping to be able to use it.
So, my question is: how do I use the above, but with file descriptors other than the normal ones?
It's not entirely clear what you mean, but perhaps you are looking for something like:
cat - <&4 # read from fd 4
Or, just call your current function with the redirect:
streamStdInTo foo <&4
Edit: addressing some questions from the comments, you can use a FIFO:
#!/bin/bash
trap 'rm -f "$f"' 0
f=$(mktemp)
rm "$f"
mkfifo "$f"
echo foo > "$f" &
exec 4< "$f"
cat - <&4
wait
I think there's a lot of confusion about what exactly you're trying to do. If I understand correctly the end goal here is to run a pipeline and capture the output in a variable, right? Kind of like this:
var=$(cmd1 | cmd2)
Except I guess the idea here is that the name of "$var" is stored in another variable:
varname=var
You can do an end-run around Bash's usual job control situation by using process substitution. So instead of this normal pipeline (which would work in ksh or zsh, but not in bash unless you set lastpipe):
cmd1 | cmd2 | read "$varname"
You would use this command, which is equivalent apart from how the shell handles the job:
read "$varname" < <(cmd1 | cmd2)
With process substitution, "read $varname" isn't run in a pipeline, so Bash doesn't fork to run it. (You could use your streamStdInTo() function there as well, of course)
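For example, a small sketch reusing the streamStdInTo function from the question (the pipeline itself is just illustrative):
varname=var
streamStdInTo "$varname" < <(printf '%s\n' hello world | tr a-z A-Z)
echo "${!varname}"   # prints HELLO and WORLD on separate lines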
As I understand it, you wanted to solve this problem by using numeric file descriptors:
cmd1 | cmd2 >&$fd1 &
read "$varname" <&$fd2
To create those file descriptors that connect the pipeline background job to the "read" command, what you need is called a pipe, or a fifo. These can be created without touching the file system (the shell does it all the time!) but the shell doesn't directly expose this functionality, which is why we need to resort to mkfifo to create a named pipe. A named pipe is a special file that exists on the filesystem, but the data you write to it doesn't go to the disk. It's a data queue stored in memory (a pipe). It doesn't need to stay on the filesystem after you've opened it, either, it can be deleted almost immediately:
pipedir=$(mktemp -d /tmp/pipe_maker_XXXX)
mkfifo ${pipedir}/pipe
exec {temp_fd}<>${pipedir}/pipe # Open both ends of the pipe
exec {fd1}>${pipedir}/pipe
exec {fd2}<${pipedir}/pipe
exec {temp_fd}<&- # Close the read/write FD
rm -rf ${pipedir} # Don't need the named FIFO any more
One of the difficulties in working with named pipes in the shell is that attempting to open them just for reading, or just for writing causes the call to block until something opens the other end of the pipe. You can get around that by opening one end in a background job before trying to open the other end, or by opening both ends at once as I did above.
The "{fd}<..." syntax dynamically assigns an unused file descriptor number to the variable $fd and opens the file on that file descriptor. It's been around in ksh for ages (since 1993?), but in Bash I think it only goes back to 4.1 (from 2010).

Linux All Output to a File

Is there any way to tell the Linux system to put all output (stdout, stderr) to a file?
Without using redirection, pipes, or modifying how the scripts get called.
Just tell Linux to use a file for output.
For example, script test1.sh:
#!/bin/bash
echo "Testing 123 "
If I run it like "./test1.sh" (without redirection or a pipe),
I'd like to see "Testing 123" in a file (/tmp/linux_output).
Problem: in the system, a binary makes a call to a script, and this script calls many other scripts. It is not possible to modify each call, so if I can make Linux put the output in a file, I can review the logs.
#!/bin/bash
exec >file 2>&1
echo "Testing 123 "
You can read more about exec in the bash manual.
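With that exec line at the top of the script, a run might look like this (illustrative session):
$ ./test1.sh        # nothing appears on the terminal
$ cat file
Testing 123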
If you are running the program from a terminal, you can use the command script.
It will open up a sub-shell. Do what you need to do.
It will copy all output to the terminal into a file. When you are done, exit the shell with ^D or exit.
This does not use redirection or pipes.
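For example, a typical session with script (the output path is taken from the question):
$ script /tmp/linux_output    # starts a sub-shell; everything below is captured
$ ./test1.sh
Testing 123
$ exit                        # or ^D, to stop recording
$ cat /tmp/linux_output       # the captured session, including the output above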
You could set your terminal's scrollback buffer to a large number of lines and then see all the output from your commands in the buffer - depending on your terminal window and the options in its menus, there may be an option in there to capture terminal I/O to a file.
Your requirement, if taken literally, is an impractical one, because it is based on a slight misunderstanding. Fundamentally, to get the output to go to a file, you will have to change something to direct it there - which would violate your literal constraint.
But the practical problem is solvable, because unless explicitly counteracted in the child, the output directions configured in a parent process will be inherited. So you only have to setup the redirection once, using either a shell, or a custom launcher program or intermediary. After that it will be inherited.
So, for example:
cat > test.sh
#!/bin/sh
echo "hello on stdout"
rm nosuchfile
./test2.sh
And a child script for it to call
cat > test2.sh
#!/bin/sh
echo "hello on stdout from script 2"
rm thisfileisnteither
./nonexistantscript.sh
Run the first script redirecting both stdout and stderr (bash version - and you can do this in many ways such as by writing a C program that redirects its outputs then exec()'s your real program)
./test.sh &> logfile
Now examine the file and see results from stdout and stderr of both parent and child.
cat logfile
hello on stdout
rm: nosuchfile: No such file or directory
hello on stdout from script 2
rm: thisfileisnteither: No such file or directory
./test2.sh: line 4: ./nonexistantscript.sh: No such file or directory
Of course, if you really dislike this, you can always modify the kernel - but again, that is changing something (and a very ungainly solution too).
