Simulating user input in bash script while executing bin file - linux

I'd like to ask what the most common way is to pass user input to an executable program, in particular on Linux.
I tried to invoke a bash script that contains the following lines:
BIN_FILE=<filepath>
FLAG=<flag>\n
${BIN_FILE}
echo -ne ${FLAG}
[...]
but since the executed program runs as a separate foreground process, the echo line of my script is not processed until the program terminates.
In advance, thank you for your answers! BR -M
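A minimal sketch of the usual fix, keeping the placeholders from the question: connect the input to the program's stdin with a pipe, so it is available while the program runs, instead of echoing it after the program has already started.
#!/bin/bash
BIN_FILE=<filepath>
FLAG=<flag>
# Feed the flag to the program's stdin via a pipe.
echo -ne "${FLAG}\n" | "${BIN_FILE}"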

Related

How to capture error messages from a program that fails only outside the terminal?

On a Linux server, I have a script that works fine when I start it from the terminal, but fails when it is started and then detached by another process. So there is probably a difference in the script's environment that needs fixing.
The trouble is, the other process integrating that script does not provide access to its error messages when the script fails. What is an easy (and ideally generic) way to see the output of such a script when it's failing?
Let's assume I have no easy way to change the code of the process calling this script. The failure happens right at the start of the script's run, so there is not enough time to manually attach to it with strace to see its output.
(The specifics should not matter, but for what it's worth: the failing script is the backup script of Discourse, a widespread open source forum software. Discourse and this script are written in Ruby.)
The idea is to substitute the original script with a wrapper that calls the original script and saves its stdout and stderr to files. The wrapper may look like this:
#!/bin/bash
exec /path/to/original/script "$@" 1> >(tee /tmp/out.log) 2> >(tee /tmp/err.log >&2)
1> >(tee /tmp/out.log) redirects stdout to the input of tee /tmp/out.log running in a subshell (process substitution); tee /tmp/out.log passes it through to stdout but saves a copy to the file.
2> >(tee /tmp/err.log >&2) redirects stderr to the input of tee /tmp/err.log running in a subshell; tee /tmp/err.log >&2 passes it through to stderr but saves a copy to the file.
If the script is invoked multiple times, you may want to append stdout and stderr to the files; use tee -a in that case.
The remaining problem is how to force the caller to execute the wrapper script instead of the original one.
If the caller invokes the script in a way that makes it searched for in PATH, you can put the wrapper script in a separate directory and provide a modified PATH to the caller. For example, if the script's name is script, put the wrapper at /some/dir/script and run the caller as
$ PATH="/some/dir:$PATH" caller
The /path/to/original/script in the wrapper must be absolute.
If the caller invokes the script from a specific path, then you can rename the original script, e.g. to original-script, and name the wrapper script. In this case the wrapper should call /path/to/original/original-script.
Another problem may arise if the script behaves differently depending on the name it is called by. In that case exec -a ... may be needed.
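A sketch of such a wrapper, combining the pieces above (the name script and the paths are the example ones from this answer):
#!/bin/bash
# Call the renamed original under its original name (exec -a) so that any
# name-dependent behaviour is preserved, and log both output streams.
exec -a script /path/to/original/original-script "$@" \
    1> >(tee -a /tmp/out.log) \
    2> >(tee -a /tmp/err.log >&2)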
You can use a bash script that (1) busy-waits until it sees the targeted process, and then (2) immediately attaches to it with strace and prints its output to the terminal.
#!/bin/sh
# Adapt to a regex that matches only your target process' full command.
name_pattern="bin/ruby.*spawn_backup_restore.rb"
# Wait for a process to start, based on its name, and capture its PID.
# Inspiration and details: https://unix.stackexchange.com/a/410075
pid=
while [ -z "$pid" ] ; do
pid="$(pgrep --full "$name_pattern" | head -n 1)"
# Set delay for next check to 1ms to try capturing all output.
# Remove completely if this is not enough to capture from the start.
sleep 0.001
done
echo "target process has started, pid is $pid"
# Print all stdout and stderr output of the process we found.
# Source and explanations: https://unix.stackexchange.com/a/58601
strace -p "$pid" -s 9999 -e write

Creating a file using exec in Bash

I am trying to get a shell program running that I have been left with. The program starts as shown below.
#!/usr/bin/env bash
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>$1/logs/$(date '+%Y%m%d_%H%M%S')_start.log 2>&1
The program is a .sh whose input ($1) is the output folder, so it looks in /output/logs/.... The program fails on the last exec line. This is my first time using bash and shell scripting, but I think this line essentially redirects all the standard output to the log file?
The program's error says
cannot create /output/logs/20230203_12345_start.log: directory non-existent
Should this line also create the log file if it is non-existent? I don't see how you could create the .log file first, as otherwise you would get the seconds part of the name wrong.
It's not complaining about the file name; it says directory non-existent. I/O redirection can't make directories. Create the directory first with mkdir -p "$1"/logs, then your line should work.
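Put together, the start of the script could then look like this (a sketch based on the lines above):
#!/usr/bin/env bash
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
mkdir -p "$1"/logs   # create the log directory before redirecting into it
exec 1>"$1"/logs/$(date '+%Y%m%d_%H%M%S')_start.log 2>&1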

Automating input to the read command in Ubuntu

I want to give input to one shell script from another shell script
#!/bin/bash
echo "enter y/n"
read r
echo $r
I am sending input using
echo -e 'y' > /proc/10840/fd/1
But it is only displayed on the console; it is not taken as input by the read command.
The stdin of the script is bound to its terminal, so you cannot write to it from outside. You can use FIFOs (named pipes) for this. The general idea is:
The script starts and creates a FIFO (or the FIFO can be created beforehand from the command line).
The script opens the FIFO for reading and reads the data in a loop.
From outside you write to the FIFO; the written content will then be read by the script in its loop, as in the sketch after the reference below.
Reference: man fifo : http://man7.org/linux/man-pages/man7/fifo.7.html
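A minimal sketch of this idea (the FIFO path /tmp/answer is an arbitrary choice):
#!/bin/bash
fifo=/tmp/answer
[ -p "$fifo" ] || mkfifo "$fifo"   # create the FIFO if it does not exist yet
echo "enter y/n"
read -r r < "$fifo"                # blocks until another process writes a line
echo "$r"
From another shell, feed it with: echo y > /tmp/answer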

Bash here document - suppress printing code to the screen?

I am writing a script to become a user (let's call it genomics) via the command "sudo /etc/bgenomics" (this is set up by our admin) and run some bash code as that user, namely run a command, catch the exit code, and take the appropriate action.
The problem is that the bash code inside the here document gets printed to the screen, which is distracting and looks really inelegant.
Here's an illustration:
#!/bin/bash
name='George'
sudo /etc/bgenomics <<Q
/bin/bash
if (( 2 == 2 )); then
echo "my name is $name"
grep zzz /etc # will return nothing and $? = 1
echo \$? # this should be 1 after the above cmd
fi
Q
The if statement is just there to show how annoying it is when printed.
Right now all of the following is printed to the screen:
Script started, file is /var/tmp/genomicstraces/c060644.20140617143003.11536
Script done, file is /var/tmp/genomicstraces/c060644.20140617143003.11536
brainiac-login-02$brainiac-login-02$/bin/bash
bash-3.2$ if (( 2 == 2 )); then
> echo "my name is George"
> grep zzz /etc # will return nothing and 0 = 1
> echo $? # this should be 1 after the above cmd
> fi
my name is George
1
The only parts I want to see are "my name is George" and "1". Can it be done?
Is another process calling this script? Output shouldn't normally appear unless bash is called with '-x'. Try modifying the first line of your script if you cannot disable echo in the calling process:
#!/bin/bash +x
You may also want to remove the call to /bin/bash after the sudo command unless you really wish to start another shell within your shell.
The here document supplies input to the bgenomics script via its standard input. What happens to that input is up to that script.
If you want the script to print some of its input, and not print some of its input, you have to modify the script.
If bgenomics is actually a wrapper for an interactive shell session (as it seems to be, judging by the Script started and Script done traces), then here documents are not the best way to feed input into it.
A good way is to use the expect utility, which controls interactive programs via a pseudo-terminal device and provides a scripting language with a great deal of control. expect can suppress all unwanted input from an interactive program. It can look for specific outputs from the program, and supply responses. For instance it can look for a login: string coming from the interactive session, and send a user name.
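A rough sketch of what that could look like here (the spawned command is the one from the question; the prompt pattern and the session behaviour are assumptions, not tested against the real bgenomics):
#!/bin/bash
expect <<'EOF'
log_user 0                  ;# do not echo the interactive session to the screen
spawn sudo /etc/bgenomics
expect -re {\$ $}           ;# wait for a "bash-3.2$ "-style prompt (assumption)
send "grep zzz /etc\r"
expect -re {\$ $}
send "echo RC=$?\r"
expect -re {RC=(\d+)}       ;# pick out only the exit status
puts $expect_out(1,string)
send "exit\r"
expect eof
EOF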
The bgenomics program contains an invocation of the script command to record what the session did. Talk to the person in charge of it to understand their intentions; until you understand the purpose of bgenomics, you risk screwing up what its author is trying to do. For reference, script behaves like this:
$ script /tmp/junk.txt
Script started, file is /tmp/junk.txt
$ date # this is a child shell of the script command
Tue Jun 17 21:04:14 EDT 2014
$ exit
Script done, file is /tmp/junk.txt

Linux All Output to a File

Is there any way to tell a Linux system to put all output (stdout, stderr) into a file?
Without using redirection, pipes, or modifying how the scripts get called.
Just tell Linux to use a file for output.
for example:
script test1.sh:
#!/bin/bash
echo "Testing 123 "
If I run it like ./test1.sh (without redirection or a pipe),
I'd like to see "Testing 123" in a file (/tmp/linux_output).
Problem: in the system, a binary calls a script, and this script calls many other scripts. It is not possible to modify each call, so if I could make Linux put the output into a file, I could review the logs.
#!/bin/bash
exec >file 2>&1
echo "Testing 123 "
You can read more about exec in the bash documentation.
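With that change, running the script prints nothing to the terminal; the output lands in the file instead:
$ ./test1.sh
$ cat file
Testing 123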
If you are running the program from a terminal, you can use the command script.
It will open a sub-shell; do what you need to do.
It will copy all output that goes to the terminal into a file. When you are done, exit the shell with ^D or exit.
This does not use redirection or pipes.
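For example, a session could look like this:
$ script /tmp/linux_output
Script started, file is /tmp/linux_output
$ ./test1.sh
Testing 123
$ exit
Script done, file is /tmp/linux_output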
You could set your terminal's scrollback buffer to a large number of lines and then see all the output from your commands in the buffer; depending on your terminal window and its menu options, there may also be an option to capture terminal I/O to a file.
Your requirement, taken literally, is impractical, because it is based on a slight misunderstanding: fundamentally, to get the output to go into a file, you have to change something to direct it there, which would violate your literal constraint.
But the practical problem is solvable, because unless explicitly counteracted in the child, the output redirections configured in a parent process are inherited. So you only have to set up the redirection once, using either a shell or a custom launcher program or intermediary; after that it will be inherited.
So, for example:
cat > test.sh
#!/bin/sh
echo "hello on stdout"
rm nosuchfile
./test2.sh
And a child script for it to call
cat > test2.sh
#!/bin/sh
echo "hello on stdout from script 2"
rm thisfileisnteither
./nonexistantscript.sh
Run the first script redirecting both stdout and stderr (bash version; you can do this in many ways, such as by writing a C program that redirects its outputs and then exec()'s your real program):
./test.sh &> logfile
Now examine the file and see results from stdout and stderr of both parent and child.
cat logfile
hello on stdout
rm: nosuchfile: No such file or directory
hello on stdout from script 2
rm: thisfileisnteither: No such file or directory
./test2.sh: line 4: ./nonexistantscript.sh: No such file or directory
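The same inheritance means you can package the redirection once into a small launcher script that then exec()'s the real program (a sketch; the log path /tmp/linux_output is the one from the question):
#!/bin/bash
# launch: send all further stdout/stderr to the log, then become the real command.
exec >>/tmp/linux_output 2>&1
exec "$@"
Invoked as ./launch ./test1.sh, everything the script and its children print ends up in /tmp/linux_output.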
Of course, if you really dislike this, you can always modify the kernel - but again, that is changing something (and a very ungainly solution, too).
