read command is not taking input from the terminal - linux

I don't know if this is something weird, but read is not taking input from the terminal.
The configure script, which is used in the build process, should ask the user for input to select the database type, either MySQL or Oracle (the code is below).
MYSQLLIBPATH="/usr/lib/mysql"
echo "Enter DataBase-Type 1-ORACLE, 2-MySQL (default MySQL):"
read in
echo $? >> /tmp/error.log
if test -z "$in" -o "$in" = "2"
then
    DATABASE=-DDB_MYSQL
    if true; then
        MYSQL_TRUE=
        MYSQL_FALSE='#'
    else
        MYSQL_TRUE='#'
        MYSQL_FALSE=
    fi
    echo "Enter Mysql Library Path: (eg: $MYSQLLIBPATH (default))"
    read in
    echo $? >> /tmp/error.log
    if test -n "$in"
    then
        MYSQLLIBPATH=`echo $in`
    fi
    echo "Mysql Lib path is $MYSQLLIBPATH"
else
    if false; then
        MYSQL_TRUE=
        MYSQL_FALSE='#'
    else
        MYSQL_TRUE='#'
        MYSQL_FALSE=
    fi
    DATABASE=-DDB_ORACLE
    LD_PATH=
fi
But the read command is not asking for user input; it's failing to take input from stdin.
When I checked the exit status of the command in error.log, it showed:
1
1
Could anyone tell me why read is failing to take input from stdin?
Are there any builtin variables which could block read from taking input?

Most likely read executes with standard input redirected from a file that has reached EOF. If the above is not the whole of your configure code, check that there are no input redirections. Could the code above be a part of a function which was invoked with some input from a pipe or a file? Otherwise check how configure is executed - are there any redirections?
Otherwise, the universal advice applies: try simplifying and stripping down your code until it is obvious what's happening.
BTW, it is not a good idea to make configure interactive if you want to have your program packaged for a distribution: it's not easy to control the execution of interactive programs. Consider adding support for supplying parameters through command line options.
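If the prompt really does need to stay interactive, one possible workaround (a sketch only, not part of the original configure code) is to read from the controlling terminal instead of stdin, falling back to the script's existing MySQL default when no terminal is available:

echo "Enter DataBase-Type 1-ORACLE, 2-MySQL (default MySQL):"
# Try to read from the controlling terminal even if stdin was redirected.
# If /dev/tty cannot be opened (e.g. in a fully automated build), read fails;
# 2>/dev/null comes first so the redirection error is silenced, and we fall
# back to the default value the script already treats as MySQL.
if ! read -r in 2>/dev/null < /dev/tty; then
    echo "No terminal available, using the default (MySQL)" >&2
    in=2
fi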

BASH: Check output of command for $STRING, if exists then store output as variable

Background:
I'm working on some OBDII software and I'm attempting to automate the connection process via bluetooth. I have a working script but I'd like to further automate it so it'll work on all *nix machines and not just mine (right now the bluetooth device's MAC is stored manually in the script).
My Problem:
The output of this command is...
$ hcitool scan
Scanning ...
00:18:56:68:AE:08 OBDII
I need a simple way of piping this into grep (or whatever works) and checking the output for the string "OBDII". If it sees it, it should take that same line and copy the resulting MAC into a variable, stripping all whitespace and the OBDII identifier at the end, leaving only the MAC to be used further down in the script.
What's the simplest way to get this done?
Any help is appreciated!!
There's no reason to only conditionally store the output if the operation is successful -- easier to always store it, and check whether it's empty if you want to know if a match was found.
result=$(hcitool scan | awk '/OBDII/ { print $1 }')
if [[ $result ]]; then
echo "Found a value: $result" >&2
else
echo "No result found" >&2
fi
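If you prefer the grep the question mentions, a roughly equivalent sketch (assuming the device always prints its MAC in the usual colon-separated hex form) is:

# Keep only the MAC address from the line that mentions OBDII
result=$(hcitool scan | grep 'OBDII' | grep -oE '([[:xdigit:]]{2}:){5}[[:xdigit:]]{2}')

The same emptiness check on $result applies afterwards.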

What does this if-statement from a bash script do?

I am new to bash scripting and learning through some examples. One of the examples that I saw is using an if-statement to test if a previously assigned output file is valid, like this:
if [ -n "$outputFile" ] && ! 2>/dev/null : >> $outputFile ; then
exit 1
fi
I understand what [ -n "$outputFile" ] does, but not the rest of the conditional. Can someone explain what ! 2>/dev/null : >> $outputFile means/does?
I have googled for answers, but most of the links I found were explanations of I/O redirection, which are definitely relevant but still don't make the ! : >> structure clear.
That's some oddly written code!
The : command is built into bash. It's equivalent to true; it does nothing, successfully.
: >> $outputFile
attempts to do nothing, and appends the (empty) output to $outputFile -- which has already been confirmed to be a non-empty string. The >> redirection operator will create the file if it doesn't already exist.
I/O redirections such as 2>/dev/null can appear anywhere in a simple command; they don't have to be at the end. So the stdout of the : command (which is empty) is appended to $outputFile, and any stderr output is redirected to /dev/null. Any such stderr output would be the result of a failure in the redirection, since the : command itself does nothing and won't fail to do so. I don't know why the redirection of stdout (onto the end of $outputFile) and the redirection of stderr (to /dev/null) are on opposite sides of the : command.
The ! operator is a logical "not"; it checks whether the following command succeeded, and inverts the result.
The net result, written in English-ish text is:
if "$outputFile" is set and is not an empty string, and if we don't have permission to write to it, then terminate the script with a status of 1.
In short, it tests whether we're able to write to $outputFile, and bails out if we can't.
The script is attempting to make sure $outputFile is writable in a not-so-obvious way.
: is the null command in bash; it does nothing. The fact that stderr is redirected to /dev/null is simply to suppress the permission denied error, should one occur.
If the file is not writable, then the command fails, which makes the condition true since it's negated with ! and the script exits.
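As for why 2>/dev/null appears before the : command (the part the first answer wondered about): redirections are processed left to right, so putting the stderr redirection first means that if the append itself fails, the resulting "permission denied" message is also sent to /dev/null. A sketch of the same test with both redirections after the command, preserving that order:

if [ -n "$outputFile" ] && ! : 2>/dev/null >> "$outputFile"; then
    exit 1
fi

Behaviour is unchanged: the file is still created if it doesn't exist, and the script still exits with status 1 when it can't be written.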

How to run a script using diff for a command?

I'm writing a script to test a program and I'm getting caught up at this portion:
if (("$x"==22)); then
echo "Checking for whether wrong input file is detected."
if diff ${arr[$x]} <(./compare ); then
echo Output is as expected.
else
echo Output is not as expected. Check for errors.
fi
else
if diff -q ${arr[$x]} <(./compare $i); then
echo Output is as expected.
else
echo Output is not as expected. Check for errors.
fi
fi
So what it's doing is testing my program against known output. However, for the case where I use ./compare without an argument, I want to receive an error message from my program saying that the argument is missing. The test file it uses for that case, "22" (let's call it result22.txt), has exactly the same output my program would give from just running ./compare (no arguments). However, when I run it using the script, it says that result22.txt differs from just running ./compare. I'm pretty sure I'm running the script wrong. Any ideas?
Additional information: i is a known input test file taken from an array, and x is an incremental variable counting which loop we're on. So arr[$x] just accesses the nth file from the known output files.
compare is my own comparison program.
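One guess, since no answer is recorded here: if ./compare writes its missing-argument message to stderr, the process substitution <(./compare) captures only stdout, so diff ends up comparing result22.txt against an empty stream. A sketch of the same check with stderr folded into stdout (using the question's own names):

if diff "${arr[$x]}" <(./compare 2>&1); then
    echo Output is as expected.
else
    echo Output is not as expected. Check for errors.
fi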

Bash output happening after prompt, not before, meaning I have to manually press enter

I am having a problem getting bash to do exactly what I want; it's not a major issue, but it is annoying.
1.) I have a third party software I run that produces some output as stderr. Some of it is useful, some of it is routine stuff I don't care about and don't want dumped to the screen; however, I do want the useful parts of the stderr dumped to the screen. I figured the best way to achieve this was to pass stderr to a function, then use conditions in that function to either show the stderr or not.
2.) This works fine. However, the solution I have implemented dumps out my errors at the right time but then returns a bash prompt, and I want to summarise the status of the errors at the end of the function; echoing there prints the text after the prompt, meaning that I have to press Enter to get back to a clean prompt. It will become clear with the example below.
My error stream generator:
./TestErrorStream.sh
#!/bin/bash
echo "test1" >&2
My function to process this:
./Function.sh
#!/bin/bash
function ProcessErrors()
{
while read data;
do
echo Line was:"$data"
done
sleep 5 # This is used simply to simulate the processing work I'm doing on the errors.
echo "Completed"
}
I source the Function.sh file to make ProcessErrors() available, then I run:
2> >(ProcessErrors) ./TestErrorStream.sh
I expect (and want) to get:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
Completed
user@user-desktop:~/path$
However, what I actually get is:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
user@user-desktop:~/path$ Completed
And no clean prompt. Of course the prompt is there, but "Completed" is being printed after the prompt; I want it printed before, and then a clean prompt to appear.
NOTE: This is a minimum working example, and it's contrived. While other solutions to my error stream problem are welcome, I also want to understand how to make bash run this script the way I want it to.
Thanks for your help
Joey
Your problem is that the while loop stays attached to stdin only until the program exits.
Stdin is released at the end of TestErrorStream.sh, so your prompt comes back almost immediately, while the function still has work left to process.
I suggest you wrap the command inside a script so you can control how long to wait before your prompt comes back (I suggest one second more than the time the function is expected to need to process the remaining lines).
I managed to do this successfully like this:
./Functions.sh
#!/bin/bash
function ProcessErrors()
{
while read data;
do
echo Line was:"$data"
done
sleep 5 # simulate required time to process end of function (after TestErrorStream.sh is over and stdin is released)
echo "Completed"
}
./TestErrorStream.sh
#!/bin/bash
echo "first"
echo "firsterr" >&2
sleep 20 # any number here
./WrapTestErrorStream.sh
#!/bin/bash
source ./Functions.sh
2> >(ProcessErrors) ./TestErrorStream.sh
sleep 6 # <= this one is important
With the above you'll get a nice "Completed" before your prompt after 26 seconds of processing. (Works fine with or without the additional "time" command)
user@host:~/path$ time ./WrapTestErrorStream.sh
first
Line was:firsterr
Completed
real 0m26.014s
user 0m0.000s
sys 0m0.000s
user@host:~/path$
Note: the process substitution ">(ProcessErrors)" runs as a separate subprocess. When "./TestErrorStream.sh" ends, neither the script nor the wrapper waits for that subprocess to finish. That's why we need the final "sleep 6".
#!/bin/bash
function ProcessErrors {
    while read data; do
        echo Line was:"$data"
    done
    sleep 5
    echo "Completed"
}
# Open subprocess
exec 60> >(ProcessErrors)
P=$!
# Do the work
2>&60 ./TestErrorStream.sh
# Close connection or else subprocess would keep on reading
exec 60>&-
# Wait for process to exit (wait "$P" doesn't work). There are many ways
# to do this too like checking `/proc`. I prefer the `kill` method as
# it's more explicit. We'd never know if /proc updates itself quickly
# among all systems. And using an external tool is also a big NO.
while kill -s 0 "$P" &>/dev/null; do
    sleep 1s
done
Off topic side-note: I'd love to see how posturing bash veterans/authors try to own this. Or perhaps they already did way way back from seeing this.

Bash script to capture input, run commands, and print to file

I am trying to do a homework assignment and it is very confusing. I am not sure if the professor's example is in Perl or bash, since it has no header. Basically, I just need help with the meat of the problem: capturing the input and outputting it. Here is the assignment:
In the session, provide a command prompt that includes the working directory, e.g.,
$ ./logger
/home/it244/it244/hw8$
Accept user’s commands, execute them, and display the output on the screen.
During the session, create a temporary file “PID.cmd” (PID is the process ID) to store the command history in the following format (index: command):
1: ls
2: ls -l
If the script is aborted by CTRL+C (signal 2), output a message “aborted by ctrl+c”.
When you quit the logging session (either by “exit” or CTRL+C),
a. Delete the temporary file
b. Print out the total number of the commands in the session and the numbers of successful/failed commands (according to the exit status).
Here is my code so far (it did not go well; I would not try to run it):
#!/bin/sh
trap 'exit 1' 2
trap 'ctrl-c' 2
echo $(pwd)
while true
do
read -p command
echo "$command:" $command >> PID.cmd
done
Currently when I run this script I get
command read: 10: arg count
What is causing that?
======UPDATE=========
OK, I made some progress. It's not quite working all the way; it doesn't like my bashtrap or my incremental index.
#!/bin/sh
index=0
trap bashtrap INT
bashtrap(){
echo "CTRL+C aborting bash script"
}
echo "starting to log"
while :
do
read -p "command:" inputline
if [ $inputline="exit" ]
then
echo "Aborting with Exit"
break
else
echo "$index: $inputline" > output
$inputline 2>&1 | tee output
(( index++ ))
fi
done
This can be achieved in bash, perl, or other languages.
Some hints to get you started in bash:
question 1: a command prompt that shows the working directory, e.g. /home/it244/it244/hw8$
1) make sure of the prompt format in the user's .bashrc setup file: see the PS1 variable for Debian-like distros.
2) cd into that directory within your bash script.
question 2: run the user's command
1) get the user input
read -p "command : " input_cmd
2) run the user command to STDOUT
bash -c "$input_cmd"
3) Track the user input command exit code
echo $?
Should exit with "0" if everything worked fine (you can also find exit codes in the command man pages).
4) Track the command PID if the exit code is OK
echo $$ >> /tmp/pid_Ok
But take care: the assignment asks you to keep the user's command input, not the PID itself as shown here.
5) trap on exit
See man trap, as you misunderstood its use: you can create a function that is called on the caught exit or CTRL+C signals.
6) increment the index in your while loop (on the exit code condition)
index=0
while ...
do
...
((index++))
done
I guess you have enough to start your homework.
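Putting these hints together, a minimal skeleton might look like the sketch below. It is only an outline of the ideas above (prompt with the working directory, run the command, count by exit status), not a complete solution to the assignment, and the variable names are placeholders, not from the thread.

#!/bin/bash
index=0
success=0
fail=0
while true; do
    # Prompt with the working directory; stop on EOF (Ctrl-D)
    read -p "$(pwd)\$ " input_cmd || break
    # Run the user's command and check its exit code
    bash -c "$input_cmd"
    if [ $? -eq 0 ]; then
        ((success++))
    else
        ((fail++))
    fi
    # Count the commands entered
    ((index++))
done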
Since the example posted used sh, I'll use that in my reply. You need to break down each requirement into its specific lines of supporting code. For example, in order to "provide a command prompt that includes the working directory" you need to actually print the current working directory as the prompt string for the read command, rather than setting the $PS1 variable. This leads to a read command that looks like:
read -p "`pwd -P`\$ " _command
(I use leading underscores for private variables - just a matter of style.)
Similarly, the requirement to do several things on either a trap or a normal exit suggests a function should be created which could then either be called by the trap or to exit the loop based on user input. If you wanted to pretty-print the exit message, you might also wrap it in echo commands and it might look like this:
_cleanup() {
    rm -f $_LOG
    echo
    echo $0 ended with $_success successful commands and $_fail unsuccessful commands.
    echo
    exit 0
}
So after analyzing each of the requirements, you'd need a few counters and a little bit of glue code such as a while loop to wrap them in. The result might look like this:
#!/bin/sh
# Define a function to call on exit
_cleanup() {
    # Remove the log file as per specification #5a
    rm -f $_LOG
    # Display success/fail counts as per specification #5b
    echo
    echo $0 ended with $_success successful commands and $_fail unsuccessful commands.
    echo
    exit 0
}
# Where are we? Get absolute path of $0
_abs_path=$( cd -P -- "$(dirname -- "$(command -v -- "$0")")" && pwd -P )
# Set the log file name based on the path & PID
# Keep this constant so the log file doesn't wander
# around with the user if they enter a cd command
_LOG=${_abs_path}/$$.cmd
# Print ctrl+c msg per specification #4
# Then run the cleanup function
trap "echo aborted by ctrl+c;_cleanup" 2
# Initialize counters
_line=0
_fail=0
_success=0
while true
do
    # Count lines to support required logging format per specification #3
    ((_line++))
    # Set prompt per specification #1 and read command
    read -p "`pwd -P`\$ " _command
    # Echo command to log file as per specification #3
    echo "$_line: $_command" >>$_LOG
    # Arrange to exit on user input with value 'exit' as per specification #5
    if [[ "$_command" == "exit" ]]
    then
        _cleanup
    fi
    # Execute whatever command was entered as per specification #2
    eval $_command
    # Capture the success/fail counts to support specification #5b
    _status=$?
    if [ $_status -eq 0 ]
    then
        ((_success++))
    else
        ((_fail++))
    fi
done
