How can I store the result of this command as a variable in my bash script? - linux

I'm building a simple tool that will let me know if a site "siim.ml" resolves. If I run the command `ping siim.ml | grep "Name or service not known"` on the Linux command line, it only returns text if the site does not resolve; any working site returns nothing.
Using this I want to check if the result of that command is empty, and if it is I want to perform an action.
The problem is that, no matter what I do, the variable is empty! And it still just prints the result to stdout instead of storing it.
I've already tried switching between `command` and $(command), and removing the pipe with the grep, but it hasn't worked.
#!/bin/bash
result=$(ping siim.ml | grep "Name or service not known")
echo "Result var = " $result
if ["$result" = ""]
then
#siim.ml resolved
#/usr/local/bin/textMe/testSite.sh "siim.ml has resolved"
echo "It would send the text"
fi
When I run the script it prints this:
ping: siim.ml: Name or service not known
Result var =
It would send the text

It's almost certainly because that error is going to standard error rather than standard output (only the latter is captured by $()).
You can combine standard error into the output stream as follows:
result=$(ping siim.ml 2>&1 | grep "Name or service not known")
In addition, you need spaces separating the [ and ] characters from the expression:
if [ "$result" = "" ]
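Putting both fixes together, a minimal corrected sketch of the script could look like this (I've also added -c 1 so that ping sends a single packet and exits instead of running indefinitely, which is an assumption beyond the original question):
#!/bin/bash
# Sketch combining the two fixes above; -c 1 is an extra assumption so that
# ping exits on its own instead of pinging forever.
result=$(ping -c 1 siim.ml 2>&1 | grep "Name or service not known")
echo "Result var = $result"
if [ "$result" = "" ]
then
    # siim.ml resolved
    echo "It would send the text"
fi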

Or, even more tersely, just check whether ping succeeds, e.g.
if ping -q -c 1 siim.ml &>/dev/null
then
echo "It would send the text"
## set result or whatever else you need on success here
fi
This produces no output, thanks to the redirection to /dev/null, and the branch is taken only if the ping of siim.ml succeeds.

Related

Bash : how to grep string from file

I'm having an issue with grep, as VestaCP uses it a lot.
I have a file mysql.conf:
HOST='localhost' USER='root' PASSWORD='xxxxxx' CHARSETS='UTF8,LATIN1,WIN1250,WIN1251,WIN1252,WIN1256,WIN1258,KOI8' MAX_DB='500' U_SYS_USERS='' U_DB_BASES='1' SUSPENDED='no' TIME='05:32:47' DATE='2016-03-20'
Now, when I run
echo host_str=$(grep "HOST='$1'" $VESTA/conf/mysql.conf)
I get an empty result, although there is a HOST entry in the mysql.conf file I pasted above.
Any idea what's wrong with it?
UPDATE: Vesta db connect code block
host_str=$(grep "HOST='$1'" $VESTA/conf/mysql.conf)
eval $host_str
if [ -z $HOST ] || [ -z $USER ] || [ -z $PASSWORD ]; then
echo "Error: mysql config parsing failed"
log_event "$E_PARSING" "$EVENT"
exit $E_PARSING
fi
and I get:
Error: mysql config parsing failed
$1 is the first parameter of your script.
So, host_str=$(grep "HOST='$1'" $VESTA/conf/mysql.conf) gets the line containing those variables from your file, according to your parameter, and eval $host_str sets these variables in your script.
Therefore, your script needs an argument to know which host to look for in your file; in your case it's localhost, so run: ./yourscript.sh localhost.
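To make that concrete, a rough sketch of what happens when the script is called with localhost, using the file contents pasted in the question:
# Called as: ./yourscript.sh localhost  (so $1 is "localhost")
host_str=$(grep "HOST='localhost'" $VESTA/conf/mysql.conf)
# host_str now holds the whole matching line:
#   HOST='localhost' USER='root' PASSWORD='xxxxxx' ...
eval $host_str    # defines HOST, USER, PASSWORD, ... in the current shell
echo "$HOST"      # prints: localhost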
You probably don't want to use $1. Rather try this:
echo host_str=$(grep -o "HOST='[^']*'" $VESTA/conf/mysql.conf)
The [^']* matches everything that happens to be in between the single quotes. The -o option makes sure you only get the matching string, not the whole line, if that is what you want.
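For comparison, a quick sketch of what the -o variant captures (again assuming the mysql.conf from the question):
host_str=$(grep -o "HOST='[^']*'" $VESTA/conf/mysql.conf)
# host_str is now just the matched fragment, not the whole line:
#   HOST='localhost'
eval $host_str    # sets only $HOST; USER, PASSWORD, etc. are left untouched
echo "$HOST"      # prints: localhost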

BASH: Check output of command for $STRING, if exists then store output as variable

Background:
I'm working on some OBDII software and I'm attempting to automate the connection process via bluetooth. I have a working script but I'd like to further automate it so it'll work on all *nix machines and not just mine (right now the bluetooth device's MAC is stored manually in the script).
My Problem:
The output of this command is...
$ hcitool scan
Scanning ...
00:18:56:68:AE:08 OBDII
I need a simple way of piping this into grep (or whatever works) and checking the output for the string "OBDII". If it sees it, it should take that same line and copy the resulting MAC into a variable, stripping all whitespace and the OBDII identifier at the end, leaving only the MAC to be used further down in the script.
What's the simplest way to get this done?
Any help is appreciated!!
There's no reason to only conditionally store the output if the operation is successful -- it's easier to always store it, then check whether it's empty if you want to know whether a match was found.
result=$(hcitool scan | awk '/OBDII/ { print $1 }')
if [[ $result ]]; then
echo "Found a value: $result" >&2
else
echo "No result found" >&2
fi
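Since the question specifically mentions grep, here is an equivalent sketch that pulls out the MAC with grep alone; the MAC pattern is an assumption based on the hcitool output shown above:
result=$(hcitool scan | grep 'OBDII' | grep -oE '([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}')
# -o prints only the part of each line that matches, i.e. the MAC itself
if [[ $result ]]; then
    echo "Found a value: $result" >&2
else
    echo "No result found" >&2
fi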

What does this if-statement from a bash script do?

I am new to bash scripting and learning through some examples. One of the examples that I saw is using an if-statement to test if a previously assigned output file is valid, like this:
if [ -n "$outputFile" ] && ! 2>/dev/null : >> $outputFile ; then
exit 1
fi
I understand what [ -n "$outputFile" ] does, but not the rest of the conditional. Can someone explain what ! 2>/dev/null : >> $outputFile means/does?
I have googled for answers, but most of the links I found were explanations of I/O redirection, which are definitely relevant but still don't clear up the ! : >> structure.
That's some oddly written code!
The : command is built into bash. It's equivalent to true; it does nothing, successfully.
: >> $outputFile
attempts to do nothing, and appends the (empty) output to $outputFile -- which has already been confirmed to be a non-empty string. The >> redirection operator will create the file if it doesn't already exist.
I/O redirections such as 2>/dev/null can appear anywhere in a simple command; they don't have to be at the end. So the stdout of the : command (which is empty) is appended to $outputFile, and any stderr output is redirected to /dev/null. Any such stderr output would be the result of a failure in the redirection, since the : command itself does nothing and won't fail to do so. I don't know why the redirection of stdout (onto the end of $outputFile) and the redirection of stderr (to /dev/null) are on opposite sides of the : command.
The ! operator is a logical "not"; it checks whether the following command succeeded, and inverts the result.
The net result, written in English-ish text is:
if "$outputFile" is set and is not an empty string, and if we don't have permission to write to it, then terminate the script with a status of 1.
In short, it tests whether we're able to write to $outputFile, and bails out if we can't.
The script is attempting to make sure $outputFile is writable in a not-so-obvious way.
: is the null command in bash, it does nothing. The fact that stderr is redirected to /dev/null is simply to suppress the permission denied error, should one occur.
If the file is not writable, then the command fails, which makes the condition true since it's negated with ! and the script exits.
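As a side note, the placement may well be deliberate: redirections are processed left to right, so putting 2>/dev/null before the append means a "Permission denied" diagnostic from a failing >> is already silenced. A sketch of the same check, with the variable quoted and an explicit error message added for readability:
# Exit if $outputFile is set but cannot be appended to.
# The echo is an addition for readability; the original exits silently.
if [ -n "$outputFile" ] && ! 2>/dev/null : >> "$outputFile"; then
    echo "cannot write to $outputFile" >&2
    exit 1
fi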

read command is not taking input from the terminal

I don't know if it is weird, but read is not taking input from the terminal.
The configure script, which is used in the source build process, should ask the user for input to select the database type, either MySQL or Oracle (the code is below).
MYSQLLIBPATH="/usr/lib/mysql"
echo "Enter DataBase-Type 1-ORACLE, 2-MySQL (default MySQL):"
read in
echo $? >> /tmp/error.log
if test -z "$in" -o "$in" = "2"
then
DATABASE=-DDB_MYSQL
if true; then
MYSQL_TRUE=
MYSQL_FALSE='#'
else
MYSQL_TRUE='#'
MYSQL_FALSE=
fi
echo "Enter Mysql Library Path: (eg: $MYSQLLIBPATH (default))"
read in
echo $? >> /tmp/error.log
if test -n "$in"
then
MYSQLLIBPATH=`echo $in`
fi
echo "Mysql Lib path is $MYSQLLIBPATH"
else
if false; then
MYSQL_TRUE=
MYSQL_FALSE='#'
else
MYSQL_TRUE='#'
MYSQL_FALSE=
fi
DATABASE=-DDB_ORACLE
LD_PATH=
fi
But the read command is not asking for user input; it's failing to take the input from stdin.
When I checked the status of the command in error.log, it showed:
1
1
Could anyone tell me why read is failing to take input from stdin?
Is there any builtin variable which could block read from taking input?
Most likely read executes with standard input redirected from a file that has reached EOF. If the above is not the whole of your configure code, check that there are no input redirections. Could the code above be a part of a function which was invoked with some input from a pipe or a file? Otherwise check how configure is executed - are there any redirections?
Otherwise, the universal advice applies: try simplifying and stripping down your code until it is obvious what's happening.
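To see the failure mode in isolation, a tiny sketch: when stdin is already at end of file, read returns a non-zero status (1 in bash), which matches the values logged to /tmp/error.log.
# read fails when stdin is at EOF:
printf '' | { read in; echo "read exit status: $?"; }
# If a terminal is actually available, reading from it explicitly can work
# around a redirected stdin (an assumption about your setup):
#   read in < /dev/tty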
BTW, it is not a good idea to make configure interactive if you want to have your program packaged for a distribution - it's not easy to control the execution of interactive programs. Consider adding support for supplying parameters through command-line options.

Bash script to capture input, run commands, and print to file

I am trying to do a homework assignment and it is very confusing. I am not sure if the professor's example is in Perl or bash, since it has no header. Basically, I just need help with the meat of the problem: capturing the input and outputting it. Here is the assignment:
In the session, provide a command prompt that includes the working directory, e.g.,
$./logger/home/it244/it244/hw8$
Accept user’s commands, execute them, and display the output on the screen.
During the session, create a temporary file “PID.cmd” (PID is the process ID) to store the command history in the following format (index: command):
1: ls
2: ls -l
If the script is aborted by CTRL+C (signal 2), output a message “aborted by ctrl+c”.
When you quit the logging session (either by “exit” or CTRL+C),
a. Delete the temporary file
b. Print out the total number of the commands in the session and the numbers of successful/failed commands (according to the exit status).
Here is my code so far (which did not go well, I would not try to run it):
#!/bin/sh
trap 'exit 1' 2
trap 'ctrl-c' 2
echo $(pwd)
while true
do
read -p command
echo "$command:" $command >> PID.cmd
done
Currently when I run this script I get
command read: 10: arg count
What is causing that?
======UPDATE=========
OK, I made some progress. It's not quite working all the way: it doesn't like my bashtrap or the incremented index.
#!/bin/sh
index=0
trap bashtrap INT
bashtrap(){
echo "CTRL+C aborting bash script"
}
echo "starting to log"
while :
do
read -p "command:" inputline
if [ $inputline="exit" ]
then
echo "Aborting with Exit"
break
else
echo "$index: $inputline" > output
$inputline 2>&1 | tee output
(( index++ ))
fi
done
This can be achieved in bash, perl, or other languages.
Some hints to get you started in bash:
question 1: command prompt /logger/home/it244/it244/hw8
1) Make sure of the prompt format in the user's .bashrc setup file: see the PS1 variable (on Debian-like distros).
2) cd into that directory within your bash script.
question 2: run the user command
1) get the user input
read -p "command : " input_cmd
2) run the user command to STDOUT
bash -c "$input_cmd"
3) Track the user input command exit code
echo $?
Should exit with "0" if everything worked fine (you can also find exit codes in the command man pages).
4) Track the command PID if the exit code is OK
echo $$ >> /tmp/pid_Ok
But take care: the assignment asks you to keep the user's command input, not the PID itself as shown here.
5) trap on exit
See man trap, as you misunderstood its use: you may create a function that is called on the caught exit or CTRL+C signals (a minimal sketch follows after these hints).
6) increment the index in your while loop (on the exit code condition)
index=0
while ...
do
...
((index++))
done
I guess you have enough to start your homework.
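As a starting point for hint 5), here is a minimal sketch of a trap handler; the function name on_interrupt is just a placeholder:
# Minimal trap sketch: run a function when CTRL+C (SIGINT) is received.
on_interrupt() {
    echo "aborted by ctrl+c"
    exit 1
}
trap on_interrupt INT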
Since the example posted used sh, I'll use that in my reply. You need to break down each requirement into its specific lines of supporting code. For example, in order to "provide a command prompt that includes the working directory" you need to actually print the current working directory as the prompt string for the read command, not set it via the $PS1 variable. This leads to a read command that looks like:
read -p "`pwd -P`\$ " _command
(I use leading underscores for private variables - just a matter of style.)
Similarly, the requirement to do several things on either a trap or a normal exit suggests creating a function which can then be called either by the trap or to exit the loop based on user input. If you wanted to pretty-print the exit message, you might also wrap it in echo commands, and it might look like this:
_cleanup() {
rm -f $_LOG
echo
echo $0 ended with $_success successful commands and $_fail unsuccessful commands.
echo
exit 0
}
So after analyzing each of the requirements, you'd need a few counters and a little bit of glue code such as a while loop to wrap them in. The result might look like this:
#!/bin/sh
# Define a function to call on exit
_cleanup() {
# Remove the log file as per specification #5a
rm -f $_LOG
# Display success/fail counts as per specification #5b
echo
echo $0 ended with $_success successful commands and $_fail unsuccessful commands.
echo
exit 0
}
# Where are we? Get absolute path of $0
_abs_path=$( cd -P -- "$(dirname -- "$(command -v -- "$0")")" && pwd -P )
# Set the log file name based on the path & PID
# Keep this constant so the log file doesn't wander
# around with the user if they enter a cd command
_LOG=${_abs_path}/$$.cmd
# Print ctrl+c msg per specification #4
# Then run the cleanup function
trap "echo aborted by ctrl+c;_cleanup" 2
# Initialize counters
_line=0
_fail=0
_success=0
while true
do
# Count lines to support required logging format per specification #3
((_line++))
# Set prompt per specification #1 and read command
read -p "`pwd -P`\$ " _command
# Echo command to log file as per specification #3
echo "$_line: $_command" >>$_LOG
# Arrange to exit on user input with value 'exit' as per specification #5
if [[ "$_command" == "exit" ]]
then
_cleanup
fi
# Execute whatever command was entered as per specification #2
eval $_command
# Capture the success/fail counts to support specification #5b
_status=$?
if [ $_status -eq 0 ]
then
((_success++))
else
((_fail++))
fi
done
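For reference, a rough idea of a session, assuming the script is saved as logger and run from /home/it244/it244/hw8 (the paths and commands are just examples):
$ chmod +x logger
$ ./logger
/home/it244/it244/hw8$ ls
... (output of ls) ...
/home/it244/it244/hw8$ exit
./logger ended with 1 successful commands and 0 unsuccessful commands.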
