Execute cqlsh inside script - linux

I am trying to execute cqlsh inside a bash script. My script is below. When I try to execute the sh file, it returns "cqlsh: command not found".
#!/bin/bash
set -x
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
cqlsh -e "SELECT * FROM msg.msg_log limit 1;" > /home/yunus/sh/cqlshcontrol.txt
error1=$( more /home/yunus/sh/cqlshcontrol.txt | wc -l )
if [ $error1 -lt 1 ]; then
curl -S -X POST --data "payload={\"text\": \" Cqlsh not responding, Connection Problem \",\"username\":\"Elevate Cassandra1\",\"icon_emoji\":\"${SLACK_ICON}\"}" https://hooks.slack.com/services/
fi

Some suggestions:
Use [[ ]] over [ ].
The value captured by $() is not an error value and should be named lines or something more meaningful. The lack of any other error variable in the code makes the appended number (the 1 in error1) seem even odder.
There's no reason to use more or a pipe inside that command substitution. Just run wc -l on your file.
Are you sure cqlsh is in the PATH? Try which cqlsh to find it.
wc will never return a negative value, so comparing for equality with zero would be just as clear and cover just as many potential cases.
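Putting those suggestions together, the script might look something like this (a sketch; the outfile and lines names are just illustrative, and your truncated webhook URL is kept as-is):
#!/bin/bash
set -x
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

outfile=/home/yunus/sh/cqlshcontrol.txt
cqlsh -e "SELECT * FROM msg.msg_log limit 1;" > "$outfile"

# count lines directly from the file; no more, no pipe
lines=$(wc -l < "$outfile")

if [[ $lines -eq 0 ]]; then
    curl -S -X POST --data "payload={\"text\": \" Cqlsh not responding, Connection Problem \",\"username\":\"Elevate Cassandra1\",\"icon_emoji\":\"${SLACK_ICON}\"}" https://hooks.slack.com/services/
fi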
Otherwise, if that doesn't get you out of your confusion, please show the output when you try to run it.

Related

Run bash script with defaults to piped commands set within the script

Two questions about the same thing, I think...
Question one:
Is it possible to have a bash script run with default parameters/options? In the sense that if someone were to run the script:
./somescript.sh
it would actually run ./somescript.sh | tee /tmp/build.txt?
Question two:
Would it also be possible to prepend the script with defaults? For example, if you were to run the script ./somescript.sh
it would actually run
script -q -c "./somescript.sh" /tmp/build.txt | aha > /tmp/build.html?
Any help or guidance is very much appreciated.
You need a wrapper script that handles all such scenarios for you.
For example, your wrapper script can take parameters that help you decide:
./wrapper_script.sh --input /tmp/build.txt --output /tmp/build.html
By default, --input and --output can be set to the values you want when they are empty.
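A minimal sketch of such a wrapper (the option names, defaults, and error handling are illustrative, not a fixed interface):
#!/bin/bash
# defaults, used when --input/--output are not supplied
input=/tmp/build.txt
output=/tmp/build.html

while [[ $# -gt 0 ]]; do
    case "$1" in
        --input)  input="$2";  shift 2 ;;
        --output) output="$2"; shift 2 ;;
        *) echo "unknown option: $1" >&2; exit 1 ;;
    esac
done

# run the real script under script(1) and convert its output to HTML
script -q -c "./somescript.sh" "$input" | aha > "$output"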
You can use the builtin $# to know how many arguments you have and take action based on that. If you want to do your second part, for example, you could do something like
if [[ $# -eq 0 ]]; then
    # the extra argument ("wrapped" is arbitrary) makes $# -gt 0 in the re-run,
    # so the re-invocation doesn't recurse back into this branch
    script -q -c "$0 wrapped" /tmp/build.txt | aha > /tmp/build.html
    exit
fi
# do everything if you have at least one argument
Though this will have problems if your script name or path has spaces, so you're probably better off putting the real path to your script in the script command instead of $0.
You can also use exec instead of running the command and exiting, but make sure you have your quotes in the right place:
if [[ $# -eq 0 ]]; then
    # the whole pipeline lives inside the -c string: exec then replaces this
    # shell with script; a bare "exec script ... | aha ..." would fall through
    # to the rest of the script once the pipeline finished
    exec script -q -c "$0 wrapped | aha > /tmp/build.html" /tmp/build.txt
fi
# do everything when you have at least 1 argument

Execute a find command with expression from a shell script [duplicate]

This question already has answers here:
Why does shell ignore quoting characters in arguments passed to it through variables? [duplicate]
(3 answers)
Closed 6 years ago.
I'm trying to write a database call from within a bash script and I'm having problems with a sub-shell stripping my quotes away.
This is the bones of what I am doing.
#---------------------------------------------
#! /bin/bash
export COMMAND='psql ${DB_NAME} -F , -t --no-align -c "${SQL}" -o ${EXPORT_FILE} 2>&1'
PSQL_RETURN=`${COMMAND}`
#---------------------------------------------
If I use an 'echo' to print out the ${COMMAND} variable the output looks fine:
echo ${COMMAND}
screen output:-
#---------------
psql drupal7 -F , -t --no-align -c "SELECT DISTINCT hostname FROM accesslog;" -o /DRUPAL/INTERFACES/EXPORTS/ip_list.dat 2>&1
#---------------
Also if I cut and paste this screen output it executes just fine.
However, when I try to execute the command as a variable within a sub-shell call, it gives an error message.
The error is from the psql client to the effect that the quotes have been removed from around the ${SQL} string.
The error suggests psql is trying to interpret the terms in the sql string as parameters.
So it seems the string and quotes are composed correctly but the quotes around the ${SQL} variable/string are being interpreted by the sub-shell during the execution call from the main script.
I've tried to escape them using various methods: \", \\", \\\", "", \"" '"', \'"\', ... ...
As you can see from my 'try it all' approach I am no expert and it's driving me mad.
Any help would be greatly appreciated.
Charlie101
Instead of storing the command in a string variable, it's better to use a bash array here:
cmd=(psql "${DB_NAME}" -F , -t --no-align -c "${SQL}" -o "${EXPORT_FILE}")
PSQL_RETURN=$( "${cmd[@]}" 2>&1 )
Each array element stays a single word when expanded with "${cmd[@]}", so the SQL string never has to survive a second round of quote parsing.
Rather than evaluating the contents of a string, why not use a function?
call_psql() {
# optional, if variables are already defined in global scope
DB_NAME="$1"
SQL="$2"
EXPORT_FILE="$3"
psql "$DB_NAME" -F , -t --no-align -c "$SQL" -o "$EXPORT_FILE" 2>&1
}
then you can just call your function like:
PSQL_RETURN=$(call_psql "$DB_NAME" "$SQL" "$EXPORT_FILE")
It's entirely up to you how elaborate you make the function. You might like to check for the correct number of arguments (using something like (( $# == 3 ))) before calling the psql command.
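For instance, a guarded version might look like this (a sketch):
call_psql() {
    # insist on exactly three arguments before touching psql
    (( $# == 3 )) || { echo "usage: call_psql DB_NAME SQL EXPORT_FILE" >&2; return 1; }
    psql "$1" -F , -t --no-align -c "$2" -o "$3" 2>&1
}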
Alternatively, perhaps you'd prefer just to make it as short as possible:
call_psql() { psql "$1" -F , -t --no-align -c "$2" -o "$3" 2>&1; }
In order to capture the command that is being executed for debugging purposes, you can use set -x in your script. This will print the contents of the function, including the expanded variables, when the function (or any other command) is called. You can switch this behaviour off using set +x, or if you want it on for the whole duration of the script you can change the shebang to #!/bin/bash -x. This saves you explicitly echoing throughout your script to find out what commands are being run; you can just turn on set -x for a section.
A very simple example script using the shebang method:
#!/bin/bash -x
ec() {
echo "$1"
}
var=$(ec 2)
Running this script, either directly after making it executable or by calling it with bash -x, gives:
++ ec 2
++ echo 2
+ var=2
Removing the -x from the shebang or the invocation results in the script running silently.
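And to trace only one section instead of the whole script, wrap just that part in set -x / set +x (a minimal sketch):
#!/bin/bash
echo "this line runs quietly"
set -x             # tracing on from here
var=$(date +%s)    # printed with its expansion, e.g. ++ date +%s
set +x             # tracing off (this line itself is still traced)
echo "quiet again"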

eval and string "return"

I'm creating a bash script to run Netezza queries.
Here's an example of what I have to do:
nzsql -host localhost -port 123456 -d db -u usr -pw pwd -A -t -c "insert into TABLE (name,surname) values ('m','sc')"
and it should return
INSERT 0 1
What I need is retrieve the number "1" which means that 1 row was inserted.
For this, I'd need to retrieve the whole string "INSERT 0 1" and work on it.
According to http://www.enzeecommunity.com/thread/2423, this should work:
cmnd_output=`nzsql -host $NZ_HOST -d $NZ_DATABASE -u $NZ_USER -pw $NZ_PASSWORD -A -t -c "insert into TEST values ('test 1')"`
But I can't get it to work with this ($2 is correct, because when I run it from the terminal it works just fine):
cmd_out=`$2` or cmd_out=`"$2"` or cmd_out="`$2`" or cmd_out=`"'$2'"`
cmd_out=$($2) or cmd_out="$($2)" or cmd_out=$("$2")
It tells me command not found... just as if there were a "string quote" problem with $2.
I have, however, managed to execute $2 with eval:
eval "$2"
and it works great; the command $2 is executed just fine.
But I can't use eval in this case, as I want to store that "INSERT 0 1" in a variable.
A simple
variable_int=`$function '$arg1' '$arg2'`
without the eval won't do?
To capture a function's output in a shell variable, use command substitution:
variable=$(function arg1 arg2)
Why do you need eval?
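That said, if the command line really does arrive in $2, command substitution and eval can be combined, so the output can still be captured (a sketch, assuming $2 holds the full nzsql invocation):
# run the command line stored in $2 and capture its stdout
cmd_out=$(eval "$2")
echo "$cmd_out"    # e.g. INSERT 0 1

# the inserted row count is the third field of that status line
rows=$(echo "$cmd_out" | awk '{print $3}')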
When you run into a problem like this, I find it's always very useful to run with the -x option; just change the top shebang line like so:
#!/bin/bash -x
That'll print out each line as it's currently interpreted, before executing it. You can see how your variables are being mangled and use that to fix the problem.

Why are commands executed in backquotes giving me different results when run in a script?

I have a script, meant to be run from cron, that ensures a daemon I wrote is working. The contents of the script file are similar to the following:
daemon_pid=`ps -A | grep -c fsdaemon`
echo "daemon_pid: " $daemon_pid
if [ $daemon_pid -eq 0 ]; then
echo "restarting fsdaemon"
/etc/init.d/fsdaemon start
fi
When I execute this script from the command prompt, the line that echoes the value of $daemon_pid is reporting a value of 2. This value is two regardless of whether my daemon is running or not. If, however, I execute the command with back quotes and then examine the $daemon_pid variable, the value of $daemon_pid is now one. I have also tried single stepping through the script using bashdb and, when I examine the variables using that tool, they are what they should be.
My question therefore is: why is there a difference in the behaviour between when the script is executed by the shell versus when the commands in the script are executed manually? I'm sure that there is something very fundamental that I am missing.
You're very likely encountering grep itself as part of the 'answer' from ps.
To help fully understand what is happening, drop the -c option and look at the data returned by just ps -A | grep fsdaemon.
To solve the issue, some systems have a p(rocess) grep (pgrep). That will work, or
ps -A | grep -v grep | grep -c fsdaemon
is a common idiom you will see, but at the expense of another process.
The cleanest solution is
ps -A | grep -c '[f]sdaemon'
The bracket expression still matches fsdaemon, but grep's own command line contains the literal string [f]sdaemon, which the pattern does not match, so grep no longer counts itself. This regular expression syntax should work with all greps, on all systems.
I hope this helps.
The problem is that grep itself shows up... Try running this command with anything after grep -c:
eple:~ erik$ ps -a | grep -c asdfladsf
1
eple:~ erik$ ps -a | grep -c gooblygoolbygookeydookey
1
eple:~ erik$
What does ps -a | grep fsdaemon return? Just look at the processes actually listed... :)
Since this is Linux, why not try pgrep? This saves you a pipe, and you don't end up with grep reporting back the daemon script itself running.
Any process whose arguments include that name will add to the count: grep, and your script itself.
Grepping ps output for a process isn't really reliable; you should use a lock file.
As several people have pointed out already, your process count is inflated because ps | grep detects (1) the script itself and (2) the subprocess created by the backquotes, which inherits the name of the main script. So an easy solution is to change the name of the script to something that doesn't include the name you're looking for. But you can do better.
The "best-practice" solution that I would suggest is to use the facilities provided by your operating system. It's not uncommon for an init script to create a PID file as part of the process of starting your daemon; in other words, instead of just running the daemon itself, you use a wrapper script that starts the daemon and then writes the process ID to a file somewhere. If start-stop-daemon exists on your system (and I think it's fairly common these days), you can use that like so:
start-stop-daemon --start --quiet --background \
--make-pidfile --pidfile /var/run/fsdaemon.pid -- /usr/bin/fsdaemon
(obviously replace the path /usr/bin/fsdaemon as appropriate) to start it, and then
start-stop-daemon --stop --quiet --pidfile /var/run/fsdaemon.pid
to stop it. start-stop-daemon has other options that might be useful to you, which you can investigate by reading the man page.
If you don't have access to start-stop-daemon, you can write a wrapper script to do basically the same thing, something like this to start:
echo "$$" > /var/run/fsdaemon.pid
exec /usr/bin/fsdaemon
and this to stop:
kill $(< /var/run/fsdaemon.pid)
rm /var/run/fsdaemon.pid
(this is pretty crude, of course, but it should normally work).
Anyway, once you have the setup to generate a PID file, whether by using start-stop-daemon or not, you can update your check script to this:
daemon_pid=`ps --no-headers --pid $(< /var/run/fsdaemon.pid) | wc -l`
if [ $daemon_pid -eq 0 ]; then
echo "restarting fsdaemon"
/etc/init.d/fsdaemon restart
fi
(one would think there would be a concise command to check whether a given PID is running, but I don't know it).
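(One common idiom, for what it's worth, is kill -0, which sends no signal but succeeds only if the PID exists and can be signalled; a sketch:)
# exit status 0 means the process exists and we may signal it
if kill -0 "$(< /var/run/fsdaemon.pid)" 2>/dev/null; then
    echo "fsdaemon is running"
else
    echo "restarting fsdaemon"
    /etc/init.d/fsdaemon restart
fi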
If you don't want to (or can't) create a PID file, I would at least suggest pgrep instead of ps | grep, since pgrep will search directly for a process by name and won't find anything that just happens to include the same string.
daemon_pid=`pgrep -x -c fsdaemon`
if [ $daemon_pid -eq 0 ]; then
echo "restarting fsdaemon"
/etc/init.d/fsdaemon restart
fi
The -x means "match exactly", and -c works as with grep.
By the way, it seems a bit misleading to name your variable daemon_pid when it is actually a count.

How can I use exit codes to run shell scripts sequentially?

Since cruise control is full of bugs that have wasted my entire week, I have decided the existing shell scripts I have are simpler and thus better.
Here is what I have so far
svn update /var/www/k12/
#svn log --revision "HEAD" /var/www/code/ | head -2 | tail -1 | awk '{print $1}' > /var/www/path/version.txt
# upload the files
rsync -ar --verbose --stats --progress --delete --exclude=*.svn /var/www/code/ example.com:/home/path
# bring database up to date
ssh example.com 'php /path/tasks/dbrefactor.php'
# notify me
ssh example.com 'php /path/tasks/build.php'
Only thing is the other day I changed the paths and forgot to update the rsync call. As a result the "notify me" step ran several times while I was figuring stuff out.
I know in Linux you can do command1 && command2, and if command1 "fails" then command2 will not run, but how can I observe the "failure/success" exit codes for debugging purposes? Some of the scripts I wrote myself, and I'm sure I will need to do something special.
The best option, especially for unattended scripts, is to set the -e shell option:
#!/bin/sh -e
or
set -e
This will cause the shell to stop executing if any (untested) command exits with a nonzero error code.
-e  Exit immediately if a simple command (see SHELL GRAMMAR above) exits with a non-zero status. The shell does not exit if the command that fails is part of an until or while loop, part of an if statement, part of a && or || list, or if the command's return value is being inverted via !. A trap on ERR, if set, is executed before the shell exits.
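Applied to the script above, set -e means the later steps, including the "notify me" step, are never reached once an earlier step fails (a sketch of the same sequence):
#!/bin/sh -e
svn update /var/www/k12/
rsync -ar --verbose --stats --progress --delete --exclude=*.svn /var/www/code/ example.com:/home/path
ssh example.com 'php /path/tasks/dbrefactor.php'
# never reached if any step above failed
ssh example.com 'php /path/tasks/build.php'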
The exit code of the previous process is available in the $? variable right after its execution. Usually (that's not required, but it's the convention everyone follows) the exit code of a successful command will be equal to 0, and any other value means an error.
Remember the caveats! One of them is that after these commands:
svn log --revision "HEAD" /var/www/code/ | head -2 | tail -1 | awk '{print $1}'
echo "$?"
zero would most likely be printed, because $? contains the return code of awk, the last command in the pipeline. To avoid that, set the pipefail option somewhere above that code:
set -o pipefail
The return value of the last-run command is stored in the variable $?. You can use that to determine which command to run next. Overview of special variables.
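For example, to log each step's exit code and stop on failure (a sketch around the first step of the script):
svn update /var/www/k12/
status=$?
echo "svn update exited with status $status"
if [ "$status" -ne 0 ]; then
    exit "$status"
fi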
I think $? contains the last exit code.
if [[ $? -eq 0 ]]
then
# notify me
ssh example.com 'php /path/tasks/build.php'
fi
I would suggest exiting with a nonzero status at the points where failure is expected, and before processing any further step, checking:
if [ $? -ne 0 ]; then
    echo "previous step failed" >&2
    exit 1
fi
$? will always hold a nonzero number if the last process did not execute successfully.
