shell save MySQL query result to a file - linux

I have a script which runs a MySQL query, something like this:
#!/bin/sh
user="root"
pwd="test"
database="mydb"
command="long...
long... query in
multiple lines"
mysql -u $user -p$pwd << EOF
use $database;
$command
EOF
This query backs up one table into another. Is it possible to save the query result to a file without using MySQL's INTO OUTFILE? I only want to know whether the query failed or succeeded.
If it succeeded, something like 1 row(s) affected; if it failed, Error Code: 1064. You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ...
Update
Solution 1: $( ) runs the enclosed commands and captures their output, so assigning it to a variable stores the query result. Then simply write that variable out to a file.
output=$( mysql -u $user -p$pwd << EOF
use $database;
$command
EOF
)
echo "$output" >> /events/mysql.log
Solution 2: pipe the script's output through the system tee command to send it to a file as well, but this needs to be done where the script is invoked, e.g. from the crontab, like this:
*/1 * * * * root sh /events/mysql.sh |tee -a /events/mysql.log
http://forums.mysql.com/read.php?10,391070,391983#msg-391983
My working solution:
user="root"
pwd="root12345"
database="mydb"
command="long ...long query"
mysql -u $user -p$pwd << EOF >> /events/mysql.log 2>&1
use $database;
$command;
EOF

This should work:
output=$( mysql -u $user -p$pwd << EOF
use $database;
$command
EOF
)
echo "$output" >> outfile

The correct way to handle error messages is through stderr. Use 2>&1 to catch them.
So, add this to the end of the mysql command in your script:
>> install.log 2>&1
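For example, sending normal output and errors to separate files makes the failure check trivial: a non-empty error file means the query failed. A sketch (note that newer mysql clients also print a password warning on stderr, which would land in the error file too):
mysql -u $user -p$pwd << EOF >> /events/mysql.log 2> /events/mysql.err
use $database;
$command
EOF
if [ -s /events/mysql.err ]; then
echo "query failed; see /events/mysql.err" >&2
fi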

You can also do it like this:
#!/bin/sh
user="root"
pwd="test"
database="mydb"
command="long...
long... query in
multiple lines"
mysql -u$user -p$pwd -D$database -e "$command" > file
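As with the heredoc versions, error messages go to stderr, so append 2>&1 if you want them captured in the same file:
mysql -u$user -p$pwd -D$database -e "$command" > file 2>&1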

It's easier to use the MySQL client's built-in tee command to send output to a file.
To start the logging, just issue the tee command:
tee /tmp/my.out;
#!/bin/sh
user="root"
pwd="test"
database="mydb"
pathToFile="/tmp/my.out"
command="long...
long... query in
multiple lines"
mysql -u $user -p$pwd << EOF
tee $pathToFile
use $database;
$command
EOF
EDIT
Since tee reportedly isn't working inside the script, you can also log the output with the system tee when running the script:
mysql_bash.sh > >(tee -a script.log)
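To capture error messages in the log as well, redirect stderr into the same process substitution:
mysql_bash.sh > >(tee -a script.log) 2>&1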

Related

SSH remote execution - How to declare a variable inside EOF block (Bash script)

I have the following code in a bash script:
remote_home=/home/folder
dump_file=$remote_home/my_database_`date +%F_%X`.sql
aws_pem=$HOME/my_key.pem
aws_host=user@host
local_folder=$HOME/db_bk
pwd_stg=xxxxxxxxxxxxxxxx
pwd_prod=xxxxxxxxxxxxxxx
ssh -i $aws_pem $aws_host << EOF
mysqldump --column-statistics=0 --result-file=$dump_file -u user -p$pwd_prod -h $db_to_bk my_database
mysql -u user -p$pwd_prod -h $db_to_bk -N -e 'SHOW TABLES from my_database' > $remote_home/test.txt
sh -c 'cat test.txt | while read i ; do mysql -u user -p$pwd_prod -h $db_to_bk -D my_database --tee=$remote_home/rows.txt -e "SELECT COUNT(*) as $i FROM $i" ; done'
EOF
My while loop is not working because the i variable comes out empty. Can anyone give me a hand, please? I would like to understand how to handle data in such cases.
The local shell is "expanding" all of the $variable references in the here-document, but as I understand it you want $i to be passed through to the remote shell and expanded there. To do this, escape (with a backslash) the $ characters you don't want the local shell to expand. I think it'll look like this:
ssh -i $aws_pem $aws_host << EOF
mysqldump --column-statistics=0 --result-file=$dump_file -u user -p$pwd_prod -h $db_to_bk my_database
mysql -u user -p$pwd_prod -h $db_to_bk -N -e 'SHOW TABLES from my_database' > $remote_home/test.txt
sh -c 'cat test.txt | while read i ; do mysql -u user -p$pwd_prod -h $db_to_bk -D my_database --tee=$remote_home/rows.txt -e "SELECT COUNT(*) as \$i FROM \$i" ; done'
EOF
You can test this by replacing the ssh -i $aws_pem $aws_host command with just cat, so it prints the here-document as it'll be passed to the ssh command (i.e. after the local shell has done its parsing and expansions, but before the remote shell has done its). You should see most of the variables replaced by their values (because those have to happen locally, where those variables are defined) but $i passed literally so the remote shell can expand it.
BTW, you should double-quote almost all of your variable references (e.g. ssh -i "$aws_pem" "$aws_host") to prevent weird parsing problems; shellcheck.net will point this out for the local commands (along with some other potential problems), but you should fix it for the remote commands as well (except $i, since that's already double-quoted as part of the SELECT command).
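A tiny self-contained illustration of that cat test ($i here is just a stand-in for the remote-side variable; nothing touches ssh):
i=local_value
cat << EOF
expanded by the local shell: $i
left for the remote shell: \$i
EOF
The first line prints local_value; the second prints a literal $i, which is exactly what the remote shell would receive and expand.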

Run bash script as a sudo user with some other computations

I have another piece of bash script that calls some external scripts and runs as a sudo user. Below is a sample which can be used to reproduce my problem.
#!/bin/bash
databaseName=${1}
tableNames=${2}
IFS=','
read -r -a tableNamesArray <<< "${tableNames}"
#sudo su <<WRAPPER
echo "Tables inside ${databaseName} are ::: ${tableNamesArray[#]}"
echo "-- SQL FILE --" > ${databaseName}_schema.hql
echo "$tableNames"
echo "${tableNamesArray[#]}"
for eachTable in "${tableNamesArray[#]}"
do
echo "$eachTable"
echo "MY_QUERY $databaseName.$eachTable;" >> ${databaseName}_schema.hql
done
#WRAPPER
This script works fine. I want mydb_schema.hql to contain the generated SQL records. Assume I call the above as ./sample.sh db table1,table2 to get a SQL file with the queries below:
-- SQL FILE --
MY_QUERY db.table1;
MY_QUERY db.table2;
The above works fine and gives me everything I need.
Here is my problem:
#!/bin/bash
databaseName=${1}
tableNames=${2}
IFS=','
read -r -a tableNamesArray <<< "${tableNames}"
sudo su <<WRAPPER
#..Other scripts that I need to call as sudo..
echo "Tables inside ${databaseName} are ::: ${tableNamesArray[#]}"
echo "-- SQL FILE --" > ${databaseName}_schema.hql
echo "$tableNames"
echo "${tableNamesArray[#]}"
for eachTable in "${tableNamesArray[#]}"
do
echo "$eachTable"
echo "MY_QUERY $databaseName.$eachTable;" >> ${databaseName}_schema.hql
done
WRAPPER
Used like this, I am able to call other scripts as a sudo user in between the code. The WRAPPER works fine for calling other scripts as a sudo user, but the for loop inside the WRAPPER does not work. I am not sure what the problem is. Any help is appreciated. Thanks.
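This looks like the same here-document expansion issue as in the SSH question above: the outer shell expands $eachTable (still empty at that point) before sudo su ever sees the loop, so every iteration writes an empty table name. A sketch of a possible fix, assuming the table names contain no spaces or glob characters: let the outer shell expand the array into plain words, and escape the loop variable so the root shell expands it on each pass.
sudo su <<WRAPPER
echo "-- SQL FILE --" > ${databaseName}_schema.hql
# the table list below is expanded by the outer shell into plain words;
# the loop variable is escaped (\$eachTable) so the root shell expands it
for eachTable in ${tableNamesArray[@]}
do
echo "MY_QUERY ${databaseName}.\$eachTable;" >> ${databaseName}_schema.hql
done
WRAPPER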

Why bash script breaks if it meets space in this example?

I need to execute following command on multiple servers:
mysql -h 127.0.0.1 -uroot -psecret mydatabase -e 'SELECT 1;'
So, I have a test1.sh script, which echoes a dynamic string:
#!/bin/bash
echo -n "mysql -h 127.0.0.1 -uroot -psecret mydatabase -e 'SELECT 1'"
And a test2.sh script, which executes the given string:
#!/bin/bash
CMD=`./test1.sh`
$CMD
If I execute ./test2.sh, I see the mysql help output; the command is not executed.
If I remove the space in the query (SELECT 1) or remove the whole -e parameter and then execute ./test2.sh, everything works.
Why is this happening? Can you please describe this magic?
My bash version is 4.2.46.
The magic: when $CMD is expanded unquoted, bash only performs word splitting and globbing on the result; the quotes inside the string are treated as literal characters rather than quoting operators, so mysql receives mangled arguments like 'SELECT and 1;' and prints its help output. As long as you control and trust the command line coming from test1.sh, you can use the dreaded eval in test2.sh like this:
#!/bin/bash
cmd="$(./test1.sh)"
eval "$cmd"
See also: Why and when should the use of eval be avoided in shell scripts?
You can also try the test1.sh script like this:
#!/bin/bash
echo -e "mysql -h 127.0.0.1 -uroot -psecret mydatabase -e"
and test2.sh like this:
#!/bin/bash
CMD=$(./test1.sh)
${CMD} "SELECT 1"

Get output of FOR with EOF in bash

I've created a bash script to temporarily help me send some files to an FTP server based on the id of the commit: I get the last commit, collect the affected files, and send them as listed below.
#!/bin/bash
commit_hash=$(git log --format="%H" -n 1)
[[ -z "$1" ]] || commit_hash=$1
files=$(git diff-tree --no-commit-id --name-only -r $commit_hash)
echo -e $(git log -1 $commit_hash --pretty=format:"%h - %an, %ar : %s");
printf "\n"
HOST=
USER=
PASS=
for file in $files; do
ftp -nv $HOST << EOF
user $USER $PASS
cd /www/example
passive
put $file
bye
EOF
done;
Of course it isn't the best approach to do that, but it automates some things I am currently working on.
Is it possible to catch the ftp output of the heredoc and apply some filters, with pipelines for example? I only want to know if the transfer completed successfully.
"Is it possible to catch the ftp output of the heredoc and apply some filters, with pipelines for example? I only want to know if the transfer completed successfully."
I presume you mean you want to catch the output of the ftp command whose input is redirected from the heredoc; the heredoc itself has no output in any sense that anything other than the associated command can see.
But you can redirect the command's output. The thing to remember is that the heredoc begins on the next line, not immediately after the associated redirection operator. Thus, you can add a pipeline to another command after the heredoc operator. For example:
$ cat << EOF | grep flag
flag this line
not this line
or this line
flag this
last flag
EOF
Output:
flag this line
flag this
last flag
Do not use a for loop for this. See Bash FAQ 001.
commit_hash=${1:-$(git log --format="%H" -n 1)}
while IFS= read -r file; do
ftp -nv "$HOST" << EOF
user $USER $PASS
cd /www/example
passive
put $file
bye
EOF
done < <(git diff-tree --no-commit-id --name-only -r "$commit_hash")
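And for the filtering part: because the heredoc only feeds ftp's stdin, you can still capture ftp's own output and grep it. A sketch that looks for the FTP 226 reply ("Transfer complete"); the exact reply text can vary between servers, so treat the pattern as an assumption:
while IFS= read -r file; do
out=$( ftp -nv "$HOST" 2>&1 << EOF
user $USER $PASS
cd /www/example
passive
put $file
bye
EOF
)
if printf '%s\n' "$out" | grep -q '^226'; then
echo "$file: transfer complete"
else
echo "$file: transfer FAILED" >&2
fi
done < <(git diff-tree --no-commit-id --name-only -r "$commit_hash")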

In bash tee is making function variables local, how do I escape this?

I am stuck with a bash script which should write both to stdout and into a file. I'm using functions and some variables inside them. Whenever I redirect the function's output to a file while printing it on screen with tee, I can no longer use the variables that were set in the function; somehow they become local.
Here is simple example:
#!/bin/bash
LOGV=/root/log
function var()
{
echo -e "Please, insert VAR value:\n"
read -re VAR
}
var 2>&1 | tee $LOGV
echo "This is VAR:$VAR"
Output:
[root@testbox ~]# ./var.sh
Please, insert VAR value:
foo
This is VAR:
[root@testbox ~]#
Thanks in advance!
EDIT:
Responding to @Etan Reisner's suggestion to use
var 2>&1 > >(tee $LOGV)
The only problem with this construction is that the log file doesn't receive everything...
[root@testbox ~]# ./var.sh
Please, insert VAR value:
foo
This is VAR:foo
[root@testbox ~]# cat log
Please, insert VAR value:
This is a variant of BashFAQ #24.
var 2>&1 | tee $LOGV
...like any shell pipeline, has the option to run the function var inside a subprocess -- and, in practice, behaves this way in bash. (The POSIX sh specification leaves the details of which pipeline components, if any, run inside the parent shell undefined).
Avoiding this is as simple as not using a pipeline.
var > >(tee "$LOGV") 2>&1
...uses process substitution (a ksh extension adopted by bash, not present in POSIX sh) to represent the tee subprocess through a filename (of the form /dev/fd/## on modern Linux) to which output can be redirected without moving the function into a pipeline.
If you want to ensure that tee exits before other commands run, use a lock:
#!/bin/bash
logv=/tmp/log
collect_var() {
echo "value for var:"
read -re var
}
collect_var > >(logv="$logv" flock "$logv" -c 'exec tee "$logv"') 2>&1
flock "$logv" -c true # wait for tee to exit
echo "This is var: $var"
Incidentally, if you want to run multiple commands with their output being piped in this way, you should invoke the tee only once, and feed into it as appropriate:
#!/bin/bash
logv=/tmp/log
collect_var() { echo "value for var:"; read -re var; }
exec 3> >(logv="$logv" flock "$logv" -c 'exec tee "$logv"') # open output to log
collect_var >&3 2>&3 # run function, sending stdout/stderr to log
echo "This is var: $var" >&3 # ...and optionally run other commands the same way
exec 3>&- # close output
flock "$logv" -c true # ...and wait for tee to finish flushing and exit.
