Using cqlsh with -f option - cassandra

Can we use cqlsh -f like this?
psql -U maint_sa -p 3254 postgres -f - << EOT
SHOW ALL;
EOT
With cqlsh I get the following:
cqlsh -f - << EOT
> DESCRIBE KEYSPACE ks1;
> EOT
Can't open '-': [Errno 2] No such file or directory: '-'

Have you tried echoing it to the command line:
echo "DESC KEYSPACE ks1; exit" | cqlsh
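A here-document also works without -f, since cqlsh reads statements from standard input. The mechanism is shown below with cat standing in for cqlsh (which needs a running cluster to try for real):

```shell
# 'cat' stands in for cqlsh here, just to show that the here-document
# arrives on standard input; with a running cluster the same input can
# be fed to plain 'cqlsh' (no -f needed).
cat <<EOT
DESCRIBE KEYSPACE ks1;
EOT
```

With a real cluster this becomes `cqlsh <<EOT ... EOT`, equivalent to the echo pipeline above.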

Kubectl with Bash command is always passed in LowerCase and not CamelCase

Consider the Bash code:
function dropMyDB() {
kubectl -n $1 exec -ti $1-CLUSSTER-0 -- psql -d MYDBNAME -U postgres -c "truncate table "$2";"
}
dropMyDB $1 "myTableNameInCamelCase"
When I execute the code it produces:
ERROR: relation "mytablenameincamelcase" does not exist
command terminated with exit code 1
Which means that the table name is not passed in its CamelCase form.
How can we fix this?
First
Escape your "$2", because it is nested inside another pair of double quotes:
postgres -c "truncate table "$2";"
# to
postgres -c "truncate table $2;"
# or
postgres -c "truncate table \"$2\";"
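The difference between the two quoting styles is easy to see with printf standing in for psql. PostgreSQL folds unquoted identifiers to lower case, which is why the identifier must still be wrapped in double quotes when it reaches psql:

```shell
t='myTableNameInCamelCase'
# the "inner" quotes close and reopen the outer string, so psql would
# receive the identifier unquoted and fold it to lower case:
printf '%s\n' "truncate table "$t";"
# escaped quotes survive, so psql receives the identifier quoted:
printf '%s\n' "truncate table \"$t\";"
```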
Second
You can verify that the issue is not with Bash:
function dropMyDB() {
echo "kubectl -n $1 exec -ti $1-CLUSSTER-0 -- psql -d MYDBNAME -U postgres -c \"truncate table \"$2\";\""
}
dropMyDB $1 "myTableNameInCamelCase"
Then
chmod +x script.sh
./script.sh foo
kubectl -n foo exec -ti foo-CLUSSTER-0 -- psql -d MYDBNAME -U postgres -c "truncate table "myTableNameInCamelCase";"
IMO it's not kubectl's fault:
fun(){ kubectl exec aaaaaaaaaaaaa -it -- echo "$1"; }
fun AdsdDasfsFsdsd
AdsdDasfsFsdsd
but probably psql's. Note that in SQL, single quotes create a string literal while double quotes quote an identifier, so to preserve the CamelCase the call needs escaped double quotes:
... psql -d MYDBNAME -U postgres -c "truncate table \"$2\";"

Running jq on a remote machine over ssh and overwrite the file

I am trying to create a file from the output of a jq command run over ssh.
ssh <server-Name> "jq '.credsStore = "ecr-login"' ~/.docker/config.json > ~/.docker/output.json "
It gives me following error:
bash: .docker/output.json: No such file or directory
Am I not running the command properly or is there any other problem?
ssh "$server" "bash -s" <<'EOF'
[[ -e ~/.docker/config.json ]] || {
  echo "ERROR: $HOME/.docker/config.json does not exist on the remote server" >&2
  exit 1
}
jq '.credsStore = "ecr-login"' \
  <~/.docker/config.json \
  >~/.docker/output.json
EOF
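For the one-liner form, the fix is to escape the inner double quotes so the local shell does not consume them. What each variant expands to can be checked locally with printf (no ssh involved):

```shell
# unescaped: the local shell pairs the quotes up, so the quotes around
# ecr-login never reach the remote side
bad="jq '.credsStore = "ecr-login"' ~/.docker/config.json"
# escaped: the quotes survive for the remote shell and jq
ok="jq '.credsStore = \"ecr-login\"' ~/.docker/config.json"
printf '%s\n' "$bad"
printf '%s\n' "$ok"
```

So the escaped one-liner would be `ssh <server-Name> "jq '.credsStore = \"ecr-login\"' ~/.docker/config.json > ~/.docker/output.json"`.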

shell script is not working when it is called using cron scheduler

I have this sh script
SPTH="/home/db_backup/log/log_"$(date +'%d-%m-%Y %H:%M')".txt"
echo "===DUMP started at $(date +'%d-%m-%Y %H:%M:%S')===" >> "$SPTH";
DBNAME="test"
PGPASSWORD=postgres psql -U postgres -d postgres -c "CREATE DATABASE $DBNAME";
PGPASSWORD=postgres psql -U postgres -d $DBNAME -c "ALTER SCHEMA public RENAME TO public_old";
for entry in /home/SQL/*.*
do
echo "$entry"
SUBSTRING=$(echo $entry| cut -d'/' -f 4)
#echo $SUBSTRING
SCHEMA=$(echo $SUBSTRING| cut -d'.' -f 1)
echo $SCHEMA
echo "===================== CREATING public SCHEMA =====================" >> "$SPTH";
PGPASSWORD=postgres psql -U postgres -d $DBNAME -c "Create Schema public";
echo "===================== RESTORING DATA OF $SCHEMA =====================" >> "$SPTH";
PGPASSWORD=postgres psql -U postgres -d $DBNAME -f "$entry";
echo "===================== DELETING SCHEMA $SCHEMA=====================" >> "$SPTH";
PGPASSWORD=postgres psql -U postgres -d $DBNAME -c "DROP SCHEMA $SCHEMA CASCADE";
echo "===================== RENAMING SCHEMA PUBLIC TO $SCHEMA=====================" >> "$SPTH";
PGPASSWORD=postgres psql -U postgres -d $DBNAME -c "ALTER SCHEMA public RENAME TO $SCHEMA";
echo "===================== RENAMING SQL FILE =====================" >> "$SPTH";
mv /home/SQL/{$SUBSTRING,DONE--$SUBSTRING};
echo "===================== DONE =====================" >> "$SPTH";
done
PGPASSWORD=postgres psql -U postgres -d $DBNAME -c "ALTER SCHEMA public_old RENAME TO public";
echo "===DUMP finished at $(date +'%d-%m-%Y %H:%M:%S')===" >> "$SPTH";
When I run this script manually with
sh script.sh
it works fine, exactly as I expected, running each and every command. But when I run it using the cron scheduler, the Postgres-related commands are not executed. I also want a log of every command in this shell script.
I want to run this script using the cron scheduler.
Where am I going wrong?
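A common cause (an assumption here, since the crontab isn't shown): cron runs jobs with a minimal environment, typically PATH=/usr/bin:/bin and no login profile, so psql may simply not be found. Setting PATH in the crontab (or using absolute paths such as /usr/bin/psql inside the script) and redirecting all output to a log usually reveals and fixes the problem. A crontab sketch, with paths that are assumptions to adjust for your installation:

```crontab
# crontab fragment (sketch): give cron a usable PATH and capture all output
PATH=/usr/local/bin:/usr/bin:/bin
0 2 * * * /bin/sh /home/db_backup/script.sh >> /home/db_backup/cron.log 2>&1
```

The trailing 2>&1 also gives you the per-command log you asked for, since everything the script prints ends up in cron.log.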

Import cassandra schema script doesn't run in docker when using volumes

I have a problem using Cassandra in Docker.
I've created a Dockerfile like this:
----------------------Dockerfile----------------------------
FROM spotify/cassandra:base
COPY cassandra-schema.cql /tmp/
COPY cassandra-init.sh /usr/local/bin/
COPY cassandra-singlenode.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/cassandra-init.sh
RUN chmod +x /usr/local/bin/cassandra-singlenode.sh
#import schema
RUN /usr/local/bin/cassandra-init.sh
EXPOSE 9160
ENTRYPOINT ["/usr/local/bin/cassandra-singlenode.sh"]
If I use
docker run --name cassandradb cassandra
everything works properly, but if I use
docker run --name cassandradb -v /opt/argus/cassandra/data/:/var/lib/cassandra/data -v /opt/argus/cassandra/commitlog:/var/lib/cassandra/commitlog cassandra
Cassandra starts but /usr/local/bin/cassandra-init.sh doesn't import my schema.
Any idea?
These are my files contents
------------------cassandra-init.sh-----------------------
echo "===================================================="
echo "starting running cqlsh"
echo "===================================================="
cassandra &
while : ;do
# Get the status of this machine from the Cassandra nodetool
STATUS=`nodetool status | grep 'UN' | awk '{print $1}'`
echo $STATUS
# Once the status is Up and Normal, then we are ready
if [ $STATUS = "UN" ]; then
cqlsh -f /tmp/cassandra-schema.cql
break
fi
sleep 1;
done
----------------------------cassandra-singlenode.sh--------------------------------
echo "=============================================== Change configuration ====================================================="
IP=`hostname --ip-address`
SEEDS=`env | grep CASS[0-9]_PORT_9042_TCP_ADDR | sed 's/CASS[0-9]_PORT_9042_TCP_ADDR=//g' | sed -e :a -e N -e 's/\n/,/' -e ta`
if [ -z "$SEEDS" ]; then
SEEDS=$IP
fi
echo "Listening on: "$IP
echo "Found seeds: "$SEEDS
# Setup Cassandra
CONFIG=/etc/cassandra/
sed -i -e "s/^listen_address.*/listen_address: $IP/" $CONFIG/cassandra.yaml
sed -i -e "s/^rpc_address.*/rpc_address: 0.0.0.0/" $CONFIG/cassandra.yaml
sed -i -e "s/- seeds: \"127.0.0.1\"/- seeds: \"$SEEDS\"/" $CONFIG/cassandra.yaml
sed -i -e "s/# JVM_OPTS=\"$JVM_OPTS -Djava.rmi.server.hostname=<public name>\"/ JVM_OPTS=\"$JVM_OPTS -Djava.rmi.server.hostname=$IP\"/" $CONFIG/cassandra-env.sh
echo "=========================================================================================================================="
echo "starting running cassandra server"
echo "=========================================================================================================================="
cassandra &
while :
do
echo "Cassandra running, Press [CTRL+C] to stop.."
sleep 1
done
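A likely cause (an assumption, since the build log isn't shown): `RUN /usr/local/bin/cassandra-init.sh` executes at image build time, so the schema is written into the image's /var/lib/cassandra. When empty host directories are later mounted over /var/lib/cassandra/data and /var/lib/cassandra/commitlog, they shadow that baked-in data and the keyspace disappears. The fix is to load the schema at container start, after the volumes are mounted, for example by dropping the RUN line and invoking the wait-and-import loop from cassandra-singlenode.sh:

```dockerfile
FROM spotify/cassandra:base
COPY cassandra-schema.cql /tmp/
COPY cassandra-init.sh /usr/local/bin/
COPY cassandra-singlenode.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/cassandra-init.sh /usr/local/bin/cassandra-singlenode.sh
# no RUN of the init script here: the schema import must happen at
# run time, after the data/commitlog volumes are mounted
EXPOSE 9160
ENTRYPOINT ["/usr/local/bin/cassandra-singlenode.sh"]
```

In cassandra-singlenode.sh, after `cassandra &`, you would then run the same nodetool-status wait loop from cassandra-init.sh followed by `cqlsh -f /tmp/cassandra-schema.cql` (sketch only; guard it so the schema is not re-created on every restart).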

shell save MySQL query result to a file

I have a script which runs a MySQL query, something like this:
#!/bin/sh
user="root"
pwd="test"
database="mydb"
command="long...
long... query in
multiple lines"
mysql -u $user -p$pwd << EOF
use $database;
$command
EOF
This query does a backup from a table to another. Is it possible to save the query result in a file without using mysql INTO OUTFILE? I only want to know if the query failed or succeeded.
If it succeeded something like 1 row(s) affected or if it failed Error Code: 1064. You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ...
Update
Solution 1: $( ) runs the commands inside it and captures their output, so assigning it to a variable stores the result of those commands. Then simply write that variable out to a file.
output=$( mysql -u $user -p$pwd << EOF
use $database;
$command
EOF
)
echo "$output" >> /events/mysql.log
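Note that command substitution only captures stdout; adding 2>&1 inside it captures the error messages too, and $? immediately after the substitution holds mysql's exit status, which answers the failed-or-succeeded question directly. A sketch of the pattern, with `sh -c` standing in for the mysql invocation so it does not depend on a running server:

```shell
# 'sh -c ...' stands in for: mysql -u $user -p$pwd << EOF ... EOF
output=$(sh -c 'echo "1 row(s) affected"' 2>&1)
status=$?
echo "$output" >> /tmp/mysql.log        # assumption: log path
if [ "$status" -ne 0 ]; then
  echo "query failed with status $status" >&2
fi
```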
Solution 2: use the system tee command to send the output of the commands to a file; this needs to be done from the crontab, like this:
*/1 * * * * root sh /events/mysql.sh |tee -a /events/mysql.log
http://forums.mysql.com/read.php?10,391070,391983#msg-391983
My working solution:
user="root"
pwd="root12345"
database="mydb"
command="long ...long query"
mysql -u $user -p$pwd << EOF >> /events/mysql.log 2>&1
use $database;
$command;
EOF
This should work:
output=$( mysql -u $user -p$pwd << EOF
use $database;
$command
EOF
)
echo "$output" >> outfile
The correct way to handle error messages is through stderr. Use 2>&1 to capture them.
So, add this to the end of your script:
>> install.log 2>&1
you can also do it like this:
#!/bin/sh
user="root"
pwd="test"
database="mydb"
command="long...
long... query in
multiple lines"
mysql -u$user -p$pwd -D$database -e "$command" > file
It's easier to use the MySQL client's tee command to send output to a file.
To start logging, just use the tee command:
tee /tmp/my.out;
#!/bin/sh
user="root"
pwd="test"
database="mydb"
pathToFile="/tmp/my.out"
command="long...
long... query in
multiple lines"
mysql -u $user -p$pwd << EOF
tee $pathToFile
use $database;
$command
EOF
EDIT
Since tee reportedly does not work inside a script, you can also log the output by using tee directly when running the script:
mysql_bash.sh > >(tee -a script.log)
