How to run cql files (.cql) from within cqlsh? - cassandra

The problem that I am having is that I want to run the following command (and I can't):
cqlsh < cql_directory/cql_create_stuff.cql
Because I have not logged in to cqlsh.
So I logged in:
cqlsh -u 'my_username' -p 'my_super_secret_password'
and then tried running the command inside the cqlsh shell, but it just responds with a syntax error.
Basically, how do I log in to cqlsh and run an external CQL script from my file system?

Use the SOURCE command:
http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/source_r.html
You can use the -f option as well to execute commands from a file:
http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/cqlsh.html

Assuming that the path of the file with the CQL commands is /mydir/myfile.cql, there are two ways:
If you are not logged in to cqlsh:
cqlsh -u 'my_username' -p 'my_password' -f /mydir/myfile.cql
If you are logged in to cqlsh:
SOURCE '/mydir/myfile.cql'
Notice the single quotation marks. The shorthand notation for $HOME (for example, '~/mydir/myfile.cql') is also supported.
Both ways also work with relative paths (to the current directory).
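For example, with relative paths (a quick sketch using the same assumed layout):
cd /mydir
cqlsh -u 'my_username' -p 'my_password' -f myfile.cql
or, from within a cqlsh session started in /mydir:
SOURCE 'myfile.cql'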

Assuming your file is named "tables.cql" and placed at /files/tables.cql:
A - Locally
cqlsh -f /files/tables.cql
B - Connecting To A Docker Container Running Cassandra
Assuming the name of the Docker container running Cassandra is "cas" (keep in mind that you can also use the hash ID of the Docker container if no name is assigned to it):
docker exec -it cas cqlsh -f /files/tables.cql
As stated in other answers, the -u and -p options can be added in order to pass a username/password combination.
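If tables.cql lives on the host rather than inside the container, one approach (a sketch; the container name cas and the paths follow the assumptions above) is to copy the file in first with docker cp:
docker cp /files/tables.cql cas:/files/tables.cql
docker exec -it cas cqlsh -u 'my_username' -p 'my_password' -f /files/tables.cql
Note that the path given to -f is resolved inside the container, not on the host.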

This is for Windows systems.
Suppose your Cassandra directory is:
C:\Program Files\DataStax-DDC\apache-cassandra\bin
and your .cql file (CQL query file) is at:
D:\ril\s\developement\new one\excel after parse\Women catalogue template.cql
Now follow the steps below to import the CQL file:
Open a command prompt (cmd).
Change to the directory where the CQL file is located (cd "..\ril\sizeguide\developement\new one\excel after parse").
Run the command below:
"c:\Program Files\DataStax-DDC\apache-cassandra\bin\cqlsh.bat" < "Women catalogue template.cql"
And it's done.
Important note:
Make sure column values do not contain a bare single quote ' character, as in ('If you don't find a exact match, go for the next large size'); otherwise the import will fail.
If you want a single quote to be inserted, write it twice, as below, and Cassandra will treat it as a single one:
('If you don''t find a exact match, go for the next large size')
All text columns should be enclosed in single quotes, like 'Sale category'. For an empty value, use two single quotes ''.
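To try the escaping rule quickly without a file, the same doubled single quote works in a statement passed via cqlsh -e (a sketch; the keyspace, table, and column names are made up):
cqlsh -e "INSERT INTO shop.catalogue (id, note) VALUES (1, 'If you don''t find a exact match, go for the next large size');"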

Related

don't understand atquery command script

I'm using a big-data database. One of its tutorials recommends using the shell script below in order to run queries:
#!/bin/sh
# this will launch the real atquery program with the given .sql file
# note: please adjust INSTALLNAME, HOST and PORT to reflect your installation
/home/lms/INSTALLNAME/atquery HOST:PORT $*
Then, start runnable .sql files like the following:
#!/usr/local/bin/runatquery
select count(*) from mytable during all
I don't understand the $* part of /home/lms/INSTALLNAME/atquery HOST:PORT $*. What does $* do?
This was supposed to create a shell script to run a query, but another problem is that these are two files (I suppose, since there are two #! lines), so how will these two files help me run queries? I suppose a script with the code below in it would do the job better and without confusion:
#!/bin/sh
/home/lms/INSTALLNAME/atquery HOST:PORT -e 'select count(*) from mytable during all'
You have to create the first script as recommended (you didn't include the instructions, which probably appear right before the script) as a file with the executable bit set, changing INSTALLNAME, HOST and PORT to match your installation.
$* expands to all parameters passed to the script.
The second file is an example of how you can create scripts that are run by runatquery: executing it makes the kernel see the #!/usr/local/bin/runatquery line and invoke runatquery with the file's own path as an argument, which $* then forwards to atquery.
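As a quick illustration of how $* expands (a minimal sketch; demo.sh and its arguments are made up):
#!/bin/sh
# demo.sh - print whatever arguments the script receives
echo "would forward to atquery: $*"
Running ./demo.sh report.sql --fast prints: would forward to atquery: report.sql --fast. In the same way, the wrapper forwards the .sql file's path to atquery.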

PostgreSQL CSV import from command line

I've been using the psql Postgres terminal to import CSV files into tables using the following
COPY tbname FROM
'/tmp/the_file.csv'
delimiter '|' csv;
which works fine except that I have to be logged into the psql terminal to run it.
I would like to know if anyone knows of a way to run a similar command from the Linux shell command line, in the way that Postgres allows a shell command like the one below:
/opt/postgresql/bin/pg_dump dbname > /tmp/dbname.sql
This allows dumping a database from the Linux shell without being logged in to the psql terminal.
The solution in the accepted answer will only work on the server and when the user executing the query will have permissions to read the file as explained in this SO answer.
Otherwise, a more flexible approach is to replace the SQL COPY command with psql's "meta-command" \copy, which takes all the same options as the "real" COPY but is run inside the client (with no need for a trailing ;):
psql -c "\copy tbname FROM '/tmp/the_file.csv' delimiter '|' csv"
As per docs, the \copy command:
Performs a frontend (client) copy. This is an operation that runs an SQL COPY command, but instead of the server reading or writing the specified file, psql reads or writes the file and routes the data between the server and the local file system. This means that file accessibility and privileges are those of the local user, not the server, and no SQL superuser privileges are required.
In addition, if the_file.csv contains a header in its first line, it can be recognized by adding header at the end of the above command:
psql -c "\copy tbname FROM '/tmp/the_file.csv' delimiter '|' csv header"
As stated in The PostgreSQL Documentation (II. PostgreSQL Client Applications - psql), you can pass a command to psql (the PostgreSQL interactive terminal) with the -c switch. Your options are:
1. Client-side CSV: \copy meta-command
performs the SQL COPY command, but the file is read on the client and the content is routed to the server:
psql -c "\copy tbname FROM '/tmp/the_file.csv' delimiter '|' csv"
(client-side option originally mentioned in this answer)
2. Server-side CSV: SQL COPY command
reads the file on the server (current user needs to have the necessary permissions):
psql -c "COPY tbname FROM '/tmp/the_file.csv' delimiter '|' csv;"
The DB roles needed for reading the file on the server:
COPY naming a file or command is only allowed to database superusers or users who are granted one of the default roles pg_read_server_files, pg_write_server_files, or pg_execute_server_program.
Also, the PostgreSQL server process itself needs to have access to the file.
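If your role lacks these privileges, a superuser can grant one of them (a sketch; db_username is a placeholder):
psql -U postgres -c "GRANT pg_read_server_files TO db_username;"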
To complete the previous answer, I would suggest:
psql -d your_dbname --user=db_username -c "COPY tbname FROM '/tmp/the_file.csv' delimiter '|' csv;"
The most flexible way is to use a shell HERE document, which allows you to use shell variables inside your query, even inside (double or single) quotes:
#!/bin/sh
# connection and file parameters (adjust to your setup)
THE_USER=moi
THE_DB=stuff
THE_TABLE=personnel
PSQL=/opt/postgresql/bin/psql
THE_DIR=/tmp
THE_FILE=the_file.csv
# feed the COPY command to psql through a here-document, so the shell
# expands the variables even inside the quoted file path
${PSQL} -U ${THE_USER} ${THE_DB} <<OMG
COPY ${THE_TABLE} FROM '${THE_DIR}/${THE_FILE}' delimiter '|' csv;
OMG
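Assuming the sketch above is saved as import_csv.sh (a name chosen purely for illustration), it can be run with:
chmod +x import_csv.sh
./import_csv.sh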

Cygwin mysqldump complete-insert

I am using the current Cygwin version and have installed the MySQL package 5.5.21. When using mysqldump I have the problem that the INSERT statement puts all rows on one line. I have already tried the following commands, but they seem to have no effect on the output:
mysqldump --opt --extended-insert --complete-insert ....
mysqldump -c ...
Does anyone have an idea how I can force mysqldump to create an INSERT for each data row?
--extended-insert is what causes multiple rows on each line. It's also part of --opt.
Try adding --skip-extended-insert.
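For example (a sketch; the database and table names are placeholders):
mysqldump --opt --skip-extended-insert --complete-insert DATABASE TABLE > dump.sql
Since --opt implies --extended-insert, the explicit --skip-extended-insert is what switches the dump to one INSERT statement per row.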

shell script, for loop, ssh and alias

I'm trying to do something like this: I need to take a backup from 4 blades, and all backups should be stored under /home/backup/esa, which contains 4 directories named after the nodes (sc-1, sc-2, pl-1, pl-2). Each directory should contain the respective node's backup information.
But I see that whichever node I execute the command from, only that node's data is copied to all 4 directories. Any idea why this happens? My script is like this:
for node in $(grep "^node" /cluster/etc/cluster.conf | awk '{print $4}');
do echo "Creating backup fornode ${node}";
ssh $node source /etc/profile.d/bkUp.sh;
asBackup -b /home/backup/esa/${node};
done
Your problem is this piece of the code:
ssh $node source /etc/profile.d/bkUp.sh;
asBackup -b /home/backup/esa/${node};
It does:
Create a remote shell on $node
Execute the command source /etc/profile.d/bkUp.sh in the remote shell
Close the remote shell and forget about anything done in that shell!!
Run asBackup on the local host.
This is not what you want. Change it to:
ssh "$node" "source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}'"
This does:
Create a remote shell on $node
Execute the command(s) source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}' on the remote host
Make sure that /home/backup/esa/${node} is an NFS mount (otherwise, the files will only be backed up to a directory on the remote host).
Note that /etc/profile is a very bad place for backup scripts (or their config). Consider moving the setup/config to /home/backup/esa which is (or should be) shared between all nodes of the cluster, so changing it in one place updates it everywhere at once.
Also note the usage of quotes: The single and double quotes make sure that spaces in the variable node won't cause unexpected problems. Sure, it's very unlikely that there will be spaces in "$node" but if there are, the error message will mislead you.
So always quote properly.
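A quick way to see the difference (the hostname is illustrative):
ssh some-node echo remote; echo local      # "echo local" runs on the local machine
ssh some-node "echo remote; echo local"    # both commands run on some-node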
The formatting of your question is a bit confusing, but it looks as if you have a quoting problem. If you do
ssh $node source /etc/profile.d/bkUp.sh; esaBackup -b /home/backup/esa/${node}
then the command source is executed on $node. After the command finishes, the remote connection is closed, and with it the shell that contains the result of sourcing /etc/profile.d/bkUp.sh. The esaBackup command is then run on the local machine; it won't see anything that was set up in bkUp.sh.
What you need to do is put quotes around all the commands you want the remote shell to run -- something like
ssh $node "source /etc/profile.d/bkUp.sh; esaBackup -b /home/backup/esa/${node}"
That will make ssh run the full list of commands on the remote node.

Proper format for a mysqldump dynamic filename in a cron?

I have a crontab entry that errors out every time it runs, although the command works fine in the shell. The problem is the format I'm using to insert the date into the filename of the database backup automatically. Does anyone know the syntax I need to use to get cron to let me insert the date into the filename?
mysqldump -hServer -uUser -pPassword Table | gzip >
/home/directory/backups/table.$(date +"%Y-%m-%d").gz
Thanks in advance!
What about something like this for the "command" part of the crontab:
mysqldump --host=HOST --user=USER --password=PASSWORD DATABASE TABLE | gzip > /tmp/table.`date +"\%Y-\%m-\%d"`.gz
What has changed from the OP's version is the escaping of the date format:
date +"\%Y-\%m-\%d"
(I also used backticks, but that shouldn't make much of a difference.) The escaping is needed because, in a crontab, a bare % is treated as a newline and everything after the first % is fed to the command's standard input, so each % must be preceded by a backslash.
(Another solution would be to put your original command in a shell script and execute that from the crontab instead of the raw command; it would probably be easier to read/write ^^)
The most typical reason for "works in the shell but not in cron" is that the commands you try to execute are not in cron's PATH. The reason is that the shell invoked from cron doesn't load the same files as your login shell.
Fix: add an absolute path to each command you try to execute.
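Alternatively, a sketch of setting PATH explicitly at the top of the crontab so bare command names resolve (the paths and schedule are illustrative):
PATH=/usr/local/bin:/usr/bin:/bin
0 2 * * * mysqldump -hServer -uUser -pPassword Table | gzip > /home/directory/backups/table.`date +"\%Y-\%m-\%d"`.gz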
The second thing I notice in your command: the syntax for running your date command doesn't look very portable. Change it to backticks, or put your whole command into a shell script (you can use it to set your PATH, too) and execute that script from cron.
EDIT:
While writing my original reply my keyboard layout didn't have backticks, so check what Pascal wrote.
An example of what you could do with a shell script:
Copy the following to /usr/local/bin/dumptable.sh
#!/bin/sh
# inside a script, cron never sees the % characters, so they don't need escaping here
/usr/bin/mysqldump --host=HOST --user=USER --password=PASSWORD DATABASE TABLE | /bin/gzip > /tmp/table.`/bin/date +"%Y-%m-%d"`.gz
and then put /usr/local/bin/dumptable.sh into cron.
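For example, a crontab entry that runs the script daily at 02:00 (the schedule is arbitrary):
0 2 * * * /usr/local/bin/dumptable.sh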
