I would like to use cqlsh with SSL. I followed the procedure recommended by the DataStax documentation, and it worked well.
However, I would like to change the location of the cqlshrc file, and not place it in /myHomeDirectory/.cassandra as described in the Cassandra documentation.
How can this be done?
Thanks for the help.
You can do this by specifying the --cqlshrc option and the new cqlshrc file location when running cqlsh from the command line.
bin/cqlsh 192.168.0.1 -u aaron -p flynnL1ves --cqlshrc=../stackoverflow/cqlshrc
Here's a link to the docs on the Apache Cassandra site for more info: https://cassandra.apache.org/doc/latest/cassandra/tools/cqlsh.html#cqlshrc
Edit -
The only other way to do this is to modify this line in bin/cqlsh.py:
# BEGIN history/config definition
HISTORY_DIR = os.path.expanduser(os.path.join('~', '.cassandra'))
Cqlsh stores the cqlsh_history file in ~/.cassandra, and it also uses that HISTORY_DIR definition to set the default location of the cqlshrc file. If you don't want to specify the cqlshrc file on the command line, you'll need to override that default location by changing the directory name(s) in the os.path.join above.
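For illustration, one way to make that change from the shell, assuming a tarball install and that you want the new default directory to be /etc/cassandra (both the sed approach and the target directory are just examples):
# back up cqlsh.py, then point the default config/history directory at /etc/cassandra
cp bin/cqlsh.py bin/cqlsh.py.bak
sed -i "s|os.path.join('~', '.cassandra')|os.path.join('/etc', 'cassandra')|" bin/cqlsh.py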
Note that this is definitely one of those "proceed at your own risk" moments.
I'm getting a connection error, "unable to connect to any server", when I run the ./cqlsh command from the bin directory of my node.
I'm using an edited yaml file containing only the following (all other values present in the default yaml have been omitted):
cluster name, num tokens, partitioner, data file directories, commitlog directory, commitlog sync, commitlog sync period, saved cache directory, seed provider info, listen address and endpoint snitch.
Is this error because I've not included some important parameter in the yaml, like rpc_address? Please help.
OS: RHEL 6.9
Cassandra: 3.0.14
The cassandra.yaml file can have modified values, but you should not delete the other lines from the default file and make your own stripped-down yaml. And yes, rpc_address is needed in the yaml file.
When writing directory settings like data_file_directories, you should follow the same indentation as:
data_file_directories:
    - /path/to/access
Cassandra is very strict about its indentation in the yaml file. I once faced an issue due to exactly this kind of wrong indentation in data_file_directories.
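If you want to sanity-check the indentation before starting Cassandra again, one quick option is to simply parse the file (this assumes a Python interpreter with the PyYAML module is available on the node; adjust the path to your cassandra.yaml):
# a malformed file (e.g. bad indentation) will raise a parse error instead of printing OK
python -c "import yaml, sys; yaml.safe_load(open(sys.argv[1]))" conf/cassandra.yaml && echo "cassandra.yaml parses OK"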
Finally, run ./cqlsh, providing the IP address if it is a remote server.
Check the nodetool status and confirm whether the node is up and normal.
Check the following:
Cassandra is running: nodetool status / ps -elf | grep cassa
Port 9042 (default for CQL) is not used by something else: netstat -an | grep 9042
Try running cqlsh `hostname -i`
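If the node is up and 9042 is listening but cqlsh still can't connect, one more end-to-end check is to run a trivial statement against the node's own IP (this assumes the default port and that rpc_address resolves to that address):
# -e runs one statement and exits; success confirms the CQL port is reachable from this host
cqlsh "$(hostname -i)" -e "DESCRIBE CLUSTER;"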
Good luck.
I am trying to set up environment variables so that any user on a particular server can run commands like nodetool or cqlsh from anywhere in the Linux file system. This would save the effort of traversing to the bin directory every time.
How can we achieve this? My DSE 4.8 is a tarball install.
Nodetool is usually available to any user that has execute permissions on your Linux boxes.
For cqlsh, you can set any configuration inside the cqlshrc file (usually found in $HOME/.cassandra/cqlshrc); we have used it to enable client-to-node encryption, but it has more configurable options.
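As an illustration of the encryption use case, a minimal cqlshrc enabling client-to-node SSL might look like the sketch below (the option names follow the sample cqlshrc shipped with Cassandra, but the certificate path is just a placeholder; check the sample file for your version):
mkdir -p "$HOME/.cassandra"
cat > "$HOME/.cassandra/cqlshrc" <<'EOF'
; sketch only: enable SSL for cqlsh and validate the server certificate
[connection]
ssl = true

[ssl]
certfile = /path/to/ca_cert.pem
validate = true
EOF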
To set up the environment variables, just follow these steps as the root user:
# vi /etc/profile.d/cassandra.sh
Add the following lines to the cassandra.sh file-
export CASSANDRA_HOME=/opt/apache-cassandra-3.0.8
export CASSANDRA_CONF_DIR=/opt/apache-cassandra-3.0.8/conf
Here /opt/ is the directory where I've extracted the apache-cassandra-3.0.8-bin.tar.gz tarball.
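Since the goal is to run nodetool and cqlsh from anywhere, you would typically also append the bin directory to PATH in the same cassandra.sh (same example install location as above):
export PATH=$PATH:$CASSANDRA_HOME/bin   # lets every user run nodetool/cqlsh without cd-ing into bin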
After adding those lines to cassandra.sh, save and exit. Then-
# source /etc/profile.d/cassandra.sh
I am using DataStax Cassandra 1.2.
The output.log file is getting saved at the location /var/log/cassandra/output.log.
How do I change the folder location for output.log?
I have been successful in changing the system.log folder location via:
/etc/cassandra/log4j-server.properties
Please help. Thanks
Jaskaran
The Cassandra daemon uses jsvc, which pipes all standard/error output to the file specified by -outfile (with the error output directed to &1, it goes to the same file). In the Cassandra Debian package this is configured in the init script: https://github.com/apache/cassandra/blob/cassandra-1.2/debian/init#L141
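So the place to change it is that init script rather than log4j-server.properties. For example, assuming your package put the init script at /etc/init.d/cassandra, this should show the line to edit:
grep -n 'output.log' /etc/init.d/cassandra   # locate the -outfile setting that controls where output.log goes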
When I start Zeppelin on AWS, it starts on port 8080, but the Spark Master is already there, so it says the port is already in use... I tried changing Zeppelin's port in the config files, in "zeppelin-site.xml.template":
<property>
<name>zeppelin.server.port</name>
<value>8050</value>
<description>Server port.</description>
</property>
I did the same in "zeppelin-env.sh.template", adding the env line for the same port.
When I start Zeppelin, I get OK, but if I look at the open ports, 8050 doesn't appear anywhere, so it looks like it is still trying to deploy on port 8080, where the Spark Master is...
Has anyone managed to get Zeppelin to stop ignoring the changed port?
Thanks
You likely need to copy the .template files, e.g. copy your modified zeppelin-env.sh.template to zeppelin-env.sh and zeppelin-site.xml.template to zeppelin-site.xml.
From your Zeppelin installation dir (for example, on my computer it's zeppelin-0.7.3-bin-all):
cp conf/zeppelin-env.sh.template conf/zeppelin-env.sh
vi conf/zeppelin-env.sh
Add the following parameter:
export ZEPPELIN_PORT=8180 # Add this line to zeppelin-env.sh
Restart Zeppelin and you should now be able to access it at:
http://localhost:8180
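Alternatively, if you prefer the XML route, copy and edit zeppelin-site.xml instead (the property itself is the one shown in the question):
cp conf/zeppelin-site.xml.template conf/zeppelin-site.xml
vi conf/zeppelin-site.xml   # set zeppelin.server.port to the port you want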
Indeed, both documented ways should work:
create conf/zeppelin-site.xml with the zeppelin.server.port property
create conf/zeppelin-env.sh and export the ZEPPELIN_PORT env variable
and then restart Zeppelin.
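To confirm the change took effect after the restart, you can check which port the server actually bound to, e.g. (8050 being whatever port you configured):
bin/zeppelin-daemon.sh restart
netstat -tlnp 2>/dev/null | grep 8050   # the Zeppelin process should now be listening on the new port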
The accepted answer seems to be a little old, so I'm putting here the solution I found, in case it's useful for others:
It is possible to define variables (including the port) either in conf/zeppelin-env.sh or in conf/zeppelin-site.xml. The zeppelin-env.sh settings take priority if both are defined.
Source:
http://zeppelin.apache.org/docs/snapshot/install/configuration.html
I have a schema file for Cassandra. I'm using a Windows 7 machine (Cassandra is on this machine as well - 1 node). I want to load the schema with cqlsh. So far I have not been able to find out how. I was hoping to be able to pass the file to cqlsh: cqlsh mySchemaFile. However, since I run on Windows, to start cqlsh I do the following:
python "C:\Program Files (x86)\DataStax Community\apache-cassandra\bin\cqlsh" localhost 9160
Even though I have cqlsh in my path, when called like this from Python it needs the full path.
I tried to add the file name in there, but no luck so far.
Is this even possible?
cqlsh takes a file to execute via the -f or --file option, not as a positional argument (like the host and port), so the correct form would be:
python "C:\Program Files (x86)\DataStax Community\apache-cassandra\bin\cqlsh" localhost 9160 -f mySchemaFile
Note: I'm not 100% sure about whether you'd use -f or \f in Windows.