Cassandra: get OS type using CQL

Is it possible to get the OS Cassandra is installed on using CQL?
For example, in PostgreSQL you can run the query select version(); which returns:
PostgreSQL 11.12 (Debian 11.12-0+deb10u1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
Essentially, it returns the PostgreSQL version and the OS it is installed on.

Unfortunately, it isn't possible to do this in Cassandra through CQL.
Arguably, it isn't particularly relevant either, since in some deployments, such as K8ssandra.io, Cassandra runs inside Kubernetes.
There are also architectures where the coordinators are Stargate.io nodes, which are not Cassandra nodes in the cluster. Astra DB is an example of this, being serverless and truly cloud-native. Cheers!

Are you using Cassandra 4.0? If so, you should be able to query the os.name key from the system_views.system_properties table:
> SELECT * FROM system_views.system_properties WHERE name='os.name';
name | value
---------+----------
os.name | Mac OS X
(1 rows)
Although, as Erick pointed out, the info isn't terribly useful. You could be running in an Alpine Linux container on an Ubuntu host, but Cassandra would only have visibility into Alpine.
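For reference, the same virtual table exposes other JVM system properties, so you can pull related details with the same pattern. A sketch, assuming Cassandra 4.0+ (where system_views.system_properties exists); os.arch and java.version are standard JVM property names:

```sql
-- CPU architecture reported by the JVM
SELECT * FROM system_views.system_properties WHERE name = 'os.arch';

-- JVM version the node is running on
SELECT * FROM system_views.system_properties WHERE name = 'java.version';
```

Like os.name, these reflect only what the JVM inside the node's environment can see.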

Related

Why is the Spark Cassandra Connector so slow in Java code with a Cassandra cluster?

We have tested many scenarios with small data sets.
With a single Cassandra node (no cluster), everything is fine, but against a Cassandra cluster the same function takes about 15 seconds longer.
Our Java code is just like the sample code; it simply calls dataset.collectAsList() or dataset.head(10).
But the same logic in Scala, run in spark-shell, doesn't have this problem.
We have tested many JDKs and operating systems. Mac OS is fine, but both Windows and Linux (CentOS) show this problem.
The collectAsList and head functions try to call getHostName, which is an expensive operation. So we can't use an IP to connect to the Cassandra cluster; we have to use the HOSTNAME to connect, and then it works! The Spark Cassandra Connector code should fix this problem.
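A sketch of the workaround described above, expressed as connector configuration (the hostnames are hypothetical; spark.cassandra.connection.host is the standard Spark Cassandra Connector setting):

```properties
# spark-defaults.conf: point the connector at resolvable hostnames rather than
# raw IPs, so the reverse-DNS lookup triggered by collectAsList()/head() is cheap
spark.cassandra.connection.host  cas-node1.example.com,cas-node2.example.com
```

Making sure those names resolve quickly on the Spark workers (e.g. via /etc/hosts entries) is the other half of the fix.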

Cassandra ODBC Datastax working with Cassandra 2.1 but not 3.0 in Windows

I have tried connecting to a remote Cassandra from Windows 10 using the latest Simba-DataStax ODBC driver (trial version). I was successful with Cassandra 2.1 (I connected to a Cassandra docker, actually) but failed with Cassandra 3.0.15 and 3.11. I have installed the driver and I am able to see it in the Windows Data Sources tool (64-bit), under the System DSN tab.
When I specify the host, port and keyspace of my Cassandra 3.0 docker (exactly the same values that work all right for me with the Cassandra 2.1 docker) and press the "Test..." button to launch the connectivity test, I get a strange error that "not even protocol version 1 is available".
According to this web site, Simba says the driver is compatible with Cassandra 3.X. Could you think of any reason why this fails but 2.1 is successful? :-(
PS: I see other people complaining here but with a different error message (No hosts available for the control connection)
I fixed it! I think I was using the wrong version of the driver - I was using the DataStax driver, which apparently does not work with Cassandra 3.X. I have now downloaded the latest version of the ODBC driver from the Simba website (30-day trial version) and it is working :-)
The confusion came from the fact that I thought the Datastax driver and the Simba driver were the same as I read somewhere that "Simba and Datastax have partnered to develop a driver...".
Thank you very much Aaron anyway.

Migrating from Cassandra 2.2.0 to DSE 4.8.5 (Cassandra 2.1.3)

I have been building an application using Apache Cassandra 2.2.0 for some time now. We plan to start using DataStax Enterprise 4.8.5 (which is built on Apache Cassandra 2.1.3).
The problem is that, since this is a downgrade of the Cassandra version (2.2.0 -> 2.1.3), I am not able to read the SSTables created by Cassandra 2.2.0.
What can I do to have my old data available with DSE 4.8.5?
This is not supported. You should consider contacting DataStax for advice (that's one of the advantages of paying for DSE: you get someone to talk to about topics like this).
You'd almost certainly have to export the data and re-import it (either using sstable2json, or COPY TO + COPY FROM to export it to a CSV using cqlsh, or using something like Spark or CQLSSTableWriter to recreate 2.1 sstables).
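A sketch of the cqlsh route, assuming a keyspace ks and table my_table (both hypothetical names); the table schema has to be recreated on the DSE side before importing:

```sql
-- on the Cassandra 2.2.0 cluster: export to CSV
COPY ks.my_table TO '/tmp/my_table.csv' WITH HEADER = true;

-- on the DSE 4.8.5 (Cassandra 2.1.3) cluster, after CREATE TABLE: import
COPY ks.my_table FROM '/tmp/my_table.csv' WITH HEADER = true;
```

COPY is fine for small-to-medium tables; for large data sets the Spark or CQLSSTableWriter route will be considerably faster.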

Running remote cqlsh to execute commands on Cassandra Cluster

So I have a Cassandra cluster of 6 nodes on my Ubuntu machines, and now I have another machine running Windows Server 2008. I have installed DataStax Apache Cassandra on this new Windows machine, and I want to be able to run all the CQL commands from the Windows machine against the Ubuntu machines. So it's like remote command execution.
I tried opening cqlsh in cmd with the IP of one of my nodes and the port, like cqlsh 192.168.4.7 9160.
But I can't seem to make it work. Also, I don't want to add the new machine to my existing cluster. Please suggest.
Provided version 3.1.1 is not supported by this server (supported: 2.0.0, 3.0.5)
Any workaround you could suggest?
Basically, you have two options here. The harder one would be to upgrade your cluster (the tough, long-term solution). But there have been many improvements since 1.2.9 that you could take advantage of. Not to mention bugs fixed long ago that you may be running into.
The other, quicker option would be to install 1.2.9 on your Windows machine. Probably the easiest way to do this would be to zip up your Cassandra dir on Ubuntu (minus the data, commitlog, and saved caches dirs, of course), copy it to your Windows machine, and expand it. Then the cqlsh versions would match up, and you could solve your immediate problem.
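Before copying binaries over, one quick thing that may be worth trying is pinning the CQL version cqlsh negotiates: the error above lists 3.0.5 as supported, and cqlsh has a --cqlversion option. A sketch, using the IP and port from the question:

```
cqlsh 192.168.4.7 9160 --cqlversion=3.0.5
```

If that still fails, matching the cqlsh version to the cluster is the reliable fix.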

Two node Cassandra cluster (at latest rev) fails with "Not a time-based UUID"

I've been running a one-node Cassandra cluster (on Windows) at 1.1.3.
I added a 2nd node (on Ubuntu), also at 1.1.3.
When I start the first node all is well, but when I start the second node I get an error on the first node: "UnsupportedOperationException: Not a time-based UUID".
Researching this error, it seems like one you might get when mixing older and newer Cassandra versions in the same cluster, but that's not the case here.
The cassandra.yaml files on the two machines are vanilla (i.e. unchanged from download except for relevant IP addresses).
Any suggestions appreciated.
AFAIK, mixing nodes with different OSes in the same cluster isn't supported.
Read this answer by jbellis, one of the Cassandra creators.
The reason why this is happening is little-endian vs. big-endian byte ordering on Windows machines vs. Unix.

Resources