OpsCenter with HTTPS kills session when clicking on Spark Console

I have a DataStax Enterprise cluster running on 2 AWS nodes. DSE is installed in enterprise mode and one of the nodes is configured in Analytics mode.
Everything was working normally until I followed the steps outlined here to enable HTTPS for OpsCenter: http://docs.datastax.com/en/opscenter/5.0/opsc/configure/opscEnablingAuth.html
OpsCenter authentication is now working fine. However, if I click the Spark Console hyperlink of the Analytics node, the raw text of the Spark job details is shown, but the page's CSS and images are gone; looking at Chrome's developer tools, it appears I'm getting an access-denied error on those resources. Also, as soon as I click the link and the Spark Console popup opens, the OpsCenter tab kills my session and logs me out. I was able to observe the same behavior in Chrome and IE.
Instance: m3.large
AMI: DataStax Auto-Clustering AMI 2.6.3-1404-hvm - ami-8b392cbb
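For context, enabling authentication and HTTPS for the OpsCenter web UI per that documentation comes down to a few entries in opscenterd.conf. The snippet below is only a sketch: the key/certificate paths and the 8443 SSL port are the defaults suggested in the docs rather than values taken from this cluster, and opscenterd needs a restart after changing them.

[webserver]
port = 8888
interface = 0.0.0.0
# HTTPS for the OpsCenter UI
ssl_keyfile = /var/lib/opscenter/ssl/opscenter.key
ssl_certfile = /var/lib/opscenter/ssl/opscenter.pem
ssl_port = 8443

[authentication]
# turns on the OpsCenter login page
enabled = True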

I've reproduced this issue using OpsCenter 5.2 and DSE 4.7. I've created a ticket in our internal tracking system to address this issue; the reference number for that ticket is OPSC-6606.
Thanks for bringing this issue to our attention!

Related

How do I connect to Cassandra with DBeaver Community Edition?

Has anyone had any success with connecting to a Cassandra cluster using DBeaver Community Edition? I've tried to follow this post, but haven't had any success. I have to have authentication enabled, and I get an error saying:
Authentication error on host /x.x.x.x:9042: Host /x.x.x.x:9042 requires authentication, but no authenticator found in Cluster configuration
Overview
DataStax offers the JDBC driver from Magnitude (formerly Simba) to users at no cost, so you should be able to use it with DBeaver.
These are the high-level steps for connecting to a Cassandra cluster with DBeaver:
Download the Simba JDBC driver from DataStax
Import the Simba JDBC driver
Create a new connection to your cluster
Download the driver
Go to https://downloads.datastax.com/#odbc-jdbc-drivers.
Select Simba JDBC Driver for Apache Cassandra.
Select JDBC 4.2.
Accept the license terms (click the checkbox).
Hit the blue Download button.
Once the download completes, unzip the downloaded file.
Import the driver
In DBeaver, go to the Driver Manager and import the Simba JDBC driver as follows:
Click the New button
In the Libraries tab, click the Add File button
Locate the directory where you unzipped the download and add the CassandraJDBC42.jar file.
Click the Find Class button, which should identify the driver class as com.simba.cassandra.jdbc42.Driver.
In the Settings tab, set the following:
Driver Name: Cassandra
Driver Type: Generic
Class Name: com.simba.cassandra.jdbc42.Driver
URL Template: jdbc:cassandra://{host}[:{port}];AuthMech=1 (set authentication mechanism to 0 if your cluster doesn't have authentication enabled)
Default Port: 9042
Click the OK button to save the driver.
At this point, you should see Cassandra as one of the drivers in the list.
Connect to your cluster
In DBeaver, create a new database connection as follows:
Select Cassandra from the drivers list.
In the Main tab of the JDBC connection settings, set the following:
Host: node_ip_address (this could be any node in your cluster)
Port: 9042 (or whatever you've set as native_transport_port in cassandra.yaml)
Username: your_db_username
Password: your_db_password
Click on the Test Connection button to confirm that the driver configuration is working.
Click on the Finish button to save the connection settings.
At this point, you should be able to browse the keyspaces and tables in your Cassandra cluster. Cheers!
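For a quick programmatic sanity check outside DBeaver, the same driver class and URL template can be used through plain JDBC. This is a minimal sketch assuming CassandraJDBC42.jar is on the classpath; the host and credentials are placeholders, and depending on the driver version you may instead need to pass them as UID/PWD properties in the URL.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CassandraJdbcSmokeTest {
    public static void main(String[] args) throws Exception {
        // Same driver class DBeaver is configured with above; assumes
        // CassandraJDBC42.jar is on the classpath.
        Class.forName("com.simba.cassandra.jdbc42.Driver");

        // AuthMech=1 enables username/password authentication; use AuthMech=0
        // if authentication is disabled on the cluster. Host is a placeholder.
        String url = "jdbc:cassandra://10.0.0.1:9042;AuthMech=1";

        try (Connection conn = DriverManager.getConnection(url, "your_db_username", "your_db_password");
             Statement stmt = conn.createStatement();
             // system.local exists on every Cassandra version, so this works
             // as a simple connectivity check regardless of your schema.
             ResultSet rs = stmt.executeQuery("SELECT release_version FROM system.local")) {
            while (rs.next()) {
                System.out.println("Connected to Cassandra " + rs.getString("release_version"));
            }
        }
    }
}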
Erick Ramirez's answer mostly worked for me. I did manage to get a connection, but I never figured out how to get DBeaver to work properly with dates. By default they were displayed in local time, and queries with filters on exact timestamps did not work.
What did work very well for me was the Cassandra integration in JetBrains Rider (I guess it's the same as in JetBrains IntelliJ).

Not able to connect cluster to OpsCenter

Hi folks,
I have set up the Cassandra GUI (OpsCenter) using this doc: https://docs.datastax.com/en/install/6.8/install/opscInstallRHEL.html
As soon as I hit the URL, the setup page opens. I chose "Manage Existing Cluster" and went to the next page, where I pasted my private IP, but as soon as I hit Next it shows an error.
How can I add my cluster in OpsCenter?
sudo service opscenterd status
shows opscenterd in the Running state.
I entered the username and password, and then it shows another error.
OpsCenter only works with DataStax Enterprise.
You cannot use OpsCenter to connect to:
open-source Apache Cassandra,
other distributions which are forks of Cassandra, or
versions from cloud vendors which run a CQL API engine to make the database "look and feel" like Cassandra.
Cheers!

How to detect in a Hadoop cluster if any DataNode drive (storage) failed

I am trying to detect drive failures on the DataNodes in a Hadoop cluster. The Cloudera Manager API doesn't have any specific call for that; the CM APIs only cover the NameNode or restarting services. Are there any suggestions here? Thanks a lot!
If you have access to the NameNode UI, the JMX page will give you this information. If you hit the JMX page directly, it'll be a JSON-formatted page, which can be parsed easily.
We use Hortonworks primarily and haven't touched Cloudera in a long time, but I assume the same information can be made available there somehow.
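As a rough sketch of what hitting the JMX page directly can look like: the snippet below pulls the NameNode's /jmx servlet and extracts a failed-volume counter. The host, the 9870 port (older Hadoop 2.x UIs use 50070), and the FSNamesystemState / VolumeFailuresTotal names are assumptions to verify against your own /jmx output; the DataNodes expose a similar servlet with per-volume detail.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NameNodeVolumeFailures {
    public static void main(String[] args) throws Exception {
        // NameNode web UI address is a placeholder; the ?qry= filter narrows
        // the JSON to a single bean instead of dumping every metric.
        String jmxUrl = "http://namenode.example.com:9870/jmx"
                + "?qry=Hadoop:service=NameNode,name=FSNamesystemState";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(jmxUrl)).GET().build();
        String json = client.send(request, HttpResponse.BodyHandlers.ofString()).body();

        // Crude extraction to keep the sketch dependency-free; real code
        // should parse the JSON properly (e.g. with Jackson).
        Matcher m = Pattern.compile("\"VolumeFailuresTotal\"\\s*:\\s*(\\d+)").matcher(json);
        if (m.find()) {
            System.out.println("Failed volumes reported by the NameNode: " + m.group(1));
        } else {
            System.out.println("VolumeFailuresTotal not found; inspect the /jmx output manually.");
        }
    }
}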

Spark jobs not showing up in Hadoop UI in Google Cloud

I created a cluster in Google Cloud and submitted a Spark job. Then I connected to the UI following these instructions: I created an SSH tunnel and used it to open the Hadoop web interface. But the job is not showing up.
Some extra information:
If I connect to the master node of the cluster via SSH and run spark-shell, this "job" does show up in the Hadoop web interface.
I'm pretty sure I did this before and I could see my jobs (both running and already finished). I don't know what happened in between for them to stop appearing.
The problem was that I was running my jobs in local mode. My code had a .master("local[*]") that was causing this. After removing it, the jobs showed up in the Hadoop UI as before.
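To illustrate the fix: with a hard-coded local master the job runs entirely inside the driver JVM and never registers with YARN, so the Hadoop UI has nothing to show. A minimal sketch (Java, with a hypothetical app name) that leaves the master to spark-submit / Dataproc:

import org.apache.spark.sql.SparkSession;

public class JobVisibleOnYarn {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("example-job")   // hypothetical app name
                // .master("local[*]")    // <- this line kept the job out of the YARN/Hadoop UI
                .getOrCreate();           // master comes from spark-submit, which defaults to YARN on Dataproc

        // Trivial action so something actually runs on the cluster.
        long count = spark.range(0, 1000L).count();
        System.out.println("count = " + count);

        spark.stop();
    }
}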

OpsCenter doesn't show my keyspaces

I have a Cassandra cluster (ver 2.0.12) with DataStax Agents 5.0.1, and I am using OpsCenter 5.1.0. In the "Explorer" tab I see no keyspaces.
Querying from the CLI:
SELECT keyspace_name FROM system.schema_keyspaces;
shows my keyspaces. I also tried the URL:
http://<cluster_url>:8888/<cluster_name>/keyspaces
which shows me JSON output that contains the keyspace info (I think), but the "Explorer" tab is still empty.
OpsCenter does not necessarily connect automatically to the local node to get the cluster information. You can review this piece of documentation to check what is in your configuration files and update them accordingly; there can be multiple reasons why OpsCenter can't connect to your local instance.
Or you can use the wizard to add a cluster to manage. This should populate the config files properly.
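For reference, with a package install the per-cluster definition that opscenterd reads typically lives in /etc/opscenter/clusters/<cluster_name>.conf, and the agent on each node points back at OpsCenter via stomp_interface in its address.yaml. The sketch below uses placeholder addresses and only the most common keys; treat it as a starting point for the configuration check described above rather than a complete file.

# /etc/opscenter/clusters/<cluster_name>.conf (placeholder values)
[cassandra]
# nodes opscenterd contacts to discover the cluster and its schema
seed_hosts = 10.0.0.1, 10.0.0.2

[jmx]
# JMX port of the Cassandra nodes (7199 by default)
port = 7199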
