Has anyone tried to use Shark/Spark on DataStax Enterprise? - cassandra

I've been trying to achieve this without success. I tried to use the Hive distribution included with DSE together with Shark; however, Shark ships with a patched, older version of Hive (0.9, I believe), which makes running Shark impossible due to incompatibilities. I also tried to use Shark's patched Hive version instead of DSE's, reusing the DSE Hive configuration (in order to make CFS available to Shark's Hive distribution), only to discover a long list of dependencies from the full DSE classpath (hive, cassandra, hadoop, etc.).
It is possible to achieve this with C* by following the instructions on this blog.
Am I being stubborn by trying to use CFS? Is there a way to do this, with or without CFS, on DSE?
Thanks!
Here are some shark-env.sh highlights:
export HIVE_HOME="/home/cassserv/hive-0.9.0-bin/"           # use this when running Shark's Hive distro
#export HIVE_HOME="/usr/share/dse/hive/"                    # use this when running DSE's Hive distro
export HIVE_CONF_DIR="/home/cassserv/hive-0.9.0-bin/conf"   # edited DSE hive-site.xml conf file
#export HIVE_CONF_DIR="/etc/dse/hive"                       # original DSE hive-site.xml conf file
Edited hive-site.xml highlights:
<property>
  <name>hive.hwi.war.file</name>
  <!--<value>lib/hive-hwi.war</value>-->
  <value>lib/hive-hwi-0.9.0-shark-0.8.1.war</value><!-- edited to use Shark's distro -->
  <description>This sets the path to the HWI war file, relative to ${HIVE_HOME}</description>
</property>
<property>
  <name>hadoop.bin.path</name>
  <!--<value>${dse.bin}/dse hadoop</value>-->
  <value>/usr/share/dse hadoop</value><!-- edited to override the variable -->
</property>
Here's Shark's output when trying to use Shark's patched Hive distro with DSE's Hive configuration. The missing class is in the dse.jar file:
Exception in thread "main" org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStore class not found)
I'm trying to figure out if I can do something like this in the edited hive-site.xml:
<property>
  <name>fs.cfs.impl</name>
  <value>org.apache.cassandra.hadoop.fs.CassandraFileSystem</value>
</property>
<property>
  <name>hive.metastore.rawstore.impl</name>
  <!--<value>com.datastax.bdp.hadoop.hive.metastore.CassandraHiveMetaStore</value>-->
  <value>org.apache.hadoop.hive.metastore.ObjectStore</value>
  <description>Use the Apache Cassandra Hive RawStore implementation</description>
</property>
in order to remove any dependency on the DSE libraries. I also might not use DSE's Hadoop distro.

DSE 4.5 has Spark and Shark 0.9 integrated. You don't need to set anything up; it works out of the box, the same way Pig/Hive worked before.
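For what it's worth, if I recall the DSE 4.5 tooling correctly (please verify against the DSE 4.5 docs), both shells are launched through the dse command:
dse spark   # Spark shell against the built-in cluster
dse shark   # Shark shell, backed by DSE's Hive metastore and CFS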

Related

Read data from Cassandra in spark-shell

I want to read data from a Cassandra node in my client node.
This is what I tried:
spark-shell --jars /my-dir/spark-cassandra-connector_2.11-2.3.2.jar
val df = spark.read.format("org.apache.spark.sql.cassandra")
  .option("keyspace", "my_keyspace")
  .option("table", "my_table")
  .option("spark.cassandra.connection.host", "Hostname of my Cassandra node")
  .option("spark.cassandra.connection.port", "9042")
  .option("spark.cassandra.auth.password", "mypassword")
  .option("spark.cassandra.auth.username", "myusername")
  .load()
I'm getting this error: java.lang.NoClassDefFoundError: Could not initialize class org.apache.spark.sql.cassandra.DefaultSource$
and
java.lang.NoClassDefFoundError: org/apache/commons/configuration/ConfigurationException.
Am I missing any properties? What is this error about, and how can I resolve it?
Spark version: 2.3.2, DSE version: 6.7.8
The Spark Cassandra Connector itself depends on a number of other dependencies that could be missing here; this happens because you're providing only one jar, not all of the required dependencies.
Basically, in your case you have the following choices:
If you're running this on a DSE node, you can use the built-in Spark, provided the cluster has Analytics enabled. In that case all jars and properties are already provided, and you only need to pass a username and password when starting the Spark shell via dse -u user -p password spark.
If you're using external Spark, it's better to use the so-called BYOS (bring your own Spark) jar, a special build of the Spark Cassandra Connector with all dependencies bundled inside; you can download it from DataStax's Maven repo and pass it with --jars.
You can still use the open source Spark Cassandra Connector, but in that case it's better to use --packages com.datastax.spark:spark-cassandra-connector_2.11:2.3.2 so that Spark is able to fetch all dependencies automatically (see the example below).
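For example (the connector coordinates are the ones from the question; the host, username, and password are placeholders for your own values):
spark-shell --packages com.datastax.spark:spark-cassandra-connector_2.11:2.3.2 \
  --conf spark.cassandra.connection.host=<cassandra-host> \
  --conf spark.cassandra.auth.username=myusername \
  --conf spark.cassandra.auth.password=mypassword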
P.S. In the case of the open source Spark Cassandra Connector I would recommend using version 2.5.1 or higher, although it requires Spark 2.4.x (2.3.x may also work). That version has improved support for DSE plus a lot of new functionality not available in earlier versions. For that version there is also a build that includes all required dependencies (a so-called assembly) that you can use with --jars if your machine doesn't have access to the internet.

Sparklyr not connecting to my hive warehouse

I'm doing a very silly thing and trying to install a Yarn/Hive/Spark/R platform from scratch, not using Hortonworks or Cloudera. I've gotten many pieces figured out but am stuck trying to get my sparklyr to connect to my Hive warehouse.
I am using RStudio on one machine and connecting to yarn-client on a separate cluster. I've put hive-site.xml pretty much everywhere: the local $SPARK_HOME/conf and each Hadoop node's $SPARK_HOME/conf and $HADOOP_CONF_DIR. In hive-site.xml I've included the param:
<property>
  <name>spark.sql.warehouse.dir</name>
  <value>hdfs://<driver node>/user/hive/warehouse/</value>
  <description>The location of the hive warehouse</description>
</property>
I feel that that should make it pretty clear that I'm trying to use hive but when I run this code:
DBI::dbGetQuery(sc, "CREATE DATABASE test")
DBI::dbGetQuery(sc, "use test")
iris_spark_table <- copy_to(sc, iris, overwrite = TRUE)
sdf_copy_to(sc, iris_spark_table)
DBI::dbGetQuery(sc, "create table iris_hive as SELECT * FROM iris_spark_table")
I get this error:
org.apache.hadoop.hive.ql.metadata.HiveException:
java.io.IOException:
Mkdirs failed to create file:/<my-r-code's-working-dir>/spark-warehouse/test.db/iris_hive/.hive-staging_hive_2018-08-05_14-18-58_646_6160231583951115949-1/-ext-10000/_temporary/0/_temporary/attempt_20180805141859_0013_m_000000_3
(exists=false, cwd=file:/tmp/hadoop-hadoop/nm-local-dir/usercache/dzafar/appcache/application_1533357216333_0015/container_1533357216333_0015_01_000002)
What am I missing? Thanks in advance!
First of all, Spark-specific properties should be placed in Spark configuration files, which means you should put spark.sql.warehouse.dir in $SPARK_HOME/conf/spark-defaults.conf.
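As a minimal sketch, reusing the HDFS path from the question's hive-site.xml, the entry in spark-defaults.conf would look like:
spark.sql.warehouse.dir    hdfs://<driver node>/user/hive/warehouse/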
Additionally you might have a problem with hdfs-site.xml not being present on the search path.

'hive on spark' in datastax enterprise DSE?

DSE 6 comes with Cassandra and Spark SQL pre-bundled. Has anyone also set up 'Hive on Spark' there? I wonder whether Spark version conflicts would be an issue. The reason I want this is that Hive seems to allow masking/authorization with Ranger, but Spark SQL doesn't.
This answer is not directly related to setting up Hive, etc., but DSE has security (authentication/authorization/...) built in (see the FAQ), and it's supported by all components, including Spark SQL. If you want more granular permissions, you can set up row-level access control.

How to specify which hive metastore to connect to?

Going back a few versions of Spark, it used to be required to put hive-site.xml in the $SPARK_HOME/conf directory. Is that still the case?
The motivation for this question: we are unable to see Hive tables that are defined in the metastore instance whose hive-site.xml we did copy to the conf dir.
I have verified that hive-site.xml is still used. It is picked up from Spark's classpath, which may be set up via
export SPARK_CLASSPATH=/path/to/conf/dir
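Alternatively, as an untested sketch (the metastore host and port below are placeholders for your own thrift metastore service), you can point Spark at a specific metastore explicitly via the Hadoop configuration it hands to Hive:
spark-shell --conf spark.hadoop.hive.metastore.uris=thrift://<metastore-host>:9083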

How can I access a CFS URL from a remote non-DSE (DataStax) node?

This is what I am trying to do from my program:
val file = sc.textFile("cfs://ip/.....")
but I get a java.io.IOException: No FileSystem for scheme: cfs exception...
How should I modify core-site.xml, and where? Should it be on the DSE nodes, or should I add it as a resource in my jar?
I use Maven to build my jar and execute the jobs remotely from a non-DSE node which does not have Cassandra, Spark, or anything similar... Other kinds of flows that don't touch CFS files work OK, so the jar itself is fine so far...
Thanks!
There is some info in the middle of this page about Spark using Hadoop for some operations, such as CFS access: http://www.datastax.com/documentation/datastax_enterprise/4.5/datastax_enterprise/spark/sparkCassProps.html
I heard about a problem using Hive from a non-DSE node that was solved by adding a property to core-site.xml. This is really a long shot since it's Spark, but if you're willing to experiment, try adding the IP address of the remote machine to the core-site.xml file:
<property>
  <name>cassandra.host</name>
  <value>192.168.2.100</value>
</property>
Find the core-site.xml in /etc/dse/hadoop/conf/ or install_location/resources/hadoop/conf/, depending on the type of installation.
I assume you started the DSE cluster in Hadoop and Spark mode: http://www.datastax.com/documentation/datastax_enterprise/4.5/datastax_enterprise/spark/sparkStart.html
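If memory serves (the flags are from memory, so double-check the linked docs), for a package install that would be roughly:
dse cassandra -k -t   # -k enables Spark (Analytics) mode, -t enables the Hadoop trackers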
It's been quite some time.
The integration is done the same way as any integration of a Hadoop client with a compatible Hadoop filesystem.
Copy core-site.xml (append dse-core-default.xml to it) along with dse.yaml and cassandra.yaml, and then set up the proper dependencies on the classpath, e.g. dse.jar, cassandra-all, etc.
Note: this is not officially supported, so it's better to use another approach.
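For completeness, a rough sketch of what such a remote submission could look like; all paths, jar versions, and the job class name are placeholders and depend on the DSE install type:
# configs copied from a DSE node into a local directory
export HADOOP_CONF_DIR=/path/to/dse-conf   # contains core-site.xml (with dse-core-default.xml appended), dse.yaml, cassandra.yaml
spark-submit \
  --jars /path/to/dse.jar,/path/to/cassandra-all-<version>.jar \
  --driver-class-path /path/to/dse.jar:/path/to/cassandra-all-<version>.jar \
  --class com.example.MyCfsJob my-job.jar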
