Hive tables Not Visible in Tableau - apache-spark

I have created a table, ztest7, in the default database in Hive. I am able to query it using beeline. In Tableau, I can query it using custom SQL.
However, the table does NOT show up when I search for it.
Am I missing something here?
Tableau Desktop Version = v10.1.1
Hive = v2.0.1
Spark = v2.1.0
Best Regards

I have the same issue connecting Tableau Desktop 10 (Mac) to Hive (2.1.1) via Spark SQL 2.1 (on a CentOS 7 server).
This is what I got from Tableau Support:
In Tableau Desktop, the ability to connect to Spark SQL without defining a default schema is not currently built into the product.
As a preliminary step, to define a default schema, configure the Spark SQL Hive metastore to use a SchemaRDD or DataFrame. This must be defined in the Hive metastore for Tableau Desktop to be able to access it. Purely schema-less Spark RDDs cannot be queried by Spark SQL because they lack a schema. RDDs can be converted into SchemaRDDs, which carry the additional schema metadata that Spark SQL needs. When a SchemaRDD is created, it is only available in the local namespace or context, and it is unavailable to external services accessing Spark through ODBC and the Spark Thrift Server. For Tableau to have access, the SchemaRDD needs to be registered in a catalog that is available outside of just the local context; the Hive metastore is currently the only supported service.
I don't know how to check/implement this.
PS: I would have posted this as a comment, but I am not allowed to since I am new to Stack Overflow.
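For what it's worth, here is a minimal sketch of what registering a DataFrame in the Hive metastore can look like from Spark 2.x (the sample data and column names are made up; it assumes a Hive-enabled SparkSession):

    import org.apache.spark.sql.SparkSession

    // Hive support is needed so that saveAsTable writes to the Hive metastore
    // instead of Spark's session-local catalog
    val spark = SparkSession.builder()
      .appName("register-hive-table")
      .enableHiveSupport()
      .getOrCreate()

    import spark.implicits._

    // Hypothetical sample data; replace with your own DataFrame
    val df = Seq((1, "alpha"), (2, "beta")).toDF("id", "name")

    // Persist as a managed table in the default database; once it is in the
    // metastore it should be visible to external clients that go through the
    // Spark Thrift Server (e.g. Tableau via the Simba ODBC driver)
    df.write.mode("overwrite").saveAsTable("default.ztest7")

    spark.stop()

Tables created this way live in the metastore rather than only in the local session, which is what the support answer above is getting at.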

In the field labeled Table on the left side of the screen, try selecting Contains, entering part of your table name, and hitting Enter.

I ran into a similar issue. In my case, I had loaded the tables using Hive, but the Tableau connection to the data source was made using Impala.
To fix the issue of the tables not showing up in the Tableau dropdown, try running INVALIDATE METADATA database.table_name in the Impala interface. This fixed the problem for me.
To understand why this fixes the issue, refer to this link.

Related

Snowflake Data Warehouse: 'show tables' & create table using Spark

I have two questions w.r.t. Spark and the Snowflake data warehouse.
1) Is there any way to query/create Snowflake tables the way we do in Hive/Spark (either new or old versions of Spark)?
val hive_tables = hiveContext.sql("show tables").collect()
hive_tables.foreach(println)
2) hiveContext.sql("create table....")
The first question is about knowing which tables are present for a particular user with a particular role. The reason I am asking is that via the Snowflake web UI I am able to query the table, but through Spark I am not able to query it:
Exception in thread "main" net.snowflake.client.jdbc.SnowflakeSQLException: SQL compilation error:
Object 'mytable' does not exist.
You should double check things like database/schema/role in your JDBC connection settings. If you don't see a table via JDBC, one of these might be the culprit.
You can validate the current settings by running e.g. show roles, show schemas and show databases on the established JDBC connection.
In general, I highly recommend using the Spark-Snowflake connector for communicating with Snowflake from Spark. It also provides Utils.runQuery() for running simple queries like DDL.
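For illustration, a rough sketch with the spark-snowflake connector (all connection options below are placeholders to replace with your own account details; getting sfDatabase/sfSchema/sfRole right is exactly what decides whether an object like 'mytable' resolves). It assumes the spark-snowflake and Snowflake JDBC jars are on the classpath and that `spark` is an existing SparkSession:

    import net.snowflake.spark.snowflake.Utils

    // Hypothetical connection options; fill in your own values
    val sfOptions = Map(
      "sfURL"       -> "<account>.snowflakecomputing.com",
      "sfUser"      -> "<user>",
      "sfPassword"  -> "<password>",
      "sfDatabase"  -> "<database>",
      "sfSchema"    -> "<schema>",
      "sfRole"      -> "<role>",
      "sfWarehouse" -> "<warehouse>"
    )

    // Read a Snowflake table through the connector (the equivalent of
    // querying it from the web UI)
    val df = spark.read
      .format("net.snowflake.spark.snowflake")
      .options(sfOptions)
      .option("dbtable", "mytable")
      .load()
    df.show()

    // Simple statements such as DDL can go through Utils.runQuery
    Utils.runQuery(sfOptions, "CREATE TABLE IF NOT EXISTS mytable_copy LIKE mytable")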

Unable to write to Hive on the HDP 2.4.0.0-169 sandbox when running a Scala application job from Eclipse

I am facing a weird issue while working with the HDP 2.4.0.0-169 sandbox.
I have HDP with the hostname sandbox.hortonworks.com and IP 192.168.159.129, with all the default Hadoop and other services up and running.
I have written Spark code for creating a table in Hive and for reading the contents of any existing Hive table on HDP. I also have code for writing/inserting data into the newly created Hive table.
As soon as I run this code from Eclipse using the Run As > Scala Application option, it creates the table, and it also reads the table, but it is not able to write anything into any new or existing table. This seems very weird to me, since I can create a table but cannot write anything into it.
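Roughly, the code in question looks something like the following sketch (Spark 1.6-style HiveContext, as shipped with that sandbox; the table name and sample values are made up):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    val conf = new SparkConf().setAppName("hdp-hive-test").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val hiveContext = new HiveContext(sc)
    import hiveContext.implicits._

    // Creating and reading both work when run from Eclipse...
    hiveContext.sql("CREATE TABLE IF NOT EXISTS test_tbl (id INT, name STRING)")
    hiveContext.sql("SELECT * FROM test_tbl").show()

    // ...but the insert is where the failure shows up, because the write has to
    // reach HDFS on the sandbox and so needs sandbox.hortonworks.com to resolve
    val newRows = Seq((1, "abc")).toDF("id", "name")
    newRows.write.insertInto("test_tbl")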
It gives me the following error:
Exception while executing hive query.java.net.UnknownHostException:
sandbox.hortonworks.com
I have an entry for sandbox.hortonworks.com in my Windows hosts file as well, but I am unable to figure out why it is not allowing me to write any data into a Hive table when I can create the table.
Is there a user read/write permission issue?
If so, why does it allow me to create and read data from Hive using the same user from Eclipse?
It only refuses to insert data into those Hive tables.
Any quick pointer/reference would be appreciated.
Regards,
Bhupesh
Got it.
By mistake, the entry made in the C:\Windows\System32\drivers\etc\hosts file was wrong.
It should be:
192.168.159.129 sandbox.hortonworks.com

Tableau connection to Spark SQL

I am trying to connect Tableau Desktop 10 (Mac) to Spark SQL 2.1 (on a CentOS 7 server). I am connecting via the Simba ODBC driver with Authentication = Username and Username = . It doesn't give any error, but I don't see the tables which are available in Hive. After searching and choosing the 'default' schema and searching for tables, I only see the default (default.default) table. However, when I use beeline on the server to connect to Spark SQL, the Hive tables are visible.
If I use the custom SQL feature I can query the tables and use the data, but I still have no way to list the tables in Tableau.
I am not sure if the issue is on Tableau side or Spark side. I'd greatly appreciate any help with troubleshooting this issue.
The reason for this behaviour is the following:
In Spark 2.0, the 'show tables' output format is: 'tableName', 'isTemporary',
and
in Spark 2.1 the 'show tables' output format is: 'database', 'tableName', 'isTemporary'.
Tableau 10.2.3 or greater is able to parse the output from Spark 2.1, but 10.2.1 and earlier are unable to parse this new output format.
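You can see the difference directly in the spark-shell; the schema of the result is what changed between the two versions (illustrative, assuming a Hive-enabled session):

    // Spark 2.1.x: SHOW TABLES returns three columns: database, tableName, isTemporary
    spark.sql("show tables").printSchema()

    // Spark 2.0.x: the same statement returns only tableName and isTemporary,
    // which is the shape that Tableau releases before 10.2.3 expect to parse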

connecting to spark data frames in tableau

We are trying to generate reports in Tableau via Spark SQL connectivity, but I found out that we are ultimately connecting to the Hive metastore.
If this is the case, what are the advantages of this new Spark SQL connection? Is there a way to connect from Tableau, using Spark SQL, to Spark data frames that are persisted?
The problem here is a Tableau problem more than a Spark problem. The Spark SQL connector launches a Spark job each time you connect to a database. Part of that Spark job loads the underlying Hive table into the distributed memory that Spark manages, and each time you make a change or a selection on a graph, the refresh has to go a level deeper, to the Hive metastore, to get the data through Spark. That is how Tableau is designed. The only option here is to swap Tableau for Spotfire (or some other tool), where, by pre-caching the underlying Hive table, the Spark SQL connector can query it directly from Spark's distributed memory, skipping the load step.
Disclosure: I am in no way associated with Spotfire makers
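For context, the "load into distributed memory" step described above corresponds to caching a table in Spark; a minimal illustration (assuming a Hive-enabled SparkSession named spark; the table name is made up):

    // Cache the Hive table in Spark's distributed memory so that repeated
    // queries are served from memory instead of re-reading from Hive/HDFS
    spark.sql("CACHE TABLE some_hive_table")

    // Equivalent DataFrame API: mark it cached and force materialization
    val df = spark.table("some_hive_table").cache()
    df.count()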

Spark SQL JDBC Support

Currently we are building a reporting platform; as the data store we used Shark. Since the development of Shark has stopped, we are in the phase of evaluating Spark SQL. Based on the use cases we have, we had a few questions.
1) We have data from various sources (MySQL, Oracle, Cassandra, Mongo). We would like to know how we can get this data into Spark SQL. Is there any utility which we can use? Does this utility support continuous refresh of data (sync of new adds/updates/deletes on the data store to Spark SQL)?
2) Is there a way to create multiple databases in Spark SQL?
3) For the reporting UI we use Jasper, and we would like to connect from Jasper to Spark SQL. When we did our initial search, we learned that there is currently no support for consumers to connect to Spark SQL through JDBC, but that it is planned for future releases. We would like to know when Spark SQL will have a stable release with JDBC support. Meanwhile, we took the source code from https://github.com/amplab/shark/tree/sparkSql but had some difficulty setting it up locally and evaluating it. It would be great if you could help us with setup instructions. (I can share the issues we are facing; please let me know where I can post the error logs.)
4) We would also require a SQL prompt where we can execute queries. Currently the Spark shell provides a Scala prompt where Scala code can be executed, and from Scala code we can fire SQL queries. Like Shark, we would like to have a SQL prompt in Spark SQL. When we did our search, we found that this would be added in a future release of Spark. It would be great if you could tell us which release of Spark will address this.
As for
3) Spark 1.1 provides better support for the Spark SQL Thrift Server interface, which you may want to use for JDBC interfacing. Hive JDBC clients that support v0.12.0 are able to connect and interface with such a server (see the short JDBC sketch after this answer).
4) Spark 1.1 also provides a Spark SQL CLI interface that can be used for entering queries, in the same fashion as the Hive CLI or the Impala shell.
Please provide more details about what you are trying to achieve for 1 and 2.
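To illustrate point 3, a minimal sketch of connecting to the Spark SQL Thrift Server with the standard Hive JDBC driver (the host, port, database and credentials below are assumptions; 10000 is the Thrift Server's default port):

    import java.sql.DriverManager

    // The Hive JDBC driver can talk to the Spark SQL Thrift Server
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "user", "")
    val stmt = conn.createStatement()

    // Any JDBC consumer (e.g. Jasper) would issue queries the same way
    val rs = stmt.executeQuery("SHOW TABLES")
    while (rs.next()) println(rs.getString(1))

    conn.close()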
I can answer (1):
Apache Sqoop was made specifically to solve this problem for the relational databases. The tool was made for HDFS, HBase, and Hive -- as such it can be used to make data available to Spark, via HDFS and the Hive metastore.
http://sqoop.apache.org/
I believe Cassandra is available to SparkContext via this connector from DataStax: https://github.com/datastax/spark-cassandra-connector -- which I have never used.
I'm not aware of any connector for MongoDB.
1) We have data from various sources( MySQL, Oracle, Cassandra, Mongo)
You have to use a different driver for each case. For Cassandra there is the DataStax driver (but I encountered some compatibility problems with Spark SQL). For any SQL system you can use JdbcRDD. The usage is straightforward; look at this Scala example:
test("basic functionality") {
sc = new SparkContext("local", "test")
val rdd = new JdbcRDD(
sc,
() => { DriverManager.getConnection("jdbc:derby:target/JdbcRDDSuiteDb") },
"SELECT DATA FROM FOO WHERE ? <= ID AND ID <= ?",
1, 100, 3,
(r: ResultSet) => { r.getInt(1) } ).cache()
assert(rdd.count === 100)
assert(rdd.reduce(_+_) === 10100)
}
But note that it's just an RDD, so you should work with this data through the map-reduce style API, not through SQLContext.
Does there exist any utility which we can use?
There is the Apache Sqoop project, but it's still in an active development state. The current stable version doesn't even save files in Parquet format.
Spark SQL is a capability of the Spark framework. It shouldn't be compared to Shark because Shark is a service. (Recall that with Shark, you run a ThriftServer that you can then connect to from your Thrift app or even ODBC.)
Can you elaborate on what you mean by "get this data into Spark SQL"?
There are a couple of Spark - MongoDB connectors:
- the MongoDB Connector for Hadoop (which doesn't actually need Hadoop at all!): https://databricks.com/blog/2015/03/20/using-mongodb-with-spark.html
- the Stratio MongoDB connector: https://github.com/Stratio/spark-mongodb
If your data is huge and you need to perform a lot of transformations, then Spark SQL can be used for ETL purposes; otherwise Presto could solve all your problems. Addressing your queries one by one:
As your data is in MySQL, Oracle, Cassandra and Mongo, all of these can be integrated in Presto, as it has connectors for all these databases: https://prestodb.github.io/docs/current/connector.html
Once you install Presto in cluster mode, you can query all these databases together on one platform, and you can even join a table from Cassandra with other tables from Mongo; this flexibility is unparalleled.
Presto can be connected to Apache Superset (https://superset.incubator.apache.org/), which is open source and provides a full set of dashboarding features. Presto can also be connected to Tableau.
You can install MySQL Workbench with the Presto connection details, which provides a UI for all your databases in one place.
