I am using Spark SQL to get data from PostgreSQL, but a problem occurs whenever the database connection is lost (for example, when the database client machine shuts down).
I want it to reconnect automatically. How can I do that? Can you suggest a way using Spark SQL only?
Note: I know these features exist in Spring Boot and Hibernate, but I am not considering those options.
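Spark SQL has no built-in reconnect setting for JDBC sources, but since Spark opens fresh connections on each action, wrapping the read in a retry loop is usually enough. A minimal sketch, assuming hypothetical connection details:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

// Retry a JDBC read a few times, sleeping between attempts so the
// database has time to come back. All connection details are hypothetical.
def readWithRetry(spark: SparkSession, attempts: Int): DataFrame = {
  try {
    spark.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://db-host:5432/mydb") // hypothetical URL
      .option("dbtable", "my_table")                        // hypothetical table
      .option("user", "user")
      .option("password", "secret")
      .load()
  } catch {
    case _: Exception if attempts > 1 =>
      Thread.sleep(5000)
      readWithRetry(spark, attempts - 1)
  }
}
```

Each call builds a new DataFrame, so invoking it again after a failure effectively reconnects; after the last attempt the exception propagates.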
Related
Pretty new to Databricks.
I've got a requirement to access data in the Lakehouse using a JDBC driver. This works fine.
I now want to stub the Lakehouse using a Docker image for some tests I want to write. Is it possible to get a Databricks / Spark Docker image with a database in it? I would also want to bootstrap the database on startup to create a bunch of tables.
No - Databricks is not a database but a hosted service (PaaS). You could theoretically use OSS Spark with a Thrift server started on it, but the connection strings and other functionality would be very different, so it makes no sense to spend time on it (imho). The real solution would depend on the type of tests you want to do.
Regarding bootstrapping the database and creating a bunch of tables: just issue commands such as CREATE DATABASE IF NOT EXISTS or CREATE TABLE IF NOT EXISTS when your application starts up (see the documentation for the exact syntax).
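For instance, a minimal Spark SQL sketch of such a startup hook, with made-up database and table names:

```scala
import org.apache.spark.sql.SparkSession

// Idempotent DDL at application startup: safe to run on every boot.
// Database and table names are made up for illustration.
def bootstrap(spark: SparkSession): Unit = {
  spark.sql("CREATE DATABASE IF NOT EXISTS testdb")
  spark.sql(
    """CREATE TABLE IF NOT EXISTS testdb.customers (
      |  id BIGINT,
      |  name STRING
      |)""".stripMargin)
}
```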
Is there a way to connect to Cassandra using the Anorm framework in Scala?
We are using Cassandra as our primary DB and are evaluating the connection options our application needs. As part of that, I am checking the possibility of establishing connectivity with Cassandra using the Anorm framework. I could not find any concrete examples yet.
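For context, Anorm only speaks JDBC: it executes SQL strings against a java.sql.Connection, so it could only reach Cassandra through some JDBC wrapper. A minimal sketch of what that would look like; the driver class and URL below are hypothetical placeholders, not a specific product:

```scala
import java.sql.{Connection, DriverManager}
import anorm._

// Anorm needs a plain java.sql.Connection, so Cassandra would have to be
// exposed through a JDBC wrapper. Driver class and URL are hypothetical.
Class.forName("com.example.cassandra.jdbc.Driver")
implicit val connection: Connection =
  DriverManager.getConnection("jdbc:cassandra://db-host:9042/my_keyspace")

// Standard Anorm usage once a Connection is in scope.
val names: List[String] =
  SQL("SELECT name FROM users").as(SqlParser.str("name").*)
```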
Here is the short story:
A BI tool (Power BI) connects to a Spark cluster and uses the HiveThriftServer2 application to get aggregated data via Hive queries.
However, each query takes a long time because the data is read from files every time. I would like to cache my table in this application, and I am looking for a way to send the query "cache table myTable" through the same channel, so that subsequent queries run quickly.
What would be a way to send a Hive query to a specific application? If it matters, the application is Spark's Thrift service.
Thanks a lot!
It looks like I succeeded in doing it by installing the Spark ODBC driver and using it to connect to the Thrift server and send the SQL query "cache table xxx". I wonder if there is a more elegant way.
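One possibly more elegant option is to skip ODBC and send the statement over JDBC, since HiveThriftServer2 speaks the HiveServer2 protocol and the plain Hive JDBC driver can talk to it. A minimal sketch; host, port, and table name are placeholders:

```scala
import java.sql.DriverManager

// Send "CACHE TABLE" to Spark's Thrift server over the Hive JDBC driver.
// Host, port, and table name are placeholders.
Class.forName("org.apache.hive.jdbc.HiveDriver")
val conn = DriverManager.getConnection("jdbc:hive2://spark-host:10000/default")
try {
  val stmt = conn.createStatement()
  stmt.execute("CACHE TABLE myTable") // later queries read from the cache
  stmt.close()
} finally {
  conn.close()
}
```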
I am using WebLogic 12c in a clustered environment.
I am using Cassandra for database operations.
My requirement is to execute a batch job that picks up records from the DB, processes them, and uploads them to a web service.
For this I am looking at the Quartz JDBCJobStore implementation.
For a normal SQL database we can achieve this with the JDBC JobStore.
However, I am struggling to work out how to implement it on Cassandra.
I want to create the equivalent of the JDBC JobStore for a NoSQL database like Cassandra.
Any help on this would be great.
It would be helpful if an example quartz.properties and table script were given; a reference sketch follows below.
Update: This delivers an answer for a SQL DB. I want the same for a NoSQL DB.
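For reference, a standard quartz.properties for the SQL JDBCJobStore looks roughly like the snippet below (data-source details are placeholders). Quartz ships no Cassandra JobStore, so the NoSQL route would mean writing your own implementation of org.quartz.spi.JobStore and pointing org.quartz.jobStore.class at it:

```properties
# Standard clustered JDBCJobStore setup for a SQL database (for reference).
org.quartz.scheduler.instanceId = AUTO
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.dataSource = myDS

# Data-source details below are placeholders.
org.quartz.dataSource.myDS.driver = org.postgresql.Driver
org.quartz.dataSource.myDS.URL = jdbc:postgresql://db-host:5432/quartz
org.quartz.dataSource.myDS.user = quartz
org.quartz.dataSource.myDS.password = secret

# For Cassandra, swap org.quartz.jobStore.class for a custom class
# implementing org.quartz.spi.JobStore; Quartz provides no NoSQL store.
```

The table script for the SQL case ships with the Quartz distribution; a Cassandra schema would have to mirror whatever your custom JobStore needs.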
I have been evaluating Hadoop on Azure HDInsight to find a big data solution for our reporting application. The key part of this technology evaluation is that I need to integrate with MSSQL Reporting Services, as that is what our application already uses. We are very short on developer resources, so the more I can make this into an engineering exercise the better. What I have tried so far:
Use an ODBC connection from MSSQL mapped to Hive on HDInsight.
Use an ODBC connection from MSSQL to HBase on HDInsight.
Use Spark SQL locally on the Azure HDInsight remote desktop.
What I have found is that HBase and Hive are far slower to use with our reports. For test data I used a table with 60k rows and found that the report on MSSQL ran in less than 10 seconds, while the same query in the Hive query console and over the ODBC connection took over a minute to execute. Spark was faster (30 seconds), but there is no way to connect to it externally, since ports cannot be opened on the HDInsight cluster.
Big data and Hadoop are all new to me. My question is: am I asking Hadoop to do something it is not designed to do, and are there ways to make this faster? I have considered caching results and periodically refreshing them, but that sounds like a management nightmare. Kylin looks promising, but we are pretty married to Windows Azure, so I am not sure it is a viable solution.
Look at this documentation on optimizing Hive queries: https://azure.microsoft.com/en-us/documentation/articles/hdinsight-hadoop-optimize-hive-query/
Specifically, look at ORC and using Tez. I would create a cluster that has Tez on by default and then store your data in ORC format. Your queries should then be much more performant.
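As a rough HiveQL sketch of that combination (table names are illustrative):

```sql
-- Run on Tez and rewrite the data in ORC format; table names are illustrative.
SET hive.execution.engine=tez;

CREATE TABLE report_data_orc STORED AS ORC
AS SELECT * FROM report_data_text;
```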
If going through Spark is fast enough, you should consider using the Microsoft Spark ODBC driver. I am using it, and while the performance is not comparable to what you would get with MSSQL, another RDBMS, or something like Elasticsearch, it does work pretty reliably.