Insert data into Microsoft SQL Server using Spark - apache-spark

I am trying to insert data into SQL Server using Spark with the JDBC methods below.
Option 1:
prop.put("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
dataf.write.mode(org.apache.spark.sql.SaveMode.Append).jdbc(url, table_name, prop)
The table is already created and I am appending new data. The job errored out with the exception below:
Exception in thread "main"
com.microsoft.sqlserver.jdbc.SQLServerException: CREATE TABLE
permission denied in database
Question: why is CREATE TABLE permission required just to append data?
Option 2:
prop.put("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils.saveTable(dataf, url, table_name, prop)
The above command works from spark-shell. When the same code is packaged with its dependencies and run as a Scala application, it fails with the exception below:
Exception in thread "main" java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(DriverManager.java:315)
I tried setting the driver class path, the executor class path, and --jars, still no luck. I included sqljdbc4.jar in the driver class path and in --jars, and copied sqljdbc4.jar to all worker nodes as well, still no luck.
Any ideas on this?

After a lot of searching and testing, I found the answer. It might be useful for someone.
Option 1: This is caused by a bug in Spark 1.5.x that was resolved in 1.6.x and later. Because of the bug, Spark always tries to create a new table, which is why the append requires CREATE TABLE permission.
Option 2: This happens because a driver found on the classpath takes priority over the driver name we pass in the properties. The workaround is to create the connection first and then invoke saveTable.
Workaround if you are using Spark 1.5.x or lower:
JdbcUtils.createConnection(url, prop)
JdbcUtils.saveTable(dataf, url, table_name, prop)
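For completeness, a minimal self-contained sketch of that workaround, assuming Spark 1.5.x (where JdbcUtils still exposes createConnection); the user/password values are placeholders:

import java.util.Properties
import org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils

// JDBC properties; the driver entry names the SQL Server driver explicitly
val prop = new Properties()
prop.put("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
prop.put("user", "spark_user")       // placeholder
prop.put("password", "********")     // placeholder

// Creating a connection first (using the driver given in the properties)
// registers the SQL Server driver, so the subsequent saveTable call no
// longer fails with "No suitable driver".
val conn = JdbcUtils.createConnection(url, prop)
try {
  JdbcUtils.saveTable(dataf, url, table_name, prop)
} finally {
  conn.close()
}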

Related

How to disable 'spark.security.credentials.${service}.enabled' in Structured streaming while connecting to a kafka cluster

I am trying to read data from a secured Kafka cluster using Spark Structured Streaming.
I am using the library "spark-sql-kafka-0-10_2.12":"3.0.0-preview" to read the data, since it supports specifying a custom group id (instead of Spark setting its own).
Dependency used in code:
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql-kafka-0-10_2.12</artifactId>
<version>3.0.0-preview</version>
I am getting the error below even after specifying the required JAAS configuration in the Spark options.
Caused by: java.lang.IllegalArgumentException: requirement failed: Delegation token must exist for this connector.
at scala.Predef$.require(Predef.scala:281)
at org.apache.spark.kafka010.KafkaTokenUtil$.isConnectorUsingCurrentToken(KafkaTokenUtil.scala:299)
at org.apache.spark.sql.kafka010.KafkaDataConsumer.getOrRetrieveConsumer(KafkaDataConsumer.scala:533)
at org.apache.spark.sql.kafka010.KafkaDataConsumer.$anonfun$get$1(KafkaDataConsumer.scala:275)
The following document states that the feature of obtaining delegation tokens can be disabled: https://spark.apache.org/docs/3.0.0-preview/structured-streaming-kafka-integration.html
I tried setting the property spark.security.credentials.kafka.enabled to false in the Spark config, but it still fails with the same error.
Apparently there was a bug in the preview release, which has been fixed in the GA Spark 3.x release.
Reference:
https://issues.apache.org/jira/plugins/servlet/mobile#issue/SPARK-30495
With that fix, we can specify a custom consumer group name while fetching data from Kafka (even though it is not recommended, and a warning message is logged when it is specified).
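As a minimal sketch of such a reader, assuming the GA Spark 3.0.0 release; the broker, topic, group id, and JAAS credentials are illustrative placeholders:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("secured-kafka-read")
  // Disable Spark's Kafka delegation token handling; this can also be passed
  // via spark-submit as --conf spark.security.credentials.kafka.enabled=false
  .config("spark.security.credentials.kafka.enabled", "false")
  .getOrCreate()

val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker1:9093")   // placeholder broker
  .option("subscribe", "my_topic")                     // placeholder topic
  .option("kafka.group.id", "my-custom-group")         // custom consumer group (a warning is logged)
  .option("kafka.security.protocol", "SASL_SSL")       // illustrative security settings
  .option("kafka.sasl.jaas.config",
    """org.apache.kafka.common.security.plain.PlainLoginModule required username="user" password="pass";""")
  .load()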

Instrumenting Spark JDBC with javaagent

I am attempting to instrument JDBC calls using the Kamon JDBC Kanela agent in my Spark app.
I am able to successfully instrument JDBC calls in a non-Spark test app by passing -javaagent:kanela-agent-1.0.1.jar on the command line when I run the app from the JAR. When I do this, I see the Kanela banner in the console and can see that my failed-statement processor is called when there is a SQL error.
From my research, I should be able to inject a javaagent into the executor of a Spark app by passing in the following to spark-submit: --conf "spark.executor.extraJavaOptions=-javaagent:kanela-agent-1.0.1.jar". However, when I do this, although the Kamon banner IS displaying on the console upon my call to Kamon.init(), my failed statement processor is NOT getting called when there is a SQL error.
Things I'm wondering:
Is there something about the way that spark-jdbc makes these JDBC calls that would prevent a javaagent from "seeing" them?
Does my call to Kamon.init() somehow only apply to code in the Spark driver, and not the executor?
Any other reason that you can think of that would be preventing this from working?
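For reference, a sketch of the kind of submission described above, assuming the agent jar should also be shipped to the executors; the application class and jar names are illustrative:

# Ship the agent jar to each executor's working directory and point both the
# driver and executor JVMs at it (relative paths assume the jar sits next to
# the submission, adjust as needed).
spark-submit \
  --class com.example.MySparkApp \
  --files kanela-agent-1.0.1.jar \
  --conf "spark.driver.extraJavaOptions=-javaagent:kanela-agent-1.0.1.jar" \
  --conf "spark.executor.extraJavaOptions=-javaagent:kanela-agent-1.0.1.jar" \
  my-spark-app.jar

With --files, the agent jar is copied into each executor's working directory, so the relative path given in spark.executor.extraJavaOptions can resolve there.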

SparkSQL Hive error: "HikariCP" not found in the CLASSPATH

I configured Hive with MySQL as my metastore. I can enter the Hive shell and create tables successfully.
Spark version: 2.4.0
Hive version: 3.1.1
When I try to run a SparkSQL program using spark-submit, I get the error below.
2019-03-02 15:43:41 WARN HiveMetaStore:622 - Retrying creating default database after error: Error creating transactional connection factory
javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
......
......
Exception in thread "main" org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient;
......
......
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient;
Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "HikariCP" plugin to create a ConnectionPool gave an error : The connection pool plugin of type "HikariCP" was not found in the CLASSPATH!
Please let me know if anyone can help me in this regard.
I don't know if you have already solved this problem, but here is my advice.
The default database connection pool configured in hive-site.xml is HikariCP. Search for datanucleus.connectionPoolingType in hive-site.xml; its value is HikariCP. You need to change it to dbcp, since you use MySQL as your metastore (see the snippet below).
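As a sketch, the corresponding hive-site.xml entry would then look like this:

<property>
  <name>datanucleus.connectionPoolingType</name>
  <value>dbcp</value>
</property>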
And finally, don't forget to add mysql-connector-java-5.x.x.jar to Spark's jars directory, for example:
/home/hadoop/spark-2.3.0-bin-hadoop2.7/jars

How to use Presto to query Hive data

I just installed Presto, and when I use the presto-cli to query Hive data, I get the following error:
$ ./presto --server node6:8080 --catalog hive --schema default
presto:default> show tables;
Query 20131113_150006_00002_u8uyp failed: Table hive.information_schema.tables does not exist
The config.properties is:
coordinator=true
datasources=jmx,hive
http-server.http.port=8080
presto-metastore.db.type=h2
presto-metastore.db.filename=/root/h2
task.max-memory=1GB
discovery-server.enabled=true
discovery.uri=http://node6:8080
And the hive.properties is:
connector.name=hive-cdh4
hive.metastore.uri=thrift://node6:9083
The Hadoop distribution I used is CDH 4.4. I believe it is properly installed, and Hive can process queries successfully on its own.
Can anyone help me work it out? Any ideas will be appreciated.
As recommended by the Getting Started guide, I created a coordinator (jmx only) and a separate worker (jmx,hive), each on a separate machine.
What finally solved this for me was to specify the worker's hostname and http-server.http.port as the --server argument to Presto. Specifying the coordinator did not work.
This all makes sense, but I am still wondering what will happen when I have two Presto-Hive workers...
Add one more line to etc/catalog/hive.properties:
hive.config.resources=/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
Of course, check that the paths are correct before doing so; the resulting file is sketched below.
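Putting that together with the values from the question, etc/catalog/hive.properties would then read (the Hadoop config paths assume a standard layout, adjust to your cluster):

connector.name=hive-cdh4
hive.metastore.uri=thrift://node6:9083
hive.config.resources=/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml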
presto-metastore.db.filename= <- is this the value for the Hive warehouse directory?
=> No, that is Presto's metastore, not Hive's.
I just figured out what was wrong in my case: you also have to add the following line to $HIVE_HOME/conf/hive-env.sh so that Hive opens its Thrift port (the same port mentioned in the hive.metastore.uris property in hive-site.xml). This port is used by Hive clients to connect to the metastore over RPC.
export METASTORE_PORT=9084
This should sync your Hive with Presto.

populate_io_cache_on_flush is not a column defined in this metadata

While connecting to Cassandra 1.2.1 using the DataStax Java driver version 1.0.2, I am getting the error:
Exception in thread "main" java.lang.IllegalArgumentException: populate_io_cache_on_flush is not a column defined in this metadata
at com.datastax.driver.core.ColumnDefinitions.getIdx(ColumnDefinitions.java:268)
at com.datastax.driver.core.Row.isNull(Row.java:84)
at com.datastax.driver.core.TableMetadata$Options.<init>(TableMetadata.java:440)
at com.datastax.driver.core.TableMetadata.build(TableMetadata.java:107)
at com.datastax.driver.core.Metadata.buildTableMetadata(Metadata.java:124)
at com.datastax.driver.core.Metadata.rebuildSchema(Metadata.java:88)
at com.datastax.driver.core.ControlConnection.refreshSchema(ControlConnection.java:265)
at com.datastax.driver.core.ControlConnection.tryConnect(ControlConnection.java:220)
The exception is thrown at the line below:
cluster = Cluster.builder().addContactPoint("localhost").build();
I tried deleting the folder \var\lib\cassandra and restarting the Cassandra server, which means there is no previous data. The server starts without any error, but I still get the above error when I try to connect.
OK, I just discovered that the error went away when I used a newer version of Cassandra (1.2.8), so it was probably a version incompatibility between the driver and Cassandra 1.2.1.
