Is Datastax UUIDs a wrapper for java.util.UUID [duplicate] - apache-spark

Below is the code block and the error received.
> Creating temporary views:
sqlcontext.sql("""CREATE TEMPORARY VIEW temp_pay_txn_stage
USING org.apache.spark.sql.cassandra
OPTIONS (
table "t_pay_txn_stage",
keyspace "ks_pay",
cluster "Test Cluster",
pushdown "true"
)""".stripMargin)
sqlcontext.sql("""CREATE TEMPORARY VIEW temp_pay_txn_source
USING org.apache.spark.sql.cassandra
OPTIONS (
table "t_pay_txn_source",
keyspace "ks_pay",
cluster "Test Cluster",
pushdown "true"
)""".stripMargin)
Querying the views as below to get the new records from stage that are not present in source:
scala> val df_newrecords = sqlcontext.sql("""SELECT UUID(),
     | stage.order_id,
     | stage.order_description,
     | stage.transaction_id,
     | stage.pre_transaction_freeze_balance,
     | stage.post_transaction_freeze_balance,
     | toTimestamp(now()),
     | NULL,
     | 1
     | FROM temp_pay_txn_stage stage
     | LEFT JOIN temp_pay_txn_source source ON stage.order_id = source.order_id AND stage.transaction_id = source.transaction_id
     | WHERE source.order_id IS NULL AND source.transaction_id IS NULL""")
org.apache.spark.sql.AnalysisException: Undefined function: 'uuid()'. This function is neither a registered temporary function nor a permanent function registered in the database 'default'.; line 1 pos 7
I am trying to get UUIDs generated, but I am getting this error.

Here is a simple example of how you can generate a time-based UUID (timeuuid):
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions.udf
val sqlcontext = new SQLContext(sc)
import sqlcontext.implicits._
// Import UUIDs, which contains the method timeBased()
import com.datastax.driver.core.utils.UUIDs
// User-defined function timeUUID which will return a time-based UUID
val timeUUID = udf(() => UUIDs.timeBased().toString)
// Sample query to test; you can change it to yours
val df_newrecords = sqlcontext.sql("SELECT 1 as data UNION SELECT 2 as data").withColumn("time_uuid", timeUUID())
// Print all the rows
df_newrecords.collect().foreach(println)
Output:
[1,9a81b3c0-170b-11e7-98bf-9bb55f3128dd]
[2,9a831350-170b-11e7-98bf-9bb55f3128dd]
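If it helps, here is a sketch of how the udf above could be applied to the original join query from the question (untested against the original schema; UUID() is dropped from the SQL and added as a column afterwards, and since toTimestamp(now()) is likewise a Cassandra-side function, Spark's current_timestamp() is substituted for it):
import org.apache.spark.sql.functions.current_timestamp

val df_newrecords = sqlcontext.sql("""SELECT stage.order_id,
  stage.order_description,
  stage.transaction_id,
  stage.pre_transaction_freeze_balance,
  stage.post_transaction_freeze_balance
  FROM temp_pay_txn_stage stage
  LEFT JOIN temp_pay_txn_source source
    ON stage.order_id = source.order_id AND stage.transaction_id = source.transaction_id
  WHERE source.order_id IS NULL AND source.transaction_id IS NULL""")
  .withColumn("txn_uuid", timeUUID())          // time-based UUID from the udf above
  .withColumn("txn_time", current_timestamp()) // Spark-side replacement for toTimestamp(now())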
Source : https://stackoverflow.com/a/37232099/2320144
https://docs.datastax.com/en/drivers/java/2.0/com/datastax/driver/core/utils/UUIDs.html#timeBased--

Related

Spark - get tables from database

I have to perform an operation on all the tables from the given database(s), so I am using the following code.
However, it gives me views as well; is there a way I can filter only tables?
Code:
def getTables(databaseName: String)(implicit spark: SparkSession): Array[String] = {
val tables = spark.sql(s"show tables from ${databaseName}").collect().map(_(1).asInstanceOf[String])
logger.debug(s"${tables.mkString(",")} found")
tables
}
Also, `show views` shows an error:
scala> spark.sql("show views from gshah03;").show
org.apache.spark.sql.catalyst.parser.ParseException:
missing 'FUNCTIONS' at 'from'(line 1, pos 11)
== SQL ==
show views from gshah03;
-----------^^^
at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:241)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:117)
at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:48)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:69)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:643)
... 49 elided
Try this:
import org.apache.spark.sql.Dataset
import org.apache.spark.sql.catalog.{Database, Table}

val df = spark.range(1, 5)
df.createOrReplaceTempView("df_view")
println(spark.catalog.currentDatabase)
val db: Database = spark.catalog.getDatabase(spark.catalog.currentDatabase)
val tables: Dataset[Table] = spark.catalog.listTables(db.name)
tables.show(false)
/**
* default
* +-------+--------+-----------+---------+-----------+
* |name |database|description|tableType|isTemporary|
* +-------+--------+-----------+---------+-----------+
* |df_view|null |null |TEMPORARY|true |
* +-------+--------+-----------+---------+-----------+
*/
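To answer the original question (excluding views), one possible sketch is to filter on the tableType and isTemporary fields that listTables exposes; the exact tableType strings (e.g. "MANAGED", "EXTERNAL", "VIEW") depend on your catalog:
// Keep only persistent tables, dropping views and temporary views
val onlyTables: Array[String] = spark.catalog
  .listTables(spark.catalog.currentDatabase)
  .collect()
  .filter(t => !t.isTemporary && t.tableType != "VIEW")
  .map(_.name)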

How to get the value of the location for a Hive table using a Spark object?

I am interested in being able to retrieve the location value of a Hive table given a Spark object (SparkSession). One way to obtain this value is by parsing the output of the location via the following SQL query:
describe formatted <table name>
I was wondering if there is another way to obtain the location value without having to parse the output. An API would be great in case the output of the above command changes between Hive versions. If an external dependency is needed, which would it be? Is there some sample Spark code that can obtain the location value?
Here is the correct answer:
import org.apache.spark.sql.catalyst.TableIdentifier
lazy val tblMetadata = spark.sessionState.catalog.getTableMetadata(new TableIdentifier(tableName,Some(schema)))
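From the returned metadata you can then read the location directly (a sketch; on Spark 2.2+ CatalogTable exposes a location field, while older 2.x versions keep it under storage.locationUri):
val tableLocation: java.net.URI = tblMetadata.location // e.g. hdfs://nn:8020/warehouse/mydb.db/mytable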
You can also use the .toDF method on `desc formatted <table>` and then filter from the DataFrame.
DataFrame API:
scala> :paste
spark.sql("desc formatted data_db.part_table")
.toDF //convert to dataframe will have 3 columns col_name,data_type,comment
.filter('col_name === "Location") //filter on colname
.collect()(0)(1)
.toString
Result:
String = hdfs://nn:8020/location/part_table
(or)
RDD API:
scala> :paste
spark.sql("desc formatted data_db.part_table")
.collect()
.filter(r => r(0).equals("Location")) //filter on r(0) value
.map(r => r(1)) //get only the location
.mkString //convert as string
.split("8020")(1) //change the split based on your namenode port..etc
Result:
String = /location/part_table
First approach
You can use input_file_name with a DataFrame.
It will give you the absolute file path of a part file.
spark.read.table("zen.intent_master").select(input_file_name).take(1)
And then extract the table path from it, as sketched below.
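For example, a rough sketch of the extraction step (assuming a non-partitioned table; with a partitioned table you would also need to strip the partition directories):
import org.apache.spark.sql.functions.input_file_name
import spark.implicits._

val filePath = spark.read.table("zen.intent_master").select(input_file_name()).as[String].head()
val tablePath = filePath.substring(0, filePath.lastIndexOf("/")) // drop the part-file name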
Second approach
It's more of a hack, you could say.
package org.apache.spark.sql.hive
import java.net.URI
import org.apache.spark.sql.catalyst.catalog.{InMemoryCatalog, SessionCatalog}
import org.apache.spark.sql.catalyst.parser.ParserInterface
import org.apache.spark.sql.internal.{SessionState, SharedState}
import org.apache.spark.sql.SparkSession
class TableDetail {
  def getTableLocation(table: String, spark: SparkSession): URI = {
    val sessionState: SessionState = spark.sessionState
    val sharedState: SharedState = spark.sharedState
    val catalog: SessionCatalog = sessionState.catalog
    val sqlParser: ParserInterface = sessionState.sqlParser
    val client = sharedState.externalCatalog match {
      case catalog: HiveExternalCatalog => catalog.client
      case _: InMemoryCatalog => throw new IllegalArgumentException("In Memory catalog doesn't " +
        "support hive client API")
    }
    val idtfr = sqlParser.parseTableIdentifier(table)
    require(catalog.tableExists(idtfr), new IllegalArgumentException(idtfr + " does not exist"))
    val rawTable = client.getTable(idtfr.database.getOrElse("default"), idtfr.table)
    rawTable.location
  }
}
Here is how to do it in PySpark:
(spark.sql("desc formatted mydb.myschema")
.filter("col_name=='Location'")
.collect()[0].data_type)
Use this as a reusable function in your Scala project:
def getHiveTablePath(tableName: String, spark: SparkSession): String = {
  import org.apache.spark.sql.functions._
  val sql: String = String.format("desc formatted %s", tableName)
  val result: DataFrame = spark.sql(sql).filter(col("col_name") === "Location")
  result.show(false) // just for debug purpose
  val info: String = result.collect().mkString(",")
  val path: String = info.split(',')(1)
  path
}
The caller would be:
println(getHiveTablePath("src", spark)) // you can prefix schema if you have
Result (I executed this locally, so the path below starts with file:/; on HDFS it would start with hdfs://):
+--------+------------------------------------+-------+
|col_name|data_type                           |comment|
+--------+------------------------------------+-------+
|Location|file:/Users/hive/spark-warehouse/src|       |
+--------+------------------------------------+-------+
file:/Users/hive/spark-warehouse/src
Use ExternalCatalog:
scala> spark
res15: org.apache.spark.sql.SparkSession = org.apache.spark.sql.SparkSession@4eba6e1f
scala> val metastore = spark.sharedState.externalCatalog
metastore: org.apache.spark.sql.catalyst.catalog.ExternalCatalog = org.apache.spark.sql.hive.HiveExternalCatalog@24b05292
scala> val location = metastore.getTable("meta_data", "mock").location
location: java.net.URI = hdfs://10.1.5.9:4007/usr/hive/warehouse/meta_data.db/mock

Visualization with Zeppelin using Cassandra and Spark

I'm new to Zeppelin, but it looks interesting.
I'd like to do some visualization of Cassandra data read with Spark within Zeppelin, but I can't do it yet.
This is my code:
import org.apache.spark.sql.cassandra._
import org.apache.spark.sql
val createDDL = """CREATE TEMPORARY VIEW keyspaces9
USING org.apache.spark.sql.cassandra
OPTIONS (
table "foehis",
keyspace "tfm",
pushdown "true")"""
spark.sql(createDDL)
spark.sql("SELECT hoclic,hodtac,hohrac,hotpac FROM keyspaces").show
And I get:
res41: org.apache.spark.sql.DataFrame = []
+------+--------+------+------+
|hoclic| hodtac|hohrac|hotpac|
+------+--------+------+------+
| 1011|10180619| 510| ENPR|
| 1011|20140427| 800| ANDE|
| 1011|20140427| 800| ANDE|
| 1011|20170522| 1100| ANDE|
| 1011|20170522| 1100| ANDE|
....
But I don't have the ability to make a visualization from this.
How do I convert that data into a table for Zeppelin?
Register the DataFrame as a table using df.registerTempTable.
In your case, register the DataFrame behind your 'keyspaces' view as a table; then you can execute SQL queries against it and create visualizations.
Sample code:
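A minimal sketch (assuming the keyspaces9 view created in the question; the registered name below is arbitrary, and createOrReplaceTempView is the non-deprecated Spark 2.x equivalent of registerTempTable):
val df = spark.sql("SELECT hoclic, hodtac, hohrac, hotpac FROM keyspaces9")
df.registerTempTable("foehis_data")
Then query the registered table from a separate Zeppelin paragraph using the %sql interpreter (e.g. %sql SELECT hotpac, count(*) FROM foehis_data GROUP BY hotpac) and pick one of Zeppelin's built-in chart types.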

INSERT IF NOT EXISTS ELSE UPDATE in Spark SQL

Is there any provision for doing "INSERT IF NOT EXISTS ELSE UPDATE" in Spark SQL?
I have a Spark SQL table "ABC" that has some records.
I then have another batch of records that I want to insert/update in this table based on whether they already exist in it or not.
Is there a SQL command that I can use in a SQL query to make this happen?
In regular Spark this could be achieved with a join followed by a map like this:
import spark.implicits._
val df1 = spark.sparkContext.parallelize(List(("id1", "original"), ("id2", "original"))).toDF("df1_id", "df1_status")
val df2 = spark.sparkContext.parallelize(List(("id1", "new"), ("id3","new"))).toDF("df2_id", "df2_status")
val df3 = df1
  .join(df2, 'df1_id === 'df2_id, "outer")
  .map(row => {
    if (row.isNullAt(2))
      (row.getString(0), row.getString(1))
    else
      (row.getString(2), row.getString(3))
  })
This yields:
scala> df3.show
+---+--------+
| _1| _2|
+---+--------+
|id3| new|
|id1| new|
|id2|original|
+---+--------+
You could also use select with udfs instead of map, but in this particular case with null-values, I personally prefer the map variant.
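For reference, here is a sketch of that select-based variant, written with coalesce rather than a udf (same df1 and df2 as above; the new batch wins whenever the id matched):
import org.apache.spark.sql.functions.coalesce
import spark.implicits._

val df3Select = df1
  .join(df2, 'df1_id === 'df2_id, "outer")
  .select(
    coalesce('df2_id, 'df1_id).as("id"),
    coalesce('df2_status, 'df1_status).as("status"))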
You can use Spark SQL like this:
select * from (
  select c.*, row_number() over (partition by tac order by tag desc) as TAG_NUM
  from (
    select a.tac, a.name, 0 as tag from tableA a
    union all
    select b.tac, b.name, 1 as tag from tableB b
  ) c
) d
where TAG_NUM = 1
tac is the column you want to insert/update by; ordering by tag desc means the record from tableB (the new batch) is kept whenever the same tac exists in both tables.
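If it helps, here is a sketch of wiring that query into Spark from Scala (dfExisting and dfNew are hypothetical DataFrames with tac and name columns, registered under the view names the query expects):
dfExisting.createOrReplaceTempView("tableA") // current records
dfNew.createOrReplaceTempView("tableB")      // new batch: wins on a matching tac because of "order by tag desc"
val upserted = spark.sql("""
  select tac, name from (
    select c.*, row_number() over (partition by tac order by tag desc) as TAG_NUM from (
      select a.tac, a.name, 0 as tag from tableA a
      union all
      select b.tac, b.name, 1 as tag from tableB b
    ) c
  ) d
  where TAG_NUM = 1""")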
I know it's a bit late to share my code, but to add to or update my database, I wrote a function that looks like this:
import pandas as pd

# Returns a Spark dataframe with added and updated data
# The key parameter is the primary key of the dataframes
# The two parameters dfToUpdate and dfToAddAndUpdate are Spark dataframes
def AddOrUpdateDf(dfToUpdate, dfToAddAndUpdate, key):
    # Cast the Spark dataframe dfToUpdate to a pandas dataframe
    dfToUpdatePandas = dfToUpdate.toPandas()
    # Cast the Spark dataframe dfToAddAndUpdate to a pandas dataframe
    dfToAddAndUpdatePandas = dfToAddAndUpdate.toPandas()
    # Update the table records with the latest records, adding new records if there are any
    AddOrUpdatePandasDf = pd.concat([dfToUpdatePandas, dfToAddAndUpdatePandas]).drop_duplicates([key], keep='last').sort_values(key)
    # Cast back to get a Spark dataframe
    AddOrUpdateDf = spark.createDataFrame(AddOrUpdatePandasDf)
    return AddOrUpdateDf
As you can see, we need to cast the Spark dataframes to pandas dataframes to be able to do the pd.concat and, especially, the drop_duplicates with keep='last'; then we cast back to a Spark dataframe and return it.
I don't think this is the best way to handle the add-or-update, but at least it works.
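For completeness, a Spark-only sketch of the same add-or-update idea in Scala (no pandas round-trip), assuming both DataFrames share the same schema: rows of dfToUpdate whose key also appears in dfToAddAndUpdate are dropped with a left_anti join, and every row of dfToAddAndUpdate is appended.
import org.apache.spark.sql.DataFrame

def addOrUpdate(dfToUpdate: DataFrame, dfToAddAndUpdate: DataFrame, key: String): DataFrame = {
  val untouched = dfToUpdate.join(dfToAddAndUpdate, Seq(key), "left_anti") // keys that are not being updated
  untouched.union(dfToAddAndUpdate)                                        // plus all new and updated rows
}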

Create Hive external table with schema in Spark

I am using Spark 1.6 and I aim to create an external Hive table, like what I would do in a Hive script. To do this, I first read in the partitioned Avro file and get the schema of this file. Now I am stuck here: I have no idea how to apply this schema to the table I am creating. I use Scala. Need help, guys.
Finally, I made it work myself in an old-fashioned way, with the help of the code below:
// Requires the spark-avro package for the .avro reader
import com.databricks.spark.avro._

val rawSchema = sqlContext.read.avro("Path").schema
val schemaString = rawSchema.fields.map(field =>
  field.name.replaceAll("""^_""", "").concat(" ").concat(field.dataType.typeName match {
    case "integer" => "int"
    case smt => smt
  })).mkString(",\n")
val ddl =
s"""
|Create external table $tablename ($schemaString) \n
|partitioned by (y int, m int, d int, hh int, mm int) \n
|Stored As Avro \n
|-- inputformat 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat' \n
| -- outputformat 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat' \n
| Location 'hdfs://$path'
""".stripMargin
Take care: no column name can start with _, and Hive can't parse "integer" (hence the mapping to "int"). I would say this way is not flexible, but it works. If anyone has a better idea, please comment.
I didn't see a way to automatically infer the schema for external tables, so I created a case for the string type. You could add cases for your data types, but I'm not sure how many columns you have. I apologize, as this might not be a clean approach.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{Row, SaveMode};
import org.apache.spark.sql.types.{StructType,StructField,StringType};
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
val results = hiveContext.read.format("com.databricks.spark.avro").load("people.avro")
val schema = results.schema.map( x => x.name.concat(" ").concat( x.dataType.toString() match { case "StringType" => "STRING"} ) ).mkString(",")
val hive_sql = "CREATE EXTERNAL TABLE people_and_age (" + schema + ") ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LOCATION '/user/ravi/people_age'"
hiveContext.sql(hive_sql)
results.saveAsTable("people_age",SaveMode.Overwrite)
hiveContext.sql("select * from people_age").show()
Try the code below. The SQL string is a template: fill in your own table name, schema, partition columns, SerDe class (or use ROW FORMAT DELIMITED FIELDS TERMINATED BY '<delimiter>' for plain text), and location.
val htctx = new HiveContext(sc)
htctx.sql("CREATE EXTERNAL TABLE <tablename> (<schema>) PARTITIONED BY (<attributes>) ROW FORMAT SERDE '<serde class>' LOCATION '<path>'")
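A hypothetical, filled-in version of that template might look like this (table name, columns, and path are placeholders, using a plain delimited row format instead of a SerDe):
import org.apache.spark.sql.hive.HiveContext

val htctx = new HiveContext(sc)
htctx.sql("""
  CREATE EXTERNAL TABLE IF NOT EXISTS people_ext (name STRING, age INT)
  PARTITIONED BY (y INT, m INT, d INT)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  LOCATION 'hdfs:///user/hive/external/people_ext'""")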
