Get all records from the nth bucket in Hive SQL - apache-spark

How to get all records from the nth bucket in Hive? Something like:
Select * from bucketTable from bucket 9;

You can achieve this in different ways:
Approach-1: Get the table's storage location from desc formatted <db>.<tab_name>, then read the 9th bucket file directly from the HDFS filesystem.
(or)
Approach-2: Use input_file_name(), then filter only the 9th bucket's data by filename.
Example:
Approach-1:
Scala:
val df = spark.sql("desc formatted <db>.<tab_name>")
//get table location in hdfs path
val loc_hdfs = df.filter('col_name === "Location").select("data_type").collect.map(x => x(0)).mkString
//based on your table format change the read format
val ninth_buk = spark.read.orc(s"${loc_hdfs}/000008_0*")
//display the data
ninth_buk.show()
Pyspark:
from pyspark.sql.functions import *
df = spark.sql("desc formatted <db>.<tab_name>")
loc_hdfs = df.filter(col("col_name") == "Location").select("data_type").collect()[0]["data_type"]
ninth_buk = spark.read.orc(loc_hdfs + "/000008_0*")
ninth_buk.show()
Approach-2:
Scala:
val df = spark.read.table("<db>.<tab_name>")
//add input_file_name
val df1 = df.withColumn("filename",input_file_name())
//filter only the 9th bucket filename and select only the required columns
val ninth_buk = df1.filter('filename.contains("000008_0")).select(df.columns.head,df.columns.tail:_*)
ninth_buk.show()
Pyspark:
from pyspark.sql.functions import *
df = spark.read.table("<db>.<tab_name>")
df1 = df.withColumn("filename",input_file_name())
ninth_buk = df1.filter(col("filename").contains("000008_0")).select(*df.columns)
ninth_buk.show()
Approach-2 is not recommended if you have huge data, as it has to filter through the whole dataframe.
In Hive:
set hive.support.quoted.identifiers=none;
select `(fn)?+.+` from (
  select *, input__file__name fn from table_name
) e
where e.fn like '%000008_0%';

If it is an ORC table, you can query the bucket file directly in Spark SQL:
SELECT * FROM orc.`<bucket_HDFS_path>`
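For example, a minimal PySpark sketch of the same idea (the HDFS path below is hypothetical; substitute the actual location of the bucket file):
# Assumption: the 9th bucket file of the table lives at this hypothetical path.
ninth_buk = spark.sql("SELECT * FROM orc.`hdfs:///apps/hive/warehouse/<db>.db/<tab_name>/000008_0`")
ninth_buk.show()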

select * from bucketing_table tablesample(bucket n out of y on clustered_criteria_column);
where bucketing_table is your bucketed table name,
n => the nth bucket,
y => the total number of buckets
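For instance, here is the same query with concrete values filled in (a worked example; the 16-bucket table and the clustering column id are assumptions). Note that the ON <column> form of TABLESAMPLE is Hive syntax, so this string is meant to be run in Hive itself:
# Worked example (assumptions: bucketing_table is clustered into 16 buckets on
# column id, and we want the 9th bucket).
hive_query = """
SELECT * FROM bucketing_table
TABLESAMPLE (BUCKET 9 OUT OF 16 ON id)
"""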

Related

Casting date to integer returns null in Spark SQL

I want to convert a date column into an integer using Spark SQL.
I'm following this code, but I want to use Spark SQL and not PySpark.
Reproduce the example:
from pyspark.sql.types import *
import pyspark.sql.functions as F
# DUMMY DATA
simpleData = [("James",34,"2006-01-01","true","M",3000.60),
("Michael",33,"1980-01-10","true","F",3300.80),
("Robert",37,"1992-07-01","false","M",5000.50)
]
columns = ["firstname","age","jobStartDate","isGraduated","gender","salary"]
df = spark.createDataFrame(data = simpleData, schema = columns)
df = df.withColumn("jobStartDate", df['jobStartDate'].cast(DateType()))
df = df.withColumn("jobStartDateAsInteger1", F.unix_timestamp(df['jobStartDate']))
display(df)
What I want is to do the same transformation, but using Spark SQL. I am using the following code:
df.createOrReplaceTempView("date_to_integer")
%sql
select
seg.*,
CAST (jobStartDate AS INTEGER) as JobStartDateAsInteger2 -- returns a null value
from date_to_integer seg
How to solve it?
First you need to CAST your jobStartDate to DATE and then use UNIX_TIMESTAMP to transform it into a UNIX timestamp integer.
SELECT
seg.*,
UNIX_TIMESTAMP(CAST (jobStartDate AS DATE)) AS JobStartDateAsInteger2
FROM date_to_integer seg
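To run the same statement without the %sql magic, here is a small PySpark sketch against the temp view registered above:
result = spark.sql("""
    SELECT
        seg.*,
        UNIX_TIMESTAMP(CAST(jobStartDate AS DATE)) AS JobStartDateAsInteger2
    FROM date_to_integer seg
""")
result.show()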

Pyspark - truncate a dataframe based on index

I want to get the n items in a specific range of a dataframe.
data = [("Java", "123456"), ("Python", "123456"), ("Go", "123456"), ("Scala", "123456"), ("TypeScript", "123456")]
rdd = spark.sparkContext.parallelize(data)
df = rdd.toDF()
How to truncate the df in order to keep the [2nd - 4th] items, for example?
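One possible sketch (one approach, not a definitive answer): attach a positional index with zipWithIndex, filter on it, and rebuild a dataframe with the original schema. The 1-based [2nd - 4th] range maps to 0-based indices 1..3:
# Keep only the 2nd through 4th rows (0-based indices 1..3).
subset_rdd = (df.rdd
              .zipWithIndex()                    # (Row, index) pairs
              .filter(lambda x: 1 <= x[1] <= 3)  # keep the desired index range
              .map(lambda x: x[0]))              # drop the index again
subset = spark.createDataFrame(subset_rdd, df.schema)
subset.show()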

How to iterate through dataframe without converting to dataset in spark?

I have a dataframe through which I want to iterate, but I don't want to convert the dataframe to a dataset.
We have to convert Spark Scala code to PySpark, and PySpark does not support datasets.
I have tried the following code by converting to a dataset:
data in file:
abc,a
mno,b
pqr,a
xyz,b
val a = sc.textFile("<path>")
//creating dataframe with column AA,BB
val b = a.map(x => x.split(",")).map(x =>(x(0).toString,x(1).toString)).toDF("AA","BB")
b.registerTempTable("test")
case class T(AA:String, BB: String)
//creating dataset from dataframe
val d = b.as[T].collect
d.foreach{ x=>
var m = spark.sql(s"select * from test where BB = '${x.BB}'")
m.show()
}
Without converting to a dataset it gives an error, i.e. with
val d = b.collect
d.foreach{ x=>
var m = spark.sql(s"select * from test where BB = '${x.BB}'")
m.show()
}
it gives the error:
error: value BB is not a member of org.apache.spark.sql.Row
You cannot loop over a dataframe the way you have in the above code. Use the dataframe's rdd.collect to loop over it.
import spark.implicits._
val df = Seq(("abc","a"), ("mno","b"), ("pqr","a"),("xyz","b")).toDF("AA", "BB")
df.registerTempTable("test")
df.rdd.collect.foreach(x => {
val BBvalue = x.mkString(",").split(",")(1)
var m = spark.sql(s"select * from test where BB = '$BBvalue'")
m.show()
})
Inside the loop I used mkString to convert an RDD row to a string, then split the column values on the comma and used the column index to access the value. For example, in the above code I have used (1), which means column BB is the second column (index 1).
Please let me know if you have any questions.
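Since the code ultimately has to be converted to PySpark, here is a rough PySpark equivalent of the same loop (a sketch, not part of the original answer); a collected Row can be indexed by column name, so the mkString/split step is not needed:
df = spark.createDataFrame(
    [("abc", "a"), ("mno", "b"), ("pqr", "a"), ("xyz", "b")], ["AA", "BB"])
df.createOrReplaceTempView("test")

for row in df.collect():
    # access the BB column by name instead of by position
    m = spark.sql("select * from test where BB = '{}'".format(row["BB"]))
    m.show()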

Partition column in hive external table from spark

Creating an external table with partitions from Spark:
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.SaveMode
val spark = SparkSession.builder().master("local[*]").appName("splitInput").enableHiveSupport().getOrCreate()
val sparkDf = spark.read.option("header","true").option("inferSchema","true").csv("input/candidate/event=ABCD/CandidateScheduleData_3007_2018.csv")
var newDf = sparkDf
for(col <- sparkDf.columns){ newDf = newDf.withColumnRenamed(col,col.replaceAll("\\s", "_")) }
newDf.write.mode(SaveMode.Overwrite).option("path","/output/candidate/event=ABCD/").partitionBy("CenterCode","ExamDate").saveAsTable("abc.candidatelist")
Everything works fine except that the partition column ExamDate is created as
ExamDate=30%2F07%2F2018 instead of ExamDate=30-07-2018
How do I replace %2F with - in the ExamDate format?
%2F is the percent-encoded /. This means the data is actually in the 30/07/2018 format. You can either:
parse it with to_date using the specified format, or
manually reformat the column to the required format, as in the sketch below.
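For the second option, a minimal PySpark sketch of the idea (the original snippet is Scala, and the dd/MM/yyyy input format is an assumption based on the 30/07/2018 value): reformat the column before partitionBy so the partition directory becomes ExamDate=30-07-2018.
from pyspark.sql import functions as F

# Assumption: ExamDate arrives as a dd/MM/yyyy string such as 30/07/2018.
newDf = newDf.withColumn(
    "ExamDate",
    F.date_format(F.to_date("ExamDate", "dd/MM/yyyy"), "dd-MM-yyyy"))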

Save and append a file in HDFS using PySpark

I have a data frame in PySpark called df. I have registered this df as a temp table as below.
df.registerTempTable('mytempTable')
date=datetime.now().strftime('%Y-%m-%d %H:%M:%S')
Now from this temp table I will get certain values, like the min_id and max_id of a column id:
min_id = sqlContext.sql("select nvl(min(id),0) as minval from mytempTable").collect()[0].asDict()['minval']
max_id = sqlContext.sql("select nvl(max(id),0) as maxval from mytempTable").collect()[0].asDict()['maxval']
Now I will collect all these values like below.
test = ("{},{},{}".format(date,min_id,max_id))
I found that test is not a data frame, but a str string:
>>> type(test)
<type 'str'>
Now I want to save this test as a file in HDFS. I would also like to append data to the same file in HDFS.
How can I do that using PySpark?
FYI, I am using Spark 1.6 and don't have access to the Databricks spark-csv package.
Here you go, you'll just need to concat your data with concat_ws and write it out as text:
# interpolate the Python date string as a SQL literal
query = """select concat_ws(',', '{0}', nvl(min(id), 0), nvl(max(id), 0))
from mytempTable""".format(date)
sqlContext.sql(query).write.format("text").mode("append").save("/tmp/fooo")
Or even a better alternative:
from pyspark.sql import functions as f

(sqlContext
    .table("mytempTable")
    .select(f.concat_ws(",", f.first(f.lit(date)), f.min("id"), f.max("id")))
    .coalesce(1)
    .write.format("text").mode("append").save("/tmp/fooo"))
